Google Interview Question
Software Engineer / Developer
Country: United States
Interview Type: In-Person
Caches are present on the server side as well.
For example, a DB cache that caches frequent SQL query results. This prevents frequently run queries from hitting the DB server too often. For example, if you have a dropdown for the list of states and the list is maintained in the DB, you don't have to fetch it from the DB every time. The web server fetches it only once and then caches it on the server side.
The question submitter, a user named 'Guy', has submitted over 50 "Google" questions. He/she has been spamming with incomplete and ambiguous questions, and yet all the other users keep trying to answer these fake questions. What a waste of time!
I have reported this user many times, and yet the questions keep coming!
I think it means the cache is an LRU (Least Recently Used) cache. It's implemented as a linked list + hash map (e.g. LinkedHashMap in Java). When the cache is presented with a URL, it first checks whether the URL is present in the hash map. If so, it returns the corresponding content and also moves the corresponding element in the linked list to the front to indicate that this URL has just been fetched. Otherwise, it removes the last element from the linked list and from the hash map, and inserts the new URL at the front of the linked list and into the hash map.
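Since the comment mentions LinkedHashMap, here is a minimal sketch of that approach. The class name `LruCache` and the URL/content types are assumptions for illustration; the key idea is that constructing LinkedHashMap with access order enabled moves an entry to the end on every get, so the eldest entry is always the least recently used one.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical LRU cache of URL -> page content with a fixed capacity n.
// accessOrder = true makes LinkedHashMap reorder entries on access,
// so the eldest entry is the least recently used.
class LruCache extends LinkedHashMap<String, String> {
    private final int capacity;

    LruCache(int capacity) {
        super(16, 0.75f, true); // accessOrder = true
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
        return size() > capacity; // evict the LRU entry once over capacity
    }
}
```

With capacity 2, putting `/a` and `/b`, then getting `/a`, then putting `/c` evicts `/b`, since `/a` was touched more recently.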
So this means that when there is only one server, the cache will be limited, and removing the last element from the linked list and hash map and inserting the new URL at the front will happen frequently. In this situation, keeping the linked list ordered by the most frequently requested URLs instead of the most recently requested ones (i.e. an LFU policy) might work better.
The question looks a little incomplete in terms of the problem statement. The hash map and linked list suggest an LRU cache implementation. The keys of the hash map point to nodes in the linked list, and the payload of each linked-list node holds the value (the cached page content). After every fetch operation, the head points to the element just fetched. That way the last element is always the least recently used and is thrown out of the cache when the cache is full with n keys. This way the caching server works optimally: it gets a high cache hit rate for frequently used pages, while cache misses occur mostly for infrequently visited pages.
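The hash-map-pointing-to-list-nodes design described above can be sketched by hand as well. This is an assumed implementation, not from the thread: `PageCache`, `fetch`, and `store` are hypothetical names, and sentinel nodes are used to avoid null checks at the ends of the doubly linked list.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical hash map + doubly linked list cache: map keys point at list
// nodes; the head side is most recently used, the tail side is evicted
// when the cache already holds n entries.
class PageCache {
    private static class Node {
        String url, content;
        Node prev, next;
        Node(String url, String content) { this.url = url; this.content = content; }
    }

    private final int capacity;
    private final Map<String, Node> map = new HashMap<>();
    private final Node head = new Node(null, null); // sentinel: MRU side
    private final Node tail = new Node(null, null); // sentinel: LRU side

    PageCache(int capacity) {
        this.capacity = capacity;
        head.next = tail;
        tail.prev = head;
    }

    String fetch(String url) {
        Node node = map.get(url);
        if (node == null) return null;   // cache miss
        unlink(node);
        addToFront(node);                // mark as most recently used
        return node.content;
    }

    void store(String url, String content) {
        Node node = map.get(url);
        if (node != null) {              // update existing entry
            node.content = content;
            unlink(node);
        } else {
            if (map.size() == capacity) {
                Node lru = tail.prev;    // evict least recently used
                unlink(lru);
                map.remove(lru.url);
            }
            node = new Node(url, content);
            map.put(url, node);
        }
        addToFront(node);
    }

    private void unlink(Node n) {
        n.prev.next = n.next;
        n.next.prev = n.prev;
    }

    private void addToFront(Node n) {
        n.next = head.next;
        n.prev = head;
        head.next.prev = n;
        head.next = n;
    }
}
```

Both `fetch` and `store` are O(1): the hash map gives direct access to a node, and relinking a doubly linked list node is constant time.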
- NEO May 06, 2014