Amazon Interview Question
Country: India
This is a page replacement question asked in a different way
We can use the Least Frequently Used (LFU) page replacement algorithm
Each record in the cache can be seen as a node in a linked list. Whenever a record is accessed, its count is incremented by 1. Whenever a page replacement has to be made, we select the nodes with the least count as the victims.
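The count-based scheme above can be sketched roughly like this (a minimal Python sketch, not the poster's actual code; the class and method names are made up, and the victim search is a linear scan for clarity rather than efficiency):

```python
from collections import defaultdict

class LFUCache:
    """Minimal LFU sketch: evict the entry with the smallest access count."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = {}                  # key -> cached record
        self.count = defaultdict(int)   # key -> access count

    def get(self, key):
        if key not in self.data:
            return None
        self.count[key] += 1            # accessed: increment its count by 1
        return self.data[key]

    def put(self, key, value):
        if key not in self.data and len(self.data) >= self.capacity:
            # replacement needed: the key with the least count is the victim
            victim = min(self.count, key=self.count.get)
            del self.data[victim]
            del self.count[victim]
        self.data[key] = value
        self.count[key] += 1
```

Note the linear `min` scan is exactly the inefficiency the next answer complains about; a real LFU implementation would keep entries bucketed by count.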
Well yeah, there can also be another version: incrementing a count and then searching for the minimum could be inefficient. Instead, when a cache hit occurs, we delete the entry from its current position and shift it to the end of the list; new records are also inserted at the end. Then whenever we have to evict something, we delete the first element in the linked list. This is actually closer to Least Recently Used. LRU and LFU are related, but one evicts by recency and the other by frequency, so they can behave quite differently.
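The move-to-end variant described above maps naturally onto Python's `OrderedDict`, which keeps insertion order and supports moving an entry to the end in O(1). A hedged sketch (class name is my own):

```python
from collections import OrderedDict

class LRUCache:
    """Move-to-end scheme: hits and new inserts go to the end of the list;
    eviction deletes the first (least recently used) element."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)       # cache hit: shift the entry to the end
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        elif len(self.data) >= self.capacity:
            self.data.popitem(last=False)  # delete the first element in the list
        self.data[key] = value
```

`OrderedDict` is effectively the hash-table-plus-doubly-linked-list structure that a later answer spells out by hand.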
i) A table to maintain cache
ii) A hash table to maintain (unique ID) -> Location in cache
iii) A two-dimensional array of size [1000, 2]. One column holds the unique ID and the other holds a count. When a unique ID is found in the cache, its count is incremented. If not, a query to the DB is made: the array is copied and sorted in ascending order by the count column, and the unique IDs with the lowest counts are replaced with the returned query result.
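A rough sketch of steps i)-iii), with the cache table and the ID-to-location hash collapsed into one dict for brevity. All names here are hypothetical, and `fetch_from_db` is a stand-in for the real DB query:

```python
def fetch_from_db(uid):
    # stand-in for the actual database query
    return f"record-{uid}"

class CountedCache:
    def __init__(self, size=1000):
        self.size = size
        self.cache = {}     # i)/ii) cache table keyed by unique ID
        self.counts = []    # iii) rows of [unique ID, count]

    def lookup(self, uid):
        for row in self.counts:
            if row[0] == uid:          # unique ID found in cache: bump count
                row[1] += 1
                return self.cache[uid]
        # miss: if full, sort a copy by count and evict the lowest-count ID
        if len(self.counts) >= self.size:
            by_count = sorted(self.counts, key=lambda r: r[1])
            victim = by_count[0][0]
            self.counts = [r for r in self.counts if r[0] != victim]
            del self.cache[victim]
        self.cache[uid] = fetch_from_db(uid)
        self.counts.append([uid, 1])
        return self.cache[uid]
```

Copying and sorting the whole array on every miss is O(n log n) per eviction, which is the main weakness of this answer compared with the O(1) queue-based scheme below it.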
hashmap
Have a hash table whose entries point to nodes in a queue. The queue's head is the least recently used entry.
Every time you insert, check whether the cache limit is hit. If it is, remove the head of the queue, insert the new file into the hash table, add a node at the queue's tail, and update the hash entry to point to the queue tail.
This is fine if there are no hash collisions; with collisions, each bucket would need another hash table or a heap. But with a heap, O(1) cannot be achieved, and with a second-level hash, collisions still cannot be fully prevented.
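The hash-plus-queue idea above is the classic O(1) LRU design: a dict mapping each key to a node in a doubly linked list, with sentinels at both ends so eviction pops the head and hits move to the tail. A sketch under those assumptions (class names are my own; Python's built-in dict already handles bucket collisions internally, so the collision caveat is about hand-rolled hash tables):

```python
class Node:
    __slots__ = ("key", "value", "prev", "next")
    def __init__(self, key=None, value=None):
        self.key, self.value = key, value
        self.prev = self.next = None

class HashQueueLRU:
    """Hash table of key -> queue node; the queue head is least recently used."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.table = {}
        self.head = Node()                 # sentinel: head.next is the LRU entry
        self.tail = Node()                 # sentinel: tail.prev is most recent
        self.head.next, self.tail.prev = self.tail, self.head

    def _unlink(self, node):
        node.prev.next, node.next.prev = node.next, node.prev

    def _append(self, node):               # insert just before the tail sentinel
        node.prev, node.next = self.tail.prev, self.tail
        self.tail.prev.next = node
        self.tail.prev = node

    def get(self, key):
        node = self.table.get(key)
        if node is None:
            return None
        self._unlink(node)
        self._append(node)                 # hit: move node to the queue tail
        return node.value

    def put(self, key, value):
        if key in self.table:
            node = self.table[key]
            node.value = value
            self._unlink(node)
            self._append(node)
            return
        if len(self.table) >= self.capacity:
            lru = self.head.next           # cache limit hit: remove queue head
            self._unlink(lru)
            del self.table[lru.key]
        node = Node(key, value)
        self.table[key] = node             # hash entry points at the tail node
        self._append(node)
```

Every operation here touches a fixed number of pointers, so both `get` and `put` are O(1).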
What is the question? What do we have to do?
- loveCoding January 30, 2012