Amazon Interview Question
Country: United States
Interview Type: Phone Interview
Extend LinkedHashMap and override:

protected boolean removeEldestEntry(Map.Entry<K,V> eldest) {
    return size() > CACHE_SIZE;
}

Every time you put an element into the LinkedHashMap, this method is called back with the eldest entry in the map as its parameter; returning true removes that entry. So the eldest entry is evicted whenever the map's size exceeds the defined CACHE_SIZE. Note that for true LRU behavior the map must be constructed with accessOrder set to true; otherwise entries are kept in insertion order and the eviction is FIFO, not least-recently-used.
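As a minimal sketch of this LinkedHashMap-based cache (the class name LinkedHashMapLru and the fixed-capacity constructor are illustrative, not from the original answer):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A minimal LRU cache built on LinkedHashMap.
// The three-argument constructor must be called with accessOrder = true,
// so that each get() moves the accessed entry to the "youngest" end.
class LinkedHashMapLru<K, V> extends LinkedHashMap<K, V> {
    private final int cacheSize; // the CACHE_SIZE from the answer above

    LinkedHashMapLru(int cacheSize) {
        super(16, 0.75f, true); // accessOrder = true => LRU ordering
        this.cacheSize = cacheSize;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Called after every put/putAll; returning true evicts 'eldest'.
        return size() > cacheSize;
    }
}
```

With a capacity of 2, putting "a" and "b", touching "a" with get, and then putting "c" evicts "b", since "b" is then the least recently used entry.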
Implement the LRU cache algorithm, given the following interface.

interface CacheStrategy<K, T> {
    // get the data by key from the cache
    T get(K key);

    // put the <key, data> pair into the cache
    void put(K key, T data);
}
Answer:
Both get and put can be done in O(1) time.
1. the data structure
The core of the algorithm is a doubly linked list indexed by a hash map. The list is implicitly sorted by the age of the data: whenever an entry is accessed, it is refreshed by moving its node to the head of the list (age == 0).
class Node {
    Node prev; // doubly linked list
    Node next;
    T data;    // the data and its key
    K key;

    Node(K key, T data) { this.key = key; this.data = data; }
}

Map<K, Node> map; // nodes are indexed by key to support node access in O(1)
2. the doubly linked list
In a doubly linked list, insertion and deletion take only O(1), so it is efficient for implementing the LRU algorithm. A doubly linked list is used rather than a singly linked one so that nodes in the middle can be deleted in O(1).
By defining sentinel head and tail nodes, the insertion and deletion of a node are simplified (no null checks at either end).
a. Initialize the sentinel head and tail nodes.
//hook up the head and tail nodes
head = new Node(null, null);
tail = new Node(null, null);
head.next = tail;
tail.prev = head;
b.Remove the node from the list.
private void detach(Node node) {
    // unlink the node from the list
    node.prev.next = node.next;
    node.next.prev = node.prev;
}
c. Attach the node to the list.
private void attach(Node head, Node node) {
    // add the node right after head
    node.prev = head;
    node.next = head.next;
    node.next.prev = node;
    node.prev.next = node;
}
3. maintaining the cache
Getting data from the cache must also refresh that data: the node is moved from the middle of the list to the head.
@Override
public T get(K key) {
    Node node = map.get(key);
    if (node == null)
        return null;
    if (map.size() == 1) // the only node is already at the head
        return node.data;
    // refresh: detach the node and re-attach it at the head
    detach(node);
    attach(head, node);
    return node.data;
}
Besides the refresh action, putting data into the cache must also maintain the cache size and update existing content.
/**
 * Put <key, data> into the cache.
 */
@Override
public void put(K key, T data) {
    if (maxsize <= 0)
        return;
    Node node = map.get(key);
    if (node != null) {
        // hit: refresh the node and update its data
        detach(node);
        attach(head, node);
        node.data = data;
    } else {
        // miss: create a new node and adjust the size
        node = new Node(key, data);
        map.put(key, node);
        attach(head, node);
        if (map.size() > maxsize) {
            // overflow: remove the least recently used node.
            // The node must be saved before detaching, because detach()
            // changes what tail.prev points to; detaching first and then
            // calling map.remove(tail.prev.key) would remove the wrong key.
            Node eldest = tail.prev;
            detach(eldest);
            map.remove(eldest.key);
        }
    }
}
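Putting the fragments together, a complete implementation might look like the sketch below (the class name LruCache and the constructor taking maxsize are assumptions; the interface is repeated so the example is self-contained):

```java
import java.util.HashMap;
import java.util.Map;

interface CacheStrategy<K, T> {
    T get(K key);            // get the data by key from the cache
    void put(K key, T data); // put the <key, data> pair into the cache
}

class LruCache<K, T> implements CacheStrategy<K, T> {
    private class Node {
        Node prev, next; // doubly linked list
        K key;
        T data;
        Node(K key, T data) { this.key = key; this.data = data; }
    }

    private final Map<K, Node> map = new HashMap<>(); // key -> node in O(1)
    private final Node head = new Node(null, null);   // sentinel: youngest side
    private final Node tail = new Node(null, null);   // sentinel: oldest side
    private final int maxsize;

    LruCache(int maxsize) {
        this.maxsize = maxsize;
        head.next = tail;
        tail.prev = head;
    }

    private void detach(Node node) {
        node.prev.next = node.next;
        node.next.prev = node.prev;
    }

    private void attach(Node head, Node node) {
        node.prev = head;
        node.next = head.next;
        node.next.prev = node;
        node.prev.next = node;
    }

    @Override
    public T get(K key) {
        Node node = map.get(key);
        if (node == null)
            return null;
        detach(node);        // refresh: move the node to the head
        attach(head, node);
        return node.data;
    }

    @Override
    public void put(K key, T data) {
        if (maxsize <= 0)
            return;
        Node node = map.get(key);
        if (node != null) {
            detach(node);    // hit: refresh and update the data
            attach(head, node);
            node.data = data;
        } else {
            node = new Node(key, data);
            map.put(key, node);
            attach(head, node);
            if (map.size() > maxsize) {
                Node eldest = tail.prev; // save before detaching
                detach(eldest);
                map.remove(eldest.key);
            }
        }
    }
}
```

Each operation touches only a constant number of links and one hash-map entry, so both get and put run in O(1).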
How about a circular linked list, where the item to replace is always the first element in the queue?
class CacheService
{
    // Note: these fields were static in the original, which would make all
    // cache instances share one store; instance fields are intended here.
    List<string> LRUKeys; // keys ordered from least to most recently used
    Hashtable lookup;
    int capacity;

    internal CacheService(int _capacity)
    {
        capacity = _capacity;
        lookup = new Hashtable();
        LRUKeys = new List<string>();
    }

    internal void Put(string ikey, object ival)
    {
        if (lookup.ContainsKey(ikey))
        {
            Update(ikey, ival); // existing key: update the value and refresh
            return;
        }
        if (LRUKeys.Count == capacity)
        {
            // Evict the least recently used key (LRUKeys[0]), not the
            // incoming one; the original removed ikey from lookup here,
            // which dropped the new entry instead of the evicted one.
            lookup.Remove(LRUKeys[0]);
            LRUKeys.RemoveAt(0);
        }
        LRUKeys.Add(ikey);
        lookup.Add(ikey, ival);
    }

    internal object Get(string ikey)
    {
        object retObj = null;
        if (lookup.ContainsKey(ikey))
        {
            retObj = lookup[ikey];
            LRUKeys.RemoveAt(LRUKeys.IndexOf(ikey));
            LRUKeys.Add(ikey); // refresh: move the key to the youngest end
        }
        return retObj;
    }

    internal void Delete(string ikey)
    {
        if (lookup.ContainsKey(ikey))
        {
            lookup.Remove(ikey);
            LRUKeys.RemoveAt(LRUKeys.IndexOf(ikey));
        }
    }

    internal void Update(string ikey, object newVal)
    {
        if (lookup.ContainsKey(ikey))
        {
            lookup[ikey] = newVal;
            LRUKeys.RemoveAt(LRUKeys.IndexOf(ikey));
            LRUKeys.Add(ikey);
        }
    }
}

Note that List<string>.IndexOf and RemoveAt(0) are O(n), so unlike the doubly-linked-list solution above, this approach does not achieve O(1) get and put.
- sk June 21, 2012