Amazon Interview Question
Software Engineer / Developer

A linked list plus a hash table of pointers to the linked-list nodes is the usual way to implement an LRU cache. This gives O(1) operations (assuming a decent hash function). Because every operation is O(1) and fast, a multithreaded version can simply lock the whole structure; there is no need for fine-grained locking.
Briefly, the way it works:
On access of a value, move the corresponding node in the linked list to the head.
When you need to evict a value from the cache, remove the node at the tail.
When you add a value to the cache, place its node at the head of the linked list.
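The steps above can be sketched in Java with a hash map indexing the nodes of a doubly linked list with sentinel head/tail nodes. The class and member names (LruCache, Node, index) are illustrative, not from the original post:

```java
import java.util.HashMap;
import java.util.Map;

/** Minimal LRU cache sketch: hash map + doubly linked list. */
class LruCache<K, V> {
    private static final class Node<K, V> {
        K key; V value; Node<K, V> prev, next;
        Node(K key, V value) { this.key = key; this.value = value; }
    }

    private final int capacity;
    private final Map<K, Node<K, V>> index = new HashMap<>();
    private final Node<K, V> head = new Node<>(null, null); // sentinel: most recent side
    private final Node<K, V> tail = new Node<>(null, null); // sentinel: least recent side

    LruCache(int capacity) {
        this.capacity = capacity;
        head.next = tail;
        tail.prev = head;
    }

    V get(K key) {
        Node<K, V> n = index.get(key);
        if (n == null) return null;
        unlink(n);          // move the accessed node ...
        insertAfterHead(n); // ... to the head (most recently used)
        return n.value;
    }

    void put(K key, V value) {
        Node<K, V> n = index.get(key);
        if (n != null) {            // already cached: update and move to head
            n.value = value;
            unlink(n);
            insertAfterHead(n);
            return;
        }
        if (index.size() == capacity) {  // full: evict from the tail end
            Node<K, V> lru = tail.prev;
            unlink(lru);
            index.remove(lru.key);
        }
        n = new Node<>(key, value);
        insertAfterHead(n);
        index.put(key, n);
    }

    private void unlink(Node<K, V> n) {
        n.prev.next = n.next;
        n.next.prev = n.prev;
    }

    private void insertAfterHead(Node<K, V> n) {
        n.next = head.next;
        n.prev = head;
        head.next.prev = n;
        head.next = n;
    }
}
```

Every operation touches the hash map once and splices a constant number of pointers, which is where the O(1) bound comes from.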
#include <stdio.h>
#include <stdlib.h>

#define bufferSize 3

int listSize = 0;

struct node {
    int data;
    struct node *link;
};

typedef struct node *node;

node getnode(void) {
    node x = malloc(sizeof(struct node));
    if (x == NULL) {
        printf("out of memory\n");
        exit(1);
    }
    return x;
}

void freenode(node x) {
    free(x);
}
node push(int item, node first) {
    node temp = getnode();
    temp->data = item;

    /* Insert at the front to mark the value as most recent;
       evict from the tail when the cache is full. */
    if (first == NULL) {
        temp->link = NULL;
        listSize++;
        return temp;
    }

    /* Value already at the head: nothing to move. */
    if (first->data == item) {
        freenode(temp); /* avoid leaking the unused node */
        return first;
    }

    node prev = first;
    node cur = first;
    while (cur->link != NULL) {
        if (cur->link->data == item) {
            /* Found further down the list: unlink that node
               and reinsert the value at the front. */
            node match = cur->link;
            cur->link = match->link;
            freenode(match);
            temp->link = first;
            return temp;
        }
        prev = cur;
        cur = cur->link;
    }

    /* Not in the cache and there is room: insert at the front. */
    if (listSize < bufferSize) {
        temp->link = first;
        listSize++;
        return temp;
    }

    /* Cache full: evict the least recently used node (the tail). */
    if (prev == cur) { /* single-node list (bufferSize == 1) */
        freenode(cur);
        temp->link = NULL;
        return temp;
    }
    freenode(cur);
    prev->link = NULL;
    temp->link = first;
    return temp;
}
void display(node first) {
    if (first == NULL) {
        printf("list is empty\n");
        return;
    }
    printf("the contents of the cache are\n");
    node temp = first;
    while (temp != NULL) {
        printf("%d->", temp->data);
        temp = temp->link;
    }
    printf("\n");
}
int main(void) {
    node first = NULL;
    int choice, item;
    for (;;) {
        printf("\n1:push\n2:display\n0:exit\n");
        printf("enter the choice\n");
        if (scanf("%d", &choice) != 1)
            break;
        switch (choice) {
        case 1:
            printf("enter the item\n");
            scanf("%d", &item);
            first = push(item, first);
            break;
        case 2:
            display(first);
            break;
        case 0:
            exit(0);
        default:
            printf("invalid choice\n");
            break;
        }
    }
    return 0;
}
Hash table and linked list implementation of the Map interface, with predictable iteration order. This implementation differs from HashMap in that it maintains a doubly-linked list running through all of its entries. This linked list defines the iteration ordering, which is normally the order in which keys were inserted into the map (insertion-order). Note that insertion order is not affected if a key is re-inserted into the map. (A key k is reinserted into a map m if m.put(k, v) is invoked when m.containsKey(k) would return true immediately prior to the invocation.)
- addict April 27, 2010

This implementation spares its clients from the unspecified, generally chaotic ordering provided by HashMap (and Hashtable), without incurring the increased cost associated with TreeMap. It can be used to produce a copy of a map that has the same order as the original, regardless of the original map's implementation:
void foo(Map m) {
Map copy = new LinkedHashMap(m);
...
}
This technique is particularly useful if a module takes a map on input, copies it, and later returns results whose order is determined by that of the copy. (Clients generally appreciate having things returned in the same order they were presented.)
A special constructor is provided to create a linked hash map whose order of iteration is the order in which its entries were last accessed, from least-recently accessed to most-recently (access-order). This kind of map is well-suited to building LRU caches. Invoking the put or get method results in an access to the corresponding entry (assuming it exists after the invocation completes). The putAll method generates one entry access for each mapping in the specified map, in the order that key-value mappings are provided by the specified map's entry set iterator. No other methods generate entry accesses. In particular, operations on collection-views do not affect the order of iteration of the backing map.
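As a sketch of the access-order constructor described above, a LinkedHashMap subclass can override removeEldestEntry to get a ready-made LRU cache. The class name LruMap and the capacity of 3 are illustrative choices, not part of the quoted documentation:

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** LRU cache built on LinkedHashMap's access-order mode. */
class LruMap<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    LruMap(int capacity) {
        // initialCapacity, loadFactor, accessOrder = true
        super(16, 0.75f, true);
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Called after each put/putAll; returning true removes the
        // least-recently-accessed entry automatically.
        return size() > capacity;
    }
}
```

With accessOrder set to true, get and put both count as accesses, so the map's iteration order tracks recency and the eldest entry is always the LRU candidate.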