Amazon Interview Question for Software Engineer / Developers


Country: India
Interview Type: Phone Interview





Two basic options come to mind:

1. Dictionary (hash table) plus priority queue (binary heap). The dictionary maps from addresses to a record containing the value stored at that address and its last access time. The priority queue is maintained in parallel with the hash table and orders the records by last access time. Cache hits and misses both take O(log n) time; hits are O(log n) rather than O(1) because, besides looking up the record in the hash table, they must update that record's position in the priority queue with the current time. (A sketch follows this list.)
2. Ordered dictionary (balanced binary search tree). Much the same as the above: the ordered dictionary supports deleting the minimum element, lookups, insertions and modifications in O(log n) time, so cache hits and misses both take O(log n) time.
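
A minimal sketch of option 1 in Java, with illustrative names. Since java.util.PriorityQueue has no efficient way to update an element's priority, a TreeMap keyed by last access time stands in for the heap (which also makes this a reasonable sketch of option 2); hits and misses are both O(log n), as described above.

{{{
import java.util.HashMap;
import java.util.TreeMap;

// Hash table for lookups plus an ordered structure keyed by last access
// time for evictions. Both cache hits and misses cost O(log n).
class LruCache<K, V> {
    private static class Entry<V> {
        V value;
        long lastAccess;               // logical timestamp of the most recent access
        Entry(V value, long lastAccess) { this.value = value; this.lastAccess = lastAccess; }
    }

    private final int capacity;
    private long clock = 0;            // strictly increasing logical clock
    private final HashMap<K, Entry<V>> table = new HashMap<>();
    private final TreeMap<Long, K> byAccessTime = new TreeMap<>();   // oldest entry first

    LruCache(int capacity) { this.capacity = capacity; }

    public V get(K key) {
        Entry<V> e = table.get(key);
        if (e == null) return null;                  // cache miss
        byAccessTime.remove(e.lastAccess);           // reinsert under the new timestamp
        e.lastAccess = ++clock;
        byAccessTime.put(e.lastAccess, key);
        return e.value;
    }

    public void put(K key, V value) {
        Entry<V> e = table.get(key);
        if (e != null) {                             // overwrite an existing entry
            byAccessTime.remove(e.lastAccess);
            e.value = value;
            e.lastAccess = ++clock;
            byAccessTime.put(e.lastAccess, key);
            return;
        }
        if (table.size() == capacity) {              // evict the least recently used entry
            K oldest = byAccessTime.pollFirstEntry().getValue();
            table.remove(oldest);
        }
        e = new Entry<>(value, ++clock);
        table.put(key, e);
        byAccessTime.put(e.lastAccess, key);
    }
}
}}}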

The difference between the two options is in the constant factors for both space and time. The hash table and binary heap will take up more space than a single red-black tree or AVL tree. On the other hand, the hash table lookup plus heap modification (one constant time plus one logarithmic time operation) will likely beat the corresponding operations for the binary search tree (three logarithmic time operations for a cache miss: one for the lookup, one for the eviction, one for the insertion).

Finally, a true LRU cache is expensive to implement, so compromises are often used. A common trick is to quantize access times by deferring updates: instead of updating the access time on every cache hit, only set a bit marking the entry as accessed. Every once in a while the whole cache is traversed; entries with the access bit set have their access times updated to the current time, and their bits are reset for the next cycle. Cache hits are then O(1) at the moment they happen (the priority queue isn't updated immediately), and we can charge the cost of the periodic linear-time traversal to the accesses it covers: as long as the traversal runs at most once every n accesses, the amortized cost per access stays O(1). (A sketch of this variant follows below.)
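
A minimal sketch of the deferred-update trick, reusing the table/byAccessTime/clock layout from the sketch above (names are mine): hits only flip a bit, and a periodic sweep folds the marked entries back into the eviction order.

{{{
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

// Cache hits only set an access bit (O(1)); a periodic sweep updates the
// access times of marked entries and clears the bits for the next cycle.
class ApproxLruCache<K, V> {
    private static class Entry<V> {
        V value;
        long lastAccess;
        boolean accessed;              // set on hit, cleared by the sweep
        Entry(V value, long lastAccess) { this.value = value; this.lastAccess = lastAccess; }
    }

    private long clock = 0;
    private final HashMap<K, Entry<V>> table = new HashMap<>();
    private final TreeMap<Long, K> byAccessTime = new TreeMap<>();

    public V get(K key) {
        Entry<V> e = table.get(key);
        if (e == null) return null;
        e.accessed = true;             // O(1): defer the priority update to the sweep
        return e.value;
    }

    // Run at most once every n accesses so the traversal cost amortizes
    // across the accesses that happened since the last sweep.
    public void sweep() {
        for (Map.Entry<K, Entry<V>> me : table.entrySet()) {
            Entry<V> e = me.getValue();
            if (!e.accessed) continue;
            byAccessTime.remove(e.lastAccess);
            e.lastAccess = ++clock;
            byAccessTime.put(e.lastAccess, me.getKey());
            e.accessed = false;        // reset the bit for the next cycle
        }
    }
    // put() and eviction work exactly as in the sketch above.
}
}}}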

- psykotic February 13, 2012

As this is an interview question, I would say to first use this approach:
Have a doubly-linked list to hold the items.
Have a Map (use a red-black tree) where key = item and value = the item's location in the linked list.

Whenever you want to access an element, get its location from the Map, move the item to the front of the linked list, and update its location in the Map (a sketch follows below).
Whenever you want to delete the least recently used item, remove it from the end of the linked list and delete it from the Map.
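
A minimal sketch of this list-plus-map scheme, with illustrative names. The post suggests a red-black tree for the map; a HashMap is used here, which works the same way but makes each operation O(1) instead of O(log n).

{{{
import java.util.HashMap;
import java.util.Map;

// Hand-rolled doubly-linked list keeps items in recency order (head = most
// recently used); the map points each key at its list node so moves and
// deletions take constant time.
class ListLruCache<K, V> {
    private static class Node<K, V> {
        K key; V value;
        Node<K, V> prev, next;
        Node(K key, V value) { this.key = key; this.value = value; }
    }

    private final int capacity;
    private final Map<K, Node<K, V>> map = new HashMap<>();
    private Node<K, V> head, tail;     // head = most recently used, tail = least

    ListLruCache(int capacity) { this.capacity = capacity; }

    public V get(K key) {
        Node<K, V> n = map.get(key);
        if (n == null) return null;
        moveToFront(n);
        return n.value;
    }

    public void put(K key, V value) {
        Node<K, V> n = map.get(key);
        if (n != null) { n.value = value; moveToFront(n); return; }
        if (map.size() == capacity) {  // evict the least recently used item
            map.remove(tail.key);
            unlink(tail);
        }
        n = new Node<>(key, value);
        map.put(key, n);
        linkAtFront(n);
    }

    private void moveToFront(Node<K, V> n) {
        if (n == head) return;
        unlink(n);
        linkAtFront(n);
    }

    private void unlink(Node<K, V> n) {
        if (n.prev != null) n.prev.next = n.next; else head = n.next;
        if (n.next != null) n.next.prev = n.prev; else tail = n.prev;
        n.prev = n.next = null;
    }

    private void linkAtFront(Node<K, V> n) {
        n.next = head;
        if (head != null) head.prev = n;
        head = n;
        if (tail == null) tail = n;
    }
}
}}}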

There can be 'N' number of improvements to the caching mechanism, but those are best kept for research.

- Snehasish Barman February 13, 2012

Google -> java.util.LinkedHashMap
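
For reference, the usual way to get an LRU cache out of LinkedHashMap is to construct it in access order and override removeEldestEntry. A minimal sketch:

{{{
import java.util.LinkedHashMap;
import java.util.Map;

// LinkedHashMap in access order moves entries to the back on get();
// removeEldestEntry drops the least recently used entry once the
// capacity is exceeded.
class LruMap<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    LruMap(int capacity) {
        super(16, 0.75f, true);        // accessOrder = true
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity;      // evict on overflow
    }
}
}}}

For example, new LruMap<String, String>(100) then behaves as a 100-entry LRU cache.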

- kartikaditya February 14, 2012

I think we can also implement this using a stack.
The most recently referenced frame goes on top, with older frames below it.
When replacing, we can pop the elements and put the new element on top (a rough sketch follows below).
The only thing to take care of is stack overflow.

The complexity will be O(max number of frames in main memory).
Let me know if this can cause any problem.
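
A rough sketch of one way to read the stack idea (my interpretation, with illustrative names): on each reference, dig the frame out of the stack and push it back on top, so the least recently used frame settles at the bottom. Every operation may touch every frame, which is where the O(max number of frames) bound comes from, and why the list-plus-map approaches above are usually preferred.

{{{
import java.util.ArrayDeque;
import java.util.Deque;

// Stack of frames: top = most recently used, bottom = least recently used.
// Each reference costs O(number of frames).
class StackLru {
    private final Deque<Integer> stack = new ArrayDeque<>();
    private final int capacity;

    StackLru(int capacity) { this.capacity = capacity; }

    void reference(int frame) {
        Deque<Integer> spill = new ArrayDeque<>();
        while (!stack.isEmpty() && stack.peek() != frame) {
            spill.push(stack.pop());                       // set aside frames above the one referenced
        }
        if (!stack.isEmpty()) stack.pop();                 // remove the referenced frame if present
        while (!spill.isEmpty()) stack.push(spill.pop());  // restore the others in order
        if (stack.size() == capacity) stack.removeLast();  // evict the LRU frame from the bottom
        stack.push(frame);                                 // most recently used goes on top
    }
}
}}}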

- joker February 14, 2012

