Amazon Interview Question

Country: India

What is the question? What do we have to do?

- loveCoding January 30, 2012

The requirements are not clear. Can you please elaborate on the question?

- Srikant Aggarwal January 31, 2012

The requirement is: how do you buffer data from the hard disk into RAM so that the buffer is efficient? For example, the buffer might hold 1000 records. How do you search for a record in it, and how do you remove old records when new ones come in from the hard disk?

- Anonymous February 01, 2012

This is a page replacement question asked in a different way.
We can use the Least Frequently Used (LFU) page replacement algorithm.
Each record in the cache can be seen as a node in a linked list. Whenever a record is accessed, its count is increased by 1. Whenever a replacement has to be made, we can select the nodes with the least count as the victims.

- Anony February 08, 2012
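
A minimal sketch of this LFU scheme in Python (class and method names are illustrative, not from the post; the count table stands in for the linked list of nodes, and victim selection is an O(n) scan):

{{{
from collections import defaultdict

class LFUCache:
    """Fixed-size cache that evicts the least frequently used record."""

    def __init__(self, capacity=1000):
        self.capacity = capacity
        self.store = {}                    # unique key -> record
        self.counts = defaultdict(int)     # unique key -> access count

    def get(self, key):
        if key not in self.store:
            return None                    # miss: caller fetches from disk
        self.counts[key] += 1              # record accessed: bump its count
        return self.store[key]

    def put(self, key, record):
        if key not in self.store and len(self.store) >= self.capacity:
            # pick the node with the least count as the victim (O(n) scan)
            victim = min(self.counts, key=self.counts.get)
            del self.store[victim]
            del self.counts[victim]
        self.store[key] = record
        self.counts[key] += 1
}}}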

Well, there can also be another version. Instead of incrementing a count (which makes the search for a victim inefficient), on a cache hit we delete the entry from its current position and shift it to the end of the list. New records are also inserted at the end, so when we have to evict anything we delete the first element of the linked list. This is actually closer to Least Recently Used (LRU); LRU and LFU are related, but they evict by recency and by frequency respectively.

- p February 19, 2012
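
A minimal sketch of this move-to-end variant, using Python's OrderedDict as the linked list (the class name is illustrative):

{{{
from collections import OrderedDict

class LRUCache:
    """Fixed-size cache that evicts the least recently used record."""

    def __init__(self, capacity=1000):
        self.capacity = capacity
        self.store = OrderedDict()          # first entry is oldest

    def get(self, key):
        if key not in self.store:
            return None                     # miss: caller loads from disk
        self.store.move_to_end(key)         # hit: shift the entry to the end
        return self.store[key]

    def put(self, key, record):
        if key in self.store:
            self.store.move_to_end(key)
        elif len(self.store) >= self.capacity:
            self.store.popitem(last=False)  # delete the first (oldest) element
        self.store[key] = record
}}}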

How will you buffer the data in a fixed-size cache so that old records are flushed and new ones can be saved, and, second, so that we can search on a unique key?

- Anonymous February 01, 2012

Use a hash table to store the records by unique ID, and also store the IDs in a min-heap where the top element is the one used least. When entries have to be cleared from the hash table, always take the top ID from the min-heap.

- hello world February 01, 2012
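
A sketch of the hash table plus min-heap idea (assuming "used least" means lowest access count; the lazy deletion of stale heap entries is a common workaround I've added, not part of the post):

{{{
import heapq

class HeapLFUCache:
    """Hash table for O(1) lookup; min-heap orders keys by access count."""

    def __init__(self, capacity=1000):
        self.capacity = capacity
        self.store = {}        # unique ID -> record
        self.counts = {}       # unique ID -> current access count
        self.heap = []         # (count, ID) pairs; may hold stale entries

    def get(self, key):
        if key not in self.store:
            return None
        self.counts[key] += 1
        heapq.heappush(self.heap, (self.counts[key], key))  # old pair goes stale
        return self.store[key]

    def put(self, key, record):
        if key not in self.store and len(self.store) >= self.capacity:
            while True:
                count, victim = heapq.heappop(self.heap)
                # skip pairs that are stale or belong to already-evicted IDs
                if self.counts.get(victim) == count:
                    del self.store[victim]
                    del self.counts[victim]
                    break
        self.store[key] = record
        self.counts[key] = self.counts.get(key, 0) + 1
        heapq.heappush(self.heap, (self.counts[key], key))
}}}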

Don't you think implementing the cache as a plain linked list will make data retrieval slow? Every time a query is executed, the list has to be traversed and checked for the element's presence.

- /var/usr February 09, 2012

Heap it is!

- Anonymous March 14, 2012

i) A table to hold the cached records.
ii) A hash table mapping unique ID -> location in the cache.
iii) A two-dimensional array of size [1000, 2]: one column holds the unique ID and the other holds a count. When a unique ID is found in the cache, its count is increased. If not, a query to the DB is made; the array is copied and sorted in ascending order on the count column, and the unique IDs with the lowest counts are replaced with the returned query results.

- Harihara Sudhan August 09, 2012
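
A condensed sketch of this scheme (the three structures collapse into two Python dicts here; the batch size and loader callback are my assumptions, not from the post):

{{{
class BatchLFUCache:
    """LFU-style cache that evicts a batch of lowest-count IDs at once."""

    def __init__(self, capacity=1000, batch=10):
        self.capacity = capacity
        self.batch = batch     # how many victims to replace per eviction
        self.store = {}        # unique ID -> cached record
        self.counts = {}       # unique ID -> access count (the [1000, 2] array)

    def get(self, key, fetch_from_db):
        if key in self.store:
            self.counts[key] += 1          # ID found in cache: bump its count
            return self.store[key]
        if len(self.store) >= self.capacity:
            # sort IDs ascending by count and drop the lowest ones
            by_count = sorted(self.counts, key=self.counts.get)
            for victim in by_count[:self.batch]:
                del self.store[victim]
                del self.counts[victim]
        record = fetch_from_db(key)        # hypothetical DB/disk loader
        self.store[key] = record
        self.counts[key] = 1
        return record
}}}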

Use a hash map.
Have a hash table whose entries point to nodes in a queue; the queue's head is the least recently used record.
Every time you insert, check whether the cache limit is hit. If it is, remove the head of the queue, insert the new record into the hash table, add a node at the queue's tail, and update the hash entry to point at that tail node.
This is fine if there are no hash collisions. With collisions, each bucket needs another hash table or a heap/tree; with a heap or tree, O(1) lookup cannot be achieved, and with a second-level hash, collisions cannot be fully prevented.

- Ravikanth Samprathi March 18, 2014
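
A from-scratch sketch of this hash-map-plus-queue design, with the queue as a doubly linked list so that hits, inserts, and evictions are all O(1) (names are illustrative; collision handling is left to the language's built-in dict):

{{{
class Node:
    def __init__(self, key, record):
        self.key, self.record = key, record
        self.prev = self.next = None

class HashQueueLRU:
    """Hash map of key -> queue node; the queue head is least recently used."""

    def __init__(self, capacity=1000):
        self.capacity = capacity
        self.nodes = {}                  # key -> Node in the queue
        self.head = self.tail = None     # head = LRU end, tail = MRU end

    def _unlink(self, node):
        if node.prev: node.prev.next = node.next
        else:         self.head = node.next
        if node.next: node.next.prev = node.prev
        else:         self.tail = node.prev

    def _append(self, node):             # attach node at the tail (MRU end)
        node.prev, node.next = self.tail, None
        if self.tail: self.tail.next = node
        else:         self.head = node
        self.tail = node

    def get(self, key):
        node = self.nodes.get(key)
        if node is None:
            return None
        self._unlink(node)               # hit: move the node to the tail
        self._append(node)
        return node.record

    def put(self, key, record):
        if key in self.nodes:
            node = self.nodes[key]
            node.record = record
            self._unlink(node)
            self._append(node)
            return
        if len(self.nodes) >= self.capacity:
            victim = self.head           # limit hit: remove the queue head
            self._unlink(victim)
            del self.nodes[victim.key]
        node = Node(key, record)
        self.nodes[key] = node
        self._append(node)
}}}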

