Amazon Interview Question for Software Engineer / Developers
My initial response was to use a grep command in Unix to filter the entries and produce a count, but that turned out not to be ideal. So I suggested another idea: a HashMap whose key is a node (clientId, url) and whose value is the number of times it was visited. That was right, and she just asked me for pseudocode.
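
A minimal sketch of that idea, counting occurrences of each (clientId, url) pair. `Visit` and `countVisits` are illustrative names, not from the original answer; the important detail is that the key type must override `equals()` and `hashCode()`, or HashMap treats every instance as distinct.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Objects;

public class VisitCounter {
    static final class Visit {
        final int clientId;
        final String url;

        Visit(int clientId, String url) {
            this.clientId = clientId;
            this.url = url;
        }

        @Override
        public boolean equals(Object o) {
            if (!(o instanceof Visit)) return false;
            Visit v = (Visit) o;
            return clientId == v.clientId && url.equals(v.url);
        }

        @Override
        public int hashCode() {
            return Objects.hash(clientId, url);
        }
    }

    // Tally how many times each (clientId, url) pair occurs.
    static Map<Visit, Integer> countVisits(List<Visit> visits) {
        Map<Visit, Integer> counts = new HashMap<>();
        for (Visit v : visits) {
            counts.merge(v, 1, Integer::sum); // insert 1, or add 1 to existing
        }
        return counts;
    }
}
```

`merge` handles both the first-visit and repeat-visit cases in one call, which avoids the explicit contains/get/put dance.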

- GBC February 24, 2011 | Flag Reply
Can you please write the pseudocode?

- kiwi March 27, 2011 | Flag Reply
Please write the pseudocode.

- Ritesh August 14, 2011 | Flag Reply
class Node {
    int clientId;
    String url;
    // Node must override equals() and hashCode() so the HashMap
    // can match duplicate (clientId, url) pairs.
}

HashMap<Node, Integer> counts = new HashMap<Node, Integer>();
BufferedReader br = new BufferedReader(
        new InputStreamReader(new FileInputStream("textfile.txt")));
String strLine;
// Read the file line by line
while ((strLine = br.readLine()) != null) {
    // Parse the line and store the clientId and url in a Node.
    // If the node is not yet in the HashMap, add it with count 1
    // (keyed on clientId alone or on (clientId, url), depending on
    // what the interviewer wants); otherwise get the value and increment it.
}
br.close();

- Anonymous August 23, 2011 | Flag Reply
A HashMap with only a key and no value?

- Amey January 05, 2015 | Flag
We can also implement this by grouping records into buckets, say a bucket/node of 1 KB (assuming a 1 KB block size).

class Bucket {
    long[] clientId = new long[x]; // x can be calculated assuming ids of fixed size
    long[] offset = new long[x];   // offsets of the records in the original file
}

Now we can use extendible hashing/external hashing to assign ids to buckets with the hash function key mod 2^g, where g is the global depth of the hash table/array, incremented on each bucket split.
We can then write these buckets to an external hash file. Since each bucket is 1 KB and we know the number of buckets, bucket i starts at seek(pos), where pos takes values 0, 1024, 2048, etc.
At retrieval time, we hash the client id with the same hash function, seek into the hash file, and from there reach the actual record in the input file, which we can print. We can also increment the count in the input file so that next time we get correct information.
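
A hypothetical sketch of the bucket addressing just described; `Bucket`, `ENTRIES_PER_BUCKET`, and `bucketOffset` are assumed names. With 1 KB buckets and hash key mod 2^g, bucket i begins at byte offset i * 1024 in the hash file.

```java
public class BucketFile {
    static final int BUCKET_SIZE = 1024;      // assumed 1 KB block size
    static final int ENTRIES_PER_BUCKET = 64; // 64 * (8-byte id + 8-byte offset) = 1 KB

    static final class Bucket {
        final long[] clientId = new long[ENTRIES_PER_BUCKET];
        final long[] offset = new long[ENTRIES_PER_BUCKET]; // record offsets in input file
    }

    // key mod 2^g, where g is the global depth of the directory
    static long bucketIndex(long clientId, int globalDepth) {
        return clientId % (1L << globalDepth);
    }

    // Byte position to seek() to in the hash file: 0, 1024, 2048, ...
    static long bucketOffset(long clientId, int globalDepth) {
        return bucketIndex(clientId, globalDepth) * BUCKET_SIZE;
    }
}
```

The actual I/O would use something like RandomAccessFile.seek with this offset; bucket splitting and directory doubling are omitted here.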

- Aman December 07, 2011 | Flag Reply
We can use the data structure below:

HashTable<(Customer ID), HashTable<(URL), Frequency>>

This also works for per-day aggregation: keep one such table per day.
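
A minimal sketch of that nested-table idea; `recordVisit` and `countFor` are illustrative names. The outer map is keyed by customer id, the inner map by URL, with the visit frequency as the value.

```java
import java.util.HashMap;
import java.util.Map;

public class NestedCounts {
    final Map<Integer, Map<String, Integer>> visits = new HashMap<>();

    // Create the inner map on first sight of a customer, then bump the URL count.
    void recordVisit(int customerId, String url) {
        visits.computeIfAbsent(customerId, k -> new HashMap<>())
              .merge(url, 1, Integer::sum);
    }

    // 0 when the customer or URL has never been seen.
    int countFor(int customerId, String url) {
        return visits.getOrDefault(customerId, Map.of()).getOrDefault(url, 0);
    }
}
```

Compared to a single map with a composite key, this layout makes per-customer queries (all URLs one client visited) a single outer lookup.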

- Anonymous January 27, 2013 | Flag Reply