Bloomberg LP Interview Question for Senior Software Development Engineers


Country: United States





Paxos leader election. Apache ZooKeeper implements a Paxos-style consensus protocol (ZAB), and ZooKeeper is used by Hadoop, Solr, etc.
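Roughly, something like this with the kazoo Python client (untested sketch from memory, so double-check the API; the hosts, paths and identifier are made up):

{{{
from kazoo.client import KazooClient

# Connect to the ZooKeeper ensemble (addresses are just examples).
zk = KazooClient(hosts="zk1:2181,zk2:2181,zk3:2181")
zk.start()

def lead():
    # Only the elected leader runs this; when it returns (or the
    # session dies), leadership is released and re-election happens.
    print("I am the leader, coordinating the cluster...")

# kazoo's Election recipe: every candidate creates an ephemeral
# sequential znode under the path; the lowest sequence number wins.
election = zk.Election("/myapp/election", identifier="node-42")
election.run(lead)  # blocks until this node wins, then calls lead()
}}}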

- Jen August 28, 2014

I think this is the perfect solution.
That said, make sure you understand Paxos properly before giving it as a solution in an interview; expect the interviewer to grill you on it.

- sarangkunte1991 November 01, 2014

I think we need two or more nodes holding and exchanging the metadata (about all existing working and non-working nodes, and about data backups/synchronization). All the other nodes should query those metadata nodes as needed to distribute the queries and collect the results.
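Roughly the idea (the class names and the lookup API are made up, just to illustrate workers consulting a small set of replicated metadata nodes):

{{{
import random

# Replicated metadata nodes: each holds the full view of which data
# nodes are alive and where each piece of data (and its backups) lives.
# They keep each other in sync; that replication is not shown here.
class MetadataNode:
    def __init__(self):
        self.alive_nodes = set()
        self.replicas = {}                  # key -> nodes holding a copy

    def locate(self, key):
        return self.replicas.get(key, [])

def locate_key(metadata_nodes, key):
    # A worker asks any metadata node it can reach; in a real system
    # this would be an RPC, and unreachable metadata nodes are skipped.
    for meta in random.sample(metadata_nodes, len(metadata_nodes)):
        try:
            return meta.locate(key)
        except Exception:                   # stand-in for an RPC failure
            continue
    raise RuntimeError("no metadata node reachable")
}}}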

- igorfact August 13, 2014

How about distributing the data evenly across all the nodes?

E.g., every node has configuration parameters that define its near-cache nodes (if the data is not present on a node, it will be present on one of its near-cache nodes).

As data gets added to a node, depending on the configured replication policy, the data is replicated among all the near nodes.

This would solve the data-distribution problem and improve overall CPU time for transaction-oriented operations.
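One common way to spread the data evenly and pick each key's small set of "near" replica nodes is consistent hashing with a replication factor. This is not exactly the scheme described above, just a rough sketch of the distribution part (node names and the replication factor are made up):

{{{
import bisect
import hashlib

def _hash(value):
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

class ConsistentHashRing:
    """Maps each key to `replicas` successive nodes on a hash ring,
    so data spreads evenly and each key has a small set of nodes
    holding its copies."""
    def __init__(self, nodes, replicas=3):
        self.replicas = replicas
        self.ring = sorted((_hash(n), n) for n in nodes)

    def nodes_for(self, key):
        hashes = [h for h, _ in self.ring]
        start = bisect.bisect(hashes, _hash(key)) % len(self.ring)
        picked = []
        for i in range(len(self.ring)):
            node = self.ring[(start + i) % len(self.ring)][1]
            if node not in picked:
                picked.append(node)
            if len(picked) == self.replicas:
                break
        return picked

ring = ConsistentHashRing(["node-a", "node-b", "node-c", "node-d"])
print(ring.nodes_for("user:1234"))   # e.g. ['node-c', 'node-d', 'node-a']
}}}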

- Teja August 14, 2014

In Hadoop, we have to create a master node (the NameNode) that stores metadata about the data nodes (on which the actual data is stored). If that node fails, there is also a provision for a backup/standby of the master node.
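The general idea behind the backup is edit-log shipping: the active master appends every metadata change to a shared log that a standby replays, so the standby can take over with nearly the same state. A toy sketch of that idea (not Hadoop's actual code; the class and method names are made up):

{{{
class SharedEditLog:
    """Stands in for a shared/journaled edit log."""
    def __init__(self):
        self.entries = []

class MasterNode:
    def __init__(self, edit_log):
        self.metadata = {}                 # e.g. path -> data nodes
        self.edit_log = edit_log

    def record_block(self, path, data_nodes):
        self.metadata[path] = data_nodes
        self.edit_log.entries.append(("record_block", path, data_nodes))

class StandbyMaster:
    def __init__(self, edit_log):
        self.metadata = {}
        self.edit_log = edit_log
        self.applied = 0

    def catch_up(self):
        # Replay any edits not applied yet, so the standby stays warm.
        for op, path, data_nodes in self.edit_log.entries[self.applied:]:
            if op == "record_block":
                self.metadata[path] = data_nodes
        self.applied = len(self.edit_log.entries)
}}}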

- sparkingsun143 August 17, 2014

In networking there are algorithms by which each router maintains an up-to-date routing table, refreshed periodically through collaboration among all the routers. In the same way, each node can keep a view of which nodes are free and available, and that view can be regularly refreshed using some collaboration (gossip-style) protocol. A submitted job is placed in a queue, and every node polls the queue at a certain frequency. The node that picks up the job then sends a request to another node to schedule it; if that node does not accept the request, the job is forwarded to another node that is known to be free, and the node that declined is marked as busy.
This may not be an exact solution, since there are trade-offs involved, but it is something that can be proposed.

- Anonymous September 26, 2014

Push all the metadata to the data nodes (slaves), so that each node knows how many nodes exist.
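A tiny sketch of that idea: every node holds the full membership list, and updates are simply pushed to everyone (names made up):

{{{
class ClusterNode:
    def __init__(self, name):
        self.name = name
        self.members = set()          # every node knows every other node

    def receive_membership(self, members):
        self.members = set(members)

def broadcast_membership(nodes):
    # Push the current membership (the "metadata") to every data node,
    # so each one knows how many nodes exist and who they are.
    names = [n.name for n in nodes]
    for n in nodes:
        n.receive_membership(names)

nodes = [ClusterNode(f"node-{i}") for i in range(5)]
broadcast_membership(nodes)
print(len(nodes[0].members))          # 5 -- each node knows the cluster size
}}}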

- vi April 18, 2015

