Google Interview Question for Software Engineer / Developers


Country: United States





I think what the interviewer meant was a distributed cache, much like memcached or Coherence. In essence, a distributed cache is internally implemented as a Distributed Hash Table (DHT). While implementing a DHT, we need to keep the CAP theorem in mind: Consistency, Availability, and Partition tolerance. Only two of these three can be satisfied at any one time, so depending on your requirements, you pick which two.

- chandershivdasani September 04, 2012 | Flag Reply
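The DHT idea above is usually realized with consistent hashing, which is how memcached clients typically partition keys across cache nodes. Below is a minimal sketch of that technique; the `HashRing` class, the md5-based hash, and the virtual-node count are my own illustrative choices, not something from this thread:

```python
import bisect
import hashlib

class HashRing:
    """Minimal consistent-hash ring: maps cache keys to nodes so that
    adding or removing a node only remaps the keys that node owned."""

    def __init__(self, nodes=(), vnodes=8):
        self.vnodes = vnodes   # virtual nodes per physical node, for balance
        self._ring = []        # sorted list of (hash, node) points
        for n in nodes:
            self.add_node(n)

    @staticmethod
    def _hash(key):
        # md5 gives a stable hash, reproducible across runs and machines
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add_node(self, node):
        for i in range(self.vnodes):
            bisect.insort(self._ring, (self._hash(f"{node}#{i}"), node))

    def remove_node(self, node):
        self._ring = [(h, n) for h, n in self._ring if n != node]

    def get_node(self, key):
        if not self._ring:
            raise KeyError("empty ring")
        h = self._hash(key)
        # First ring point at or after the key's hash, wrapping around
        idx = bisect.bisect_left(self._ring, (h, ""))
        if idx == len(self._ring):
            idx = 0
        return self._ring[idx][1]
```

Usage: `HashRing(["cache-a", "cache-b"]).get_node("user:42")` returns the node responsible for that key; removing one node leaves every other key's mapping untouched, which is the property that makes this attractive for distributed caches.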

By parallel cache, I mean cache memory shared by a number of cores in a processor.
The L1 cache is dedicated to a single core, while the L2 cache is shared: in a dual-core processor it serves both cores, and it is generally larger than the L1 cache. Parallel caching means sharing cache memory among more than one core, which increases overall performance by improving the cache hit rate.

- Anonymus VIP B@l@rk September 06, 2012 | Flag Reply

In my experience with parallel programming, the major threat I have found is false sharing. It occurs when your threads modify variables that happen to reside on the same cache line, invalidating that line and forcing updates in every other core's copy. To ensure data consistency across multiple caches, multiprocessor-capable Intel® processors follow the MESI (Modified/Exclusive/Shared/Invalid) protocol. On the first load of a cache line, the processor marks the line as 'Exclusive'. As long as the line is marked Exclusive, subsequent loads are free to use the existing data in cache. If the processor sees the same cache line loaded by another processor on the bus, it marks the line with 'Shared' access. If the processor stores to a cache line marked 'Shared', the line is marked 'Modified' and all other processors are sent an 'Invalid' message for that line. If the processor sees the same cache line, now marked 'Modified', being accessed by another processor, it writes the line back to memory and marks its copy as 'Shared'; the other processor accessing that line incurs a cache miss.

- Spurthi chag September 19, 2012 | Flag Reply
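The MESI transitions described above can be sketched as a toy state machine. This is only an illustration of the protocol as the comment describes it: the `MESILine` class and its `load`/`store` methods are hypothetical, and real hardware tracks states per cache via bus snooping rather than in one shared table:

```python
# Toy MESI simulator for one cache line shared by N cores.
MODIFIED, EXCLUSIVE, SHARED, INVALID = "M", "E", "S", "I"

class MESILine:
    def __init__(self, n_cores):
        # Per-core state of this one cache line; nobody holds it yet
        self.state = [INVALID] * n_cores

    def load(self, core):
        if self.state[core] != INVALID:
            return  # cache hit: E, S, and M lines satisfy loads locally
        holders = [c for c, s in enumerate(self.state) if s != INVALID]
        if holders:
            # Another core has the line; an 'M' holder writes it back,
            # and every copy (including ours) drops to Shared
            for c in holders:
                self.state[c] = SHARED
            self.state[core] = SHARED
        else:
            # First load of the line: Exclusive access
            self.state[core] = EXCLUSIVE

    def store(self, core):
        # The writer takes Modified; all other copies are invalidated
        for c in range(len(self.state)):
            self.state[c] = INVALID
        self.state[core] = MODIFIED
```

Running `load(0)`, `load(1)`, then `store(0)` on a two-core line walks through E, then S/S, then M/I, which is exactly the invalidation traffic that makes false sharing expensive: two threads writing unrelated variables on the same line keep bouncing it between M and I.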

By parallel cache, do you mean multithreaded (on one computer) or multi-machine, like memcached?

- Anonymous September 04, 2012 | Flag Reply

