eugene.yarovoi
Sanjay: I don't believe your algorithm will always yield the correct answer. Consider 3 12 24 20 31. The correct answer is 3 20 31  12 24. I believe your algo would try 3 24 31  12 20 and then consider this unpartitionable.
 eugene.yarovoi January 19, 2012 Obviously there's a solution. In fact, I have at least a few in mind.
The question is how efficient your solution is. That's always the question in computer science. Algorithms for most things are obvious if you're willing to accept inefficient ones.
Is this restricted to integer rectangle dimensions? Otherwise, there's an infinite number of possibilities.
 eugene.yarovoi January 19, 2012 Perhaps your exact suggestions weren't quite right, but on a relatively easy question like this, the interviewer would in fact be looking at how careful you're being.
 eugene.yarovoi January 19, 2012 Even in the case of threads, you'd need to be careful when updating. Another thread may have its own cached versions of those variables, and in the absence of synchronization, there may not be strong guarantees as to when a particular variable is read anew from memory.
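A minimal C11 sketch of the visibility point (my own illustration, not part of the original comment): without synchronization, a reader thread may keep seeing a stale cached value of a flag, whereas an atomic flag provides the publish/observe guarantees.

```c
#include <stdatomic.h>

/* Without _Atomic (or some other form of synchronization), a reader
   thread may never observe the writer's update. Declaring the flag
   atomic (seq_cst by default) makes the store visible and orders the
   plain write to `payload` before it. */
static _Atomic int ready = 0;
static int payload = 0;

void writer(void)
{
    payload = 42;               /* plain write... */
    atomic_store(&ready, 1);    /* ...published by this atomic store */
}

int reader(void)
{
    if (atomic_load(&ready))    /* once this observes 1... */
        return payload;         /* ...payload is guaranteed visible */
    return -1;
}
```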
 eugene.yarovoi January 19, 2012 What I mean to say is: didn't we all kind of invent DeMorgan's Laws? It's ridiculous for someone's name to be attached to something so obvious and basic.
 eugene.yarovoi January 19, 2012 I'd like to contribute to this discussion by saying that I really hate it when absolutely trivial results are called rules or theorems and have a name attached to them. I see you, DeMorgan, I see you.
 eugene.yarovoi January 19, 2012 Sorting will be at least O(N log N), though, unless you know the range of values and can do something akin to a counting sort.
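For the record, a minimal counting-sort sketch (my own illustration, assuming the values lie in a small known range, here at most 256 distinct values), which runs in O(n + range) rather than O(N log N):

```c
#include <string.h>

/* Sorts a[0..n-1] in O(n + range) time, assuming 0 <= a[i] < range
   and range <= 256 for this sketch. */
void counting_sort(int *a, int n, int range)
{
    int count[256];
    memset(count, 0, sizeof(count));
    for (int i = 0; i < n; i++)
        count[a[i]]++;              /* tally each value */
    int k = 0;
    for (int v = 0; v < range; v++) /* emit values in order */
        while (count[v]-- > 0)
            a[k++] = v;
}
```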
 eugene.yarovoi January 19, 2012 I downvoted this because you used a theorem in your proof whose proof is pretty close to the point of this whole problem. Use of that theorem makes this problem way too easy, and so it's clear that an interviewer wants to see you prove that theorem, or else find another elementary proof. You'd be missing the whole point by giving the solution that you gave.
 eugene.yarovoi January 19, 2012 +1.
"The same logic for 3." No, it's not the same logic for 3. Here, the condition that p+2 is prime comes into play, something that wasn't even considered in the proof for 2.
A simpler way to argue divisibility by 3: if p and p+2 are both prime (and p > 3), then p = 2 (mod 3). Since p is a prime greater than 3, p != 0 (mod 3); and p = 1 (mod 3) would imply p+2 = 0 (mod 3), contradicting the primality of p+2. Since p = 2 (mod 3), p+7 = 0 (mod 3).
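The claim is also easy to brute-force; a quick C check (my own sketch, `check_twin_claim` and `is_prime` are helpers I made up for illustration):

```c
/* Trial-division primality test; fine for small n. */
static int is_prime(int n)
{
    if (n < 2) return 0;
    for (int d = 2; d * d <= n; d++)
        if (n % d == 0) return 0;
    return 1;
}

/* Returns 1 if every twin-prime pair (p, p+2) with 3 < p <= limit
   satisfies (p + 7) % 3 == 0, as argued above. */
int check_twin_claim(int limit)
{
    for (int p = 5; p <= limit; p++)
        if (is_prime(p) && is_prime(p + 2) && (p + 7) % 3 != 0)
            return 0;
    return 1;
}
```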
 eugene.yarovoi January 19, 2012 Sure, there could still be benefits. I don't know what you mean by "process itself is pretty responsive", but I would note that having multiple threads allows you to do tasks while one thread is, for example, waiting on disk I/O or network I/O to complete.
Furthermore, processors employing hyperthreading can execute instructions from multiple threads simultaneously. This may offer a performance advantage because instructions from one thread can run when the instruction stream from another thread is stalled because of data hazards or memory access latency.
-1: unclear
 eugene.yarovoi January 19, 2012 Anonymous is right. In fact, since the array's unsorted, there's not really any more efficient way to find out.
 eugene.yarovoi January 17, 2012 This question should have been posted on the forum.
 eugene.yarovoi January 17, 2012 The whole point of this question is that you don't have the original point x, y.
 eugene.yarovoi January 17, 2012 Center of mass... implies some kind of weighting. How would you derive such, or is every point weighted the same? In that case, your answer is no different from that of the person who suggested taking the average. I still think that the only reasonable answer based on what's given in the question is to ask "well, what's the distribution of the guessing process?"
 eugene.yarovoi January 17, 2012 Fair enough. I supposed getting the bytes was the important part. How's this:
char * get_ip_address(int ip_address)
{
    unsigned char bytes[4];
    bytes[0] = (unsigned char) (ip_address >> 24);
    bytes[1] = (unsigned char) (ip_address >> 16);
    bytes[2] = (unsigned char) (ip_address >> 8);
    bytes[3] = (unsigned char) ip_address;
    //up to 3 chars for each of 4 groups of digits, 3 dots, and 1 null char
    char* out = (char*) malloc((3*4+3+1)*sizeof(*out));
    sprintf(out, "%u.%u.%u.%u", bytes[0], bytes[1], bytes[2], bytes[3]);
    return out;
}

eugene.yarovoi
January 16, 2012 Yep. There's also an algo that avoids the logarithmic factor entirely and just computes the answer in O(n). Look up the rank algorithm.
 eugene.yarovoi January 16, 2012 @tsichevski: No, afraid that's not right. For all you know, every number *could* be put into the heap, so the worst case is, in fact, O(n log k). Consider the situation where the elements are supplied to you in monotonically increasing order.
 eugene.yarovoi January 16, 2012 What kind of guesses? How were these guesses formed?
 eugene.yarovoi January 16, 2012 Well, constant memory with the caveat that you will destroy the original array. For most situations, destroying inputs is undesirable, and so this needs linear space to make a copy of the array. Still, the constant factor on the linear space is less than for a set.
 eugene.yarovoi January 16, 2012 Or just look at any of the 10 times this problem has already been solved on this site.
 eugene.yarovoi January 15, 2012 Tries don't have to have a huge space complexity.
 eugene.yarovoi January 15, 2012 Not defined by the standard. short >= 16 bits, int >= 16 bits, pointer >= the max number of bits that could possibly be needed for an address, which is usually the number of bits in the architecture (32 or 64).
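Those guaranteed minimums can be checked at compile time; a small C11 sketch (my own illustration, using the standard `limits.h` constants):

```c
#include <limits.h>

/* Compile-time checks of the minimum widths the C standard guarantees.
   These hold on any conforming implementation; actual widths are
   implementation-defined and usually larger. */
_Static_assert(CHAR_BIT >= 8, "char is at least 8 bits");
_Static_assert(sizeof(short) * CHAR_BIT >= 16, "short is at least 16 bits");
_Static_assert(sizeof(int) * CHAR_BIT >= 16, "int is at least 16 bits");
_Static_assert(sizeof(long) * CHAR_BIT >= 32, "long is at least 32 bits");
```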
 eugene.yarovoi January 15, 2012 That's really weird.
 eugene.yarovoi January 15, 2012 This is a very good approach.
 eugene.yarovoi January 15, 2012 I doubt they're meant to be continuous. Why would you assume that?
 eugene.yarovoi January 15, 2012 What's BSS?
 eugene.yarovoi January 15, 2012 Answered many times before on this site.
 eugene.yarovoi January 15, 2012
char * get_ip_address(int ip_address)
{
    //work with an unsigned copy so the right shifts are well defined
    unsigned int ip = (unsigned int) ip_address;
    char* bytes = (char*) malloc(4*sizeof(*bytes));
    bytes[0] = (char) (ip >> 24);
    bytes[1] = (char) (ip >> 16);
    bytes[2] = (char) (ip >> 8);
    bytes[3] = (char) ip;
    return bytes;
}
This would really take an unsigned int and return an unsigned char* if I got to pick...
 eugene.yarovoi January 15, 2012 This approach is a bit slow, like you mentioned. One way to improve it would be to use a min heap of size m to keep track of the next smallest element of each array. Until there is no more data remaining, remove the min element in the heap, and then add to the heap the next element from the same array the removed element originated from (so you'd have to keep track of that info together with the values themselves). What this does is cut your search time for getting the minimum of m arrays down to O(log m) from your earlier O(m), giving a total complexity of O(m*log(m)*n). Note that heapsort is a special case of this algorithm where n = 1.
Merging pairs of arrays is a decent approach too. It's basically like making the final log(m) passes of merge sort, and so will run in O(m*n*log(m)) time. A standard merge sort is a special case of this algorithm where n=1 (and then there are m*1 = m total elements, and they are sorted in O(m*log(m)) time).
Which of the two efficient algos discussed here is better probably depends on use case. The heap algo requires more operations on highly localized data but makes only one pass over the input. In an environment where the data doesn't fit into memory and disk reads are expensive, the heap approach may perform better. On the other hand, the second algorithm (which reads each element a logarithmic number of times) is easy to parallelize across multiple machines.
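A sketch of the heap-based k-way merge described above, using a simple array-backed min-heap (my own illustration; names like `merge_k_sorted` are made up):

```c
#include <stdlib.h>

/* One heap entry: a value plus the array and index it came from,
   so we know where to refill from after removing the minimum. */
typedef struct { int value; int src; int idx; } Entry;

static void sift_down(Entry *h, int n, int i)
{
    for (;;) {
        int smallest = i, l = 2*i + 1, r = 2*i + 2;
        if (l < n && h[l].value < h[smallest].value) smallest = l;
        if (r < n && h[r].value < h[smallest].value) smallest = r;
        if (smallest == i) return;
        Entry tmp = h[i]; h[i] = h[smallest]; h[smallest] = tmp;
        i = smallest;
    }
}

/* Merges m sorted arrays (arrays[i] has lens[i] elements) into out,
   using a size-m min-heap: O(total elements * log m). */
void merge_k_sorted(int **arrays, int *lens, int m, int *out)
{
    Entry *heap = malloc(m * sizeof(Entry));
    int n = 0;
    for (int i = 0; i < m; i++)
        if (lens[i] > 0) {
            heap[n].value = arrays[i][0];
            heap[n].src = i;
            heap[n].idx = 0;
            n++;
        }
    for (int i = n/2 - 1; i >= 0; i--)  /* heapify the first elements */
        sift_down(heap, n, i);

    int k = 0;
    while (n > 0) {
        out[k++] = heap[0].value;       /* emit current minimum */
        int s = heap[0].src, next = heap[0].idx + 1;
        if (next < lens[s]) {           /* refill from the same array */
            heap[0].value = arrays[s][next];
            heap[0].idx = next;
        } else {
            heap[0] = heap[--n];        /* that array is exhausted */
        }
        sift_down(heap, n, 0);
    }
    free(heap);
}
```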
It won't be O(m*n). Assuming some favorable things, maybe it'll be O(m*log(m)*n).
 eugene.yarovoi January 15, 2012 Well, I guess the heap solution can also be done in-place if you're clever. But the rank algorithm avoids the logarithmic factor.
 eugene.yarovoi January 15, 2012 +1 on myself.
 eugene.yarovoi January 12, 2012 I had attempted to package a fairly tough argument in an information-theoretic shell to make the argument easy to understand for people who have seen such arguments before. Similar argumentation can be used to show that sorting based on comparisons cannot be faster than O(N log N), and such.
This argument's actually kind of tough. Here's what I'm saying. When the professor makes an observation, by the calculations I showed, he makes 1 of at most 1287 distinct observations. Now, there are 13! possible arrangements of students. So, let's say the professor *partitions* the space of 13! possible outcomes into 1287 partitions, partition K representing the outcomes that are still possible after making first observation K. Even if he can make each partition the same size, there is at least some partition that has 13!/1287 = 4838400 outcomes.
Repeating this process, after a second observation, there's still some partition of size 3760 (ceil of 4838400/1287). After a third observation, some partition still has size at least 3. After a 4th observation, he could narrow it to 1 outcome.
Note that this proof doesn't show K can be as low as 4; it just shows no K less than 4 is possible. In fact, I don't think it's possible to do K < 5, but K = 5 is easily possible. These information-theoretic arguments just give a useful lower bound.
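The ceiling chain in the argument above can be checked mechanically; a small C sketch (my own illustration, not part of the original comment):

```c
/* Ceiling division for positive integers. */
static long long ceil_div(long long a, long long b)
{
    return (a + b - 1) / b;
}

/* Starting from 13! possible outcomes, repeatedly divide the largest
   remaining partition by 1287 (the number of distinct observations)
   and count how many observations are needed to reach one outcome:
   6227020800 -> 4838400 -> 3760 -> 3 -> 1, i.e. 4 observations. */
int observations_needed(void)
{
    long long outcomes = 6227020800LL; /* 13! */
    int steps = 0;
    while (outcomes > 1) {
        outcomes = ceil_div(outcomes, 1287);
        steps++;
    }
    return steps;
}
```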
You could do a binary search for the square root. This is the most elementary approach that requires you to know the least math. It's also arguably a very compsciency way of doing it.
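A sketch of that binary search for the integer square root (my own illustration; the `mid <= n / mid` comparison avoids overflow in `mid * mid`):

```c
/* Returns floor(sqrt(n)) for n >= 0 by binary search on the answer. */
long long isqrt(long long n)
{
    long long lo = 0, hi = n;
    while (lo < hi) {
        long long mid = lo + (hi - lo + 1) / 2; /* upper midpoint */
        if (mid <= n / mid)   /* mid*mid <= n, without overflow */
            lo = mid;         /* mid is still a valid candidate */
        else
            hi = mid - 1;     /* mid is too large */
    }
    return lo;
}
```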
 eugene.yarovoi January 09, 2012 Double-checked locking can actually be incorrect, and is thus often considered an antipattern. Find the article on it on Wikipedia.
 eugene.yarovoi January 09, 2012 I actually think the algorithm described sounds more like preorder. There is no indication that *all* nodes on the same level are traversed before nodes on the next level.
 eugene.yarovoi January 09, 2012 I'm sure the expected answer is polymorphism. Certainly, though, polymorphism does contribute to abstraction.
 eugene.yarovoi January 08, 2012 There's only one answer for the sum of integers from 1 to 100. One test case. The answer's 5050.
 eugene.yarovoi January 08, 2012 Internally, it's possible for different threads to allocate on different heaps to avoid the need for synchronization. In that situation, thread-specific heaps may be copied over to a shared heap periodically. These details are highly language-, compiler-, and runtime-specific, though.
The answer above should be more than sufficient for your typical interviewer.
If the domain is words, that could be a somewhat larger space than you might hope.
 eugene.yarovoi January 08, 2012 Seriously, what is it with everyone trying to use LCS for everything? LCS is a relatively inefficient algorithm (quadratic) at best, and incorrect in most problems, like this one, since the words can appear in a different order in the magazine than they do in the ransom note.
Remember, LCS is applicable in a relatively narrow range of situations.
This is a classic question. Been asked this in at least 2 interviews.
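A sketch of the frequency-counting alternative to LCS (my own illustration, assuming a character-level version of the problem; a word-level version would count words instead):

```c
#include <string.h>

/* Returns 1 if the ransom note can be formed from the magazine's
   letters, by counting character frequencies. Order doesn't matter,
   which is exactly why LCS is the wrong tool here.
   Runs in O(|note| + |magazine|). */
int can_form_note(const char *note, const char *magazine)
{
    int count[256];
    memset(count, 0, sizeof(count));
    for (const unsigned char *p = (const unsigned char *)magazine; *p; p++)
        count[*p]++;                 /* tally available letters */
    for (const unsigned char *p = (const unsigned char *)note; *p; p++)
        if (--count[*p] < 0)         /* ran out of this letter */
            return 0;
    return 1;
}
```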
 eugene.yarovoi January 08, 2012 This assumes you can destroy the original array, which you should ask the interviewer about.
 eugene.yarovoi January 08, 2012 Agreed, if N is small, insertion sort should probably be used. Depends on how small we're talking about.
 eugene.yarovoi January 08, 2012 Of course it won't work. You need an array of length 2^16-1. Whether or not this is acceptable depends on the situation.
 eugene.yarovoi January 08, 2012 This wouldn't work because the longest common subsequence could include more than 1 type of character.
 eugene.yarovoi January 08, 2012
You can even do it without a stack.
 eugene.yarovoi January 19, 2012