## eugene.yarovoi

Not quite. If y is divisible by x, the answer should be just y/x.

- eugene.yarovoi October 25, 2012

C++ doesn't have such a keyword.

- eugene.yarovoi October 23, 2012

No. The top-voted solution uses O(n) memory, where n is the number of countries. Your solution can't use any less, and it will use exponentially more in some cases.

- eugene.yarovoi October 23, 2012

Actually the idea from "Approach 1" is correct (in that it gives the optimal number of moves) if modified a little to remove bugs. If implemented properly, it can run in O(n).

- eugene.yarovoi October 22, 2012

Look up the Dutch National Flag problem for another technique.

- eugene.yarovoi October 22, 2012

I can promise you that there are cases for which the number of needed moves is larger. The answer is always between num_not_at_correct_pos and 2*num_not_at_correct_pos.

- eugene.yarovoi October 22, 2012

@Prash: the original solution has complexity O(number of set bits); your solution has complexity O(number of set bits + number of not-set bits).

- eugene.yarovoi October 21, 2012

Unfortunately there are about n! states here, so this solution won't scale. Yes, there are better ways to do this, even in O(n) time. I'll try to post a solution later.

- eugene.yarovoi October 21, 2012

@Peter: yes, the O(1) runtime will be amortized runtime in this solution as it is currently stated. I would argue that this is probably acceptable, but should be checked with the interviewer.

@Epic_coder: I second algos's opinion that remove() was probably intended to take a parameter. Otherwise, what is its contract? To just remove any element whatsoever? That doesn't seem tremendously useful. I suppose its contract could be to remove the last element inserted, but there's no particularly good evidence to suggest this was the intention.

which reduces to 2^2N - 2^N

- eugene.yarovoi October 20, 2012

@Prash: You might still use an approach similar to what you describe if k is large. Loler's solution is optimized for k being small. Also, I don't think dealing with the "distinct" condition, if indeed it was intended to be part of this problem, is that easy. You can't just not print duplicate pairs, because the problem requires that you form n/2 pairs. You must form the pairs in such a way that none are duplicates.

- eugene.yarovoi October 20, 2012

Nope. The rand(5) in this problem doesn't give you something random in a continuous interval; rather, it gives you an integer.

If further explanation is required: rand(5) only produces 5 possible outputs. If those are the inputs to the rand7() function you're trying to build, your rand7() function can have at most 5 possible outputs, which is certainly wrong.
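For contrast, the standard correct construction (not spelled out in this thread) is rejection sampling: combine two rand(5) calls to expand the output space, then discard the excess so the remainder is uniform. The function names here are illustrative:

```python
import random

def rand5():
    # Stand-in for the given primitive: uniform integer in 1..5.
    return random.randint(1, 5)

def rand7():
    # Two rand5() calls give a uniform value in 0..24. Reject 21..24
    # so the surviving values 0..20 map evenly onto 7 outputs.
    while True:
        v = 5 * (rand5() - 1) + (rand5() - 1)  # uniform over 0..24
        if v < 21:
            return v % 7 + 1                   # uniform over 1..7
```

Each iteration succeeds with probability 21/25, so the expected number of rand5() calls is constant.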

@Anonymous: How about 1 1 1 1 1 1 3 3 and k=4? Sum is 12. 12 % 4 == 0, but you cannot form 4 pairs such that each will be divisible by k. So the condition Jason indicates is necessary but insufficient.

- eugene.yarovoi October 17, 2012

Under what precise conditions is a person to be considered a legend?

- eugene.yarovoi October 11, 2012

@Simple Coder:

I'd like to start by pointing out that Loler's solution is O(n), which is optimal.

Your solution would have been O(n) as well, but unfortunately it's completely incorrect. Consider k=4 and 1 1 1 1 1 1 3 3. Sum divisible by 4? Yep. 3+3 > 4? Yep. But there is no way to do the splitting into pairs.
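For what it's worth, pairing feasibility can be tested in O(n) by counting residues mod k; this is presumably the kind of check an O(n) solution would rely on (a sketch under that assumption, not Loler's actual code):

```python
from collections import Counter

def can_pair_divisible(nums, k):
    # Pairs sum to a multiple of k iff residues r and k-r match up:
    # residue 0 must appear an even number of times, the middle
    # residue k/2 (when k is even) must too, and otherwise
    # count(r) must equal count(k - r).
    counts = Counter(x % k for x in nums)
    if counts[0] % 2 != 0:
        return False
    for r in range(1, k // 2 + 1):
        if r == k - r:
            if counts[r] % 2 != 0:
                return False
        elif counts[r] != counts[k - r]:
            return False
    return True
```

On the counterexample above, `can_pair_divisible([1, 1, 1, 1, 1, 1, 3, 3], 4)` returns False even though the total sum is divisible by 4.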

@Anonymous above: It takes O(log n) to find the minimum. You end up doing this O(n^2) times.

- eugene.yarovoi October 10, 2012

A[0][0][N-1] is not guaranteed to be smaller than A[0][1][0]. The only sortedness property here is that if you have two positions whose coordinates differ in only one dimension, the position with the greater value of the differing dimension holds a value >= to the position with the lesser value of the differing dimension.

- eugene.yarovoi October 09, 2012

Anonymous: not the same as my algorithm. Mine costs O(N log N) preprocessing and O(log N) per subsequent query; yours costs up to O(N) per subsequent query.

@Sukusuku: this problem has both initial preprocessing complexity and per-query complexity that need to be discussed. See my analysis in the original answer.

Sort all the login times in one list, sort all the logout times in another list. This is O(NlogN) preprocessing. Now when a query comes along, do a binary search on both lists to count the number of logins and the number of logouts prior to the specified time. Then the number of logged in users is logins minus logouts. This query answering process is O(logN). So for N elements and D queries, we achieve O((N+D) log N) when we count both preprocessing and query cost.
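A sketch of the two-list approach, using one possible boundary convention (a login at exactly time t counts as logged in, a logout at exactly t does not yet count as logged out; names are illustrative):

```python
import bisect

def preprocess(sessions):
    # sessions: list of (login_time, logout_time) pairs.
    # Sorting both lists is the O(N log N) preprocessing step.
    logins = sorted(t for t, _ in sessions)
    logouts = sorted(t for _, t in sessions)
    return logins, logouts

def logged_in_at(logins, logouts, t):
    # Binary search both lists: logins at or before t, minus
    # logouts strictly before t. O(log N) per query.
    return bisect.bisect_right(logins, t) - bisect.bisect_left(logouts, t)
```

For example, with sessions (1, 5), (2, 3), (4, 6), two users are logged in at time 4.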

- eugene.yarovoi October 08, 2012

"The key idea is for the merge subroutine to maintain the invariant that T[i] beat its preceeding elements and lost to its succeeding elements."

This invariant cannot possibly always be maintained because the win relation can contain cycles. It could be that T[1] loses to T[2] and T[2] to T[3], yet T[1] wins against T[3].

I suppose this modified topological sort is correct. But if you're going to achieve O(N^2), you might as well use a simpler algorithm. For instance, take the first element, and declare it to be a chain. Now for each additional element, try inserting it somewhere into this chain (it's not hard to show that given any element E not in the chain and any chain, E can be inserted somewhere into the chain).
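A sketch of the chain-insertion idea (the names and the `beats` callback are assumptions): insert each new team just before the first team in the chain that beat it. By construction the predecessor did not beat it, so both adjacencies hold.

```python
def chain_order(teams, beats):
    # Build an ordering where each team beat its predecessor.
    # beats(a, b) is True if a beat b; every pair played once.
    # N insertions, each scanning O(N) positions: O(N^2) total.
    chain = []
    for e in teams:
        i = 0
        # Skip past every team that e beat; stop at the first
        # team that beat e (or the end of the chain).
        while i < len(chain) and not beats(chain[i], e):
            i += 1
        chain.insert(i, e)
    return chain
```

With a transitive tournament (say, `beats(a, b)` is `a > b`), this just produces the sorted order; the point is that it also works when wins are cyclic.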

- eugene.yarovoi October 07, 2012

It's just like quicksort in that respect. It's possible that we can find a way to guarantee that we don't choose these edge cases, similarly to how we can apply the median of medians algorithm on quicksort to guarantee that quicksort performs in deterministic O(N log N).

- eugene.yarovoi October 06, 2012

You can use an algo that gets its idea from quicksort. Pick a team, and partition the other teams into a set of teams that lost to that team and a set of teams that won against that team.

```
import random

def order(S, beats):
    # beats(a, b) is True if team a beat team b
    if not S:
        return []
    t = random.choice(S)
    rest = [x for x in S if x != t]
    # + on lists means concatenation
    return (order([x for x in rest if beats(t, x)], beats)  # lost to t
            + [t]
            + order([x for x in rest if beats(x, t)], beats))  # beat t
```

The expected time complexity is O(n log n) for n teams. The analysis is the same as for quicksort.

- eugene.yarovoi October 05, 2012

@RS: Why do you think the list of numbers is random? This is not stated in the problem. What if they're adversarially chosen?

- eugene.yarovoi October 05, 2012

That would probably be my first thought too. To give some analysis: the person who posted this answer probably (and apologies if I'm mistaken) meant that we should choose an integer uniformly at random from the space of all possible integers we're considering (let's say this question is about 32-bit ints). Then, because there are > 4*10^9 possibilities (in the case of 32-bit ints), if we scan our array of size 10^9 for the number we just chose, the chance we'll find it is < 1/4. If we don't find it, the algo terminates and returns the number. If we find it, we just run the whole algorithm again.

So we have that E[running time] = O(n) + P(we need to run again)*E[running time]. This implies E[running time] = O(n) / (1-P). Since we know P < 1/4, we know (1-P) > 3/4. This means E[running time] < (4/3) O(n), which means E[running time] is still O(n).
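A minimal sketch of that rejection loop for 32-bit ints (the linear `in` scan matches the O(n) per-attempt cost described above; the function name is illustrative):

```python
import random

def number_not_in(arr):
    # Pick a random 32-bit int and linearly scan arr for it. An
    # array of 10^9 elements can rule out fewer than 1/4 of the
    # 2^32 candidates, so each attempt succeeds with probability
    # > 3/4 and the expected number of scans is constant.
    while True:
        candidate = random.randrange(-2**31, 2**31)
        if candidate not in arr:   # O(n) linear scan per attempt
            return candidate
```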

- eugene.yarovoi October 05, 2012

I believe it's called a convergent geometric series.

Suppose we want to sum from the 0th to the k-th term of a geometric series.

Let x = 1 + 3/4 + (3/4)^2 + ... + (3/4)^k

Then 3/4 x = 3/4 + (3/4)^2 + ... + (3/4)^(k+1)

subtracting, 1/4 x = 1 - (3/4)^(k+1), which approaches 1 as k grows large

x approaches 4 in the limit, so the algo just ends up being 4*O(n), which is still O(n).

You do achieve a lower constant factor here.

n^2 log(n^2) = 2 n^2 log n.

That's vs. roughly n^2 log n.

But yes, the amount of disorder is still sufficiently high that you don't achieve any reduction in asymptotic complexity. Really this makes sense in some way if you think about it. The list is about half-sorted. An unsorted list would have n^2 sorted chunks of size 1, a sorted list 1 sorted chunk of size n^2, and here you have n sorted chunks of size n, which is right in the middle (halfway on a logarithmic scale). Accordingly, only the constant factor is halved.

(The above is not meant to be any sort of formal or precise argument.)

We'll want to be taking the smallest elements first. However, we want to avoid sorting to avoid having an O(n log n) algorithm. Of course if we could sort this would render the algorithm trivial: in the sorted array, take all the negative elements and zeros, and then as many positive elements as you can, starting from the smallest, until taking the next one would make your sum > M.

We need an O(n) algo, so we'll have to do something different, something similar to quickselect. First, partition the list into one half having negative and zero elements, and the other half having positive elements. Take all the negative and zero elements and include them in your subset. Suppose their sum is S (which is negative or zero). You now need to find the largest subset of the positive integers that sums to T = M - S or less. Now we apply the following to the positive integers:

Pick a random pivot element (or pick the median using a deterministic method if you need determinism). Partition the set into two parts, one part less than and one part greater than or equal to the pivot. Sum the entire less-than part. If the sum is <= T, include all of these elements in the total count and look for the largest subset <= T - Sum in the greater-than-or-equal part. If the sum is > T, don't include any elements in this step and repeat this step just on the less-than part, with the same target T. Whenever we reach a set of size 1, we determine whether or not that one element is small enough to be included, and then stop (base case).

This is O(N) in expectation because partitioning the elements will first take O(n). Then the algo is repeated on just one of the partitions. Under perfect conditions, we'd get partition sizes of n/2 and n/2, but in the average case we might see something like 3n/4 and n/4. Even if it's more likely that the element we're looking for is in the 3n/4 part, we have n + n*3/4 + n*(3/4)^2 + ... = O(n). This is the expectation (over all inputs), but the worst-case scenario if all the dice roll the wrong way can be O(N^2) or even infinite depending on the precise implementation. However, if you use a deterministic O(N) algo for partitioning an array, this algo will then also be deterministic and worst-case O(N).
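A sketch of the whole procedure, assuming integer inputs and M >= 0 (so all non-positive elements always fit); a three-way partition around the pivot avoids the infinite loop that duplicates would otherwise cause:

```python
import random

def largest_subset_size(nums, M):
    # Size of the largest subset of nums whose sum is <= M.
    pos = [x for x in nums if x > 0]
    count = len(nums) - len(pos)            # take every non-positive element
    T = M - sum(x for x in nums if x <= 0)  # remaining budget for positives
    while pos:
        pivot = random.choice(pos)
        less = [x for x in pos if x < pivot]
        s = sum(less)
        if s > T:                   # budget too small: recurse on the left part
            pos = less
            continue
        count += len(less)          # everything below the pivot fits
        T -= s
        equal = [x for x in pos if x == pivot]
        take = min(len(equal), T // pivot)   # copies of the pivot that fit
        count += take
        T -= take * pivot
        pos = [x for x in pos if x > pivot]  # recurse on the right part
    return count
```

For instance, with nums = [-2, 3, 1] and M = 2 the whole set qualifies (sum 2), so the answer is 3.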

A queue approach like outlined above by dr.house seems reasonable then. What are your requirements when aborting? We could just flush the queue. Do you need to roll back to the state everything was in before any of the jobs took effect?

- eugene.yarovoi October 03, 2012

Right. I was thinking of trying to prove it directly by counting the number of possible interleavings of two sorted arrays. But reducing sorting to this problem is definitely a good idea for an easy proof. In fact we can see that even if the merge step took, say, N^(1-e) for some e > 0, it would still give a sort that's O(n). So O(n) is at least very close to the bound on the merge operation.

- eugene.yarovoi October 03, 2012

It takes O(n^2 log n) to make a sorted 1D array out of this.

- eugene.yarovoi October 03, 2012

"You would also want to invent some concept of batching identical tasks together so that resource utilization is maximized."

Define "identical tasks". What exactly do you mean here?

"The queue is interesting. It is required because there is no guarantee how long a particular task would take."

Not sure I'm following your train of thought there. It's certainly not required, though maybe desired. What sort of situation are you contrasting this with? One where you can try to partition the tasks into roughly equal sets because they have known durations?

"It actually, doesnt matter if the VMs reside on the same host or not."

I'm not so sure about that one. If the JVMs are all sitting on one machine that has no multiprocessing or anything like that, it might be wiser for it to not waste memory by loading the same classes in many different JVMs. I'm not saying this is a good strategy all the time, but I could see certain situations where running everything on one JVM might give the best performance. So the answer to such a question is not completely irrelevant. I agree, however, that the interviewer was probably looking for some sort of distribution strategy and was probably thinking of these JVMs being on multiple machines or at least on a machine with multiprocessing capabilities.

@pradegup: how is this in-place?

@CupidVogel: linear time is probably optimal for merging two sorted arrays. I am not aware of any kind of O(logN + logM) algorithm and in fact am reasonably confident that that's not possible, just based on information-theoretic considerations.

What sorts of tasks? Can these tasks be done in any order? Does each JVM reside on a separate machine, or in such an environment that it gets its own resources?

- eugene.yarovoi October 02, 2012

Bluesky: the algorithm as given is O(n^2 log n). I think you're considering something different.

- eugene.yarovoi October 01, 2012

It's not possible to do better than O(n log n) if each row provides no information regarding anything in any other row. You'll have to do a binary search on each row.

You have n independent problems, the optimal solution to each of which is O(log n). So O(n log n) is the minimum.

The potential issue here is that you need O(n) extra space, and you also need to make memory writes, instead of just reads. The fact that you're writing a bunch of stuff to memory can slow you down because those writes are being made to memory instead of much faster registers. I wouldn't necessarily expect this technique to be faster than the naive technique of just keeping two variables with the top two numbers as you traverse the array, as there you don't spend any time doing memory writes.

- eugene.yarovoi October 01, 2012

I have not included such things in the past. Prospective employers can request references if they choose.

- eugene.yarovoi October 01, 2012

It's actually possible in O(n) if the elements are all distinct, and that's what people are talking about elsewhere on this thread. My proof sketch above makes certain assumptions that don't hold up in the event that the numbers are guaranteed to be distinct.

- eugene.yarovoi October 01, 2012

The point of a resume is to look more impressive, and most of the time, publications are fairly impressive, so I would say yes, include them.

I would advise staying at one page unless your publications list has more than a few items. If you have a lot of publications, I would think that it's acceptable to include additional pages, provided that the additional pages don't include anything other than publications. I would also try to make sure your most impressive publications are on the first page.

Remember, people spend 15-30 seconds reading your resume, so they won't read the details. This guiding principle should inform every aspect of how you craft your resume.

I doubt that's possible even then. The thing is that there may be a very small number of these local minima, and because they're local and don't have any sort of global structure, you can't know where they are until you inspect those locations. I haven't thought about it enough to write a precise proof or even be sure that it's not possible, but the basic reasoning behind an impossibility proof would be this: if you show that there are O(1) local minima in some cases, and further show that nothing but some O(1)-sized neighborhood around a given local minimum can have any information about that minimum, then you can show that only some O(1)-sized set of locations reveals any information at all about the answer to this problem. This rules out the possibility of any algorithm, deterministic or probabilistic, being better than O(n^2) in expectation for those cases.

- eugene.yarovoi September 30, 2012

I guess the question is ambiguous. Using a plural together with "any" ("find any local minima") can mean "find all local minima". "Find any one local minimum" would be more clear.

- eugene.yarovoi September 30, 2012

Oh, any local minima. I thought it was all local minima.

- eugene.yarovoi September 30, 2012

There can be up to O(N^2) local minima, so this won't be doable in less than O(N^2).

- eugene.yarovoi September 30, 2012

You don't need to generate all possible (x1,y1) and (x2,y2).

- eugene.yarovoi September 30, 2012

This can be done in expected O(1) time if you unleash the power of randomization.

Pick two distinct indices at random and get the values there. If they are the same, you've found the identical element. If not, repeat this process. Each time you try this, the chance of success is roughly 1/4 (the first element is one of the identical elements with 1/2 probability, and the second element is one of the identical elements with (n/2-1)/n probability, which is almost 1/2 for all but the smallest of inputs). Therefore, the expected number of attempts needed is ~4, which is O(1) attempts. Each attempt just requires looking at 2 random elements, which takes O(1), so this entire algorithm is O(1) in the expectation.

Without randomization, it's not possible to solve this in less than O(N) because an adversary that knows your algorithm would be able to predict all the locations you will read in advance and place only distinct elements in them, ensuring you never see the identical element until you've first read up to N/2-1 locations, which takes O(N) time.
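A sketch of the randomized attempt loop, under the assumption stated above that half the array consists of copies of one value (and the rest are distinct):

```python
import random

def find_duplicate_value(arr):
    # Repeatedly sample two distinct random indices; any equal pair
    # must be the duplicated value. Each attempt succeeds with
    # probability roughly 1/4, so the expected number of attempts
    # is about 4: expected O(1) time overall.
    n = len(arr)
    while True:
        i, j = random.randrange(n), random.randrange(n)
        if i != j and arr[i] == arr[j]:
            return arr[i]
```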

Unless this is some sort of augmented hash table, hash tables don't normally contain any kind of order information. So there's not really any way you're going to sort a hash table.

Of course, you can retrieve all the values, place them into some sort of list, and sort the list, if that's what you want.

Looking at the question, it would seem that it asks us to minimize both as much as possible. There are of course tradeoffs: in your solution, you trade a whole lot of preprocessing time for the optimal query time. In another solution on this page, preprocessing is avoided entirely but results in a trivial O(n) lookup time.

- eugene.yarovoi September 26, 2012

But something like a conditional branch instruction can be so much more expensive than some bitwise operations. Are we to take such things into account as well?

- eugene.yarovoi September 25, 2012

I suppose it just depends on how you interpret the question. I would think that this is being used for some sort of random bag data structure, where I think it would probably be more useful to return elements with probability proportional to their frequency. But we really don't know what the requirements are, so we can't say.

- eugene.yarovoi October 25, 2012