IvgenyNovo
A possible approach:
1. Build a map of counters whose keys are the elements and the values are the appearance counters of each element.
2. Create an array of the map's entries - the entries are of the form (key, value), where key is an element and value is the counter of the number of times the element appeared.
3. Use the median of medians algorithm to find the kth highest entry according to value. This means that comparisons between two entries are made according to their value fields (the counters).
4. Iterate over the array and return all the keys of the entries that have a value (counter) greater than or equal to the counter we found in step 3. There should be at least k such elements (maybe more in case several elements have a counter equal to that counter - in this case we can either return all of them or just the first k elements we find).
Complexity: O(n) amortized runtime (amortized because of the counter map; median of medians is O(n) worst case). Space complexity is O(n) as well.
Regarding the follow-up question - I think it can also be done in O(n) worst-case complexity.
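As a sketch of steps 1, 2 and 4 (using a simple sort in place of median of medians for brevity, which makes this O(n log n) rather than O(n); the class and method names are just for illustration):

```java
import java.util.*;

public class TopKFrequent {
    // Count appearances, then take the k keys with the highest counters.
    public static List<Integer> topK(int[] arr, int k) {
        Map<Integer, Integer> counts = new HashMap<>();
        for (int x : arr) {
            counts.merge(x, 1, Integer::sum);     // step 1: build the counter map
        }
        // step 2: an array/list of (key, counter) entries
        List<Map.Entry<Integer, Integer>> entries = new ArrayList<>(counts.entrySet());
        // sorting stands in for the median-of-medians selection of step 3
        entries.sort((a, b) -> b.getValue() - a.getValue());
        List<Integer> result = new ArrayList<>();
        for (int i = 0; i < k && i < entries.size(); i++) {
            result.add(entries.get(i).getKey());  // step 4: collect the top k keys
        }
        return result;
    }
}
```

Replacing the sort with a linear-time selection recovers the O(n) bound described above.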
Note: The solution is for the case where the "last index" is outside the array. In other words, it finds the minimum number of jumps required to jump out of the array. It should be easy to modify the algorithm if the last index means the last index in the array.
The main difficulty here is finding the minimum number of jumps required to reach each index in the array. If we knew that, then the moment we reach an index from which we can jump out of the array, we could return the minimum number of jumps required to reach that index, incremented by 1 (notice that the minimum number of jumps is non-decreasing as the index increases).
To do it efficiently we define a queue of pairs (end, jumps). We iterate from the beginning of the array and maintain a max variable which holds the maximum reachable index. Whenever we encounter an element arr[i] from which we can jump further than max (i + arr[i] > max), we update max and add the pair (i + arr[i], jumps + 1) to the queue.
What is jumps? jumps + 1 is the minimum number of jumps required to reach i + arr[i] when the last jump is from i to i + arr[i].
How do we find jumps? We remove elements from the queue until the pair (end, jumps) at the front of the queue satisfies end >= i. This means that jumps is the minimum number of jumps required to reach index i.
Code:
public static int getMinimumNumOfJumps(int[] arr){
    if ((arr == null) || (arr.length == 0) || (!canJumpOut(arr))){
        return -1;
    }
    Queue<Index> queue = new LinkedList<>();
    queue.offer(new Index(0, 0));
    int max = 0;
    for (int i = 0; (i < arr.length) && (max < arr.length); i++){
        while (queue.peek().getEnd() < i){ queue.poll(); }
        if (i + arr[i] > max){
            max = i + arr[i];
            queue.offer(new Index(i + arr[i], queue.peek().getJumps() + 1));
        }
    }
    return queue.peek().getJumps() + 1;
}
Notes:
canJumpOut() - returns true if it's possible to jump out of the array and false otherwise (the original question). Its implementation is similar, only without the queue.
Index class - just a simple class which contains two int fields, end and jumps. The methods getEnd() and getJumps() return the corresponding values. The constructor is Index(int end, int jumps).
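A minimal Index class matching that description might look like this (a sketch; the original class wasn't posted):

```java
public class Index {
    private final int end;
    private final int jumps;

    public Index(int end, int jumps) {
        this.end = end;     // maximum index reachable with this many jumps
        this.jumps = jumps; // minimum number of jumps needed to reach indices up to end
    }

    public int getEnd() { return end; }
    public int getJumps() { return jumps; }
}
```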
Complexity: We add at most 1 element to the queue per iteration. This means that we can remove at most n elements from the queue in total. Thus the runtime complexity is O(n) worst-case. Space complexity is O(n) worst-case as well.
Hopefully I haven't missed something.
rotinom, I respectfully disagree.
First of all, the algorithm and the code are pretty trivial. The heart of the algorithm, the method getKthElement(), is just 24 lines of code; the rest (the Interval and IntervalPoint classes, building an interval end points array) is straightforward and is there simply to provide fully working code. It isn't particularly hard to explain either, especially when you can communicate verbally and draw on a board - options I don't have when posting an answer on this site.
Second of all, the goal in these types of interview questions, especially in big companies like Amazon or Google, isn't to provide a fully working trivial napkin solution. It is to see how a candidate tackles a problem, what their thinking process is, whether they recognize the challenge the problem offers and, finally, whether they can formulate their ideas into a working, efficient solution.
For the question above, it's pretty obvious that the interviewer wasn't shooting for any solution that depends on the ranges of the intervals - that's pretty much a brute-force approach which may pass the "cocktail napkin test" but probably won't impress the interviewer all that much. The line of thinking here should be: how do I solve this using only the intervals themselves, not their elements? Do I really need to iterate over each interval's elements? Can I know how many times each element appears using the intervals alone? That was my approach to solving this problem and it didn't take a lot of time to come up with the solution above.
I can accept the criticism for my lack of ability to provide a clear explanation but I don't think my approach to solving this problem was wrong.
Anonymous, the algorithm doesn't ignore the three nines, the code above accounts for all of them.
To make the algorithm clearer, let's take your example: {[5,10],[8,13],[7,15]} and suppose we want to find the 10th element (which is 9 in this case):
1. First of all we build the end points array and sort it; it should look like this:
points = {(5,start),(7,start),(8,start),(10,end),(13,end),(15,end)}
2. We set the overlap count to 0 (overlap = 0) and the elements count to 0 as well (elements = 0).
3. Start iterating over the points array. In the first iteration the element is (5,start) and because it's just the first element the only thing we do is increment the overlap counter as the point (5,start) is an interval starting point, so overlap = 1.
4. In the next iteration, the point is (7,start). Because it's not the first iteration, we want to count the number of elements the interval [5,7] contributes. How much does it contribute?
It contributes (points[1].value - points[0].value)*overlap + ((points[0].type == start) ? 1 : 0) = (7-5)*1 + 1 = 3 elements (elements = 3). Notice that we add 1 if points[0].type is start; this is because we increment the overlap counter only at the end of the iteration, so in the previous iteration we did not count the additional appearance of points[0].value which results from opening the previous interval. (In fact, when we calculate how many elements each interval contributes, we count the elements without the first element, because we accounted for it in the previous iteration - except maybe 1 appearance in case the previous point was a starting point.) This is also the reason why we count 7 only once even though it appears twice in our input intervals; we'll count its other appearance in the next iteration.
We check whether the total number of elements is greater than or equal to k, and it's not, because elements = 3 < 10 = k. That tells us that the 10th element is not in the interval [5,7].
Once again, because the current point (7,start) is a starting point, we increment the overlap counter by 1 (overlap = 2). By the end of the second iteration:
elements = 3, overlap = 2.
5. In the third iteration the point is (8,start). Once again we want to calculate how many elements the interval [points[1],points[2]] = [7,8] contributes, and we also know that each element in this interval should appear twice because the overlap counter is 2, which means that 2 of the input intervals overlap on the interval [7,8]. How much does [7,8] contribute? The same calculation as before:
(points[i].value - points[i-1].value)*overlap + ((points[i-1].type == start) ? 1 : 0) = (8-7)*2 + 1 = 3 (elements += 3 => elements = 6). Again, notice that we add 1 because points[i-1].type == start - this accounts for the missing 7 from the previous iteration. Also notice that once again we count only 2 appearances of 8 while in fact there should be 3; we'll account for the other appearance of 8 in the next iteration (like we did for 7).
Once again we check whether the number of elements we have encountered so far is greater than or equal to k, but it's not: elements = 6 < 10 = k.
Because (8,start) is a starting point, we increment the overlap counter again, so at the end of this iteration:
elements = 6, overlap = 3.
6. The next iteration point is (10,end). We calculate how many numbers it contributes:
(points[3].value - points[2].value)*overlap + ((points[2].type == start) ? 1 : 0) = (10-8)*3 + 1 = 7 (elements += 7 => elements = 13). Once again we added the missing appearance of 8 from the previous iteration.
We check whether the total number of elements is greater than or equal to k and this time it is: elements = 13 >= 10 = k. This means that the 10th number is one of the numbers in the interval [points[2].value, points[3].value] = [8,10].
How do we find it? Because the overlap counter is equal to 3, every element except the first in the interval [8,10] appears 3 times. The first element, 8, appears once here because the previous point is a starting point (8,start), so we counted this extra 8 only in the current iteration. The elements the current interval [8,10] contributes to our count are 8,9,9,9,10,10,10 (7 numbers in total). We saw 6 elements up to the previous point, so we want to extract the (10-6) = 4th element from the elements in the current interval. Extracting the 4th element from the beginning of the current interval is equivalent to extracting the 3rd element from the end of the interval, which is exactly the technique used in the following formula:
d = elements - k = 13 - 10 = 3
kth_element = points[3].value - (d/overlap) = 10 - (3/3) = 10 - 1 = 9.
So the value we return is 9 as expected.
Hopefully, it makes it a bit clearer. The key here is to understand that when we count elements in an interval [a,b], we account for a only by adding 1 if it's a start point in one of the original intervals; we do not count it multiple times because we already did so in the previous iteration.
In the final Note (before the complexity analysis), we add 1 to d if the previous point was a starting point because we didn't account for it in the previous iteration. Sorry for the confusion.
- IvgenyNovo April 04, 2014

I think this can be done in O(n log n) worst case, where n is just the number of intervals (not the sum of their ranges).
Suppose we have a single interval [a,b] and we want to find the kth element (k = 1,2,3,...) in this interval. There's no point in iterating over the entire range; the kth element is just
element[k] = b - ((b-a+1)-k) = a + k - 1
What if we knew that all the elements in the interval [a,b] appear twice, what is the kth element? In this case the kth element can be calculated by
element[k] = b - (2*(b-a+1)-k)/2 = b - (b-a+1) + k/2 = a - 1 + k/2
In general, if we know that each element in [a,b] appears m times then the kth element is
element[k] = b - (m*(b-a+1)-k)/m = a - 1 + k/m
The observation above implies that by knowing the interval and the number of times each element appears inside the interval, we can find the kth element in O(1) time.
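This O(1) extraction can be sketched as follows (the b - (m*(b-a+1)-k)/m form is the one to use with integer division, since it stays exact even when k is not a multiple of m; the class name is illustrative):

```java
public class KthInInterval {
    // kth element (1-based) of the interval [a, b], where every
    // element appears exactly m times; O(1) arithmetic only.
    public static int kth(int a, int b, int m, int k) {
        if (k < 1 || k > m * (b - a + 1)) {
            throw new IllegalArgumentException("k out of range");
        }
        // count (m*(b-a+1) - k) elements back from b; integer floor
        // division keeps this exact for all k
        return b - (m * (b - a + 1) - k) / m;
    }
}
```

For example, the 4th element of [8,10] with each element appearing 3 times is 9, matching the worked example above.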
Now, the end points of all the intervals induce a partition of the real line into intervals. For instance, if we have 2 intervals [a,b] and [c,d] such that a<=c<=b<=d, then the real line is partitioned into 5 intervals: (-infinity,a], [a,c], [c,b], [b,d] and [d,+infinity). We can discard the infinite intervals as they contribute nothing. We also know for each interval how many times each element appears in it (using the intersections of the original input intervals):
[a,c] - each element appears once (c should appear twice, but we'll count its other appearance in [c,b])
[c,b] - each element except c appears twice (c appears once because we already counted one of its appearances in [a,c])
[b,d] - each element appears once, except b which does not appear at all (we counted b's 2 appearances in [c,b])
Knowing how many appearances of numbers there are in each interval allows us to count the total number of elements up to any point (a, b, c and d). We want to find the kth element. Suppose we counted elements_c < k elements up to point c, but after adding the elements of the interval [c,b] the total number of elements is elements_b >= k. This means that the kth element must lie within the interval [c,b] and we can use our initial observation to find it in O(1) worst-case complexity.
So basically it comes down to the following steps:
1. Build an interval end points array (if the intervals are [a1,b1],[a2,b2],...,[an,bn] then the points array is {a1,a2,...,an,b1,b2,...,bn}). For each point maintain information whether it's an interval starting point (a1,a2,...,an) or an ending point (b1,...,bn).
2. Sort the points array according to the points' values. For equal values, sort according to whether the point is a starting or ending point - start points will appear before end points.
3. The points array from steps (1)(2) represents a partition of the real line. Iterate over it while maintaining a counter that's incremented whenever we encounter a starting point and decremented whenever we encounter an ending point. This counter represents the number of intersecting original intervals for each interval in the points partition. This will allow us to use our previous observations to count the total number of elements up until each point throughout the iteration.
4. Once we reach a point in the points array for which the total number of elements reaches or exceeds k, we know that the kth element must lie within the interval [points[i-1], points[i]]. We'll use our initial observation to extract it in O(1).
Code (Java):
Interval Class
public static class Interval {
    private int start;
    private int end;

    private void validateInput(int start, int end){
        if (start > end){
            throw new IllegalArgumentException("start cannot be greater than end");
        }
    }

    public Interval(int start, int end){
        validateInput(start, end);
        this.start = start;
        this.end = end;
    }

    public int getStart(){ return start; }
    public int getEnd(){ return end; }
}
IntervalPoint class (for the points array)
public class KthElementInIntervals {
    private static class IntervalPoint implements Comparable<IntervalPoint> {
        public enum Type {
            OPEN, CLOSE;
        }

        private int value;
        private Type type;

        public IntervalPoint(int value, Type type){
            this.value = value;
            this.type = type;
        }

        public int getVaule(){ return value; }
        public Type getType(){ return type; }

        @Override
        public int compareTo(IntervalPoint ip){
            int b1 = (this.getVaule() < ip.getVaule()) ? -1 : (this.getVaule() == ip.getVaule()) ? 0 : 1;
            int b2 = (this.getType() == ip.getType()) ? 0 : (this.getType() == Type.OPEN) ? -1 : 1;
            return (2*b1) + b2;
        }

        @Override
        public String toString(){
            return "(" + String.valueOf(this.getVaule()) + "," + this.getType() + ")";
        }
    }
A method for building the IntervalPoint[] array (the interval end points array which represents a partition)
private static IntervalPoint[] getIntervalPointArray(Set<Interval> intervals){
    if (intervals == null){
        return new IntervalPoint[0];
    }
    IntervalPoint[] res = new IntervalPoint[intervals.size()*2];
    int i = 0;
    for (Interval interval : intervals){
        res[i++] = new IntervalPoint(interval.getStart(), IntervalPoint.Type.OPEN);
        res[i++] = new IntervalPoint(interval.getEnd(), IntervalPoint.Type.CLOSE);
    }
    return res;
}
The main method for retrieving the kth element from a set of intervals:
public static int getKthElement(Set<Interval> intervals, int k){
    if (k < 1){
        throw new IllegalArgumentException("k must be positive");
    }
    IntervalPoint[] points = getIntervalPointArray(intervals);
    Arrays.sort(points);
    int overlap = 0;
    int elements = 0;
    for (int i = 0; i < points.length; i++){
        if (i > 0){
            int d = (points[i].getVaule() - points[i-1].getVaule())*overlap +
                    ((points[i-1].getType() == IntervalPoint.Type.OPEN) ? 1 : 0);
            elements += d;
            if (elements >= k){
                int p = elements - k;
                return points[i].getVaule() - (p/overlap);
            }
        }
        if (points[i].getType() == IntervalPoint.Type.OPEN){ overlap++; }
        else { overlap--; }
    }
    throw new IllegalArgumentException("k is greater than the total number of elements");
}
Note: In the method above, when we calculate the number of elements in the current interval (d), we add 1 if the previous point was a starting point, because we hadn't accounted for it in the previous iteration.
Complexity: Let n be the number of intervals (the size of the set of intervals); then the worst-case runtime complexity is O(n log n). Space complexity is O(n).
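To make the whole sweep testable in isolation, here is a condensed, self-contained rendering of the same algorithm (intervals as int[2] pairs instead of the Interval/IntervalPoint classes; the class name is illustrative):

```java
import java.util.*;

public class KthElementDemo {
    // points are (value, isOpen); sorted by value, open before close on ties.
    public static int kthElement(int[][] intervals, int k) {
        int[][] points = new int[intervals.length * 2][2];
        int n = 0;
        for (int[] iv : intervals) {
            points[n++] = new int[]{iv[0], 1};  // interval start (open)
            points[n++] = new int[]{iv[1], 0};  // interval end (close)
        }
        Arrays.sort(points, (p, q) -> (p[0] != q[0]) ? p[0] - q[0] : q[1] - p[1]);
        int overlap = 0, elements = 0;
        for (int i = 0; i < points.length; i++) {
            if (i > 0) {
                // add 1 when the previous point was a start point (see the Note above)
                int d = (points[i][0] - points[i-1][0]) * overlap + points[i-1][1];
                elements += d;
                if (elements >= k) {
                    return points[i][0] - (elements - k) / overlap;
                }
            }
            overlap += (points[i][1] == 1) ? 1 : -1;
        }
        throw new IllegalArgumentException("k is greater than the total number of elements");
    }
}
```

On the worked example {[5,10],[8,13],[7,15]} with k = 10 this returns 9, and it also handles disjoint intervals, since d is 0 over gaps where overlap is 0.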
I can think of two approaches:
First approach - a naive approach using an adjacency map
The adjacency map is a Map whose keys are vertices and whose values are sets of vertices which are all the neighbors of the key vertex. For every vertex, we'll check for every pair of its neighbors whether there is an edge between them and increment the triangle counter if so.
The total number of triangles will be the number of triangles we counted divided by 6 (we count each triangle 6 times).
Code:
private static class Edge {
    private final Object from;
    private final Object to;

    public Edge(Object from, Object to){
        this.from = from;
        this.to = to;
    }

    public Object getFrom(){ return from; }
    public Object getTo(){ return to; }

    @Override
    public String toString(){
        return "(" + ((from != null) ? from.toString() : null) + "," + ((to != null) ? to.toString() : null) + ")";
    }
}
public static Map<Object,Set<Object>> buildAdjacencyMap(List<Edge> edges){
    if ((edges == null) || (edges.isEmpty())){
        return Collections.<Object,Set<Object>>emptyMap();
    }
    Map<Object,Set<Object>> graph = new HashMap<>();
    for (Edge e : edges){
        if (!graph.containsKey(e.getFrom())){
            graph.put(e.getFrom(), new HashSet<Object>());
        }
        if (!graph.containsKey(e.getTo())){
            graph.put(e.getTo(), new HashSet<Object>());
        }
        graph.get(e.getFrom()).add(e.getTo());
        graph.get(e.getTo()).add(e.getFrom());
    }
    return graph;
}
public static int getNumberOfTriangles1(List<Edge> edges){
    Map<Object,Set<Object>> graph = buildAdjacencyMap(edges);
    int triangles = 0;
    for (Set<Object> neighbors : graph.values()){
        for (Object v2 : neighbors){
            for (Object v3 : neighbors){
                if ((!v2.equals(v3)) && (graph.get(v2).contains(v3))){
                    triangles++;
                }
            }
        }
    }
    return (triangles/6);
}
Complexity: The overall runtime complexity is O(n*d^2), where n is the number of vertices and d is the maximum degree of a vertex in the graph. This is a good approach for graphs with small maximum vertex degree, but if the graph contains a vertex whose degree is O(n) then the overall complexity would be O(n^3).
Second approach - using matrix multiplication
Suppose A is the graph's adjacency matrix (A[i][j] = 1 if and only if there is an edge between i and j in the graph). It can be shown that trace(A^3)/6 is the number of triangles in the graph (using the fact that A^k[i][j] is the number of paths with k edges from i to j). This means that all we need in order to count the triangles is to calculate the matrix A^3 and its trace.
So our algorithm's complexity depends on the complexity of the matrix multiplication algorithm:
Naive: O(n^3)
Strassen: O(n^{2.8074})
Coppersmith-Winograd: O(n^{2.3729})
I can post code for this approach using Strassen matrix multiplication, but it's rather long and isn't pretty.
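With naive O(n^3) multiplication, though, the trace(A^3)/6 idea fits in a few lines (a sketch; class and method names are illustrative):

```java
public class TriangleCount {
    // Naive O(n^3) matrix multiplication for square matrices.
    static int[][] multiply(int[][] x, int[][] y) {
        int n = x.length;
        int[][] z = new int[n][n];
        for (int i = 0; i < n; i++)
            for (int k = 0; k < n; k++)
                for (int j = 0; j < n; j++)
                    z[i][j] += x[i][k] * y[k][j];
        return z;
    }

    // trace(A^3)/6: each triangle yields 6 closed 3-edge walks.
    public static int countTriangles(int[][] a) {
        int[][] a3 = multiply(multiply(a, a), a);
        int trace = 0;
        for (int i = 0; i < a.length; i++) trace += a3[i][i];
        return trace / 6;
    }
}
```

Swapping in Strassen or Coppersmith-Winograd for multiply() gives the better bounds listed above.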
First of all, I think Ramesh N's solution with a deque might be right, at least if I understood it correctly (I haven't tested it).
Anyway, my approach would be slightly different. Here are the general steps in my solution which I'll explain later:
1. Implement a MinStack class, which is a stack that also keeps track of the minimum element. It supports the regular stack operations with O(1) worst-case complexity and a getMin() operation which returns the minimum of all the elements in the stack in O(1) worst-case as well.
2. Implement a MinQueue class, which is a queue that, similarly to MinStack, keeps track of the minimum element in the queue. Unlike MinStack, in MinQueue the worst-case complexity of the remove() operation might be O(n).
3. Use MinQueue to determine the minimum element for every k-element window in the input array. The complexity analysis will show that even though MinQueue's remove() operation has O(n) worst-case complexity, the overall runtime for determining the minimum values of all the windows is still linear in the worst case.
Implementation:
1. The MinStack class. Our goal is to implement a stack that supports all operations with O(1) worst-case complexity, including getMin(). We'll implement this using two stacks: the first (stack) stores all the elements currently in the stack, and the second (minStack) is used to maintain the minimum element at each point. When we push an element, we push it onto the first stack, and if the element is less than or equal to the element at the top of minStack, we push it onto minStack as well. When we pop, we pop the element at the top of stack, and if it is equal to the element at the top of minStack, we pop from minStack as well. The getMin() operation simply returns (without popping) the element at the top of minStack.
Code:
public class MinStack <T extends Comparable<T>> {
    private final Stack<T> stack;
    private final Stack<T> minStack;

    public MinStack(){
        stack = new Stack<T>();
        minStack = new Stack<T>();
    }

    public void push(T e){
        if (e == null){
            throw new NullPointerException("element cannot be null");
        }
        stack.push(e);
        if ((minStack.isEmpty()) || (minStack.peek().compareTo(e) >= 0)){
            minStack.push(e);
        }
    }

    public T pop(){
        if (stack.isEmpty()){
            throw new NoSuchElementException("Stack is empty");
        }
        T e = stack.pop();
        if (minStack.peek().compareTo(e) == 0){ minStack.pop(); }
        return e;
    }

    public T peek(){
        if (stack.isEmpty()){
            throw new NoSuchElementException("Stack is empty");
        }
        return stack.peek();
    }

    public boolean isEmpty(){ return stack.isEmpty(); }

    public T getMin(){
        if (minStack.isEmpty()){
            throw new NoSuchElementException("Stack is empty");
        }
        return minStack.peek();
    }
}
2. The MinQueue class. We'll implement MinQueue using two MinStacks. The idea is similar to implementing a queue using two stacks: one (pushStack) for adding elements to the queue and the other (popStack) for removing elements from it. To add an element, simply push it onto pushStack. To remove an element, we first check whether popStack is empty. If it is, we pop all the elements from pushStack and push them (in the order they were popped) onto popStack, and then we pop one element from it. If popStack isn't empty, we just pop one element from it. Notice that the worst-case runtime complexity of remove() is O(n), but we'll see later that it doesn't make the overall asymptotic complexity of our algorithm worse. The getMin() operation simply returns the minimum of the minimum elements of pushStack and popStack.
Code:
public class MinQueue <T extends Comparable<T>> {
    private final MinStack<T> pushStack;
    private final MinStack<T> popStack;

    public MinQueue(){
        pushStack = new MinStack<T>();
        popStack = new MinStack<T>();
    }

    public void add(T e){
        if (e == null){
            throw new NullPointerException("e is null");
        }
        pushStack.push(e);
    }

    private void popFromStack(){
        while (!pushStack.isEmpty()){
            popStack.push(pushStack.pop());
        }
    }

    private T peekRemove(boolean remove){
        if ((pushStack.isEmpty()) && (popStack.isEmpty())){
            throw new NoSuchElementException("Queue is empty");
        }
        if (popStack.isEmpty()){ this.popFromStack(); }
        return (remove) ? popStack.pop() : popStack.peek();
    }

    public T remove(){
        return peekRemove(true);
    }

    public T peek(){
        return peekRemove(false);
    }

    public T getMin(){
        if ((pushStack.isEmpty()) && (popStack.isEmpty())){
            throw new NoSuchElementException("Queue is empty");
        }
        if (popStack.isEmpty()){ return pushStack.getMin(); }
        else if (pushStack.isEmpty()){ return popStack.getMin(); }
        return ((pushStack.getMin().compareTo(popStack.getMin())) < 0) ? pushStack.getMin() : popStack.getMin();
    }
}
3. The main algorithm is pretty straightforward using MinQueue. We add the first k (k = windowSize) elements of the array to the MinQueue and use getMin() to get the minimum element. Then, in each iteration step, we move the window to the right by removing one element from the queue and adding the next element of the array. For every new window we use getMin() to retrieve the minimum element.
Code:
public static <T extends Comparable<T>> void getMinArray(T[] input, T[] output, int k){
    if ((input == null) || (output == null)){
        throw new NullPointerException("Input arrays are null");
    }
    if ((k < 1) || (k > input.length)){
        throw new IllegalArgumentException("k is illegal");
    }
    MinQueue<T> queue = new MinQueue<T>();
    for (int i = 0, j = 0; (i <= input.length) && (j < output.length); i++){
        if (i >= k) {
            output[j++] = queue.getMin();
            queue.remove();
        }
        if (i < input.length){ queue.add(input[i]); }
    }
}
Complexity Analysis: It would seem that because MinQueue's remove() operation has O(n) worst-case runtime complexity, the overall algorithm should be O(n^2), but in fact the overall complexity is linear - O(n) in the worst case.
To see this, let us count the total number of push() and pop() operations performed by the MinQueue throughout the entire runtime of our algorithm:
1. We push k elements to the first stack (adding the first k elements to the queue).
2. We want to remove an element from the queue. Elements are removed from the second stack (popStack), which is empty after step 1 (but not necessarily after step 3), so we pop all the elements from the first stack (pushStack) and push them onto popStack - another 2k pushes and pops.
3. Now popStack contains k elements, which means that each of the next k removals requires just a single pop() operation (for every remove() call). Every time we move the sliding window to the right we pop one element and push the new element in the window onto pushStack. This means we'll move our sliding window k elements to the right before we have popped all the elements from popStack - a total of 2k pushes and pops.
4. We repeat steps 2-3 until the sliding window reaches the end of the original array.
How many times are steps 2-3 executed? (Each execution of steps 2-3 requires 4k push and pop operations - 2k to move all the elements from pushStack to popStack when popStack becomes empty, and another 2k pushes and pops for moving the window k elements to the right.) At the end of steps 2-3 the sliding window has moved k elements to the right, so the number of times steps 2-3 are executed equals the number of times the sliding window can move k elements to the right. That number is (n-k)/k. It's n-k because of step 1, where we insert the first k elements without popping any element from popStack. Thus, the total number of push and pop operations performed on the stacks is:
k + 4k*(n-k)/k = k + 4(n-k) = 4n - 3k.
Because MinStack::push() and MinStack::pop() are O(1) worst-case operations and there are 4n - 3k such operations, the overall worst-case runtime complexity is O(n), as desired (getMin() operations are O(1) as well).
Space complexity is obviously O(n) worstcase.
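For comparison, the deque approach mentioned at the top (as I understood it) can be sketched in a self-contained form; it keeps indices in the deque with their values in increasing order, which is also O(n) overall:

```java
import java.util.*;

public class SlidingWindowMin {
    // Monotonic-deque sliding-window minimum.
    public static int[] getMinArray(int[] input, int k) {
        int[] output = new int[input.length - k + 1];
        Deque<Integer> deque = new ArrayDeque<>(); // indices; values increase front to back
        for (int i = 0; i < input.length; i++) {
            if (!deque.isEmpty() && deque.peekFirst() <= i - k) {
                deque.pollFirst();               // drop the index that left the window
            }
            while (!deque.isEmpty() && input[deque.peekLast()] >= input[i]) {
                deque.pollLast();                // drop values dominated by input[i]
            }
            deque.offerLast(i);
            if (i >= k - 1) {
                output[i - k + 1] = input[deque.peekFirst()];
            }
        }
        return output;
    }
}
```

Each index is pushed and popped at most once, which is the same amortization argument as in the two-stack analysis above.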
getCycleLengths(), getCyclesLCM() and getOrder() are part of the Permutation class, not helper methods - sorry about that.
- IvgenyNovo March 11, 2014

Interesting question. I think it can be done in O(n) worst-case time complexity using symmetric groups (if I understand the question correctly).
First of all, here is the way I understood the question:
1. There is a Deck (Queue of cards) and a Table (Stack of cards). The Deck contains n distinct cards.
2. In each inner step of an iteration, two cards are removed from the Deck (the Queue). The first is placed on the Table (pushed onto the Stack) and the second is added to the back of the Deck (added to the Queue).
3. Step 2 is repeated until the Deck contains 1 card which is then removed from the Deck and placed on the Table (pushed to the Stack).
4. We collect the cards on the table and build a new Deck according to their current order (pop cards from the Stack and push them into the Queue). Steps 1-4 are considered a single iteration.
5. Repeat steps (1)-(4) until we restore the original Deck order.
Example for n=4:
Initial Deck = {0,1,2,3}
After 1 iteration: {3,1,2,0}
After 2 iterations: {0,1,2,3}
Which means that 2 iterations are required to restore the initial state.
My approach would be to determine how the cards are permuted in each iteration and use this information to find out how many iterations would be needed to restore the original state.
The first iteration defines a permutation on the deck indices (the permutation is applied to the indices, not to the elements themselves), and the same permutation applies in every subsequent iteration (the order of the deck elements may be different, but as mentioned, the permutation applies only to the indices of the elements, not the elements themselves). This implies that finding the number of iterations is equivalent to finding the order of that permutation (in the symmetric group Sn).
The order of a permutation can be found by calculating the lcm (least common multiple) of all the cycle lengths in the cycle representation of the permutation. We'll also use the fact that lcm(a,b) = (a*b) / gcd(a,b) and Euclid's algorithm for calculating the gcd.
Here are two examples for this approach:
Example 1 (n=4): The initial deck is {0,1,2,3}. The deck after 1 iteration is {3,1,2,0}. The corresponding permutation p is defined as follows: p(0)=3, p(1)=1, p(2)=2, p(3)=0 (it receives the index of an element in the deck at the beginning of the iteration and returns the index of the same element at the end of the iteration). The cycle representation of p is p = (0 3) (think of p as an array which represents several circular linked lists and find all these lists/cycles within the array; cycles of length 1 are considered trivial). This means that p contains a single nontrivial cycle of length 2, and thus the order of p is 2. In other words, p^2 = identity permutation. Applying this to our example:
Iteration 1: p({0,1,2,3}) = {3,1,2,0}
Iteration 2: p({3,1,2,0}) = p(p({0,1,2,3})) = p^2({0,1,2,3}) = identity({0,1,2,3}) = {0,1,2,3}
Thus the number of iterations required for this case is 2.
Example 2 (n=3): The initial deck is {0,1,2}. The deck after 1 iteration is {1,2,0}. The permutation p is defined as follows: p(0)=2, p(1)=0, p(2)=1. The cycle representation of p is p = (0 2 1). The permutation p has a single nontrivial cycle of length 3, which means that p's order is 3; in other words, p^3 = identity. Applying this to our example:
Iteration 1: p({0,1,2}) = {1,2,0}
Iteration 2: p({1,2,0}) = p(p({0,1,2})) = p^2({0,1,2})={2,0,1}
Iteration 3: p({2,0,1}) = p(p({1,2,0})) = p(p(p({0,1,2}))) = p^3({0,1,2}) = identity({0,1,2}) = {0,1,2}
Thus the number of iterations is 3.
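The cycle-length/lcm computation from these examples can be condensed into a short self-contained sketch (using Euclid's gcd; the class name is illustrative):

```java
public class PermutationOrder {
    static long gcd(long a, long b) {
        while (b != 0) { long t = a % b; a = b; b = t; }
        return a;
    }

    // Order of permutation p (p[i] = new index of the element at index i):
    // the lcm of all cycle lengths.
    public static long order(int[] p) {
        boolean[] seen = new boolean[p.length];
        long lcm = 1;
        for (int i = 0; i < p.length; i++) {
            if (!seen[i]) {
                int len = 0, j = i;
                do { seen[j] = true; j = p[j]; len++; } while (j != i);
                lcm = lcm / gcd(lcm, len) * len;  // lcm(a,b) = a*b/gcd(a,b)
            }
        }
        return lcm;
    }
}
```

On the examples above: order({3,1,2,0}) is 2 and order({2,0,1}) is 3.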
Implementation:
1. Find the permutation p by performing a single iteration - this can be done in O(n) using a Queue and a Stack.
2. Find all the nontrivial cycle lengths of the permutation p. We can view p as an array where p[i] is the new index of the element at index i. Finding the lengths of all the cycles in the array p can be done in a similar fashion to finding the number of elements in a circular list (two pointers, one slow and one fast; the length is determined when they meet). We maintain a flag array to mark all the indices we've already visited so that we won't traverse the same cycle twice. Overall complexity of this step: O(n).
3. Once we have all the cycle lengths, we need to calculate their lcm. This is done using the identity lcm(a,b) = (a*b) / gcd(a,b). The time complexity of finding gcd(a,b) is O(log m) where m = min(a,b) (I may be wrong here, please correct me if I am). Suppose there are k nontrivial cycles and their lengths are m1,m2,...,mk. First we calculate lcm(1,m1) - O(log(m1)). Then lcm(lcm(1,m1),m2) - O(log(m2)). And so on; the overall runtime will be:
O(log(m1)) + O(log(m2)) + ... + O(log(mk)) <= O(m1) + O(m2) + ... + O(mk) <= O(n)
Because the total sum of the cycle lengths is less than or equal to the number of elements, n.
4. Return the lcm which was calculated in the previous step.
Code:
Permutation class:
public static class Permutation {
private final int[] permutation;
private void validateLegalPermutation(int[] permutation){
if (permutation==null){
throw new NullPointerException("permutation is null");
}
int res = 0;
for (int i=0;i<permutation.length;i++){
res ^= permutation[i]^(i);
}
if (res!=0){
throw new IllegalArgumentException("illegal permutation, may include duplicate elements");
}
}
public Permutation(int[] permutation){
validateLegalPermutation(permutation);
this.permutation = permutation;
}
private static int gcd(int a, int b){
if ((a<=1) || (b<=1)){return 1;}
int max = Math.max(a, b);
int min = Math.min(a, b);
while ((min>1) && (max>min)){
max = max - min;
int tmp = Math.max(max, min);
min = Math.min(max, min);
max = tmp;
}
return min;
}
private static int lcm(int a, int b){
return ((a*b)/gcd(a,b));
}
private int findCycleLength(int index, boolean[] seen){
if ((index<0) || (index>=permutation.length)){
throw new ArrayIndexOutOfBoundsException("illegal starting index");
}
if ((seen==null) || (seen.length!=permutation.length)){
throw new IllegalArgumentException("seen array either null or not in the right size");
}
int slow = index;
int fast = index;
int length = 0;
do {
seen[slow]=true;
slow = permutation[slow];
fast = permutation[permutation[fast]];
length++;
} while (slow!=fast);
return length;
}
Helper Methods:
private Stack<Integer> getCycleLengths(){
boolean[] seen = new boolean[permutation.length];
for (int i=0;i<seen.length;i++){seen[i]=false;}
Stack<Integer> cycles = new Stack<Integer>();
for (int i=0;i<permutation.length;i++){
if (!seen[i]){
int cLength = findCycleLength(i,seen);
if (cLength>1){
cycles.push(cLength);
}
}
}
return cycles;
}
private int getCyclesLCM(Stack<Integer> cycles){
if (cycles==null){return 1;}
int lcm = 1;
while (!cycles.isEmpty()){
int cur = cycles.pop();
lcm = Permutation.lcm(lcm, cur);
}
return lcm;
}
public int getOrder(){
return getCyclesLCM(getCycleLengths());
}
}
private static int[] performIteration(Queue<Integer> deck){
if ((deck==null) || (deck.isEmpty())){return null;}
Stack<Integer> table = new Stack<Integer>();
while (!deck.isEmpty()){
table.push(deck.poll());
if (!deck.isEmpty()){deck.offer(deck.poll());}
}
int[] p = new int[table.size()];
for (int i=0;i<p.length;i++){p[i] = table.pop();}
return p;
}
The main method to calculate the number of iterations:
public static int getNumberOfIterations(int n){
if (n<=1){return 0;}
Queue<Integer> deck = new LinkedList<Integer>();
for (int i=0;i<n;i++){deck.offer(i);}
Permutation initialPerm = new Permutation(performIteration(deck));
return initialPerm.getOrder();
}
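As a sanity check for the examples above, the trick can also be simulated directly until the deck returns to sorted order. This is much slower than the permutation-order solution (O(n * order) instead of O(n)) but useful for validating it on small n. The class and method names here are mine, not part of the code above:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public final class DeckSimulation {
    // Direct simulation: repeat the card trick until the deck returns to
    // 0..n-1, counting iterations. Cross-checks the cycle/lcm solution.
    public static int countIterations(int n) {
        if (n <= 1) { return 0; }
        List<Integer> deck = new ArrayList<>();
        for (int i = 0; i < n; i++) { deck.add(i); }
        int iterations = 0;
        while (true) {
            deck = performIteration(deck);
            iterations++;
            boolean sorted = true;
            for (int i = 0; i < n; i++) { sorted &= (deck.get(i) == i); }
            if (sorted) { return iterations; }
        }
    }

    // One iteration: put the top card on the table, move the next card to the
    // bottom of the deck, repeat until the deck is empty.
    private static List<Integer> performIteration(List<Integer> deck) {
        Deque<Integer> queue = new ArrayDeque<>(deck);
        List<Integer> table = new ArrayList<>();
        while (!queue.isEmpty()) {
            table.add(queue.poll());
            if (!queue.isEmpty()) { queue.offer(queue.poll()); }
        }
        // The performIteration above pops the table stack into the new deck,
        // so reverse the pile order here to match.
        List<Integer> next = new ArrayList<>();
        for (int i = table.size() - 1; i >= 0; i--) { next.add(table.get(i)); }
        return next;
    }
}
```

For the worked examples, this returns 2 for n=4 and 3 for n=3, matching the permutation orders computed above.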

IvgenyNovo
March 11, 2014 If I understand the problem correctly, this is a variant of the 0-1 Knapsack problem where the goal is to find the maximum value with the minimum number of items. In other words, out of all possible combinations which result in the maximum value (in our case the maximum value is the sum), we want to find the one with the minimum number of items.
In the original 0-1 Knapsack problem we define an array dp[number_of_coins+1][sum+1] where each array element dp[i][j] represents the maximum sum which can be achieved using the first i coins with capacity j (without repetitions):
dp[i][j] = (j>=coin_value[i-1]) ? Max(dp[i-1][j-coin_value[i-1]] + coin_value[i-1], dp[i-1][j]) : dp[i-1][j]
In order to calculate the minimum number of coins for the maximum sum, we'll store both the sum and the corresponding number of coins for each dp[i][j].
Now, whenever dp[i-1][j-coin_value[i-1]].sum() + coin_value[i-1] == dp[i-1][j].sum() we'll choose the value for dp[i][j] according to the minimum number of coins:
1. if dp[i-1][j-coin_value[i-1]].coins() + 1 < dp[i-1][j].coins() then
dp[i][j].coins = dp[i-1][j-coin_value[i-1]].coins() + 1
2. Otherwise, dp[i][j].coins = dp[i-1][j].coins
3. For both (1) and (2), dp[i][j].sum = dp[i-1][j-coin_value[i-1]].sum() + coin_value[i-1] (it can also be dp[i-1][j].sum() because both values are equal).
Basically, every time we need to decide between two equal maximum sums (one that includes the current coin and that doesn't) we choose the one which can be achieved with less coins according to dp[i][j].coins() value.
Here's an implementation of this idea (admittedly, I haven't tested it thoroughly):
private static class SumCoins {
private int sum;
private int coins;
public SumCoins(int sum, int coins){
this.sum = sum;
this.coins = coins;
}
public int getSum(){return sum;}
public int getCoins(){return coins;}
public void setSum(int sum){this.sum = sum;}
public void setCoins(int coins){this.coins = coins;}
}
public static int findMinimumCoins(int[] coins, int sum){
if ((coins==null) || (sum<=0)){return 0;}
SumCoins[][] dp = new SumCoins[coins.length+1][sum+1];
for (int i=0;i<dp.length;i++){
for (int j=0;j<dp[i].length;j++){
dp[i][j] = new SumCoins(0,0);
}
}
for (int i=1;i<dp.length;i++){
for (int j=1;j<dp[i].length;j++){
if (j>=coins[i-1]){
if ((dp[i-1][j-coins[i-1]].getSum()+coins[i-1] > dp[i-1][j].getSum()) ||
((dp[i-1][j-coins[i-1]].getSum()+coins[i-1] == dp[i-1][j].getSum()) &&
(dp[i-1][j-coins[i-1]].getCoins()+1 < dp[i-1][j].getCoins()))){
dp[i][j].setCoins(dp[i-1][j-coins[i-1]].getCoins()+1);
dp[i][j].setSum(dp[i-1][j-coins[i-1]].getSum()+coins[i-1]);
}
else {
dp[i][j].setCoins(dp[i-1][j].getCoins());
dp[i][j].setSum(dp[i-1][j].getSum());
}
}
else{
// coin i does not fit in capacity j, carry over the best result without it
dp[i][j].setCoins(dp[i-1][j].getCoins());
dp[i][j].setSum(dp[i-1][j].getSum());
}
}
}
}
}
return dp[coins.length][sum].getCoins();
}
Complexity: O(n*s) runtime and space complexity.
Note: If the sum cannot be reached using the given coins then the returned value in the code above would be the minimum number of coins for the closest value to sum which can be achieved using the input coins. Using dp[coins.length][sum].getSum() we can determine that value and decide what to do in this case.
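To give the recurrence a runnable illustration, here is a compact single-array sketch of the same idea: a 1-D 0-1 knapsack (iterating capacities downward so each coin is used at most once) with the same (sum, coins) tie-breaking. The names are mine and this is a condensed variant, not the exact table-based code above:

```java
public final class MinCoinsSketch {
    // For each capacity j, track the maximum reachable sum (bestSum[j]) and
    // the minimum number of coins achieving that maximum (bestCoins[j]).
    public static int minCoinsForMaxSum(int[] coins, int target) {
        int[] bestSum = new int[target + 1];
        int[] bestCoins = new int[target + 1];
        for (int c : coins) {
            for (int j = target; j >= c; j--) {  // descending j: each coin used at most once
                int candSum = bestSum[j - c] + c;
                int candCoins = bestCoins[j - c] + 1;
                // Prefer a strictly larger sum; on equal sums, prefer fewer coins.
                if (candSum > bestSum[j]
                        || (candSum == bestSum[j] && candCoins < bestCoins[j])) {
                    bestSum[j] = candSum;
                    bestCoins[j] = candCoins;
                }
            }
        }
        return bestCoins[target];
    }
}
```

For example, with coins {2,3,5} and target 7 the maximum reachable sum is 7 (2+5) using 2 coins, so the method returns 2; as with the code above, if the target is unreachable the result describes the closest reachable sum instead.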
Here's an idea:
1. Define an array of 256 counters for the needle characters. Count the number of occurrences for each character in needle.
2. Define the same array as in (1) for the first needle.length() characters in haystack.
3. Compare between the arrays, if all the counters are equal return true. Otherwise:
4. Iterate from i=needle.length() to haystack.length() and in each iteration:
4.1. Increase the occurrence of haystack.charAt(i) by 1 in the array defined in (2).
4.2. Decrease the occurrence of haystack.charAt(i-needle.length()) by 1 in the array defined in (2).
4.3. Compare the two arrays just like in step (3), if they are equal return true, otherwise continue to the next iteration.
5. If we finished the loop without finding a match then we should return false because there is no needle anagram substring in haystack.
Basically, we check whether each window of needle.length() consecutive characters in haystack is an anagram of needle by comparing the number of appearances of every character.
Complexity: Assuming the number of characters is constant (256), the runtime complexity is O(n) where n = haystack.length(). Space complexity is O(1) (fixed-size arrays).
private static final int NUM_OF_CHARS = 256;
private static int[] buildFreqArray(String s, int len){
if ((s==null) || (len<0)){
throw new IllegalArgumentException("input string must not be null, len must be non-negative");
}
int[] freq = new int[NUM_OF_CHARS];
for (int i=0;i<freq.length;i++){freq[i]=0;}
for (int i=0;(i<s.length()) && (i<len);i++){freq[s.charAt(i)]++;}
return freq;
}
private static boolean areFreqEqual(int[] freq1, int[] freq2){
if ((freq1==null) || (freq2==null)){
throw new NullPointerException("freq1 or freq2 are null");
}
if (freq1.length!=freq2.length){return false;}
boolean b = true;
for (int i=0;(i<freq1.length) && (b);i++){
b = (freq1[i]==freq2[i]);
}
return b;
}
public static boolean anaStrStr(String needle, String haystack){
if ((needle==null) || (haystack==null)){
throw new NullPointerException("needle or haystack are null");
}
if (needle.length()>haystack.length()){return false;}
int[] needleFreq = buildFreqArray(needle,needle.length());
int[] haystackFreq = buildFreqArray(haystack,needle.length());
if (areFreqEqual(needleFreq,haystackFreq)){return true;}
for (int i=needle.length();i<haystack.length();i++){
haystackFreq[haystack.charAt(i)]++;
haystackFreq[haystack.charAt(i-needle.length())]--;
if (areFreqEqual(needleFreq,haystackFreq)){return true;}
}
return false;
}

IvgenyNovo
February 19, 2014 Here's an idea:
Initialize m to n and initialize an empty integer stack: mul.
Iterate from 9 to 1 and do the following:
while i divides m (i is the current iteration value), push i onto the mul stack and divide m by i.
Stop the loop above once the number of elements in the stack is equal to 3. Notice that the mul stack will contain digits in increasing order (from the head) whose product is either n or a smaller number that divides n. The product will be equal to n if and only if the variable m is equal to 1. Notice that if m is greater than 1 it means that there is no 3-digit number whose digit product is equal to n (every such number would need more than 3 digits).
Finally, if m is 1 then we can just return the number which is constructed from the digits in the mul stack. Another thing to note is that the case for 0 should be handled separately (return 100).
Complexity: Because we push 3 elements at most to the stack and then stop, the runtime complexity is O(1) worst case. Space complexity is O(1) as well.
private static int power(int base, int n){
if (n<=0){return 1;}
int res = base;
for (int i=1;i<n;i++){res*=base;}
return res;
}
public static int findSmallestNumber(int n, int digits){
if (n<0){throw new IllegalArgumentException("n cannot be negative");}
if (digits<1){throw new IllegalArgumentException("digits must be positive");}
if (n==0){return power(10,digits-1);}
Stack<Integer> mul = new Stack<Integer>();
int m = n;
for (int i=9;(i>=1) && (mul.size()<digits);i--){
while ((m % i == 0) && (mul.size()<digits)){
mul.push(i);
m /= i;
}
}
int res = 0;
while (!mul.isEmpty()){res = 10*res + mul.pop();}
return (m>1) ? -1 : res;
}
In the code above, digits specifies the number of desired digits in the result (for the original problem: digits = 3).
IvgenyNovo, February 19, 2014: Are you sure that example is correct? 531+23 = 0, not 1.
Anyway, one possible solution is to maintain a HashSet of partial sums. Partial sum is defined as follows:
partial_sum[0] = 0
partial_sum[i] = input[0] + input[1] + ... + input[i-1]
Notice that in order to find a subarray whose elements sum to a given number k, it would suffice to find 0<=i<j<=input.length such that:
k = partial_sum[j] - partial_sum[i] = input[i] + input[i+1] + ... + input[j-1]
This is equivalent to finding an index i<j given some j which satisfies:
partial_sum[i] = partial_sum[j] - k
Using this observation, iterate over the input array and maintain the following:
1. The current partial sum (partial_sum[j+1], where j is the current index)
2. A HashSet of the previous partial sums: hashset = {partial_sum[0], partial_sum[1],...,partial_sum[i]}. At the end of each iteration we add the current partial sum to the hashset.
During each iteration, check whether the hashset contains partial_sum[j+1] - k (where j is the current index). If it does then all that's left is to find an appropriate index i such that:
partial_sum[i] = partial_sum[j+1] - k
and return the subarray from i to j.
Notice the found i will satisfy i<=j because of (2) above.
Complexity: O(n) average runtime complexity with O(n) space complexity.
private static int getIndexForSum(int[] arr, int sum){
if (arr==null){
throw new NullPointerException("arr cannot be null");
}
int i=0;
int curSum=0;
for (;(curSum!=sum) && (i<arr.length);i++){curSum+=arr[i];}
return i;
}
public static int[] findSubSet(int[] arr, int k){
if (arr==null){
throw new NullPointerException("arr cannot be null");
}
Set<Integer> sumsSet = new HashSet<Integer>();
int sum = 0;
sumsSet.add(sum);
int start = -1;
int end = -1;
for (int i=0;i<arr.length;i++){
sum+=arr[i];
if (sumsSet.contains(sum-k)){
start = getIndexForSum(arr,sum-k);
end = i;
break;
}
sumsSet.add(sum);
}
return (end!=-1) ? Arrays.copyOfRange(arr, start, end+1) : null;
}
In the code above, k is the desired subarray sum (in the original problem it should be 1).
Not sure if it's the best solution though, maybe someone can offer a worst case linear algorithm.
1. Create an array of pairs with the form (value,index) where value = input_array[index] for every index in the input array.
2. Sort the pairs array according to "value" using Radix Sort.
3. Iterate over the sorted pairs array and find the minimum index of unique elements (because the pairs are sorted according to "value", the final step can be done in a single pass).
The assumption that the array values are 32 bit numbers means that Radix Sort is done with O(n) worst case complexity. Space complexity is also O(n).
Code: snipt.org/Gghgd5
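Since the code is only linked, here is a self-contained sketch of steps 1-3. For brevity it substitutes Arrays.sort (O(n log n)) for the Radix Sort named in step 2 (which is what brings the cost down to O(n) for 32-bit keys), and packs each (value, index) pair into a long; the sign bit of the value is flipped so the packed longs sort by value correctly even when values are negative. All names are mine:

```java
import java.util.Arrays;

public final class FirstUnique {
    // Returns the smallest original index holding a value that appears exactly
    // once in arr, or -1 if every value repeats.
    public static int firstUniqueIndex(int[] arr) {
        int n = arr.length;
        long[] pairs = new long[n];  // value in the high 32 bits, index in the low 32
        for (int i = 0; i < n; i++) {
            pairs[i] = ((long) (arr[i] ^ Integer.MIN_VALUE) << 32) | i;
        }
        Arrays.sort(pairs);          // groups equal values together
        int best = -1;
        for (int i = 0; i < n; ) {
            int j = i;
            while (j < n && (pairs[j] >>> 32) == (pairs[i] >>> 32)) { j++; }
            if (j - i == 1) {        // this value occurs exactly once
                int idx = (int) pairs[i];
                if (best == -1 || idx < best) { best = idx; }
            }
            i = j;                   // skip past the whole group of equal values
        }
        return best;
    }
}
```

For the example input {1,2,3,1,5,4} this returns index 1, whose value is the unique element 2.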
glebstepanov1992, in my code I count from 0 to n-1 instead of from 1 to n. So the input for the example above should be {1,2,3,1,5,4} which yields the output 2 as desired.
Anyway, Julian offered a much more elegant solution below whose correctness is based on the following claims:
1. Let C be a (weakly) connected component in a directed graph. Suppose the incoming degree of each vertex in C is exactly 1. Then there exists some vertex r in C from which every other vertex v in C is reachable (in the directed graph). In other words, r is a root of C. This can be proved using the correctness of the algorithm for finding a root in a directed graph.
2. Let C be a (weakly) connected component in a directed graph. Suppose the incoming degree of every vertex in C, except some vertex r, is exactly 1 while r's incoming degree is 0. Then r is a root of C in the directed graph. This can be proved by simply determining the orientation of each edge in the undirected path from r to every other vertex in C (using the fact that the incoming degree is exactly 1 for every other vertex).
Removing the outgoing edge from the vertex 1 means that the vertex 1 has an incoming degree of 0 in the edge reversed graph while all the other vertices have an incoming degree of exactly 1. The claims above imply that by finding the number of (weakly) connected components in the new directed graph we'll be able to determine the minimum amount of roots in a dfs spanning forest where 1 is a root as well. So the answer would be the number of connected components minus 1 (for the component of the vertex 1) as Julian noted.
I think this problem can be reduced to finding a spanning dfs forest with minimum number of trees.
1. First, we'll create a new graph by reversing all the edges in original input graph, this graph will simply be denoted by G. Notice that all the vertices reachable from 1 in G are "good vertices".
2. We'll build a stack of the roots of the dfs forest which is the result of running dfs on G starting from the vertex 1 and when you need to select a new root, select the minimum possible (notice that if all the vertices are reachable from 1 then instead of a forest we have a single tree). Whenever we encounter a new root throughout the dfs run we'll just push it to the stack.
3. We'll run dfs again on G but this time, instead of starting from 1, we'll use the roots stack from (2) whenever we need a new root to traverse from (roots that were already encountered throughout the dfs run will just be popped and ignored). Throughout this dfs run we'll yet again build a root stack of the roots in the dfs forest (for this dfs run).
4. The resulting root stack from step (3) should contain the minimum number of trees in the dfs forest for the graph G (considering the special structure of G: exactly 1 incoming edge to each vertex). The result is the number of roots in the stack from (3) minus 1 if 1 is a root as well.
Complexity: Two dfs runs, i.e. O(n) (the number of vertices and edges is O(n)).
Code: snipt.org/BihG4
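The linked code performs the two DFS passes described above. As an alternative sketch of the same count (weakly connected components of the graph with vertex 1's outgoing edge removed, minus 1), union-find over the undirected edges gives the same number, since edge direction does not matter for weak connectivity. The 0-based next[] encoding (vertex 0 is the special vertex, matching my 0-based comment above) and all names are assumptions of mine:

```java
public final class GoodVertices {
    // next[i] is the single outgoing edge of vertex i (vertices 0..n-1; the
    // special vertex is 0). Dropping 0's outgoing edge and counting weakly
    // connected components gives: minimum edges to add = components - 1.
    public static int minEdgesToAdd(int[] next) {
        int n = next.length;
        int[] parent = new int[n];
        for (int i = 0; i < n; i++) { parent[i] = i; }
        for (int i = 1; i < n; i++) {   // skip vertex 0's outgoing edge
            union(parent, i, next[i]);
        }
        int components = 0;
        for (int i = 0; i < n; i++) {
            if (find(parent, i) == i) { components++; }
        }
        return components - 1;
    }
    private static int find(int[] p, int x) {
        while (p[x] != x) { p[x] = p[p[x]]; x = p[x]; }  // path halving
        return x;
    }
    private static void union(int[] p, int a, int b) {
        p[find(p, a)] = find(p, b);
    }
}
```

For the example {1,2,3,1,5,4}: dropping 0's edge leaves components {0}, {1,2,3}, {4,5}, i.e. 3 components, so the answer is 2, as stated above.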
The garbage collector will remove the next node.
IvgenyNovo, January 28, 2014: Good solution.
I think it's possible to use a Stack instead of a priority queue because the numbers are added in a non increasing order, hence there's no point for the data structure itself to maintain the order of its elements.
public static int getK(int n){
if (n<1){return -1;}
if (n==1){return 1;}
Stack<Integer> factors = new Stack<Integer>();
int rem = n;
for (int d=9;d>=2;d--){
while (rem % d == 0){
factors.push(d);
rem/=d;
}
}
int k = 0;
while (!factors.isEmpty()){k = 10*k + factors.pop();}
return (rem==1) ? k : (-1);
}

IvgenyNovo
January 28, 2014 A recursive implementation, call reverseList(head,null) to reverse the list:
public static<T> ListNode<T> reverseList(ListNode<T> head, ListNode<T> prev){
if (head==null){return prev;}
ListNode<T> temp = head.next();
head.setNext(prev);
// the head became the previous element and temp became the new head
return reverseList(temp,head);
}

IvgenyNovo
January 28, 2014 By knowing that the node to be removed is in the middle of the linked list we can deduce that there is another node after the current node (which we want to remove). So instead of removing the current node, we'll just set its data to the data of the next node and remove the next node (by setting the next of the current node to the next of the next node).
public static <T> void removeMiddleNode(ListNode<T> node){
if ((node==null) || (node.next()==null)){return;}
ListNode<T> nextNode = node.next();
node.setData(nextNode.data());
node.setNext(nextNode.next());
}
Complexity: O(1) runtime.
IvgenyNovo, January 28, 2014: My approach would be to count appearances of each digit according to the number of digits in the range instead of the range itself; in other words I'll count the number of appearances of each digit 0-9 as the lsb, the 2nd digit from the lsb, the 3rd digit from the lsb and so forth. This would result in an O(logn) algorithm.
Given n and some digit 0<=d<=9, how do we count the number of times the digit d appears in n? First, we'll need to divide it to two cases: d=0 and 1<=d<=9.
1. 1<=d<=9. We want to count how many times d appears in the range 0...n as the ith digit (from the lsb). Let k = 10^(i+1). How many times does d appear as the ith digit in the range 0...k? It's easy to see that it appears exactly k/10 times (For example: How many times does the digit 1 appear as the second digit in the range 0...100? It appears exactly 10 times for the numbers 10,11,12,...,19). The same applies to the range k+1,...,2k and so on.
Hence, (n/k)*(k/10) is the number of appearances (of d as the ith digit) in the range 0...floor(n/k)*k.
We're not done yet though because there may be a remainder from the division of n by k which we didn't handle. Let us look at the remainder 0 <= n % k < k. If (n % k) / (k/10) is greater than d that means the remainder (n % k) > d*(k/10) which means that the digit d appears as the ith digit in the remainder k/10 times as well (For example: for the number n=120 and d=1, the remainder n % 100 = 20. Then, 20/10 = 2 > 1 which means that there are 10 more appearances of 1 as the ith digit in the remainder: 10,11,12,...,19).
If (n % k)/(k/10)<d then there are no appearances of d as the ith digit in the remainder.
All that's left is to handle the case where (n % k)/(k/10)==d. In this case, the number of appearances would be (n % k) % (k/10) + 1 (1 is added because the count starts from 0). For example: n=115 and d=1: (115 % 100) / (100/10) == 1 and (115 % 100) % 10 +1 = 5+1=6 which accounts for the appearance of d=1 as the 2nd digit in the numbers 110 (10 in the remainder), 111 (11), ... , 115 (15).
These observations allow us to calculate the number of appearances of 1<=d<=9 as the ith digit in the range 0...n with O(1) runtime complexity.
2. d=0. This case is slightly trickier because 0 cannot appear as the msb (we cannot count 10 appearances of 0 as the 2nd digit in the range 0...10). Still, we'll try to solve it with a similar but slightly modified approach. Let us look again at (n/k), where k is the same as in (1). The number of appearances of 0 as the ith digit in the range 0...k-1 is 0 because all the numbers in that range have i digits or less. On the other hand, in the range k...2k-1 it appears k/10 times (for instance, d=0 appears 0 times as the 2nd digit in the range 0...99 but it appears 10 times as the 2nd digit in the range 100...199 and another 10 times in the range 200...299). So the number of appearances of d=0 as the ith digit in the range 0...floor(n/k)*k is Math.max((n/k)-1, 0)*(k/10).
As before, we're not quite done yet because n/k might have a remainder. But the remainder n % k has at most i digits, when should we count d=0 as msb considering that remainder? Using the same idea as in (1), we'll count it relative to k/10. If n % k >= k/10 that means we need to count k/10 appearances in the remainder (For instance: n=110, n % 100 = 10 and 10 >= 100/10 which means we have 10 appearances in the remainder: 100(00), 101(01),...,109(09)). If n % k < k/10 then the number of appearances is (n % (k/10)) + 1.
The resulting (slightly confusing) code is:
private static int countForDigit(int n, int d){
if ((n<0) || (d<0) || (d>9)){return 0;}
if ((n==0) && (d==0)){return 1;}
int res = 0;
for (int k=10;((k==10) && ((n/k)>0)) || (n/(k/10)>0);k*=10){
if (k==10){res+=(n/k) + ((n % k >= d) ? 1 : 0);}
else {
if (d>0){
res+=(n/k)*(k/10) + ((((n % k)/(k/10))) > d ? k/10 : (((n % k)/(k/10) == d) ? ((n % k) % (k/10))+1 : 0));
}
else{
res+=(Math.max((n/k)-1, 0))*(k/10) + Math.min((n/k),1)*((n % k >= (k/10)) ? (k/10) : (n % (k/10)) + 1);
}
}
}
return res;
}
public static int[] getAllDigitsCount(int n){
if (n<0){return null;}
int[] count = new int[10];
for (int d=0;d<count.length;d++){count[d]=countForDigit(n,d);}
return count;
}
Complexity: 10*O(logn) = O(logn) runtime complexity.
IvgenyNovo, January 28, 2014: One possible solution is the sum solution which was already suggested. The problem with this solution is that for a general n (n=100 in the original problem), calculating the sum may cause overflow (for example n > sqrt(Integer.MAX_VALUE)).
Another solution is to use the xor operator. Recall that xor is commutative, associative and also satisfies the following:
1. a xor a = 0
2. 0 xor a = a
Using these observations, it's easy to see the following:
missing_element = 1 xor 2 xor ... xor n xor arr[0] xor arr[1] xor ... xor arr[n-2] (the array has n-1 elements).
public static int findMissing(int[] arr) throws NullPointerException {
if (arr==null){throw new NullPointerException();}
int res = 0;
for (int i=0;i<arr.length;i++){res^=arr[i]^(i+1);}
res^=(arr.length+1);
return res;
}

IvgenyNovo
January 26, 2014 Anonymous, the output of the linear algorithm for the two cases you suggested:
Input:14114111, m=2
Output: 11411411
Input: 14111411, m=2
Output: 11411411
In both cases the output is right.
The linear algorithm uses the same idea as kkr.ashish suggested only instead of iterating through the array and swapping elements, it just stores consecutive identical elements in the form of (element,count) in a stack according to the order of their appearance.
Then it pops elements from the stack and builds a new stack whose elements are yet again of the form (element,count) but the count is not greater than m for all the elements except maybe the element at the top (that's the equivalent of iterating right to left in kkr.ashish's algorithm).
After that we do the same once again with the new stack (the equivalent of iterating left to right in kkr.ashish's algorithm). If the element at the top of the resulting stack has a count greater than m then it's not possible to swap elements correctly. Otherwise, all that's left is to use the order and the count of the elements in the stack to construct the output array.
Run example (stack head is the rightmost element):
Input: {1,4,1,1,1,4,1,1} and m=2
1. Stack1 = {(1,1),(4,1),(1,3),(4,1),(1,2)}
2. Split elements in Stack1:
2.1. pop (1,2) from Stack1
2.2. The count in (1,2) is 2<=m so we push it to Stack2 (Stack2={(1,2)}
2.3. pop (4,1) from Stack1
2.4 The count in (4,1) is 1<=m so we push it to Stack2 (Stack2={(1,2),(4,1)})
2.5. pop (1,3) from Stack1
2.6. The count in (1,3) is 3>m so we need to split it: we push (1,2) to Stack2 (Stack2={(1,2),(4,1),(1,2)}) and update the count in (1,3) to (1,1) because we removed two elements from it. Then we need to find an element other than 1 to insert to Stack2, and with the way we built Stack1 that element is at its head: (4,1). Hence we push (4,1) to Stack2 (Stack2={(1,2),(4,1),(1,2),(4,1)}) and decrement the count of Stack1's head element (Stack1 = {(1,1),(4,0)}). Because the count at the head of Stack1 is 0 we pop this element. Because the new head of Stack1 holds the same element (1) as the element we are currently iterating over ((1,1)), we pop it and increase (1,1)'s count by 1 (because the head element we popped is (1,1)).
2.7. Our current element is (1,2=1+1) and Stack1 is empty so we push it to Stack2.
3. Stack2 = {(1,2),(4,1),(1,2),(4,1),(1,2)}. You can see that the count of all elements in Stack2 is already less than or equal to m=2, which means that the next step is not really necessary. In general, though, the last element in step 2.7 was pushed to Stack2 without checking whether its count is less than or equal to m, so it might be greater than m.
4. Perform step 2 on Stack2 and store the result in Stack3 (Stack3={(1,2),(4,1),(1,2),(4,1),(1,2)}).
5. The head of Stack3 includes an element whose count is not greater than m which means that a solution is feasible.
6. Fill the output array from Stack3 the same way you created Stack1 in step 1. For Stack3={(1,2),(4,1),(1,2),(4,1),(1,2)}, the output array is {1,1,4,1,1,4,1,1} as desired.
Hopefully it's clearer now.
That's a good idea.
To improve runtime complexity (at the expense of space complexity), we can create a stack which stores, for each run of consecutive appearances of an element, a pair (element, consecutive_appearances). For instance, if the input is 2,1,1,1,3,4,4,4,5 the stack would look like: {(2,1),(1,3),(3,1),(4,3),(5,1)}. This can be done in linear time.
The way the stack was built we know that it does not have any two consecutive elements whose integer values are the same. This means that when we extract an element from the stack the only thing we need to do in order to find an integer which differs from it is just look at the head of the stack.
private static class Occurence<T> {
private T element;
private int count;
public Occurence(T element, int count){
this.element = element;
this.count = count;
}
public boolean isPositiveCount(){return count>0;}
public void increment(){count++;}
public void decrement(){count--;}
public void reduceCount(int k){count-=k;}
public void increaseCount(int k){count+=k;}
public T getElement(){return element;}
public int getCount(){return count;}
}
private static <T> Stack<Occurence<T>> buildCountStack(T[] arr){
if (arr==null){return null;}
Stack<Occurence<T>> stack = new Stack<Occurence<T>>();
for (int i=0;i<arr.length;i++){
if ((!stack.isEmpty()) && (stack.peek().getElement().equals(arr[i]))){
stack.peek().increment();
}
else {
stack.push(new Occurence<T>(arr[i],1));
}
}
return stack;
}
private static <T> Stack<Occurence<T>> splitOccurences(Stack<Occurence<T>> stack, int m){
if ((stack==null) || (stack.isEmpty()) || (m<1)){return stack;}
Stack<Occurence<T>> res = new Stack<Occurence<T>>();
Occurence<T> cur = stack.pop();
while (!stack.isEmpty()){
if (cur.getCount()>m){
res.push(new Occurence<T>(cur.getElement(),m));
cur.reduceCount(m);
res.push(new Occurence<T>(stack.peek().getElement(),1));
stack.peek().decrement();
if (!stack.peek().isPositiveCount()){
stack.pop();
if ((!stack.isEmpty()) && (stack.peek().getElement().equals(cur.getElement()))){
cur.increaseCount(stack.pop().getCount());
}
}
}
else {
res.push(cur);
cur = stack.pop();
}
}
res.push(cur);
return res;
}
public static <T> boolean removeConsecutives(T[] arr,int m){
if (m<1){return false;}
Stack<Occurence<T>> stack = splitOccurences(splitOccurences(buildCountStack(arr),m),m);
if ((stack==null) || (stack.isEmpty()) || (stack.peek().getCount()>m)){return false;}
int i=0;
while ((!stack.isEmpty()) && (i<arr.length)){
Occurence<T> cur = stack.pop();
for (int j=0;(j<cur.getCount()) && (i<arr.length);j++){arr[i++]=cur.getElement();}
}
return true;
}
Complexity: O(n) worstcase runtime complexity and O(n) space complexity.
IvgenyNovo, January 26, 2014: It can be done with O(1) space complexity using the first row and column to mark whether an entire row/column should be nullified. We'll also keep two additional boolean values to determine whether the first row/column should be nullified as well (this is necessary because otherwise both of them will be nullified even if just one of them contains a 0).
public static void nullify(int[][] arr){
if ((arr==null) || (arr.length==0)){return;}
boolean nullifyFirstRow = false;
boolean nullifyFirstColumn = false;
for (int i=0;i<arr.length;i++){
if (arr[i]==null){return;}
for (int j=0;j<arr[i].length;j++){
if (arr[i][j]==0){
arr[i][0]=0;arr[0][j]=0;
if (i==0){nullifyFirstRow=true;}
if (j==0){nullifyFirstColumn=true;}
}
}
}
for (int i=1;i<arr.length;i++){
for (int j=1;j<arr[i].length;j++){
arr[i][j] = ((arr[i][0]==0) || (arr[0][j]==0)) ? 0 : arr[i][j];
}
}
for (int i=0;i<arr.length;i++){arr[i][0] = (nullifyFirstColumn) ? 0 : arr[i][0];}
for (int j=0;j<arr[0].length;j++){arr[0][j] = (nullifyFirstRow) ? 0 : arr[0][j];}
}
Complexity: O(mn) runtime complexity and O(1) space complexity.
IvgenyNovo, January 26, 2014: Why would you need to know which interval each coordinate corresponds to? It suffices to know whether it is a starting coordinate (to increase count) or an ending one (to decrease count).
When count > maxCount both maxStart and maxCount are updated (maxCount to count and maxStart to the current coordinate). The situation where count > maxCount can only occur when we encounter an interval starting coordinate so it will always hold a starting coordinate of some interval.
Maybe I have not understood the question correctly, the algorithm I described above is for finding an interval which overlaps a maximum amount of intervals from the input. For example: if the input intervals are [1,10],[3,6],[5,8] then the output would be [5,6].
If the requirement is to return an interval from the input set which intersects the most set intervals (which, in the example above is [1,10]) then it can be done using Interval Trees (see Wikipedia).
Let S = {[a1,b1],[a2,b2],...,[an,bn]} be the set of intervals.
1. Create the following array:
array = {(a1,start),(b1,end),(a2,start),(b2,end),...,(an,start),(bn,end)} (the first coordinate is a number and the second coordinate is a label).
2. Sort the array according to lexicographic order where start<end (the first coordinate is a number).
3. Initialize count=0 (count will mark the current number of overlapping intervals).
4. Initialize maxCount=0 (maxCount will mark the maximum number of overlapping intervals).
5. Define maxStart (maxStart will mark the start of the interval with maximum overlapping intervals).
6. Initialize maxInterval = null (maxInterval will mark the interval with maximum overlapping intervals).
7. Iterate over the sorted array (beginning to end):
7.1. Increment count whenever you encounter a value whose label is start (x,start).
7.2. Decrement count whenever you encounter a value whose label is end (x,end).
7.3. If count>=maxCount then set maxStart to the current number value of the array element (the x in (x,label)) and set maxCount=count.
7.4. If the current array element is labeled end and, after updating, count equals maxCount-1, it means that the current array element marks the end of the maximum overlapping interval. So we set maxInterval to the interval (maxStart, current_array_element.x).
8. return maxInterval.
Complexity: O(nlogn) worstcase runtime complexity where n is the number of intervals. Space complexity is O(n) for the additional array.
Java Implementation: snipt.org/Bfjaf0
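Since the implementation is only linked, here is a compact sketch of steps 1-8. Events are packed into longs (coordinate in the high bits, start=0 / end=1 in the low bit) so that a plain sort yields the lexicographic order of step 2 with start before end at equal coordinates. Class and variable names are mine:

```java
import java.util.Arrays;

public final class MaxOverlap {
    // Sweep-line sketch of steps 1-8 above: returns {start, end} of the region
    // covered by the maximum number of input intervals (each interval is {a, b}).
    public static int[] maxOverlapInterval(int[][] intervals) {
        int n = intervals.length;
        long[] events = new long[2 * n];
        for (int i = 0; i < n; i++) {
            events[2 * i]     = ((long) intervals[i][0] << 1);      // start: label 0
            events[2 * i + 1] = ((long) intervals[i][1] << 1) | 1;  // end:   label 1
        }
        Arrays.sort(events);  // start sorts before end at equal coordinates
        int count = 0, maxCount = 0, maxStart = 0;
        int[] best = null;
        for (long e : events) {
            int x = (int) (e >> 1);
            if ((e & 1) == 0) {            // start event (step 7.1 and 7.3)
                count++;
                if (count >= maxCount) { maxCount = count; maxStart = x; }
            } else {                       // end event (step 7.2 and 7.4)
                if (count == maxCount) { best = new int[]{maxStart, x}; }
                count--;
            }
        }
        return best;
    }
}
```

For the example above, intervals [1,10],[3,6],[5,8] yield [5,6], the region overlapped by all three.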
The idea is right. You should add:
count = (count >= 1) ? count : 1;
before the if clause for it to work.
Not sure if I understand the problem correctly; here's what I gather: the game starts with two boxes containing different (nonzero) amounts of chocolates. At each turn a player takes all the chocolates from one of the boxes and divides the chocolates in the other box into two non-equal boxes (in terms of chocolate amount). If at some turn the player cannot select a box and divide the other box into two non-equal boxes, then this player loses.
The solution to the problem as I described above is as follows:
public static void determineWinner(int c1, int c2){
if ((c1<=0) || (c2<=0) || (c1==c2)){
System.out.println("Bad input arguments");
return;
}
if ((c1 % 3 == 0) || ((c1>3) && ((c1-2) % 3 == 0)) ||
(c2 % 3 == 0) || ((c2>3) && ((c2-2) % 3 == 0))){
System.out.println("Starting player wins.");
}
else {
System.out.println("Starting player loses.");
}
}
To see why this solution works, let us define the following:
N - the set of natural numbers (N = {1,2,3,...})
L = {1+3k | k in N} union {1,2}
W = N\L = {3k | k in N} union {3k+2 | k in N}
Claim: Given (c1,c2) in N^2 (the boxes) where c1!=c2 and a player x whose turn is the current turn, the following hold:
1. If c1 and c2 are in L then the second player has a winning strategy.
2. If c1 in W or c2 in W then player x has a winning strategy.
Proof: Using induction on n=max{c1,c2}.
Base: To cover the base we can check the following cases:
n=2: (c1,c2)=(1,2) or (2,1). In both cases, no matter which box player x takes, he cannot divide the other box into two non-equal boxes (1 cannot be divided at all, and 2 can only be divided into two equal boxes containing one chocolate each).
n=3: In this case at least one of the boxes contains 3 chocolates. Player x takes the other box for himself and divides the remaining 3 chocolates into two boxes: (1,2). Using the case of n=2 we deduce that the second player loses no matter what he does, and hence player x wins.
Induction step: Assume correctness for every 3<=k<n and let us prove correctness for n=max{c1,c2}. Let us consider the following cases:
1. n in W. There are two possibilities here:
1.1. n = 3m = 3*(m-1) + 3 = (3*(m-1) + 1) + 2. In this case player x takes the box which does not contain n chocolates. He then divides the remaining n chocolates into the boxes (3*(m-1)+1, 2); both counts are in L, so by the induction assumption the second player loses.
1.2. n = 3m+2 = (3m+1) + 1. In this case player x takes the box which does not contain n chocolates. He then divides the remaining n chocolates into the boxes (3m+1, 1), which by the induction assumption means that the second player loses.
2. n in L but the other box contains k>=1 chocolates with k in W. In this case we use the same proof as in (1) to show that player x has a winning strategy (just replace n with k).
3. n in L and the other box contains k>=1 chocolates with k in L as well. For the sake of contradiction, assume that there exist a,b in L with a != b such that n = a+b. By the definition of L:
3.1. a = 1 + 3m, b = 1 + 3l where m,l>=1. In this case n = a+b = 1+3m+1+3l=3(l+m)+2 in W which is a contradiction to the fact that n is in L.
3.2. a = 1 + 3m, b = 1. In this case n = a+b = 3m+2 in W and yet again we get a contradiction to the fact that n is in L.
3.3. a = 1 + 3m, b = 2. In this case n = a+b = 3m+3 = 3(m+1) in W which is also a contradiction to the fact that n is in L.
3.4. The other cases are similar and result in a contradiction as well (notice that the case of a=2 and b=2 is not possible).
3.1-3.4 show that it is not possible to divide n chocolates into two non-equal boxes containing a and b chocolates respectively with both a and b in L. The same applies to k (the number of chocolates in the other box). Combined with the induction assumption, this implies that no matter what player x does, the second player can always respond with a winning strategy. Thus, in this case the second player has a winning strategy.
This completes the proof.
An alternative (but expensive) approach can be to use dynamic programming in order to calculate the winner (basically, in each turn the player selects the best move for himself out of all possible moves).
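The dynamic-programming idea can double as a sanity check on the closed form above. Here is a minimal memoized brute-force sketch (class and method names are mine) that tries every move and compares the result against the mod-3 rule from determineWinner():

```java
import java.util.HashMap;
import java.util.Map;

public class ChocolateGame {
    static Map<Long, Boolean> memo = new HashMap<>();

    // True iff the player to move wins from boxes (c1, c2), c1 != c2, both >= 1.
    static boolean firstPlayerWins(int c1, int c2) {
        int lo = Math.min(c1, c2), hi = Math.max(c1, c2);
        long key = (long) lo * 1_000_000 + hi;
        Boolean cached = memo.get(key);
        if (cached != null) return cached;
        boolean win = false;
        // Take either box, then split the other one (n) into non-equal parts (a, n-a).
        for (int n : new int[]{lo, hi}) {
            for (int a = 1; !win && 2 * a < n; a++) {
                win = !firstPlayerWins(a, n - a);  // a move that leaves the opponent losing
            }
        }
        memo.put(key, win);
        return win;
    }

    // The closed form from the answer above: the starter wins iff c1 or c2 is in W.
    static boolean closedForm(int c1, int c2) { return inW(c1) || inW(c2); }
    static boolean inW(int c) { return c % 3 == 0 || (c > 3 && (c - 2) % 3 == 0); }

    public static void main(String[] args) {
        for (int c1 = 1; c1 <= 30; c1++)
            for (int c2 = 1; c2 <= 30; c2++)
                if (c1 != c2 && firstPlayerWins(c1, c2) != closedForm(c1, c2))
                    throw new AssertionError(c1 + "," + c2);
        System.out.println("closed form matches brute force up to 30 chocolates");
    }
}
```

This is of course much more expensive than the O(1) closed form, but it confirms the characterization on small inputs.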
This is actually an O(n) algorithm.
Consider the following case: n = 2^k and i = n-1. The first wand vanishes after k+1 tests while testing box 2^k. Then we use the second wand to test boxes 2^{k-1}+1 to 2^k - 1. The second wand would vanish after 2^{k-1} - 2 = n/2 - 2 tests. Overall, you would require k + n/2 - 2 = log n + n/2 - 2 = O(n) tests, not O(log n).
Raji has offered a good idea to achieve O(sqrt(n)) asymptotic number of tests. Another similar approach to achieve the same asymptotic number of tests but with less tests in practice can be described as follows:
Like in Raji's solution, we will fix some number k later. We will use the first magic wand to check the boxes in the following order: k, k+(k-1), k+(k-1)+(k-2), ...
In other words, we skip k-i boxes between test i-1 and test i. This also implies that if the first wand vanishes after i tests, we will have at most k-i tests left to do with the second wand in order to determine where the first empty box is. The total number of tests using this algorithm will be i + (k - i) = k.
Now, how should we select k? We would want to select the minimum possible k such that k + (k1) + (k2) + ... >= n. The desired k will be the solution for the following equation (The other cases would contradict k's minimality):
n = k + (k1) + (k2) + ... + 1 = k*(k+1)/2
Thus, because k is positive its value is:
k = ceiling((sqrt(1+8n) - 1)/2)
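A short sketch of this schedule (names are mine): chooseK computes k from the formula above, and firstWandProbes lists the boxes tested with the first wand, with the gaps shrinking by one each step.

```java
public class WandSchedule {
    // Smallest k with k*(k+1)/2 >= n, i.e. ceil((sqrt(1+8n) - 1) / 2).
    static int chooseK(long n) {
        return (int) Math.ceil((Math.sqrt(1 + 8.0 * n) - 1) / 2);
    }

    // Positions probed with the first wand: k, k+(k-1), k+(k-1)+(k-2), ...
    static long[] firstWandProbes(long n) {
        int k = chooseK(n);
        long[] probes = new long[k];
        long pos = 0;
        for (int i = 0; i < k; i++) {
            pos += k - i;                  // steps shrink by one each time
            probes[i] = Math.min(pos, n);  // clamp at the last box
        }
        return probes;
    }

    public static void main(String[] args) {
        System.out.println(chooseK(100));  // 14, since 14*15/2 = 105 >= 100 > 91 = 13*14/2
        System.out.println(java.util.Arrays.toString(firstWandProbes(10)));  // [4, 7, 9, 10]
    }
}
```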
First, consider the following problem (Maximizing Histogram rectangle area problem): Given an array of nonnegative integers which represents column heights in a column graph, what is the maximum area rectangle in the given graph?
For instance, suppose array = {1,3,5,2,4,1,3} (first column height is 1, second is 3,...). The maximum area rectangle's height in this case is 2 and its width would be 4 (corresponding to the indices 1,2,3,4 in array).
To solve this problem we'll maintain a stack of array indices with the following constraints:
1. The indices are in increasing order.
2. The column heights which correspond to the stack indices are in a nondecreasing order.
We'll iterate over the array and push indices onto the stack as long as conditions (1) and (2) hold. If we reach a point where we can't push an index without violating constraint (2), that means the height of the current column (column i) is smaller than that of the column whose index is at the top of the stack. In this case, we'll pop indices and compare rectangle areas (with the maximum so far) until we can finally push the current index onto the stack. When we pop an index, the rectangle's height is the popped column's height, height(popped_index), and its width runs from just after the new top of the stack (or from index 0 if the stack becomes empty) up to current_index - 1. Constraint (2) guarantees that every column in that span, including the ones we popped earlier, is at least as tall as the popped column, so the area we calculate is indeed that of a rectangle contained in the column graph.
Here is an implementation of this idea (the method maxHistRect()):
private static class AreaIndices {
public final int from;
public final int to;
public final int height;
public AreaIndices(int from, int to, int height){
this.from = from;
this.to = to;
this.height = height;
}
public int area(){return (to-from+1)*height;}
@Override
public String toString(){return "(" + from + "," + to + "," + height + ")";}
}
private static AreaIndices maxHistRect(int[] histogram){
if (histogram==null){return null;}
Stack<Integer> stack = new Stack<Integer>();
AreaIndices res = new AreaIndices(0,0,0);
int i=0;
while ((i<histogram.length) || (!stack.isEmpty())){
if ((stack.isEmpty()) || ((i<histogram.length) && (histogram[i]>=histogram[stack.peek()]))){stack.push(i++);}
else {
int cur = stack.pop();
// the popped column's rectangle extends left over previously popped (taller) columns, up to the new stack top
int from = stack.isEmpty() ? 0 : stack.peek()+1;
if (histogram[cur]*(i-from) >= res.area()){res = new AreaIndices(from,i-1,histogram[cur]);}
}
}
return res;
}
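As a quick sanity check of the scan above, here is a compact standalone variant (mine) that tracks only the maximum area, applied to the example array {1,3,5,2,4,1,3} from earlier:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class HistogramDemo {
    // Compact variant of maxHistRect() that returns only the maximum area.
    static int maxArea(int[] h) {
        Deque<Integer> stack = new ArrayDeque<>();  // indices, heights non-decreasing
        int best = 0, i = 0;
        while (i < h.length || !stack.isEmpty()) {
            if (stack.isEmpty() || (i < h.length && h[i] >= h[stack.peek()])) {
                stack.push(i++);
            } else {
                int cur = stack.pop();
                // width extends left past previously popped (taller) columns
                int from = stack.isEmpty() ? 0 : stack.peek() + 1;
                best = Math.max(best, h[cur] * (i - from));
            }
        }
        return best;
    }

    public static void main(String[] args) {
        // The example from the text: height 2, width 4 (indices 1..4), area 8.
        System.out.println(maxArea(new int[]{1, 3, 5, 2, 4, 1, 3})); // 8
    }
}
```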
Now that we know the solution to the Maximum Histogram Rectangle problem, we'll solve the original problem. Assuming the input matrix is m x n, we'll build an m x n array dp in the following way:
dp[i][j] = 0 if matrix[i][j] == 0
dp[i][j] = 1 + dp[i-1][j] if matrix[i][j] == 1 (I'll treat dp[-1][j] as 0)
The resulting matrix dp is a matrix where dp[i][j] represents the number of consecutive 1's (without a 0 in between) in column j which end in row i (inclusive). For instance, if dp[i][j] = 4 then it means that matrix[i-3][j] = matrix[i-2][j] = matrix[i-1][j] = matrix[i][j] = 1.
Here is the code that builds the matrix dp:
private static int[][] buildHistograms(int[][] arr){
if ((arr==null) || (arr.length==0)){return null;}
for (int i=0;i<arr.length;i++){
if ((arr[i]==null) || ((i>0) && (arr[i].length != arr[i-1].length))){return null;}
}
int m = arr.length,n = arr[0].length;
int[][] dp = new int[m][n];
for (int i=0;i<dp.length;i++){
for (int j=0;j<dp[i].length;j++){
dp[i][j] = (arr[i][j]==0) ? 0 : 1 + ((i>0) ? dp[i-1][j] : 0);
}
}
return dp;
}
After building the matrix dp, we can look at each row dp[i] in this matrix as a column graph (the height in this case is the number of consecutive 1's in each column ending in the row i). We already know how to find the maximum area rectangle in a column graph.
For instance, suppose we found that the maximum area rectangle for dp[i] has height h and corresponds to columns k,k+1,...,l. By the definition of dp we conclude that our matrix has the following rectangle: start_row = i-h+1, end_row = i, start_column = k, end_column = l.
All that's left is to find the maximum rectangle for every row dp[i] and to take the maximum between all of them (getMaxOnesRectange()):
public static class RectangleCoordinates {
public final int rowFrom;
public final int rowTo;
public final int colFrom;
public final int colTo;
public RectangleCoordinates(int rowFrom, int rowTo, int colFrom, int colTo){
this.rowFrom = rowFrom;
this.rowTo = rowTo;
this.colFrom = colFrom;
this.colTo = colTo;
}
@Override
public String toString(){
return "Rows: " + rowFrom + "-" + rowTo + ", Columns: " + colFrom + "-" + colTo;
}
}
public static RectangleCoordinates getMaxOnesRectange(int[][] arr){
if (arr==null){return null;}
int[][] dp = buildHistograms(arr);
if (dp==null){return null;}
AreaIndices max = null;
RectangleCoordinates res = null;
for (int i=0;i<dp.length;i++){
AreaIndices cur = maxHistRect(dp[i]);
if ((max==null) || (max.area()<cur.area())){
max = cur;
res = new RectangleCoordinates(i-cur.height+1,i,cur.from,cur.to);
}
}
return res;
}
This solution is O(m*n) in both runtime and space. It finds the maximum area rectangle consisting of 1's; it is easy to adapt it to find the maximum area rectangle consisting of 0's as well (just apply it to the matrix with every 0 switched to 1 and every 1 to 0).
It worked for the couple of tests I ran but I didn't test it too thoroughly.
Assumption: n is positive (a slightly modified similar approach should work for negative numbers as well).
IvgenyNovo, April 23, 2014
1. Initialize a digits array (a 10-element int array) which will hold counters for all the digits we encounter while iterating; all counters start at 0.
2. Start iterating from the last digit of the input number. In each iteration:
2.1. Increment the digit counter of the current digit.
2.2. Divide n by 10
2.3. Use the digits array to check whether there exists a digit that's strictly bigger than the current digit and that the list of all the digits we encountered so far (including the current digit) minus this digit (the one that's greater than the current digit) contains at least one even digit. Because the digits array only has 10 elements, this can be done in O(1).
2.4. If we found an appropriate digit in step 2.3 that means that we can now construct the desired output number:
2.4.1. Put the digit we found in 2.3 as the last digit in n (remember that n was already divided by 10) and reduce its counter in the digits array by 1.
2.4.2. Use the digits array to find the maximum even digit whose counter is positive (such digit should exist according to 2.3), denote it by lastDigit and reduce its counter by 1.
2.4.3. Using the digits array again put the remaining digits whose counters are positive as last digits of n in an increasing order.
2.4.4. Put lastDigit as the last digit of n and return the number.
3. If a number wasn't returned during step 2 that means that no appropriate number exists.
Example (n=8234961):
Iteration 1 - encountered digits: {1}. The current digit is 1 and we have yet to encounter a digit greater than 1. n = 823496
Iteration 2 - encountered digits: {6,1}. The current digit is 6 and again we have yet to encounter a digit greater than 6. n = 82349
Iteration 3 - encountered digits: {9,6,1}. The current digit is 9 and again we have yet to encounter a digit greater than 9. n = 8234
Iteration 4 - encountered digits: {4,9,6,1}. The current digit is 4. We have encountered 2 digits greater than 4: 9 and 6. Replacing the current digit (4) with either 6 or 9 is possible because in both cases we have at least 1 even digit remaining to serve as the last digit (for 9 we can choose either 4 or 6 as the last digit, and for 6 we only have 4 as a possible last digit). Because we want to return the minimum possible number, we'll choose the smaller of the two, which is 6.
We put 6 as the current last digit (n=8236). The remaining digits to use are {1,4,9}: we choose the maximum even digit to serve as the last digit of the returned number, which is 4 (it's the only remaining even digit). The rest of the digits we append to n in increasing order (n=823619). Finally, we add the chosen even digit (4) to the end of n (n=8236194) and return n.
Code: pastebin.com/k4NgZFfG
Complexity: O(log n) worst-case (because the digits array is of constant size and the sum of all its counters cannot exceed the total number of digits in n).
It seems to work but I haven't tested it too thoroughly.
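In case the pastebin link goes stale, here is a sketch of steps 1-3 above (class and helper names are mine; it assumes n is positive and fits in a long):

```java
public class NextEvenPermutation {
    // Smallest even number greater than n using exactly n's digits; -1 if none.
    static long nextEvenLarger(long n) {
        int[] counts = new int[10];            // step 1: digit counters
        long prefix = n;
        while (prefix > 0) {
            int cur = (int) (prefix % 10);
            counts[cur]++;                     // step 2.1
            prefix /= 10;                      // step 2.2
            // step 2.3: smallest digit d > cur whose removal leaves an even digit
            for (int d = cur + 1; d <= 9; d++) {
                if (counts[d] == 0) continue;
                counts[d]--;
                int even = maxEvenRemaining(counts);
                if (even >= 0) {               // step 2.4: build the answer
                    counts[even]--;
                    long res = prefix * 10 + d;          // step 2.4.1
                    for (int r = 0; r <= 9; r++)         // step 2.4.3: ascending
                        for (int c = 0; c < counts[r]; c++) res = res * 10 + r;
                    return res * 10 + even;              // step 2.4.4
                }
                counts[d]++;                   // undo, try the next candidate
            }
        }
        return -1;                             // step 3: no such number exists
    }

    // step 2.4.2: the maximum even digit with a positive counter, or -1.
    static int maxEvenRemaining(int[] counts) {
        for (int d = 8; d >= 0; d -= 2) if (counts[d] > 0) return d;
        return -1;
    }

    public static void main(String[] args) {
        System.out.println(nextEvenLarger(8234961L)); // 8236194, as in the example
    }
}
```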