aye.bettikk
Hard to believe this is a Google interview question! If so, then whoever got this one is a very lucky person :-)
- aye.bettikk February 24, 2015

A simple one using ArrayDeque in Java?
public int[] liss(int[] A) {
    ArrayDeque<Integer> stack = new ArrayDeque<>();
    for (int i = 0; i < A.length; i++) {
        if (stack.isEmpty() || A[i] > stack.peek()) {
            // A[i] extends the current increasing run
            stack.push(A[i]);
        } else if (A[i] == stack.peek()) {
            continue;
        } else {
            // peekLast() is the bottom of the stack, i.e. its smallest value
            if (A[i] < stack.peekLast())
                continue;
            // replace the tail of the run with the smaller A[i]
            while (stack.peek() > A[i]) {
                stack.pop();
            }
            stack.push(A[i]);
        }
    }
    // pop top-to-bottom, filling the array from the end to keep ascending order
    int[] result = new int[stack.size()];
    for (int i = result.length - 1; i >= 0; i--) {
        result[i] = stack.pop();
    }
    return result;
}
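For anyone who wants to run this, here is the method wrapped in a minimal harness (the class name `LissDemo` is mine; the method body is copied so the snippet compiles on its own). The second input shows a case where skipping everything below the first element loses the longer subsequence [1, 2]:

```java
import java.util.ArrayDeque;
import java.util.Arrays;

public class LissDemo {
    // The method above, copied so the snippet compiles on its own.
    static int[] liss(int[] A) {
        ArrayDeque<Integer> stack = new ArrayDeque<>();
        for (int i = 0; i < A.length; i++) {
            if (stack.isEmpty() || A[i] > stack.peek()) {
                stack.push(A[i]);
            } else if (A[i] == stack.peek()) {
                continue;
            } else {
                if (A[i] < stack.peekLast())
                    continue;
                while (stack.peek() > A[i]) {
                    stack.pop();
                }
                stack.push(A[i]);
            }
        }
        int[] result = new int[stack.size()];
        for (int i = result.length - 1; i >= 0; i--) {
            result[i] = stack.pop();
        }
        return result;
    }

    public static void main(String[] args) {
        // A friendly case: the stack recovers the full subsequence.
        System.out.println(Arrays.toString(liss(new int[]{1, 2, 5, 3, 4}))); // [1, 2, 3, 4]
        // Everything below the first element is skipped, so [1, 2] is lost.
        System.out.println(Arrays.toString(liss(new int[]{5, 1, 2}))); // [5]
    }
}
```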
I don't see the point in hashing, which is inherently unreliable for keys that are not of fixed size, such as URLs.
The requirement is to find the very first non-repeating URL and stop any further processing.
Do you see the need to map each URL in the data set and update each URL's count?
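For context, the map-and-count approach being questioned here could be sketched like this (class and method names are mine). It makes the objection concrete: every URL is counted before the first answer can be produced.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class FirstUniqueUrl {
    // Sketch of the counting approach under discussion: one pass to count,
    // one pass over insertion order to find the first URL whose count is 1.
    // Assumes the whole data set fits in memory.
    static String firstNonRepeating(String[] urls) {
        Map<String, Integer> counts = new LinkedHashMap<>();
        for (String url : urls) {
            counts.merge(url, 1, Integer::sum);
        }
        for (Map.Entry<String, Integer> e : counts.entrySet()) {
            if (e.getValue() == 1) {
                return e.getKey();
            }
        }
        return null; // every URL repeats
    }

    public static void main(String[] args) {
        String[] urls = {"a.com", "b.com", "a.com", "c.com"};
        System.out.println(firstNonRepeating(urls)); // b.com
    }
}
```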
Good catch :-/
- aye.bettikk March 07, 2015

I think the solution could be:
1) Pop all elements into a separate list at that point
2) Cache the list somewhere, compare the sizes of the cached and newly created lists, and keep the longer one?
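The two steps above could be sketched like this (class and helper names are mine; note that even with the cache this only preserves runs the pop loop would discard, so it is a heuristic rather than a full longest-increasing-subsequence algorithm):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class LissWithCache {
    static List<Integer> liss(int[] A) {
        ArrayDeque<Integer> stack = new ArrayDeque<>();
        List<Integer> best = new ArrayList<>();
        for (int a : A) {
            if (stack.isEmpty() || a > stack.peek()) {
                stack.push(a);
            } else if (a < stack.peek() && a > stack.peekLast()) {
                // About to shrink the stack: cache the current run first.
                best = longer(best, snapshot(stack));
                while (stack.peek() > a) {
                    stack.pop();
                }
                stack.push(a);
            }
            // duplicates and values below the run's start are ignored
        }
        return longer(best, snapshot(stack));
    }

    static List<Integer> snapshot(ArrayDeque<Integer> stack) {
        List<Integer> list = new ArrayList<>(stack); // iterates top-to-bottom
        Collections.reverse(list);                   // restore ascending order
        return list;
    }

    static List<Integer> longer(List<Integer> a, List<Integer> b) {
        return b.size() > a.size() ? b : a;
    }

    public static void main(String[] args) {
        // The run 1, 5, 6, 7 would be destroyed by the pop loop; the cache keeps it.
        System.out.println(liss(new int[]{1, 5, 6, 7, 2})); // [1, 5, 6, 7]
    }
}
```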