CT
ID (Iterative Deepening) solution. Recall that ID has the same asymptotic time complexity as either BFS or DFS.
public static void printLevels(Node root) {
    int depth = 0;
    while (printDepth(root, 0, depth) > 0) {
        depth++;
        System.out.println();
    }
}

public static int printDepth(Node curNode, int curDepth, int targetDepth) {
    if (curDepth == targetDepth) {
        System.out.print(curNode.val());
        return 1;
    }
    int sum = 0;
    if (curNode.left() != null) {
        sum += printDepth(curNode.left(), curDepth + 1, targetDepth);
    }
    if (curNode.right() != null) {
        sum += printDepth(curNode.right(), curDepth + 1, targetDepth);
    }
    return sum;
}
1. Recursion always uses the stack. If you are dealing with a huge amount of data you sometimes must avoid it, because most languages put a limit on that stack. Try implementing some recursive method, say Fibonacci ( F(i)=F(i-1)+F(i-2), even though the DP method is much more efficient, this is for education purposes), and run it for a large i-th Fibonacci number. You will be surprised how quickly you reach the limit.
2. Recursion is fine for many, many problems. It is NOT taboo in a Google interview unless the specific problem makes it so.
3. In-place and O(1) memory are two different things. Maybe this problem asks about in-place but not about O(1); perhaps the person who posted it can verify. If your program runs out of stack memory, the fact that it is an in-place algorithm will not help you.
4. The problems Google asks are not always about what they do; they are for testing problem solving.
5. And one more note about recursion. If a method does not depend on the recursive call's return to complete (a tail call), it may not need a stack frame. This is perhaps why sorting methods do not count that memory cost: once you are given your divide, each half can be finished independently, and the compiler may realize it does not need to stack the frame for later unwinding. However, this is normally not the case in tree walking, because you have to return from the left child to go to the right one, so the method depends on unwinding the stack to complete.
You cannot save just one back link; you have to stack them up for DFS. That is why there is a stack.
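Point 1 can be demonstrated with a minimal sketch (hypothetical class `StackDemo`; a plain linear recursion is used instead of Fibonacci so that the depth, not the running time, is what hits the limit first):

```java
public class StackDemo {
    // Linear recursion: one stack frame per level, so a large n
    // overflows the default JVM thread stack long before it finishes.
    static long depth(long n) {
        if (n == 0) return 0;
        return 1 + depth(n - 1);
    }

    // Wraps the recursion so the failure mode is observable.
    static String tryDepth(long n) {
        try {
            return Long.toString(depth(n));
        } catch (StackOverflowError e) {
            return "StackOverflowError";
        }
    }

    public static void main(String[] args) {
        System.out.println(tryDepth(1_000));      // completes fine
        System.out.println(tryDepth(10_000_000)); // blows the default stack
    }
}
```

With the default stack size (roughly 512 KB to 1 MB), the limit is reached after only tens of thousands of frames, far below ten million.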
All the tradeoff observations are nice, and dynamic programming is nice for the wide range of different problems they will ask you. But this particular problem may be all about O(1) memory. Google's problems go beyond divide-and-conquer skills; they do make sure you have them, but they want more.
I think this problem clearly wants you to reuse partial results to enable DFS or BFS or whatever reasonable walk.
Why do we always use recursion (with its implied stack) or a queue for BFS? Because we have no link back to the parent.
Now, if no one can solve this problem in 45 minutes, Google will compare who had done more and who came closer, and all the things you mentioned will be considered, but only as a partial result. Whether they will let you use recursion in the framework of a partial answer is up to them; O(1) memory may be the most important statement of the problem, and they may not. They don't care whether "you know how to solve a problem using divide and conquer"; they already know that from screening. They want to see you work the problem, and O(1) memory may very well be the problem.
@zortlord, thank you for the feedback. I did not understand it, though: you said node 5 returns after the left side is complete, but 5 is on the left side. Can you clarify?
Consider this:
How many times do you re-walk the links?
Every incomplete subtree will have a shortcut back.
Think about the walk trace in a DFS of a tree: it also traces back over what has already been walked, yet the walk is still O(n).
For Google, meeting the problem requirements is normally just the first step; if you don't get the optimal solution, you usually get no vote. However, this problem is ridiculously difficult; it took me hours to get both parts right. I think the goal is to learn how to solve problems like this optimally in 45 minutes.
- CT January 08, 2015

@zortlord, you seem to be verifying code rigorously. I made sure I fulfilled every requirement of the problem, maybe the only person on this wall. I also tested my solution rigorously to make sure it works. Just curious: is there any reason you are not up-voting my solution?
- CT January 08, 2015

If a word contains 2 a's, it can potentially be matched by 2 a's, 3 a's, or 4 a's, but not by fewer than 2.
If a word contains 0 b's, it can potentially be matched by 0 b's, 1 b, and so on. (Interesting observation: it cannot be matched by 4 b's, because that leaves no more letters for the word. If it contains two letters, it cannot be matched by 3 b's.)
I am just thinking about intersecting sets with given properties to get a very small set to work with.
Content as a key? Come on, the content can be large, and that adds to the time complexity of every hash operation; they will no longer be O(1) but the size of Tolstoy's War and Peace, if that is what the website is displaying.
How about a trie with every content trimmed after the first unique character? (The trimmed tail will need to be temporarily retrieved when inserting more URLs.)
Here is the code that loosely follows my previous explanation (below the code).
Relink each node with its next-on-the-left (grand)*child, the one that has no right child.
Do this for every (often unlinked) left child.
Back up to the right along the very same links that were relinked, until the left pointer no longer points back.
If the end is reached (no right node), the work is completed.
Otherwise, relink to the right (similarly to the left relinking in the first step) just once, and repeat from the beginning for the (often unlinked) right child.
//return tail of the double-linked list
static Node convert(Node root) {
    while (true) {
        while (root.left != null && root != root.left.right) {
            root = relinkLeft(root);
        }
        //go back
        while (root.right != null && root == root.right.left) {
            root = root.right;
        }
        if (root.right == null) return root;
        root = relinkRight(root);
    }
}

static Node nextOnLeft(Node n) {
    Node left = n.left;
    while (left.right != null && left != left.right.left) left = left.right;
    return left;
}

//return orphan
static Node relinkLeft(Node n) {
    Node orphan = n.left;
    Node left = nextOnLeft(n);
    n.left = left;
    left.right = n;
    return orphan;
}

static Node nextOnRight(Node n) {
    Node right = n.right;
    while (right.left != null && right != right.left.right) right = right.left;
    return right;
}

//return orphan
static Node relinkRight(Node n) {
    Node orphan = n.right;
    Node right = nextOnRight(n);
    n.right = right;
    right.left = n;
    return orphan;
}
EDIT2
Node properties:
noLeftChild is a Node that has no left child
nextOnLeft is the next Node to the left of noLeftChild
noRightChild is a Node that has no right child
nextOnRight is the next Node to the right of noRightChild
A single node can have several of the above roles.
right of nextOnLeft points to noLeftChild
left of noLeftChild points to nextOnLeft
right of noRightChild points to nextOnRight
left of nextOnRight points to noRightChild
The traversal, which I will detail later, takes O(1) space, including no stack from recursion, by leveraging the re-linking built so far.
For two consecutive elements that have matching properties, do the re-linking.
Code will follow in some time.
Here is why: your solution requires the maximum size of the queue to fit an entire level, as is evident from the debugging above. ID requires the maximum size of the stack to fit only as many elements as there are levels. So ID is much more space-efficient; it requires the logarithm of the space that this method is using.
- CT January 06, 2015

Let's walk it.
Enqueuing root >root>
Enqueuing null >null,root>
Peeking at root
Dequeuing root >null>
Enqueuing left >left,null>
Enqueuing right >right,left,null>
Dequeuing null >right,left>
Enqueuing null >null,right,left>
Peeking at left
Dequeuing left >null,right>
Enqueuing left.left >left.left,null,right>
Enqueuing left.right >left.right,left.left,null,right>
Peeking at right
Dequeuing right >left.right,left.left,null>
Enqueuing right.left >right.left,left.right,left.left,null>
Enqueuing right.right >right.right,right.left,left.right,left.left,null>
Dequeuing null >right.right,right.left,left.right,left.left>
Enqueuing null >null,right.right,right.left,left.right,left.left>
It seemed to work. I was in disbelief, because then I did not understand for what purpose ID (iterative deepening) was invented. I was thinking it is because less space is required, but each ID iteration is a DFS that still maintains a stack, even if recursion is used for the DFS, because recursion maintains the stack internally.
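To put rough numbers on the space comparison, here is a small sketch (the one-million-node complete binary tree is a hypothetical figure): the BFS queue peaks at the widest level, roughly n/2 nodes, while the ID/DFS stack peaks at the tree height, about log2 n.

```java
public class SpaceCompare {
    // Peak BFS queue size: the widest (last) level of a complete binary tree.
    static long bfsQueuePeak(long n) {
        return (n + 1) / 2;
    }

    // Peak ID/DFS stack depth: the height of the tree, floor(log2 n) + 1.
    static long idStackPeak(long n) {
        return 64 - Long.numberOfLeadingZeros(n);
    }

    public static void main(String[] args) {
        long n = 1_000_000; // hypothetical tree size
        System.out.println(bfsQueuePeak(n) + " vs " + idStackPeak(n)); // prints 500000 vs 20
    }
}
```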
ID (Iterative Deepening) solution. Recall that ID has the same asymptotic time complexity as either BFS or DFS. The problem I have with all of the solutions above: the question seems to ask to separate each level from the next, and I don't see anyone doing that.
m log n, where m is the number of unique ages and n is the number of elements. Since m is practically a constant (our age is, unfortunately, bounded), the time complexity is log n. If you look at this very simple and very short code, it is nothing else than a binary search for the border point between two consecutive ages.
//input must be sorted ascending
//return: index is the age, value is the count of that age
public static int[] count(int[] input) {
    int[] count = new int[input[input.length - 1] + 1];
    count(input, 0, input.length - 1, count);
    return count;
}

private static void count(int[] input, int begin, int end, int[] count) {
    if (input[begin] == input[end]) {
        //both ends equal: the whole sorted range is one age
        count[input[begin]] += end - begin + 1;
    } else {
        count(input, begin, (begin + end) / 2, count);
        count(input, (begin + end) / 2 + 1, end, count);
    }
}
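A self-contained sanity check of the method above (it restates the same code under a hypothetical class name `AgeCount` and assumes the input ages are sorted ascending):

```java
public class AgeCount {
    // Same divide-and-conquer idea: if both ends of a sorted range hold
    // the same age, the whole range is that age; credit it in O(1).
    public static int[] count(int[] input) {
        int[] count = new int[input[input.length - 1] + 1];
        count(input, 0, input.length - 1, count);
        return count;
    }

    private static void count(int[] input, int begin, int end, int[] count) {
        if (input[begin] == input[end]) {
            count[input[begin]] += end - begin + 1;
        } else {
            int mid = (begin + end) / 2;
            count(input, begin, mid, count);
            count(input, mid + 1, end, count);
        }
    }

    public static void main(String[] args) {
        int[] c = count(new int[]{20, 20, 21, 21, 21, 30});
        System.out.println(c[20] + " " + c[21] + " " + c[30]); // prints 2 3 1
    }
}
```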
We can just have a background thread that searches for the new max after deleting the max (deleting a non-max does not change it, and inserting something greater than the max is a trivial update).
If you are getting the max and this thread is still running, let it finish. Also, if you are doing an insert or delete operation and this thread is still running, let it finish, so you can be certain whether you are deleting a non-max or inserting something greater than the max.
Obviously the worst case is O(n), when an operation has to wait for the background thread while it is running. However, given that this thread starts up only on deleting the max, we can be certain of O(1) in the average case.
P.S. When inserting, you don't have to wait for the thread to finish once it finds an element greater than the one being inserted. The same holds when deleting, once it finds an element greater than the one being deleted.
VERSION2: Stop (or pause) the background thread on every insert and delete, and then restart it. Then the only operation that may have to wait for it is get-max. If the insert/delete call frequency never lets it complete, it will eventually complete on the next get-max and will not start again until the next delete-max.
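The background-thread version is hard to sketch safely in a few lines, but the same "defer the rescan" trade-off can be shown single-threaded (hypothetical class `LazyMaxBag`): mark the cached max stale on delete-max and rescan lazily on the next get-max.

```java
import java.util.ArrayList;
import java.util.List;

public class LazyMaxBag {
    private final List<Integer> items = new ArrayList<>();
    private int max = Integer.MIN_VALUE;
    private boolean stale = false; // true after the max was deleted

    public void insert(int x) {
        items.add(x);
        if (!stale && x > max) max = x; // trivial O(1) update
    }

    public void deleteMax() {
        int m = getMax();               // make sure the cache is fresh
        items.remove(Integer.valueOf(m));
        stale = true;                   // defer the rescan until needed
    }

    public int getMax() {
        if (stale) {                    // the O(n) rescan, done rarely
            max = Integer.MIN_VALUE;
            for (int x : items) if (x > max) max = x;
            stale = false;
        }
        return max;
    }

    public static void main(String[] args) {
        LazyMaxBag b = new LazyMaxBag();
        b.insert(3); b.insert(7); b.insert(5);
        b.deleteMax();
        System.out.println(b.getMax()); // prints 5
    }
}
```

As in the thread version, the expensive rescan is triggered only by deleting the max, so insert and get-max stay O(1) in the common case.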
This is a good lesson for me, because I got stuck thinking the edges have to be handled on every recursion. If you just take care of them once, the whole problem becomes easier. I had another solution that was not working because of that, where I could place the missing numbers into an array instead of a Set. It also passed the missing from-to indexes as parameters, so you could know exactly where to place each missing item. Maybe I will post it later.
- CT December 05, 2014

But it is very clever to include the middle in both halves so we can look at every pair of interest; I thought about a pair solution but could not come up with this trick. And if nothing is missing, discard the entire chunk, so you deepen only where something is missing. You could also have detected that nothing is missing instead of passing m and testing m==0; that would make it even simpler :)
- CT December 05, 2014

I also thought about this approach, but I have a problem with it. You have a pair. How can you be certain to get to the other pair? For instance:
1 3 5 7
Let's say you visit the pair 1 3 and detect 2. How can you be certain you will also visit 3 5 and detect 4? Once you split, you cannot get an element from the other half.
Also, you are missing the case where numbers are missing at the beginning or the end. For instance, starting with 5, you need to detect 1, 2, 3, 4. Likewise, ending with N-5, you need to detect N-4 and so on.
This is an m log(n-m) solution. Provided that m is bounded by a constant (find very few missing numbers out of very many), this is close to a log n solution.
The idea is divide and conquer. Think of the missing numbers as making the array indexes lag behind the values. If the lag is the same across an entire partition, nothing is missing there, and we do not process that region. We deepen only into partitions where something is missing; for one missing number this is log(n-m).
static Set<Integer> findMissing(int[] input, int N) {
    Set<Integer> result = new HashSet<Integer>(N - input.length);
    findMissing(input, result, 0, input.length - 1, 0, N + 1);
    return result;
}

static void findMissing(
        int[] input, Set<Integer> result,
        int start, int end,
        int valBefore, int valAfter) {
    int delayAtBegin = input[start] - start;
    int delayAtEnd = input[end] - end;
    if (delayAtBegin == delayAtEnd) {
        for (int i = valBefore + 1; i < input[start]; i++) result.add(i);
        for (int i = input[end] + 1; i < valAfter; i++) result.add(i);
        return;
    }
    int endM = (start + end) / 2;
    int startM = endM + 1;
    findMissing(input, result, start, endM, valBefore, input[startM]);
    findMissing(input, result, startM, end, input[endM], valAfter);
}
Test with:
System.out.println(findMissing(new int[]{1,2,4,5,6,8},8)); //missing: 3 and 7
The way I understood the definition of the problem, for one person the ranges are non-overlapping. "Increasing sequence of pairs of numbers" does not say increasing begin point, but increasing pairs. Also, it refers to the calendar of one person; you cannot make it busy with two things at a time, I believe. It is almost obvious from the problem statement, but @Phoenix, please confirm.
- CT December 03, 2014

EDIT: Handle any number of people by arranging their range iterators into a heap. This reorganization also improves the readability of the algorithm logic.
static class Range {
    int B;
    int E;
    Range(int b, int e) {
        B = b;
        E = e;
    }
    public String toString() { return "(" + B + "," + E + ")"; }
}

public static List<Range> findFreeRanges
        (int start, int end, List<Range>... schedules) {
    int beginCounter = 0;
    int next = 0;
    Range curRange = new Range(start, end);
    List<Range> freeRanges = new ArrayList<Range>();
    MergedTimePointsIterator iterator =
            new MergedTimePointsIterator(schedules);
    while (iterator.hasNext()) {
        TimePoint tp = iterator.next();
        next = tp.time;
        if (beginCounter == 0 && next > curRange.B) {
            curRange.E = next - 1;
            freeRanges.add(curRange);
        }
        beginCounter += (tp.begin) ? 1 : -1;
        if (beginCounter == 0 && next < end) {
            curRange = new Range(next + 1, end);
        }
    }
    if (beginCounter == 0 && next < end) {
        freeRanges.add(curRange);
    }
    return freeRanges;
}

private static class TimePoint {
    private int time;
    private boolean begin;
    private TimePoint(int t, boolean b) {
        time = t;
        begin = b;
    }
}

private static class MergedTimePointsIterator
        implements Iterator<TimePoint> {
    private PriorityQueue<TimePointsIterator> rangeHeap =
            new PriorityQueue<TimePointsIterator>();
    private MergedTimePointsIterator(List<Range>... schedules) {
        for (List<Range> s : schedules) {
            TimePointsIterator tpi = new TimePointsIterator(s);
            if (tpi.hasNext()) rangeHeap.offer(tpi);
        }
    }
    public boolean hasNext() { return !rangeHeap.isEmpty(); }
    public TimePoint next() {
        if (!hasNext()) throw new NoSuchElementException();
        TimePointsIterator tpi = rangeHeap.poll();
        TimePoint ret = tpi.next();
        if (tpi.hasNext()) rangeHeap.offer(tpi);
        return ret;
    }
    public void remove() { throw new UnsupportedOperationException(); }
}

private static class TimePointsIterator
        implements Comparable<TimePointsIterator>, Iterator<TimePoint> {
    private List<Range> rangeList;
    private int rangePointer = 0;
    private boolean begin = true;
    private TimePointsIterator(List<Range> list) {
        rangeList = list;
    }
    private int peek() throws NoSuchElementException {
        if (!hasNext()) throw new NoSuchElementException();
        Range range = rangeList.get(rangePointer);
        return (begin) ? range.B : range.E;
    }
    public int compareTo(TimePointsIterator o) {
        return Integer.compare(peek(), o.peek());
    }
    public boolean hasNext() {
        return rangePointer < rangeList.size();
    }
    public TimePoint next() {
        TimePoint ret = new TimePoint(peek(), begin);
        if (!begin) rangePointer++;
        begin = !begin;
        return ret;
    }
    public void remove() { throw new UnsupportedOperationException(); }
}
From the two range lists, pick integers in order, regardless of whether each is the beginning of a range or the end, advancing in one list or the other so as to keep the integers ascending.
For every picked integer that is the beginning of a range, increment a begin counter; for every end, decrement it.
A free range is wherever the begin counter is 0.
Obviously, linear time complexity.
static class Range {
    int B;
    int E;
    Range(int b, int e) {
        B = b;
        E = e;
    }
    public String toString() { return "(" + B + "," + E + ")"; }
}

public static List<Range> findFreeRanges
        (List<Range> per1, List<Range> per2, int start, int end) {
    int ind1 = 0;
    int ind2 = 0;
    int beginCounter = 0;
    int next = 0;
    Range curRange = new Range(start, end);
    List<Range> freeRanges =
            new ArrayList<Range>(per1.size() + per2.size());
    while (ind1 < per1.size() * 2 || ind2 < per2.size() * 2) {
        int p1next = (ind1 < per1.size() * 2) ?
                ((ind1 % 2 == 0) ? per1.get(ind1 / 2).B : per1.get(ind1 / 2).E) :
                Integer.MAX_VALUE;
        int p2next = (ind2 < per2.size() * 2) ?
                ((ind2 % 2 == 0) ? per2.get(ind2 / 2).B : per2.get(ind2 / 2).E) :
                Integer.MAX_VALUE;
        next = (p1next < p2next) ? p1next : p2next;
        if (beginCounter == 0 && next > curRange.B) {
            curRange.E = next - 1;
            freeRanges.add(curRange);
        }
        if (p1next < p2next) {
            beginCounter += (ind1 % 2 == 0) ? 1 : -1;
            ind1++;
        } else {
            beginCounter += (ind2 % 2 == 0) ? 1 : -1;
            ind2++;
        }
        if (beginCounter == 0 && next < end) {
            curRange = new Range(next + 1, end);
        }
    }
    if (beginCounter == 0 && next < end) {
        freeRanges.add(curRange);
    }
    return freeRanges;
}
public static void main(String[] arg) {
    System.out.println(findFreeRanges(
        new ArrayList<Range>() {{ add(new Range(1, 5));
                                  add(new Range(10, 14));
                                  add(new Range(19, 20));
                                  add(new Range(21, 24));
                                  add(new Range(27, 30)); }},
        new ArrayList<Range>() {{ add(new Range(3, 5));
                                  add(new Range(12, 15));
                                  add(new Range(18, 21));
                                  add(new Range(23, 24)); }},
        1, 30));
}
Brilliant. I was about to do it like the in-place partition stage of quicksort, but this way saves one of the 3 array accesses in the swap procedure. It also helps in understanding the quicksort in-place partition stage in a way you never have to memorize: if you get how one side of the swap works, the other element must end up on the other side.
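For reference, a minimal sketch of the partition stage being referred to (a generic Lomuto-style partition under a hypothetical class name `PartitionDemo`, not the poster's code), just to make the "other side of the swap" behavior concrete:

```java
import java.util.Arrays;

public class PartitionDemo {
    // Lomuto-style partition: elements <= pivot are swapped toward the
    // front; whatever occupied the front slot lands where the smaller
    // element came from - the "other side" of the swap.
    static int partition(int[] a, int lo, int hi) {
        int pivot = a[hi];
        int i = lo; // boundary of the <= pivot region
        for (int j = lo; j < hi; j++) {
            if (a[j] <= pivot) {
                int t = a[i]; a[i] = a[j]; a[j] = t;
                i++;
            }
        }
        // place the pivot between the two regions
        int t = a[i]; a[i] = a[hi]; a[hi] = t;
        return i;
    }

    public static void main(String[] args) {
        int[] a = {5, 2, 8, 1, 3};
        int p = partition(a, 0, a.length - 1);
        System.out.println(p + " " + Arrays.toString(a)); // prints 2 [2, 1, 3, 5, 8]
    }
}
```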
- CT December 02, 2014
@zortlord, should e2g2 also map to egg?
- CT January 09, 2015