Sunny
Pretty much the same idea as Ankita's, where each recursive call does 2 things:
(1) Replace the values for all nodes falling under the current node (including the current node)
(2) Return the new value of the leftmost descendant falling under the current node
static int sumGreater(Node node, int n) {
    if(node == null)
        return n;
    node.value += n;
    node.value += sumGreater(node.right, 0);
    return sumGreater(node.left, node.value);
}
Assuming binary tree means every node has either two or no children (pass true for leftMost & rightMost initially):
static void printBorder(Node node, boolean leftMost, boolean rightMost) {
    if(node == null)
        return;
    if(leftMost || rightMost || (node.left == null && node.right == null))
        System.out.println(node.value);
    printBorder(node.left, leftMost, false);
    printBorder(node.right, false, rightMost);
}
If we can't make that assumption then see my other solution using BFS.
- Sunny April 21, 2013
(1) Perform BFS (breadth-first search) on the tree
(2) For each level in the BFS, include the first & last nodes at each level. Include the last only if it's different from the first.
(3) Include all the nodes at the bottom-most level
(4) Return the list of nodes you have included from (2) & (3)
static List<Node> border(Node node) {
    ArrayList<LinkedList<Node>> bfs = new ArrayList<LinkedList<Node>>();
    LinkedList<Node> currentLevel = new LinkedList<Node>();
    currentLevel.add(node);
    do {
        bfs.add(currentLevel);
        LinkedList<Node> nextLevel = new LinkedList<Node>();
        for(Node n : currentLevel) {
            if(n.left != null)
                nextLevel.add(n.left);
            if(n.right != null)
                nextLevel.add(n.right);
        }
        currentLevel = nextLevel;
    } while(!currentLevel.isEmpty());
    LinkedList<Node> borderNodes = new LinkedList<Node>();
    for(int i=0; i<bfs.size()-1; i++) {
        currentLevel = bfs.get(i);
        borderNodes.add(currentLevel.getFirst());
        if(currentLevel.getLast() != currentLevel.getFirst())
            borderNodes.add(currentLevel.getLast());
    }
    borderNodes.addAll(bfs.get(bfs.size()-1));
    return borderNodes;
}
It seems like companies have started using more of these "design questions" to test candidates, in addition to the algorithmic ones. I agree asking for clarifications is a good start. Then just start giving the simplest solution while reiterating the assumptions and making the interviewer aware that you know this solution won't do this or that. As you go on the interviewer would probably start adding requirements like how to scale this to millions of users, how to make it robust and fault-tolerant, or how you might need to redesign given new functionalities etc.
Personally, I find these less stressful because you always have something to say rather than getting stuck.
First of all, I don't think the s(i) is useful here. All we need is the capacity of each station, and the distance to the next one. Below is an O(n) solution. Start at station 0, with zero fuel. Add current station's capacity to the fuel, then subtract current station's distance from the fuel. If the fuel is negative, we need to keep advancing the starting station till we have non-negative fuel again. Otherwise we just move on to the next station and do the same capacity/distance accounting. We quit when we have either completed a loop or no valid starting station exists.
This accounting can be tricky. I haven't tested my code exhaustively, but the general idea should work.
static int start(int[] cap, int[] dist) {
    int N = cap.length;
    int start = 0; // candidate starting station
    int curr = 0;  // next station to account for (may pass N-1 when we wrap)
    int fuel = 0;  // fuel balance over the stations from start through curr
    while(start < N) {
        fuel += cap[curr % N];
        fuel -= dist[curr % N];
        if(fuel < 0) {
            // keep advancing the starting station till fuel is non-negative again
            while(fuel < 0 && start <= curr && start < N) {
                fuel -= cap[start];
                fuel += dist[start];
                start++;
            }
            if(start > curr) { // dropped every station up to curr
                curr = start;
                fuel = 0;
                continue;
            }
        }
        curr++;
        if(curr - start == N) // completed a full loop
            return start;
    }
    return -1;
}
I wrote a brute-force O(n^3) program where each of i,j,k can range from -100 to 100 and that our goal is to check whether the inputs can be covered by any of these:
(1) i
(2) j
(3) k
(4) i+j
(5) i+k
(6) j+k
(7) i+j+k
My program couldn't find any, so I am convinced that no such 3 integers exist. The question should have mentioned that possibility, but I guess it's our job to prove or disprove whether such 3 integers will always exist?
His solution is O(n^2). The three for-loops in the brute-force method are merely for double-checking the validity of his solution. Basically his solution traverses array A and, for each element, performs at most O(2*n) comparisons. So overall it's still O(n^2).
- Sunny January 10, 2013
(1) Store all the characters of string B and their counts in a hashtable.
(2) Set numZeros to 0 initially
(3) Iterate through the first |B| characters of A. For each character, check if it's in the hashtable. If not, ignore it. If yes, decrement the count. If the count is 0, increment numZeros. If numZeroes == #keys in the hashtable, then we have found a substring that's an anagram of B.
(4) For each subsequent character in A, do the same as in (3) above. Also, we need to "subtract" the character that no longer belongs in the current substring. Again, check if that character is in the hashtable. If not, ignore it. If yes, increment the count. If the count is now 1, which means it's previously 0, decrement numZeros. Repeat step (4) until we reached end of array or have numZeros == #keys in the hashtable.
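The sliding-window steps above can be sketched in Java as follows (a minimal sketch; the class and method names are mine, not from the original answer):

```java
import java.util.HashMap;

class AnagramWindow {
    // Returns the start index of the first substring of a that is an
    // anagram of b, or -1 if none exists.
    static int findStart(String a, String b) {
        HashMap<Character, Integer> counts = new HashMap<Character, Integer>();
        for(char c : b.toCharArray())
            counts.put(c, counts.containsKey(c) ? counts.get(c) + 1 : 1);
        int numKeys = counts.size();
        int numZeros = 0;
        for(int i = 0; i < a.length(); i++) {
            char c = a.charAt(i);
            if(counts.containsKey(c)) {
                counts.put(c, counts.get(c) - 1);
                if(counts.get(c) == 0)
                    numZeros++;
            }
            if(i >= b.length()) { // "subtract" the character leaving the window
                char out = a.charAt(i - b.length());
                if(counts.containsKey(out)) {
                    counts.put(out, counts.get(out) + 1);
                    if(counts.get(out) == 1)
                        numZeros--;
                }
            }
            if(numZeros == numKeys)
                return i - b.length() + 1;
        }
        return -1;
    }
}
```

For example, findStart("cbaebabacd", "abc") returns 0, since "cba" is an anagram of "abc".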
Seems like no one posted the DP version, so I will post mine. Basically I am using a String to represent a sequence of bricks. So the string might look like "223" or "33". For each i, ways[i] will store a list of strings that represent the different legitimate brick configurations.
Note that I am printing out all the possibilities (instead of just returning the number) because that's what the question asks for at the end.
static void printWays(int n) {
    ArrayList<String> ways[] = new ArrayList[n+1];
    ways[0] = new ArrayList<String>();
    ways[1] = new ArrayList<String>();
    for(int i=2; i<=n; i++) {
        ways[i] = new ArrayList<String>();
        if(i==2 || i==3)
            ways[i].add("" + i);
        for(String way : ways[i-2]) {
            ways[i].add(way + "2");
        }
        if(i<3)
            continue;
        for(String way : ways[i-3]) {
            ways[i].add(way + "3");
        }
    }
    for(String way : ways[n]) {
        System.out.println(way);
    }
}
I think it's not even clear whether we can modify the heap directly, or whether we are only exposed to its API such as insert() and extractMin(). I suspect this is an easy problem where we are just supposed to extract them all into another data structure then add them back into the heap while de-duping.
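A minimal sketch of that easy approach, assuming we only see the heap's public API (Java's poll/add standing in for extractMin/insert):

```java
import java.util.LinkedHashSet;
import java.util.PriorityQueue;

class HeapDedupe {
    // Drain the heap into a set (dropping duplicates), then re-insert.
    static void dedupe(PriorityQueue<Integer> heap) {
        LinkedHashSet<Integer> distinct = new LinkedHashSet<Integer>();
        while(!heap.isEmpty())
            distinct.add(heap.poll()); // extractMin()
        for(int v : distinct)
            heap.add(v);               // insert()
    }
}
```

This is O(n log n) time and O(n) extra space; if we were allowed to touch the heap's backing array directly, we could do better.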
- Sunny January 07, 2013
(1) Insert all numbers in 2nd array (and their counts) into a hashtable.
(2) For each number in the 1st array, if it's not a key in the hashtable, just ignore it. Otherwise decrement the count. If the count is now 0, increment a counter, which I call numZeros. Basically when numZeros is equal to the number of keys in the hashtable (numKeys), then we have found the sequence we need.
(3) Since the sequence should be the same length as the second array, after step (2) above, we need to kick out the number that's no longer covered by the current sequence. If that number is not a key in the hashtable, just ignore it. Otherwise we increment its count. If the count is now 1, that means the count was previously 0, and so we should now decrement numZeros.
Below is my code to hopefully make this more clear.
static int start(int[] arr, int[] arr2) {
    HashMap<Integer, Integer> map = new HashMap<Integer, Integer>();
    for(int i : arr2) {
        if(!map.containsKey(i))
            map.put(i, 0);
        map.put(i, map.get(i) + 1);
    }
    int numKeys = map.keySet().size();
    int numZeros = 0;
    int len = arr.length;
    int len2 = arr2.length;
    for(int i=0; i<len; i++) {
        int n = arr[i];
        if(map.containsKey(n)) {
            map.put(n, map.get(n)-1);
            if(map.get(n) == 0)
                numZeros++;
        }
        if(i>=len2) {
            n = arr[i-len2];
            if(map.containsKey(n)) {
                map.put(n, map.get(n)+1);
                if(map.get(n) == 1)
                    numZeros--;
            }
        }
        if(numZeros == numKeys)
            return i-len2+1;
    }
    return -1;
}
Extra memory generally just means O(1). In other words, don't use another data structure such as Heap/Tree/Stack that would take O(n) space.
The question says "if suppose no of 0s exceed no. of 1s or vice versa then keep them untouched." I interpret that to mean if it's impossible to rearrange the 0s and 1s in the desired fashion, then leave the array unchanged. I don't think this solution addresses that, but then I don't think there's such a solution anyway if we want 1-pass.
First store a pointer to the first node. Then keep following the first node's parent pointers till we hit the root, noting whether we encounter the second node along the way with the "encountered" variable. If we encountered the second node, then we have already printed the path from the second node to the root. Otherwise, keep following the second node's parent pointers till we either hit the root or encounter the original first node along the way. The advantage of this is that we traverse both paths only once, and we aren't using any additional data structure.
(Didn't really test this code though)
static void findRoot(Node n, Node n2) {
    boolean encountered = false;
    Node tmp = n;
    System.out.println("FIRST");
    while(n != null) {
        if(n == n2)
            encountered = true;
        System.out.println(n.value);
        n = n.parent;
    }
    System.out.println("SECOND");
    while(!encountered && n2 != null) {
        System.out.println(n2.value);
        n2 = n2.parent;
        if(n2 == tmp)
            encountered = true;
    }
}
I don't see a way of doing this in-place without an additional data structure like a stack or a simulated one through recursion. Below is my recursive version. Basically pos, numEven and numOdd are all 0 initially. The idea is to store arr[pos] as a variable, then rely on recursion to properly place all numbers in the rest of the array. After the recursive call, we can then put arr[pos] in the appropriate place. We know where to place it because we keep track of how many even & odd numbers we have seen so far.
Note that I am assuming there are enough even and odd numbers to fill the array in alternating fashion.
static void rearrange(int[] arr, int pos, int numEven, int numOdd) {
    if(pos >= arr.length)
        return;
    int n = arr[pos];
    if(n%2 == 0) {
        rearrange(arr, pos+1, numEven+1, numOdd);
        arr[2*numEven] = n;
    } else {
        rearrange(arr, pos+1, numEven, numOdd+1);
        arr[2*numOdd+1] = n;
    }
}
This question is ambiguous because if guest 1 enters at the same time as guest 2 exits, will we still need 2 cups or just 1? So just to be safe, let's assume exits occur after entries, even if they occur at the same time.
My approach is to first multiply all the entry & exit times by 2, and then add 1 to all the exit times. That way we can enforce the "exit should occur after entry" condition above. It also lets us tell whether a given time refers to an entry or an exit by looking at its parity. So now we just need to put all the entry & exit times in an array, sort them, and keep track of the guest count at each event.
// assuming the input array is {entry, exit, entry, exit...}
static int maxGuests(int[] arr) {
    int len = arr.length;
    int times[] = new int[len];
    for(int i=0; i<len; i++) {
        times[i] = 2*arr[i] + (i%2 == 0 ? 0 : 1);
    }
    Arrays.sort(times);
    int maxGuests = 0;
    int guests = 0;
    for(int time : times) {
        if(time%2 == 0) {
            guests++;
            maxGuests = Math.max(maxGuests, guests);
        } else {
            guests--;
        }
    }
    return maxGuests;
}
If we are talking about using a Min-Heap as a maximum heap, and that the values are integers, then we can negate the values before inserting. So given {8,3,7,1,5} the minimum is 1 but if we insert their negations then we would get back -8, and we can simply negate it back.
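A quick sketch with Java's PriorityQueue (the helper names are mine):

```java
import java.util.PriorityQueue;

class MaxViaMinHeap {
    // Build a min-heap holding the negations of the values.
    static PriorityQueue<Integer> buildNegated(int[] values) {
        PriorityQueue<Integer> heap = new PriorityQueue<Integer>();
        for(int v : values)
            heap.add(-v);     // insert the negation
        return heap;
    }

    // The heap's minimum is the negation of the original maximum.
    static int extractMax(PriorityQueue<Integer> negatedHeap) {
        return -negatedHeap.poll();
    }
}
```

So for {8,3,7,1,5} the heap holds {-8,-3,-7,-1,-5}, its minimum is -8, and negating it back gives the maximum 8.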
- Sunny January 02, 2013
Sorry, got to downvote for several reasons:
(1) It's O(n^2), slower than O(n) with Hashtable and O(nlogn) with sorting
(2) The inner for-loop iterates till "arr1.length-1" instead of "arr2.length-1"
(3) Perhaps most importantly, if arr1 = {5, 6, 4, 2} then you will still print 2 twice
The hashing approach is faster but requires O(n) memory, while this sorting approach is slower but requires O(1) memory. And this is the kind of question interviewer likes because if you answer one version they can impose some constraints and make you come up with the other as well.
- Sunny January 02, 2013
Since I don't remember B+ or Splay trees I would just write the following code to implement the range method. Basically I store the (key, value) in a Node and then store the Nodes in a BST. For lookup, I can either just do a binary search or have a HashMap as well as many have suggested.
static ArrayList<Node> range(Node n, int smaller, int larger) {
    ArrayList<Node> result = new ArrayList<Node>();
    boolean gteSmaller = (n.value >= smaller);
    boolean lteLarger = (n.value <= larger);
    if(gteSmaller && lteLarger)
        result.add(n);
    if(n.left != null && gteSmaller)
        result.addAll(range(n.left, smaller, larger));
    if(n.right != null && lteLarger)
        result.addAll(range(n.right, smaller, larger));
    return result;
}
I have a similar approach, and while traversing from current "node" up to the root, I store all intermediate nodes on a stack so that I can set their heights after I reached the root. I also would stop traversing once I encounter a "node" with a known height.
static int height(int[] parent) {
    int len = parent.length;
    int[] height = new int[len];
    int maxHeight = 0;
    for(int i=len-1; i>=0; i--) {
        int n = i;
        Stack<Integer> stack = new Stack<Integer>();
        while(parent[n] != -1) {
            if(height[n] > 0) // optimization when height is already known
                break;
            stack.push(n);
            n = parent[n];
        }
        int depth = height[n] + 1;
        while(!stack.isEmpty()) {
            n = stack.pop();
            height[n] = depth++;
            maxHeight = Math.max(maxHeight, height[n]);
        }
    }
    return maxHeight + 1;
}
This problem is basically the same as finding the unique element in an array where all other elements have exactly one duplicate. The idea is if you XOR the same element (e.g. number or character) twice then you get zero, and so by XORing every element in this array you would be left with the unique element at the end.
After being inspired by mystry's answer, I wrote the following Java program to perform this XOR. The basic idea is if the longest string has size n, then first set aside n characters of storage char[]. As you process each string s, perform the XOR: char[i] ^= s[i]. After processing both lists of strings, char[] should be left with the bits of the unique element.
Note that I cheated a little by using StringBuilder here and avoided having to explicitly find this longest length first.
static String findExtra(ArrayList<String> lst, ArrayList<String> lst2) {
    StringBuilder sb = new StringBuilder();
    ArrayList<String> arr[] = new ArrayList[2];
    arr[0] = lst;
    arr[1] = lst2;
    for(ArrayList<String> lst3 : arr) {
        for(String s : lst3) {
            if(sb.length() < s.length())
                sb.setLength(s.length());
            for(int i=0; i<s.length(); i++) {
                sb.setCharAt(i, (char)(sb.charAt(i) ^ s.charAt(i)));
            }
        }
    }
    return sb.toString();
}
I whole-heartedly agree with you guys. Haven't used DP even once in my almost 10 years of web/backend programming. I am beginning to realize that these interviews are really just standardized tests like SAT/GRE. They don't really measure actual job performance (because the actual job usually requires a bunch of different skills, and algorithm skill ranks almost near the bottom). But at the end, at least it's standardized such that you can ask almost any candidate the same question and evaluate based on that. So just treat it as a standardized test like GRE, really prepare, do well, and get it over with.
In the past I refused to even prepare for interview problems.
The question says to return the Node itself while most solutions are returning the sum of this Node, so I guess the question was probably edited. Here is my recursive version that returns the Node.
The recursive method does 2 things: (1) set the sum of current node and (2) return the largest-sum Node under the current node.
// assumes each node's sum field is initialized to the node's own value
static Node largestSum(Node n) {
    if(n == null)
        return null;
    Node leftLargest = largestSum(n.left);
    Node rightLargest = largestSum(n.right);
    n.sum += (n.left != null ? n.left.sum : 0);
    n.sum += (n.right != null ? n.right.sum : 0);
    Node largest = n;
    if(n.left != null && leftLargest.sum > largest.sum)
        largest = leftLargest;
    if(n.right != null && rightLargest.sum > largest.sum)
        largest = rightLargest;
    return largest;
}
Recursive solution. First define a state (men, lions, men2, lions2, forward) as meaning there are currently so many men & lions on original side, so many men2 and lions2 on opposite side, and that we are now moving forward. The capacity is a constant that indicates the capacity of the ferry.
At each step, for 3 men and 3 lions, we can either put M, MM, ML, L or LL on the ferry, represented by the integers 1,2,4,3,6 respectively (see the comment below). We simply try all ferry configurations and see if any will eventually lead to all men & lions on the opposite side. Note that I am using a HashSet to prevent us from ever revisiting the same state, otherwise we might get into an infinite loop.
I have included a sample print method as well.
static HashSet<String> seen = new HashSet<String>();

static boolean cross(int men, int lions, int men2, int lions2, boolean forward, int capacity) {
    if(men == 0 && lions == 0)
        return true;
    if((men > 0 && men < lions) || (men2 > 0 && men2 < lions2))
        return false;
    if(men < 0 || lions < 0 || men2 < 0 || lions2 < 0)
        return false;
    String state = "" + men + lions + men2 + lions2 + forward;
    if(seen.contains(state))
        return false;
    else seen.add(state);
    int base = capacity + 1;
    for(int i=1; i<=capacity*base; i++) {
        // if capacity=2, then base=3
        // imagine i as base3, where the digits represent the number of men or lions
        // 1,2 means 1,2 men respectively
        // 3,6 means 1,2 lions respectively
        // 4 means 1 man and 1 lion
        int numMen = i%base;
        int numLions = i/base;
        if(numMen+numLions > capacity)
            continue;
        numMen *= (forward ? 1 : -1);
        numLions *= (forward ? 1 : -1);
        if(cross(men-numMen, lions-numLions, men2+numMen, lions2+numLions, !forward, capacity)) {
            print(numMen, numLions);
            return true;
        }
    }
    return false;
}
static void print(int numMen, int numLions) {
    boolean forward = (numMen > 0 || numLions > 0);
    numMen = Math.abs(numMen);
    numLions = Math.abs(numLions);
    if(forward) {
        for(int i=0; i<numMen; i++)
            System.out.print("M");
        for(int i=0; i<numLions; i++)
            System.out.print("L");
        System.out.println("->");
    } else {
        System.out.print(" <-");
        for(int i=0; i<numMen; i++)
            System.out.print("M");
        for(int i=0; i<numLions; i++)
            System.out.print("L");
        System.out.println();
    }
}
Each time I encounter a 1, I use DFS (flood fill) to mark all the 1s in this group as -1. That way we can restore the matrix afterwards if needed and also avoid the need for extra memory.
static int numGroups(int[][] matrix) {
    int rows = matrix.length;
    int cols = matrix[0].length;
    int count = 0;
    for(int i=0; i<rows; i++) {
        for(int j=0; j<cols; j++) {
            if(matrix[i][j] == 1) {
                count++;
                traverse(matrix, i, j);
            }
        }
    }
    return count;
}

static void traverse(int[][] matrix, int i, int j) {
    if(i<0 || j<0)
        return;
    if(i>=matrix.length || j>=matrix[0].length)
        return;
    if(matrix[i][j] != 1)
        return;
    matrix[i][j] = -1;
    traverse(matrix, i-1, j);
    traverse(matrix, i+1, j);
    traverse(matrix, i, j-1);
    traverse(matrix, i, j+1);
}
Recursive approach. Some will contend that recursion isn't really O(1) space, but I suspect the interviewer says O(1) space simply to stop you from creating an additional array or using an auxiliary data structure.
Idea is as follows. We have 2 sorted subarrays, so let's have i & j refer to the first element of both subarrays. In each recursive call, we pick the smaller element that we will eventually write to arr[p], while performing a recursive call that advances either i or j depending on which element is smaller or which subarray we have exhausted etc.
static void mergeAt(int[] arr, int k) {
    int i = 0;
    int j = k+1;
    int p = 0;
    placeAt(arr, p, i, j, k);
}

// place min(arr[i], arr[j]) into arr[p]
static void placeAt(int[] arr, int p, int i, int j, int k) {
    if(i>k && j>=arr.length)
        return;
    int min = 0;
    if(i>k || (j<arr.length && arr[i] > arr[j])) {
        min = arr[j];
        placeAt(arr, p+1, i, j+1, k);
    } else {
        min = arr[i];
        placeAt(arr, p+1, i+1, j, k);
    }
    arr[p] = min;
}
My approach is to simulate the division. The tricky part is detecting the repeating pattern: comparing trailing digits can give false positives (for example 9/80 = 0.1125 would be misread as 0.(1) once two 1s appear in a row), so instead I detect repetition through the remainders. Each remainder fully determines all subsequent digits, so as soon as a remainder shows up a second time, the digits generated since its first occurrence form the repeating block. The rest is relatively straightforward.
static void divide(int n, int d) {
    int intPart = n / d;
    n %= d;
    StringBuilder decimals = new StringBuilder();
    // remainder -> position in decimals where it first occurred
    HashMap<Integer, Integer> seen = new HashMap<Integer, Integer>();
    int repeatStart = -1;
    while(n > 0) {
        if(seen.containsKey(n)) {
            repeatStart = seen.get(n); // repeating block starts here
            break;
        }
        seen.put(n, decimals.length());
        n *= 10;
        decimals.append(n / d);
        n %= d;
    }
    String result = "" + intPart + (decimals.length() > 0 ? "." : "");
    if(repeatStart < 0) {
        result += decimals;
    } else {
        result += decimals.substring(0, repeatStart);
        result += "(" + decimals.substring(repeatStart) + ")";
    }
    System.out.println(result);
}
HashMap approach. The only "cool" thing is that I didn't use an extra data structure to prevent printing duplicate pairs. The embedded comments should make it clear how I am using the hashmap alone to prevent printing duplicate pairs.
static void findPairs(int[] arr, int sum) {
    HashMap<Integer, Boolean> map = new HashMap<Integer, Boolean>();
    for(int i : arr) {
        int j = sum-i;
        // suppose sum=8 and i=6, then j=2
        // we want to check whether j=2 is already in the map and whether
        // the value is false (indicating we haven't printed it yet)
        // furthermore, we want to check whether i=6 is in the map because
        // if so, then this pair must have been printed as well
        // exception is when i=j, such as i=4 and j=4
        if(map.containsKey(j) && !map.get(j)) {
            if(i == j || !map.containsKey(i)) {
                System.out.println("" + j + "+" + i);
                map.put(i, true);
                map.put(j, true);
            }
        } else if(!map.containsKey(i)) {
            map.put(i, false); // don't reset a flag that's already been set
        }
    }
}
I give up on understanding the "median of the rows" approach. So instead I am solving it the straightforward way, which is to keep a minimum heap. Each time I pop out the minimum element, I will try to add the element to the right and to the bottom (unless they have been added already) to the heap. Will keep doing this until I popped enough elements (depending on whether total elements is odd or even).
The Element class is omitted for brevity. It just contains 3 fields (value, row, col) and implements Comparable by comparing values, so the PriorityQueue can order it.
static double median(int[][] matrix) {
    int numRows = matrix.length;
    int numCols = matrix[0].length;
    boolean added[][] = new boolean[numRows][numCols];
    int n = numRows * numCols;
    PriorityQueue<Element> heap = new PriorityQueue<Element>();
    heap.add(new Element(matrix[0][0], 0, 0));
    Element e = null;
    for(int i=(n-1)/2; i>=0; i--) {
        e = heap.poll();
        if(i == 0)
            break;
        int row = e.row;
        int col = e.col;
        if(row != numRows-1 && !added[row+1][col]) {
            heap.add(new Element(matrix[row+1][col], row+1, col));
            added[row+1][col] = true;
        }
        if(col != numCols-1 && !added[row][col+1]) {
            heap.add(new Element(matrix[row][col+1], row, col+1));
            added[row][col+1] = true;
        }
    }
    if(n%2 == 1)
        return e.value;
    else return (e.value + heap.poll().value)/2.0;
}
// first sort
static void printPairs(int arr[], int target) {
Arrays.sort(arr);
int i = 0;
int j = arr.length-1;
while(i<j) {
int a = arr[i];
int b = arr[j];
int sum = a + b;
if(sum == target) {
System.out.println("(" + a + ", " + b + ") ");
i++;
j--;
} else if(sum < target) {
i++;
} else {
j--;
}
}
}
// using hashtable
static void printPairs2(int arr[], int target) {
HashSet<Integer> set = new HashSet<Integer>();
for(int i : arr) {
if(set.contains(i))
System.out.println("(" + (target-i) + ", " + i + ") ");
else set.add(target-i);
}
}
I glanced through the code, and after discounting the main() and randomAsciiString() methods, the remaining code still seems too long even if it works. It might be a good exercise to try rewriting or optimizing for shorter code here. Even if you get to do this on a computer during an interview, the more code you write the higher the chance there's a bug, and the harder it is for the interviewer to agree with your solution.
- Sunny December 29, 2012
Since the example in the question seems to suggest that we should consider entry time before exit time, I just got an idea from reading Julian's answer that proposes incrementing the exit time. Let's first multiply all the entry/exit time by 2. Furthermore, add 1 to each exit time. So [1, 4] [2, 5] [9, 12] [5, 9] [5, 12] becomes [2, 9] [4, 11] [18, 25] [10, 19] [10, 25].
This has 2 benefits. First we respect the implicit rule that entry should be accounted for before exits. Secondly, we can now use even/odd to indicate whether it's an entry/exit event. So at t=10 we have the max guests, and we simply divide that by 2 to get t=5 as the final answer.
static int timeMax2(int[] arr) {
    int len = arr.length;
    int times[] = new int[len];
    for(int i=0; i<len; i++) {
        times[i] = 2*arr[i] + (i%2 == 0 ? 0 : 1);
    }
    Arrays.sort(times);
    int maxGuests = 0;
    int timeMax = 0;
    int guests = 0;
    for(int time : times) {
        if(time%2 == 0) {
            guests++;
            if(guests > maxGuests) {
                maxGuests = guests;
                timeMax = time;
            }
        } else {
            guests--;
        }
    }
    return timeMax/2;
}
This is also the approach I took. I define a class that contains the time as well as a boolean indicating entry/exit. I throw them into a heap (PriorityQueue in Java) and start popping them. For each event, I first set the current time to the time of this event. Then I increment or decrement the count depending on whether it's an entry or exit. I also keep track of the max guests (and the time at which that occurs).
The only confusing part is why in the example the answer is t=5, because at that time there's also 1 exit event. But you can handle that easily in the compareTo method used for sorting.
class Event implements Comparable<Event> {
    int time;
    boolean entry;

    public Event(int _time, boolean _entry) {
        time = _time;
        entry = _entry;
    }

    public int compareTo(Event e) {
        if(time < e.time)
            return -1;
        else if(time > e.time)
            return 1;
        else if(entry)
            return -1;
        else if(e.entry)
            return 1;
        else return 0;
    }
}
class MostGuests {
    static int timeMax(int[] arr) {
        boolean entry = true;
        PriorityQueue<Event> heap = new PriorityQueue<Event>();
        for(int t : arr) {
            heap.add(new Event(t, entry));
            entry = !entry;
        }
        int maxGuests = 0;
        int timeMax = 0;
        int guests = 0;
        int time = 0;
        while(!heap.isEmpty()) {
            Event e = heap.poll();
            time = e.time;
            if(e.entry) {
                guests++;
                if(guests > maxGuests) {
                    maxGuests = guests;
                    timeMax = time;
                }
            } else {
                guests--;
            }
        }
        return timeMax;
    }
}
In that case "head == a" and so it will return the head as the LCA, which is correct.
The code is indeed quite clean, but it can be a little confusing too because the intent of the recursive method differs between the main call and the recursive ones. The main call finds the LCA, while the recursive ones just find whichever node equals "a" or "b". But no big deal.
Another "brute-force" approach, but hopefully less brute-force than others. Time complexity is still O(n^2).
int minStart(char[] arr) {
    int len = arr.length;
    char[] repeat = new char[2*len];
    System.arraycopy(arr, 0, repeat, 0, len);
    System.arraycopy(arr, 0, repeat, len, len);
    int minStart = 0;
    for(int i=1; i<len; i++) {
        for(int j=0; j<len; j++) {
            if(repeat[minStart+j] < repeat[i+j])
                break;
            if(repeat[minStart+j] > repeat[i+j]) {
                minStart = i;
                break;
            }
        }
    }
    return minStart;
}
So does this solution rely on the assumption that given the XOR of 2 numbers, there's only 1 pair of unique numbers which can lead to this XOR? If so, I can produce many pairs of numbers with the same value of XOR.
Furthermore, the code is weird because while it works for some cases, it won't work for others, depending on what duplicates we have, as well as the 2 non-repeating numbers of course. Has anyone tried verifying that the code actually works by setting different pairs of numbers as the non-repeating ones?
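To make the objection concrete, here is a small sketch (names are mine) that enumerates the distinct pairs below a limit sharing a single XOR value; for x=3 and limit=7 it finds (0,3), (1,2), (4,7) and (5,6):

```java
import java.util.ArrayList;

class XorCollisions {
    // All pairs (a, b) with 0 <= a < b <= limit and a ^ b == x.
    static ArrayList<int[]> pairsWithXor(int x, int limit) {
        ArrayList<int[]> pairs = new ArrayList<int[]>();
        for(int a = 0; a <= limit; a++) {
            int b = a ^ x;         // the unique b with a ^ b == x
            if(b > a && b <= limit)
                pairs.add(new int[]{a, b});
        }
        return pairs;
    }
}
```

So the XOR value alone cannot tell us which pair of non-repeating numbers produced it; extra structure (like partitioning by a set bit) is needed.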
I am also treating the tree as a sorted array, but I find it difficult (if not impossible) to simulate pointing to the first & last node, then advancing either pointer recursively. Note that I am assuming we don't have a parent pointer for each node, otherwise it would be easier. So instead of starting off with first & last node, I am going to start off with the middle 2 nodes instead. The root is one of them, and I will try using both its left & right children as the other "middle" node. Hence the 2 calls in findNodes().
Idea is to recursively compare the current sum with K. If current sum is equal, we found the 2 nodes. If current sum < K, we can increase the sum by trying with the right child of either node. Similarly if current sum > K.
I haven't proved that starting from the middle will work, but I did test my code against several cases and got the right answers so far. This solution is O(n) time and O(1) space. It can be further optimized by avoiding having to call findNodes() twice, but that would still be O(n).
static Node[] findNodes(Node n, int K) {
    Node[] result = findHelper(n.left, n, K);
    if(result != null)
        return result;
    else return findHelper(n, n.right, K);
}

static Node[] findHelper(Node left, Node right, int K) {
    if(left == null || right == null || left == right)
        return null;
    if(left.value + right.value == K) {
        Node[] result = new Node[2];
        result[0] = left;
        result[1] = right;
        return result;
    } else if(left.value + right.value < K) {
        Node[] result = findHelper(left.right, right, K);
        if(result != null)
            return result;
        else return findHelper(left, right.right, K);
    } else {
        Node[] result = findHelper(left.left, right, K);
        if(result != null)
            return result;
        else return findHelper(left, right.left, K);
    }
}
The following are (n, #shuffles) pairs for n<=50. I get the same answers for n=312 & 314 as neo, so my program should be correct. For anyone who thinks this is a mathematical one, try deducing the pattern. It's also funny that it doesn't terminate for n=313.
(1, 1)
(2, 2)
(3, 3)
(4, 2)
(5, 5)
(6, 6)
(7, 5)
(8, 4)
(9, 6)
(10, 6)
(11, 15)
(12, 12)
(13, 12)
(14, 30)
(15, 15)
(16, 4)
(17, 17)
(18, 18)
(19, 10)
(20, 20)
(21, 21)
(22, 14)
(23, 24)
(24, 90)
(25, 63)
(26, 26)
(27, 27)
(28, 18)
(29, 66)
(30, 12)
(31, 210)
(32, 12)
(33, 33)
(34, 90)
(35, 35)
(36, 30)
(37, 110)
(38, 120)
(39, 120)
(40, 26)
(41, 41)
(42, 42)
(43, 105)
(44, 30)
(45, 45)
(46, 30)
(47, 60)
(48, 48)
(49, 120)
(50, 50)
Lastly, neo provided a C++ version and here's my Java version for those interested:
static int numShuffles(int n) {
    LinkedList<Integer> hand = new LinkedList<Integer>();
    LinkedList<Integer> table = new LinkedList<Integer>();
    for(int i=1; i<=n; i++)
        hand.addLast(i);
    int numShuffles = 0;
    while(true) {
        int handSize = n;
        while(handSize > 0) {
            table.addFirst(hand.removeFirst());
            handSize--;
            if(handSize == 0)
                break;
            hand.addLast(hand.removeFirst());
        }
        numShuffles++;
        boolean original = true;
        int i = 1;
        for(int e : table) {
            if(e != i) {
                original = false;
                break;
            }
            i++;
        }
        if(original)
            return numShuffles;
        LinkedList<Integer> temp = hand;
        hand = table;
        table = temp;
    }
}
You are given the occurrences of each of those K words. So essentially you have K variable-size arrays (ArrayList in Java). The first valid window we can construct is the min & max among the words' first occurrences.
So say K=3 and the occurrences are:
{1, 7}
{8, 12}
{9}
The first window will be [1, 9]. To construct the next window, we can either use the 7 from first word or 12 from second word. Since our goal is to find the smallest window, we should pick whatever word is at the start of the current window (in this case the first word) and consider its next occurrence (in this case 7). Our new window is now [7, 9] and you get the idea.
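The window-advancing idea above can be sketched with a heap keyed on each word's current occurrence (a sketch under my own naming; it assumes each occurrence list is sorted and non-empty):

```java
import java.util.Comparator;
import java.util.PriorityQueue;

class SmallestWindow {
    // occ[w] holds the sorted occurrence positions of word w.
    // Returns {start, end} of the smallest window covering all K words.
    static int[] find(int[][] occ) {
        // each heap entry is {position, wordIndex, indexWithinList}
        PriorityQueue<int[]> pq = new PriorityQueue<int[]>(occ.length,
                new Comparator<int[]>() {
                    public int compare(int[] a, int[] b) { return a[0] - b[0]; }
                });
        int max = Integer.MIN_VALUE;
        for(int w = 0; w < occ.length; w++) {
            pq.add(new int[]{occ[w][0], w, 0});
            max = Math.max(max, occ[w][0]);
        }
        int bestLo = -1, bestHi = -1;
        while(true) {
            int[] min = pq.poll(); // the word at the start of the window
            if(bestLo < 0 || max - min[0] < bestHi - bestLo) {
                bestLo = min[0];
                bestHi = max;
            }
            int next = min[2] + 1;
            if(next >= occ[min[1]].length)
                break; // that word has no later occurrence, so we're done
            int pos = occ[min[1]][next];
            pq.add(new int[]{pos, min[1], next});
            max = Math.max(max, pos);
        }
        return new int[]{bestLo, bestHi};
    }
}
```

On the example above, find(new int[][]{{1, 7}, {8, 12}, {9}}) yields the window {7, 9}.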
(1) Let N be the array length and create a new array called mod, where mod[i] = (sum of all elements up to index i) % N.
(2) As we build the mod array, look for a previous position j where mod[j] = mod[i]. If such a j exists, then we know the subarray(j+1, i) should have a sum that's divisible by array length. We can use a Hashtable to store this info, or even an array (just need to initialize it first).
(3) By the pigeonhole principle, such a subarray always exists: the N prefix sums take at most N distinct modulo values, and if none of them is 0, they fall into only N-1 nonzero classes, so two of them must be equal.
If my logic above is correct, then the runtime is O(n).
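A sketch of steps (1)-(3) (the names are mine; it returns an inclusive {start, end} range):

```java
import java.util.Arrays;

class DivisibleSubarray {
    // Returns {start, end} of a subarray whose sum is divisible by the
    // array length; the pigeonhole argument guarantees one exists.
    static int[] find(int[] arr) {
        int n = arr.length;
        int[] firstPos = new int[n]; // where each prefix modulo was first seen
        Arrays.fill(firstPos, -2);   // -2 = not seen yet
        firstPos[0] = -1;            // the empty prefix has modulo 0
        int mod = 0;
        for(int i = 0; i < n; i++) {
            mod = ((mod + arr[i]) % n + n) % n; // stay non-negative for negative inputs
            if(firstPos[mod] != -2)
                return new int[]{firstPos[mod] + 1, i};
            firstPos[mod] = i;
        }
        return null; // unreachable by the pigeonhole argument
    }
}
```

For example, find(new int[]{1, 2, 3}) returns {0, 1}, since 1 + 2 = 3 is divisible by the array length 3.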
- Sunny May 01, 2013