Sunny
For starters, convert each letter to a number between 0-25. Then you can create a 3-dimensional array "LinkedList<Integer> license[][][]", where license[i][j][k] is a list of the 3-digit suffixes for licenses that start with ('A'+i)('A'+j)('A'+k). This should support the search quite efficiently and allow you to reconstruct all stored data.
Downside is that it takes extra storage because many of the elements might be empty, and so it also takes longer to reconstruct the whole list than needed. An alternative would be to use a multi-level HashMap instead, but the implementation would be slightly more cumbersome.
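A minimal sketch of the array-based idea, assuming plates have the form "ABC123" (three letters then three digits); the class and method names are my own:

```java
import java.util.LinkedList;

public class LicenseIndex {
    // license[i][j][k] holds the numeric suffixes of all plates
    // starting with letters ('A'+i)('A'+j)('A'+k)
    @SuppressWarnings("unchecked")
    private final LinkedList<Integer>[][][] license = new LinkedList[26][26][26];

    public void add(String plate) {
        // assumes format "ABC123"; note suffixes are stored as ints,
        // so leading zeros in the suffix are not preserved
        int i = plate.charAt(0) - 'A';
        int j = plate.charAt(1) - 'A';
        int k = plate.charAt(2) - 'A';
        if (license[i][j][k] == null)
            license[i][j][k] = new LinkedList<Integer>();
        license[i][j][k].add(Integer.parseInt(plate.substring(3)));
    }

    public LinkedList<Integer> lookup(String prefix) {
        int i = prefix.charAt(0) - 'A';
        int j = prefix.charAt(1) - 'A';
        int k = prefix.charAt(2) - 'A';
        LinkedList<Integer> result = license[i][j][k];
        return result == null ? new LinkedList<Integer>() : result;
    }
}
```

A multi-level HashMap version would replace the fixed 26x26x26 array with nested maps, trading the empty slots for a slightly more cumbersome lookup.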
My solution might be similar to some others since this is a standard DP problem. Basically have 2 arrays:
(1) max[i] stores the maximum number of elements in the longest subsequence ending with arr[i]
(2) prev[i] stores the index of the previous element in the longest subsequence ending with arr[i]
So basically we just need to iterate through the array, update the 2 arrays above, and then keep traversing the "prev" array until we can't anymore.
static LinkedList<Integer> longest(int[] arr) {
    int len = arr.length;
    int max[] = new int[len];
    int prev[] = new int[len];
    for(int i=0; i<len; i++) {
        max[i] = 1;
        prev[i] = i;
        for(int j=0; j<i; j++) {
            if(arr[j] < arr[i] && max[j] + 1 > max[i]) {
                max[i] = max[j] + 1;
                prev[i] = j;
            }
        }
    }
    // the longest subsequence may end anywhere, not necessarily at the
    // last element, so start from the index with the largest max[i]
    int index = 0;
    for(int i=1; i<len; i++) {
        if(max[i] > max[index])
            index = i;
    }
    LinkedList<Integer> sequence = new LinkedList<Integer>();
    while(true) {
        sequence.add(0, arr[index]);
        if(prev[index] == index)
            break;
        index = prev[index];
    }
    return sequence;
}
I use a HashMap instead to keep track of the 2 characters (and their frequencies) so far. Basically iterate through the string and either increment the count for an existing character, or determine that we now have more than 2 unique characters. In the latter case, keep advancing the "start" pointer until we have only 2 unique characters in the HashMap again.
static String longest(String s) {
    int positions[] = new int[2];
    int max = 0;
    int len = s.length();
    int start = 0;
    HashMap<Character, Integer> freq = new HashMap<Character, Integer>();
    for(int i=0; i<len; i++) {
        char ch = s.charAt(i);
        if(freq.containsKey(ch)) {
            freq.put(ch, freq.get(ch) + 1);
        } else {
            freq.put(ch, 1);
            while(freq.size() > 2) {
                ch = s.charAt(start++);
                if(freq.get(ch) == 1)
                    freq.remove(ch);
                else freq.put(ch, freq.get(ch) - 1);
            }
        }
        if(i-start+1 > max) {
            max = i-start+1;
            positions[0] = start;
            positions[1] = i;
        }
    }
    return s.substring(positions[0], positions[1]+1);
}
Implementation same as everyone here: basically keep the List<List<Integer>> as an instance variable, as well as an index pointing to current list (currentList) and another index pointing to current position of current list (currentIndex). Just hopefully more compact and correct :)
It would actually be much easier had I first converted the List<List<Integer>> into just a List<Integer>, but I actually wanted to do it the harder (and less efficient) way.
class FlattenList {
    private List<List<Integer>> lists;
    private int currentList = 0;
    private int currentIndex = 0;
    public FlattenList(List<List<Integer>> lists) {
        this.lists = lists;
        findNext(false);
    }
    public boolean hasNext() {
        return (currentList < lists.size() && currentIndex < lists.get(currentList).size());
    }
    public int next() {
        // assuming hasNext() would return true
        int result = lists.get(currentList).get(currentIndex);
        findNext(true);
        return result;
    }
    private void findNext(boolean advance) {
        if(advance)
            currentIndex++;
        while(currentList < lists.size() && currentIndex >= lists.get(currentList).size()) {
            currentList++;
            currentIndex = 0;
        }
    }
}
Because the quick-select algorithm is harder to implement correctly? I think if the candidate can code the heap solution up, then that might be good enough already for a phone interview. The interviewer might press for the quick-select algorithm, at which point you can then offer a verbal explanation.
But of course that's just me. Other people can code something like this fluently.
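For reference, here is one way quick-select for the n-th max could look (the helper names are mine; average O(N), worst case O(N^2) even with a random pivot):

```java
import java.util.Random;

public class QuickSelect {
    private static final Random rnd = new Random();

    // returns the n-th largest element of arr (1-based)
    static int nthMax(int[] arr, int n) {
        return select(arr, 0, arr.length - 1, arr.length - n);
    }

    // selects the element that would sit at index k in ascending sorted order
    private static int select(int[] arr, int lo, int hi, int k) {
        while (lo < hi) {
            int p = partition(arr, lo, hi);
            if (p == k) return arr[p];
            if (p < k) lo = p + 1;
            else hi = p - 1;
        }
        return arr[lo];
    }

    // Lomuto partition around a random pivot; returns the pivot's final index
    private static int partition(int[] arr, int lo, int hi) {
        swap(arr, lo + rnd.nextInt(hi - lo + 1), hi);
        int pivot = arr[hi];
        int i = lo;
        for (int j = lo; j < hi; j++)
            if (arr[j] < pivot)
                swap(arr, i++, j);
        swap(arr, i, hi);
        return i;
    }

    private static void swap(int[] arr, int i, int j) {
        int t = arr[i]; arr[i] = arr[j]; arr[j] = t;
    }
}
```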
1. Assume the cakes are already sorted by volume, so V1 is smallest and Vn is largest
2. First try setting V=V1 and see if that works (usually doesn't)
3. Now, check the wastage from each cake, call them W1...Wn
4. Imagine we want to reduce V by as little as possible to produce 1 additional piece
5. We can do so by picking the cake with the most wastage, and setting V such that we can produce that 1 additional piece from this cake. If there's a tie for most wastage, use the cake with largest volume.
6. Repeat this process until we have enough pieces
For example, assume the volumes are {10, 14, 16, 22, 30} and K=10 people.
First set V=10. That would produce {1, 1, 1, 2, 3} pieces, or 8 total.
Calculate wastage as {0, 4, 6, 2, 0}.
We realize that the 3rd cake has the most wastage (6), so we should try to produce the additional piece from this cake. To produce 2 pieces from cake 3, V should be 8.
Now we can produce {1, 1, 2, 2, 3} pieces, or 9 total.
Calculate wastage as {2, 6, 0, 6, 6}.
So we should use last cake to produce the additional piece, and set V=30/4 or 7.5
Now we can produce {1, 1, 2, 2, 4}, or 10 pieces.
Sorry I don't have proof that this greedy algorithm works, nor even 100% sure it works.
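The greedy steps above could be sketched roughly like this (all names are mine; assumes volumes are sorted ascending and positive, and, as said, correctness of the greedy itself is not proven):

```java
public class CakeCutter {
    private static final double EPS = 1e-9;

    // number of pieces of volume v that a cake of volume vol yields;
    // the small epsilon guards against floating-point error when the
    // division is exact (e.g. 20 / (20.0/3) coming out as 2.9999...)
    private static int pieces(double vol, double v) {
        return (int) (vol / v + EPS);
    }

    // volumes sorted ascending; returns the piece volume V reached by the
    // greedy once the cakes yield at least k pieces
    static double maxPieceVolume(double[] volumes, int k) {
        double v = volumes[0];  // step 2: first try V = smallest cake
        while (true) {
            int total = 0;
            for (double vol : volumes)
                total += pieces(vol, v);
            if (total >= k)
                return v;
            // steps 4-5: pick the cake with the most wastage; on a tie the
            // ascending scan with >= keeps the cake with the largest volume
            int best = 0;
            double bestWaste = -1;
            for (int i = 0; i < volumes.length; i++) {
                double waste = volumes[i] - pieces(volumes[i], v) * v;
                if (waste >= bestWaste) {
                    bestWaste = waste;
                    best = i;
                }
            }
            // shrink V so the chosen cake yields exactly one more piece
            v = volumes[best] / (pieces(volumes[best], v) + 1);
        }
    }
}
```

Each round adds at least one piece, so the loop runs at most k times.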
Using a min-heap of size n as ashot has mentioned already. I am merely providing Java code to illustrate the idea.
static int nthMax(int[] arr, int n) {
    PriorityQueue<Integer> heap = new PriorityQueue<Integer>();
    for(int i : arr) {
        heap.add(i);
        if(heap.size() > n)
            heap.poll();
    }
    return heap.poll();
}
Removing one node per iteration until only 1 is left requires n iterations, right? Each iteration takes O(k) to traverse your linked list to identify the node to be removed. So the overall algorithm is O(n*k), and not O(n).
And please explain how in one round, you will delete 1/k-th of the nodes. Just want to make sure I am understanding your complexity analysis correctly.
Represent the coordinates as a 2-dimensional array, where the value is 0 if the coordinate is empty and x if it's occupied by ship x. Also use a HashMap to keep track of each ship's health. Update both as we fire at each coordinate.
class Battleship {
    // positive integer represents a ship
    // zero represents an empty space
    // -1 represents it has been shot already
    private int[][] coord;
    // shipHealth.get(i) represents the initial area of ship i
    // it gets decremented as the ship gets shot
    private HashMap<Integer, Integer> shipHealth;
    public Battleship(int[][] coord) {
        this.coord = coord;
        shipHealth = new HashMap<Integer, Integer>();
        for(int i=0; i<coord.length; i++) {
            for(int j=0; j<coord[0].length; j++) {
                int value = coord[i][j];
                if(value == 0)
                    continue;
                if(!shipHealth.containsKey(value))
                    shipHealth.put(value, 0);
                shipHealth.put(value, shipHealth.get(value) + 1);
            }
        }
    }
    public void shoot(int x, int y) {
        int value = coord[x][y];
        if(value == -1) {
            System.out.println("Already Attacked");
        } else if(value == 0) {
            System.out.println("Missed");
        } else {
            shipHealth.put(value, shipHealth.get(value) - 1);
            if(shipHealth.get(value) == 0)
                System.out.println("Ship sunk");
            else System.out.println("Attacked Ship " + value);
        }
        coord[x][y] = -1;
    }
}
If the sorted tiles are "aabbccs" and the word is "as", you won't be able to determine that this word can be formed from the tiles by checking that it's a substring of the tiles, unless by substring you mean something like the Longest Common Subsequence and checking whether that equals the word itself.
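For illustration, the frequency-count check I would use instead of a substring check (the method name is mine):

```java
import java.util.HashMap;

public class Tiles {
    // returns true if every character of word can be matched against a
    // distinct tile, irrespective of order
    static boolean canForm(String tiles, String word) {
        HashMap<Character, Integer> freq = new HashMap<Character, Integer>();
        for (char c : tiles.toCharArray())
            freq.put(c, freq.containsKey(c) ? freq.get(c) + 1 : 1);
        for (char c : word.toCharArray()) {
            if (!freq.containsKey(c) || freq.get(c) == 0)
                return false;  // ran out of tiles for this character
            freq.put(c, freq.get(c) - 1);
        }
        return true;
    }
}
```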
- Sunny June 27, 2013

This is O(N). Note that each recursive call tries searching the left and then the right. So if the answer is very near the right end of the array, you will essentially call the method almost N times before you stumble upon the answer.
For it to be sublinear, you will need to be able to skip checking on some parts of the array.
I am going to handle this in a controversial way, which is to use a HashMap to store the #requests for each second. Then to calculate the #requests for the past second, minute and hour, I only need 3,600 checks to the HashMap, which is O(1). The downside is that the HashMap can get big really fast, and so we might also want to throw in some code to remove any obsolete keys from it.
class CountRequests {
    private long startTime = new Date().getTime();
    private HashMap<Long, Integer> counts = new HashMap<Long, Integer>();
    void handleRequest() {
        long currentTime = new Date().getTime();
        long offset = (currentTime - startTime) / 1000;
        synchronized(counts) {
            if(!counts.containsKey(offset))
                counts.put(offset, 0);
            counts.put(offset, 1 + counts.get(offset));
        }
    }
    // return the #requests in past second, minute, hour as an array
    int[] getCounts() {
        int[] freq = new int[3];
        long currentTime = new Date().getTime();
        long offset = (currentTime - startTime) / 1000;
        int total = 0;
        synchronized(counts) {
            for(int i=0; i<3600; i++) {
                long index = offset-i;
                if(counts.containsKey(index))
                    total += counts.get(index);
                if(i == 0)
                    freq[0] = total;
                else if(i == 59)
                    freq[1] = total;
                else if(i == 3599)
                    freq[2] = total;
            }
        }
        return freq;
    }
}
If a request comes in 4.75 hours after start-time and another comes in 5.25 hours after start-time, then when you do the counts at 5.50 hours after start-time, are you going to count the 4.75 one as well?
Not sure if I understand your code correctly, but looks like when the 5.25 one comes in you would reset your lastHourRequests to 1?
I would use a HashMap<Node, List<Node>> to store the nodes as keys and the list of adjacent nodes as values. That way both operations take O(1). The adjacency matrix approach isn't optimal for supporting these 2 operations, because you would need to expand the matrix (a 2-dimensional array) with each additional node.
class Node {
    private Object nodeData;
    public Node(Object nodeData) {
        this.nodeData = nodeData;
    }
}
class GraphRepresentation {
    private HashMap<Node, LinkedList<Node>> nodes = new HashMap<Node, LinkedList<Node>>();
    public void addEdge(Node n1, Node n2) {
        nodes.get(n1).add(n2);
        nodes.get(n2).add(n1);
    }
    // return the new Node so callers can later refer to it in addEdge()
    public Node addNode(Object nodeData) {
        Node n = new Node(nodeData);
        nodes.put(n, new LinkedList<Node>());
        return n;
    }
}
Recursion. Given a node n, the method returns a BST object containing the root of the largest BST under n, its size, and the min & max values within it. We check whether the largest BST under the left child is the left child itself, and likewise for the right child. If so, and the current node's value is no smaller than the max of the left subtree and no larger than the min of the right subtree, then the whole subtree rooted at the current node is a BST, so return it as the largest BST. Otherwise, return whichever side's BST has the larger size. (Comparing against the subtree's min & max, rather than just the children's values, is necessary: a node deep in the left subtree could still exceed the current node's value.)
class Node {
    int value;
    Node left;
    Node right;
}
class BST {
    Node node;
    int size;
    int min;
    int max;
    public BST(Node node, int size, int min, int max) {
        this.node = node;
        this.size = size;
        this.min = min;
        this.max = max;
    }
}
class LargestBST {
    static BST largestBST(Node n) {
        if(n == null)
            return new BST(null, 0, Integer.MAX_VALUE, Integer.MIN_VALUE);
        BST leftLargest = largestBST(n.left);
        BST rightLargest = largestBST(n.right);
        // compare against the min & max of the whole subtrees, not just
        // the immediate children, or a deep violation would be missed
        if(leftLargest.node == n.left && rightLargest.node == n.right
                && (n.left == null || n.value >= leftLargest.max)
                && (n.right == null || n.value <= rightLargest.min)) {
            return new BST(n, 1 + leftLargest.size + rightLargest.size,
                    Math.min(n.value, leftLargest.min),
                    Math.max(n.value, rightLargest.max));
        } else if(leftLargest.size > rightLargest.size) {
            return leftLargest;
        } else {
            return rightLargest;
        }
    }
}
Yeah, seems like you can only conclude "r" comes before "c" because "arc" comes before "act". So I guess in this case both "n r c a t" and "r n c a t" are valid solutions, and returning any one of them is fine? The question should indeed specify which solution to return when there are multiple...
- Sunny June 25, 2013

Recursion. Not much to elaborate on.
static LinkedList<String> generate(String pattern) {
    LinkedList<String> strings = new LinkedList<String>();
    int pos = pattern.indexOf("?");
    if(pos < 0) {
        strings.add(pattern);
    } else {
        String zeroPattern = pattern.substring(0, pos) + "0" + pattern.substring(pos+1);
        String onePattern = pattern.substring(0, pos) + "1" + pattern.substring(pos+1);
        strings.addAll(generate(zeroPattern));
        strings.addAll(generate(onePattern));
    }
    return strings;
}
Then consider {7, 4, 2, 1, 0}. If I understand his for-loop properly, he would try removing 1 then 2 then 4 then 7 then finally conclude no solution exists. That's because after removing each of these numbers the sum%3 is still not 0.
The solution should be to just remove 2 because that would immediately make sum%3 = 0.
Dumbo - that's a good catch. I lost sight of cases like these after I cleaned up my code. And for this problem I really have a hard time cleaning up the code. Even this version isn't that clean. Would love to see a cleaner solution. Perhaps I will try again myself.
Amit - my algorithm should return 5 as it also processes the strings in the "middle". Unless there's a bug of course.
Use 3 HashMaps:
(1) First one keeps track of the length of the longest prefix consisting of a given character
(2) Second one keeps track of the length of the longest suffix consisting of a given character
(3) Third one keeps track of the TOTAL length of strings that consist entirely of a given character
So for {aa, aac, ba, aaa}:
(1) First one has {a:2, b:1}
(2) Second one has {c:1, a:1}
(3) Third one has {a:5}
The algorithm consists of 2 steps:
(1) Process each string and update the 3 HashMaps above. Also process the characters in the "middle", after dealing with the prefix & suffix. The characters in the middle matter too because it might actually contain the longest string consisting of the same character.
(2) At the end, for each possible character, sum the counts returned by the 3 HashMaps and see which one is longest.
Code below. I tried cleaning it up already, but still kinda messy and not something I could have written on a whiteboard.
static char longestChar = '\0';
static int longestLength = 0;
static HashMap<Character, Integer> counts[] = new HashMap[3];
static {
    for(int i=0; i<3; i++)
        counts[i] = new HashMap<Character, Integer>();
}
static void process(String s) {
    int len = s.length();
    int pos = 1;
    int pos2 = len-2;
    int count = 1;
    char current = s.charAt(0);
    for(; pos<len; pos++) {
        if(s.charAt(pos) != current)
            break;
        count++;
    }
    if(pos >= len) {
        // entire string is same character
        if(!counts[2].containsKey(current))
            counts[2].put(current, 0);
        counts[2].put(current, counts[2].get(current) + count);
        return;
    }
    // update longest prefix for this character
    if(!counts[0].containsKey(current) || count > counts[0].get(current))
        counts[0].put(current, count);
    current = s.charAt(len-1);
    count = 1;
    for(; pos2>=pos; pos2--) {
        if(s.charAt(pos2) != current)
            break;
        count++;
    }
    // update longest suffix for this character
    if(!counts[1].containsKey(current) || count > counts[1].get(current))
        counts[1].put(current, count);
    // process the characters in the middle
    count = 0;
    current = '\0';
    for(; pos<=pos2; pos++) {
        if(s.charAt(pos) != current) {
            current = s.charAt(pos);
            count = 1;
        } else {
            count++;
        }
        if(count > longestLength) {
            longestLength = count;
            longestChar = current;
        }
    }
}
static void longest(String[] arr) {
    for(String s : arr) {
        process(s);
    }
    for(int i=0; i<3; i++) {
        for(char c : counts[i].keySet()) {
            int count = counts[i].get(c);
            for(int j=0; j<3; j++) {
                if(i == j)
                    continue;
                count += (counts[j].containsKey(c) ? counts[j].get(c) : 0);
            }
            if(count > longestLength) {
                longestLength = count;
                longestChar = c;
            }
        }
    }
}
Several mathematical properties:
(1) Number is divisible by both 2 & 5 only when last digit is 0
(2) Number is divisible by 3 only when sum of digits is divisible by 3
Having said that, this is my algorithm:
(1) First sort all the digits
(2) If the ones-digit isn't 0, then no solution
(3) Otherwise, the sum of digits modulo 3 is either 0, 1 or 2
(4) If mod3 is 2 and there isn't at least a digit with modulo=2 or 2 digits with modulo=1, then no solution
(5) If mod3 is 1 and there isn't at least a digit with modulo=1 or 2 digits with modulo=2, then no solution
(6) Otherwise, we proceed to create the number by carefully checking which digits we should skip over to ensure the final number has modulo=0
(7) The code is actually rather messy and I don't know how to drastically clean it up
static int maxDivisible(int[] arr) {
    Arrays.sort(arr);
    int mod3 = 0;
    int numMod2 = 0;
    int numMod1 = 0;
    for(int i : arr) {
        int modulo = i%3;
        mod3 = (mod3 + modulo)%3;
        if(modulo == 2)
            numMod2++;
        else if(modulo == 1)
            numMod1++;
    }
    if(arr[0] != 0)
        return -1;
    if(mod3 == 2 && numMod2 < 1 && numMod1 < 2)
        return -1;
    if(mod3 == 1 && numMod1 < 1 && numMod2 < 2)
        return -1;
    int total = 0;
    int multiplier = 1;
    for(int i=0; i<arr.length; i++) {
        int digit = arr[i];
        int modulo = digit%3;
        if(mod3 == 2) {
            if(modulo == 2) {
                // skip this digit
                mod3 = 0;
                continue;
            } else if(modulo == 1 && numMod2 == 0) {
                // skip this digit only if numMod2 == 0,
                // because it's better to remove 1 digit than 2 digits
                mod3 = 1;
                continue;
            }
        } else if(mod3 == 1) {
            if(modulo == 1) {
                mod3 = 0;
                continue;
            } else if(modulo == 2 && numMod1 == 0) {
                mod3 = 2;
                continue;
            }
        }
        total += digit * multiplier;
        multiplier *= 10;
    }
    return total;
}
Actually 8760 would be the biggest in this case.
This problem essentially boils down to "given an array of non-zero digits, generate the largest number that can be divisible by 3", because you simply need to place all the zeros on the right. If there's no zero, then no solution exists, because only when the last digit is 0 can the number be divisible by both 2 & 5.
This question reminds me of adding 2 numbers stored in linked lists. If that's what they are looking for, I wonder why the extra silly complexity of asking candidates to read the number into a linked list (or vector) instead of just giving these directly. This question would seem much more reasonable IF the numbers were stored in reverse order. In other words, a file containing "001" would actually mean the number 100. Then you could compute and write to the third file on the fly without an additional data structure.
- Sunny June 24, 2013

Rather the opposite: the task is to find the largest sum using the given numbers, with the condition that no 2 numbers can be next to each other in the array. So if the array is {2,4,6,8,10}, then the largest sum is the sum of {2,6,10}, and that's fine because none of these 3 numbers are adjacent in the original array. If the array is {-2, -4, -6, -8, -10}, then the largest sum is simply {-2}.
- Sunny June 24, 2013

My divide & conquer approach:
(1) The method takes the array, starting position, ending position, and a TreeMap
(2) Each entry in the TreeMap (startPos, endPos) represents the start & end positions of a subarray that satisfies the condition
(3) First check whether array[startPos] == startPos && array[endPos] == endPos
(4) If so, that means all elements in this subarray satisfy the condition, so insert entry into TreeMap
(5) Otherwise, check if array[startPos] > startPos or array[endPos] < endPos
(6) If so, that means none of the elements in the subarray[startPos..endPos] can satisfy the condition. For instance, if startPos=0, endPos=4 and arr=[2,3,4,5,6], then arr[startPos] > startPos; since the values are increasing, even when they increase in the slowest possible manner (by 1 each step), every element in the subarray still exceeds its index.
(7) Otherwise, recursively check the left & right subarrays.
static void findMatching(int[] arr, int startPos, int endPos, TreeMap<Integer, Integer> matching) {
    int startValue = arr[startPos];
    int endValue = arr[endPos];
    if(startValue == startPos && endValue == endPos) {
        matching.put(startPos, endPos);
    } else if(startValue <= startPos && endValue >= endPos) {
        int middlePos = (startPos + endPos)/2;
        if(middlePos != endPos)
            findMatching(arr, startPos, middlePos, matching);
        if(middlePos+1 <= endPos)
            findMatching(arr, middlePos+1, endPos, matching);
    }
}
I don't think this is a good argument that no O(log N) solution exists. Consider an algorithm that first checks the first & last element. In this case, it would have immediately determined that there are n elements and that since the array is strictly increasing, none of the elements can be equal to its index.
It takes more than coming up with a counter-example for a reasonable algorithm to prove no faster algorithm exists.
First traverse the tree and store the node values in an ArrayList (basically an array).
Then sort the ArrayList. Then iterate the ArrayList backwards, while keeping track of the
total value so far. In each iteration, I store (current value, total) into a hash table.
So an entry (5, 27) would mean the sum of values >= 5 in the tree is 27.
Finally, I iterate through the tree again. When I see a node with say value 5, I would replace it with value 27.
static void replaceValues(Tree t) {
    ArrayList<Integer> lst = new ArrayList<Integer>();
    traverse(t, lst);
    Collections.sort(lst);
    int total = 0;
    HashMap<Integer, Integer> values = new HashMap<Integer, Integer>();
    for(int i=lst.size()-1; i>=0; i--) {
        int value = lst.get(i);
        total += value;
        values.put(value, total);
    }
    replace(t, values);
}
static void traverse(Tree t, ArrayList<Integer> lst) {
    if(t == null)
        return;
    lst.add(t.value);
    traverse(t.left, lst);
    traverse(t.right, lst);
}
static void replace(Tree t, HashMap<Integer, Integer> values) {
    if(t == null)
        return;
    t.value = values.get(t.value);
    replace(t.left, values);
    replace(t.right, values);
}
Use backtracking. At each point we have a string to partition as well as a list of phrases we have so far. If the string is empty & the number of phrases equals N (required number of phrases), then we have found a partition. Otherwise, for each phrase we have, we check if the string starts with this phrase and if so, recurse by adding this phrase to the list and checking on the remaining substring.
The code below is not optimal because I could have stopped checking once the list size is larger than N.
static ArrayList<String> partition(String s, int n, ArrayList<String> phrases) {
    int len = s.length();
    if(len == 0 && phrases.size() == n)
        return phrases;
    int numPhrases = phrases.size();
    for(int i=0; i<numPhrases; i++) {
        String phrase = phrases.get(i);
        int len2 = phrase.length();
        if(len > len2 && s.startsWith(phrase)) {
            phrases.add(s.substring(0, len2+1));
            ArrayList<String> result = partition(s.substring(len2+1), n, phrases);
            if(result != null)
                return result;
            else phrases.remove(phrases.size()-1);
        }
    }
    return null;
}
public static void main(String[] args) {
    String s = args[0];
    int n = Integer.parseInt(args[1]);
    ArrayList<String> phrases = new ArrayList<String>();
    phrases.add("");
    phrases = partition(s, n+1, phrases);
    if(phrases == null) {
        System.out.println("no solution");
    } else {
        for(String phrase : phrases)
            System.out.println(phrase);
    }
}
Hmm I did miss the p0 = (empty string). Having said that, then I believe the method is supposed to take a parameter n that restricts how many phrases you can have. Otherwise you can simply have each phrase being a character and that would always satisfy the condition of pi = p0 + c.
That's a slightly different question than what I assumed the question to be, which is to partition the string into phrases without letting p0 = (empty string) and always requiring p1 to be the first letter.
(a, ab, abc, X, abcd) won't work because X is not a concatenation of some string before it plus a character. But yeah, had it been "aababcababcd" then the partition would be (a, ab, abc, ab, abcd) - the 2nd "ab" is acceptable because it's a+b and we are only requiring each phrase to be the concatenation of SOME previous phrase plus a character. In this case that previous phrase is "a".
- Sunny June 21, 2013

Let A=[An-1, An-2, ..., A1, A0].
Let i be the smallest value such that Ai is > 0.
Let j be the smallest value > i such that Aj is < 9.
If no such i & j exist, then there's no solution.
Otherwise, decrement Ai and increment Aj.
We know all digits to the right of i are 0, and that all digits between j and i are 9, by definition of i & j.
So the digits should look like An-1, ..., Aj, 9, ..., 9, Ai, 0, ..., 0
That means if we simply reverse all the digits to the right of Aj, we should have the answer.
For example, if A=[1, 1, 1], then i=0 and j=1. We then have [1, 2, 0] and that's the answer.
If A=[0, 9, 9, 9, 9], then i=0 and j=4. We then have [1, 9, 9, 9, 8], so after reversing we have [1, 8, 9, 9, 9].
If A=[9, 9, 9] then i=0 but no valid value for j, so no answer.
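The steps above, sketched in code (array index 0 is the most significant digit An-1, so digit Ai lives at index len-1-i; the class and method names are mine, and the result is assumed to keep the same number of digits):

```java
public class NextSameDigitSum {
    // next larger digit array with the same digit sum, following the
    // i/j construction described above; returns null if no answer exists
    static int[] next(int[] a) {
        int len = a.length;
        int i = -1, j = -1;
        // i = smallest position (from the least significant end) with Ai > 0
        for (int k = 0; k < len; k++)
            if (a[len - 1 - k] > 0) { i = k; break; }
        // j = smallest position > i with Aj < 9
        if (i >= 0)
            for (int k = i + 1; k < len; k++)
                if (a[len - 1 - k] < 9) { j = k; break; }
        if (i < 0 || j < 0)
            return null;
        int[] b = a.clone();
        b[len - 1 - i]--;  // decrement Ai
        b[len - 1 - j]++;  // increment Aj
        // reverse all digits to the right of Aj (array indices len-j .. len-1)
        for (int lo = len - j, hi = len - 1; lo < hi; lo++, hi--) {
            int t = b[lo]; b[lo] = b[hi]; b[hi] = t;
        }
        return b;
    }
}
```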
The reason I use a HashMap is because I don't know the #ships in advance and don't want to iterate through the whole map just to find out the #ships and hence the array size. Alternatively I can probably use an ArrayList, which grows dynamically.
- Sunny July 03, 2013