The previous solutions are wrong. We need a recursive procedure which tests different combinations. E.g. we have the words "abc", "def", "ghi", "abcdef" and "abcdefghi". What score does "abcdefghi" have? Either we split it as "abc", "def", "ghi" or as "abcdef", "ghi". We can't simply say it has score 4, because it has to be composed of the constituent words.
You should ask the interviewer a few questions: What if a word appears repeatedly? Does it count for 1 or more? Is it even allowed to use one word twice?
I would insert all strings into a trie structure. Then, for each word, I would follow this pseudocode, which tries the shortest matching prefix first and backtracks if it does not lead to a solution:
boolean backtrack(int i, int found)
    Node n = root
    while i < word.length and n has child word[i]
        n = child word[i]
        i = i + 1
        if n.isEndOfWord
            if i == word.length
                maxFound = Math.max(maxFound, found + 1)
                return true
            else if backtrack(i, found + 1)
                return true
    return false
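The same idea can be written as a runnable sketch. For brevity this version uses a plain HashSet dictionary instead of a trie (so it loses the prefix pruning, and it has no memoization), assuming the score of a word is the maximum number of dictionary words it decomposes into:

```java
import java.util.*;

public class WordScore {
    // Returns the maximum number of dictionary words that "word" can be
    // split into, or -1 if no complete decomposition exists.
    public static int score(String word, Set<String> dict) {
        return best(word, 0, dict);
    }

    private static int best(String word, int i, Set<String> dict) {
        if (i == word.length()) return 0;            // decomposed everything
        int max = -1;
        for (int j = i + 1; j <= word.length(); j++) {
            if (dict.contains(word.substring(i, j))) {
                int rest = best(word, j, dict);      // backtrack on failure
                if (rest >= 0) max = Math.max(max, rest + 1);
            }
        }
        return max;
    }

    public static void main(String[] args) {
        Set<String> dict = new HashSet<>(
                Arrays.asList("abc", "def", "ghi", "abcdef", "abcdefghi"));
        System.out.println(score("abcdefghi", dict)); // 3: "abc"+"def"+"ghi"
    }
}
```

For the example from above it finds the three-word split, which beats "abcdef"+"ghi" (2) and "abcdefghi" alone (1).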
My solution holds an iterator to the outer list and a second iterator to the inner list. It seems simpler to me than the other solutions posted.
import java.util.*;

public class ListOfLists {
    private List<List<Integer>> outerList;
    private Iterator<List<Integer>> outerListIter;
    private Iterator<Integer> innerListIter;

    public void init() {
        outerListIter = outerList.iterator();
        // start with an empty inner iterator so hasNext() also works
        // when the outer list is empty
        innerListIter = Collections.<Integer>emptyList().iterator();
        if (outerListIter.hasNext()) {
            innerListIter = outerListIter.next().iterator();
        }
    }

    public boolean hasNext() {
        // run through empty inner lists
        while (outerListIter.hasNext() && !innerListIter.hasNext()) {
            innerListIter = outerListIter.next().iterator();
        }
        return innerListIter.hasNext();
    }

    public Integer next() {
        if (hasNext()) {
            return innerListIter.next();
        } else {
            throw new NoSuchElementException();
        }
    }

    public static void main(String... arg) {
        new ListOfLists().test();
    }

    public void test() {
        outerList = new ArrayList<List<Integer>>();
        outerList.add(new ArrayList<Integer>());
        outerList.add(Arrays.asList(3, 4));
        outerList.add(Arrays.asList(5, 7));
        outerList.add(new ArrayList<Integer>());
        init();
        while (hasNext()) {
            System.out.print(next() + " ");
        }
    }
}
Use a trie together with backtracking. The trie gives a perfect pruning algorithm, so that you don't test longer words if the prefix is not in the trie:
Map<Character, Integer> lc = new HashMap<Character, Integer>(); // letter count

void init(char[] tiles) {
    for (int i = 0; i < tiles.length; i++) {
        if (!lc.containsKey(tiles[i]))
            lc.put(tiles[i], 1);
        else
            lc.put(tiles[i], lc.get(tiles[i]) + 1);
    }
}

void BT(char[] prefix, int k) {
    if (!trie.isPrefix(prefix, k))
        return;
    if (trie.isInDictionary(prefix, k))
        addSolution(prefix, k);
    for (Map.Entry<Character, Integer> e : lc.entrySet()) {
        if (e.getValue() > 0) {
            e.setValue(e.getValue() - 1);
            prefix[k] = e.getKey();
            BT(prefix, k + 1);
            e.setValue(e.getValue() + 1);
        }
    }
}
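Here is a self-contained version of the sketch above, with a minimal trie; the dictionary words and tiles are made up for illustration:

```java
import java.util.*;

public class TileWords {
    // Minimal trie node: children by letter, plus an end-of-word flag.
    static class Trie {
        Map<Character, Trie> child = new HashMap<>();
        boolean isWord;

        void add(String w) {
            Trie n = this;
            for (char c : w.toCharArray())
                n = n.child.computeIfAbsent(c, k -> new Trie());
            n.isWord = true;
        }
    }

    // Backtracking over the remaining letter counts; the trie node "n"
    // tracks the current prefix, so dead prefixes are pruned immediately.
    static void bt(Trie n, Map<Character, Integer> lc,
                   StringBuilder prefix, Set<String> solutions) {
        if (n.isWord)
            solutions.add(prefix.toString());
        for (Map.Entry<Character, Trie> e : n.child.entrySet()) {
            char c = e.getKey();
            Integer left = lc.get(c);
            if (left != null && left > 0) {
                lc.put(c, left - 1);
                prefix.append(c);
                bt(e.getValue(), lc, prefix, solutions);
                prefix.deleteCharAt(prefix.length() - 1);
                lc.put(c, left);
            }
        }
    }

    public static Set<String> solve(String[] dict, char[] tiles) {
        Trie root = new Trie();
        for (String w : dict) root.add(w);
        Map<Character, Integer> lc = new HashMap<>();
        for (char c : tiles) lc.merge(c, 1, Integer::sum);
        Set<String> solutions = new TreeSet<>();
        bt(root, lc, new StringBuilder(), solutions);
        return solutions;
    }

    public static void main(String[] args) {
        String[] dict = { "cat", "act", "tack", "dog" };
        System.out.println(solve(dict, new char[] { 'c', 'a', 't' }));
        // "cat" and "act" can be built from the tiles; "tack" and "dog" cannot
    }
}
```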
I like the heap idea and the Iterator idea. The whole merge could also be wrapped as an Iterator:
import java.util.*;

public class MergeIterator implements Iterator<Integer> {
    private static class Node {
        int v;
        Iterator<Integer> i;
        Node(int v, Iterator<Integer> i) { this.v = v; this.i = i; }
    }

    // min-heap on each iterator's current head value
    private final PriorityQueue<Node> h = new PriorityQueue<Node>(11,
            new Comparator<Node>() {
                public int compare(Node a, Node b) {
                    return Integer.compare(a.v, b.v);
                }
            });

    public MergeIterator(Collection<Iterator<Integer>> inputs) {
        for (Iterator<Integer> i : inputs)
            if (i.hasNext())
                h.add(new Node(i.next(), i));
    }

    public boolean hasNext() { return !h.isEmpty(); }

    public Integer next() {
        Node min = h.poll();
        if (min.i.hasNext())
            h.add(new Node(min.i.next(), min.i));
        return min.v;
    }
}
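To exercise the k-way merge idea on its own, here is a standalone sketch that uses java.util.PriorityQueue as the min-heap and returns the merged list directly:

```java
import java.util.*;

public class MergeDemo {
    // Merges already-sorted input iterators into one sorted list,
    // using a min-heap keyed on each iterator's current head value.
    public static List<Integer> merge(List<Iterator<Integer>> inputs) {
        // entry[0] = current value, entry[1] = index of the source iterator
        PriorityQueue<int[]> h =
                new PriorityQueue<>((a, b) -> Integer.compare(a[0], b[0]));
        for (int j = 0; j < inputs.size(); j++)
            if (inputs.get(j).hasNext())
                h.add(new int[] { inputs.get(j).next(), j });
        List<Integer> out = new ArrayList<>();
        while (!h.isEmpty()) {
            int[] min = h.poll();
            out.add(min[0]);
            Iterator<Integer> src = inputs.get(min[1]);
            if (src.hasNext())
                h.add(new int[] { src.next(), min[1] });
        }
        return out;
    }

    public static void main(String[] args) {
        List<Iterator<Integer>> in = Arrays.asList(
                Arrays.asList(1, 4, 7).iterator(),
                Arrays.asList(2, 5).iterator(),
                Arrays.asList(3, 6, 8).iterator());
        System.out.println(merge(in)); // [1, 2, 3, 4, 5, 6, 7, 8]
    }
}
```

With k input lists and N values in total, this runs in O(N log k).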
I am thinking about a solution which behaves better in a concurrent environment.
To lock less often and to save memory, I suggest we keep the resolution at about 1%. This means we update the count every 10 ms for the 1-second number, every 0.6 s for the 1-minute number and every 0.6 min for the 1-hour number. The 1-minute and 1-hour counts are just simpler variants of the same idea, so let's think only about the 1-second timer.
We can have an AtomicLong count which each request increments. When a thread notices that 10 ms have passed, it takes the current count and sends it, together with the timestamp, to a list, similarly to what others described. sendToList can be synchronized, but usually there is no contention because it happens only once per 10 ms. The "optimistic locking" in AtomicLong.compareAndSet takes care of the rest of the concurrency.
AtomicLong count = new AtomicLong(0);
// last timestamp
AtomicLong lt = new AtomicLong(System.currentTimeMillis());

void newRequest() {
    long c = count.getAndIncrement();
    long t = System.currentTimeMillis();
    long lt2 = lt.get();
    // every 10 ms
    if (t / 10 != lt2 / 10) {
        // try to write the new timestamp - on failure, assume another
        // thread is just doing the same and do nothing
        if (lt.compareAndSet(lt2, t)) {
            // c is the cumulative count; the reader diffs successive entries
            sendToList(lt2, c);
        }
    }
}
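sendToList is assumed to append (timestamp, cumulative count) pairs; the count for the last second is then the difference between the newest cumulative count and the one from roughly 1 s ago. A minimal single-threaded sketch of that reader side (names hypothetical, not part of the original code):

```java
import java.util.*;

public class RollingCount {
    // (timestamp, cumulative count) samples, oldest first
    private final Deque<long[]> samples = new ArrayDeque<>();

    // Hypothetical sendToList: append a sample and drop entries older
    // than the window we still need (1 s here).
    public void sendToList(long timestamp, long cumulative) {
        samples.addLast(new long[] { timestamp, cumulative });
        while (samples.peekFirst()[0] < timestamp - 1000)
            samples.removeFirst();
    }

    // Requests in the last second = newest cumulative count minus the
    // oldest cumulative count still inside the window.
    public long lastSecond() {
        if (samples.size() < 2) return 0;
        return samples.peekLast()[1] - samples.peekFirst()[1];
    }

    public static void main(String[] args) {
        RollingCount rc = new RollingCount();
        rc.sendToList(0, 0);
        rc.sendToList(500, 40);
        rc.sendToList(1000, 90);
        System.out.println(rc.lastSecond()); // 90
    }
}
```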
Matrix inversion is not recommended; it takes more time than Gaussian elimination, which reduces the matrix to a triangular one:
// put the numbers in matrix m: row i-1 contains ai bi ci xi (n = 3)
for (int i = 0; i < n; i++)
    for (int r = i + 1; r < n; r++) {       // row
        double k = m[r][i] / m[i][i];
        for (int c = i; c < n + 1; c++)     // column
            m[r][c] -= k * m[i][c];
    }
From this matrix you can get x1..xn as result[0..n-1] by back substitution:
for (int i = n - 1; i >= 0; i--) {
    double r = m[i][n]; // right side of the equation
    for (int j = n - 1; j > i; j--)
        r -= m[i][j] * result[j];
    result[i] = r / m[i][i];
}
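Both steps together, as a runnable sketch; note there is no pivoting, so it assumes the diagonal never becomes zero - fine for illustration, not for production:

```java
public class Gauss {
    // Solves the system given as an augmented n x (n+1) matrix
    // (the last column is the right-hand side).
    public static double[] solve(double[][] m) {
        int n = m.length;
        // forward elimination to a triangular matrix
        for (int i = 0; i < n; i++)
            for (int r = i + 1; r < n; r++) {
                double k = m[r][i] / m[i][i];
                for (int c = i; c < n + 1; c++)
                    m[r][c] -= k * m[i][c];
            }
        // back substitution
        double[] result = new double[n];
        for (int i = n - 1; i >= 0; i--) {
            double r = m[i][n];
            for (int j = n - 1; j > i; j--)
                r -= m[i][j] * result[j];
            result[i] = r / m[i][i];
        }
        return result;
    }

    public static void main(String[] args) {
        // 2x + y = 5, x + 3y = 10  ->  x = 1, y = 3
        double[][] m = { { 2, 1, 5 }, { 1, 3, 10 } };
        double[] x = solve(m);
        System.out.println(x[0] + " " + x[1]);
    }
}
```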
As we need to remove duplicates, a BST could serve better than a heap: no need for the additional hash table, and still O(N log N). On the other hand, a BST uses pointers and can't be stored in an array like a heap.
Also, we can't stop as soon as we have N items in the tree - e.g. for N = 4 the tree would contain 2, 3, 5, 7 after the first round. We must keep adding until we have output all N items.
Insertion sort:
Node pn = head;                  // previous to n
for (Node n = head.next; n != null; ) {
    Node next = n.next;          // remember it before we relink n
    Node pi = null;              // previous to i
    Node i = head;
    while (i != n && i.value < n.value) {
        pi = i;
        i = i.next;
    }
    if (i == n) {
        pn = n;                  // n is already in place
    } else {
        // change pointers from pi -> i ... pn -> n -> next
        // to pi -> n -> i ... pn -> next
        pn.next = next;
        n.next = i;
        if (pi == null)
            head = n;
        else
            pi.next = n;
    }
    n = next;
}
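A self-contained version, with a minimal Node class, for checking the pointer surgery:

```java
public class ListInsertSort {
    static class Node {
        int value;
        Node next;
        Node(int value, Node next) { this.value = value; this.next = next; }
    }

    // Insertion sort on a singly linked list; returns the new head.
    public static Node sort(Node head) {
        if (head == null) return head;
        Node pn = head;                      // node before n
        for (Node n = head.next; n != null; ) {
            Node next = n.next;              // save before relinking
            Node pi = null, i = head;
            while (i != n && i.value < n.value) {
                pi = i;
                i = i.next;
            }
            if (i == n) {
                pn = n;                      // already in place
            } else {
                pn.next = next;              // unlink n ...
                n.next = i;                  // ... and splice it in before i
                if (pi == null) head = n;
                else pi.next = n;
            }
            n = next;
        }
        return head;
    }

    public static String toString(Node n) {
        StringBuilder sb = new StringBuilder();
        for (; n != null; n = n.next) sb.append(n.value).append(' ');
        return sb.toString().trim();
    }

    public static void main(String[] args) {
        Node head = new Node(4, new Node(1, new Node(3, new Node(2, null))));
        System.out.println(toString(sort(head))); // 1 2 3 4
    }
}
```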
Good point. What about this:
* Choose the pivot randomly (e.g. the last number in the current sub-partition on the first machine)
* Tell all machines to partition around the pivot and report the three array indexes: a, p, b
* Use the answers to find whether the median lies to the left or right of the pivot
* Repeat recursively
int distributedMedian() {
    return distributedMedian(x[n - 1], true);
}

// pv = pivot value; left = partition the left or the right sub-partition?
int distributedMedian(int pv, boolean left) {
    long lsum = 0, rsum = 0;
    // run on all machines - the real implementation would run this
    // asynchronously on all machines at once
    for (int j = 0; j < m; j++) {
        // each machine reports its indexes, but also remembers them to be
        // able to run the next distributedPartition call
        (aj, pj, bj) = distributedPartition(j, left, pv);
        lsum += pj - aj;          // elements below the pivot
        rsum += bj - pj;          // elements above the pivot
    }
    if (lsum == rsum)             // the pivot is the median
        return pv;
    left = lsum > rsum;
    pv = /* choose randomly the next pivot in the left or right partition */;
    return distributedMedian(pv, left);
}
You can do that in O(N/M). As Skiena describes in his book, you can use the Partition method from the Quicksort algorithm to find a median in O(N) expected time, because you run Partition on only one side of the array each time. If all M machines work in parallel, each on N/M elements, they can finish in O(N/M).
int median(int[] x) {
    return select(x, 0, x.length, x.length / 2);
}

// returns the value with rank k (0-based) within x[a..b)
int select(int[] x, int a, int b, int k) {
    if (a >= b - 1)
        return x[a];
    int p = partition(x, a, b);
    if (p == k) return x[p];
    if (k < p) return select(x, a, p, k);
    return select(x, p + 1, b, k);
}

int partition(int[] x, int a, int b) {
    int p = b - 1;          // pivot: last element
    int firstHigh = a;
    for (int i = a; i < b - 1; i++)
        if (x[i] < x[p]) {
            swap(x, i, firstHigh);
            firstHigh++;
        }
    swap(x, p, firstHigh);
    return firstHigh;
}
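Wrapped into a runnable class for a quick check (swap is the usual array-element exchange):

```java
public class Median {
    // Quickselect with Lomuto partition: expected O(N) to find the
    // element of rank k; the median is rank length/2.
    public static int median(int[] x) {
        return select(x, 0, x.length, x.length / 2);
    }

    static int select(int[] x, int a, int b, int k) {
        if (a >= b - 1) return x[a];
        int p = partition(x, a, b);
        if (p == k) return x[p];
        if (k < p) return select(x, a, p, k);
        return select(x, p + 1, b, k);
    }

    static int partition(int[] x, int a, int b) {
        int p = b - 1, firstHigh = a;      // pivot: last element
        for (int i = a; i < b - 1; i++)
            if (x[i] < x[p])
                swap(x, i, firstHigh++);
        swap(x, p, firstHigh);
        return firstHigh;
    }

    static void swap(int[] x, int i, int j) {
        int t = x[i]; x[i] = x[j]; x[j] = t;
    }

    public static void main(String[] args) {
        int[] x = { 7, 1, 5, 3, 9, 4, 8 };
        System.out.println(median(x)); // 5: the middle of 1 3 4 5 7 8 9
    }
}
```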
This takes care of cycles and is independent of whether the graph is directed or not.
import java.util.*;

public class CloneGraph {
    private static class Node {
        int data;                // assumed unique per node - used as the map key
        List<Node> neighbors = new ArrayList<Node>();
    }

    public Node clone(Node i) {
        return clone(i, new HashMap<Integer, Node>());
    }

    public Node clone(Node i, Map<Integer, Node> nodes) {
        Node r = new Node();
        r.data = i.data;
        // register the copy before recursing, so cycles terminate
        nodes.put(r.data, r);
        for (Node n : i.neighbors) {
            Node j = nodes.get(n.data);
            if (j == null) {
                j = clone(n, nodes);
            }
            r.neighbors.add(j);
        }
        return r;
    }
}
We can use BFS or DFS here. DFS is generally a bit easier to implement.
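To sanity-check the recursive (DFS) clone on a cycle, here is a standalone sketch; the classes are restated so the snippet runs on its own:

```java
import java.util.*;

public class CloneDemo {
    static class Node {
        int data;
        List<Node> neighbors = new ArrayList<>();
        Node(int data) { this.data = data; }
    }

    // Same recursive DFS clone as above: register the copy in the map
    // before visiting the neighbors, so cycles terminate.
    static Node clone(Node i, Map<Integer, Node> nodes) {
        Node r = new Node(i.data);
        nodes.put(r.data, r);
        for (Node n : i.neighbors) {
            Node j = nodes.get(n.data);
            if (j == null) j = clone(n, nodes);
            r.neighbors.add(j);
        }
        return r;
    }

    public static void main(String[] args) {
        // two nodes pointing at each other: 1 <-> 2
        Node a = new Node(1), b = new Node(2);
        a.neighbors.add(b);
        b.neighbors.add(a);
        Node c = clone(a, new HashMap<Integer, Node>());
        // the copy has the same shape but shares no nodes with the original
        System.out.println(c.data + " " + c.neighbors.get(0).data
                + " " + (c.neighbors.get(0).neighbors.get(0) == c)
                + " " + (c != a));
        // prints: 1 2 true true
    }
}
```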
- galli October 27, 2013