lucas.eustaquio
Because I like challenges.
Must be easy to correct.
As for the easy way to find the height given the kth cup: the total number of cups is an arithmetic progression in h: (1+2+3+4+5...+h). That creates a relationship between h and n: n = h(h+1)/2 (the A.P. sum formula), i.e. n = h²/2 + h/2, or h²/2 + h/2 - n = 0. Solve this equation with the quadratic (Bhaskara) formula, (-b + sqrt(b²-4ac))/2a, with a = 0.5, b = 0.5 and c = -n.
That should do it!
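As a quick sketch of that calculation (the class and method names are mine, not from the original discussion):

```java
// Sketch of the height-from-count formula above. Given the total number of
// cups n = h(h+1)/2, solving h^2/2 + h/2 - n = 0 with the quadratic formula
// (a = 0.5, b = 0.5, c = -n) gives h; ceil() lands non-triangular counts
// in the row that contains the nth cup.
public class StackHeight {
    public static int heightFor(long n) {
        return (int) Math.ceil(-0.5 + Math.sqrt(0.25 + 2.0 * n));
    }
}
```

For triangular counts (1, 3, 6, 10, ...) this returns the exact height; for any other n it returns the row holding the nth cup.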
I believe it won't. But it is easy to fix...
- lucas.eustaquio September 15, 2011
Space O(k) is right. K being one-based means the first k is 1 instead of 0: the first k will be 1, the second 2, and so on.
- lucas.eustaquio September 12, 2011
A second call to a recursive path already found will be O(1), so it won't alter the overall complexity. Just look at the big picture: all this method does is build a found array with n entries at most, and it does n comparisons to build each entry, therefore it is O(n^2). If you doubt it, run the code, double the string length, and see how the time grows.
If this wasn't dynamic programming, you would be correct, and the simple recursive method would have O(n!) complexity.
It would make the code more understandable. Thanks!
- lucas.eustaquio August 18, 2011
That's a pretty good idea. I just implemented and tested it:
I didn't use another array to store the count. I only preprocessed k by doing:
1. for each k at position i, do k = k - i.
By doing that, each element with a value greater than or equal to this new k will be offset by the position of k (equal to the number of elements lower than or equal to r in the k vector).
Space is O(k) and time is O(log k), as you suggested.
import java.util.Random;

class KDistinct2 {
    private int[] k;
    private Random rand = new Random();

    public KDistinct2(int[] k) {
        this.k = k.clone();
        for (int i = 1; i < k.length; ++i) {
            this.k[i] -= i;
        }
    }

    public int rand(int n) {
        int r = this.rand.nextInt(n - k.length);
        return r + countLowerOrEqualThan(r);
    }

    private int countLowerOrEqualThan(int val) {
        int l = 0, r = k.length - 1;
        while (l <= r) {
            int m = (l + r) / 2;
            if (k[m] <= val && (m == (k.length - 1) || k[m + 1] > val))
                return (m + 1);
            if (k[m] > val)
                r = m - 1;
            else
                l = m + 1;
        }
        return 0;
    }
}
The idea is to move 16 to the 16th slot among the available numbers:
Available: [1, 4, 5, 7, 8, 10, 11, 13, 14, 15, 16, 18, 19, 22, 23, 24, 24, 26, 27, 28, 29, 30]
So r = 24 in this case.
It is important that k be in increasing order for this method to work. If you take a second look at the method, you will see that it isn't just incrementing r once for each element below it in k; it is incrementing based on r's previously incremented value.
So for the above situation, the iterations will be:
k[0] -> 16 > 2, so r = 17
k[1] -> 17 > 3, so r = 18
k[3] -> 18 > 6, so r = 19
k[4] -> 19 > 9, so r = 20
k[5] -> 20 > 12, so r = 21
k[6] -> 21 > 17, so r = 22
k[7] -> 22 > 20, so r = 23
k[8] -> 23 > 21, so r = 24
And just for the record, I ran several tests on the code, generating millions of n and millions of k, all of them random. Not even once did the program select a number in k, and it also distributed the chosen values evenly among the valid results.
- lucas.eustaquio August 11, 2011
Yes, that's right. The code is working; I tested it with JUnit. But the algorithm description is wrong, and the right one is as you said.
- lucas.eustaquio August 11, 2011
Just had this question in my onsite interview. The best answer is to build an iterator based on a stack. It is basically doing a depth-first search.
Iterator logic:
1. from the root, add the right node (if there is one), the current node, and then the left node; repeat this step for the left node.
2. when popping from the stack, check if the node at the top is the right node of the popped one. If so, repeat step 1 for the left node of the node at the top of the stack.
3. use the iterator to print in sorted order.
Time: O(n)
Space: O(log n), the height of a balanced tree. By the way, your algorithm's space is also O(n), because that's the size of the stack used in your recursion.
import java.util.ArrayDeque;
import java.util.Deque;

// assumes a TreeNode class with public left and right fields
public class TreeIterator {
    private final Deque<TreeNode> stack = new ArrayDeque<TreeNode>();

    public TreeIterator(TreeNode root) {
        addInOrder(root);
    }

    private void addInOrder(TreeNode node) {
        if (node == null) return;
        if (node.right != null)
            stack.push(node.right); // right child sits below its parent
        stack.push(node);
        addInOrder(node.left);      // the left subtree ends up on top
    }

    public boolean hasNext() {
        return !stack.isEmpty();
    }

    public TreeNode next() {
        if (stack.isEmpty())
            return null;
        TreeNode next = stack.pop();
        // when a popped node's right child surfaces, lazily expand
        // that child's left subtree
        if (!stack.isEmpty() && stack.peek() == next.right)
            addInOrder(next.right.left);
        return next;
    }
}
Found an approach that is O(log(a)+log(b)). The algorithm is based on binary search, as follows:
1. find the median position, (a.length+b.length)/2.
2. select a candidate median from a, and binary search for the count of elements lower than a[median] in b.
3. if the candidate's position in a plus the count of lower elements in b equals the median position, you have it.
4. if that sum is lower than wanted, do the same for the second half of a. Since all elements of this half are greater than the previous ones, you can also restrict the lower bound of the search in b.
5. if that sum is higher than wanted, search in the first half of a.
6. if the median was not found, it is in b, so swap a with b and start again from step 1.
Code:
public class FindMedian {
    public static int searchMedian(int[] a, int[] b) {
        return searchMedian(a, 0, a.length - 1, b, 0, b.length - 1);
    }

    private static int searchMedian(int[] a, int al, int ar,
            int[] b, int bl, int br) {
        int medianPos = (a.length + b.length) / 2;
        while (al <= ar) {
            int am = (al + ar) / 2;
            int blow = bl + countLowerThan(b, a[am], bl, br);
            int curPos = am + blow;
            if (curPos == medianPos)
                return a[am];
            if (curPos < medianPos) {
                al = am + 1;
                bl = blow;
            } else {
                ar = am - 1;
                br = blow;
            }
        }
        // the median was not found in a, so it must be in b: swap the arrays
        return searchMedian(b, 0, b.length - 1, a, 0, a.length - 1);
    }

    private static int countLowerThan(int[] a, int val, int l, int r) {
        int start = l;
        while (l <= r) {
            int m = (l + r) / 2;
            if (a[m] < val && (m == (a.length - 1) || a[m + 1] >= val))
                return (m + 1) - start;
            if (val <= a[m])
                r = m - 1;
            else
                l = m + 1;
        }
        return 0;
    }
}
This code just rotates in place. The trick is to figure out which indexes to swap. To figure this out, I just drew a 4x4 matrix and tried to rotate the element 0,1 (row 0, col 1). Did it by hand, writing down where each element goes. Then I named row = 0, mirroredRow = 4 (length - row), col = 1, mirroredCol = 3 (length - col), and saw the exact pattern.
As for the iteration over the elements: since we swap 4 elements at a time, we just need to iterate over half of each dimension. If the length is odd, we need to round up one of the iterations (try to rotate a 3x3 matrix by hand and you will see).
Code:
public class RotateMatrix {
    public static void rotate90Degree(int[][] matrix) {
        int n = matrix.length;
        for (int row = 0, mirroredRow = n - 1, halfN = n / 2, halfNUp = n - n / 2;
                row < halfN; ++row, --mirroredRow) {
            for (int col = 0, mirroredCol = n - 1; col < halfNUp;
                    ++col, --mirroredCol) {
                int temp = matrix[mirroredCol][row];
                matrix[mirroredCol][row] = matrix[row][col];
                matrix[row][col] = matrix[col][mirroredRow];
                matrix[col][mirroredRow] = matrix[mirroredRow][mirroredCol];
                matrix[mirroredRow][mirroredCol] = temp;
            }
        }
    }
}
I think you misunderstood the pop procedure. The pop hash isn't mapping a count to a list of all elements with the same frequency; it is mapping a count to another hash with all elements at that frequency. It's a hash of hashes. With a good hash function it will be O(1) on average.
It behaved pretty well in my 1,000,000-element tests. There were way more than 1000 elements at the same frequency, and the pop time was constant; actually it decreased with more elements (amortization effect).
By the way, I just pushed 1,000,000 elements and then popped all of them. The total time was 1.6 s, and that is for the whole operation (adding and popping).
Add avg: 0.0011
Pop avg: 0.0005
For 100,000 elements it was:
Add avg: 0.00208
Pop avg: 0.00155
For fewer elements the averages were higher because the overhead of some operations was less amortized. But it shows constant-time behavior.
That is correct. But in my implementation, if you push a number 5 times you will have to pop it 5 times also.
Operations:
1. Push: check the current count of the pushed item, getting it from a hashmap. If the count is zero, set count = 1, else increment the count, and add the element to another map indexed by its count. Update the current max count if needed. All operations are O(1).
2. Pop: get the list of items from the count-indexed map using the max count. Retrieve an element from it and remove it from the list. The list is also hash-backed, so removal is O(1). If the list is now empty, decrement the max count. Also decrement the current count of the element (in the other map) by one.
All I used were hashes, so it must be O(1), since every operation I use is constant time. Don't you agree?
By the way, the output for your pushes is:
Pop 1 -> 7
Pop 2 -> 7
Pop 3 -> 7
Pop 4 -> 3
Pop 5 -> 9
Pop 6 -> 7
Pop 7 -> 3
Pop 8 -> 9
Pop 9 -> 7
Pop 10 -> 5
Pop 11 -> 9
Pop 12 -> 3
Solved using DP. Algorithm of the main method fill(int tankSize):
1. Check the exit conditions:
2. is tank size == 0? Then a solution was found earlier in the stack.
3. is tank size < 0? Then the current branch of solutions is unsolvable; turn back.
4. is there any solution in the cache? If so, return it.
5. choose a container and make a recursive call to solve the subproblem for tankSize = tankSize - container.
6. if a solution was found, check if it is the best so far.
The time analysis is hard, but it takes time proportional to the time needed to fill in all the solutions. Since the number of solutions will be TankSize/MinContainerSize, and it does N (container count) operations for each, the complexity will be O(N*TankSize/MinContainerSize).
Space is also O(N*TankSize/MinContainerSize) because of the format of the Result class.
Java code:
import java.util.HashMap;
import java.util.Map;

public class TankFill {
    private class Result {
        public int[] used;
        public int count;
    }

    private int[] containers;
    private Map<Integer, Result> solutions = new HashMap<Integer, Result>();

    public int[] fill(int tankSize, int[] containers) {
        this.containers = containers;
        this.solutions = new HashMap<Integer, Result>();
        Result result = fill(tankSize);
        return result == null ? null : result.used;
    }

    private Result fill(int tankSize) {
        if (tankSize == 0) {
            Result empty = new Result();
            empty.used = new int[containers.length];
            return empty;
        }
        if (tankSize < 0)
            return null;
        if (solutions.containsKey(tankSize))
            return solutions.get(tankSize);
        Result best = null;
        for (int i = 0; i < containers.length; ++i) {
            Result next = fill(tankSize - containers[i]);
            if (next != null && (best == null || (next.count + 1) < best.count)) {
                best = new Result();
                best.used = next.used.clone();
                best.count = next.count;
                ++best.used[i];
                ++best.count;
            }
        }
        solutions.put(tankSize, best);
        return best;
    }
}
Just use a bitset. It fits comfortably in memory (150 MB or so).
I just ran the code below in 55 seconds on my desktop. It removed the duplicates and sorted the numbers!
Time is O(n) because it reads the input only once.
Space is O(max range) because it allocates the bitset for possibly every number in the range.
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
import java.io.Writer;
import java.util.BitSet;
import java.util.Scanner;

public class RemoveDuplicates {
    public void removeDuplicate(File input, File output) throws IOException {
        Scanner scan = new Scanner(input);
        BitSet foundSet = new BitSet(100000000);
        while (scan.hasNextInt()) {
            foundSet.set(scan.nextInt());
        }
        scan.close();
        Writer writer = new FileWriter(output);
        // iterating in index order yields the numbers sorted and deduplicated
        for (int i = 0; i < foundSet.length(); ++i) {
            if (foundSet.get(i)) {
                writer.write(String.valueOf(i));
                writer.write("\n");
            }
        }
        writer.close();
    }
}
What about storing the numbers in range form?
3154470000, 3154479999 → {3154470000 to 3154479999}. I just compressed 10,000 numbers into one line with it. For that many numbers, there must be plenty of consecutive ones. And binary search would still work if properly modified...
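A minimal sketch of that range idea, assuming the numbers are loaded in increasing order (the class and method names are mine):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of range compression: store sorted, disjoint [start, end] ranges
// and binary-search the ranges for membership. Consecutive numbers collapse
// into a single range, which is where the compression comes from.
public class PhoneRanges {
    private final List<long[]> ranges = new ArrayList<long[]>();

    // append-only build: numbers must arrive in increasing order
    public void add(long n) {
        if (!ranges.isEmpty()) {
            long[] last = ranges.get(ranges.size() - 1);
            if (n == last[1] + 1) { last[1] = n; return; } // extend last range
        }
        ranges.add(new long[]{n, n});
    }

    public boolean contains(long n) {
        int l = 0, r = ranges.size() - 1;
        while (l <= r) {
            int m = (l + r) / 2;
            long[] range = ranges.get(m);
            if (n < range[0]) r = m - 1;
            else if (n > range[1]) l = m + 1;
            else return true; // range[0] <= n <= range[1]
        }
        return false;
    }
}
```

Lookup stays O(log of the number of ranges), which is the "properly modified" binary search mentioned above.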
The max is being updated in method pop.
Code:
if (set.size() == 0) {
    stackMap.remove(maxCount);
    --maxCount;
}
A set with size zero means that there are no more values at maxCount. Tested it with JUnit.
A sorted map would increase the overall complexity by a log n factor. A LinkedHashSet has predictable iteration order, namely the order in which elements were added. Unfortunately, I'm popping the element that appeared first at that count; it should be the last one (stack behavior). But I would stick with this implementation and say that using another map implementation that has a way of retrieving the last key would be correct, for example an ordered map from Apache Commons Collections.
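The last-in-first-out fix described above can be sketched with a plain Deque per count bucket (this is my own hypothetical variant, not the original implementation):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch: one Deque per frequency count, so popping at a given count
// returns the most recently pushed element (true stack behavior),
// while push and pop stay O(1).
public class FrequencyBuckets<T> {
    private final Map<Integer, Deque<T>> buckets = new HashMap<Integer, Deque<T>>();

    public void push(int count, T o) {
        Deque<T> deque = buckets.get(count);
        if (deque == null)
            buckets.put(count, deque = new ArrayDeque<T>());
        deque.addLast(o); // newest element at the tail
    }

    public T popNewest(int count) {
        Deque<T> deque = buckets.get(count);
        return (deque == null || deque.isEmpty()) ? null : deque.removeLast();
    }
}
```

A Deque gives O(1) access to the last-added element directly, which is exactly what the LinkedHashSet lacks.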
Forgot to state the complexity:
O(n+i), where n is the number of elements in the array and i is the number of intersections. Leon's suggestion should also have this i part, making it O(n log n + i).
Space is O(n). Any suggestion to use a simple counter instead of a set is welcome!
Here is another idea with the same principle but without the need for sorting:
1. Go left to right, marking intersections in that direction. If i intersects j, put i in a set of intersections stored at j (i < j because it is left to right!).
2. The step above marked all intersections in that direction, but also gave us information about the reach of each disc: if disc 1 reaches disc 4, it will be in 4's list.
3. This time, going right to left, for each disc i add each reached j to i's own list. By doing this you always add the lower index into the higher index's list, avoiding duplication.
4. The intersection information at j tells you who reaches j. So if k reaches j and i reaches j, i also reaches k. With that in mind, add all elements in j's intersection list to i's list.
5. Sum the sizes of all the intersection lists.
Java code:
import java.util.LinkedHashSet;
import java.util.Set;

class Discs2d {
    @SuppressWarnings("unchecked")
    public int numberOfIntersections(int[] a) {
        // represents the intersection lists
        Set<Integer>[] intersec = new Set[a.length];
        for (int i = 0; i < a.length; ++i) {
            intersec[i] = new LinkedHashSet<Integer>();
        }
        // left to right
        for (int i = 0; i < (a.length - 1); ++i) {
            for (int r = 1; r <= a[i] && (r + i < a.length); ++r) {
                // i reaches i+r
                intersec[i + r].add(i);
            }
        }
        // right to left
        for (int i = a.length - 1; i >= 1; --i) {
            for (int r = 1; r <= a[i] && (i - r >= 0); ++r) {
                // i reaches i-r
                intersec[i].add(i - r);
                // everything that reaches i-r also reaches i,
                // because it reached i-r using only its own radius
                intersec[i].addAll(intersec[i - r]);
            }
        }
        int count = 0;
        for (int i = 0; i < a.length; ++i) {
            count += intersec[i].size();
        }
        return count;
    }
}
What anonymous said is true up to a point, but this problem is much easier than the one pointed to. It can be done in O(n) time and O(n) space (counting dictionary lookups), given a dictionary of words.
Algorithm:
1. Split the input string into two parts between every pair of characters.
2. Check if both sides are present in the dictionary; if so, you have two words.
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

public class SplitStrings {
    public static List<String[]> splitStr(Set<String> dictionary, String str) {
        List<String[]> options = new ArrayList<String[]>();
        for (int i = 1; i < str.length(); ++i) {
            if (dictionary.contains(str.substring(0, i))
                    && dictionary.contains(str.substring(i))) {
                options.add(new String[] {str.substring(0, i), str.substring(i)});
            }
        }
        return options;
    }
}
The function I = max((a-b),(b-c),(c-a)) is the distance between a, b and c that we want to minimize. If the arrays are already sorted, you can iterate through them, incrementing i, j or k in the direction that minimizes I. If you think about it, you will conclude that you must increment the index holding the minimum value among the three. By doing this, you make one element move in the direction of the others.
Algorithm, O(n) time, O(1) space:
1. initialize i, j and k to zero
2. compute the current minimum value, updating the best if needed
3. check if i, j and k are all at the end; if so, stop
4. check which of i, j and k holds the minimum element and increment that index. Be careful not to increment an index past the end of its array
5. go to step 2
Code:
public class MinimizeDistance {
    public static int[] minimizeDistance(int[] a, int[] b, int[] c) {
        int[] minIndexes = null;
        Integer minVal = null;
        int i = 0, j = 0, k = 0;
        while (true) {
            int curMin = Math.max(a[i] - b[j],
                    Math.max(b[j] - c[k], c[k] - a[i]));
            if (minVal == null || curMin < minVal) {
                minVal = curMin;
                minIndexes = new int[]{i, j, k};
            }
            boolean canIncI = i != a.length - 1;
            boolean canIncJ = j != b.length - 1;
            boolean canIncK = k != c.length - 1;
            if (!canIncI && !canIncJ && !canIncK)
                break;
            if (canIncI && (!canIncJ || a[i] < b[j])
                    && (!canIncK || a[i] < c[k]))
                ++i;
            else if (canIncJ && (!canIncK || b[j] < c[k]))
                ++j;
            else
                ++k;
        }
        return minIndexes;
    }
}
Includes logic for inserting both at the head and at the tail:
1. if value >= current and value < next, insert (base case)
2. if current > next (the wrap point) and value >= current, insert (tail)
3. if current > next (the wrap point) and value < next, insert (head)
public class CLLInsert {
    public static class CLLNode {
        public CLLNode next;
        public int value;

        public CLLNode(CLLNode next, int value) {
            this.next = next;
            this.value = value;
        }
    }

    public static void insert(CLLNode cll, int val) {
        insert(cll, new CLLNode(null, val));
    }

    public static void insert(CLLNode cll, CLLNode n) {
        boolean inserted = false;
        while (!inserted) {
            if (n.value < cll.next.value) {
                if (n.value >= cll.value
                        || cll.value > cll.next.value) {
                    inserted = true;
                }
            } else if (cll.value > cll.next.value
                    && n.value >= cll.value) {
                inserted = true;
            }
            if (inserted) {
                n.next = cll.next;
                cll.next = n;
            }
            cll = cll.next;
        }
    }
}
Logic in Java but valid for any language:
1. Create a compare function that maps the characters of the base string to the index at which they appear. You can use a map to do this. Other characters can be offset by the map size, or mapped to Integer.MAX_VALUE.
2. Use the comparator with the sort method of your choice.
Comparator code:
private static class StringBasedSort implements Comparator<Character> {
    private Map<Character, Integer> baseMap = new HashMap<Character, Integer>();

    public StringBasedSort(String base) {
        for (int i = 0; i < base.length(); ++i) {
            if (!baseMap.containsKey(base.charAt(i))) {
                baseMap.put(base.charAt(i), i);
            }
        }
    }

    @Override
    public int compare(Character o1, Character o2) {
        Integer val1 = baseMap.get(o1);
        if (val1 == null)
            val1 = (int) o1 + baseMap.size();
        Integer val2 = baseMap.get(o2);
        if (val2 == null)
            val2 = (int) o2 + baseMap.size();
        return val1.compareTo(val2);
    }
}
Test code:
public static String sortStr1BasedStr2(String strToSort, String base) {
    Comparator<Character> cmp = new StringBasedSort(base);
    Character[] chrToSort = toCharacterArray(strToSort);
    Arrays.sort(chrToSort, cmp);
    return toString(chrToSort);
}

// utility functions
private static Character[] toCharacterArray(String str) {
    char[] tmp = str.toCharArray();
    Character[] array = new Character[tmp.length];
    for (int i = 0; i < tmp.length; ++i) {
        array[i] = tmp[i];
    }
    return array;
}

private static String toString(Character[] array) {
    char[] tmp = new char[array.length];
    for (int i = 0; i < tmp.length; ++i) {
        tmp[i] = array[i];
    }
    return new String(tmp);
}
If I were to model the String class, I would certainly cache the hash value (that actually happens in Java), and in that case the amortized cost would be O(1). But I agree that it isn't a true O(n^2); that is because the .substring part is O(n), making it O(n^3), n being the number of characters.
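The hash-caching idea can be sketched like this (a simplified, hypothetical wrapper class of my own; java.lang.String does something similar internally):

```java
// Sketch of hash caching: the hash is computed at most once, so repeated
// hashCode() calls are O(1) amortized. Uses the same 31-based polynomial
// as String.hashCode().
public final class CachedHashString {
    private final char[] chars;
    private int hash; // 0 means "not computed yet"

    public CachedHashString(String s) {
        this.chars = s.toCharArray();
    }

    @Override
    public int hashCode() {
        int h = hash;
        if (h == 0 && chars.length > 0) {
            for (char c : chars)
                h = 31 * h + c; // same polynomial as String.hashCode()
            hash = h;           // cache for subsequent calls
        }
        return h;
    }
}
```

The first call pays O(n); every call after that just returns the cached field.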
- lucas.eustaquio August 02, 2011
No, really. I used a HashSet for the dictionary (a HashSet is a HashMap with key = value), so it is O(n^2). Actually it also depends on the number of valid phrases found. As for the code, I posted it again at ideone.com/yFwkN and now it compiled; I should have included that public the first time.
- lucas.eustaquio August 02, 2011
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

class PutSpaces {
    private List<String>[] found;
    private Set<String> dictionary;

    @SuppressWarnings("unchecked")
    public List<String> putSpaces(Set<String> dictionary, String str) {
        this.dictionary = dictionary;
        found = new List[str.length()];
        return putSpaces(str, 0);
    }

    private List<String> putSpaces(String str, int start) {
        if (found[start] != null)
            return found[start];
        List<String> result = new ArrayList<String>();
        for (int i = 0; i < str.length(); ++i) {
            String word = str.substring(0, i + 1);
            if (dictionary.contains(word)) {
                if (word.equals(str)) {
                    result.add(word);
                } else {
                    List<String> next = putSpaces(str.substring(i + 1), start + i + 1);
                    for (String phrase : next) {
                        result.add(word + " " + phrase);
                    }
                }
            }
        }
        return found[start] = result;
    }

    public static void main(String[] args) {
        Set<String> dictionary = new HashSet<String>();
        dictionary.addAll(Arrays.asList(new String[]{"this", "i", "saw", "is", "a", "awe", "we", "some", "awesome"}));
        String str = "thisisawesome";
        PutSpaces putSpaces = new PutSpaces();
        List<String> phrases = putSpaces.putSpaces(dictionary, str);
        System.out.println(phrases);
    }
}
The code I posted did not compile. Posted it again at ideone.com/yFwkN.
- lucas.eustaquio August 02, 2011
Posted it there (ideone.com/tTV3W). Tested and got the output: [this is a we some, this is awe some, this is awesome]
About it being DP or not: it is the top-down DP approach. The high-level logic is: do I already have a solution for this subproblem? Yes? Then return it. No? Calculate it! So the first time the code needs a subproblem, it will solve it.
There was actually a minor error, but it wasn't in the logic: a +1 was missing in List<String> next = putSpaces(str.substring(i+1), start+i+1);
This is exactly doing:
f(n,k) = f(n-2, k-1) + f(n-3, k-1) ... + f(2(k-1)-1, k-1) (a)
But as stated before, this is the same as f(n,k) = f(n-2, k-1) + f(n-1,k),
because
f(n-1,k) = f(n-3, k-1) + f(n-4, k-1) ... f(2(k-1)-1, k-1) (b)
(just replace n with n-1 in the previous equation).
Replacing b in a:
f(n,k) = f(n-2, k-1) + f(n-1,k).
Just adding a more explicit cache variable:
public class ChooseKNotConsecutive {
    private Integer[][] cache;

    public int f(int n, int k) {
        cache = new Integer[n][k];
        return f2(n, k);
    }

    private int f2(int n, int k) {
        if (n < (2 * k - 1)) return 0;
        if (k == 1) return n;
        if (cache[n - 1][k - 1] == null)
            cache[n - 1][k - 1] = f2(n - 2, k - 1) + f2(n - 1, k);
        return cache[n - 1][k - 1];
    }
}
Yes, it is exactly the same, and it would work for 4-sum, 5-sum, etc. by adding extra for loops (iterators) outside the existing one.
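The "extra outer for" idea can be sketched for 4-sum like this (my own class and method names; it fixes two elements and runs the usual two-pointer scan on the rest, assuming a sorted copy of the array):

```java
import java.util.Arrays;

// Sketch of extending the two-pointer 3-sum technique to 4-sum:
// two outer loops fix a and b, then m/u close in on the remaining pair.
// Overall O(n^3) time, versus O(n^2) for 3-sum.
public class FourSum {
    public static int[] fourSum(int[] array, int sum) {
        array = array.clone();
        Arrays.sort(array);
        for (int a = 0; a < array.length - 3; ++a) {
            for (int b = a + 1; b < array.length - 2; ++b) {
                int m = b + 1, u = array.length - 1;
                while (m < u) {
                    int curSum = array[a] + array[b] + array[m] + array[u];
                    if (curSum == sum)
                        return new int[]{array[a], array[b], array[m], array[u]};
                    else if (curSum > sum)
                        --u;
                    else
                        ++m;
                }
            }
        }
        return null; // no quadruple adds up to sum
    }
}
```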
- lucas.eustaquio August 02, 2011
I solved it using DP. I think the time complexity is O(n^2 + number of valid phrases).
My algorithm:
1: check if a substring was already searched; if so, return the cached result.
2: iterate through all substrings starting at zero. If one is a valid word, try to form a phrase with the rest. If that yields any valid phrases, or the word used all the letters, append it to the result array.
3: cache the result from step 2 and return it.
for input : thisisawesome
and dictionary: [this, is, a, awe, we, some, awesome]
got: [this is a we some, this is awe some, this is awesome]
Code:
public class PutSpaces {
    private List<String>[] found;
    private Set<String> dictionary;

    public List<String> putSpaces(Set<String> dictionary, String str) {
        this.dictionary = dictionary;
        found = new List[str.length()];
        return putSpaces(str, 0);
    }

    private List<String> putSpaces(String str, int start) {
        if (found[start] != null)
            return found[start];
        List<String> result = new ArrayList<String>();
        for (int i = 0; i < str.length(); ++i) {
            String word = str.substring(0, i + 1);
            if (dictionary.contains(word)) {
                if (word.equals(str)) {
                    result.add(word);
                } else {
                    // note the +1: the suffix starts at start + i + 1
                    List<String> next = putSpaces(str.substring(i + 1), start + i + 1);
                    for (String phrase : next) {
                        result.add(word + " " + phrase);
                    }
                }
            }
        }
        return found[start] = result;
    }
}
public class PutSpacesTest {
    private Set<String> dictionary = new HashSet<String>();
    {
        dictionary.addAll(Arrays.asList(new String[]{"this", "is", "a", "awe", "we", "some", "awesome"}));
    }

    @Test
    public void testPutSpaces() {
        String str = "thisisawesome";
        PutSpaces putSpaces = new PutSpaces();
        List<String> phrases = putSpaces.putSpaces(dictionary, str);
        System.out.println(phrases);
    }
}
I believe the best approach is to search for palindromes centered at each character of the string str.
The brute-force approach would check whether every substring is a palindrome. Building all substrings is n^2, and checking whether each one is a palindrome is n (worst case); hence it would be n^3.
If you check for all palindromes centered at each character, you spend O(n+p), n being the number of characters and p the number of palindromes. I don't believe there is a way to find all palindromes without the p part (you can't find a palindrome without searching for it), and n is the part where you search for the palindromes. Even the worst case, the string "aaaaaaaaaaa", fits this analysis. It is not that the time is n^2; it is that there really are n^2 palindromes (a lot of them repeated), but well, they are there in different places.
I wrote code that does this:
1. The function Set<String> findPalindrome(...) iterates through each char looking for palindromes. For each char i, the even-length palindromes will start at i and i+1 (it could be i-1 and i, it doesn't matter), and the odd-length ones at i-1 and i+1.
2. The function void findPalindrome(...) tries to find palindromes that start at lower and end at upper. If char[lower] == char[upper], we have a palindrome between those indexes; otherwise it makes no sense to keep searching.
public static Set<String> findPalindrome(String str) {
    Set<String> found = new LinkedHashSet<String>();
    for (int i = 0; i < str.length(); ++i) {
        findPalindrome(found, str, i, i + 1);     // even letters
        findPalindrome(found, str, i - 1, i + 1); // odd letters
    }
    return found;
}

private static void findPalindrome(Set<String> found, String str, int lower,
        int upper) {
    while (lower >= 0 && upper < str.length()) {
        if (str.charAt(lower) != str.charAt(upper))
            return;
        found.add(str.substring(lower, upper + 1));
        --lower;
        ++upper;
    }
}
1. Generate a number in [0, N-K).
2. For each k (sorted), check if the generated number is >= k; if so, increment it.
Doing this is the same as simulating holes where the k excluded numbers should be.
Time: O(k). (For a constant-time solution, use artakbegnazaryan's advice.)
import java.util.Random;

public class KDistinct {
    private int[] k;
    private Random rand = new Random();

    public KDistinct(int[] k) {
        this.k = k;
    }

    public int rand1(int n) {
        int r = rand.nextInt(n - k.length);
        for (int f : k) {
            if (r >= f)
                ++r;
            else
                break;
        }
        return r;
    }
}
Simple idea:
Use a hashmap to build doubly linked lists:
1. Create a map of linked nodes.
2. For each number, create a linked node in the map. Check if its previous value is there; if it is, link them. Do the same for the next value.
3. Scan the map to find the longest linked list built.
Complexity:
Space: O(n) (the linked lists and the map)
Time: O(n) (one iteration over the input and one over the hashmap, each using a value at most once)
Code:
import java.util.HashMap;
import java.util.Map;

public class LongestSequence {
    public static class NodeList {
        public NodeList prev;
        public NodeList next;
        public int value;
    }

    public static NodeList findLongestConsecutives(int[] values) {
        Map<Integer, NodeList> nodeMap = new HashMap<Integer, NodeList>();
        buildLinkedLists(values, nodeMap);
        return findLongestLinkedList(nodeMap);
    }

    private static void buildLinkedLists(int[] values,
            Map<Integer, NodeList> nodeMap) {
        for (int val : values) {
            if (!nodeMap.containsKey(val)) {
                NodeList cur = new NodeList();
                cur.value = val;
                nodeMap.put(val, cur);
                NodeList prev = nodeMap.get(val - 1);
                if (prev != null) {
                    prev.next = cur;
                    cur.prev = prev;
                }
                NodeList next = nodeMap.get(val + 1);
                if (next != null) {
                    next.prev = cur;
                    cur.next = next;
                }
            }
        }
    }

    private static NodeList findLongestLinkedList(Map<Integer, NodeList> nodeMap) {
        NodeList list = null;
        int maxSize = 0;
        while (!nodeMap.isEmpty()) {
            NodeList tail = nodeMap.entrySet().iterator().next().getValue();
            nodeMap.remove(tail.value);
            int size = 1;
            NodeList head = tail;
            while (head.prev != null) {
                ++size;
                head = head.prev;
                nodeMap.remove(head.value);
            }
            while (tail.next != null) {
                ++size;
                tail = tail.next;
                nodeMap.remove(tail.value);
            }
            if (size > maxSize) {
                maxSize = size;
                list = head;
            }
        }
        return list;
    }
}
Solution with constant time for all operations (using hashmaps):
It basically uses 2 hashes, one mapping element -> count and the other mapping count -> stack of elements.
import java.util.HashMap;
import java.util.LinkedHashSet;
import java.util.Map;
import java.util.Set;

public class FrequencyStack<T> {
    private final Map<T, Integer> countMap = new HashMap<T, Integer>();
    private final Map<Integer, Set<T>> stackMap = new HashMap<Integer, Set<T>>();
    private int maxCount = 0;

    public void push(T o) {
        Integer c = countMap.get(o);
        if (c == null) {
            countMap.put(o, c = 1);
        } else {
            countMap.put(o, ++c);
        }
        Set<T> set = stackMap.get(c);
        if (set == null)
            stackMap.put(c, set = new LinkedHashSet<T>());
        set.add(o);
        if (c > maxCount)
            maxCount = c;
    }

    public T pop() {
        if (maxCount == 0)
            return null;
        Set<T> set = stackMap.get(maxCount);
        T o = set.iterator().next();
        set.remove(o);
        // decrement the popped element's count; elements in the maxCount
        // set always have count == maxCount
        if (maxCount == 1)
            countMap.remove(o);
        else
            countMap.put(o, maxCount - 1);
        if (set.size() == 0) {
            stackMap.remove(maxCount);
            --maxCount;
        }
        return o;
    }

    public T top() {
        if (maxCount == 0)
            return null;
        return stackMap.get(maxCount).iterator().next();
    }
}
My last post had an error.
Solution O(n) in the worst case. The algorithm finds the start and end of the rect that lies below each bar b with b's height.
1. Go left to right, adding bars to a list while the height is increasing.
2. When the height decreases at bar i+1, remove from the tail of the list the bars whose height is greater than bar i+1's. Each of these bars forms a rectangle starting at its own position and ending at i.
3. On reaching the last bar, calculate the rectangles for all bars remaining in the list.
Since we remove bars from the list, each bar is added once and removed once. The problem is that, since the algorithm finds rects starting at bar b and extending them to the right, the start point may be wrong. The real trick is realizing that if you can find the end point going left to right, you can apply the same logic to find the start point going right to left.
4. Execute the analogous steps 1-3 going right to left. Since all rects already have the right end, it is just a matter of updating the start point.
Space and time: O(n) (always)
import java.util.ArrayList;
import java.util.List;

public class MaxRectangle {
    public static class Rect {
        public int start;
        public int end;
        public int height;

        public int area() {
            return height * (end - start + 1);
        }
    }

    private Rect[] rects;
    private int[] heights;

    public Rect findMaxArea(int[] heights) {
        this.heights = heights;
        rects = new Rect[heights.length];
        // find rects left to right
        List<Integer> updateList = new ArrayList<Integer>();
        for (int i = 0; i < heights.length; ++i) {
            updateList.add(i);
            if (i == (heights.length - 1)) {
                createRectsLR(updateList, i, -1);
            } else if (heights[i] > heights[i + 1]) {
                createRectsLR(updateList, i, heights[i + 1]);
            }
        }
        // go right to left and extend previous rects' starts if needed
        for (int i = heights.length - 1; i >= 0; --i) {
            updateList.add(i);
            if (i == 0) {
                updateRectsRL(updateList, i, -1);
            } else if (heights[i] > heights[i - 1]) {
                updateRectsRL(updateList, i, heights[i - 1]);
            }
        }
        Rect max = null;
        for (Rect rect : rects)
            if (max == null || max.area() < rect.area())
                max = rect;
        return max;
    }

    private void createRectsLR(List<Integer> updateList,
            int end, int cutMaxHeight) {
        while (updateList.size() > 0
                && heights[updateList.get(updateList.size() - 1)] > cutMaxHeight) {
            int i = updateList.remove(updateList.size() - 1);
            rects[i] = new Rect();
            rects[i].start = i;
            rects[i].end = end;
            rects[i].height = heights[i];
        }
    }

    private void updateRectsRL(List<Integer> updateList,
            int start, int cutMaxHeight) {
        while (updateList.size() > 0
                && heights[updateList.get(updateList.size() - 1)] > cutMaxHeight) {
            int i = updateList.remove(updateList.size() - 1);
            rects[i].start = start;
        }
    }
}
Solution similar to Sriram's and the one above, but with code:
1. p of staying alive outside the board: 0.0
2. p of staying alive for k=0 inside the board: 1.0
3. p of staying alive for k>0: the sum of p(k-1)/8 over each of the 8 moves (p of staying alive times p of that move happening)
Time and space: O(BoardSize² * k), the number of entries in the alive matrix
public class KStayingAlive {
private int boardSize = 8;
private Double[][][] alive;
public double pAlive(int r, int c, int k) {
alive = new Double[boardSize][boardSize][k];
return calcPAlive(r, c, k);
}
private double calcPAlive(int r, int c, int k) {
if (r < 0 || r >= boardSize || c < 0 || c >= boardSize)
return 0.0;
if (k == 0)
return 1.0;
// K is one based
if (alive[r][c][k-1] != null)
return alive[r][c][k-1];
double p = 0.0;
p += calcPAlive(r-2, c-1, k-1)/8;
p += calcPAlive(r-2, c+1, k-1)/8;
p += calcPAlive(r+2, c-1, k-1)/8;
p += calcPAlive(r+2, c+1, k-1)/8;
p += calcPAlive(r-1, c-2, k-1)/8;
p += calcPAlive(r+1, c-2, k-1)/8;
p += calcPAlive(r-1, c+2, k-1)/8;
p += calcPAlive(r+1, c+2, k-1)/8;
return alive[r][c][k-1] = p;
}
}
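A quick sanity check for the recursion's first step (class and method names here are my own): from a corner only 2 of the 8 knight moves stay on an 8×8 board, so for k=1 the survival probability should be 2/8; from the center all 8 moves stay on the board.

```java
public class KnightMoveCheck {
    // The 8 knight offsets, same as the recursive calls above.
    static final int[][] MOVES = {
        {-2, -1}, {-2, 1}, {2, -1}, {2, 1},
        {-1, -2}, {1, -2}, {-1, 2}, {1, 2}
    };

    // Probability of staying alive after exactly one move from (r, c).
    public static double pAliveOneMove(int r, int c) {
        int valid = 0;
        for (int[] m : MOVES)
            if (r + m[0] >= 0 && r + m[0] < 8 && c + m[1] >= 0 && c + m[1] < 8)
                ++valid;
        return valid / 8.0;
    }

    public static void main(String[] args) {
        System.out.println(pAliveOneMove(0, 0)); // corner: 2 of 8 moves -> 0.25
        System.out.println(pAliveOneMove(4, 4)); // center: 8 of 8 moves -> 1.0
    }
}
```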
Code
- lucas.eustaquio August 01, 2011
As stated on Wikipedia, n² is the fastest (except for fancier algorithms using bit manipulation).
Here is a faster version of 3sum.
Time: O(n2)
Space: O(1) - without the clone, using an in-place sort
Space: O(n) - with the clone or a non-in-place sort
public static void treeSum2(int[] array, int sum) {
array = array.clone();
Arrays.sort(array);
for (int l=0; l<(array.length-2);++l) {
int m = l+1;
int u = array.length-1;
while (m < u) {
int curSum = array[l] + array[m] + array[u];
if (curSum == sum) {
System.out.println("Triplets treeSum2: "
+ array[l] + " " + array[m] + " " + array[u]);
return;
}
else if (curSum > sum)
--u;
else
++m;
}
}
}
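The same two-pointer scan, repackaged as a sketch (names are my own) to return the triplet instead of printing it, which makes the O(n²) loop easy to verify:

```java
import java.util.Arrays;

public class ThreeSumDemo {
    // Two-pointer scan over a sorted copy; returns one matching triplet or null.
    public static int[] findTriplet(int[] array, int sum) {
        int[] a = array.clone();
        Arrays.sort(a);
        for (int l = 0; l < a.length - 2; ++l) {
            int m = l + 1, u = a.length - 1;
            while (m < u) {
                int cur = a[l] + a[m] + a[u];
                if (cur == sum) return new int[] {a[l], a[m], a[u]};
                else if (cur > sum) --u; // too big: shrink from the top
                else ++m;                // too small: grow from the bottom
            }
        }
        return null;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(findTriplet(new int[] {1, 4, 45, 6, 10, 8}, 22)));
    }
}
```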
A more compact version without recursion, using a bottom-up strategy:
public class TriangleMaxSum {
private Integer[][] maxSum;
public int maxSum(int[][] triangle) {
maxSum = new Integer[triangle.length][];
for (int r=triangle.length-1;r>=0;--r) {
maxSum[r] = new Integer[triangle[r].length];
for (int c=0; (c < triangle[r].length); ++c) {
maxSum[r][c] = triangle[r][c];
if (r != (triangle.length-1)) {
maxSum[r][c] += Math.max(maxSum[r+1][c], maxSum[r+1][c+1]);
}
}
}
return maxSum[0][0];
}
}
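The bottom-up pass above boils down to a single function; here is a condensed sketch (names are my own) run on a tiny triangle:

```java
public class TriangleMaxSumDemo {
    // Collapse the triangle from the bottom row up: each cell keeps
    // its own value plus the better of its two children.
    public static int maxPathSum(int[][] t) {
        int[][] s = new int[t.length][];
        for (int r = t.length - 1; r >= 0; --r) {
            s[r] = t[r].clone();
            if (r < t.length - 1)
                for (int c = 0; c < t[r].length; ++c)
                    s[r][c] += Math.max(s[r + 1][c], s[r + 1][c + 1]);
        }
        return s[0][0];
    }

    public static void main(String[] args) {
        // 1 / 2 3 / 4 5 6: best path is 1 + 3 + 6 = 10
        System.out.println(maxPathSum(new int[][] {{1}, {2, 3}, {4, 5, 6}}));
    }
}
```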
Memo and alphayoung are right.
My previous answer was:
f(n,k) = f(n-2, k-1) + f(n-3, k-1) ... + f(2(k-1)-1, k-1) (a)
So we have:
f(n-1,k) = f(n-3, k-1) + f(n-4, k-1) ... f(2(k-1)-1, k-1) (b)
Substituting (b) into (a):
f(n,k) = f(n-2, k-1) + f(n-1,k).
The code now is:
public class ChooseKNotConsecutive {
private Integer[][] cache;
public int f(int n, int k) {
cache = new Integer[n][k];
return f2(n, k);
}
private int f2(int n, int k) {
if (n < (2*k - 1)) return 0;
if (k == 1) return n;
if (cache[n-1][k-1] == null)
cache[n-1][k-1] = f2(n-2, k-1) + f2(n-1, k);
return cache[n-1][k-1];
}
}
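A quick check of the recurrence against the known closed form C(n-k+1, k) for choosing k non-consecutive items out of n (class and helper names are my own):

```java
public class ChooseKNotConsecutiveCheck {
    // f(n, k) straight from the recurrence above (no cache; small inputs only).
    public static int f(int n, int k) {
        if (n < 2 * k - 1) return 0;
        if (k == 1) return n;
        return f(n - 2, k - 1) + f(n - 1, k);
    }

    // C(n, k) computed multiplicatively; exact at each step.
    public static long binomial(int n, int k) {
        long r = 1;
        for (int i = 1; i <= k; ++i) r = r * (n - k + i) / i;
        return r;
    }

    public static void main(String[] args) {
        // 2 non-consecutive picks out of 4: (1,3), (1,4), (2,4)
        System.out.println(f(4, 2));       // 3
        System.out.println(binomial(3, 2)); // C(4-2+1, 2) = 3
    }
}
```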
Same logic as above, but building the pyramid instead of calculating each cup's children indices.
public static double calculateWaterVol2(double c, double l, int kth) {
double[][] pyramid = buildCupPyramid(kth);
pyramid[0][0] = l;
for (int k=1, h = 0; h < pyramid.length; ++h) {
for (int i=0; i<pyramid[h].length;++i, ++k) {
if (pyramid[h][i] > c) {
if (h < pyramid.length - 1) {
double over = (pyramid[h][i]-c)/2;
pyramid[h+1][i] += over;
pyramid[h+1][i+1] += over;
}
pyramid[h][i] = c;
}
if (k == kth)
return pyramid[h][i];
}
}
return 0.0;
}
private static double[][] buildCupPyramid(int kth) {
int n = 0;
int h = 0; // 1, 2, 3, 4 ...
while (n < kth) {
++h;
n += h;
}
double[][] pyramid = new double[h][];
for (int i=0; i<pyramid.length;++i) {
pyramid[i] = new double[i+1];
}
return pyramid;
}
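As a side note, the height loop in buildCupPyramid can be replaced by the closed form from the arithmetic-progression identity n = h(h+1)/2, solved with the quadratic formula (a sketch; the method name is my own):

```java
public class PyramidHeight {
    // Height of the smallest pyramid containing cup k (one-based):
    // smallest h with h(h+1)/2 >= k, i.e. h = ceil((-1 + sqrt(1 + 8k)) / 2).
    public static int heightForCup(int k) {
        return (int) Math.ceil((-1 + Math.sqrt(1 + 8.0 * k)) / 2);
    }

    public static void main(String[] args) {
        // cups 1 | 2 3 | 4 5 6 | 7 ...
        System.out.println(heightForCup(1)); // 1
        System.out.println(heightForCup(6)); // 3
        System.out.println(heightForCup(7)); // 4
    }
}
```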
Solved it with O(k).
The idea is simple: pour l into cup 1, split the excess between its children if it overflows, and repeat for the subsequent cups until cup k is reached.
The tricky part is finding the children. A cup's first child is the same as the previous cup's second child when the height does not change; the second child is the first + 1.
For the given example, the children are found in the following order (see how children repeat when the height repeats; kth is one-based):
Height: 1    2    2    3    3    3
Cup   : 1    2    3    4    5    6
Child : 2,3  4,5  5,6  7,8  8,9  9,10
public static double calculateWaterVol(double c, double l, int kth /* one based */) {
int [] height = new int[kth];
double[] water = new double[kth];
water[0] = l;
int childIndex = 0;
for (int i=0; i<(kth-1);++i) {
double over = 0.0;
if (water[i] > c) {
over = (water[i] - c)/2;
water[i] = c;
}
if (i == 0 || height[i-1] < height[i]) {
++childIndex;
}
if (childIndex >= kth) break;
height[childIndex] = height[i]+1;
water[childIndex] += over;
++childIndex;
if (childIndex >= kth) break;
height[childIndex] = height[i]+1;
water[childIndex] += over;
}
return water[kth-1] > c ? c : water[kth-1];
}
Solving using DP (Same solution as coderanjiy666, but with caching):
Space and time are O(n), where n is the number of elements in the triangle, since the best path from each element is calculated just once.
public class TriangleMaxSum {
private int[][] triangle;
private Integer[][] maxSum;
public TriangleMaxSum(int[][] triangle) {
this.triangle = triangle;
maxSum = new Integer[triangle.length][];
for (int i=0; i<maxSum.length; ++i) {
maxSum[i] = new Integer[triangle[i].length];
}
}
public int maxSum() {
return maxSum(0,0);
}
private int maxSum(int r, int c) {
if (r >= maxSum.length)
return 0;
if (maxSum[r][c] != null)
return maxSum[r][c];
maxSum[r][c] = triangle[r][c] + Math.max(maxSum(r+1, c), maxSum(r+1, c+1));
return maxSum[r][c];
}
}
public class TriangleMaxSumTest {
private int[][] createRandomTriangle(int rows) {
Random rand = new Random();
int[][] triangle = new int[rows][];
for (int cols = 1, r=0; r<rows; ++r, ++cols) {
triangle[r] = new int[cols];
for (int c = 0; c<cols; ++c) {
triangle[r][c] = rand.nextInt(100);
}
}
return triangle;
}
@Test
public void testMaxSum() {
int[][] triangle1 = createRandomTriangle(1000);
TriangleMaxSum maxSum = new TriangleMaxSum(triangle1);
System.out.println("Max: " + maxSum.maxSum());
}
}
Solution using dynamic programming.
The solution is as follows (the calc(int, int) function):
1. If a best solution was already found, use it.
2. Select one color that is not forbidden.
3. Solve the problem for n-1 houses.
4. Get the best result by adding the best result for n-1 houses to the current cost.
5. Check whether it becomes the current minimum once the current color's cost is added.
6. Update the minimum if needed.
7. Save the best result.
Time complexity: O(Colors²*Houses), since each of the Colors*Houses memoized entries tries every color.
Space complexity: O(Colors*Houses).
This is easy to see once you realize that the runtime cost is the time to fill the best-solution matrix, which has Colors*Houses entries.
By the way, for the problem stated above, the solution is:
Colors: [1, 0, 2, 1, 0, 1] (G, R, B, G, R, G)
Cost: 18
public class PaintHouse {
public class BestResult {
public int[] colors;
public int cost;
public BestResult(int[] colors, int cost) {
this.colors = colors;
this.cost = cost;
}
}
private int[][] cost;
private BestResult[][] best;
public PaintHouse(int[][] cost) {
this.cost = cost;
this.best = new BestResult[cost.length][cost[0].length];
}
public BestResult calc() {
return calc(cost[0].length, -1);
}
private BestResult calc(int n, int forbiden) {
if (forbiden >= 0 && best[forbiden][n-1] != null)
return best[forbiden][n-1];
BestResult min = null;
for (int c = 0, h = cost[0].length - n; c< cost.length; ++c) {
if (c != forbiden) {
if (n == 1) {
if (min == null || min.cost > cost[c][h]) {
min = new BestResult(new int[] {c}, cost[c][h]);
}
} else {
BestResult next = calc(n-1, c);
if (min == null
|| min.cost > (next.cost + cost[c][h])) {
min = new BestResult(new int[next.colors.length+1],
next.cost + cost[c][h]);
min.colors[0] = c;
System.arraycopy(next.colors, 0,
min.colors, 1, next.colors.length);
}
}
}
}
if (forbiden >= 0)
best[forbiden][n-1] = min;
return min;
}
}
public class PaintHouseTest {
@Test
public void testCalc() {
int[][] cost = new int[][] {
new int[] { 7, 3, 8, 6, 1, 2},
new int[] { 5, 6, 7, 2, 4, 3},
new int[] {10, 1, 4, 9, 7, 6}
};
PaintHouse calc = new PaintHouse(cost);
BestResult bestResult = calc.calc();
System.out.println("Colors: " + Arrays.toString(bestResult.colors));
System.out.println("Cost: " + bestResult.cost);
assertEquals(bestResult.cost, 18);
}
}
That's exactly what was implemented.
- lucas.eustaquio November 12, 2012
The idea of the algorithm is simple. Put all the water in cup 1.
Then for each cup, 1 to n (each visited only once):
1. If it holds more water than its capacity, remove the exceeding water and pour half of it into each of the 2 cups directly below.
The height and childIndex variables are just there to figure out which cups those are. Run it under a debugger and you will see.
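A minimal simulation of that rule over a 3-row pyramid (example values c=1.0, l=4.0, names are my own): cup 1 overflows 3.0, each child gets 1.5, and each of those overflows 0.5, which is split again.

```java
public class CupPourDemo {
    // Pour l into the top cup of a pyramid with the given number of rows,
    // splitting any overflow evenly between the two cups directly below.
    public static double[][] pour(double c, double l, int rows) {
        double[][] p = new double[rows][];
        for (int h = 0; h < rows; ++h) p[h] = new double[h + 1];
        p[0][0] = l;
        for (int h = 0; h < rows; ++h)
            for (int i = 0; i <= h; ++i)
                if (p[h][i] > c) {
                    double over = (p[h][i] - c) / 2; // half of the excess to each child
                    if (h + 1 < rows) {
                        p[h + 1][i] += over;
                        p[h + 1][i + 1] += over;
                    }
                    p[h][i] = c;
                }
        return p;
    }

    public static void main(String[] args) {
        // c=1, l=4: bottom row ends up with 0.25, 0.5, 0.25
        System.out.println(java.util.Arrays.deepToString(pour(1.0, 4.0, 3)));
    }
}
```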