emalaj23
Didn't mean to brag, just suggesting that sometimes it's easier to try to code it than to explain it in text.
My solution is very similar to his, in that we keep a map where the key and value represent the endpoints of a range of consecutive numbers, except that I combine the ranges of consecutive numbers before and after the inserted value. So when inserting 4 with existing entries (2,3) (3,2) (5,7) (7,5), all of that is simplified to (2,7) (7,2).
O(n) time & space, as simple/short as possible
public static List<Integer[]> compute(List<Integer> ints, int sum){
    Set<Integer> intsNeeded = new HashSet<Integer>();
    List<Integer[]> answer = new ArrayList<Integer[]>();
    for(Integer num : ints){
        if(intsNeeded.contains(num))
            answer.add(new Integer[]{num, sum - num});  // num completes a pair
        else
            intsNeeded.add(sum - num);  // remember the complement we still need
    }
    return answer;
}
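A quick sanity check of the one-pass pair-sum method (self-contained copy; the class name and sample input are just for illustration):

```java
import java.util.*;

public class PairSumDemo {
    // One-pass pair sum: remember each complement we still need;
    // when a number we need shows up, emit the pair.
    public static List<Integer[]> compute(List<Integer> ints, int sum){
        Set<Integer> intsNeeded = new HashSet<Integer>();
        List<Integer[]> answer = new ArrayList<Integer[]>();
        for(Integer num : ints){
            if(intsNeeded.contains(num))
                answer.add(new Integer[]{num, sum - num});
            else
                intsNeeded.add(sum - num);
        }
        return answer;
    }

    public static void main(String[] args){
        // 4+6 and 3+7 both sum to 10; each pair is reported once,
        // at the moment its second member is seen.
        for(Integer[] p : compute(Arrays.asList(3, 4, 6, 7, 1), 10))
            System.out.println(p[0] + " + " + p[1] + " = 10");
    }
}
```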

emalaj23
June 15, 2013 O(n) solution traversing the list only *once*
public static Set<Integer> compute(Set<Integer> ints){
    Map<Integer, Integer> map = new HashMap<Integer, Integer>();
    // produce map of ranges of consecutive numbers:
    // each endpoint of a range maps to the opposite endpoint
    for(Integer key : ints){
        if(map.containsKey(key)) continue;
        // read both neighboring endpoints before writing, so the first
        // put can't clobber a value the second one still needs
        int left  = map.containsKey(key - 1) ? map.get(key - 1) : key;
        int right = map.containsKey(key + 1) ? map.get(key + 1) : key;
        map.put(key, key);
        map.put(left, right);
        map.put(right, left);
        // stale entries for interior values may remain, but they always
        // describe a shorter span than the live endpoints, so the scan
        // below still finds the longest range
    }
    // find longest range (span == number of elements - 1)
    int longest = -1, start = 0;
    for(Entry<Integer, Integer> entry : map.entrySet()){
        int span = Math.abs(entry.getKey() - entry.getValue());
        if(span > longest){
            longest = span;
            start = Math.min(entry.getKey(), entry.getValue());
        }
    }
    // create the set to return
    Set<Integer> answer = new HashSet<Integer>();
    for(int i = start; i <= start + longest; i++)
        answer.add(i);
    return answer;
}
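A small usage sketch of the endpoint-map idea (self-contained copy; the input values are made up):

```java
import java.util.*;
import java.util.Map.Entry;

public class LongestRunDemo {
    // Each endpoint of a run of consecutive numbers maps to the
    // opposite endpoint; merging happens as each value is inserted.
    public static Set<Integer> compute(Set<Integer> ints){
        Map<Integer, Integer> map = new HashMap<Integer, Integer>();
        for(Integer key : ints){
            if(map.containsKey(key)) continue;
            // read both neighboring endpoints before writing anything
            int left  = map.containsKey(key - 1) ? map.get(key - 1) : key;
            int right = map.containsKey(key + 1) ? map.get(key + 1) : key;
            map.put(key, key);
            map.put(left, right);
            map.put(right, left);
        }
        int longest = -1, start = 0;
        for(Entry<Integer, Integer> entry : map.entrySet()){
            int span = Math.abs(entry.getKey() - entry.getValue());
            if(span > longest){
                longest = span;
                start = Math.min(entry.getKey(), entry.getValue());
            }
        }
        Set<Integer> answer = new HashSet<Integer>();
        for(int i = start; i <= start + longest; i++)
            answer.add(i);
        return answer;
    }

    public static void main(String[] args){
        Set<Integer> in = new HashSet<Integer>(Arrays.asList(100, 4, 200, 1, 3, 2));
        // the longest run of consecutive values is 1..4
        System.out.println(new TreeSet<Integer>(compute(in))); // prints [1, 2, 3, 4]
    }
}
```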

emalaj23
June 15, 2013 @oOZz
It's important to be aware of the context of the problem: a dictionary normally contains hundreds of thousands of words, and a standard application for this would be Scrabble, i.e. determining what words can be produced from 7 letters.
In this light, looping through, say, 200,000 dictionary words and checking whether each can be made with the given letters is much slower than producing the 2^n possible letter groups and checking those against the dictionary (that's 2^7 = 128 lookups rather than 200,000).
Here's what I would do:
step 0: preprocess the dictionary so that, rather than being a set of words, it is a map from a group of letters in ascending order (e.g. "word" becomes "dorw") to the set of words that can be made with that key (this only needs to be done once, at the very beginning)
then, for each set of 7 letters:
step 1: compute the powerset (set of all subsets) of the 7 letters; this represents all combinations and is only 128 groups of letters
step 2: sort each group so its characters are in ascending order (the key in our map)
step 3: look up each key in the dictionary map; the value is all the words that can be made with that group of letters. Throw these into a final set, which is your answer.
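The steps above can be sketched as follows (the tiny sample dictionary and rack are made up for illustration):

```java
import java.util.*;

public class AnagramLookup {
    // Step 0: index each dictionary word under its letters sorted ascending.
    static Map<String, Set<String>> buildIndex(Collection<String> dictionary){
        Map<String, Set<String>> index = new HashMap<String, Set<String>>();
        for(String word : dictionary){
            char[] c = word.toCharArray();
            Arrays.sort(c);
            String key = new String(c);
            if(!index.containsKey(key)) index.put(key, new HashSet<String>());
            index.get(key).add(word);
        }
        return index;
    }

    // Steps 1-3: enumerate all 2^n subsets of the rack, sort each subset's
    // letters to form a key, and collect every dictionary hit.
    static Set<String> wordsFrom(String rack, Map<String, Set<String>> index){
        Set<String> answer = new HashSet<String>();
        int n = rack.length();
        for(int mask = 1; mask < (1 << n); mask++){
            StringBuilder sb = new StringBuilder();
            for(int i = 0; i < n; i++)
                if((mask & (1 << i)) != 0) sb.append(rack.charAt(i));
            char[] c = sb.toString().toCharArray();
            Arrays.sort(c);
            Set<String> hits = index.get(new String(c));
            if(hits != null) answer.addAll(hits);
        }
        return answer;
    }

    public static void main(String[] args){
        Map<String, Set<String>> index =
            buildIndex(Arrays.asList("word", "row", "do", "cat"));
        // "word", "row" and "do" can all be built from the letters w,o,r,d,x
        System.out.println(new TreeSet<String>(wordsFrom("wordx", index))); // [do, row, word]
    }
}
```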
static int findLowestCommonAncestor(final TreeNode node, int first, int second){
    // a node is an ancestor of itself
    if(node.getValue() == first || node.getValue() == second)
        return node.getValue();
    // both targets on the same side of this node: keep descending
    if((node.getValue() < first) == (node.getValue() < second))
        return findLowestCommonAncestor(
            node.getValue() < first ? node.getRight() : node.getLeft(), first, second);
    // targets split left/right here: this node is the LCA
    return node.getValue();
}
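The BST walk can be exercised like this (the TreeNode class here is a minimal stand-in, since the original isn't shown; the sample tree is made up):

```java
public class LcaDemo {
    // Minimal BST node; a hypothetical stand-in for the TreeNode used above.
    static class TreeNode {
        final int value;
        final TreeNode left, right;
        TreeNode(int value, TreeNode left, TreeNode right){
            this.value = value; this.left = left; this.right = right;
        }
        int getValue(){ return value; }
        TreeNode getLeft(){ return left; }
        TreeNode getRight(){ return right; }
    }

    static int findLowestCommonAncestor(final TreeNode node, int first, int second){
        if(node.getValue() == first || node.getValue() == second)
            return node.getValue(); // a node is an ancestor of itself
        if((node.getValue() < first) == (node.getValue() < second))
            // both targets on the same side: keep descending
            return findLowestCommonAncestor(
                node.getValue() < first ? node.getRight() : node.getLeft(), first, second);
        return node.getValue(); // targets split here: this is the LCA
    }

    public static void main(String[] args){
        //        8
        //      /   \
        //     3     10
        //    / \      \
        //   1   6      14
        TreeNode root = new TreeNode(8,
            new TreeNode(3, new TreeNode(1, null, null), new TreeNode(6, null, null)),
            new TreeNode(10, null, new TreeNode(14, null, null)));
        System.out.println(findLowestCommonAncestor(root, 1, 6));  // 3
        System.out.println(findLowestCommonAncestor(root, 6, 14)); // 8
    }
}
```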

emalaj23
May 05, 2013
Wouldn't insertions into a max heap of 100 elements cost you log(100) each, since that's the depth of the tree, for each of n insertions? So the total is O(n log m), where m is the # of most-frequent elements we're trying to find.
- emalaj23 January 30, 2014
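A sketch of that top-m-frequent bound; note it keeps a size-m *min*-heap (evicting the least frequent so far), which is the usual way to get O(n log m) — the class name and sample data are made up:

```java
import java.util.*;

public class TopMFrequent {
    // Count every element, then push the counts through a heap capped at
    // m entries: each offer/poll costs O(log m), so the pass is O(n log m).
    static List<String> topM(List<String> items, int m){
        Map<String, Integer> counts = new HashMap<String, Integer>();
        for(String s : items)
            counts.put(s, counts.containsKey(s) ? counts.get(s) + 1 : 1);
        // min-heap ordered by count; the root is the least frequent survivor
        PriorityQueue<Map.Entry<String, Integer>> heap =
            new PriorityQueue<Map.Entry<String, Integer>>(
                (a, b) -> a.getValue() - b.getValue());
        for(Map.Entry<String, Integer> e : counts.entrySet()){
            heap.offer(e);
            if(heap.size() > m) heap.poll(); // evict the least frequent
        }
        List<String> result = new ArrayList<String>();
        while(!heap.isEmpty()) result.add(heap.poll().getKey());
        Collections.reverse(result); // most frequent first
        return result;
    }

    public static void main(String[] args){
        List<String> data = Arrays.asList("a", "b", "a", "c", "a", "b");
        System.out.println(topM(data, 2)); // [a, b]
    }
}
```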