Google Interview Question
Software Engineer / Developer
Country: India
Interview Type: Phone Interview
You assumed a lot of things.
- This doesn't allow for duplicates, even though that restriction was nowhere mentioned.
- You are using almost double the space actually required.
- In Remove you are changing the order of the elements, which ideally you shouldn't be doing.
My assumption: it should be a simple array, where Insert and GetRandomElement are O(1) and Remove is O(n), as sketched below.
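A minimal sketch of that plain-array variant, assuming ints and a fixed capacity (the class and method names are my own):

import java.util.NoSuchElementException;
import java.util.Random;

class SimpleArrayBag
{
    private final int[] a;
    private int size;
    private final Random r = new Random();

    public SimpleArrayBag(int capacity)
    {
        a = new int[capacity];
    }

    // O(1): append at the end; duplicates allowed, order preserved.
    public void insert(int n)
    {
        a[size++] = n;
    }

    // O(n): linear scan, then shift the tail left to preserve order.
    public void remove(int n)
    {
        for(int i = 0; i < size; i++) {
            if(a[i] == n) {
                System.arraycopy(a, i + 1, a, i, size - i - 1);
                size--;
                return;
            }
        }
    }

    // O(1): uniform pick over the occupied prefix.
    public int getRandomElement()
    {
        if(size == 0) throw new NoSuchElementException();
        return a[r.nextInt(size)];
    }
}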
Second anonymous: that argument makes no sense. A binary tree storing ints will typically take three times as much storage, since each node holds the int plus pointers to its left and right children. By your logic one should always stick to arrays, because anything else uses more storage than needed.
Replace map with multimap and you can handle duplicates.
If ordering of elements is not important, this could be one solution.
Using a hash map can provide the following solution:
insert(n):
hash n to an index i and add an entry in O(1);
duplicates can be handled by appending to the linked list pointed to by that index
remove(n):
hash n to index i and remove its entry from the linked list pointed to by index i
getRandomElement():
get the number of buckets, generate a random index between 0 and that number, and iterate over the buckets from that index until a non-empty entry is found
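A rough sketch of that chained-hash idea (the bucket count and all names are assumptions). Note that getRandomElement as described is neither uniform nor strictly O(1), since it may scan past empty buckets:

import java.util.LinkedList;
import java.util.NoSuchElementException;
import java.util.Random;

class ChainedHashBag
{
    private final LinkedList<Integer>[] buckets;
    private final Random r = new Random();

    @SuppressWarnings("unchecked")
    public ChainedHashBag(int bucketCount)
    {
        buckets = new LinkedList[bucketCount];
        for(int i = 0; i < bucketCount; i++) buckets[i] = new LinkedList<Integer>();
    }

    private int index(int n)
    {
        return Math.floorMod(n, buckets.length);
    }

    // insert(n): hash to a bucket and append; duplicates just chain. O(1).
    public void insert(int n)
    {
        buckets[index(n)].add(n);
    }

    // remove(n): hash to the bucket and unlink one occurrence.
    // Expected O(1) while chains stay short.
    public void remove(int n)
    {
        buckets[index(n)].remove(Integer.valueOf(n));
    }

    // getRandomElement(): pick a random bucket, then scan forward until a
    // non-empty one is found, as described above.
    public int getRandomElement()
    {
        int start = r.nextInt(buckets.length);
        for(int i = 0; i < buckets.length; i++) {
            LinkedList<Integer> b = buckets[(start + i) % buckets.length];
            if(!b.isEmpty()) return b.get(r.nextInt(b.size()));
        }
        throw new NoSuchElementException();
    }
}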
This is not O(1) for insertion and deletion: a (tree-based) map takes O(log N) to search, insert, and delete.
Implementation using an array and a HashMap that handles duplicates:
ideone.com/rlrIuy
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.NoSuchElementException;
import java.util.Random;

class RandomMap
{
    private int size;
    private int[] a;                          // backing array of elements
    private Map<Integer, List<Integer>> map;  // value -> its indices in a (handles duplicates)
    private Random r;

    public RandomMap(int capacity)
    {
        size = 0;
        a = new int[capacity];
        map = new HashMap<Integer, List<Integer>>();
        r = new Random();
    }

    public int size()
    {
        return size;
    }

    // TODO: resize when the backing array is full
    public void add(int n)
    {
        int index = size++;
        a[index] = n;
        if(map.containsKey(n)) map.get(n).add(index);
        else {
            List<Integer> l = new ArrayList<Integer>();
            l.add(index);
            map.put(n, l);
        }
    }

    public void remove(int n)
    {
        if(!map.containsKey(n)) return;
        List<Integer> l = map.get(n);
        int p = l.get(0);
        l.remove(0);
        if(l.isEmpty()) map.remove(n);   // bug fix: keep containsKey accurate once all copies are gone
        if(p != size - 1) {
            // Fill the hole with the last element and update that element's index entry.
            int m = a[size - 1];
            a[p] = m;
            List<Integer> l2 = map.get(m);
            for(int i = 0; i < l2.size(); i++) {
                if(l2.get(i) == size - 1) {
                    l2.set(i, p);
                    break;
                }
            }
        }
        size--;
    }

    public int getRandom()
    {
        if(size == 0) throw new NoSuchElementException();
        return a[r.nextInt(size)];   // bug fix: nextInt(size) covers indices 0..size-1
    }
}
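A quick, hypothetical driver for the class above:

public class Demo
{
    public static void main(String[] args)
    {
        RandomMap rm = new RandomMap(16);
        rm.add(5);
        rm.add(5);   // duplicates are fine: both array slots are tracked
        rm.add(9);
        rm.remove(5);                          // removes one occurrence of 5
        System.out.println(rm.size());         // 2
        System.out.println(rm.getRandom());    // 5 or 9, uniformly at random
    }
}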
I think the problem is intended to test your understanding of the time complexity of the operations when you implement it with different data structures: vector, linked list, binary tree, heap, set, bins. If you can compare the different time complexities, I believe that is what the interviewer has in mind. The best example comes from Programming Pearls, 2nd edition (the column on searching), where the author implements a search structure supporting insert, find, and size using the above data structures and compares the time complexities in detail.
I was asked this question at Amazon. The only difference was having to solve it with existing Java APIs. I managed to solve it with some difficulty. Nevertheless, they did not hire me :(
I am making an assumption here: there will not be any hash collisions, i.e., the hash buckets are large enough.
RandomMap will contain a List and a HashMap. The List holds every element that is inserted, so getRandom is easy from the list, as noted in the points above.
The HashMap can be used to retrieve elements with get(key) and put(key, value). Problem: the value is duplicated in the List and the HashMap. Instead, the value can be the index of the array element; then List.get(value) will give the real element.
What if there ARE duplicates? Then there will be hash collisions for sure, and get will have to iterate over the list of colliding entries. Not O(1)!
So another level of indirection is used. Instead of the index value, the HashMap stores another HashMap<Integer, Integer>: the first Integer is a running count incremented with each addition; the second Integer is the array index.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;

public class RandomMap<K, V>
{
    private List<V> _entryList = new ArrayList<>();                         // every inserted value
    private Map<K, HashMap<Integer, Integer>> _entryMap = new HashMap<>();  // key -> (running count -> list index)
    private Random _random = new Random();

    public int size()
    {
        return _entryList.size();
    }

    public V get(Object key)
    {
        HashMap<Integer, Integer> randomMap = _entryMap.get(key);
        return _entryList.get(randomMap.get(0));
    }

    public void put(K key, V value)
    {
        _entryList.add(value);
        HashMap<Integer, Integer> randomMap = _entryMap.get(key);
        if(randomMap == null)
        {
            randomMap = new HashMap<>();
            _entryMap.put(key, randomMap);   // bug fix: the new inner map must be registered
        }
        randomMap.put(randomMap.size(), _entryList.size() - 1);
    }

    public void remove(Object key)
    {
        HashMap<Integer, Integer> randomMap = _entryMap.get(key);
        if(randomMap == null) return;
        // Remove the highest running count so the remaining counts stay 0..size-1.
        Integer indexToBeRemoved = randomMap.remove(randomMap.size() - 1);
        if(randomMap.isEmpty()) _entryMap.remove(key);
        int lastIndex = _entryList.size() - 1;
        // Overwrite the removed slot with the last element, then drop the tail
        // (set, not add: add would shift every later element right).
        // Caveat: the moved element's own inner map still points at lastIndex;
        // fixing that would need a reverse (index -> key) mapping this design omits.
        _entryList.set(indexToBeRemoved, _entryList.get(lastIndex));
        _entryList.remove(lastIndex);
    }

    public V getRandom()
    {
        return _entryList.get(_random.nextInt(_entryList.size()));
    }
}
This could be further improved with one more change. Why? I will leave it to your imagination :)
private Map<K, RandomMap<Integer,Integer>> _entryMap = new HashMap<>();
Have an array and a map of (int,int).
Insert(n)
{
Add n to the end of the array if it is not already in the map.
Update the map with key = n and value = its index in the array.
}
Remove(n)
{
Find n in the map; let its index be i.
Remove n from the map.
Swap element i with the last element in the array.
Update the moved last element's new index in the map.
}
GetRandomElement()
{
Generate a random number between 0 and size-1.
Return the element at that index.
}
If you choose the map to be a hash map, all operations are O(1).
If you choose the map to be a binary tree, Insert/Remove are O(log n).
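A compact sketch of this array-plus-hash-map scheme, assuming ints and no duplicates as described above (class and field names are my own):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.NoSuchElementException;
import java.util.Random;

class RandomizedSet
{
    private final List<Integer> a = new ArrayList<>();          // elements, densely packed
    private final Map<Integer, Integer> pos = new HashMap<>();  // value -> index in a
    private final Random r = new Random();

    // Insert(n): append to the array if not already present; record its index.
    public void insert(int n)
    {
        if(pos.containsKey(n)) return;
        pos.put(n, a.size());
        a.add(n);
    }

    // Remove(n): overwrite slot i with the last element, fix that element's
    // index in the map, then drop the tail. No shifting, so O(1).
    public void remove(int n)
    {
        Integer i = pos.remove(n);
        if(i == null) return;
        int last = a.size() - 1;
        int m = a.get(last);
        a.set(i, m);
        a.remove(last);
        if(i != last) pos.put(m, i);
    }

    // GetRandomElement(): uniform index into the occupied range.
    public int getRandomElement()
    {
        if(a.isEmpty()) throw new NoSuchElementException();
        return a.get(r.nextInt(a.size()));
    }
}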