Interview Question
Country: United States
A hash map is the way to go here; the underlying implementation of the hash map can be an array.
For add:
Hash the value to an index in [0, N) and store it at that position in the array. We also need a collision-resolution technique (e.g. chaining or open addressing).
If the size grows beyond N, rehash the entire data structure into a larger array.
For delete:
Same as above; rehash into a smaller array if the size drops to a quarter of the capacity.
For get:
Lookup is expected O(1) for a hash table.
For getRandom:
Choose a slot uniformly at random between 0 and the current capacity. If that slot holds a value, return it; otherwise pick again. You could optimize the expected number of retries by keeping currentSize bounded relative to the capacity.
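A minimal sketch of this slot-probing idea (assuming an open-addressing table of ints with linear probing and no delete; the names here are illustrative, not from the original post):

```cpp
#include <cstdlib>
#include <functional>
#include <vector>

// Open-addressing hash table of ints. Keeping it at most half full bounds
// both the probe chains and the expected retries in getRandom.
class IntSet {
public:
    explicit IntSet(std::size_t cap = 16) : slots(cap), used(cap, false), size_(0) {}

    void add(int v) {
        if (2 * (size_ + 1) > slots.size()) rehash(2 * slots.size());
        std::size_t i = std::hash<int>{}(v) % slots.size();
        while (used[i]) {
            if (slots[i] == v) return;   // already present
            i = (i + 1) % slots.size();  // linear probing on collision
        }
        slots[i] = v;
        used[i] = true;
        ++size_;
    }

    // Pick random slots until an occupied one turns up. With load factor
    // <= 1/2 this needs about 2 tries in expectation. Precondition: non-empty.
    int getRandom() const {
        for (;;) {
            std::size_t i = std::rand() % slots.size();
            if (used[i]) return slots[i];
        }
    }

private:
    void rehash(std::size_t cap) {
        std::vector<int> old(slots);
        std::vector<bool> oldUsed(used);
        slots.assign(cap, 0);
        used.assign(cap, false);
        size_ = 0;
        for (std::size_t i = 0; i < old.size(); ++i)
            if (oldUsed[i]) add(old[i]);
    }

    std::vector<int> slots;
    std::vector<bool> used;
    std::size_t size_;
};
```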
What would be the definition of the HashMap? I mean, you will have something like this:
public class MyDS
{
private HashMap<K,V> dataStore = new HashMap<K,V>();
......
}
When I was asked this, the interviewer told me that the HashMap solution is overkill,
so what would K and V be? Can you think of something simpler?
I think what the interviewer was looking for is an implementation of a hash map, not just use of the built-in HashMap.
The HashMap collection class comes with a lot of other functionality, and thus a lot of other code; we just need the basic implementation here.
Yes, I agree; I did precisely that in the interview. Afterwards the interviewer told me: "you could just wrap an array in a class and create a growRate method responsible for resizing the internal array, just like HashMap does." Anyway, I was just wondering if there is another approach I missed.
Assumption: you pass the index for get() and delete(), and for getRandom() an index is generated and the value at it is returned.
For this you can use a hash map from an index count to a struct Node *, e.g. map<int, struct Node *>,
where struct Node is a node of a doubly linked list.
Add: insert the element at the head of the linked list, increment the count, and store count -> Node * in the map. O(1).
Delete: look up the Node * in the map by index, and remove the element from both the linked list and the map. O(1).
Get: look up the Node * in the map by index and return node->data. O(1).
GetRandom: generate a random index and follow the same procedure as Get(); if no value is present at that index, you can retry or handle it however the requirements dictate.
Please let me know if this doesn't work. Here is the code:
#include <cstdio>
#include <cstdlib>
#include <ctime>
#include <map>
using namespace std;

struct Node { int data; Node *prev, *next; };   // doubly linked list node
static int cnt = 0;                              // running index, used as the map key
struct Node *NewNode(int d) { struct Node *n = new Node; n->data = d; n->prev = n->next = NULL; return n; }

// Add at the head of the list and record cnt -> node in the map.
void add(map<int,struct Node *> &mymap, int data, struct Node *&head)
{
    struct Node *temp = NewNode(data);
    if (!head)
        head = temp;
    else
    {
        temp->next = head;
        head->prev = temp;
        head = temp;
    }
    cnt++;
    mymap.insert(pair<int,struct Node *>(cnt, temp));
}
int Delete(map<int,struct Node *> &mymap, int index, struct Node *&head)
{
    map<int,struct Node *>::iterator it = mymap.find(index);
    int temp = -1;
    if (it == mymap.end())
        printf("No element present at this index\n");
    else
    {
        struct Node *n = it->second;
        temp = n->data;
        // Unlink from the doubly linked list, updating head if needed.
        if (n->prev)
            n->prev->next = n->next;
        else
            head = n->next;
        if (n->next)
            n->next->prev = n->prev;
        mymap.erase(it);
        delete n;
        printf("Deleted element at this index: %d\n", temp);
    }
    return temp;
}
int Get(map<int,struct Node *> &mymap, int index)
{
    map<int,struct Node *>::iterator it = mymap.find(index);
    int temp = -1;
    if (it == mymap.end())
        printf("No element present at this index\n");
    else
    {
        temp = it->second->data;
        printf("Element present at this index: %d\n", temp);
    }
    return temp;
}
int GetRandom(map<int,struct Node *> &mymap)
{
    int temp = -1;
    if (cnt == 0)
        return temp;
    // Keys run from 1..cnt, so take rand() % cnt + 1. Note srand() should be
    // seeded once at startup, not on every call, or calls made within the
    // same second would all pick the same index.
    int local = rand() % cnt + 1;
    printf("Random index %d\n", local);
    map<int,struct Node *>::iterator it = mymap.find(local);
    if (it == mymap.end())
        printf("No element present at this index\n");   // deleted index: caller may retry
    else
    {
        temp = it->second->data;
        printf("Element present at this index: %d\n", temp);
    }
    return temp;
}
The time complexity of std::map::find is O(log N), since std::map is an ordered tree, not a hash table, so that implementation does not meet the O(1) requirement.
http://www.cplusplus.com/reference/map/map/find/
OK, got it. But I think the algorithm itself is correct; we can use a hash map to implement this. Only a minor change is required.
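The minor change is replacing std::map with std::unordered_map, which is a hash table with average O(1) find/insert/erase. A minimal sketch of the changed lookup, assuming the same Node struct as in the code above:

```cpp
#include <unordered_map>

// Same index -> node scheme, but backed by std::unordered_map, whose
// find/insert/erase are O(1) on average instead of O(log N).
struct Node { int data; Node *prev, *next; };

int Get(std::unordered_map<int, Node *> &mymap, int index) {
    std::unordered_map<int, Node *>::iterator it = mymap.find(index);
    return it == mymap.end() ? -1 : it->second->data;
}
```

The other functions change in the same way: only the container type in the signatures is swapped, the list manipulation stays identical.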
Yeah, a hash map is a good approach, but somewhat complicated: you have to calculate the hash, select a bucket, handle collisions, etc.
I think it's easier to just wrap an array in a class and then provide some logic to grow the array when needed, say using a growRate variable calculated like this:
m_growRate = n / N // N -> number of buckets, n -> number of used buckets
If that load factor is greater than 0.7 (70%), just double the size of the array and copy all the elements over. In that case inserting an element takes O(1) time in most cases, and O(n) in the worst case, i.e. amortized O(1).
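A small sketch of this doubling rule (the function names are illustrative; note the load factor is used buckets over total buckets, n/N):

```cpp
#include <cstddef>

// Grow once the load factor n/N exceeds 0.7, where N is the number of
// buckets and n the number of used buckets.
bool shouldGrow(std::size_t n, std::size_t N) {
    return N == 0 || 10 * n > 7 * N;   // n/N > 0.7, avoiding floating point
}

std::size_t nextCapacity(std::size_t N) {
    return N == 0 ? 1 : 2 * N;         // double, then copy the elements over
}
```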
class Array1
{
    private int maxSize;
    private double[] array2;
    private int top;

    public Array1(int s)   // constructor (the original "arrysize" would not compile)
    {
        maxSize = s;
        array2 = new double[maxSize];
        top = -1;
    }

    public void add(double j)
    {
        if (top + 1 == maxSize)   // grow when full, per the doubling rule above
        {
            maxSize *= 2;
            double[] bigger = new double[maxSize];
            System.arraycopy(array2, 0, bigger, 0, top + 1);
            array2 = bigger;
        }
        array2[++top] = j;
    }

    public double delete()
    {
        return array2[top--];   // caller must not delete from an empty array
    }

    public double get()
    {
        return array2[top];
    }
}
Also, the data structure should grow dynamically; there should be no fixed size.
- vhmj1982 February 28, 2013