Amazon Interview Question
Software Engineer / Developers
Country: India
It is possible if we restrict the problem domain a little. For example, if all of the elements being inserted or deleted fall within a known range, and there are no constraints on memory, an array and a couple of heaps could satisfy this. Note that no element other than the min and max is ever accessed.
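One way to make the restricted-domain idea concrete is direct addressing over a fixed integer range [0, R): insert and delete by value are O(1), and min/max scan the R buckets, which is constant with respect to n whenever R is a fixed constant. A minimal sketch in Java (the thread's sample is C#; class and method names here are illustrative, not from the post):

```java
import java.util.NoSuchElementException;

// Direct-address multiset over a fixed range [0, R).
// insert/delete are O(1); min/max cost O(R), which is
// constant in n when the range R is fixed up front.
class BoundedRangeMultiset {
    private final int[] counts;

    BoundedRangeMultiset(int range) { counts = new int[range]; }

    void insert(int v) { counts[v]++; }

    void delete(int v) {
        if (counts[v] == 0) throw new NoSuchElementException("value not present");
        counts[v]--;
    }

    int min() {  // scan from the low end of the range
        for (int v = 0; v < counts.length; v++)
            if (counts[v] > 0) return v;
        throw new NoSuchElementException("empty");
    }

    int max() {  // scan from the high end of the range
        for (int v = counts.length - 1; v >= 0; v--)
            if (counts[v] > 0) return v;
        throw new NoSuchElementException("empty");
    }
}
```

Calling the scans O(1) only holds because R does not grow with n, which is exactly the restriction on the problem domain.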
Anonymous, the last time I checked, remove-min (or remove-max) is an O(log n) operation for a heap, since you have to sift elements down to restore the data structure's integrity.
Well, counting sort, bucket sort, and radix sort all depend on certain kinds of input data, and the question doesn't say anything about the input data.
Use a Magic Carpet data structure. Be careful, though, as it runs so fast that it gets way too hot.
Sure, I can give you a link.
There is something I want you to do for me before that.
Go to Canada and count all the cows there. Then count the cats and dogs. Divide cats by dogs and take the square root of that number.
After you do the task, you will get the link.
If the elements of the data structure support only comparison-based ordering, then no such data structure can exist, because according to the feature list it would give you a comparison sort with O(n) time complexity:
1> Put the n elements into the data structure one by one.
2> Then extract the max (or min) from it one by one.
Total time complexity: O(2n) = O(n).
But comparison-based sorting has an Ω(n log n) lower bound, so such a data structure cannot exist in this case.
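The reduction in the two steps above can be written out directly. Using Java's PriorityQueue as a stand-in for the hypothetical structure (class and method names here are illustrative): if offer and poll were both O(1), this routine would be an O(n) comparison sort; with a real binary heap each operation costs O(log n), landing exactly on the lower bound.

```java
import java.util.PriorityQueue;

// Sorting by n inserts followed by n remove-min calls.
// O(1) per operation would make this an O(n) comparison sort,
// contradicting the Omega(n log n) lower bound; with a binary
// heap it is O(n log n) instead.
class MinExtractionSort {
    static int[] sort(int[] values) {
        PriorityQueue<Integer> pq = new PriorityQueue<>();
        for (int v : values)
            pq.offer(v);                    // n inserts
        int[] sorted = new int[values.length];
        for (int i = 0; i < sorted.length; i++)
            sorted[i] = pq.poll();          // n remove-min calls
        return sorted;
    }
}
```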
I think he is right. If the interviewer wants constant time, then he/she should be ready to spend extra memory on this. The hash map would point into a vector so that delete can be done in constant time.
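The hash-map-into-vector trick mentioned here is the standard swap-with-last deletion: the map stores each value's index, and deleting moves the vector's last element into the vacated slot so no shifting is needed. A minimal Java sketch under the assumption of distinct values (names are illustrative; the thread's sample is C#):

```java
import java.util.ArrayList;
import java.util.HashMap;

// O(1) insert/delete/lookup via an array plus an index map.
// Delete swaps the last element into the deleted slot, so the
// array never needs an O(n) shift. Assumes distinct values.
class ConstantTimeSet {
    private final ArrayList<Integer> items = new ArrayList<>();
    private final HashMap<Integer, Integer> indexOf = new HashMap<>();

    void insert(int v) {
        if (indexOf.containsKey(v)) return;
        indexOf.put(v, items.size());
        items.add(v);
    }

    boolean contains(int v) { return indexOf.containsKey(v); }

    void delete(int v) {
        Integer i = indexOf.remove(v);
        if (i == null) return;
        int last = items.remove(items.size() - 1);  // O(1): pop the tail
        if (i < items.size()) {                     // deleted item was not the tail
            items.set(i, last);                     // move last element into the gap
            indexOf.put(last, i);
        }
    }
}
```

Note this gives O(1) insert/delete/lookup only; it does nothing for min/max, which is where the rest of the thread's debate lives.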
The min and max are preserved in auxiliary stacks. The logic is the same as for the pop operation of a stack with min/max functions. An example implementation can be found at stackoverflow.com/questions/685060/design-a-stack-such-that-getminimum-should-be-o1
Sample C# code:
class Node<T> where T : IComparable<T>
{
    public T Value { get; set; }
    public Node<T> Next { get; set; }
    public Node<T> Prev { get; set; }
}

class O1DataStructure<T> where T : IComparable<T>
{
    private readonly Stack<Node<T>> _mins = new Stack<Node<T>>();
    private readonly Stack<Node<T>> _maxes = new Stack<Node<T>>();
    private Node<T> _head;
    private Node<T> _tail;

    public void Delete(Node<T> node)
    {
        // Unlink the node from the doubly linked list.
        var prev = node.Prev;
        var next = node.Next;
        if (prev != null)
            prev.Next = next;
        else
            _head = next;
        if (next != null)
            next.Prev = prev;
        else
            _tail = prev;   // keep _tail current when deleting the last node

        // Only the stack tops are popped; deleting a node buried inside a
        // stack leaves a stale entry (see the discussion that follows).
        if (_maxes.Count > 0 && _maxes.Peek() == node)
            _maxes.Pop();
        if (_mins.Count > 0 && _mins.Peek() == node)
            _mins.Pop();
    }

    public T Max
    {
        get
        {
            if (_maxes.Count == 0)
                throw new InvalidOperationException("no values exist");
            return _maxes.Peek().Value;
        }
    }

    public T Min
    {
        get
        {
            if (_mins.Count == 0)
                throw new InvalidOperationException("no values exist");
            return _mins.Peek().Value;
        }
    }

    public void RemoveMax()
    {
        if (_maxes.Count == 0)
            throw new InvalidOperationException("unable to delete the max");
        Delete(_maxes.Peek());
    }

    public void RemoveMin()
    {
        if (_mins.Count == 0)
            throw new InvalidOperationException("unable to delete the min");
        Delete(_mins.Peek());
    }

    public Node<T> Insert(T value)
    {
        // Append to the tail of the doubly linked list.
        var node = new Node<T> { Value = value, Prev = _tail, Next = null };
        if (_head == null)
        {
            _head = _tail = node;
        }
        else
        {
            _tail.Next = node;
            _tail = node;
        }
        // Track new running extremes.
        if (_maxes.Count == 0 || _maxes.Peek().Value.CompareTo(value) < 0)
            _maxes.Push(node);
        if (_mins.Count == 0 || _mins.Peek().Value.CompareTo(value) > 0)
            _mins.Push(node);
        return node;
    }
}
2. delete: Delete from the HashTable in O(1). If the item is on top of the min or max stack, pop it.
But what happens if item is in the middle of stack?
Ahh, good catch. Then just don't worry about it; the assumption would be that items in the stack may already have been deleted. So whenever you rely on a stack entry's value, e.g. when determining the largest or smallest value, first check whether the item on the stack is actually still in the hashtable. Still O(1), though.
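That lazy-check idea can be sketched as follows (in Java rather than the thread's C#; class and method names are hypothetical). Deletes only remove the value from the live set; the min stack discards stale tops the next time it is queried:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;

// Lazy deletion: delete() never touches the stack; peekMin()
// discards candidates that are no longer in the live set.
// As with the stack approach above, only running minima are
// pushed as candidates. Assumes distinct values.
class LazyMinTracker {
    private final Deque<Integer> mins = new ArrayDeque<>();  // candidate minima
    private final HashSet<Integer> live = new HashSet<>();

    void insert(int v) {
        live.add(v);
        if (mins.isEmpty() || v < mins.peek())
            mins.push(v);                // new running minimum
    }

    void delete(int v) { live.remove(v); }   // O(1): stack left stale

    Integer peekMin() {
        while (!mins.isEmpty() && !live.contains(mins.peek()))
            mins.pop();                  // discard stale candidates
        return mins.isEmpty() ? null : mins.peek();
    }
}
```

Note that a single peekMin call may have to pop many stale candidates, so calling it O(1) takes some squinting; at best the popping cost is amortized against past deletes.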
No... this is not right. Suppose you delete the min and then ask for the new min: your hashtable tells you that the value suggested by your min stack is no longer present, so you need to move on to the next entry in the min stack. What if that value is also not in the hashtable? I am not sure this is an O(1) operation.
Yeah, you are right. The stale data causes O(n) run time. The O(1) requirement on the delete is the killer here, I think.
Besides, insertions would still need to go into the heaps too; otherwise the new "min" might exist only in the hash table. But since, as the top post showed, O(1) time is impossible, I still think your solution is the best choice so far, in which case the hash table is unnecessary and all operations take O(log n) time.
Whoever downvoted this answer is a dumbass... This is the perfect answer... Use one hash table for insert/delete/lookup in O(1) time. Simultaneously maintain two stacks, one max and one min, which keep the max/min elements on top of the stack so they can be retrieved in O(1)... duh!
Not possible to have O(1) for each. If it were, you could sort in O(n) time.
- Anonymous, February 21, 2013
Interviewers are getting dumber by the day.