Microsoft Interview Question
Site Reliability Engineers
Country: United States
Interview Type: In-Person
O(n * m) no matter what algorithm you apply - that's true.
"I guess he wanted you to apply DFS" - what is the point of that? He wanted DFS and something faster than O(n * m) at the same time? Or do you mean the interviewer did not know the time complexity of DFS?
Is this possible? If there are k elements and you need to visit all of them, how can you do it in less than O(k)? Even multi-threading does not help: multi-threading reduces wall-clock time, not time complexity, because the total amount of work in the end is the same.
What do you think about CPU caches? Could it be the case that the amortized complexity would be less than O(m*n)?
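Caches cannot change the asymptotic bound, since every element is still touched exactly once, but access order changes the constant factor a lot. A minimal sketch of that difference (the matrix size and the timing harness here are illustrative, not from the thread):

```java
public class CacheOrderDemo {
    public static void main(String[] args) {
        int n = 2000, m = 2000;
        int[][] a = new int[n][m];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < m; j++)
                a[i][j] = i + j;

        // Row-major traversal: consecutive memory accesses, cache-friendly.
        long t0 = System.nanoTime();
        long rowSum = 0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < m; j++)
                rowSum += a[i][j];
        long rowTime = System.nanoTime() - t0;

        // Column-major traversal: strided accesses, many more cache misses.
        t0 = System.nanoTime();
        long colSum = 0;
        for (int j = 0; j < m; j++)
            for (int i = 0; i < n; i++)
                colSum += a[i][j];
        long colTime = System.nanoTime() - t0;

        // Both traversals visit all n*m elements exactly once:
        // same O(n*m) complexity, different constants.
        System.out.println(rowSum == colSum);
        System.out.printf("row-major %d ns, column-major %d ns%n", rowTime, colTime);
    }
}
```

On typical hardware the column-major pass is noticeably slower, but both are still Theta(n*m) reads.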
The only thing I can think of is using a multi-core / multiple processors and using multi-threaded solution.
For instance, since we have multiple processors available, divide the matrix into 2 equal parts and assign the 2 parts to 2 separate threads. Then we can have these two threads run simultaneously on multiple processors and get faster than O(nm) run time. Other than that I think the interviewer was trying a trick question
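A sketch of that idea: split the rows between two threads so the halves are read concurrently. The class and method names here are my own, for illustration.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelSum {
    public static void main(String[] args) throws Exception {
        int n = 1000, m = 1000;
        int[][] a = new int[n][m];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < m; j++)
                a[i][j] = 1;

        // Divide the matrix into two halves by rows and assign each to a thread.
        ExecutorService pool = Executors.newFixedThreadPool(2);
        Future<Long> top = pool.submit(() -> sumRows(a, 0, n / 2));
        Future<Long> bottom = pool.submit(() -> sumRows(a, n / 2, n));
        long total = top.get() + bottom.get();
        pool.shutdown();

        // The total work is still n*m reads; only the wall-clock time shrinks.
        System.out.println(total); // prints 1000000
    }

    static long sumRows(int[][] a, int from, int to) {
        long s = 0;
        for (int i = from; i < to; i++)
            for (int j = 0; j < a[i].length; j++)
                s += a[i][j];
        return s;
    }
}
```

Note that this halves the elapsed time at best; the combined work across threads is still Theta(n*m), which is why it does not beat the complexity bound.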
What about this? Assume that all elements in the matrix are the same: just read one element. Runtime O(1).
Then this is not really a matrix, it is one element; the other elements are just a waste of space.
Moreover, we cannot make such assumptions, because we are not given the input, only an algorithm. The input can be anything. Even if all the elements in the array are the same, we will only know this after we have visited all of them.
It depends on what "read" actually means to them. You could provide an interface, let's name it "IPossibleToRead", that exposes a single read method, Read(int row, int column), and maybe two additional properties, ColumnLength and RowLength. You could return an implementation of this interface in O(1) time. Then, if he REALLY wants to read, he can just call our Read method.
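A sketch of that lazy-read idea (in Java for concreteness; the interface name mirrors the comment above, and everything else is hypothetical):

```java
// Constructing the reader is O(1); actual element reads are deferred
// until read() is called, matching the "IPossibleToRead" suggestion.
interface PossibleToRead {
    int read(int row, int column);
    int rowLength();
    int columnLength();
}

public class LazyMatrix {
    // Wrapping the matrix costs O(1) regardless of its size.
    static PossibleToRead wrap(int[][] a) {
        return new PossibleToRead() {
            public int read(int row, int column) { return a[row][column]; }
            public int rowLength() { return a.length; }
            public int columnLength() { return a.length == 0 ? 0 : a[0].length; }
        };
    }

    public static void main(String[] args) {
        PossibleToRead r = wrap(new int[][] {{1, 2}, {3, 4}});
        System.out.println(r.read(1, 0)); // prints 3
        // Of course, actually reading every element still takes
        // rowLength() * columnLength() calls, so nothing is really saved.
    }
}
```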
Consider the following recursive approach:
visit(lower_bound_row, upper_bound_row, columns)
{
    if (lower_bound_row > upper_bound_row)
    {
        return;
    }
    mid = (lower_bound_row + upper_bound_row)/2;
    PRINT [mid][0]......[mid][columns];
    visit(lower_bound_row, mid-1, columns);
    visit(mid+1, upper_bound_row, columns);
}
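A runnable version of the sketch above (in Java for concreteness), with a counter added to confirm that every element is still printed exactly once:

```java
public class RecursiveVisit {
    static int visited = 0;

    // Visit rows [lo, hi]: print the middle row, then recurse on both halves.
    static void visit(int[][] a, int lo, int hi) {
        if (lo > hi) return;
        int mid = (lo + hi) / 2;
        for (int j = 0; j < a[mid].length; j++) {
            System.out.print(a[mid][j] + " ");
            visited++;
        }
        System.out.println();
        visit(a, lo, mid - 1);
        visit(a, mid + 1, hi);
    }

    public static void main(String[] args) {
        int n = 5, m = 4;
        int[][] a = new int[n][m];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < m; j++)
                a[i][j] = i * m + j;
        visit(a, 0, n - 1);
        // Each row is printed exactly once, so the total work is still n * m.
        System.out.println(visited == n * m); // prints true
    }
}
```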
Let's say the size of the array is n*m.
Then the time complexity satisfies
T(nm) = 2T(nm/2) + O(m)
Applying the Master Theorem we get
T(nm) = m log(nm) [here a = 2, b = 2]
So m log(nm) is less than nm.
So, in this way the array can be traversed faster than nm.
Please let me know if I am doing something wrong.
I do not think you can use the Master Theorem like that. Your recurrence relation, T(nm) = 2T(nm/2) + o(m), is not of the correct form. You need to use the same n throughout. You use nm in some spots, and just m in another.
For example, you could use it for an algorithm with the recurrence T(nm) = 2T(nm/2) + o(nm), which is O(mn log(mn)).
For your algorithm, you need to use a different approach.
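For the record, here is a sketch of a correct analysis (my own working, not from the thread): let T(n) be the cost of visiting a block of n rows, each of m columns. Each call prints one full row and splits the remaining rows in half, so

```latex
T(n) = 2\,T(n/2) + \Theta(m), \qquad T(1) = \Theta(m)
```

Unrolling the recursion tree, level k has 2^k nodes each doing Theta(m) work, for log n levels plus n leaves:

```latex
T(n) = \sum_{k=0}^{\log_2 n - 1} 2^k \,\Theta(m) + n\,T(1)
     = \Theta(m)(n - 1) + \Theta(nm) = \Theta(nm)
```

So the recursive scheme is still Theta(nm), consistent with the lower bound everyone else in the thread is arguing for.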
It's impossible to do less than O(mn); here is a solution using DFS-style recursion.
public static void PrintDFSMatrix(int[,] myInt, int x, int y)
{
    if (x < myInt.GetLength(0) && y < myInt.GetLength(1))
    {
        // Recurse into the inner sub-matrix first.
        PrintDFSMatrix(myInt, x + 1, y + 1);
        // Print the rest of row x, starting at column y.
        for (int i = y; i < myInt.GetLength(1); i++)
        {
            Console.WriteLine(myInt[x, i]);
        }
        // Print the rest of column y, starting at row x + 1.
        for (int i = x + 1; i < myInt.GetLength(0); i++)
        {
            Console.WriteLine(myInt[i, y]);
        }
    }
}
Probably DFS?
- Nenad December 08, 2015
Big O complexity is going to be O(n * m) no matter what algorithm you apply.
The first solution you provided isn't going to be any faster: in terms of asymptotic complexity you still need to visit each of the elements.
I guess he wanted you to apply DFS to visit all the elements (but to be fast in practice we would need to optimize the recursion, e.g. with caching, or simply use BFS).