masterjaso
For this problem, you would need to sort the bolts and nuts by their size. This is an O(n log n) operation. With both of them sorted, you then have bolts[i] = nuts[i], and every pair is matched.
The above assumes that there is exactly one match for each element. If this is not true, then you would need to sort one of the sets and then binary search for a result one element at a time from the other set.
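A minimal Java sketch of the sort-and-match approach, modeling each bolt and nut by an integer size (an assumption for illustration; real objects would sort via a comparator):

```java
import java.util.Arrays;

class BoltNutMatcher {
    // Sort both sets, then pair index-for-index: bolts[i] matches nuts[i].
    static int[][] matchBySorting(int[] bolts, int[] nuts) {
        int[] b = bolts.clone(), n = nuts.clone();
        Arrays.sort(b); // O(n log n)
        Arrays.sort(n); // O(n log n)
        int[][] pairs = new int[b.length][2];
        for (int i = 0; i < b.length; i++) {
            pairs[i][0] = b[i]; // bolt size
            pairs[i][1] = n[i]; // its matching nut size
        }
        return pairs;
    }
}
```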
In this case, we implement a custom comparator function for our struct/class (C/Java). Once we have that, we also need a 'multimap'. A multimap is a data structure similar to other maps in that it holds key/value pairs; its uniqueness comes from the property that you can insert multiple values under the same key. In this instance, our key would be the student name (ID) and all associated scores would be placed in the value list. We can keep each list sorted upon insertion with some more complex coding, or we can sort once all values are loaded, when we actually extract the values to find the 5 max. The method you use depends on your implementation criteria and data size. The last step is going through each student, getting the top 5 scores (see above for sorting methods), and peeling the average out of them, returning a new Map containing student names as keys and the top-5 score average as values.
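A hedged sketch of the multimap idea using Map&lt;String, List&lt;Integer&gt;&gt; as the multimap; the class and method names are made up for illustration, and sorting is deferred until extraction as discussed:

```java
import java.util.*;

class TopFiveAverage {
    // Group (name, score) pairs into a multimap, then average each
    // student's top 5 scores.
    static Map<String, Double> topFiveAverages(List<Map.Entry<String, Integer>> scores) {
        Map<String, List<Integer>> byStudent = new HashMap<>();
        for (Map.Entry<String, Integer> e : scores) {
            byStudent.computeIfAbsent(e.getKey(), k -> new ArrayList<>()).add(e.getValue());
        }
        Map<String, Double> result = new HashMap<>();
        for (Map.Entry<String, List<Integer>> e : byStudent.entrySet()) {
            List<Integer> list = e.getValue();
            list.sort(Collections.reverseOrder()); // sort once, after loading
            int k = Math.min(5, list.size());
            double sum = 0;
            for (int i = 0; i < k; i++) sum += list.get(i);
            result.put(e.getKey(), sum / k);
        }
        return result;
    }
}
```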
This question appears to be testing your understanding of container classes and data structures in general, so I would emphasize my talking points around design decisions. If I were asked this question, I would think that the interviewer is more interested in my ability to design a system of interdependent structures more so than my mastery of algorithmic design. Therefore make your design, explain your decisions, and proactively offer the counterpoints you discarded and why. The interviewer wants to know why you made your decisions so they can gauge your design maturity.
Understanding that everyone (including the judge) is answerable to the law, the man makes the statement:
"You will sentence me to 3 years in prison."
If the judge sentences him to 3 years, the man's statement is true, yet the judge has given a truthful man the liar's 3-year sentence instead of 2. That makes the judge the liar, and being answerable to the same law, the judge faces the 3-year penalty himself. If the judge instead sentences him to 2 years for being truthful, the statement becomes false, so the man is a liar who should have gotten 3 years, and the judge is wrong again. Either sentence traps the judge in a contradiction.
This is a question that on the surface appears trivial, but in actuality tests your understanding of logical expressions (true/false) and your ability to interpret dependent conditional statements. (These are not easy...)
To the 'anonymous' poster above, when he uses the sizes as the KEY in the hashtable, that creates the size lookup that he is referencing. Given that these are distinct elements (no duplicates) you could store <size, location> pairs of one of the arrays, then lookup and move elements in O(n) run time. The space complexity is O(n). This is a good workable solution. (I would recommend that the original poster mention the final sorting step to close the loop however.)
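A sketch of the &lt;size, location&gt; idea, assuming integer sizes; it returns a reordered copy of the second array rather than moving elements in place:

```java
import java.util.HashMap;
import java.util.Map;

class SizeAlign {
    // Index one array by size, then place each element of the other
    // at the matching index. O(n) time, O(n) space; sizes are distinct.
    static int[] alignTo(int[] a, int[] b) {
        Map<Integer, Integer> sizeToIndex = new HashMap<>();
        for (int i = 0; i < a.length; i++) sizeToIndex.put(a[i], i); // O(n)
        int[] aligned = new int[b.length];
        for (int size : b) aligned[sizeToIndex.get(size)] = size;    // O(n)
        return aligned;
    }
}
```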
- masterjaso October 31, 2014
But the problem statement does not indicate that either group of objects is sorted, merely that they contain matching distinct elements at a 1:1 ratio. If it provided one sorted and one unsorted then yes, we could simply sort one of the arrays and be finished. However, this is not the case based on the question. We therefore need to attack it with the information provided.
- masterjaso October 31, 2014
This question relies on the Union-Find structure, whereby each entry in an array points either to the sentinel value (root, -1) or to its parent node. By searching through these parent chains and caching results, we can obtain O(n) run time and O(n) space complexity. The algorithm goes:
1. For each node in the list, if it is already cached or -1 skip it, otherwise move on...
2. Set depth to 0 and node to array[i]
3. While array[i] is not -1 check to see if it is cached
4. If array[i] is cached, then add current depth to cache depth, otherwise move to new node and add 1 to depth counter
5. Once you hit the root or a cached node, update the depth for your node at cache[i] to be the final depth
6. At the end, look at all cached results and return the maximum depth
Java code example:
public static int findDepth(int[] p, int[] cache){
    //Cycle through all elements to determine depth calculation
    for(int i = 0; i < p.length; i++){
        //Skip this node if we have visited it previously, or if it is a root
        if(cache[i] > 0 || p[i] == -1) continue;
        int depth = 1; //Depth accumulator, starting with the edge from i to its parent
        int node = p[i];
        //Transition node to node and accumulate depth count
        while(p[node] != -1){
            //If we find a previously visited node, add its cached depth to our current depth
            if(cache[node] > 0){
                depth += cache[node];
                break;
            }
            node = p[node];
            depth++;
        }
        cache[i] = depth; //Cache the depth for the starting node (step 5 above)
    }
    //Find our largest depth and return that as the result
    int result = 0;
    for(int i: cache){
        if(i > result){ result = i; }
    }
    return result;
}
This is a great solution, particularly for amortizing the connections over time. Another method is to allow each second to be a bucket and implement a connection counter. This would require changing qps_millis to qCount (the number of active connections this second) and changing your conditional check in allowThisRequest to if(qCount < qps) return true. Lastly, you would need to add a check on the time to reset qCount to 0 every second.
This is not a major update, but this would allow for larger 'burst' queries which can limit queue times for use cases where query requests do not come in at a steady pace, but instead come in as large spikes and then subside.
I would encourage providing one of these as the answer but explaining both examples during the follow up to demonstrate mastery.
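A sketch of the bucket-counter variant described above; the names qps and qCount mirror the discussion, but the class itself is hypothetical, not the original poster's code:

```java
class BurstLimiter {
    private final int qps;          // max queries allowed per one-second bucket
    private long bucketStartMillis; // start of the current one-second window
    private int qCount;             // queries admitted in the current window

    BurstLimiter(int qps) { this.qps = qps; }

    // Allow up to qps requests per second; bursts within a window are fine.
    synchronized boolean allowThisRequest(long nowMillis) {
        if (nowMillis - bucketStartMillis >= 1000) { // new second: reset counter
            bucketStartMillis = nowMillis;
            qCount = 0;
        }
        if (qCount < qps) {
            qCount++;
            return true;
        }
        return false;
    }
}
```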
I will preface my answer with the fact that the above answer utilizing a Hashtable runs faster than my answer by virtue of running in O(n) time instead of O(n log n). That being said, my answer has the potential to run in O(1) space complexity instead of O(n) space which is required of the Hashtable. Demonstrating both answers will show your interviewer your complete mastery.
This question is basically taking two unsorted arrays containing the same elements and asking that we align both arrays so that A[i] == B[i]. We know this from the fact that:
1. Both arrays are the same size
2. Both arrays contain the same 'object' type
3. The description 'Any object A is of the same size as exactly one object B'
4. The description 'All A's have different sizes, and all B's have different sizes'
So with a firm understanding that we have 2 unsorted lists of identical objects, the solution starts to materialize in the form of sorting. If we sort both lists, then all objects will satisfy the condition A[i] == B[i]. This means that we can get a run time complexity of O(n log n), and depending on the sort algorithm you can get O(1) space complexity by doing the sort in-place.
Quick code example in Java:
public void alignArrays(Object[] A, Object[] B){
    if(A.length != B.length){ return; } //fail fast
    Arrays.sort(A); //requires the objects to implement Comparable
    Arrays.sort(B);
}
Note that we could have also made this a boolean-returning function and added a single for loop to verify the result, but that error checking adds another O(n) pass on each call. We are given the assumption in the question that sizes, uniqueness, and matching objects are guaranteed, but explaining this trade-off can help further cement your understanding of the problem.
- masterjaso October 30, 2014
This question requires a firm understanding of a deque. A deque has the ability to add/remove elements from both the front and the back. A stack, on the other hand, operates on a LIFO (Last In First Out) system, meaning you can only add and remove elements from one end. Lastly, 'amortized' essentially means averaged over a long sequence of operations. You will typically hear of amortized run time analysis when dealing with an algorithm like quick sort, where a random element (the pivot choice) affects how much work each call does. At its worst it could be O(n^2), but on average it runs in O(n log n) time. So apply that concept of variable per-operation cost and averaged outcome to this problem: we are saying that on average we expect our insert/remove functions to operate in near constant time, but understand that there may be cases where an individual call runs longer.
Now with the definitions out of the way, how will we work on this? The hint lies in the very specific number of stacks we get - namely 3. We use one stack for front operations, the second stack for tail operations, and the last stack becomes a buffer that we use to perform reverse operations on either end as needed. Let's walk through an example:
Add these elements to the head: 1,2,3
Add these elements to the tail: 6,5,4
3 - 4
2 - 5
1 - 6
Now if I remove from the head or remove from the tail, I will get true O(1) constant-time removal. But what if I remove more than 3 elements from either end? Then I have to reverse the other end by popping from the other stack to my buffer. This takes O(n) operations, and then I can resume my O(1) removal after they have been shifted to the buffer stack. The next time I want to add to the buffered end, I have to move items out of the buffer back into the correct stack so that my order is maintained, which is an O(n) operation with an O(1) insert operation for the new element.
So knowing how this works, how do we prove O(1) amortized run time? Consider that most operations on a deque are bulk loads or bulk removals: you would only have to do an O(n) buffer swap in cases where the entire list is traversed AND there are elements that have been inserted from both the head and the tail. So while we will SOMETIMES see an O(n) operation to swap stacks and restore the proper order, we will mostly be doing O(1) operations to insert/remove in bulk. Over the lifetime of the data structure this should make the average cost per function call land between 1 and 2 operations, which constitutes an amortized run time of O(1).
With that in mind, understand that there are certain algorithms that would be considered pathological for this data structure. Those would include algorithms where there are many inserts and deletes at both ends with frequent removals that 'cross the line' between the head and tail stacks. If these occurred frequently then the amortized run-time migrates closer to O(n) which completely nullifies the O(1) run time estimate.
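A compact sketch of the idea, with the buffer transfer collapsed into a single reversing move between the head and tail stacks (ArrayDeque stands in for each stack; the full three-stack bookkeeping is elided):

```java
import java.util.ArrayDeque;
import java.util.Deque;

class StackDeque {
    private final Deque<Integer> head = new ArrayDeque<>(); // front-most element on top
    private final Deque<Integer> tail = new ArrayDeque<>(); // back-most element on top

    void addFirst(int x) { head.push(x); } // O(1)
    void addLast(int x)  { tail.push(x); } // O(1)

    int removeFirst() {                    // amortized O(1)
        if (head.isEmpty()) reverseInto(tail, head);
        return head.pop();
    }

    int removeLast() {                     // amortized O(1)
        if (tail.isEmpty()) reverseInto(head, tail);
        return tail.pop();
    }

    // Popping every element of 'from' and pushing it onto 'to' reverses the
    // order once, which is exactly the orientation the other end needs.
    // This is the O(n) step the amortized analysis pays for.
    private void reverseInto(Deque<Integer> from, Deque<Integer> to) {
        while (!from.isEmpty()) to.push(from.pop());
    }
}
```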
This question is very detailed, and that gives us the proper hint on handling it. The fact that the sequence can be disjoint (not required to be contiguous) allows us to utilize some neat tricks to keep run time and space complexity somewhat controlled. The space complexity of my solution is O(n), as it requires additional data structures that could equal the size of the original string plus the size of the known alphabet we are working with. The absolute worst case run time would be O(n^2), in the event the character string was entered in alphabetical order and then reverse alphabetical order. This is unlikely to happen, but must be considered. Note that we will typically get better run time than this due to our ability to track unique characters via the Set interface. This limits the number of full string scans to only those characters that actually have a duplicate, and does the full scan only once for each possible duplicate.
Here is my coded Java solution with comments:
public boolean hasSequence(String s){
    int[] alpha = new int[256]; //assumes ASCII char set
    HashSet<Character> starts = new HashSet<Character>();
    //Scan string and load char counts into our alphabet array
    for(int i = 0; i < s.length(); i++){
        alpha[s.charAt(i)]++;
    }
    //Find duplicate characters and identify them as possible sequence starters
    for(int i = 0; i < 256; i++){
        if(alpha[i] > 1){
            starts.add((char) i);
        }
    }
    if(starts.size() == 0){ //if all chars are distinct, fail fast
        return false;
    }
    //Scan the string, stopping at known dupes/starts only
    for(int i = 0; i < s.length(); i++){
        if( starts.contains( s.charAt(i) ) ){
            //Set holding all distinct chars appearing after a known dupe/start
            HashSet<Character> seq = new HashSet<Character>();
            //Flag telling us whether we have found the start char again
            boolean secondStart = false;
            //Scan the characters after the known dupe/start, filling in a distinct set of those
            for(int j = i + 1; j < s.length(); j++){
                if( !secondStart && s.charAt(j) == s.charAt(i) ){ secondStart = true; }
                if( !seq.contains( s.charAt(j) ) ){ //If the character is not in the seq Set, add it
                    seq.add( s.charAt(j) );
                }
                else if( seq.contains( s.charAt(j) ) && secondStart ){ //Char repeats AND we found our start: a sequence exists
                    return true;
                }
            }
            starts.remove( s.charAt(i) ); //This start has no sequence; don't rescan it
        }
    }
    return false;
}
Note that this function could be further customized to accommodate the passing in of different alphabets to ensure modularity and re-usability, but for the sake of simplicity I have assumed an ASCII char set.
- masterjaso October 23, 2014
I really like this solution, and the explanation of the asymptotic analysis. Excellent work, and thanks for introducing me to a new concept that I had not previously considered.
- masterjaso October 22, 2014
I believe that your solution is correct as long as you are under the condition that all of the numbers are distinct. If the numbers can be repeated more than once, then you need to rework your algorithm to account for multiples of the same entry near the head or tail of your list.
Also, you might want to have the return type of the function be a Collection/List that contains the values instead of only printing them (unless interviewer told you that was ok).
Lastly, you should show complete mastery of your solution by talking about the run-time complexity and the space complexity. In your solution the space is O(n) [Collections.sort() copies the list into a working array, which is O(n) extra space]. The run time is O(n log n) due to the call to Collections.sort(). Note that the javadoc claims that partially sorted lists can approach O(n) run time, but that is a rather brave assumption. We always work from a worst-case analysis, or an amortized one for algorithms with random pivoting like quick sort.
All in all, I like your answer as your function could theoretically run in O(1) space and O(n) run time if the list is distinct and sorted prior to being passed to your function. Again, explaining these subtle differences will prove that you have full mastery and understanding of what is going on under the hood.
This is a fairly direct question, but you can really exhibit your skill by being thorough. This has a solution that is O(n) runtime and O(n) space. You can most likely do better on the space, but that will add some more complex code. Here's my solution with comments for explanation:
public static String replaceTwenty(String s){
    //Create StringBuilder to utilize mutability (this is the O(n) extra space)
    StringBuilder sb = new StringBuilder(s);
    //Cycle through 1 char at a time - the length - 2 bound prevents an out of bounds error
    for(int pos = 0; pos < sb.length() - 2; pos++){
        //Do a char by char search for the leading indicator, then check for the full %20 flag
        if(sb.charAt(pos) == '%' && sb.substring(pos, pos + 3).equals("%20")){
            sb.replace(pos, pos + 3, " "); //Replace %20 with " "
        }
    }
    return sb.toString();
}
This is a reverse spiral traversal problem. The tricky part is maintaining start and stop points, iterating the solution by layers inward, and ensuring that you are referencing the array properly. Here is a Java solution that runs in O(N^2) time for an NxN matrix and takes O(1) memory:
public void reverseSpiral(char[][] input){
    int matrixSize = input.length; //N for our NxN matrix
    //Work inward one layer at a time, like an onion: ceil(N/2) layers total
    for(int level = 0; level <= (matrixSize - 1) / 2; level++){
        int mSize = matrixSize - 1 - level; //far edge index; shrinks as we move inward
        if(level == mSize){ //odd N leaves a single center cell
            System.out.print(input[level][level]);
            break;
        }
        for(int i = mSize; i > level; i--){ //Traverse top-right to top-left
            System.out.print(input[level][i]);
        }
        for(int i = level; i < mSize; i++){ //Traverse top-left to bottom-left
            System.out.print(input[i][level]);
        }
        for(int i = level; i < mSize; i++){ //Traverse bottom-left to bottom-right
            System.out.print(input[mSize][i]);
        }
        for(int i = mSize; i > level; i--){ //Traverse bottom-right to top-right
            System.out.print(input[i][mSize]);
        }
    }
}
If this is a SQL question, it's pretty straightforward. The query below returns a sorted list with the most logins at the top. If you need to return only a single row, simply turn this into a sub-query and select user_id from the sub-query where rownum = 1.
select
user_id
,count(date) as logins
from table_name
group by
user_id
order by count(date) desc
For this problem, the first, second, and third values are all interconnected. Any solution that doesn't consider them all at the same time is bound to leave room for invalid use cases. A valid brute force solution is located below. The space requirement is O(1) as it only requires a few storage variables and no copy/sort. The run time is a sad O(n^3) for the triple for loop. That makes me think there should be a better solution, but such an algorithm eludes me at this time.
Note that this algorithm will actually allow for arrays that contain positive and negative integer values (as well as floating point with some modification to the data types) as well as arrays of variable length.
Here is my coded solution:
import java.util.*;

public class PickThree{
    public static int[] pick(int[] a, int[] b, int[] c){
        int[] result = {-1, -1, -1};
        int d1, d2, d3;
        int best = Integer.MAX_VALUE; //smallest pairwise-difference sum seen so far
        for(int i = 0; i < a.length; i++){
            for(int j = 0; j < b.length; j++){
                for(int k = 0; k < c.length; k++){
                    d1 = Math.abs(a[i] - b[j]);
                    d2 = Math.abs(b[j] - c[k]);
                    d3 = Math.abs(c[k] - a[i]);
                    if(d1 + d2 + d3 < best){
                        best = d1 + d2 + d3;
                        result[0] = a[i];
                        result[1] = b[j];
                        result[2] = c[k];
                    }
                }
            }
        }
        return result;
    }

    public static void main(String[] args){
        int[] a = {-3, -2, 0, 1, 4};
        int[] b = {-10, -8, -5, -3, -1};
        int[] c = {1, 5, 10, 15, 20};
        int[] f = pick(a, b, c);
        for(int i = 0; i < f.length; i++){
            System.out.println(f[i]);
        }
    }
}
This solution also fails for this example:
A: -3, -2, 0, 0, 1, 4
B -10, -8, -5, -3, -1
C 1, 5, 10, 15, 20
The answer here is 0, -1, 1 which never align in the above mentioned triplet fashion.
To me this would involve some box collision testing to determine which pins are 'in-scope' for the current view. Once a view is established you could store the pins in a HashMap. If the view is shifted, you simply do your collision check again (see which pins lie within the bounding box) and insert the pins into the HashMap. Duplicates will be filtered automatically.
- masterjaso October 25, 2013
There are two optimizations depending on the scenario. If you know the specific type of data the container will hold, you should identify that generic type using the <> syntax. Secondly, if you know the list will be holding a large amount of data (more than the default 10 elements), you can go ahead and allocate that capacity with the constructor, providing an initial size that more closely reflects your array's needs. This will prevent unnecessary resizing and recopying as the ArrayList is loaded.
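A tiny illustration of both optimizations together; the expected count passed in is an arbitrary example figure:

```java
import java.util.ArrayList;
import java.util.List;

class ListSizing {
    // Typed generic plus explicit initial capacity: avoids repeated internal
    // array copies while loading a known-large data set.
    static List<String> presized(int expectedCount) {
        return new ArrayList<>(expectedCount);
    }
}
```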
- masterjaso October 03, 2013
Good answer. Just to add some more details to show stronger mastery of the subject, you should include a discussion about pathological data sets. This is the idea that a universe of key values of size X (every possible entry) is being hashed down to our HashTable of size N, which is substantially smaller than universe X. If our hashing function regularly encounters values in X that collide modulo N, then an abnormal number of collisions can occur and degrade the O(1) performance of our HashTable to O(n). Therefore, when considering hashing functions you should absolutely do 2 steps:
1. Consider the type of input you expect to receive and ensure that it doesn't result in abnormal clustering/collisions when applied to your Hashing function.
2. Monitor the performance of the HashTable during its initial implementation to ensure that it is meeting the anticipated benchmarks for collisions, load, and rehashing needs.
So the pseudocode provided above is all good, but actually coding it reveals some interesting little tweaks that you must make to complete this successfully.
1. Put all elements of arr1 into a HashMap, unless they are duplicates themselves. If they are duplicates, add to arr4 immediately, unless that value already exists in arr4 (arr4 is expandable ArrayList or equivalent).
2. Compare all elements of arr2 to keys in HashMap. If the values are present AND that value is not already in arr4, then add it to arr4.
3. Clear your HashMap
4. Load arr3 into the HashMap.
5. Compare all elements of arr4 to the HashMap, and if the value is contained in the HashMap delete it from both the HashMap and arr4. (By eliminating duplicates in previous steps, we don't have to worry about duplicates being present within the HashMap or the Array itself. Therefore we can delete from both safely at this point).
6. Since the arr4 element was deleted, back the iterator up by 1 to make sure we don't skip any elements.
7. Iterate through the HashMap and load any remaining values into arr4.
8. If arr4.size() is 0, output -1, else output arr4's contents
Key points that need to be addressed: remove duplicates early in the process so they don't cascade to the final comparison. Make sure that you use an expandable/retractable array or equivalent to store elements for arr4, as the size is variable; this also allows you to reuse it for final output. And when you delete an element from a variable-length array, back your iterator up by 1 so you don't skip an element.
Time complexity of this algorithm is O(n), and space complexity is also O(n) since the HashMap and arr4 could potentially hold every element of the largest array in the worst case.
The final output for the above test case is: 2, 4, 6, 8
Sample Code:
int[] arr1 = {1,3,5,7,9};
int[] arr2 = {1,2,3,5,4,1,8,9,7};
int[] arr3 = {1,3,5,7,9,2,1,4,6,5,8};
ArrayList<Integer> arr4 = new ArrayList<Integer>();
HashMap<Integer, Integer> filter = new HashMap<Integer, Integer>();
//Step 1: load arr1 into the HashMap, routing arr1's own duplicates straight to arr4
for(int i: arr1){
    if(filter.containsKey(i) && !arr4.contains(i)){
        arr4.add(i);
    }
    else{
        filter.put(i, i);
    }
}
//Step 2: any arr2 element present in the map is a duplicate across arrays
for(int i: arr2){
    if(filter.containsKey(i) && !arr4.contains(i)){
        arr4.add(i);
    }
}
//Steps 3-4: reset the map and load arr3
filter.clear();
for(int i: arr3){
    filter.put(i, i);
}
//Steps 5-6: remove anything found in both, backing the iterator up after each delete
for(int i = 0; i < arr4.size(); i++){
    if(filter.containsKey(arr4.get(i))){
        filter.remove(arr4.get(i));
        arr4.remove(i);
        i--;
    }
}
//Step 7: whatever remains in the map goes into arr4
for(int i: filter.keySet()){
    arr4.add(i);
}
//Step 8: output
if(arr4.size() == 0){
    System.out.println("-1");
}
else{
    for(int i: arr4){
        System.out.println(i);
    }
}
At first glance I thought this was a simple problem.... then I tried to code it. Took me a little mental gymnastics, but I figured it out. Here was my process for figuring it out:
The 'pascal triangle' here is based on the triangular numbers (row n ends at position n(n+1)/2), which gives the following pyramid:
1
23
456
etc...
So, this means we need to print the name 'anony' as:
a
no
ny-
I am using a '-' to fill in empty spaces on the last line, as not every name length is a triangular number. Then I reasoned through printing out each line, and what the start and end point of each line should be. This led me to look at the DIFFERENCE between the starting points on each line, as well as the ending points on each line. This gave me the following breakdown:
Start points: 1 2 4 7 11
Start Dif: 1 2 3 4
End points: 1 3 6 10 15
End Dif: 2 3 4 5
So my start point increases by a linear incremental amount each iteration. My end point also increases by a linear incremental amount on each iteration, but by an amount that is 1 larger than the start difference.
For simplification, we handle the single case as a stand alone so we don't have to fight with arrays being 0 indexed and how that can create confusion with our algorithm. That being said, here is my algorithm for this process:
1. Declare a counter, start, and end and set all to 0
2. Print the first character and move to the next line
3. While end is less than the length of the name, do steps 4-8
4. Increment count by 1
5. Increment start by count
6. Increment end by count + 1
7. Loop through and print all chars from start to end (inclusive), if you go past the end of the name array, then print a '-' to finish the triangle (you could also return early and leave a pyramid with missing blocks in its foundation)
8. Move to next line and return to step 3
Below is my sample code along with a sample output:
String name = "WhatALongName";
int start = 0, count = 0, end = 0;
System.out.println(name.charAt(0));
while(end < name.length()){
    count++;
    start += count;
    end += (count + 1);
    for(int i = start; i <= end; i++){
        if(i < name.length()){
            System.out.print(name.charAt(i));
        }
        else{
            System.out.print("-");
        }
    }
    System.out.println();
}
Output:
W
ha
tAL
ongN
ame--
If this is how the question was phrased, I would first clarify what is the goal of the comparison. Are we trying to find how similar/dis-similar, or just a simple true/false if they are exact or not? Also, is our comparison case-sensitive?
For this example I will use the following assumptions:
1. We want a true/false based on an exact match of the length/characters
2. Our comparison is indeed case sensitive (boo != Boo)
In my below example, I convert Java String to a character array so that the algorithm is applicable to other languages and not cluttered with Java's built-in String functions. Simply put, the algorithm for string comparison is:
1. Check for equal length, if not then return false.
2. Check sequentially through each array comparing 1 char at a time, if any are different then return false
3. If 1 and 2 both clear without returning false, then return true as they are the same.
public boolean compareStrings(String s1, String s2){
    char[] first = s1.toCharArray();
    char[] second = s2.toCharArray();
    if(first.length != second.length){
        return false;
    }
    for(int i = 0; i < first.length; i++){
        if(first[i] != second[i]){
            return false;
        }
    }
    return true;
}
Great test case and readout. I ran the same test in Java and got different numbers, but not because Java's evaluation order varies: Java evaluates operands strictly left to right. The results differ between statements because each statement's side effects change x and y before the next statement runs. In C/C++, by contrast, multiple unsequenced modifications like this are undefined behavior, so the result there genuinely is compiler dependent.
int x = 7, y = -3, z;
z = ++x + y-- - ++y - x-- - x-- - ++y - x-- ;
System.out.println(z); //Output: -11
System.out.println(++x + y-- - ++y - x-- - x-- - ++y - x--); //Output: -8
System.out.println(x---x); //Output: 1
Updated this after further study. The lexer uses 'maximal munch', so "x---x" always tokenizes as "(x--) - x" regardless of where the expression appears; it never parses as "x - (--x)". With x equal to 3 at that point in the program, that evaluates to 3 - 2 = 1. Very subtle. Beware.
- masterjaso September 12, 2013
So to this question I would have to clarify what the difference between Numberics and Numbers really means; this could just be a knowledge gap on my part. That being said, let's get into the appropriate way to establish a filter on an alphanumeric string.
1. Convert string to char array
2. Determine character set ( i will assume ASCII)
3. On each character evaluate the ASCII value by range. For instance
0-47: control characters and punctuation (carriage return, newline, space, etc.)
48-57: numbers (0-9)
58-64: special symbols
65-90: uppercase alphabet (letters)
91-96: more special symbols
97-122: lowercase alphabet (letters)
123-255: remaining special characters (standard ASCII ends at 127; 128-255 is the extended set)
4. Apply filter rule: either keep or remove based on the type of filter and character present.
Note that there are many character sets and different ways to set these ranges. I picked these ranges for simplicity sake. I would also utilize a StringBuilder data structure to keep the characters that aren't filtered out and return the value of my StringBuilder when the algorithm completed. Should be O(n) time complexity for the filter, and require O(n) additional space for the char array and StringBuilder.
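A sketch of the range-based filter, assuming ASCII and a "keep letters and digits" rule; other filter rules would just change the condition:

```java
class AlphaNumFilter {
    // Walk the char array, keep chars whose ASCII value falls in the
    // digit or letter ranges, and build the result in a StringBuilder.
    static String keepLettersAndDigits(String s) {
        StringBuilder kept = new StringBuilder();
        for (char c : s.toCharArray()) {
            boolean digit = c >= 48 && c <= 57;          // '0'-'9'
            boolean upper = c >= 65 && c <= 90;          // 'A'-'Z'
            boolean lower = c >= 97 && c <= 122;         // 'a'-'z'
            if (digit || upper || lower) kept.append(c); // drop everything else
        }
        return kept.toString();
    }
}
```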
Constant time is impossible when you must evaluate each element at least once to determine if it's a duplicate or not.
That being said, you can get constant time in determining whether an element is indeed a duplicate by iterating through your list and inserting each value into a hash table. Use the numbers as the key values, and the hash table will automatically filter out any duplicates upon insertion.
A quick code example in Java:
HashMap<Integer, Integer> hm = new HashMap<Integer, Integer>();
int[] a = {0,5,9,5,6,7,6,9};
for(int i: a){
    hm.put(i, i);
}
for(Integer i: hm.keySet()){
    System.out.println(hm.get(i));
}
Inserting the numbers of your unordered list into a hash table creates a data structure that can take the key value you are seeking as input, use its hash value, and report whether that element exists or not.
Put/Get in hash tables operate in O(1), except in cases where the load factor is too high and there are large numbers of collisions in the same 'bucket'.
There are a lot of correct answers below about pre-increment and post-increment.
If this were the actual interview question, I would clarify with the interviewer if i or a have been initialized. If the above snippet is all that is provided, then the compiler will complain on both about un-initialized variables, and be unable to perform operations.
On the other side, if the variables are initialized, you will get a different kind of error for both. The result of a pre/post increment is a value, not a variable, and the compiler cannot assign to a value; the left-hand side of an assignment must be a variable.
There are many methods, but the Sieve of Eratosthenes tends to be the one used for large values of N. (The naive alternative, trial division, determines whether a single number is prime by trying to divide it by every integer up to its square root; the sieve avoids repeating that work for each number individually.)
Starting from 2, each number that has not yet been marked is a prime. Once you have located a prime, you mark all numbers divisible by that prime off the list, then move to the next unmarked number and repeat. The largest (last) number left unmarked when the scan completes is your answer.
To my knowledge the space complexity for this algorithm is O(n), to maintain an array that is equal to the size of N. The running time is O(n log log n).
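A minimal sieve returning the largest prime up to n, per the description above:

```java
class Sieve {
    // Classic Sieve of Eratosthenes: mark multiples of each prime, then
    // scan backward for the largest unmarked (prime) value.
    static int largestPrimeUpTo(int n) {
        boolean[] composite = new boolean[n + 1]; // O(n) space
        for (int p = 2; (long) p * p <= n; p++) {
            if (!composite[p]) { // p is prime: mark its multiples
                for (int m = p * p; m <= n; m += p) composite[m] = true;
            }
        }
        for (int v = n; v >= 2; v--) {
            if (!composite[v]) return v;
        }
        return -1; // no prime <= n
    }
}
```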
As with all interview questions, I would clarify first that all numbers are distinct. If they aren't, there is no solution that I know of, therefore we'll make the assumption that the elements 1 to 10 are all distinct.
Start by initializing a boolean array with all elements false.
Scan all of the numbers, and for each number you scan, set that index in the boolean array to true (arr[i] = true).
Finally, scan your boolean array and return the indices that are still false as the missing elements.
Should run in O(2n), as it requires a scan of N elements and N boolean entries; ultimately this truncates in O notation to O(n). The space requirement is also O(n) for the extra boolean array.
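The steps above, sketched for a general 1..maxValue range:

```java
import java.util.ArrayList;
import java.util.List;

class FindMissing {
    // Mark seen values in a boolean array, then report the unmarked
    // indices as the missing numbers. Two O(n) passes, O(n) extra space.
    static List<Integer> missing(int[] nums, int maxValue) {
        boolean[] seen = new boolean[maxValue + 1];
        for (int v : nums) seen[v] = true;    // first pass: mark
        List<Integer> result = new ArrayList<>();
        for (int v = 1; v <= maxValue; v++) { // second pass: collect gaps
            if (!seen[v]) result.add(v);
        }
        return result;
    }
}
```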
To implement a queue, you must maintain the invariant of First In, First Out (FIFO). Stacks are the antithesis of this invariant, but by utilizing two stacks, you can successfully create a queue data structure.
The trick is to minimize the number of transfers from the Queue In stack to the Queue Out stack.
Let's start with a base implementation. Insertion will load elements onto the Queue In stack. Extraction causes us to pop all elements off the Queue In stack, and push them sequentially onto the Queue Out stack. Then we can pop off as many elements as we need, and move the remaining elements back to the Queue In to be ready to take on new entries.
This meets the basic standards, but breaks the need to keep push and pop at linear time due to the stack shift. To get as close to linear, we could leave elements on the Queue Out stack once they get there until that stack is emptied, then and only then do we initiate a Queue In to Queue Out transfer. This means we only do the O(n) worst case move when the Queue Out is empty. The rest of the time, we maintain the FIFO invariant, as well as the O(1) push/pop/peek interface.
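A minimal Java sketch of this lazy-transfer approach, using ArrayDeque as the underlying stack (class and method names are my own):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class TwoStackQueue<T> {
    private final Deque<T> in = new ArrayDeque<>();   // receives new elements
    private final Deque<T> out = new ArrayDeque<>();  // serves dequeues in FIFO order

    public void enqueue(T value) {
        in.push(value);
    }

    public T dequeue() {
        // Only transfer when the out stack is empty; each element is
        // moved at most once, so dequeue is O(1) amortized.
        if (out.isEmpty()) {
            while (!in.isEmpty()) {
                out.push(in.pop());
            }
        }
        return out.pop();
    }
}
```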
I believe this solution is going down the right path. The only question I see would be: why use 2? Why not 3, 4, 5, or any other constant? To answer it, I believe you need to be able to explain how the constant you pick minimizes regret (distance wasted going the wrong direction): arbitrarily walking in one direction only could lead to no result, while skipping the distributed pattern of the exponential approach mentioned here leads to inefficient turning points.
To clarify, I personally think 2 is a sensible constant: exponential growth already takes the step size to very large numbers by around the 20th iteration, and a larger base would overshoot even sooner.
As I see it, this question is really about your attention to detail in relation to the question asked and the requirements. Here are the key points:
1. Bits: only 1's or 0's will be allowed.
2. Equally sized diagonals: they must be the exact same length (see the assumption below for why this is critical)
3. Share a common mid-point: the middle 1 must be present in both diagonals.
We can get some good operational savings by applying these parameters. First off, we are scanning for diagonals that share a common mid-point, so we need not scan the first or last row or column, as they cannot contain a mid-point. This gives noticeable performance gains on smaller matrices (N < 5 or M < 5), as it substantially limits the possible mid-points, which reduces the number of 'check for diagonal' scans that must be done.
Secondly, the output of this operation is a single integer, the length of the longest diagonal, so there is no need to store the lengths of all diagonals or use any dynamic programming methodology (if we wanted the location of the longest diagonal, then we would). We merely need to find all of the X's as defined above, measure the lengths of their diagonals, and keep a running maximum to return at the end, knowing that a return of 0 means no X is present. In this sense, our algorithm is in-place, or O(1) space, as we only use a small number of constant tracking variables.
One last assumption I will make is that unequal diagonal lengths disqualify an X, even in a case like this:
1 0 1 0
0 1 0 0
1 0 1 0
0 0 0 1
Now with a clear operational definition, we can proceed with our pseudocode:
1. Scan through each matrix entry starting with 1,1 (not 0,0) and ending at n-1,m-1 (not n,m) so that we only scan entries that can be a mid-point.
2. If we encounter a 1, do a quick check to see whether all 4 diagonal neighbors contain a 1. If any of them contain a 0, return to step 1 and continue. Otherwise proceed with:
2a. Traverse each directional diagonal (up-right, up-left, down-right, down-left) until you find a 0 or an edge, counting its length (without the mid-point).
2b. Add together the opposite diagonals (up-right + down-left, and up-left + down-right), add 1 to each to account for the mid-point, and compare sizes.
2c. If the sizes are the same, compare to the maxDiag variable; else go back to step 1.
2d. If this diagonal is larger, update maxDiag to its length; otherwise leave maxDiag at its current value.
3. Once all entries are scanned, return maxDiag.
Our running time is between O(nm) and O((nm)^2): the primary loop only touches each entry once (excluding the outer edge), but each entry may be re-scanned multiple times depending on the number of diagonals present in the matrix. Given the constraints, I believe a safe upper bound is O((nm)^2), knowing that across a random distribution of matrices we will get better performance, particularly on smaller matrices, thanks to the 'exempt outer edge' scan.
[I hope to come back and edit this with a live code example once I get a chance to write one up]
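A rough Java sketch of the pseudocode above (class and method names are my own; it returns 0 when no X is present, per the operational definition):

```java
public class LongestX {
    // Scans interior cells only (an X mid-point cannot lie on the border),
    // measures the four diagonal arms from each candidate mid-point, and
    // keeps the longest X whose two diagonals are equally sized.
    public static int longestX(int[][] m) {
        int max = 0;
        for (int i = 1; i < m.length - 1; i++) {
            for (int j = 1; j < m[0].length - 1; j++) {
                if (m[i][j] != 1) continue;
                int ur = arm(m, i, j, -1, 1), ul = arm(m, i, j, -1, -1);
                int dr = arm(m, i, j, 1, 1),  dl = arm(m, i, j, 1, -1);
                // all four diagonal neighbors must hold a 1 to form an X
                if (ur == 0 || ul == 0 || dr == 0 || dl == 0) continue;
                int d1 = ur + dl + 1; // '/' diagonal length, mid-point included
                int d2 = ul + dr + 1; // '\' diagonal length, mid-point included
                if (d1 == d2) max = Math.max(max, d1);
            }
        }
        return max;
    }

    // Length of the run of 1's starting just past (i,j) in direction (di,dj).
    private static int arm(int[][] m, int i, int j, int di, int dj) {
        int len = 0;
        for (i += di, j += dj;
             i >= 0 && i < m.length && j >= 0 && j < m[0].length && m[i][j] == 1;
             i += di, j += dj) {
            len++;
        }
        return len;
    }
}
```

On the 4x4 example above, the unequal diagonals disqualify both candidate mid-points, so the result is 0.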
There is only one while loop, and you do constant-time operations on each element, even in the worst case. Without a second nested loop forcing you to re-evaluate every node, it has to be less than O(n^2). As explained above, even in the worst case you do one insertion into and one deletion from your tracking list per element; that evaluates to some constant times N, which truncates in asymptotic analysis to O(n).
I do agree that a heap would be more ideal than manually managing a LinkedList. Thank you for the idea/correction!
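For reference, a k-largest tracker built on Java's PriorityQueue (a min-heap) might look like this sketch (class and method names are my own):

```java
import java.util.PriorityQueue;

public class TopK {
    // Min-heap capped at size k: the root is always the smallest of the
    // k largest seen so far, so each new element costs one comparison
    // plus an O(log k) heap update when it qualifies.
    public static PriorityQueue<Integer> topK(int[] nums, int k) {
        PriorityQueue<Integer> heap = new PriorityQueue<>();
        for (int v : nums) {
            if (heap.size() < k) {
                heap.offer(v);
            } else if (v > heap.peek()) {
                heap.poll();   // evict the smallest of the current top k
                heap.offer(v);
            }
        }
        return heap;
    }
}
```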
The above is the correct solution. You can go one step further to demonstrate mastery by mentioning that the XOR swap is sub-optimal on modern processor architectures, which use instruction-level parallelism to overlap the independent moves of a temp-variable swap. Older architectures actually saw substantial gains from utilizing XOR due to their strictly sequential instruction processing, small register and cache sizes, and slower memory access times.
For example:
Old architecture:
a = 1;
b = 2;
The XOR swap completes in 3 strictly sequential steps, because each XOR depends on the result of the one before it.
Newer architecture:
a = 1;
b = 2;
int temp;
The temp-variable swap completes in 2 processing steps, because a superscalar core can issue independent instructions in the same cycle (instruction-level parallelism within a single core, not hyperthreading or multi-core):
1. Move a into temp
2. Move b into a AND temp into b in parallel (the two moves are independent)
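A quick Java illustration of the XOR swap itself (the helper name is mine; the serial dependency between the three steps is exactly why modern cores prefer the temp version):

```java
public class XorSwap {
    // Swaps two ints without a temporary via three dependent XOR steps.
    // Each step reads the result of the previous one, so the CPU cannot
    // overlap them the way it can the moves of a temp-variable swap.
    public static int[] xorSwap(int a, int b) {
        a ^= b;
        b ^= a; // b now holds the original a
        a ^= b; // a now holds the original b
        return new int[]{a, b};
    }
}
```

One caveat worth mentioning: if both operands alias the same storage location (e.g. swapping arr[i] with arr[i]), the XOR trick zeroes the value out.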
The above is the correct solution, there is no need to move from your current pointer.
The only problem is the memory management. If you are in C++, you have just left that memory block allocated with no remaining reference to pass to delete for cleanup. If you are using Java, dropping the last reference will make the 4th node eligible for garbage collection.
This would be a very excellent call out to demonstrate your understanding of appropriate memory management in your given language.
This is a very good solution, and as mentioned above, easily modifiable to fit a tree with N children per node. To elaborate, this is a depth-first traversal of the tree that moves to the farthest children and works backwards to the root. It visits each node exactly once, and therefore its time complexity is O(n). Given that it is a recursive function that adds a sum variable to the call stack with each call, its space complexity is O(h) for a tree of height h, which degrades to O(n) for a completely unbalanced tree.
- masterjaso August 31, 2013
HashMap's put() and get() methods utilize the hashCode() and equals() functions. Note that 'abc' and 'ABC' are distinct values. As such, hashCode("abc") and hashCode("ABC") will produce different hash codes, placing them under different keys.
On the other side, suppose we have a bucket collision and we are trying to retrieve a case-sensitive value from the linked list created at that bucket entry. Then equals("abc") versus equals("ABC") can clearly distinguish which value you are seeking.
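A small sketch illustrating the point (the class name is mine):

```java
import java.util.HashMap;
import java.util.Map;

public class CaseSensitiveKeys {
    // "abc" and "ABC" hash differently and are not equals(), so a HashMap
    // stores them as two fully independent entries.
    public static Map<String, Integer> demo() {
        Map<String, Integer> map = new HashMap<>();
        map.put("abc", 1);
        map.put("ABC", 2); // does NOT overwrite the "abc" entry
        return map;
    }
}
```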
In this question, it's a matter of establishing your if's in the correct order. The primary constraint is that the two lists must hold the same number of boxes (or within 1 of each other) while keeping relative order. The weight is a secondary consideration. So to complete the algorithm, establish two lists, cycle through the boxes one at a time, and:
1. Compare the sizes of list1 and list2. If list1.size() < list2.size(), add the box to list1 and go to the next box (and likewise, if list2.size() < list1.size(), add it to list2).
2. If the lists are the same size, compare the weights of the lists and add the box to the list with the lower total weight.
This is a greedy approach that operates in O(n) time, and by always appending to the same end of each list, you maintain the ordering criterion. Note that you can also replace the lists with queues if you want to enforce the FIFO constraint explicitly. Space complexity will be O(n) additional space for the two lists.
public void sortBoxes(Box[] boxes){
LinkedList<Box> set1 = new LinkedList<>();
LinkedList<Box> set2 = new LinkedList<>();
int weight1 = 0;
int weight2 = 0;
for(int i = 0; i < boxes.length; i++){
//keep the list sizes within 1 of each other first
if(set1.size() < set2.size()){
set1.add(boxes[i]);
weight1 += boxes[i].weight;
continue;
}
if(set2.size() < set1.size()){
set2.add(boxes[i]);
weight2 += boxes[i].weight;
continue;
}
//sizes are equal: add to the lighter list
if(weight1 <= weight2){
set1.add(boxes[i]);
weight1 += boxes[i].weight;
}
else{
set2.add(boxes[i]);
weight2 += boxes[i].weight;
}
}
}
If this were the complete interview question, I would follow up with several clarifying questions to make sure I was on target (and to show that I have the aptitude for eliciting requirements the client might not even realize they need).
Q1: Is the file sorted?
Q2: What are the memory constraints? (can we load the entire file to memory, or must we work with the stream)
Q3: Are we finding just the 2 largest, or the Nth largest?
For the sake of my answer, I will assume the file is unsorted, the file is too large to load into memory, and that we are seeking the Nth largest. In this instance, you establish a file input stream in your chosen programming language and parse the input one number at a time. You also declare a linked list to track the N largest items seen so far, in sorted order. Each time you read a number, you compare it to the smallest element in your list (the first element); if it is larger, you insert it in the correct spot to maintain sorted order. If the list then holds more than N elements, you delete the smallest node.
This algorithm gives roughly O(n*N) worst-case operations. In the worst case, your file is sorted ascending and you must insert every element into your list; for a fixed N this truncates to O(n). The space complexity is O(N), as you only allocate memory in proportion to the N elements you want to find.
//requires java.util.Scanner, java.util.LinkedList, java.util.Collections
public LinkedList<Integer> findNthLargestFromFile(Scanner in, int n){
LinkedList<Integer> list = new LinkedList<>();
int temp;
while(in.hasNextInt()){
temp = in.nextInt();
//check the size first so getFirst() is never called on an empty list
if(list.size() < n || temp > list.getFirst()){
list.add(temp);
Collections.sort(list); //keep ascending order: smallest element is first
if(list.size() > n){
list.removeFirst(); //drop the smallest
}
}
}
return list;
}
@ Erasmus
You are correct, my shown algorithm does not maintain the relative order (as disclosed in my comments). I do want to thank you for the sample — it helped me identify a bug, and I have amended my post with updated and corrected code.
You could treat this problem like a directed graph, giving each edge a weight equal to the minutes of work the task takes to complete. While constructing the graph, if one task's end time is greater than another's start time, then an edge between them cannot exist.
Next, identify the vertices that have no incoming edges. These are the only possible start nodes of any longest path.
Then you can traverse the graph from each possible start node, maintaining the longest path (largest total time) found so far and comparing at the end of each traversal. Because the graph is acyclic, this is best done as a dynamic program over a topological order (or a memoized depth-first search), so each vertex's longest remaining chain is computed only once; a plain breadth-first search would re-explore shared subpaths. At the end, you simply reproduce the longest path and its value.
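One way to sketch the longest-chain computation over such a DAG is a memoized depth-first search (the class, method names, and map-based graph representation below are all my own assumptions):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class LongestTaskChain {
    // adj maps a task id to the tasks that may legally follow it;
    // dur holds each task's duration in minutes.
    public static int longestChain(Map<Integer, List<Integer>> adj,
                                   Map<Integer, Integer> dur) {
        Map<Integer, Integer> memo = new HashMap<>();
        int best = 0;
        for (int v : dur.keySet()) {
            best = Math.max(best, longestFrom(v, adj, dur, memo));
        }
        return best;
    }

    // Longest total duration of any chain starting at v; memoized so each
    // vertex is computed once, giving O(V + E) overall.
    private static int longestFrom(int v, Map<Integer, List<Integer>> adj,
                                   Map<Integer, Integer> dur,
                                   Map<Integer, Integer> memo) {
        Integer cached = memo.get(v);
        if (cached != null) return cached;
        int best = 0;
        for (int next : adj.getOrDefault(v, Collections.emptyList())) {
            best = Math.max(best, longestFrom(next, adj, dur, memo));
        }
        int result = dur.get(v) + best;
        memo.put(v, result);
        return result;
    }
}
```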
This problem has an O(n) time, O(1) space solution if relative ordering is not a factor. If ordering must be preserved, you face a trade-off: to keep space constant you need on the order of n^2 operations (swaps and compares) to rearrange the array, and to keep operations linear you need to allocate additional memory to track where each positive/negative entry must land.
Also note that I am making the assumption that 0 is treated like a positive number, although I would clarify this point with the interviewer to ensure correctness.
O(n) time, O(1) space without ordering solution:
1. Start with two pointers, one at the front and one at the end (i = 0, j = array.length - 1). While i is less than j, repeat steps 2-4.
2. Evaluate the element at i — if it is less than 0 (already on the correct side), increment i by 1.
3. Evaluate the element at j — if it is greater than or equal to 0 (already on the correct side), decrement j by 1.
4. If the element at i is greater than or equal to zero AND the element at j is less than 0, swap the two elements, then increment i and decrement j.
Here is a working sample:
public static void sort(int[] a){
int i = 0;
int j = a.length - 1;
int tmp;
while(i < j){
if(a[i] >= 0 && a[j] < 0){
tmp = a[i];
a[i] = a[j];
a[j] = tmp;
i++;
j--;
continue;
}
if(a[i] < 0){i++;}
if(a[j] >= 0){j--;}
}
}
So isn't there a much simpler solution than looping through the list? I believe so.
First, let's utilize the list.size() function to get the size of the list. That tells us whether the count is even or odd.
Now when a node is deleted from anywhere in the list, we find the middle element via:
if(list.size() % 2 == 0)
mid = list.size()/2;
else
mid = list.size()/2 + 1;
Note that integer division truncates the remainder, which is why we add 1 for an odd size; equivalently, you could compute ceil(list.size()/2.0), where ceil rounds up to the next largest int.
Either way, we now know our middle position based on the size of the list after the deletion. Now simply iterate through the list to that element (mid) and point the end node's mid-pointer at it.
If our list has a built in pointer to the end, this becomes very easy as we do not need to loop through to find the end. We do have to design for the edge case that occurs when we delete the end node though.
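The even/odd computation above can be captured in a small helper (the names are mine; positions are 1-indexed):

```java
public class MidTracked {
    // Which position is the "middle" for a list of the given size,
    // counting positions from 1. Integer division truncates, so the
    // odd case adds 1 to round up.
    public static int midIndex(int size) {
        if (size % 2 == 0) {
            return size / 2;
        }
        return size / 2 + 1;
    }
}
```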
If we are given only 2 eggs, then yes, you will fall back to a linear search after the first one breaks: once you are down to 1 egg, you can only find the threshold floor by going up 1 floor at a time. With more eggs you can keep halving — each break costs one egg, so you can afford a binary split for every egg beyond the last.
If you read my assumptions, it shows that I am assuming 2 or more eggs for the binary method (divide the remaining possible floors in half and check the mid-point). That approach really only stays fully binary with an unlimited supply of eggs, though.
Ultimately, the best answer for 2 eggs was provided by PKT with the triangular-number solution: find the smallest t such that t(t+1)/2 >= n, and t is your worst-case number of throws. The trick is choosing how many floors to skip with each toss: start by skipping t floors, then t-1, then t-2, and so on, reducing the gap by 1 after each throw that doesn't break an egg. When the first egg does break, you finish with a linear search of the floors inside the last gap. The ever-decreasing gaps mean that however high the threshold floor is, the throws already spent plus the linear throws remaining always total at most t — more first-egg throws are spent reaching the lower floors, but fewer linear attempts remain at the higher ones.
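A tiny sketch of the triangular-number calculation (the names are mine): find the smallest t whose triangular number covers all the floors.

```java
public class EggDrop {
    // Smallest t with t*(t+1)/2 >= floors. With 2 eggs, the first egg is
    // dropped skipping t, then t-1, then t-2, ... floors, which guarantees
    // at most t throws in total regardless of where the threshold lies.
    public static int minThrowsTwoEggs(int floors) {
        int t = 0;
        while (t * (t + 1) / 2 < floors) {
            t++;
        }
        return t;
    }
}
```

For the classic 100-floor version, this gives 14 throws (13*14/2 = 91 < 100, but 14*15/2 = 105 >= 100).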
So the key to this question lies in the number of eggs you have to try.
Let's start with a brute force method:
1. Start at the ground floor (if the egg breaks there, there is no answer).
2. Move up a number of floors equal to the number of eggs you have.
3. Repeat until your egg breaks.
4. If you have eggs left, go back to the last floor that didn't break an egg and move up one floor at a time until you find the first floor that causes a break.
5. MaxFloorWithoutBreakage = CurrentFloor - 1
So the running time of this algorithm is linear: roughly O(n/e + e), where e is the number of eggs you have.
To make this more efficient, you could make the assumption that you always start the experiment with 2 or more eggs. Then, as long as you have 2 or more eggs remaining, you do a binary search. This effectively cuts the number of candidate floors in half each time until you reach your final egg. On your final egg, you go back to your last 'safe' floor and start incrementing by 1 until it breaks.
In this instance the worst case is still linear (on the order of n/2 throws if the first egg breaks on the very first toss), but the average running time (assuming the break point varies between trials) approaches O(log n).
For the bonus round: if you actually code this against a sorted array that represents the floors, the search itself needs only O(1) additional space, which means it runs in place.
@yolo
You can iterate through a linked list without the head pointer, so long as you have one of the nodes from the list. My example above does not utilize a head pointer, but it definitely utilizes iteration. Every node in a singly linked list has a pointer to the next node, and therefore you can start at any node N and move/look forward (you just cannot reach the nodes before it).
You would implement a breadth first search using a queue to sum the values on each level, print them, then move to the next level.
#include <iostream>
#include <queue>
using namespace std;

struct BinaryTree {
    int value;
    BinaryTree *left;
    BinaryTree *right;
};

void printLevelOrder(BinaryTree *root) {
    if (!root) return;
    queue<BinaryTree*> nodesQueue;
    int nodesInCurrentLevel = 1;
    int nodesInNextLevel = 0;
    int levelSum = 0;
    nodesQueue.push(root);
    while (!nodesQueue.empty()) {
        BinaryTree *currNode = nodesQueue.front();
        nodesQueue.pop();
        nodesInCurrentLevel--;
        levelSum += currNode->value;
        // only enqueue real children, so null nodes are never processed
        // and no spurious empty level is printed at the end
        if (currNode->left) {
            nodesQueue.push(currNode->left);
            nodesInNextLevel++;
        }
        if (currNode->right) {
            nodesQueue.push(currNode->right);
            nodesInNextLevel++;
        }
        if (nodesInCurrentLevel == 0) {
            cout << levelSum << endl;
            nodesInCurrentLevel = nodesInNextLevel;
            nodesInNextLevel = 0;
            levelSum = 0;
        }
    }
}
So to delete the last node from a singly linked list, you invoke a function from that node (this is the additional data I believe the interviewer was talking about) that:
1. Checks whether this node is itself the last node — if so, it cannot unlink itself, since removal requires a reference to the previous node.
2. Iterates through the list, looking 2 nodes ahead (n.next.next) to find the second-to-last node.
3. Finally sets that node's next pointer to null, unlinking the last node.
Brief example:
Node n = myLinkedList.getRandomNode(); (this is any node that is part of a linked list)
n.deleteLast();
public void deleteLast(){
//if this is the last node, we cannot unlink it from here:
//removal requires access to the predecessor ('this = null' is not legal Java,
//and nulling a local copy would not change the list anyway)
if(this.next == null) {
return;
}
//look 2 nodes ahead until temp is the second-to-last node
Node temp = this;
while(temp.next.next != null){
temp = temp.next;
}
//unlink the last node; with no references left, it becomes
//eligible for garbage collection
temp.next = null;
}
I believe this to be O(n) runtime (scans the entire array 2 times) while also being O(1) memory (swaps in-place with only a few 'placeholder' variables and counters).
- masterjaso June 10, 2022