whitenoise
Answers
Let A be an array of size n, containing positive or negative integers, with A[1] < A[2] < ... < A[n]. Design an efficient algorithm (more efficient than O(n)) to find an i such that A[i] = i, provided such an i exists. What is the worst-case computational complexity of your algorithm?
- whitenoise in United States
Algorithm
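The post does not include an answer, but one standard approach fits the sub-O(n) requirement: since the entries are strictly increasing integers, A[i] - i is non-decreasing, so a fixed point can be found by binary search in O(log n) worst case. A minimal Java sketch (class and method names are mine; the question's 1-based indexing is mapped onto Java's 0-based arrays):

public class FixedPoint {
    // Returns a 1-based index i with A[i] == i, or -1 if no fixed point exists.
    static int find(int[] a) {
        int lo = 1, hi = a.length;
        while (lo <= hi) {
            int mid = (lo + hi) >>> 1;
            if (a[mid - 1] == mid) return mid;  // fixed point found
            if (a[mid - 1] < mid) lo = mid + 1; // a fixed point, if any, lies to the right
            else hi = mid - 1;                  // a fixed point, if any, lies to the left
        }
        return -1;
    }

    public static void main(String[] args) {
        System.out.println(find(new int[]{-3, 0, 3, 7, 10})); // prints 3, since A[3] == 3
    }
}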
Well, if you go to the end of the linked list and come back up to the first node, you are traversing it twice.
- whitenoise May 17, 2013
- We have n coins, each of a unique denomination
- a coin once selected cannot be reused/repeated (sampling without replacement)
- number of subsets of a set of n coins = 2^n
- brute force: compare sum of elements in a subset with the desired amount.
- Repeat up to 2^n times - obviously bad: exponential time complexity
- So, sort the set. Use binary search to locate the desired amount A among the denominations
- The denomination is either present in the set, or not.
- Either way, you will know which coins are smaller than A. Reject all coins bigger than A.
- Apply the brute-force algorithm (above) to the subsets of this reduced list (coins no bigger than A).
- This way, every rejected coin halves the number of subsets to test, though the worst case remains exponential (see the sketch below)
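Here is a minimal Java sketch of this pruned brute force; the class name, method names, and the use of recursion to enumerate subsets are my assumptions, not part of the original answer:

import java.util.*;

public class SubsetSum {
    // Brute-force check over all subsets of the coins no larger than the target.
    // Sorting lets us discard coins bigger than the target up front, but the
    // remaining search is still exponential in the worst case.
    static boolean canMakeAmount(int[] coins, int target) {
        int[] sorted = coins.clone();
        Arrays.sort(sorted);
        // Keep only coins <= target (reject all coins bigger than the target).
        int cut = 0;
        while (cut < sorted.length && sorted[cut] <= target) cut++;
        return subsetSums(Arrays.copyOf(sorted, cut), 0, 0, target);
    }

    private static boolean subsetSums(int[] coins, int i, int sum, int target) {
        if (sum == target) return true;
        if (i == coins.length || sum > target) return false;
        // Either take coin i or skip it (sampling without replacement).
        return subsetSums(coins, i + 1, sum + coins[i], target)
            || subsetSums(coins, i + 1, sum, target);
    }

    public static void main(String[] args) {
        System.out.println(canMakeAmount(new int[]{1, 5, 10, 25}, 36)); // true: 1 + 10 + 25
    }
}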
Well, it is exactly what the name says: collaborative. A group of machines collaborates to provide caching services to clients (typically in a WWW setting), presenting the illusion of a single cache. Google the term to learn more.
- whitenoise April 27, 2013
Since the service needs to track only the player moves and the subsequent state changes, the board size (infinity) does not matter. Another reason the infinite expanse of the board does not matter: since a player wins by marking 3 consecutive/contiguous blocks, the other player is forced to play in the same vicinity as the first. The players can start anywhere on the board, as long as the service maintains a log of moves and the state changes.
For a player to win, she must play at least 3 moves to reach a 3-in-a-row state; if the opponent blocks, she will need 4, 5, or more. So evidently we will need to track the blocks marked by each player on every move. I am thinking of using a stack per player: from the 3rd move onward, pop the recent moves and check their cells for adjacency. Each check examines only a fixed number of recent moves, so it is O(1).
Adjacency is established by:
1. If the rows match across the moves (row match)
2. If the columns match across the moves (column match)
3. If the rows and columns both increase or decrease by the same amount (diagonal match)
(Agreed, I need to elaborate on this.)
If no match is found, discard the stack and start again.
(There are gaps in this approach, but I would be happy to hear other alternatives.)
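A minimal Java sketch of the adjacency check, assuming (my substitution) that each player's marks live in a hash set rather than a stack, so only the neighborhood of the newest move is inspected:

import java.util.*;

public class InfiniteTicTacToe {
    // Cells this player has marked, keyed as "r,c" for an unbounded board.
    private final Set<String> marks = new HashSet<>();

    // Record a move and report whether it completes 3 in a row. Only the
    // neighborhood of the new move is inspected, so the infinite board size
    // never matters.
    public boolean play(int r, int c) {
        marks.add(r + "," + c);
        int[][] dirs = {{0, 1}, {1, 0}, {1, 1}, {1, -1}}; // row, column, two diagonals
        for (int[] d : dirs) {
            int run = 1; // the new move itself
            for (int s = 1; marks.contains((r + s * d[0]) + "," + (c + s * d[1])); s++) run++;
            for (int s = 1; marks.contains((r - s * d[0]) + "," + (c - s * d[1])); s++) run++;
            if (run >= 3) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        InfiniteTicTacToe p = new InfiniteTicTacToe();
        p.play(100, 100);
        p.play(101, 101);
        System.out.println(p.play(102, 102)); // true: a diagonal 3-in-a-row
    }
}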
If we visit chrome://history/ we see that the browsing history is grouped by date. The immediate browsing history is of course separate from this. I think every browser exhibits more or less the same behavior.
So the data structure for storing the browsing history needs to be split into 2 categories:
1. Immediate - less than 6 hours
2. Long term - more than 6 hours
For the immediate category, we can use a stack. When an entry ages out, we copy it into a BST where each node's key is a time stamp and its value is the URL.
The long-term storage could also go into a hash map/hash table, with the time stamp as the key and the URL as the value. Agreed, some time stamps could be pretty close to each other, but the hash function should be able to distinguish them down to the seconds value.
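A minimal Java sketch of the two-tier store, assuming (my substitutions) a TreeMap stands in for the BST and an ArrayDeque for the stack; all names are mine:

import java.util.*;

public class BrowserHistory {
    private static final long IMMEDIATE_WINDOW_MS = 6L * 60 * 60 * 1000; // 6 hours

    private record Visit(long timestampMs, String url) {}

    // Immediate tier: a stack of recent visits, most recent on top.
    private final Deque<Visit> immediate = new ArrayDeque<>();
    // Long-term tier: a sorted map (a red-black tree) keyed by time stamp,
    // standing in for the BST described above.
    private final NavigableMap<Long, String> longTerm = new TreeMap<>();

    public void visit(String url, long nowMs) {
        immediate.push(new Visit(nowMs, url));
        age(nowMs);
    }

    // Move visits older than the 6-hour window into long-term storage.
    private void age(long nowMs) {
        while (!immediate.isEmpty()
                && nowMs - immediate.peekLast().timestampMs() > IMMEDIATE_WINDOW_MS) {
            Visit v = immediate.pollLast();
            longTerm.put(v.timestampMs(), v.url());
        }
    }

    // Fetch long-term history for a time window, e.g. one day's bucket for
    // the date-grouped view.
    public SortedMap<Long, String> historyBetween(long fromMs, long toMs) {
        return longTerm.subMap(fromMs, true, toMs, true);
    }
}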
A good API design should involve the following:
1. No state management. Push all state management to client code.
2. Idempotent. Repeating the same request will always produce the same result.
3. No Transaction management. Again, push it to client code.
4. Service oriented - meaning each API provides a specific, elemental service
When you apply all these design principles, your API code automatically becomes lean. From that perspective, everything you do becomes 'disposable': all objects and primitive variables used to service a request can be thrown away afterwards.
Then come the bloody internals. Ensure no loitering references; reset your collection references to null, etc.
But I believe that good overall design principles are a must for eliminating the conditions for memory leaks.
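As a small illustration of principles 1 and 2, here is a hypothetical stateless, idempotent service method; the price-lookup example and all names are mine:

import java.util.*;

public final class PricingApi {
    private PricingApi() {} // no instances and no fields: nothing to leak

    // Stateless and idempotent: all state (the catalog) is supplied by the
    // caller, and repeating the same request always yields the same result.
    public static Optional<Double> priceOf(Map<String, Double> catalog, String item) {
        return Optional.ofNullable(catalog.get(item));
    }

    public static void main(String[] args) {
        Map<String, Double> catalog = Map.of("pen", 1.50, "book", 9.99);
        System.out.println(priceOf(catalog, "pen"));    // Optional[1.5]
        System.out.println(priceOf(catalog, "pencil")); // Optional.empty
    }
}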
Agreed. Also, quicksort performance is a function of the pivot, a bad choice of which may throw the whole thing off. Since constant space is the only criterion, the best approach is to use insertion sort or selection sort, both of which sort in place and thereby meet the constant-space constraint.
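A minimal insertion sort sketch showing the in-place, constant-extra-space behavior (names are mine):

public class InsertionSort {
    // Sorts a[] in place using O(1) extra space.
    static void sort(int[] a) {
        for (int i = 1; i < a.length; i++) {
            int key = a[i], j = i - 1;
            // Shift larger elements right to open a slot for key.
            while (j >= 0 && a[j] > key) {
                a[j + 1] = a[j];
                j--;
            }
            a[j + 1] = key;
        }
    }

    public static void main(String[] args) {
        int[] a = {5, 2, 4, 6, 1, 3};
        sort(a);
        System.out.println(java.util.Arrays.toString(a)); // [1, 2, 3, 4, 5, 6]
    }
}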
- whitenoise April 25, 2013
Rudolf Bayer invented the B-tree - hence it is also sometimes known as a Bayer tree, which perhaps leads to some confusion.
- whitenoise April 25, 2013
We do not need to start from the beginning, as suggested here. Instead, start at the tails. The smallest of the tails is guaranteed to be the lower bound of the range. The issue is to find the upper bound; of course, the upper bound cannot be more than the max of all the tails.
So start with the max of the tails as the upper bound. To improve this estimate, compare the next smaller number below this tail (in the same list as the max of tails, of course). If this number is greater than the lower bound (which is fixed), then continue looking. Stop when your number is no longer greater than the lower bound.
This way, you don't need to search all the K lists. Here is the pseudo code:
For each list in K collection:
Get the tailValue and eval the running minimum (minOfTails)
Get the tailValue and eval the running maximum (maxOfTails)
:loop end
Set the range lower bound = minOfTails //This is our lower bound
//Now take the list whose tail is the maximum
For each item in that list: (reverse loop, from the tail)
Check: if the item is greater than minOfTails
If yes, continue
If no, then stop
:loop end
Set the last item seen that was greater than minOfTails as the upper bound of the range.
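A minimal Java sketch of the pseudo code above; the container types and the names not taken from the pseudo code are my assumptions:

import java.util.*;

public class TailRange {
    // Lists are sorted ascending, so each list's tail is its maximum. The
    // smallest tail is the lower bound; the upper bound is refined by walking
    // backward through the list that owns the largest tail.
    static int[] range(List<List<Integer>> lists) {
        int minOfTails = Integer.MAX_VALUE;
        List<Integer> maxList = null;
        for (List<Integer> list : lists) {
            int tail = list.get(list.size() - 1);
            minOfTails = Math.min(minOfTails, tail);
            if (maxList == null || tail > maxList.get(maxList.size() - 1)) {
                maxList = list;
            }
        }
        // Walk backward from the largest tail; the last element seen that is
        // still greater than minOfTails becomes the upper bound.
        int upper = maxList.get(maxList.size() - 1);
        for (int i = maxList.size() - 1; i >= 0 && maxList.get(i) > minOfTails; i--) {
            upper = maxList.get(i);
        }
        return new int[]{minOfTails, upper};
    }
}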
Objective:
1. Minimize the total waste by volume (not by flavor)
2. Maximize distribution (so maximum number of people must be served)
Constraints:
1. A person cannot receive more than one flavor, i.e., more than one flavor cannot be distributed to the same person
2. One flavor can be distributed to more than one person
3. The per-person cake share V lies between Vmin and Vmax
Logic:
To minimize the waste by volume, the per person share must be adjusted to achieve the minimal waste.
Arrange the cakes in descending order of volume, largest to smallest
Setup a cake distribution list: KDistribList[]
Setup the waste basket: wasteBasket
For each cake in the list:
Check if KDistribList size = K {
If YES, then add the remaining cakes to wasteBasket
Exit; distribution is complete
}
Count the number of whole shares of size V that go into Vcake
Throw the remainder into the waste basket { wasteBasket = wasteBasket + Vcake % V }
If (Vcake/V <= (K - KDistribList.size)) { //Meaning there is room for every piece
add all the pieces of size V to the distribution list KDistribList[]
} else { //Meaning there are more pieces than spaces left
add as many pieces of size V to KDistribList[] as will fit
add the volume of the pieces that did not fit to wasteBasket
}
The above algorithm assumes that K >= N, i.e., there are at least as many people as cakes. If there are more cakes than people (N > K), the algorithm still applies; more cake volume will inevitably be wasted in that scenario, but we will still maximize the distribution (every person is guaranteed a piece).
Also, since minimizing waste is an objective, the way to do so is to vary the choice of V and see what yields the minimum wasteBasket value. So we will need to repeat this algorithm for different choices of V and check for the wasteBasket minimum; a sketch of this search follows.
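A minimal Java sketch of this greedy pass plus the search over V; the grid search over candidate share sizes and all names are my assumptions:

import java.util.*;

public class CakeDistribution {
    // One greedy pass for a candidate share size v. cakes[] holds each cake's
    // volume (one flavor per cake); k is the number of people. Returns the
    // total wasted volume for this choice of v.
    static double wasteFor(double[] cakes, int k, double v) {
        double[] sorted = cakes.clone();
        Arrays.sort(sorted); // ascending; walked backward for descending order
        double waste = 0;
        int served = 0;
        for (int c = sorted.length - 1; c >= 0; c--) {
            if (served == k) { waste += sorted[c]; continue; } // leftover whole cakes
            int pieces = (int) (sorted[c] / v);      // whole shares in this cake
            waste += sorted[c] - pieces * v;         // the remainder of this cake
            int used = Math.min(pieces, k - served); // only as many pieces as fit
            waste += (pieces - used) * v;            // cut pieces nobody receives
            served += used;
        }
        return waste;
    }

    // Repeat the pass for different choices of V between Vmin and Vmax and
    // keep the one with the smallest waste, as suggested above.
    static double bestShare(double[] cakes, int k, double vMin, double vMax, int steps) {
        double best = vMin, bestWaste = Double.MAX_VALUE;
        for (int i = 0; i <= steps; i++) {
            double v = vMin + i * (vMax - vMin) / steps;
            double w = wasteFor(cakes, k, v);
            if (w < bestWaste) { bestWaste = w; best = v; }
        }
        return best;
    }
}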
what say?
Traverse the graph edges (N1->N2), (N1->N3), (N1->N4), (N2->N3), and (N3->N4), extract the nodes, and add them to the node_master table:
<node_master>
node_id, node_value
N1 1234
N2 34
N3 56
N4 90
Then for each node, extract the children, add them to node_family table:
<node_family>
node_parent, node_child
N1 N2
N1 N3
N1 N4
N2 N3
N3 N4
Then to find the parent/child pairs, the SQL query may be:
SELECT node_master.node_id, node_family.node_child
FROM node_master INNER JOIN node_family
ON node_master.node_id = node_family.node_parent
WHERE <specify a filter>;
A graph data structure will best fit this need. To really understand why, start with a single linked list. Assume the head of the list is a given URL, or a given time stamp signifying the start of observation or entries in the log.
Then the next node that follows will be the next time stamp at which the same URL was visited (assuming head = URL), and the node after that another time stamp, and so on. When you traverse from the head, you know the sequence of time stamps at which this URL was accessed.
Now, start with a time stamp node in the sequence. Branch out of the main link to point to another URL that was accessed at that same time stamp. Repeat the above steps for the URL as the head.
Soon enough, you will end up with a directed graph.
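A minimal Java sketch of the idea, assuming (my simplification) a plain adjacency map whose nodes are either URLs or time stamps:

import java.util.*;

public class AccessGraph {
    // Adjacency lists for a directed graph whose nodes are either URLs or
    // time stamps: a URL points at the time stamps it was visited at, and a
    // time stamp points at the URLs visited at that moment.
    private final Map<String, List<String>> edges = new HashMap<>();

    public void recordVisit(String url, String timestamp) {
        edges.computeIfAbsent(url, k -> new ArrayList<>()).add(timestamp);
        edges.computeIfAbsent(timestamp, k -> new ArrayList<>()).add(url);
    }

    // Traversing from a URL node yields the time stamps it was accessed at;
    // traversing from a time stamp yields the URLs hit at that moment.
    public List<String> neighbors(String node) {
        return edges.getOrDefault(node, Collections.emptyList());
    }
}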
It is just a quota-based system. The actual allocation happens when you add files against your quota. The storage expands and collapses automatically, based on your current usage.
- whitenoise April 25, 2013
import java.util.Scanner;

public class ClosestSquare {

    public static void main(String[] args) {
        System.out.println("Enter the number here: ");
        Scanner scanIn = new Scanner(System.in); // wrap System.in with a Scanner
        double x = Double.parseDouble(scanIn.nextLine()); // parse as double, not Integer
        scanIn.close();
        double root = Math.sqrt(x); // compute the square root once
        if (Math.floor(root) == Math.ceil(root)) {
            System.out.println("The given number x: " + x + " is a perfect square...");
        } else { // the given number is NOT a perfect square, so find the closest one
            double xSmall = Math.floor(root) * Math.floor(root); // nearest square below x
            double xLarge = Math.ceil(root) * Math.ceil(root);   // nearest square above x
            if (Math.abs(x - xSmall) > Math.abs(x - xLarge)) {
                System.out.println("The closest perfect square is: " + xLarge);
            } else {
                System.out.println("The closest perfect square is: " + xSmall);
            }
        }
    }
}
A perfect hash with zero collisions will perform in O(1). Arrays, sorted or otherwise, always offer lookup performance that is a function of their length.
- whitenoise May 17, 2013