zahidbuet106
Suppose we are detecting fraudulent cheques and we found that cheques with the following list of patterns are fraudulent:
- zahidbuet106 in United States
111122234455
1234
22334455
11111111
234567
etc.
Now if you have a new cheque and want to detect fraud in O(1) time, what data structure would you use?
Amazon Software Engineer / Developer (Data Structures)
Suppose a customer buys items for $10 in a shop and the cashier swipes her card at a POS, charging $10. Assume that the card has a $100 balance before the swipe. The POS sends the $10 transaction to a machine A in the Amazon cloud. A calls a service to update the transaction and the card balance, and then sends an acknowledgement back to the POS. But the ack gets lost in the middle, and the POS sends another $10 transaction request. How would you make sure that the balance is $90, not $80? And how would you distinguish a retry from two legitimate back-to-back $10 transactions?
- zahidbuet106 in United States
Hint: You can't use more than one transaction entry in the database and you don't have rollback provision.
Amazon Software Engineer / Developer (Database)
Given a BST and a node, write the BST data structure and a method in Java
- zahidbuet106 in United States
Node findNode(Node n)
that will find the next node of n (its in-order successor) in the BST. For example, if the tree looks like:
         7
       /   \
      5     11
     / \   /  \
    4   6 9   15
   /
  2
Then,
findNode(2) = 4,
findNode(4) = 5,
findNode(5) = 6,
findNode(6) = 7,
findNode(7) = 9,
findNode(9) = 11,
findNode(11) = 15.
Note that you are not given the root of the tree.
Hint: you may assume that each node has a parent pointer.
Amazon Software Engineer / Developer (Algorithm)
How does this code run in O(n)?
- zahidbuet106 December 11, 2015
How is this linear in time?
- zahidbuet106 December 11, 2015
@dhs, yes, it is a modified Kadane's algorithm.
- zahidbuet106 December 11, 2015
O(n) time and O(1) space solution, assuming a constant-size alphabet for the characters in the string. The idea is to stretch a sliding window until it covers all the characters. Once all the characters are found, we either slide the window forward (if we haven't reached the end of the string) or shrink it from the front as long as all characters still match in the shrunk window.
//O(1) match between two histograms due to constant alphabet size i.e. 256
private static int countMatches(int[] textHist, int[] patHist){
    int match = 0;
    for(int i = 0; i < 256; i++){
        if(patHist[i] > 0 && textHist[i] > 0){
            match++;
        }
    }
    return match;
}
public static String minLenSubStringWithAllChars(String str, Set<Character> chars){
    int[] textHist = new int[256];
    int[] patHist = new int[256];
    int start = 0;
    int end = 0;
    int minLen = Integer.MAX_VALUE;
    int bestStart = 0;
    int bestEnd = 0;
    //prepare the initial window of the size of the char set
    for(char c : chars){
        textHist[str.charAt(end)]++;
        patHist[c]++;
        end++;
    }
    while(start < str.length()){
        int matches = countMatches(textHist, patHist);
        //if the current window doesn't contain all the characters
        //then stretch the window to the right, up to the end of the string
        if(matches < chars.size() && end < str.length()){
            //stretch window
            textHist[str.charAt(end)]++;
            end++;
        }
        //if the current window contains all the characters with frequency
        //at least one then we have the freedom to shrink the window
        //from the front
        else if(matches >= chars.size()){
            //as the current window contains all characters, update minLen
            if(end - start < minLen){
                minLen = end - start;
                bestStart = start;
                bestEnd = end;
            }
            //shrink window
            textHist[str.charAt(start)]--;
            start++;
        }
        //if the current window doesn't contain all chars
        //but we can't stretch the window any more, then break
        else{
            break;
        }
    }
    return str.substring(bestStart, bestEnd);
}
public static void permuteList(String[][] list, int start, ArrayList<String> perms){
    //a full combination has exactly one entry per list: print it
    if(start == list.length){
        System.out.println(perms.toString());
        return;
    }
    for(int i = 0; i < list[start].length; i++){
        perms.add(list[start][i]);
        //recurse into the next list only; recursing into every later list
        //would skip lists and never produce a full-size combination
        permuteList(list, start + 1, perms);
        perms.remove(perms.size() - 1);
    }
}
str.split("-?\\d+(\\.\\d+)?").length == 0
- zahidbuet106 November 03, 2015
Wrong solution. I wonder how wrong solutions are getting promoted to the top!
- zahidbuet106 October 10, 2015
If you think about real English word search then we can consider the length of the search word to be within a max limit, i.e. O(1) length. Then the search through each path is constant, and the overall complexity of course can't be less than O(n).
- zahidbuet106 October 09, 2015
This is the classic connected components problem in graph theory. We can easily find the number of islands using DFS. If the graph is connected then one DFS visits all the nodes exactly once. If the graph has islands, i.e. connected components, then we count the number of DFS traversals needed to visit all the nodes exactly once. This is O(n*m) time complexity.
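The island-counting argument above can be sketched as a standard grid DFS. This is a minimal illustration, not the original poster's code; the 0/1 grid encoding and 4-neighbour connectivity are my assumptions:

```java
public class Islands {
    // Count connected components of 1-cells in a non-empty grid using DFS.
    // O(n*m) time: every cell is visited a constant number of times.
    public static int countIslands(int[][] grid) {
        int n = grid.length, m = grid[0].length;
        boolean[][] seen = new boolean[n][m];
        int islands = 0;
        for (int r = 0; r < n; r++) {
            for (int c = 0; c < m; c++) {
                if (grid[r][c] == 1 && !seen[r][c]) {
                    islands++; // each fresh DFS start is a new island
                    dfs(grid, seen, r, c);
                }
            }
        }
        return islands;
    }

    private static void dfs(int[][] grid, boolean[][] seen, int r, int c) {
        if (r < 0 || c < 0 || r >= grid.length || c >= grid[0].length) return;
        if (grid[r][c] == 0 || seen[r][c]) return;
        seen[r][c] = true;
        // explore the 4-connected neighbours
        dfs(grid, seen, r + 1, c);
        dfs(grid, seen, r - 1, c);
        dfs(grid, seen, r, c + 1);
        dfs(grid, seen, r, c - 1);
    }

    public static void main(String[] args) {
        int[][] grid = {
            {1, 1, 0, 0},
            {0, 1, 0, 1},
            {0, 0, 0, 1},
            {1, 0, 0, 0}
        };
        System.out.println(countIslands(grid)); // 3
    }
}
```

The number of DFS restarts is exactly the number of connected components, which is the island count.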
- zahidbuet106 October 07, 2015
We can build a trie of n strings in O(n) time if we assume that the strings are dictionary words of limited length and the alphabet size is limited (e.g. 256). Then there is an O(n) solution using the trie.
After the trie has been built, for any given search word we walk down the trie. If there is no child in the current trie node for the character at position i of the search string, then we allow the mismatch and go down to all the children (at most 256-1 = 255). Now we search for the next character in all of those children and stop as soon as we find a mismatch again. This way we exhaust the search string, and we check each of the current trie nodes (at most 255) for whether it marks the end of a word. If yes, the words at those nodes are the result.
The worst case, assuming constant alphabet size A and maximum string length K, is when we have a mismatch in the first position and the root node has a child for every letter of the alphabet except the first character of the search string. So the worst-case complexity of a search is (A-1)*(K-1) = O(K).
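The search just described can be sketched as follows. This is a rough illustration with my own class and method names (a HashMap-based trie with a one-mismatch "budget"), handling character substitutions only, as in the description above:

```java
import java.util.*;

public class FuzzyTrie {
    static class Node {
        Map<Character, Node> next = new HashMap<>();
        boolean word; // true if a stored word ends at this node
    }

    final Node root = new Node();

    void insert(String s) {
        Node cur = root;
        for (char c : s.toCharArray()) {
            cur = cur.next.computeIfAbsent(c, k -> new Node());
        }
        cur.word = true;
    }

    // true iff some stored word matches s with at most one substituted character
    boolean searchOneMismatch(String s) {
        return dfs(root, s, 0, 1);
    }

    private boolean dfs(Node node, String s, int i, int budget) {
        if (i == s.length()) return node.word;
        for (Map.Entry<Character, Node> e : node.next.entrySet()) {
            if (e.getKey() == s.charAt(i)) {
                // exact match on this character: budget unchanged
                if (dfs(e.getValue(), s, i + 1, budget)) return true;
            } else if (budget > 0 && dfs(e.getValue(), s, i + 1, budget - 1)) {
                // spend the single allowed mismatch on this branch
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        FuzzyTrie t = new FuzzyTrie();
        t.insert("hello");
        t.insert("world");
        System.out.println(t.searchOneMismatch("hxllo")); // true
        System.out.println(t.searchOneMismatch("hxlxo")); // false
    }
}
```

With a bounded alphabet, the fan-out at the single allowed mismatch is bounded by a constant, matching the O(K) per-search bound argued above.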
@SJ: subSet is guaranteed O(lgn) because it uses headSet and tailSet to find the range, both of which are O(lgn).
@mger1979: count += Math.abs(subSet.first().second - subSet.last().second)+1;
I am doing this to avoid calling the size() method, as Java currently does a linear scan when you call size() on a subSet view. Instead I made my life difficult and kept the index of an element in the second field of the pair, so you can calculate the size as n = end-start+1 in O(1) time.
A simple O(nlgn) solution using cumulative sums and Java's navigable-set subSet operation. The idea is that if the number itself falls within the range then it is a qualifying subarray and we count it. Otherwise, if the cumulative sum at the position of the element falls within the range then there is a subarray from the start to this position and we count it.
Note that there might be many other subarrays between the start and the current position that sum to the given range. We can check whether A[i..j] sums to S by checking whether cumsum[j]-cumsum[i] = S. In other words, at each position j we can check whether cumsum[j]-S was seen earlier at some previous position i < j. We can easily do this using a hash to track the cumulative sum at each position.
Now, the question is how to track a range of sums. The idea is to put the cumulative sums seen so far in a TreeSet (i.e. in BST order), and for a position i find how many elements fall in the range by counting the elements in the subSet from (cumsum[i]-b) to (cumsum[i]-a) (think why). So we increase the count by this number.
Below is an O(nlgn) implementation of this logic. Note that the TreeSet is handled in such a way that we don't have to call the size() method, which is itself O(n) on a subSet view. Instead I keep an index and calculate the size using the simple formula (endIndex-startIndex+1).
Overall time complexity O(nlgn), overall space complexity O(n).
public static int subArrayWithSumInRange(int[] A, int a, int b){
    int count = 0;
    TreeSet<Pair> treeSet = new TreeSet<Pair>();
    int cumsum = 0;
    for(int i = 0; i < A.length; i++){
        cumsum += A[i];
        if(A[i] >= a && A[i] <= b){
            count++;
        }
        NavigableSet<Pair> subSet = treeSet.subSet(new Pair(cumsum-b, i+1), true, new Pair(cumsum-a, i+1), false);
        if(!subSet.isEmpty()){
            count += Math.abs(subSet.first().second - subSet.last().second) + 1;
        }
        treeSet.add(new Pair(cumsum, i));
    }
    return count;
}
private static class Pair implements Comparable<Pair>{
    public int first;
    public int second;

    public Pair(int first, int second){
        this.first = first;
        this.second = second;
    }

    @Override
    public int compareTo(Pair o) {
        if(this.first == o.first)
            return Integer.compare(this.second, o.second);
        else
            return Integer.compare(this.first, o.first);
    }

    @Override
    public String toString() {
        return "[" + first + "," + second + "]";
    }
}
This is similar to computing the inversion count of an array, which basically counts the smaller (or equal) elements on the right. So, the surpasser count at position i = n-i-1-inversion_count at that position.
- zahidbuet106 September 14, 2015
I hope you know the difference between a tight bound and a lower bound. Theta(n) means a tight bound. Also, you may already know why building a heap can be O(nlgn): it all depends on how you push down/pull up the heap elements.
- zahidbuet106 September 14, 2015
There is a better way to achieve this. We can find the kth smallest element of the 2D array using a min-heap; then the median is the (n/2)th smallest element (or the average of the (n/2-1)th and (n/2+1)th).
For example, we can start by building a min-heap from the first column of the 2D array in O(nlgn) time. Then, at each iteration, extract the min from the heap and insert the next element from the row of the extracted element (if the min is at the end of its row then there is no insert). This ensures we traverse in ascending order; that is, the element extracted at the kth iteration is the kth smallest. The complexity of this traversal is O(klgn) for k > n.
public static int kthSmallestElement(int[][] A, int k){
    int n = A.length;
    int m = A[0].length;
    MatrixElement kthSmallest = null;
    PriorityQueue<MatrixElement> minHeap = new PriorityQueue<MatrixElement>();
    //add column 0 into minHeap - O(nlgn)
    for(int i = 0; i < n; i++){
        minHeap.offer(new MatrixElement(A[i][0], i, 0));
    }
    //extract min from minHeap and insert the next element from the same row as the extracted min
    int count = 0;
    while(!minHeap.isEmpty() && count < k){
        kthSmallest = minHeap.poll();
        count++;
        if(kthSmallest.col + 1 < m){
            minHeap.offer(new MatrixElement(A[kthSmallest.row][kthSmallest.col+1], kthSmallest.row, kthSmallest.col+1));
        }
    }
    return kthSmallest.val;
}
public static class MatrixElement implements Comparable<MatrixElement>{
    public int val;
    public int row;
    public int col;

    public MatrixElement(int val, int row, int col){
        this.val = val;
        this.row = row;
        this.col = col;
    }

    @Override
    public int compareTo(MatrixElement o) {
        return Integer.compare(this.val, o.val);
    }
}
How is the order of this algo O(nlgn)? Can you explain a little bit more? It looks like the time complexity of countElements() itself is O(nlgm).
- zahidbuet106 September 11, 2015
Each number is at a distance of k from its original position -- this can only happen when we do exactly one swap of each position i with position i+k (conversely, i with i-k). For example, with
original numbers = [1, 2, 3, 4, 5, 6, 7, 8] and k=2, we swap 1 and 3, 2 and 4, 5 and 7, 6 and 8. The final array is [3, 4, 1, 2, 7, 8, 5, 6].
So, in the question we are given [3, 4, 1, 2, 7, 8, 5, 6] and we need to get back to the original sorted array. We simply do the same swaps again and we get back the original array (because a swap is its own inverse).
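The swap-back idea can be sketched directly. A minimal sketch, assuming (as the argument requires) that the array length is a multiple of 2k so the swaps tile the array exactly:

```java
import java.util.Arrays;

public class SwapK {
    // Undo an "every element is k away" shuffle by swapping i with i+k
    // inside blocks of 2k; the swap is its own inverse.
    public static void swapByK(int[] a, int k) {
        for (int block = 0; block + 2 * k <= a.length; block += 2 * k) {
            for (int i = block; i < block + k; i++) {
                int tmp = a[i];
                a[i] = a[i + k];
                a[i + k] = tmp;
            }
        }
    }

    public static void main(String[] args) {
        int[] a = {3, 4, 1, 2, 7, 8, 5, 6};
        swapByK(a, 2);
        System.out.println(Arrays.toString(a)); // [1, 2, 3, 4, 5, 6, 7, 8]
    }
}
```

This is O(n) time and O(1) extra space, since each element is touched exactly once.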
Do a merge operation (as we do in the merge phase of merge sort) on each row successively. Then the median is the (m*n/2)th element if m*n is odd or, if it is even, the average of the (m*n/2-1)th and (m*n/2+1)th elements. Total complexity is O(n*m).
- zahidbuet106 September 08, 2015
Do a merge operation (as we do in the merge phase of merge sort) on each row successively. Then the median is the (m*n/2)th element if m*n is odd or, if it is even, the average of the (m*n/2-1)th and (m*n/2+1)th elements. The complexity of this solution is O(nmlg(n)).
There was a typo bug in my final return: it should return false instead of true. Also, thanks to @rchg1988 for fixing a missing case. With these two fixes the solution works for all cases now.
public static boolean isCrossed(double[] s){
    //base case
    if(s.length < 4){
        return false;
    }
    if(s[0] >= s[2] && s[3] >= s[1]){
        return true;
    }
    //test if the moves are on an outward increasing spiral
    int i = 3;
    while(i < s.length){
        if(s[i] > s[i-2] && s[i-1] > s[i-3])
            i++;
        else break;
    }
    //if we visited all the moves then there is no intersection
    if(i == s.length){
        return false;
    }
    //otherwise moves are on a decreasing inward spiral starting from i;
    //we first need to check whether the two spirals cross each other, which is only possible
    //when edge i+1 crosses edge (i-4) or edge i+1 crosses edge i-2 (if it exists)
    if(i < s.length && i > 3 && s[i+1] >= (s[i-1] - s[i-3])){
        if(s[i] >= (s[i-2] - s[i-4]) || s[i+1] >= s[i-1])
            return true;
    }
    //if the two spirals didn't intersect then check for decreasing s
    while(i+3 < s.length){
        if(s[i] > s[i+2] && s[i+1] > s[i+3]){
            i++;
        }
        else break;
    }
    //if we visited all the moves then there is no intersection
    if(i+3 == s.length){
        return false;
    }
    return false;
}
Here is my O(n^2) time and O(n) space solution using a greedy approach - that is, it returns a suboptimal solution.
The idea is to partition the array into pairs of elements that distribute the sum as uniformly as possible across the partitions. So, each time we try to take 2 pairs and put one pair in one partition and the other pair in the other partition.
1. Sort the array.
2. If the number of elements is less than 4 then create the partitions directly, with one case each for 1, 2, or 3 elements in the array.
3. Otherwise take 2 pairs each time and put them into the two partitions such that the sum difference is minimized.
4. Pick the (largest, smallest) pair of elements in the sorted array and put it into the smaller (w.r.t. sum) partition.
5. Then pick the second largest element and find its buddy such that putting the pair into the 'other' partition minimizes the sum difference of the partitions.
6. The above approach gives a suboptimal solution. The problem is NP-complete, so we can't guarantee an optimal solution, but we can improve the approximation ratio as follows.
7. If we have a suboptimal solution (i.e. sum diff != 0) then we try to improve it by swapping a large element in the larger partition with a small element in the smaller partition such that the swap minimizes the sum diff.
//overall O(n^2) time and O(n) space solution using a greedy approach
public static ArrayList<Integer>[] findEqualPartitionMinSumDif(int A[]){
    //first sort the array - O(nlgn)
    Arrays.sort(A);
    ArrayList<Integer> partition1 = new ArrayList<Integer>();
    ArrayList<Integer> partition2 = new ArrayList<Integer>();

    //create an index table to manage the largest and smallest unused items
    //O(n) space and O(nlgn) time to build and query the set
    TreeSet<Integer> unused = new TreeSet<>();
    for(int i = 0; i < A.length; i++){
        unused.add(i);
    }

    int i = 0;
    int j = A.length - 1;
    int part1Sum = 0;
    int part2Sum = 0;
    int diffSum = 0;

    //O(n^2) processing time
    while(unused.size() > 0){
        i = unused.first();
        j = unused.last();
        diffSum = part1Sum - part2Sum;

        //if the size of the array is not a multiple of 4 then we need to assign the last
        //3 (or 2 or 1) elements to partitions. This is special-case handling
        if(unused.size() < 4){
            switch(unused.size()){
                case 1:
                    //put the 1 remaining item into the smaller partition
                    if(diffSum > 0){
                        partition2.add(A[i]);
                        part2Sum += A[i];
                    }
                    else{
                        partition1.add(A[i]);
                        part1Sum += A[i];
                    }
                    break;
                case 2:
                    //of the remaining 2, put the max in the smaller and the min in the larger bucket
                    int max = Math.max(A[i], A[j]);
                    int min = Math.min(A[i], A[j]);
                    if(diffSum > 0){
                        partition2.add(max);
                        partition1.add(min);
                        part2Sum += max;
                        part1Sum += min;
                    }
                    else{
                        partition1.add(max);
                        partition2.add(min);
                        part1Sum += max;
                        part2Sum += min;
                    }
                    break;
                case 3:
                    //of the remaining 3, put the two whose total value is greater than the third
                    //into the smaller partition, and the third into the larger bucket
                    unused.remove(i);
                    unused.remove(j);
                    int middle = unused.first();
                    if(diffSum > 0){
                        if(A[i] + A[middle] > A[j]){
                            partition2.add(A[i]);
                            partition2.add(A[middle]);
                            partition1.add(A[j]);
                            part2Sum += A[i] + A[middle];
                            part1Sum += A[j];
                        }
                        else{
                            partition2.add(A[j]);
                            partition1.add(A[i]);
                            partition1.add(A[middle]);
                            part1Sum += A[i] + A[middle];
                            part2Sum += A[j];
                        }
                    }
                    else{
                        if(A[i] + A[middle] > A[j]){
                            partition1.add(A[i]);
                            partition1.add(A[middle]);
                            partition2.add(A[j]);
                            part1Sum += A[i] + A[middle];
                            part2Sum += A[j];
                        }
                        else{
                            partition1.add(A[j]);
                            partition2.add(A[i]);
                            partition2.add(A[middle]);
                            part2Sum += A[i] + A[middle];
                            part1Sum += A[j];
                        }
                    }
                    break;
                default:
            }
            diffSum = part1Sum - part2Sum;
            break;
        }

        //first take the largest and the smallest element to create a pair to insert into a partition;
        //we do this to have a balanced distribution of the numbers in the partitions.
        //add pair (i, j) to the smaller partition
        int pairSum = A[i] + A[j];
        int partition = diffSum > 0 ? 2 : 1;
        if(partition == 1){
            partition1.add(A[i]);
            partition1.add(A[j]);
            part1Sum += pairSum;
        }
        else{
            partition2.add(A[i]);
            partition2.add(A[j]);
            part2Sum += pairSum;
        }

        //update diff
        diffSum = part1Sum - part2Sum;
        //we have used pair (i, j)
        unused.remove(i);
        unused.remove(j);
        //move j to the next biggest element to the left
        j = unused.last();

        //now find the buddy for j to pair with such that their sum is as close as possible to pairSum,
        //i.e. find A[k], i <= k < j, such that the value of ((A[j]+A[k])-pairSum) is minimized
        int buddyIndex = unused.first();
        int minPairSumDiff = Integer.MAX_VALUE;
        for(int k = buddyIndex; k < j; k++){
            if(!unused.contains(k))
                continue;

            int compPairSum = A[j] + A[k];
            int pairSumDiff = Math.abs(pairSum - compPairSum);
            if(pairSumDiff < minPairSumDiff){
                minPairSumDiff = pairSumDiff;
                buddyIndex = k;
            }
        }

        //we have now found the buddy for j, so we add pair (j, buddyIndex) to the other partition
        if(j != buddyIndex){
            pairSum = A[j] + A[buddyIndex];
            if(partition == 2){
                partition1.add(A[j]);
                partition1.add(A[buddyIndex]);
                part1Sum += pairSum;
            }
            else{
                partition2.add(A[j]);
                partition2.add(A[buddyIndex]);
                part2Sum += pairSum;
            }

            //we have used pair (j, buddyIndex)
            unused.remove(j);
            unused.remove(buddyIndex);
        }
    }

    //if diffSum is not zero then we can further try to optimize by swapping
    //a larger element in the large partition with a smaller element in the small partition
    //O(n^2) operation with O(n) space
    if(diffSum != 0){
        Collections.sort(partition1);
        Collections.sort(partition2);

        diffSum = part1Sum - part2Sum;
        ArrayList<Integer> largerPartition = (diffSum > 0) ? partition1 : partition2;
        ArrayList<Integer> smallerPartition = (diffSum > 0) ? partition2 : partition1;

        int prevDiff = Math.abs(diffSum);
        int largePartitonSwapCandidate = -1;
        int smallPartitonSwapCandidate = -1;

        //find one of the largest elements from the large partition and one of the smallest from the
        //small partition to swap such that the overall sum difference of the partitions is minimized
        for(i = 0; i < smallerPartition.size(); i++){
            for(j = largerPartition.size() - 1; j >= 0; j--){
                int largerVal = largerPartition.get(j);
                int smallerVal = smallerPartition.get(i);

                //no point swapping a larger value out of the smaller partition
                if(largerVal <= smallerVal){
                    continue;
                }

                //new difference if we had swapped these elements
                int diff = Math.abs(prevDiff - 2 * Math.abs(largerVal - smallerVal));
                if(diff == 0){
                    largerPartition.set(j, smallerVal);
                    smallerPartition.set(i, largerVal);
                    return new ArrayList[]{largerPartition, smallerPartition};
                }
                //find the pair to swap that minimizes the sum diff
                else if(diff < prevDiff){
                    prevDiff = diff;
                    largePartitonSwapCandidate = j;
                    smallPartitonSwapCandidate = i;
                }
            }
        }

        //if we indeed found such a pair then swap it
        if(largePartitonSwapCandidate >= 0 && smallPartitonSwapCandidate >= 0){
            int largerVal = largerPartition.get(largePartitonSwapCandidate);
            int smallerVal = smallerPartition.get(smallPartitonSwapCandidate);
            largerPartition.set(largePartitonSwapCandidate, smallerVal);
            smallerPartition.set(smallPartitonSwapCandidate, largerVal);
            return new ArrayList[]{largerPartition, smallerPartition};
        }
    }
    return new ArrayList[]{partition1, partition2};
}
This is a special case of the subset sum problem, called 'The Easiest Hard Problem', where for a given set {x1, x2, ..., xn} that sums to S you find a subset of size n/2 (or n/2+1) that sums to S/2 (or S/2+1).
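For context, the unconstrained version of this partition problem has a classic pseudo-polynomial DP. This sketch ignores the equal-cardinality (n/2 elements) requirement mentioned above and just minimizes the sum difference; the class and method names are illustrative:

```java
public class BalancedPartition {
    // Pseudo-polynomial DP for two-way partition:
    // reachable[s] == true iff some subset sums to s; the answer is the
    // reachable sum closest to total/2. O(n * total) time, O(total) space.
    public static int minPartitionDiff(int[] a) {
        int total = 0;
        for (int x : a) total += x;
        boolean[] reachable = new boolean[total / 2 + 1];
        reachable[0] = true;
        for (int x : a) {
            // iterate downwards so each element is used at most once
            for (int s = total / 2; s >= x; s--) {
                if (reachable[s - x]) reachable[s] = true;
            }
        }
        for (int s = total / 2; s >= 0; s--) {
            if (reachable[s]) return total - 2 * s; // diff of the two halves
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(minPartitionDiff(new int[]{3, 1, 4, 2, 2})); // 0
    }
}
```

This is only pseudo-polynomial (it depends on the magnitude of the sum), which is consistent with the problem being NP-complete in general.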
- zahidbuet106 September 06, 2015
// insert the current element into the left or right heap and get the median so far
public int getMedian(final int current, final int med, final MaxHeap<Integer> left, final MinHeap<Integer> right) {
    final int balance = left.size() - right.size();
    int median = med;

    // both heaps are of equal size
    if (balance == 0) {
        // need to insert in left
        if (current < median) {
            left.insert(current);
            median = left.top();
        }
        // need to insert in right
        else {
            right.insert(current);
            median = right.top();
        }
    }
    // left heap is larger
    else if (balance > 0) {
        // need to insert in left: move left's top to right first
        if (current < median) {
            right.insert(left.extract());
            left.insert(current);
        }
        // need to insert in right
        else {
            right.insert(current);
        }
        median = (left.top() + right.top()) / 2;
    }
    // right heap is larger
    else {
        // need to insert in left
        if (current < median) {
            left.insert(current);
        }
        // need to insert in right: move right's top to left first
        else {
            left.insert(right.extract());
            right.insert(current);
        }
        median = (left.top() + right.top()) / 2;
    }
    return median;
}
public int getStreamMedian(final int[] stream) {
    int median = 0;
    final MaxHeap<Integer> left = new MaxHeap<Integer>();
    final MinHeap<Integer> right = new MinHeap<Integer>();

    for (int i = 0; i < stream.length; i++) {
        median = getMedian(stream[i], median, left, right);
    }
    return median;
}
if((x & 1) == 1){
    y = x << 1;
}
else{
    y = x >> 1;
}
public static boolean isCrossed(double[] s){
    //base case
    if(s.length < 4){
        return false;
    }
    if(s[0] >= s[2] && s[3] >= s[1]){
        return true;
    }
    //test if the moves are on an outward increasing spiral
    int i = 3;
    while(i < s.length){
        if(s[i] > s[i-2] && s[i-1] > s[i-3])
            i++;
        else break;
    }
    //if we visited all the moves then there is no intersection
    if(i == s.length){
        return false;
    }
    //otherwise moves are on a decreasing inward spiral starting from i;
    //we first need to check whether the two spirals cross each other, which is only possible
    //when edge i+1 crosses edge (i-4), if it is there
    if(i < s.length && i > 3 && s[i+1] >= (s[i-1] - s[i-3])){
        if(s[i] >= (s[i-2] - s[i-4]) || s[i+1] >= s[i-1])
            return true;
    }
    //if the two spirals didn't intersect then check for decreasing s
    while(i+3 < s.length){
        if(s[i] > s[i+2] && s[i+1] > s[i+3]){
            i++;
        }
        else break;
    }
    //if we visited all the moves then there is no intersection
    if(i+3 == s.length){
        return false;
    }
    return false;
}
Can you explain how it works for these cases?
[2, 1, 4, 4, 3, 2, 2, 1, 1] --> should be non-crossing
[2, 1, 4, 4, 3, 3, 2, 1, 1] --> should be crossing
I can see this code failing for both of the above cases (returning false at i=4).
You are only considering an ingoing spiral or an outgoing spiral. It won't work for moves containing both. For example: 2, 1, 4, 4, 3, 3, 2, 1, 1.
- zahidbuet106 September 05, 2015
- zahidbuet106 September 05, 2015@Anon, good catch, yes... we need one to constantly make sure that the partition satisfy a constraint : y contains an even after swapping X[a] with the Y[b]
I modified the code a little bit to resolve this case.
@Anonymous
those paths are needed. Consider this case -
      6
     / \
    4   117
   / \
  3   5
@people: I wonder if you have grasped the concept behind the solution. Admittedly the solution is not efficient for non-English dictionary words, in which case the permutation calculation will be a little expensive... do you have an O(n) solution that does not generate permutations?
- zahidbuet106 July 11, 2014
Consider solving this problem without using any standard library function such as string.find.
- zahidbuet106 June 23, 2014
Consider this example, find the path for sum = 3:
    3
   /
 -2
   \
    2
This is plainly O(n^2). Use Manacher's algorithm to find it in O(n)
- zahidbuet106 June 14, 2014With a BST we have an added advantage that we know all the nodes on the left is less than the current node and all the nodes on the right are greater than the current node.
So, while doing a DFS to search for a minLen path in a BST we actually can take informed decision at each node to either go left or right based on the remaining sum to search and hence cutting down the search space in half in average case. This will guarantee finding the minimum length path because -
1) If current node value is equal to sum and it is a leaf node than we have a complete path. If it is not a leaf then we need to find a zero sum minLen path along one of its subtrees.
2) If current node value is greater than (or equal to) the remaining sum i.e. value>=(sum-value) then we should search for new sum=(sum-value) in the left sub tree because - if a path exists in the left subtree, it must be the shorter than any possible path in the right subtree, since all nodes at left is smaller than those at right. If the search in left subtree return with no such path then will search in the right subtree if only if the value of current node is.
3) Similarly, If current node value is smaller than the remaining sum i.e. value<(sum-value) then we should search for new sum=(sum-value) in the right sub tree because - if a path exists in the right subtree, it must be the shorter than any possible path in the right subtree, since all nodes at right is greater than those at left. If the search in right subtree returns with no such path only then will search in the left subtree.
So, average case complexity will be improved to O(lgn) with a complete BST. However, the worst case complexity will be O(n) in case where we have to search both left and right subtree for the remaining sum.
private int minLenSumPathBST(final TreeNode root, final int sum, final int len) {
    if (root == null) {
        return Integer.MAX_VALUE;
    }

    // find the remaining sum as we are including the current node in the current path
    final int remainingSum = sum - root.key;
    // if the remaining sum is zero and it is a leaf node then we found a complete root-to-leaf path
    if (remainingSum == 0 && root.left == null && root.right == null) {
        return len + 1;
    }
    // if the remaining sum is at most the current node value then search the left subtree first
    else if (remainingSum <= root.key) {
        int l = minLenSumPathBST(root.left, remainingSum, len + 1);

        // if the search in the left subtree fails to find such a path, only then search the right subtree
        if (l == Integer.MAX_VALUE) {
            l = minLenSumPathBST(root.right, remainingSum, len + 1);
        }
        return l;
    }
    // if the remaining sum is greater than the current node value then search the right subtree first
    else {
        int l = minLenSumPathBST(root.right, remainingSum, len + 1);

        // if the search in the right subtree fails to find such a path, only then search the left subtree
        if (l == Integer.MAX_VALUE) {
            l = minLenSumPathBST(root.left, remainingSum, len + 1);
        }
        return l;
    }
}
With a BST we have an added advantage that we know all the nodes on the left is less than the current node and all the nodes on the right are greater than the current node.
So, while doing a DFS to search for a minLen path in a BST we actually can take informed decision at each node to either go left or right based on the remaining sum to search and hence cutting down the search space in half in average case. This will guarantee finding the minimum length path because -
1) If current node value is equal to sum and it is a leaf node than we have a complete path. If it is not a leaf then we need to find a zero sum minLen path along one of its subtrees.
2) If current node value is greater than (or equal to) the remaining sum i.e. value>=(sum-value) then we should search for new sum=(sum-value) in the left sub tree because - if a path exists in the left subtree, it must be the shorter than any possible path in the right subtree, since all nodes at left is smaller than those at right.
3) Similarly, If current node value is smaller than the remaining sum i.e. value<(sum-value) then we should search for new sum=(sum-value) in the right sub tree because - if a path exists in the right subtree, it must be the shorter than any possible path in the right subtree, since all nodes at right is greater than those at left.
So, average case complexity will be improved to O(lgn) with a complete BST. However, the worst case complexity will be O(n) in case where we have to search both left and right subtree for the remaining sum.
private int minLenSumPathBST(final TreeNode root, final int sum, final int len) {
    if (root == null) {
        return Integer.MAX_VALUE;
    }
    // compute the remaining sum, as we are including the current node in the current path
    final int remainingSum = sum - root.key;
    // if the remaining sum is zero and this is a leaf, we found a complete root-to-leaf path
    if (remainingSum == 0 && root.left == null && root.right == null) {
        return len + 1;
    }
    // if the remaining sum is less than or equal to the current node value, search the left subtree first
    else if (remainingSum <= root.key) {
        int l = minLenSumPathBST(root.left, remainingSum, len + 1);
        // only if the left subtree has no such path do we search the right subtree
        if (l == Integer.MAX_VALUE) {
            l = minLenSumPathBST(root.right, remainingSum, len + 1);
        }
        return l;
    }
    // if the remaining sum is greater than the current node value, search the right subtree first
    else {
        int l = minLenSumPathBST(root.right, remainingSum, len + 1);
        // only if the right subtree has no such path do we search the left subtree
        if (l == Integer.MAX_VALUE) {
            l = minLenSumPathBST(root.left, remainingSum, len + 1);
        }
        return l;
    }
}
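The method above assumes a TreeNode with key, left and right fields. A minimal self-contained sketch of that class plus a driver (the class and field names here are assumptions, not given in the question; the method body is the same logic, condensed):

```java
class MinLenSumPathDemo {
    // Assumed node shape matching the fields used in the answer above.
    static class TreeNode {
        int key;
        TreeNode left, right;
        TreeNode(int key) { this.key = key; }
    }

    // Same logic as above, condensed: try the "informed" subtree first,
    // fall back to the other subtree only if no path is found there.
    static int minLenSumPathBST(TreeNode root, int sum, int len) {
        if (root == null) return Integer.MAX_VALUE;
        int remainingSum = sum - root.key;
        if (remainingSum == 0 && root.left == null && root.right == null) return len + 1;
        TreeNode first = (remainingSum <= root.key) ? root.left : root.right;
        TreeNode second = (remainingSum <= root.key) ? root.right : root.left;
        int l = minLenSumPathBST(first, remainingSum, len + 1);
        if (l == Integer.MAX_VALUE) l = minLenSumPathBST(second, remainingSum, len + 1);
        return l;
    }

    public static void main(String[] args) {
        //        7
        //       / \
        //      5   11
        //     / \
        //    4   6
        TreeNode root = new TreeNode(7);
        root.left = new TreeNode(5);
        root.right = new TreeNode(11);
        root.left.left = new TreeNode(4);
        root.left.right = new TreeNode(6);
        // 7->11 sums to 18 in 2 nodes; 7->5->6 also sums to 18 but takes 3 nodes
        System.out.println(minLenSumPathBST(root, 18, 0)); // prints 2
    }
}
```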
yes, only a little modification is needed to handle cases like the one described by solongso in the previous comment.
- zahidbuet106 June 14, 2014
The question didn't ask to swap alternate pairs; instead it asks to swap alternate elements. For example: 1-->2-->3-->4-->5 should be transformed to 3-->4-->5-->2-->1.
public static LinkedListNode swapAlternateNodes(final LinkedListNode head) {
    // nothing to swap for an empty or single-node list
    if (head == null || head.next == null) {
        return head;
    }
    LinkedListNode newhead = null;
    LinkedListNode prev = head;
    LinkedListNode cur = prev.next;
    LinkedListNode temp = null;
    while (cur != null && cur.next != null) {
        temp = cur.next;
        prev.next = cur.next.next;
        cur.next = prev;
        temp.next = cur;
        if (newhead == null) {
            newhead = temp;
        } else if (newhead.next == prev) {
            newhead.next = temp;
        } else if (newhead.next.next == prev) {
            newhead.next.next = temp;
        }
        prev = cur;
        cur = cur.next;
    }
    // lists too short to enter the loop are returned unchanged
    return newhead == null ? head : newhead;
}
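For reference, a self-contained sketch of the node class and a driver reproducing the example above (the LinkedListNode shape and the helper names are assumptions, not given in the question):

```java
class SwapAlternateDemo {
    // Assumed node shape matching the field names used in the answer above.
    static class LinkedListNode {
        int val;
        LinkedListNode next;
        LinkedListNode(int val) { this.val = val; }
    }

    // Same algorithm as above, with short lists returned unchanged.
    static LinkedListNode swapAlternateNodes(LinkedListNode head) {
        if (head == null || head.next == null) return head;
        LinkedListNode newhead = null, prev = head, cur = prev.next, temp;
        while (cur != null && cur.next != null) {
            temp = cur.next;
            prev.next = cur.next.next;
            cur.next = prev;
            temp.next = cur;
            if (newhead == null) newhead = temp;
            else if (newhead.next == prev) newhead.next = temp;
            else if (newhead.next.next == prev) newhead.next.next = temp;
            prev = cur;
            cur = cur.next;
        }
        return newhead == null ? head : newhead;
    }

    // build a list from values, e.g. build(1, 2, 3) gives 1-->2-->3
    static LinkedListNode build(int... vals) {
        LinkedListNode dummy = new LinkedListNode(0), tail = dummy;
        for (int v : vals) { tail.next = new LinkedListNode(v); tail = tail.next; }
        return dummy.next;
    }

    // render a list as "a-->b-->c" for easy checking
    static String render(LinkedListNode head) {
        StringBuilder sb = new StringBuilder();
        for (LinkedListNode n = head; n != null; n = n.next) {
            if (sb.length() > 0) sb.append("-->");
            sb.append(n.val);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(render(swapAlternateNodes(build(1, 2, 3, 4, 5)))); // prints 3-->4-->5-->2-->1
    }
}
```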
So, we will use the KMP algorithm to find all occurrences of the second string (the pattern) in the first string (the text) in O(n). Save these positions into an array of integers (let's say positionArray). We will use a StringBuilder to construct the replaced string. We also need to keep a pointer to the first index of the position array (let's say positionPointer).
Scan the text from left to right and append the character at the current position to the string builder, except at a position the positionPointer is currently pointing to in the positionArray, in which case we append the given replacement string to the string builder and skip the whole pattern in the text from the current scan position.
Once we've replaced the string we need to increment the positionPointer to skip overlapping pattern positions (consider this example: text="aaaaaa", pattern="aa", replaceWith="X"). We do this by aligning the positionPointer with the scan index.
The overall operation is O(n) time and O(n) space. Below is the code, assuming a standard KMP search implementation is available.
public String replace(final String text, final String pattern, final String replaceWith) {
    if (text.equals(pattern)) {
        return replaceWith;
    }
    if (pattern == null || pattern.isEmpty() || replaceWith == null || replaceWith.isEmpty()) {
        return text;
    }
    // start positions of all occurrences of pattern in text, in increasing order
    final int[] substringPositions = KMPSearch(text, pattern);
    int positionPointer = 0;
    final StringBuilder replacedString = new StringBuilder();
    final char[] textChars = text.toCharArray();
    for (int i = 0; i < textChars.length; i++) {
        // skip overlapping occurrences that start before the current scan index
        while (positionPointer < substringPositions.length && i > substringPositions[positionPointer]) {
            positionPointer++;
        }
        if (positionPointer < substringPositions.length && i == substringPositions[positionPointer]) {
            // an occurrence starts here: append the replacement and jump past the pattern
            replacedString.append(replaceWith);
            i += pattern.length() - 1;
        } else {
            replacedString.append(textChars[i]);
        }
    }
    return replacedString.toString();
}
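The KMPSearch helper is assumed above; one standard sketch that returns the start index of every (possibly overlapping) occurrence, using the usual failure-function construction:

```java
import java.util.ArrayList;
import java.util.List;

class KMP {
    // Returns the start indices of all (possibly overlapping) occurrences of
    // pattern in text, in increasing order, in O(n + m) time.
    static int[] KMPSearch(String text, String pattern) {
        int n = text.length(), m = pattern.length();
        if (m == 0) return new int[0];
        // lps[i] = length of the longest proper prefix of pattern[0..i] that is also a suffix
        int[] lps = new int[m];
        for (int i = 1, len = 0; i < m; ) {
            if (pattern.charAt(i) == pattern.charAt(len)) lps[i++] = ++len;
            else if (len > 0) len = lps[len - 1];
            else lps[i++] = 0;
        }
        List<Integer> positions = new ArrayList<>();
        for (int i = 0, j = 0; i < n; ) {
            if (text.charAt(i) == pattern.charAt(j)) {
                i++; j++;
                if (j == m) {
                    positions.add(i - m);   // a full match ends just before i
                    j = lps[j - 1];         // keep going, allowing overlaps
                }
            } else if (j > 0) {
                j = lps[j - 1];
            } else {
                i++;
            }
        }
        int[] result = new int[positions.size()];
        for (int k = 0; k < result.length; k++) result[k] = positions.get(k);
        return result;
    }
}
```

Note that resetting j to lps[j - 1] after a full match is what makes overlapping occurrences show up, which the replace() above then skips via its positionPointer.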
@Anonymous: are you assuming that e is fixed and public information?
- zahidbuet106 June 12, 2014
Yes, you are right; it should be the max even in y. A typo, fixed now.
- zahidbuet106 June 04, 2014
In fact you can traverse in O(√n) by preprocessing the list and doing O(√n) extra work during insert (an intelligent bit of work upon which search becomes O(√n)). The answer is a "Skip List". It is even possible to achieve O(lg n) by selecting a proper number of levels; just search Google.
- zahidbuet106 June 03, 2014
This is completely feasible in real life, as words are rarely that long in the English dictionary! Use a long for storing product values. Also: calculate the product just once for each dictionary word and reuse it for subsequent searches.
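The O(√n) express-lane idea from the skip-list comment above can be illustrated with a simplified, static two-level structure over a sorted list (all names here are illustrative, not from the question):

```java
class SqrtSkipList {
    // Node in either lane; "down" links an express node to its level-0 twin.
    static class Node {
        int val;
        Node next, down;
        Node(int val) { this.val = val; }
    }

    // Promote every step-th node (step ~ sqrt(n)) of the sorted input into the
    // express lane; returns the head of the express lane.
    static Node build(int[] sorted) {
        int step = Math.max(1, (int) Math.ceil(Math.sqrt(sorted.length)));
        Node head = null, tail = null, expHead = null, expTail = null;
        for (int i = 0; i < sorted.length; i++) {
            Node n = new Node(sorted[i]);
            if (head == null) head = n; else tail.next = n;
            tail = n;
            if (i % step == 0) {
                Node e = new Node(sorted[i]);
                e.down = n;
                if (expHead == null) expHead = e; else expTail.next = e;
                expTail = e;
            }
        }
        return expHead;
    }

    // Ride the express lane without overshooting, then scan locally:
    // both phases visit O(sqrt(n)) nodes.
    static boolean contains(Node express, int target) {
        if (express == null) return false;
        Node n = express;
        while (n.next != null && n.next.val <= target) n = n.next;
        for (Node cur = n.down; cur != null; cur = cur.next) {
            if (cur.val == target) return true;
            if (cur.val > target) return false;
        }
        return false;
    }
}
```

A real skip list adds more levels (giving the O(lg n) bound mentioned above) and supports inserts by randomized promotion, but the two-level version already shows where the √n comes from.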
- zahidbuet106 June 03, 2014Need a little but important information: you got to make two heaps balanced (i.e. differ by at most 1 element).
- zahidbuet106 June 02, 2014
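The comment above presumably refers to the classic two-heap running-median technique; a sketch under that assumption (the question text is not shown here):

```java
import java.util.Collections;
import java.util.PriorityQueue;

// Balanced two-heap technique: a max-heap holds the smaller half of the
// numbers, a min-heap the larger half, and their sizes are kept within 1 of
// each other, so the median is always available at the heap tops.
class RunningMedian {
    private final PriorityQueue<Integer> lower = new PriorityQueue<>(Collections.reverseOrder()); // max-heap
    private final PriorityQueue<Integer> upper = new PriorityQueue<>(); // min-heap

    void add(int x) {
        if (lower.isEmpty() || x <= lower.peek()) lower.offer(x); else upper.offer(x);
        // rebalance so the sizes differ by at most 1
        if (lower.size() > upper.size() + 1) upper.offer(lower.poll());
        else if (upper.size() > lower.size() + 1) lower.offer(upper.poll());
    }

    double median() {
        if (lower.size() == upper.size()) return (lower.peek() + upper.peek()) / 2.0;
        return lower.size() > upper.size() ? lower.peek() : upper.peek();
    }
}
```

Without the rebalancing step one heap can grow arbitrarily larger than the other and the tops no longer bracket the median, which is exactly the pitfall the comment warns about.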
Use a priority queue with frequency as the priority. Given n, each time extract the top n elements from the queue and put them into the scheduler queue (the output). Then decrease the frequency of each extracted element and put it back into the priority queue if its new frequency is greater than zero. If the queue is left with fewer than n elements, put (n - queue_size) idle jobs in the output. Overall complexity is O(N lg m), where N = length of the input string and m = total number of unique tasks.
- zahidbuet106 February 05, 2016
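A sketch of this frequency-priority-queue scheduler (the input/output shapes and the "idle" label are assumptions, since the question text is not shown here):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.PriorityQueue;

class TaskScheduler {
    // Each round, pull up to n distinct tasks (highest remaining frequency
    // first), then re-insert the ones that still have work left; pad a round
    // with "idle" only when tasks run out mid-round but more rounds remain.
    static List<String> schedule(String tasks, int n) {
        Map<Character, Integer> freq = new HashMap<>();
        for (char c : tasks.toCharArray()) freq.merge(c, 1, Integer::sum);
        // max-heap ordered by remaining frequency
        PriorityQueue<Map.Entry<Character, Integer>> pq =
                new PriorityQueue<>((a, b) -> b.getValue() - a.getValue());
        pq.addAll(freq.entrySet());
        List<String> out = new ArrayList<>();
        while (!pq.isEmpty()) {
            List<Map.Entry<Character, Integer>> batch = new ArrayList<>();
            for (int i = 0; i < n; i++) {
                if (!pq.isEmpty()) {
                    Map.Entry<Character, Integer> e = pq.poll();
                    out.add(String.valueOf(e.getKey()));
                    e.setValue(e.getValue() - 1);
                    if (e.getValue() > 0) batch.add(e);
                } else if (!batch.isEmpty()) {
                    out.add("idle"); // pad the round; more rounds will follow
                }
            }
            pq.addAll(batch);
        }
        return out;
    }
}
```

Each task is extracted and re-inserted once per unit of its frequency, giving the O(N lg m) bound stated above.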