algooz
A circus is designing an act consisting of a tower of people standing atop one another's shoulders. For practical and aesthetic reasons, each person must be both shorter and lighter than the person below her. Given the heights and weights of each person in the circus, what is the largest possible number of people in such a tower?
Input(ht wt) : (65, 100) (70, 150) (56, 90) (75, 190) (60, 95) (68, 110)
Output: The longest tower is length 6 and includes from top to bottom: (56,90) (60,95) (65,100) (68,110) (70,150) (75,190)
Microsoft Software Engineer / Developer Algorithm
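The standard approach to the tower question, sketched here under assumptions: sort the people by height, then run an O(n^2) longest strictly-increasing-subsequence DP on the (height, weight) pairs. The `Person` type and function names are made up for illustration, and the sketch assumes n <= 64.

```c
#include <stdlib.h>

/* Made-up types/names for the sketch; assumes n <= 64. */
typedef struct { int ht, wt; } Person;

static int byHeight(const void *a, const void *b)
{
    return ((const Person *)a)->ht - ((const Person *)b)->ht;
}

int longestTower(Person p[], int n)
{
    int dp[64], best = 0;  /* dp[i] = tallest tower with person i on the bottom */
    qsort(p, n, sizeof(Person), byHeight);
    for (int i = 0; i < n; i++) {
        dp[i] = 1;
        for (int j = 0; j < i; j++)
            if (p[j].ht < p[i].ht && p[j].wt < p[i].wt && dp[j] + 1 > dp[i])
                dp[i] = dp[j] + 1;   /* person i can stand under the chain ending at j */
        if (dp[i] > best)
            best = dp[i];
    }
    return best;
}
```

On the sample input above, all six people chain, so the answer is 6.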
If "the tree is a generalized tree with as many child nodes as possible",
then the problem is to check whether 2 undirected acyclic graphs are symmetric or not.
Maintain 2 stacks:
the first one for keeping the number of children of each node during a level-order traversal,
the second one for keeping the actual nodes at each level.
Then compare both graphs level by level by popping elements from stack 1 and stack 2.
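The two-stack level-order comparison can also be done recursively; a minimal sketch for the binary-tree case (the n-ary case generalizes by comparing the children lists in opposite orders). The `TNode` type is made up for illustration:

```c
#include <stddef.h>

/* Made-up node type for the sketch. */
typedef struct TNode {
    int data;
    struct TNode *left, *right;
} TNode;

/* Two trees are mirror images iff the roots match and the
   left/right subtrees are cross-wise mirrors of each other. */
int isMirror(const TNode *a, const TNode *b)
{
    if (a == NULL && b == NULL) return 1;
    if (a == NULL || b == NULL) return 0;
    return a->data == b->data
        && isMirror(a->left, b->right)
        && isMirror(a->right, b->left);
}
```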
1. As you read words from the file, maintain each word's count in a hash table.
2. Keep a min-heap of size k (= 10), and whenever you update a word in the hash table and increase its count ci, compare ci with the top element of the heap, kt.
3. If ci > kt, then replace kt with ci and heapify.
4. At the end, when you have traversed the whole document, you will have the 10 most frequent elements in the heap.
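A hedged sketch of the heap side of steps 2-3 (the hash-table bookkeeping is omitted; the `counts` array stands in for per-word totals, and K is fixed at 3 for brevity):

```c
#define K 3

/* Restore the min-heap property downward from index i. */
static void siftDown(int heap[], int n, int i)
{
    for (;;) {
        int small = i, l = 2*i + 1, r = 2*i + 2;
        if (l < n && heap[l] < heap[small]) small = l;
        if (r < n && heap[r] < heap[small]) small = r;
        if (small == i) break;
        int t = heap[i]; heap[i] = heap[small]; heap[small] = t;
        i = small;
    }
}

/* Keep a min-heap of the K largest counts seen so far.
   Returns the number of entries placed in heap[] (at most K). */
int topK(const int counts[], int n, int heap[])
{
    int size = 0;
    for (int i = 0; i < n; i++) {
        if (size < K) {                 /* heap not full yet: sift up */
            heap[size] = counts[i];
            for (int j = size++; j > 0 && heap[(j-1)/2] > heap[j]; j = (j-1)/2) {
                int t = heap[j]; heap[j] = heap[(j-1)/2]; heap[(j-1)/2] = t;
            }
        } else if (counts[i] > heap[0]) {  /* beats current k-th largest */
            heap[0] = counts[i];
            siftDown(heap, size, 0);
        }
    }
    return size;
}
```

After the pass, the heap holds the K largest counts, with the smallest of them at the root.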
#include<stdio.h>
#include<stdlib.h>
#include<time.h>
typedef struct node
{
int data;
struct node *left;
struct node *right;
}Node;
typedef struct queue
{
Node * cell;
struct queue *next;
}Queue;
void Enqueue(Queue **head, Node *temp)
{
Queue *current = *head;
Queue *newNode = (Queue *)malloc(sizeof(Queue));
newNode->cell = temp;
newNode->next = NULL;
if(NULL == *head)
{
*head = newNode;
}
else
{
while(NULL != current->next)
current = current->next;
current->next = newNode;
}
}
Queue * Dequeue(Queue **head)
{
Queue *temp = *head;
if(NULL == *head)
return NULL;
else
{
*head = (*head)->next;
}
return temp;
}
Node * newNode(int val);
Node * insertNode(Node *, int);
Node * buildBST(Node *);
void inorder(Node *root);
int lookUp(Node *root, int val);
void printArray(int path[], int len);
void pathSum(Node *root, int path[], int pathlen, int sum);
void levelOrder(Node *root);
void hasSum(Node *root);
#define ORIG_SUM 8
int main()
{
Node *root = NULL;
//int path[100];
root = buildBST(root);
printf("\nInorder traversal is : \n");
inorder(root);
printf("\n\n");
//pathSum(root, path, 0, ORIG_SUM);
levelOrder(root);
return 0;
}
void levelOrder(Node *root)
{
Queue *head = NULL;
Enqueue(&head,root);
while(NULL != head)
{
Queue *temp = Dequeue(&head);
hasSum(temp->cell);
if(temp->cell->left != NULL)
{
Enqueue(&head, temp->cell->left);
}
if(temp->cell->right != NULL)
{
Enqueue(&head, temp->cell->right);
}
free(temp); /* Dequeue hands ownership of the queue cell to the caller */
}
}
void hasSum(Node *root)
{
int path[100];
pathSum(root, path, 0, ORIG_SUM);
}
void pathSum(Node *root, int path[], int pathlen, int sum)
{
if(NULL == root)
return;
//append this node to the current path
path[pathlen] = root->data;
++pathlen;
sum -= root->data;
if(0 == sum)
{
printArray(path, pathlen);
}
else
{
pathSum(root->left, path, pathlen, sum);
pathSum(root->right, path, pathlen, sum);
}
}
void printArray(int path[], int len)
{
int i=0;
for(i=0;i<len;i++)
{
printf("%d\t", path[i]);
}
printf("\n\n");
}
Node * newNode(int val)
{
Node *temp = (Node *)malloc(sizeof(Node));
if(NULL != temp)
{
temp->data = val;
temp->left = NULL;
temp->right = NULL;
}
else
exit(1);
return temp;
}
Node * insertNode(Node *root, int val)
{
if(NULL == root)
return newNode(val);
else
{
if(val < root->data)
root->left = insertNode(root->left, val);
else
root->right = insertNode(root->right, val);
return root;
}
}
Node * buildBST(Node * root)
{
int n;
int val;
int i;
int A[] = {5,3,7,2,4,6,8};
printf("\nHow many nodes in the tree ? ");
scanf("%d", &n);
if(n > 7) n = 7; /* A[] only holds 7 values */
srand(time(NULL));
for(i=0;i<n;i++)
{
/*val = rand()%98 + 1;
if(!lookUp(root, val))
{
root = insertNode(root, val);
}
else
{
printf("\nDuplicate value\n");
} */
root = insertNode(root, A[i]);
}
printf("\nInsertion Done !\n");
return root;
}
int lookUp(Node *root, int val)
{
if(NULL == root)
{
return 0;
}
else
{
if(val == root->data)
return 1;
else if(val < root->data)
return lookUp(root->left, val);
else
return lookUp(root->right, val);
}
}
void inorder(Node *root)
{
if(NULL == root)
return;
else
{
inorder(root->left);
printf("%d\t", root->data);
inorder(root->right);
}
}

algooz
April 27, 2008: Get the root of T2, say r2.
1. For each occurrence of r2 in t1:
2. do inorder (i1) and preorder (p1) traversals of t1, considering that occurrence of r2 in t1 as the root.
3. Check if i1 and p1 match the inorder (i2) and preorder (p2) traversals of t2.
4. If they do, then t2 is a subtree of t1; otherwise NOT.
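An alternative to matching traversal sequences is a direct recursive check, sketched here under assumptions (the `SNode` type is made up for illustration, mirroring the Node struct used elsewhere on this page):

```c
#include <stddef.h>

/* Made-up node type for the sketch. */
typedef struct SNode {
    int data;
    struct SNode *left, *right;
} SNode;

/* True iff trees a and b are structurally identical with equal values. */
static int sameTree(const SNode *a, const SNode *b)
{
    if (a == NULL && b == NULL) return 1;
    if (a == NULL || b == NULL) return 0;
    return a->data == b->data
        && sameTree(a->left, b->left)
        && sameTree(a->right, b->right);
}

/* t2 is a subtree of t1 if some node of t1 roots a tree identical to t2. */
int isSubtree(const SNode *t1, const SNode *t2)
{
    if (t2 == NULL) return 1;          /* empty tree is always a subtree */
    if (t1 == NULL) return 0;
    if (sameTree(t1, t2)) return 1;
    return isSubtree(t1->left, t2) || isSubtree(t1->right, t2);
}
```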
Traditionally, character segmentation into words is done by a "greedy algorithm." The greedy algorithm involves the following: (1) Start from the beginning of the given sentence being processed and exhaust all the possible words that match the initial part of the character string in the sentence.
(2) Pick the longest word (i.e., the word that has the largest number of characters), put a space at the end of the matched substring in the sentence, treat the remaining character string as a new sentence, and repeat step (1) until all the characters in the sentence are processed.
The greedy algorithm does not always make the best choice from a global perspective. In fact, it may choose combinations that are not only not optimal but also not syntactically correct. As stated in T. Cormen et al., "Introduction to Algorithms," p. 329: "A greedy algorithm always makes the choice that looks best at the moment. That is, it makes a locally optimal choice in the hope that this choice will lead to a globally optimal solution."
you can read more at:
wipo.int/pctdb/en/wo.jsp?IA=WO2001048738&wo=2001048738&DISPLAY=DESC
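A minimal sketch of the greedy longest-match segmentation described above, using a made-up toy dictionary (a real system would use a trie and a full lexicon):

```c
#include <string.h>

/* Toy dictionary, made up for the sketch. */
static const char *dict[] = {"the", "there", "re", "port", "report"};
static const int ndict = 5;

/* Writes the segmented sentence into out, words separated by spaces.
   Returns 0 if some prefix matches no dictionary word. */
int greedySegment(const char *s, char *out)
{
    out[0] = '\0';
    while (*s) {
        int bestLen = 0;
        for (int i = 0; i < ndict; i++) {
            int len = (int)strlen(dict[i]);
            if (len > bestLen && strncmp(s, dict[i], len) == 0)
                bestLen = len;       /* keep the longest matching word */
        }
        if (bestLen == 0) return 0;  /* no word matches: greedy fails */
        if (out[0]) strcat(out, " ");
        strncat(out, s, bestLen);
        s += bestLen;
    }
    return 1;
}
```

On "thereport" this greedily picks "there" then "port", nicely illustrating the text's point: the locally best choice misses the better segmentation "the report".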
Khoa you are doing a great job!
Well, no comments so far related to the shell sort code,
so I will go ahead with that.
void shellSort(int A[], int n)
{
int gap, i, j, temp;
for(gap=n/2; gap>0;gap /= 2)
{
for(i=gap; i<n;i++)
{
for(j=i-gap; j>=0 && A[j]>A[j+gap]; j -= gap)
{
temp = A[j]; A[j]=A[j+gap]; A[j+gap] = temp;
}
}
}
}
The outermost loop controls the gap between compared elements, shrinking it from n/2 by a factor of 2 each pass until it reaches zero.
The middle loop steps along the elements.
The innermost loop compares each pair of elements separated by gap and swaps any pair that is out of order.
The basic structure of the reader writer problem is like
Reader()
Wait until no writers
Access database
Check out; wake up a waiting writer
Writer()
Wait until no active reader or writer
Access the database
Check out; wake a waiting reader or writer
The detailed translation would be something like this
Reader()
{
//first check self into the system
lock.acquire();
while((AW+WW) > 0) // is it safe to read ?
{
WR++; // No: writers exist, so wait
OkToRead.wait(&lock); // sleep on condition variable
WR--; // No longer waiting
}
AR++;
lock.release();
// perform actual read-only access
AccessDataBase(readonly);
//Now, check out of the system
lock.acquire();
AR--; // no longer active
if(AR == 0 && WW > 0) // no other active readers
OkToWrite.signal(); // wake up one writer
lock.release();
}
//Logic for writer
Writer()
{
// first check self into the system
lock.acquire();
while((AW+AR)>0){ // Is it safe to write ?
WW++; // No: active users exist
OkToWrite.wait(&lock); //sleep on cond variable
WW--; // no longer waiting
}
AW++; // Now we are active
lock.release();
// perform actual write access
AccessDataBase(Write);
// Now check out of the system
lock.acquire();
AW--; // no longer active
if(WW > 0){ // give priority to writers
OkToWrite.signal(); // wake up one writer
}else if(WR > 0){ // otherwise wake readers
OkToRead.broadcast(); // wake all readers.
}
lock.release();
}
This problem is tough to get right, and a lot depends on which operations are atomic. I think in the above solution readers may also starve,
at this Reader() entry code:
while((AW+WW) > 0) // is it safe to read ?
What about this approach?
I am assuming that sorting needs to be done on "id" and that it is unique for each individual.
1. Let's split the big file with a million entries into n small files (f1, f2, ..., fn).
2. Maintain the n small files as n min-heaps.
3. Now in each heap we know the smallest id in that heap, so we have n ids.
4. Take the root of each of the n heaps in memory and find the min among them; let's say it is xj.
5. Now we know xj is the smallest id present in the whole file.
6. Swap xj from heap fj with the last element of the f1 heap, reduce the size of f1 by one, and heapify f1 and fj (the heap xj was originally part of).
Repeat steps 4, 5, 6 till we run out of heaps.
After that we will have n arrays (f1, f2, ..., fn) which are sorted.
Just append these arrays and we have the file sorted with respect to id.
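The core of steps 4-6 is merging n sorted runs by repeatedly taking the smallest current head. A hedged sketch, with in-memory arrays standing in for the n sorted files and a linear scan standing in for the min-heap of heads (names made up):

```c
#define NRUNS 3

/* Merge NRUNS sorted runs into out[]; returns the total element count.
   A real external sort would keep the run heads in a min-heap and
   stream from disk instead of scanning in memory. */
int kWayMerge(const int *runs[], const int lens[], int out[])
{
    int pos[NRUNS] = {0}, total = 0;
    for (;;) {
        int best = -1;
        for (int i = 0; i < NRUNS; i++)          /* find smallest head */
            if (pos[i] < lens[i] &&
                (best < 0 || runs[i][pos[i]] < runs[best][pos[best]]))
                best = i;
        if (best < 0) break;                     /* all runs exhausted */
        out[total++] = runs[best][pos[best]++];
    }
    return total;
}
```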
Oops, there was a bug in the code:
void foo(const int A[], const int len)
{
int B[len];
int prod = 1;
int i=0;
for(i=0;i<len;i++)
{
B[i] = prod;
prod *= A[i];
}
prod = 1;
for(i=len-1;i>=0;i--)
{
B[i] *= prod;
prod *= A[i];
}
for(i=0;i<len;i++)
{
printf("%d\t", B[i]);
}
}

algooz
April 24, 2008: We can calculate the array B in the following manner.
1. First iterate through array A (from i=1 to n) and for each value of A[i] store the product of A[1]..A[i-1] in B[i].
2. Next iterate through array A (from i=n to 1) and for each value of A[i] store the product of A[i+1]..A[n] in B[i].
3. After steps 1 and 2 we will have the required values in array B.
code for the same is as follows:
void foo(const int A[], const int len)
{
int B[len];
int prod = 1;
int i=0;
for(i=0;i<len;i++)
{
B[i] = prod;
prod *= A[i];
}
prod = 1;
for(i=len-1;i>=0;i--)
{
B[i] = prod;
prod *= A[i];
}
for(i=0;i<len;i++)
{
printf("%d\t", B[i]);
}
}

algooz
April 24, 2008: 1. Generate random bits by calling rand5 until we get something other than 5.
2. Output the value mod 2.
3. For generating rand7, do steps 1 and 2 three times, so that you get three random bits, and treat them as a three-bit binary number (range 0..7).
4. Repeat step 3 until you don't get a 0, then output the value.
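The steps above can be sketched as follows, assuming rand5() returns a uniform value in 1..5 (a stand-in built on rand() is included so the sketch runs):

```c
#include <stdlib.h>

static int rand5(void) { return rand() % 5 + 1; }  /* stand-in for the given rand5 */

/* Step 1-2: reject 5, then the parity of 1..4 is a fair bit. */
static int randBit(void)
{
    int v;
    do { v = rand5(); } while (v == 5);  /* keep only 1..4 */
    return v % 2;                        /* 1,3 -> 1; 2,4 -> 0 */
}

/* Steps 3-4: three fair bits give 0..7 uniformly; reject 0 to get 1..7. */
int rand7(void)
{
    int v;
    do {
        v = (randBit() << 2) | (randBit() << 1) | randBit();
    } while (v == 0);
    return v;
}
```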
Assuming the intervals are sorted.
Let's say the element in question is x.
Now divide the n intervals in half, and consider the middle (n/2-th) interval to be [a,b].
Now x belongs to the interval [a,b] if (a-x)(b-x) <= 0.
if(x > b)
then do binary search on the intervals to the right of [a,b]
if(x < a)
then do binary search on the intervals to the left of [a,b]
The complexity here would be O(log n), as is the case with binary search.
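A minimal sketch of this interval binary search, assuming the intervals are sorted and non-overlapping (the `Interval` type and function name are made up):

```c
typedef struct { int lo, hi; } Interval;

/* Returns the index of the interval containing x, or -1 if none.
   (lo-x)*(hi-x) <= 0 is the containment test from the text. */
int findInterval(const Interval iv[], int n, int x)
{
    int left = 0, right = n - 1;
    while (left <= right) {
        int mid = left + (right - left) / 2;
        if ((iv[mid].lo - x) * (iv[mid].hi - x) <= 0)
            return mid;                 /* x lies inside iv[mid] */
        else if (x > iv[mid].hi)
            left = mid + 1;             /* search intervals to the right */
        else
            right = mid - 1;            /* search intervals to the left */
    }
    return -1;
}
```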
>> make any assumptions you wanted..
Khoa, what if I assumed that the numbers are already sorted? Then I just need one pass through the 1 billion numbers to confirm that they are already sorted.
Anyway, the above assumption may not impress anybody, so I believe that if all 1 billion numbers could fit in memory, I would go for an in-place O(n log n) algorithm; my personal choice would be quicksort (the randomized version), and heap sort is also not a bad idea.
Is this problem vague just to test where the candidate goes with assumptions?
Statistical info about the data could really help in choosing the algorithm.
I feel a hash table and a min-heap will do a better job.
As the words come in, maintain the hash table and a min-heap of size k.
Suppose a new word comes: add it to the hash table, and if it is already present, increase its count to, say, x.
Now suppose the top element in the heap has a count y.
If x > y,
then replace y with x and rebuild the heap;
else do nothing.
Here the time complexity after reading n words = O(n log k).
>> 3) Multiply all, then divide by each number.
algooz June 17, 2008: What if one number is zero?