restlesstoday
How about this?
void remove_sorted_duplicates (node *&n)
{
    node *ptr = n;
    node *target;
    node *prev = NULL;
    node *end;
    int val;
    while (ptr)
    {
        end = ptr->next;
        val = ptr->data;
        while (end && (end->data == val))
        {
            delete ptr; // may be reached multiple times, but delete NULL is ok
            ptr = NULL;
            target = end;
            end = end->next;
            delete target;
        }
        if (ptr) // unique node
        {
            if (prev)
                prev->next = ptr;
            else
                n = ptr;
            prev = ptr;
            ptr = ptr->next;
        }
        else // duplicate run was deleted; relink past it
        {
            if (prev)
                prev->next = end; // otherwise prev->next dangles when the list ends in duplicates
            else
                n = end;
            ptr = end;
        }
    }
}
The basic algorithm is to delete all copies of a duplicated value in the inner while loop. After that it is just a matter of fixing the previous/head/next pointers.
- restlesstoday August 23, 2009
@mg
Nice code.
But it fails with a segfault when the lengths of the lists are unequal (size of orig > rev), because of the pass-by-reference argument -- node*& rev.
You'll need to add guard statements to make sure rev is non-null before dereferencing it.
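The guard idea in sketch form -- a comparison loop that never dereferences a null pointer and treats unequal lengths as a mismatch. The node layout and the function name here are my own, since the code being discussed isn't shown in this thread:

```cpp
#include <cstddef>

struct node { int data; node *next; };

// Walks both lists in lockstep; stops as soon as either pointer is NULL,
// so neither list is ever dereferenced past its end.
bool lists_equal(node *orig, node *rev)
{
    while (orig && rev)
    {
        if (orig->data != rev->data)
            return false;
        orig = orig->next;
        rev = rev->next;
    }
    return orig == NULL && rev == NULL; // unequal lengths fail here
}
```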
@Ashutosh -- I'm not sure how your code works; check_rev() takes two arguments.
I assume a BST here means one with minimal height, e.g. an AVL tree.
I think gumber's approach is correct. Directly iterate through the tree and call the insert() algorithm for each node. insert() is O(log n) and the iteration is O(n), so the total complexity is O(n log n).
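As an illustration of that O(n log n) bound, here is a sketch that uses std::set (a self-balancing red-black tree) in place of the insert() routine from the post, which isn't shown here -- the function name is my own:

```cpp
#include <set>
#include <cstddef>

// Sketch: insert n values into a self-balancing tree. Each insert() is
// O(log n) and the loop runs n times, giving the O(n log n) total from
// the post. Returns the resulting tree size (duplicates collapse).
std::size_t build_balanced(const int *values, std::size_t n)
{
    std::set<int> tree;
    for (std::size_t i = 0; i < n; ++i)
        tree.insert(values[i]);
    return tree.size();
}
```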
I'm not sure if I understand the problem fully, but is this sufficient?
int unevenRand (unsigned int a, unsigned int b)
{
    unsigned int number;
    rand_s(&number);      // MSVC-specific secure RNG
    number = number % b;
    if (number < a)       // true with probability roughly a/b
        return 1;
    else
        return 0;
}
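Note that rand_s() is Microsoft-specific, and the modulo introduces a small bias when b does not evenly divide the RNG's range. A portable sketch with the same intent (return 1 with probability a/b) using C++11 <random> -- the function name is my own:

```cpp
#include <random>

// Returns 1 with probability a/b, 0 otherwise. Assumes 0 <= a <= b and b > 0.
// uniform_int_distribution avoids the modulo bias of "number % b".
int unevenRandPortable(unsigned int a, unsigned int b)
{
    static std::mt19937 gen(std::random_device{}());
    std::uniform_int_distribution<unsigned int> dist(0, b - 1);
    return dist(gen) < a ? 1 : 0;
}
```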
First code post doesn't account for this case:
    10
   /
  5
   \
    8
Second post is good, but it requires a whole-tree traversal with O(n) memory.
How about this:
Node * secondLargest(Node* root)
{
    if (!root)
        return NULL;
    else if (!root->right)          // root is the largest
        return maxValue(root->left);
    else if (!root->right->right)   // root->right is the largest
        return root->right->left ? maxValue(root->right->left) : root;
    else
        return secondLargest(root->right);
}
where maxValue is a function that returns the rightmost node of a subtree.
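maxValue itself isn't shown in the post; a minimal sketch, assuming the usual Node layout with left/right pointers:

```cpp
#include <cstddef>

struct Node { int data; Node *left, *right; };

// Rightmost (largest) node of a subtree; NULL for an empty subtree.
Node *maxValue(Node *root)
{
    if (!root)
        return NULL;
    while (root->right)
        root = root->right;
    return root;
}
```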
- restlesstoday August 02, 2009
Or how about this recursive version?
The basic algorithm is to delete "both" duplicates upon return. When a pair of duplicates is deleted, a static flag remembering the last deleted value is set, so that runs of "multiple" duplicates are also handled.
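A sketch of what that recursive version might look like. The post's actual code isn't shown, and I've replaced the static flag with reference parameters (my own choice, so each top-level call starts with clean state); the delete-on-return logic is the same idea:

```cpp
#include <cstddef>

struct node { int data; node *next; };

// The cleaned tail is relinked on return. When a duplicate pair is found,
// "both" nodes are deleted and the value is remembered, so that third and
// later copies (multiple duplicates) are caught on the way back up.
node *remove_rec(node *n, int &lastDeleted, bool &haveDeleted)
{
    if (!n)
        return NULL;
    n->next = remove_rec(n->next, lastDeleted, haveDeleted);
    if (n->next && n->next->data == n->data)
    {
        node *dup = n->next;
        node *rest = dup->next;
        lastDeleted = n->data;   // remember the deleted value
        haveDeleted = true;
        delete dup;              // delete "both" duplicates upon return
        delete n;
        return rest;
    }
    if (haveDeleted && n->data == lastDeleted)
    {
        node *rest = n->next;    // a further copy of an already-deleted value
        delete n;
        return rest;
    }
    return n;
}

void remove_sorted_duplicates_recursive(node *&head)
{
    int lastDeleted = 0;
    bool haveDeleted = false;
    head = remove_rec(head, lastDeleted, haveDeleted);
}
```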
- restlesstoday August 23, 2009