peter tang
Implement a linked-list split function that takes a predicate. Use the std::function feature from C++11. See code on cpluspluslearning-petert.blogspot.co.uk/2015/11/linkedlist-split-linked-list-into.html
Test
#include <cmath>
bool IsFibonaciNumber(int val)
{
int temp = 5 * val*val;
int root = static_cast<int>(sqrt(temp - 4));
if (root*root == (temp - 4)) {
return true;
}
root = static_cast<int>(sqrt(temp + 4));
if (root*root == (temp + 4)) {
return true;
}
return false;
}
void TestSplitLLAgainstFibonaci()
{
{
LinkedListElement<int>* head = nullptr;
LinkedListElement<int>* fibonaciNodes = nullptr;
LinkedListElement<int>* nonFibonaciNodes = nullptr;
LL_SpiltLinkedList<int>(&head, &fibonaciNodes, &nonFibonaciNodes, IsFibonaciNumber);
assert(fibonaciNodes == nullptr);
assert(nonFibonaciNodes == nullptr);
}
{
std::vector<int> testArr = { 0, 1, 1, 2, 3, 5, 8 };
LinkedListElement<int>* head = nullptr;
LL_PushFrontFromStdVector(&head, testArr);
LinkedListElement<int>* fibonaciNodes = nullptr;
LinkedListElement<int>* nonFibonaciNodes = nullptr;
LL_SpiltLinkedList<int>(&head, &fibonaciNodes, &nonFibonaciNodes, IsFibonaciNumber);
assert(fibonaciNodes != nullptr);
assert(nonFibonaciNodes == nullptr);
assert(head == nullptr);
std::vector<int> result;
LL_CopyDataToStdVector(fibonaciNodes, result);
assert(testArr == result);
LL_Delete(&fibonaciNodes);
}
{
std::vector<int> testArr = { 4, 6, 7, 9, 10 };
LinkedListElement<int>* head = nullptr;
LL_PushFrontFromStdVector(&head, testArr);
LinkedListElement<int>* fibonaciNodes = nullptr;
LinkedListElement<int>* nonFibonaciNodes = nullptr;
LL_SpiltLinkedList<int>(&head, &fibonaciNodes, &nonFibonaciNodes, IsFibonaciNumber);
assert(fibonaciNodes == nullptr);
assert(nonFibonaciNodes != nullptr);
assert(head == nullptr);
std::vector<int> result;
LL_CopyDataToStdVector(nonFibonaciNodes, result);
assert(testArr == result);
LL_Delete(&nonFibonaciNodes);
}
{
std::vector<int> testArr = { 0, 4, 1, 6, 1, 7, 2, 9, 3, 10, 5, 8 };
std::vector<int> fibonaciArr = { 0, 1, 1, 2, 3, 5, 8 };
std::vector<int> nonFibonaciArr = { 4, 6, 7, 9, 10 };
LinkedListElement<int>* head = nullptr;
LL_PushFrontFromStdVector(&head, testArr);
LinkedListElement<int>* fibonaciNodes = nullptr;
LinkedListElement<int>* nonFibonaciNodes = nullptr;
LL_SpiltLinkedList<int>(&head, &fibonaciNodes, &nonFibonaciNodes, IsFibonaciNumber);
assert(fibonaciNodes != nullptr);
assert(nonFibonaciNodes != nullptr);
assert(head == nullptr);
std::vector<int> fibonaciResult;
LL_CopyDataToStdVector(fibonaciNodes, fibonaciResult);
assert(fibonaciArr == fibonaciResult);
std::vector<int> nonFibonaciResult;
LL_CopyDataToStdVector(nonFibonaciNodes, nonFibonaciResult);
assert(nonFibonaciArr == nonFibonaciResult);
LL_Delete(&fibonaciNodes);
LL_Delete(&nonFibonaciNodes);
}
}
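For reference, the split under test can be sketched with std::function as the predicate type. This is a minimal sketch assuming a simple singly linked node type; the blog post's LinkedListElement and LL_SpiltLinkedList may differ in detail, and the name SplitLinkedList here is illustrative. Nodes satisfying the predicate are appended, in order, to *matched; the rest go to *unmatched; *head ends up null.

```cpp
#include <functional>

// Simplified node type standing in for the blog's LinkedListElement.
template <typename T>
struct LinkedListElement {
    T data;
    LinkedListElement* next = nullptr;
};

// Split *head into two lists by predicate, preserving relative order.
template <typename T>
void SplitLinkedList(LinkedListElement<T>** head,
                     LinkedListElement<T>** matched,
                     LinkedListElement<T>** unmatched,
                     const std::function<bool(const T&)>& pred)
{
    LinkedListElement<T>* cur = *head;
    *head = nullptr;
    LinkedListElement<T>** matchedTail = matched;     // tail of the matched list
    LinkedListElement<T>** unmatchedTail = unmatched; // tail of the unmatched list
    while (cur) {
        LinkedListElement<T>* next = cur->next;
        cur->next = nullptr;
        // Append the node to whichever list the predicate selects.
        LinkedListElement<T>**& tail = pred(cur->data) ? matchedTail : unmatchedTail;
        *tail = cur;
        tail = &cur->next;
        cur = next;
    }
}
```

Taking the heads as pointer-to-pointer lets the function rewrite where the caller's pointers point, matching the test code's calling convention.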
Guaranteed O(logN) solution. Compared to Mailman's solution: after spotting the value, one should still use binary search to find the first and last occurrences rather than scanning linearly forward and backward, because the linear scan degrades to O(N) in the worst case, for instance {5, 5, 5}.
For C++ implementation, refer to: cpluspluslearning-petert.blogspot.co.uk/2015/10/bs-find-first-and-last-of-given-number.html
Test
void TestFindTheFistAndLastOfGivenNumberInSortedArray()
{
std::vector<int> sortedArray;
size_t first, last;
{
assert(FindTheFirstAndLastOfGivenNumberInSortedArray(sortedArray,
1,
first,
last) == false);
}
{
sortedArray = { 5 };
assert(FindTheFirstAndLastOfGivenNumberInSortedArray(sortedArray,
1,
first,
last) == false);
assert(FindTheFirstAndLastOfGivenNumberInSortedArray(sortedArray,
5,
first,
last) == true);
assert(first == 0 && last == 0);
}
{
sortedArray = { 5, 6 };
assert(FindTheFirstAndLastOfGivenNumberInSortedArray(sortedArray,
5,
first,
last) == true);
assert(first == 0 && last == 0);
assert(FindTheFirstAndLastOfGivenNumberInSortedArray(sortedArray,
6,
first,
last) == true);
assert(first == 1 && last == 1);
}
{
sortedArray = { 5, 5, 5, 5, 5 };
assert(FindTheFirstAndLastOfGivenNumberInSortedArray(sortedArray,
5,
first,
last) == true);
assert(first == 0 && last == 4);
assert(FindTheFirstAndLastOfGivenNumberInSortedArray(sortedArray,
6,
first,
last) == false);
}
{
sortedArray = { 1, 2, 3, 5, 5, 5, 5, 5, 6, 7, 8 };
assert(FindTheFirstAndLastOfGivenNumberInSortedArray(sortedArray,
5,
first,
last) == true);
assert(first == 3 && last == 7);
}
}
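A sketch of the function exercised by the tests above; the real implementation is in the linked blog post. std::equal_range performs both binary searches (first and last occurrence) in O(log N), which avoids the linear-scan pitfall discussed above.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Returns false when value is absent; otherwise sets first/last to the
// indices of its first and last occurrence in the sorted array.
bool FindTheFirstAndLastOfGivenNumberInSortedArray(
    const std::vector<int>& sortedArray, int value,
    size_t& first, size_t& last)
{
    const auto range = std::equal_range(sortedArray.begin(), sortedArray.end(), value);
    if (range.first == range.second) {
        return false; // value not present
    }
    first = static_cast<size_t>(range.first - sortedArray.begin());
    last = static_cast<size_t>(range.second - sortedArray.begin()) - 1;
    return true;
}
```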
In the case of {5, 5, 5}, that algorithm results in an O(N) solution.
- peter tang October 28, 2015
This is the question of generating the power set. The sub-problem can be easily worked out. For the C++ code and sub-problem description, refer to: cpluspluslearning-petert.blogspot.co.uk/2014/04/dynamic-programming-generate-power-set.html
- peter tang October 28, 2015
Use the same idea as partitioning the array in quicksort. Take the mid value (1 in this case) as the pivot. Partition the array into values less than the pivot and values greater than the pivot. Therefore it is an O(n) solution (traversing the array only once) with O(1) space. Please refer to: cpluspluslearning-petert.blogspot.co.uk/2015/10/sort-array-with-three-distinct-numbers.html
Test
void TestSortArrayWithThreeDistinctNumbers()
{
std::vector<int> arr;
SortArrayWith3DistinctNumbers(arr, 0, 1, 2);
assert(arr.empty() == true);
arr = { 2, 2, 2, 0, 0, 0, 1, 1, 1 };
SortArrayWith3DistinctNumbers(arr, 0 , 1, 2);
assert(arr == std::vector<int>({ 0, 0, 0, 1, 1, 1, 2, 2, 2 }));
arr = { 0, 1, 1, 0, 1, 2, 1, 2, 0, 0, 0, 1 };
SortArrayWith3DistinctNumbers(arr, 0, 1, 2);
assert(arr == std::vector<int>({ 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 2, 2 }));
}
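The one-pass partition described above can be sketched as a Dutch-national-flag pass; the name and signature follow the test code, while the real implementation is in the linked blog post.

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// One-pass three-way partition: `low` values are swapped to the front,
// `high` values to the back, and `mid` values settle in the middle.
// O(n) time, O(1) extra space.
void SortArrayWith3DistinctNumbers(std::vector<int>& arr,
                                   int low, int mid, int high)
{
    (void)mid; // the middle value needs no explicit handling
    size_t next0 = 0;          // next slot for a `low` value
    size_t cur = 0;            // element under inspection
    size_t next2 = arr.size(); // one past the next slot for a `high` value
    while (cur < next2) {
        if (arr[cur] == low) {
            std::swap(arr[cur++], arr[next0++]);
        } else if (arr[cur] == high) {
            std::swap(arr[cur], arr[--next2]);
        } else {
            ++cur; // mid value stays in place
        }
    }
}
```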
The idea is quite simple: use a hash table to take all the elements from the first tree, then check whether the second tree has exactly the same elements as in the hash table. There is nothing special about the algorithm, but I am going to demonstrate how to use recursive functions (in a somewhat functional-programming sense) to implement the solution, because a tree structure is a very good vehicle for understanding functional programming, particularly the notion of sub-problems.
For the implementation, refer to cpluspluslearning-petert.blogspot.co.uk/2015/10/bts-detect-if-two-tree-have-same.html
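The hash-table idea can be sketched as follows. The TreeNode here is a simplified stand-in (the blog's node stores data differently), and duplicates are ignored, matching the tests below where a tree containing {1..8} twice still compares equal to one containing {1..8} once.

```cpp
#include <unordered_set>

// Simplified node type for the sketch.
template <typename T>
struct TreeNode {
    T data;
    TreeNode* left = nullptr;
    TreeNode* right = nullptr;
};

// Recursively collect every value of a tree into a hash set.
template <typename T>
void CollectValues(const TreeNode<T>* node, std::unordered_set<T>& values)
{
    if (!node) return;
    values.insert(node->data);
    CollectValues(node->left, values);
    CollectValues(node->right, values);
}

// Two trees "have the same elements" when their value sets are equal;
// two empty trees compare equal, as in the tests below.
template <typename T>
bool CheckTwoTreesWithSameValues(const TreeNode<T>* t1, const TreeNode<T>* t2)
{
    std::unordered_set<T> v1, v2;
    CollectValues(t1, v1);
    CollectValues(t2, v2);
    return v1 == v2;
}
```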
TEST
void TestCasesOfCheckTwoTreesWithSameElements()
{
const std::vector<int> testInput1 = { 1, 3, 5, 7, 2, 4, 6, 8 };
auto tree1 = ConstructTreeRecursive(testInput1);
assert(tree1 != nullptr);
{
TreeNode<int>* invalid1 = nullptr;
TreeNode<int>* invalid2 = nullptr;
assert(CheckTwoTreesWithSameValues(invalid1, invalid2) == true);
assert(CheckTwoTreesWithSameValues(tree1, invalid2) == false);
assert(CheckTwoTreesWithSameValues(invalid1, tree1) == false);
}
{
const std::vector<int> testInput2 = { 1, 3, 5, 7, 2, 4, 6, 1, 5, 2, 4 };
auto tree2 = ConstructTreeRecursive(testInput2);
assert(tree2 != nullptr);
assert(CheckTwoTreesWithSameValues(tree1, tree2) == false);
DeleteTree(&tree2);
}
{
const std::vector<int> testInput2 = { 1, 3, 5, 7, 2, 4, 6, 8, 1, 3, 5, 7, 2, 4, 6, 8 };
auto tree2 = ConstructTreeRecursive(testInput2);
assert(tree2 != nullptr);
assert(CheckTwoTreesWithSameValues(tree1, tree2) == true);
DeleteTree(&tree2);
}
{
const std::vector<int> testInput2 = { 1, 3, 5, 7, 2, 4, 6, 8, 1, 3, 5, 7, 2, 4, 6, 8, 9 };
auto tree2 = ConstructTreeRecursive(testInput2);
assert(tree2 != nullptr);
assert(CheckTwoTreesWithSameValues(tree1, tree2) == false);
DeleteTree(&tree2);
}
DeleteTree(&tree1);
}
This is a duplicate problem; refer to the details here: cpluspluslearning-petert.blogspot.co.uk/2014/10/data-structure-and-algorithm-find-least.html
Here is the pseudo-code
1. Combine the 3 arrays into one with data structure to indicate which array the value comes from
2. Sort this big array (of data structure) with the value from A, B and C as key
3. Assign minAbsSum, prevA, prevB and prevC as invalid (to track last A, B and C visited so far)
4. Go through each element in the big array (loop over the big array)
4.1 If curVal is from array A, pick curVal, prevB and prevC. Go to 4.4
4.2 If curVal is from array B, pick curVal, prevA and prevC. Go to 4.4
4.3 If curVal is from array C, pick curVal, prevA and prevB. Go to 4.4
4.4 Calculate the sum of the absolute values of difference
if prevB and prevC are valid in case of 4.1
if prevA and prevC are valid in case of 4.2
if prevA and prevB are valid in case of 4.3
4.5 Update the minAbsSum if absSum < minAbsSum,
4.6 Return if minAbsSum = 0, otherwise continue
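The pseudo-code above can be transcribed directly. The names here are illustrative; the blog post's ResultTuple and CalculateLeastAbsSum may differ in detail, and when several triples tie on the minimal sum, the first one found is kept.

```cpp
#include <algorithm>
#include <cstdlib>
#include <limits>
#include <utility>
#include <vector>

struct ResultTuple {
    int absSum;
    int val[3]; // chosen value from A, B and C
};

ResultTuple CalculateLeastAbsSum(const std::vector<int>& a,
                                 const std::vector<int>& b,
                                 const std::vector<int>& c)
{
    // Step 1: combine into one array, remembering the source (0=A, 1=B, 2=C).
    std::vector<std::pair<int, int>> merged;
    for (int v : a) merged.emplace_back(v, 0);
    for (int v : b) merged.emplace_back(v, 1);
    for (int v : c) merged.emplace_back(v, 2);
    // Step 2: sort by value.
    std::sort(merged.begin(), merged.end());
    // Step 3: track the last value seen from each array.
    bool seen[3] = { false, false, false };
    int prev[3] = { 0, 0, 0 };
    ResultTuple best{ std::numeric_limits<int>::max(), { 0, 0, 0 } };
    // Step 4: one pass; combine each element with the most recent
    // values from the other two arrays.
    for (const auto& p : merged) {
        prev[p.second] = p.first;
        seen[p.second] = true;
        if (seen[0] && seen[1] && seen[2]) { // Step 4.4: all three valid
            const int absSum = std::abs(prev[0] - prev[1]) +
                               std::abs(prev[1] - prev[2]) +
                               std::abs(prev[0] - prev[2]);
            if (absSum < best.absSum) {      // Step 4.5
                best = { absSum, { prev[0], prev[1], prev[2] } };
                if (absSum == 0) break;      // Step 4.6: cannot do better
            }
        }
    }
    return best;
}
```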
// Test
//********************************************************************************
void testCase1()
{
std::vector<int> aVec = { 9, 7, 5, 7, 4, 8, 12, 30, 50 };
std::vector<int> bVec = { 30, 5, 9, 20, 35, 70, 50, 12 };
std::vector<int> cVec = { 8, 10, 30, 21, 6, 3, 6, 10, 0 };
ResultTuple result = CalculateLeastAbsSum(aVec, bVec, cVec);
assert(result.absSum == 0);
assert(result.val[0] == 30);
assert(result.val[1] == 30);
assert(result.val[2] == 30);
}
void testCase2()
{
std::vector<int> aVec = { 9, 7, 6, 8, 4, 5, 3, 1, 2 };
std::vector<int> bVec = { 20, 10, 19, 12, 15, 17, 14, 12 };
std::vector<int> cVec = { 30, 31, 21, 40, 25, 23, 36, 40, 50 };
ResultTuple result = CalculateLeastAbsSum(aVec, bVec, cVec);
assert(result.absSum == 24);
assert(result.val[0] == 9);
assert(result.val[1] == 10);
assert(result.val[2] == 21);
}
void testCase3()
{
// All 3 vectors contain both 4 and 9; this case should return 4
std::vector<int> aVec = { 9, 7, 6, 8, 4, 5, 3, 1, 2 };
std::vector<int> bVec = { 20, 4, 19, 2, 15, 17, 9, 12 };
std::vector<int> cVec = { 3, 13, 9, 40, 25, 23, 6, 4, 5 };
ResultTuple result = CalculateLeastAbsSum(aVec, bVec, cVec);
assert(result.absSum == 0);
assert(result.val[0] == 4);
assert(result.val[1] == 4);
assert(result.val[2] == 4);
}
void testCase4()
{
// The 3 vectors contain two candidate triples,
// (90, 89, 91) and (3, 4, 5);
// in this case, it should return (3, 4, 5)
std::vector<int> aVec = { 90, 3, 16, 28, 45, 35, 63, 71, 82 };
std::vector<int> bVec = { 89, 4, 19, 26, 48, 37, 69, 72, 86 };
std::vector<int> cVec = { 91, 5, 15, 29, 43, 33, 66, 74, 85 };
ResultTuple result = CalculateLeastAbsSum(aVec, bVec, cVec);
assert(result.absSum == 4);
assert(result.val[0] == 3);
assert(result.val[1] == 4);
assert(result.val[2] == 5);
}
void testCase5()
{
std::vector<int> aVec = { 90, 83, 16, 28, 45, 35, 63, 71, 3 };
std::vector<int> bVec = { 95, 88, 19, 26, 48, 37, 69, 72, 1 };
std::vector<int> cVec = { 91, 85, 15, 29, 43, 33, 66, 74, 2 };
ResultTuple result = CalculateLeastAbsSum(aVec, bVec, cVec);
assert(result.absSum == 4);
assert(result.val[0] == 3);
assert(result.val[1] == 1);
assert(result.val[2] == 2);
}
Pseudo code:
1. Sort rep in descending order as sortedRep
2. Use two pointers into arr and sortedRep: arrPtr and repPtr
3. Compare the value arrPtr points to with the value repPtr points to
* If less, replace the value arrPtr points to with the value repPtr points to,
and move both arrPtr and repPtr forward by 1
* If not less, move arrPtr forward by 1
4. Repeat Step 3 until either pointer reaches the end
For an example, please refer to: cpluspluslearning-petert.blogspot.co.uk/2015/10/data-structure-and-algorithm-find.html
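The two-pointer pseudo-code above can be sketched like this. The function name is illustrative; rep is taken by value so it can be sorted locally, and arr's element order is preserved.

```cpp
#include <algorithm>
#include <cstddef>
#include <functional>
#include <vector>

// Greedily replace arr elements with larger replacement values,
// consuming the biggest replacements first.
void ReplaceWithLargerValues(std::vector<int>& arr, std::vector<int> rep)
{
    std::sort(rep.begin(), rep.end(), std::greater<int>()); // Step 1
    size_t arrPtr = 0, repPtr = 0;                          // Step 2
    while (arrPtr < arr.size() && repPtr < rep.size()) {    // Steps 3-4
        if (arr[arrPtr] < rep[repPtr]) {
            arr[arrPtr++] = rep[repPtr++]; // replace, advance both
        } else {
            ++arrPtr;                      // keep the original element
        }
    }
}
```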
- peter tang October 10, 2015
Details: cpluspluslearning-petert.blogspot.co.uk/2015/09/data-structure-and-algorithm-find.html
O(n) solution.
void TestCasesOfFindLongestUniqueCharacterSerial()
{
std::string testString;
assert(FindLongestUniqueCharacterSerial(testString).empty());
testString = "aaaaaaaaaaaaaaaaaaaa";
assert(FindLongestUniqueCharacterSerial(testString) == "a");
testString = "abcdefghijklmn";
assert(FindLongestUniqueCharacterSerial(testString) == testString);
testString = "aabcbcdbca";
assert(FindLongestUniqueCharacterSerial(testString) == "dbca");
testString = "bcccf";
assert(FindLongestUniqueCharacterSerial(testString) == "bc");
}
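The O(n) solution exercised by the tests above is a sliding window; this sketch tracks the last position of each character so the window [start, i] never contains a repeat. Ties keep the earlier window, so "bcccf" yields "bc" as asserted. The real implementation is in the linked blog post.

```cpp
#include <cstddef>
#include <string>

// Longest substring without repeating characters: O(n) time,
// O(1) space for the 256-entry last-seen table.
std::string FindLongestUniqueCharacterSerial(const std::string& s)
{
    int last[256];
    for (int& v : last) v = -1;
    size_t bestStart = 0, bestLen = 0, start = 0;
    for (size_t i = 0; i < s.size(); ++i) {
        const unsigned char c = static_cast<unsigned char>(s[i]);
        if (last[c] >= static_cast<int>(start)) {
            start = static_cast<size_t>(last[c]) + 1; // drop the repeat
        }
        last[c] = static_cast<int>(i);
        if (i - start + 1 > bestLen) { // strictly longer: earlier window wins ties
            bestLen = i - start + 1;
            bestStart = start;
        }
    }
    return s.substr(bestStart, bestLen);
}
```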
See here: cpluspluslearning-petert.blogspot.co.uk/2015/09/bts-print-first-and-last-nodes-of-each.html
Test
void TestCasesOfFirstAndLastOfEachLevelOfTree()
{
std::vector<int> testInput{ 1 };
TreeNode<int>* rootBFS = ConstructTreeOnSortedValueBFS(testInput);
assert(rootBFS != nullptr);
std::vector<TreeNode<int>*> result;
GetFirstAndLastOfEachLevelOfTree(rootBFS, result);
assert(result.size() == 1);
assert(*result[0]->data == 1);
DeleteTree(&rootBFS);
testInput = { 1, 2 };
rootBFS = ConstructTreeOnSortedValueBFS(testInput);
assert(rootBFS != nullptr);
result.clear();
GetFirstAndLastOfEachLevelOfTree(rootBFS, result);
assert(result.size() == 2);
assert(*result[0]->data == 2);
assert(*result[1]->data == 1);
DeleteTree(&rootBFS);
testInput = { 1, 2, 3 };
rootBFS = ConstructTreeOnSortedValueBFS(testInput);
assert(rootBFS != nullptr);
result.clear();
GetFirstAndLastOfEachLevelOfTree(rootBFS, result);
assert(result.size() == 3);
assert(*result[0]->data == 2);
assert(*result[1]->data == 1);
assert(*result[2]->data == 3);
DeleteTree(&rootBFS);
testInput = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };
rootBFS = ConstructTreeOnSortedValueBFS(testInput);
assert(rootBFS != nullptr);
result.clear();
GetFirstAndLastOfEachLevelOfTree(rootBFS, result);
assert(result.size() == 7);
// Tree:
// 6
// 3, 8
// 1, 4, 7, 9
// 2, 5, 10
assert(*result[0]->data == 6);
assert(*result[1]->data == 3);
assert(*result[2]->data == 8);
assert(*result[3]->data == 1);
assert(*result[4]->data == 9);
assert(*result[5]->data == 2);
assert(*result[6]->data == 10);
DeleteTree(&rootBFS);
}
Implementation: cpluspluslearning-petert.blogspot.co.uk/2015/09/bst-return-sum-of-each-level-amazon.html
Test
void TestCasesOfSumOfEachLevelOfTree()
{
std::vector<int> testInput{ 1 };
TreeNode<int>* rootBFS = ConstructTreeOnSortedValueBFS(testInput);
assert(rootBFS != nullptr);
std::vector<int> result;
GetSumOfEachLevelOfTree(rootBFS, result);
assert(result.size() == 1);
assert(result[0] == 1);
DeleteTree(&rootBFS);
testInput = {1, 2};
rootBFS = ConstructTreeOnSortedValueBFS(testInput);
assert(rootBFS != nullptr);
result.clear();
GetSumOfEachLevelOfTree(rootBFS, result);
assert(result.size() == 2);
assert(result[0] == 2);
assert(result[1] == 1);
DeleteTree(&rootBFS);
testInput = { 1, 2, 3 };
rootBFS = ConstructTreeOnSortedValueBFS(testInput);
assert(rootBFS != nullptr);
result.clear();
GetSumOfEachLevelOfTree(rootBFS, result);
assert(result.size() == 2);
assert(result[0] == 2);
assert(result[1] == 4);
DeleteTree(&rootBFS);
testInput = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };
rootBFS = ConstructTreeOnSortedValueBFS(testInput);
assert(rootBFS != nullptr);
result.clear();
GetSumOfEachLevelOfTree(rootBFS, result);
assert(result.size() == 4);
assert(result[0] == 6); // 6
assert(result[1] == 11);// 3, 8
assert(result[2] == 21); // 1, 4, 7, 9
assert(result[3] == 17);// 2, 5, 10
DeleteTree(&rootBFS);
}
Implementation: cpluspluslearning-petert.blogspot.co.uk/2015/09/bts-find-next-in-order-value-facebook.html
TEST
void TestCasesOfNextInOrder()
{
const std::vector<int> testInput{ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };
TreeNode<int>* rootBFS = ConstructTreeOnSortedValueBFS(testInput);
assert(rootBFS != nullptr);
auto nextInOrder = NextInOrder(rootBFS, 0);
assert(nextInOrder.get() == nullptr);
nextInOrder = NextInOrder(rootBFS, 1);
assert(*nextInOrder == 2);
nextInOrder = NextInOrder(rootBFS, 2);
assert(*nextInOrder == 3);
nextInOrder = NextInOrder(rootBFS, 3);
assert(*nextInOrder == 4);
nextInOrder = NextInOrder(rootBFS, 4);
assert(*nextInOrder == 5);
nextInOrder = NextInOrder(rootBFS, 5);
assert(*nextInOrder == 6);
nextInOrder = NextInOrder(rootBFS, 6);
assert(*nextInOrder == 7);
nextInOrder = NextInOrder(rootBFS, 7);
assert(*nextInOrder == 8);
nextInOrder = NextInOrder(rootBFS, 8);
assert(*nextInOrder == 9);
nextInOrder = NextInOrder(rootBFS, 9);
assert(*nextInOrder == 10);
nextInOrder = NextInOrder(rootBFS, 10);
assert(nextInOrder.get() == nullptr);
nextInOrder = NextInOrder(rootBFS, 11);
assert(nextInOrder == nullptr);
TreeNode<int>* rootDFS = ConstructTreeOnSortedValueDFS(testInput);
assert(rootDFS != nullptr);
nextInOrder = NextInOrder(rootDFS, 0);
assert(nextInOrder.get() == nullptr);
nextInOrder = NextInOrder(rootDFS, 1);
assert(*nextInOrder == 2);
nextInOrder = NextInOrder(rootDFS, 2);
assert(*nextInOrder == 3);
nextInOrder = NextInOrder(rootDFS, 3);
assert(*nextInOrder == 4);
nextInOrder = NextInOrder(rootDFS, 4);
assert(*nextInOrder == 5);
nextInOrder = NextInOrder(rootDFS, 5);
assert(*nextInOrder == 6);
nextInOrder = NextInOrder(rootDFS, 6);
assert(*nextInOrder == 7);
nextInOrder = NextInOrder(rootDFS, 7);
assert(*nextInOrder == 8);
nextInOrder = NextInOrder(rootDFS, 8);
assert(*nextInOrder == 9);
nextInOrder = NextInOrder(rootDFS, 9);
assert(*nextInOrder == 10);
nextInOrder = NextInOrder(rootDFS, 10);
assert(nextInOrder.get() == nullptr);
nextInOrder = NextInOrder(rootDFS, 11);
assert(nextInOrder == nullptr);
DeleteTree(&rootBFS);
DeleteTree(&rootDFS);
}
It is an optimization problem:
- Objective function: Min(lambda1*X1 + lambda2*X2 + ... + lambdaN*XN)
- Where lambda(i) is either 0 or 1,
and Sigma(lambda(i)) = m, where m is N/2
void TestCases()
{
DataArray result;
// solution: 1, 8, 9
assert(DivideToTwoPartsWithEqualSum(DataArray{ 1, 2, 3, 6, 8, 9, 7 }, result) == 18.0);
// solution: 5, 7, 62
assert(DivideToTwoPartsWithEqualSum(DataArray{ 5, 7, 50, 8, 9, 62 }, result) == 74);
// solution: 10, 20
assert(DivideToTwoPartsWithEqualSum(DataArray{ 1, 10, 11, 18, 20 }, result) == 30);
// solution: 1, 10, 13
assert(DivideToTwoPartsWithEqualSum(DataArray{ 1, 7, 8, 9, 10, 13 }, result) == 24);
}
Please refer C++ code to cpluspluslearning-petert.blogspot.co.uk/2015/09/functional-programming-divide-array.html
- peter tang September 07, 2015
Solution:
- f(0) = 1
- f(1) = m
- f(n) = m*f(n-1) = pow(m, n);
- f(i, j) = f(i-1) * f(n-j) = pow(m, n+i-j-1), where i < j and i, j <= n (a jump connection from i to j)
See more here: cpluspluslearning-petert.blogspot.co.uk/2015/08/find-number-of-paths-from-source-to.html
Compared with a normal tree traversal via BFS or DFS, the only extra thing needed for this problem is to push the running sum (from the root to the node's parent) onto the stack or into the queue. While traversing the tree, compare this sum plus the node's own value against the given SUM. If equal, a path has been found; otherwise not.
Here is the implementation: cpluspluslearning-petert.blogspot.co.uk/2015/07/data-structure-and-algorithm-find-tree.html
An example is provided as well.
- peter tang June 19, 2015
The idea follows:
1. Use a sweep-line algorithm to find how many intervals overlap with each other.
2. Use the number of overlapping intervals as the cost function, and employ a steepest-descent approach to repeatedly remove the interval with the largest number of overlaps.
The goal is to remove the fewest intervals so that the remaining intervals do not overlap with each other, which guarantees that S' is one of the maximal non-overlapping subsets of S.
Details follow: cpluspluslearning-petert.blogspot.co.uk/2015/06/data-structure-and-algorithm-find_19.html
Could you be a bit more specific about how the marking can be done unevenly? Example?
- peter tang June 15, 2015
1. Sort the array
2. Go through each element x and find the upper bound of (x+1). Compare and pick the element that covers the largest number of elements within 1.
O(nlogn) solution.
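The two steps above can be sketched as follows; the function name is illustrative. After sorting, the elements covered by [x, x+1] lie between x's position and upper_bound(x+1), so each query is a binary search and the whole pass is O(n log n).

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Maximum number of elements that fall within 1 of some element x,
// i.e. inside [x, x+1]. The array is taken by value so it can be sorted.
size_t MaxElementsCoveredWithinOne(std::vector<int> arr)
{
    std::sort(arr.begin(), arr.end());        // Step 1
    size_t best = 0;
    for (size_t i = 0; i < arr.size(); ++i) { // Step 2
        const auto end = std::upper_bound(arr.begin() + i, arr.end(), arr[i] + 1);
        best = std::max(best, static_cast<size_t>(end - (arr.begin() + i)));
    }
    return best;
}
```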
I got an O(nlogn) and O(n) solution in computation and space respectively.
Here is how the example works out.
Let's take a look at the example: [1, 2], [5, 6], [1, 5], [7, 8], [1, 6]
The sorted lower and upper bounds are,
- Lower bounds: {1, 1, 1, 5, 7}
- Upper bounds: {2, 5, 6, 6, 8}
Here is how the algorithm runs:
- Take 1 as the lower bound of the first interval
- Ui = 2, Li+1 = 1, where i = 1; it is an OVERLAP case, continue
- Ui = 5, Li+1 = 1, where i = 2; it is an OVERLAP case, continue
- Ui = 6, Li+1 = 5, where i = 3; it is an OVERLAP case, continue
- Ui = 6, Li+1 = 7, where i = 4; it is NOT an OVERLAP case
* The first interval stops here as [1, 6]
* The second interval starts here with lower bound 7
- Reach the end; construct the second interval with the upper bound Un: [7, 8]
Two merged intervals are generated: [1, 6] and [7, 8]
Insert key 1 with an empty vector and key 7 with an empty vector into the hash map.
Order the merged intervals' lower bounds as {1, 7}
- Task [1, 2]: the 1st value <= 1 is 1; append 0 into the value of Key 1 of the hash map
- Task [5, 6]: the 1st value <= 5 is 1; append 1 into the value of Key 1 of the hash map
- Task [1, 5]: the 1st value <= 1 is 1; append 2 into the value of Key 1 of the hash map
- Task [7, 8]: the 1st value <= 7 is 7; append 3 into the value of Key 7 of the hash map
- Task [1, 6]: the 1st value <= 1 is 1; append 4 into the value of Key 1 of the hash map
Go through the hash map,
[0, 1, 2, 4] in one group
[3] in one group
More details about the algorithm and pseudo-code, please refer to: cpluspluslearning-petert.blogspot.co.uk/2015/06/data-structure-and-algorithm-find.html
- peter tang June 04, 2015
I have proposed a Trie solution. I don't think O(nk) is an achievable bound; the worst case would be O(n*(k^2)), for instance with {a, aa, aaa, aaaa, aaaaa}.
A detailed Trie structure is proposed and a solution with example is shown as well. Follow this for more details: cpluspluslearning-petert.blogspot.co.uk/2015/05/data-structure-and-algorithm-trie-and.html
Tom's solution is the best so far. Mine does not beat it; it is just a different angle on the problem, via dynamic programming.
If interested, follow this: cpluspluslearning-petert.blogspot.co.uk/2015/05/dynamic-programming-minimize-number-of.html
A tree structure is able to solve this problem: a binary ordered tree. Here are the insertion and deletion operations performed while visiting each element of the given array in order.
Insertion
- Given x, find the node y that matches one of the following cases, and insert x as a child of y
* y is a leaf node and less than x
* y is not a leaf node but its child is greater than x
- If y is the root, then insert x as its left node
Deletion after insertion of x
- If y is not a leaf node and x has a sibling z (which must be greater than x)
* Delete z, if z has no children
- x has a counterpart, z, at the same level in the right branch
* Delete x, if x > z and z has child/children
- Compare x with its sibling z
* Delete the right branch up to the common ancestor of x and z, if x is less
than z and x's ancestors are less than z's ancestors at each level
Two examples [-1, 2, 100, 100, 101, 3, 4, 5, -7] and [10, 8, 6, 10, 2, 10 , 6, 14, 4, 9, 5, 13, 3, 11, 7, 15, 80, 9, 10, 10, 10, 10] are solved by hand. The detailed steps and reasoning are provided in this blog: cpluspluslearning-petert.blogspot.co.uk/2015/05/data-structure-and-algorithm-find.html
- peter tang May 09, 2015
Pseudo-code
- 1. Find out all cubes under N as {X1, X2, ..., Xm}
- 2. Compute all possible SUM = Xi+Xj and increment its frequency
- 3. Select the SUMs no greater than N whose frequency equals 2
Counting the frequency associated with each SUM can use a hash map, with the SUM as the key and the frequency as the value. It takes constant time to search and linear time to filter out those that meet the requirement.
Complexity
- M equal to the integer no greater than N^(1/3)
- In maximum has M*(M-1)/2 possible SUM
Therefore the computation complexity is O(N^(2/3)) and space complexity is O(N^(1/3)+N^(2/3)). Both are better than linear complexity.
Follow more details on: cpluspluslearning-petert.blogspot.co.uk/2015/04/data-structure-and-algorithm-find-all_30.html
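The three steps above can be sketched directly; the function name is illustrative. The hash map counts how many unordered pairs of cubes produce each sum, and the result keeps sums with exactly two representations (1729 = 1^3 + 12^3 = 9^3 + 10^3 being the classic example).

```cpp
#include <cstddef>
#include <unordered_map>
#include <vector>

// Numbers no greater than N expressible as a sum of two positive cubes
// in exactly two ways.
std::vector<long long> FindCubePairSums(long long N)
{
    std::vector<long long> cubes;                 // Step 1: cubes under N
    for (long long i = 1; i * i * i <= N; ++i) {
        cubes.push_back(i * i * i);
    }
    std::unordered_map<long long, int> freq;      // Step 2: sum -> frequency
    for (size_t i = 0; i < cubes.size(); ++i) {
        for (size_t j = i; j < cubes.size(); ++j) {
            const long long sum = cubes[i] + cubes[j];
            if (sum <= N) ++freq[sum];
        }
    }
    std::vector<long long> result;                // Step 3: frequency == 2
    for (const auto& kv : freq) {
        if (kv.second == 2) result.push_back(kv.first);
    }
    return result;
}
```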
Use the similar idea as the sweep line algorithm.
Pseudo-code:
- 1. Group lower bounds and upper bounds as {X1, ..., Xn} and {Y1, ..., Yn}
- 2. Sort lower and upper bounds in ascending order as {L1, ..., Ln} and {U1, ..., Un}
- 3. Take LowerBound = L1 as the lower bound of the first interval
- 4. Compare Li+1 and Ui where i ranges from 1 to N-1
* 4.1 if Li+1 > 1 + Ui, then
construct the interval with Ui as the upper bound, [LowerBound, Ui]
update LowerBound = Li+1
go to Step 4.3
* 4.2 if i = N - 1, then
construct the interval with Un as the upper bound, [LowerBound, Un]
Terminate the program
* 4.3 Increment i by 1 and repeat Step 4
Example:
Let's take a look at the example: [4, 8], [3, 5], [-1 2], [10, 12]
The sorted lower and upper bounds are,
- Lower bounds: {-1, 3, 4, 10}
- Upper bounds: { 2, 5, 8, 12}
Here is how the algorithm runs:
- Take -1 as the lower bound of the first interval
- Ui = 2, Li+1 = 3, where i = 1; it is a CONTINUOUS case, continue
- Ui = 5, Li+1 = 4, where i = 2; it is an OVERLAP case, continue
- Ui = 8, Li+1 = 10, where i = 3; this is neither of the above cases
* The first interval stops here as [-1, 8]
* The second interval starts here with lower bound 10
- Reach the end; construct the second interval with the upper bound Un: [10, 12]
Follow this link: cpluspluslearning-petert.blogspot.co.uk/2015/04/data-structure-and-algorithm-find.html, for more details.
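Steps 1-4 above can be sketched as follows; the function name is illustrative. Sorting the lower and upper bounds independently is the sweep-line trick: a gap between U[i] and L[i+1] closes one merged interval and opens the next.

```cpp
#include <algorithm>
#include <utility>
#include <vector>

// Merge overlapping/adjacent integer intervals; intervals are
// (lower, upper) pairs, taken by value so the bounds can be split out.
std::vector<std::pair<int, int>>
MergeOverlappingIntervals(std::vector<std::pair<int, int>> intervals)
{
    std::vector<std::pair<int, int>> merged;
    if (intervals.empty()) return merged;
    std::vector<int> lower, upper;                   // Step 1
    for (const auto& iv : intervals) {
        lower.push_back(iv.first);
        upper.push_back(iv.second);
    }
    std::sort(lower.begin(), lower.end());           // Step 2
    std::sort(upper.begin(), upper.end());
    int lowerBound = lower[0];                       // Step 3
    for (size_t i = 0; i + 1 < lower.size(); ++i) {  // Step 4
        if (lower[i + 1] > 1 + upper[i]) {           // gap: close this interval
            merged.emplace_back(lowerBound, upper[i]);
            lowerBound = lower[i + 1];
        }
    }
    merged.emplace_back(lowerBound, upper.back());
    return merged;
}
```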
- peter tang April 28, 2015
Excellent solution. In the first group of 3, the largest goes in the middle. The second group of 3 has its left element overlapping with the first group, but that does not matter: if the overlapped element is not the largest, it stays where it is; if it is the largest, a smaller element is swapped into its place, and the relationship within the first group is preserved.
- peter tang April 23, 2015
None of the above solutions talked about minimal swaps. This solution has O(N) computation complexity and O(N) space complexity and guarantees a minimal number of swaps of M-1 or M+1, where M is the edit distance between A and B (treating each element as a single character, as if in a string).
Here is the pseudo-code,
- 1. Extract the differing elements into the edit-distance list
- 2. Check whether 0 appears in the edit-distance list
- 2.1 If not, swap 0 with one of the elements in the edit-distance list and add it to the list
- 3. From the list, look at the location of element "0"
- 4. Find the value x in B at the same location
- 5. Swap 0 with x in the edit-distance list (in A)
- 6. Remove x from the edit-distance list
- 7. Repeat the last 5 steps until the edit-distance list is exhausted
Follow this for more details (the reasoning and logic): cpluspluslearning-petert.blogspot.co.uk/2015/04/data-structure-and-algorithm-minimal.html
- peter tang April 23, 2015
There was a duplicate question a while ago. The number is no more than 4.
Follow this link to find out the previous question and one solution: cpluspluslearning-petert.blogspot.co.uk/2015/02/dynamic-programming-minimize-number-of.html.
A dynamic solution is provided as well.
Simple permutation problem. See this: cpluspluslearning-petert.blogspot.co.uk/2014/04/dynamic-programming-combinations-of.html. It is a dynamic-programming solution.
- peter tang April 20, 2015
Surprisingly there is no need for a program to calculate this; it is more of a brain-teaser. The least cost is n - max(di), where di represents the length of the log segment between neighbouring markings. Each time, cut off the shorter piece at the side of the log; the segment with the longest length remains in the hand at the end.
Follow this blog, cpluspluslearning-petert.blogspot.co.uk/2015/04/minimize-cost-of-cutting-log-google.html, to see the induction.
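The observation above in code, under the cost model where each cut costs the length of the piece cut off: every segment except the longest is paid for exactly once, so the total is n - max(di). Names and signature are illustrative.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Least cost of cutting a log of length n at the given marking
// positions (measured from one end).
size_t MinCostOfCuttingLog(size_t n, const std::vector<size_t>& markings)
{
    // Segment lengths between neighbouring markings (and the two ends).
    std::vector<size_t> sorted(markings);
    std::sort(sorted.begin(), sorted.end());
    size_t maxSegment = 0, prev = 0;
    for (size_t m : sorted) {
        maxSegment = std::max(maxSegment, m - prev);
        prev = m;
    }
    maxSegment = std::max(maxSegment, n - prev); // last segment to the far end
    return n - maxSegment;
}
```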
Test:
#include <cassert>
void TestCases()
{
{
std::vector<size_t> input = { 1, 2, 3, 4, 5, 6, 7 };
assert(MaxSumOfElementsTimeIndicesViaRotation(input) ==
(1 * 1 + 2 * 2 + 3 * 3 + 4 * 4 + 5 * 5 + 6 * 6 + 7 * 7));
}
{
std::vector<size_t> input = { 2, 3, 4, 5, 6, 7, 1 };
assert(MaxSumOfElementsTimeIndicesViaRotation(input) ==
(1 * 1 + 2 * 2 + 3 * 3 + 4 * 4 + 5 * 5 + 6 * 6 + 7 * 7));
}
{
std::vector<size_t> input = { 3, 4, 5, 6, 7, 1, 2 };
assert(MaxSumOfElementsTimeIndicesViaRotation(input) ==
(1 * 1 + 2 * 2 + 3 * 3 + 4 * 4 + 5 * 5 + 6 * 6 + 7 * 7));
}
{
std::vector<size_t> input = { 4, 5, 6, 7, 1, 2, 3 };
assert(MaxSumOfElementsTimeIndicesViaRotation(input) ==
(1 * 1 + 2 * 2 + 3 * 3 + 4 * 4 + 5 * 5 + 6 * 6 + 7 * 7));
}
{
std::vector<size_t> input = { 5, 6, 7, 1, 2, 3, 4 };
assert(MaxSumOfElementsTimeIndicesViaRotation(input) ==
(1 * 1 + 2 * 2 + 3 * 3 + 4 * 4 + 5 * 5 + 6 * 6 + 7 * 7));
}
{
std::vector<size_t> input = { 6, 7, 1, 2, 3, 4, 5 };
assert(MaxSumOfElementsTimeIndicesViaRotation(input) ==
(1 * 1 + 2 * 2 + 3 * 3 + 4 * 4 + 5 * 5 + 6 * 6 + 7 * 7));
}
{
std::vector<size_t> input = { 7, 1, 2, 3, 4, 5, 6, };
assert(MaxSumOfElementsTimeIndicesViaRotation(input) ==
(1 * 1 + 2 * 2 + 3 * 3 + 4 * 4 + 5 * 5 + 6 * 6 + 7 * 7));
}
}
C++ implementation
#include <vector>
size_t MaxSumOfElementsTimeIndicesViaRotation(const std::vector<size_t>& input)
{
size_t fPrev = 0;
size_t sum = 0;
size_t index = 1;
for (auto x : input) {
fPrev += x*index;
sum += x;
++index;
}
size_t maxVal = fPrev;
const size_t SizeOfInput = input.size();
size_t fCur;
for (index = 1; index < SizeOfInput; ++index) {
fCur = fPrev + input[index - 1] * SizeOfInput - sum;
if (fCur > maxVal) {
maxVal = fCur;
}
fPrev = fCur;
}
return maxVal;
}
See the sub-problem deduction: cpluspluslearning-petert.blogspot.co.uk/2015/02/data-structure-and-algorithm-max-sum-of.html
- peter tang February 24, 2015
Modification (editing does not work for me):
It can be proven that the initial K' is better than log2(N). Therefore the computation complexity is better than O(N^(0.5*(K' - 1))), - better than O(N^(0.5*(log2(N) - 1))). In the case of lemma (K' = 4), the computation complexity is O(N^(1.5)).
Bear in mind that this is a minimization problem. As long as we have found some K, any search path that exceeds K can be ignored.
My idea is to find a K that is good enough to start with. Here is how to find the very initial K (this procedure can be regarded as a worst case):
- K = 0
- x = sqrt(N), N' = N - x*x and K increment to 1
- If N' < 4, K = K + N'
* If N' is equal to 3, 2, 1 or 0, then their K will be itself.
* 3 = 1+1+1; 2 = 1+1; 1 = 1; 0 - N is a square number
* return K (we call this K as K')
- Else N = N' and repeat last 2 steps
Once K' is found, any search worse than K' can be ignored. And if a solution better than K' is found, K' can always be updated to the best so far. Then carry on with the rest of the search.
It can be proven that the initial K' is better than log2(N). Therefore the computation complexity is better than O(N^(0.5*(K' - 1))), - better than O(N^(0.5*(log2(N) - 1))). In the case of lemma (K' = 4), the computation complexity is O(N^(1.5)).
For the derivation of the computation complexity, please refer to the following:
cpluspluslearning-petert.blogspot.co.uk/2015/02/dynamic-programming-minimize-number-of.html
By the way the solution has O(1) space complexity.
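The greedy bound K' described above can be sketched as follows; the function name is illustrative. It repeatedly removes the largest square not exceeding the remainder, and once the remainder drops below 4 it is made up of that many 1-squares.

```cpp
#include <cmath>
#include <cstddef>

// Greedy initial upper bound K' on the number of squares summing to n,
// used to prune the minimization search.
size_t GreedyInitialK(size_t n)
{
    size_t k = 0;
    while (n >= 4) {
        const size_t x = static_cast<size_t>(std::sqrt(static_cast<double>(n)));
        n -= x * x; // remove the largest square not exceeding n
        ++k;
    }
    return k + n; // remainder 0..3 contributes that many 1-squares
}
```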
Excellent comments.
Just a minor issue with Steps 2 and 3. I guess Step 2 is to find the distances from each pond to the rest; therefore the data in Step 3 will have the "star-like" connection.
I believe this is not the correct approach, because its result will not be the global optimum. When considering the distance from the i-th pond to the remaining (N-i), the flips of "1" to "0" already made on the previous (i-1) ponds have changed the map of water and island, and reduced the number of ponds from N to (N-i+1). The previous work has to be taken into account.
This is an excellent interview question; anyone who fails to answer it should not be hired. It is a very fundamental question about the difference between a pointer and a pointer-to-pointer. A pointer-to-pointer should be used anywhere the function may modify where the pointer points. In this case "free" changes where the pointer points: from a valid address to null after freeing.
Any linked-list function that may call malloc/free should take a pointer-to-pointer. Please refer to this blog for more of the common APIs.
cpluspluslearning-petert.blogspot.co.uk/2014/08/data-structure-and-algorithm-linked.html?m=1
The problem is simple but your solution isn't. O(mn) complexity for computation.
- peter tang February 09, 2015
I don't think your sub-problem is correct. It is a minimization problem.
F(n) = 1 + min{F(n-i*i)}, where i <= sqrt(n).
- 1st recursion: N^(1/2)
- 2nd recursion: N^(1/4)
- 3rd recursion: N^(1/8)
......
Total: 1/2+1/4+1/8+ ...... = 1
So it is O(N) computation complexity and O(1) space complexity.
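A bottom-up sketch of the minimisation recurrence F(n) = 1 + min{F(n - i*i)} (function name assumed; note this table form costs O(N^1.5) time and O(N) space):

```cpp
#include <algorithm>
#include <climits>
#include <vector>

// F(n): minimum number of perfect squares summing to n.
// F(0) = 0; F(n) = 1 + min{ F(n - i*i) : 1 <= i, i*i <= n }
int MinNumOfSquares(int n)
{
    std::vector<int> F(n + 1, INT_MAX);
    F[0] = 0;
    for (int k = 1; k <= n; ++k) {
        for (int i = 1; i * i <= k; ++i) {
            F[k] = std::min(F[k], 1 + F[k - i * i]);
        }
    }
    return F[n];
}
```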
Beautiful, really? O(N^(1.5)) computation complexity and O(N) space complexity.
- peter tang February 03, 2015
Test
void TestPrintSubSetOfSummOfAllElementsEqualZero_Heuristic()
{
{
const SET testSet = { 1, 3, 5, 7};
assert(Set(testSet).PrintSubSetOfSummOfAllElementsEqualZero_Heuristic() == 0);
}
{
const SET testSet = { -1, -3, -5, -7 };
assert(Set(testSet).PrintSubSetOfSummOfAllElementsEqualZero_Heuristic() == 0);
}
{
const SET testSet = { -7, -8, -9, -10, 1, 2, 3 };
assert(Set(testSet).PrintSubSetOfSummOfAllElementsEqualZero_Heuristic() == 0);
}
{
const SET testSet = { 7, 8, 9, 10, -1, -2, -3 };
assert(Set(testSet).PrintSubSetOfSummOfAllElementsEqualZero_Heuristic() == 0);
}
{
const SET testSet = { 0, 0, 0, 0};
assert(Set(testSet).PrintSubSetOfSummOfAllElementsEqualZero_Heuristic() == 15);
}
{
const SET testSet = { 5, 3, 1, 8, -8, -4 };
assert(Set(testSet).PrintSubSetOfSummOfAllElementsEqualZero_Heuristic() == 4);
}
{
const SET testSet = { 0, 0, 0, 0 };
assert(Set(testSet).PrintSubSetOfSummOfAllElementsEqualZero_Heuristic() == 15);
}
{
const SET testSet = { -1, 1 };
assert(Set(testSet).PrintSubSetOfSummOfAllElementsEqualZero_Heuristic() == 1);
}
{
const SET testSet = { -1, 1, -3, 3 };
assert(Set(testSet).PrintSubSetOfSummOfAllElementsEqualZero_Heuristic() == 3);
}
{
const SET testSet = { -1, 1, -3, 3, -4, 4 };
assert(Set(testSet).PrintSubSetOfSummOfAllElementsEqualZero_Heuristic() == 9);
}
{
const SET testSet = { -1, 1, -2, 2, -3, 3, -4, 4 };
assert(Set(testSet).PrintSubSetOfSummOfAllElementsEqualZero_Heuristic() == 25);
}
{
const SET testSet = { -1, 1, -2, 2, -3, 3, -4, 4, -5, 5 };
assert(Set(testSet).PrintSubSetOfSummOfAllElementsEqualZero_Heuristic() == 75);
}
}
Continuing from my last comment, I was wondering whether there are any heuristic rules to help the search.
My method is to sort the given set into 3 different portions: the negative set, Sn, the zero-element set, So, and the positive set, Sp. We can ignore the zero-element set for now because its elements can either appear or not appear in a solution sub-set. Therefore any solution sub-set has to contain both negative and positive elements. The idea is to generate sub-sets from Sn first (each with a known NegativeSum) and then to find a sub-set in Sp whose sum is equal to the absolute value of NegativeSum.
The heuristic rules:
- If Sn or Sp is empty, there is no need to search.
- If So is not empty:
  * If solutions are found from Sn and Sp, then the number of solution sub-sets expands
    by a factor of 2^SizeOf(So).
  * If none are found, then the number of solution sub-sets is 2^SizeOf(So) - 1; these
    solution sub-sets are made of all zeros.
- If the sum of the current sub-set of Sp is more than the absolute value of NegativeSum,
  then stop the search in Sp, because taking in any extra element from Sp will only make
  the sum of the positive sub-set higher.
For the details please refer to: cpluspluslearning-petert.blogspot.co.uk/2015/02/dynamic-programming-list-all-sub-sets.html
Test:
1. The sub-sets are printed out to the standard display
2. Tested against the number of sub-sets whose sum is equal to zero
void TestPrintSubSetOfSummOfAllElementsEqualZero()
{
{
const SET testSet = { 5, 3, 1, 8, -8, -4 };
assert(PrintSubSetOfSummOfAllElementsEqualZero(testSet) == 4);
}
{
const SET testSet = { 0, 0, 0 , 0 };
assert(PrintSubSetOfSummOfAllElementsEqualZero(testSet) == 15);
}
{
const SET testSet = { -1, 1 };
assert(PrintSubSetOfSummOfAllElementsEqualZero(testSet) == 1);
}
{
const SET testSet = { -1, 1, -3, 3 };
assert(PrintSubSetOfSummOfAllElementsEqualZero(testSet) == 3);
}
{
const SET testSet = { -1, 1, -3, 3, -4, 4 };
assert(PrintSubSetOfSummOfAllElementsEqualZero(testSet) == 9);
}
{
const SET testSet = { -1, 1, -2, 2, -3, 3, -4, 4 };
assert(PrintSubSetOfSummOfAllElementsEqualZero(testSet) == 25);
}
{
const SET testSet = { -1, 1, -2, 2, -3, 3, -4, 4, -5, 5 };
assert(PrintSubSetOfSummOfAllElementsEqualZero(testSet) == 75);
}
}
As my last comment points out, generating the POWER set will not be feasible if the size, N, of the given set is too large, because of the space complexity.
I have crafted a solution with O(N) space complexity that constantly updates the indices of the sub-set elements and their sum. It is a DP solution. Please refer to:
cpluspluslearning-petert.blogspot.co.uk/2015/01/dynamic-programming-list.html
This is a solution for a C char array. For std::string, use the constructor: std::string output(input.rbegin(), input.rend());
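For example (a minimal sketch, with an assumed helper name):

```cpp
#include <string>

// Build the reversed string directly from the reverse iterators.
std::string Reverse(const std::string& input)
{
    return std::string(input.rbegin(), input.rend());
}
```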
- peter tang November 01, 2012
For the wildcard case, don't move the pointer in the PATTERN.
For the case of aaaabc and aaabc:
The first round, aaaa doesn't match aaab.
The second round, aaabc matches aaabc, then return 1 (0-based index).
Not sure how you do "assign a unique prime number to every unique integer in the array"?
Think about a long array with a wide range of values where each integer occurs only once. Assigning a unique prime number to every unique integer in such an array would be a big job.
Roman numerals: see Wikipedia.
const static std::string romanStr = "IVXLCDM";
const static int romanVal[] = { 1, 5, 10, 50, 100, 500, 1000 };
int GetRomanValue(const std::string& input)
{
    int total = 0, temp = 0;
    // Scan right to left: add a value if it is >= the value to its right,
    // otherwise subtract it (e.g. IV = -1 + 5 = 4).
    for (std::string::const_reverse_iterator it = input.rbegin(); it != input.rend(); ++it) {
        size_t t = romanStr.find(*it);
        total += romanVal[t] >= temp ? romanVal[t] : -romanVal[t];
        temp = romanVal[t];
    }
    return total;
}
Suppose that "input" is a valid string of Roman characters.
- peter tang November 01, 2012
The idea is to use a Trie structure to build up the dictionary, then feed the live stream into it (assuming one new character arrives at a time). Go one level deeper into the tree (Trie data structure) for each newly arriving character. Keep a list of valid tree paths and go through the list whenever a new character comes in:
- Generate a signal to call the API if a new word is completed by the new character.
  Keep this tree path, because the following character(s) can form further words.
- Keep this tree path in the list if no new word is found but the tree path is still valid,
  because it is a candidate to form a new word with the following stream.
- Remove this tree path from the list if the tree path is invalid.
- Also search for the new character at the root of the dictionary:
  - Call the API and add the tree path to the list if this single character is a word.
  - Add the tree path to the list if the tree path is valid.
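The steps above can be sketched as follows, with assumed names: a minimal Trie plus a matcher that keeps the list of valid tree paths, drops invalid ones, and signals (returning a count here instead of calling an API) whenever the new character completes a word.

```cpp
#include <map>
#include <memory>
#include <string>
#include <vector>

// Minimal Trie node: children keyed by character plus an end-of-word flag.
struct TrieNode {
    std::map<char, std::unique_ptr<TrieNode>> children;
    bool isWord = false;
};

class StreamMatcher {
public:
    void AddWord(const std::string& word) {
        TrieNode* node = &root_;
        for (char c : word) {
            auto& child = node->children[c];
            if (!child) child.reset(new TrieNode());
            node = child.get();
        }
        node->isWord = true;
    }

    // Feed one streamed character; return how many dictionary words end
    // at this character (the "call the API" signal).
    int Feed(char c) {
        int found = 0;
        std::vector<TrieNode*> next;
        active_.push_back(&root_);  // a new word may start at this character
        for (TrieNode* node : active_) {
            auto it = node->children.find(c);
            if (it == node->children.end()) continue;  // invalid path: drop it
            TrieNode* child = it->second.get();
            if (child->isWord) ++found;  // a word is completed here
            next.push_back(child);       // keep the path: it may extend further
        }
        active_.swap(next);
        return found;
    }

private:
    TrieNode root_;
    std::vector<TrieNode*> active_;  // the list of valid tree paths
};
```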
TEST
- peter tang November 16, 2015