FBNerd
@unknown: Really, can you? Microsoft will be pleased to announce that strlen() is now a constant-time algorithm ^^... Just tell them what you know!
@Praveen: You are right. As with all questions here, they are ridiculously under-specified, and it is always funny how people just post random answers based on their own interpretation without even stating that interpretation. Microsoft should not hire them lol.
This question can only be solved under the assumption that you have a pointer to the end of the string or can obtain one in constant time, which is only possible if you know the size of the string up front; that is not the case for C-style strings.
"standard top-n algorithm"? Well, it seems not so standard to Google, but anyway: just look for "C++ partial_sort implementation"; it does the very same thing, and in more detail.
1) Add elements to the heap until it has size K
2) Then, whenever a LARGER element arrives, you remove the min element, which costs log(K). This way all the (n-K) smallest elements get pushed out of there...
3) Return the head of the min-heap.
So you get O(log(K) * (n-K) + K) for a pairing/Fibonacci heap. Not all heaps will give you this worst-case performance... A binary heap ends up with O(log(K) * (n-K) + K*log(K)) = O(n*log(K))...
Alternatively you could use randomized Quickselect, which sounds nicer to me... and gives you a more stable expected complexity of O(n).
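The heap variant above can be sketched in C++ with std::priority_queue as the min-heap (this is the O(n log K) binary-heap version; the function name is mine):

```cpp
#include <queue>
#include <vector>
#include <functional>

// Sketch of the heap-based top-K scheme: keep a min-heap of size K and
// trade its minimum for any larger incoming element.
std::vector<int> topK(const std::vector<int>& nums, size_t k)
{
    std::priority_queue<int, std::vector<int>, std::greater<int>> heap;
    for(int v : nums)
    {
        if(heap.size() < k)
            heap.push(v);           // 1) fill the heap up to size K
        else if(v > heap.top())
        {
            heap.pop();             // 2) evict the minimum (log K) ...
            heap.push(v);           // ... in exchange for a larger element
        }
    }
    std::vector<int> res;           // 3) whatever remains is the top K
    while(!heap.empty()) { res.push_back(heap.top()); heap.pop(); }
    return res;                     // ascending order, since we pop the min
}
```

For the Quickselect alternative, std::nth_element already implements exactly that with O(n) average complexity.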
"In case where a circle consists of 360 points"
That is already wrong. A circle consists of infinitely many points, and that makes a big difference...
It is so much easier to explain.
1) Note that the problem is symmetric, so we can pick one anchor point for the angle in question.
2) Note that an exact right angle is impossible. It would require three points with absolutely exact coordinates, and the probability of that happening is zero! You may come close, but it will NEVER "be" a right angle... It is like asking for the probability that a random real number in [0,1] equals sqrt(2)... It just never happens.
3) There are three possible right angles, which are symmetric again, so we restrict even further by only allowing the legs at the anchor point to vary by 180 degrees (-90 to +90) each. This gives a unique family of triangles, which by rotational symmetry extends to the full problem.
4) We can rotate a leg around the anchor point through the degrees of freedom defined above and thereby sweep the whole family.
Thus it follows that <90 and >90 are equally likely... That's basically it.
I doubt that. Simply because gathering some magic OS values won't solve the problem that you need to be able to generate millions of random values per second. It is really about how to construct a pseudo-random generator... Usually you only reseed a cryptographic one every now and then with new entropy; doing it constantly is just too costly.
Also, gathering random data is no guarantee of a uniform distribution! You need to work a bit more...
It depends. If the algorithm is used only one time, then of course you need to add it to the run-time complexity. But if you are allowed to pre-process the data, as in this question, then you have two run-times: one for preprocessing and one for answering queries...
- FBNerd February 28, 2013

The question is a bit underspecified. Assuming you can have arbitrarily many special pattern chars in there, it gets complicated. All solutions except the downvoted one (LOL) would fail on that.
I could even write this in an editor and it COMPILED and worked out of the box. This is awesome :D.
#include <iostream>
#include <string>

class FindPattern
{
private:
    // Tries to match [pattern, patternEnd) against a prefix of
    // [frontier, frontierEnd); on success the matched text is in outResult.
    bool parse(
        std::string::const_iterator frontier,
        std::string::const_iterator frontierEnd,
        std::string::const_iterator pattern,
        std::string::const_iterator patternEnd,
        std::string& outResult)
    {
        if(pattern == patternEnd)
        {
            outResult = "";
            return true;
        }
        switch(*pattern)
        {
        case '*':
        {
            // '*' may consume any number of characters, including zero
            // and including the whole rest of the input.
            for(auto fit = frontier; ; ++fit)
            {
                if(parse(fit, frontierEnd, pattern + 1, patternEnd, outResult))
                {
                    outResult = std::string(frontier, fit) + outResult;
                    return true;
                }
                if(fit == frontierEnd)
                    return false;
            }
        }
        default:
        {
            if(frontier == frontierEnd)
                return false;
            if((*frontier == *pattern) || (*pattern == '.'))
            {
                if(!parse(frontier + 1, frontierEnd, pattern + 1, patternEnd, outResult))
                    return false;
                outResult = *frontier + outResult;
                return true;
            }
            return false;
        }
        }
    }
public:
    std::string operator()(const std::string& frontier, const std::string& pattern)
    {
        std::string res;
        // Try every start position, including the very end, so that a
        // pattern like "a*" can match with '*' consuming nothing.
        for(auto fit = frontier.begin(); ; ++fit)
        {
            if(parse(fit, frontier.end(), pattern.begin(), pattern.end(), res))
                return res;
            if(fit == frontier.end())
                break;
        }
        return ">>FAILED<<";
    }
};

int main(int argc, char** argv)
{
    if(argc == 3)
        std::cout << "Found: \"" << FindPattern()(argv[1], argv[2]) << "\"" << std::endl;
    else
        std::cout << "Invalid usage! Need two parameters..." << std::endl;
    return 0;
}
An online algorithm will probably have to remember ALL sequences seen so far... There should be no way to do better. You can keep them sorted by their last element, and you will have to manage O(n) sequences simultaneously in the worst case. There is stuff you can do about optimizing memory, but in principle this should be it.
So, a more precise idea. Keep chunks of contiguous increases in a "Node", each with an (initially empty) list of forward links. Further, keep all Nodes sorted by their highest element, and clone Nodes such that a sequence like "1 2 3" yields three Nodes: "1", "1 2" and "1 2 3". Now when you encounter a new element, you scan in O(log(n)) time for the first Node that could continue into your newfound sequence. You then add forward links to all following, already-known Nodes, and clone and add these new Nodes... Also keep track of the maximum path found so far. I am not sure what you can do to improve on this if the future is totally unknown.
Wikipedia: Longest_increasing_subsequence
Offline:
L = 0
for i = 1, 2, ... n:
    binary search for the largest positive j ≤ L
        such that X[M[j]] < X[i] (or set j = 0 if no such value exists)
    P[i] = M[j]
    if j == L or X[i] < X[M[j+1]]:
        M[j+1] = i
        L = max(L, j+1)
This does not work. Besides the fact that your algorithm can't return the subsequence in question, look at the following counterexample:
1 100 2 50 101 3 70 80 107 45
Your temp and subsequence-length values come out as:
t: 2 1 2 3 1 2 3 4 1
s: 0 2 2 2 3 3 3 3 4
So your result is 4, while the true answer is 6, that is:
1 2 50 70 80 107
> Is Longest Increasing sub-sequence different from this question?
No, but your algorithm does not fit the question. You need to keep track of all previous sequences, because any of them may continue in the future and then turn out to be the longest with the very last element you read.
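For reference, the offline pseudocode quoted above can be sketched compactly in C++; this version only returns the length (it tracks the smallest possible tail for each subsequence length), and the names are mine:

```cpp
#include <vector>
#include <algorithm>

// Length of the longest strictly increasing subsequence in O(n log n).
// tails[j] holds the smallest possible last element of an increasing
// subsequence of length j+1; each new value either extends the best
// subsequence or improves a tail via binary search.
int longestIncreasingSubsequence(const std::vector<int>& x)
{
    std::vector<int> tails;
    for(int v : x)
    {
        auto it = std::lower_bound(tails.begin(), tails.end(), v);
        if(it == tails.end())
            tails.push_back(v);   // v extends the longest subsequence so far
        else
            *it = v;              // v is a smaller tail for this length
    }
    return (int)tails.size();
}
```

On the counterexample above it indeed returns 6.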
The data structure choice is ours, hey? So what if we just add a "needsmerge" bit together with an array of lists that potentially need to be merged into one on the next traversal or other operation. That is a typical tradeoff: if you want merge in O(1), then you expect merging to be a common operation, so we just do it via lazy evaluation...
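A minimal sketch of that lazy-merge idea (all names are my own; std::list::splice provides the constant-time link-in when the merge is finally flushed):

```cpp
#include <list>
#include <vector>

// Merging just records the other list in O(1) (the "needsmerge" state);
// the actual splice happens lazily on the next traversal.
template<class T>
struct LazyList
{
    std::list<T> items;
    std::vector<std::list<T>> pendingMerges;

    void mergeLazy(std::list<T> other)      // O(1): just remember it
    {
        pendingMerges.push_back(std::move(other));
    }

    void flush()                            // done on the next traversal
    {
        for(auto& l : pendingMerges)
            items.splice(items.end(), l);   // O(1) per pending list
        pendingMerges.clear();
    }

    template<class F>
    void forEach(F f)
    {
        flush();
        for(auto& v : items) f(v);
    }
};
```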
- FBNerd February 28, 2013

01010101010101 -> n-1 intervals with 2 entries
01010101010101 -> n-3 intervals with 4 entries
01010101010101 -> n-5 intervals with 6 entries
...
01010101010101 -> n+1-2k intervals with 2k entries
This implies that there are O(n^2) intervals with balanced 0s and 1s (Gauss sum)...
...which implies that there is no linear-time solution that outputs them all!
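A brute-force counter (my own helper, quadratic on purpose) illustrates the Gauss-sum growth on an alternating string:

```cpp
#include <string>

// Counts all intervals (substrings) with equally many '0's and '1's.
// For an alternating string of length n this is (n/2)^2, i.e. Theta(n^2).
int countBalanced(const std::string& s)
{
    int count = 0;
    for(size_t i = 0; i < s.size(); i++)
    {
        int balance = 0;
        for(size_t j = i; j < s.size(); j++)
        {
            balance += (s[j] == '1') ? 1 : -1;  // +1 for '1', -1 for '0'
            if(balance == 0) count++;           // interval [i, j] is balanced
        }
    }
    return count;
}
```

For "01010101" (n = 8) this yields 7 + 5 + 3 + 1 = 16 balanced intervals, matching the sum above.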
You need to assume general position for this to work. Imagine clusters of points of order O(n) which all have the same coordinates. But it can be fixed using an initial bucket-sort pass that filters out clusters. That's the answer for an interview ;). Simple and effective, and I guess they won't hold the general-position assumption against you...
- FBNerd February 26, 2013

Your nickname is great! *rofl*
Okay this has gotten way uglier than I anticipated but except for the printTree function the code doesn't really matter...
#include <iostream>
#include <string>
#include <vector>
#include <functional>
#include <memory>

// Prints a tree level by level; a nullptr in the frontier marks the end of a level.
template<class TTree>
void printTree(
    TTree tree,
    std::function<void (TTree, std::function<void (TTree)>)> enumChildren,
    std::function<void (TTree)> printNode,
    std::function<void ()> printLevelSepa)
{
    std::vector<TTree> frontier;
    frontier.push_back(tree);
    frontier.push_back(nullptr); // end-of-level marker
    for(size_t i = 0; i < frontier.size(); i++)
    {
        auto current = frontier[i];
        if(current == nullptr)
        {
            printLevelSepa();
            if(i < frontier.size() - 1) // only if there are children left to process
                frontier.push_back(nullptr); // signal end of next tree level
        }
        else
        {
            printNode(current);
            enumChildren(current, [&](TTree child) { frontier.push_back(child); });
        }
    }
}

class BinaryTree
{
public:
    typedef std::shared_ptr<BinaryTree> pointer_type;
    pointer_type left;
    pointer_type right;
    std::string value;
    BinaryTree(std::string value) : left(nullptr), right(nullptr), value(value) { }
    BinaryTree(std::string value, pointer_type left, pointer_type right) : left(left), right(right), value(value) { }
};

int main(int argc, char** argv)
{
    auto tree = std::make_shared<BinaryTree>(
        "Root",
        std::make_shared<BinaryTree>(
            "L1_left",
            std::make_shared<BinaryTree>("L1L_left"),
            std::make_shared<BinaryTree>("L1R_right")
        ),
        std::make_shared<BinaryTree>(
            "R1_right",
            std::make_shared<BinaryTree>("R1L_left"),
            std::make_shared<BinaryTree>("R1R_right")
        )
    );
    printTree<BinaryTree::pointer_type>(
        tree,
        [&](BinaryTree::pointer_type node, std::function<void (BinaryTree::pointer_type)> callback)
        {
            if(node->left != nullptr) callback(node->left);
            if(node->right != nullptr) callback(node->right);
        },
        [&](BinaryTree::pointer_type node) { std::cout << node->value << " "; },
        [&]() { std::cout << std::endl; });
    return 0;
}
Find a shortest path. Then recursively find shortest paths where you remove one node at a time from your first shortest path. You end up with a set of shortest paths such that, regardless of which single node is removed from the graph, you know you always have an alternative shortest path.
- FBNerd February 26, 2013

The question is not really specified. So, for a few interpretations:
1) "Interval sizes are fixed" means to me that all intervals are of the same size, just as in the example. In that case just use either an unordered_set for space efficiency or a bit array for more speed. The set contains only the left border of each interval, while the array contains a 1 for each present number. Both have O(1) for all operations...
2) Intervals can be of variable size. Order by left and right borders in two search trees. Insert/find/delete is O(log(n)). Or use a bit array, trading space for speed...
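Interpretation 1 might look like the following sketch, under the extra assumption that intervals are aligned multiples of the fixed size and values are non-negative (class and method names are mine):

```cpp
#include <unordered_set>

// Fixed-size, aligned intervals: store only each interval's left border.
class FixedIntervalSet
{
    int size_;
    std::unordered_set<int> lefts_;   // left border of each present interval
public:
    explicit FixedIntervalSet(int size) : size_(size) {}
    // all O(1) expected, as claimed above
    void insertInterval(int left) { lefts_.insert(left); }
    void removeInterval(int left) { lefts_.erase(left); }
    bool contains(int x) const        // is the number x covered?
    {
        int left = (x / size_) * size_; // round down to the aligned border
        return lefts_.count(left) != 0;
    }
};
```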
A loop would be more efficient here than memoization, but it doesn't impact the complexity, so heck.
#include <vector>
#include <iostream>
#include <algorithm>
#include <unordered_map>

// Best sum of a subsequence that starts at 'offset', takes every element
// it lands on, and jumps ahead by 2 or 3; results are memoized in 'space'.
int findSubSeq_internal(
    int offset,
    const std::vector<int>& nums,
    std::unordered_map<int, int>& space)
{
    if(offset >= (int)nums.size())
        return 0;
    auto it = space.find(offset);
    if(it != space.end())
        return it->second;
    int a = findSubSeq_internal(offset + 3, nums, space);
    int b = nums[offset] + std::max(a, findSubSeq_internal(offset + 2, nums, space));
    space.insert(std::make_pair(offset, b));
    return b;
}

int findSubSeq(std::vector<int> nums)
{
    if(nums.empty())
        return 0;
    std::unordered_map<int, int> space;
    int a = findSubSeq_internal(0, nums, space);
    int b = findSubSeq_internal(1, nums, space);
    return std::max(a, b);
}

int main(int argc, char** argv)
{
    std::vector<int> nums = {2,6,3,4,5,17,9,12,3,89,3,6,7,3,4};
    std::cout << findSubSeq(nums) << std::endl;
    return 0;
}
What about:
#include <vector>
#include <algorithm>

bool mcNugget(int n)
{
    // All amounts that cannot be composed; everything above 43 works.
    static const std::vector<int> notWorking = {1,2,3,4,5,7,8,10,11,13,14,16,17,19,22,23,25,28,31,34,37,43};
    return std::find(notWorking.begin(), notWorking.end(), n) == notWorking.end();
}
LOL, it seems as if all numbers above 43 can be composed of nuggets. So buy a lot and you can't go wrong... I verified it up to 2048 with brute force.
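That brute-force check could look like this, assuming the usual pack sizes of 6, 9 and 20 for this puzzle (the function name is mine):

```cpp
// Can n be written as 6a + 9b + 20c with non-negative a, b, c?
bool representable(int n)
{
    for(int c = 0; 20 * c <= n; c++)
        for(int b = 0; 20 * c + 9 * b <= n; b++)
            if((n - 20 * c - 9 * b) % 6 == 0)   // remainder covered by 6-packs?
                return true;
    return false;
}
```

Running it over a range confirms that 43 is the largest non-representable amount.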
- FBNerd February 26, 2013

#include <vector>
#include <string>
#include <functional>
#include <iostream>
#include <unordered_map>
std::pair<char, int> countCharsAndReturnMax(std::string str, std::unordered_map<char, int>& counts)
{
    std::pair<char, int> max('\0', 0);
    for(auto c : str)
    {
        // pre-increment, so the comparison sees the updated count
        if(++counts[c] > max.second)
            max = std::make_pair(c, counts[c]);
    }
    return max;
}

int main(int argc, char** argv)
{
    std::unordered_map<char, int> counts;
    auto maxEntry = countCharsAndReturnMax("Facebook", counts);
    std::cout << "Given string has the following char histogram:" << std::endl << std::endl;
    for(auto e : counts)
    {
        if(e.first == maxEntry.first)
            std::cout << ">>";
        else
            std::cout << "  ";
        std::cout << " '" << e.first << "' : " << e.second << " times" << std::endl;
    }
    return 0;
}
What do you mean by "the naive way would always be n!"? In fact it is impossible to do better, unless you manage to print a string in negative time lol. The question is rather: how much worse does it get? The solutions above seem to be a nightmare of memory allocation and copying. Mine is free of all that.
#include <vector>
#include <string>
#include <functional>
#include <iostream>
#include <iterator>

// Enumerates index permutations; validIndices[i] == -1 marks index i as used.
// The callback returns false to abort the enumeration early.
bool permutation_internal(std::vector<int>& perm, size_t offset, std::vector<int>& validIndices, std::function<bool ()> callback)
{
    if(offset == perm.size())
        return callback();
    for(size_t i = 0; i < validIndices.size(); i++)
    {
        if(validIndices[i] == -1)
            continue;
        validIndices[i] = -1;
        perm[offset] = (int)i;
        if(!permutation_internal(perm, offset + 1, validIndices, callback))
            return false;
        validIndices[i] = (int)i;
    }
    return true;
}

template<class TIter>
bool permutation(TIter begin, TIter end, std::function<bool (const std::vector<typename std::iterator_traits<TIter>::value_type>&)> predicate)
{
    int s = (int)std::distance(begin, end);
    std::vector<typename std::iterator_traits<TIter>::value_type> res(s);
    std::vector<int> perm(s);
    std::vector<int> validIndices(s);
    for(int i = 0; i < s; i++) validIndices[i] = i;
    return permutation_internal(perm, 0, validIndices, [&]()
    {
        for(size_t i = 0; i < perm.size(); i++)
            res[i] = *(begin + perm[i]);
        return predicate(res);
    });
}

void printStringPerms(std::string str)
{
    permutation(str.begin(), str.end(), [&](const std::vector<char>& perm)
    {
        for(auto c : perm) std::cout << c;
        std::cout << std::endl;
        return true;
    });
}

int main(int argc, char** argv)
{
    printStringPerms("Facebook");
    return 0;
}
PLEASE? A KD-tree? They will be more than happy to watch you implement one in pseudocode... BTW, a KD-tree has O(n) worst-case query complexity.
What they were probably looking for is this.
Sort all rectangles and points by their x and y coordinates into sorted sets Rx, Ry, Px and Py.
Then at each recursion level, flip a bit which says whether to subdivide in the X or Y direction. (This just makes the complexity analysis a whole lot easier.)
You always use the middle element in the current subdivision of Rx, Ry, Px, Py as splitter to obtain the sets for the next subdivision.
The recursion terminates after log(n) steps, when there is either zero or one rectangle left. Then you check whether all points in the subdivided Px or Py lie inside it and add them to a result set.
The algorithm is at most O(n*log(n)^2) worst case... I have the proof on paper, but it would be too tedious to write it out here. Shouldn't be too hard ;).
Simple: Java is to C++ what Lambda Calculus is to Haskell ^^
- FBNerd August 25, 2013

Or in summary: I HATE JAVA