Nelson Perez
ABOUT
•Nelson has 10+ years of software development and leadership experience.
•Passionate about Big Data, Business Intelligence (BI), Web Development & highly scalable, highly available Cloud services.
•Always excited to rapidly ramp up & learn new skills and technologies.
•Definitely looking for interesting opportunities to apply skills, passion and agility to provide direct innovation and value to customers.
•Startup/Hackathon cultures are always welcome!
HIGHLIGHTS
•Architected & led a company-wide automation platform, from builds to feature progress & burn-down reports, for Action Engine.
•Developed service health optics for Windows Azure Active Directory.
•Developed the Exchange Online Storage service, a big data pipeline.
•Delivered new big data behavioral targeting segments for display and email ads for the Bing, Hotmail and Live Messenger marketing teams.
•Led the solution that saved $3.7 Million/year optimizing the Search Engine Marketing budget for the Bing Search Cashback online campaign.
EDUCATION
•B.S. in Computer Engineering [CS + EE] + Project Manager Certificate @ University of Puerto Rico – Mayaguez, PR 1999-2005
•Certifications: Project+, Technical Trainer, MCP
- 1 of 1 vote
Answers
Convert a binary tree into an in-order-traversal circular list, re-purposing the node's pointers Left & Right as Previous and Next respectively.
- Nelson Perez in United States
Hint: A single node's Left & Right point to itself.
Note: This is not a binary search tree.
Facebook Software Engineer Coding
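Since the conversion itself never got posted in this thread, here is a rough sketch of how it could be done recursively, re-purposing Left/Right as Previous/Next; the Node shape and the Join helper are my own assumptions:

```csharp
using System;

public class Node
{
    public int Data;
    public Node Left;   // becomes Previous
    public Node Right;  // becomes Next
    public Node(int data) { Data = data; }
}

public static class CircularList
{
    // Appends circular list b after circular list a and returns the head.
    private static Node Join(Node a, Node b)
    {
        if (a == null) return b;
        if (b == null) return a;
        Node aLast = a.Left;
        Node bLast = b.Left;
        aLast.Right = b;
        b.Left = aLast;
        bLast.Right = a;
        a.Left = bLast;
        return a;
    }

    // Converts a binary tree to a circular doubly linked list in in-order sequence.
    public static Node ToCircularList(Node root)
    {
        if (root == null) return null;
        Node left = ToCircularList(root.Left);
        Node right = ToCircularList(root.Right);
        // Per the hint: a single node points to itself on both sides
        root.Left = root;
        root.Right = root;
        return Join(Join(left, root), right);
    }
}
```

Joining the already-converted left list, the root as a one-node list, and the right list keeps the whole traversal O(n) since each join is constant time.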
Oops, thought it was a Binary SEARCH Tree!
- Nelson Perez March 24, 2015
CPU O(nlogn) Memory O(1):
Just keep inserting the pre-order elements into a tree.
For those asking about duplicates: if the tree was created with a consistent algorithm, then this solution works.
Otherwise you need a more complex O(n) algorithm which works for certain cases but which I would definitely avoid in an interview.
public static Node PreOrderBuildTree(int[] preOrder)
{
    Node result = null;
    foreach(int data in preOrder)
    {
        result = BST_PreOrderInsert(result, data);
    }
    return result;
}
private static Node BST_PreOrderInsert(Node head, int data)
{
    if(head == null)
        return new Node(data);
    Node current = head;
    Node prev = null;
    while(current != null)
    {
        prev = current;
        if(current.Data >= data)
        {
            current = current.Right;
        }
        else
        {
            current = current.Left;
        }
    }
    if(prev.Data >= data)
    {
        prev.Right = new Node(data);
    }
    else
    {
        prev.Left = new Node(data);
    }
    return head;
}
CPU O(n) Memory O(n) Algorithm:
Call recursively, maintaining the min and max based on the side that you go.
It inserts nodes if they are within bounds.
Otherwise:
Being out of bounds below the min means we are done with the Left side.
Being out of bounds above the max means we are done with the Right side.
This approach has a CPU order of growth of O(n), as the insertion is done at the lowest possible level, thus not needing to traverse the entire tree to insert a new element.
The drawback is that in the worst case the Memory order of growth is O(n) due to the call stack.
public static Node PreOrderBuildTree(int[] preOrder)
{
    if(preOrder == null || preOrder.Length == 0) return null;
    Node result = new Node(preOrder[0]);
    CoreBuildTree(
        result,
        preOrder,
        1,
        int.MinValue,
        int.MaxValue);
    return result;
}
private static int CoreBuildTree(
    Node head,
    int[] preOrder,
    int pos,
    int min,
    int max)
{
    // If we are at the end we are done
    if(preOrder.Length == pos)
        return pos;
    int data = preOrder[pos];
    // This is for the Right side
    if(data >= head.Data && data <= max)
    {
        head.Right = new Node(data);
        pos++;
        pos = CoreBuildTree(
            head.Right,
            preOrder,
            pos,
            head.Data,
            max);
    }
    // This is for the Left side, if there are still elements left
    if(pos != preOrder.Length)
    {
        data = preOrder[pos];
        if(data <= head.Data && data >= min)
        {
            head.Left = new Node(data);
            pos++;
            pos = CoreBuildTree(
                head.Left,
                preOrder,
                pos,
                min,
                head.Data);
        }
    }
    return pos;
}
This is a very unusual Facebook question, as they are normally not technology-specific but rather specific to a problem.
Searching the internet, I found how a PHP class and function look.
<?php
class Foo {
public $aMemberVar = 'aMemberVar Member Variable';
public $aFuncName = 'aMemberFunc';
function aMemberFunc() {
print 'Inside `aMemberFunc()`';
}
}
$foo = new Foo;
?>
I can parse it this way.
public Dictionary<string, List<string>> GetMapOfClassFunctions(string phpCode)
{
    string[] pCode = phpCode.Split();
    Dictionary<string, List<string>> result = new Dictionary<string, List<string>>();
    for(int i = 0; i < pCode.Length; i++)
    {
        if("<?php" == pCode[i])
        {
            for(i = i + 1; i < pCode.Length - 1; i++)
            {
                if(pCode[i] == "?>") break;
                else if(pCode[i] == "class")
                {
                    i++;
                    string className = pCode[i];
                    result.Add(className, new List<string>());
                    int braces = 0;
                    for(i = i + 1; i < pCode.Length - 1; i++)
                    {
                        if(pCode[i] == "{") braces++;
                        else if(pCode[i] == "}") braces--;
                        else if(braces == 0) break;
                        else if(braces == 1 && pCode[i] == "function")
                        {
                            i++;
                            string functionName = pCode[i];
                            result[className].Add(functionName);
                            for(i = i + 1; i < pCode.Length; i++)
                            {
                                if(pCode[i] == "{") braces++;
                                else if(pCode[i] == "}") braces--;
                                else if(braces == 1) break;
                            }
                        }
                    }
                }
            }
        }
    }
    return result;
}
It seems that a progression of this problem would be either:
#1 How would you solve this problem with only 4GB of memory on a single machine?
#2 How would you solve this problem with multiple machines, each with 4GB of memory?
#1 can be solved two ways. One is having three files, sorting all values and doing binary search, which is OK. The second way, which I would personally use, is getting the hashcode and creating a file for every hashcode, so the lookup becomes O(1).
#2 can be solved by distributing the records across the machines, using a hash algorithm guaranteed to evenly distribute the hashcodes % number of machines, so that exactly one machine will have each record in memory.
Loading: Use the hashcode to read the file and only load the records that correspond to the machine.
Search: Get the hashcode to tell which machine has the record, then ask that machine for the record using the hashcode.
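The routing step both Loading and Search depend on can be sketched like this; the hash function here is a made-up stable one, since string.GetHashCode() is not guaranteed to produce the same value across processes or runtimes:

```csharp
using System;

public static class ShardRouter
{
    // Deterministically maps a record key to one of N machines.
    public static int MachineFor(string key, int machineCount)
    {
        // Hand-rolled stable hash (an assumption for illustration);
        // string.GetHashCode() may differ between runtimes.
        int hash = 17;
        foreach (char c in key)
            hash = unchecked(hash * 31 + c);
        // Mask off the sign bit so the modulo is always non-negative.
        return (hash & 0x7FFFFFFF) % machineCount;
    }
}
```

Because the mapping is deterministic, the loader and the query front-end agree on which machine owns a record without any coordination.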
It seems that the methods assume that the RollNo, Name & Address are unique.
I'll implement it to support collisions of RollNo, Name & Address, as that seems to be the right approach here.
The simple approach is to just use multiple hash tables to find the values for a specific input in O(1), which matters given the large amount of data, as any other method would be slow because of the growth factor.
class StudentData
{
int RollNo;
string Name;
string Address;
}
class StudentSearcher
{
Dictionary<int, List<StudentData>> byRollNo =
    new Dictionary<int, List<StudentData>>();
Dictionary<string, List<StudentData>> byName =
    new Dictionary<string, List<StudentData>>();
Dictionary<string, List<StudentData>> byAddress =
    new Dictionary<string, List<StudentData>>();
public StudentSearcher(IEnumerable<StudentData> data)
{
foreach(StudentData d in data)
{
if(!byRollNo.ContainsKey(d.RollNo))
{
byRollNo.Add(d.RollNo, new List<StudentData>(){d});
}
else
{
byRollNo[d.RollNo].Add(d);
}
if(!byName.ContainsKey(d.Name))
{
byName.Add(d.Name, new List<StudentData>(){d});
}
else
{
byName[d.Name].Add(d);
}
if(!byAddress.ContainsKey(d.Address))
{
byAddress.Add(d.Address, new List<StudentData>(){d});
}
else
{
byAddress[d.Address].Add(d);
}
}
}
public List<StudentData> getStudentByRollNo(int rollno)
{
if(this.byRollNo.ContainsKey(rollno))
{
return this.byRollNo[rollno];
}
return null;
}
public List<StudentData> getStudentsByName(String name)
{
if(this.byName.ContainsKey(name))
{
return this.byName[name];
}
return null;
}
public List<StudentData> getStudentsByAddress(String address)
{
if(this.byAddress.ContainsKey(address))
{
return this.byAddress[address];
}
return null;
}
}
This seems simple.
SimpleDate GetNextQuarter(SimpleDate date)
{
    // Months are 1-12; the quarter of month m is (m-1)/3 + 1,
    // and the next quarter starts at month quarter*3 + 1.
    int quarter = (date.getMonth - 1) / 3 + 1;
    int month = quarter * 3 + 1;
    int year = date.getYear;
    if(month > 12)
    {
        year++;
        month = 1;
    }
    return new SimpleDate(year, month, 1);
}
Doing a solution using familiarity between friends seems interesting.
When looking at the problem I thought about how I could expand my way through the graph to find the closest friends without weighting existing combination relationships.
Maybe finding an average of closeness and dividing by the number of friends already found would give a good approximation, since to be exact this familiarity needs to be recalculated when finding a new friend.
Hmm... thinking about it, this becomes circular: in order to calculate familiarity it needs to recalculate everything from the beginning based on the newly calculated familiarity, which would change the familiarity again, and the process would repeat over and over so the algorithm would never end.
So instead: do a Breadth-First Search of a directed weighted graph where the weight is the closenessWeight, meaning that the higher it is, the less close the friend relationship is.
This algorithm will get the closest of the closest friends only.
Now I'm thinking about this algorithm when doing a party at my place. ;-)
// This represents a friend
class FriendVertex
{
    // Describes all the edges to other friends
    List<FriendEdge> relationships;
    object Data; // Whatever other data there is to keep.
}
// This represents a friend relationship
class FriendEdge
{
    FriendVertex friend;
    int closenessWeight;
}
public IEnumerable<FriendVertex> GetNthClosestFriends(
    IEnumerable<FriendVertex> mFriends,
    int nthFriends)
{
    Queue<FriendEdge> q = new Queue<FriendEdge>();
    HashSet<FriendVertex> foundFriends = new HashSet<FriendVertex>();
    // Populate known friends first
    foreach(FriendVertex fv in mFriends)
    {
        // Adding these friends as you don't want them, only their friends
        foundFriends.Add(fv);
        q.Enqueue(new FriendEdge() {
            friend = fv,
            closenessWeight = 0});
    }
    // Stop when there are no friends left or we filled the nthFriends count
    while(q.Count != 0 && nthFriends > 0)
    {
        FriendEdge edge = q.Dequeue();
        if(edge.closenessWeight != 0)
        {
            edge.closenessWeight--;
            q.Enqueue(edge);
        }
        else
        {
            // There will definitely be repeated friends, so only care
            // about the first (closest) time each friend is reached.
            if(!foundFriends.Contains(edge.friend))
            {
                // One less friend needed
                nthFriends--;
                foundFriends.Add(edge.friend);
                yield return edge.friend;
            }
            // Get the friends of this friend
            foreach(FriendEdge fe in edge.friend.relationships)
            {
                // Recreating the object so we don't modify the original graph
                q.Enqueue(new FriendEdge() {
                    friend = fe.friend,
                    closenessWeight = fe.closenessWeight});
            }
        }
    }
}
Weird game; seems like Bingo on steroids with a 100x100 board.
I've been explicit with the booleans and combined multiple loops into one, but I don't know if the performance would be better than keeping them separate, as there are additional comparisons and jumping all around memory.
In reality, in an interview, probably the best way is to create a generalized method which you tell how to advance to the next cell, but because I wanted the best performance I avoided it.
public static int MinMingo(int[][] board, int[] called)
{
    if(called.Length < 100)
    {
        return -1;
    }
    // Maps each called number to the turn at which it was called.
    Dictionary<int, int> set = new Dictionary<int, int>();
    for(int i = 0; i < called.Length; i++)
    {
        set.Add(called[i], i);
    }
    int minMingo = int.MaxValue;
    int maxIndexDiagonal1 = 0;
    int maxIndexDiagonal2 = 0;
    // Using a single loop for all of the lookups.
    // A value of -1 marks a line that can never complete.
    for(int i = 0; i < 100; i++)
    {
        int maxIndexRow = 0;
        int maxIndexColumn = 0;
        for(int j = 0; j < 100; j++)
        {
            // board[i][j] walks row i; board[j][i] walks column i.
            if(!set.ContainsKey(board[i][j]))
            {
                maxIndexRow = -1;
                if(maxIndexColumn == -1)
                    break;
            }
            else if(maxIndexRow != -1 && maxIndexRow < set[board[i][j]])
            {
                maxIndexRow = set[board[i][j]];
            }
            if(!set.ContainsKey(board[j][i]))
            {
                maxIndexColumn = -1;
                if(maxIndexRow == -1)
                    break;
            }
            else if(maxIndexColumn != -1 && maxIndexColumn < set[board[j][i]])
            {
                maxIndexColumn = set[board[j][i]];
            }
        }
        // Keep the smallest completion index found so far
        if(maxIndexRow > 0 && minMingo > maxIndexRow)
            minMingo = maxIndexRow;
        if(maxIndexColumn > 0 && minMingo > maxIndexColumn)
            minMingo = maxIndexColumn;
        // Track the maximum index of the diagonal starting at (0,0)
        if(!set.ContainsKey(board[i][i]))
        {
            maxIndexDiagonal1 = -1;
        }
        else if(maxIndexDiagonal1 != -1 && maxIndexDiagonal1 < set[board[i][i]])
        {
            maxIndexDiagonal1 = set[board[i][i]];
        }
        // Track the maximum index of the diagonal starting at (99,0)
        if(!set.ContainsKey(board[100-i-1][i]))
        {
            maxIndexDiagonal2 = -1;
        }
        else if(maxIndexDiagonal2 != -1 && maxIndexDiagonal2 < set[board[100-i-1][i]])
        {
            maxIndexDiagonal2 = set[board[100-i-1][i]];
        }
    }
    if(maxIndexDiagonal1 > 0 && minMingo > maxIndexDiagonal1)
        minMingo = maxIndexDiagonal1;
    if(maxIndexDiagonal2 > 0 && minMingo > maxIndexDiagonal2)
        minMingo = maxIndexDiagonal2;
    return (minMingo != int.MaxValue) ? minMingo : -1;
}
Use a variable that is written by the first thread with its name, in a thread-safe way using a lock object (double-checked locking).
string firstThread = null;
object lockObj = new object();
void threadWorkerFunction(string threadName)
{
if(firstThread == null)
{
lock(lockObj)
{
if(firstThread == null)
{
firstThread = threadName;
}
}
}
}
First search with a rolling-hash algorithm whose name I don't remember and which I sort of just reinvented (it is similar to Rabin-Karp), then add the replacement once the target is found.
public static string ReplaceString(
    string source,
    string target,
    string replacement)
{
    if(source.Length < target.Length)
        return source;
    long targetHash = 0;
    foreach(char c in target)
        targetHash += c.GetHashCode();
    StringBuilder sb = new StringBuilder();
    long sourceHash = 0;
    int tail = 0;
    int head = 0;
    for(; tail < target.Length; tail++)
        sourceHash += source[tail].GetHashCode();
    while(tail <= source.Length)
    {
        bool found = false;
        // Slide the window [head, tail) until the hashes match
        while(tail <= source.Length)
        {
            if(sourceHash == targetHash)
            {
                // Verify in case of hash collisions
                bool equal = true;
                for(int i = head; i < tail; i++)
                {
                    if(source[i] != target[i - head])
                    {
                        equal = false;
                        break;
                    }
                }
                if(equal)
                {
                    found = true;
                    break;
                }
            }
            // No match: keep the head character and roll the hash forward
            sb.Append(source[head]);
            if(tail < source.Length)
                sourceHash += source[tail].GetHashCode() - source[head].GetHashCode();
            tail++;
            head++;
        }
        if(!found)
            break;
        // Add the replacement and skip past the match
        sb.Append(replacement);
        head = tail;
        tail = head + target.Length;
        sourceHash = 0;
        for(int i = head; i < tail && i < source.Length; i++)
            sourceHash += source[i].GetHashCode();
    }
    // Copy the rest of the string
    for(int i = head; i < source.Length; i++)
        sb.Append(source[i]);
    return sb.ToString();
}
I don't remember if this is the KMP algorithm (it is actually closer to Rabin-Karp), but I think it is similarly good, as it is an O(n) solution on average; the worst case would be O(n^2) if there are always hash-code collisions.
public static bool SubStringExist(string S, string subStr)
{
    if(S.Length < subStr.Length)
        return false;
    long hash = 0;
    foreach(char c in subStr)
        hash += c.GetHashCode();
    // head is not strictly needed but used for ease of understanding
    int head = 0;
    int tail = 0;
    long sHash = 0;
    for(; tail < subStr.Length; tail++)
        sHash += S[tail].GetHashCode();
    for(; tail <= S.Length; tail++)
    {
        if(hash == sHash)
        {
            // Verify in case of hash collisions
            bool equal = true;
            for(int j = head; j < tail; j++)
            {
                if(S[j] != subStr[j - head])
                {
                    equal = false;
                    break;
                }
            }
            if(equal)
            {
                return true;
            }
        }
        if(tail < S.Length)
            sHash += S[tail].GetHashCode() - S[head].GetHashCode();
        head++;
    }
    return false;
}
Very nice. It took me a while to do a similar implementation, but I used a queue to keep valid chars instead of keeping the tail position.
I was going to suggest that once you find that the tail's count in the map is 1, that is when you should compare against minLen, as that is when you actually find the minimum length of the current subset.
This problem is my kryptonite; this is my cleanest iteration so far in linear time.
public static string GetSmallestString(string S, HashSet<char> set)
{
    var charCount = new Dictionary<char, int>();
    var q = new Queue<Tuple<char, int>>();
    var hSet = new HashSet<char>(set); // chars still missing from the window
    foreach(char c in set)
        charCount.Add(c, 0);
    int start = -1;
    int offset = int.MaxValue;
    for(int i = 0; i < S.Length; i++)
    {
        char s = S[i];
        if(!charCount.ContainsKey(s))
            continue; // ignore chars outside the set
        charCount[s]++;
        hSet.Remove(s);
        q.Enqueue(new Tuple<char, int>(s, i));
        if(hSet.Count == 0)
        {
            // Shrink from the head while the head char is duplicated
            while(charCount[q.Peek().Item1] > 1)
            {
                charCount[q.Peek().Item1]--;
                q.Dequeue();
            }
            // The head char now appears exactly once: this window is minimal
            var head = q.Peek();
            int tOffset = i - head.Item2 + 1;
            if(tOffset < offset)
            {
                offset = tOffset;
                start = head.Item2;
            }
        }
    }
    if(offset == int.MaxValue)
        return null;
    return S.Substring(start, offset);
}
This is a multi-array merge, similar to merge sort but with multiple lists or arrays, so a min-heap is the solution for this. In .NET it can be done using a SortedDictionary, which behaves similarly to a min-heap.
It seems very complex, but that is necessary to handle repeated numbers when using a SortedDictionary.
This algorithm is O(n*m*log n), since the heap holds at most one entry per row:
n -> total rows
m -> total columns
public static List<int> GetSortedList(int[][] matrix)
{
    List<int> result = new List<int>();
    SortedDictionary<int, List<Tuple<int, int[]>>> minHeap =
        new SortedDictionary<int, List<Tuple<int, int[]>>>();
    // Initialize the minHeap with the numbers as the sorted key
    // and a list of tuples with the index and the row array.
    // I use a list as multiple rows could have the same value.
    for(int i = 0; i < matrix.Length; i++)
    {
        int[] array = matrix[i];
        if(!minHeap.ContainsKey(array[0]))
        {
            minHeap.Add(
                array[0],
                new List<Tuple<int, int[]>> {
                    new Tuple<int, int[]>(0, array)
                });
        }
        else
        {
            minHeap[array[0]].Add(new Tuple<int, int[]>(0, array));
        }
    }
    while(minHeap.Count > 0)
    {
        var kvp = minHeap.First(); // O(1), requires System.Linq
        minHeap.Remove(kvp.Key);
        // Going through the list of arrays sharing this number
        for(int i = 0; i < kvp.Value.Count; i++)
        {
            int number = kvp.Key;
            // Adding to the result
            result.Add(number);
            int nextIndex = kvp.Value[i].Item1 + 1;
            int[] array = kvp.Value[i].Item2;
            // It just picked the last number, so not adding back
            if(nextIndex == array.Length)
            {
                continue;
            }
            int next = array[nextIndex];
            if(!minHeap.ContainsKey(next))
            {
                // O(log n)
                minHeap.Add(
                    next,
                    new List<Tuple<int, int[]>> {
                        new Tuple<int, int[]>(nextIndex, array)
                    });
            }
            else
            {
                // O(log n)
                minHeap[next].Add(
                    new Tuple<int, int[]>(nextIndex, array));
            }
        }
    }
    return result;
}
I do not think this is a perfect algorithm, but this is what I could do on the whiteboard.
public static bool IsMatch(string S, string P)
{
int ppos = 0;
int spos = 0;
while(ppos < P.Length && spos < S.Length)
{
char p = P[ppos];
char s = S[spos];
if(p == s || p == '.')
{
ppos++;
spos++;
}
else if(p == '*')
{
if(ppos < P.Length - 1 && s == P[ppos + 1])
{
spos++;
ppos += 2;
}
else if(ppos > 0 && (s == P[ppos - 1] || P[ppos - 1] == '.'))
{
spos++;
}
else
return false;
}
else
return false;
}
return (ppos == P.Length && spos == S.Length);
}
This seems good for a SUB-SEQUENCE, but in this case it is for a SUBSET which, as mentioned above, will only work for the original problem when the values are sorted.
- Nelson Perez March 05, 2015
So where is the subset being returned?
- Nelson Perez March 05, 2015
This algorithm's output is either the number with the max count, or the pair of numbers whose difference is one with the max combined count.
This algorithm has an order of growth of O(n), using a single pass through the array.
public static List<int> FindMaxSubset(int[] set)
{
    List<int> result = new List<int>();
    if(set == null || set.Length == 0)
    {
        return result;
    }
    Dictionary<int, int> numberCount = new Dictionary<int, int>();
    // This is to track the max count of a single value (difference of 0)
    int maxItem = set[0];
    int maxCount = 1;
    // Finds the best combination of two numbers with a difference of 1
    int twoItem1 = set[0];
    int twoItem2 = set[0];
    int twoItemMaxCount = 0;
    foreach(int s in set)
    {
        if(!numberCount.ContainsKey(s))
        {
            numberCount.Add(s, 1);
        }
        else
        {
            numberCount[s]++;
        }
        // Track the max count of an item
        if(maxCount < numberCount[s])
        {
            maxItem = s;
            maxCount = numberCount[s];
        }
        // Check both adjacent values
        if(numberCount.ContainsKey(s - 1))
        {
            int count = numberCount[s] + numberCount[s - 1];
            if(twoItemMaxCount < count)
            {
                twoItem1 = s;
                twoItem2 = s - 1;
                twoItemMaxCount = count;
            }
        }
        if(numberCount.ContainsKey(s + 1))
        {
            int count = numberCount[s] + numberCount[s + 1];
            if(twoItemMaxCount < count)
            {
                twoItem1 = s;
                twoItem2 = s + 1;
                twoItemMaxCount = count;
            }
        }
    }
    // Determine which one is the maximum out of:
    // A single element max count or
    // The two item sum count
    if(maxCount > twoItemMaxCount)
    {
        for(int i = 0; i < maxCount; i++)
        {
            result.Add(maxItem);
        }
    }
    else
    {
        int count = numberCount[twoItem1];
        for(int i = 0; i < count; i++)
        {
            result.Add(twoItem1);
        }
        count = numberCount[twoItem2];
        for(int i = 0; i < count; i++)
        {
            result.Add(twoItem2);
        }
    }
    return result;
}
Simple clean solution
public static void MoveZeroToRight(int[] A)
{
if(A == null)
return;
int head = 0;
int tail = A.Length - 1;
while(head < tail)
{
bool head0 = A[head] == 0;
bool tail0 = A[tail] == 0;
if(head0 && !tail0)
{
A[head] = A[tail];
A[tail] = 0;
head++;
tail--;
}
else if(!head0)
{
head++;
}
else if(tail0)
{
tail--;
}
}
}
Here is the best solution using memoization, which makes this problem way faster.
public static bool IsHoppable(int[] A)
{
    return coreIsHoppable(A, 0, new Dictionary<int, bool>());
}
private static bool coreIsHoppable(int[] A, int start, Dictionary<int, bool> memo)
{
    if(memo.ContainsKey(start))
    {
        return memo[start];
    }
    bool result = false;
    // Assuming that if you hop over the end it still counts as reaching it
    if(start >= A.Length - 1)
    {
        result = true;
    }
    else if(A[start] != 0)
    {
        for(int i = 1; i <= A[start]; i++)
        {
            if(coreIsHoppable(A, start + i, memo))
            {
                result = true;
                break;
            }
        }
    }
    memo.Add(start, result);
    return result;
}
It seems somewhat complicated because it uses a bottom-up dynamic programming algorithm.
public static List<string> GetTopWords(int[] keys, Dictionary<string, int> words, int top)
{
    if(keys == null || keys.Length == 0)
        return null;
    List<string> possibleStrings = GetAllPossibleStrings(keys);
    SortedDictionary<int, string> sorter = new SortedDictionary<int, string>();
    foreach(string ps in possibleStrings)
    {
        if(words.ContainsKey(ps))
        {
            sorter.Add(words[ps], ps);
        }
    }
    List<string> result = new List<string>();
    // SortedDictionary iterates in ascending order, so walk it in
    // reverse to get the highest counts first (requires System.Linq).
    foreach(var kvp in sorter.Reverse())
    {
        if(top == 0)
            break;
        result.Add(kvp.Value);
        top--;
    }
    return result;
}
List<string> GetAllPossibleStrings(int[] keys)
{
    // Bottom-up algorithm using dynamic programming
    List<string> result = new List<string>();
    int start = keys.Length - 1;
    // Populate the last letter
    foreach(char c in getChars(keys[start]))
    {
        result.Add(c.ToString());
    }
    start--;
    while(start >= 0)
    {
        List<string> temp = new List<string>();
        foreach(char c in getChars(keys[start]))
        {
            foreach(string r in result)
            {
                temp.Add(c + r);
            }
        }
        result = temp;
        start--;
    }
    return result;
}
public static Person FindCelebrity(IEnumerable<Person> P)
{
    if(P == null) return null;
    // Creating a working list
    List<Person> people = new List<Person>(P);
    while(people.Count > 1)
    {
        bool p1kp2 = Knows(people[0], people[1]);
        bool p2kp1 = Knows(people[1], people[0]);
        if(p1kp2 && !p2kp1)
        {
            // p1 knows someone, so p1 cannot be the celebrity
            people.RemoveAt(0);
        }
        else if(!p1kp2 && p2kp1)
        {
            // p2 knows someone, so p2 cannot be the celebrity
            people.RemoveAt(1);
        }
        else
        {
            // Either they know each other or neither knows the other;
            // in both cases neither can be the celebrity.
            people.RemoveAt(1);
            people.RemoveAt(0);
        }
    }
    // There is no celebrity
    if(people.Count == 0)
        return null;
    return people[0];
}
Simple algorithmic trick for this problem.
public static void ReverseWords(char[] A)
{
Reverse(A, 0, A.Length);
int start = 0;
for(int i = 0; i < A.Length; i++)
{
if(A[i] == ' ')
{
Reverse(A, start, i);
start = i+1;
}
}
Reverse(A, start, A.Length);
}
private static void Reverse(char[] A, int start, int end)
{
    // Reverses A in the range [start, end)
    for(int i = start, j = end - 1; i < j; i++, j--)
    {
        char swap = A[i];
        A[i] = A[j];
        A[j] = swap;
    }
}
public static void PrintSeries(int n)
{
    string N = n.ToString();
    for(int i = 0; i < 10; i++)
    {
        char prev = N[0];
        int count = 1;
        StringBuilder sb = new StringBuilder();
        for(int j = 1; j < N.Length; j++)
        {
            char c = N[j];
            if(c != prev)
            {
                sb.Append(count.ToString() + prev);
                prev = c;
                count = 1;
            }
            else
            {
                count++;
            }
        }
        // Doing the final append of the last run
        sb.Append(count.ToString() + prev);
        Console.WriteLine(sb.ToString());
        // The next term is read from the one just produced
        N = sb.ToString();
    }
}
Here is a solution with O(n) growth using a hash table that maps each run length to the list of letters repeating that many times, while keeping the maximum run length to look up at the end.
public static List<char> getLongestConsecutiveChars(string S)
{
    if(string.IsNullOrEmpty(S)) return null;
    int count = 1;
    int max = -1;
    char prev = S[0];
    var ht = new Dictionary<int, List<char>>();
    for(int i = 1; i < S.Length; i++)
    {
        char s = S[i];
        if(s != prev)
        {
            if(!ht.ContainsKey(count))
            {
                ht.Add(count, new List<char>());
            }
            ht[count].Add(prev);
            if(count > max)
            {
                max = count;
            }
            count = 1;
            prev = s;
        }
        else
        {
            count++;
        }
    }
    // Adding the last run
    if(!ht.ContainsKey(count))
    {
        ht.Add(count, new List<char>());
    }
    ht[count].Add(prev);
    if(count > max)
    {
        max = count;
    }
    return ht[max];
}
I like this implementation for its simplicity.
- Nelson Perez March 03, 2015
public static int Evaluate(string S)
{
int number = 0;
var numbers = new Stack<int>();
var operations = new Stack<char>();
for(int i = 0; i < S.Length; i++)
{
char s = S[i];
if(char.IsDigit(s))
{
number *= 10;
number += s - '0';
}
else
{
numbers.Push(number);
number = 0;
if(s == '*')
{
operations.Push('*');
}
else if(s == '+')
{
operations.Push('+');
}
}
}
numbers.Push(number);
int temp = 0;
while(operations.Count > 0)
{
if(operations.Peek() == '*')
{
int p1 = numbers.Pop();
int p2 = numbers.Pop();
numbers.Push(p1*p2);
operations.Pop();
}
else if(operations.Peek() == '+')
{
operations.Pop();
if(operations.Count > 0 && operations.Peek() == '*')
{
temp += numbers.Pop();
}
else
{
int p1 = numbers.Pop() + temp;
int p2 = numbers.Pop();
numbers.Push(p1 + p2);
temp = 0;
}
}
}
return numbers.Pop() + temp;
}
I thought of this solution, and at some point I felt it was too much memory.
But that is the trade-off.
I'm assuming an empty first head node.
public static void ReverseList(Node head)
{
    if(head == null)
    {
        return;
    }
    // head is the empty first node, so relink it to the reversed list
    head.Next = CoreReverseList(head.Next, null);
}
private static Node CoreReverseList(Node cur, Node prev)
{
if(cur == null)
return prev;
Node n = CoreReverseList(cur.Next, cur);
cur.Next = prev;
return n;
}
Actually, I saw that you use a HashSet and I do not quite understand how it works.
I tried it with a couple of samples and indeed it does work, but I'm still sort of unsure how.
You got the right idea, but for the "if (sum > i) sum = arr[i];" section I don't think that is the right approach; it is better to subtract the first added value and so on.
- Nelson Perez March 02, 2015
Linear time, taking care of negatives as well. I created a function that might as well return the subset, as it knows the range, but it uses "yield return" so it returns on the first part of the loop.
I used yield return so that if there is no need for the actual values it does not iterate through the results; it just tells whether a result exists on the first yield return.
public static bool HasSubsetThatSums(int[] A, int T)
{
    // An iterator never returns null, so check for any element instead
    // (requires System.Linq)
    return FindSubsetThatSums(A, T).Any();
}
public static IEnumerable<int> FindSubsetThatSums(int[] A, int T)
{
    int head = 0;
    int tail = 0;
    int sum = 0;
    while(tail <= A.Length)
    {
        if (sum > T)
        {
            sum -= A[head++];
        }
        else if (sum < T)
        {
            if(tail == A.Length)
                yield break;
            sum += A[tail++];
        }
        else // if(sum == T)
        {
            for(int i = head; i < tail; i++)
            {
                yield return A[i];
            }
            yield break;
        }
    }
}
Thanks for the input.
Yep, I just realized that when I did it on the whiteboard, but I'll post it as a new entry.
public static Node ReverseList(Node head)
{
    if(head == null)
        return null;
    // head is an empty first node; after the helper runs, the node that
    // used to be first becomes the new last, so terminate the list there.
    Node last = ReverseListHelper(head.Next, head);
    if(last != null)
        last.Next = null;
    return head;
}
private static Node ReverseListHelper(Node current, Node head)
{
    if(current == null)
        return null;
    if(current.Next == null)
    {
        head.Next = current;
    }
    else
    {
        Node n = ReverseListHelper(current.Next, head);
        n.Next = current;
    }
    return current;
}
class ListNode
{
public int Data { get; set; }
public ListNode Next { get; set; }
}
public class TreeNode
{
public int Data { get; set; }
public TreeNode Left { get; set; }
public TreeNode Right { get; set; }
}
public static List<ListNode> CreateLevelList(TreeNode root)
{
    List<ListNode> result = new List<ListNode>();
    if(root == null)
    {
        return result;
    }
    Queue<TreeNode> queue = new Queue<TreeNode>();
    // Using null as the marker of a new level
    queue.Enqueue(root);
    queue.Enqueue(null);
    ListNode head = new ListNode(); // dummy head for the current level
    ListNode tail = head;
    while(queue.Count > 0)
    {
        TreeNode node = queue.Dequeue();
        if(node == null)
        {
            result.Add(head.Next);
            head = new ListNode();
            tail = head;
            // Only re-enqueue the marker if there are nodes left,
            // otherwise the marker would loop forever.
            if(queue.Count > 0)
                queue.Enqueue(null);
        }
        else
        {
            tail.Next = new ListNode() { Data = node.Data };
            tail = tail.Next;
            if(node.Left != null)
                queue.Enqueue(node.Left);
            if(node.Right != null)
                queue.Enqueue(node.Right);
        }
    }
    return result;
}
This takes care of all required CSV scenarios, doing a "yield return" which hands each line to the caller as needed, taking only the space needed.
The input is an IEnumerable<char> so that the function can parse the CSV data as it is being loaded; there is no need to load the entire file into memory as a string before parsing. The same goes for the caller: we only load the required data from the file, no matter the size.
This provides a minimal memory footprint, which will be at most a single line of the CSV file.
The memory footprint could be optimized even further by parsing a token at a time, but because lines don't tend to be big I did not bother.
public static IEnumerable<List<string>> ParseCSV(IEnumerable<char> S)
{
    if(S == null)
        yield break;
    List<string> line = new List<string>();
    StringBuilder token = new StringBuilder();
    char prev = '\0';
    bool quotes = false;
    foreach(char s in S)
    {
        if(s == '"')
        {
            // A quote right after what looked like a closing quote is
            // an escaped double quote inside a quoted token
            if(quotes == false && prev == '"')
            {
                token.Append('"');
            }
            quotes = !quotes;
        }
        else if(quotes == true)
        {
            token.Append(s);
        }
        else if(s == ',')
        {
            line.Add(token.ToString());
            token.Clear();
        }
        else if(s == '\n')
        {
            line.Add(token.ToString());
            token.Clear();
            yield return line;
            line = new List<string>();
        }
        else
        {
            token.Append(s);
        }
        prev = s;
    }
    // Adding the last line
    line.Add(token.ToString());
    yield return line;
}
Why do you want to complicate this problem by interpreting a Binary Tree using a graph algorithm?
Do we gain anything by doing so?
This is always O(n), as you need to visit all the nodes.
- Nelson Perez February 27, 2015
public static int GetMaxDepth(Node root)
{
if(root == null)
return 0;
int left = GetMaxDepth(root.Left);
int right = GetMaxDepth(root.Left);
return 1 + ((left > right)?left:rigth);
}
Oh, finally saw it. Darn "only positive numbers".
- Nelson Perez February 27, 2015
I don't understand why everybody else assumes that the numbers are only positive.
You need to look at the entire array in order to determine if the SUM exist.
public static bool IsSubsetSum(int T, int[] S)
{
    for(int i = 0; i < S.Length; i++)
    {
        int total = 0;
        for(int j = i; j < S.Length; j++)
        {
            total += S[j];
            if(total == T)
            {
                return true;
            }
        }
    }
    return false;
}
int Rand7()
{
    // rand5()/5*7 is not uniform; use rejection sampling instead.
    // 5*(rand5()-1) + rand5() is uniform over 1..25; reject values above 21.
    int r;
    do { r = 5 * (rand5() - 1) + rand5(); } while (r > 21);
    return 1 + (r - 1) % 7;
}
public static string SplitWords(string S, HashSet<string> words)
{
    if(string.IsNullOrEmpty(S))
        throw new ArgumentNullException("S can't be null");
    List<string> result = new List<string>();
    if(!CoreSplitWords(S, words, 0, result))
    {
        throw new ArgumentException("S cannot be split into the given set of words");
    }
    return string.Join(" ", result);
}
private static bool CoreSplitWords(
    string S,
    HashSet<string> words,
    int start,
    List<string> result)
{
    if(start == S.Length)
        return true;
    for(int i = start; i < S.Length; i++)
    {
        string value = S.Substring(start, i - start + 1);
        if(words.Contains(value))
        {
            result.Add(value);
            if(CoreSplitWords(S, words, i + 1, result))
            {
                return true;
            }
            // Backtrack so a failed prefix does not pollute the result
            result.RemoveAt(result.Count - 1);
        }
    }
    return false;
}
Oops missed the IsValid() function implementation.
bool IsValid(char c)
{
if(char.IsDigit(c) || (c >= 'A' && c <= 'Z') || (c >= 'a' && c <= 'z'))
{
return true;
}
return false;
}
This can be done with a single loop; no need to do multiple ones.
bool IsPalindrome(string S)
{
    int start = 0;
    int end = S.Length - 1;
    while(start <= end)
    {
        if(!IsValid(S[start]))
        {
            start++;
        }
        else if(!IsValid(S[end]))
        {
            end--;
        }
        else if(S[start] == S[end])
        {
            start++;
            end--;
        }
        else
        {
            return false;
        }
    }
    return true;
}
Mmm... I think the answer is: it depends!
- Nelson Perez March 24, 2015
It depends on the type of query and the type of data.
Although I do agree that one of the solutions could be using a map-reduce approach.
But there could be different approaches:
#1 Doing Map Reduce with multiple servers, with clustering to optimize the query.
#2 An in-memory read-write-through cache.
#3 Common data duplicated across many records, optimizing multiple sub-queries.
#1 Map Reduce:
A query system could brute-force query every server, have all of them return whatever data was found, and reduce this through an aggregation function. This can be improved with a clustering scheme that maps the query to only certain servers.
#2 In-memory read-write-through cache:
This approach is crucial for responsiveness when querying a big system. You always query the cache first, and the cache queries the underlying storage.
#3 Common Data:
Also, let's say the query has a very limited scope but data needs to be retrieved from multiple places.
We could sacrifice consistency, as social media is not critical like banking, where all operations dealing with money need to be atomic.
For example: a User has Friends, so when the user logs in they need to see their friends' Names, DOB or any other normally consistent data.
So the data could be set up so that a User record contains a cached version of the Names and DOB of all their friends. This means data is heavily replicated, and it is fine if it is not up to date. So queries become sort of a hashtable lookup, where you use the hashcode to map to the machine and get the record on that machine.
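To illustrate #3, here is a sketch of what such a denormalized User record could look like; all type and field names are invented for illustration, and the "machine-local" store is just a dictionary standing in for whatever storage the shard uses:

```csharp
using System;
using System.Collections.Generic;

// Denormalized friend data cached inside the user record;
// it may be stale, which is acceptable for this workload.
public class FriendSummary
{
    public int FriendId;
    public string Name;
    public DateTime DOB;
}

public class UserRecord
{
    public int UserId;
    public string Name;
    public List<FriendSummary> Friends = new List<FriendSummary>();
}

public static class UserStore
{
    // In-memory "machine-local" store keyed by user id.
    static Dictionary<int, UserRecord> records = new Dictionary<int, UserRecord>();

    public static void Put(UserRecord r) { records[r.UserId] = r; }

    // A login page can render the friend list from one lookup,
    // with no joins against the friends' own records.
    public static List<FriendSummary> GetFriendSummaries(int userId)
    {
        UserRecord r;
        return records.TryGetValue(userId, out r) ? r.Friends : null;
    }
}
```

The trade-off is exactly the one described above: writes fan out to every record that caches the data, in exchange for single-lookup reads.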