Nelson Perez
ABOUT
•Nelson has 10+ years of software development and leadership experience.
•Passionate about Big Data, Business Intelligence (BI), Web Development & highly scalable, highly available Cloud services.
•Always excited to rapidly ramp up & learn new skills and technologies.
•Definitely looking for interesting opportunities to apply skills, passion and agility to deliver direct innovation and value to customers.
•Startup/Hackathon cultures are always welcome!
HIGHLIGHTS
•Architected & led a companywide automation platform, from builds to feature-progress & burn-down reports, for Action Engine.
•Developed service health optics for Windows Azure Active Directory.
•Developed Exchange Online Storage service a big data pipeline.
•Delivered new big data behavioral targeting segments for display and email ads for the Bing, Hotmail and Live Messenger marketing teams.
•Led the solution that saved $3.7 Million/year by optimizing the Search Engine Marketing budget for the Bing Search Cashback online campaign.
EDUCATION
•B.S. in Computer Engineering [CS + EE] + Project Manager Certificate @ University of Puerto Rico – Mayaguez, PR, 1999-2005
•Certifications: Project+, Technical Trainer, MCP
Answers
Convert a binary tree into an in-order-traversal circular list, repurposing the node's pointers Left & Right as Previous and Next respectively.
 Nelson Perez in United States
Hint: A single node's Left & Right point to itself.
Note: This is not a binary search tree.
Facebook Software Engineer Coding
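No solution was posted with this question, so here is a minimal sketch of the conversion (my own, assuming a `Node` class with public `Left`/`Right` pointers): convert each subtree into a circular list, make the root a one-node circular list per the hint, then splice the three lists together.

```csharp
using System;

public class Node
{
    public int Data;
    public Node Left;   // becomes Previous in the resulting list
    public Node Right;  // becomes Next in the resulting list
}

public static class TreeToCircularList
{
    // Joins two circular doubly-linked lists and returns the head of the result.
    private static Node Join(Node a, Node b)
    {
        if (a == null) return b;
        if (b == null) return a;
        Node aTail = a.Left;   // last node of list a
        Node bTail = b.Left;   // last node of list b
        aTail.Right = b;
        b.Left = aTail;
        bTail.Right = a;
        a.Left = bTail;
        return a;
    }

    // Converts the tree rooted at 'root' into a circular list in in-order order.
    public static Node Convert(Node root)
    {
        if (root == null) return null;
        Node left = Convert(root.Left);
        Node right = Convert(root.Right);
        // A single node's Left & Right point to itself, per the hint.
        root.Left = root;
        root.Right = root;
        return Join(Join(left, root), right);
    }
}
```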
This is a very hard problem to solve.
We could go greedy on this one or use dynamic programming.
I chose dynamic programming at the board, but the more I think about it, it might be easier and cheaper to do it with a greedy algorithm.
In a greedy approach we can sort the task costs and process the tasks linearly, comparing which is cheaper: dividing or performing them linearly.
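To make the greedy idea concrete, here is a rough, unverified sketch (my own, not part of the original answer): repeatedly take the two most expensive tasks and merge them at the cheaper of running them serially (`a + b`) or dividing (`divideCost + Math.Max(a, b)`), mirroring the recurrence in the DP solution. I have not proven this greedy optimal.

```csharp
using System;
using System.Linq;

public static class TaskScheduler
{
    // Rough greedy sketch: repeatedly merge the two most expensive tasks,
    // choosing the cheaper of running them serially or dividing them.
    public static int MinTaskDurationGreedy(int[] tasks, int divideCost)
    {
        if (tasks.Length == 0) return 0;
        var sorted = tasks.OrderByDescending(t => t).ToList();
        while (sorted.Count > 1)
        {
            int a = sorted[0];
            int b = sorted[1];
            sorted.RemoveRange(0, 2);
            int merged = Math.Min(a + b, divideCost + Math.Max(a, b));
            // Re-insert the merged cost, keeping the list sorted descending.
            int idx = sorted.FindIndex(t => t <= merged);
            if (idx < 0) idx = sorted.Count;
            sorted.Insert(idx, merged);
        }
        return sorted[0];
    }
}
```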
Anyway, here is my dynamic programming solution:
public int MinTaskDuration(int[] tasks, int divideCost)
{
    var taskDict = new Dictionary<int, int>();
    foreach(var t in tasks)
    {
        if(!taskDict.ContainsKey(t))
        {
            taskDict.Add(t, 1);
        }
        else
        {
            taskDict[t]++;
        }
    }
    return MinTaskDurationCore(taskDict, divideCost, new Dictionary<string, int>());
}
private int MinTaskDurationCore(
    Dictionary<int, int> tasks,
    int divideCost,
    Dictionary<string, int> memo)
{
    var hash = CreateHash(tasks);
    if(memo.ContainsKey(hash))
        return memo[hash];
    if(tasks.Count == 0) return 0;
    if(tasks.Count == 1 && tasks.First().Value == 1) return tasks.First().Key;
    var tA = tasks.ToArray();
    int min = int.MaxValue;
    for(int i = 0; i < tA.Length; i++)
    {
        int a = tA[i].Key;
        if(tasks[a] == 1)
        {
            tasks.Remove(a);
        }
        else
        {
            tasks[a]--;
        }
        for(int j = i; j < tA.Length; j++)
        {
            if(!tasks.ContainsKey(tA[j].Key)) continue;
            int b = tA[j].Key;
            if(tasks[b] == 1)
            {
                tasks.Remove(b);
            }
            else
            {
                tasks[b]--;
            }
            var minDuration = MinTaskDurationCore(tasks, divideCost, memo);
            var notDividing = a + b + minDuration;
            var dividing = divideCost + Math.Max(a, b) + minDuration;
            var temp = Math.Min(notDividing, dividing);
            if(temp < min)
                min = temp;
            // Restore b before trying the next pair
            if(!tasks.ContainsKey(b))
            {
                tasks.Add(b, 1);
            }
            else
            {
                tasks[b]++;
            }
        }
        // Restore a before trying the next first element
        if(!tasks.ContainsKey(a))
        {
            tasks.Add(a, 1);
        }
        else
        {
            tasks[a]++;
        }
    }
    memo.Add(hash, min);
    return min;
}
string CreateHash(Dictionary<int, int> tasks)
{
    StringBuilder sb = new StringBuilder();
    // Order the keys so the hash is deterministic regardless of insertion order
    foreach(var kvp in tasks.OrderBy(k => k.Key))
    {
        for(int i = 0; i < kvp.Value; i++)
        {
            sb.Append(kvp.Key).Append(',');
        }
    }
    return sb.ToString();
}

Nelson Perez
September 19, 2016 Here is the O(n) solution.
Admittedly, this is a very difficult problem to solve in O(n).
I was doing a similar exercise, implementing a queue with a Max() function, from the book "Elements of Programming Interviews", which has an O(n) 'Ninja' level solution that I could not get, as it required another 'ninja' solution from another problem.
This is a variation of implementing a Queue with a Max function using a Stack that maintains the max; in this case it maintains the minimum instead.
This is a very hard problem to solve in O(n), and it takes O(n) space as well.
class Stack<T> where T : IComparable<T>
{
    List<StackNode<T>> buffer = new List<StackNode<T>>();
    private class StackNode<U>
    {
        public U Value;
        public U Min;
    }
    public void Push(T val)
    {
        var node = new StackNode<T>();
        node.Value = val;
        if(buffer.Count == 0 || buffer[buffer.Count - 1].Min.CompareTo(val) > 0)
        {
            node.Min = val;
        }
        else
        {
            node.Min = buffer[buffer.Count - 1].Min;
        }
        buffer.Add(node);
    }
    public T Pop()
    {
        var node = buffer[buffer.Count - 1];
        buffer.RemoveAt(buffer.Count - 1);
        return node.Value;
    }
    public bool IsEmpty()
    {
        return buffer.Count == 0;
    }
    public T Min()
    {
        if(buffer.Count > 0)
            return buffer[buffer.Count - 1].Min;
        return default(T);
    }
}
public class Queue<T> where T : IComparable<T>
{
    Stack<T> A = new Stack<T>();
    Stack<T> B = new Stack<T>();
    public void Enqueue(T val)
    {
        A.Push(val);
    }
    public T Dequeue()
    {
        if(B.IsEmpty())
        {
            while(!A.IsEmpty())
            {
                B.Push(A.Pop());
            }
        }
        return B.Pop();
    }
    public T Min()
    {
        // Draining A into B here would corrupt the FIFO order when B is
        // not empty; take the min of both stacks instead.
        if(A.IsEmpty()) return B.Min();
        if(B.IsEmpty()) return A.Min();
        return A.Min().CompareTo(B.Min()) < 0 ? A.Min() : B.Min();
    }
}
IEnumerable<int> FindMinOfSubArrays(int[] A, int k)
{
    if(A.Length < k) throw new ArgumentOutOfRangeException("k");
    var q = new Queue<int>();
    for(int i = 0; i < k; i++)
    {
        q.Enqueue(A[i]);
    }
    // Yield the min of the first window, then slide one element at a time
    yield return q.Min();
    for(int j = k; j < A.Length; j++)
    {
        q.Enqueue(A[j]);
        q.Dequeue();
        yield return q.Min();
    }
}

Nelson Perez
September 19, 2016 As mentioned before, we can get an average case of O(N*LogN) but still O(N^2) in the worst case.
The idea is that, based on which direction you move in the BST, you cache the number of values greater than the current one.
I think there might be a better way to do this, but so far I thought about using a BST.
For simplicity I'll assume that the numbers are unique.
class Node
{
    public int Data;
    public int RightCount = 0;
    public int MoreThanCount = 0;
    public Node Left = null;
    public Node Right = null;
}
public IEnumerable<int> FindMoreThanAfter(int[] A)
{
    if(A.Length == 0) yield break;
    var root = new Node() { Data = A[A.Length - 1] };
    var map = new Dictionary<int, Node>();
    map.Add(root.Data, root);
    for(int i = A.Length - 2; i >= 0; i--)
    {
        var cur = root;
        var node = new Node() { Data = A[i] };
        while(true)
        {
            if(cur.Data < node.Data)
            {
                cur.RightCount++;
                if(cur.Right == null)
                {
                    cur.Right = node;
                    map.Add(node.Data, node);
                    break;
                }
                cur = cur.Right;
            }
            else if(cur.Data > node.Data)
            {
                // cur itself and its right subtree are all greater than node
                node.MoreThanCount += cur.RightCount;
                node.MoreThanCount++;
                if(cur.Left == null)
                {
                    cur.Left = node;
                    map.Add(node.Data, node);
                    break;
                }
                cur = cur.Left;
            }
        }
    }
    foreach(var a in A)
    {
        yield return map[a].MoreThanCount;
    }
}

Nelson Perez
September 19, 2016 public int FindKthSpiralValue(int[][] N, int kth)
{
    if(kth >= N.Length*N.Length || kth < 0)
        throw new ArgumentOutOfRangeException("kth");
    int startx = 0;
    int starty = 0;
    int endx = N.Length - 1;
    int endy = N.Length - 1;
    while(true)
    {
        // Single remaining cell (center of an odd-sized matrix)
        if(startx == endx && starty == endy) return N[startx][starty];
        for(int i = startx; i < endx; i++)
        {
            if(kth == 0) return N[i][starty];
            kth--;
        }
        for(int j = starty; j < endy; j++)
        {
            if(kth == 0) return N[endx][j];
            kth--;
        }
        for(int k = endx; k > startx; k--)
        {
            if(kth == 0) return N[k][endy];
            kth--;
        }
        for(int l = endy; l > starty; l--)
        {
            if(kth == 0) return N[startx][l];
            kth--;
        }
        startx++;
        starty++;
        endx--;
        endy--;
    }
}

Nelson Perez
September 18, 2016 It seems that you only need to count all pairs, and if there is at least one unpaired character then one more can be added, as 'abcba' and 'abba' are both valid palindromes.
public int MaxPalindrome(string S)
{
var counts = new Dictionary<char, int>();
foreach(var s in S)
{
if(!counts.ContainsKey(s))
{
counts.Add(s, 1);
}
else
{
counts[s]++;
}
}
bool hasMiddle = false;
int total = 0;
foreach(var kvp in counts)
{
total+= kvp.Value / 2 * 2;
if(kvp.Value % 2 == 1)
hasMiddle = true;
}
if(hasMiddle)
total++;
return total;
}

Nelson Perez
September 18, 2016 I'm not a big fan of this problem, as it cost me an interview with Google back when I did not really practice. Now I know better.
First I tried using a HashSet to track the possible Mayors and remove from it as I found people that did not know each other, but it made for a horrible solution, as I had to handle not removing an item while iterating so an exception would not occur.
The best solution is using a list, iterating and removing with a FOR loop while adjusting the indexes, so I did not have to worry about an exception. Then verify that everyone knows the mayor and the mayor knows no one.
Person FindMayor(Person[] people)
{
    var list = new List<Person>(people);
    for(int i = 0; i < list.Count; i++)
    {
        for(int j = i + 1; j < list.Count; j++)
        {
            if(!knows(list[i], list[j]))
            {
                // Not everyone knows j, so j can't be the mayor
                list.RemoveAt(j);
                j--;
            }
            else if(!knows(list[j], list[i]))
            {
                // Not everyone knows i, so i can't be the mayor
                list.RemoveAt(i);
                i--;
                break;
            }
        }
    }
    // Everyone needs to know the mayor and the mayor needs to know no one
    foreach(var p in people)
    {
        for(int i = 0; i < list.Count; i++)
        {
            if(!knows(p, list[i]) || knows(list[i], p))
            {
                list.RemoveAt(i);
                i--;
            }
        }
    }
    return list.FirstOrDefault();
}

Nelson Perez
September 11, 2016 This seems to be a hard problem, as we first need to create a method that finds the shortest path and then enumerate each of the node pairs.
We could use memoization in order to avoid making the algorithm exponential.
public int GetShortPathSums(List<Node> nodes)
{
    int sum = 0;
    var memo = new Dictionary<Node, Dictionary<Node, int>>();
    var visited = new HashSet<Node>();
    foreach(var start in nodes)
    {
        foreach(var end in nodes)
        {
            if(!memo.ContainsKey(start) || !memo[start].ContainsKey(end))
            {
                visited.Add(start);
                sum += GetShortestPathWeight(start, end, visited, memo);
                visited.Remove(start);
            }
        }
    }
    return sum;
}
private int GetShortestPathWeight(
    Node start,
    Node end,
    HashSet<Node> visited,
    Dictionary<Node, Dictionary<Node, int>> memo)
{
    if(memo.ContainsKey(start) && memo[start].ContainsKey(end))
    {
        return memo[start][end];
    }
    if(start == end) return 0;
    var min = int.MaxValue;
    // Assumes each entry in start.Paths exposes the adjacent Node and the edge Weight
    foreach(var path in start.Paths)
    {
        if(!visited.Contains(path.Node))
        {
            visited.Add(path.Node);
            var temp = GetShortestPathWeight(path.Node, end, visited, memo);
            if(temp != int.MaxValue && temp + path.Weight < min)
            {
                min = temp + path.Weight;
            }
            visited.Remove(path.Node);
        }
    }
    if(!memo.ContainsKey(start))
    {
        memo.Add(start, new Dictionary<Node, int>());
    }
    memo[start].Add(end, min);
    // Because the path could go both ways, the path from start -> end == end -> start
    if(!memo.ContainsKey(end))
    {
        memo.Add(end, new Dictionary<Node, int>());
    }
    memo[end].Add(start, min);
    return min;
}

Nelson Perez
September 11, 2016 Of course there is the obvious n*m solution, but we can do an n + m solution if we subtract the values from the total, store the results in a hash set, and then look up the values of the other list.
Here is a solution in C#:
public bool IsSumPossible(int[] nA, int[] mA, int total)
{
    var subtracted = new HashSet<int>();
    foreach(var n in nA)
    {
        subtracted.Add(total - n);
    }
    if(subtracted.Count == 0) return false;
    foreach(var m in mA)
    {
        if(subtracted.Contains(m))
        {
            return true;
        }
    }
    return false;
}

Nelson Perez
September 11, 2016 This is a simple trailing-pointer problem on a linked list. To make it harder, I'll implement a custom number of trailing nodes, and if it is not possible to find the nth from the end it will return null.
/// nth = 0 returns the last node, nth = 1 the second to last, and so on
public Node GetNthBeforeEnd(Node head, int nth)
{
    if(head == null)
        throw new ArgumentNullException("head");
    if(nth < 0)
        throw new ArgumentOutOfRangeException("nth");
    Node tail = head;
    Node cur = head;
    for(int i = 0; i < nth; i++)
    {
        cur = cur.Next;
        if(cur == null)
        {
            // Can't get it as there are fewer than nth nodes
            return null;
        }
    }
    while(cur.Next != null)
    {
        cur = cur.Next;
        tail = tail.Next;
    }
    return tail;
}
public static void PrintSecondLast(Node head)
{
    Node result = GetNthBeforeEnd(head, 1);
    if(result != null)
        Console.WriteLine(result.Data);
}

Nelson Perez
July 27, 2015 Well, assuming that you are getting a matrix of colors, you'll only need to care about the red and black colors.
public static int CalculateSquares(Color[][] board)
{
    Color prev = Color.White;
    int xCount = 0;
    for(int i = 0; i < board.Length; i++)
    {
        Color cur = board[i][0];
        if(cur != prev && (cur == Color.Black || cur == Color.Red))
        {
            xCount++;
        }
        prev = cur;
    }
    prev = Color.White;
    int yCount = 0;
    for(int i = 0; i < board[0].Length; i++)
    {
        Color cur = board[0][i];
        if(cur != prev && (cur == Color.Black || cur == Color.Red))
        {
            yCount++;
        }
        prev = cur;
    }
    return xCount*yCount;
}

Nelson Perez
July 27, 2015 @kdirishkumar Thanks for the comment. Well, I did not care about the overflow case; I'm assuming it is an integer, as it is int.Parse(), not a long or some other type of large integer.
To fix this I'll probably need to wrap the possible overflow exception and throw my own ArgumentOutOfRangeException.
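As a sketch of that wrapping (my own illustration, not the original code), C#'s `checked` arithmetic surfaces the overflow as an `OverflowException` that can be translated:

```csharp
using System;

public static class SafeParser
{
    // Parses a decimal string, translating arithmetic overflow into
    // ArgumentOutOfRangeException as described above.
    // Note: int.MinValue itself still overflows in this sketch.
    public static int ParseIntChecked(string number)
    {
        bool isNeg = number[0] == '-';
        int result = 0;
        try
        {
            for (int i = isNeg ? 1 : 0; i < number.Length; i++)
            {
                int c = number[i] - '0';
                if (c > 9 || c < 0)
                    throw new ArgumentException("Not a valid number");
                checked
                {
                    result = result * 10 + c;
                }
            }
        }
        catch (OverflowException)
        {
            throw new ArgumentOutOfRangeException("number");
        }
        return isNeg ? -result : result;
    }
}
```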
This works only if it is a Binary Search Tree, which is not true for all binary trees.
Nelson Perez
July 08, 2015 Here I've implemented it like a graph BFS traversal. I've also implemented the Deserialize.
Probably it would be easier if it were a Binary Search Tree.
public static string SerializeBinaryTree(Node root)
{
    if(root == null)
    {
        return string.Empty;
    }
    var q = new Queue<Node>();
    var sb = new StringBuilder();
    q.Enqueue(root);
    while(q.Count > 0)
    {
        Node c = q.Dequeue();
        if(c != null)
        {
            sb.Append(string.Format("{0},", c.Data));
            q.Enqueue(c.Left);
            q.Enqueue(c.Right);
        }
        else
        {
            sb.Append(",");
        }
    }
    return sb.ToString();
}
public static Node DeserializeBinaryTree(string S)
{
    if(string.IsNullOrEmpty(S)) return null;
    string[] N = S.Split(',');
    var q = new Queue<Node>();
    int r;
    if(int.TryParse(N[0], out r))
    {
        q.Enqueue(new Node() { Data = r });
    }
    else return null;
    Node result = q.Peek();
    for(int i = 1; i < N.Length; i += 2)
    {
        Node c = q.Dequeue();
        int left;
        if(int.TryParse(N[i], out left))
        {
            c.Left = new Node() { Data = left };
            q.Enqueue(c.Left);
        }
        int right;
        if(i + 1 < N.Length && int.TryParse(N[i + 1], out right))
        {
            c.Right = new Node() { Data = right };
            q.Enqueue(c.Right);
        }
    }
    return result;
}

Nelson Perez
July 08, 2015 This works only if there is just one non-duplicate value, which is the case here, and all duplicates have an even number of dupes.
But it does not print duplicates (although the question is not very clear).
@pantjeevan98: @Scott is correct, this is O(nlogn), as every insertion is O(logn), so n insertions are O(nlogn); the double loop is actually O(n).
So this is O(nlogn + n) => O(nlogn).
public static int GetAscendingRotation(int[] A)
{
    int p1 = 0, p2 = 1;
    while(p2 < A.Length)
    {
        if(A[p1] > A[p2])
        {
            return p2;
        }
        p1++;
        p2++;
    }
    return 0;
}

Nelson Perez
July 01, 2015 public static string SortByAge(string fileContent)
{
var minHeap = new SortedDictionary<int, List<string>>();
foreach(string line in fileContent.Split('\n'))
{
string[] parsed = line.Split(',');
string name = parsed[0];
int age = Convert.ToInt32(parsed[1]);
if(!minHeap.ContainsKey(age))
{
minHeap.Add(age, new List<string>() { name });
}
else
{
minHeap[age].Add(name);
}
}
StringBuilder sb = new StringBuilder();
foreach(var kvp in minHeap)
{
foreach(string name in kvp.Value)
{
sb.Append(string.Format("{0},{1}\n", name, kvp.Key));
}
}
return sb.ToString();
}

Nelson Perez
July 01, 2015 public static int ParseInt(string number)
{
    bool isNeg = (number[0] == '-');
    int result = 0;
    for(int i = isNeg ? 1 : 0; i < number.Length; i++)
    {
        int c = number[i] - '0';
        if(c > 9 || c < 0)
        {
            throw new ArgumentException("Not a valid number");
        }
        result = result*10 + c;
    }
    if(isNeg)
    {
        result *= -1;
    }
    return result;
}

Nelson Perez
July 01, 2015 Yes, that is the same approach I concluded.
It was very tricky for me to come to that conclusion and write the code.
Feel free to take a look at my algorithm below and let me know.
This actually would be n^n, as every time you do step 2 you'll need to process n found strings; for each of those n matches you'll need to process n more matches, and so on.
So this proposed algorithm is actually n^n.
Actually this does not work, as you assume that the first match found is on the actual path, and there could be paths that lead nowhere.
Also this actually is an O(n^2 * m) algorithm, even though it is not working correctly.
This was a very hard one for me.
Actually I thought of a recursive version, which is n^n, but looking a bit closer I decided it is better to use a graph algorithm: creating the graph is n^2 and traversing it is n using a BFS traversal.
public static bool PathExist(List<string> set, string start, string end)
{
    if(string.IsNullOrEmpty(start) ||
        string.IsNullOrEmpty(end) ||
        start.Length != end.Length)
    {
        throw new ArgumentException();
    }
    // Sanitize the set
    var hs = new HashSet<string>();
    foreach(string s in set)
    {
        if(s.Length == start.Length)
        {
            // HashSet ignores it if it is already there
            hs.Add(s);
        }
    }
    // Building the graph in n^2
    hs.Add(start);
    hs.Add(end);
    var graph = new Dictionary<string, List<string>>();
    foreach(var t1 in hs)
    {
        foreach(var t2 in hs)
        {
            if(IsOneLetterDiff(t1, t2))
            {
                if(!graph.ContainsKey(t1))
                {
                    graph.Add(t1, new List<string>() { t2 });
                }
                else
                {
                    graph[t1].Add(t2);
                }
            }
        }
    }
    // Now BFS traverse the graph to find if a path exists
    var q = new Queue<string>();
    var visited = new HashSet<string>();
    q.Enqueue(start);
    while(q.Count > 0)
    {
        string cur = q.Dequeue();
        // Found a path
        if(cur == end)
            return true;
        if(!visited.Contains(cur))
        {
            visited.Add(cur);
            if(graph.ContainsKey(cur))
            {
                foreach(var child in graph[cur])
                {
                    q.Enqueue(child);
                }
            }
        }
    }
    return false;
}
// Assumes that they're both non-null and the same length
private static bool IsOneLetterDiff(string one, string two)
{
    bool found = false;
    for(int i = 0; i < one.Length; i++)
    {
        if(one[i] != two[i])
        {
            if(!found)
            {
                found = true;
            }
            else
            {
                return false;
            }
        }
    }
    return found;
}

Nelson Perez
June 24, 2015 Yes, this is in fact a BFS traversal.
public static List<List<string>> GetFriendLevels(Dictionary<string, List<string>> friends, string person)
{
    var q = new Queue<string>();
    var hs = new HashSet<string>();
    var result = new List<List<string>>();
    q.Enqueue(person);
    q.Enqueue(null);
    hs.Add(person);
    List<string> cl = new List<string>();
    while(q.Count > 0)
    {
        string cur = q.Dequeue();
        if(cur == null)
        {
            result.Add(cl);
            cl = new List<string>();
            if(q.Count != 0)
            {
                q.Enqueue(null);
            }
        }
        else
        {
            cl.Add(cur);
            foreach(string f in friends[cur])
            {
                if(!hs.Contains(f))
                {
                    q.Enqueue(f);
                    hs.Add(f);
                }
            }
        }
    }
    // Removing the first level, which is the person itself
    result.RemoveAt(0);
    return result;
}

Nelson Perez
June 24, 2015 As mentioned before, Depth First Search, but aggregating the sum and copying the current path whenever it adds up to the expected sum.
public static List<List<int>> TreePathThatSumsN(Node root, int n)
{
    var result = new List<List<int>>();
    CoreTPTSN(root, n, 0, result, new List<int>());
    return result;
}
private static void CoreTPTSN(
    Node root,
    int n,
    int sum,
    List<List<int>> result,
    List<int> current)
{
    if(root == null)
        return;
    sum += root.Data;
    current.Add(root.Data);
    if(sum == n)
    {
        result.Add(new List<int>(current));
    }
    CoreTPTSN(root.Left, n, sum, result, current);
    CoreTPTSN(root.Right, n, sum, result, current);
    current.RemoveAt(current.Count - 1);
}

Nelson Perez
June 23, 2015 Seems easy enough: just make a Hash Table with the key being the Data and the value being a list of the heads that contain it.
Then look them up.
public List<Tuple<int, int>> GetIntersections(List<LinkedList<int>> lists)
{
    var result = new List<Tuple<int, int>>();
    var combinations = new HashSet<string>();
    var map = new Dictionary<int, List<int>>();
    foreach(LinkedList<int> ll in lists)
    {
        Node<int> current = ll.Head;
        while(current != null)
        {
            if(map.ContainsKey(current.Data))
            {
                map[current.Data].Add(ll.Head.Data);
            }
            else
            {
                map.Add(current.Data, new List<int> { ll.Head.Data });
            }
            current = current.Next;
        }
    }
    foreach(LinkedList<int> ll in lists)
    {
        Node<int> current = ll.Head;
        while(current != null)
        {
            foreach(var head in map[current.Data])
            {
                if(head != ll.Head.Data)
                {
                    // Check whether the combination or its reverse already
                    // exists so the reported pairs are unique
                    string hash = head + "," + ll.Head.Data;
                    string reverseHash = ll.Head.Data + "," + head;
                    if(!combinations.Contains(hash) &&
                        !combinations.Contains(reverseHash))
                    {
                        combinations.Add(hash);
                        result.Add(new Tuple<int, int>(ll.Head.Data, head));
                    }
                }
            }
            current = current.Next;
        }
    }
    return result;
}

Nelson Perez
June 16, 2015 It is not that it should be divisible; it should be a power of 5.
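For reference, a power-of-5 test can also be done by repeated division instead of a lookup table; a small sketch (my own):

```csharp
public static class PowerCheck
{
    // Returns true when 'number' is an exact power of 5 (5, 25, 125, ...).
    public static bool IsPowerOfFive(long number)
    {
        if (number < 5) return false;
        while (number % 5 == 0)
        {
            number /= 5;
        }
        return number == 1;
    }
}
```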
Nelson Perez
June 16, 2015 Wow, this seems to be an intimidating problem at first, but if you think about it you just need to split once a power of five is found, then look for more sets after that chunk, and use memoization to recall when you have already processed a previous chunk.
I used a hash set to know if a number is a power of 5.
I think there are better solutions, but this is what I came up with at the moment.
public int FindSplitsOfPowerOfFive(string S)
{
    return CoreFindSplits(S, 0, S.Length, new Dictionary<string, int>());
}
private int CoreFindSplits(string S, int start, int end, Dictionary<string, int> memo)
{
    if(start == end) return 0;
    string hash = start + "," + end;
    if(memo.ContainsKey(hash))
    {
        return memo[hash];
    }
    // Find all powers of five while parsing the binary string
    long current = 0;
    int min = int.MaxValue;
    for(int i = start; i < end; i++)
    {
        current <<= 1;
        current += S[i] == '1' ? 1 : 0;
        if(IsPowerOfFive(current))
        {
            int sub = CoreFindSplits(S, i + 1, end, memo);
            if(sub != int.MaxValue && sub + 1 < min)
            {
                min = sub + 1;
            }
        }
    }
    // int.MaxValue means no valid split exists for this chunk
    memo.Add(hash, min);
    return min;
}
// Precalculated powers of five for efficient lookup
private static readonly HashSet<long> powersOfFive = new HashSet<long>
{
    5,
    25,
    125,
    625,
    3125,
    15625,
    78125,
    390625,
    1953125,
    9765625,
    48828125,
    244140625,
    1220703125,
    6103515625,
    30517578125,
    152587890625,
    762939453125,
    3814697265625,
    19073486328125,
    95367431640625,
    476837158203125,
    2384185791015625,
    11920928955078125,
    59604644775390625,
    298023223876953125,
    1490116119384765625,
    7450580596923828125
};
private bool IsPowerOfFive(long number)
{
    return powersOfFive.Contains(number);
}

Nelson Perez
June 16, 2015 Use binary search to find the boundary where the zeros end and the ones begin, making this an O(log(n)) algorithm rather than O(n).
This could also be done iteratively with a while loop, but I decided to do it recursively for a bit of simplicity.
(or maybe I'm losing my edge) :P
// Assumes A holds all zeros first, then all ones
public int CountZeros(int[] A)
{
return coreCountZeros(A, 0, A.Length);
}
private int coreCountZeros(int[] A, int start, int end)
{
if(start == (end  1))
{
if(A[start] == 0)
return start + 1;
return start;
}
int mid = (start + end)/2;
if(A[mid] == 1)
{
return coreCountZeros(A, start, mid);
}
//else A[mid] == 0
return coreCountZeros(A, mid, end);
}
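For completeness, the iterative while-loop variant mentioned above could look like this (my sketch, with the same zeros-then-ones assumption as the recursive version):

```csharp
public static class ZeroCounter
{
    // Iterative counterpart of the recursive binary search: A holds zeros
    // followed by ones, and the return value is the number of zeros.
    public static int CountZerosIterative(int[] A)
    {
        if (A.Length == 0) return 0;
        int start = 0;
        int end = A.Length;
        while (start < end - 1)
        {
            int mid = (start + end) / 2;
            if (A[mid] == 1)
                end = mid;   // boundary is to the left
            else
                start = mid; // boundary is to the right
        }
        return A[start] == 0 ? start + 1 : start;
    }
}
```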

Nelson Perez
June 16, 2015 Seems like something dynamic programming could help with. I did not remove duplicates, making the assumption that the numbers are unique.
I was going to convert this into a bottom-up algorithm, adding until the total is more than the sum you are looking for, but decided to keep it like this, as I think this way is easiest to understand, at least for me.
public static IEnumerable<IEnumerable<int>> GetPermutations(int[] A, int[] S)
{
    List<List<int>> result = new List<List<int>>();
    var memo = new Dictionary<int, List<List<int>>>();
    foreach(int sum in S)
    {
        result.AddRange(GetPermutationsCore(A, sum, memo));
    }
    return result;
}
private static List<List<int>> GetPermutationsCore(
    int[] A,
    int sum,
    Dictionary<int, List<List<int>>> memo)
{
    if(sum == 0)
    {
        return new List<List<int>>() { new List<int>() };
    }
    else if(sum < 0)
    {
        return new List<List<int>>();
    }
    if(memo.ContainsKey(sum))
    {
        return memo[sum];
    }
    var result = new List<List<int>>();
    foreach(int a in A)
    {
        // Prepend 'a' to every permutation that sums to (sum - a) so the
        // memoized lists don't depend on the current call path
        foreach(var rest in GetPermutationsCore(A, sum - a, memo))
        {
            var combined = new List<int>() { a };
            combined.AddRange(rest);
            result.Add(combined);
        }
    }
    memo.Add(sum, result);
    return result;
}

Nelson Perez
May 15, 2015 This is exactly the same solution that I did in an interview (but using C#), and I think the interviewer was totally confused.
Nelson Perez
May 14, 2015 This problem is very similar to a Breadth First Search graph algorithm, where you go level by level, using a null value to mark each level.
I had this interview question once and I messed up. It worked, but I used a Queue and a Stack to reverse, plus a toggle flag to determine first Left then Right or first Right then Left, instead of just dumping into a list and traversing it with a toggle flag to determine the direction.
public static void PrintInZigZag<T>(Node<T> root)
{
    Queue<Node<T>> q = new Queue<Node<T>>();
    bool left2right = false;
    List<T> output = new List<T>();
    q.Enqueue(root);
    q.Enqueue(null);
    while(q.Count > 0)
    {
        Node<T> val = q.Dequeue();
        if(val == null)
        {
            PrintList(output, left2right);
            left2right = !left2right;
            output.Clear();
            if(q.Count > 0)
            {
                q.Enqueue(null);
            }
        }
        else
        {
            output.Add(val.Data);
            if(val.Left != null)
            {
                q.Enqueue(val.Left);
            }
            if(val.Right != null)
            {
                q.Enqueue(val.Right);
            }
        }
    }
}
private static void PrintList<T>(List<T> output, bool forward)
{
    if(output.Count == 0)
        return;
    int start = 0;
    int end = output.Count - 1;
    int step = 1;
    if(!forward)
    {
        start = output.Count - 1;
        end = 0;
        step = -1;
    }
    for(int i = start; i != end; i += step)
    {
        Console.Write(output[i].ToString() + " ");
    }
    Console.WriteLine(output[end]);
}

Nelson Perez
May 14, 2015 Interesting solution: using the integer as your "array" for a counting (true/false) sort, and using bitwise operations to populate it without a loop. Very nice.
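The trick described here, treating an int's 32 bits as a tiny boolean array, can be sketched with a hypothetical duplicate-letter check (my illustration, assuming lowercase input, not the solution being discussed):

```csharp
public static class BitVector
{
    // Uses the bits of a single int as a presence array for 'a'..'z',
    // returning true when the string contains a repeated letter.
    public static bool HasDuplicateLetter(string s)
    {
        int seen = 0;
        foreach (char c in s)
        {
            int bit = 1 << (c - 'a');
            if ((seen & bit) != 0) return true; // bit already set: duplicate
            seen |= bit;
        }
        return false;
    }
}
```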
Nelson Perez
May 14, 2015 Actually it is interesting that people thought the times were hours within a single day, which seems to allow linear time algorithms.
I myself thought of the times as offsets from something, so a time could go beyond a 24-hour boundary.
public static IEnumerable<int> FindIntersections(Tuple<int, int>[] TS)
{
// int => first, Tuple<int, int> => largest range, int => position
var sorted = new SortedDictionary<int, Tuple<Tuple<int, int>, int>>();
HashSet<int> result = new HashSet<int>();
for(int i = 0; i < TS.Length; i++)
{
var ts = TS[i];
// Probably the best approach is to use a HashTable first to remove
// intersections so lookups are O(1) rather than O(logn)
if(sorted.ContainsKey(ts.Item1))
{
var val = sorted[ts.Item1];
result.Add(i);
result.Add(val.Item2);
if(val.Item1.Item2 < ts.Item2)
{
sorted[ts.Item1] = new Tuple<Tuple<int, int>, int>(ts, i);
}
}
else
{
sorted.Add(ts.Item1, new Tuple<Tuple<int, int>, int>(ts, i));
}
}
var prev = new Tuple<Tuple<int, int>, int>(new Tuple<int, int>(-1, -1), -1);
foreach(var kvp in sorted)
{
if(prev.Item1.Item2 > kvp.Key)
{
result.Add(prev.Item2);
result.Add(kvp.Value.Item2);
}
prev = kvp.Value;
}
result.Remove(-1);
// Note that the result is not sorted
return result;
}

Nelson Perez
May 14, 2015 I think this is a simple and straightforward in-order traversal, as shown below, if O(1) space is required without counting the call stack.
public static void InOrderTraversal(Node root)
{
if(root == null)
return;
InOrderTraversal(root.Left);
Console.WriteLine(root.Data);
InOrderTraversal(root.Right);
}
Now, if the call stack counts towards the O(1), the best way is to flatten the tree into a list where the Next is the Right, attaching the parent node to the right-most node of the left side.
Note:
Notice that the code below keeps the integrity of the tree, fixing the new Right of the previous node after performing the flattening, and also returns the new root element.
This is not necessary for the algorithm, as I could easily ignore the reference integrity of the tree during traversal, so there would be no need for prevParent or returning the new root.
public static Node InOrderTraversalO1(Node root)
{
    Node parent = root;
    Node newRoot = null;
    // Dummy node so we don't have to check for null
    Node prevParent = new Node();
    while(parent != null)
    {
        if(parent.Left == null)
        {
            Console.WriteLine(parent.Data);
            if(newRoot == null)
            {
                newRoot = parent;
            }
            prevParent = parent;
            parent = parent.Right;
        }
        else
        {
            Node left = parent.Left;
            Node rightMost = left;
            // Find the right-most node of the left subtree
            while(rightMost.Right != null)
            {
                rightMost = rightMost.Right;
            }
            rightMost.Right = parent;
            parent.Left = null;
            parent = left;
            // Keep integrity
            prevParent.Right = parent;
        }
    }
    return newRoot;
}

Nelson Perez
May 07, 2015 public static void PopulateNext(Node root)
{
if(root == null) return;
Queue<Node> q = new Queue<Node>();
q.Enqueue(root);
q.Enqueue(null);
Node cur = null;
while(q.Count > 0)
{
Node t = q.Dequeue();
if(cur == null)
{
cur = t;
}
else
{
cur.Next = t;
cur = t;
}
if(t != null)
{
if(t.Left != null)
{
q.Enqueue(t.Left);
}
if(t.Right != null)
{
q.Enqueue(t.Right);
}
}
else if(q.Count > 0)
{
q.Enqueue(null);
}
}
}

Nelson Perez
April 19, 2015 Seems easy enough to do a recursion.
public static IEnumerable<string> PermuteLetters(List<string> words)
{
HashSet<string> result = new HashSet<string>();
PermuteLettersHelper(
words,
String.Empty,
0,
result);
return result;
}
private static void PermuteLettersHelper(
List<string> words,
string S,
int index,
HashSet<string> result)
{
if(index >= words.Count)
{
if(!result.Contains(S))
{
result.Add(S);
}
}
else
{
foreach(char c in words[index])
{
PermuteLettersHelper(words, S + c, index + 1, result);
}
}
}

Nelson Perez
April 19, 2015 This seems to be a version of finding a substring in a set of strings, but in this case there are vertical, horizontal and diagonal directions.
You could use a rolling hash in order to find the word.
I rushed a little bit, but hopefully you get the idea, at least from the vertical and horizontal versions.
class WordCoordinate
{
public int StartX {get;set;}
public int StartY {get;set;}
public int EndX {get;set;}
public int EndY {get;set;}
}
public static WordCoordinate FindWord(char[][] grid, string word)
{
WordCoordinate result = null;
List<Func<char[][], string, WordCoordinate>> functions =
    new List<Func<char[][], string, WordCoordinate>>()
{
    FindWordVertical,
    FindWordHorizontal,
    FindWordDiagonal1,
    FindWordDiagonal2
};
foreach(var func in functions)
{
result = func(grid, word);
if(result != null)
{
break;
}
}
return result;
}
private static long GetHash(IEnumerable<char> chars)
{
long result = 0;
foreach(char c in chars)
{
result += c.GetHashCode();
}
return result;
}
private static WordCoordinate FindWordVertical(char[][] grid, string word)
{
    long wordHash = GetHash(word);
    for(int i = 0; i < grid.Length; i++)
    {
        long runningHash = 0;
        for(int j = 0; j < grid[i].Length; j++)
        {
            runningHash += grid[i][j].GetHashCode();
            if(j >= word.Length)
            {
                runningHash -= grid[i][j - word.Length].GetHashCode();
            }
            if(j >= word.Length - 1 && runningHash == wordHash)
            {
                bool equal = true;
                for(int k = 0; k < word.Length; k++)
                {
                    if(grid[i][j - word.Length + 1 + k] != word[k])
                    {
                        equal = false;
                        break;
                    }
                }
                if(equal)
                {
                    return new WordCoordinate() {
                        StartX = j - word.Length + 1,
                        StartY = i,
                        EndX = j,
                        EndY = i };
                }
            }
        }
    }
    return null;
}
private static WordCoordinate FindWordHorizontal(char[][] grid, string word)
{
    long wordHash = GetHash(word);
    for(int i = 0; i < grid[0].Length; i++)
    {
        long runningHash = 0;
        for(int j = 0; j < grid.Length; j++)
        {
            runningHash += grid[j][i].GetHashCode();
            if(j >= word.Length)
            {
                runningHash -= grid[j - word.Length][i].GetHashCode();
            }
            if(j >= word.Length - 1 && runningHash == wordHash)
            {
                bool equal = true;
                for(int k = 0; k < word.Length; k++)
                {
                    if(grid[j - word.Length + 1 + k][i] != word[k])
                    {
                        equal = false;
                        break;
                    }
                }
                if(equal)
                {
                    return new WordCoordinate() {
                        StartX = i,
                        StartY = j - word.Length + 1,
                        EndX = i,
                        EndY = j };
                }
            }
        }
    }
    return null;
}
private static WordCoordinate FindWordDiagonal1(char[][] grid, string word)
{
    long wordHash = GetHash(word);
    // Scan every down-right diagonal: first those starting on the first column...
    for (int startRow = 0; startRow < grid.Length; startRow++)
    {
        long runningHash = 0;
        for (int d = 0; startRow + d < grid.Length && d < grid[0].Length; d++)
        {
            int row = startRow + d;
            runningHash += grid[row][d].GetHashCode();
            if (d >= word.Length)
            {
                runningHash -= grid[row - word.Length][d - word.Length].GetHashCode();
            }
            if (d >= word.Length - 1 && runningHash == wordHash)
            {
                int sd = d - word.Length + 1;
                bool equal = true;
                for (int k = 0; k < word.Length; k++)
                {
                    if (grid[startRow + sd + k][sd + k] != word[k])
                    {
                        equal = false;
                        break;
                    }
                }
                if (equal)
                {
                    return new WordCoordinate()
                    {
                        StartX = sd,
                        StartY = startRow + sd,
                        EndX = d,
                        EndY = row
                    };
                }
            }
        }
    }
    // ...then those starting on the first row (column 0 is already covered above)
    for (int startCol = 1; startCol < grid[0].Length; startCol++)
    {
        long runningHash = 0;
        for (int d = 0; d < grid.Length && startCol + d < grid[0].Length; d++)
        {
            int col = startCol + d;
            runningHash += grid[d][col].GetHashCode();
            if (d >= word.Length)
            {
                runningHash -= grid[d - word.Length][col - word.Length].GetHashCode();
            }
            if (d >= word.Length - 1 && runningHash == wordHash)
            {
                int sd = d - word.Length + 1;
                bool equal = true;
                for (int k = 0; k < word.Length; k++)
                {
                    if (grid[sd + k][startCol + sd + k] != word[k])
                    {
                        equal = false;
                        break;
                    }
                }
                if (equal)
                {
                    return new WordCoordinate()
                    {
                        StartX = startCol + sd,
                        StartY = sd,
                        EndX = col,
                        EndY = d
                    };
                }
            }
        }
    }
    return null;
}
private static WordCoordinate FindWordDiagonal2(char[][] grid, string word)
{
    long wordHash = GetHash(word);
    int width = grid[0].Length;
    // Scan every down-left (anti-)diagonal: first those starting on the last column...
    for (int startRow = 0; startRow < grid.Length; startRow++)
    {
        long runningHash = 0;
        for (int d = 0; startRow + d < grid.Length && d < width; d++)
        {
            int row = startRow + d;
            int col = width - 1 - d;
            runningHash += grid[row][col].GetHashCode();
            if (d >= word.Length)
            {
                runningHash -= grid[row - word.Length][col + word.Length].GetHashCode();
            }
            if (d >= word.Length - 1 && runningHash == wordHash)
            {
                int sd = d - word.Length + 1;
                bool equal = true;
                for (int k = 0; k < word.Length; k++)
                {
                    if (grid[startRow + sd + k][width - 1 - sd - k] != word[k])
                    {
                        equal = false;
                        break;
                    }
                }
                if (equal)
                {
                    return new WordCoordinate()
                    {
                        StartX = width - 1 - sd,
                        StartY = startRow + sd,
                        EndX = col,
                        EndY = row
                    };
                }
            }
        }
    }
    // ...then those starting on the first row (the last column is already covered above)
    for (int startCol = width - 2; startCol >= 0; startCol--)
    {
        long runningHash = 0;
        for (int d = 0; d < grid.Length && startCol - d >= 0; d++)
        {
            int col = startCol - d;
            runningHash += grid[d][col].GetHashCode();
            if (d >= word.Length)
            {
                runningHash -= grid[d - word.Length][col + word.Length].GetHashCode();
            }
            if (d >= word.Length - 1 && runningHash == wordHash)
            {
                int sd = d - word.Length + 1;
                bool equal = true;
                for (int k = 0; k < word.Length; k++)
                {
                    if (grid[sd + k][startCol - sd - k] != word[k])
                    {
                        equal = false;
                        break;
                    }
                }
                if (equal)
                {
                    return new WordCoordinate()
                    {
                        StartX = startCol - sd,
                        StartY = sd,
                        EndX = col,
                        EndY = d
                    };
                }
            }
        }
    }
    return null;
}

Nelson Perez
April 16, 2015 function RemoveDuplicates(str, index)
{
if(index == str.length)
return "";
var first = str.charAt(index);
for(var i = index; i < str.length ; i++)
{
if(str.charAt(i) != first)
{
return first + RemoveDuplicates(str, i);
}
}
// Last char of the sequence
return first;
}
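The same run-collapsing can also be done iteratively in a single pass; a Java sketch (the method name `removeDuplicates` mirrors the JavaScript above):

```java
class RemoveRuns {
    // Collapses each run of consecutive equal characters down to a single one.
    static String removeDuplicates(String str) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < str.length(); i++) {
            // Keep a character only when it differs from its predecessor
            if (i == 0 || str.charAt(i) != str.charAt(i - 1)) {
                sb.append(str.charAt(i));
            }
        }
        return sb.toString();
    }
}
```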

Nelson Perez
April 16, 2015 I think there might be a better solution, but mine is to "reverse" the dictionary and traverse the resulting "tree", counting the levels of depth and adding them up.
This is an O(n) solution, as it looks up each employee a constant number of times.
public Dictionary<string, int> CountReports(Dictionary<string, string> map)
{
    var Rmap = new Dictionary<string, List<string>>();
    string ceo = null;
    foreach (var kvp in map)
    {
        if (kvp.Key == kvp.Value)
        {
            // The CEO is their own manager
            ceo = kvp.Key;
        }
        else
        {
            if (!Rmap.ContainsKey(kvp.Value))
            {
                Rmap.Add(kvp.Value, new List<string>() { kvp.Key });
            }
            else
            {
                Rmap[kvp.Value].Add(kvp.Key);
            }
        }
    }
    var result = new Dictionary<string, int>();
    CountReportsHelper(Rmap, result, ceo);
    return result;
}
private int CountReportsHelper(
    Dictionary<string, List<string>> Rmap,
    Dictionary<string, int> result,
    string root)
{
    if (!Rmap.ContainsKey(root))
    {
        // Leaf employee: no reports, but counts as one person for the caller
        result.Add(root, 0);
        return 1;
    }
    int count = 0;
    foreach (string r in Rmap[root])
    {
        count += CountReportsHelper(Rmap, result, r);
    }
    result.Add(root, count);
    return count + 1;
}
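To make the reversed-map idea concrete, here is a Java sketch of the same algorithm run on a tiny hypothetical org chart (the names are made up for illustration):

```java
import java.util.*;

class Reports {
    // Counts direct + indirect reports for every employee.
    // The input maps employee -> manager; the CEO maps to themselves.
    static Map<String, Integer> countReports(Map<String, String> map) {
        Map<String, List<String>> rmap = new HashMap<>();
        String ceo = null;
        for (var e : map.entrySet()) {
            if (e.getKey().equals(e.getValue())) {
                ceo = e.getKey();
            } else {
                // Reverse the edge: manager -> list of direct reports
                rmap.computeIfAbsent(e.getValue(), k -> new ArrayList<>()).add(e.getKey());
            }
        }
        Map<String, Integer> result = new HashMap<>();
        helper(rmap, result, ceo);
        return result;
    }

    private static int helper(Map<String, List<String>> rmap, Map<String, Integer> result, String root) {
        int count = 0;
        for (String r : rmap.getOrDefault(root, List.of())) {
            count += helper(rmap, result, r);
        }
        result.put(root, count);
        return count + 1; // subtree size including root
    }
}
```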

Nelson Perez
April 12, 2015 I think this is a clever O(n) solution. At the beginning I thought it wouldn't work, but it does.
 Nelson Perez April 12, 2015 This needs to rearrange the characters in the Fancy Shuffle.
 Nelson Perez April 12, 2015 It seems you are missing the step where you put the two picked characters back, in order, into your characters sorted by count.
If we only take the two highest and put them in a string until there are no characters left, it does not work well.
For example, AABBCC will fail according to your algorithm.
It seems the algorithm picks AA BB to build the string, producing
ABAB, and then you are left with "CC".
When the result should be: ABCABC.
Not a big fan of the solution I came up with, but it is not bad.
First I determine the repetition of the characters in a hashtable, then I use a "Heap" (which I can simulate with a SortedDictionary), take the character with the maximum number of occurrences, and mix it with the second-highest until the SortedDictionary is empty.
This assumes there is a reverse comparer in order to make the SortedDictionary act like a MaxHeap.
This algorithm has an O(n + n log n) order of growth, but it performs many log n operations when utilizing the SortedDictionary. Some could be removed by using a HashSet to supplement the behavior, as lookups in the SortedDictionary are O(log n), but I decided to drop that code to keep things simple.
There could also be minor optimizations: when populating the Dictionary it could detect that the string is already a fancy shuffle, so nothing else needs to be done.
I think there might be a better algorithm which removes most of the log n operations. Maybe a combination of a SortedDictionary and a Queue, where the Queue maintains the two most-repeated characters and adds more as the dequeued values reach the same count as the max of the remaining elements in the SortedDictionary.
public static string fancy_shuffle(string S)
{
    Dictionary<char, int> ht = new Dictionary<char, int>();
    foreach (char c in S)
    {
        if (!ht.ContainsKey(c))
        {
            ht.Add(c, 1);
        }
        else
        {
            ht[c]++;
        }
    }
    var sd = new SortedDictionary<int, List<char>>(/* needs a reverse comparer */);
    foreach (var kvp in ht)
    {
        if (!sd.ContainsKey(kvp.Value))
        {
            sd.Add(kvp.Value, new List<char>() { kvp.Key });
        }
        else
        {
            sd[kvp.Value].Add(kvp.Key);
        }
    }
    StringBuilder sb = new StringBuilder();
    Tuple<char, int> prev = null;
    while (sd.Count != 0)
    {
        // Assuming a reverse comparer, the largest count comes first
        // in the SortedDictionary, so First() fetches the max.
        var kvp = sd.First();
        int occur = kvp.Key;
        char c = kvp.Value.First();
        sb.Append(c);
        if (kvp.Value.Count == 1)
        {
            sd.Remove(occur);
        }
        else
        {
            kvp.Value.RemoveAt(0);
        }
        // Add the previous element back if it still has occurrences left
        if (prev != null && prev.Item2 > 0)
        {
            if (!sd.ContainsKey(prev.Item2))
            {
                sd.Add(prev.Item2, new List<char>() { prev.Item1 });
            }
            else
            {
                sd[prev.Item2].Add(prev.Item1);
            }
        }
        // Make the current element the previous one
        prev = new Tuple<char, int>(c, occur - 1);
    }
    // An element is left over that would have to sit next to itself,
    // so the fancy shuffle is impossible
    if (prev != null && prev.Item2 > 0)
    {
        throw new ArgumentException("This string can't be fancy shuffled");
    }
    return sb.ToString();
}
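The simulated heap above can be swapped for a real max-heap; here is a Java sketch of the same greedy idea using `PriorityQueue` with a reversed comparator, returning `null` instead of throwing when the shuffle is impossible:

```java
import java.util.*;

class FancyShuffle {
    // Greedy: always emit the most frequent remaining character, holding the
    // one just used out of the heap for one turn so it never repeats adjacently.
    static String fancyShuffle(String s) {
        Map<Character, Integer> counts = new HashMap<>();
        for (char c : s.toCharArray()) {
            counts.merge(c, 1, Integer::sum);
        }
        // Max-heap on remaining count; each entry is {character, remaining}.
        PriorityQueue<int[]> heap = new PriorityQueue<>((a, b) -> b[1] - a[1]);
        for (var e : counts.entrySet()) {
            heap.add(new int[] { e.getKey(), e.getValue() });
        }
        StringBuilder sb = new StringBuilder();
        int[] prev = null;
        while (!heap.isEmpty()) {
            int[] top = heap.poll();
            sb.append((char) top[0]);
            top[1]--;
            if (prev != null && prev[1] > 0) {
                heap.add(prev); // previous char becomes eligible again
            }
            prev = top;
        }
        // Leftover occurrences would have to sit next to themselves.
        return (prev != null && prev[1] > 0) ? null : sb.toString();
    }
}
```

The heap makes the max-lookup explicit: each of the n steps costs O(log k) for k distinct characters.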

Nelson Perez
April 12, 2015 public static int CountBits(byte input)
{
    int result = 0;
    while (input != 0)
    {
        result += input % 2;
        input = (byte)(input >> 1);
    }
    return result;
}
public static bool IsByte3Bits(byte input)
{
    return CountBits(input) == 3;
}
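A common refinement (Kernighan's trick) clears the lowest set bit on each pass, so the loop runs once per set bit instead of once per bit position; a Java sketch:

```java
class Bits {
    // n & (n - 1) clears the lowest set bit, so this loops once per set bit.
    static int countBits(int input) {
        int result = 0;
        while (input != 0) {
            input &= input - 1;
            result++;
        }
        return result;
    }
}
```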

Nelson Perez
April 12, 2015 This is as simple as dequeuing from the source and enqueuing each item into the target.
void TowersOfHanoi(PriorityQueue<int> source, PriorityQueue<int> target, PriorityQueue<int> buffer)
{
    while (source.Count > 0)
    {
        target.Enqueue(source.Dequeue());
    }
}
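Because both "towers" are priority queues rather than plain stacks, draining one into the other preserves the ordering invariant automatically, so the buffer peg is never touched. A Java sketch of the same transfer:

```java
import java.util.PriorityQueue;

class Hanoi {
    // With priority queues the towers reorder themselves, so a straight
    // transfer is enough; no intermediate buffer is required.
    static void towersOfHanoi(PriorityQueue<Integer> source, PriorityQueue<Integer> target) {
        while (!source.isEmpty()) {
            target.add(source.poll());
        }
    }
}
```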

Nelson Perez
April 03, 2015 Seems simple enough. I'm assuming #1 that the head is a dummy object, and #2 that I can modify the objects.
public static Node MergeSortedList(Node head1, Node head2)
{
Node current1 = head1;
Node current2 = head2;
if(current1 != null)
{
current1 = current1.Next;
}
if(current2 !=null)
{
current2 = current2.Next;
}
Node head = new Node();
Node current = head;
while(current != null)
{
if(current1 == null)
{
// This means we are done with list 1
current.Next = current2;
break;
}
else if(current2 == null)
{
// This means we are done with list 2
current.Next = current1;
break;
}
else if(current1.Data < current2.Data)
{
current.Next = current1;
current1 = current1.Next;
}
else
{
current.Next = current2;
current2 = current2.Next;
}
current = current.Next;
}
return head;
}

Nelson Perez
March 30, 2015 Hmm, I think the answer is: it depends!
It depends on the type of query and the type of data.
Although I do agree that one of the solutions could be a map-reduce approach.
There are several possible approaches:
#1 Map Reduce across multiple servers, with clustering to optimize the query.
#2 In-memory read/write-through cache.
#3 Common data duplicated across many records to optimize frequent subqueries.
#1 Map Reduce:
A query system could brute-force the query against every server, have each return whatever data it finds, and reduce the results through an aggregation function. This can be improved with a clustering scheme that maps the query to only certain servers.
#2 In-memory read/write-through cache:
On a big system this approach is crucial for responsiveness. You always query the cache first, and the cache queries the underlying storage on a miss.
#3 Common Data:
Also, say the query has a very limited scope but the data needs to be retrieved from multiple places.
We could sacrifice consistency, as social media is not critical the way banking is, where all operations dealing with money need to be atomic.
For example: a User has Friends, so when the user logs in they need to see their friends' Names, DOB, or other normally consistent data.
The data could be set up so that a User record contains a cached version of the Names and DOB of all their friends. This means data is heavily replicated, and it is fine if it is not up to date. Queries then become essentially a hashtable lookup: the hashcode maps to the machine, and you get the record from that machine.
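The brute-force fan-out in approach #1 can be sketched as querying every shard and folding the partial results; the `Shard` interface and the query string here are hypothetical, purely for illustration:

```java
import java.util.*;
import java.util.function.BinaryOperator;

class ScatterGather {
    // Hypothetical shard: answers a query with a partial result.
    interface Shard<R> {
        R query(String q);
    }

    // Map step: ask every shard. Reduce step: fold partials with the aggregator.
    static <R> R queryAll(List<Shard<R>> shards, String q, R identity, BinaryOperator<R> reduce) {
        R result = identity;
        for (Shard<R> shard : shards) {
            result = reduce.apply(result, shard.query(q));
        }
        return result;
    }
}
```

A clustering scheme would simply shrink the `shards` list to the servers that can hold matching data before fanning out.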
Oops, thought it was a Binary SEARCH Tree!
 Nelson Perez March 24, 2015
The best approach is to keep track of the operations and apply the reverse operation when you insert a number. That way, each insertion accounts for the previously applied operations.
 Nelson Perez October 09, 2017
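A common instance of this trick, assuming the question involves an operation like "add k to every element" that should be O(1): keep one running offset for the whole collection, and store each newly inserted number with the reverse of the pending operations applied, so a single adjustment at read time reconstructs the true value. A hypothetical Java sketch:

```java
import java.util.*;

class LazyAddBag {
    private final List<Long> stored = new ArrayList<>();
    private long pendingAdd = 0;

    // O(1): logically applies to every element, past and future reads included.
    void addToAll(long k) {
        pendingAdd += k;
    }

    // Store the value with the reverse of the pending operation applied,
    // so the read-time adjustment yields exactly x.
    void insert(long x) {
        stored.add(x - pendingAdd);
    }

    long get(int index) {
        return stored.get(index) + pendingAdd;
    }
}
```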