Nelson Perez
ABOUT
•Nelson has 10+ years of software development and leadership experience.
•Passionate about Big Data, Business Intelligence (BI), web development, and highly scalable, highly available cloud services.
•Always excited to rapidly ramp up and learn new skills and technologies.
•Definitely looking for interesting opportunities to apply skills, passion, and agility to deliver innovation and value directly to customers.
•Startup/Hackathon cultures are always welcome!
HIGHLIGHTS
•Architected & led a company-wide automation platform, from builds to feature-progress & burn-down reports, for Action Engine.
•Developed service health optics for Windows Azure Active Directory.
•Developed the Exchange Online Storage service, a big data pipeline.
•Delivered new big data behavioral targeting segments for display and email ads for the Bing, Hotmail and Live Messenger marketing teams.
•Led the solution that saved $3.7 million/year by optimizing the Search Engine Marketing budget for the Bing Search Cashback online campaign.
EDUCATION
•B.S. in Computer Engineering [CS + EE] + Project Manager Certificate @ University of Puerto Rico – Mayaguez, PR 1999-2005
•Certifications: Project+, Technical Trainer, MCP
- 1 of 1 vote
Answers
Convert a binary tree into an in-order traversal circular list, re-purposing the node's pointers Left & Right as Previous and Next respectively.
- Nelson Perez in United States
Hint: A single node's Left & Right point to itself.
Note: This is not a binary search tree.
Facebook Software Engineer Coding
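No answer code accompanies the question above, so here is a minimal C# sketch of one way it could be done (the Node class and method names are my own illustration, not from the original post): each subtree is converted recursively into a circular list, and the pieces are spliced together, with a single node pointing to itself exactly as the hint says.

```csharp
class Node
{
    public int Value;
    public Node Left, Right;   // re-purposed as Previous / Next in the result
    public Node(int value) { Value = value; }
}

class TreeToCircularList
{
    // Converts a binary tree to a circular doubly linked list in in-order
    // sequence. Left becomes Previous and Right becomes Next.
    public static Node Convert(Node root)
    {
        if (root == null) return null;

        Node leftList = Convert(root.Left);
        Node rightList = Convert(root.Right);

        // A single node's Left & Right point to itself, per the hint.
        root.Left = root;
        root.Right = root;

        return Append(Append(leftList, root), rightList);
    }

    // Joins two circular lists and returns the head of the combined list.
    private static Node Append(Node a, Node b)
    {
        if (a == null) return b;
        if (b == null) return a;

        Node aLast = a.Left;   // tail of list a
        Node bLast = b.Left;   // tail of list b

        aLast.Right = b;       // a's tail -> b's head
        b.Left = aLast;
        bLast.Right = a;       // b's tail -> a's head closes the circle
        a.Left = bLast;
        return a;
    }
}
```

The recursion never allocates new nodes; it only rewires the existing Left/Right pointers, which is what "re-purposing" asks for.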
I certainly agree with the code above, but if fast response time and memory are not an issue, a hash of ("userId" + "pageId") would probably be better.
But if you want to track a large number of users and a large number of pages, as some of the folks here are asking, the solution would require some mechanism to find a user and a page fast.
If this is not transactionally heavy, in real life I would probably use a SQL database with 3 tables:
- users - indexed on whatever we want to hash (userId or userName)
- pageNames - contains all pages that any user has visited
- userPageCount - contains the relationship between users and pages and the respective count, or number of views
If it is mildly transactional, I would take the same approach but add a cache service that keeps the currently used data in memory and asynchronously flushes it to the database every now and then, adding to the current count and locking the userPageCount record to make it thread safe.
If it is heavily transactional, I'd probably go with a NoSQL solution where I could have a single blob table, indexed by userName + pageName, containing the count.
So when updating, I would find the record based on "userName + pageName" and use a thread-safe Add operation.
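As a rough in-memory sketch of that thread-safe Add (the PageViewCounter class and key format are my own illustration, not any particular library or the poster's code), C#'s ConcurrentDictionary.AddOrUpdate keeps the increment atomic without explicit locks:

```csharp
using System.Collections.Concurrent;

class PageViewCounter
{
    // One count per "userName|pageName" key; AddOrUpdate makes the
    // increment atomic, so concurrent views of the same page are safe.
    private readonly ConcurrentDictionary<string, long> counts =
        new ConcurrentDictionary<string, long>();

    public long RecordView(string userName, string pageName)
    {
        string key = userName + "|" + pageName;
        return counts.AddOrUpdate(key, 1, (_, current) => current + 1);
    }

    public long GetCount(string userName, string pageName)
    {
        counts.TryGetValue(userName + "|" + pageName, out long count);
        return count;   // 0 when the pair has never been seen
    }
}
```

A cache service like the one described above could sit on top of this, periodically flushing the in-memory counts into the userPageCount table.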
I see very complicated solutions here and I'm not sure why; they don't make sense to me.
I'll do mine in C#.
I'm basically creating a hash of all unique numbers and their total occurrences, then using each number to see whether adding it conforms with the wanted total.
I also do some checks to determine whether the array is valid for this operation, because the question asks for exactly half the total sum.
List<int> HalfTotalSubSet(int[] input)
{
// This is to store each number and how many occurrences are found
Dictionary<int, int> numberHash = new Dictionary<int, int>();
int total = 0;
foreach (int n in input)
{
if(!numberHash.ContainsKey(n))
{
numberHash.Add(n, 1);
}
else
{
numberHash[n]++;
}
total += n;
}
if (total % 2 != 0)
{
    throw new ArgumentException(
        "The total sum of the array is an odd number, so it cannot be split in half.\nTotal: " + total);
}
List<int> subset = new List<int>();
if(TotalSumSubSetCore(numberHash, subset, total/2))
{
return subset;
}
// Alternatively, return an empty subset or null; throwing for now.
throw new ArgumentException(
    "There is no subset that sums to half the total.\nTotal: " + total);
}
bool TotalSumSubSetCore(Dictionary<int, int> numberHash, List<int> subset, int sum)
{
    // Snapshot the keys so the counts can be mutated while iterating;
    // KeyValuePair entries are read-only and the dictionary cannot be
    // modified during its own enumeration.
    foreach (int number in new List<int>(numberHash.Keys))
    {
        if (numberHash[number] > 0)
        {
            int newSum = sum - number;
            // We reached the target sum exactly.
            if (newSum == 0)
            {
                subset.Add(number);
                return true;
            }
            // This check assumes non-negative numbers; if negatives were
            // allowed it would have to be dropped so that every branch
            // gets explored, since a later number could bring the sum
            // back to zero.
            if (newSum < 0)
            {
                continue;
            }
            numberHash[number]--;   // consume one occurrence of this number
            if (TotalSumSubSetCore(numberHash, subset, newSum))
            {
                subset.Add(number);
                return true;
            }
            numberHash[number]++;   // backtrack
        }
    }
    // No combination of the remaining numbers gives the exact sum.
    return false;
}
The best solution would multithread the folder processing: the filesystem is far slower than the processor can handle multiple threads plus any locking overhead, so issuing all the hard drive requests for listing files at once lets the drive optimize its reads across all the open requests.
It should keep track of all the threads that are still searching and stop them once the time is up.
Again, this is high performance because it uses as much memory and CPU as it can while it waits for the file system to return the lists of files and folders.
I put a max-threads limit in place to bound how many threads are created.
Another way would be to create only 2 main threads and one thread-safe list of the found folders.
- Nelson Perez August 04, 2013
So the threads:
#1 Traverses the folders, populating the list (could also be async).
#2 Searches each folder in the list for a match while the traversal is not done and there are folders left to search; otherwise it waits until the traversal finds a new folder. (This thread could also spawn sub-threads to process each found folder.)
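A minimal sketch of that two-thread design, assuming .NET's BlockingCollection as the thread-safe list (the FolderSearch class and Find method are my own names, not the poster's code): thread #1 enqueues folders as it traverses, thread #2 consumes them and records files whose name matches.

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;

class FolderSearch
{
    // Thread #1 traverses folders into a thread-safe collection;
    // thread #2 consumes each folder and looks for matching files.
    public static List<string> Find(string root, string fileName)
    {
        var folders = new BlockingCollection<string>();
        var matches = new List<string>();

        Task traversal = Task.Run(() =>
        {
            var pending = new Queue<string>();
            pending.Enqueue(root);
            while (pending.Count > 0)
            {
                string folder = pending.Dequeue();
                folders.Add(folder);
                foreach (string sub in Directory.GetDirectories(folder))
                    pending.Enqueue(sub);
            }
            folders.CompleteAdding();   // unblocks the searcher when done
        });

        Task search = Task.Run(() =>
        {
            // Blocks while the list is empty, exits when adding completes.
            foreach (string folder in folders.GetConsumingEnumerable())
                foreach (string file in Directory.GetFiles(folder))
                    if (Path.GetFileName(file) == fileName)
                        lock (matches)      // in case sub-threads are added later
                            matches.Add(file);
        });

        Task.WaitAll(traversal, search);
        return matches;
    }
}
```

GetConsumingEnumerable gives exactly the "wait until the traversal finds a new folder" behavior described in step #2, and more consumer tasks could be started on the same collection to get the sub-thread variant.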