Google Interview Question
Senior Software Development Engineer
Country: United States
Interview Type: Phone Interview
First, build a hash table that maps each node id to the list of its children, and set the root to be the node with parent = 0. Then make a recursive call on the root: given a node, the call computes the weight of the subtree under that node, prints it, and returns it. Applying the call to each child both prints the children's weights and accumulates the total weight for the current node.
import java.util.HashMap;
import java.util.LinkedList;
import java.util.List;

class SumTreeWeight {
    class Node {
        int id;
        int parent;
        int weight;
    }

    public void printSubTreeWeight(List<Node> nodes) {
        Node root = null;
        HashMap<Integer, List<Node>> children = new HashMap<Integer, List<Node>>();
        for (Node node : nodes) {
            if (node.parent == 0)
                root = node;
            int parent = node.parent;
            if (!children.containsKey(parent))
                children.put(parent, new LinkedList<Node>());
            children.get(parent).add(node);
        }
        sumWeight(root, children);
    }

    private int sumWeight(Node node, HashMap<Integer, List<Node>> children) {
        int weight = node.weight;
        // look children up by id, not by the Node object itself
        if (children.containsKey(node.id)) {
            for (Node child : children.get(node.id)) {
                weight += sumWeight(child, children);
            }
        }
        System.out.println("Weight for " + node.id + " is " + weight);
        return weight;
    }
}
Here's how I see one possible approach:
1.) Parse the CSV file into a tree. The parsing has to handle entries out of order, since the CSV is not guaranteed to be sorted. So the moment you come across an entry whose parent is not yet in the tree, just put it aside in a list of orphans. Then rerun step 1 on that list and repeat until it's empty.
2.) Once you have a tree, traverse it (DFS or BFS, it doesn't matter) and for each node call printWeight(node, 0).
3.) printWeight(Node node, int *sum) traverses all of the node's descendants (again, DFS or BFS) and accumulates their weights into the external sum variable.
The question is how to optimize step 2 so we don't re-visit already visited nodes, and how to remember the intermediate sums.
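The orphan-list parsing in step 1 can be sketched like this (a non-authoritative Python illustration; the `(id, parent, weight)` tuples and the dict layout are my own assumptions, not part of the original post):

```python
def build_tree(rows):
    # rows: iterable of (id, parent, weight) tuples, in arbitrary order
    nodes = {}  # id -> {"weight": w, "children": [child ids]}
    pending = list(rows)
    while pending:
        orphans = []
        for node_id, parent, weight in pending:
            if parent == 0 or parent in nodes:
                nodes[node_id] = {"weight": weight, "children": []}
                if parent != 0:
                    nodes[parent]["children"].append(node_id)
            else:
                # parent not seen yet: set aside and retry in the next pass
                orphans.append((node_id, parent, weight))
        if len(orphans) == len(pending):
            raise ValueError("cycle or missing parent in input")
        pending = orphans
    return nodes

tree = build_tree([(10, 30, 1), (30, 0, 10), (20, 30, 2), (50, 40, 3), (40, 30, 4)])
```

Each pass resolves every row whose parent is already known, so the loop terminates as long as the input really is a tree.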
Yes, we can use dynamic programming for this.
For each node, add a totalWeight field with default value -1, meaning "unset".
For each node, locate it in the tree in O(log n) time, then call getWeight.
In getWeight, test whether totalWeight == -1; if so, recurse into the children (BFS order is fine) and, once the value is computed, store it in totalWeight. If totalWeight >= 0 it is already set, so there is no need to descend into that subtree again.
This also makes the structure dynamic: we can add a new node to the tree, and getWeight stays cheap for every node, including the new one, because cached subtrees are never recomputed.
How about memoization instead of dynamic programming?
1. You do not need to build a separate tree or keep track of orphans. Just keep a map from ids to nodes in the Tree class, and keep a list of children (and the parent) in the Node class.
public class GTree {
    private Node root; /* set the node you encounter with parent 0 as the root */
    private Map<Integer, Node> intToNodeMap;
    private Map<Integer, Integer> intToSubTreeWeightMap;
    /* ... */
}
public class Node {
    private int id;
    private int weight;
    private List<Node> children;
    private Node parent;
}
Each time you parse a line, add the new nodes to the map (error checks included). If you are adding a parent that has not been seen yet, set its weight to -1; once you hit the line that actually defines the parent, fill in the weight.
2. Use simple recursion, and save the subTreeWeight results in a map before you return:
public int getSubTreeWeight(Map<Integer, Integer> intToSubTreeWeightMap) {
    // memoize by node id, not by weight
    if (intToSubTreeWeightMap.containsKey(this.id)) {
        return intToSubTreeWeightMap.get(this.id);
    }
    int subTreeWeight = weight;
    for (Node n : children) {
        subTreeWeight += n.getSubTreeWeight(intToSubTreeWeightMap);
    }
    intToSubTreeWeightMap.put(this.id, subTreeWeight);
    return subTreeWeight;
}
This is O(n).
#include <iostream>
#include <vector>
using namespace std;

class Node {
public:
    int id;
    int parent;
    int weight;
    // ... other fields can be added here if necessary
    int totalWeight; // total weight of the node itself plus its whole subtree

    Node(int id_, int parent_, int weight_)
    {
        id = id_; parent = parent_; weight = weight_; totalWeight = 0;
    }
};

void printSubTreeWeight(vector<Node> nodes) {
    int ct;          // current index
    bool haveParent;
    for (int i = 0; i < nodes.size(); i++)
    {
        ct = i;
        haveParent = false;
        do
        {
            haveParent = false;
            nodes[ct].totalWeight += nodes[i].weight;
            if (nodes[ct].parent != 0)
            {
                // ct = findIndexByID(nodes[ct].parent);
                for (int j = 0; j < nodes.size(); j++)
                {
                    if (nodes[j].id == nodes[ct].parent)
                    {
                        ct = j;
                        haveParent = true;
                    }
                }
            }
        } while (haveParent);
    }
    for (int i = 0; i < nodes.size(); i++)
    {
        cout << "id=" << nodes[i].id << " Parent=" << nodes[i].parent
             << " W=" << nodes[i].weight << " total=" << nodes[i].totalWeight << endl;
    }
}

int main()
{
    vector<Node> nodes;
    nodes.push_back(Node(10, 30, 1));
    nodes.push_back(Node(30, 0, 10));
    nodes.push_back(Node(20, 30, 2));
    nodes.push_back(Node(50, 40, 3));
    nodes.push_back(Node(40, 30, 4));
    printSubTreeWeight(nodes);
}
and the output is
id=10 Parent=30 W=1 total=1
id=30 Parent=0 W=10 total=20
id=20 Parent=30 W=2 total=2
id=50 Parent=40 W=3 total=3
id=40 Parent=30 W=4 total=7
Hey here is my quick and dirty "brute force" solution in C++. The run time complexity is O(N^2) as it relies on a nested for loop. Let me know if anyone has any improvements/suggestions. Hope this helps!
#include <iostream>
#include <string>
#include <vector>
using namespace std;

struct node
{
    // structure properties
    int id;
    int parentId;
    int weight;
    // the constructor for node
    node(int id_, int parentId_, int weight_)
    {
        id = id_;
        parentId = parentId_;
        weight = weight_;
    }
};

void printSubTreeWeight(vector<node> nodes)
{
    // vector to accumulate the weights under each parent id
    vector<node> parents;
    bool hasFound;
    // loop through the nodes and group them by parent id;
    // the time complexity is O(N^2) because of the nested loop
    for (int i = 0; i < nodes.size(); ++i)
    {
        hasFound = false;
        for (int j = 0; j < parents.size(); ++j)
        {
            if (parents[j].parentId == nodes[i].parentId)
            {
                parents[j].weight += nodes[i].weight;
                hasFound = true;
            }
        }
        if (!hasFound)
        {
            parents.push_back(nodes[i]);
        }
    }
    // now print the output with a for loop, O(N) time complexity
    for (int i = 0; i < parents.size(); ++i)
    {
        cout << "ID: " << parents[i].parentId << " Weight: " << parents[i].weight << endl;
    }
}

int main()
{
    // define the current tree configuration in a vector of static node values
    vector<node> nodes{{10, 30, 1},
                       {30, 0, 10},
                       {20, 30, 2},
                       {50, 40, 3},
                       {40, 30, 4}};
    printSubTreeWeight(nodes);
    return 0;
}
the output is:
ID: 30 Weight: 7
ID: 0 Weight: 10
ID: 40 Weight: 3
I think this will also work if you add or remove nodes.
On reviewing my solution and looking at the comments on @Diego Giagio's post, I realize I did not account for the weight of the parent node itself. The algorithm above could easily be modified to add each node's own weight to the accumulated totals, either after the fact or inside one of the nested loops, by referencing the node ID that the subtree nodes point to.
Thanks for your solution, but I have doubts about two things:
- printSubTreeWeight(nodes) is supposed to print the subtree total weight of only some nodes (a subset) out of all the nodes, e.g. the list of nodes given in the question.
- I think you only count the weight of direct children, not the children of the children.
Node {
    int id;
    int parent;
    int weight;
    int subtree_weight;
}
Create a direct-address table that maps node id to node object.
for node x in node list:
    y = x
    while y is not null:
        y.subtree_weight += x.weight   # accumulate, don't overwrite
        y = table[y.parent]
Complexity is O(nh), where h is the height of the tree. If the tree is balanced, O(n log n).
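The ancestor walk above can be made runnable like this (a Python sketch; the sample node table is taken from the question, the dict layout is my own assumption):

```python
# id -> (parent, weight): a stand-in for the direct-address table
nodes = {
    10: (30, 1), 30: (0, 10), 20: (30, 2), 50: (40, 3), 40: (30, 4),
}
subtree_weight = {i: 0 for i in nodes}
for x, (_, w) in nodes.items():
    y = x
    while y != 0:                 # walk up until we pass the root
        subtree_weight[y] += w    # add x's own weight to x and every ancestor
        y = nodes[y][0]
print(subtree_weight)
```

Each node's weight is added once per ancestor, so the total work is the sum of node depths: O(nh) as stated.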
You should be able to do a post-order traversal on the tree. I have a snippet, but mine doesn't add the 30 node's weight correctly.
Post-order traversal is usually used to sum up the disk space in a directory tree, so if you want the weight at each node, post-order traversal fits.
public class Node {
    // Object item;
    int id;
    Node parent;
    Node firstChild;
    Node nextSibling;
    int weight;
    static int subTreeWeight = 0;

    // constructor
    public Node(int id, Node parent, Node firstChild, Node nextSibling, int weight) {
        System.out.print("Here");
        this.id = id;
        this.parent = parent;
        this.firstChild = firstChild;
        this.nextSibling = nextSibling;
        this.weight = weight;
    }

    public int visit() {
        // System.out.print("Visiting " + this.id + "\n");
        subTreeWeight += this.weight;
        this.weight = subTreeWeight;
        System.out.print(subTreeWeight);
        return subTreeWeight;
    }

    public void postorder() {
        if (firstChild != null) {
            subTreeWeight = 0;
            firstChild.postorder();
        }
        this.visit();
        System.out.print("Subtree of id = " + this.id + " weight is " + subTreeWeight + "\n");
        if (nextSibling != null) {
            if (firstChild != null && parent != null) {
                this.weight += subTreeWeight;
            }
            subTreeWeight = 0;
            nextSibling.postorder();
        }
    }
}
Can someone take a look at this and see where I am going wrong?
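The root of the trouble is the shared static subTreeWeight, which gets reset mid-traversal. For comparison, here is a minimal working post-order sum in the same first-child/next-sibling representation, sketched in Python (the node wiring below is my own assumption, chosen to match the question's sample data):

```python
class Node:
    def __init__(self, id, weight):
        self.id, self.weight = id, weight
        self.first_child = None
        self.next_sibling = None

def subtree_weight(n):
    # post-order: sum the children's subtrees first, starting from own weight;
    # no shared mutable state between siblings
    total = n.weight
    c = n.first_child
    while c is not None:
        total += subtree_weight(c)
        c = c.next_sibling
    print(f"Subtree of id={n.id} weight is {total}")
    return total

# sample tree from the question: 30 -> (10, 20, 40), 40 -> (50)
root = Node(30, 10)
n10, n20, n40, n50 = Node(10, 1), Node(20, 2), Node(40, 4), Node(50, 3)
root.first_child = n10
n10.next_sibling = n20
n20.next_sibling = n40
n40.first_child = n50
total = subtree_weight(root)
```

Returning the partial sum from each recursive call, instead of accumulating into a static field, is what makes node 30's total come out right.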
A method that doesn't build a tree, but instead uses a queue, a children-counter array, and a subtree-weight array:
1. O(n) time to determine the leaf nodes and put them in a queue (scan the node array once, using an array to count each node's direct children; leaves are the nodes with no direct children). Initialize each node's subtree weight with its own weight.
2. O(n) time to calculate the weights bottom-up: dequeue a node, add its subtree weight to its parent's subtree weight, and decrement the parent's children counter (enqueue the parent once its counter drops to zero).
3. Output each node's subtree weight.
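The three steps above can be sketched as follows (a Python illustration; the sample rows come from the question, the variable names are my own):

```python
from collections import deque

rows = [(10, 30, 1), (30, 0, 10), (20, 30, 2), (50, 40, 3), (40, 30, 4)]
parent = {i: p for i, p, w in rows}
sub = {i: w for i, p, w in rows}           # subtree weight, seeded with own weight
children_left = {i: 0 for i, p, w in rows}
for i, p, w in rows:
    if p != 0:
        children_left[p] += 1              # count direct children

q = deque(i for i in children_left if children_left[i] == 0)  # start from leaves
while q:
    i = q.popleft()
    p = parent[i]
    if p != 0:
        sub[p] += sub[i]                   # push the finished subtree weight up
        children_left[p] -= 1
        if children_left[p] == 0:          # parent is now complete
            q.append(p)
print(sub)
```

Every node is enqueued and dequeued exactly once, so both passes are O(n), matching the claimed bound.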
class TreeNode {
    int id;
    int parentId;
    int weight;

    public TreeNode(int id, int parentId, int weight) {
        this.id = id;
        this.parentId = parentId;
        this.weight = weight;
    }
}

public TreeNode[] calWeight(String[] strs) {
    HashMap<Integer, Integer> childParentMap = new HashMap<Integer, Integer>();
    HashMap<Integer, TreeNode> idNodeMap = new HashMap<Integer, TreeNode>();
    HashSet<Integer> visited = new HashSet<Integer>();
    for (String str : strs) {
        String[] items = str.split(",");
        int id = Integer.valueOf(items[0]);
        int parent = Integer.valueOf(items[1]);
        int weight = Integer.valueOf(items[2]);
        TreeNode cNode;
        if (idNodeMap.containsKey(id)) {
            cNode = idNodeMap.get(id);
            cNode.weight += weight;
            cNode.parentId = parent;
        } else {
            cNode = new TreeNode(id, parent, weight);
            idNodeMap.put(id, cNode);
        }
        TreeNode pNode;
        childParentMap.put(id, parent);
        visited.add(id);
        if (parent == 0) {
            continue;
        }
        if (!idNodeMap.containsKey(parent)) {
            pNode = new TreeNode(parent, -1, weight);
            idNodeMap.put(parent, pNode);
        } else {
            int t = id;
            do {
                int p = childParentMap.get(t);
                pNode = idNodeMap.get(p);
                pNode.weight += idNodeMap.get(t).weight;
                t = p;
            } while (visited.contains(pNode.id) && childParentMap.containsKey(t)); // compare ids, not node objects
        }
    }
    TreeNode[] r = new TreeNode[strs.length];
    int index = 0;
    for (TreeNode tn : idNodeMap.values()) {
        r[index] = tn;
        index++;
    }
    return r;
}
Time complexity is O(n). Any node can be regarded as a child or a parent. As a child, it updates the weight of itself and its direct parent, which takes no more than 2n work in total. If a node has already been visited and is the parent of another node, we update the weights of its already-visited ancestors, which takes no more than n work. Hence the whole algorithm takes O(n).
#include <vector>
#include <iostream>
#include <stddef.h>
#include <map>
using namespace std;

class node {
public:
    node() {
    }
    node(int a, int b, int c, int d = 0) {
        id = a;
        pid = b;
        w = c;
        sum = d;
    }
    node& operator=(const node& n) {
        id = n.id;
        pid = n.pid;
        w = n.w;
        sum = n.sum;
        return *this; // the original omitted this return: undefined behavior
    }
    int id;
    int pid;
    int w;
    int sum;
};

int get_weight_sum(const map<int, vector<node> >& graph, int id, map<int, node>& csv) {
    int result = csv[id].w;
    map<int, vector<node> >::const_iterator it = graph.find(id);
    if (it != graph.end()) {
        const vector<node>& vec = it->second;
        for (int i = 0; i < vec.size(); i++) {
            result += get_weight_sum(graph, vec[i].id, csv);
        }
    }
    csv[id].sum = result;
    return result;
}

int get_weight(map<int, node>& csv) {
    map<int, vector<node> > graph;
    for (map<int, node>::iterator it1 = csv.begin(); it1 != csv.end(); it1++) {
        if (it1->second.id) {
            graph[it1->second.pid].push_back(it1->second);
        }
    }
    return get_weight_sum(graph, 0, csv);
}

int main() {
    map<int, node> csv;
    csv[0] = node(0, 0, 0);
    csv[10] = node(10, 30, 1);
    csv[30] = node(30, 0, 10);
    csv[20] = node(20, 30, 2);
    csv[50] = node(50, 40, 3);
    csv[40] = node(40, 30, 4);
    get_weight(csv);
    for (map<int, node>::iterator it1 = csv.begin(); it1 != csv.end(); it1++) {
        std::cout << "id:" << it1->first << " sum:" << it1->second.sum << std::endl;
    }
}
I think disturbing the struct Node as little as possible is better. Also, I would avoid making helper functions until absolutely necessary. So, first build a hash map (unordered_map in this case) from id to index in the list (a vector here), identifying the root along the way (O(n)).
Then create a data structure to store each parent's set of children (I used vector<stack<int> > child for this purpose). Again O(n).
Now run DFS on this iteratively. Recursion could be used here, but that would need an extra helper function. Again O(n).
Total: O(n).
#include <iostream>
#include <cstring>
#include <cstdio>
#include <vector>
#include <stack>
#include <unordered_map>
using namespace std;

bool debug = false;

struct Node {
    int id;
    int parent;
    int weight;
};

void printSubTreeWeight(vector<Node> nodes) {
    int size = nodes.size(), root, index, temp;
    vector<int> val(size, 0);
    vector<stack<int> > child(size);
    unordered_map<int, int> indexOf;
    vector<bool> lookup(size, false);
    for (int i = 0; i < size; ++i) {
        indexOf[nodes[i].id] = i;
        if (nodes[i].parent == 0) root = i;
    }
    for (int i = 0; i < size; ++i) {
        if (nodes[i].parent != 0) {
            child[indexOf[nodes[i].parent]].push(i);
        }
    }
    stack<int> s, sum;
    s.push(root);
    sum.push(nodes[root].weight);
    lookup[root] = true;
    while (!s.empty()) {
        index = s.top();
        if (!child[index].empty()) {
            int c_index = child[index].top();
            if (!lookup[c_index]) {
                s.push(c_index);
                sum.push(nodes[c_index].weight);
                lookup[c_index] = true;
            }
            child[index].pop();
        } else {
            val[index] = sum.top();
            temp = sum.top(); sum.pop();
            if (!sum.empty()) {
                temp += sum.top(); sum.pop();
                sum.push(temp);
            }
            s.pop();
        }
    }
    for (int i = 0; i < size; ++i) cout << val[i] << " ";
    cout << endl;
}

int main(int argc, char **argv)
{
    if (argc > 1 && strcmp(argv[1], "DEBUG") == 0) debug = true;
    vector<Node> v;
    int t, n;
    cin >> t;
    while (t--) {
        cin >> n;
        v = vector<Node>(n);
        for (int i = 0; i < n; ++i) cin >> v[i].id >> v[i].parent >> v[i].weight;
        printSubTreeWeight(v);
    }
    return 0;
}
I have tested it on the following test cases :
Input :
2
5 10 30 1 30 0 10 20 30 2 50 40 3 40 30 4
7 100 0 2 21 100 5 26 100 7 29 100 8 3 100 12 1 26 5 4 26 6
Output:
1 20 2 3 7
45 5 18 8 12 5 6
I think we can add a subweight and a subcount field to the structure, build a hash map of the nodes, and then repeatedly traverse the map: whenever a node's subcount equals 0, print it, add its subweight to its parent's subweight, and decrement the parent's subcount. Each traversal eliminates the current leaf nodes. Below is the code, just as an example.
The complexity is O(nh); if the tree is balanced, it would be O(n log n).
#include <iostream>
#include <unordered_map>
using namespace std;

struct node
{
    // structure properties
    int id;
    int parentId;
    int weight;
    int subweight;
    int subcount;
    // the constructor for node
    node(int id_, int parentId_, int weight_)
        : id(id_), parentId(parentId_), weight(weight_), subweight(weight_), subcount(0)
    {}
};

int main()
{
    unordered_map<int, node> node_map;
    node_map.insert(std::make_pair(10, node(10, 30, 1)));
    node_map.insert(std::make_pair(30, node(30, 0, 10)));
    node_map.insert(std::make_pair(20, node(20, 30, 2)));
    node_map.insert(std::make_pair(50, node(50, 40, 3)));
    node_map.insert(std::make_pair(40, node(40, 30, 4)));
    node_map.insert(std::make_pair(0, node(0, 0, 0)));
    typedef unordered_map<int, node>::iterator node_iter;
    // count each node's direct children (skip the sentinel root, id 0)
    for (node_iter iter = node_map.begin(); iter != node_map.end(); ++iter)
    {
        if (iter->first == 0)
            continue;
        node_iter p = node_map.find(iter->second.parentId);
        p->second.subcount++;
    }
    bool flag = false;
    while (!flag)
    {
        for (node_iter iter = node_map.begin(); iter != node_map.end();)
        {
            if (iter->second.subcount == 0)
            {
                cout << iter->first << " " << iter->second.subweight << endl;
                if (iter->first == 0)
                {
                    flag = true;
                    break;
                }
                node_iter p = node_map.find(iter->second.parentId);
                p->second.subweight += iter->second.subweight;
                p->second.subcount--;
                iter = node_map.erase(iter);
            }
            else
            {
                iter++;
            }
        }
    }
    return 0;
}
Here's a simple memoized Python solution that builds the tree (as a hash map sending each node to the list of its children) and prints the weight of every node in the tree:
from collections import defaultdict

def weights(triples):
    tree = defaultdict(list)
    node_weights = defaultdict(int)
    cache = {}
    for t in triples:
        node_weights[t[0]] = t[2]
        tree[t[1]].append(t[0])
    for t in triples:
        print(str(t[0]) + ": " + str(weight(tree, node_weights, cache, t[0])))

def weight(tree, node_weights, cache, id):
    if id in cache:
        return cache[id]
    w = node_weights[id]
    if id in tree:
        for child in tree[id]:
            w += weight(tree, node_weights, cache, child)
    cache[id] = w
    return w

weights([(10, 30, 1), (30, 0, 10), (20, 30, 2), (50, 40, 3), (40, 30, 4)])
I think it's not necessary to parse the CSV into a tree. I just sorted the entries by parent and accumulated the weights of the children (and the children of the children) for every parent. Time complexity O(nlogn). Written in C++.
Output:
0=20
30=10
40=3
Code:
#include <iostream>
#include <map>
#include <iterator>
#include <algorithm>
#include <vector>

struct node {
    node(int id, int parent_id, int weight)
        : id_(id), parent_id_(parent_id), weight_(weight) { }
    bool operator<(const node& other) const {
        return parent_id_ < other.parent_id_;
    }
    int id_;
    int parent_id_;
    int weight_;
};

std::map<int, int> find_node_tree_weights(std::vector<node>& nodes) {
    if (nodes.empty())
        return std::map<int, int>();
    // Sort the nodes by parent - O(nlogn)
    std::sort(nodes.begin(), nodes.end());
    // Loop through the nodes - O(n)
    std::map<int, int> parent_weights;
    for (auto it = nodes.rbegin(); it != nodes.rend(); it++) {
        const node& n = *it;
        // Accumulate this node's weight (and its already-known subtree weight) into its parent
        parent_weights[n.parent_id_] += n.weight_;
        if (parent_weights.count(n.id_))
            parent_weights[n.parent_id_] += parent_weights[n.id_];
    }
    return parent_weights;
}

int main() {
    std::vector<node> nodes{{10, 30, 1},
                            {30, 0, 10},
                            {20, 30, 2},
                            {50, 40, 3},
                            {40, 30, 4}};
    std::map<int, int> weights = find_node_tree_weights(nodes);
    for (const auto& p : weights) {
        std::cout << p.first << "=" << p.second << std::endl;
    }
    return 0;
}
I came up with this too, but while coding I realized it is not necessarily O(n log n): for each child of a parent you need a binary search to locate its index, and the same holds for their children, so the cost depends on the depth and the number of nodes. You might end up with something like n log n * log n. The average cost may be less, but that would demand a proper mathematical analysis.
And another thing: this doesn't create a dynamic set, so adding a node may require recalculation.
I don't think this works, and I don't even think the expected output you gave is right. The weight of 40 should be its own weight plus that of its subtree, so w(40) + w(50) = 4 + 3 = 7. Following this train of thought, the solution would be:
node 0: tree weight is 20
node 30: tree weight is 17
node 40: tree weight is 7
@Diego Giagio
Let's see the input and output:
Input =
10,30,1
30,0,10
20,30,2
50,40,3
40,30,4
Output=
node 0: tree weight is 10
node 30: tree weight is 7
node 40: tree weight is 3
First, you didn't count node X itself (the weight of a subtree rooted at X includes X's own weight).
Second, node 30 has three children; the statement in the question is not exactly clear, but I think node 30 should have a total weight of 20, not just the weight of its node-40 branch.
My idea is to use maps. It's O(n) time and O(n) space, where n is the number of nodes.
Given a list of nodes:
1. put the nodes in a hash map keyed by id
2. find the direct child nodes of each node and put them in a hash map
---1) make a new HashMap<Integer, ArrayList<Node>> (name it map)
---2) for each node, map.put(node.id, new ArrayList<Node>())
---3) for each node, map.get(node.parent).add(node)
3. make a new HashMap<Integer, Integer> (name it weight) with the id as key and the subtree weight as value
4. for each key->value in map, process each subelement in the value list
- meow January 09, 2014