zr.roman
Given a tree.
Each node is of type:public class Node { public IEnumerable<Node> Children { get; set; } }
Write a function GetAllNodes that returns a collection containing all nodes of the given tree (only the root node is given), i.e.:
IEnumerable<Node> allNodes = GetAllNodes( rootNode );
(The task is given in C#, but this is not a restriction, you may use any language you want).
- zr.roman in Russia
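One possible sketch, iterative to avoid deep recursion on degenerate trees (the TreeWalker class name and the null-Children guard are my own assumptions, not part of the question):

```csharp
using System.Collections.Generic;

public class Node { public IEnumerable<Node> Children { get; set; } }

public static class TreeWalker {
    // Breadth-first walk: collect each node, then queue its children.
    public static IEnumerable<Node> GetAllNodes( Node root ) {
        var result = new List<Node>();
        if ( root == null ) { return result; }
        var queue = new Queue<Node>();
        queue.Enqueue( root );
        while ( queue.Count > 0 ) {
            var node = queue.Dequeue();
            result.Add( node );
            if ( node.Children == null ) { continue; } // assumption: Children may be null
            foreach ( var child in node.Children ) {
                queue.Enqueue( child );
            }
        }
        return result;
    }
}
```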
Software Engineer / Developer | Algorithm
Implement class Queue using only 2 stacks,
i.e.:
class Queue<T> { private Stack<T> _stack1; private Stack<T> _stack2; /* public methods: Enqueue, Dequeue, Peek, Count; private methods, if any */ }
- zr.roman in Russia
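A sketch of the classic two-stack approach (renamed TwoStackQueue here to avoid clashing with System.Collections.Generic.Queue): one stack takes writes, the other serves reads, and elements migrate only when the read stack runs dry, which makes every operation amortized O(1).

```csharp
using System.Collections.Generic;

public class TwoStackQueue<T> {
    private readonly Stack<T> _inbox = new Stack<T>();  // takes Enqueue
    private readonly Stack<T> _outbox = new Stack<T>(); // serves Dequeue/Peek
    public int Count { get { return _inbox.Count + _outbox.Count; } }
    public void Enqueue( T item ) { _inbox.Push( item ); }
    public T Dequeue() { Shift(); return _outbox.Pop(); }
    public T Peek() { Shift(); return _outbox.Peek(); }
    // Move elements only when the outbox is empty; reversing the inbox
    // restores FIFO order.
    private void Shift() {
        if ( _outbox.Count > 0 ) { return; }
        while ( _inbox.Count > 0 ) { _outbox.Push( _inbox.Pop() ); }
    }
}
```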
Software Engineer / Developer | Algorithm
Implement class Stack using only 2 queues,
i.e.:
class Stack<T> { private Queue<T> _queue1; private Queue<T> _queue2; /* public methods: Push, Pop, Peek, Count; private methods, if any */ }
- zr.roman in Russia
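A sketch of the two-queue counterpart (renamed TwoQueueStack here to avoid clashing with the BCL types): Push is O(1), while Pop and Peek rotate all but the last element into the spare queue, costing O(n).

```csharp
using System.Collections.Generic;

public class TwoQueueStack<T> {
    private Queue<T> _main = new Queue<T>();
    private Queue<T> _spare = new Queue<T>();
    public int Count { get { return _main.Count; } }
    public void Push( T item ) { _main.Enqueue( item ); }
    public T Pop() {
        // Rotate everything except the most recently pushed element.
        while ( _main.Count > 1 ) { _spare.Enqueue( _main.Dequeue() ); }
        var top = _main.Dequeue();
        var tmp = _main; _main = _spare; _spare = tmp; // swap roles
        return top;
    }
    public T Peek() {
        var top = Pop();
        Push( top ); // re-pushing sends it to the back, which is the top position again
        return top;
    }
}
```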
Software Engineer / Developer | Algorithm
Imagine a man reading a book.
- zr.roman in Russia
He can perform only 2 possible reading actions:
1) read one page in a minute (careful reading),
2) read two pages in a minute (look through).
Nothing else is permitted.
Calculate the number of all possible ways to read a book with a given number of pages.
Example: given 3 pages.
Answer is 3 combinations, as follows:
1st: Careful reading (1) - careful reading (1) - careful reading (1),
2nd: Careful reading (1) - look through (2),
3rd: Look through (2) - careful reading (1).
SDE-2 Algorithm
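The count follows the Fibonacci recurrence: to finish n pages, the last action either covers one page (ways(n-1)) or two pages (ways(n-2)). An iterative sketch, O(n) time and O(1) space (class name invented here):

```csharp
public static class BookReading {
    // ways(n) = ways(n-1) + ways(n-2), with ways(0) = 1 (empty reading) and ways(1) = 1.
    public static long CountWays( int pages ) {
        if ( pages <= 0 ) { return 0; }
        long prev = 1, curr = 1;
        for ( int i = 2; i <= pages; i++ ) {
            long next = prev + curr;
            prev = curr;
            curr = next;
        }
        return curr;
    }
}
```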
It looks like an O(n^2) solution: each time we insert an element into the BIT structure, we have to recalculate at most n-2 existing elements in the worst case (quite a rare case, but still possible), and 0 elements in the best case. So the average cost is (n-2)/2 per insertion, and n*(n-2)/2 for the whole run.
Asymptotic complexity is O(n^2).
This is not just a running total as described on Wikipedia; the task involves considerably more complicated (combinatorial) logic.
C# implementation.
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading;
namespace PhoneBook {
public struct Person {
public Person( string name, string phoneNumber, string address ) {
Name = name;
PhoneNumber = phoneNumber;
Address = address;
}
public string Name { get; }
public string PhoneNumber { get; }
public string Address { get; }
}
public class Phonebook {
private readonly object _lock = new object();
private readonly ConcurrentDictionary<string, Person> _indexByName;
private readonly ConcurrentDictionary<string, Person> _indexByPhoneNumber;
public Phonebook( IEnumerable<Person> people ) {
_indexByName = new ConcurrentDictionary<string, Person>();
_indexByPhoneNumber = new ConcurrentDictionary<string, Person>();
foreach ( var p in people ) {
_indexByName.AddOrUpdate( p.Name, p, (s, pers) => pers );
_indexByPhoneNumber.AddOrUpdate( p.PhoneNumber, p, (s, pers) => pers );
}
}
public Person? LookupByName( string name ) {
Person pers;
var res = _indexByName.TryGetValue( name, out pers );
return res ? (Person?)pers : null;
}
public Person? LookupByPhoneNumber( string phoneNumber ) {
Person pers;
var res = _indexByPhoneNumber.TryGetValue( phoneNumber, out pers );
return res ? (Person?)pers : null;
}
public void AddPerson( Person person ) {
// the lock statement releases the lock even if an exception is thrown
lock ( _lock ) {
var res1 = LookupByName( person.Name );
var res2 = LookupByPhoneNumber( person.PhoneNumber );
if ( res1 != null && res2 != null ) { return; }
Person p;
if ( res1 != null ) {
// same name with another number: drop the stale phone entry
_indexByPhoneNumber.TryRemove( res1.Value.PhoneNumber, out p );
}
if ( res2 != null ) {
// same number with another name: drop the stale name entry
_indexByName.TryRemove( res2.Value.Name, out p );
}
_indexByName[ person.Name ] = person;
_indexByPhoneNumber[ person.PhoneNumber ] = person;
}
}
}
}
Sorry, but it looks like you are wrong.
I examined all possible trees that can be built from your picture.
They are (in-order traversals):
1) ECBDACBDFMEAG
2) ECBDAFCBDMEAG
3) CBDEACBDFMEAG
4) CBDEAFCBDMEAG
My solution works correctly on all these variants; the answer is true, because the resulting pair is {CBD, 434}.
Sorry, but your in-order traversal is wrong. Depending on the direction of the edges, it should start with either ECBDA... or CBDEA...
Also, from your picture it is obvious that the end of the in-order traversal must be ...MEAG.
But you wrote something different.
So I can say nothing definite about your conclusion, because it is based on a wrong premise.
Can you specify the tree more accurately, in notation like:
M( A ( E ( <left>, <right> ), F ( <left>, <right> ) ), A ( E, G ) )
The solution is to perform an in-order traversal and obtain 2 sequences:
the sequence of nodes and the sequence of levels.
Then find duplicate sub-sequences of nodes while comparing the corresponding sub-sequences of levels, additionally checking for a proper beginning and end of each sub-sequence (explained in examples 3 and 4).
If we find a pair (2 identical node sub-sequences with 2 identical level sub-sequences), then the answer is true (i.e. the tree has duplicate sub-trees).
Example 1:
D
| \
B B
| \ \ \
A C A C
In-order traversal result:
ABCDABC
212 0 212
We have 2 identical pairs, {ABC, 212} and {ABC, 212}, so the tree has duplicate sub-trees.
Example 2:
A
| \
B C
| \ \ \
A C B D
In-order traversal result:
ABCABCD
212 021 2
Though we have 2 duplicate node sub-sequences (ABC and ABC), the corresponding level sub-sequences are not identical: 212 vs 021. So there are no duplicate sub-trees in this tree.
Example 3:
D
| \
B B
| \ \
A C A
In-order traversal result:
ABCDAB
212 0 21
We have 2 identical pairs, {AB, 21} and {AB, 21}, but the additional check fails: the first AB is not a valid sub-sequence, because the level that follows 21 is 2, which means the sub-tree has one more node (since 2 > 1); the first pair should therefore be {ABC, 212}.
So {ABC, 212} and {AB, 21} are not identical, and again there are no duplicate sub-trees in the tree.
Example 4:
D
| \
B B
\ \ \
C A C
In-order traversal result:
BCDABC
12 0 212
We have 2 identical pairs, {BC, 12} and {BC, 12}, but the additional check fails: the second BC is not a valid sub-sequence, because the level preceding 1 is 2, which means the sub-tree has one more node (since 2 > 1); the second pair should therefore be {ABC, 212}.
So {BC, 12} and {ABC, 212} are not identical, and again there are no duplicate sub-trees in the tree.
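The traversal that produces the two sequences above can be sketched as follows (the BTNode type and class names are invented for illustration); the duplicate search then runs over labels and levels exactly as described in the examples:

```csharp
using System.Collections.Generic;
using System.Text;

public class BTNode {
    public char Label;
    public BTNode Left, Right;
    public BTNode( char label, BTNode left = null, BTNode right = null ) {
        Label = label; Left = left; Right = right;
    }
}

public static class InOrderWithLevels {
    // Standard in-order walk that records, for every node, its label and its depth.
    public static void Traverse( BTNode node, int level, StringBuilder labels, List<int> levels ) {
        if ( node == null ) { return; }
        Traverse( node.Left, level + 1, labels, levels );
        labels.Append( node.Label );
        levels.Add( level );
        Traverse( node.Right, level + 1, labels, levels );
    }
}
```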
C# implementation.
The double data type does not work here due to precision loss when rounding values; use decimal instead.
// returns { number of digits before the decimal point, number of digits after it }
static private int[] Get( decimal num ) {
int[] res = { 1/*before*/, 0/*after*/ };
var tmpNum = num / 10;
while ( tmpNum > 1 ) { res[ 0 ]++; tmpNum /= 10; }
tmpNum = num * 10;
while ( tmpNum % 10 != 0 ) { res[ 1 ]++; tmpNum *= 10; }
return res;
}
The logic is simple. By induction we know the 4 non-zero elements:
a[0] = k,
a[k] = 1,
a[1] = 2 (we want to write 1, but cannot, because we already have one 1, so we write 1+1),
a[2] = 1.
So the remaining n-4 elements are zeros.
But the first line of our induction analysis says a[0] = k, and we already know that 0 occurs n-4 times.
So k = n-4, and we can substitute:
a[0] = n-4,
a[n-4] = 1,
a[1] = 2,
a[2] = 1.
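A sketch of the resulting construction together with a brute-force self-description check, assuming the puzzle is to build an array where a[i] equals the number of occurrences of value i (class and method names invented here; valid for n >= 7, so that index n-4 does not collide with 0, 1, 2):

```csharp
public static class SelfDescribingArray {
    // Builds the array from the four non-zero entries derived above.
    public static int[] Build( int n ) {
        var a = new int[ n ];
        a[ 0 ] = n - 4;
        a[ 1 ] = 2;
        a[ 2 ] = 1;
        a[ n - 4 ] = 1;
        return a;
    }
    // O(n^2) verification: a[v] must equal the number of occurrences of v.
    public static bool IsSelfDescribing( int[] a ) {
        for ( int v = 0; v < a.Length; v++ ) {
            int count = 0;
            for ( int i = 0; i < a.Length; i++ ) {
                if ( a[ i ] == v ) { count++; }
            }
            if ( a[ v ] != count ) { return false; }
        }
        return true;
    }
}
```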
C#, Newton's approximation.
private static double GetRootCubeNewton( double num ) {
const double accuracy = 0.0000000000001;
// Newton's iteration for f(x) = x^3 - num: x_next = (2*x + num / x^2) / 3
double x = 1;
while ( Math.Abs( x * x * x - num ) >= accuracy ) {
x = ( 2 * x + num / ( x * x ) ) / 3;
}
return x;
}
1) Initialize 2 counters, one per given node.
2) In 2 consecutive loops, walk from n1 up to the root and from n2 up to the root, incrementing the respective counter in each loop and testing whether n1 lies on the path from n2 to the root, and vice versa.
3) Compare the 2 roots obtained: if they are not the same, return NULL.
If they are the same, compare the counters.
If either counter is 0, the answer is the root.
If n1 lies on the path from n2 to the root (or vice versa), take the node with the smaller counter and return its parent; this is the answer.
Otherwise, subtract the smaller counter from the bigger one: this is the number of upward steps to perform, starting from the node with the bigger counter, to reach the level of the other given node. Once both pointers are on the same level, move them upward together in 1 loop until they hit the common parent.
If the counters are equal and non-zero, the given nodes already lie on the same level, so move both pointers upward in 1 loop until they hit the common parent.
Time: O(log n) for a balanced tree.
Space: O(1).
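The walk-up procedure can be sketched as follows, assuming each node exposes a Parent reference (the PNode and Lca names are invented here). Note that this variant returns the ancestor node itself when one given node is an ancestor of the other, rather than its parent:

```csharp
public class PNode {
    public PNode Parent;
}

public static class Lca {
    // Counts steps to the root and reports which root was reached.
    private static int DepthToRoot( PNode n, out PNode root ) {
        int d = 0;
        while ( n.Parent != null ) { n = n.Parent; d++; }
        root = n;
        return d;
    }
    public static PNode Find( PNode n1, PNode n2 ) {
        PNode root1, root2;
        int d1 = DepthToRoot( n1, out root1 );
        int d2 = DepthToRoot( n2, out root2 );
        if ( root1 != root2 ) { return null; } // nodes belong to different trees
        // Lift the deeper pointer to the same level, then climb together.
        while ( d1 > d2 ) { n1 = n1.Parent; d1--; }
        while ( d2 > d1 ) { n2 = n2.Parent; d2--; }
        while ( n1 != n2 ) { n1 = n1.Parent; n2 = n2.Parent; }
        return n1;
    }
}
```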
BFS, obviously.
C# implementation.
using System.Collections.Generic;
using System.Linq;
namespace UndirectedGraphToDAG {
class Program {
static private void ConvertUndirectedGraphToDAG( Node node ) {
// BFS by layers: every edge is directed from the current layer to the
// previous layer (or to an already-processed node of the same layer),
// so no cycle can appear.
var currList = new List<Node> { node };
var prevList = new List<Node>();
while ( currList.Any() ) {
var nextLayer = new List<Node>();
foreach ( var item in currList ) {
item.FromNodes = new List<Node>();
item.ToNodes = prevList.Count == 0 ? null : new List<Node>() ;
item.FillToNodes( prevList );
item.FillFromNodes( currList, nextLayer );
prevList.Add( item ); // lets later same-layer nodes direct their edges here
}
prevList = currList;
currList = nextLayer;
}
}
public class Node {
public List<Node> Nodes = new List<Node>();
public List<Node> ToNodes = null;
public List<Node> FromNodes = null;
public void FillToNodes(List<Node> prevList) {
foreach ( var prev in prevList ) {
if ( Nodes.Contains( prev ) ) {
ToNodes.Add( prev );
}
}
}
public void FillFromNodes( List<Node> currList, List<Node> nextLayer ) {
foreach ( var connNode in Nodes ) {
if ( ToNodes != null && ToNodes.Contains( connNode ) ) { continue; }
FromNodes.Add( connNode );
if ( currList.Contains( connNode ) ) { continue; }
nextLayer.Add( connNode );
}
}
}
static void Main(string[] args) { }
}
}
Really, there is something wrong with the task. Could the author please clarify?
Firstly, as I mentioned above, a radar works differently than described in the task.
Secondly, the API signature is "void scan(const Plane &P)", i.e. the return value is void.
But the task requires that the Scan function be able, at any given time, to RETURN the 100 planes closest to the tower (0,0).
Sorry, I did not quite catch your idea.
"each time a plane is detected ... heap must be rebuild" - how should that work?
Let's model the radar's work: the radar starts scanning space from point {0,0}; the next step is point {1,0}, and say at this point the radar finds the first nearest plane. So which heap must be rebuilt?
"what if there are less than 100 planes within radius RadiusLimit?" - then the output will contain the actual number of nearest planes.
"what if there is no plane within radius RadiusLimit?" - that is fine, the output will be an empty list.
Every radar has a limit: no radar can scan infinite space.
Weak radars can scan, say, 10 km; powerful radars can scan 100 km or more. A limit exists for every radar.
This means that if we have a weak radar (10 km range) and the nearest plane is 11 km away, the radar will not detect it.
The task is unclear.
"void scan(const Plane &P) that is called periodically whenever the radar detects a plane" - a radar works differently: it scans the sky all the time, trying to detect planes, and reports all planes within its scope on each scan.
The API should be:
List<Plane> scan();
For example (C#):
using System;
using System.Collections.Generic;
namespace Radar {
class Program {
readonly static List<Plane> FlyingPlanes = new List<Plane> {/*add planes which are in the sky now*/};
private const int RadiusLimit = 10000;
private const int RadiusStep = 1;
private const int AngleStep = 1;
private const int PlanesLimit = 100;
private static List<Plane> Scan() {
var nearestPlanes = new List<Plane>();
for ( int radius = 1; radius < RadiusLimit; radius += RadiusStep ) {
for (int angle = 0; angle < 360; angle += AngleStep) {
var radians = Math.PI * angle / 180;
double x = Math.Cos( radians ) * radius;
double y = Math.Sin( radians ) * radius;
var potentialPlane = new Plane { x = x, y = y };
// NB: relies on exact equality of double coordinates; a real radar would
// match planes within a tolerance of the sampled point
if ( FlyingPlanes.Contains( potentialPlane ) ) {
nearestPlanes.Add( potentialPlane );
}
if ( nearestPlanes.Count >= PlanesLimit ) {
return nearestPlanes;
}
}
}
return nearestPlanes;
}
private struct Plane {
public double x;
public double y;
}
static void Main(string[] args) {
var nearestPlanes = Scan();
}
}
}
C# implementation.
static private string MakePalindrome( string str ) {
var sb = new StringBuilder();
for ( int i = str.Length - 1; i >= 0; i-- ) {
sb.Append( str[ i ] );
}
var reverse = sb.ToString();
int maxCnt = 0;
// find the longest palindromic suffix of str, i.e. the longest suffix of str
// that equals a prefix of reverse
// (use KMP instead; simple brute force is used here not to blow up the code)
for ( int i = 0; i < str.Length; i++ ) {
if ( str[ i ] != reverse[ 0 ] ) { continue; }
int tmpI = i;
while ( tmpI < str.Length && str[ tmpI ] == reverse[ tmpI - i ] ) {
tmpI++;
}
// only a match that runs to the very end of str certifies a palindromic suffix
if ( tmpI == str.Length && tmpI - i > maxCnt ) { maxCnt = tmpI - i; }
}
return str + reverse.Substring( maxCnt, reverse.Length - maxCnt );
}
C# implementation: dynamic programming with divide and conquer, O(n log n).
using System;
namespace ZigZagArrange {
public static class Program {
static public int[] ZigZagArrange( int[] arr ) {
if ( arr.Length < 2 ) { return arr; }
int[] part1 = GetReArrangedPart1( arr );
int[] part2 = GetPart2AsIs( arr, part1.Length );
if ( part2 != null ) {
part2 = ZigZagArrange( part2 );
}
return Concat( part1, part2 );
}
static private int[] GetReArrangedPart1( int[] arr ) {
int i = 1;
while( arr[ 0 ] == arr[ i ] && i < arr.Length - 1 ) { i++; } // handle duplicates
int fMinI = arr[ 0 ] > arr[ i ] ? i : 0 ;
int fMaxI = arr[ 0 ] < arr[ i ] ? i : 0 ;
if ( fMaxI == 0 ) {
fMinI = getfMinI( arr, i );
} else {
fMaxI = getfMaxI( arr, i );
}
int[] res = new int[ Math.Abs( fMaxI - fMinI ) + 1 ];
for ( i = 0; i <= Math.Max( fMinI, fMaxI ); i++ ) {
res[ i ] = arr[ i ];
}
res = ReArrange( res, fMaxI, fMinI );
return res;
}
private static int[] ReArrange( int[] arr, int fMaxI, int fMinI ) {
int[] res = new int[ arr.Length ];
int i = 0;
while ( i < res.Length ) {
res[ i++ ] = arr[ fMaxI ];
if ( i >= res.Length ) { break; }
res[ i++ ] = arr[ fMinI ];
fMaxI = fMaxI > fMinI ? fMaxI - 1 : fMaxI + 1;
fMinI = fMinI > fMaxI ? fMinI - 1 : fMinI + 1;
}
return res;
}
private static int[] Concat( int[] part1, int[] part2 ) {
if ( part1 == null ) { return part2; }
if ( part2 == null ) { return part1; }
int[] res = new int[ part1.Length + part2.Length ];
for ( int i = 0; i < part1.Length; i++ ) {
res[ i ] = part1[ i ];
}
for ( int i = 0; i < part2.Length; i++ ) {
res[ part1.Length + i ] = part2[ i ];
}
return res;
}
private static int[] GetPart2AsIs( int[] arr, int p2Length ) {
if ( arr.Length == p2Length ) { return null; }
int[] res = new int[ arr.Length - p2Length ];
for ( int i = p2Length; i < arr.Length; i++ ) {
res[ i - p2Length ] = arr[ i ];
}
return res;
}
static private int getfMinI( int[] arr, int i ) {
if ( i == arr.Length - 1 ) { return i; }
if ( arr[ i ] >= arr[ i + 1 ] ) {
return getfMinI( arr, i + 1 );
}
return i;
}
static private int getfMaxI( int[] arr, int i ) {
if ( i == arr.Length - 1 ) { return i; }
if ( arr[ i ] <= arr[ i + 1 ] ) {
return getfMaxI( arr, i + 1 );
}
return i;
}
static void Main(string[] args) { }
}
}
The problem with this solution is that it makes too many assumptions.
For example, consider a situation where you are given a third-party library containing class Node, and you learn that the method GetChildren() returns child nodes in random order each time. In this case your solution will not work.
Another assumption is that the foreach iterator returns the elements of the collection in the desired order. In general this is not true; it depends on the collection's container. Consider a container with random element order: the solution again will not work.
In general, the problem is dependency on these assumptions.
C# implementation.
The recursion depth is at most 2n, so the space complexity is O(n).
(Note that without memoization the running time is exponential.)
using System;
namespace AllPossiblePaths {
class Program {
static private int Get( int i, int j, int len ) {
int res = 0;
if ( i == j && j == len - 1 ) {
return 1;
}
if ( j < len - 1 ) {
res += Get( i, j + 1, len );
}
if ( i < len - 1 ) {
res += Get( i + 1, j, len );
}
return res;
}
static void Main( string[] args ){
var res = Get( 0, 0, 8 );
Console.WriteLine( res );
Console.ReadLine();
}
}
}
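The recursive version above recomputes the same (i, j) sub-problems exponentially many times; a bottom-up memoized sketch (class name invented here) gives the same count, the binomial coefficient C(2*(len-1), len-1), in O(len^2) time:

```csharp
public static class LatticePaths {
    // memo[i, j] = number of monotone paths from (i, j) to (len-1, len-1),
    // filled from the target corner backwards.
    public static long Count( int len ) {
        var memo = new long[ len, len ];
        for ( int i = len - 1; i >= 0; i-- ) {
            for ( int j = len - 1; j >= 0; j-- ) {
                if ( i == len - 1 && j == len - 1 ) { memo[ i, j ] = 1; continue; }
                long res = 0;
                if ( j < len - 1 ) { res += memo[ i, j + 1 ]; }
                if ( i < len - 1 ) { res += memo[ i + 1, j ]; }
                memo[ i, j ] = res;
            }
        }
        return memo[ 0, 0 ];
    }
}
```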
Another C# implementation.
Circular algorithm: the solution replaces the elements one by one, in a genuinely circular manner.
private static void ShiftRightNew( int[,] arr ) {
int first = 0, row = 0, col = 0, rowt2 = 0, colt2 = 0, pass = 0;
var last = arr.GetLength( 0 ) - 1;
var tmp1 = arr[ first, first ];
var tmp2 = arr[ first, first + 1 ];
while ( last - first > 0 ) {
Get( ref row, ref col, ref rowt2, ref colt2, ref pass, first, last );
arr[ row, col ] = tmp1;
tmp1 = tmp2;
tmp2 = arr[ rowt2, colt2 ];
if ( rowt2 == first && colt2 == first + 1 ) {
first++;
last--;
tmp1 = arr[ first, first ];
tmp2 = arr[ first, first + 1 ];
row = first; col = first; rowt2 = first; colt2 = first;
}
}
}
private static void Get( ref int row, ref int col, ref int rowt2, ref int colt2, ref int pass, int first, int last ) {
if ( pass == 0 ) {
if ( rowt2 == first && ( colt2 == last ) ) {
rowt2 = first + 1;
col = last;
pass++;
return;
}
col++;
if ( last - first == 1 ) {
colt2++; rowt2++; pass++;
} else { colt2 = col + 1; }
return;
}
if ( pass == 1 ) {
if ( rowt2 == last && colt2 == last ) {
colt2 = last - 1;
row = last;
pass++;
return;
}
row++;
rowt2 = row + 1;
return;
}
if ( pass == 2 ) {
if ( rowt2 == last && colt2 == first ) {
rowt2 = last - 1;
col = first;
pass++;
return;
}
col--;
colt2 = col - 1;
return;
}
if ( pass == 3 ) {
if ( rowt2 == first && colt2 == first ) {
colt2 = first + 1;
row = first;
pass = 0;
return;
}
row--;
rowt2 = row - 1;
}
}
According to the task, the string may be up to 20 GB long.
Your solution will crash when it tries to get stream.length: with a 20 GB input string, stream.length would be about 10 billion, but stream.length is of integer type, whose upper bound is approximately 2 billion.
C# implementation.
public static List<Node> GetLongestSequence( Node node ) {
var res = GetLongestForIthElement( node );
var hashSet = new HashSet<int>();
var self = node;
var curr = node.Next;
while ( curr != self ) {
if ( hashSet.Contains( curr.Value ) ) {
curr = curr.Next; continue;
}
hashSet.Add( curr.Value );
var tmp = GetLongestForIthElement( curr );
if ( tmp.Count > res.Count ) {
res = tmp;
}
curr = curr.Next;
}
return res;
}
public static List<Node> GetLongestForIthElement( Node node ) {
var list1 = new List<Node> { node };
var list2 = new List<Node>();
var tmp = list1;
var N = node.Value;
var self = node;
var curr = node.Next;
while ( curr != self ) {
if ( curr.Value != N ) {
tmp.Add( curr );
curr = curr.Next;
continue;
}
if ( list1.Count >= list2.Count ) {
list2.Clear();
list2.Add( curr );
tmp = list2;
} else {
list1.Clear();
list1.Add( curr );
tmp = list1;
}
curr = curr.Next;
}
if ( list2.Count == 0 ) { return new List<Node>(); }
return list1.Count >= list2.Count ? list1 : list2;
}
Actually, O(n log n) is possible.
- zr.roman, December 31, 2015
Here (/question?id=5631660689195008) is an O(n log n) solution for creating a new array as follows: for each element, how many elements greater than it remain in the array.
So we can apply this approach twice:
1) build a first array containing, for each element of the initial array, how many previous elements are greater than it;
2) build a second array containing, for each element, how many remaining elements are smaller than it.
Then make one pass through the 2 new arrays simultaneously, multiplying the values, to calculate the count of inversions.
The complexity will be O(n log n).
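The two auxiliary arrays can be built with a Fenwick (binary indexed) tree; a sketch, assuming the input has already been compressed to distinct ranks 1..n (class and method names invented here). Each product greaterBefore[j] * smallerAfter[j] counts the decreasing triples centered at position j, and summing the products gives the total:

```csharp
public static class TripleInversions {
    // Fenwick tree over ranks: point increment, prefix-count query.
    private static void Add( int[] bit, int idx ) {
        for ( ; idx < bit.Length; idx += idx & -idx ) { bit[ idx ]++; }
    }
    private static int Sum( int[] bit, int idx ) { // count of seen values <= idx
        int s = 0;
        for ( ; idx > 0; idx -= idx & -idx ) { s += bit[ idx ]; }
        return s;
    }
    public static long Count( int[] ranks ) {
        int n = ranks.Length;
        var greaterBefore = new int[ n ];
        var smallerAfter = new int[ n ];
        var bit = new int[ n + 1 ];
        for ( int j = 0; j < n; j++ ) {
            greaterBefore[ j ] = j - Sum( bit, ranks[ j ] ); // previous elements above ranks[j]
            Add( bit, ranks[ j ] );
        }
        bit = new int[ n + 1 ];
        for ( int j = n - 1; j >= 0; j-- ) {
            smallerAfter[ j ] = Sum( bit, ranks[ j ] - 1 ); // later elements below ranks[j]
            Add( bit, ranks[ j ] );
        }
        long total = 0;
        for ( int j = 0; j < n; j++ ) {
            total += (long)greaterBefore[ j ] * smallerAfter[ j ];
        }
        return total;
    }
}
```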