emb
1 of 1 vote
You have an array of unique integers and a single operation, MoveToFront(x), which moves the given number to the beginning of the array.
Write a program to sort the array using the minimum possible number of MoveToFront() calls.
- emb in United States
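Not from the original thread, but a minimal sketch of one common greedy answer: sorting ascending means calling MoveToFront on values in decreasing order, and the only values that never need moving are the longest run of top values n, n-1, ... that already appear in increasing relative order (`min_move_to_front_calls` is my own hypothetical helper name).

```python
def min_move_to_front_calls(arr):
    # Counts the minimum number of MoveToFront calls (sketch, assumes unique values).
    if not arr:
        return 0
    pos = {v: i for i, v in enumerate(arr)}
    ordered = sorted(arr)
    # Walk down from the largest value while the relative order is already correct.
    chain = 1
    for smaller, larger in zip(reversed(ordered[:-1]), reversed(ordered)):
        if pos[smaller] < pos[larger]:
            chain += 1
        else:
            break
    # Each remaining value is moved to the front exactly once, in decreasing order.
    return len(arr) - chain

print(min_move_to_front_calls([3, 1, 2, 4]))  # 2: MoveToFront(2), then MoveToFront(1)
```

For [3, 1, 2, 4] the values 3 and 4 already stand in increasing relative order, so only 2 and 1 need a call each.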
Facebook Software Engineer Algorithm - 6 of 6 votes
You are given a matrix with N rows and N columns. Each element of the matrix is either 1 or 0, and each row and column is sorted in ascending order.
Find the number of 0s in the given matrix.
Example:
0 0 1
0 1 1
1 1 1
Answer: 3

0 0
0 0
Answer: 4
Update: Expected complexity is O(log(N)). The best I've seen in comments is still O(N).
Update 2: Alright, guys, sorry for a bit of trolling. Obviously this is not possible to do faster than O(N). Here is why: take the diagonal (N, 1), (N-1, 2), ..., (1, N). Suppose the input matrix has all 0s above this diagonal and all 1s below it, so only the diagonal elements vary. Clearly, the diagonal elements do not depend on each other, so we have to examine each of them, which is O(N).
Nice job, @gen-y-s :)
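To complement the O(N) lower bound argued above, here is a sketch of the matching O(N) staircase walk (my own illustration, not from the thread): since rows and columns are sorted ascending, the zeros of each row form a prefix, and that prefix can only shrink as we move down the rows, so a single column pointer suffices.

```python
def count_zeros(matrix):
    # Rows and columns are sorted ascending, so in each row the zeros come
    # first, and the zero-prefix can only shrink as we move down the rows.
    zeros = 0
    col = len(matrix[0]) - 1 if matrix else -1
    for row in matrix:
        while col >= 0 and row[col] == 1:
            col -= 1
        zeros += col + 1
    return zeros

print(count_zeros([[0, 0, 1], [0, 1, 1], [1, 1, 1]]))  # 3
print(count_zeros([[0, 0], [0, 0]]))                   # 4
```

The column pointer only ever moves left, so the total work is O(N) row steps plus O(N) pointer steps.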
Google Software Developer Algorithm - 0 of 2 votes
A robot on a plane has two types of commands:
1. move forward by X units (X is an integer, 0 <= X <= 10000)
2. rotate by X degrees (X is an integer in range [-180, 180])
The robot's control loop looks like:

def robot(commands):
    while True:
        for command in commands:
            execute(command)

Given a list of commands (of size <= 10000), tell if it's possible to build a wall around the robot such that it will never touch the wall.
Example:
[move(10), rotate(180), move(10)] -> answer is yes
[move(10), rotate(45), move(10), rotate(-45), move(10), rotate(45)] -> answer is no
Google Software Developer Brain Teasers - 2 of 2 votes
You are given a range [first, last], initially white. You need to paint it black.
For this purpose you have a set of triples
[(f, l, cost), ...] - where each triple means that you can paint the range [f, l] for `cost` coins (limitations: cost is a floating point number >= 0; f, l, first, last are integers).
Find the minimum cost needed to paint the whole range [first, last], or return -1 if it's impossible.
Example: [first, last] = [0, 5] and the set of triples is [[0, 5, 10], [0, 4, 1], [0, 2, 5], [2, 5, 1]]
Clearly the answer is to take [0, 4, 1] and [2, 5, 1] - the total cost will be 2.
Another example: [first, last] = [0, 5], triples are [[1, 4, 10], [2, 5, 6]]
The answer is -1, because it's impossible to paint the whole range.
- emb in United States
Google Software Developer Algorithm - 8 of 8 votes
You are given a graph; some edges are black, some are red. Find a spanning tree with one restriction: if we take some node as root, every path from it to a leaf must consist of alternating red-black-red-black edges. That is, no path from the root to a leaf may contain two sequential black edges or two sequential red edges.
You are guaranteed that such a spanning tree exists.
Google Software Developer Algorithm - 0 of 0 votes
There are N coins with coordinates (x, y), where x > 0 and y > 0.
You start at (0, 0) and can only make steps of the form (dx, dy), where dx > 0 and dy > 0.
Print the maximum number of coins that you can collect.
Clarification: you can make as many moves as you wish; the goal is to collect the maximum number of coins. If you are located at position (a, b), you may jump to position (a+dx, b+dy) for any dx > 0 and dy > 0.
@krbchd: Your algorithm may output incorrect values. Suppose there are points (5, 7), (5, 8), (5, 9). An LIS over the y coordinates will output 7, 8, 9; however, since these points share the same x coordinate, you can choose only one of them.
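The standard repair for that bug, sketched below (my code, not @krbchd's): sort by x ascending and break ties by y descending, then take a strictly increasing LIS over y - the tie-break makes it impossible to pick two coins from the same column, and `bisect_left` keeps the subsequence strictly increasing in y as the strict dy > 0 requires.

```python
import bisect

def max_coins(points):
    # Sort by x ascending; for equal x sort y descending, so that two points
    # in the same column can never both appear in an increasing y-subsequence.
    points = sorted(points, key=lambda p: (p[0], -p[1]))
    tails = []  # tails[k] = smallest possible tail of an increasing run of length k+1
    for _, y in points:
        i = bisect.bisect_left(tails, y)  # bisect_left => strictly increasing
        if i == len(tails):
            tails.append(y)
        else:
            tails[i] = y
    return len(tails)

print(max_coins([(5, 7), (5, 8), (5, 9)]))          # 1
print(max_coins([(1, 1), (2, 2), (5, 7), (5, 8)]))  # 3
```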
Facebook Software Developer Algorithm - 9 of 9 votes
Given a packed file with 1 TB of 64-bit doubles (the first 8 bytes are the first double, the next 8 bytes are the next, etc.), find the exact value of the median. For simplicity, assume the number of doubles is odd.
You can't modify the file and you have only 8 GB of free memory.
Update: you may use no more than two passes through the file, and your algorithm shouldn't rely on any particular nature of the file - it should work in all cases.
Google Software Developer Coding - 0 of 0 votes
Given an array of numbers, find the longest alternating subsequence. That is, a subsequence [a1, a2, a3, ..., ak] where a1 > a2, a2 < a3, a3 > a4, ... or vice versa (graphically it looks like /\/\/\... or \/\/\/...).
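A sketch of the classic O(n) greedy (my addition, not from the thread): the answer is one plus the number of sign changes between consecutive differences, since within any monotone run only one element can ever contribute to an alternating subsequence.

```python
def longest_alternating(nums):
    if not nums:
        return 0
    length = 1
    prev_sign = 0  # sign of the last counted difference: 0 none, +1 up, -1 down
    for a, b in zip(nums, nums[1:]):
        if b > a and prev_sign <= 0:
            length += 1
            prev_sign = 1
        elif b < a and prev_sign >= 0:
            length += 1
            prev_sign = -1
    return length

print(longest_alternating([1, 7, 4, 9, 2, 5]))  # 6
```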
Google Software Developer Algorithm - 0 of 0 votes
Given a set of numbers {x1, x2, x3, x4, ..., xN} (N >= 3), the set of its pairwise sums is {x1+x2, x1+x3, x1+x4, x2+x3, x2+x4, x3+x4, ...}. (That is, s_k = x_i + x_j where i != j.)
Restore a set of numbers given the set of its pairwise sums.
Note: given some k, you don't know to which i and j it refers (i.e. the input is given in undefined order).
EDIT: couldn't comment, so here is a clarification.
Example: S = {1, 5, 10, 100} (n elements), P = {6, 11, 101, 15, 105, 110} (n * (n - 1) / 2 elements)
Given P you have to restore S.
The note means that if you knew which element of P corresponded to which pair of indices in S, you could just solve a simple system of linear equations:
x1 + x2 = a{k1}, x2 + x3 = a{k2}, ..., x{n-1} + x{n} = a{k{n-1}}, x{n} + x1 = a{k{n}}
Facebook Intern - 5 of 5 votes
You are given a function bool rand_bit_p() that returns true with some unknown probability p and false with probability 1 - p.
Write a function rand_bit() using rand_bit_p that will return true and false with equal probability (that is, implement a fair coin given an unfair coin).
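The textbook answer is the von Neumann extractor; below is a sketch with a simulated biased source (`make_rand_bit` and the demo source are my own names, not from the thread): draw two bits, return the first if they differ, retry if they are equal - both surviving outcomes occur with probability p*(1-p).

```python
import random

def make_rand_bit(rand_bit_p):
    def rand_bit():
        while True:
            a, b = rand_bit_p(), rand_bit_p()
            if a != b:
                return a  # P(True, False) == P(False, True) == p * (1 - p)
    return rand_bit

# Simulated biased source for demonstration (p = 0.9).
biased = lambda: random.random() < 0.9
fair = make_rand_bit(biased)
heads = sum(fair() for _ in range(10000))
print(heads)  # should be close to 5000
```

The expected number of biased draws per fair bit is 1 / (p * (1 - p)), so the method gets slow as p approaches 0 or 1, but it never biases the output.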
Google Software Engineer Algorithm - 6 of 6 votes
Given a sorted array of size N of int32, find an element that repeats > ceil(N / 2) times. Your algorithm may assume that such an element always exists. Space/time O(1).
Follow-up question: now the element repeats > ceil(N / 4) times. Space/time O(1).
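A sketch of the usual observation (my addition): a value occupying more than half of a sorted array must cover the middle index, so the main question is literally one array access. For the N/4 follow-up the candidates are the three quartile positions; strictly O(1)-time verification is trickier, so this sketch verifies each candidate with binary search in O(log N) time and O(1) space.

```python
import bisect

def majority_sorted(arr):
    # > ceil(N/2) copies of a value must cover the middle index of a sorted array.
    return arr[len(arr) // 2]

def repeats_over_quarter(arr):
    # A value with > ceil(N/4) copies must cover one of the quartile indices.
    n = len(arr)
    threshold = -(-n // 4)  # ceil(n / 4)
    for candidate in (arr[n // 4], arr[n // 2], arr[3 * n // 4]):
        count = bisect.bisect_right(arr, candidate) - bisect.bisect_left(arr, candidate)
        if count > threshold:
            return candidate
    return None

print(majority_sorted([1, 2, 2, 2, 3]))               # 2
print(repeats_over_quarter([1, 1, 1, 2, 5, 6, 7, 8])) # 1
```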
Google Intern - 1 of 1 vote
Given an integer k and a subset S of the set {0, 1, 2, ..., 2^k - 1}:
Return the count of pairs (a, b) where a and b are from S, a < b, and a & b == a.
Here & is bitwise AND.
Do it faster than O((2^k)^2); assume k <= 16.
Example:
0b111
0b101
0b010
Answer: 2
0b110
0b011
0b101
Answer: 0
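One way to beat O((2^k)^2), sketched below (my code, consistent with the 3^k-pairs discussion later in the thread): for each b in S, enumerate its proper submasks with the standard `(sub - 1) & b` trick and test membership. Total work is bounded by the sum over b of 2^popcount(b), which is at most 3^k over all possible b.

```python
def count_submask_pairs(S):
    members = set(S)
    count = 0
    for b in members:
        if b == 0:
            continue  # 0 has no proper submasks
        sub = (b - 1) & b  # largest proper submask of b
        while True:
            if sub in members:
                count += 1
            if sub == 0:
                break
            sub = (sub - 1) & b
    return count

print(count_submask_pairs([0b111, 0b101, 0b010]))  # 2
print(count_submask_pairs([0b110, 0b011, 0b101]))  # 0
```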
Facebook Software Engineer Algorithm - 2 of 2 votes
You are given a set of points on the x axis (consumers).
Also you are given a set of points on a plane (producers).
For every consumer, print the nearest producer.
Wanted something better than O(n^2) time.
Example:
consumers: 1 5 7
producers: (0, 3), (1,1), (3, 2), (8, 10), (9, 100)
Answer:
for 1 nearest producer is (1, 1), for 5 nearest is (3, 2), for 7 nearest is (3, 2)
Follow-up question: now both sets are sorted by x coordinate. Could you come up with a linear algorithm?
Facebook Software Engineer Algorithm - 0 of 0 votes
Given n, return 1 ^ 2 ^ 3 ^ ... ^ n
Where ^ is binary xor.
Note: n is a 64-bit number, and 1<<63 is a valid n for this problem.
Examples:
>>> reduce(lambda a, b: a ^ b, [1, 2, 3])
0
>>> reduce(lambda a, b: a ^ b, [1, 2, 3, 4])
4
>>> reduce(lambda a, b: a ^ b, [1, 2, 3, 4, 5, 6, 7])
0
>>> reduce(lambda a, b: a ^ b, [1, 2, 3, 4, 5, 6, 7, 8, 9])
1
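A constant-time sketch (my addition): prefix XORs repeat with period 4, because (4m) ^ (4m+1) ^ (4m+2) ^ (4m+3) = 0 for every m, so the answer depends only on n mod 4 - which also handles n as large as 1 << 63 with no loop.

```python
from functools import reduce

def xor_up_to(n):
    # 1 ^ 2 ^ ... ^ n; prefix XORs repeat with period 4.
    return (n, 1, n + 1, 0)[n % 4]

# Sanity check against brute force for small n.
for n in range(1, 100):
    assert xor_up_to(n) == reduce(lambda a, b: a ^ b, range(1, n + 1))

print(xor_up_to(9))  # 1
```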
Facebook Software Engineer Intern - 2 of 2 votes
You are given a permutation arr[N]. E.g. arr[3] = {2, 1, 0} or arr[5] = {0, 1, 2, 4, 3}.
Then you can prepare somehow and start serving requests: request(a, b, k) = sorted(arr[a:b])[k], that is, the k-th order statistic on the slice [a:b] of arr.
E.g. if arr is [3,4,5,0,1,2], a = 2 and b = 5, then arr[a:b] = [5,0,1]; let k = 2, so we sort it - getting [0,1,5] - and take the k-th element, that is, 5.
Implement the request(a, b, k) function. You can preprocess the input data, that is, assume there will be only one array and many request() calls.
Facebook Software Engineer Algorithm - 8 of 8 votes
Given an array int32 arr[] of size n, return the number of non-empty contiguous subarrays whose sum lies in the range [a, b].
That is, implement the following naive algorithm faster than O(n^2):

def naive_algorithm(lst, a, b):
    result = 0
    for i in xrange(len(lst)):
        for j in xrange(i, len(lst)):
            if a <= sum(lst[i:j + 1]) <= b:
                result += 1
    return result
Examples:
count([1,2,3], 0, 3) = 4      # [1], [2], [3], [1, 2]
count([-2,5,-1], -2, 2) = 3   # [-2], [-1], [-2, 5, -1]
You may assume that there are no overflows, that is, sum(|x_i|) <= MAX_INT - 1.
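An O(n log n) sketch of one standard approach (my addition, in the spirit of the classic "count of range sum" problem): the answer equals the number of prefix-sum pairs i < j with a <= P[j] - P[i] <= b, and those pairs can be counted during a merge sort with two monotone pointers.

```python
def count_range_sums(nums, a, b):
    prefix = [0]
    for x in nums:
        prefix.append(prefix[-1] + x)

    def sort_count(lst):
        # Returns (pair count within lst, sorted lst).
        if len(lst) <= 1:
            return 0, lst
        mid = len(lst) // 2
        c_left, left = sort_count(lst[:mid])
        c_right, right = sort_count(lst[mid:])
        total = c_left + c_right
        lo = hi = 0
        for x in right:
            # Count left values v with a <= x - v <= b, i.e. x - b <= v <= x - a.
            while lo < len(left) and left[lo] < x - b:
                lo += 1
            while hi < len(left) and left[hi] <= x - a:
                hi += 1
            total += hi - lo
        return total, sorted(left + right)  # merge step, kept simple

    return sort_count(prefix)[0]

print(count_range_sums([-2, 5, -1], -2, 2))  # 3
```

Both pointers only move forward across each merge level, so counting adds O(n) per level on top of the sort.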
Google Software Engineer - 1 of 5 votes
You are given an array of n unique integers, 0 <= x_i < 2 * n.
Print all integers 0 <= x < 2 * n that are not present in this array.
Example:
find_missing([0]) = [1]
find_missing([0, 2, 4]) = [1, 3, 5] # because all numbers are [0, 1, 2, 3, 4, 5]
find_missing([]) = []
find_missing([0, 1, 4, 5]) = [2, 3, 6, 7] # because all numbers are [0, 1, 2, 3, 4, 5, 6, 7]
The quirks are in the requirements:
Time complexity O(n) - BUT there should be some fixed constant C independent of size of input such that every element of array is written/read < C times, so radix sorting the array is a no go.
Space complexity O(1) - you may modify the initial array, BUT sorted(initial_array) must equal sorted(array_after_executing_program) AND you can't store integers outside the range [0, 2n) in this array (imagine that it's an array of uint32_t).
Google Software Engineer Brain Teasers - 1 of 1 vote
You are given a flat 1x1 metre room, the position of a victim in it (v_x, v_y), and the position of a killer (k_x, k_y), both inside the room (coordinates in range [0, 1]).
Then the killer shoots once in some direction. The bullet reflects off the walls as if it were a light ray - if it falls at an angle of X degrees, it reflects at an angle of X degrees; if it gets into a corner it just reflects back. If the bullet hits a guardian (see below), it stops and the killer fails.
Write a function that is given the coordinates of the victim and the killer and returns a list of coordinates of guardians such that it's impossible for the killer to kill the victim.
That is, whichever direction the killer shoots in, the bullet either never reaches the victim or is stopped by a guardian first.
Here is an example for the case when we assume the walls don't reflect bullet (for simplicity):
killer: (0, 0), victim: (1, 1). The solution to this simplified problem is to place 1 guardian between killer and victim e.g. on (0.1, 0.1).
Your task is to do this while accounting for bullet reflection. E.g. in the previous case the killer can shoot at (1/3, 1); the bullet will reflect to (2/3, 0) and finally reach the victim at (1, 1).
Google Software Engineer Brain Teasers - 8 of 8 votes
You are given a list of n float numbers x_1, x_2, x_3, ... x_n, where x_i > 0.
A traveler starts at point (0, 0) and moves x_1 metres to the north, then x_2 metres to the west, x_3 to the south, x_4 to the east and so on (after each move his direction changes counter-clockwise).
Write a single-pass algorithm that uses O(1) memory to determine whether the traveler's path crosses itself (i.e. whether it's self-intersecting).
e.g.
2 1 1 2 -> crosses
1 2 3 4 -> doesn't cross
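A sketch of the well-known O(1)-memory case analysis (my addition; this is the classic "self-crossing" test): a new segment can only hit the segment laid 3, 4, or 5 moves earlier, so three guarded comparisons over a sliding window of the last five lengths suffice, one pass, constant memory.

```python
def crosses(x):
    # x[i] is the i-th move length; directions rotate counter-clockwise.
    for i in range(3, len(x)):
        # Case 1: current segment crosses the one 3 steps back.
        if x[i] >= x[i - 2] and x[i - 1] <= x[i - 3]:
            return True
        # Case 2: current segment overlaps the one 4 steps back.
        if i >= 4 and x[i - 1] == x[i - 3] and x[i] + x[i - 4] >= x[i - 2]:
            return True
        # Case 3: current segment crosses the one 5 steps back.
        if (i >= 5 and x[i - 2] >= x[i - 4]
                and x[i] + x[i - 4] >= x[i - 2]
                and x[i - 1] <= x[i - 3]
                and x[i - 1] + x[i - 5] >= x[i - 3]):
            return True
    return False

print(crosses([2, 1, 1, 2]))  # True
print(crosses([1, 2, 3, 4]))  # False
```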
Google Software Developer Algorithm - 1 of 1 vote
Write a function to determine whether a given unsigned 32-bit number is a power of 3:
int is_power_of_3(uint32_t n)
Return 1 if yes, 0 otherwise.
E.g.:
is_power_of_3(27) = 1
is_power_of_3(9) = 1
is_power_of_3(42) = 0
is_power_of_3(0) = 0
The expected answer is not a straightforward loop, but something faster.
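One loop-free sketch (my addition): the largest power of 3 that fits in 32 bits is 3^20 = 3486784401, and since 3 is prime, the divisors of 3^20 are exactly the powers of 3, so a single modulo test suffices.

```python
def is_power_of_3(n):
    # 3**20 is the largest power of 3 below 2**32; n divides it iff n == 3**k.
    return 1 if n > 0 and 3486784401 % n == 0 else 0

print(is_power_of_3(27), is_power_of_3(9), is_power_of_3(42), is_power_of_3(0))  # 1 1 0 0
```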
Google Software Engineer Math & Computation
This can be done using a 2D segment/BIT tree. The idea is that you first build a tree over the first coordinate, and then for each node of that tree you build a segment tree over the second coordinate.
Since the input parameters are ints, you can build a segment tree for [0, INT_MAX] * [0, INT_MAX] and you will have O(log(INT_MAX)^2) complexity.
Do they really ask you to implement 2D segment trees? I doubt one could write that without having written it before, and it is very topcoder/hackerrank specific.
What does "While sorting you are not allowed to change the original ordering of same element" mean - do you mean the sort must be stable?
So you want a stable in-place partition? Sorry, but that doesn't sound like an interview question.
citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.25.5554
O(len(s)^2) memory and runtime.
import collections
def solve(s):
dp = {}
previous = {}
def dfs(l, r):
result = dp.get((l, r))
if result is not None:
return result
if l >= r - 1:
result = 0
else:
candidates = [
((l + 1, r), (r, s[l], True)),
((l, r - 1), (l, s[r - 1], False))
]
if s[l] == s[r - 1]:
candidates.append(((l + 1, r - 1), None))
min_score = len(s)
best_cand = None
for (nl, nr), action in candidates:
score = dfs(nl, nr)
if score < min_score:
min_score = score
best_cand = ((nl, nr), action)
previous[(l, r)] = best_cand
result = min_score + (0 if best_cand[1] is None else 1)
dp[(l, r)] = result
return result
dfs(0, len(s))
actions = []
cur = (0, len(s))
while True:
res = previous.get(cur)
if res is None:
break
next_cur, action = res
if action is not None:
actions.append(action)
cur = next_cur
return actions
def check(s):
actions = solve(s)
s = list(s)
insert_at = [collections.deque() for i in xrange(len(s) + 1)]
for pos, char, front in actions:
if front:
insert_at[pos].appendleft(char)
else:
insert_at[pos].append(char)
for pos, chars in reversed(list(enumerate(insert_at))):
if chars:
s[pos:pos] = ''.join(chars)
assert s == s[::-1], ''.join(s)
return ''.join(s)
print check('pototp')
print check('oto maym oto')
print check('a lazy fox jumps overy a lazy')
@krbchd, why not just run a dijkstra for a graph where friend edge weighs 0 and enemy edge weighs 1?
Update: I've read that "every two nodes are either friends, or enemies", so just check whether there is a friend-path between src and dst, and if not, use a single magic potion to go from src to dst.
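The Dijkstra idea above is usually implemented as 0-1 BFS (a sketch under my assumed graph encoding: adjacency lists of (neighbor, weight) pairs with weight 0 for friend edges and 1 for enemy edges): a deque replaces the priority queue, with weight-0 edges pushed to the front.

```python
from collections import deque

def zero_one_bfs(adj, src, dst):
    # adj[v] = list of (u, w) with w in {0, 1}; returns min total weight or None.
    dist = {src: 0}
    dq = deque([src])
    while dq:
        v = dq.popleft()
        for u, w in adj[v]:
            nd = dist[v] + w
            if u not in dist or nd < dist[u]:
                dist[u] = nd
                if w == 0:
                    dq.appendleft(u)  # 0-edges keep the deque ordered by distance
                else:
                    dq.append(u)
    return dist.get(dst)

adj = {
    0: [(1, 0), (2, 1)],
    1: [(0, 0), (2, 1)],
    2: [(0, 1), (1, 1)],
}
print(zero_one_bfs(adj, 0, 2))  # 1
```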
Here is the "expected" solution (O(N*log(N)), where N is the number of intervals).
import heapq
def solve(first, last, intervals):
heap = [(0, first)]
for pos, end, cost in sorted(intervals + [(last, last, 0)]):
while heap and heap[0][1] < pos:
heapq.heappop(heap)
if not heap:
return -1
curCost, curPos = heap[0]
if curPos >= last:
return curCost
heapq.heappush(heap, (cost + curCost, end))
assert solve(0, 5, [[0, 5, 10], [0, 4, 1], [0, 2,5], [2, 5, 1]]) == 2
assert solve(0, 5, [[1,4, 10], [2, 5, 6]]) == -1
assert solve(0, 0, []) == 0
assert solve(0, 5, [[-3, -1, 10], [-1, 5, 10]]) == 10
Alright, here goes my stream of consciousness - that's how I solved this problem, full thought process.
So say, we have a permutation of length N. We know that permutation of length N can be represented by storing a number for each position "count of numbers bigger than current on the left of it" - for example
3 2 1
For 3 this number will be 0
For 2 this number will be 1 (because 3 is on the left of it)
For 1 this number will be 2 (because 3 and 2 are on its left).
It can be proven that (0, 1, 2) uniquely represents a permutation (3 2 1).
It's also easy to see that sum of (0, 1, 2) will give us a number of inversions.
Alright, suppose for some element i, the count of elements greater than it on its left is a. We can see that we can decrease a by at most k - by moving elements larger than i from its left to its right (each time, element i moves one position left).
Also consider this: for largest element this value will be 0, so we can't decrease it, for second largest - either 0 or 1, for third largest - either 0 or 1 or 2.
So we could decrease the inversions count by this number: (n - 1) + (n - 2) + ... + (n - k). Suppose k = n - 1, then this number will be 1 + ... + (n - 1) = n*(n-1)/2 which is maximum number of inversions in array (this means, if k = n - 1, we can sort it).
So how could we sort the array now? Simply take a permutation, rewrite it in the "count of bigger elements on the left" form, subtract (n-1)+(n-2)+...+(n-k) from the values (if you run out earlier, don't worry - you've just sorted the whole array), and then restore the permutation.
Let's take an insertion sort that moves values at most k positions to the left, and prove that it's correct.
def limited_insertion_sort(arr, k):
arr = list(arr)
for last_sorted in xrange(1, len(arr)):
new_elem = arr[last_sorted]
insertion_point = last_sorted
while insertion_point > max(0, last_sorted - k) and arr[insertion_point - 1] > new_elem:
arr[insertion_point] = arr[insertion_point - 1]
insertion_point -= 1
arr[insertion_point] = new_elem
return arr
arr = [4,3,2,1]
assert limited_insertion_sort(arr, 2) == [2, 1, 3, 4]
assert limited_insertion_sort(arr, 1) == [3, 2, 1, 4]
Time O(n*k)
After first k steps first k+1 elements will be sorted. Let's call this "top". At each moment
this "top" will contain top k elements.
Clearly: suppose the next element is bigger than everything in the "top". Then it will just
land at the head of the "top".
Suppose it's smaller than all k elements of the "top". Then it will land at the tail of the top (because an element can move at most k positions left).
Otherwise, the element will land somewhere in the middle of the top and the smallest element will get evicted.
This means that every element will be able to decrease its inversion count by k (of course, after taking into consideration its limit on maximum inversions), so the insertion sort is the same as the permutation-converting algorithm from the beginning.
However, we can do better - there is a data structure suited to keeping the top k elements and evicting the smallest, namely a heap.
def limited_top_keeper_sort(arr, k):
# python doesn't allow us to heapify subarray
# but in c++ this is possible, so I'll use a separate
# array for simplicity.
result = []
topk = []
for i in xrange(min(len(arr), k)):
topk.append(arr[i])
if len(arr) > k:
heapq.heapify(topk)
for i in xrange(min(len(arr), k), len(arr)):
result.append(heapq.heappushpop(topk, arr[i]))
result.extend(sorted(topk))
return result
assert limited_top_keeper_sort(arr, 2) == [2, 1, 3, 4]
assert limited_top_keeper_sort(arr, 1) == [3, 2, 1, 4]
O(n*log(k)) time.
- emb June 01, 2016
Here is an O(1)-per-number algorithm that outputs these numbers in the requested order.
If we can assume that k <= 64, or we can use Python long arithmetic, then we can use "Gosper's hack". This obviously works in O(1) per number, so the whole algorithm works in O(t), where t is the number of items we want to print (e.g. 2^k to print everything).
The idea is in comments
def gosper(n):
"""return smallest number > n with the same number of bits set"""
# group is first sequence of 1's in n starting from lowest bits
# 000111110000
# ^---^ - this is "group"
# the idea is simple - take the highest bit in group
# and move it one position left
# the rest of the group will go to the right
# like this:
# 000111110000
# turns into
# 001000001111
# after another iteration this turns into
# 001000010111
# and so forth...
# lowest_set_bit is the lowest 1 in that group
# 000111110000
# ^ here it is
lowest_set_bit = ((~n) + 1) & n
# this points straight before the highest bit
# 0001110000
# ^ here - to the 0
new_group_head = (lowest_set_bit + n) & (~n)
# now let's work with what was left on the left
original_group = n & (new_group_head - 1)
# remove group old head - it has migrated one bit left
new_group = original_group ^ (new_group_head / 2)
# move the tail to the beginning
new_group /= lowest_set_bit
# reassemble result
result = n & ~original_group
result |= new_group_head
result |= new_group
assert result > n
assert bin(result).count('1') == bin(n).count('1')
return result
def all_combinations(k):
yield '0' * k
mask = (1 << k) - 1
for num_bits in xrange(1, k + 1):
n = (1 << num_bits) - 1
while True:
yield bin(n)[2:].rjust(k, '0')
n = gosper(n)
if n > mask:
break
k = 5
result = list(all_combinations(k))
print len(result)
assert len(result) == 2 ** k
assert result[-1] == '1' * k
for s in all_combinations(5):
print s
Example output: ideone.com/FO329N
- emb May 27, 2016
We can use simultaneous BFS from both source and destination to achieve ~2*sqrt(n) frontier size instead of n for random graphs.
Whenever two searches meet, we keep the minimal distances from each node and ignore longer paths from that moment - something like branch and bound.
To count number of shortest paths we track paths count in each vertex and then multiply them.
from collections import defaultdict, deque
graph = defaultdict(list)
graph.update({
0: [1, 2, 3],
4: [1, 2, 3],
5: [4, 6],
6: [7, 8, 9],
10: [7, 8, 9]
})
# prettify graph
def normalize(graph):
for v, neighbours in graph.items():
for u in neighbours:
if u != v:
graph[u].append(v)
for v, neighbours in graph.iteritems():
graph[v] = sorted(list(set(neighbours)))
return graph
def simultaneous_bfs(graph, source, destination):
SOURCE_TYPE, DESTINATION_TYPE = range(2)
queue = deque([(source, SOURCE_TYPE), (destination, DESTINATION_TYPE)])
allpaths = [defaultdict(int), defaultdict(int)]
alldistances = [defaultdict(int), defaultdict(int)]
min_distances = None
allpaths[SOURCE_TYPE][source] = 1
allpaths[DESTINATION_TYPE][destination] = 1
alldistances[SOURCE_TYPE][source] = 0
alldistances[DESTINATION_TYPE][destination] = 0
while queue:
u, vertex_type = queue.popleft()
paths = allpaths[vertex_type]
distances = alldistances[vertex_type]
if min_distances is not None:
if distances[u] >= min_distances[vertex_type]:
continue
for v in graph[u]:
if paths[v]:
if distances[v] == distances[u] + 1:
paths[v] += paths[u]
else:
this_distance = distances[u] + 1
if allpaths[1 - vertex_type][v]:
other_distance = alldistances[1 - vertex_type][v]
if min_distances is None:
min_distances = [None] * 2
min_distances[vertex_type] = this_distance
min_distances[1 - vertex_type] = other_distance
else:
if min_distances[vertex_type] < this_distance:
continue
min_distances[vertex_type] = this_distance
distances[v] = this_distance
paths[v] = paths[u]
queue.append((v, vertex_type))
result = 0
if min_distances is None:
return 0
for v in allpaths[0]:
if alldistances[0][v] == min_distances[0] and alldistances[1][v] == min_distances[1]:
result += allpaths[0][v] * allpaths[1][v]
return result
graph = normalize(graph)
assert simultaneous_bfs(graph, 0, 10) == 9
assert simultaneous_bfs(graph, 0, 4) == 3
assert simultaneous_bfs(graph, 3, 10) == 3
Note: not an answer, I'm asking again for clarification, but couldn't comment for some reason :(
So basically a smarter version of the following brute-force pseudocode is required?
def size(node):
    return 0 if node is None else 1 + size(node.left) + size(node.right)

def compare(a, b):
    return (a is None and b is None) or (
        a is not None and b is not None
        and compare(a.left, b.left) and compare(a.right, b.right))
def find_max(node):
if node is None or compare(node.left, node.right):
return size(node), node
return max(find_max(node.left), find_max(node.right))
Here I suppose that we can only move left/right/up/down and not diagonally.
Such a path exists iff either M or N is even. Suppose M is even. Then the path is simple - begin at (0, 0), go down, then ascend in a snake-like manner, filling all cells. Since there is an even number of rows, on row M-1 we go right, on row M-2 left, ..., and on row 0 - left, back to (0, 0), since M-2 is even.
Now, a proof that if both dimensions are odd, no such path exists.
Color all cells in a chessboard manner. Since there are N*M cells, which is odd, a path from the first cell to the last through all cells contains N*M - 1 moves, which is even. Each move flips the color of the current cell from black to white or vice versa. This means the first and last cells have the same color - so we can never move from the last cell back to the first in order to turn the Hamiltonian path into a cycle.
O(prisoners_to_be_justified^3 * cells^2) time,
O(prisoners_to_be_justified^2 * cells^2) space.
Better to rewrite it without recursion in order to avoid stack overflow.
def compute_min_coins_helper(justified, first_justified, last_justified, first_cell, last_cell, cache):
if not first_justified < last_justified:
return 0
cache_key = (first_justified, last_justified, first_cell, last_cell)
result = cache.get(cache_key)
if result is not None:
return result
min_coins = None
for justified_idx in xrange(first_justified, last_justified):
prisoner_idx = justified[justified_idx]
coins = (
prisoner_idx - first_cell +
compute_min_coins_helper(justified, first_justified, justified_idx, first_cell, prisoner_idx, cache) +
last_cell - (prisoner_idx + 1) +
compute_min_coins_helper(justified, justified_idx + 1, last_justified, prisoner_idx + 1, last_cell, cache)
)
if min_coins is None or coins < min_coins:
min_coins = coins
cache[cache_key] = min_coins
return min_coins
def compute_min_coins(justified, cells):
return compute_min_coins_helper(
[j - 1 for j in justified],
0, len(justified),
0, cells,
cache={}
)
assert compute_min_coins([1,2,3], 3) == 2
assert compute_min_coins([3], 8) == 7
assert compute_min_coins([3, 6, 14], 20) == 35
import random
cells = 100
justified = random.sample(xrange(1, cells + 1), 50)
compute_min_coins(justified, cells)
Alright, the question seems abandoned, so it's safe to post an answer:
IEEE 754 doubles have the property that, if reinterpret-cast to uint64, comparisons still yield correct results (except for negative numbers - how to fix this is an exercise for the reader).
Now we interpret the doubles as uint64.
We know the total number of doubles in the file. We count the numbers whose MSB is 1 and those whose MSB is 0; after this is done, we know whether the median's MSB is 1 or 0. Then we grep only the numbers whose MSB equals this value and repeat for the next bit, up to 64 times, until we have all bits of the median.
But hey, we haven't used the memory at all. So instead of having 2 counters for 0 and 1, let's have 2^32 counters. This way we can find the median in just two passes - one for the highest 32 bits and one for the lowest 32 bits.
So we think - there is 1 TB of doubles, which is approximately 140 billion numbers - we would need 37 bits for every counter (for the worst-skew case), and 2^32 * 37 bits = 158 Gbit ~ 20 GB of memory.
It turns out we are fine with 12 bits per counter.
The idea is the following: if some counter overflows, append its index to a list, so that later we will know that it overflowed.
So for k-bit counters we need k * (2^32) bits for the counters and 32 * (2^40 / 2^k) bits for the overflow entries.
def mem(k, total = 2 ** 40):
bits = k * (2 ** 32) + 32 * total / (2 ** k)
return bits / 8
for i in xrange(1, 100):
print i, mem(i) / float(2 ** 30)
And this gives us 7 GB of required memory, leaving 1 GB for the OS, runtime, file buffers, etc.
After we pass through the file, sort the overflow entries (268 million max) with quicksort, and do a merge-sort-style pass through both arrays to get the real uint64 value of each counter and decide where the median is.
So basically this problem is about the bit-bucket trick and counter-compressing trick.
We can do it in O(n).
Basically, having s and reverse(s), we need to find reverse(s) in s.
Example:
offset=0
tests
stset - no match
offset=1
tests
stset - no match
offset=2
tests
stset - match of size 3
So we take reverse(s[:-3]) and append it to the resulting string.
Here is the code; the matching is done using KMP (the prefix function).
def get_prefix_string(s):
result = [0] * len(s)
for i, c in enumerate(s):
if i == 0:
continue
k = result[i - 1]
while True:
if s[i] == s[k]:
result[i] = k + 1
break
if not k:
break
k = result[k - 1]
return result
def shortest_palindrome(s):
seek_in = s[::-1]
prefix_string = get_prefix_string(s)
matched_length = 0
for compare_with in seek_in:
while True:
if s[matched_length] == compare_with:
matched_length += 1
break
if not matched_length:
break
matched_length = prefix_string[matched_length - 1]
return s[matched_length:][::-1] + s
What is the size of digits.get(0) ? For every integer we will be doing intersection.retainAll(digits.get(0))
And if I understand correctly, digits.get(0) is of size 32768
>>> print sum(1 for i in xrange(2 ** 16) if i & 1)
32768
Or I'm missing something very important...
- emb December 14, 2015
There are C(k, b) numbers with b bits set to 0, so we will traverse 2^b * C(k, b) pairs for each b in range [0, k].
It is known that the sum over b from 0 to k of 2^b * C(k, b) is 3^k (by the binomial theorem).
log(3)/log(2) < 2
So this algorithm is faster than (2^k)^2; it is ~ (2^k)^1.585.
Though I wonder if there are algorithms faster than this, ones that don't enumerate all valid pairs.
If you meant the total number of pairs in {0, ..., 2^k - 1}, then it is 3^k - 2^k. You were close: each bit position in a pair (a, b) with a & b == a can be in 3 states: 0 in both a and b, 1 in both a and b, or 0 in a and 1 in b. So the total number of ordered pairs is 3^k. But we also counted pairs where both numbers are equal; there are 2^k of those, so subtract them. The answer is 3^k - 2^k.
- emb December 14, 2015
> O(n) time
So you've just solved longest increasing subsequence in linear time.
Suppose we need to find longest increasing subsequence of permutation 1..n = P
We can instead find longest common subsequence of string
[1 2 3 ... n] and permutation P, so we can apply your algorithm to it and it will give us the longest common subsequence, which is also the longest increasing subsequence of permutation P in O(n) time.
2) this is a variation of "find tree height" problem
class Node(object):
def __init__(self, name, *children):
self.name = name
self.children = children
self.height = None
def __repr__(self):
return "Node(%s, %r, height=%d)" % (self.name, self.children, self.height)
def find_height(node):
height = 1
for child in node.children:
find_height(child)
height = max(height, child.height + 1)
node.height = height
def find_min_height(node, height_from_parent=0):
if not node.children:
return height_from_parent
n = len(node.children)
lmax = [0] * n
rmax = [0] * n
for i in xrange(1, n):
lmax[i] = max(lmax[i - 1], node.children[i - 1].height)
rmax[n - i - 1] = max(rmax[n - i], node.children[n - i].height)
return min(
find_min_height(child, max(lm, rm, height_from_parent) + 1)
for (lm, rm, child) in zip(lmax, rmax, node.children)
)
N = Node
tree = N(1, N(2, N(3, N(4))))
find_height(tree)
assert find_min_height(tree) == 3
So basically for each node we find height, leafs have height 1, parents 2 and so on.
Then we need to query each node "what would be your height if you were root?"
That's the maximum among the heights of all of its children, and one plus the height of its parent computed as if the parent were root and didn't have that particular node as a child.
Time O(n) since we visit each node 1 time when finding height and 1 time when finding minimum height.
In Java/C++ we would declare owner variable volatile, I suppose.
from threading import Lock, current_thread, Thread
def get_thread_id():
return id(current_thread())
class RLock(object):
def __init__(self):
self.lock = Lock()
self.owner = None
self.count = 0
def acquire(self):
current = get_thread_id()
if self.owner == current:
self.count += 1
else:
self.lock.acquire()
assert self.owner is None
self.owner = current
self.count = 1
def release(self):
current = get_thread_id()
assert self.owner == current
assert self.count > 0
self.count -= 1
if not self.count:
self.owner = None
self.lock.release()
def __enter__(self):
self.acquire()
def __exit__(self, tp, val, tb):
self.release()
rlock = RLock()
def factorial(n):
with rlock:
if n <= 0:
return 1
else:
return n * factorial(n - 1)
rlock = RLock()
shared = 0
def repeat(times, n):
for i in xrange(times):
add_numbers(n)
def add_numbers(n):
global shared
with rlock:
result = 0
if n <= 1:
result = 1
else:
result = n
add_numbers(n - 1)
shared += result
num_threads = 5
iterations = 100000
depth = 10
threads = [Thread(target=repeat, args=(iterations, depth))
for _ in xrange(num_threads)]
[t.start() for t in threads]
[t.join() for t in threads]
assert shared == num_threads * iterations * depth * (depth + 1) / 2
@Anonymous, no, this solution is correct.
Since all numbers are positive, array of partial sums is strictly increasing.
Now you are given an array of strictly increasing elements and have to answer in linear time if there are two elements that aj - ai = delta, j > i
So the dumb approach would be to binary search for j for every index i.
Then we see that as we increase i, the second index j can only increase, never decrease. So we just advance j while aj - ai < delta (if aj - ai = delta, we've found a solution) and then do i = i + 1.
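The two-pointer step described above can be sketched like this (my illustration; `arr` stands for the strictly increasing array of partial sums):

```python
def find_pair_with_difference(arr, delta):
    # arr is strictly increasing; find i < j with arr[j] - arr[i] == delta.
    j = 1
    for i in range(len(arr)):
        j = max(j, i + 1)
        while j < len(arr) and arr[j] - arr[i] < delta:
            j += 1
        if j < len(arr) and arr[j] - arr[i] == delta:
            return i, j
    return None

print(find_pair_with_difference([1, 3, 7, 10], 6))  # (0, 2)
```

Both indices only move forward, so the whole scan is O(n).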
Your solution is really elegant. Here is my way (slower but still O(n))
Suppose we need to find nearest(consumers, producers) and there are more consumers than producers. We know that nearest producers indices are increasing.
So we find a solution for nearest(consumers[::2], producers) and then using answers for even consumers, find answer for odd ones in O(len(consumers) + len(producers)).
Suppose now there are more producers than consumers.
Since for each consumer we need to find one producer, we can throw away len(producers) - len(consumers) producers (or more). That's how we do it:
We maintain a stack of producers: p[0], p[1], p[2], p[3], ..., p[top]
where p[0] is current best for consumer 0, p[1] is current best for consumer 1, and so on
For each producer we do the following: (a)
1)if stack is empty, push that producer on the stack
2) otherwise, suppose the topmost producer is p[top]. If that producer is further than the current producer from consumers[top], we pop p[top] and go to (a). If we didn't go to (a), we push the current producer on top (only if top < len(consumers)).
The resulting complexity is O(n + n / 2 + n / 4 + ...) = O(n)
Hi, I wrote a checker for your program and it gave some counterexamples:
Consumers = [5, 7, 8] Producers = [(1, 7), (3, 5), (9, 1)]
Expected [(9, 1), (9, 1), (9, 1)]
Expected distances: [17, 5, 2]
Got [(3, 5), (9, 1), (9, 1)]
Got distances: [29, 5, 2]
Here is a checker hxxp://ideone.com/9pYTpR
- emb October 27, 2015
Nice, you've got the O(n*log(n)) idea.
So we just take the middle consumer, find its nearest producer and then for consumers on the left/right we only need to search for producers on the left/right of nearest producer.
Next I was given the following hint:
However, for an O(n) solution you need a stronger observation: if we take two consumers c_i and c_j (i < j) and two producers p_k and p_l (k < l), then distance(c_i, p_k) > distance(c_i, p_l) implies distance(c_j, p_k) > distance(c_j, p_l). Or, in English - if some consumer is closer to p_l than to p_k, then all consumers to the right are also closer to p_l than to p_k.
Generate a random correct brace sequence, then convert it into a binary tree. O(N) time, O(N) space.
A random correct brace sequence can be generated by shuffling n '(' symbols and n ')' symbols, prepending '(', and then rotating the result circularly in such a way that every prefix contains strictly more '(' than ')' (the leading '(' is then dropped).
There are C(2n, n) ways to shuffle the braces and n+1 choices of the circular shift, which gives us C(2n, n) / (n + 1) - the n-th Catalan number (a sanity check).
- emb August 30, 2017
@ChrisK: (sorry, can't comment)
Inserting random numbers into a BST won't give a uniform distribution: there are 6 permutations - 123, 132, 213, 231, 312, 321 - and you can see that 213 and 231 give you the same binary tree, but there is no other sequence that gives you the same tree as 123.
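The shuffle-and-rotate construction described earlier can be sketched as follows (my own code; by the cycle lemma, the unique valid rotation starts just after the last position where the prefix sum attains its minimum):

```python
import random

def random_balanced_braces(n, rng=random):
    # Shuffle n '(' and n ')' and prepend '(' => 2n+1 symbols with sum +1.
    chars = ['('] * n + [')'] * n
    rng.shuffle(chars)
    s = ['('] + chars
    # Find the rotation where every prefix has more '(' than ')': it begins
    # right after the last minimum of the prefix sums (cycle lemma).
    best = total = 0
    start = 0
    for i, c in enumerate(s):
        total += 1 if c == '(' else -1
        if total <= best:
            best, start = total, i + 1
    rotated = s[start:] + s[:start]
    # Drop the guaranteed leading '(' to get a balanced sequence of length 2n.
    return ''.join(rotated[1:])

seq = random_balanced_braces(6)
depth = 0
for c in seq:
    depth += 1 if c == '(' else -1
    assert depth >= 0  # every prefix stays non-negative
assert depth == 0      # the sequence is balanced
print(seq)
```

Every balanced sequence corresponds to the same number of (shuffle, rotation) pairs, which is what makes the output uniform over the Catalan-many possibilities.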