ali.kheam
I was asked this question for the above role:
Create a fixed-size, fully associative cache. Entries are evicted based on their rank: for any entry added, the function int getEntryRank(entry) returns its rank, which does not change on lookup.
Use db_read_entry() to fetch an entry from the database.
Part 2) The rank changes on lookup.
- ali.kheam in United States
My solution for Part 1 was a hashtable (unordered_set<entry>) plus a priority queue of pair<int, entry>:
lookup(entry): look the entry up in the hashtable; if found, return it (O(1)). Otherwise, if the size limit is reached, evict the priority queue's top. Then entry e = db_read_entry() (the expensive operation), insert the entry into the hash set, insert pair<int, entry>(get_entry_rank(entry), entry) into the priority queue (O(log n)), and return the entry.
Part 2: the rank changes on lookup. Since a priority queue only exposes its top element and does not let you modify an arbitrary key, I proposed changing the data structure associated with the hash set. My proposal: an ordered binary tree (std::set) holding [rank + entry], with the hashmap storing, for each entry, an iterator into that std::set. Keeping the iterator avoids searching the set on every lookup. On lookup, the set entry is located through the stored iterator (O(1)), erased, and re-inserted with the recalculated rank; the iterator stored in the hash table is updated to the new position. When the limit is reached, the tree's minimum is taken; from that (rank, entry) pair the entry is found in the hash set and removed there too, and the rest proceeds as in the lookup path described above.
I believed my solution was fair enough: O(1) lookup on a hit; on a miss with constant rank, O(log n) plus the database access time, and the database access dominates the O(log n) heap operations, so pushing into (or popping from) the priority queue does not matter. For the changing rank: locating the rank entry in the tree is O(1) via the stored iterator, erasing it is O(1), and re-inserting is O(log n). So each operation (lookup, or evict-and-insert, excluding the database access) costs O(log n) instead of O(1).
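To make Part 1 concrete, here is a minimal sketch of the hashtable + min-priority-queue design described above. get_entry_rank and db_read_entry are hypothetical stand-ins (plain integers play the role of entries), and eviction removes the lowest-ranked entry:

```cpp
#include <cassert>
#include <functional>
#include <queue>
#include <unordered_map>
#include <utility>
#include <vector>

// Hypothetical stand-ins for the interview's primitives.
int get_entry_rank(int key) { return key % 10; }   // rank is fixed per entry
int db_read_entry(int key) { return key * 100; }   // expensive backing-store read

class rank_cache {
    size_t capacity_;
    std::unordered_map<int, int> store_;   // key -> value, O(1) hit path
    // min-heap ordered by rank: the lowest-ranked entry is evicted first
    std::priority_queue<std::pair<int, int>,
                        std::vector<std::pair<int, int>>,
                        std::greater<std::pair<int, int>>> ranks_;
public:
    explicit rank_cache(size_t cap) : capacity_(cap) {}
    int lookup(int key) {
        auto it = store_.find(key);
        if (it != store_.end()) return it->second;   // O(1) hit
        if (store_.size() >= capacity_) {            // miss at capacity: evict
            store_.erase(ranks_.top().second);
            ranks_.pop();
        }
        int value = db_read_entry(key);              // expensive miss
        store_[key] = value;
        ranks_.push({get_entry_rank(key), key});     // O(log n)
        return value;
    }
    bool cached(int key) const { return store_.count(key) > 0; }
};
```

On a hit the cost is the O(1) hash lookup; on a miss it is the db read plus an O(log n) heap push, which the db access dominates, matching the analysis above.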
I was rejected for not having an optimal solution (performance was not good enough). I am wondering whether the solution was genuinely weak, given that it was a phone interview and I had to work within a 25-minute time frame, or whether the interview process is just unrealistic. Please share your solution so that I can see where I failed.
Principal Software Engineer Data Structures
struct ent_b {
    int tp;
    int when;
    int current;
};
void increment_entb(vector<ent_b> &s, int n) {
    for (int i = 0; i < n; i++) {
        if (s[i].current < s[i].when) {
            s[i].current++;
        }
    }
}
void breakfast(int n, vector<ent_b> items, int days) {
    if (days > n) {
        return;
    }
    for (int i = 0; i < (int)items.size(); i++) {
        if (items[i].when < items[i].current) {
            continue;
        }
        vector<ent_b> temp = items;
        temp[i].current = 0;
        // picks items[i]
        increment_entb(temp, temp.size());
        breakfast(n, temp, days + 1);
        items = temp;
    }
}
void convert_string(string str) {
    if (str.empty()) {
        return;
    }
    int start = 0;
    int current = start;
    stringstream ss;
    while (current < (int)str.length()) {
        if (str[start] != str[current]) {
            ss << str[start] << (current - start);
            start = current;
        }
        current++;
    }
    ss << str[start] << (current - start);
    cout << ss.str();
}
void min_substring(string str, string mat) {
    unordered_set<char> os;
    for (int i = 0; i < (int)mat.length(); i++) {
        os.insert(mat[i]);
    }
    stack<pair<int, string> > st;
    set<char> count_set;
    // now use the sliding window
    int start = 0;
    for (int i = 0; i < (int)str.length(); i++) {
        if (os.find(str[i]) != os.end()) {
            // match
            if (i != start && str[i] == str[start]) {
                // just shrink the window from the left
                start++;
            } else {
                // see if it completes the search characters
                if (count_set.find(str[i]) == count_set.end()) {
                    // new character
                    count_set.insert(str[i]);
                    if (count_set.size() == mat.length()) {
                        // found all the matching characters
                        if (st.empty() || st.top().first > (i - start + 1)) {
                            if (!st.empty()) {
                                st.pop();
                            }
                            st.push(pair<int, string>(i - start + 1,
                                                      str.substr(start, i - start + 1)));
                        }
                        count_set.erase(str[start]);
                        start++;
                    }
                } // if it is an already-seen character, just increment i and
                  // keep start as it is
            }
        }
    }
    if (!st.empty()) {
        cout << "min string is " << st.top().second << " of length " << st.top().first;
    }
}
// we can keep a list along with the map and store an
// iterator into this list instead of the int index.
// struct entry {
//     int index;
//     list<entry>::iterator iter;
//     // add to the front and keep list.begin()
// };
bool comp_int(const pair<int, string> &p1, const pair<int, string> &p2) {
    return p1.first < p2.first;
}
void remove_second(string &s1, string s2) {
    if (s1.empty() || s2.empty()) {
        return;
    }
    unordered_multimap<string, int> storage;
    typedef unordered_multimap<string, int>::iterator it;
    stringstream ss(s1);
    int index = 0;
    string str;
    while (ss >> str) {
        //cout << "string is " << str << "\n";
        storage.insert(pair<string, int>(str, index));
        index += str.length();
    }
    ss.clear();   // reset the eof state before reusing the stream
    ss.str(s2);
    while (ss >> str) {
        //cout << "string is " << str << "\n";
        pair<it, it> pit = storage.equal_range(str);
        if (pit.first == pit.second) {
            // not present in s1
            continue;
        }
        // among the duplicates, erase the occurrence with the largest index
        it target = pit.first;
        for (it iter = pit.first; iter != pit.second; ++iter) {
            if (target->second < iter->second) {
                target = iter;
            }
        }
        storage.erase(target);
    }
    // now iterate over the hashmap and sort the entries by the value
    vector<pair<int, string> > v;
    for (it iter = storage.begin(); iter != storage.end(); ++iter) {
        v.push_back(pair<int, string>(iter->second, iter->first));
    }
    std::sort(v.begin(), v.end(), comp_int);
    // print the vector
}
// note: storing entry inside its own unordered_set requires a complete type;
// a real trie would typically use map<char, entry*> for the children.
struct entry;
struct hash_func {
    size_t operator()(const entry &a) const;
};
struct equal_func {
    bool operator()(const entry &a, const entry &b) const;
};
struct entry {
    char c;
    unordered_set<entry, hash_func, equal_func> children;
};
inline size_t hash_func::operator()(const entry &a) const {
    return std::hash<char>()(a.c);
}
inline bool equal_func::operator()(const entry &a, const entry &b) const {
    return a.c == b.c;
}
typedef unordered_set<entry, hash_func, equal_func> mp_t;
typedef mp_t::iterator it_t;
class trie {
    mp_t root;
    mp_t &find_next(mp_t &level, char a) {
        // look the character up at the given level (the caller passes root
        // when starting a new word)
        it_t ret_it = level.find(entry{a, mp_t()});
        if (ret_it == level.end()) {
            // not found: report an error
            throw runtime_error("Not found");
        }
        // set elements are const; children do not participate in the hash,
        // so casting away constness to descend is safe here
        return const_cast<entry &>(*ret_it).children;
    }
};
I am not good at dynamic programming, so the border conditions may not be correct.
Let the string be s
and the pattern p.
For '.' or a literal character:
if (p[j] == '.' || s[i] == p[j]) {
    DP[i+1][j+1] = DP[i][j];
} else if (p[j] == '+') {
    DP[i+1][j+1] = DP[i][j+1] || DP[i+1][j];
} else if (p[j] == '*') {
    // backtrack, as we may already have consumed a pattern like c* (c already matched)
    DP[i+1][j+1] = DP[i][j+1] || DP[i+1][j];
} else {
    DP[i+1][j+1] = false;
}
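For comparison, this is the standard O(n·m) formulation for '.' and '*' only (where '*' means zero or more of the preceding element), with dp[i][j] meaning s[0..i) matches p[0..j); a sketch, not the interviewer's reference answer:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Standard DP for regex matching with '.' (any one char) and '*'
// (zero or more of the preceding element).
bool is_match(const std::string &s, const std::string &p) {
    size_t n = s.size(), m = p.size();
    std::vector<std::vector<bool>> dp(n + 1, std::vector<bool>(m + 1, false));
    dp[0][0] = true;
    // patterns like a*, a*b* can match the empty string
    for (size_t j = 2; j <= m; ++j)
        if (p[j - 1] == '*') dp[0][j] = dp[0][j - 2];
    for (size_t i = 1; i <= n; ++i) {
        for (size_t j = 1; j <= m; ++j) {
            if (p[j - 1] == '*' && j >= 2) {
                dp[i][j] = dp[i][j - 2]   // zero occurrences of p[j-2]
                    || ((p[j - 2] == '.' || p[j - 2] == s[i - 1]) && dp[i - 1][j]);
            } else if (p[j - 1] == '.' || p[j - 1] == s[i - 1]) {
                dp[i][j] = dp[i - 1][j - 1];
            }
        }
    }
    return dp[n][m];
}
```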
Take a hash multiset of strings, using a hash function that ignores character positions; equality compares the sorted strings.
Iterate over the input strings again and find the equal range: if it has more than one element, those are all anagrams, so print them and erase the entries from the hash set.
class equal_fun {
public:
    bool operator()(const string &a, const string &b) const {
        string s1(a);
        string s2(b);
        sort(s1.begin(), s1.end());
        sort(s2.begin(), s2.end());
        if (!s1.compare(s2)) {
            cout << a << " equals " << b << "\n";
            return true;
        }
        return false;
    }
};
class hash_fun {
public:
    size_t operator()(const string &a) const {
        size_t asum = 0;   // start from 0 so an empty string is safe
        for (int i = 0; i < (int)a.length(); i++) {
            asum ^= a[i];
        }
        return asum;
    }
};
void anagram_set() {
    vector<string> input = {"dog", "ogd", "kak", "akk", "abg", "gdo", "alo"};
    unordered_multiset<string, hash_fun, equal_fun> hms;
    typedef unordered_multiset<string, hash_fun, equal_fun>::iterator hit;
    for (int i = 0; i < (int)input.size(); i++) {
        hms.insert(input[i]);
    }
    for (hit h = hms.begin(); h != hms.end(); ++h) {
        cout << "Added " << *h << "\t";
    }
    cout << "\n";
    for (int i = 0; i < (int)input.size(); i++) {
        pair<hit, hit> phit = hms.equal_range(input[i]);
        if (phit.first != phit.second) {
            hit tempiter = phit.first;
            while (tempiter != phit.second) {
                cout << "Anagram " << *tempiter << "\t";
                ++tempiter;
            }
            cout << "\n";
            hms.erase(phit.first, phit.second);
        }
    }
}
class subscription {
    string name;
    string URI;
    string channel_type;
    TimeRange range;
    FeedInterval interval;
    bool is_push;
    list<Contents> contents;
    int max_contents_entries;
    int max_contents_size;
    Interval last_read;
    void get_new_contents();
    // fetch new contents and update the contents list
};
class Monitor {
    list<subscription> s;
    int min_read_interval;
    void gather_reads(); // go over the subscriptions and call get_new_contents
                         // if last_read + interval < now.
                         // if is_push, then skip.
                         // if the range condition is met, call remove_subscription
    void runner();       // runs every min_read_interval and calls gather_reads
};
class app {
    list<subscription> sub;
    Monitor m;
public:
    bool get_subscription(string url);
    // and update the monitor with the subscription
    string list_subscriptions();
    void remove_subscription(string uri); // look for the uri in the subscription list
    void display_reads();
};
I am wondering how you would design the schema to store log records so that queries by timestamp, by log level, and by application type all succeed. For the timestamp, I believe the logs can be ordered (via re-timestamping) at the time they are retrieved from Kafka (or another queuing system), rather than using the timestamp provided by the application server.
Once the logs are pushed to the log collector (Logstash etc.), some real-time analytics or decisions can be made per log instance (e.g. event triggers, actions on critical logs, raising alerts). Apart from these real-time actions, the logs need to be stored and queried.
Let's suppose the logs are stored in a database (RDBMS or NoSQL?); these need to be searched to answer the queries this question poses.
How shall we store the log records to allow fast search/lookups?
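As a toy illustration of the multi-index idea (one primary store plus a secondary index per query dimension; a real system would express these as database indexes rather than in-memory maps), with all field names illustrative:

```cpp
#include <cassert>
#include <cstddef>
#include <map>
#include <string>
#include <unordered_map>
#include <vector>

// Illustrative log record; timestamp is assigned at ingestion so it is monotonic.
struct log_record {
    long timestamp;
    std::string level;     // e.g. "INFO", "ERROR"
    std::string app;       // application type
    std::string message;
};

class log_store {
    std::vector<log_record> records_;                          // primary store
    std::multimap<long, size_t> by_time_;                      // ordered: range scans
    std::unordered_multimap<std::string, size_t> by_level_;    // point lookups
    std::unordered_multimap<std::string, size_t> by_app_;
public:
    void ingest(const log_record &r) {
        size_t id = records_.size();
        records_.push_back(r);
        by_time_.insert({r.timestamp, id});
        by_level_.insert({r.level, id});
        by_app_.insert({r.app, id});
    }
    // timestamp range query via the ordered index
    std::vector<log_record> by_time_range(long from, long to) const {
        std::vector<log_record> out;
        for (auto it = by_time_.lower_bound(from);
             it != by_time_.end() && it->first <= to; ++it)
            out.push_back(records_[it->second]);
        return out;
    }
    size_t count_level(const std::string &level) const {
        return by_level_.count(level);
    }
};
```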
1) For string matching:
a) Create a trie of the query keywords. A match is found when a word in the string matches a complete trie path.
2) Read the file line by line, tokenize each line, and pass each word into the trie for matching.
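The keyword-matching step can be sketched with a minimal trie; this is an assumed interface (insert plus whole-word match), not the exact one used in the class below:

```cpp
#include <cassert>
#include <map>
#include <memory>
#include <string>

// Minimal trie for whole-word matching: insert the query keywords, then
// test each token for a complete root-to-marked-node match.
struct trie_node {
    bool is_word = false;
    std::map<char, std::unique_ptr<trie_node>> children;
};

class word_trie {
    trie_node root_;
public:
    void insert(const std::string &w) {
        trie_node *cur = &root_;
        for (char c : w) {
            auto &child = cur->children[c];
            if (!child) child = std::make_unique<trie_node>();
            cur = child.get();
        }
        cur->is_word = true;   // mark the end of a complete keyword
    }
    bool match(const std::string &w) const {
        const trie_node *cur = &root_;
        for (char c : w) {
            auto it = cur->children.find(c);
            if (it == cur->children.end()) return false;
            cur = it->second.get();
        }
        return cur->is_word;   // complete path only, not a prefix
    }
};
```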
class range {
    int start;
    int end;
};
class current_smallest_window {
    int sz;
    list<range> rg; // potentially multiple smallest ranges
};
class smallest_window {
    static const int MAX_LINE_SIZE = 4096;
    ifstream ifile;
    trie trie_f;
    set<string> query_words;
    int start_index;
    unordered_set<string> match_set;
    current_smallest_window csw;
    smallest_window(const char *file_path, const char *query) :
        ifile(file_path), start_index(-1) {
        // read the query words into the set
    }
    void build_trie() {
        // iterate over the query words and create the trie
    }
    void process_file_for_range() {
        char line[MAX_LINE_SIZE];
        int count = 0;
        while (1) {
            ifile.getline(line, MAX_LINE_SIZE);
            if (!ifile.good()) {
                if (!ifile.eof()) {
                    throw exception();
                }
                break; // end of file
            }
            count++;
            vector<string> tokens;
            get_line_tokens(line, tokens);
            if (!tokens.empty()) {
                // pass these to the trie for matching
                set<string> current_match_set = get_match_tokens_from_trie(tokens);
                if (current_match_set.empty()) {
                    continue;
                }
                if (current_match_set.size() == query_words.size()) {
                    // all found: insert into current_smallest_window
                    continue;
                }
                // iterate over current_match_set and check whether it contains
                // everything already in match_set. If so, update start_index and
                // set match_set = current_match_set; otherwise (the current match
                // may have new words but not all the existing ones), keep the
                // same start_index and merge the new matches into match_set.
            }
        }
    }
};
class my_exception: public exception {
std::string reason;
public:
my_exception(string s) : reason(s) {}
virtual const char* what() const throw() {
return reason.c_str();
}
};
class cb {
private:
int sz;
int *buffer;
int count;
int rindex, windex;
public:
cb(int mz) : sz(mz + 1), count(0), rindex(0), windex(0) {
    buffer = new int[sz];
    bzero(buffer, sizeof(int) * sz);
}
~cb() { delete[] buffer; }
bool full() {
return ((windex + 1)%sz == rindex)? true: false;
}
bool empty() {
return rindex == windex? true: false;
}
bool add(int v) {
    if (full()) {
        // reject the write when the buffer is full
        return false;
    }
    buffer[windex] = v;
    windex = (windex + 1) % sz;
    if (full()) {
        cout << " the CB is full \n";
    }
    return true;
}
int get() {
if (empty()) {
throw my_exception("Empty");
}
int v = buffer[rindex];
rindex = (rindex+1)%sz;
if (empty()) {
cout << " the buffer is empty\n";
}
return v;
}
};
int main() {
// your code goes here
cb cr(3);
try {
cr.add(1);
cr.add(2);
cr.add(3);
if(!cr.add(4)) {
cout << "4 cant be added\n";
}
cout << cr.get() <<"\t" << cr.get() << "\t" << cr.get() <<"\n";
cout <<"Getting the 4th element \n";
cr.get();
} catch(const exception &e) {
cout << "Exception caught: " << e.what() << "\n";
}
}
Construct a BST using pre order and then print in order
#include <cstdlib>
#include <iostream>
#include <functional>
#include <string>
#include <map>
using namespace std;
struct node {
    node *left, *right;
    int value;
    node(int v) : left(NULL), right(NULL), value(v) {}
};
node *construct_tree(int *ara, int l, int h) {
if (l > h) {
return NULL;
}
node *root = new node(ara[l]);
// find where the next larger is found
int c = l+1;
while(c <= h && ara[c] <= ara[l]) {
c++;
}
// c is now at the index where the right subtree starts.
root->left = construct_tree(ara, l+1, c-1);
root->right = construct_tree(ara, c, h);
return root;
}
void print_tree(node *r) {
if (!r) {
return;
}
print_tree(r->left);
cout << r->value << "\t";
print_tree(r->right);
}
int main() {
// your code goes here
int ara[] = {10, 5, 1, 7, 40, 50};
node * root = construct_tree(ara, 0, sizeof(ara)/sizeof(ara[0]) -1);
print_tree(root);
return 0;
}
I will use a combination of a stack and a set (in C++, std::set keeps its elements sorted).
class compr {
public:
    bool operator()(const element &a, const element &b) const {
        // implement a greater-than comparator
    }
};
struct ds {
    stack<element> st;
    set<element, compr> ss; // or use std::greater<T> where T is a POD
};
AddElement: push onto the stack and insert into the set.
RemoveLast: pop the stack and erase that element from the set.
GetLastElement: stack.top().
Get_min: *set.begin() (after checking that the stack is not empty).
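A concrete sketch of that combination, using a multiset so duplicate values are handled correctly (a plain set would collapse them), with int standing in for the element type:

```cpp
#include <cassert>
#include <set>
#include <stack>

// Stack + ordered multiset: the multiset (not the stack) keeps elements
// sorted, so the minimum is always *begin(); every removal must be
// mirrored in both containers.
class min_stack {
    std::stack<int> st_;
    std::multiset<int> ss_;
public:
    void push(int v) { st_.push(v); ss_.insert(v); }
    void pop() {
        ss_.erase(ss_.find(st_.top()));   // erase one copy only
        st_.pop();
    }
    int top() const { return st_.top(); }
    int get_min() const { return *ss_.begin(); }   // caller checks non-empty
    bool empty() const { return st_.empty(); }
};
```

Push, pop, and get_min are all O(log n) with this layout; a stack of running minima would give O(1) but is a different design.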
Keep the already parsed characters in a vector, and print once all characters from the string have been consumed into the vector.
#include <iostream>
using namespace std;
#include <unordered_map>
#include<list>
#include <string>
#include <vector>
#include <cassert>
#include <sstream>
#include <cctype>
void print_comb(const string &s, int index, int max_index, vector<char> st) {
if (index == max_index) {
for(vector<char>::iterator it = st.begin(); it != st.end(); ++it) {
cout << *it;
}
cout <<"\n";
return;
}
st.push_back(s[index]);
print_comb(s, index+1, max_index, st);
st.pop_back(); // remove current
int count = 1;
// now add the number
if (!st.empty()) {
if (isdigit(st.back())) {
// get the digit and increase the count.
count += (int)(st.back() - '0');
st.pop_back();
}
}
char to_print = (char)(count + '0');
st.push_back(to_print);
print_comb(s, index+1, max_index, st);
}
void find_combinations(std::string s) {
if (!s.length()) {
return;
}
vector<char> st;
print_comb(s, 0, s.length(), st);
}
int main() {
// your code goes here
std::string str("ABCD");
find_combinations(str);
return 0;
}
int know_someone(int *ara, int n, int index) {
    if (index == n) {
        // a single person is trivially the celebrity of the set {n}
        return ara[index];
    }
    int current_celeb = know_someone(ara, n, index + 1);
    // current_celeb is the celebrity of the people [index+1, n]
    if (current_celeb != -1 && !knows(current_celeb, ara[index])) {
        // the existing celebrity knows nobody new, so they remain the candidate
        return current_celeb;
    }
    int i = index + 1;
    // check whether ara[index] knows anyone from [index+1, n]
    for (; i <= n; i++) {
        if (knows(ara[index], ara[i])) {
            break;
        }
    }
    if (i == n + 1) {
        // knows nobody, so ara[index] becomes the candidate
        return ara[index];
    }
    return -1;
}
Let's keep a set of people who know no one in the given list:
walk the list; if the current person is found to know someone in the set, remove that person from the set;
otherwise, at the end of the iteration, insert the current person into the famous-person set.
// since it is just an iteration over the famous people, any DS will do, as long as insertion and removal are not expensive
Set: O(log n),
Unordered set/list/deque: O(1).
I implemented it using a set, though the above choices are better.
struct people {
    string name;
    bool operator<(const people &o) const { return name < o.name; }
};
set<people> get_famous_people(const list<people> &c) {
    if (c.empty()) {
        return set<people>();
    }
    set<people> famous;
    set<people>::iterator fit;
    for (list<people>::const_iterator it = c.begin(); it != c.end(); ++it) {
        fit = famous.begin();
        bool is_seen = false;
        while (fit != famous.end()) {
            if (know_each_other(*fit, *it)) {
                // neither of an acquainted pair can be famous
                fit = famous.erase(fit);
                is_seen = true;
            } else {
                ++fit;
            }
        }
        if (!is_seen) {
            famous.insert(*it);
        }
    }
    return famous;
}
I did not use the linked-list hashtable, as it is not available in C++. I just learned that there is a linked hashtable (LinkedHashMap) available in Java.
Not knowing this hashtable → linked-list data structure, I initially thought of having two hashtables:
1) The first hashtable has the entries that have been seen just once.
2) The second hashtable has the duplicate entries.
The process is like this:
a) For any new entry, look it up in the second table; if present, ignore the entry.
b) If it is not in hashtable 2, look it up in table 1:
b1) If found in table 1, remove the entry from table 1 and insert it into table 2.
b2) Else insert it into table 1.
c) Steps a and b separate single from multiple entries with O(1) access time.
d) To query the oldest single entry, look in table 1. In table 1, keep the entries as unordered_map<entry, timestamp>, where the timestamp is the entry number (or any monotonically increasing counter). Iterate over table 1 and find the lowest timestamp.
e) The linked hashtable provides this facility directly.
Hence I propose this solution:
#include <iostream>
using namespace std;
#include <unordered_map>
#include <list>
#include <algorithm>
#include <functional>
struct entry {
entry(int i): id(i) {}
int id;
};
class hash_s {
public:
size_t operator() (const entry &e) const {
return std::hash<int>()(e.id);
}
};
class comp_s {
public:
bool operator() (const entry &l, const entry &r) const {
if (l.id == r.id ) {
return true;
}
return false;
}
};
class not_found_exception : public std::exception {
string str_reason;
public:
not_found_exception(const char *input) : str_reason(input) {}
virtual const char *what() const throw() {
return str_reason.c_str();
}
};
class linked_hash {
unordered_map<entry, list<entry>::iterator, hash_s, comp_s> sobject;
list<entry> lobjects;
public:
void insert_object(int i) {
entry en(i);
unordered_map<entry, list<entry>::iterator, hash_s, comp_s>::iterator it =
sobject.find(en);
if (it != sobject.end()) {
// found it
if (it->second != lobjects.end()) {
// if this is more than the first time.
lobjects.erase(it->second);
it->second = lobjects.end();
}
return;
}
// first time
lobjects.push_front(en);
sobject.insert(pair<entry, list<entry>::iterator>(en, lobjects.begin()));
}
entry get_first_non_repeating() {
if (!lobjects.empty()) {
return lobjects.back();
}
throw not_found_exception("No non repeating element");
}
};
int main() {
    linked_hash lh;
    lh.insert_object(1);
    lh.insert_object(2);
    lh.insert_object(1); // 1 repeats and drops out of the list
    cout << lh.get_first_non_repeating().id << "\n"; // prints 2
    return 0;
}
#include <iostream>
using namespace std;
#include <vector>
vector<pair<int, int> > build_roads(int n, vector<pair<int, int> > in) {
vector<vector<int> > matrix(n, vector<int>(n));
for(vector<pair<int, int> >::iterator it = in.begin(); it != in.end(); ++it) {
    // check whether it->first and it->second are within the limits
    matrix[it->first][it->second] = 1;
    matrix[it->second][it->first] = 1; // roads are bidirectional
}
vector<pair<int, int> > out;
for(int i =0; i < n; i++) {
for(int j=i+1; j<n; j++) {
if (!matrix[i][j]) {
out.push_back(pair<int,int>(i,j));
}
}
}
return out;
}
int main() {
// your code goes here
int ara[][2] = {{0,1}, {1,2}, {2,3}};
vector<pair<int, int> > in;
for(int k = 0; k < sizeof(ara)/sizeof(ara[0]); k++) {
in.push_back(pair<int, int>(ara[k][0], ara[k][1]));
}
vector<pair<int, int> > result = build_roads(4, in);
for(vector<pair<int, int> >::iterator it = result.begin(); it != result.end(); ++it) {
cout << "{" << it->first<<"," <<it->second<<"}\n";
}
return 0;
}
I couldn't follow the question either.
Based on my understanding of the question:
1) Look for independent blocks of the list.
For a given list, an independent block is the set of nodes up to the point where it merges into an existing list.
2) Assumption: circular lists, but without loops.
3) If the list at index i has a node (any node, maybe the head node) pointing into a list at index k where k > i, then list k is considered a subset of list i, and list k will return pair<k, 0>.
4) A list k without any nodes will also return <k, 0>, so the 0 case requires checking whether the list is empty or lies entirely inside an existing list.
#include <iostream>
using namespace std;
#include<vector>
#include<unordered_set>
struct node {
int v;
node *next;
node *prev;
};
vector<pair<int, int> > block_nodes(vector<node*> &vlist) {
unordered_set<node*> lookup_set;
vector<pair<int, int> > result_set;
int count_list = 0;
for(vector<node*>::iterator it = vlist.begin(); it!= vlist.end(); ++it) {
node *chead = *it;
if (!chead) {
result_set.push_back(pair<int, int>(count_list++, 0));
continue;
}
node *temp = chead;
int count = 0;
do {
if(lookup_set.find(temp) != lookup_set.end()) {
cout << "The list merged into another list\n";
break;
}
lookup_set.insert(temp);
count++;
temp = temp->next;
} while(temp != chead);
if (temp == chead) {
cout << "This list is independent, located at index " << count_list << " and "
    " of size " << count << "\n";
}
result_set.push_back(pair<int, int>(count_list, count));
count_list++;
}
return result_set;
}
int main() {
vector<node*> vlist;
vector<pair<int, int> > result = block_nodes(vlist);
// your code goes here
return 0;
}
The other solution is to have an array of size 256, initialized to -1. Iterate over the given string and look up array[str[i]]: if it equals -1, set it to i; otherwise mark str[i] as repeated with a sentinel (marking with 255, as I originally wrote, collides with a valid index, so a separate sentinel is safer).
Then iterate over the array and find the lowest stored index that is neither -1 nor the repeated sentinel.
str[index] is the character at that position.
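A sketch of that array-based variant, using -2 as the "repeated" sentinel for the reason noted above:

```cpp
#include <cassert>
#include <string>

// 256-slot array variant of first-non-repeating: store the first occurrence
// index per character, and demote it to -2 on a second sighting.
int first_non_repeating_index(const std::string &str) {
    int pos[256];
    for (int i = 0; i < 256; ++i) pos[i] = -1;   // -1: unseen
    for (int i = 0; i < (int)str.length(); ++i) {
        unsigned char c = str[i];
        if (pos[c] == -1) pos[c] = i;            // first occurrence
        else pos[c] = -2;                        // -2: repeated
    }
    int lowest = -1;
    for (int i = 0; i < 256; ++i)
        if (pos[i] >= 0 && (lowest == -1 || pos[i] < lowest))
            lowest = pos[i];
    return lowest;   // -1 if every character repeats (or the string is empty)
}
```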
#include <iostream>
using namespace std;
#include <unordered_map>
struct entry {
int count;
int index;
entry():count(0), index(0) {}
};
void find_non_repeating(string str) {
std::unordered_map<char, entry> m;
for(int i =0; i <str.length(); i++) {
entry &e = m[str[i]];
e.count++;
e.index = i;
}
int lowest = str.length();
for(std::unordered_map<char, entry>::iterator it = m.begin(); it != m.end(); ++it) {
if (it->second.count >1) {
continue;
}
if (it->second.index < lowest) {
lowest = it->second.index;
}
}
if (lowest < str.length()) {
cout << "Non repeating at index "<< lowest <<" and is " << str[lowest] <<"\n";
} else {
cout << "Not found\n";
}
}
int main() {
// your code goes here
string str("");
find_non_repeating(str);
return 0;
}
Or you can use backtracking with recursion:
void swap(string &str, int a, int b) {
    if (a == b) {
        return;
    }
    char k = str[a];
    str[a] = str[b];
    str[b] = k;
}
// backtracking
void permu(std::string &str, int index) {
if (index == str.length()) {
cout <<str <<"\n";
return;
}
for(int i = index; i < str.length(); i++) {
swap(str, i,index);
permu(str, index+1);
swap(str, index, i);
}
}
#include <iostream>
#include <queue>
#include <vector>
#include <string>
using namespace std;
std::vector<char> out;
std::queue<char> in;
void permute(std::queue<char> &in, std::vector<char> &out) {
// take from the front, add to out, and then push back into the queue
if (in.empty()) {
for(std::vector<char>::iterator it = out.begin(); it != out.end(); ++it) {
cout << *it;
}
cout <<"\n";
return;
}
for(int i = 0; i < in.size(); i++) {
out.push_back(in.front());
in.pop();
permute(in, out);
// the element is not in the in queue.
in.push(out.back());
out.pop_back();
}
}
int main() {
std::string input("abcd");
for(int i = 0; i < input.length(); i++) {
in.push(input[i]);
}
permute(in, out);
// your code goes here
return 0;
}
This can be achieved by a simple graph search. Graph traversal must identify already-seen nodes to avoid search loops (as opposed to trees, where this condition does not exist), so marking a cell as 1 is used to avoid revisits.
#include <iostream>
using namespace std;
#include <vector>
bool is_valid_cell(vector< vector<int> > &v, int cx, int cy) {
    if (cx < 0 || cy < 0 || cx >= (int)v.size() ||
        cy >= (int)v[cx].size() || v[cx][cy] == 1) {
        return false;
    }
    return true;
}
bool find_path(vector< vector<int> > &v, int cx, int cy) {
    if (!is_valid_cell(v, cx, cy)) {
        return false;
    }
    if (v[cx][cy] == 2) {
        cout << "Found at " << cx << ":" << cy << "\n";
        return true;
    }
    // cx and cy are valid: mark the cell as visited
    v[cx][cy] = 1;
    return find_path(v, cx - 1, cy) || find_path(v, cx, cy - 1) ||
           find_path(v, cx + 1, cy) || find_path(v, cx, cy + 1);
}
int main() {
vector< vector<int> > i(10, vector<int>(10));
i[8][6] = 2;
find_path(i, 5, 2);
return 0;
}
The program should work regardless (as all reads and writes happen in the same byte order on the same machine). The only difference arises if it accesses shared memory or, more likely, depends on data from an external source that packs bytes in little-endian order. It shouldn't be an issue as far as network headers are concerned, as those are big-endian anyway; for packed application data, unless the endianness information is either encoded in the serialized data or agreed as part of the protocol, you are out of luck.
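When the byte order does have to be pinned down, the usual fix is to serialize in a fixed order regardless of the host; a small sketch:

```cpp
#include <cassert>
#include <cstdint>

// Detect host byte order by inspecting the first byte of a known value.
bool host_is_little_endian() {
    const std::uint16_t probe = 0x0102;
    return *reinterpret_cast<const unsigned char *>(&probe) == 0x02;
}

// Write a 32-bit value as big-endian bytes, independent of host order.
void put_be32(unsigned char out[4], std::uint32_t v) {
    out[0] = (v >> 24) & 0xFF;
    out[1] = (v >> 16) & 0xFF;
    out[2] = (v >> 8) & 0xFF;
    out[3] = v & 0xFF;
}

// Read the same fixed big-endian encoding back on any host.
std::uint32_t get_be32(const unsigned char in[4]) {
    return (std::uint32_t(in[0]) << 24) | (std::uint32_t(in[1]) << 16) |
           (std::uint32_t(in[2]) << 8) | std::uint32_t(in[3]);
}
```

This is the same idea as htonl/ntohl, but written in shifts so it is correct on either kind of host without conditionals.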
- ali.kheam May 19, 2017
node *&reverse_list_r(node *current, node *&root) {
if (!current) {
return root;
}
node *&received = reverse_list_r(current->next, root);
received = current;
return current->next;
}
void reverse_list(node *&r) {
reverse_list_r(r, r) = NULL;
}
void iterat(node *k) {
while(k) {
cout << k->v <<"\t";
k = k->next;
}
cout <<"\n";
}
MapReduce will do too, though I am only a little familiar with this paradigm.
The map takes a block of file data and outputs UID:1 for each UID seen.
Prior to sharding/shuffling the IDs (keys) to a reducer, filtering can be run on the output of each map to drop the non-unique IDs early.
The reducer aggregates by ID and, for any ID with count == 1, emits (unique id, 1); otherwise it consumes it.
I was thinking along the same lines. For a large data set, a distributed hash will work: first hash (or use the first k bits of the UID) to pick the hash table within the set of tables, then hash the UID to locate the entry (increment the counter, or insert). Finally, each table can be individually queried for unique IDs. Hashing and querying can run across multiple threads, where each thread reads a section of the file at some offset into its own hash table.
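A single-process sketch of that sharded-counting idea (each shard standing in for a thread's or machine's private table; sharded_counter and the modulo routing are illustrative choices):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <unordered_map>
#include <vector>

// Route each UID to one of K tables by its low bits, count occurrences,
// then query every shard for count == 1 entries.
class sharded_counter {
    std::vector<std::unordered_map<std::uint64_t, int>> shards_;
public:
    explicit sharded_counter(size_t k) : shards_(k) {}
    void add(std::uint64_t uid) {
        shards_[uid % shards_.size()][uid]++;   // shard pick, then count
    }
    std::vector<std::uint64_t> unique_ids() const {
        std::vector<std::uint64_t> out;
        for (const auto &shard : shards_)
            for (const auto &kv : shard)
                if (kv.second == 1) out.push_back(kv.first);
        return out;
    }
};
```

Because a UID always maps to the same shard, shards never need to coordinate, which is what makes the per-thread (or per-machine) split work.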
- ali.kheam May 03, 2017
#include <iostream>
using namespace std;
#include <cstring>
#include <string>
#include <unordered_map>
#include<stdlib.h>
#include<stdio.h>
#include <deque>
#include <vector>
void permu(deque<char> &i, deque<char> &o) {
if (i.empty()) {
// print the o list
for(int k =0; k <o.size(); k++) {
cout << o[k];
}
cout <<"\n";
return;
}
for(int it = 0; it < i.size(); it++) {
// pick one and insert into the out queue
o.push_back(i.front());
i.pop_front();
permu(i, o);
i.push_back(o.back());
o.pop_back();
}
}
void permutation_do(char *s) {
int slength = strlen(s);
deque<char> in;
deque<char> out;
for(int i = 0; i < slength; i++) {
in.push_back(s[i]);
}
permu(in, out);
}
int main() {
// your code goes here
char a[] = "ABCD";
permutation_do(a);
return 0;
}
#include <iostream>
using namespace std;
#include <set>
int main() {
set<int> md;
// your code goes here
int values[] = {10,5,6,3,0,14,33,78,4,15,8,100,7};
int count = sizeof(values)/sizeof(values[0]);
for(int i =0; i < count; i++) {
md.insert(values[i]);
int to_find = i / 2;
int median = 0;
set<int>::iterator it = md.begin();
// walk to the middle element (note: a set drops duplicate values;
// use a multiset for general input)
for (int in = 0; in < to_find; in++) {
    ++it;
}
median = *it;
if (!((i + 1) % 2)) {
    // even count: average the two middle elements
    ++it;
    median = (median + (*it)) / 2;
}
for(it = md.begin(); it != md.end(); ++it) {
cout << *it << "\t";
}
cout << "\nMedian is " << median <<"\n\n";
}
return 0;
}
It is O(log n) for the insertion into the binary tree plus O(n) for the walk to the middle, i.e. O(n) per median computation.
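For reference, a common way to avoid the O(n) walk entirely is the two-heap running median (max-heap for the lower half, min-heap for the upper), giving O(log n) per insert and O(1) median reads; a sketch:

```cpp
#include <cassert>
#include <functional>
#include <queue>
#include <vector>

// Two-heap running median: lower half in a max-heap, upper half in a
// min-heap, sizes kept within one of each other.
class running_median {
    std::priority_queue<int> lo_;                                       // max-heap
    std::priority_queue<int, std::vector<int>, std::greater<int>> hi_;  // min-heap
public:
    void insert(int v) {
        if (lo_.empty() || v <= lo_.top()) lo_.push(v);
        else hi_.push(v);
        // rebalance so that lo_ holds the extra element when the count is odd
        if (lo_.size() > hi_.size() + 1) { hi_.push(lo_.top()); lo_.pop(); }
        else if (hi_.size() > lo_.size()) { lo_.push(hi_.top()); hi_.pop(); }
    }
    double median() const {
        if (lo_.size() == hi_.size()) return (lo_.top() + hi_.top()) / 2.0;
        return lo_.top();   // odd count: the extra element is the median
    }
};
```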
- ali.kheam April 30, 2017
I apologize for the confusion
- ali.kheam July 30, 2017
The first case is simple: I was asked to implement the cache using a fixed ranking for cache entries.
The second case is complicated. When an entry is looked up, getEntryRank is called to get the updated rank value. If it is an LRU cache, we can assume the new rank will be the minimum of all existing cache entries. For this case, we can use a hashtable whose value is a pointer to the entry's node in a recency-ordered list; the back/front (your choice) of the list is the LRU entry for eviction. Hence lookup is O(1): an O(1) hashtable lookup plus O(1) to move the entry from its current position to the front of the list (erase it from the list and re-link it at the front).
But I was specifically told that getEntryRank can return any value; the new rank can be lower or higher than the existing rank. Maintaining a rank-ordered list would then make lookup an O(n) operation, since we would have to move the entry iteratively to the right spot in the list. So I proposed a binary tree as the associated data structure instead of the list: keep an iterator to the entry's tree node in the hashtable, which gives O(1) location in the tree; erase the entry, get the new rank, and insert it back, so the lookup costs O(log n).
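A sketch of that hashtable-plus-tree design, using std::multimap as the rank-ordered tree and storing an iterator into it alongside each value; get_entry_rank_v2 is a made-up rank function for illustration:

```cpp
#include <cassert>
#include <cstddef>
#include <map>
#include <unordered_map>

// Illustrative rank function: ranks can go up or down arbitrarily.
int get_entry_rank_v2(int key, int hits) { return hits * 10 + key % 10; }

class reranking_cache {
    struct slot { int value; std::multimap<int, int>::iterator pos; int hits; };
    size_t capacity_;
    std::unordered_map<int, slot> table_;
    std::multimap<int, int> ranks_;   // rank -> key; lowest rank evicted first
public:
    explicit reranking_cache(size_t cap) : capacity_(cap) {}
    int lookup(int key, int value_on_miss) {
        auto it = table_.find(key);
        if (it != table_.end()) {
            slot &s = it->second;
            ranks_.erase(s.pos);   // O(1) amortized via the stored iterator
            s.pos = ranks_.insert({get_entry_rank_v2(key, ++s.hits), key});
            return s.value;        // total hit cost: O(log n) re-insert
        }
        if (table_.size() >= capacity_) {   // evict the minimum rank
            table_.erase(ranks_.begin()->second);
            ranks_.erase(ranks_.begin());
        }
        auto pos = ranks_.insert({get_entry_rank_v2(key, 1), key});
        table_[key] = slot{value_on_miss, pos, 1};
        return value_on_miss;
    }
    bool cached(int key) const { return table_.count(key) > 0; }
};
```

The stored multimap iterator is what keeps the re-rank at erase + insert rather than a search; multimap iterators stay valid across other insertions and erasures, which makes this safe.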
Can you work out an O(1) cost for this lookup if the new rank is arbitrary?