suhaib.realtor
If the checkout transaction is backed by the database, then the database guarantees atomicity.
Two customers can add the item to their carts; the back-end service will get the item info and render the front end with the item visible in the cart section.
When it's time to check out, only one of the server requests will succeed. For the other request, the server can formulate a response that causes the front end to display an appropriate message to the customer.
Additionally, the front end can use asynchronous calls (AJAX, Socket.IO, etc.) to the back end to periodically refresh the current status (expiration time, availability, etc.) and avoid unnecessary checkout attempts.
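The "only one checkout succeeds" behavior can be sketched with an in-process atomic flag standing in for the database transaction; in production this would be a conditional UPDATE inside a transaction, and `try_checkout` is an illustrative name:

```cpp
#include <atomic>
#include <cassert>

// Stand-in for a single-quantity item row; in a real system this would be a
// row guarded by a conditional UPDATE inside a database transaction.
struct Item {
    std::atomic<bool> sold{false};
};

// Returns true for exactly one caller: the compare-exchange flips the flag
// atomically, so a second concurrent request observes sold == true and fails.
bool try_checkout(Item &item) {
    bool expected = false;
    return item.sold.compare_exchange_strong(expected, true);
}
```

Whichever request loses the race gets `false` back, and the server can turn that into the "item no longer available" message for the front end.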
For a given list of keywords, we also need to perform an intersection to gather the list of songs that best match the keywords. One can use SQL INTERSECT, or, if the search entries are in memory (a set or hashtable), iterate over the set or look up the hashtable and increment the match count for each keyword that appears in the song description (tags, etc.), then gather the songs that produce the highest counts.
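A sketch of the in-memory variant: count how many query keywords hit each song's tag set, then rank by the count (the `Song` struct and function name are illustrative):

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <unordered_set>
#include <utility>
#include <vector>

struct Song {
    std::string title;
    std::unordered_set<std::string> tags;  // indexed keywords from the description
};

// Count how many query keywords appear in each song's tag set and return the
// songs sorted by descending match count (best matches first).
std::vector<Song> rank_by_matches(const std::vector<Song> &songs,
                                  const std::vector<std::string> &keywords) {
    std::vector<std::pair<int, Song>> scored;
    for (const auto &s : songs) {
        int count = 0;
        for (const auto &k : keywords)
            if (s.tags.count(k)) ++count;  // O(1) hash lookup per keyword
        scored.push_back({count, s});
    }
    std::sort(scored.begin(), scored.end(),
              [](const auto &a, const auto &b) { return a.first > b.first; });
    std::vector<Song> out;
    for (auto &p : scored) out.push_back(p.second);
    return out;
}
```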
- suhaib.realtor July 04, 2017
1) For each user, keep all the products purchased or saved.
2) For each product in the above list, develop an association with all the products that were saved, purchased, or looked at by other customers. For this, we keep a histogram (or a priority list in the DB) mapping product id to a list of related products sorted by association level. Imagine a customer purchases item X and has already purchased items A and B: add these associations to the association DB. The association can be directional, e.g. A-->X(25), B-->X(1), G-->X(4). Here the association of A to X is the strongest.
So, for a customer with a purchase history, find all the associated products, gather the top x by association level, and present these to him.
In short, for every product X, keep the list of associated products (by rank). The associations can be stored in a NoSQL DB as product id and weighted average. This table is updated on every product purchase by anyone and is consulted at recommendation time.
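A sketch of the association table and the top-k lookup described above, using in-memory maps in place of the NoSQL store (all names, e.g. `record_purchase`, are illustrative):

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <unordered_map>
#include <utility>
#include <vector>

// product id -> (related product id -> association weight); a stand-in for
// the NoSQL table described above.
using AssocTable = std::unordered_map<std::string,
                                      std::unordered_map<std::string, int>>;

// On purchase of `bought`, strengthen the directed edge from each item in the
// customer's history toward `bought` (the A-->X edges in the text above).
void record_purchase(AssocTable &table, const std::vector<std::string> &history,
                     const std::string &bought) {
    for (const auto &prev : history)
        table[prev][bought] += 1;
}

// Recommend the top-k products associated with anything in the history.
std::vector<std::string> recommend(const AssocTable &table,
                                   const std::vector<std::string> &history,
                                   size_t k) {
    std::unordered_map<std::string, int> scores;
    for (const auto &prev : history) {
        auto it = table.find(prev);
        if (it == table.end()) continue;
        for (const auto &[prod, w] : it->second) scores[prod] += w;
    }
    std::vector<std::pair<int, std::string>> ranked;
    for (const auto &[prod, w] : scores) ranked.push_back({w, prod});
    std::sort(ranked.rbegin(), ranked.rend());  // highest weight first
    std::vector<std::string> out;
    for (size_t i = 0; i < ranked.size() && i < k; ++i)
        out.push_back(ranked[i].second);
    return out;
}
```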
Can we use the MAC address plus a sequence number to generate the unique id? The MAC address is 48 bits, plus two bytes for the unique number.
Here we partition the unique numbers into sets, which reduces the number of available unique ids; also, the ids belonging to a certain MAC address (as prefix) are not shareable across nodes.
If distribution is not required (and the ids are not used in a sorted data structure, e.g. a binary tree, for lookup purposes), then one can use a static counter and increment it whenever the next id is requested.
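The MAC-plus-sequence scheme can be sketched as a 64-bit id with the 48-bit MAC in the high bits and a 16-bit counter in the low bits (class and field names are my own):

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>

// Compose a 64-bit id: the 48-bit MAC address in the high bits, a 16-bit
// per-node sequence number in the low bits. The counter wraps at 65536, so a
// node can mint at most 2^16 ids before reusing one.
class IdGenerator {
    uint64_t mac_;                        // lower 48 bits hold the MAC address
    std::atomic<uint16_t> counter_{0};    // per-node sequence number
public:
    explicit IdGenerator(uint64_t mac48) : mac_(mac48 & 0xFFFFFFFFFFFFULL) {}
    uint64_t next() {
        return (mac_ << 16) | counter_.fetch_add(1);
    }
};
```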
While reading the file, keep these data structures:
1) A hashtable of the lines read. The key is the hash of the line's words XORed with each other; the value is a structure consisting of the line offset, the word count, and the number of characters. The equality function can then avoid re-reading the line at its offset on every hash collision, since a differing length or word count already rules out a match.
2) A hashset of duplicate-line offsets, kept sorted by increasing offset. If a line is found equal via the hashtable, push its current offset into the hashset.
3) Both data structures (the hashtable and the hashset) are in memory.
4) Once the file has been read, get the number of lines it contains, then get the size of the hashset; that is the number of lines that need to be purged.
5) Iterate over the hashset and start reading lines from an offset K near the tail of the file (around the length of the file minus the size of the purged region): for each entry in the hashset, get the line from the file at K (provided offset K is not itself in the hashset), overwrite the duplicate's offset with it, and remove the hashset entry.
6) A loop may be needed until the hashset is empty, because a line read from offset K may itself be in the hashset.
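The detection half of the scheme (steps 1–3) might look like this in-memory sketch; the XOR-of-word-hashes key plus the length check mirror the cheap equality test above, while the file-compaction steps are left out:

```cpp
#include <cassert>
#include <sstream>
#include <string>
#include <unordered_map>
#include <unordered_set>
#include <vector>

// Cheap line key as in step 1: XOR the hashes of the words together.
static size_t line_hash(const std::string &line) {
    std::istringstream in(line);
    std::string w;
    size_t h = 0;
    while (in >> w) h ^= std::hash<std::string>{}(w);
    return h;
}

// Return the indices of duplicate lines (every occurrence after the first).
// The stored length lets most hash collisions be rejected without comparing
// full line contents, mirroring the equality shortcut described above.
std::unordered_set<size_t> find_duplicates(const std::vector<std::string> &lines) {
    struct Rec { size_t index, length; };
    std::unordered_map<size_t, std::vector<Rec>> seen;  // hash -> candidates
    std::unordered_set<size_t> dups;
    for (size_t i = 0; i < lines.size(); ++i) {
        auto &bucket = seen[line_hash(lines[i])];
        bool dup = false;
        for (const Rec &r : bucket) {
            if (r.length == lines[i].size() && lines[r.index] == lines[i]) {
                dups.insert(i);  // duplicate: record the offset, as in step 2
                dup = true;
                break;
            }
        }
        if (!dup) bucket.push_back({i, lines[i].size()});
    }
    return dups;
}
```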
Or, use MapReduce:
1) Map lines to line numbers: emit the hash of the line as the key, with the value being the pair <offset, line>.
2) The reducer receives the key/value pairs sorted by key (the hash value) and dumps just the offset into its output file as (offset, 0); in case of duplicates, the emitted offset is that of the first occurrence of the line, so the offsets of duplicate lines are never output.
3) The last job merges the (offset, 0) pairs and writes the corresponding lines into the output file.
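A single-process simulation of the map and reduce phases, assuming (for brevity) that distinct lines do not collide under the hash; a real reducer would also compare line contents within a hash group:

```cpp
#include <algorithm>
#include <cassert>
#include <functional>
#include <map>
#include <string>
#include <vector>

// Simulate the two phases: map each (offset, line) to key = hash(line), then
// reduce each key group by keeping only the smallest offset (the first
// occurrence). The surviving offsets identify the lines to copy to the output.
std::vector<size_t> dedup_offsets(const std::vector<std::string> &lines) {
    // Map phase: hash -> offsets of lines with that hash. std::map keeps the
    // keys sorted, standing in for the shuffle/sort between map and reduce.
    std::map<size_t, std::vector<size_t>> groups;
    for (size_t off = 0; off < lines.size(); ++off)
        groups[std::hash<std::string>{}(lines[off])].push_back(off);
    // Reduce phase: emit one offset per distinct line.
    std::vector<size_t> keep;
    for (const auto &[h, offs] : groups)
        keep.push_back(offs.front());  // first occurrence wins
    std::sort(keep.begin(), keep.end());
    return keep;
}
```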
I assume we have two sorted lists that need merging into one combined sorted list, the same way we do in merge sort. The time complexity is O(m+n), though. For an O(1) merge, I assume that just means linking one list onto the other (in case we have a doubly linked list and order does not need to be preserved).
struct node {
    int v;
    node *next;
};

node *merge(node *l, node *r) {
    node *root = NULL;
    node **np = &root;
    while (l && r) {
        if (l->v <= r->v) {
            // select l; grab its link field before advancing l
            *np = l;
            np = &l->next;
            l = l->next;
        } else {
            // select r
            *np = r;
            np = &r->next;
            r = r->next;
        }
    }
    // append whichever list still has nodes
    *np = l ? l : r;
    return root;
}
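A quick sanity check of the merge; the `node` and `merge` definitions are repeated (in a compact form) so the snippet compiles on its own:

```cpp
#include <cassert>
#include <cstddef>

// Same splice-based merge as above, repeated for a self-contained demo.
struct node { int v; node *next; };

node *merge(node *l, node *r) {
    node *root = NULL;
    node **np = &root;
    while (l && r) {
        node **sel = (l->v <= r->v) ? &l : &r;  // pick the smaller head
        *np = *sel;              // splice it onto the result
        np = &(*sel)->next;      // remember its link field...
        *sel = (*sel)->next;     // ...then advance that list
    }
    *np = l ? l : r;             // append the leftover tail
    return root;
}
```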
Two hashtables:
- suhaib.realtor July 05, 2017
1) Friendship graph: hashtable<id, hashset<friend_ids>>. This is an adjacency-list graph representing friendships. The key is the person id; the hashset is the list of that person's friends at distance 1.
2) A hashtable of items and the people who like each of them. Since this is not a one-to-one relationship, one can choose either an unordered multimap or a hashtable of items whose value is a set/hashset of people ids.
3) Operation. Let's assume person <id> likes item <it>:
iterate over that person's friend list from the first table, then do a lookup into the second table for each friend; on a hit, notify that friend of the update (via a callback, etc., or have each person register their friends as observers and notify them during the update, including the liked-item information).
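The two tables and the "like" operation can be sketched as follows (names are illustrative, and notification is modeled simply as returning the list of friends to notify):

```cpp
#include <cassert>
#include <string>
#include <unordered_map>
#include <unordered_set>
#include <vector>

using Id = std::string;

// Table 1: person -> distance-1 friends (adjacency list).
using FriendGraph = std::unordered_map<Id, std::unordered_set<Id>>;
// Table 2: item -> people who like it.
using Likes = std::unordered_map<Id, std::unordered_set<Id>>;

// When `person` likes `item`, record the like and return the friends who
// should be notified: iterate the friend list (table 1), then look each
// friend up in the likers of this item (table 2).
std::vector<Id> on_like(FriendGraph &friends, Likes &likes,
                        const Id &person, const Id &item) {
    likes[item].insert(person);
    std::vector<Id> notify;
    for (const Id &f : friends[person])
        if (likes[item].count(f))   // friend also likes this item: a hit
            notify.push_back(f);
    return notify;
}
```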