tusharsappal (https://github.com/tusharsappal)
class SpecialStack(object):
    """Stack that supports getMin in O(1) by keeping an auxiliary stack
    whose top is always the minimum of the elements pushed so far."""

    def createStack(self):
        return []

    def pushElement(self, stack, auxiliaryStack, element):
        # The auxiliary stack mirrors the main stack: its top holds the
        # minimum of everything currently on the main stack.
        if not auxiliaryStack or element < auxiliaryStack[-1]:
            auxiliaryStack.append(element)
        else:
            auxiliaryStack.append(auxiliaryStack[-1])
        stack.append(element)

    def popElement(self, stack, auxiliaryStack):
        if not stack:
            print("Stack underflow, not able to pop")
            return None
        auxiliaryStack.pop()
        return stack.pop()

    def getMin(self, stack, auxiliaryStack):
        return auxiliaryStack[-1]

    def printStackElements(self, stack):
        # Print from top of stack to bottom.
        for index in range(len(stack) - 1, -1, -1):
            print(stack[index], end=" ")
        print()

    def implementor(self):
        stack = self.createStack()
        auxiliaryStack = self.createStack()
        for value in (18, 19, 29, 15, 16):
            self.pushElement(stack, auxiliaryStack, value)
        self.printStackElements(stack)
        print("Minimum is", self.getMin(stack, auxiliaryStack))


if __name__ == "__main__":
    SpecialStack().implementor()
This seems to be a neat approach. The solution above covers the case of large data (e.g. large images and videos) and also addresses scalability, but I do not think it will cope with millions of records thrown at it.
If we bring cloud technology into the picture, I would recommend the following:
1. Chip the incoming requests into smaller sections, e.g. 32 bits each, on which each thread operates. After chipping, these chunks are held in memory until a thread actively picks them up and processes them further.
2. Launch a new thread to serve the incoming traffic. Each thread increments a shared sequence counter kept in memory (not on disk, since requests will be bombarding the system).
3. Each thread carries a small piece of data (e.g. 32 bits, matching the chunk size we chipped the incoming requests into; we would obviously store the original request id while chipping the data). Once a sequence number is attached to the thread, the chunk is written to a flat file residing on a high-I/O disk drive.
4. Dedicate a large disk partition to incoming requests, on which no other application writes.
5. Design the folder structure so that a single folder is dedicated to each request; e.g. a large video would have all of its related files residing in one folder.
6. Name the files thread_id_sequence_number_Data_file, so each chunk can be uniquely identified.
7. Monitor for zombie processes and for too many file handles lying idle, and cap the number of threads that can be spawned at any particular time.
8. Since this runs in the cloud, we can place highly available load balancers across geographies, with application redundancy across different geos. With that added level of redundancy, we can plan periodic backups of the disk space.
9. Remove the folders once the files are stored in their entirety in the backend DB.
10. To serve these large data files, we can make use of CDN functionality.
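Steps 1-6 can be sketched roughly as below. This is a hypothetical illustration, not the poster's implementation: the chunk size, folder layout, and function names (`store_request`, `next_sequence`) are all assumptions, and the shared sequence counter lives in memory as described in step 2.

```python
import itertools
import os
import threading

CHUNK_SIZE = 32  # bytes per chunk (assumed; the post says "32 bits")

_sequence = itertools.count()  # shared in-memory sequence counter (step 2)
_lock = threading.Lock()       # guards the counter across threads

def next_sequence():
    with _lock:
        return next(_sequence)

def store_request(request_id, payload, base_dir):
    """Split one request's payload into chunks and write each chunk to the
    request's dedicated folder as <thread_id>_<sequence>_Data_file."""
    # One folder per request (step 5).
    folder = os.path.join(base_dir, request_id)
    os.makedirs(folder, exist_ok=True)
    written = []
    for offset in range(0, len(payload), CHUNK_SIZE):
        chunk = payload[offset:offset + CHUNK_SIZE]
        seq = next_sequence()
        # Unique per-chunk name (step 6).
        name = "%d_%d_Data_file" % (threading.get_ident(), seq)
        path = os.path.join(folder, name)
        with open(path, "wb") as f:
            f.write(chunk)
        written.append(path)
    return written
```

Because the sequence numbers are monotonically increasing, concatenating the chunk files in sequence order reconstructs the original payload.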
There is also a scenario to consider in renaming: if we create a folder whose name already exists, the system should give the option to merge the new folder with the old one, replace the old one, or refuse to create the new folder at all.
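The merge/replace/abort choice above could look something like this minimal sketch; the function name `create_folder` and the `on_conflict` parameter are hypothetical, not part of the original design.

```python
import os
import shutil

def create_folder(path, on_conflict="abort"):
    """Create a folder; when it already exists, honour the caller's choice
    to merge into it, replace it, or abort (the three options above)."""
    if not os.path.isdir(path):
        os.makedirs(path)
        return path
    if on_conflict == "merge":
        return path                  # reuse the existing folder as-is
    if on_conflict == "replace":
        shutil.rmtree(path)          # drop the old contents entirely
        os.makedirs(path)
        return path
    raise FileExistsError("folder %r already exists" % path)
```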
What about the following approach?
- tusharsappal July 20, 2017