Scaling

Listened to this podcast while commuting https://twimlai.com/twiml-talk-269-advancing-autonomous-vehicle-development-using-distributed-deep-learning-with-adrien-gaidon/

A research scientist at Toyota working on deep-learning simulation with high-resolution images had to learn DevOps to scale their testing. Tools used: Docker, Kubernetes, BeeGFS. Why BeeGFS though? He mentioned that their database could hold up to around 700GB before it hit its limits, so they moved over to BeeGFS, a parallel file system. (To read up on BeeGFS.)

On a separate note, when we were scaling a project to 100k users, we went down the same route and used Kubernetes to scale, and it has been very stable.
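For the Kubernetes route above, a typical building block is a Horizontal Pod Autoscaler that grows and shrinks a deployment with load. A minimal sketch, assuming a hypothetical deployment called `web-api` and illustrative replica/CPU thresholds (none of these values come from the project itself):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-api-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-api          # hypothetical deployment name
  minReplicas: 3           # keep a small baseline for availability
  maxReplicas: 50          # cap so a traffic spike can't exhaust the cluster
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods once average CPU passes 70%
```

Applied with `kubectl apply -f hpa.yaml`, this lets the cluster absorb load swings without manual intervention, which is a big part of why the setup feels stable.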
