From home intrusion detection, to self-driving cars, to keeping data center operations healthy, Machine Learning (ML) has become one of the hottest topics in software engineering today. While much of the attention goes to creating the algorithms themselves, a less-discussed challenge is how to serve these models in production, often against real-time streaming data. The traditional approach is to treat the model as code, which means the serving implementation has to be continually adapted every time the model changes. As the number of machine learning tools and techniques grows, the efficiency of this approach becomes increasingly questionable. In addition, machine learning and model serving are driven by very different quality-of-service requirements: machine learning is typically batch-oriented, concerned with scalability and processing power, while model serving is mostly concerned with performance and stability.
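
To make the alternative concrete, here is a minimal sketch of treating the model as data rather than code: the scoring logic is just another stage in a streaming pipeline, and the model behind it could be loaded or swapped independently. This is an illustrative example only; the `Model` trait and the doubling "model" are hypothetical placeholders, and it assumes Akka 2.6+ (where an implicit `ActorSystem` provides the materializer), not any particular library's serving API.

```scala
import akka.NotUsed
import akka.actor.ActorSystem
import akka.stream.scaladsl.{Flow, Sink, Source}

object ModelServingSketch extends App {
  implicit val system: ActorSystem = ActorSystem("model-serving")

  // Hypothetical model abstraction: the model is data that the
  // serving code scores against, not code baked into the pipeline.
  trait Model {
    def score(record: Double): Double
  }

  // Placeholder model. A real pipeline might instead load a
  // TensorFlow SavedModel or a PMML document here, and replace it
  // at runtime when a new version arrives.
  val model: Model = new Model {
    def score(record: Double): Double = record * 2.0
  }

  // Scoring is just another stage in the streaming flow.
  val scoringFlow: Flow[Double, Double, NotUsed] =
    Flow[Double].map(model.score)

  Source(List(1.0, 2.0, 3.0))
    .via(scoringFlow)
    .runWith(Sink.foreach(result => println(s"score: $result")))
}
```

The point of the sketch is the separation of concerns: the stream handles performance and stability of serving, while the model itself remains an artifact that the ML side can update on its own schedule.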



