
8 MLops predictions for enterprise machine learning in 2023




The landscape of MLops is flourishing, in a global market that was estimated at $612 million in 2021 and is projected to exceed $6 billion by 2028. However, it is also highly fragmented, with hundreds of MLops vendors competing for a place in end users’ operational artificial intelligence (AI) ecosystems.

MLops emerged as a set of best practices less than a decade ago to address one of the primary roadblocks preventing enterprises from putting AI into action: the transition from development and training to production environments. This step is essential because nearly one in two AI pilots never makes it into production.

So what trends will emerge in the MLops landscape in 2023? A variety of AI and ML experts shared their predictions with VentureBeat:

1. MLops will move beyond hype

“MLops will not just be a subject of hype, but rather a source of empowering data scientists to bring machine learning models to production. Its primary purpose is to streamline the development process of machine learning solutions.


“As organizations push to adopt best practices for productizing AI, MLops will bridge the gap between machine learning and data engineering, working to seamlessly unify these functions. It will be vital in meeting the evolving challenges of scaling AI systems. The companies that embrace it next year and accelerate this transition will be the ones to reap the benefits.”

Steve Harris, CEO of Mindtech

2. Data scientists will favor prebuilt industry-specific and domain-specific ML models

“In 2023, we’ll see an increased number of prebuilt machine learning [ML] models becoming available to data scientists. These models encapsulate domain expertise, which speeds up time-to-value and time-to-market for data scientists and their organizations. For instance, prebuilt ML models reduce the time that data scientists have to spend on retraining and fine-tuning models. Take a look at the work that the Hugging Face AI community is already doing in driving a marketplace for ready-to-use ML models.

“What I expect to see next year and beyond is an increase in industry-specific and domain-specific prebuilt ML models, allowing data scientists to work on more targeted problems using a well-defined set of underlying data and without having to spend time on becoming a subject matter expert in a field that’s non-core to their organization.”  

Torsten Grabs, director of product management, Snowflake

3. AI and ML workloads running in Kubernetes will overtake non-Kubernetes deployments

“AI and ML workloads are picking up steam, but the dominant projects are still not running on Kubernetes. We expect that to shift in 2023.

“There has been a massive focus on adapting Kubernetes in the last year, with new projects that make it more attractive for developers. These efforts have also focused on adapting Kubernetes offerings so that the compute-intensive workloads of AI and ML can run on GPUs while maintaining quality of service.”

Patrick McFadin, VP of developer relations, DataStax 

4. Operational efficiency will be a line item for 2023 ML budgets

“Investments centered around operational efficiency have been made for several years, but this will be a focal point in 2023, especially as macroeconomic factors unfold and the talent pool remains limited. Those advancing their organizations with machine learning (ML) and other advanced technologies are finding the most success in designing workflows that keep a human in the loop. This approach provides much-needed guardrails when the technology gets stuck or needs additional supervision, while allowing both parties to work efficiently alongside one another.

“Expect to see some initial pushback and hesitancy when educating the masses on ML’s quality assurance process, largely due to a lack of understanding of how the learning systems work and the resulting accuracy. One aspect that still incites doubt, but is a core differentiator between ML and the static, traditional technology we’ve come to know, is ML’s ability to learn and adjust over time. If we can educate leaders better on how to unlock the full value of ML — and its guiding hand to achieving operational efficiency — we’ll see a lot of progress in the next few years.”

Tony Lee, CTO at Hyperscience
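The human-in-the-loop guardrail Lee describes is, at its core, a routing decision: high-confidence model outputs pass straight through, and uncertain ones are queued for a person. A minimal sketch of that idea follows; the threshold, record fields, and function names are illustrative, not from any specific product.

```python
# Minimal sketch of a human-in-the-loop guardrail: predictions with low
# confidence are routed to a human review queue instead of being
# auto-accepted. Threshold and record fields are illustrative.

REVIEW_THRESHOLD = 0.85  # below this confidence, a human takes over

def route_prediction(label, confidence):
    """Return ("auto", label) or ("human", label) based on confidence."""
    if confidence >= REVIEW_THRESHOLD:
        return ("auto", label)   # high confidence: straight through
    return ("human", label)      # low confidence: queue for review

predictions = [
    ("doc-1", "invoice", 0.97),
    ("doc-2", "receipt", 0.62),  # ambiguous: goes to a person
    ("doc-3", "invoice", 0.91),
]

auto, human = [], []
for rec_id, label, conf in predictions:
    lane, _ = route_prediction(label, conf)
    (auto if lane == "auto" else human).append(rec_id)

print(auto)   # ['doc-1', 'doc-3']
print(human)  # ['doc-2']
```

The single threshold is the simplest possible policy; in practice teams often tune it per class, since the cost of a false positive and a false negative is rarely symmetric.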

5. ML project prioritization will focus on revenue and business value

“Looking at ML projects in progress, teams will have to be far more efficient, given the recent layoffs, and look toward automation to help projects move forward. Other teams will need to develop more structure and set deadlines to ensure projects are completed effectively. Different business units will have to communicate more, improving collaboration and knowledge sharing so these now-smaller teams can act as one cohesive unit.

“In addition, teams will have to prioritize which types of projects to work on to make the most impact in a short period of time. I see machine learning projects boiling down to two types: sellable features that leadership believes will increase sales and win against the competition, and revenue-optimization projects that directly impact revenue. Sellable-feature projects will likely be postponed, as they’re hard to ship quickly; instead, the now-smaller ML teams will focus more on revenue optimization, as it can drive real revenue. Performance, at this moment, is essential for all business units, and ML isn’t immune to that.”

Gideon Mendels, CEO and cofounder of MLops platform Comet

6. Enterprise ML teams will become more data-centric than model-centric

“Enterprise ML teams are becoming more data-centric than model-centric. If the input data isn’t good and the labels aren’t good, then the model itself won’t be good, leading to a higher rate of false-positive or false-negative predictions. This means much more focus on making sure clean and well-labeled data is used for training.

“For example, if Spanish words are accidentally used to train a model that expects English words, one can expect surprises. This makes MLops even more important. Data quality and ML observability are emerging as key trends as teams try to manage data before training and monitor model effectiveness post-production.”

Ashish Kakran, principal, Thomvest Ventures
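The Spanish-versus-English mix-up Kakran mentions is exactly the kind of problem a pre-training data check can catch before it reaches the model. Below is a purely illustrative sketch; the tiny hint-word list and threshold stand in for a real language-detection library, which is what a production pipeline would use.

```python
# Illustrative pre-training data check: flag training rows whose tokens
# look like the wrong language before they reach an English-only model.
# The tiny word list and threshold stand in for a real language detector.

SPANISH_HINTS = {"el", "la", "los", "las", "que", "es", "una", "para"}

def looks_spanish(text, threshold=0.3):
    """Flag text where a large share of tokens match common Spanish words."""
    tokens = text.lower().split()
    if not tokens:
        return False
    hits = sum(1 for t in tokens if t in SPANISH_HINTS)
    return hits / len(tokens) >= threshold

training_rows = [
    "the model failed on the test set",
    "el modelo es una herramienta para todos",
]

flagged = [row for row in training_rows if looks_spanish(row)]
print(flagged)  # only the Spanish row is flagged for review
```

The same gating pattern generalizes: any cheap validity check (schema, label balance, encoding) run before training is far less costly than debugging a degraded model after deployment.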

7. Edge ML will grow as MLops teams expand to focus on end-to-end process

“While the cloud continues to provide unparalleled resources and flexibility, more enterprises are seeing the real value of running ML at the edge, near the source of the data where decisioning occurs. This is happening for a variety of reasons: the need to reduce latency for autonomous equipment, to cut cloud ingest and storage costs, or a lack of connectivity in remote locations where highly secure systems can’t be connected to the open internet.

“Because edge ML deployment is more than just sticking some code in a device, edge ML will experience tremendous growth as MLops teams expand to focus on the full end-to-end process.”

Vid Jain, founder and CEO of Wallaroo AI

8. Feature engineering will be automated and simplified

“Feature engineering, the process by which input data is understood, categorized and prepared in a way that is consumable for machine learning models, is a particularly intriguing area. 

“While data warehouses and streaming capabilities have simplified data ingestion, and AutoML platforms have democratized model development, the feature engineering required in the middle of this process is still a largely manual challenge. It requires domain knowledge to extract context and meaning, data science to transform the data, and data engineering to deploy the ‘features’ into production models. We expect to see significant strides made in automating and simplifying this process.”

Rudina Seseri, founder and managing partner of Glasswing Ventures
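Seseri’s description of feature engineering, turning raw fields into model-ready numeric inputs, can be made concrete with a small sketch. The field names and transforms below (one-hot encoding a category, min-max scaling a numeric column) are hypothetical examples of the manual work that automation aims to absorb, not any particular product’s API.

```python
# Sketch of the manual feature-engineering step described above: raw
# records become fixed-length numeric feature vectors (one-hot category
# plus a scaled amount). Field names and transforms are hypothetical.

raw_records = [
    {"channel": "web",   "amount": 120.0},
    {"channel": "store", "amount": 40.0},
    {"channel": "web",   "amount": 200.0},
]

def build_features(records):
    """Turn raw dicts into fixed-length numeric feature vectors."""
    categories = sorted({r["channel"] for r in records})
    amounts = [r["amount"] for r in records]
    lo, hi = min(amounts), max(amounts)
    span = (hi - lo) or 1.0  # avoid division by zero on constant columns

    features = []
    for r in records:
        one_hot = [1.0 if r["channel"] == c else 0.0 for c in categories]
        scaled = (r["amount"] - lo) / span  # min-max scale to [0, 1]
        features.append(one_hot + [scaled])
    return features

for vec in build_features(raw_records):
    print(vec)
```

Even this toy version shows why the step resists automation: deciding that `channel` is categorical and `amount` should be scaled encodes domain judgment that tooling has to infer or ask for.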
