If we look back five years, most enterprises were just getting started with machine learning and predictive AI, trying to figure out which projects they should choose. This is a question that is still incredibly important, but the AI landscape has now evolved dramatically, as have the questions enterprises are working to answer.
Most organizations find that their first use cases are harder than anticipated. And the questions just keep piling up. Should they go after the moonshot projects or focus on steady streams of incremental value, or some mix of both? How do you scale? What do you do next?
Generative models – ChatGPT being the most impactful – have completely changed the AI scene and forced organizations to ask entirely new questions. The big one is, which hard-earned lessons about getting value from predictive AI do we apply to generative AI?
Top Dos and Don’ts of Getting Value with Predictive AI
Companies that generate value from predictive AI tend to be aggressive about delivering those first use cases.
Some Dos they follow are:
Choosing the right projects and qualifying those projects holistically. It’s easy to fall into the trap of spending too much time on the technical feasibility of projects, but the successful teams are ones that also think about getting appropriate sponsorship and buy-in from multiple levels of their organization.
Involving the right mix of stakeholders early. The most successful teams have business users who are invested in the outcome and even asking for more AI projects.
Fanning the flames. Celebrate your successes to inspire, overcome inertia, and create urgency. This is where executive sponsorship comes in very handy. It helps you to lay the groundwork for more ambitious projects.
Some of the Don’ts we notice with our clients are:
Starting with your hardest, highest-value problem. This introduces a lot of risk, so we advise against it.
Deferring modeling until the data is perfect. This mindset can lead to deferring value indefinitely and unnecessarily.
Focusing on perfecting your organizational design, operating model, and strategy before delivering real use cases, which can make it very hard to scale your AI projects.
What New Technical Challenges May Arise with Generative AI?
Increased computational requirements. Generative AI models require high-performance hardware to train and run. Companies will either need to own this hardware or use the cloud.
Model evaluation. By nature, generative AI models create new content. Predictive models use very clear metrics, like accuracy or AUC. Generative AI requires more subjective and complex evaluation metrics that are harder to implement.
Systematically evaluating these models, rather than having a human review every output, means deciding which metrics are fair to apply across all of them, and that is a harder task than evaluating predictive models. Getting started with generative AI models can be easy, but getting them to generate meaningfully good outputs is harder. (A minimal sketch of such an automated check appears after this list.)
Ethical AI. Companies need to make sure generative AI outputs are mature, responsible, and not harmful to society or their organizations.
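To make the model evaluation point above concrete, here is a minimal sketch assuming a toy classification task and a single human-written reference answer. The lexical-similarity heuristic is an illustrative stand-in for whichever metric a team actually settles on, not a recommendation.

```python
# Minimal sketch: an objective score for a predictive model vs. an approximate,
# automated score for a generative model. The sample data and the similarity
# heuristic are illustrative assumptions only.
from difflib import SequenceMatcher

# Predictive AI: a crisp metric such as accuracy.
y_true = [1, 0, 1, 1, 0]
y_pred = [1, 0, 0, 1, 0]
accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
print(f"predictive accuracy: {accuracy:.2f}")

# Generative AI: there is no single correct answer, so quality has to be
# approximated, here via lexical similarity to a human-written reference.
reference = "Refunds are processed within five business days."
generated = "You should receive your refund in about five business days."
similarity = SequenceMatcher(None, reference.lower(), generated.lower()).ratio()
print(f"similarity to reference: {similarity:.2f}")
```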
What are Some of the Primary Differentiators and Challenges with Generative AI?
Getting started with the right problems. Organizations that go after the wrong problem will struggle to get to value quickly. Focusing on productivity gains rather than pure cost reduction, for example, tends to be a much more successful approach. Moving too slowly is also an issue.
The last mile of generative AI use cases is different from predictive AI. With predictive AI, we spend a lot of time on the consumption mechanism, such as dashboards and stakeholder feedback loops. Because generative AI outputs come in the form of human language, it should be faster to get to these value propositions; the interactivity of natural language makes it easier to move quickly.
The data will be different. The nature of data-related challenges will change: generative AI models are better at working with messy and multimodal data, so we may spend a little less time preparing and transforming our data.
What Will Be the Biggest Change for Data Scientists with Generative AI?
Change in skillset. We need to understand how these generative AI models work. How do they generate output? What are their shortcomings? What are the prompting strategies we might use? It’s a new paradigm that we all need to learn more about.
Increased computational requirements. If you want to host these models yourself, you will need to work with more complex hardware, which may be another skill requirement for the team.
Model output evaluation. We'll want to experiment with different types of models and strategies and learn which combinations work best. This means trying different prompting strategies, data chunking strategies, and embeddings, then running those experiments and evaluating them efficiently and systematically: which combination gets us to the best result? (See the sketch after this list.)
Monitoring. Because these models can raise ethical and legal concerns, they will need closer monitoring. There must be systems in place to monitor them more rigorously.
New user experience. We may want to keep humans in the loop and think about what new user experiences to incorporate into the modeling workflow. Who will be the main personas involved in building generative AI solutions? How does this contrast with predictive AI?
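As a rough illustration of the systematic experimentation described above, here is a minimal sketch of looping over prompting and chunking combinations. The names generate_answer and score_answer are hypothetical placeholders for whatever model call and evaluation metric a team actually uses.

```python
# Minimal sketch of comparing prompt / chunk-size combinations systematically.
# generate_answer and score_answer are hypothetical placeholders, not real APIs.
from itertools import product

prompt_templates = ["Answer briefly: {q}", "Answer step by step: {q}"]
chunk_sizes = [256, 512, 1024]

def generate_answer(template: str, chunk_size: int, question: str) -> str:
    # Placeholder: call your model / retrieval pipeline here.
    return f"(chunks of {chunk_size} tokens) " + template.format(q=question)

def score_answer(answer: str) -> float:
    # Placeholder: plug in the evaluation metric chosen for the use case.
    return len(answer) % 10 / 10.0

results = []
for template, chunk_size in product(prompt_templates, chunk_sizes):
    answer = generate_answer(template, chunk_size, "How do refunds work?")
    results.append((score_answer(answer), chunk_size, template))

# Rank configurations so the best-scoring combination surfaces first.
for score, chunk_size, template in sorted(results, reverse=True):
    print(f"score={score:.2f}  chunk_size={chunk_size}  prompt={template!r}")
```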
When it comes to the differences organizations will face, the people won't change too much with generative AI. We still need people who understand the nuances of models and can research new technologies. Machine learning engineers, data engineers, domain experts, and AI ethics experts will all still be necessary to the success of generative AI. To learn more about what you can expect from generative AI, which use cases to start with, and what our other predictions are, watch our webinar, Value-Driven AI: Applying Lessons Learned from Predictive AI to Generative AI.
About the author
Staff Machine Learning Engineer, DataRobot
Aslı Sabancı Demiröz is a Staff Machine Learning Engineer at DataRobot. She holds a BS in Computer Engineering with a double major in Control Engineering from Istanbul Technical University. Working in the office of the CTO, she enjoys being at the heart of DataRobot’s R&D to drive innovation. Her passion lies in the deep learning space and she specifically enjoys creating powerful integrations between platform and application layers in the ML ecosystem, aiming to make the whole greater than the sum of the parts.