It’s no secret: everyone is talking about generative AI (GenAI). In 2023, GenAI funding grew fivefold year over year. According to a McKinsey report, “Generative AI could add the equivalent of $2.6 trillion to $4.4 trillion annually… by comparison, the United Kingdom’s entire GDP in 2021 was $3.1 trillion.” McKinsey also identified marketing as one of the four key areas of value for generative AI applications, and IDC predicts that generative AI will take over 30% of marketing’s mundane tasks by 2027. For marketing leaders, effectively managing the transition to generative AI with your team is crucial.

As the largest privately held software company in the world, we’re diligently exploring the ways we can use GenAI to innovate, improve productivity, and reduce time to market – all while ensuring our advancements are made responsibly. Our incredible research and development teams are implementing generative AI into our software for customers to use (including in our software designed for marketers), but here I want to dive into our philosophy for adopting AI within our marketing department.

While there may not be any experts in this area quite yet, I’d like to share my perspective on how we’re approaching GenAI adoption. In this peek behind the curtain, I’ll share my thoughts and a few of our strategies as a global marketing organization. Hopefully, it will inspire you to innovate boldly, act responsibly, and proactively upskill your teams.
The current state of GenAI in marketing
So, with all the hype, where are we today? Despite widespread conversation around GenAI, many teams are struggling to implement it. The good news? This means there’s still time to be among the early adopters. A few years from now (or likely sooner), GenAI will be expected as part of the consumer experience. When you adopt at that phase, customers often perceive those updates as overdue. At this stage of the AI game, however, there is still time to wow your customers with your use of GenAI. Integrating GenAI into your internal operations and external customer experiences before your competitors do will increase your brand’s reputation, customer engagement, and market value.

Of course, being a first mover also comes with risks. The market is still new, and regulations are being created in real time. Only two weeks into 2024, 89 bills or resolutions governing AI use had already been introduced in the US alone. Since then, governments around the world have continued to introduce laws and guidance on AI governance, with more expected to be enacted this year.

Additionally, many of the companies developing generative AI technologies are new. According to CB Insights, “Of almost 800 GenAI companies we’ve identified, 16% have yet to raise any outside equity funding and about two-thirds are Series A or earlier. Less than 15% are mid- to late-stage startups.” Choosing which companies to invest in and partner with during these early days of widespread adoption is going to be crucial.
Is GenAI coming for our jobs?
In addition to the newness of the market and regulatory requirements, many marketers are hesitant to dive into generative AI adoption for a more personal reason: fears that AI will replace marketing jobs. Many leaders are even asking whether outsourcing marketing tasks to GenAI will put budgets and staffing at risk.

My opinion? AI isn’t coming for our jobs – but marketing professionals with AI skills will. I believe this understanding will fundamentally shift how you approach AI integration and adoption in your marketing department. For me personally, it has focused my attention on communicating our vision around AI for our marketing team, creating guidelines for ethical and responsible use, and paving a clear pathway for our marketers to increase their AI skills.
Think big: communicating your vision for GenAI use
There are a lot of fears around GenAI adoption, and truthfully, many of them are legitimate. Irresponsible adoption and management can lead to reputational damage, operational mistakes, and lapses in customer data protection. However, I often see leaders starting their GenAI adoption plans with risk mitigation, without communicating the broader vision. Certainly, we need rules and regulations. But it is crucial to communicate to our teams why we’re developing guidelines in the first place. For me, I see them as the foundation that allows bold, innovative, and transformative use of AI.

At SAS, thoughtful and responsible innovation is built into our DNA. We have nearly 50 years of experience leading the data and AI space, but one of the things that sets us apart is our focus on innovating thoughtfully and responsibly. We work with organizations around the world that handle sensitive and crucial data, and our teams take that responsibility seriously.

We knew we’d need to lay a foundation of responsibility around AI right away. We also knew we needed to encourage our employees to be bold. Change can be intimidating, and we wanted our teams to hear loud and clear from leadership that we’re taking appropriate safety measures – and that within those bounds, we wanted them to think big and bold. The message must come from the top: we are relentlessly responsible so that we can be wildly innovative without fear of harming our customers.
Partner with legal: creating collaborative solutions from the start
Once we had a vision in place, we worked to leverage existing channels for regulation, governance, and change management. Our goal was to create a safety net of checks and balances while still fostering a culture of imagination and exploration around the possibilities of GenAI in marketing.

First, we knew we needed to partner with our colleagues in legal. There’s a longstanding history of marketers and lawyers not seeing eye to eye. Thankfully, at SAS, we’ve built a collaborative partnership to achieve our goals as one team – including creating AI adoption policies for marketing. One thing that’s been a game-changer for us is having our lawyers meet with our marketers and hear their needs directly. They understand the vision and are working to create policies that protect both our business and our customers without stifling bold innovation.

Our legal team also created a formula to assess reputational risk. They can now run scenarios through this formula to help us make data-driven choices about the risk of new GenAI strategies and allow our marketers to make informed, responsible decisions. As this partnership grows and evolves, we expect to regularly have a legal representative in marketing ideation meetings, listening to the ideas and needs of our marketers. I know this is far from the norm of how legal and marketing have historically worked together, and I am grateful for a legal team that is boldly championing the vision behind adopting GenAI in our marketing department.
Keep humans in the loop for ethical implementation
In addition to making sure your organization’s use of GenAI is legal, it’s important to ensure that it’s ethical. One of the key practices is to keep humans involved along the way to monitor for errors, bias, and discriminatory practices.

“It’s important to note that generative AI, like any form of AI, requires human oversight to be trustworthy,” says Reggie Townsend, Vice President of the SAS Data Ethics Practice. “We’ve not reached a moment in time where AI, or any form of technology, is automatically and persistently aligned with our values. Humans must be present to ensure that. Doing so allows an organization to mitigate risk to its reputation, brand, and bottom line.”

When implementing AI in marketing, there should be no processes where GenAI works alone. Whether it is determining audience segments, editing content, generating ideas, or creating artwork, a human should be involved in monitoring the output and adjusting as necessary. For our marketers, consistent and verifiable human oversight is a requirement for any level of generative AI use. Our approach is to think of AI as a teammate, and each person knows they are ultimately responsible for any outputs generated with AI. For this reason, we knew we wanted to involve…