Companies that educate, explore, experiment, and expand, perpetually, with the right pace and sequencing, are most likely to win with AI
This article was originally published on CIO.com
AI never sleeps. With every new claim that AI will be the biggest technological breakthrough since the internet, CIOs feel the pressure mount. For every new headline, they face a dozen new questions. Some are basic: What is generative AI? Others are more consequential: How do we diffuse AI through every dimension of our business?
Tactically, you can answer these questions in any number of ways. You can build an AI Center of Excellence (COE), launch a strategic task force, or designate a deputy to lead the charge. But whatever you do—if our advisory work and discussions with leading CIOs suggest anything—you’ll have to drive excellence in four related, though not necessarily sequential, streams of work: Educate, Explore, Experiment, Expand. It’s around these four work streams that leading organizations are positioning themselves to mature their data strategies and, in doing so, answer not only today’s AI questions but tomorrow’s.
Educate. You can’t wrangle AI by yourself. Your journey will be fruitful only to the extent that you can instill digital fluency, and confidence in your ecosystem, in those with whom you go to market.
Accordingly, many CIOs have fashioned themselves into the de facto AI professor within their organizations—developing 101 materials and conducting roadshows to build awareness, explain how generative AI differs from other types, and discuss its risks.
To ease collaboration on the topic wherever it’s likely to surface, Digi-Key Electronics, a leading electronic component distributor in North America, has even built networks of influencers. As the company’s CIO, Ramesh Babu, explains, “We identify ambassadors in the organization and position them in the right meetings to drive a common understanding of the many terms floating around.”
Babu also warns against discussing only the benefits of AI. He and his peers make a point of emphasizing the risks. “We’re trying to have balanced conversations,” he says, a practice that underscores the duty CIOs have to develop appropriate policies and usage guidelines in order to mitigate the downsides of AI.
To help educate your own workforce about AI, provide them materials on the topic. Include common definitions, reimagined future states, risks, and policies and guidelines for usage. Have them ready for impromptu meetings, town hall presentations, and other settings. And direct your colleagues to self-service channels so that they may access materials and learn at their own pace.
Explore. To explore is to pose the question: How can I make AI work for my organization? Since the AI landscape is both large and complex, take a two-pronged approach: analyze internally and marry that analysis to marketplace activity.
Internally, start by looking at your value chain or the capabilities that deliver your value proposition. Brainstorm how generative AI could make your processes (and the people supporting those processes) more intelligent and productive. If you’re already using AI for some of the use cases you brainstorm, no matter—record those too. And pay special attention to use cases that concern customer service: Of the executives surveyed at the latest Metis Strategy Digital Symposium, 43% said their organizations are prioritizing customer service use cases for generative AI in 2023.
From all of these sources, compile your use cases into a backlog and rank them by impact and feasibility. You’ll learn where you can create new ways to win in both the short and long term while weeding out those cases whose difficulty outweighs their value.
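The ranking step above can be sketched in a few lines of code. This is illustrative only: the use cases, the 1–5 scoring scale, and the simple impact-times-feasibility score are all hypothetical placeholders, not a prescribed method; your organization may weight the factors differently or use a richer rubric.

```python
# Minimal sketch: ranking an AI use-case backlog by impact and feasibility.
# All names and scores below are hypothetical examples.
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    impact: int       # expected business value, 1 (low) to 5 (high)
    feasibility: int  # ease of delivery, 1 (hard) to 5 (easy)

    @property
    def score(self) -> int:
        # A simple product favors cases that are both valuable and achievable.
        return self.impact * self.feasibility

backlog = [
    UseCase("Customer-service chat assistant", impact=5, feasibility=4),
    UseCase("Automated contract summarization", impact=3, feasibility=5),
    UseCase("Bespoke demand-forecasting model", impact=4, feasibility=2),
]

# Rank the backlog; low-scoring cases are candidates to weed out.
for uc in sorted(backlog, key=lambda u: u.score, reverse=True):
    print(f"{uc.score:>2}  {uc.name}")
```

A product score is one defensible choice among many; a weighted sum, or a 2x2 plot of impact against feasibility, serves the same purpose of separating quick wins from cases that are too difficult for their value.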
Next, examine the market. At first, you might struggle to wrap your head around the size of it—a $150B addressable market, as estimated by Goldman Sachs—but by doing so you set in motion what should be a continuous evaluation. Search first for vertical-specific and enterprise-wide AI solutions. Categorize them by the capabilities they support. And if your organization permits it, maybe even ask ChatGPT.
Compare and contrast what’s available in the market to your top-ranked use cases and the capabilities you already have. Where an internal capability does not already exist, and the case relies on a large language model (LLM), you will need to determine how you want to proceed: by training and fine-tuning an off-the-shelf model, like Morgan Stanley did with OpenAI; or by building your own, like Bloomberg did.
Experiment. To experiment well is to work through your backlog with urgency and agility and—especially in the case of AI—with a bias for incremental progress. As Baker Tilly CIO Allen Smith explained at a recent panel, “There’s a difference between home runs and singles.” The singles are your friends, says Smith, and a great way to show something tangible, build momentum, and create a vehicle to fuel other interesting ideas.
At the tech juggernaut Lenovo, CIO Art Hu is taking a similar approach, running dozens of proofs of concept. One consequence of being in the early innings of generative AI, according to Hu, is the rapid pace of development. “Because it’s fast, you can run proofs of concept for not massive investments.” The approach keeps his team in lockstep with the business on investment priorities at a time when economic uncertainty has narrowed the scope of technology investment. “That’s the way you want it. You want small steps for the business without spending or committing a lot of money. They can see the result and decide ‘OK, double down, or shift the investment elsewhere.’”
Much of generative AI’s promise stems from its position at the very top of the tech stack, which makes it more approachable than other disruptive technologies that, however promising, still require deep technical expertise to exploit. Acknowledging this nuance, many companies have built experimentation sandboxes in which users from across the organization can try their hand at AI in a controlled environment.
Expand. Research reports have dangled the prospect that generative AI could add trillions of dollars to the global economy. But generally, these reports assume that AI can be implemented at scale. Here, AI leaps from the Batcave to the streets of Gotham, confronting a new set of challenges.
With regard to creating that scale, Chris Davis, a Partner at the digital advisory Metis Strategy and a leader of his firm’s AI practice, worries less about scaling the technology than he does about people’s role in that scale. “Someone has to develop, train, and supervise the models,” he explains. “…the irony is that people could actually be the limiting factor.”
As a means of overcoming this limitation, he stresses how necessary it is that organizations revisit—and where appropriate, revise—their operating models. “You need to re-envision business strategies with the exponential scale of AI in mind,” he says. “And train product managers on how they might weave AI into anything—core digital products, customer experiences, employee experiences, and so on.” He goes on to explain that this also means ironing out the roles and responsibilities among the various players in your organization: “AI laboratories, data scientists, product teams—they all have to know how to work together efficiently every step of the way, from identifying use-cases to building algorithms and models, from following AI operating procedures to monitoring any models that are already in use.”
And there’s plenty of evidence to support Davis’s point. For example, after recently redefining the roles, responsibilities, and delivery methods of its IT product teams to suit its specific AI ambitions, a global financial services provider discovered many gaps in its capacity: some that it could address through upskilling, but also some that would require it to hire new people.
Looking forward. Hyperbolic headlines will continue to outpace adoption; they won’t, however, outpace the exponential rate at which the volume of data is growing, especially as technologies such as 5G and IoT hit their stride. So if you, too, want to leverage AI to its fullest extent, you must first look in the mirror: Can I manage this growing volume of data? If you can’t convert the data into something meaningful, then, as Lenovo’s tech chief, Art Hu, suggests, you might lose ground: “If you don’t figure out as a company how to (manage a growing volume of data) effectively and efficiently, the competitor that does is potentially going to have a significant advantage.”
As you mature your data strategy, remember that you have many data-driven tools at your disposal, only one of which is AI. It sits wedged between an ocean of use cases to the north and your core data foundation to the south, and progress in each of these layers is inextricably linked to the other two. There’s no use in thinking of your data strategy as something binary, as if it were a building under construction that will one day be complete. Those that educate, explore, experiment, and expand, perpetually, with the right pace and sequencing, are those most likely to win with AI.
Mike Bertha is a Partner at Metis Strategy