5 Things CEOs Need to Know About ChatGPT and Generative AI
FINTECH SNARK TANK OBSERVATIONS
If you’ve attended any industry conferences this year, you know that ChatGPT and generative AI (and artificial intelligence in general) are dominating the agendas.
However, much of the content is preachy and meaningless – for example, “AI is going to be disruptive” or “AI is going to be a game changer.”
CEOs (and other senior executives for that matter) need – and want – clearer perspectives on the impact of these new technologies and how to implement them.
So here are five things CEOs need to know about ChatGPT and generative AI:
1) Cost reduction is not the goal of generative AI
The initial goal of deploying generative AI tools and technologies should be productivity improvement, particularly process acceleration.
Estimates of potential staff reductions vary by role and position type, ranging from 20% to as high as 80%. While there are isolated examples of companies completely (or almost completely) replacing their employees with generative AI, they are rare, and the results have been less than spectacular.
The impact of generative AI on businesses is not the replacement of personnel, but the acceleration of human productivity and creativity. According to Charles Morris, Microsoft’s chief data officer for financial services: “Don’t think of generative AI as an automation tool, but as a co-pilot: humans do the work, and the co-pilot helps them do it faster.”
From running marketing campaigns to developing websites, writing code, and creating new data models, the benefit of these generative AI use cases is not cost reduction but reduced time to market.
2) You need to assess the risks of large language models
Although ChatGPT is currently the best-known large language model (LLM), with challengers like Gorilla and Meta’s Llama coming on strong, almost every major technology vendor has an LLM in the works or has recently launched one.
By the end of the decade, you should expect to be dealing with somewhere between 10 and 100 LLMs, depending on your industry and the size of your company. There are two things you can bet on: 1) tech vendors will claim to integrate generative AI technology into their offerings when they don’t actually do so, and 2) tech vendors won’t tell you what the weaknesses and limitations of their LLM (if they really have one) are.
As a result, companies will need to assess the strengths, weaknesses and risks of each model for themselves. According to Chris Nichols, director of capital markets at South State Bank:
“There are certain standards that companies must apply to each model. Risk groups should monitor these models and evaluate them based on their accuracy, potential for bias, security, transparency, data privacy, audit approach/frequency, and ethical considerations (e.g., intellectual property infringement, creation of deepfakes, etc.).”
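The evaluation criteria Nichols lists can be turned into a simple, repeatable rubric. The sketch below is a minimal illustration of that idea; the criterion names, the 1–5 scale, and the flagging threshold are all assumptions for the example, not an industry standard.

```python
# Illustrative LLM risk-assessment rubric based on the criteria quoted above.
# Scale and threshold are assumptions, not a standard.

CRITERIA = [
    "accuracy",
    "bias",
    "security",
    "transparency",
    "data_privacy",
    "audit_approach",
    "ethics",
]

def assess_model(scores: dict) -> dict:
    """Score an LLM on each criterion (1 = poor, 5 = strong) and flag gaps."""
    missing = [c for c in CRITERIA if c not in scores]
    if missing:
        raise ValueError(f"Unscored criteria: {missing}")
    flagged = [c for c in CRITERIA if scores[c] <= 2]  # needs remediation
    return {
        "overall": sum(scores[c] for c in CRITERIA) / len(CRITERIA),
        "flagged": flagged,
    }

result = assess_model({
    "accuracy": 4, "bias": 2, "security": 5, "transparency": 3,
    "data_privacy": 4, "audit_approach": 3, "ethics": 4,
})
print(result)  # in this example, "bias" is flagged for remediation
```

The point is less the arithmetic than the discipline: every model gets scored on the same criteria, and low scores trigger review rather than quiet acceptance.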
3) ChatGPT is to 2023 what Lotus 1-2-3 was to 1983
Remember the Lotus 1-2-3 spreadsheet? Although it was not the first PC spreadsheet on the market, when it was introduced in early 1983, it sparked a boom in personal computer adoption and was considered the “killer app” for PCs.
Lotus 1-2-3 also sparked a boom in employee productivity. It enabled users to track, calculate, and manage numerical data like never before. Few people in the workforce today remember how we (oops, I meant “they”) had to rely on HP calculators to do calculations and then write things down.
Despite the huge productivity gain, some problems arose: 1) users hardcoded errors into their calculations, which caused big problems for some companies; 2) documentation of the assumptions entered into the spreadsheets was weak (if not non-existent), creating a lack of transparency; and 3) there was a lack of consistency and standardization in the design and use of the spreadsheets.
The same problems businesses faced 40 years ago with Lotus 1-2-3 are present today with ChatGPT and other generative AI tools: users rely on ChatGPT’s often-incorrect results, there is no documentation (or “paper trail”) of how the tool was used, and there is no consistency in how the tool is used across employees of the same department, let alone across the same company.
At the time, Lotus 1-2-3 spawned a number of plugins that enhanced the spreadsheet’s functionality. Likewise, hundreds of plugins already exist for ChatGPT. In fact, much of the power needed to generate output such as audio, video, programming code, and other forms of non-textual output comes from these plugins, not ChatGPT itself.
4) Data quality makes or breaks generative AI efforts
Consultants have been urging you to get your internal data house in order for years, and when you start using generative AI tools, you’ll see how successful you’ve been. The adage “garbage in, garbage out” was tailor-made for generative AI.
For open-source LLMs that rely on public data from the internet, you need to be very careful about data quality. Although the internet is a gold mine of data, it is a gold mine located in the middle of a data dump. Dig around in it, and you won’t know whether you’ve come up with a nugget of gold or a handful of trash.
Businesses have struggled for decades to give their employees access to the data they need to make decisions and do their jobs. Part of the challenge is having tools to access data and training employees to use them.
Generative AI tools help eliminate some of the problems associated with using data access and reporting software applications. This is a huge advantage (and one of the reasons these new tools help accelerate human performance).
What remains, however, is data quality.
Paradoxically, we need to stop talking about “data,” at least in a generic way. Instead, assess the quality, availability, and accessibility of specific types of data: for example, customer data, customer interaction data, transaction data, financial performance data, and operational performance data.
Each of these types of data powers generative AI tools.
5) Generative AI requires new behaviors
You cannot prohibit the use of generative AI tools. What you can – and should – do is establish guidelines for their use. For example, require employees to: 1) document the prompts they use to generate results; 2) review the generative AI output (and prove that they did); and 3) adhere to internal document guidelines that include the use of keywords, clear titles, graphics with alt tags, short sentences, and formatting requirements.
It’s a tall order, but according to South State Bank’s Nichols, “poorly structured documents cause most of the inaccuracies in generative AI.”
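The first two guidelines above, documenting prompts and proving that output was reviewed, amount to keeping an audit trail. Here is a minimal sketch of what that could look like; `call_model` is a placeholder, not a real API, and the log format is an assumption for illustration.

```python
# Illustrative audit trail for generative AI use: every prompt, its output,
# and whether a human reviewed it gets appended to a log file.
# call_model is a stand-in for a real generative AI call.

import datetime
import json

AUDIT_LOG = "genai_audit.jsonl"

def call_model(prompt: str) -> str:
    """Placeholder for an actual generative AI call."""
    return f"[model output for: {prompt}]"

def logged_generate(user: str, prompt: str, reviewed: bool = False) -> str:
    """Run a prompt and append an audit record (guidelines #1 and #2)."""
    output = call_model(prompt)
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "prompt": prompt,
        "output": output,
        "human_reviewed": reviewed,
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")
    return output

result = logged_generate("jdoe", "Draft a product FAQ", reviewed=True)
```

Whatever the mechanics, the goal is the paper trail that Lotus 1-2-3-era spreadsheets never had.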
Management’s focus will also shift over the remainder of the decade.
Businesses have spent the last 10 years on a “digital transformation” journey, where the focus has been on digitizing high-volume transaction processes like account opening and customer support.
This focus is evolving – broadening would be a better word – towards improving the productivity of knowledge workers within the organization (IT, legal, marketing, etc.).
In the short term, you’d be crazy to trust generative AI tools to run the business without human intervention or oversight. There is too much bad data leading to too many “hallucinations”.
In the long term, generative AI will be “disruptive” and “game-changing.” CEOs need to be proactive and take big steps to ensure that these disruptions and changes are positive for their organizations.