The hottest topic of 2023 was AI, as billions of dollars in venture capital flowed into the sector and the industry grappled with crucial questions about the technology’s place in society.
2023 has been the year of AI. After launching in November 2022, ChatGPT became one of the fastest-growing apps on record, gaining 100 million monthly users in two months. With AI the hottest topic of the year (as Bill Gates predicted in January), a wave of startups has exploded into the market with AI tools capable of generating everything from synthetic speech to videos. Clearly, AI has come a long way since the beginning of the year, when people wondered whether ChatGPT would replace Google Search.
“I’m much more interested in thinking about what goes way beyond research… What are we doing that’s totally different and way cooler?” Sam Altman, CEO of OpenAI, told Forbes in January.
Rapid advances in the technology caught the attention of venture capitalists, and billions of dollars flowed into the sector. Microsoft made a $10 billion investment in AI heavyweight OpenAI, which is now reportedly raising funding at an $80 billion valuation. In June, Inflection, a leading AI startup, launched its AI chatbot Pi and raised $1.3 billion at a valuation of $4 billion. A month later, Hugging Face, which hosts thousands of open source AI models, reached a valuation of $4 billion. In September, Amazon announced plans to invest $4 billion in OpenAI challenger Anthropic, which rolled out its own Claude 2.0 conversational chatbot in July and is now valued at $25 billion.
But not every AI founder has had a straightforward path to fundraising. Stability AI raised funding at a $1 billion valuation in September 2022 on the strength of its popular text-to-image AI model, Stable Diffusion, but has struggled to raise since. Its CEO, Emad Mostaque, made misleading claims to investors about his own credentials and the company’s strategic partnerships, a Forbes investigation found in June. In December, a Stanford study found that the dataset used to train Stable Diffusion contained illegal child sexual abuse material.
The AI gold rush has spawned several other unicorns, like Adept, which builds AI assistants that can browse the internet and operate software for you, and Character AI, which is used by 20 million people to create and chat with AI chatbot characters like Taylor Swift and Elon Musk. Business-focused generative AI startups like Typeface, Writer, and Jasper, which help businesses automate tasks like writing emails and summarizing long documents, have also seen an influx of financing. But in the midst of the race to create and launch AI tools, Google found itself caught off guard and playing catch-up. The tech giant launched its conversational AI chatbot Bard earlier in the year and its own AI model, Gemini, in December.
Over the past year, AI has penetrated virtually every facet of life. Teachers worried that students were using ChatGPT to cheat on their homework, and the tool was banned in some of the largest school districts in the United States. Doctors and hospitals began using generative AI tools not only for note-taking and tedious paperwork but also to diagnose patients. While some political candidates began deploying AI in their campaigns to interact with potential voters, others used generative AI tools to create deepfakes of their opponents.
AI-generated content has flooded the internet, sparking concerns that widely available AI tools are being exploited to create toxic content. Fake news produced with generative AI software went viral on TikTok and YouTube, and nonconsensual AI-generated pornography proliferated on Reddit and Etsy. As low-quality AI-generated content took over the web, ChatGPT wreaked havoc on the freelance world, where many feared losing their jobs to a hot new AI tool capable of producing content faster and cheaper than humans could.
Companies have also used AI chatbots to screen, interview and recruit employees, raising concerns about the biases and risks inherent in the technology. Cybercriminals have found ChatGPT useful for writing malware code, and others have used it as a social media monitoring tool. To combat some of these problems, tech giants like Microsoft and Google have hired red teams to jailbreak their own AI models and make them safer.
“There are still a lot of unresolved questions,” said Regina Barzilay, professor of electrical engineering and computer science at MIT CSAIL. “We need tools that can uncover what kinds of issues and biases are in these datasets, and meta-AI technologies that can regulate AI and help us be in a much safer position than where we find ourselves today with AI.”
In 2023, prominent AI startups like OpenAI, Stability AI, and Anthropic were hit by a wave of copyright infringement lawsuits from artists, writers, and coders, who claimed that these tools were built on large datasets that used their copyrighted content without consent or payment. Legal expert Edward Klaris predicts that these class actions will pave the way for nuanced new rules on fair use from the US Copyright Office in 2024.
“In the legal world, there are a huge number of AI-related transactions happening. Some people are upset that their work was taken to create training data, and they want to be able to license their content to AI companies and get paid for its use,” said Klaris, CEO and managing partner of the intellectual property law firm KlarisIP.
After the European Union moved to regulate the technology through its AI Act, the Biden administration issued its own executive order requiring startups developing large AI models that could pose national security risks to disclose them to the government. While tech companies largely supported the executive order, startups feared it could stifle the pace of innovation.
“If you look at the executive order, it formulates principles, which are good to articulate, but it doesn’t really translate into how we take those principles and turn them into a technology or a guardrail that helps us ensure the tools we use are truly safe,” Barzilay said.
2023 also saw AI leaders divided over whether the technology should be developed in the open or behind the closed doors of powerful companies like Google, OpenAI and Anthropic. Some raised safety concerns about open source AI models, since anyone could ostensibly abuse them. Others, like Yann LeCun, Meta’s chief AI scientist, who oversaw the development of the company’s open source model Llama 2, advocate stress-testing AI models in an open and transparent way.
“Open source large language models will reach the level of closed large language models in 2024,” Hugging Face CEO Clément Delangue said during a press briefing.
An internal division became public in late November when OpenAI CEO Sam Altman was ousted by the company’s board, which said he had not been “candid” in his communications with it. Days later, he was reinstated as CEO after employees threatened to leave the company if he did not return. The company also added new directors to its board, including Bret Taylor and Larry Summers.
The key unanswered questions for 2024 concern the economics of AI, Delangue said, particularly how AI startups will achieve healthy margins and make money for their investors. Most AI models are expensive to build and carry a high carbon footprint because they must be trained on vast amounts of data using GPUs from semiconductor giants like Nvidia and AMD. “By 2024, most companies will realize that smaller, cheaper, more specialized models make more sense for 99% of AI use cases,” Delangue said.