Generative AI: How It Works and Recent Transformative Developments

In Seconds, This Artificial Intelligence Technology Can Produce New Content in Response to Prompts

Generative AI is a form of artificial intelligence (AI) that can produce content such as audio, text, code, video, images, and other data. While earlier AI algorithms were used to identify patterns within a training data set and make predictions, generative AI uses machine learning algorithms to create new outputs based on its training data.

Generative AI can produce outputs in the same medium in which it is prompted (e.g., text-to-text) or in a different medium from the given prompt (e.g., text-to-image or image-to-video). Popular examples of generative AI include ChatGPT, Gemini, DALL-E, Midjourney, and Perplexity, among others.

The technology is arguably the most discussed technological change since the smartphone. In 2023, the worldwide AI market was worth about $196 billion, according to Grand View Research, and Bloomberg expects the generative AI market to reach $1.3 trillion by 2032.

Key Takeaways

  • Generative AI is a form of machine learning that can produce text, video, images, and other types of content.
  • ChatGPT, DALL-E, and Gemini are generative AI applications that produce text or images based on user-given prompts.
  • Generative AI is used for creative and academic writing and translation; composing, dubbing, and sound editing; and infographics, image editing, and architectural rendering. It is applied in industries from automotive to media and entertainment to healthcare and scientific research.
  • Concerns about generative AI include its potential legal, ethical, political, ecological, social, and economic effects.

How Does Generative AI Work?

Generative AI is a type of machine learning that works by training software models to make predictions based on data without the need for explicit programming.

Specifically, generative AI models are fed vast quantities of existing content to train the models to produce new content. They learn to identify underlying patterns in the data set based on a probability distribution and, when given a prompt, create similar patterns (or outputs based on these patterns).

Part of the umbrella category of machine learning called deep learning, generative AI uses neural networks that allow it to handle more complex patterns than traditional machine learning. Inspired by the human brain, neural networks can learn to distinguish differences and patterns in the training data with little or no human supervision or intervention.

Generative AI can be built on various models, which use different mechanisms to train the AI and create outputs. These include generative adversarial networks, transformers, and variational autoencoders.
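
To make the idea of learning patterns from a probability distribution and then sampling from them more concrete, here is a deliberately tiny sketch in Python. It is not how production systems work (real generative models use deep neural networks trained on enormous data sets), but it shows the same two-step pattern: count what tends to follow what in the training text, then generate new text by sampling from those counts. The corpus and function names here are invented purely for illustration.

    import random
    from collections import defaultdict

    # Toy training data standing in for the vast corpora real models learn from.
    corpus = "the cat sat on the mat the dog sat on the rug".split()

    # "Training": record which words tend to follow which (a crude probability distribution).
    transitions = defaultdict(list)
    for current_word, next_word in zip(corpus, corpus[1:]):
        transitions[current_word].append(next_word)

    # "Generation": start from a prompt word and repeatedly sample a plausible next word.
    def generate(prompt_word, length=8):
        output = [prompt_word]
        for _ in range(length):
            candidates = transitions.get(output[-1])
            if not candidates:
                break
            output.append(random.choice(candidates))
        return " ".join(output)

    print(generate("the"))  # e.g., "the cat sat on the rug the dog sat"

Modern systems replace the word counts with neural networks that estimate far richer probability distributions over the next word, pixel, or audio sample, but the train-then-sample loop is the same.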

Generative AI Interfaces

Integrating AI into everyday technology has altered many people's interactions with digital devices. Voice-activated AI assistants, now ubiquitous in smartphones, smart speakers, and other everyday devices, illustrate this shift. Similarly, generative AI is becoming increasingly accessible through various user-friendly software interfaces.

A fundamental change driving the widespread adoption of generative AI has been the development of intuitive user gateways. Unlike earlier iterations that required technical expertise or data science knowledge, modern generative AI interfaces allow users to interact using natural language. This access has significantly expanded the user base and potential applications of generative AI.

Here are some of the most popular recent examples of generative AI interfaces.

ChatGPT

Created by OpenAI, ChatGPT is an example of text-to-text generative AI—essentially, an AI-powered chatbot trained to interact with users via natural language dialogue. Users can ask ChatGPT questions, engage in back-and-forth conversation, and prompt it to compose text in different styles or genres, such as poems, essays, stories, or recipes, among others.

When it was first released in November 2022, ChatGPT quickly brought wide attention to generative AI's uses; within months, ChatGPT had become for AI what Google is for search or Kleenex is for tissues: a virtual synonym for its product category.

Many people use the free version of ChatGPT online. OpenAI also sells access to ChatGPT's application programming interface (API), along with enterprise subscription and embedding options.
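
For illustration, a minimal call to that API might look like the sketch below. It assumes the openai Python package (version 1.x), an API key stored in the OPENAI_API_KEY environment variable, and a currently available model; exact model names and parameters change over time.

    # Minimal sketch of a ChatGPT API call; assumes the `openai` package (1.x)
    # is installed and OPENAI_API_KEY is set in the environment.
    from openai import OpenAI

    client = OpenAI()  # reads the API key from the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name; substitute a current one
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Write a two-line poem about compound interest."},
        ],
    )

    print(response.choices[0].message.content)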

DALL-E

DALL-E is an example of text-to-image generative AI released in January 2021 by OpenAI. It uses a neural network trained on images with accompanying text descriptions. Users can input descriptive text, and DALL-E will generate photo-realistic imagery based on the prompt. It can also create variations on the generated image in different styles and perspectives.

DALL-E can also edit images, whether by making changes within an image (known in the software as "inpainting") or by extending an image beyond its original proportions or boundaries (called "outpainting").
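
A text-to-image request follows a similar pattern to the ChatGPT example above. The sketch below is illustrative only, again assuming the openai Python package and an API key; the model name, prompt, and size parameter are examples, and the API returns a web address pointing to the generated image.

    # Minimal sketch of a text-to-image request to the OpenAI images API;
    # assumes the `openai` package (1.x) and an API key in the environment.
    from openai import OpenAI

    client = OpenAI()

    result = client.images.generate(
        model="dall-e-3",  # example model name
        prompt="A watercolor painting of a lighthouse at dawn",
        size="1024x1024",
        n=1,
    )

    print(result.data[0].url)  # web address of the generated image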

Gemini

Formerly known as Bard, Gemini is a text-to-text generative AI interface based on Google's large language models. Like ChatGPT, Gemini is an AI-powered chatbot that can answer questions or generate text based on user-given prompts.

Google first billed it as a “complementary experience to Google Search.” By the spring of 2024, Google was using Gemini to present AI-generated answers to search queries atop its traditional lineup of search results.

50%

The growth in Google's emissions since 2019, primarily because of data center energy consumption and supply chain emissions.

AI's Effect on the Environment

In 2019, Alphabet Inc. (GOOGL) announced aggressive plans to cut its total greenhouse gas emissions in half by 2030 from its 2019 baseline. Even as its annual reports devote dozens of pages to AI's "potential" to lower such emissions through vague promises of model efficiency and "resiliency," the takeaway is in the company's own data.

By 2024, its emissions had grown almost 50% since 2019, increasing 13% in 2023 alone. Even as Google still promises to meet its 2030 timeline, its 2024 environmental report noted that the increase in emissions "was primarily because of increases in data center energy consumption and supply chain emissions" involved in generative AI. The report said, in a notable understatement given the planetary stakes, that any mitigation or decrease in AI's climate effects "may be challenging."

The History of Generative AI

Modern AI really kicked off in the 1950s with Alan Turing’s research on machine thinking and his creation of the eponymous Turing test.

The first trainable neural networks, a key technology underlying generative AI, were invented in 1957 by Frank Rosenblatt, a psychologist at Cornell University.

The journey from these early concepts to the AI powerhouses we see today has been marked by waves of innovation and periods of stagnation. Neural networks gained traction in the 1980s, but it was the introduction of generative adversarial networks (GANs) in 2014 by Ian Goodfellow and his colleagues that truly revolutionized the field. GANs, which pit two neural networks against each other to produce increasingly realistic data, opened new frontiers in generating images, music, and text.
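
To show what "pitting two neural networks against each other" means in practice, here is a heavily simplified GAN sketch in Python using PyTorch. A generator learns to produce numbers that resemble samples from a target distribution, while a discriminator learns to tell real samples from generated ones; the tiny network sizes and one-dimensional data are illustrative choices, not anything drawn from the original research.

    # Toy GAN: a generator and a discriminator trained against each other on
    # 1-D data. Assumes PyTorch is installed; real GANs are vastly larger.
    import torch
    import torch.nn as nn

    def real_samples(n):
        return torch.randn(n, 1) * 0.5 + 2.0  # "real" data: a Gaussian centered at 2

    generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
    discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

    g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
    loss_fn = nn.BCELoss()

    for step in range(2000):
        # Discriminator: learn to label real samples 1 and generated samples 0.
        real = real_samples(64)
        fake = generator(torch.randn(64, 8)).detach()
        d_loss = loss_fn(discriminator(real), torch.ones(64, 1))
        d_loss = d_loss + loss_fn(discriminator(fake), torch.zeros(64, 1))
        d_opt.zero_grad()
        d_loss.backward()
        d_opt.step()

        # Generator: learn to make the discriminator label its output as real (1).
        fake = generator(torch.randn(64, 8))
        g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
        g_opt.zero_grad()
        g_loss.backward()
        g_opt.step()

    print(generator(torch.randn(5, 8)).detach().squeeze())  # outputs should drift toward 2.0

After enough rounds, the generator's outputs become hard for the discriminator to distinguish from the real data, which is the same adversarial dynamic that lets full-scale GANs produce realistic images, music, and text.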

The 2010s saw an explosion in deep learning capabilities, fueled by advances in computing power and the availability of massive data sets. The release of GPT-3 in the 2020s was a watershed moment, showcasing AI's potential to produce coherent, contextually relevant content across various domains.

The 2020s Breakthrough

The true economic impact of generative AI began to crystallize in 2022 with the public release of ChatGPT. Its user-friendly interface gave the public access to powerful AI capabilities, reaching an estimated 100 million users within just two months of launch. Such rapid adoption underscored the technology's potential to reshape industries and economies.

In the mid-2020s, companies across a range of sectors, including finance, healthcare, education, and the creative industries, have been integrating generative AI into their operations. Giants like Google (with its Gemini model) and Anthropic (with Claude) are pushing forward generative AI's capabilities, developing multimodal AI systems that can process and generate text, images, and code.

The economic implications are often predicted to be staggering. A 2024 McKinsey report estimated that generative AI could add between $2.6 trillion and $4.4 trillion annually to the global economy. This potential is driving unprecedented investment in the technology, with companies like JPMorgan Chase committing over $1 billion annually to AI capabilities.

Generative AI's Second Wave

A June 2024 report from Deloitte on generative AI's "second wave" was far more cautious in tone than the consulting industry's largely celebratory publications on AI in 2022 and 2023. Deloitte said two key areas of "trust" remain a "major barrier to large-scale generative AI adoption": trust in generative AI's output and trust from workers that it won't replace them.

The report notes that these concerns have "not prevented organizations from rapidly adopting" the technology, "with 60% reporting they are effectively balancing rapid implementation with risk management." Yet when Deloitte's surveys looked for companies actually turning a profit on the technology, the firm found that has largely not been the case.

A 2024 Deloitte report offered the sobering conclusion that, thus far, most companies' spending on highly skilled labor and sophisticated computing technology for generative AI has run well ahead of any earnings. The consulting firm also said that the most reliable earnings from generative AI have come from cybersecurity applications that combat AI-powered cyberattacks.

How Is Generative AI Used?

Most generative AI systems are based on foundation models, which can perform multiple open-ended tasks. The potential applications of generative AI are wide-ranging, and arguably, many have yet to be discovered, let alone implemented.

The ability of generative AI to work across types of media (text-to-image or audio-to-text, for example) has opened up creative and lucrative possibilities. No doubt, as businesses and industries continue to integrate this technology into their research and workflows, many more uses will emerge.

Important

A major concern with generative AI is that its algorithms can amplify or replicate biases present in their training data. Amazon, for example, created (and then abandoned) an AI-powered recruiting tool that was biased against women.

Recent Transformative Developments in Generative AI

Despite the relative novelty of generative AI, the technology has already seen rapid advances.

Today's generative AI models can be used for the following tasks:

  • Translation
  • Creative, academic, and business writing
  • Code writing
  • Composing and songwriting
  • Dubbing
  • Dictation and transcription 
  • Speech and voice recognition
  • Illustration
  • Infographics
  • 3D modeling
  • Image editing
  • Architectural rendering  

As time passes and the technology grows more sophisticated, it's expected to become more effective at these tasks and be able to take on new and more complex work.

Software development is one field that is expected to see massive shifts thanks to the power of generative AI, with as much as 9.3% of industry revenue attributed to AI's ability to quickly write code and generate user interfaces.

Generative AI is also expected to affect the banking industry, helping overhaul legacy code systems, personalize retail banking services, and create more accurate risk models for lending and investing.

Applications by Industry

Examples of applications across different fields include the following:

  • Automotive industry: Synthetic data produced by AI can run simulations and train autonomous vehicles. 
  • Healthcare and scientific research: Scientists can use AI to model protein sequences, discover new molecules, or suggest new drug compounds to test, while doctors and practitioners can use AI to analyze images to aid in diagnoses. 
  • Media and entertainment: AI can be used to quickly, easily, and more cheaply generate content, or (as a tool) to improve the work of creatives like designers.
  • Climate science and meteorology: AI can simulate natural disasters, forecast the weather, and model different climate scenarios.
  • Education: AI can be used to supplement classroom learning with one-to-one tutoring via a chatbot or to create course materials, lesson plans, or online learning platforms. 
  • Government: The U.S. government has publicly released information about its use of generative AI since 2022. Its list has included employing AI to analyze weather hazards, process veterans' feedback on their experiences with the U.S. Department of Veterans Affairs, and conduct patent searches.

Of course, AI can be used in any industry to automate routine tasks such as minute-taking, documentation, and coding, or to improve existing workflows alongside or within preexisting software.

The Pros and Cons of Generative AI

Pros
  • Automation can increase productivity.

  • Reduces the skill and time barriers for creative roles and content creation.

  • Could allow for faster and more accurate analysis of complex data.

  • AI can be used to create synthetic data sets for training other AI systems, speeding AI development.

Cons
  • AI hallucinations can result in users being provided with inaccurate or completely fictional information.

  • AI training relies on accurately labeled data. Inaccurately labeled data can cause issues with training AI models.

  • There is almost no regulatory framework for AI, raising issues such as privacy and ownership of AI output.

  • Most generative AI systems thus far have been trained on copyrighted content found online.

Like any major technological development, generative AI opens up a world of potential, as discussed above, but there are also drawbacks to consider.

Overall advantages of generative AI include the following:

  • Increasing productivity by automating or speeding up tasks
  • Removing or lowering skills or time barriers for content generation and creative applications
  • Enabling analysis or exploration of complex data 
  • Using it to create synthetic data on which to train and improve other AI systems

Disadvantages of generative AI include the following:

  • Hallucination: This refers to the tendency of certain AI models to generate nonsensical or erroneous output that does not correspond to facts, real-world knowledge, or common-sense logic.
  • Reliance on data labeling: Although many generative AI models can be trained in an unsupervised manner using unlabeled data, data quality and veracity remain an issue. Many tech companies, including OpenAI, Facebook, and TikTok, rely on low-paid contract workers to perform data enrichment work such as labeling or generating training data.
  • Difficulty with content moderation: Another concern is the ability for AI models to recognize and filter out inappropriate content. As is the case with data labeling, much of this work still relies on human contractors to tag and filter through large amounts of offensive and potentially traumatizing content.
  • Ethical issues: In addition to labor concerns like the examples above, algorithms have been demonstrated to amplify or replicate existing discrimination and biases inherent in the training data.
  • Legal and regulatory issues: The U.S. has been slow to offer any legal or regulatory framework for the use of AI. Yet, its development could pose the following problems:
  1. Copyright issues: Since generative AI models are trained on a vast quantity of data, it can be difficult to verify whether the materials included in the data or the resultant works generated are in violation of copyright laws.
  2. Privacy issues: Generative AI raises concerns around the collection, storage, use, and security of data, both personal and business-related.
  3. Autonomy and responsibility: AI technology raises concerns around liability. For example, for autonomous systems like self-driving cars, it is unclear how to determine liability for accidents.
  • Political implications: Generative AI raises issues around false or misleading information and the veracity of media such as photo-realistic imagery or voice recordings. It can also interfere with processes that invite democratic engagement by falsifying a high volume of comments, submissions, or messages.
  • Energy consumption: AI models have a large ecological impact, as they require vast quantities of electricity to run. As usage grows, so will the demands on the environment.

Which Industries Can Benefit from Generative AI?

Generative AI can benefit just about any type of field or business by increasing productivity, automating tasks, enabling new forms of creation, facilitating deep analysis of complex data sets, or even creating synthetic data on which future AI models can train.

Generative AI is also widely used in many different government applications.

What Are Some Popular Examples of Generative AI?

Popular generative AI interfaces include ChatGPT, Gemini (formerly Bard), DALL-E, Midjourney, and Perplexity.

What Is Machine Learning?

Machine learning is the practice of training computer software to make predictions based on data without being explicitly programmed. Generative AI uses machine learning algorithms.

What Is a Neural Network?

A neural network is a type of model, based on the human brain, that processes complex information and makes predictions. This technology allows generative AI to identify patterns in the training data and create new content.
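
As a rough illustration, the sketch below implements a single artificial neuron in plain Python: it multiplies each input by a learned weight, sums the results, and passes the total through an activation function. The numbers are made up for this example; real networks stack millions or billions of such units into layers.

    import math

    def neuron(inputs, weights, bias):
        # Weighted sum of the inputs, plus a bias term.
        total = sum(x * w for x, w in zip(inputs, weights)) + bias
        # Sigmoid activation squashes the result into the range (0, 1).
        return 1 / (1 + math.exp(-total))

    # Three input features with hypothetical learned weights.
    print(neuron([0.5, -1.2, 3.0], weights=[0.8, 0.1, -0.4], bias=0.2))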

The Bottom Line

Generative AI is still a relatively new technology with the potential to transform many of the ways we work and live. Traditionally, AI has been the realm of data scientists, engineers, and experts, but the ability to prompt software in plain language and generate new content in seconds has opened up AI to a much broader user base. 

However, there are wide-ranging concerns and issues to be cautious of in its applications. Many implications, ranging from legal, ethical, and political to ecological, social, and economic, have been raised and will continue to be raised as generative AI is adopted and developed.

Article Sources
Investopedia requires writers to use primary sources to support their work. These include white papers, government data, original reporting, and interviews with industry experts. We also reference original research from other reputable publishers where appropriate. You can learn more about the standards we follow in producing accurate, unbiased content in our editorial policy.
  1. Grand View Research. "Artificial Intelligence (AI) Market."

  2. Bloomberg. "Generative AI To Become $1.3 Trillion Market by 2032."

  3. Google for Developers. “What Is Machine Learning?”

  4. Google Cloud Tech, via YouTube. “Introduction to Generative AI.”

  5. IBM. “What Is Artificial Intelligence (AI)?”

  6. Murf Resources. “Generative AI: All You Need to Know.”

  7. OpenAI. “Introducing ChatGPT.”

  8. OpenAI. “Pricing.”

  9. OpenAI. “DALL-E 2.”

  10. OpenAI. “DALL-E: Introducing Outpainting.”

  11. Google. "Environmental Report: 2024," Page 31.

  12. Stanford News. “Stanford Researcher Examines Earliest Concepts of Artificial Intelligence, Robots in Ancient Myths.”

  13. MIT News. “Explained: Neural Networks.”

  14. Deloitte. "Now Decides Next: Getting Real About Generative AI."

  15. Deloitte. "Changing the Game: The Impact of Artificial Intelligence on the Banking and Capital Markets Sector."

  16. Deloitte. "US State of Generative AI Quarterly Report."

  17. NVIDIA. “What Is Generative AI?: How Does Generative AI Work?”

  18. Reuters. “Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women.”

  19. McKinsey & Company. "What's the Future of Generative AI? An Early View in 15 Charts."

  20. Automotive Testing Technology International. "The Rising Role of Synthetic Data in the Automotive Industry."

  21. Harvard Medical School. "Artificial Intelligence Beyond the Clinic."

  22. AIMultiple. “Top 6 Use Cases of Generative AI in Education.”

  23. National Artificial Intelligence Initiative Office. “Agency Inventories of AI Use Cases.”

  24. Ziwei Ji, Nayeon Lee, et al., via ACM Digital Library. “Survey of Hallucination in Natural Language Generation.” ACM Computing Surveys, Vol. 55, No. 12, Pages 1–38.

  25. IBM. “What Is Unsupervised Learning?”

  26. NBC News. “ChatGPT Is Powered by These Contractors Making $15 an Hour.”

  27. IBM. “What Are AI Ethics?”

  28. U.S. Copyright Office. “Copyright and Artificial Intelligence.”

  29. The Brookings Institution. “How Generative AI Impacts Democratic Engagement.”

  30. Data Center Dynamics. “Researchers Claim They Can Cut AI Training Energy Demands by 75%.”

  31. Nature. “ChatGPT: Tackle the Growing Carbon Footprint of Generative AI.”

  32. IBM. “The Neural Networks Model.”
