Milestones in Generative AI Technology
AlexNet – 2012
AlexNet, a deep convolutional neural network, made headlines by winning the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) with a significant margin over traditional computer vision methods, showcasing the power of deep learning in image classification tasks. This event is often referred to as the beginning of today’s deep learning revolution.
Variational Autoencoders – 2013
AlexNet’s success marked a turning point in computer vision, with neural networks becoming the dominant approach. In 2013, Variational Autoencoders (VAEs) were introduced, enabling unsupervised learning and generation of complex data distributions.
Generative Adversarial Networks – 2014
Generative Adversarial Networks (GANs) were introduced, revolutionizing the field of generative modeling. GANs consist of two neural networks, a generator and a discriminator, competing against each other to produce realistic data samples.
AlphaGo – 2016
DeepMind’s AlphaGo defeated world champion Go player Lee Sedol, demonstrating the capabilities of reinforcement learning and deep neural networks in mastering complex games.
Transformer Architecture – 2017
The transformer architecture was introduced by Google researchers in their paper “Attention Is All You Need”. Its self-attention mechanism revolutionized natural language processing, and transformers went on to achieve state-of-the-art results in machine translation, text generation, and other natural language processing (NLP) tasks.
GPT-1 and BERT – 2018
OpenAI introduced the Generative Pre-trained Transformer (GPT) model, kickstarting the era of large-scale pre-trained language models. Google’s Bidirectional Encoder Representations from Transformers (BERT) set new benchmarks in natural language understanding (Quantpedia, 2023).
GPT-2 and Improved Generative Models – 2019
OpenAI released GPT-2, a larger and more powerful version of their pre-trained language model (OpenAI, 2019). This year also saw advancements in generative models, including improved architecture and training techniques for generating realistic images, text, and other data types.
GPT-3 and Self-Supervised Learning – 2020
OpenAI unveiled GPT-3, then the largest and most powerful language model to date, demonstrating remarkable capabilities in natural language understanding and generation. Self-supervised learning gained traction as a powerful paradigm for training deep learning models without requiring labeled data.
AlphaFold 2, DALL-E, and GitHub Copilot – 2021
DeepMind’s AlphaFold 2 made significant advancements in protein folding prediction, with implications for drug discovery and bioinformatics. OpenAI introduced DALL-E, a model capable of generating diverse and creative images from textual descriptions (OpenAI, 2021). GitHub Copilot, powered by OpenAI’s Codex, brought AI-assisted coding to developers, showcasing the potential of large language models in software development.
How Generative AI works
For the most part, generative AI operates in three phases:
- Training, to create a foundation model that can serve as the basis of multiple gen AI applications
- Tuning, to tailor the foundation model to a specific gen AI application
- Generation, evaluation and retuning, to assess the gen AI application’s output and continually improve its quality and accuracy
Training
Generative AI begins with a foundation model, a deep learning model that serves as the basis for multiple different types of generative AI applications. The most common foundation models today are large language models (LLMs), created for text generation applications, but there are also foundation models for image generation, video generation, and sound and music generation as well as multimodal foundation models that can support several kinds of content generation.
To create a foundation model, practitioners train a deep learning algorithm on huge volumes of raw, unstructured, unlabeled data (e.g., terabytes of data culled from the internet or some other huge data source). During training, the algorithm performs and evaluates millions of ‘fill in the blank’ exercises, trying to predict the next element in a sequence (e.g., the next word in a sentence, the next element in an image, the next command in a line of code) and continually adjusting itself to minimize the difference between its predictions and the actual data (or ‘correct’ result).
The result of this training is a neural network of parameters: encoded representations of the entities, patterns and relationships in the data that can generate content autonomously in response to inputs, or prompts.
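As a rough illustration of this next-element prediction objective, the toy sketch below (PyTorch, with illustrative data and sizes, not a real foundation model) trains a tiny network to predict the next character in a string; actual LLM training applies the same idea at vastly larger scale.

```python
# Toy sketch of the 'fill in the blank' training objective: predict the next token.
import torch
import torch.nn as nn

text = "generative ai learns by predicting the next token "
vocab = sorted(set(text))
stoi = {ch: i for i, ch in enumerate(vocab)}
ids = torch.tensor([stoi[ch] for ch in text])

class TinyNextTokenModel(nn.Module):
    def __init__(self, vocab_size, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, x):
        return self.head(self.embed(x))      # logits over the vocabulary

model = TinyNextTokenModel(len(vocab))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

inputs, targets = ids[:-1], ids[1:]          # each token's target is the next token
for step in range(200):
    logits = model(inputs)
    loss = loss_fn(logits, targets)          # gap between prediction and the 'correct' result
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```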
This training process is compute-intensive, time-consuming and expensive: it requires thousands of clustered graphics processing units (GPUs) and weeks of processing, all of which costs millions of dollars. Open-source foundation model projects, such as Meta’s Llama-2, enable gen AI developers to avoid this step and its costs.
Tuning
A foundation model is a generalist: It knows a lot about a lot of types of content but often can’t generate specific types of output with desired accuracy or fidelity. For that, the model must be tuned to a specific content generation task. This can be done in a variety of ways.
Fine tuning
Fine tuning involves feeding the model labeled data specific to the content generation application: questions or prompts the application is likely to receive, and corresponding correct answers in the desired format. For example, if a development team is trying to create a customer service chatbot, it would create hundreds or thousands of documents containing labeled customer service questions and correct answers and then feed those documents to the model.
Fine-tuning is labor-intensive. Developers often outsource the task to companies with large data-labeling workforces.
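To make the labeled data concrete, the sketch below writes a few hypothetical customer service question/answer pairs to a JSONL file. The field names and file format are assumptions for illustration; actual formats vary by model provider and fine-tuning framework.

```python
# Minimal sketch of preparing labeled fine-tuning data for a customer service chatbot.
import json

labeled_examples = [
    {"prompt": "How do I reset my password?",
     "completion": "Go to Settings > Security, choose 'Reset password', and follow the emailed link."},
    {"prompt": "What is your refund policy?",
     "completion": "Purchases can be refunded within 30 days with proof of purchase."},
    # ...in practice, hundreds or thousands more question/answer pairs
]

with open("customer_service_finetune.jsonl", "w") as f:
    for example in labeled_examples:
        f.write(json.dumps(example) + "\n")

# This file is then fed to the model's fine-tuning process, which continues
# training on the labeled pairs so outputs match the desired format.
```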
Reinforcement learning with human feedback (RLHF)
In RLHF, human users respond to generated content with evaluations the model can use to improve its accuracy or relevance. Often, RLHF involves people ‘scoring’ different outputs in response to the same prompt. But it can be as simple as having people type or talk back to a chatbot or virtual assistant, correcting its output.
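The sketch below illustrates the ‘scoring’ idea with a simplified pairwise preference loss of the kind commonly used to train a reward model in RLHF pipelines. The feature tensors are placeholders standing in for encoded (prompt, response) pairs, and a full RLHF pipeline would add a reinforcement learning step that updates the generator against this reward model.

```python
# Sketch of training a reward model from human preference comparisons.
import torch
import torch.nn as nn

reward_model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

# Placeholder features: each row pairs a human-preferred response with a rejected one.
preferred = torch.randn(8, 16)
rejected = torch.randn(8, 16)

for step in range(100):
    score_preferred = reward_model(preferred)
    score_rejected = reward_model(rejected)
    # Pairwise preference loss: push preferred scores above rejected scores.
    loss = -torch.nn.functional.logsigmoid(score_preferred - score_rejected).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```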
Generation, evaluation, more tuning
Developers and users continually assess the outputs of their generative AI apps and further tune the model, even as often as once a week, for greater accuracy or relevance. (In contrast, the foundation model itself is updated much less frequently, perhaps every year or 18 months.)
Another option for improving a gen AI app’s performance is retrieval augmented generation (RAG). RAG is a framework for extending the foundation model to use relevant sources outside of the training data, to supplement and refine the parameters or representations in the original model. RAG can ensure that a generative AI app always has access to the most current information. As a bonus, the additional sources accessed via RAG are transparent to users in a way that the knowledge in the original foundation model is not.
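A minimal sketch of the RAG pattern appears below: retrieve relevant documents from outside the training data, then fold them into the prompt. The keyword-overlap retriever, document set and helper names are toy assumptions; production systems typically use embedding-based similarity search over a vector database.

```python
# Toy sketch of retrieval augmented generation (RAG): retrieve, then augment the prompt.
documents = [
    "The 2024 support policy covers hardware repairs for 24 months.",
    "Office hours are Monday to Friday, 9am to 5pm local time.",
    "The latest firmware release is version 3.2, published in March.",
]

def retrieve(query, docs, k=2):
    """Rank documents by how many query words they share (naive keyword retriever)."""
    query_words = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(query_words & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(query):
    context = "\n".join(retrieve(query, documents))
    # In a real app, this augmented prompt is sent to the foundation model.
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How long does the support policy cover repairs?"))
```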
Generative AI model architectures and how they have evolved
Truly generative AI models (deep learning models that can autonomously create content on demand) have evolved over the last dozen years or so. The milestone model architectures during that period include:
- Variational autoencoders (VAEs), which drove breakthroughs in image recognition, natural language processing and anomaly detection
- Generative adversarial networks (GANs) and diffusion models, which improved the accuracy of previous applications and enabled some of the first AI solutions for photo-realistic image generation
- Transformers, the deep learning model architecture behind the foremost foundation models and generative AI solutions today
Variational autoencoders (VAEs)
An autoencoder is a deep learning model comprising two connected neural networks: One that encodes (or compresses) a huge amount of unstructured, unlabeled training data into parameters, and another that decodes those parameters to reconstruct the content. Technically, autoencoders can generate new content, but they’re more useful for compressing data for storage or transfer, and decompressing it for use, than they are for high-quality content generation.
Introduced in 2013, variational autoencoders (VAEs) can encode data like an autoencoder, but decode multiple new variations of the content. By training a VAE to generate variations toward a particular goal, it can ‘zero in’ on more accurate, higher-fidelity content over time. Early VAE applications included anomaly detection (e.g., medical image analysis) and natural language generation.
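The sketch below shows the basic shape of a VAE in PyTorch, with illustrative dimensions: the encoder maps an input to a distribution over a latent code, a sample is drawn with the reparameterization trick, and the decoder reconstructs the input. Sampling different latent codes is what lets a trained VAE decode new variations of the content.

```python
# Minimal VAE sketch: encode to a latent distribution, sample, decode.
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    def __init__(self, input_dim=784, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, 128), nn.ReLU())
        self.to_mean = nn.Linear(128, latent_dim)
        self.to_logvar = nn.Linear(128, latent_dim)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                     nn.Linear(128, input_dim), nn.Sigmoid())

    def forward(self, x):
        h = self.encoder(x)
        mean, logvar = self.to_mean(h), self.to_logvar(h)
        z = mean + torch.exp(0.5 * logvar) * torch.randn_like(mean)  # reparameterization trick
        return self.decoder(z), mean, logvar

vae = TinyVAE()
x = torch.rand(4, 784)                      # e.g., four flattened 28x28 images
reconstruction, mean, logvar = vae(x)

# Training minimizes reconstruction error plus a KL term that keeps the latent
# distribution close to a standard normal.
kl = -0.5 * torch.sum(1 + logvar - mean.pow(2) - logvar.exp())
loss = nn.functional.binary_cross_entropy(reconstruction, x, reduction="sum") + kl
```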
Generative adversarial networks (GANs)
GANs, introduced in 2014, also comprise two neural networks: A generator, which generates new content, and a discriminator, which evaluates the accuracy and quality of the generated data. These adversarial algorithms encourage the model to generate increasingly high-quality outputs.
GANs are commonly used for image and video generation, but can generate high-quality, realistic content across various domains. They’ve proven particularly successful at tasks such as style transfer (altering the style of an image from, say, a photo to a pencil sketch) and data augmentation (creating new, synthetic data to increase the size and diversity of a training data set).
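The sketch below, with illustrative dimensions and placeholder data, shows the adversarial training loop: the discriminator learns to separate real samples from generated ones, while the generator learns to fool it.

```python
# Minimal GAN sketch: generator vs. discriminator.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 784), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(784, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

real_data = torch.rand(16, 784) * 2 - 1     # stand-in for a batch of real samples

for step in range(100):
    # 1) Train the discriminator to tell real from generated samples.
    fake_data = generator(torch.randn(16, 32)).detach()
    d_loss = loss_fn(discriminator(real_data), torch.ones(16, 1)) + \
             loss_fn(discriminator(fake_data), torch.zeros(16, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Train the generator to fool the discriminator.
    fake_data = generator(torch.randn(16, 32))
    g_loss = loss_fn(discriminator(fake_data), torch.ones(16, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```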
Diffusion models
Introduced in 2015, diffusion models work by first adding noise to the training data until it’s random and unrecognizable, then training the model to iteratively reverse that process, removing the noise to reveal a desired output.
Diffusion models take more time to train than VAEs or GANs, but ultimately offer finer-grained control over output, particularly for high-quality image generation tools. DALL-E, OpenAI’s image-generation tool, is driven by a diffusion model.
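The sketch below illustrates the core idea with a toy linear noise schedule and a small denoising network (both deliberate simplifications of real diffusion models): noise is progressively added to clean data, and the network is trained to predict that noise so it can later be removed step by step.

```python
# Minimal diffusion sketch: add noise on a schedule, train a network to predict it.
import torch
import torch.nn as nn

timesteps = 100
betas = torch.linspace(1e-4, 0.02, timesteps)          # noise added at each step
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

denoiser = nn.Sequential(nn.Linear(784 + 1, 256), nn.ReLU(), nn.Linear(256, 784))
optimizer = torch.optim.Adam(denoiser.parameters(), lr=1e-3)

clean_data = torch.rand(32, 784)                       # stand-in for training images

for step in range(200):
    t = torch.randint(0, timesteps, (32,))
    noise = torch.randn_like(clean_data)
    a = alphas_cumprod[t].unsqueeze(1)
    noisy = a.sqrt() * clean_data + (1 - a).sqrt() * noise   # forward process: add noise
    # The network learns to predict the noise, given the noisy sample and timestep.
    predicted = denoiser(torch.cat([noisy, t.unsqueeze(1).float() / timesteps], dim=1))
    loss = nn.functional.mse_loss(predicted, noise)
    optimizer.zero_grad(); loss.backward(); optimizer.step()

# Generation runs the learned denoiser in reverse, starting from pure noise.
```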
Transformers
First documented in a 2017 paper published by Ashish Vaswani and others, transformers evolve the encoder-decoder paradigm to enable a big step forward in the way foundation models are trained, and in the quality and range of content they can produce. These models are at the core of most of today’s headline-making generative AI tools, including ChatGPT and GPT-4, Copilot, BERT, Bard, and Midjourney to name a few.
Transformers use a concept called attention, determining and focusing on what’s most important about data within a sequence (sketched in code after the list below), to:
- process entire sequences of data (e.g., sentences instead of individual words) simultaneously
- capture the context of the data within the sequence
- encode the training data into embeddings (vector representations) that represent the data and its context
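A minimal sketch of the scaled dot-product attention computation behind these steps, with illustrative shapes and random values:

```python
# Scaled dot-product attention: every position attends to every other position.
import torch
import torch.nn.functional as F

seq_len, dim = 5, 8                              # e.g., a five-token sentence
x = torch.randn(seq_len, dim)                    # token embeddings

w_q, w_k, w_v = (torch.randn(dim, dim) for _ in range(3))
queries, keys, values = x @ w_q, x @ w_k, x @ w_v

scores = queries @ keys.T / dim ** 0.5           # how relevant each token is to each other token
weights = F.softmax(scores, dim=-1)              # attention weights sum to 1 per token
output = weights @ values                        # context-aware representation of the sequence
```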
In addition to enabling faster training, transformers excel at natural language processing (NLP) and natural language understanding (NLU), and can generate longer sequences of data (e.g., not just answers to questions, but poems, articles or papers) with greater accuracy and higher quality than other deep generative AI models. Transformer models can also be trained or tuned to use tools (e.g., a spreadsheet application, HTML, a drawing program) to output content in a particular format.
What generative AI can create
Generative AI can create many types of content across many different domains.
Text: Generative models, especially those based on transformers, can generate coherent, contextually relevant text: everything from instructions and documentation to brochures, emails, website copy, blogs, articles, reports, papers, and even creative writing. They can also perform repetitive or tedious writing tasks (such as drafting summaries of documents or meta descriptions of web pages), freeing writers’ time for more creative, higher-value work.
Images and video: Image generation tools such as DALL-E, Midjourney and Stable Diffusion can create realistic images or original art, and can perform style transfer, image-to-image translation and other image editing or image enhancement tasks. Emerging gen AI video tools can create animations from text prompts and can apply special effects to existing video more quickly and cost-effectively than other methods.
Sound, speech and music: Generative models can synthesize natural-sounding speech and audio content for voice-enabled AI chatbots and digital assistants, audiobook narration and other applications. The same technology can generate original music that mimics the structure and sound of professional compositions.
Software code: Gen AI can generate original code, autocomplete code snippets, translate between programming languages and summarize code functionality. It enables developers to quickly prototype, refactor, and debug applications while offering a natural language interface for coding tasks.
Design and art: Generative AI models can generate unique works of art and design or assist in graphic design. Applications include dynamic generation of environments, characters or avatars, and special effects for virtual simulations and video games.
Simulations and synthetic data: Generative AI models can be trained to generate synthetic data, or synthetic structures based on real or synthetic data. For example, generative AI is applied in drug discovery to generate molecular structures with desired properties, aiding in the design of new pharmaceutical compounds.
Use cases for generative AI
The following are just a handful of gen AI use cases for enterprises. As the technology develops and organizations embed these tools into their workflows, we can expect to see many more.
Customer experience
Marketing organizations can save time and amp up their content production by using gen AI tools to draft copy for blogs, web pages, collateral, emails and more. But generative AI solutions can also produce highly personalized marketing copy and visuals in real time based on when, where and to whom the ad is delivered. And it will power next-generation chatbots and virtual agents that can give personalized responses and even initiate actions on behalf of customers, a significant advancement compared to the previous generation of conversational AI models trained on more limited data for very specific tasks.
Software development and application modernization
Code generation tools can automate and accelerate the process of writing new code. Code generation also has the potential to dramatically accelerate application modernization by automating much of the repetitive coding required to modernize legacy applications for hybrid cloud environments.
Digital labor
Generative AI can quickly draw up or revise contracts, invoices, bills and other digital or physical ‘paperwork’ so that employees who use or manage it can focus on higher level tasks. This can accelerate workflows in virtually every enterprise area including human resources, legal, procurement and finance.
Science, engineering and research
Generative AI models can help scientists and engineers propose novel solutions to complex problems. In healthcare, for example, generative models can be applied to synthesize medical images for training and testing medical imaging systems.