What Is a Large Language Model (LLM)?

Mark Jackley | Content Strategist | February 16, 2024

A large language model (LLM) is an increasingly popular type of artificial intelligence designed to generate human-like written responses to queries. LLMs are trained on large amounts of text data and learn to predict the next word, or sequence of words, based on the context provided—they can even mimic the writing style of a particular author or genre.

LLMs emerged from research labs and made headlines in the early 2020s; since then, they have become both standalone products and value-added capabilities embedded in many types of business software. Thanks to their impressive ability to interpret requests and produce helpful responses, LLMs are used in a wide range of applications, including natural language processing, machine translation, content generation, chatbots, and document summarization.

What Is a Large Language Model?

A large language model (LLM) is an artificial intelligence system that has been trained on a vast dataset, often consisting of billions of words taken from books, the web, and other sources, to generate human-like, contextually relevant responses to queries. Because LLMs are designed to understand questions, called "prompts" in LLM terminology, and to generate natural language responses, they can perform tasks such as answering customer questions, summarizing information in a report, generating first drafts of emails, and even writing poetry and computer code. LLMs typically develop a deep command of the grammar and semantics of the languages they are trained on, and they can be refined using a company's own data.

Because they can recognize and interpret human language, though not truly understand it the way humans do, LLMs represent a significant advance in natural language processing. The most well-known LLM-based product is probably ChatGPT, the AI chatbot from OpenAI, whose underlying models were trained on billions of words from books, articles, and websites. OpenAI offers direct access to ChatGPT via a web browser or mobile app, and its models can be linked to business software via programmable APIs, as sketched below. Other well-known examples include OpenAI's GPT-4, Google's Bard, and Cohere's Command.
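For illustration, here is a minimal sketch of sending a prompt to an LLM over an API. It assumes the OpenAI Python SDK and an API key set in the environment; the model name and prompts are purely illustrative, and other providers expose similar interfaces.

    # Minimal sketch: sending a prompt to an LLM over an API.
    # Assumes the OpenAI Python SDK (pip install openai) and an
    # OPENAI_API_KEY environment variable; the model name is illustrative.
    from openai import OpenAI

    client = OpenAI()  # reads the API key from the environment

    response = client.chat.completions.create(
        model="gpt-4",  # illustrative; substitute whichever model you use
        messages=[
            {"role": "system", "content": "You are a helpful customer service assistant."},
            {"role": "user", "content": "Summarize our 30-day return policy in two sentences."},
        ],
    )
    print(response.choices[0].message.content)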

The textual data used to train an LLM can be structured, as in a database, or unstructured. Most businesses have vast amounts of unstructured data, including text messages, emails, and documents.

Popular business uses of LLMs include customer service chatbots, digital assistants, and translation services that are more contextual, colloquial, and natural-sounding than traditional word-for-word translation tools. LLMs can also perform fairly advanced tasks, such as predicting protein structures and writing software code. Healthcare, pharmaceuticals, finance, and retail are among the industries putting LLMs to good use. For example, a healthcare provider might use an LLM to triage patients calling into a hotline, while an investment company might use one to sift through and summarize earnings reports, news stories, and social media posts to spot stock trends. In both scenarios, the LLM performs the task faster than human analysts possibly could. More broadly, LLMs can help organizations manage and analyze data, deriving insights that may create business value.

That’s led to great interest in the technology, so much so that the global market for LLMs is predicted to grow at a compound annual growth rate of 21.4% to reach US$40.8 billion by 2029, according to 2023 research by Valuates Reports.

There are some key concepts to understand when thinking about LLMs. They include:

  • Natural language. Any language that humans use in everyday situations, such as conversations or written reports, as opposed to one developed for a technical purpose, such as a programming language.
  • Natural language processing. A kind of data processing that can analyze the structure and meaning of written or spoken text.
  • Language model. A model of a natural language that can predict the next best word in a phrase or sentence within the desired context; the sketch following this list shows that prediction step in code.
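To make next-word prediction concrete, the sketch below loads a small open model (GPT-2) with the Hugging Face transformers library and prints the five most probable next tokens for a short prompt. The model and prompt are illustrative; commercial LLMs work the same way, only at far greater scale.

    # Minimal sketch of next-word (next-token) prediction using a small open
    # model (GPT-2) via the Hugging Face transformers library; the model and
    # prompt are illustrative.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "The customer asked about the status of her"
    inputs = tokenizer(prompt, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits  # one score per vocabulary token, per position

    next_token_probs = torch.softmax(logits[0, -1], dim=-1)  # distribution over the next token
    top = torch.topk(next_token_probs, k=5)
    for prob, token_id in zip(top.values, top.indices):
        print(f"{tokenizer.decode([int(token_id)])!r}  {prob:.3f}")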

Like human beings, LLMs aren't perfect. The quality of their output depends on the quality of their input, that is, the information used to train them. Outdated data can result in mistakes, such as a chatbot giving a wrong answer about a company's products. A lack of sufficient data can cause LLMs to make up answers, or "hallucinate." And while LLMs are great at prediction, they're less good, for now anyway, at explaining how they arrived at a given conclusion. Many LLMs are trained on books, newspaper articles, and even Wikipedia pages, raising concerns about copyright infringement. When not rigorously managed, LLMs may also present security challenges by, for example, using sensitive or private information in a response.

An AI technique called retrieval-augmented generation (RAG) can help with some of these issues by improving the accuracy and relevance of an LLM's output. RAG provides a way to add targeted information without changing the underlying model. RAG systems create knowledge repositories, typically based on an organization's own data, that can be continually updated to supply timely, contextual answers. For example, chatbots and other conversational systems might use RAG to make sure their answers to customers' questions are based on current information about inventory, the buyer's preferences, and previous purchases, and to exclude information that is out of date or irrelevant to the LLM's intended operational context.
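As a rough illustration, the sketch below retrieves the most relevant entry from a tiny in-house knowledge store and prepends it to the prompt before the LLM is called. It uses scikit-learn's TF-IDF vectorizer as a stand-in for a production retriever, and answer_with_llm() is a hypothetical placeholder for whatever LLM API you use.

    # Minimal RAG sketch: retrieve the most relevant snippet from an in-house
    # knowledge store and prepend it to the prompt. TF-IDF stands in for a
    # production retriever; answer_with_llm() is a hypothetical LLM call.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    knowledge_base = [
        "Item 4821 is back in stock as of March 1 and ships within two days.",
        "Our returns window is 30 days from the delivery date.",
        "Premium members receive free expedited shipping on all orders.",
    ]

    vectorizer = TfidfVectorizer()
    doc_vectors = vectorizer.fit_transform(knowledge_base)

    def retrieve(question, k=1):
        """Return the k knowledge-base entries most similar to the question."""
        scores = cosine_similarity(vectorizer.transform([question]), doc_vectors)[0]
        return [knowledge_base[i] for i in scores.argsort()[::-1][:k]]

    question = "Is item 4821 available, and how fast does it ship?"
    context = "\n".join(retrieve(question))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    # response = answer_with_llm(prompt)  # hypothetical call to the LLM of your choice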

Establishing an AI center of excellence before organization-specific training commences makes for a higher likelihood of success. Our ebook explains why and offers tips on building an effective CoE.

Large Language Model FAQs

What are the top five large language models?

Experts disagree on the top LLMs, but five that many tout are GPT-4 from OpenAI (the company behind ChatGPT), Claude 2 from Anthropic, Llama 2 from Meta, Orca 2 from Microsoft Research, and Command from Cohere.

What is the difference between LLMs and AI?

Artificial intelligence is a broad term that encompasses many technologies that can mimic human-like behavior or capabilities. Large language models are a type of generative AI, the umbrella term for AI models that generate content including text, images, video, spoken language, and music.