The business case for generative AI: A conversation with Cohere’s CEO

The generative AI pioneer discusses the intersection of LLMs and the enterprise and how Oracle is helping his firm lead the way.

Jeff Erickson | February 23, 2024

Aidan Gomez, left, CEO of Cohere, speaks with Oracle Executive Vice President Juan Loaiza during Oracle CloudWorld 2023.

If you’re the co-founder and CEO of a pioneering generative AI firm, 2023 was a hard year for downtime. Just ask Aidan Gomez, CEO of Cohere, a Toronto-based AI firm that has partnered with Oracle and others to bring its powerful large language models (LLMs) to enterprise customers. After inking the partnership with Oracle, Gomez tried to slip away to go skiing. That’s when he found himself snowed in on a Chilean mountainside, laptop open, leading an engineering meeting between his team and folks from the Oracle Cloud Infrastructure (OCI) group. “I was working with Greg [Pavlik] and the OCI team from a little hotel lobby on top of this mountain, you know, debugging things, figuring out issues, and getting pilots started,” he says. “Greg has been fantastic to work with and so has the whole OCI team.”

As part of the partnership, Oracle gave Cohere engineers OCI’s AI-ready compute clusters and offered guidance to help the firm become a leading AI platform in the enterprise sector. “Oracle has some of the best supercomputers on the planet, and we were eager to get our hands on those,” says Gomez. Beyond that, he says, Oracle provided a global platform to offer Cohere’s cutting-edge LLMs to businesses. “It really lives up to our mission of building large language models for enterprise in a way that is hyperprotective of their data,” he says.

“The whole field is so recent that this technology has only existed since right around the time that Cohere was formed”

Aidan Gomez, CEO, Cohere

With models from Cohere, Oracle has added generative AI capabilities across its entire suite of Fusion Applications and, along with LLMs from Meta, is using them to power OCI’s Generative AI service. With these services, OCI customers are able to customize and fine-tune LLMs to meet their specific needs. These include Cohere’s high-performance Command model for text generation, which helps businesses create and deploy chatbots, search engines, and copywriting tools. Cohere’s Summarize model provides high-quality summaries that accurately capture the most important information from a customer’s documents. OCI also offers Cohere’s Embed model, which transforms text into vectors—a semantic numeric representation of text used for classification, clustering, and even semantic search. This ability is a cornerstone of enterprise adoption of LLMs, and Cohere is at the forefront, says Gomez. “The whole field is so recent that this technology has only existed since right around the time that Cohere was formed,” he says. “And so we were in a very good position to execute and to create an organization that delivers these reliable, extremely high-quality language models.”
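The vector idea behind an embedding model can be sketched in a few lines. This is purely illustrative: a toy "embedding" built from character-trigram counts stands in for a real model such as Embed, but the principle is the same—texts with similar meaning-bearing patterns land close together in vector space, which is what classification, clustering, and semantic search build on.

```python
# Illustrative sketch only: normalized character-trigram counts stand in
# for a real embedding model's dense vectors.
from collections import Counter
import math

def toy_embed(text: str) -> Counter:
    """Map text to a sparse numeric vector (character-trigram counts)."""
    t = text.lower()
    return Counter(t[i:i + 3] for i in range(len(t) - 2))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse vectors."""
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Texts that share meaning-bearing patterns end up closer in vector
# space, which is the basis for classification and clustering.
invoice = toy_embed("invoice payment due for services rendered")
reminder = toy_embed("payment reminder: your invoice is due")
recipe = toy_embed("whisk the eggs and fold in the flour")
assert cosine(invoice, reminder) > cosine(invoice, recipe)
```

A production embedding model does the same mapping with hundreds or thousands of learned dimensions, so similarity reflects meaning rather than shared characters.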

I caught up with Gomez to find out more about life at the forefront of generative AI, and especially his mission to bring the technology to the enterprise.

The field of LLMs for the enterprise is very young. How was Cohere able to get in so early?

Aidan Gomez: While I was at Google Brain, I was lucky enough to be a part of the team that created the transformer, which forms the backbone of this latest generation of generative AI and large language models. After that, I was fortunate enough to work with Turing Award winner Geoffrey Hinton, as well as head of Google's AI, Jeff Dean.

That's where I met one of my co-founders, Nick Frosst, and we created Cohere to really bring these large language models to the enterprise so businesses could bring them to the world. We were frustrated because we saw the opportunity for these models to create transformational change, but it just wasn't happening. So, we decided we're going to be the ones to do it ourselves.

Help us understand the basic storyline of generative AI and business.

Aidan Gomez: Language is our default mode of intellectual exchange. It's the way we like to interact. Now our technology has acquired the ability to use language. We can hold a conversation with a machine—and a compelling conversation.

Cohere’s vision is to give technology language, which opens up totally new product experiences, new efficiencies, and a totally new mode of working; work is going to change in a way that's much more natural and intuitive to humans. We’re already seeing changes across marketing, ecommerce, legal, and finance. I expect that every single industry is going to be impacted.

How do you intend to help businesses use this technology?

Aidan Gomez: Cohere makes it easy for enterprises to adopt and use this technology by building a platform that gives them access to state-of-the-art language models.

Cohere is a model factory: we're a company that updates models frequently. We also build a platform that lets enterprise customers customize those models on their preferred cloud, and we do all of that within a completely private and secure system.

What are the principal challenges for your customers?

Aidan Gomez: The principal challenge with our customers—or really any enterprise adopting generative AI—is data privacy. You need to customize these models, and you need to do it in a way that doesn't leak your data to the outside world or give your competitors access to that data. With Cohere, the way that we've structured the platform gives you the ability to deploy it completely privately within your VPC [virtual private cloud]. Not even Cohere can see your data. It's truly your data, and our models are being brought to that data. So that's a key differentiator. You don't need to send your data over the wire into our models.

How do you make models more helpful to enterprise customers?

Aidan Gomez: One way is with a technology called retrieval augmented generation, or RAG. It's a way to make models much more relevant, much more reliable and trustworthy.

RAG gives the model the ability to query databases that are proprietary to you. Imagine, as a consumer, if you could chat to a chatbot that had access to your emails, it would know much more about you than a generic chatbot. It makes the experience more relevant and personalized to each individual user or enterprise. You can augment these models via RAG with your institution's knowledge center.
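The RAG flow described here can be sketched roughly as follows. This is an illustrative toy, not Cohere's implementation: a keyword-overlap retriever stands in for a real embedding model and vector database, and the document store, IDs, and query are invented for the example.

```python
# Minimal RAG sketch (illustrative). A real system would retrieve from a
# vector database and send the prompt to an LLM; here we just show the
# retrieve-then-augment step.

DOCS = {
    "doc-1": "Our refund policy allows returns within 30 days of purchase.",
    "doc-2": "Support hours are 9am to 5pm Eastern, Monday through Friday.",
}

def retrieve(query: str, docs: dict) -> tuple:
    """Return (id, text) of the document sharing the most words with the query."""
    q = set(query.lower().split())
    return max(docs.items(), key=lambda kv: len(q & set(kv[1].lower().split())))

def build_prompt(query: str, docs: dict) -> str:
    doc_id, text = retrieve(query, docs)
    # The retrieved passage goes into the prompt so the model answers
    # from proprietary data and can cite its source instead of guessing.
    return f"Context [{doc_id}]: {text}\n\nQuestion: {query}\nAnswer, citing [{doc_id}]:"

prompt = build_prompt("What is the refund policy?", DOCS)
```

The key design point is that the model never needs the documents baked into its weights: the relevant passage is fetched at query time and attached to the prompt, which is also what makes the citation step in the next answer possible.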

So this helps with the accuracy of the AI’s outputs?

Aidan Gomez: Right. For example, a well-known issue with LLMs is hallucination, which is when these models dream up facts. And one interesting point about that is that it's actually difficult to dream up facts, and the models don't want to do it. If they know the correct answer to something, they'd much prefer to just offer that correct answer to the input. They only hallucinate when they feel like they have to, which is when they don't have access to that information.

That's why RAG is so crucial. It allows the model to go out and search for information, discover it, and bring it back to use as part of its answer. That boosts reliability and trustworthiness because, instead of just having to take the AI at its word, you now get a citation saying, "I read this over here in this document." So not only is it more accurate, but humans can verify it, and they can catch errors much more easily.

Are there other technologies that help enterprise customers take advantage of LLMs?

Aidan Gomez: I think it's very important to have vector databases in Oracle Database [and MySQL HeatWave] because vector databases are what allow for truly semantic search. Semantic search means you aren’t just matching keywords in some query to documents, but matching the intent and meaning of a query to those documents, so it surfaces dramatically more relevant results.

Relevance in search queries is key to enterprise customers. When you search a document base, you don't want to have to try three, four, or five times, crafting different queries, changing the keywords, trying to figure out the query that surfaces the information that you want. Vector search, or embeddings-based search, or semantic search—there are a bunch of different names for it—gives you an intuitive way to search. It understands the intent of your query and searches with that instead of the specific words that are contained within it.
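The ranking step of embeddings-based search can be sketched like this. Illustrative only: the hand-made three-dimensional vectors below are stand-ins for real model embeddings (which typically have hundreds of dimensions), and the document titles and "axes" are invented for the example.

```python
# Sketch of embeddings-based ranking: documents and the query are vectors,
# and cosine similarity ranks by meaning rather than by shared keywords.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Pretend axes: [finance-ness, travel-ness, food-ness]
doc_vectors = {
    "quarterly earnings report": [0.9, 0.1, 0.0],
    "guide to backpacking in Peru": [0.0, 0.9, 0.1],
    "weeknight pasta recipes": [0.0, 0.1, 0.9],
}
query_vector = [0.8, 0.2, 0.0]  # e.g., an embedding of "company revenue summary"

ranked = sorted(doc_vectors,
                key=lambda d: cosine(query_vector, doc_vectors[d]),
                reverse=True)
# The finance document ranks first even though it shares no literal
# keywords with the query text.
```

A vector database does the same comparison at scale, with indexing structures so the nearest vectors can be found without scanning every document.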

I'm extremely excited to be partnering with the Oracle Database team to provide Cohere’s latest embedding model, which is dramatically more accurate and sets a new state of the art in semantic search.

How is Oracle helping you reach your goals?

Aidan Gomez: The relationship with Oracle has been hugely impactful here. Oracle is one of the most well-known and trusted brands in the enterprise [space], and so we benefit massively from getting to partner and learn from Oracle.

At the same time, the transformation that we're seeing across Oracle's product suite, as well as the collaboration in creating entirely new products on OCI, is just so exciting. The Oracle team has been extraordinary to work with at every level, from the technical teams within OCI to the more product-focused teams, in their engagement, partnership, and support.

What does the future hold?

Aidan Gomez: Generative AI applications are exciting principally because you can interact with them in a way that you could never interact with products or technology before. Suddenly you have the ability to use language, which leads to a magical product experience where you feel like the technology and the product that you're interacting with understands you. I think that's why you're seeing such a stir around the AI space and why these applications are exploding.

We want to enable this new interface onto the products and the services that our enterprise customers deliver. Then, we move beyond language. Language will be one piece of the puzzle, but we'll also have vision and the ability to speak via audio to these models, creating much more automated experiences. For example, using our models to power agents, which can go off and do work on your behalf.

