HeatWave GenAI Features

In-database large language models

In-database large language models (LLMs) greatly simplify the development of GenAI applications. You can quickly benefit from generative AI without selecting an external LLM or worrying about its integration complexity, cost, or availability in various data centers.

  • You can, for example, use the built-in LLMs to help generate or summarize content and search data to perform retrieval-augmented generation (RAG) with HeatWave Vector Store; a sketch of invoking a built-in LLM follows this list.
  • You can also combine generative AI with other built-in HeatWave capabilities, such as machine learning, to help reduce costs and get more accurate results faster.
  • You can use the built-in LLMs in all OCI regions, OCI Dedicated Region, and across clouds and obtain consistent results with predictable performance across deployments.
  • There are no additional costs to use the in-database LLMs. You can reduce infrastructure costs by eliminating the need to provision GPUs. Additionally, HeatWave automatically optimizes system resources, such as thread count, batch size, and segment size, to further help reduce costs.
  • In-database LLMs and HeatWave Chat help developers deliver apps that are preconfigured for contextual conversations in natural language. There’s no need to subscribe to external LLMs or provision GPUs.
  • Native LLM execution within HeatWave helps minimize the risks associated with data movement. The LLMs can take advantage of HeatWave Vector Store to expand their knowledge using proprietary data instead of relying on fine-tuning.
  • Oracle HeatWave GenAI is integrated with the OCI Generative AI service for accessing pretrained, foundational models from Cohere and Meta.
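
As a minimal sketch of how a built-in LLM is invoked, assuming the sys.ML_MODEL_LOAD and sys.ML_GENERATE routines described in the HeatWave GenAI documentation; the model name and option keys are illustrative and vary by release:

    -- Load a built-in LLM into HeatWave memory (model name is illustrative)
    CALL sys.ML_MODEL_LOAD('mistral-7b-instruct-v1', NULL);

    -- Generate text with the in-database LLM using standard SQL
    SELECT sys.ML_GENERATE(
      "Summarize the benefits of in-database LLMs in two sentences.",
      JSON_OBJECT("task", "generation", "model_id", "mistral-7b-instruct-v1"));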

HeatWave Vector Store

HeatWave Vector Store lets you combine the power of LLMs with your proprietary data to help get more accurate and contextually relevant answers than using models trained only on public data. The vector store ingests documents in a variety of formats, including PDF, and stores them as embeddings generated via an embedding model. For a given user query, the vector store helps identify the most similar documents by embedding the query and running a similarity search against the stored embeddings. These documents are used to augment the prompt given to the LLM so that it provides a more contextual answer for your business.
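
As a sketch of this retrieve-then-augment flow, assuming the sys.ML_RAG routine described in the HeatWave GenAI documentation; the schema and table names are hypothetical:

    -- Point retrieval at a vector store table (names are hypothetical)
    SET @options = JSON_OBJECT("vector_store",
      JSON_ARRAY("demo_db.quarterly_reports_embeddings"));

    -- Retrieve the most similar documents, augment the prompt, and generate
    CALL sys.ML_RAG("What were the Q3 revenue highlights?", @output, @options);
    SELECT JSON_PRETTY(@output);

The output includes the generated answer together with citations of the source documents that were retrieved to augment the prompt.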

  • HeatWave Vector Store lets you use generative AI with your business documents without moving data to a separate vector database and without AI expertise.
  • Embedding generation in the vector store processes multiple input files in parallel, across multiple threads on all cluster nodes. As a result, creating the vector store and ingesting unstructured data in various formats, such as PDF, DOCX, HTML, TXT, or PPTX, is very fast and scales with the cluster size.
  • The pipeline to discover and ingest proprietary documents into the vector store is automated, including transforming users’ unstructured text data and generating embeddings, so developers and analysts without AI expertise can easily use the vector store; a sketch of the ingestion call follows this list.
  • The vector store resides in object storage, making it very cost-effective and highly scalable, even with large data sets. You can also easily share the vector store with different applications.
  • Data transformation is completed inside the database, which helps reduce security risks by eliminating data movement and helps reduce costs by eliminating the need for client resources.
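
A sketch of the automated ingestion call, assuming the sys.VECTOR_STORE_LOAD routine described in the HeatWave documentation; the object storage URI, option keys, and table name are illustrative:

    -- Ingest every supported document under an object storage prefix;
    -- parsing, chunking, and embedding generation are handled automatically
    -- (URI and table name are illustrative)
    CALL sys.VECTOR_STORE_LOAD(
      'oci://demo-bucket@demo-namespace/quarterly_reports/',
      '{"table_name": "quarterly_reports_embeddings"}');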

Flexible and fast vector processing

Vector processing is accelerated by the in-memory, scale-out architecture of HeatWave. HeatWave supports a new native VECTOR data type, enabling you to use standard SQL to create, process, and manage vector data, as in the sketch after the following list.

  • You can combine vectors with other SQL operators. For example, you can run analytic queries that join several tables with different documents and perform similarity searches across all documents.
  • In-memory representation and a scale-out architecture mean that vector processing is parallelized across up to 512 HeatWave cluster nodes and executed at memory bandwidth, making it extremely fast and free of accuracy loss.
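
A minimal sketch of vector data in standard SQL, assuming the VECTOR column type, the sys.ML_EMBED_ROW routine, and a DISTANCE function as described in the MySQL and HeatWave documentation; the table, vector dimension, and embedding model name are illustrative:

    -- A hypothetical table of document chunks with a VECTOR column
    CREATE TABLE doc_chunks (
      id INT PRIMARY KEY,
      chunk TEXT,
      embedding VECTOR(384));  -- dimension depends on the embedding model

    -- Embed the query text with a built-in embedding model (name illustrative)
    SELECT sys.ML_EMBED_ROW(
      "What is our refund policy?",
      JSON_OBJECT("model_id", "all_minilm_l12_v2")) INTO @query_vec;

    -- Rank chunks by cosine distance; this combines with joins, filters,
    -- and other SQL operators like any relational query
    SELECT id, chunk, DISTANCE(embedding, @query_vec, 'COSINE') AS dist
    FROM doc_chunks
    ORDER BY dist
    LIMIT 3;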

HeatWave Chat

A new HeatWave Chat interface lets you hold contextual, natural-language conversations augmented by the proprietary documents in the vector store; a sketch of a chat session follows the list below.

  • You can interact with unstructured data stored in MySQL Database and in object storage using natural language. The context of the questions is preserved to enable a human-like conversation with follow-up questions. HeatWave maintains this context, including the history of questions asked, citations of the source documents, and the prompt to the LLM, letting you verify the source of the answers generated by the LLM. The context is maintained in HeatWave and is available to all applications using HeatWave.
  • The integrated Lakehouse Navigator lets you see data available in MySQL Database and object storage. You can then easily load selected data into HeatWave Vector Store and instruct the LLM to retrieve information from that specific source. As a result, you can reduce costs by searching through a smaller data set while increasing both speed and accuracy.
  • You can search across the entire database or restrict the search to a folder.
  • You can select among several LLMs in HeatWave Chat, either built-in or accessible through the OCI Generative AI service.
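
A sketch of a chat session, assuming the sys.HEATWAVE_CHAT routine and the @chat_options session variable described in the HeatWave documentation; the option keys and table names are illustrative:

    -- Restrict the chat to a specific vector store table
    -- (option keys and names are illustrative)
    SET @chat_options = JSON_OBJECT("tables", JSON_ARRAY(
      JSON_OBJECT("schema_name", "demo_db",
                  "table_name", "quarterly_reports_embeddings")));

    CALL sys.HEATWAVE_CHAT("Summarize our Q3 results.");

    -- A follow-up question; the conversation history kept in @chat_options
    -- lets HeatWave resolve "that" from the previous turn
    CALL sys.HEATWAVE_CHAT("How does that compare to Q2?");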