AI Solution

Fine-Tuning LLMs with OCI Generative AI Playground


When we work with large language models (LLMs), the model's responses are shaped by the data it was trained on. Training these models from scratch is difficult, however, because it consumes enormous resources, such as GPUs and power.

Thankfully, model optimization techniques now allow a pretrained model to be adapted with a much smaller dataset and far less compute, through a process called fine-tuning.

The sample solution below demonstrates how to fine-tune an LLM using the Oracle Cloud Infrastructure (OCI) Generative AI playground, an interface in the OCI console.
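Before fine-tuning, you need a training dataset. A common layout for fine-tuning services, including OCI Generative AI per its fine-tuning documentation, is JSONL with one prompt/completion pair per line. The sketch below shows one way to produce such a file; the sample records and the filename are illustrative, and you should verify the exact field names against the current OCI documentation.

```python
import json

# Hypothetical sample records -- replace with your own domain data.
examples = [
    {"prompt": "Summarize: OCI offers managed GPU clusters.",
     "completion": "OCI provides managed GPU infrastructure."},
    {"prompt": "Classify the sentiment: The service was fast and helpful.",
     "completion": "positive"},
]

def write_jsonl(records, path):
    """Write one JSON object per line -- the JSONL layout commonly
    expected for fine-tuning training data."""
    with open(path, "w", encoding="utf-8") as f:
        for rec in records:
            f.write(json.dumps(rec, ensure_ascii=False) + "\n")

write_jsonl(examples, "training_data.jsonl")
```

The resulting file can then be uploaded to OCI Object Storage and referenced when creating a fine-tuning job.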


Demo: Fine-Tuning LLMs with OCI Generative AI Playground (1:29)

Prerequisites and setup

  1. Oracle Cloud account—sign-up page
  2. Getting started with OCI Generative AI—documentation for OCI Generative AI
  3. OCI Generative AI fine-tuning—documentation for OCI Generative AI fine-tuning
  4. OCI Generative AI playground—documentation for OCI Generative AI playground
  5. Python 3.10
  6. Open source package manager—Conda
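Since the prerequisites call for Python 3.10, a quick local check can save debugging later. This is a minimal sketch, not part of the OCI tooling; the required version tuple simply mirrors the list above.

```python
import sys

# Prerequisite from the list above: Python 3.10.
REQUIRED = (3, 10)

def check_python(required=REQUIRED):
    """Return True if the running interpreter meets the required version."""
    return sys.version_info[:2] >= required

if __name__ == "__main__":
    print("Python OK" if check_python() else "Python version too old")
```

With Conda, an environment pinned to the right version can be created with `conda create -n oci-genai python=3.10`.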