AI Solution

Fine-Tuning LLMs with OCI Generative AI Playground

Introduction

When we work with large language models (LLMs), the model's responses are shaped by the data it was trained on. Training these models from scratch, however, is difficult because it consumes enormous resources, such as GPUs and power.

Thankfully, model optimization has advanced to allow a lighter form of training that adapts a pretrained model with a much smaller dataset, through a process called fine-tuning.

The sample solution below demonstrates how to fine-tune an LLM using the Oracle Cloud Infrastructure (OCI) Generative AI playground, an interface in the OCI console.
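Fine-tuning starts with a training dataset of example inputs and desired outputs. As a minimal sketch, assuming the service accepts JSONL files of prompt/completion pairs (verify the exact schema in the OCI Generative AI fine-tuning documentation), a dataset file could be prepared like this; the example records and the filename are hypothetical:

```python
import json

# Hypothetical prompt/completion pairs illustrating the JSONL
# training-data layout commonly used for fine-tuning.
examples = [
    {"prompt": "Summarize: OCI offers managed GPU clusters for AI workloads.",
     "completion": "OCI provides managed GPU infrastructure for AI."},
    {"prompt": "Classify the sentiment: 'The service was fast and reliable.'",
     "completion": "positive"},
]

# Write one JSON object per line (the JSONL convention).
with open("training_data.jsonl", "w") as f:
    for record in examples:
        f.write(json.dumps(record) + "\n")
```

The resulting file can then be uploaded to OCI Object Storage and selected as the training dataset when creating a fine-tuning job in the console.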

Demo

Demo: Fine-Tuning LLMs with OCI Generative AI Playground (1:29)

Prerequisites and setup

  1. Oracle Cloud account—sign-up page
  2. Getting started with OCI Generative AI—documentation for OCI Generative AI
  3. OCI Generative AI fine-tuning—documentation for OCI Generative AI fine-tuning
  4. OCI Generative AI playground—documentation for OCI Generative AI playground
  5. Python 3.10
  6. Open source package manager—Conda
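With Conda installed, the Python 3.10 environment named in the prerequisites can be set up as follows; the environment name `oci-genai` and the `pip install oci` step (the OCI Python SDK, useful for programmatic access alongside the playground) are assumptions, not part of the original instructions:

```shell
# Create an isolated environment with the Python version this demo assumes
conda create -n oci-genai python=3.10 -y

# Activate it before installing anything
conda activate oci-genai

# Optional: install the OCI Python SDK for scripted access to OCI services
pip install oci
```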
