New models from Cohere and Meta are now available for OCI Generative AI.
With Dedicated AI clusters, you can host foundational models on dedicated GPUs that are private to you. These clusters provide the stable, high-throughput performance required for production use cases and can support both hosting and fine-tuning workloads. OCI Generative AI lets you scale out your cluster with zero downtime to handle changes in volume. Up to 50 custom fine-tuned models can be hosted on the same dedicated hosting cluster, provided they all share the same base foundational model.
OCI Generative AI is integrated with LangChain, an open source framework for developing generative AI applications powered by language models. LangChain makes it easy to swap out the abstractions and components needed to work with language models.
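As one illustration of that integration, LangChain's community package provides an `OCIGenAI` wrapper for OCI Generative AI models. The sketch below assumes the `langchain-community` and `oci` packages are installed and OCI credentials are configured; the model ID, service endpoint, and compartment OCID are illustrative placeholders, not values from this announcement.

```python
# Minimal sketch: invoking an OCI Generative AI model through LangChain.
# Assumes ~/.oci/config holds valid credentials. The model_id, endpoint,
# and compartment OCID below are placeholders for your own values.
from langchain_community.llms.oci_generative_ai import OCIGenAI
from langchain_core.prompts import PromptTemplate

llm = OCIGenAI(
    model_id="cohere.command",  # placeholder base model ID
    service_endpoint="https://inference.generativeai.us-chicago-1.oci.oraclecloud.com",
    compartment_id="ocid1.compartment.oc1..example",  # placeholder OCID
    model_kwargs={"temperature": 0.2, "max_tokens": 200},
)

# Because LangChain components are composable, the llm above could be
# swapped for another provider's model without changing the chain.
prompt = PromptTemplate.from_template("Summarize in one sentence: {text}")
chain = prompt | llm

print(chain.invoke({"text": "Dedicated AI clusters host models on private GPUs."}))
```

Because the same chain interface works across providers, swapping models means changing only the `llm` object, which is the kind of component substitution the integration is meant to enable.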
OCI Generative AI provides content moderation controls, endpoint model swap with zero downtime, and endpoint activation and deactivation capabilities. For each model endpoint, OCI Generative AI also captures a series of analytics, including call statistics, tokens processed, error counts, and more.
By embedding features created with OCI Generative AI directly into its business applications, Oracle is making it easy for customers to instantly access AI-driven features without complex integrations.