The project is licensed under the Universal Permissive License v1.0. As the name suggests, it is a permissive, OSI- and FSF-approved, GPL-compatible license.
The code for this project is available in the repository. You can read it to understand how each feature is implemented, and contribute your own improvements.
LLM and embedding models are optimized to achieve different goals. Some are generalists, some are inference-focused, some are trained on corpora spanning multiple languages, and some are domain-specific. With AI Optimizer and Toolkit, you can use the ones that align with your goals and data. For production, you can use a service provider or a self-hosted model.
Every model exposes a set of parameters that control its creativity, the number of candidate continuations it considers, and which of those it will accept. You can easily adjust these parameters to reduce hallucinations and tune model behavior to meet your business needs.
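Temperature is one such parameter: it rescales the model's token probabilities before sampling, so low values concentrate probability on the most likely continuation while high values flatten the distribution. A minimal sketch of that rescaling (the exact parameter surface varies by model and provider):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw model scores (logits) into sampling probabilities.

    Lower temperature sharpens the distribution (more deterministic);
    higher temperature flattens it (more varied output).
    """
    scaled = [l / temperature for l in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # hypothetical scores for three candidate tokens
cautious = softmax_with_temperature(logits, 0.2)
creative = softmax_with_temperature(logits, 2.0)
```

With `temperature=0.2` nearly all probability mass lands on the top token, while `temperature=2.0` spreads it across all three; sampling parameters such as `top_k` and `top_p` then restrict which of these candidates remain eligible.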
Each request to the LLM includes the user's prompt, some information about past exchanges, and specific instructions to the LLM on how to behave and respond. You have the flexibility to choose from the predefined options or add your own custom instructions to get the most from your model.
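A request of this shape can be assembled from those three pieces. The sketch below follows the common chat-completion message convention; the exact field names and history limits are illustrative assumptions, not toolkit defaults:

```python
def build_request(system_prompt, history, user_prompt, max_turns=3):
    """Assemble a chat request: instructions, recent history, new prompt.

    system_prompt: how the model should behave and respond.
    history: prior messages as {"role": ..., "content": ...} dicts.
    max_turns: keep only the last few exchanges to bound context size.
    """
    messages = [{"role": "system", "content": system_prompt}]
    messages.extend(history[-2 * max_turns:])  # user + assistant per turn
    messages.append({"role": "user", "content": user_prompt})
    return messages

request = build_request(
    "Answer concisely and cite only the provided context.",
    [{"role": "user", "content": "hi"},
     {"role": "assistant", "content": "Hello, how can I help?"}],
    "What is retrieval-augmented generation?",
)
```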
Provide foundation models with access to your private data using retrieval-augmented generation. AI Optimizer and Toolkit will help with the setup so your production application has proper access to your data.
Store and use unstructured data by breaking it into chunks and assigning each chunk a corresponding embedding. Users can then draw on both structured and unstructured data in their interactions with the AI application you create.
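Chunking is commonly done with a fixed window and some overlap, so context that straddles a boundary is not lost. A minimal sketch (the chunk size and overlap here are tunable assumptions, not toolkit defaults):

```python
def chunk_text(text, chunk_size=200, overlap=50):
    """Split text into fixed-size chunks; consecutive chunks share
    `overlap` characters so boundary context appears in both."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break  # the final chunk already reaches the end of the text
    return chunks
```

Each chunk is then passed to an embedding model, and the resulting vector is stored alongside the chunk for retrieval.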
Oracle Database 23ai is capable of semantic search using vector embeddings. Similar items within the database are determined by distance calculations using sophisticated vector indexes.
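Cosine distance is one of the measures commonly used for such comparisons; the database evaluates it at scale through its vector indexes. A pure-Python sketch of the same calculation, with a toy top-k lookup:

```python
import math

def cosine_distance(a, b):
    """1 - cosine similarity: 0 for identical directions, up to 2 for opposite."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (norm_a * norm_b)

def nearest(query, vectors, k=2):
    """Return the names of the k stored vectors closest to the query."""
    ranked = sorted(vectors.items(), key=lambda kv: cosine_distance(query, kv[1]))
    return [name for name, _ in ranked[:k]]
```

In the database this brute-force scan is replaced by an approximate vector index, which is what keeps semantic search fast over large collections.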
Select AI uses natural language to query Oracle Databases and can be used to extend the capabilities of the application you create.
AI Optimizer and Toolkit can generate a set of questions and answers to assess the quality of your chatbot. Use them to automatically test every iteration.
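As a sketch, such a generated question-and-answer set can drive a simple regression check on each iteration. Here `answer_fn` stands in for your chatbot, and exact string matching is a deliberately naive placeholder metric:

```python
def evaluate(qa_pairs, answer_fn):
    """Return the fraction of reference answers the chatbot reproduces.

    qa_pairs: list of (question, reference_answer) tuples.
    answer_fn: callable mapping a question to the chatbot's answer.
    """
    correct = sum(
        1 for question, reference in qa_pairs
        if answer_fn(question).strip().lower() == reference.strip().lower()
    )
    return correct / len(qa_pairs)
```

Running this after every change to the prompt, chunking, or model configuration makes regressions visible immediately.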
Use a second model to automatically assess the quality of the responses from your AI application.
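This "LLM-as-judge" pattern amounts to sending the second model a scoring request; the rubric wording and message fields below are illustrative assumptions:

```python
def build_judge_request(question, reference_answer, candidate_answer):
    """Build a chat request asking a judge model to score a candidate
    answer against a reference answer on a 1-5 scale."""
    rubric = (
        "Score the candidate answer from 1 to 5 for factual consistency "
        "with the reference answer. Reply with the number only."
    )
    details = (
        f"Question: {question}\n"
        f"Reference: {reference_answer}\n"
        f"Candidate: {candidate_answer}"
    )
    return [
        {"role": "system", "content": rubric},
        {"role": "user", "content": details},
    ]
```

Because the judge returns a constrained score rather than free text, its responses can be parsed and aggregated across the whole test set.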