Select AI Features
Select AI supports the following features.
- Select AI Conversations
Conversations in Select AI refer to the interactive dialogue between the user and the system, where a sequence of user-provided natural language prompts is stored and managed to support long-term memory for LLM interactions.
- Select AI with Retrieval Augmented Generation (RAG)
Select AI with RAG augments your natural language prompt by retrieving content from your specified vector store using semantic similarity search. This reduces hallucinations by drawing on your specific, up-to-date content and produces more relevant natural language responses to your prompts.
- Synthetic Data Generation
Generate synthetic data using random generators, algorithms, statistical models, and Large Language Models (LLMs) to simulate real data for developing and testing solutions effectively.
- Feedback
Select AI enables you to provide feedback that helps your selected LLM generate more accurate SQL queries.
- Generate a Summary with Select AI
Select AI enables you to generate a summary of your text, especially large texts, generally supporting up to 1 GB using AI providers. You can extract key insights from texts or large files to suit your specific needs. This feature uses the LLM specified in your AI profile to generate the summary.
- Translate
With Select AI, you can use generative AI through the OCI translation service to translate your text into the language of your choice.
- Private Endpoint Access for Select AI Models
You can enable secure, private access to generative AI models by deploying Ollama or Llama.cpp behind a private endpoint within your Virtual Cloud Network (VCN). This architecture is designed for organizations that need to keep AI processing fully private. The setup isolates both the Autonomous AI Database Serverless and your AI model servers from the public internet using private subnets, security lists, and controlled routing.
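The features above are driven through an AI profile and invoked from SQL. As an illustrative sketch (the profile name MY_AI_PROFILE and the prompts are hypothetical, and the profile must already exist), a session might use Select AI like this:

```sql
-- Set the AI profile for the current session (profile name is hypothetical)
EXEC DBMS_CLOUD_AI.SET_PROFILE('MY_AI_PROFILE');

-- Natural-language interaction using the SELECT AI keyword
SELECT AI chat What is Retrieval Augmented Generation;

-- Equivalent call through the DBMS_CLOUD_AI package, naming the profile
-- explicitly; the 'narrate' action returns a natural language answer
SELECT DBMS_CLOUD_AI.GENERATE(
         prompt       => 'Summarize our quarterly sales by region',
         profile_name => 'MY_AI_PROFILE',
         action       => 'narrate')
FROM dual;
```

Other actions such as `showsql` and `runsql` return or execute the generated SQL instead of a narrated answer; consult the DBMS_CLOUD_AI documentation for the full list.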
Private Endpoint Access for Select AI Models
Deploying Ollama or Llama.cpp behind a private endpoint within your Virtual Cloud Network (VCN) keeps AI processing fully private: both the Autonomous AI Database Serverless and your AI model servers are isolated from the public internet using private subnets, security lists, and controlled routing.
The setup uses a jump server in a public subnet for secure SSH access, while the database and AI models run in private subnets connected through Internet Gateway, Service Gateway, and NAT Gateway.
You create a VCN, configure subnets and gateways, and set up security rules that allow only internal traffic. See Setting up a private endpoint for AI models using Ollama and Llama.cpp for more information. The document walks you through installing Ollama and Llama.cpp, configuring a private API endpoint using Nginx as a reverse proxy, and validating connectivity from Autonomous AI Database. This configuration ensures that all AI processing occurs privately within your network boundary, enabling Select AI to integrate model capabilities while keeping sensitive data secure and fully isolated.
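As a minimal sketch of the reverse-proxy step (assuming Ollama listens on its default port 11434 on the same host; the file path, server name, and certificate paths are hypothetical), the Nginx site configuration might look like:

```nginx
# /etc/nginx/conf.d/ollama.conf (hypothetical path)
server {
    listen 443 ssl;
    server_name ai.example.internal;              # hypothetical private DNS name

    ssl_certificate     /etc/nginx/certs/ai.crt;  # hypothetical certificate paths
    ssl_certificate_key /etc/nginx/certs/ai.key;

    location / {
        # Forward API calls to the local Ollama server (default port 11434)
        proxy_pass http://127.0.0.1:11434;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Because the endpoint is reachable only on the private subnet, the database's AI profile can point at this address while all traffic stays inside the VCN.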
Parent topic: Select AI Features