SUTRA With AISuite
🧠 Build with SUTRA-V2 using AISuite
Welcome to the SUTRA-V2 integration guide for AISuite. It shows developers how to use SUTRA-V2's advanced reasoning capabilities through AISuite, a unified interface for multiple generative AI providers.
🔍 What is SUTRA-V2?
SUTRA-V2 is a powerful AI model optimized for:
- Multi-step logical inference
- Structured problem-solving
- Deep contextual analysis
- High-performance conversational AI across 50+ languages
Perfect for applications in:
- Legal & policy interpretation
- Business intelligence
- Scientific reasoning
- Complex query answering
🧪 Quickstart with AISuite
Prerequisites
- Python 3.10 or higher
- A SUTRA API key from https://developer.two.ai
- The AISuite library (installed in the next step)
Step 1: Install Dependencies
Install AISuite with OpenAI support:
```bash
pip install 'aisuite[openai]'
```
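To confirm the install before continuing, a quick import check works (this assumes both packages landed in the currently active environment):

```bash
python -c "import aisuite, openai; print('aisuite and openai are installed')"
```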
Step 2: Configure AISuite with SUTRA-V2
Set your SUTRA API key as an environment variable:
```bash
export SUTRA_API_KEY="your-sutra-api-key"
```
Then, configure the AISuite client to use the OpenAI provider with the SUTRA endpoint:
```python
import os

import aisuite as ai

# Configure AISuite to route the OpenAI provider to the SUTRA-V2 endpoint
provider_configs = {
    "openai": {
        "api_key": os.getenv("SUTRA_API_KEY"),
        "base_url": "https://api.two.ai/v2",
    }
}

client = ai.Client(provider_configs)

# Test a basic completion
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain artificial intelligence in 3 sentences."},
]

response = client.chat.completions.create(
    model="openai:sutra-v2",
    messages=messages,
    max_tokens=1024,
    temperature=0.7,
)

print(response.choices[0].message.content)
```
Step 3: Verify Setup
Run the script to ensure SUTRA-V2 responds correctly. If successful, you’ll see a concise explanation of artificial intelligence.
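If the call fails, wrapping it in a try/except makes configuration problems easier to spot. The sketch below reuses the same provider configuration and catches exceptions generically rather than relying on provider-specific error classes, which may vary between AISuite versions:

```python
import os

import aisuite as ai

client = ai.Client({
    "openai": {
        "api_key": os.getenv("SUTRA_API_KEY"),
        "base_url": "https://api.two.ai/v2",
    }
})

try:
    response = client.chat.completions.create(
        model="openai:sutra-v2",
        messages=[{"role": "user", "content": "Reply with the single word: ready"}],
        max_tokens=16,
    )
    print(response.choices[0].message.content)
except Exception as exc:
    # Typical causes: SUTRA_API_KEY not set, a typo in base_url, or a model name mismatch
    print(f"SUTRA-V2 request failed: {exc}")
```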
🔁 Streaming Response Example
Stream responses for real-time interaction:
```python
import os

import aisuite as ai

# Configure AISuite with SUTRA-V2
provider_configs = {
    "openai": {
        "api_key": os.getenv("SUTRA_API_KEY"),
        "base_url": "https://api.two.ai/v2",
    }
}

client = ai.Client(provider_configs)

# Stream a response chunk by chunk
messages = [
    {"role": "system", "content": "You are a creative writer."},
    {"role": "user", "content": "Write a short story about a robot discovering emotions."},
]

stream = client.chat.completions.create(
    model="openai:sutra-v2",
    messages=messages,
    max_tokens=1024,
    temperature=0.7,
    stream=True,
)

for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
```
🧠 Ideal Use Cases with AISuite
| Scenario | Example |
|---|---|
| Legal Analysis | Interpret complex legal documents and policies |
| Business Intelligence | Analyze market trends and strategic decisions |
| Scientific Reasoning | Evaluate cause-effect relationships in research |
| Complex Q&A | Answer multi-step queries with logical inference |
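As an illustration of the last row, the sketch below sends a multi-step analytical question through the same client configuration used in the Quickstart; the system prompt and question are placeholders to replace with your own:

```python
import os

import aisuite as ai

client = ai.Client({
    "openai": {
        "api_key": os.getenv("SUTRA_API_KEY"),
        "base_url": "https://api.two.ai/v2",
    }
})

# A multi-step query: the model must compare options, weigh trade-offs, and conclude
messages = [
    {"role": "system", "content": "You are an analyst. Reason step by step, then give a final recommendation."},
    {"role": "user", "content": (
        "A retailer can either open a second warehouse or switch to a third-party "
        "logistics provider. List the key trade-offs of each option and recommend one."
    )},
]

response = client.chat.completions.create(
    model="openai:sutra-v2",
    messages=messages,
    max_tokens=1024,
    temperature=0.7,
)
print(response.choices[0].message.content)
```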
⚙️ Recommended Parameters
| Parameter | Purpose | Recommended for V2 |
|---|---|---|
| `temperature` | Controls randomness | 0.7 for balanced output |
| `max_tokens` | Limits response length | 512–1024 |
| `presence_penalty` | Reduces repetition | 0.0–0.6 |
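These values are passed directly in the `create()` call. A minimal sketch, assuming AISuite forwards OpenAI-style sampling parameters such as `presence_penalty` to the underlying provider:

```python
import os

import aisuite as ai

client = ai.Client({
    "openai": {
        "api_key": os.getenv("SUTRA_API_KEY"),
        "base_url": "https://api.two.ai/v2",
    }
})

response = client.chat.completions.create(
    model="openai:sutra-v2",
    messages=[{"role": "user", "content": "Give three arguments for and against remote work."}],
    temperature=0.7,       # balanced creativity vs. determinism
    max_tokens=1024,       # upper bound on response length
    presence_penalty=0.3,  # mild discouragement of repetition (assumed to be forwarded)
)
print(response.choices[0].message.content)
```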
🔗 API Access (cURL)
Test the SUTRA-V2 endpoint directly:
```bash
curl -X POST https://api.two.ai/v2/chat/completions \
  -H "Authorization: Bearer $SUTRA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "sutra-v2",
    "messages": [
      {"role": "user", "content": "Explain how blockchain improves supply chain transparency."}
    ]
  }'
```
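If the endpoint follows the OpenAI-compatible streaming convention (as the `stream=True` example above suggests), you can also request a streamed response directly. A hedged sketch, using `-N` to disable curl's output buffering:

```bash
curl -N -X POST https://api.two.ai/v2/chat/completions \
  -H "Authorization: Bearer $SUTRA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "sutra-v2",
    "stream": true,
    "messages": [
      {"role": "user", "content": "Summarize the benefits of multilingual AI models."}
    ]
  }'
```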
🛠 Troubleshooting
- Invalid API Key: Ensure `SUTRA_API_KEY` is set in your environment variables.
- Rate Limit Exceeded: Check your SUTRA-V2 plan limits or reduce request frequency (see the retry sketch below).
- Empty Response: Increase `max_tokens` or refine your prompt.
- AISuite Compatibility: Verify that your AISuite version's OpenAI provider supports a custom `base_url` and the SUTRA-V2 model.
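For the rate-limit case, a simple exponential backoff around the request is often enough. This is a generic retry sketch, not a SUTRA- or AISuite-specific API; it retries on any exception, so adapt the condition to the error details your AISuite version reports:

```python
import os
import time

import aisuite as ai

client = ai.Client({
    "openai": {
        "api_key": os.getenv("SUTRA_API_KEY"),
        "base_url": "https://api.two.ai/v2",
    }
})

def ask_with_retry(messages, retries=3, base_delay=2.0):
    """Call SUTRA-V2, backing off exponentially if the request fails."""
    for attempt in range(retries):
        try:
            response = client.chat.completions.create(
                model="openai:sutra-v2",
                messages=messages,
                max_tokens=512,
            )
            return response.choices[0].message.content
        except Exception as exc:
            if attempt == retries - 1:
                raise
            delay = base_delay * (2 ** attempt)
            print(f"Request failed ({exc}); retrying in {delay:.0f}s")
            time.sleep(delay)

print(ask_with_retry([{"role": "user", "content": "Explain rate limiting in one sentence."}]))
```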