ChatBot Basics
The ChatBot class is your main entry point for creating conversational AI applications. It provides a simple, chainable API for configuring behavior and generates Conversation objects that manage message history automatically.
Your First ChatBot
Start with the simplest possible chatbot:
import talk_box as tb

# Create a basic chatbot
bot = tb.ChatBot()

# Start chatting
conversation = bot.chat("Hello! How are you today?")
print(conversation.get_last_message().content)
The Chainable Configuration API
Configure your chatbot by chaining methods together:
import talk_box as tb

# Build a specialized technical advisor
tech_bot = (tb.ChatBot()
    .model("gpt-4-turbo")           # Choose the AI model
    .preset("technical_advisor")    # Apply behavior template
    .temperature(0.3)               # Lower randomness for accuracy
    .max_tokens(1500)               # Allow detailed responses
    .persona("Senior Python developer with 10+ years experience"))

# Now use it
response = tech_bot.chat("How do I optimize database queries in Django?")
Core Configuration Methods
Model Selection
Choose which AI model powers your chatbot:
# OpenAI models
bot = tb.ChatBot().model("gpt-4-turbo")      # Best reasoning
bot = tb.ChatBot().model("gpt-3.5-turbo")    # Fast and cost-effective

# Anthropic models
bot = tb.ChatBot().model("claude-3-opus-20240229")    # Excellent for creative tasks
bot = tb.ChatBot().model("claude-3-haiku-20240307")   # Fast for simple tasks

# Or specify provider explicitly
bot = tb.ChatBot().provider_model("anthropic:claude-3-opus-20240229")
Behavior Presets
Apply professional behavior templates instantly:
# Customer support chatbot
support_bot = tb.ChatBot().preset("customer_support")

# Technical advisor for developers
tech_bot = tb.ChatBot().preset("technical_advisor")

# Creative writing assistant
writer_bot = tb.ChatBot().preset("creative_writer")

# Data analysis expert
analyst_bot = tb.ChatBot().preset("data_analyst")
Fine-Tuning Behavior
Control how your chatbot responds:
bot = (tb.ChatBot()
    .temperature(0.2)    # Low randomness (0.0-2.0)
    .max_tokens(500)     # Limit response length
    .persona("Friendly but professional customer service rep")
    .avoid(["technical jargon", "complex explanations"]))
Understanding Conversations
Every chat interaction returns a Conversation object that manages message history:
bot = tb.ChatBot().preset("technical_advisor")

# First interaction creates a new conversation
conv = bot.chat("What's the difference between lists and tuples in Python?")

# Continue the same conversation (maintains context)
conv = bot.chat("Which one should I use for storing coordinates?", conversation=conv)

# Access the conversation history
for message in conv.get_messages():
    print(f"{message.role}: {message.content[:50]}...")
Conversation Management
# Start an empty conversation
conversation = bot.start_conversation()

# Add messages and get responses
conversation = bot.continue_conversation(conversation, "Hello!")
conversation = bot.continue_conversation(conversation, "Tell me about Python")

# Check conversation details
print(f"Message count: {conversation.get_message_count()}")
print(f"Last message: {conversation.get_last_message().content}")

# Filter messages by role
user_messages = conversation.get_messages(role="user")
assistant_messages = conversation.get_messages(role="assistant")
Common Patterns
Quick Task-Specific Bots
import talk_box as tb

# Code reviewer
code_bot = (tb.ChatBot()
    .model("gpt-4-turbo")
    .preset("technical_advisor")
    .temperature(0.2)
    .persona("Senior engineer focused on code quality and security"))

# Creative writer
story_bot = (tb.ChatBot()
    .model("claude-3-opus-20240229")
    .preset("creative_writer")
    .temperature(0.8)
    .persona("Imaginative storyteller with vivid descriptions"))

# Data analyst
data_bot = (tb.ChatBot()
    .model("gpt-4-turbo")
    .preset("data_analyst")
    .temperature(0.3)
    .persona("Statistical expert focused on actionable insights"))
Multi-Turn Conversations
# Create a persistent technical consultant
consultant = tb.ChatBot().preset("technical_advisor").temperature(0.3)

# Start a consulting session
session = consultant.chat("I'm building a web API. What framework should I use?")

# Continue the conversation with context
session = consultant.chat("I'm using Python and need it to handle 1000 requests/second", conversation=session)
session = consultant.chat("What about database choice?", conversation=session)
session = consultant.chat("How do I handle authentication?", conversation=session)

# The consultant remembers the entire conversation context
print(f"Full conversation has {session.get_message_count()} messages")
Dynamic Configuration
class SmartChatBot:
    def __init__(self):
        self.bot = tb.ChatBot().model("gpt-4-turbo")

    def answer(self, question: str, domain: str = "general"):
        # Configure based on the domain
        if domain == "technical":
            self.bot.preset("technical_advisor").temperature(0.2)
        elif domain == "creative":
            self.bot.preset("creative_writer").temperature(0.8)
        elif domain == "support":
            self.bot.preset("customer_support").temperature(0.4)
        else:
            self.bot.temperature(0.7)
        return self.bot.chat(question)

# Usage
smart_bot = SmartChatBot()

tech_response = smart_bot.answer("How do I optimize SQL queries?", "technical")
creative_response = smart_bot.answer("Write a short story about AI", "creative")
support_response = smart_bot.answer("I can't log into my account", "support")
Advanced Configuration
Custom System Prompts
For complete control over behavior:
# Method 1: Direct string
custom_bot = tb.ChatBot().system_prompt("""
You are a senior software architect specializing in microservices.
Always consider scalability, security, and maintainability.
Provide concrete, actionable recommendations with code examples.
""")

# Method 2: Using PromptBuilder (recommended for complex prompts)
prompt = (tb.ChatBot().prompt_builder()
    .persona("software architect", "microservices and distributed systems")
    .core_analysis(["scalability", "security", "maintainability"])
    .output_format(["concrete recommendations", "code examples"]))

structured_bot = tb.ChatBot().system_prompt(prompt)
Tool Integration
Enable specific capabilities:
enhanced_bot = (tb.ChatBot()
    .model("gpt-4-turbo")
    .tools(["code_executor", "web_search", "calculator"])
    .verbose(True))    # Enable detailed logging
Error Handling and Debugging
import talk_box as tb

try:
    bot = tb.ChatBot().model("gpt-4-turbo")
    response = bot.chat("Hello!")
    print(response.get_last_message().content)
except Exception as e:
    print(f"Error: {e}")

    # Check LLM status
    status = bot.check_llm_status()
    if not status["enabled"]:
        print("Help:", status["help"])
Configuration Inspection
bot = (tb.ChatBot()
    .model("gpt-4-turbo")
    .preset("technical_advisor")
    .temperature(0.3))

# See current configuration
config = bot.get_config_summary()
print(f"Model: {config['model']}")
print(f"Temperature: {config['temperature']}")
print(f"Preset: {config['preset']}")
print(f"System prompt: {config['system_prompt'][:100]}...")
Best Practices
✅ Do’s
- Chain configurations for readable setup
- Use presets as starting points, then customize
- Keep conversations for multi-turn context
- Match temperature to task (low for accuracy, high for creativity)
- Test with different models for optimal performance
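The last two points are easy to check side by side. Here is a minimal sketch that runs the same question through two candidate models with the same preset and a low, accuracy-oriented temperature; the model names are examples only, and quality and cost will vary by provider.

import talk_box as tb

question = "Explain Python's GIL in two sentences."

# Same preset and temperature, two candidate models, so the only variable is the model.
for model_name in ["gpt-4-turbo", "gpt-3.5-turbo"]:
    bot = (tb.ChatBot()
        .model(model_name)
        .preset("technical_advisor")
        .temperature(0.3))
    conversation = bot.chat(question)
    print(f"--- {model_name} ---")
    print(conversation.get_last_message().content)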
❌ Don’ts
- Don’t mix conversation objects between different bots
- Don’t ignore conversation context for multi-turn interactions
- Don’t use extreme temperatures without testing
- Don’t hardcode API keys in your code
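For the last point, keep credentials in the environment rather than in source. The snippet below is only a sketch: it assumes the underlying provider SDK reads a standard variable such as OPENAI_API_KEY, so adjust the variable name to whatever your provider and deployment actually use.

import os
import talk_box as tb

# Fail fast if the key is missing rather than hardcoding it in the script.
# (Assumption: the OpenAI backend picks up OPENAI_API_KEY from the environment.)
if not os.environ.get("OPENAI_API_KEY"):
    raise RuntimeError("Set the OPENAI_API_KEY environment variable first")

bot = tb.ChatBot().model("gpt-4-turbo").preset("customer_support")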
Troubleshooting
“No response from chatbot”
- Check that your API key is set correctly
- Verify your internet connection
- Try a simpler model like gpt-3.5-turbo

“Inconsistent responses”
- Lower the temperature (try 0.3 instead of 0.7)
- Use more specific prompts or presets
- Consider using PromptBuilder for structured prompts

“Responses too short/long”
- Adjust the max_tokens parameter
- Modify your persona or system prompt
- Use output format specifications in PromptBuilder
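If several of these symptoms show up at once, it can help to work through them in code. The sketch below combines the checks above; it assumes check_llm_status() returns the "enabled"/"help" keys shown in the error-handling example, and the fallback model, temperature, and token limit are examples, not recommendations.

import talk_box as tb

bot = tb.ChatBot().model("gpt-4-turbo")

# 1. Rule out connectivity and API-key problems before touching the prompt.
status = bot.check_llm_status()
if not status["enabled"]:
    print("Help:", status["help"])
else:
    # 2. For inconsistent or overly long answers, tighten the usual knobs:
    #    a simpler model, a lower temperature, and an explicit token cap.
    fallback = (tb.ChatBot()
        .model("gpt-3.5-turbo")
        .temperature(0.3)
        .max_tokens(800))
    conversation = fallback.chat("Summarize the trade-offs between lists and tuples.")
    print(conversation.get_last_message().content)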
Quick Reference
Basic Setup: tb.ChatBot().model("gpt-4-turbo").preset("technical_advisor")
Chat: conversation = bot.chat("Your message")
Continue: conversation = bot.chat("Next message", conversation=conversation)
Access: conversation.get_last_message().content