ChatBot.model
Configure the language model to use for generating responses.
USAGE
ChatBot.model(model_name)
Sets the specific language model used when the chatbot generates responses. This method supports models from various providers, including OpenAI, Anthropic, and Google, through the chatlas integration. The model choice significantly affects response quality, speed, cost, and capabilities.
The chatbot automatically detects the appropriate provider from the model name and handles authentication via environment variables. Different models have different strengths: some excel at reasoning, others at creativity, and others at specific domains such as code generation.
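The detection step itself is not specified here, but conceptually it amounts to a prefix lookup on the model name. The sketch below is illustrative only; `detect_provider` and `PROVIDER_PREFIXES` are hypothetical names, not part of the talk_box API:

```python
# Hypothetical sketch of provider detection by model-name prefix.
# Not the actual talk_box implementation.
PROVIDER_PREFIXES = {
    "gpt-": "openai",
    "claude-": "anthropic",
    "gemini-": "google",
}

def detect_provider(model_name: str) -> str:
    """Guess the provider from a model name's leading prefix."""
    for prefix, provider in PROVIDER_PREFIXES.items():
        if model_name.startswith(prefix):
            return provider
    raise ValueError(f"Cannot infer provider for model: {model_name!r}")
```

A lookup like this is why the method accepts plain model names without a separate provider argument.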
Parameters
model_name : str
    The name of the language model to use. Exact model names vary by provider; check provider documentation for the most current model names and capabilities.
Returns
ChatBot
    Returns self to enable method chaining, allowing multiple parameters to be configured in a single fluent expression.
Raises
ValueError
    If the model name is empty or None. Model availability is not validated at configuration time; validation occurs when a chat session is created.
Model Types by Provider
Here are some examples of model types by provider:
OpenAI Models:
"gpt-4o" : latest multimodal model with excellent capabilities
"gpt-4-turbo" : GPT-4 with improved performance and lower cost
"gpt-4" : original GPT-4 model with excellent reasoning capabilities
"gpt-3.5-turbo" : fast, cost-effective model good for most tasks (default)
Anthropic Models:
"claude-3-5-sonnet-20241022" : Claude model with excellent reasoning
"claude-3-haiku-20240307" : fast, efficient model for simple tasks
"claude-3-opus-20240229" : very capable Claude model for complex tasks
Google Models:
"gemini-pro" : the flagship model from Google
Examples
Using different models for different purposes
Configure chatbots with models optimized for specific tasks:
import talk_box as tb
# High-performance model for complex reasoning
reasoning_bot = tb.ChatBot().model("gpt-4-turbo")

# Default balanced model (recommended starting point)
balanced_bot = tb.ChatBot().model("gpt-3.5-turbo")

# Fast, cost-effective model for simple tasks
quick_bot = tb.ChatBot().model("gpt-3.5-turbo")

# Creative model for storytelling
creative_bot = tb.ChatBot().model("claude-3-opus-20240229")

# Multimodal model for image analysis
vision_bot = tb.ChatBot().model("gpt-4o")
Model selection with method chaining
Combine model selection with other configuration options:
import talk_box as tb
# Technical advisor with high-performance model
tech_bot = (
    tb.ChatBot()
    .model("gpt-4-turbo")
    .preset("technical_advisor")
    .temperature(0.2)  # Low creativity for factual responses
    .max_tokens(2000)
)

# Creative writer with Claude
writer_bot = (
    tb.ChatBot()
    .model("claude-3-opus-20240229")
    .preset("creative_writer")
    .temperature(0.8)  # High creativity
    .persona("Imaginative storyteller with rich vocabulary")
)
Dynamic model switching
Change models based on task requirements:
import talk_box as tb
bot = tb.ChatBot().preset("technical_advisor")

# Use fast model for quick questions
bot.model("gpt-3.5-turbo")
quick_response = bot.chat("What is Python?")

# Switch to powerful model for complex analysis
bot.model("gpt-4-turbo")
detailed_response = bot.chat(
    "Explain the architectural trade-offs between microservices and monoliths"
)
Model capabilities and selection guide
Choose models based on your specific requirements:
import talk_box as tb
# For code generation and technical tasks
code_bot = tb.ChatBot().model("gpt-4-turbo").preset("technical_advisor")

# For creative writing and storytelling
creative_bot = tb.ChatBot().model("claude-3-opus-20240229").preset("creative_writer")

# For cost-effective general tasks
general_bot = tb.ChatBot().model("gpt-3.5-turbo").preset("customer_support")

# For multimodal tasks (text + images)
vision_bot = tb.ChatBot().model("gpt-4o")
Notes
Provider Authentication: ensure the appropriate API keys are set in environment variables (e.g., OPENAI_API_KEY, ANTHROPIC_API_KEY) for the chosen model provider.
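Because a missing key otherwise surfaces only when the first request is made, a fail-fast check at startup can be useful. The `require_api_key` helper below is a hypothetical sketch, not part of the talk_box API:

```python
import os

def require_api_key(var_name: str) -> str:
    # Hypothetical fail-fast helper; not part of the talk_box API.
    # Raises immediately if the expected environment variable is unset or empty.
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(f"Set {var_name} before using this provider")
    return key
```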
Model Availability: model availability may change over time. Check provider documentation for current model names and deprecation schedules.
Cost Considerations: different models have different pricing structures. Consider cost implications for production deployments.
Rate Limits: each model/provider has different rate limits. Plan accordingly for high-volume applications.
Context Windows: models have different maximum context window sizes, affecting how much conversation history can be included in requests.
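One practical consequence of bounded context windows is that older conversation turns must eventually be dropped from requests. A minimal sketch of such trimming, assuming a crude 4-characters-per-token estimate (`trim_history` is illustrative, not part of talk_box):

```python
def trim_history(messages, max_tokens):
    # Illustrative: drop oldest messages until the estimated token count
    # fits within the model's context window (4-chars-per-token heuristic).
    def estimate_tokens(text):
        return max(1, len(text) // 4)

    trimmed = list(messages)
    while trimmed and sum(estimate_tokens(m) for m in trimmed) > max_tokens:
        trimmed.pop(0)  # drop the oldest message first
    return trimmed
```

Real libraries use the provider's tokenizer for the estimate, but the dropping strategy is the same idea.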
See Also
preset : Apply behavior presets that work well with specific models
temperature : Control response randomness and creativity
max_tokens : Set response length limits appropriate for the chosen model