ChatBot.model

Configure the language model to use for generating responses.

USAGE

ChatBot.model(model_name)

Sets the specific language model that will be used when the chatbot generates responses. This method supports models from various providers including OpenAI, Anthropic, Google, and others through the chatlas integration. The model choice significantly impacts response quality, speed, cost, and capabilities.

The chatbot automatically detects the appropriate provider based on the model name and handles authentication via environment variables. Different models have different strengths: some excel at reasoning, others at creativity, and others at specific domains such as code generation.
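Provider detection can be illustrated with a minimal sketch. The helper name and prefix table below are hypothetical, not talk_box internals; they only show the general idea of mapping a model name to its provider:

```python
def infer_provider(model_name: str) -> str:
    # Hypothetical prefix-based lookup illustrating how a provider
    # might be inferred from a model name.
    prefixes = {
        "gpt-": "openai",
        "claude-": "anthropic",
        "gemini-": "google",
    }
    for prefix, provider in prefixes.items():
        if model_name.startswith(prefix):
            return provider
    return "unknown"
```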

Parameters

model_name : str

The name of the language model to use. Exact model names may vary by provider. Check provider documentation for the most current model names and capabilities.

Returns

ChatBot

Returns self to enable method chaining, allowing you to configure multiple parameters in a single fluent expression.

Raises

ValueError

If the model name is empty or None. The method does not validate model availability at configuration time. Validation occurs when creating chat sessions.
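The configuration-time check can be sketched as follows; the helper name is hypothetical, and only the emptiness check happens at this stage, with availability verified later when a chat session is created:

```python
def validate_model_name(model_name):
    # Hypothetical sketch: reject an empty or None name at
    # configuration time. Availability against the provider is
    # not checked here; that happens at session creation.
    if not model_name:
        raise ValueError("model_name must be a non-empty string")
    return model_name
```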

Model Types by Provider

Here are some examples of model types by provider:

OpenAI Models:

  • "gpt-4o": Latest multimodal model with excellent capabilities
  • "gpt-4-turbo": GPT-4 with improved performance and lower cost
  • "gpt-4": Original GPT-4 model with excellent reasoning capabilities
  • "gpt-3.5-turbo": Fast, cost-effective model good for most tasks (default)

Anthropic Models:

  • "claude-3-5-sonnet-20241022": Claude model with excellent reasoning
  • "claude-3-haiku-20240307": Fast, efficient model for simple tasks
  • "claude-3-opus-20240229": Very capable Claude model for complex tasks

Google Models:

  • "gemini-pro": The flagship model from Google

Examples


Using different models for different purposes

Configure chatbots with models optimized for specific tasks:

import talk_box as tb

# High-performance model for complex reasoning
reasoning_bot = tb.ChatBot().model("gpt-4-turbo")

# Balanced, cost-effective default for most tasks
balanced_bot = tb.ChatBot().model("gpt-3.5-turbo")

# Creative model for storytelling
creative_bot = tb.ChatBot().model("claude-3-opus-20240229")

# Multimodal model for image analysis
vision_bot = tb.ChatBot().model("gpt-4o")

Model selection with method chaining

Combine model selection with other configuration options:

import talk_box as tb

# Technical advisor with high-performance model
tech_bot = (
    tb.ChatBot()
    .model("gpt-4-turbo")
    .preset("technical_advisor")
    .temperature(0.2)  # Low creativity for factual responses
    .max_tokens(2000)
)

# Creative writer with Claude
writer_bot = (
    tb.ChatBot()
    .model("claude-3-opus-20240229")
    .preset("creative_writer")
    .temperature(0.8)  # High creativity
    .persona("Imaginative storyteller with rich vocabulary")
)

Dynamic model switching

Change models based on task requirements:

import talk_box as tb

bot = tb.ChatBot().preset("technical_advisor")

# Use fast model for quick questions
bot.model("gpt-3.5-turbo")
quick_response = bot.chat("What is Python?")

# Switch to powerful model for complex analysis
bot.model("gpt-4-turbo")
detailed_response = bot.chat("Explain the architectural trade-offs between microservices and monoliths")

Model capabilities and selection guide

Choose models based on your specific requirements:

import talk_box as tb

# For code generation and technical tasks
code_bot = tb.ChatBot().model("gpt-4-turbo").preset("technical_advisor")

# For creative writing and storytelling
creative_bot = tb.ChatBot().model("claude-3-opus-20240229").preset("creative_writer")

# For cost-effective general tasks
general_bot = tb.ChatBot().model("gpt-3.5-turbo").preset("customer_support")

# For multimodal tasks (text + images)
vision_bot = tb.ChatBot().model("gpt-4o")

Notes

Provider Authentication: ensure appropriate API keys are set in environment variables (e.g., OPENAI_API_KEY, ANTHROPIC_API_KEY) for the chosen model provider.
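A small pre-flight check along these lines can surface a missing key before any request is made. The mapping and helper are illustrative only, not part of talk_box:

```python
import os

# Illustrative mapping from provider to the environment variable
# it conventionally reads; extend as needed for other providers.
API_KEY_VARS = {
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
}

def check_api_key(provider: str) -> str:
    # Raise early if the expected environment variable is unset.
    var = API_KEY_VARS[provider]
    if not os.environ.get(var):
        raise RuntimeError(f"Set {var} before using a {provider} model")
    return var
```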

Model Availability: model availability may change over time. Check provider documentation for current model names and deprecation schedules.

Cost Considerations: different models have different pricing structures. Consider cost implications for production deployments.

Rate Limits: each model/provider has different rate limits. Plan accordingly for high-volume applications.

Context Windows: models have different maximum context window sizes, affecting how much conversation history can be included in requests.
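When accumulated history can exceed a model's context window, older turns must be dropped. A simple recency-based trim, written here with a stand-in token counter rather than a real tokenizer, might look like:

```python
def trim_history(messages, max_tokens, count_tokens):
    # Keep the most recent messages whose combined token cost
    # fits within max_tokens; count_tokens is a stand-in for a
    # real tokenizer.
    kept, total = [], 0
    for msg in reversed(messages):
        cost = count_tokens(msg)
        if total + cost > max_tokens:
            break
        kept.append(msg)
        total += cost
    return list(reversed(kept))
```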

See Also

preset : Apply behavior presets that work well with specific models
temperature : Control response randomness and creativity
max_tokens : Set response length limits appropriate for the chosen model