Attachments

File attachment handler for Talk Box conversations.

USAGE

Attachments(*file_paths)

The Attachments class enables you to include files in your AI conversations for analysis, review, and discussion. It automatically handles different file types (text, images, PDFs) and integrates seamlessly with ChatBot for programmatic conversations.

Primary Use Cases:

  • Code Review: attach source files for automated code analysis
  • Document Analysis: process PDFs, reports, and documentation
  • Data Analysis: include CSV, JSON, or other data files for insights
  • Content Generation: attach references for context-aware content creation
  • Image Analysis: process diagrams, charts, or photos with vision models
  • Research Assistance: attach papers, articles, or research materials

Key Features:

  • Multi-file support with automatic content type detection
  • Rich metadata collection for debugging and analytics
  • Seamless ChatBot integration for programmatic workflows
  • Chainable API following Talk Box design patterns
  • Error handling with graceful fallbacks

Parameters

*file_paths : Union[str, Path] = ()

Variable number of file paths to attach to the conversation.
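
Both plain strings and pathlib.Path objects are accepted, and because the parameter is variadic, an existing list of paths can be unpacked into the call. A minimal sketch (the file names below are illustrative):

from pathlib import Path

import talk_box as tb

# Mix string and Path arguments; both satisfy Union[str, Path]
files = tb.Attachments("notes.md", Path("reports") / "q3_summary.pdf")

# Unpack a pre-built list of paths into the variadic parameter
paths = [Path("data") / "sales.csv", "README.md"]
files = tb.Attachments(*paths).with_prompt("Summarize these project files.")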

Examples


Single File Analysis

import talk_box as tb

# Analyze a single document
files = tb.Attachments("quarterly_report.pdf").with_prompt(
    "Summarize the key financial metrics and trends in this report."
)

bot = tb.ChatBot().provider_model("openai:gpt-4-turbo")
analysis = bot.chat(files)

Code Review Workflow

# Review multiple source files
code_files = tb.Attachments(
    "src/main.py",
    "src/utils.py",
    "tests/test_main.py"
).with_prompt(
    "Review this Python code for bugs, performance issues, and best practices. "
    "Focus on the main logic and test coverage."
)

reviewer = (
    tb.ChatBot()
    .provider_model("openai:gpt-4-turbo")
    .preset("technical_advisor")
    .temperature(0.3)
)

review = reviewer.chat(code_files)

Data Analysis Pipeline

# Analyze data files with context
data_analysis = tb.Attachments(
    "sales_data.csv",
    "customer_segments.json",
    "analysis_notes.md"
).with_prompt(
    "Analyze the sales trends, identify top customer segments, "
    "and suggest actionable insights based on the data and notes provided."
)

analyst = (
    tb.ChatBot()
    .provider_model("openai:gpt-4-turbo")
    .temperature(0.4)
    .max_tokens(2000)
)

insights = analyst.chat(data_analysis)

Image and Document Combination

# Combine visual and textual content
presentation_review = (
    tb.Attachments(
        "slide_deck.pdf",
        "speaker_notes.md",
        "chart_image.png"
    ).with_prompt(
        "Review this presentation for clarity, visual impact, and alignment "
        "between slides and speaker notes. Suggest improvements."
    )
)

presentation_bot = tb.ChatBot().provider_model("openai:gpt-4-turbo")
feedback = presentation_bot.chat(presentation_review)

Batch Processing Multiple Files

# Process multiple documents for comparison
bot = tb.ChatBot().provider_model("openai:gpt-4-turbo")

for file_path in ["doc1.pdf", "doc2.pdf", "doc3.pdf"]:
    analysis = tb.Attachments(file_path).with_prompt(
        "Extract the main thesis and key arguments from this document."
    )
    result = bot.chat(analysis)
    print(f"Analysis of {file_path}:")
    print(result)

Jupyter Notebook Integration

# HTML display in Jupyter notebooks
files = tb.Attachments("code.py", "data.csv", "report.pdf").with_prompt(
    "Analyze these project files for insights and recommendations."
)

# Just displaying the object shows an HTML summary
files  # Displays file count, sizes, types, and prompt in formatted HTML
# Then process with ChatBot
bot = tb.ChatBot().provider_model("openai:gpt-4-turbo")
result = bot.chat(files)

The result here can also be displayed with HTML formatting:

result

Notes

  • File attachments are designed for single-turn programmatic conversations
  • For interactive multi-turn conversations, use bot.show("browser") instead (see the sketch after these notes)
  • Large files are automatically chunked and processed efficiently
  • Unsupported file types are handled gracefully with informative errors
  • All file processing includes timing and error metadata for debugging
  • HTML representation: a rich summary is rendered in Jupyter notebooks when the object is displayed (for example, as the last expression in a cell)
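
The single-turn versus interactive distinction can be illustrated with a minimal sketch (the file name and model string are illustrative; bot.show("browser") is the interactive entry point noted above):

import talk_box as tb

bot = tb.ChatBot().provider_model("openai:gpt-4-turbo")

# Single-turn, programmatic: attach files, send one prompt, receive one response
files = tb.Attachments("report.pdf").with_prompt("Summarize this report.")
result = bot.chat(files)

# Interactive, multi-turn: open the browser-based chat interface instead
bot.show("browser")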