PromptBuilder

Builds structured prompts using attention mechanisms and cognitive principles.

Usage

Source

PromptBuilder()

The PromptBuilder leverages insights from modern prompt engineering research to create prompts that maximize model attention on critical information while maintaining natural conversation flow.

Returns

PromptBuilder
A new instance ready for fluent method chaining

Notes

The PromptBuilder applies proven principles that enhance model performance and response quality through strategic information placement and cognitive load management.

Attention Mechanisms Applied:

  • Positional encoding: critical information placed strategically
  • Multi-head attention: different types of context handled separately
  • Hierarchical structure: information organized by importance and relevance
  • Context windowing: optimal information density for model processing

Cognitive Psychology Integration:

  • Primacy effect: important instructions placed early
  • Recency effect: final emphasis reinforces key objectives
  • Chunking: information grouped into digestible, logical units
  • Salience: critical constraints highlighted for maximum attention
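As an illustration of how primacy and recency placement could work in practice, here is a minimal, self-contained sketch. This is not talk_box's actual implementation; the function name and formatting are hypothetical:

```python
def assemble_prompt(persona, critical_constraints, body_sections, final_emphasis=None):
    """Order prompt parts to exploit primacy and recency effects."""
    parts = [f"You are a {persona}."]                          # primacy: identity first
    parts += [f"CRITICAL: {c}" for c in critical_constraints]  # front-loaded constraints
    parts += list(body_sections)                               # chunked body content
    if final_emphasis:
        parts.append(f"Above all: {final_emphasis}")           # recency: closing reinforcement
    return "\n\n".join(parts)

print(assemble_prompt(
    "data scientist",
    ["use only the provided data"],
    ["Analyze churn drivers."],
    final_emphasis="rank the top 3 risk factors",
))
```

The key point is ordering: identity and non-negotiable constraints open the prompt, and the emphasis line closes it.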

Prompt Building Methods

The PromptBuilder provides a comprehensive set of methods for creating structured, attention-optimized prompts. All methods support fluent chaining for natural prompt construction.

The core foundation methods:

  • persona(role, expertise=None): set the AI’s identity and behavioral framework
  • task_context(context, priority=CRITICAL): define the primary objective and scope
  • critical_constraint(constraint): add front-loaded, non-negotiable requirements
  • constraint(constraint): add important but secondary requirements
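The fluent style these methods support relies on each method returning the builder itself. A toy, self-contained sketch of that pattern (not the library's real code; the real PromptBuilder is richer):

```python
class MiniBuilder:
    """Toy illustration of fluent chaining; not talk_box's implementation."""

    def __init__(self):
        self._parts = []

    def persona(self, role, expertise=None):
        suffix = f" specializing in {expertise}" if expertise else ""
        self._parts.append(f"You are a {role}{suffix}.")
        return self  # returning self is what enables chaining

    def task_context(self, context):
        self._parts.append(f"Task: {context}")
        return self

    def critical_constraint(self, constraint):
        # front-load non-negotiable requirements, right after the persona
        self._parts.insert(1 if self._parts else 0, f"CRITICAL: {constraint}")
        return self

    def build(self):
        return "\n".join(self._parts)

print(
    MiniBuilder()
    .persona("data scientist", "machine learning")
    .task_context("analyze churn")
    .build()
)
```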

The structure and analysis methods:

  • structured_section(title, content, priority=MEDIUM, required=False): create organized content sections
  • core_analysis(analysis_points): define required analytical focus areas
  • output_format(format_specs): specify response structure and formatting requirements
  • example(input_example, output_example): provide concrete input/output demonstrations
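One plausible way a priority argument can influence placement is by sorting sections before assembly. A self-contained sketch under that assumption (the Priority values and tuple layout are hypothetical, not talk_box's internals):

```python
from enum import IntEnum

class Priority(IntEnum):
    """Lower value = earlier placement in the assembled prompt."""
    CRITICAL = 0
    HIGH = 1
    MEDIUM = 2
    LOW = 3

def order_sections(sections):
    """Sort (title, content, priority) tuples so higher-priority sections lead."""
    return sorted(sections, key=lambda s: s[2])

sections = [
    ("Style Notes", "naming conventions", Priority.LOW),
    ("Performance Metrics", "latency budgets", Priority.HIGH),
    ("Security", "input validation", Priority.CRITICAL),
]
for title, _, prio in order_sections(sections):
    print(f"{prio.name}: {title}")
```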

The focus and guidance methods:

  • focus_on(primary_goal): emphasize the most important objective
  • avoid_topics(topics): explicitly exclude irrelevant or problematic areas
  • final_emphasis(emphasis): add closing reinforcement using recency bias
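A rough sketch of how these guidance directives might render as closing prompt text, with the emphasis line last to exploit recency bias (the function and wording here are illustrative assumptions, not talk_box output):

```python
def render_guidance(primary_goal, avoid=(), emphasis=None):
    """Render focus/avoid/emphasis directives as closing prompt text."""
    lines = [f"Primary goal: {primary_goal}"]
    if avoid:
        lines.append("Do not discuss: " + ", ".join(avoid))
    if emphasis:
        lines.append(f"Finally: {emphasis}")  # recency: last thing the model reads
    return "\n".join(lines)

print(render_guidance(
    "constructive feedback",
    avoid=["personal style preferences"],
    emphasis="keep suggestions actionable",
))
```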

Output methods:

  • build(): generate the final structured prompt string
  • preview_structure(): preview the prompt organization and metadata
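The split between build() and preview_structure() suggests the builder tracks metadata alongside content. A hypothetical sketch of that separation (not the library's code; the metadata fields shown are assumptions):

```python
class SketchBuilder:
    """Hypothetical sketch: content and metadata tracked side by side."""

    def __init__(self):
        self._sections = []  # (title, content, required)

    def structured_section(self, title, content, required=False):
        self._sections.append((title, content, required))
        return self

    def build(self):
        # full prompt text
        return "\n\n".join(f"## {t}\n{c}" for t, c, _ in self._sections)

    def preview_structure(self):
        # metadata only: titles and flags, no body text
        return [{"title": t, "required": r} for t, _, r in self._sections]

b = SketchBuilder().structured_section("Review Areas", "logic, security", required=True)
print(b.preview_structure())
```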

Each method is designed to work together in the attention-optimized prompt structure, with positioning and formatting automatically handled to maximize model performance.

Examples

Basic prompt construction

Create a simple prompt with persona and task:

import talk_box as tb

prompt = (
    tb.PromptBuilder()
    .persona("data scientist", "machine learning")
    .task_context("analyze customer churn patterns")
    .focus_on("identifying the top 3 risk factors")
    .build()
)

We can easily print the prompt that was generated for this task:

print(prompt)

Structured analysis prompt

A more comprehensive analysis prompt can combine multiple sections:

prompt = (
    tb.PromptBuilder()
    .persona("senior software architect")
    .critical_constraint("focus only on production-ready solutions")
    .task_context("review the codebase architecture")
    .core_analysis([
        "identify design patterns used",
        "assess scalability bottlenecks",
        "review security implications"
    ])
    .structured_section(
        "Performance Metrics", [
            "response time requirements",
            "throughput expectations",
            "memory usage constraints"
        ],
        priority=tb.Priority.HIGH
    )
    .output_format([
        "executive summary (2-3 sentences)",
        "detailed findings with code examples",
        "prioritized recommendations"
    ])
    .final_emphasis("provide actionable next steps")
    .build()
)

The generated prompt can be printed as follows:

print(prompt)

Code review prompt

Create a specialized prompt for code reviews:

prompt = (
    tb.PromptBuilder()
    .persona("senior developer", "code quality and best practices")
    .task_context("review the pull request for potential issues")
    .critical_constraint("flag any security vulnerabilities immediately")
    .structured_section(
        "Review Areas", [
            "logic and correctness",
            "security considerations",
            "performance implications",
            "code readability and documentation"
        ]
    )
    .output_format([
        "critical issues (must fix)",
        "suggestions (should consider)",
        "positive feedback"
    ])
    .avoid_topics(["personal coding style preferences"])
    .focus_on("providing constructive, actionable feedback")
    .build()
)

We can look at the generated prompt:

print(prompt)