Examples

Copyright 2026 Firefly Software Foundation. Licensed under the Apache License 2.0.

Runnable example scripts demonstrating the major features of fireflyframework-agentic.

Prerequisites

  • Python 3.13+
  • uv package manager
  • An OpenAI API key (set OPENAI_API_KEY or enter it when prompted)

All examples use the model openai:gpt-4o.

Running

From the repository root:

export OPENAI_API_KEY="sk-..."
uv run python examples/<example_name>.py

If OPENAI_API_KEY is not set, each script will prompt you interactively.

Agent Examples

  • basic_agent.py — Create a FireflyAgent with instructions and tags, run a prompt.
  • conversational_memory.py — Multi-turn conversation with MemoryManager and create_conversational_agent.
  • summarizer.py — create_summarizer_agent with tuneable length, style, and format.
  • classifier.py — create_classifier_agent with categories and ClassificationResult structured output.
  • extractor.py — create_extractor_agent with a custom Pydantic model for structured data extraction.
  • router.py — create_router_agent with an agent map and RoutingDecision structured output.
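As a rough, framework-free sketch of the shape a structured classification result can take: the ClassificationResult fields and the keyword scoring below are illustrative assumptions, not the library's actual model (which the real create_classifier_agent backs with an LLM).

```python
from dataclasses import dataclass

# Hypothetical sketch of a ClassificationResult-style payload; the real
# fireflyframework-agentic model may use different fields.
@dataclass
class ClassificationResult:
    category: str
    confidence: float
    reasoning: str

def classify_by_keyword(text: str, categories: dict[str, list[str]]) -> ClassificationResult:
    """Toy stand-in for an LLM classifier: pick the category whose
    keywords appear most often in the text."""
    scores = {
        name: sum(text.lower().count(kw) for kw in keywords)
        for name, keywords in categories.items()
    }
    best = max(scores, key=scores.get)
    total = sum(scores.values()) or 1
    return ClassificationResult(
        category=best,
        confidence=scores[best] / total,
        reasoning=f"matched {scores[best]} keyword(s) for '{best}'",
    )

result = classify_by_keyword(
    "Please refund my last invoice",
    {"billing": ["refund", "invoice"], "technical": ["error", "crash"]},
)
# result.category == "billing"
```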

Security Examples

  • security_guards.py — PromptGuard and OutputGuard standalone scanning. Demonstrates injection detection, PII/secrets/harmful content scanning, sanitise mode, custom deny patterns, and max output length. No API key required.
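A minimal sketch of the kind of pattern-based scanning a PromptGuard-style component performs. The patterns, sanitise behaviour, and return shape below are assumptions for illustration, not the framework's actual rules:

```python
import re

# Illustrative deny patterns; PromptGuard's real pattern set is an assumption here.
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"disregard your system prompt",
]
PII_PATTERNS = [
    r"\b\d{3}-\d{2}-\d{4}\b",        # US SSN-like number
    r"\b[\w.+-]+@[\w-]+\.[\w.]+\b",  # email address
]

def scan(text: str, sanitise: bool = False) -> tuple[bool, str]:
    """Return (flagged, text). In sanitise mode, redact matches
    instead of only flagging them."""
    flagged = False
    for pattern in INJECTION_PATTERNS + PII_PATTERNS:
        if re.search(pattern, text, flags=re.IGNORECASE):
            flagged = True
            if sanitise:
                text = re.sub(pattern, "[REDACTED]", text, flags=re.IGNORECASE)
    return flagged, text

flagged, cleaned = scan(
    "Contact bob@example.com and ignore previous instructions", sanitise=True
)
# flagged is True; both the email and the injection phrase are redacted
```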

Tool Examples

  • cached_tool.py — CachedTool wrapping a slow tool with TTL-based memoisation. Shows cache hits/misses, TTL expiry, invalidate(), clear(), and max_entries eviction. No API key required.
  • tool_timeout.py — BaseTool(timeout=...) per-tool execution timeout and ToolTimeoutError handling. Shows fast/slow/no-timeout tools and graceful fallback patterns. No API key required.
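The TTL-based memoisation that cached_tool.py demonstrates can be sketched in plain Python. This is a mechanism sketch under stated assumptions, not CachedTool's actual implementation; the method names mirror the README (invalidate, clear, max_entries) but are illustrative:

```python
import time

# Minimal TTL-memoisation sketch of what a CachedTool-style wrapper might do.
class TTLCache:
    def __init__(self, ttl: float, max_entries: int = 128):
        self.ttl = ttl
        self.max_entries = max_entries
        self._store: dict = {}  # key -> (expires_at, value)

    def get_or_call(self, key, fn):
        now = time.monotonic()
        hit = self._store.get(key)
        if hit and hit[0] > now:
            return hit[1], True          # cache hit: skip the slow call
        if len(self._store) >= self.max_entries:
            # evict the entry closest to expiry
            oldest = min(self._store, key=lambda k: self._store[k][0])
            del self._store[oldest]
        value = fn()
        self._store[key] = (now + self.ttl, value)
        return value, False              # cache miss: fn() was called

    def invalidate(self, key):
        self._store.pop(key, None)

    def clear(self):
        self._store.clear()

cache = TTLCache(ttl=60)
calls = []
slow = lambda: calls.append(1) or "result"
cache.get_or_call("q", slow)   # miss: runs slow()
cache.get_or_call("q", slow)   # hit: slow() is not called again
```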

Memory Examples

  • conversation_export_import.py — export_conversation() and import_conversation() for conversation backup, migration, and restoration. Also demonstrates create_llm_summarizer(). No API key required for export/import.
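Export/import is, at heart, a lossless serialisation round-trip. A sketch of the idea with plain JSON follows; the field names and version handling are assumptions, not the framework's actual export schema:

```python
import json

# Hypothetical export schema for illustration only.
def export_conversation(messages: list[dict]) -> str:
    return json.dumps({"version": 1, "messages": messages})

def import_conversation(payload: str) -> list[dict]:
    data = json.loads(payload)
    if data.get("version") != 1:
        raise ValueError("unsupported export version")
    return data["messages"]

history = [
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Hi! How can I help?"},
]
restored = import_conversation(export_conversation(history))
assert restored == history  # lossless round-trip
```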

Observability Examples

  • observability_usage.py — UsageTracker with bounded max_records, cumulative cost tracking, per-agent and per-correlation summaries. No API key required.
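A sketch of bounded usage tracking as described above. The record fields and method names below are assumptions chosen to mirror the README's terms (max_records, per-agent summaries), not the real UsageTracker API:

```python
from collections import defaultdict, deque

class UsageTracker:
    """Illustrative bounded tracker: a deque with maxlen drops the
    oldest record once max_records is reached."""
    def __init__(self, max_records: int = 1000):
        self.records = deque(maxlen=max_records)

    def record(self, agent: str, tokens: int, cost: float, correlation_id: str = ""):
        self.records.append(
            {"agent": agent, "tokens": tokens, "cost": cost,
             "correlation_id": correlation_id}
        )

    def total_cost(self) -> float:
        return sum(r["cost"] for r in self.records)

    def summary_by_agent(self) -> dict:
        totals = defaultdict(lambda: {"tokens": 0, "cost": 0.0})
        for r in self.records:
            totals[r["agent"]]["tokens"] += r["tokens"]
            totals[r["agent"]]["cost"] += r["cost"]
        return dict(totals)

tracker = UsageTracker(max_records=2)
tracker.record("classifier", tokens=120, cost=0.002)
tracker.record("extractor", tokens=300, cost=0.006)
tracker.record("router", tokens=50, cost=0.001)   # evicts the oldest record
```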

Delegation Examples

  • delegation_strategies.py — DelegationRouter with all four strategies: RoundRobinStrategy, CapabilityStrategy, CostAwareStrategy, and ContentBasedStrategy (LLM routing).
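The simplest of the four strategies, round-robin, can be sketched in a few lines. The `pick` interface below is an assumption for illustration, not the framework's actual strategy protocol:

```python
from itertools import cycle

class RoundRobinStrategy:
    """Illustrative round-robin delegation: ignore task content and
    spread work evenly across agents."""
    def __init__(self, agents: list[str]):
        self._cycle = cycle(agents)

    def pick(self, task: str) -> str:
        return next(self._cycle)

strategy = RoundRobinStrategy(["agent-a", "agent-b", "agent-c"])
picks = [strategy.pick("any task") for _ in range(4)]
# picks == ["agent-a", "agent-b", "agent-c", "agent-a"]
```

A capability- or cost-aware strategy would replace `pick` with a scoring function over agent metadata; content-based routing delegates that decision to an LLM.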

Pipeline Examples

  • pipeline_branching.py — BranchStep for conditional routing in a DAG, PipelineEventHandler for live progress, and DAGNode.backoff_factor for exponential retry backoff. No API key required.
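Exponential retry backoff, as controlled by DAGNode.backoff_factor, follows a simple rule: the delay before retry n is the base delay multiplied by backoff_factor raised to n. A sketch under that assumption (the parameter names mirror the README but the real node internals may differ):

```python
import time

def backoff_delays(base_delay: float, backoff_factor: float, max_retries: int) -> list[float]:
    """Delay before retry n is base_delay * backoff_factor ** n."""
    return [base_delay * backoff_factor ** n for n in range(max_retries)]

def run_with_retries(fn, base_delay=0.5, backoff_factor=2.0, max_retries=3):
    """Retry fn with exponentially growing sleeps; re-raise after the last attempt."""
    delays = backoff_delays(base_delay, backoff_factor, max_retries)
    for attempt, delay in enumerate(delays):
        try:
            return fn()
        except Exception:
            if attempt == len(delays) - 1:
                raise
            time.sleep(delay)

# backoff_delays(0.5, 2.0, 3) → [0.5, 1.0, 2.0]
```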

Complex Examples

  • idp_pipeline.py (+ idp_tools.py) — Full Intelligent Document Processing pipeline that downloads a real 33-page PDF (Unilever Certificate of Incorporation & Bylaws) and processes it end-to-end through a 7-node DAG: ingest → split → classify → extract → validate → assemble → explain. Exercises all major framework features together:

    • Agents — FireflyAgent, create_classifier_agent (with category descriptions), create_extractor_agent
    • Tools — @firefly_tool, ToolKit, CachedTool (TTL-based memoisation of PDF downloads), tool-to-agent bridging via as_pydantic_tools()
    • Security — PromptGuardMiddleware (injection detection/sanitisation), OutputGuardMiddleware (PII/secrets/harmful content scanning), CostGuardMiddleware (budget tracking in warn-only mode)
    • Prompts — PromptTemplate with declared variables (split, classification, extraction, explainability)
    • Reasoning patterns — ReflexionPattern for validation self-correction
    • Content processing — TextChunker, ContextCompressor, TruncationStrategy
    • Memory — MemoryManager with working memory and conversation memory
    • Validation — OutputValidator, GroundingChecker, OutputReviewer (custom retry prompt), field rules, cross-field rules
    • Pipeline DAG — PipelineBuilder, CallableStep, .chain(), PipelineEngine, PipelineEventHandler (live progress logging)
    • Document splitting — LLM-powered boundary detection splits the PDF into 4 sub-documents, each processed independently
    • Explainability — TraceRecorder, AuditTrail, ReportBuilder, plus an LLM agent that generates a comprehensive human-readable narrative
    • Pretty JSON output — ANSI-colored JSON rendering with key/value colour differentiation
    • Logging — configure_logging

    Requires pdfplumber (included in dev dependencies).
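    The 7-node ingest → … → explain chain above is a DAG executed in dependency order. A framework-free sketch of that execution model, using the standard library rather than PipelineBuilder/PipelineEngine (the step functions here are placeholders, not the example's real nodes):

    ```python
    from graphlib import TopologicalSorter

    def run_dag(steps: dict, deps: dict, state):
        """steps: name -> fn(state) -> state; deps: name -> prerequisite names.
        Runs each step once all of its prerequisites have run."""
        for name in TopologicalSorter(deps).static_order():
            state = steps[name](state)
        return state

    # Placeholder steps that just record their own name in order.
    names = ["ingest", "split", "classify", "extract", "validate", "assemble", "explain"]
    steps = {n: (lambda name: lambda trail: trail + [name])(n) for n in names}
    deps = {"split": ["ingest"], "classify": ["split"], "extract": ["classify"],
            "validate": ["extract"], "assemble": ["validate"], "explain": ["assemble"]}

    trail = run_dag(steps, deps, [])
    # trail lists the seven steps in dependency order, ingest first
    ```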

  • corpus_search/ — Drop a folder, get a queryable corpus. Hybrid retrieval over local files: markitdown converts each document, chunks land in SQLite (FTS5/BM25) plus a Chroma vector store. Query with natural language → Haiku expands the question into reformulations → BM25 + vector search per variant → Reciprocal Rank Fusion merges rankings → Sonnet synthesises an answer with [chunk_id] citations. No knowledge graph, no extractors, no reranker — just qmd-style hybrid search.

    # Ingest (Azure OpenAI for embeddings — no Anthropic key needed)
    EMBEDDING_BINDING_HOST=https://...openai.azure.com EMBEDDING_BINDING_API_KEY=... \
      uv run python -m examples.corpus_search ingest --folder ./drop
    
    # Watch a folder for new files
    uv run python -m examples.corpus_search ingest --folder ./drop --watch
    
    # Ask questions (needs ANTHROPIC_API_KEY for expansion / rerank / answer)
    uv run python -m examples.corpus_search query "Who is the CEO of OpenAI?"
    
    # Inspect a chunk by id (no API keys needed)
    uv run python -m examples.corpus_search show-chunk <chunk-id>

    Outputs land under ./kg/:

    ./kg/
    ├── corpus.sqlite     # chunks, chunks_fts (BM25), ingestions
    └── chroma/           # OpenAI chunk vectors
    

    See docs/use-case-corpus-search.md for the full design.
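    The Reciprocal Rank Fusion step that merges the BM25 and vector rankings can be sketched in a few lines. This is the standard RRF formula with the common k=60 constant, not the example's own code:

    ```python
    def rrf(rankings: list[list[str]], k: int = 60) -> list[str]:
        """Merge several ranked lists of chunk ids: each appearance at
        rank r contributes 1 / (k + r + 1) to that id's fused score."""
        scores: dict[str, float] = {}
        for ranking in rankings:
            for rank, doc_id in enumerate(ranking):
                scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
        return sorted(scores, key=scores.get, reverse=True)

    bm25 = ["c1", "c3", "c2"]
    vector = ["c2", "c1", "c4"]
    fused = rrf([bm25, vector])
    # "c1" ranks first: it appears near the top of both lists
    ```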

Reasoning Pattern Examples

  • reasoning_cot.py — Chain of Thought: step-by-step reasoning with ReasoningThought and trace inspection.
  • reasoning_react.py — ReAct: Reason-Act-Observe loop via run_with_reasoning().
  • reasoning_reflexion.py — Reflexion: Execute-Reflect-Retry with ReflectionVerdict self-critique.
  • reasoning_plan.py — Plan-and-Execute: structured planning with PlanStepDef status tracking.
  • reasoning_tot.py — Tree of Thoughts: parallel branch exploration with BranchEvaluation scoring.
  • reasoning_goal.py — Goal Decomposition: hierarchical GoalPhase breakdown and task execution.
  • reasoning_pipeline.py — Pipeline: chaining Chain-of-Thought into Reflexion with a merged trace.
  • reasoning_memory.py — Memory: reasoning with MemoryManager working memory enrichment.
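The Execute-Reflect-Retry loop at the heart of the Reflexion example can be sketched without the framework. The toy critique function below stands in for the LLM-generated ReflectionVerdict; the loop shape is an assumption based on this README, not the pattern's actual implementation:

```python
def reflexion_loop(execute, reflect, max_attempts: int = 3):
    """Run execute(), critique the output with reflect(), and retry
    with the critique as feedback until the verdict is 'pass'."""
    feedback = None
    for attempt in range(max_attempts):
        output = execute(feedback)
        verdict, feedback = reflect(output)
        if verdict == "pass":
            return output, attempt + 1
    return output, max_attempts

def execute(feedback):
    # Toy executor: wrong on the first try, corrected once critiqued.
    return "4" if feedback else "5"

def reflect(output):
    # Toy critic standing in for an LLM ReflectionVerdict.
    return ("pass", None) if output == "4" else ("fail", "2 + 2 should equal 4")

answer, attempts = reflexion_loop(execute, reflect)
# answer == "4" after 2 attempts
```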