Agent Skills

Comparison Table Gen

AIPOCH

Auto-generates comparison tables for concepts, drugs, or study results.

Files

comparison-table-gen/
  skill.md
  scripts/
    main.py
  references/
    guidelines.md
    comparison.json

Total Score: 85 / 100

Evaluation Report

Core Capability: 88 / 100
  • Functional Suitability: 11 / 12
  • Reliability: 10 / 12
  • Performance & Context: 8 / 8
  • Agent Usability: 14 / 16
  • Human Usability: 8 / 8
  • Security: 10 / 12
  • Maintainability: 10 / 12
  • Agent-Specific: 17 / 20

Medical Task: 18 / 20 (Passed)

Score | Test Case | Result
90 | Auto-generates comparison tables for concepts, drugs, or study results | 4/4
86 | Use this skill for evidence insight tasks that require explicit assumptions, bounded scope, and a reproducible output format | 4/4
84 | Auto-generates comparison tables for concepts, drugs, or study results | 4/4
82 | Packaged executable path(s): scripts/main.py | 4/4
76 | End-to-end case for Scope-focused workflow aligned to: Auto-generates comparison tables for concepts, drugs, or study results | 2/4

SKILL.md

Comparison Table Gen

Generates comparison tables for medical content.

When to Use

  • Use this skill when the task needs auto-generated comparison tables for concepts, drugs, or study results.
  • Use this skill for evidence insight tasks that require explicit assumptions, bounded scope, and a reproducible output format.
  • Use this skill when you need a documented fallback path for missing inputs, execution errors, or partial evidence.

Key Features

See the Features section below for related details.

  • Scope-focused workflow aligned to the core purpose: auto-generating comparison tables for concepts, drugs, or study results.
  • Packaged executable path(s): scripts/main.py.
  • Reference material available in references/ for task-specific guidance.
  • Structured execution path designed to keep outputs consistent and reviewable.

Dependencies

See the Prerequisites section below for related details.

  • Python: 3.10+. Repository baseline for current packaged skills.
  • Third-party packages: not explicitly version-pinned in this skill package. Add pinned versions if this skill needs stricter environment control.

Example Usage

See the Usage section below for related details.

cd "20260318/scientific-skills/Evidence Insight/comparison-table-gen"
python -m py_compile scripts/main.py
python scripts/main.py --help

Example run plan:

  1. Confirm the user input, output path, and any required config values.
  2. Edit the in-file CONFIG block or documented parameters if the script uses fixed settings.
  3. Run python scripts/main.py with the validated inputs.
  4. Review the generated output and return the final artifact with any assumptions called out.

Implementation Details

See the Workflow section below for related details.

  • Execution model: validate the request, choose the packaged workflow, and produce a bounded deliverable.
  • Input controls: confirm the source files, scope limits, output format, and acceptance criteria before running any script.
  • Primary implementation surface: scripts/main.py.
  • Reference guidance: references/ contains supporting rules, prompts, or checklists.
  • Parameters to clarify first: input path, output path, scope filters, thresholds, and any domain-specific constraints.
  • Output discipline: keep results reproducible, identify assumptions explicitly, and avoid undocumented side effects.

Quick Check

Use this command to verify that the packaged script entry point can be parsed before deeper execution.

python -m py_compile scripts/main.py

Audit-Ready Commands

Use these concrete commands for validation. They are intentionally self-contained and avoid placeholder paths.

python -m py_compile scripts/main.py
python scripts/main.py --help

Workflow

  1. Confirm the user objective, required inputs, and non-negotiable constraints before doing detailed work.
  2. Validate that the request matches the documented scope and stop early if the task would require unsupported assumptions.
  3. Use the packaged script path or the documented reasoning path with only the inputs that are actually available.
  4. Return a structured result that separates assumptions, deliverables, risks, and unresolved items.
  5. If execution fails or inputs are incomplete, switch to the fallback path and state exactly what blocked full completion.

Features

  • Side-by-side comparisons
  • Markdown table output
  • Drug comparison templates
  • Study result comparisons

Parameters

Parameter | Type | Default | Required | Description
--items, -i | string | - | Yes | Items to compare (comma-separated)
--attributes, -a | string | - | Yes | Comparison attributes (comma-separated)
--output, -o | string | - | No | Output JSON file path

Usage


# Compare two drugs
python scripts/main.py --items "Drug A,Drug B" --attributes "Mechanism,Dose,Side Effects"

# Save to file
python scripts/main.py --items "Surgery,Chemo,Radiation" --attributes "Cost,Efficacy" --output comparison.json

Input Format

  • items: Comma-separated list of items to compare (e.g., "Drug A,Drug B")
  • attributes: Comma-separated list of comparison attributes (e.g., "Mechanism,Dose")
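
For illustration, a minimal sketch of how this comma-separated format could be parsed with argparse; the actual flag handling inside scripts/main.py may differ.

# Hypothetical parsing of --items / --attributes; mirrors the documented
# comma-separated format, not necessarily the shipped implementation.
import argparse

parser = argparse.ArgumentParser(description="Generate a comparison table")
parser.add_argument("--items", "-i", required=True, help="Comma-separated items")
parser.add_argument("--attributes", "-a", required=True, help="Comma-separated attributes")
parser.add_argument("--output", "-o", help="Optional output JSON path")
args = parser.parse_args(["--items", "Drug A,Drug B", "--attributes", "Mechanism,Dose"])

items = [item.strip() for item in args.items.split(",") if item.strip()]
attributes = [attr.strip() for attr in args.attributes.split(",") if attr.strip()]
print(items, attributes)  # ['Drug A', 'Drug B'] ['Mechanism', 'Dose']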

Output Format

{
  "markdown_table": "string",
  "html_table": "string"
}
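
For illustration only, a hypothetical populated output for the two-drug example from the Usage section; the cell contents are placeholders, not real data.

{
  "markdown_table": "| Attribute | Drug A | Drug B |\n| --- | --- | --- |\n| Mechanism | ... | ... |\n| Dose | ... | ... |",
  "html_table": "<table>...</table>"
}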

Risk Assessment

Risk Indicator | Assessment | Level
Code Execution | Python script (scripts/main.py) executed locally | Medium
Network Access | No external API calls | Low
File System Access | Read input files, write output files | Medium
Instruction Tampering | Standard prompt guidelines | Low
Data Exposure | Output files saved to workspace | Low

Security Checklist

  • No hardcoded credentials or API keys
  • No unauthorized file system access (../)
  • Output does not expose sensitive information
  • Prompt injection protections in place
  • Input file paths validated (no ../ traversal)
  • Output directory restricted to workspace
  • Script execution in sandboxed environment
  • Error messages sanitized (no stack traces exposed)
  • Dependencies audited
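
The path-traversal and workspace-restriction items above imply a check along these lines; a minimal sketch, assuming the workspace root is the current working directory (the packaged script may enforce this differently).

# Hypothetical guard: resolve the requested output path and reject anything
# that escapes the workspace directory (e.g. "../secrets.json").
from pathlib import Path

def validate_output_path(path_str: str, workspace: Path = Path.cwd()) -> Path:
    workspace = workspace.resolve()
    resolved = (workspace / path_str).resolve()
    if not resolved.is_relative_to(workspace):
        raise ValueError(f"Output path escapes the workspace: {path_str}")
    return resolved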

Prerequisites

No additional Python packages required.

Evaluation Criteria

Success Metrics

  • Successfully executes main functionality
  • Output meets quality standards
  • Handles edge cases gracefully
  • Performance is acceptable

Test Cases

  1. Basic Functionality: Standard input → Expected output
  2. Edge Case: Invalid input → Graceful error handling
  3. Performance: Large dataset → Acceptable processing time
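
A minimal smoke-test sketch for the first two cases, assuming pytest and execution from the skill directory; the argument values and assertions are illustrative, not taken from a packaged test suite.

# Hypothetical smoke tests; exact output checks depend on the real script.
import json
import subprocess
import sys

def test_basic_functionality(tmp_path):
    # Case 1: standard input should produce a parseable JSON artifact.
    out = tmp_path / "comparison.json"
    result = subprocess.run(
        [sys.executable, "scripts/main.py",
         "--items", "Drug A,Drug B",
         "--attributes", "Mechanism,Dose",
         "--output", str(out)],
        capture_output=True, text=True)
    assert result.returncode == 0
    data = json.loads(out.read_text())
    assert "markdown_table" in data

def test_missing_required_input():
    # Case 2: a missing --attributes flag should fail with a clear error,
    # not crash silently or fabricate output.
    result = subprocess.run(
        [sys.executable, "scripts/main.py", "--items", "Drug A,Drug B"],
        capture_output=True, text=True)
    assert result.returncode != 0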

Lifecycle Status

  • Current Stage: Draft
  • Next Review Date: 2026-03-06
  • Known Issues: None
  • Planned Improvements:
    • Performance optimization
    • Additional feature support

Output Requirements

Every final response should make these items explicit when they are relevant:

  • Objective or requested deliverable
  • Inputs used and assumptions introduced
  • Workflow or decision path
  • Core result, recommendation, or artifact
  • Constraints, risks, caveats, or validation needs
  • Unresolved items and next-step checks

Error Handling

  • If required inputs are missing, state exactly which fields are missing and request only the minimum additional information.
  • If the task goes outside the documented scope, stop instead of guessing or silently widening the assignment.
  • If scripts/main.py fails, report the failure point, summarize what still can be completed safely, and provide a manual fallback.
  • Do not fabricate files, citations, data, search results, or execution outcomes.
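
A minimal sketch of this fallback behaviour for an agent invoking the script as a subprocess; the wording of the fallback messages is illustrative.

# Hypothetical wrapper implementing the documented fallback: report the
# failure point and suggest a manual path instead of fabricating output.
import subprocess
import sys

def run_comparison(items: str, attributes: str) -> str:
    cmd = [sys.executable, "scripts/main.py",
           "--items", items, "--attributes", attributes]
    try:
        result = subprocess.run(cmd, capture_output=True, text=True,
                                timeout=120, check=True)
        return result.stdout
    except subprocess.TimeoutExpired:
        return ("scripts/main.py timed out after 120 seconds. "
                "Fallback: build the comparison table manually from the provided items.")
    except subprocess.CalledProcessError as exc:
        # Report the exit code without exposing a raw stack trace.
        return (f"scripts/main.py exited with code {exc.returncode}. "
                "Fallback: build the comparison table manually from the provided items.")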

Input Validation

This skill accepts requests that match the documented purpose of comparison-table-gen and include enough context to complete the workflow safely.

Do not continue the workflow when the request is out of scope, missing a critical input, or would require unsupported assumptions. Instead respond:

comparison-table-gen only handles its documented workflow. Please provide the missing required inputs or switch to a more suitable skill.

Response Template

Use the following fixed structure for non-trivial requests:

  1. Objective
  2. Inputs Received
  3. Assumptions
  4. Workflow
  5. Deliverable
  6. Risks and Limits
  7. Next Checks

If the request is simple, you may compress the structure, but still keep assumptions and limits explicit when they affect correctness.