Meta Criteria Generator
Generates scientifically sound inclusion and exclusion criteria for Meta-Analysis based on a given title or keywords. Use when user wants to design eligibility criteria for a systematic review or meta-analysis.
When to Use
- Use this skill when the request matches its documented task boundary.
- Use it when the user can provide the required inputs and expects a structured deliverable.
- Prefer this skill for repeatable, checklist-driven execution rather than open-ended brainstorming.
Key Features
- Scope-focused workflow: generates scientifically sound inclusion and exclusion criteria for a Meta-Analysis from a given title or keywords, for users designing eligibility criteria for a systematic review or meta-analysis.
- Packaged executable path(s): scripts/extract_criteria.py plus one additional script.
- Structured execution path designed to keep outputs consistent and reviewable.
Dependencies
- Python: 3.10+ (repository baseline for current packaged skills).
- Third-party packages: not explicitly version-pinned in this skill package. Add pinned versions if this skill needs stricter environment control.
Example Usage
See the Usage section below for related details.
cd "20260316/scientific-skills/Data Analytics/meta-criteria-generator"
python -m py_compile scripts/extract_criteria.py
python scripts/extract_criteria.py --help
Example run plan:
- Confirm the user input, output path, and any required config values.
- Edit the in-file CONFIG block or documented parameters if the script uses fixed settings.
- Run python scripts/extract_criteria.py with the validated inputs.
- Review the generated output and return the final artifact with any assumptions called out.
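The run plan above can be sketched as a small driver, assuming the --inclusion/--exclusion flags documented later in this file; the validation steps and error messages here are illustrative, not part of the packaged script.

```python
# Sketch of the run plan: validate inputs, then invoke the packaged
# extraction script with the documented flags.
import subprocess
import sys
from pathlib import Path

SCRIPT = Path("scripts/extract_criteria.py")

def run_extraction(inclusion_text: str, exclusion_text: str) -> str:
    # Validate inputs first, per the run plan: stop early on missing pieces.
    if not inclusion_text or not exclusion_text:
        sys.exit("both inclusion and exclusion text are required")
    if not SCRIPT.exists():
        sys.exit(f"missing script: {SCRIPT}")
    # Run the documented extraction command with the validated inputs.
    result = subprocess.run(
        [sys.executable, str(SCRIPT),
         "--inclusion", inclusion_text,
         "--exclusion", exclusion_text],
        capture_output=True, text=True, check=True,
    )
    # Return stdout for review as the final artifact.
    return result.stdout
```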
Implementation Details
- Execution model: validate the request, choose the packaged workflow, and produce a bounded deliverable.
- Input controls: confirm the source files, scope limits, output format, and acceptance criteria before running any script.
- Primary implementation surface: scripts/extract_criteria.py, with additional helper scripts under scripts/.
- Parameters to clarify first: input path, output path, scope filters, thresholds, and any domain-specific constraints.
- Output discipline: keep results reproducible, identify assumptions explicitly, and avoid undocumented side effects.
Validation Shortcut
Run this minimal command first to verify the supported execution path:
python scripts/extract_criteria.py --help
Meta-Analysis Criteria Generator
This skill generates inclusion and exclusion criteria for Meta-Analysis based on the PICO framework (Population, Intervention, Comparator, Outcomes) and Study Design.
Usage
- Ask for Title/Keywords: If not provided, ask the user for the Meta-Analysis topic.
- Generate Inclusion Criteria: Use LLM to generate criteria based on the input.
- Generate Exclusion Criteria: Use LLM to generate exclusion criteria that do not contradict the inclusion criteria.
- Format Output: Use scripts/extract_criteria.py to extract the final criteria from the LLM outputs and present them clearly.
Workflow Details
Step 1: Generate Inclusion Criteria
Prompt the LLM to act as a Meta-Analysis expert. Input: the user-provided title/keywords. Requirements:
- Cover P (Population), I (Intervention), C (Comparator), O (Outcomes), S (Study Design).
- Output must be in English.
- Crucial: Enclose the final criteria list in {} for extraction.
- Format: {(1) Participants: ...; (2) Interventions: ...; ...}
Step 2: Generate Exclusion Criteria
Prompt the LLM to generate exclusion criteria. Input: the inclusion criteria from Step 1 and the user's title. Requirements:
- Must NOT contradict Inclusion Criteria.
- Must NOT repeat Inclusion Criteria.
- Output must be in English.
- Crucial: Enclose the final criteria list in {} for extraction.
Step 3: Extract and Format
Run the extraction script to clean up the outputs.
python scripts/extract_criteria.py --inclusion "<inclusion_text>" --exclusion "<exclusion_text>"
Quality Rules
- Language: All outputs must be in English.
- Format: The final output must be clearly separated into "Inclusion Criteria" and "Exclusion Criteria".
- Consistency: Exclusion criteria must be logically consistent with inclusion criteria.
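The repetition half of the consistency rule can be machine-checked with a crude textual comparison of items split on semicolons. This is a hypothetical heuristic, not the packaged script's logic; genuine contradictions still need LLM or human review.

```python
def split_items(criteria: str) -> list[str]:
    """Split a '(1) ...; (2) ...' criteria string into individual items."""
    return [part.strip() for part in criteria.split(";") if part.strip()]

def find_repeats(inclusion: str, exclusion: str) -> list[str]:
    """Flag exclusion items that textually duplicate an inclusion item."""
    inc = {item.lower() for item in split_items(inclusion)}
    return [item for item in split_items(exclusion) if item.lower() in inc]

repeats = find_repeats(
    "(1) Participants: adults; (2) Interventions: drug X",
    "(1) Animal studies; (2) Interventions: drug X",
)
print(repeats)
# → ['(2) Interventions: drug X']
```

An empty result does not prove consistency, only the absence of verbatim duplicates.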
When Not to Use
- Do not use this skill when the required source data, identifiers, files, or credentials are missing.
- Do not use this skill when the user asks for fabricated results, unsupported claims, or out-of-scope conclusions.
- Do not use this skill when a simpler direct answer is more appropriate than the documented workflow.
Required Inputs
- A clearly specified task goal aligned with the documented scope.
- All required files, identifiers, parameters, or environment variables before execution.
- Any domain constraints, formatting requirements, and expected output destination if applicable.
Recommended Workflow
- Validate the request against the skill boundary and confirm all required inputs are present.
- Select the documented execution path and prefer the simplest supported command or procedure.
- Produce the expected output using the documented file format, schema, or narrative structure.
- Run a final validation pass for completeness, consistency, and safety before returning the result.
Output Contract
- Return a structured deliverable that is directly usable without reformatting.
- If a file is produced, prefer a deterministic output name such as
meta_criteria_generator_result.mdunless the skill documentation defines a better convention. - Include a short validation summary describing what was checked, what assumptions were made, and any remaining limitations.
Validation and Safety Rules
- Validate required inputs before execution and stop early when mandatory fields or files are missing.
- Do not fabricate measurements, references, findings, or conclusions that are not supported by the provided source material.
- Emit a clear warning when credentials, privacy constraints, safety boundaries, or unsupported requests affect the result.
- Keep the output safe, reproducible, and within the documented scope at all times.
Failure Handling
- If validation fails, explain the exact missing field, file, or parameter and show the minimum fix required.
- If an external dependency or script fails, surface the command path, likely cause, and the next recovery step.
- If partial output is returned, label it clearly and identify which checks could not be completed.
Quick Validation
Run this minimal verification path before full execution when possible:
python scripts/extract_criteria.py --help
Expected output format:
Result file: meta_criteria_generator_result.md
Validation summary: PASS/FAIL with brief notes
Assumptions: explicit list if any
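The expected output can be assembled with a small writer like the following sketch. The file name and summary fields follow the documented format; the section headings and wording are assumptions.

```python
from pathlib import Path

def write_result(inclusion: str, exclusion: str,
                 assumptions: list[str], passed: bool) -> Path:
    """Write the deliverable in the documented expected format."""
    lines = [
        "# Meta-Analysis Eligibility Criteria",
        "",
        "## Inclusion Criteria",
        inclusion,
        "",
        "## Exclusion Criteria",
        exclusion,
        "",
        "## Validation summary",
        f"{'PASS' if passed else 'FAIL'}: checked language, format, "
        "and inclusion/exclusion consistency.",
        "",
        "## Assumptions",
    ]
    # Emit an explicit "none" rather than omitting the section.
    lines += [f"- {a}" for a in assumptions] or ["- none"]
    out = Path("meta_criteria_generator_result.md")
    out.write_text("\n".join(lines) + "\n", encoding="utf-8")
    return out
```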
Deterministic Output Rules
- Use the same section order for every supported request of this skill.
- Keep output field names stable and do not rename documented keys across examples.
- If a value is unavailable, emit an explicit placeholder instead of omitting the field.
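The placeholder and stable-ordering rules above can be enforced mechanically; the field names below are hypothetical examples of documented keys.

```python
# Stable, documented field order; every output must carry all of these.
REQUIRED_FIELDS = ["inclusion_criteria", "exclusion_criteria",
                   "validation_summary", "assumptions"]
PLACEHOLDER = "NOT AVAILABLE"

def normalize(record: dict) -> dict:
    """Emit every documented field in stable order; never omit one."""
    return {field: record.get(field, PLACEHOLDER) for field in REQUIRED_FIELDS}

print(normalize({"inclusion_criteria": "(1) Adults"}))
```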
Completion Checklist
- Confirm all required inputs were present and valid.
- Confirm the supported execution path completed without unresolved errors.
- Confirm the final deliverable matches the documented format exactly.
- Confirm assumptions, limitations, and warnings are surfaced explicitly.