# clinic-research-design

Generates a structured prompt framework for clinical study protocols. Supports Diagnostic, Efficacy, Etiology, and Prognosis studies. Calculates sample sizes and provides logic guides for LLMs.
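To illustrate the kind of sample-size calculation the package advertises, here is a minimal sketch of a two-proportion comparison (normal approximation) such as an efficacy trial might need. This is a hypothetical illustration; the packaged `scripts/calculators/sample_size.py` may use a different method or interface.

```python
import math
from statistics import NormalDist

def sample_size_two_proportions(p1: float, p2: float,
                                alpha: float = 0.05,
                                power: float = 0.80) -> int:
    """Per-group sample size for a two-sided two-proportion test
    (normal approximation). Hypothetical sketch, not the packaged code."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)   # two-sided critical value
    z_b = z.inv_cdf(power)           # quantile for the desired power
    p_bar = (p1 + p2) / 2            # pooled proportion under H0
    numerator = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)
```

For example, detecting a 50% vs. 60% response rate at α = 0.05 with 80% power requires roughly 388 participants per arm under this approximation; raising the power raises the required n.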
## Veto Gates

Required pass for any deployment consideration.
| Dimension | Result | Detail |
|---|---|---|
| Scientific Integrity | PASS | The archived workflow stays at the level of study planning, hypothesis framing, and experiment design; it does not claim completed results. |
| Practice Boundaries | PASS | The package remains on the planning side of the boundary and does not cross into clinical or diagnostic advice. |
| Methodological Ground | PASS | Methodological grounding is preserved through documented inputs, transformations, and expected artifacts. |
| Code Usability | N/A | The package is packaging-first and output-first rather than code-first, so code usability is not applicable. |
## Core Capability: 83 / 100 — 8 Categories

### Medical Task — Execution Average: 96.8 / 100; Assertions: 20/20 Passed
The archived run for "Generates a structured prompt framework for clinical study..." confirmed the helper entrypoint and left the workflow in a stable state.
The archived run for "Packaged executable path(s): scripts/calculators/sample_size.py..." confirmed the helper entrypoint and left the workflow in a stable state.
The "Generates a structured prompt framework for clinical study protocols. Supports Diagnostic,..." path verified the packaged helper command without exposing a deeper execution issue.
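The run summaries above mention routing the four supported study types. As a hedged sketch of how such routing could work, here is a simple keyword-scoring classifier; the packaged `study_classifier.py` may use an entirely different approach, and the keyword lists below are assumptions chosen for illustration.

```python
# Hypothetical keyword-based study-type routing; not the packaged logic.
STUDY_KEYWORDS = {
    "Diagnostic": ("sensitivity", "specificity", "index test", "reference standard"),
    "Efficacy": ("randomized", "intervention", "placebo", "treatment arm"),
    "Etiology": ("risk factor", "exposure", "odds ratio", "case-control"),
    "Prognosis": ("follow-up", "survival", "cohort", "predictor"),
}

def classify_study(question: str) -> str:
    """Return the study type whose keywords best match the research question."""
    q = question.lower()
    scores = {stype: sum(kw in q for kw in kws)
              for stype, kws in STUDY_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "Unclassified"
```

A question like "A randomized placebo-controlled trial of drug X" would route to Efficacy under this scheme, while text matching no keyword falls back to "Unclassified" for human review.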
## Key Strengths
- Primary routing is Protocol Design with execution mode B
- Static quality score is 83/100 and dynamic average is 84.6/100
- Assertions and command execution outcomes are recorded per input for human review
- Execution verification summary: script verification 4/4 (adjustment = 5); main.py: OK; protocol_writer.py: OK; study_classifier.py: OK; validate_skill.py: OK
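The 4/4 script-verification result could be produced by a pass that byte-compiles each packaged script and records OK or FAIL per file. The sketch below is an assumption about how such a check might look; the actual `validate_skill.py` may perform deeper checks.

```python
import py_compile
from pathlib import Path

# Hypothetical verification pass; the packaged validate_skill.py may differ.
SCRIPTS = ["main.py", "protocol_writer.py",
           "study_classifier.py", "validate_skill.py"]

def verify_scripts(root: str = ".") -> dict:
    """Byte-compile each packaged script and report OK/FAIL per file."""
    results = {}
    for name in SCRIPTS:
        path = Path(root) / name
        try:
            py_compile.compile(str(path), doraise=True)
            results[name] = "OK"
        except (py_compile.PyCompileError, OSError) as exc:
            results[name] = f"FAIL: {exc}"
    return results
```

Byte-compilation only confirms that each script parses; it does not exercise runtime behavior, which is why the report also records per-input assertion outcomes.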