crossref-database
Veto Gates: required pass for any deployment consideration
| Dimension | Result | Detail |
|---|---|---|
| Scientific Integrity | PASS | The legacy audit found no instance of retrieval outputs being presented as unsupported findings. |
| Practice Boundaries | PASS | Practice boundaries held: the package remained focused on source handling, lookup, and structured evidence use. |
| Methodological Ground | PASS | No methodological-grounding issue was recorded for crossref-database in the archived evaluation. |
| Code Usability | PASS | The packaged retrieval surface remained understandable at the command and parameter level in the archived review. |
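The gates above assess the packaged retrieval surface, whose internals the archived report does not show. As a minimal sketch of what a DOI lookup in that surface might look like, assuming the package wraps the public CrossRef REST API (`https://api.crossref.org/works/{doi}`), the helper names `doi_metadata_url` and `extract_core_fields` below are hypothetical, not taken from `query_crossref.py`:

```python
from urllib.parse import quote

# Public CrossRef REST endpoint (assumption: the package queries this API).
CROSSREF_API = "https://api.crossref.org/works"

def doi_metadata_url(doi: str) -> str:
    """Build the CrossRef REST URL for a single DOI.

    The DOI is percent-encoded because DOIs contain '/' characters.
    """
    return f"{CROSSREF_API}/{quote(doi, safe='')}"

def extract_core_fields(message: dict) -> dict:
    """Pull the canonical URL and bibliographic basics from a works record.

    `message` is the "message" object of a CrossRef works response;
    CrossRef returns titles as a list, so we take the first entry.
    """
    return {
        "title": (message.get("title") or [""])[0],
        "doi": message.get("DOI", ""),
        "url": message.get("URL", ""),
    }
```

An HTTP GET against `doi_metadata_url("10.1000/xyz123")` would return JSON whose `message` object feeds `extract_core_fields`; the sketch stops short of the network call itself.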
Core Capability: 83 / 100 — 8 categories
Medical Task: Execution Average 96 / 100 — Assertions: 20/20 passed
For the scenario "You have a DOI and need authoritative bibliographic metadata...", the preserved evidence is lightweight but positive: the packaged validation command behaved as expected.
The archived run for "You need to search CrossRef by keywords (title/author/topic) to..." confirmed the helper entrypoint and left the workflow in a stable state.
The "DOI Resolution: Resolve a DOI to CrossRef metadata and canonical URLs" path verified the packaged helper command without exposing any deeper execution issue.
The archived run for "Work Search: Query CrossRef by free-text keywords (e.g., title,..." likewise confirmed the helper entrypoint and left the workflow in a stable state.
The end-to-end case for "DOI Resolution: Resolve a DOI to CrossRef..." also verified the packaged helper command without exposing any deeper execution issue.
Key Strengths
- Primary routing is Evidence Insight with execution mode B
- Static quality score is 83/100 and dynamic average is 83.6/100
- Assertions and command execution outcomes are recorded per input for human review
- Execution verification summary: Script verification 2/2; adjustment=5. query_crossref.py: OK; validate_skill.py: OK
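The last bullet's "Script verification 2/2; ... OK" line suggests a per-script pass/fail roll-up. A minimal sketch of producing that summary string from recorded outcomes (the function `verification_summary` and its input shape are assumptions, not the report generator's actual code):

```python
def verification_summary(results: dict[str, bool]) -> str:
    """Render per-script outcomes in the report's 'Script verification N/M' form.

    `results` maps script filename to whether its verification run passed.
    """
    passed = sum(results.values())  # True counts as 1
    parts = "; ".join(f"{name}: {'OK' if ok else 'FAIL'}" for name, ok in results.items())
    return f"Script verification {passed}/{len(results)}; {parts}"
```

Feeding it `{"query_crossref.py": True, "validate_skill.py": True}` reproduces the "Script verification 2/2" shape quoted in the bullet (minus the separate `adjustment` figure, whose derivation the archive does not explain).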