Evidence Insight
biopython-entrez
86 / 100 Total Score
Core Capability
84 / 100
Functional Suitability
11 / 12
Reliability
9 / 12
Performance & Context
7 / 8
Agent Usability
14 / 16
Human Usability
8 / 8
Security
10 / 12
Maintainability
9 / 12
Agent-Specific
16 / 20
Medical Task
20 / 20 Passed
92 · You need to search PubMed for articles by keyword, author, journal, or date range and then retrieve metadata or abstracts
4/4
88 · You want to download GenBank records (e.g., nucleotide/protein sequences) in batch given accession IDs or search queries
4/4
86 · Supports core NCBI E-utilities via Bio.Entrez: esearch, efetch, esummary, elink
4/4
86 · Query-based searching and ID list retrieval for downstream batch operations
4/4
86 · End-to-end case for "Supports core NCBI E-utilities via Bio.Entrez: esearch, efetch, esummary, elink"
4/4
Veto Gates: Required pass for any deployment consideration
Skill Veto: ✓ All 4 gates passed
Operational Stability
System remains stable across varied inputs and edge cases
PASS ✓
Structural Consistency
Output structure conforms to expected skill contract format
PASS ✓
Result Determinism
Equivalent inputs produce semantically equivalent outputs
PASS ✓
System Security
No prompt injection, data leakage, or unsafe tool use detected
PASS ✓
Research Veto: ✅ PASS — Applicable
| Dimension | Result | Detail |
|---|---|---|
| Scientific Integrity | PASS | The archived evaluation kept the skill tied to retrieved records or indexed source material rather than invented scientific claims. |
| Practice Boundaries | PASS | Practice boundaries held because the package remained focused on source handling, lookup, or structured evidence use. |
| Methodological Ground | PASS | No methodological-grounding issue was recorded for biopython-entrez in the archived evaluation. |
| Code Usability | N/A | This package is packaging-first and output-first, not code-first, so code usability is treated as not applicable. |
Core Capability: 84 / 100 — 8 Categories
Functional Suitability
Related legacy finding for biopython-entrez: Improve stress-case output rigor. Stress and boundary scenarios show weaker consistency
11 / 12
92%
Reliability
Related legacy finding for biopython-entrez: Improve stress-case output rigor. Stress and boundary scenarios show weaker consistency
9 / 12
75%
Performance & Context
A modest deduction remained in performance context for biopython-entrez in the archived review.
7 / 8
88%
Agent Usability
A modest deduction remained in agent usability for biopython-entrez in the archived review.
14 / 16
88%
Human Usability
The legacy audit gave full marks to human usability for this package.
8 / 8
100%
Security
The archived evaluation left some headroom for biopython-entrez under security.
10 / 12
83%
Maintainability
The legacy audit deducted points for biopython-entrez in maintainability.
9 / 12
75%
Agent-Specific
Related legacy finding for biopython-entrez: Improve stress-case output rigor. Stress and boundary scenarios show weaker consistency
16 / 20
80%
Core Capability Total: 84 / 100
Medical Task · Execution Average: 87.6 / 100 — Assertions: 20/20 Passed
92
Canonical
You need to search PubMed for articles by keyword, author, journal, or date range and then retrieve metadata or abstracts
4/4 ✓
88
Variant A
You want to download GenBank records (e.g., nucleotide/protein sequences) in batch given accession IDs or search queries
4/4 ✓
86
Edge
Supports core NCBI E-utilities via Bio.Entrez: esearch, efetch, esummary, elink
4/4 ✓
86
Variant B
Query-based searching and ID list retrieval for downstream batch operations
4/4 ✓
86
Stress
End-to-end case for Supports core NCBI E-utilities via Bio.Entrez: esearch, efetch, esummary, elink
4/4 ✓
92
Canonical ✅ Pass
You need to search PubMed for articles by keyword, author, journal, or date range and then retrieve metadata or abstracts
This canonical case stayed inside the documented workflow and remained instruction-led.
Basic 36/40 | Specialized 56/60 | Total 92/100
✅ A1: The biopython-entrez output structure matches the documented deliverable
✅ A2: The instruction path remains actionable for the documented case
✅ A3: The output stays fully within the documented skill boundary
✅ A4: The response quality is acceptable for the documented path
Pass rate: 4 / 4
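The canonical case above can be sketched with Bio.Entrez. This is a minimal illustration, not the evaluated package's own code: `build_pubmed_term` and `fetch_abstracts` are hypothetical helper names, the field tags (`[Author]`, `[Journal]`, `[dp]`) are standard PubMed query syntax, and the fetch step assumes Biopython is installed and NCBI is reachable.

```python
# Sketch of the canonical case: compose a PubMed query, then retrieve
# plain-text abstracts. Helper names are illustrative.

def build_pubmed_term(keyword=None, author=None, journal=None,
                      start=None, end=None):
    """Compose a PubMed query string using standard field tags."""
    parts = []
    if keyword:
        parts.append(keyword)
    if author:
        parts.append(f"{author}[Author]")
    if journal:
        parts.append(f"{journal}[Journal]")
    if start and end:
        # [dp] is PubMed's date-of-publication field
        parts.append(f'("{start}"[dp] : "{end}"[dp])')
    return " AND ".join(parts)

def fetch_abstracts(term, retmax=20, email="you@example.org"):
    """esearch for PMIDs, then efetch abstracts (network required)."""
    from Bio import Entrez  # deferred so the query builder runs without Biopython
    Entrez.email = email    # NCBI asks for a contact address
    handle = Entrez.esearch(db="pubmed", term=term, retmax=retmax)
    ids = Entrez.read(handle)["IdList"]
    handle.close()
    if not ids:
        return ""
    handle = Entrez.efetch(db="pubmed", id=",".join(ids),
                           rettype="abstract", retmode="text")
    text = handle.read()
    handle.close()
    return text

term = build_pubmed_term(keyword="entrez", journal="Bioinformatics",
                         start="2020/01/01", end="2021/12/31")
```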
88
Variant A ✅ Pass
You want to download GenBank records (e.g., nucleotide/protein sequences) in batch given accession IDs or search queries
This Variant A case stayed inside the documented workflow and remained instruction-led.
Basic 34/40 | Specialized 54/60 | Total 88/100
✅ A1: The biopython-entrez output structure matches the documented deliverable
✅ A2: The instruction path remains actionable for the documented case
✅ A3: The output stays fully within the documented skill boundary
✅ A4: The response quality is acceptable for the documented path
Pass rate: 4 / 4
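The Variant A path can be sketched as a chunked efetch loop. The helper names and the batch size of 200 are assumptions for illustration, not part of the evaluated package; the download itself needs Biopython and network access.

```python
# Sketch of Variant A: batch download of GenBank flat files by accession.

def chunked(items, size):
    """Yield successive fixed-size slices of a list."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def download_genbank(accessions, out_path, email="you@example.org", batch=200):
    """efetch GenBank records for the accessions, one batch per request."""
    from Bio import Entrez  # deferred so chunked() works without Biopython
    Entrez.email = email
    with open(out_path, "w") as out:
        for group in chunked(accessions, batch):
            handle = Entrez.efetch(db="nucleotide", id=",".join(group),
                                   rettype="gb", retmode="text")
            out.write(handle.read())
            handle.close()
```

Batching keeps each efetch URL short and avoids a single oversized request for long accession lists.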
86
Edge ✅ Pass
Supports core NCBI E-utilities via Bio.Entrez: esearch, efetch, esummary, elink
"Supports core NCBI E-utilities via Bio.Entrez: esearch, efetch, ..." was evaluated as a bounded documentation path, not as a runnable script workflow.
Basic 33/40 | Specialized 53/60 | Total 86/100
✅ A1: The biopython-entrez output structure matches the documented deliverable
✅ A2: The instruction path remains actionable for the documented case
✅ A3: The output stays fully within the documented skill boundary
✅ A4: The response quality is acceptable for the documented path
Pass rate: 4 / 4
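The elink half of this edge case can be sketched as follows. `linked_ids` and `pubmed_to_pmc` are illustrative names; the nested `LinkSetDb`/`Link` shape is what `Entrez.read()` returns for elink results, and the live call needs Biopython and network access.

```python
# Sketch of the elink utility named in the edge case: map a PubMed ID
# to related PubMed Central records.

def linked_ids(linksets):
    """Collect target IDs from a parsed elink result."""
    ids = []
    for linkset in linksets:
        for db in linkset.get("LinkSetDb", []):
            ids.extend(link["Id"] for link in db.get("Link", []))
    return ids

def pubmed_to_pmc(pmid, email="you@example.org"):
    """elink from a PubMed ID to PMC (network required)."""
    from Bio import Entrez
    Entrez.email = email
    handle = Entrez.elink(dbfrom="pubmed", db="pmc", id=pmid)
    result = Entrez.read(handle)
    handle.close()
    return linked_ids(result)
```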
86
Variant B ✅ Pass
Query-based searching and ID list retrieval for downstream batch operations
"Query-based searching and ID list retrieval for downstream batch..." was evaluated as a bounded documentation path, not as a runnable script workflow.
Basic 32/40 | Specialized 54/60 | Total 86/100
✅ A1: The biopython-entrez output structure matches the documented deliverable
✅ A2: The instruction path remains actionable for the documented case
✅ A3: The output stays fully within the documented skill boundary
✅ A4: The response quality is acceptable for the documented path
Pass rate: 4 / 4
86
Stress ✅ Pass
End-to-end case for "Supports core NCBI E-utilities via Bio.Entrez: esearch, efetch, esummary, elink"
"End-to-end case for Supports core NCBI E-utilities via Bio.Entrez:..." was evaluated as a bounded documentation path, not as a runnable script workflow.
Basic 29/40 | Specialized 57/60 | Total 86/100
✅ A1: The biopython-entrez output structure matches the documented deliverable
✅ A2: The instruction path remains actionable for the documented case
✅ A3: The output stays fully within the documented skill boundary
✅ A4: The response quality is acceptable for the documented path
Pass rate: 4 / 4
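The end-to-end stress path (one esearch feeding paged efetch calls) is commonly done through the NCBI history server, as Biopython's tutorial recommends. This is a sketch under that assumption; `history_params` and `fetch_all` are illustrative names, the page size of 500 is an assumption, and the fetch loop needs Biopython and network access.

```python
# Sketch of the end-to-end path: esearch with usehistory="y", then paged
# efetch calls against the stored result set via WebEnv/query_key.

def history_params(record):
    """Pull the history-server handle out of a parsed esearch result."""
    return {"webenv": record["WebEnv"],
            "query_key": record["QueryKey"],
            "count": int(record["Count"])}

def fetch_all(term, db="nucleotide", page=500, email="you@example.org"):
    """Search once, then download the whole result set in pages."""
    from Bio import Entrez
    Entrez.email = email
    handle = Entrez.esearch(db=db, term=term, usehistory="y")
    hist = history_params(Entrez.read(handle))
    handle.close()
    chunks = []
    for start in range(0, hist["count"], page):
        handle = Entrez.efetch(db=db, rettype="gb", retmode="text",
                               retstart=start, retmax=page,
                               webenv=hist["webenv"],
                               query_key=hist["query_key"])
        chunks.append(handle.read())
        handle.close()
    return "".join(chunks)
```

Using the history server means the ID list never travels back and forth; each efetch page references the server-side result set instead.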
Medical Task Total: 87.6 / 100
Key Strengths
- Primary routing is Evidence Insight with execution mode A
- Static quality score is 84/100 and dynamic average is 87.6/100
- Assertions and command execution outcomes are recorded per input for human review
- Execution verification summary: No script verification was applicable