Evidence Insight

semantic-scholar-database

Total Score: 90 / 100
Core Capability: 87 / 100
Functional Suitability: 11 / 12
Reliability: 10 / 12
Performance & Context: 8 / 8
Agent Usability: 14 / 16
Human Usability: 8 / 8
Security: 9 / 12
Maintainability: 10 / 12
Agent-Specific: 17 / 20
Medical Task: 20 / 20 Passed
97/100 (4/4): You need to find relevant papers by keyword, title, or known identifiers (e.g., Semantic Scholar Paper ID)
93/100 (4/4): You want to fetch detailed metadata for a paper (abstract, venue, year, fields of study, etc.)
91/100 (4/4): Paper search via the Semantic Scholar Graph API
91/100 (4/4): Paper details retrieval (e.g., abstract, venue, citations-related fields depending on requested fields)
91/100 (4/4): End-to-end case for Paper search via the Semantic Scholar Graph API

Veto Gates: required pass for any deployment consideration

Skill Veto: ✓ All 4 gates passed
Operational Stability: PASS (system remains stable across varied inputs and edge cases)
Structural Consistency: PASS (output structure conforms to the expected skill contract format)
Result Determinism: PASS (equivalent inputs produce semantically equivalent outputs)
System Security: PASS (no prompt injection, data leakage, or unsafe tool use detected)
Research Veto: ✅ PASS (applicable)
Scientific Integrity: PASS. The archived evaluation kept the skill tied to retrieved records or indexed source material rather than invented scientific claims.
Practice Boundaries: PASS. The package stayed within retrieval, extraction, and evidence-organization scope rather than drifting into unsupported interpretation.
Methodological Ground: PASS. The legacy audit preserved a method-grounded interpretation of the documented workflow: access the Semantic Scholar Graph API to search papers and retrieve paper/author/citation data for literature discovery or citation-graph exploration.
Code Usability: PASS. The packaged retrieval surface remained understandable at the command and parameter level in the archived review.

Core Capability: 87 / 100 (8 categories)

Functional Suitability
The archived deduction in functional suitability traces back to the legacy issue 'Improve stress-case output rigor': stress and boundary scenarios show weaker consistency.
11 / 12
92%
Reliability
Reliability was softened by the legacy issue 'Improve stress-case output rigor': stress and boundary scenarios show weaker consistency.
10 / 12
83%
Performance & Context
No point loss was recorded for Performance & Context in the legacy audit.
8 / 8
100%
Agent Usability
The archived evaluation left some headroom for semantic-scholar-database under agent usability.
14 / 16
88%
Human Usability
The legacy audit gave full marks to human usability for this package.
8 / 8
100%
Security
The legacy audit deducted points for semantic-scholar-database in security.
9 / 12
75%
Maintainability
A modest deduction remained in maintainability for semantic-scholar-database in the archived review.
10 / 12
83%
Agent-Specific
The archived deduction in agent-specific scoring traces back to the legacy issue 'Improve stress-case output rigor': stress and boundary scenarios show weaker consistency.
17 / 20
85%
Core Capability Total: 87 / 100

Medical Task: Execution Average 92.6 / 100; Assertions 20/20 Passed

Canonical (97/100, assertions 4/4): You need to find relevant papers by keyword, title, or known identifiers (e.g., Semantic Scholar Paper ID)
Variant A (93/100, assertions 4/4): You want to fetch detailed metadata for a paper (abstract, venue, year, fields of study, etc.)
Edge (91/100, assertions 4/4): Paper search via the Semantic Scholar Graph API
Variant B (91/100, assertions 4/4): Paper details retrieval (e.g., abstract, venue, citations-related fields depending on requested fields)
Stress (91/100, assertions 4/4): End-to-end case for Paper search via the Semantic Scholar Graph API
Canonical (97 / 100): ✅ Pass
You need to find relevant papers by keyword, title, or known identifiers (e.g., Semantic Scholar Paper ID)

This canonical case stayed inside the documented workflow and remained instruction-led.

Basic 36/40 | Specialized 60/60 | Total 97/100
A1: The semantic-scholar-database output structure matches the documented deliverable
A2: The instruction path remains actionable for the documented case
A3: The output stays fully within the documented skill boundary
A4: The response quality is acceptable for the documented path
Pass rate: 4 / 4
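The canonical keyword-search path above can be sketched as a single GET against the Graph API's /paper/search endpoint. This is a minimal sketch, not the evaluated package's own client: the endpoint and field names follow the public Semantic Scholar Graph API, while the query string, field selection, and helper name are illustrative assumptions.

```python
# Minimal sketch of keyword paper search against the Semantic Scholar
# Graph API. Query, fields, and limit here are illustrative assumptions.
import urllib.parse

GRAPH_API = "https://api.semanticscholar.org/graph/v1"

def build_search_url(query: str,
                     fields: tuple = ("title", "year", "externalIds"),
                     limit: int = 10) -> str:
    """Return the GET URL for /paper/search with an encoded keyword query."""
    params = urllib.parse.urlencode({
        "query": query,
        "fields": ",".join(fields),
        "limit": limit,
    })
    return f"{GRAPH_API}/paper/search?{params}"

# Build (but do not send) a search-request URL.
print(build_search_url("semantic scholar graph api"))
```

Sending the request with any HTTP client then yields a JSON body whose hits each carry a `paperId` usable for follow-up detail lookups.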
Variant A (93 / 100): ✅ Pass
You want to fetch detailed metadata for a paper (abstract, venue, year, fields of study, etc.)

This Variant A case stayed inside the documented workflow and remained instruction-led.

Basic 34/40 | Specialized 59/60 | Total 93/100
A1: The semantic-scholar-database output structure matches the documented deliverable
A2: The instruction path remains actionable for the documented case
A3: The output stays fully within the documented skill boundary
A4: The response quality is acceptable for the documented path
Pass rate: 4 / 4
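The detailed-metadata path for Variant A can be sketched as a GET on /paper/{paper_id} with a fields selector. Again a hedged sketch rather than the packaged client: the field names (abstract, venue, year, fieldsOfStudy) and the external-identifier form follow the public Graph API, and the helper itself is an assumption.

```python
# Sketch of fetching detailed metadata for one paper via
# GET /graph/v1/paper/{paper_id}?fields=... (field names per the
# public Graph API; this helper is illustrative, not the package's client).
import urllib.parse

GRAPH_API = "https://api.semanticscholar.org/graph/v1"

def build_paper_url(paper_id: str,
                    fields: tuple = ("title", "abstract", "venue",
                                     "year", "fieldsOfStudy")) -> str:
    """Return the GET URL for a paper-details request with selected fields."""
    # Keep ":" and "/" readable so external IDs like DOI:... pass through.
    encoded_id = urllib.parse.quote(paper_id, safe="/:")
    return f"{GRAPH_API}/paper/{encoded_id}?fields={','.join(fields)}"

# The API also accepts external identifiers, e.g. a DOI-prefixed ID.
print(build_paper_url("DOI:10.1093/mind/lix.236.433"))
```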
Edge (91 / 100): ✅ Pass
Paper search via the Semantic Scholar Graph API

This edge case stayed inside the documented workflow and remained instruction-led.

Basic 33/40 | Specialized 58/60 | Total 91/100
A1: The semantic-scholar-database output structure matches the documented deliverable
A2: The instruction path remains actionable for the documented case
A3: The output stays fully within the documented skill boundary
A4: The response quality is acceptable for the documented path
Pass rate: 4 / 4
Variant B (91 / 100): ✅ Pass
Paper details retrieval (e.g., abstract, venue, citations-related fields depending on requested fields)

'Paper details retrieval (e.g., abstract, venue, citations-related...' was evaluated as a bounded documentation path, not as a runnable script workflow.

Basic 32/40 | Specialized 59/60 | Total 91/100
A1: The semantic-scholar-database output structure matches the documented deliverable
A2: The instruction path remains actionable for the documented case
A3: The output stays fully within the documented skill boundary
A4: The response quality is acceptable for the documented path
Pass rate: 4 / 4
Stress (91 / 100): ✅ Pass
End-to-end case for Paper search via the Semantic Scholar Graph API

The archived run for 'End-to-end case for Paper search via the Semantic Scholar Graph API' remained guidance-driven rather than command-driven.

Basic 29/40 | Specialized 60/60 | Total 91/100
A1: The semantic-scholar-database output structure matches the documented deliverable
A2: The instruction path remains actionable for the documented case
A3: The output stays fully within the documented skill boundary
A4: The response quality is acceptable for the documented path
Pass rate: 4 / 4
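The end-to-end stress case chains the two calls: search first, then detail lookups for each hit. The sketch below assumes the documented search-response shape (a top-level "data" list whose entries carry "paperId"); the sample response dict is fabricated purely for illustration, and the helper name is an assumption.

```python
# End-to-end sketch: map a /paper/search JSON response to the follow-up
# detail-request URLs. Response shape per the public Graph API; the
# sample dict below is illustrative only.
GRAPH_API = "https://api.semanticscholar.org/graph/v1"

def detail_urls(search_response: dict,
                fields: str = "title,abstract,venue,year") -> list:
    """Derive one paper-details URL per search hit that has a paperId."""
    return [
        f"{GRAPH_API}/paper/{hit['paperId']}?fields={fields}"
        for hit in search_response.get("data", [])
        if "paperId" in hit
    ]

sample = {"total": 1, "data": [{"paperId": "abc123", "title": "Example"}]}
print(detail_urls(sample))
```

Keeping the URL-building pure like this makes the chain testable without network access, which matches the report's note that the run stayed guidance-driven rather than command-driven.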
Medical Task Total: 92.6 / 100

Key Strengths

  • Primary routing is Evidence Insight with execution mode B
  • Static quality score is 87/100 and dynamic average is 79.6/100
  • Assertions and command execution outcomes are recorded per input for human review
  • Execution verification summary: Script verification 1/1; adjustment=5. client.py: OK