# target-journal-matcher

Matches a manuscript abstract to target journals using Tier 1/2/3 classification; scoring aware of NLP, clinical-trial, and methodology signals; a mandatory impact-factor (IF) disclaimer; and an open-access filter. Second polish: the Python script was rewritten to implement tier labels, fix NLP field detection, penalize Cell for clinical-trial papers, route the superconductor abstract correctly to Nature/Science, add the IF disclaimer footer to all output formats, and add an --open-access CLI flag.
## Veto Gates

Required pass for any deployment consideration.
| Dimension | Result | Detail |
|---|---|---|
| Scientific Integrity | PASS | Impact factor values sourced from bundled journals.json; no fabricated IFs, DOIs, PMIDs, or clinical outcome data. |
| Practice Boundaries | PASS | No diagnostic or prescriptive medical conclusions produced; skill is limited to journal recommendations. |
| Methodological Ground | PASS | No methodological fallacies; no ethical compliance requirements triggered by journal-matching task. |
| Code Usability | PASS | Script (main.py) runs successfully on Python 3.9. Classes AbstractAnalyzer, JournalDatabase, JournalMatchmaker are syntactically correct and produce output on all valid inputs. |
## Core Capability: 84 / 100 — 8 Categories

### Medical Task Execution

Average: 88.2 / 100 — Assertions: 25/25 passed
- Tier 1/2/3 labels now present in output (Tier 1: Cell, Nat Med, Nat Methods, Nat Biotechnol; Tier 2: Cell Res). IF disclaimer present. Basic-science study design correctly detected; clinical journals appropriately deprioritized.
- NLP field now detected first (highest field score). TACL (#1) and Computational Linguistics (#2) now lead the recommendations. CV journals (TPAMI, IJCV) correctly deprioritized. Tier labels and IF disclaimer present.
- Minimal environmental abstract returns Nature Climate Change (Tier 1) and Environmental S&T (Tier 2), as appropriate. Tier labels and IF disclaimer present.
- Clinical-trial design detected. Cell now receives a 0.2× penalty (basic-science-only journal) and does not appear in the top 5. Lancet (#1), JAMA (#2), and NEJM (#3) correctly lead, all Tier 1. Tier labels and IF disclaimer present.
- Multidisciplinary field detected first (paradigm-shift and cross-disciplinary language). Nature (#1, Tier 1) and Science (#2, Tier 1) correctly lead. Environmental S&T is absent: the chemistry field is no longer triggered by the superconductor abstract. npj Quantum Materials appears as a physics/materials option. Tier labels and IF disclaimer present.
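The clinical-trial penalty described above can be sketched as a score adjustment. This is an illustrative sketch only; the function name `adjust_for_clinical_trial`, the `basic_science_only` field, and the `BASIC_SCIENCE_PENALTY` constant are assumptions, not the skill's actual implementation (only the 0.2× multiplier is from the report).

```python
BASIC_SCIENCE_PENALTY = 0.2  # multiplier from the report for basic-science-only journals

def adjust_for_clinical_trial(scores, journals, is_clinical_trial):
    """Down-weight basic-science-only journals when the abstract
    describes a clinical trial; leave other scores unchanged."""
    if not is_clinical_trial:
        return dict(scores)
    return {
        name: score * BASIC_SCIENCE_PENALTY
        if journals[name].get("basic_science_only", False)
        else score
        for name, score in scores.items()
    }

# Hypothetical data: Cell's 0.9 is scaled by 0.2, dropping it below Lancet's 0.8.
journals = {"Cell": {"basic_science_only": True},
            "Lancet": {"basic_science_only": False}}
scores = {"Cell": 0.9, "Lancet": 0.8}
adjusted = adjust_for_clinical_trial(scores, journals, is_clinical_trial=True)
```

With a multiplicative penalty the journal stays in the candidate pool for borderline abstracts rather than being hard-excluded.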
## Key Strengths
- Structured CLI interface with multiple output formats (table, json, markdown) and configurable --min-if / --max-if filtering enables realistic tier-scoped searches
- Self-creating default journal database (auto-generates journals.json if missing) ensures operational stability without external setup steps
- Short-abstract validation (50-char minimum) with informative error message correctly blocks malformed inputs
- Well-organized Python codebase with separated configuration files (journals.json, fields.json, scoring_weights.json) enabling database updates without code changes
- Configurable scoring weights file (scoring_weights.json) allows tunable matching behavior without rewriting logic
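The CLI filters and the 50-character abstract check could be wired up roughly as below. Flag names (--min-if, --max-if, --open-access) and output formats come from the report; the parser structure, `MIN_ABSTRACT_CHARS` constant, and `validate_abstract` helper are illustrative assumptions.

```python
import argparse

MIN_ABSTRACT_CHARS = 50  # the documented short-abstract minimum

def build_parser():
    """Sketch of the CLI described above (flag names from the report)."""
    p = argparse.ArgumentParser(prog="main.py")
    p.add_argument("abstract", help="manuscript abstract text")
    p.add_argument("--min-if", type=float, default=0.0,
                   help="minimum impact factor")
    p.add_argument("--max-if", type=float, default=float("inf"),
                   help="maximum impact factor")
    p.add_argument("--open-access", action="store_true",
                   help="restrict results to open-access journals")
    p.add_argument("--format", choices=["table", "json", "markdown"],
                   default="table")
    return p

def validate_abstract(text):
    """Reject abstracts below the minimum length with an informative error."""
    if len(text.strip()) < MIN_ABSTRACT_CHARS:
        raise ValueError(
            f"Abstract too short: at least {MIN_ABSTRACT_CHARS} characters required."
        )
```

Combining --min-if and --max-if bounds is what makes the tier-scoped searches mentioned above practical from the command line.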