Development of Manuscript Evaluation Tool for Reporting Integrity in Cancer Science (METRICS)
Rationale
In recent years, the number of cancer registries and the volume of research output across Asia have grown substantially. Consequently, the Asian Pacific Journal of Cancer Prevention (APJCP) and other emerging journals in the network (including APJCB, APJCC, APJCN, and APJEC) have observed a marked increase in submissions reporting initial or interim findings from cancer research. These submissions span a broad spectrum of cancer science—from mechanistic studies at the cellular level to social research on end-of-life care.
However, we have identified wide-ranging variations in the quality, structure, and completeness of reporting across these manuscripts. This lack of standardization limits comparability, hinders meta-analyses, and reduces the overall utility of published data.
To address this gap, we identified a clear need to develop:
- A reporting guideline to communicate expectations to authors, helping them prepare more transparent, complete, and comparable manuscripts.
- An evaluative instrument (tool) to enable editors, reviewers, and researchers to systematically assess the quality of submitted or published manuscripts across different fields of cancer science.
To achieve this, we have developed a standard, multi-step methodology grounded in evidence-based principles. This methodology includes the formation of a core working group to draft the tool, a broader scientific committee to refine and approve it, and rigorous validity and reliability testing.
Project Support
This project is logistically supported by the West Asia Organization for Cancer Prevention (the West Asia chapter of the Asia Pacific Organization for Cancer Prevention, APOCP).
Definition of the METRICS Tool
The final output of this development process will be named the METRICS tool. While the full expansion may be finalized by the core committee, for the purposes of this project, METRICS can be understood as:
Manuscript Evaluation Tool for Reporting Integrity in Cancer Science
(Note: The exact wording will be confirmed during Step One and presented in the final published tool.)
Role of the Scientific Committee
You are invited to serve as a member of the Scientific Committee. In this role, your involvement will focus on Steps Two and Three of the methodology described below. Specifically, you will:
- Evaluate, comment on, and rate each suggested item of the draft guideline.
- Provide feedback on importance, necessity, relevance, simplicity, and clarity.
- Suggest corrections, revisions, or entirely new items.
All communication and collaboration will be conducted via email or online document sharing (e.g., Google Docs). Participating scientists will be acknowledged as contributors and included in the author list of any resulting publication.
Methodology and Framework for Guideline Development
The development roadmap consists of seven steps. Steps 1–3 focus on tool development, while Steps 4–7 assess the validity, reliability, and quality of the tool.
Step One: Initial Draft Development by Core Working Committee
A core working committee of five experts will develop the first draft of the guideline by:
- a) Conducting a comprehensive review of available relevant quality assessment tools (both generic, e.g., CONSORT, STROBE, and cancer-specific tools).
- b) Extracting and adapting important and relevant items from existing tools to suit the specific needs of cancer research reporting.
- c) Developing new items that address gaps not covered by previous tools, particularly those relevant to cancer registry reports and diverse study designs in oncology.
- d) Holding multiple meetings to finalize the first draft based on selected and newly developed items.
Step Two: Scientific Committee Delphi Review
Following the first draft, a Scientific Committee comprising at least 10 editorial experts from different countries, journals, and international societies will be formed. The core working committee will coordinate at least two rounds of a modified Delphi process, during which members will:
- Assess each item in the first draft for importance, necessity, relevance, simplicity, and clarity.
- Suggest corrections or revisions to existing items.
- Propose new items to be added to the draft.
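To illustrate how a Delphi round's ratings might be aggregated, the sketch below tallies, for each item and criterion, the proportion of committee members giving a favorable score. The 1–4 Likert scale and the 70% agreement threshold are illustrative assumptions for this sketch; the protocol itself does not fix these values, and the final criteria will be set by the committees.

```python
# Hypothetical sketch of aggregating one Delphi round.
# Assumed: each member rates each draft item on five criteria using a
# 1-4 Likert scale; "consensus" means >=70% of raters score 3 or 4.
# Neither the scale nor the threshold is fixed by this protocol.

CRITERIA = ["importance", "necessity", "relevance", "simplicity", "clarity"]

def consensus_reached(ratings, threshold=0.70):
    """ratings: list of 1-4 scores for one item on one criterion."""
    favorable = sum(1 for r in ratings if r >= 3)
    return favorable / len(ratings) >= threshold

def summarize_item(item_ratings):
    """item_ratings: dict mapping criterion -> list of scores from all
    raters. Returns criteria lacking consensus (carried to next round)."""
    return [c for c in CRITERIA if not consensus_reached(item_ratings[c])]

# Invented example: 10 raters, one draft item
item = {
    "importance": [4, 4, 3, 4, 3, 4, 4, 3, 4, 4],
    "necessity":  [3, 2, 4, 3, 2, 3, 4, 2, 3, 3],
    "relevance":  [4, 3, 4, 4, 3, 4, 3, 4, 4, 3],
    "simplicity": [2, 3, 2, 2, 3, 2, 1, 2, 3, 2],
    "clarity":    [4, 4, 4, 3, 4, 3, 4, 4, 3, 4],
}
print(summarize_item(item))  # criteria to revisit in round two
```

Items flagged by such a summary would be revised or re-worded before the next round, while items reaching consensus on all five criteria could be locked in.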
Step Three: Consensus Meeting and Semi-Final Approval
After collecting the Scientific Committee’s feedback, the core working committee will produce a revised draft. This draft will be presented during an online meeting attended by both the core and scientific committee members. Before the meeting, a mini-workshop will be held to describe the procedure and rating criteria. After the meeting:
- All final corrections will be made.
- A semi-final version of the tool will be approved by consensus of all meeting participants.
Step Four: Face and Content Validity Assessment
An online survey will be distributed to at least 50 reviewers (manuscript reviewers or methodologists). Participants will quantitatively rate each item of the semi-final tool based on:
- Importance
- Clarity
- Relevance
- Necessity
Open-ended written comments will also be collected. All quantitative data will be analyzed using statistical software, while qualitative comments will undergo content analysis. Based on the results, the core working committee will revise the tool to produce a near-final version.
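One standard way to analyze such ratings is the content validity index; the protocol does not name a specific statistic, so the sketch below is an assumed illustration. It computes the item-level index (I-CVI: the proportion of experts rating an item favorably) and the scale-level average (S-CVI/Ave), again assuming a 1–4 rating scale with 3–4 counted as favorable.

```python
# Hypothetical sketch of a content validity analysis for Step Four.
# Assumed: each reviewer rates each item 1-4; scores of 3 or 4 count
# as favorable. I-CVI and S-CVI/Ave are one common choice of statistic,
# not one mandated by this protocol.

def i_cvi(ratings):
    """Item-level CVI: proportion of reviewers rating the item 3 or 4."""
    return sum(1 for r in ratings if r >= 3) / len(ratings)

def s_cvi_ave(items):
    """Scale-level CVI: mean of the item-level indices."""
    return sum(i_cvi(r) for r in items) / len(items)

# Invented example: shown with 5 reviewers for brevity (the survey
# targets at least 50)
relevance_ratings = [
    [4, 3, 4, 4, 2],   # item 1
    [3, 3, 4, 4, 4],   # item 2
    [2, 2, 3, 4, 2],   # item 3 (low index -> candidate for revision)
]
for i, r in enumerate(relevance_ratings, 1):
    print(f"item {i}: I-CVI = {i_cvi(r):.2f}")
print(f"S-CVI/Ave = {s_cvi_ave(relevance_ratings):.2f}")
```

The same computation would be repeated separately for each of the four rated dimensions (importance, clarity, relevance, necessity), with low-scoring items flagged for revision.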
Step Five: Inter-Observer Reliability Testing
A second online survey will be conducted using 60 manuscripts previously submitted to APJCP (30 accepted and 30 rejected). Three independent reviewers will assess each manuscript using the near-final version of the tool. Inter-observer reliability will be calculated (e.g., using intraclass correlation coefficients or Fleiss’ kappa). Any item failing to achieve an acceptable reliability threshold will be revised or removed by consensus among the three reviewers and the core working committee.
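As a concrete illustration of one of the statistics mentioned above, the sketch below computes Fleiss’ kappa in pure Python. It assumes each manuscript is rated by the same three reviewers and that each item is scored into one of a fixed set of categories; the data are invented, and in practice a statistical package would typically be used instead.

```python
# Minimal pure-Python sketch of Fleiss' kappa for Step Five.
# Assumed setup: every subject (manuscript) is rated by the same
# number of raters; counts[i][j] = number of raters placing subject i
# in category j. Example data below are invented for illustration.

def fleiss_kappa(counts):
    n = len(counts)        # subjects (manuscripts)
    m = sum(counts[0])     # raters per subject
    k = len(counts[0])     # rating categories

    # Per-subject observed agreement, then its mean
    p_i = [(sum(c * c for c in row) - m) / (m * (m - 1)) for row in counts]
    p_bar = sum(p_i) / n

    # Expected agreement from marginal category proportions
    p_j = [sum(row[j] for row in counts) / (n * m) for j in range(k)]
    p_e = sum(p * p for p in p_j)

    return (p_bar - p_e) / (1 - p_e)

# Invented example: 4 manuscripts, 3 reviewers, 2 categories
# (e.g., an item scored pass / fail)
ratings = [
    [3, 0],   # all three reviewers: pass
    [2, 1],
    [0, 3],   # all three reviewers: fail
    [1, 2],
]
print(f"kappa = {fleiss_kappa(ratings):.3f}")
```

Items whose kappa falls below the agreed acceptability threshold (commonly around 0.6 for "substantial" agreement, though the protocol leaves the exact cutoff to the committees) would then be revised or removed as described above.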
Step Six: Final Refinement and Publication
Following reliability testing, final refinements will be made, resulting in the final version of the METRICS tool. This version will be formally approved by all members of both the core working committee and the Scientific Committee. The final tool, along with a full report of the multi-phase development and validation study, will be submitted for publication. Authorship will include all committee members (core and scientific) as well as the three independent reviewers involved in Step Five.
Step Seven: Post-Publication Impact and Satisfaction Surveys
After the final tool is published, two additional surveys will be conducted to evaluate:
- The tool’s impact on quality improvement – assessing whether use of the METRICS tool leads to measurable improvements in manuscript quality over time.
- Author and reviewer experiences and satisfaction – gathering feedback from end-users regarding the tool’s usability, clarity, and perceived value.
The detailed methodology for these post-publication surveys will be described in a separate protocol.
For the progress achieved to date on this initiative, please visit the Research/Education submenu of the APOCP website.