1. Start with the exact question
Before reading the results, define the population, intervention, comparator, and outcome you care about. If the paper answers a different question than yours, any claim you carry over from it is already overstated.
- Population: age, risk profile, baseline status.
- Intervention: dose, formulation, timing, adherence.
- Comparator: placebo, standard care, or active control.
- Outcome: what was measured, how, and over what follow-up period.
2. Check design before outcomes
Randomized and blinded designs reduce major sources of bias, but execution still matters. High dropout or poor adherence can distort the apparent effect, as the sketch after this list illustrates.
- Was allocation concealed?
- Were assessors blinded for subjective outcomes?
- Was intention-to-treat analysis reported?
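The toy example below (Python, with made-up counts for a single trial arm) shows why the intention-to-treat question matters: dropping non-completers from the denominator can make the same arm look better than counting everyone who was randomized. All numbers are assumptions for illustration only.

```python
# Minimal sketch with hypothetical counts for one trial arm: excluding
# dropouts from the denominator can flatter the result relative to an
# intention-to-treat count.

def event_rate(events: int, n: int) -> float:
    return events / n

randomized = 100           # everyone allocated to this arm
dropouts = 20              # left the study or stopped the intervention
events_in_completers = 12
events_in_dropouts = 8     # dropouts often fare worse; ignoring them hides events

# Intention-to-treat keeps all randomized participants in the denominator.
itt_rate = event_rate(events_in_completers + events_in_dropouts, randomized)
# A completers-only rate drops both the dropouts and their events.
per_protocol_rate = event_rate(events_in_completers, randomized - dropouts)

print(f"Intention-to-treat event rate: {itt_rate:.0%}")           # 20%
print(f"Completers-only event rate:    {per_protocol_rate:.0%}")  # 15%
```

The gap between the two rates comes entirely from who gets counted, not from any change in what the intervention actually did.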
3. Read effect size, not only p-values
Statistical significance does not guarantee clinical relevance. A small but "significant" change may have negligible real-world impact.
Reviewer note
Prefer absolute change, confidence intervals, and baseline-adjusted results over isolated p-values.
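To make the distinction concrete, here is a minimal sketch (Python, with hypothetical event counts and group sizes) that turns group event rates into an absolute risk reduction, a Wald 95% confidence interval, and a number needed to treat. The specific numbers and the 1.96 multiplier are assumptions for illustration, not data from any real trial.

```python
import math

def risk_difference(events_trt: int, n_trt: int, events_ctl: int, n_ctl: int,
                    z: float = 1.96) -> tuple[float, float, float]:
    """Absolute risk reduction with a Wald 95% confidence interval."""
    p_trt, p_ctl = events_trt / n_trt, events_ctl / n_ctl
    arr = p_ctl - p_trt
    se = math.sqrt(p_trt * (1 - p_trt) / n_trt + p_ctl * (1 - p_ctl) / n_ctl)
    return arr, arr - z * se, arr + z * se

# Hypothetical large trial: 2.0% events on control vs 1.5% on treatment.
arr, lo, hi = risk_difference(events_trt=150, n_trt=10_000,
                              events_ctl=200, n_ctl=10_000)

print(f"Absolute risk reduction: {arr:.2%} (95% CI {lo:.2%} to {hi:.2%})")
print(f"Number needed to treat:  {1 / arr:.0f}")
```

In this made-up example the relative risk reduction is 25%, which sounds impressive, while the absolute reduction is half a percentage point and roughly 200 people would need treatment for one to benefit. That gap is exactly what the note above is pointing at.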
4. Assess applicability and harms
If trial participants are highly selected, extrapolating the results to broader, real-world populations can fail. Review adverse event reporting and interaction risks before translating findings into practice.
Evidence quality is not a badge. It is a chain of method decisions, each of which can weaken or strengthen inference.