Peer review forms the cornerstone of scholarly publishing—the process through which experts evaluate research before publication. But "peer review" isn't a single method; it encompasses several distinct models with different approaches to anonymity, transparency, and reviewer accountability. Understanding these models helps journal editors choose approaches that serve their communities effectively.
Before examining models, consider what peer review accomplishes. Expert evaluation serves multiple functions: identifying errors and methodological problems, suggesting improvements that strengthen work, filtering submissions to match journal scope and standards, and providing quality certification that readers rely upon.
Effective peer review improves published research. Ineffective peer review—whether through inadequate evaluation, bias, or problematic dynamics—undermines the scholarly literature it's meant to protect.
In single-blind review, reviewers know author identities, but authors don't know who reviewed their work. This traditional model remains common across many disciplines.
Authors submit manuscripts with full identifying information. Editors assign reviewers who can see author names, affiliations, and other identifying details. Reviewers evaluate the work knowing who wrote it. Authors receive anonymous feedback without knowing which experts evaluated their submission.
Proponents argue that knowing author identity provides useful context. Reviewers can assess whether claims align with authors' previous work, identify potential conflicts of interest, and evaluate whether the research team has appropriate expertise for the claimed methods.
Practical considerations also apply: removing author identity completely is difficult when manuscripts cite previous work by the same authors, describe recognizable research programs, or involve well-known projects.
Critics note that known authorship enables bias—conscious or unconscious—based on author reputation, institutional affiliation, gender, nationality, or other characteristics. Famous authors might receive more favourable reviews; authors from less-prestigious institutions might face scepticism their work wouldn't otherwise encounter.
Early-career researchers and those from underrepresented groups may be particularly disadvantaged when author identity influences evaluation.
Double-blind review anonymises both directions: reviewers don't know author identities, and authors don't know reviewer identities. This model has become standard in many humanities and social science fields.
Authors submit anonymised manuscripts with identifying information removed. Editors screen submissions for identifying details before assigning reviewers. Reviewers evaluate work without knowing who wrote it. Authors receive anonymous feedback as in single-blind review.
Anonymity reduces opportunities for bias. Reviewers evaluate work on its merits rather than author reputation. Junior researchers and those from less-known institutions receive evaluation equivalent to established scholars. The playing field levels somewhat.
Some research suggests double-blind review increases acceptance rates for papers by women and researchers from lower-income countries—though findings vary across studies.
True anonymity proves difficult to achieve. Small fields have few experts; writing styles, citation patterns, and research focuses may reveal identity to knowledgeable reviewers. When anonymisation fails, the model provides only the illusion of blindness.
The anonymisation process also creates logistical complexity—manuscripts must be carefully prepared, and slips occur. Some argue that imperfect double-blind review may be worse than transparent single-blind processes.
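Editorial screening for identifying details is usually manual, but parts of it can be assisted by simple tooling. Below is a minimal sketch in Python; the function name and heuristics are illustrative assumptions, not features of any journal platform:

```python
import re

def find_identifying_details(text, author_names, affiliations):
    """Flag passages that could reveal author identity in a manuscript
    prepared for double-blind review. Returns a list of findings."""
    findings = []
    # Direct mentions of author names or affiliations.
    for name in author_names:
        if re.search(re.escape(name), text, re.IGNORECASE):
            findings.append(f"author name: {name}")
    for aff in affiliations:
        if re.search(re.escape(aff), text, re.IGNORECASE):
            findings.append(f"affiliation: {aff}")
    # First-person self-citations often de-anonymise authors.
    if re.search(r"\b(our|my) (previous|earlier|prior) (work|stud(y|ies)|paper)\b",
                 text, re.IGNORECASE):
        findings.append("first-person self-citation")
    return findings

flags = find_identifying_details(
    "In our previous work (Smith, 2021) at Oxford University...",
    author_names=["Smith"],
    affiliations=["Oxford University"],
)
# Flags the name, the affiliation, and the self-citation phrasing.
```

A checklist like this can only catch obvious slips; subtler giveaways (recognisable research programmes, citation patterns) still need human editorial judgement.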
Setting Up Peer Review Workflows?
OJS supports various peer review models with proper configuration. Our team can help establish workflows that match your editorial policies.
Open peer review encompasses several transparency-oriented approaches that reveal some or all aspects of the review process. "Open" can mean different things depending on implementation.
Open Identities: Reviewers and authors know each other's identities. Reviews are conducted transparently rather than anonymously.
Open Reports: Review reports are published alongside articles, allowing readers to see what reviewers said and how authors responded. Reviewer identities may or may not be disclosed.
Open Participation: Beyond invited reviewers, the broader community can comment on manuscripts—sometimes before publication (as public preprint comments) or after (as post-publication review).
Open Interaction: Authors and reviewers can communicate directly during the review process, discussing concerns and revisions collaboratively rather than through editor mediation.
Transparency creates accountability. Reviewers who sign their names may provide more constructive, careful feedback than anonymous reviewers who face no consequences for unhelpful criticism. Published reviews demonstrate the evaluation work behind published papers.
Open review also credits reviewer contributions publicly, addressing concerns that anonymous peer review provides inadequate recognition for essential scholarly labour.
Power dynamics complicate open review. Junior researchers may hesitate to criticise senior colleagues' work when their identities are known. Reviewers might soften legitimate criticism to avoid professional consequences. The pressure to be "nice" could compromise critical evaluation.
Some disciplines have attempted open review and found reviewer participation declining—experts unwilling to review when their identities would be revealed, particularly for negative recommendations.
Beyond these main models, journals experiment with various hybrid approaches:
Registered Reports: Reviewers evaluate study design and methodology before data collection. Acceptance decisions happen before results are known, reducing publication bias toward positive findings.
Portable Review: Reviews travel with manuscripts rejected from one journal to subsequent submissions elsewhere, reducing redundant evaluation of the same work.
Post-Publication Review: Articles publish first, then receive community evaluation through comments, ratings, or formal post-publication peer review.
Collaborative Review: Multiple reviewers discuss submissions together rather than submitting independent reports, potentially improving evaluation quality through deliberation.
Model choice should reflect your journal's context:
Discipline Norms: Fields have established expectations. Deviating from norms may face resistance or confusion. Understanding what your community expects provides a starting point.
Reviewer Pool: Will reviewers participate under your chosen model? Open review has struggled in some communities where reviewers prefer anonymity. Consider your ability to recruit reviewers.
Power Dynamics: In fields with sharp hierarchies, open review may disadvantage junior researchers. Anonymity can protect honest evaluation from professional consequences.
Transparency Goals: If demonstrating rigorous review matters for your indexing or credibility goals, open or transparent elements might serve those purposes.
Practical Capacity: Complex models require more editorial management. Match ambition to capacity.
Open Journal Systems supports various peer review configurations. Settings control whether reviewer identities appear to authors, whether author identities appear to reviewers, and how review information flows through the system.
Proper configuration ensures your stated review model matches actual system behaviour. Misalignment between claimed and actual processes creates ethical and credibility problems.
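One way to guard against that misalignment is a simple consistency check: derive the visibility settings each model implies, then compare them against the journal's actual configuration. The sketch below uses hypothetical setting names for illustration; they are not actual OJS configuration options:

```python
# Hypothetical review-model rules (illustrative only, not real OJS
# option names). Each model implies who may see whose identity.
MODEL_RULES = {
    "single_blind": {"reviewer_sees_author": True,  "author_sees_reviewer": False},
    "double_blind": {"reviewer_sees_author": False, "author_sees_reviewer": False},
    "open":         {"reviewer_sees_author": True,  "author_sees_reviewer": True},
}

def check_alignment(stated_model, settings):
    """Return the settings that contradict the journal's stated model."""
    expected = MODEL_RULES[stated_model]
    return {key: settings.get(key)
            for key, value in expected.items()
            if settings.get(key) != value}

# A journal claiming double-blind review, but whose system still shows
# author identities to reviewers, would be flagged:
mismatches = check_alignment(
    "double_blind",
    {"reviewer_sees_author": True, "author_sees_reviewer": False},
)
# mismatches -> {"reviewer_sees_author": True}
```

The point is the audit habit, not the code: whatever platform you run, periodically verify that each visibility setting matches the review model stated in your published editorial policy.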
Indexing applications and ethical standards require clear peer review documentation. Your journal should explain:
What model you use and why it fits your discipline and community.
How reviewers are selected and what expertise they're expected to have.
What evaluation criteria guide reviewer assessment.
How conflicts of interest are managed.
How decisions are made based on reviews.
What authors can expect regarding feedback timing and content.
This documentation demonstrates genuine peer review implementation—not just claimed processes.
Altechmind helps journals establish peer review systems that match their editorial policies. From OJS configuration to workflow design, we ensure your review processes work efficiently and meet indexing standards.