We describe our use of common research weakness enumerations, taxonomies of research issues on which ElementReviews are based.
We require authors to grant C-SQD a perpetual, non-exclusive license (similar to arxiv.org).
We discuss the typical review phases a manuscript may experience on C-SQD.
We discuss how the work of Kahneman, Sibony and Sunstein informs C-SQD's design choices.
We discuss the benefits of C-SQD to conference organizers.
Every platform user must be an identified human being; accounts may not be controlled by bots, and no user may operate multiple accounts.
We describe "reviewer communities" as self-organizing, dynamic, overlapping and emergent associations.
How we make referencing prior work more relevant and informative.
Replacing the traditional accept/reject dichotomy with a more nuanced and interpretable evaluation.
We describe the process for gaining Reviewer status on C-SQD.
The challenge system is a reviewer-initiated process for changing the relative prominence of an ElementReview or SynthesisReview.
We see at least two ways AI can directly support the goals of C-SQD.
A simple summary of the submission, publication, and review process flows.
Elevating peer review to a level that reflects its importance to knowledge creation entails innovations in community engagement.
We explore some of the existing models and then argue for a new model.
In a nutshell: error correction is more important than verification. Verification is not a worthy goal, because complete verification is impossible.
It's about creating the right incentive structures. Does compensating reviewers lead to incurable bias? We don't think it has to.