Before describing the evaluation tuple itself, we discuss the context in which evaluations are generated.
The platform can dynamically calculate the evaluation tuple for a manuscript (M) once the reviewer community (R) and time window (T) have been specified. See the "Reviewer Communities" post for a description of R.
In addition to calculating the evaluation tuple dynamically for each manuscript, platform users can create one or more evaluation profiles and set one of them as their default. For example, a mathematical economist might create two evaluation profiles:
Profile 1: R1 = {'monetarist_group' + 'dynamical_systems_group' + 'has completed at least 5 expert synthesis reviews'}; T1 = since submission to C-SQD
Profile 2 (user default): R2 = {'Neo-Keynesians' + 'causal_modeling'}; T2 = during the last 6 months
C-SQD default Profile: R = {'all'}; T = since submission to C-SQD
Since Profile 2 is set as the user's default, she will see manuscript evaluation tuples only from reviewers matching R2 and within the time window represented by T2. She can at any point create a new profile, change her default profile, or recalculate the evaluation tuple dynamically for a particular manuscript.
The Manuscript Evaluation Tuple
• N : Reported non-ethical problems
• M : Reported ethical problems
• P : Perceived significance
• L : Depth of review scrutiny
• C : Citation impact
This multi-dimensional approach preserves important distinctions that would be lost in a single metric. For instance, a manuscript might contain technical issues while still making significant contributions to its field. Authors can influence the review depth metric L directly by paying for higher-tiered review or indirectly by writing papers that attract uncompensated reviews and high challenge rates. See the "Citations on C-SQD" post for more details on C.
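The five dimensions can be sketched as a typed tuple folded from the R/T-filtered reviews. Again a hypothetical sketch: the field letters follow the post, but the aggregation rules (summing problem counts, averaging significance and scrutiny scores, taking citations as a per-manuscript count) are illustrative assumptions, not the platform's actual formulas.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EvaluationTuple:
    N: int    # reported non-ethical problems
    M: int    # reported ethical problems
    P: float  # perceived significance
    L: float  # depth of review scrutiny
    C: int    # citation impact

def aggregate(reviews: list[dict], citations: int) -> EvaluationTuple:
    """Fold a list of already-filtered review records into one tuple.
    Each record is assumed to carry per-review counts and scores;
    citations are tracked per manuscript, so they are passed separately."""
    if not reviews:
        return EvaluationTuple(0, 0, 0.0, 0.0, citations)
    return EvaluationTuple(
        N=sum(r["non_ethical_problems"] for r in reviews),
        M=sum(r["ethical_problems"] for r in reviews),
        P=sum(r["significance"] for r in reviews) / len(reviews),
        L=sum(r["scrutiny_depth"] for r in reviews) / len(reviews),
        C=citations,
    )
```

Keeping the dimensions separate in this way is what preserves the distinction noted above: a manuscript can show a nonzero N while still scoring high on P.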