Message 00197 / Thread: joxT00189, Message 5/77

Re: [jox] Multi-rating mode of evaluation (was: Multi-rating mode of evaluation / Updating papers)

Hi All,

My view is indeed that rating is unnecessary, for many reasons. These are
the more important ones, but I can think of many more:

   - You are treating the readers as idiots who can't judge for themselves.
   - You are essentially playing favorites with the texts.
   - Rating is a very subjective business, and disagreeing with someone's
   position, irrespective of the quality of the paper, might confuse the two:
   if you disagree but the work is great, how do you rate it? The mechanism
   you describe really doesn't say anything to me.
   - Especially on cutting-edge research matters, it is difficult to rate
   at all.
   - You are creating a HIERARCHY of papers, and I find hierarchies in
   general quite distasteful. For a progressive journal on p2p this could be
   very problematic.
   - What is a better paper, and who decides? Majority decision-making does
   not work well in real life, so why do you want to transfer it onto an
   experimental progressive journal?
   - People might also feel uncomfortable with the rating, and it is a
   very technocratic thing to do altogether. I mean, this is not a marketing
   product and we don't sell DVD players!
   - Lastly, I personally trust very few researchers' opinions about research
   to begin with. Do you trust everyone? Do you trust the majority?

Nevertheless, in my personal communication with Mathieu I told him that all
you can do is experiment and see how it goes. In that case, please go ahead,
and I will simply register my concerns on the issue. Only trial and error
can settle this one.


On Fri, Dec 4, 2009 at 8:56 AM, Mathieu O'Neil <mathieu.oneil> wrote:


Hi Brian, Johan, all

I looked quickly at the paper Johan posted - the argument is that
improvement of submissions should be inspired by FLOSS development by
adopting mechanisms such as this email list: authors post a
suggestion, reviewers collectively point out possible omissions, the
author goes back to work then submits a more fleshed-out version,
people on the list point out more "bugs", then more fixing up, etc,
until publication. The author argues that there has to be a
(worldwide?) social contract that no-one will quote the discussions /
submissions until publication. This last part seems hard to enforce,
plus the process seems to me a bit drawn out, but I really like the
initial stage (getting list feedback on abstracts / submissions). Perhaps
this could be combined with a rating system?



From: bwhitworth<b.whitworth>

This is an interesting discussion - sorry I can't participate fully due
to a current overload.

By our First Monday analysis, journals accepting only the best and
those "open to all" are both paths well travelled in academic
publishing, with known outcomes: the first gives rigor but not
relevance, while the second reduces quality and recognition.

Statements like "We should publish only papers that we agree are fit
for publication", or "We should ..." in general, assume that we control
the journal. Our paper at
ojs/index.php/fm/article/view/2609/2248 opposes that control mentality
and introduces the ideal of democracy in academic publishing, i.e.
government by the people, for the people.

Reviewers imposing grades on accepted publications denigrates authors
only if the journal forces them to publish. If authors choose to
publish, how can they be offended? If they were offended, they would
simply choose not to publish. Reviewing is then just the journal doing
its job.

A journal that can't be bothered rating its submissions doesn't deserve
to succeed. Equally, one that selects the best and leaves the rest is
elitist. There is no easy way between these options, so we suggested
both highly selective reviewing and completely open publishing. The
multi-grade system lets anyone publish, but all need not be rated equal,
and there can be multiple criteria. I guess this goes against the
politically correct idea that we are all equal, but actually some of us
run faster, others cook better, and a few of us can actually do
mathematics, so "equality" of ability is really a myth. The real equality
is one of opportunity, not ability, which is why this approach lets in
everyone who wants to come in.

Likewise, the ratings of registered readers, while informal, are neither
unexpected nor imposed. The public is always entitled to its opinion.
The system need only identify and ban spammers and trolls, as Wikipedia
does. The view of the public should not be a secret, so people can
rate what they read.

The social principles outlined in our paper were fairness, public
good, transparency, freedom and order. Achieving all these in one jump
is probably impossible, as socio-technical systems are still
struggling to evolve. It is all very complex. Hopefully this journal
can make some advance on what went before, even if we don't get it
right the first time.

Regardless of the outcome, thank you for trying!

Brian Whitworth
----- Original Message -----
From: "Felix Stalder" <felix>
To: <journal>
Sent: Friday, December 04, 2009 1:36 AM
Subject: Re: [jox] Multi-rating mode of evaluation (was: Multi-rating
mode of evaluation / Updating papers)

On Wednesday, 2. December 2009, Stefan Merten wrote:

We should publish only papers that we agree are fit for publication.

But "fit for publication" is not based on a single criterion. There may
be articles which we consider great in many dimensions but which lack
some particular feature. Lacking this feature would normally make them
unacceptable, but if we can express this lack through a rating, then the
credibility of our journal is maintained and the article is

I think multi-rating models are too complicated, and patronizing to
the author and the reader. I mean, if we like the paper enough to publish
it in our journal, we should do it. Period.

Do we really need to say something like: we give this paper an 'a' in
grammar, a 'b+' in originality, an 'a-' in methodology and a 'b-' for
bibliography? Shouldn't the reader be able to figure that out him/herself?

If we think a paper would be great to publish but lacks some critical
aspects, we should ask the authors to revise it before publishing. I
don't see this as censorship or as forcing anything upon the author, but
rather as a process of critical evaluation that leads to an improvement.



-----------------------------
out now:
*|Mediale Kunst/Media Arts Zurich. 13 Positions. Scheidegger & Spiess, 2008
*|Manuel Castells and the Theory of the Network Society. Polity, 2006
*|Open Cultures and the Nature of Networks. Ed. Futura/Revolver, 2005




Dr Mathieu O'Neil
Adjunct Research Fellow
Australian Demographic and Social Research Institute
College of Arts and Social Science
The Australian National University
email: mathieu.oneil


Dr Athina Karatzogianni
Lecturer in Media, Culture and Society
The Dean's Representative (Chinese Partnerships)
Faculty of Arts and Social Sciences
The University of Hull
United Kingdom
T: +44 (0) 1482 465790
F: +44 (0) 1482 466107
,_culture_and_society/staff/karatzogianni,_dr_athina.aspx

Check out Athina's work

Check out the Virtual Communication, Collaboration and Conflict (Virt3C) Conference

