
Re: [jox] Cutting the Knot



Mathieu and other colleagues:

I like the signals approach. One way forward is to have author signals: a) desire for traditional accept/reject treatment, b) desire for multi-valent assessment, c) desire for feedback and suggestions for improvement. One could imagine these being non-exclusive categories, so all three could be selected. My only concern here is one of versioning: should there be a rule about how many iterations an author gets in revising a submitted paper (do later submissions replace earlier submissions and, if so, does the version history disappear?). There is a balance here between 1) calling upon the community to aid in author production and 2) community co-production. There is also a question of explicit or implicit hierarchy: does opting for traditional treatment buy front page treatment?
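
Purely to make the versioning question concrete, here is an illustrative sketch (hypothetical names, not a proposal for any particular platform) of signals as non-exclusive categories alongside a version history that is kept rather than overwritten:

from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class AuthorSignal(Enum):
    # The three (non-exclusive) author signals discussed above.
    TRADITIONAL_ACCEPT_REJECT = "traditional accept/reject treatment"
    MULTIVALENT_ASSESSMENT = "multi-valent assessment"
    FEEDBACK_AND_SUGGESTIONS = "feedback and suggestions for improvement"

@dataclass
class Version:
    number: int
    submitted_on: date
    text: str  # or a reference to the stored document

@dataclass
class Submission:
    title: str
    signals: set = field(default_factory=set)     # any combination of AuthorSignal
    versions: list = field(default_factory=list)  # the full history is retained

    def revise(self, text: str, when: date) -> None:
        # Append a new version instead of replacing the previous one.
        self.versions.append(Version(len(self.versions) + 1, when, text))

Under such a scheme earlier versions never silently disappear; whether they should also be publicly visible is a separate decision.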

I think the issue of rejection has become a detail of implementation. Should 'rejected' papers (i.e. those in which the author signalled traditional treatment) be entered at author request in the 'journal' (archive), or does the selection of the traditional approach constitute an author commitment to abide by the decision? Does this include not submitting the same or 'slightly revised' article without the traditional treatment flag? Does this have any relation to the next comment below on periodisation of the journal (e.g. traditionally reviewed papers 'accepted' within a particular time frame)?

I am still not sure whether there is a periodisation of the 'journal' and, if so, how it is established. I do think that a reference point in time is useful, both for citation practice and as a guide to the 'currency' of a contribution, and one that is a bit more granular than one or more date fields -- e.g. so one could examine the titles/abstracts of contributions in some period in 2011.

Best regards, Ed Steinmueller

Mathieu O'Neil wrote:
Hi Toni, all

Thanks for the comments below. Based on what has been posted in the last week, here is my take on where we are concerning the decisions I suggested needed to be made:

- regarding web or list discussions, everyone so far has expressed a preference for the web (with Toni giving some extra food for thought below).

- regarding categories, there seems to be consensus that "rate" is not a good name for them; "signal" might be better. These "signals" allow us to publish papers which we might otherwise reject because of this or that failing. Authors have to bear the burden of deciding whether they want to be published with a "no" answer to a signal (remembering there was strong disquiet with numerical ratings). I will post another message with a proposed list of "signals".

- Ed and Toni both offered criticisms of, and suggestions for, the whole submission process. The question of rejection in particular needs to be addressed. It is clear that there would need to be another reviewer intervention after any revisions have been made: whether this would simply be a matter of assigning signals or a more substantive demand (further revisions, for example) needs to be decided. In addition, Toni has endorsed StefanMn's suggestion to offer authors a choice of submission process: either accept/reject, or signals. This would mean devising two publication processes. What do others think?
Now is the time to voice your opinion...

cheers
mathieu

----- Original Message -----
From: Toni Prug <tony irational.org>
Date: Sunday, February 28, 2010 4:07 am
Subject: Re: [jox] Cutting the Knot
To: journal oekonux.org

Hi all,

Given the length, here's the summary:

I support StefanMn's proposal for letting authors choose the qualifying model (binary/ratings).

I extend it with a choice of _early screening_ and peer reviewing models (both offered in open and closed variants).

However, I'm not convinced _rating_ is the most suitable name for signalling the attributes of an article.

I also support the use of Plone over an email list.

Journal Commons is a new project working on providing advanced tools and organizational techniques for cooperation in knowledge production. We're using Plone and working closely with two journals (in private for now); I will inform the list of our milestones in the coming months.

yours,
toni

-------

My firm belief is that academic journals should facilitate, assist and improve the production and spread/distribution of new knowledge. Journals do not produce new knowledge. Authors do. This, in my experience, is lost on almost all of the journals I have learned about so far.

Instead, there is rigidity on the side of journals, which dictate to authors a set of rules that, given our level of technological development, make little sense today. At the same time, I believe that, to a large extent because of the opaqueness of the peer reviewing system, authors are not being helpful to journal editors either: they regularly send articles with vast deficiencies in quality of argument, novelty, or simply required formatting, which rightly drives editorial boards mad and adds to their workload through what is often seen as a lack of respect on the part of authors.

I think that two moves could help to reduce tensions on both sides and improve production: journals should be more accommodating and open to the needs of authors, while simultaneously being stricter in applying social pressure on authors to observe this new, more flexible set of rules, one more appropriate to our times.

StefanMn then proposed a choice where authors submitting a proposal could indicate whether they want a binary model (publish or reject) or a multi-dimensional rating system:
[See http://www.oekonux.org/journal/list/archive/msg00212.html]
I like this proposal. I see it as a desirable increase in flexibility on the side of the journal, which authors, I believe, will appreciate. As for readers, I don't see a problem with some articles being rated and others not. There could always be an icon displayed next to an article, indicating that it is a rated article. I expect that as time goes on, more authors will choose ratings, since they will figure out that this gives them a better chance of being published (with any problems the article might have noted by the ratings) than relying on the binary publish/reject decision.

My view is that the binary model is terrible, unscientific, and antagonistic for entirely the wrong reasons (I'm a big supporter of antagonism as a method, when suitable) and should be gradually replaced. However, new models should not be imposed, but rather offered as alternatives that are monitored, evaluated and improved. We have had too many centuries of the binary model to displace it overnight, and authors would rightly be skeptical of such a move without first seeing its benefits in practice.

Hence, I think that StefanMn's proposal is an excellent way to win over authors in favor of new systems, in a way that gives them the chance both to choose and to observe how the new models work. I much prefer this approach to having only new systems (ratings, open process, or otherwise), because I see how it would enable us to win authors over and to demonstrate that we are not claiming to know for certain what is right and what is wrong. This 'we know best for sure' position is the attitude of almost all the journals I have interacted with (perhaps I was unlucky, but it seems a pattern), and it is no wonder authors send all kinds of junk: they see journals as arrogant, uncooperative and self-serving gatekeepers to their career advancement. (I sent my first ever journal article submission, one on the open process in academic publishing, to a journal only recently, so this is not entirely my own experience, but what I have observed from years of being surrounded by academic friends and colleagues, plus the interactions I had with several journals in the past year.) Pushed by the publish-or-perish evaluation model, by the academic rewards model in general, and by the arrogance and uncooperativeness of journals, authors do often behave badly. But this comes partially, perhaps largely, out of desperation.

We can improve this relationship a lot with these new models. The result, I hope (convinced by the arguments and analysis), will be far better mutual respect in the author-journal-reader relationship, and it is fitting for a peer production journal to be self-reflective and hence an innovator in its own field of production. The general spirit of the two papers referenced (Reinventing academic publishing online + Open Process) is for me the spirit of peer production: of less centralized, yet more structured and more beneficially (for all sides) organized systems, in the context and sphere of knowledge production.

Second, if we want to open up the journal selection process and provide rewards to those who do normally invisible work (i.e. reviewers), in line with Toni Prug's proposal [2] for a community peer review system (through a list where proposals are vetted and reviews are released), then by definition we are rejecting the publish / don't publish model: vetting and orientation occur upstream, even before an actual full submission.
Indeed. The open process model relies on discovering, and fixing when possible, problems at an early stage, when the cost of doing so is low on both sides (authors, editors/reviewers). However, in the humanities and social sciences it is often the case that the quality and novelty of a contribution cannot be seen until the whole argument is developed into a longer, more fine-grained piece, i.e. the details are sometimes all that matters in a piece, and we cannot see them at an early stage. The situation seems even worse in the natural sciences, where months, or years, of experimental lab work might not result in a single publishable paper or new findings (I was told). Yet it is funders, or heads of research centres, who still make a judgment on the plausibility of a project before it starts, hence acting as a form of early peer review. We can try to act in a similar way, judging the plausibility of a short proposal being developed into something we consider worth publishing. Also, many ideas are visibly worth developing at an early stage, within the first thousand words, as a rough proposal.

In short, there are issues with the open process as well, but I also do not see it as a binary either/or. I would like to see us offering authors the option to choose between open and closed twice:

a) EARLY SCREENING: authors choose whether to submit a proposal (say, up to 1000 words) for a paper; they can do so in either an open OR a closed way, i.e. their proposals, our comments, their comments back, and our decision (either "Yes, please develop and submit" OR "No, we don't think it's for this journal") are either publicly visible or not, depending on what the authors choose.

b) REVIEWING: use open OR closed peer reviewing for full-length articles, i.e. authors do not have to go through a) at all; they can simply follow the traditional model and submit a full paper, choosing whether the peer reviewing is open or closed. (A rough sketch of these two choices follows below.)

c) SPECIAL CASES: there are several possible complications in which the journal will have to make a decision. One example is that a series of peer reviewers who are approached may refuse to review an article if their names are used, because they are intimidated by the importance of the author, or for some other reason. In that case, the journal might approach the author and insist that the peer reviews be done without the reviewers' names being used. There are other possible twists and resolutions; we'll learn as we do it.

Later, we could move to a more refined set of early screening and peer reviewing workflows (where some aspects might be open to some groups at some stages of the process, and not at others), as we all learn in practice what works well, what does not, and how to develop it further.
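
As a very rough sketch of the two choices in a) and b), and purely to fix ideas (hypothetical names, not a description of any existing system), the options an author faces could be represented as simply as this:

from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Visibility(Enum):
    OPEN = "open"      # proposals, comments and decisions are publicly visible
    CLOSED = "closed"  # the exchange stays between author and editors/reviewers

@dataclass
class SubmissionChoices:
    # a) EARLY SCREENING: optional short proposal (e.g. up to 1000 words);
    #    None means the author skips screening and submits a full paper directly.
    early_screening: Optional[Visibility]
    # b) REVIEWING: peer review of the full-length article.
    reviewing: Visibility

# Example: an author skips early screening and asks for open peer review.
choices = SubmissionChoices(early_screening=None, reviewing=Visibility.OPEN)

The special cases in c) would then be handled outside this simple picture, by the journal negotiating with the author.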

In order to protect the reputation of the journal, we need to alert readers that we are aware of flaws, but that _we decided to publish anyway_. Hence the need to “qualify” or “signal” (rather than “rate”) published submissions.

Yes, I agree, very well put. I'm not convinced _rating_ is the best term for what we're intending to do either, although I like the procedure and support it. "Qualify" and "signal" seem clumsy, though not entirely unusable, substitutes. When we say that a paper has been rated, it is intuitive. Not so when we say it has been signalled, or qualified. I'm undecided. Any other ideas on how to name this?

So, this is the first point to decide: what categories do we have?
I have to think about this separately.

4-decision: review discussion system
+ email OR web platform for reviewing - the shades of openness

I love email lists, and I find it hard to accept more complex web tools for collaboration, since, as a rule with rare exceptions, I find them less, not more, helpful as tools/environments to assist and change work positively. However, I was recently given an introduction to Plone, and I was won over by it. Excuse the technical language for a moment: it seems to me that workflows, transitions and fine-grained access for groups, including the acquisition mechanism [1], will lend themselves well to a variety of degrees of openness and structures of workflows for peer reviewing. Even more so if each Folder or Collection object (which contain other objects, i.e. submitted documents) can have RSS feeds we can subscribe to, including a unified feed; this is likely not so difficult to add, even if it does not currently exist in Plone and its plugins (called products in Plone). For example, see the product http://plone.org/products/collective.watcherlist which enables any object within Plone (documents, folders, etc., if I'm not mistaken) to be watched, so that watchers automatically receive emails on any change in the observed object.
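
To make the workflow point a bit more concrete, here is a deliberately simplified, plain-Python sketch of the states/transitions/role-guards idea. To be clear, this is not the Plone workflow API (in Plone this would be configured through its workflow tool rather than hand-written code); it only illustrates the concept:

# state -> {transition_name: (required_role, next_state)}
REVIEW_WORKFLOW = {
    "submitted":           {"send_to_review": ("Editor", "under_review")},
    "under_review":        {"request_revisions": ("Reviewer", "revisions_requested"),
                            "signal_and_publish": ("Editor", "published")},
    "revisions_requested": {"resubmit": ("Author", "under_review")},
    "published":           {},
}

def transition(state: str, action: str, role: str) -> str:
    # Return the next state if `role` is allowed to perform `action` in `state`.
    required_role, next_state = REVIEW_WORKFLOW[state][action]
    if role != required_role:
        raise PermissionError(f"{role} may not perform '{action}' in state '{state}'")
    return next_state

# Example: an editor moves a freshly submitted article into review.
state = transition("submitted", "send_to_review", role="Editor")

Whether a given state (and the comments attached to it) is publicly visible would be an extra attribute on each state, which is exactly the kind of fine-grained openness I have in mind.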

For two of the projects I'm working on with two other journals, we decided to use Plone. We're now working on building workflows to present to the academics involved in those projects.

I would therefore recommend using Plone (and not just any other web tool), rather than an email list, with one important note: to increase the chances that academic users of Plone will be satisfied with it as a helpful platform, we must keep things simple to start with, i.e. we must not alienate early users with complex workflows and procedures.

Complexity can be added once initial usage picks up and once the usual resistance to new systems (I have plenty of it too) is overcome through satisfaction with the benefits that the platform brings to its users.

When we're done with the projects I'm currently working on, we will share the workflows and all the information on our Plone setup and use, and the comments we received from the academic communities we're working within. So far, the journals and communities have expressed the wish to do this development process in private, until they reach a decision on whether to use the new models we are presenting; this should be within the next month or two.

Finally, Juan Grigera (who is working with me on this) and I decided to create a project out of this; we named it Journal Commons. We will launch it once the proposals we're working on have been decided on. We aim to support journals in implementing new processes of cooperation, using advanced web tools and organizational techniques. Given the number of technically skilled people here, and the use of Plone, we have a basis for potential close cooperation on Journal Commons as a separate project. We'll keep the list informed of major project milestones in the coming months.

--------

[1] 'Acquisition allows behavior to be distributed hierarchically throughout the system [...] you can change an object’s behavior by changing where it is located in the object hierarchy.' http://docs.zope.org/zope2/zope2book/Acquisition.html
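
As a tiny, purely conceptual illustration of that idea (this is not Zope's actual implementation, which relies on its own acquisition machinery), an object that lacks an attribute can simply fall back to its container, so its behaviour depends on where it sits in the hierarchy:

class Acquirer:
    def __init__(self, parent=None, **attrs):
        self.__dict__["parent"] = parent
        self.__dict__.update(attrs)

    def __getattr__(self, name):
        # Called only when normal lookup fails: delegate to the container.
        if self.parent is not None:
            return getattr(self.parent, name)
        raise AttributeError(name)

journal = Acquirer(review_policy="open")
special_issue = Acquirer(parent=journal)   # no policy of its own
print(special_issue.review_policy)         # "open", acquired from the journal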






______________________________
http://www.oekonux.org/journal

****
Dr Mathieu O'Neil
Adjunct Research Fellow
Australian Demographic and Social Research Institute
College of Arts and Social Science
The Australian National University
email: mathieu.oneil[at]anu.edu.au
web: http://adsri.anu.edu.au/people/visitors/mathieu.php




