Wednesday, 6 July 2011

ERA rankings

The article Dropping ERA rankings 'correct decision': Ellen Hazelkorn includes some rather silly statements:
According to Professor Hazelkorn, best known for her work on global university rankings, there are serious doubts about the role of journals in academic culture.
Perhaps in non-scientific disciplines ...
Although there were relatively few complaints about rankings being wrongly assigned, they were bitter and persistent, arguably distracting from an exercise that on the whole had been judged effective and worthwhile by higher education leaders in Australia.
Indeed, it could be argued that the soaring number of journals is a response to the increasing complexity of knowledge.
To some extent this is true.
This proliferation might be an acknowledgment that there were many legitimate ways of thinking or could be a reaction to the perception that journals were closed shops to contrary viewpoints or methodologies.
No. The principal reasons for this proliferation are:
  • publishers see money in new journals;
  • one publisher wants to take market share from another;
  • the number of academics is increasing, especially in China and India, and they all want their research published;
  • web and electronic publication have made launching a journal easy;
  • university libraries are often forced to subscribe to a whole suite of journals from a single publisher.
Rowbotham writes:
Another problem was an over-reliance on peer review to measure research impact.
Impact is perceived simply as that which is read within the academic community rather than impact on society.
No. Impact is measured by how the research affects the discipline.
Many articles are published, but how many actually have beneficial value for society?
A good question, but not relevant to the original topic.
Assessment should go beyond simply reviewing what one academic has written and another has read.
Obviously true!
Today, policy-makers and the wider society want to know how the research can be used to solve major societal and global challenges.
Yes, but this has almost nothing to do with peer review of journal articles, which is still the best mechanism for determining the quality of research.

The real problem is with the quality and consistency of peer review.

US Studies Centre chief operating officer Sean Gallagher wrote in last week's HES that
With such strategic importance placed on interdisciplinary research by the world's best universities, along with society's heavy expectations, the pressure will increase on global university rankings to measure a university's IDR performance alongside its disciplinary output.
So, come up with a universally agreed way to measure this, and it will be measured.
