UKSG top takeaways: “open or broken”, intelligent textbooks, research stories
Several years ago, I started the UKSG blog to report on the organization’s annual conference, which provides a forum for publishers and librarians to network and share strategic and practical ideas. Between 2006 and 2012, I enthused on the blog about topics including metrics, publishing evolution, innovation, research assessment, user behaviour and workflows. All those topics still fascinate me today (expect more on all of these from my Touchpaper postings) – and they were all covered again at UKSG this year. But this year – shock, horror! – I wasn’t blogging about them; my role for UKSG has changed, and others are carrying the blog torch now.
This frees me up to take a more reflective look at what I have learned at UKSG, rather than trying to capture it all in real time for those who can’t attend. So – here on “my” new blog, TBI’s Touchpaper – is my snapshot of another great conference:
1. Let go of “publish or perish”. Accept “open or broken”.
UK academics’ submissions to REF 2020 (the Research Excellence Framework, the process by which the UK government evaluates research at academic institutions) *must* be OA at the point of publication. That is surely the game-changer that means, from this point on, academics will be trying to submit their best work to publications that support immediate OA. We may not yet have worked out all the kinks, but events have overtaken us; it’s time to satisfice – adopt an imperfect model and refine it as we go. The lack of additional government funding for article processing charges (APCs) means that this particular mandate will have to be met as much by “green” self-archiving OA as by “gold” version-of-record OA. Both publishers and higher education institutions need to be sure they have a clear strategy for each. (More from Phil Sykes’ opening plenary)
2. Information resources should be SO much more intelligent.
We were all blown away by student Josh Harding’s vision of textbooks that “study me as I study them” – using learning analytics to identify a student’s strengths and weaknesses, comparing these to other students’, adapting content as a consequence, reminding the student to study (“telling me I’m about to forget something because I haven’t looked back at it since I learned it”) and generally responding to the fact that we learn not just by reading, but also by hearing, speaking, and (inter)acting with information. (The highlight of the conference – Josh’s talk is must-see inspiration for all publishers’ product development and innovation.)
3. Authors need help to tell better stories about their research.
With increased pressure to justify funding, and the need to communicate more effectively with business and the general public, researchers need to be able to highlight what’s interesting about their work and demonstrate its impact. The journal article is but one node in a disaggregated network of outputs that together make up the picture of their research, and that network needs to be more cohesively visible. At the moment the journal article is the hub, but it doesn’t do a great job of opening up the rest of the network. I think publishers’ futures will be shaped by the extent to which they help academics surface and tell that whole story. (More from Paul Groth and Mike Taylor’s breakout on altmetrics.)