The following are some things I particularly noticed at the Berlin 11 open access conference, which I attended earlier this week, with apologies for any misunderstandings, misattributions or mis-categorisations.
The research and education enterprise is in need of urgent transformation (Manfred Laubichler, Arizona State University)
The epistemic web: a universal and traceable web of knowledge. (Ulrich Poschl, Max Planck)
There can’t be insiders and outsiders in scientific knowledge (David Willetts, UK Government)
Aim for ZEN: Zero Embargo Now (Glyn Moody)
Students at the heart of the open access movement (Nick Shockey, Right to Research Coalition)
Open access: disruption is a feature, not a bug. (Mike Taylor, University of Bristol)
We need disruption to ensure that we don’t carry unhelpful practices into new scholarly communications arrangements (Cameron Neylon, PLOS)
While cultural change takes time, we need to watch that we don’t set in stone things that we know not to be right (Cameron Neylon, PLOS)
The status of scientific communication
Researchers should control the organisation of scientific knowledge, and bottom-up standardisation ensures acceptance (Robert Schlogl, MPG)
Scientific knowledge is a public good, and top-down policies ensure action to ensure it remains so (Cameron Neylon, PLOS)
Clay Shirky says that publishing is not a job or an industry, it’s a button. The publishing services we need are analogous to Red Hat services for Linux software (Glyn Moody)
We need to recognise that public research and education is paid in advance, and the IPR created is in a distinct class of its own (John Willinsky, PKP and Stanford)
All disciplines can move to OA, in different ways (Gunter Stock, ALLEA)
Scientific knowledge is diverse; we need biblio-diversity, beyond the Web of Science monoculture. (Marin Dacos, OpenEdition)
The social sciences and humanities need their own subject repositories (Nicholas Canny, ERC)
Some of the concerns about OA from the social sciences and humanities are legitimate, and some are not (Nicholas Canny, ERC)
We are balancing cost-effectiveness, openness, fast access and effective quality review (Roger Genet, Directorate General for Research and Innovation, France)
Will the US OSTP Directive be codified into law and, if so, will it be stronger, weaker or about the same? (Heather Joseph, SPARC)
Researchers don’t write to communicate, but to be seen and counted; if communication is important, then it needs to be incentivised. (Cameron Neylon, PLOS)
Most scientists don’t directly benefit from OA (Robert Schlogl, Max Planck)
Unless their publication is in the repository, it doesn’t exist (Bernard Rentier, University of Liege)
Making the transition to OA
Subscription funds must be moved to pay for Gold OA (Peter Gruss, MPG)
Take internationally coordinated action to cut subscription budgets by 30% and reallocate the money for APCs (Ulrich Poschl, Max Planck)
Authors might refuse to submit papers to hybrid journals that are seen to be “double dipping”. (David Willetts, UK Government)
We expect publishers to take action on transparent and competitive pricing to show that the UK was right to support the hybrid model (David Willetts, UK Government)
A UK Minister of State cannot advise independent universities on their promotion practices, for example to encourage OA, but the Royal Society might (David Willetts, UK Government)
As students, we use music, art, poetry and even free medical examinations to interest people in OA (Daniel Mutonga, Medical Student Association of Kenya)
The core competences in a digital age are navigation, authentication, integration and innovation (Manfred Laubichler, Arizona State University)
Trust us (David Carroll and Joseph McArthur, students and inventors of the OA button, see below)
A new kind of library, the DPLA, has been launched by bringing together coalitions of libraries and of funders (Robert Darnton, Harvard University)
There needs to be greater coordination between research funders, and the French Academy has agreed to support a biennial funders meeting held between Berlin conferences. (Peter Gruss, MPG)
The Berlin conferences will from now on be held biennially (Peter Gruss, MPG)
We need every research organisation to have a committee reporting directly to the head of the organisation, to monitor progress, experiments, requirements and infrastructure (Robert Schlogl, Max Planck)
Standards and interoperability are key: we need a new standards body for open access and open data (Robert Schlogl, Max Planck)
Linux, arXiv and the Web all started in a single week in August 1991 (Glyn Moody)
With SciELO and tailored versions of OJS and DSpace, Brazil and Latin America are leading OA (Sely Costa, University of Brasilia)
China and India have heard concerns from the west that they are not opening their research as quickly as the west. Now, some 34% of Chinese papers are OA (Xiaolin Zhang, Chinese Academy of Sciences)
The lack of OA journals is limiting Gold OA growth (Ulrich Poschl, Max Planck)
We need to instrument the research process (Cameron Neylon, PLOS)
We need to agree which data are useful to us in monitoring progress toward OA (Robert Schlogl, Max Planck)
We need evidence on both the costs and wider benefits of shorter embargo periods (Heather Joseph, SPARC)
The open access button will make visible the occasions where people hit a paywall (David Carroll and Joseph McArthur, students)
Research, scholarship and the wider economy and society
Germany invests 2.9% of GDP in R+D, and is revising copyright law to help innovation (Georg Schutte, Federal Ministry of Education and Research, Germany)
Chinese spending on R+D is rising at 15% – 20% p.a. Citations and collaborations in Chinese papers are increasing (Xiaolin Zhang, Chinese Academy of Sciences)
Of the 1m unique users of PubMed Central a day, two-thirds come from outside the academic domain (Heather Joseph, SPARC)
Who needs access outside research institutions? whoneedsaccess.org (Mike Taylor, University of Bristol)
The fragments (documents, photographs, etc) that are of little importance to someone, might be of immense value to someone else as a piece of their history, and together they create stories for the future (Haim Gertner, Yad Vashem)
The DPLA is a distributed and democratic model; we have “scannebago” vehicles going to local communities to digitise content that is important to them (Robert Darnton, Harvard University)
Why is software not included in the Berlin Declaration? (Glyn Moody)
Researchers might see software as their core intellectual property, and might not want to share it openly. The Royal Society will consider this. (David Willetts, UK Government)
Only two of the top 20 big data companies are European; public-private partnerships might improve this for Europe (Carl-Christian Buhr, EC)
The scholarly record is challenged by a separation between idea (publication) and evidence (data), and more concretely by link rot. (David Willetts, UK Government)
Research communities need to take the lead on data: the Royal Society Open Data Forum will consider the issues of standards and skills (David Willetts, UK Government)
Over the next few weeks we plan to blog a series of posts covering some of the main topics surrounding the creation, curation and consumption of ebooks in teaching, learning and research.
There’s little doubt about the growing use and importance of ebooks within universities.
Statistics compiled by the University of York, for example, show that the number of ebooks provided by the library has increased massively: from 22,878 in 2010/11 to a total of 576,689 in 2011/12.
One of the reasons for this rapid growth is that ebooks provide access to library collections 24/7, every day of the year, on and off campus, from students’ and researchers’ preferred devices.
However, research by Jisc, Jisc Collections and others has highlighted the barriers that pose serious challenges to institutions who wish to exploit the potential of ebooks and ebook technology.
A recent Jisc project, “The challenges of e-books in academic institutions”, led by Ken Chad, has produced a number of case studies illustrating how ebooks are created and managed by institutions, and has analysed the ways in which ebooks are used.
In a series of blog posts we‘ll try to give an overview of the current ebook landscape, based on the work of the project and supplemented with further relevant content.
Each post will describe a particular topic and highlight challenges, lessons learned and emerging trends. Some of the topics we’re thinking of covering are:
- Patron Driven Acquisition
- Campus based publishing
- ebooks and the role of the library
- Beyond the pdf
- Licensing and legal issues
- Preservation of ebooks
- Impact – the usage of ebooks and the student experience
We’re also going to include examples and links to further resources for each topic.
Stay tuned for more on new purchasing models for ebooks later this week!
Research Data Alliance (RDA) – second plenary meeting – Washington DC – 16th – 18th September.
Many readers of this blog will know about the Research Data Alliance already, but there will, I guess, also be a lot of people who don’t. I am using this post as an introduction to the RDA, having this week been to Washington DC to attend the second plenary meeting of the organisation.
With all of the interest, and some urgency, around research data publishing, management and re-use at government, university and disciplinary level, and with research being global, there is a need to join data up through shared practices, standards, policies and infrastructure. That’s where the RDA comes in.
The RDA builds on initiatives such as DataONE in the US; the initiatives across Europe, such as the Jisc research data activity, that take place in many member states and have collectively informed the EC’s direction on research data infrastructure as part of the forthcoming Horizon 2020; and the Australian National Data Service. It has been formed to address these ‘joining-up’ challenges and to build a global community that can contribute to shared practice and, ultimately, a more sustainable way to build the infrastructure and intersections required to support data-driven research and innovation.
The founding members from funding-type agencies are the US National Science Foundation (working also with Chris Greer from NIST), the European Commission and the Australian National Data Service (ANDS). Over the past year these partners have carefully consulted and built a global community that encourages bottom-up sharing and agreement. I have been to some prior gatherings, and had discussions with Ross Wilkinson from ANDS, Carlos Morais-Pires from the EC, Juan Bicarregui from STFC, and others, and witnessed their planning and progress. In Europe, engagement is overseen by RDA Europe; Norman Wiseman from Jisc sits on the Strategic Forum that oversees this on behalf of the Knowledge Exchange (KE does a lot of work on research data!). It’s a big ask, forming a structure that can collaboratively take on the research data challenge, and I have to say the meeting this week in Washington demonstrated pretty impressive progress.
So, in short, over the past year a set of working groups and interest groups have been formed to work collectively on key issues, and Washington was really the first time they met face to face to develop their work. The initiative was formally launched at the first plenary meeting, held 18th–20th March 2013 in Gothenburg, Sweden, where groups started to form their case statements for work; in Washington these groups were able to show early outcomes and to form firmer priorities and plans.
So what are they (we) working on? It’s a long list [see here for the current list: https://rd-alliance.org/working-and-interest-groups.html]. Some of the areas the groups are tackling: metadata and a metadata standards directory; legal interoperability; data citation; a community capability model; persistent identifiers; practical policy; data foundation and terminology; big data and analytics; and more, including interest groups that cover disciplinary areas such as agriculture and history and ethnography.
This Alliance is still forming, but from what I experienced in Washington it certainly has a lot of potential and should be an essential vehicle for research data interoperability. Following the group discussions this week there was a plenary update from all of the groups highlighting their priorities (given in the grand setting of the US National Academy of Sciences), and Mark Parsons, RDA/US Managing Director, facilitated a discussion on the scope and ways of working. It was a really useful discussion, and one where I think there was consensus that the RDA isn’t a standards body but more of a clearing house for best practice, standards and approaches. So if you’re interested, join up! I think it is an important initiative that will help to address the organisational, social and technical infrastructure required for real research data sharing. Jisc and the Digital Curation Centre (DCC) are engaged in the initiative and will continue to be so, and we will tie in UK activities as best we can, so we can learn from others and feed in the lessons and emerging practice from the UK, and get to that utopia… a global research data infrastructure (note: there are many UK participants already).
We will continue to give updates on progress to keep people in the loop. But if it is your bag, go ahead and join in the discussions. Currently there are 800 members from over 50 countries and, having been there this week, I can say it’s an impressive crowd…
Yes it is early days – but it’s important and thus far very positive. Looking forward to seeing more progress – I think there will be!
Jisc, in partnership with NERC, has commissioned work to examine the value and impact of the British Atmospheric Data Centre (BADC). Charles Beagrie Ltd, the Centre for Strategic Economic Studies at Victoria University, and the British Atmospheric Data Centre are pleased to announce key findings ahead of the forthcoming publication of the results of the study on the value and impact of the BADC. The study will be available for download on 30th September at: http://www.jisc.ac.uk/whatwedo/programmes/di_directions/strategicdirections/badc.aspx
The study shows the benefits of integrating qualitative approaches exploring user perceptions and non-economic dimensions of value with quantitative economic approaches to measuring the value and impacts of research data services.
The measurable economic benefits of BADC substantially exceed its operational costs. A very significant increase in research efficiency was reported by users as a result of their using BADC data and services, estimated to be worth at least £10 million per annum.
The value of the increase in return on investment in data resulting from the additional use facilitated by the BADC was estimated to be between £11 million and £34 million over thirty years (net present value) from one year’s investment – effectively, a 4-fold to 12-fold return on investment in the BADC service.
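To illustrate how a net present value figure translates into a return multiple of this kind (a minimal sketch using hypothetical placeholder figures, not the study's actual model or inputs), a constant annual benefit stream can be discounted over thirty years and compared against a single year's investment:

```python
# Hypothetical illustration of an NPV-based return multiple.
# All figures below are placeholders, not the BADC study's inputs.

def npv(annual_benefit, years, discount_rate):
    """Net present value of a constant annual benefit received
    at the end of each year for `years` years."""
    return sum(annual_benefit / (1 + discount_rate) ** t
               for t in range(1, years + 1))

investment = 1.0e6                       # hypothetical one-year investment (GBP)
benefits = npv(annual_benefit=0.6e6,     # hypothetical annual benefit (GBP)
               years=30,
               discount_rate=0.035)      # illustrative discount rate

print(f"NPV of benefits: £{benefits:,.0f}")
print(f"Return multiple: {benefits / investment:.1f}x the investment")
```

With these placeholder inputs the multiple falls within the 4-fold to 12-fold range reported above; the study's own inputs and discounting assumptions will of course differ.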
The qualitative analysis also shows strong support for the BADC, with many users and depositors aware of the value of the services for them personally and for the wider user community.
For example, the user survey showed that 81% of the academic users who responded reported that BADC was very or extremely important for their academic research, and 53% of respondents reported that it would have a major or severe impact on their work if they could not access BADC data and services.
Surveyed depositors cited having the data preserved for the long term, and its dissemination being targeted to the academic community, as the most beneficial aspects of depositing data with the BADC; both were rated as a high or very high benefit by around 76% of respondents.
The study engaged the expertise of Neil Beagrie of Charles Beagrie Ltd and Professor John Houghton of Victoria University, to examine indicators of the value of digital collections and services provided by the BADC.
The findings of this study are relevant to the community attending the conferences below, hence this announcement.
13th EMS Annual Meeting & 11th European Conference on Applications of Meteorology (ECAM) | 09 – 13 September 2013 | Reading, United Kingdom
2013 European Space Agency Living Planet Symposium
The British Atmospheric Data Centre (BADC)
The BADC, based at the STFC Rutherford Appleton Laboratory in the UK, is the Natural Environment Research Council’s (NERC) Designated Data Centre for the Atmospheric Sciences. Its role is to assist UK atmospheric researchers to locate, access, and interpret atmospheric data and to ensure the long-term integrity of atmospheric data produced by NERC projects. There is also considerable interest from the international research community in BADC data holdings.
The following post appeared as a question-and-answer piece in the August edition of CILIP Update magazine
What are the main benefits to the library of adopting open source?
There are some well-known benefits that open source can bring to libraries, including:
• Lower costs: Open source offers a lower total cost of ownership than traditional library systems, with none of the licence costs traditionally associated with them. Libraries are also able to take advantage of the reduced costs the cloud offers, cutting local support and hosting costs (if the system is supported and hosted by a third party).
• No lock-in: Libraries are, in a sense, freed from the lock-in traditionally associated with library systems. There is a greater opportunity to pick and choose components, and to take advantage of what is generally better interoperability with open source solutions. Related to this is the idea that open source is more sustainable: if a vendor goes out of business, proprietary software may disappear or be sold on, whereas open source code remains available, usually with a community around it to continue its development.
• Adaptation and innovation: Connected to the above is the greater capacity libraries have to innovate with open systems and software. There is no need to await the next update or release; instead, libraries can develop the functionality they require, either alone or collaboratively. This enables much more agile services and systems, and helps to meet, and even exceed, user expectations.
• A richer library systems ecosystem: A less direct impact of open source is a richer library systems ecosystem, both in terms of the library solutions available (a healthier marketplace with both proprietary and open solutions) and in terms of collaboration and engagement between libraries themselves. Libraries are able to collaborate and share code for the functionality and fixes they require. Indeed, there are open source systems, such as Evergreen, that were developed specifically to support a consortial approach.
While these are the headline-grabbing benefits, it might be argued there are more subtle, but nonetheless powerful, benefits to the adoption of open source in libraries, especially within higher and further education. There are broader trends and themes emerging (and some fairly well entrenched) within the new information environment that make open source particularly timely for libraries. These developments include: open (linked) data; managing research data; open scholarship and science; open content such as OERs; crowdsourcing; and, of course, open access. Open source solutions for the library fit very well into this broader open momentum affecting the academic world at present.
Away from the academic world, it is difficult not to notice the close correlation between the open, learning, sharing and peer-production culture that libraries embody and that of open source.
So it may be that one of the greatest benefits of adopting open source is that it mirrors the very philosophy and values of the library itself.
Is it something all libraries should consider, or are there limitations to its usefulness as a solution (if so, what are the limitations)?
There are very few barriers to any library adopting an open source library system. The business models that surround open source library systems are currently based on third parties offering support and hosting services for libraries looking to implement a solution. Effectively, this means any library could take advantage of an open system.
There can sometimes be very pragmatic limitations to the systems themselves – the open source library management system Koha, for example, doesn’t include an inter-library loan module (although the community recognises this and has a wiki to collect requirements for the module’s development).
For me, open source offers libraries an exciting opportunity to better understand the skills, roles and processes that are critical to the library’s community of users (whether academic, public or other). Open source can be simply about outsourcing your system and support to a third party; but it can also be about re-evaluating services and systems, and understanding where the real value of the library lies. This may mean that support for the open source LMS is outsourced to a third party, so that local developers can work with librarians to ensure services are innovative and meet the needs of users.
Open source is an opportunity for the library to become more agile, and adopt a more ‘start-up’ like culture to the development and deployment of services.
What are the main barriers to a library adopting open source? (fear of the unknown, lack of technical ability etc)
It would be simple to blame the slow adoption of open source library systems on fear – fear of the unknown, cost, security, perception; the list could go on. These are real concerns within the library community. But it would miss the fact that libraries are already using open source software: discovery interfaces such as Blacklight and VuFind, for example, which themselves often run on top of the open search platform Apache Solr.
Search and discovery are critical functions of the library, so these are not inconsequential adoptions.
Furthermore, there is a small but growing recognition of the viability of open source for libraries. Halton Borough Council was the first to adopt open source for its public libraries, and the University of Staffordshire was the first UK university to adopt an open source library management system. These early adopters are raising the profile of open source and helping to make it a visible alternative.
These developments point to potentially more entrenched barriers to adoption. One such barrier is the impact institutional and organisational procurement processes have on decision making (this, it might be argued, is as much a barrier to the development and adoption of proprietary systems as it is to the adoption of open source). The procurement process for libraries (certainly in the academic sphere) has not traditionally explored innovative approaches; instead it has focused on relatively static, core specifications. This has had the effect of reinforcing the type of system, and the systems approach, that institutions and organisations adopt in their tenders to suppliers.
For many organisations it might be summed up simply as: who do you put a tender out to in the case of an open source solution?
However, many of the more superficial barriers are already largely redundant within the sector – the viability of open source in general has been proved by the adoption of open source operating systems such as Linux in most sectors, including business. Some of the more embedded organisational issues may take time to resolve, but these too are starting to dissolve as institutions seek to make efficiencies and adopt new approaches to procurement.
Are there issues over ongoing support? And do libraries need a decent IT department to even consider open source?
As I mention above, IT support isn’t necessarily an issue for the library; it can be outsourced to a third party if necessary. But having the right technical skills in the library is essential – whether or not you’re choosing an open source solution.
However, the IT department does play an important role (whether based in the library or the wider organisation), as they are the people you’ll be talking to a lot about your decision. I think the key issue regarding the IT department is making sure they understand what you’re doing, and getting them on your side!
There are also opportunities for libraries to engage in projects which share many of the characteristics of open source, but which have a slightly different approach. Examples include shared community activity such as Knowledge Base+ in the UK (a shared community knowledge base for electronic resources) which is a collaboration between HE libraries to improve the quality of e-resource metadata. Or the US ‘community source’ project KualiOLE (an Open Library Environment designed by libraries) where you pay to join the project to affect development, but the code for the system is open source. These examples build on the library’s tradition of openness and collaboration, and provide similar kinds of benefits to straightforward open source software.
Finally, it might just be that the greatest issue facing libraries over open source has already been overcome. David Parkes, Associate Director at the University of Staffordshire, jokes that you should never be first. Of course, Staffordshire was the first HE institution to implement an open source library system, so in many ways he’s removed the biggest hurdle to adoption there is!
- HELibTech wiki (open source library software page): http://lglibtech.wikispaces.com/Open+Source+Library+Systems+in+the+UK
- The business case for an open source library management system (Video of a presentation given by David Parkes, University of Staffordshire): http://www.esi.ac.uk/meetings/1114/videos/4807
- LIS-OSS@JISCMail.ac.uk – discussion list about open source systems and software in libraries
In his article in the New York Times, Robert Crease wrote:
We look away from what we are measuring, and why we are measuring, and fixate on the measuring itself.
For libraries, so used to collecting, managing and analysing various sets of data and metrics, this is a critical point.
It is also a sentiment that kicked off the 10th Northumbria conference on Performance Measurement in Libraries held in York earlier this week.
Elliot Shore from ARL (Association of Research Libraries) spoke about the need for libraries to take heed of this advice: to focus on the ‘fit’ of what we’re measuring.
This fit, as Shore calls it, has been evolving over the past 10 years as the role and presence of the library has changed. The digital environment and changing technologies and expectations of users means that what was once important to measure and capture may no longer have the same urgency.
This focus on what should be measured – and how it impacts on the role and shape of the library – was developed in a great talk by Margie Jantti at the University of Wollongong in Australia.
Margie talked about the constant flow of information and data that her staff (relationship managers) get from researchers and academic staff, which is used to tailor services and focus resources on priority services. This has seen the library develop expertise in publication support for researchers.
The large knowledgebase of data the library collects on its users enables it to punch far above its weight, helping to develop a fast, agile and world-class library team.
Finally, one thing that emerged from a majority of the presentations during the conference was the increasing recognition that data and metrics from inside or about the library are no longer enough. The field from which data and metrics are harvested is growing, reaching further beyond the library: into the teaching and learning space, through to research, registry and student services, and beyond.
The idea that library performance measurement requires only data from the library – or from its immediate vicinity – no longer holds.
So, it was against this background that the Library Analytics and Metrics Project (LAMP) presented at the conference.
We provided some of the background to the project (where it has come from and the work that has led us to this point) and provided an overview of the work so far and how you can get involved and follow the progress of the project.
For me, what’s really interesting is that LAMP has the potential to bring in data from across the institution (and beyond) to help inform decision making and how and where resources are allocated. It also takes away the burden of collecting the data, providing the space for libraries to act on it and to think strategically about what they want and should be measuring and analysing.
The conference was also useful in bringing to my attention LibQUAL+, and the potential for LAMP to work with that data too (although this may be something for further down the development pipeline).
You can find a link to our presentation here. At the end are some ways that you and your library can get involved – so do feel free to get in touch.
In the UK, the repository network is well established, and supports access to research papers and other digital assets created within universities. Increasingly, repositories are part of a research information management solution helping to track and report on research.
Over the past few years, Jisc has worked with a number of partners, including the University of Nottingham (Sherpa Services), EDINA, Mimas and the Open University (Knowledge Media Institute) to develop a range of services that benefit UK research by making institutional repositories more efficient and effective. Estimates put the annual net benefit, simply in terms of time saved by universities, at around £1.4m. Following a comprehensive review of the business case for these services, Jisc now intends to build on the RepositoryNet+ project led by EDINA, to put key services onto a more sustainable footing, including financial, organisational and technical aspects of their operation.
Sherpa Services run Sherpa-RoMEO and Sherpa-Juliet, the former providing trusted information about the rights of authors to deposit their papers into repositories, the latter providing a list of research funders’ open access policies, to which grant holders should comply. Together, these underpin the new Sherpa-FACT service, whose initial development has been supported by the Research Councils and the Wellcome Trust. Jisc proposes to work with these services over the next few months to identify a medium term strategy for each, and to support them thereafter, working in partnership where appropriate with others such as the Councils and Wellcome.
EDINA have developed the Repository Junction Broker, which promises to support mass deposit of papers from publishers and subject repositories into institutional repositories. Should the proposed HEFCE policy with respect to OA and the REF be confirmed, this will be a key service enabling institutional repositories to play their role in submissions to the next REF. Again, the proposed plan is to work with EDINA to develop a medium term strategy for RJB, and support it thereafter, while exploring a range of sustainability options with other stakeholders.
Mimas provide IRUS-UK, which enables institutional and other repositories to share usage data in a way that complies with international standards, so that usage reports can be reliably compared. Over 30 UK repositories are already part of the IRUS-UK network, with more joining all the time. Jisc intends to continue its support for IRUS-UK.
Other services have grown up alongside these, to varying levels of maturity. For example, EDINA have developed an organisational and repository identifier service, and explored a disaster recovery service for repositories. The Open University Knowledge Media Institute has developed CORE, a sophisticated aggregation and discovery service for repositories worldwide. Jisc looks forward to working with these services too, to ensure that UK repositories, their host organisations, and the people who use them benefit from greater efficiencies and a more responsive infrastructure.
As we move forward with this work, we will post updates here and elsewhere. If you have any questions or comments on this work, please contact Neil Jacobs (firstname.lastname@example.org) or Balviar Notay (email@example.com).
On Monday this week the Library Systems Programme held a one-day workshop in London.
I’ll talk more about some of the things that came out of the workshop in later posts – for now I just wanted to share some of the presentations which were given during the day.
You can also see what people were saying about the event on Twitter in this Storify created by Helen Harrop from the LMS Change project:
[View the story “Jisc Library Systems Programme Event” on Storify]
The workshop was a chance for the projects that made up the programme to talk about the work they had done and the tools and resources they have created, and a chance for the community to discuss some of the issues and challenges that the sector currently faces.
The workshop was opened by Rachel Bruce of Jisc and Ann Rossiter of SCONUL, who introduced some of the main themes of the day.
The workshop had three main strands that explored:
- Collaborative Systems and Services;
- Transforming Workflows and Practices; and
- Tools and Techniques for Systems Change.
The workshop was then drawn to a close with a panel, chaired by Suzanne Enright of the University of Westminster, which explored what would be on delegates’ LMS wishlists. Among the points raised:
- there was a call for a greater focus on (maybe even a commitment to) open source systems and the need for us to transcend the LMS,
- the need for better exploitation of the data in our systems, and
- the suggestion that the library sector may not understand, or have the right skills, to effectively inhabit an increasingly web-based environment.
Collaborative Services and Systems
This session included presentations from projects exploring the potential to develop shared library systems and services. These were projects by SCURL in Scotland, WHELF in Wales and the Bloomsbury Consortium in London.
The SCURL project has contributed towards a new vision for library systems by investigating the following question: “How would a shared library management system improve services in Scotland?”
Building on the work of the earlier ‘WHELF: Sharing a Library Management System’ feasibility report, the project has explored the potential benefits and pain points of a move from a distributed to a centralised hosting and infrastructure model for a suite of library systems software, while building a possible overall business case for such a move by the HEIs within the WHELF consortium.
The Bloomsbury Library Management Consortium is building on the strengths of the Bloomsbury Colleges and Senate House Library and their track record for sharing and collaboration. The group undertook a study of the landscape of 21st-century library management systems (LMSs), evaluating the options for building, commissioning or procuring a Bloomsbury Library Management System (BLMS) as a shared service.
The presentation from the Bloomsbury consortium can be found here: 2013-07-15_JISC-Event-BLMS-for-circulation.
The group have made a decision in principle to adopt the Kuali OLE open source/community library system.
Transforming workflows and processes
This session included a number of presentations exploring the impact of new systems and technologies on traditional library workflows and processes.
HIKE is exploring the integration of next generation library systems (specifically Intota) at the University of Huddersfield with Knowledgebase+ and the impact on traditional workflows and processes.
EBASS25 is a collaborative project, led by Royal Holloway, University of London, to develop shared models of ebook procurement using patron-driven acquisition (PDA) approaches.
[presentation to be added]
The Collaborative Collections Management project saw King’s College London and Senate House libraries collaborate on above-campus initiatives around collection management for the benefit of students and researchers, and on the use of the Copac collection management tool.
Tools and techniques for systems change
The LMS Change project ran this entire session themselves, showcasing the tools and approaches they have developed during the project and introducing participants to some of those tools. The LMS Change presentation is below, and Ken Chad’s presentation on the business case for change can also be found here: Business_case_for_change_Jisc_LMSchange_wkshop_KenChad_July2013
The Canadian philosopher of communication and media, Marshall McLuhan, famously argued:
We look at the present through a rear-view mirror. We march backwards into the future
In the early days of the web it was common for retailers to replicate paper brochures online – so-called ‘brochureware’ – missing the interactivity and format opportunities the web provides (and losing customers in the process too!). We continue to transpose our experiences of physical paper and books online, with little or no adaptation to the opportunities for interaction and multimedia.
While mobile technology has been available for decades, its current ubiquity and power (both socially and technologically) mean we find ourselves at the edge of a technological shift. As we move from a desktop to a mobile lifestyle we must be careful not to succumb to the rear-view mirror effect and replicate the desktop experience in the services and systems we design for the mobile user.
We find ourselves inhabiting a very different environment to a few years ago. Where once our computing power was located in one place, it now travels with us, capturing and distracting us no matter where we find ourselves. It connects us to people, places and things in ways not previously possible.
With this mobile lifestyle in mind I want to explore four challenges that mobile technologies present to libraries. In articulating these challenges I hope it will become increasingly clear what strategies and opportunities there are for libraries, and their services, systems and collections.
When you look at some of the best mobile experiences, whether apps or websites, they usually have one thing in common: they do one thing extremely well. Everything extraneous is stripped away to leave only the most essential and relevant information.
Exemplars include Rise, an alarm clock app that combines a visually simple interface with gesture recognition and your music playlists, or Clear, a to-do app with intuitive gesture controls and the use of colour to denote urgency – nothing else.
Amazon’s stripped-down app is a good example of a website that has adapted its presence to a mobile experience: only the relevant information is included and all the complexity is hidden away from sight (although you can dig deeper if you wish).
The Amazon example is an interesting one. It invites comparisons with the library catalogue, and it certainly provides an effective template for mobile discovery. However, libraries have a physical infrastructure, processes and technologies that mean refining the mobile experience to a single thing can be hard. When we use a phrase like ‘discovery’ in a library or information-seeking context we often mean a set of interrelated actions, such as: search, select, find and use. Is it possible to break these down into their component parts and still deliver a positive experience for the user, both in terms of the mobile experience and of using the library?
The challenge mobile devices present to libraries in this context is one of needs over solutions. The challenge is to think beyond the solutions already in place (the catalogue, the discovery layer) and to articulate the actual need. In the case of discovery, maybe ‘I need to answer a question’ or ‘I need to find something’. Formulated in this way, it is clear that a solution may be very different from the ones already available.
It forces us to consider the context we’re operating in; it invites us to invent, not retro-fit!
People and Place
Increasingly, the mobile device is a bridge between our online social connectivity and our localised real-world interactions. If you explore a map on your phone you don’t have to tell it where you are; the internal GPS has already told it. Similarly, it can tell you when a friend is nearby through apps like Facebook, Foursquare and so on.
There are a number of interesting examples where libraries and others have exploited these inherent benefits of mobile devices. Mendeley, the reference manager, is a good example of a service that is explicitly looking to build a social layer on top of the bibliographic data they have crowdsourced from the academic community in the form of bibliographies. You can follow academics with similar research interests, build groups and curate and build your own, personalised discovery network.
Increasingly, the discovery experience unfolds and is led by the content itself. What used to be the destination, the content or resource, is now the beginning of the journey.
For example, projects like Bomb Sight, from the National Archives, have taken bomb site map data and made it available as a responsive website so that academics, researchers and members of the public can explore where bombs fell. The data is overlaid on a map and includes images, descriptions and people’s memories.
Similarly, the PhoneBooth project from the London School of Economics mobilised the Charles Booth poverty maps of London so that students and researchers could use and annotate the maps in context, i.e., on the streets of London as part of their learning experience.
Increasingly the discovery process will find itself facilitating peer-to-peer and social recommendation experiences.
The traditional catalogue will itself begin to disappear from these interactions. Instead, the discovery experience will have an intimacy and personalisation associated with it that mirrors the intimately personal experience of the mobile device itself.
The web provides unparalleled opportunities for scale. The local bric-a-brac shop becomes eBay, the bookshop becomes Amazon, the university becomes the massive open online course (MOOC) such as Coursera. Similarly, the library begins to operate at ‘web-scale’ with its systems and services.
Yet, the mobile experience is an intimately personal one. It challenges libraries and information providers to find a balance between these two types of scale: the singular (the personal) and the ‘web-scale’. It is not enough simply to adopt web-scale systems and services: mobile challenges us to think about how that web-based interaction is transformed into real-world action.
One opportunity for libraries is in the data that circulates through their systems, both the management data and the user-generated interaction data. There are an increasing number of services and projects looking at exploiting this data for the personalisation of the user experience. These include commercial offerings, of which the best known is bX from Ex Libris.
There are also a number of academic libraries exploring the use of this data, including SALT (Surfacing the Academic Long Tail) and RISE (Recommendations Improve the Search Experience), which are exploring how different sets of data can be used to enhance and personalise the library experience.
The ability of libraries to exploit this data will grow increasingly important. The data provides a way for libraries to continue delivering services to hundreds of thousands of users, while providing the personalised experience that users have come to expect from web-based services.
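To make the idea concrete, the kind of recommendation these services explore can be sketched as item-to-item co-occurrence over circulation data. This is a rough illustration only, with made-up loan records, and is not the actual method used by bX, SALT or RISE:

```python
from collections import Counter, defaultdict

# Hypothetical loan records: (user_id, item_id) pairs.
loans = [
    ("u1", "maps"), ("u1", "atlases"),
    ("u2", "maps"), ("u2", "atlases"), ("u2", "gazetteers"),
    ("u3", "maps"), ("u3", "gazetteers"),
]

def recommend(loans, item, top_n=3):
    """Rank items by how often they were borrowed by the same
    users who borrowed `item` (simple co-occurrence counting)."""
    by_user = defaultdict(set)
    for user, it in loans:
        by_user[user].add(it)
    counts = Counter()
    for items in by_user.values():
        if item in items:
            for other in items - {item}:
                counts[other] += 1
    return [it for it, _ in counts.most_common(top_n)]

# Users who borrowed "maps" also borrowed "atlases" and "gazetteers".
print(recommend(loans, "maps"))
```

Real services layer much more on top of this (normalisation for item popularity, privacy safeguards, COUNTER-style usage aggregation), but the core signal – ‘people who used this also used that’ – is the same.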
If the mobile shift challenges libraries to invent new experiences, it also invites us to rethink how we develop and implement these.
As information becomes abundant and digital, the models for how libraries develop and implement new services and systems will radically change too. Libraries are no longer comparing themselves and their services to other libraries; instead they are being compared to the web, and the types of services and resources users can access there. Increasingly libraries will find themselves needing to adopt approaches that would normally be more associated with web start-ups.
This implies a greater focus on ideas (ideas from everywhere: librarians, users et al), rapid iteration and testing, and implementation of the idea (or its quick relegation). This more entrepreneurial approach recognises that there is no simple roadmap from the complexities of the current information environment to some stable future; disruption is a feature, not a bug, of the system.
While the change in a library’s approach to the user and to the work it undertakes is significant, and not easy, there are some straightforward starting points. There are already great examples and case studies of mobile innovation in libraries. The M-Libraries community support blog, for example, includes a large amount of information, including case studies, best practice guides and inspiration from other organisations on how they have transformed services with mobile technology.
Indeed, as many of the examples on the M-Libraries blog demonstrate, the financial overhead for this type of change should be low. Rethinking your approach to the design of mobile services shouldn’t involve significant barriers, either financial or technical. A good place to start is by borrowing ideas from other domains, like software development and design: paper prototyping, used in a recent mobile development workshop, is one such example.
What many of these examples share is a renewed focus on the user. They move us away from a focus on internal systems and processes, toward the behaviours and requirements of the user. The centre of gravity shifts from the technology to the user; the mobile turn is one where the technology is overshadowed by the needs of the user.
The challenges mobile technologies present to libraries are ones drenched in paradox. The hardware (the phone, tablet, ereader) gradually fades from view, and it is the user, with their intricate behaviours and requirements, who remains the focus of our attention.
Unlike so many other technologies, mobile enables the library to rethink its services, systems and processes to ensure that it is the user that remains at their heart. This does not mean business as usual, however. But it does mean that by understanding these challenges and their implications, libraries are in a position to design and deliver mobile experiences that users will want to engage with.
As part of the Library Systems Programme, two reports have been published exploring the potential for shared library systems across Universities in both Scotland and Wales.
In this first of two posts I want to briefly introduce the two recently published reports and their main findings and recommendations. In the second post I will highlight some of the other developments on the shared library systems landscape, and some of their implications.
A Shared LMS for Wales (WHELF)
The Welsh Shared Service Library Management System Feasibility Report focussed on the most prevalent and practical issues for a shared all-Wales HE library management system, in broad terms:
A set of high-level agreed consortium requirements for a shared LMS.
A proposed governance model for the consortium.
High level recommendations on integration requirements for local systems; map communications standards which are applicable to the project against standards in use by suppliers.
A business case for a Wales-wide consortium LMS, including cost matrices for the different approaches presented.
Recommendations on the most cost-effective approach for software, hosting and ongoing management of the LMS.
The report makes the following recommendations:
The report recommends setting up an all-Wales consortium with formal governance. This requires the consortium to formally agree which processes, working practices and configurations will be adhered to by all members as a whole.
A cloud solution hosted by a vendor (or open source vendor) is the preferred option, because this will provide the most cost-effective, resilient solution.
Further work will be required to develop a clear statement on the vision for shared LMS services in Wales, ensuring clarity of purpose and providing a compelling statement of intent for senior stakeholders and staff to achieve buy-in to the strategic direction proposed.
The report suggests a phased approach to implementation, anticipating that the first implementations will be no sooner than summer 2014.
The report also suggests that a task-and-finish group should be convened to quickly put together a high-level plan, costs and cost allocation (i.e. funding) for the establishment of a project team.
The Benefits of Sharing (SCURL)
How would a shared library management system improve services in Scotland?
While the question is simple, the answer is a little more complex. Indeed, the project began looking at the question with an initial workshop and subsequent report.
It then broke the problem into three parts:
The project also published a summary report which concludes with a number of recommendations, including the following:
From a systems perspective, sharing technical infrastructure and support structures would offer benefits of economies of scale, with more efficient use of staffing and greater expertise than any single library could offer. System options such as Open Source (OS) alternatives to ‘off the shelf’ commercial products could, therefore, become viable. It is recommended that at the tender and procurement phases of a shared LMS, all options, including OS systems, are reviewed and assessed.
Both reports make very interesting reading – and also tell us a lot about the current library systems landscape. In particular, there is a renewed vigour in the potential for sharing and collaborating around services and systems between libraries and institutions.
There is also a clear recognition that open source solutions are viable options for the community, and may represent a feature of this new library landscape.
In the second post on shared library services and systems I’ll explore some of the other developments within this landscape, and the implications they have for institutions, libraries and systems vendors.