The Canadian philosopher of communication and media, Marshall McLuhan, famously argued:
We see the world through a rear-view mirror. We march backwards into the future.
In the early days of the web it was common for retailers to replicate paper brochures online, so-called ‘brochureware’, missing the interactivity and format opportunities the web provides (and losing customers in the process too!). We continue to transpose our experiences of physical paper and books online, with little or no adaptation to the opportunities for interaction and multimedia.
While mobile technology has been available for decades, its current ubiquity and power (both socially and technologically) mean we find ourselves at the edge of a technological shift. As we move from a desktop to a mobile lifestyle we must be careful not to succumb to the rear-view mirror effect and replicate the desktop experience in the services and systems we design for the mobile user.
We find ourselves inhabiting a very different environment to a few years ago. Where once our computing power was located in one place, it now travels with us, capturing our attention and distracting us no matter where we find ourselves. It connects us to people, places and things in ways not previously possible.
With this mobile lifestyle in mind I want to explore four challenges that mobile technologies present to libraries. In articulating these challenges I hope it will become increasingly clear what strategies and opportunities there are for libraries, and their services, systems and collections.
When you take a look at some of the best mobile experiences, whether apps or websites, they usually have one thing in common: they do one thing extremely well. Everything extraneous is stripped away to leave only the most essential and relevant information.
Exemplars include Rise, an alarm clock app that incorporates visually simple interfaces, combined with gesture recognition and your music playlists. Or Clear, a ‘to do’ app, with intuitive gesture controls and the use of colour to denote urgency – nothing else.
Amazon’s stripped-down app is a good example of a website that has adapted its presence to a mobile experience: only the relevant information is included, and all the complexity is hidden from sight (although you can dig deeper if you wish).
The Amazon example is an interesting one. It invites comparisons with the library catalogue, and it certainly provides an effective template for mobile discovery. However, libraries have a physical infrastructure, processes and technologies that mean refining the mobile experience to a single thing can be hard. When we use a phrase like ‘discovery’ in a library or information-seeking context we often mean a set of interrelated actions, such as: search, select, find and use. Is it possible to break these down into their component parts and still deliver a positive experience for the user, both in terms of the mobile experience and of using the library?
The challenge that mobile devices present to libraries in this context is one of needs over solutions: to think beyond the solutions already in place (the catalogue, the discovery layer) and articulate the actual need. In the case of discovery that might be ‘I need to answer a question’ or ‘I need to find something’. Formulated in this way, it is clear that a solution may look very different to the ones already available.
It forces us to consider the context we’re operating in; it invites us to invent, not retro-fit!
People and Place
Increasingly, the mobile device is a bridge between our online social connectivity and our localised real-world interactions. If you explore a map on your phone you don’t have to tell it where you are; the internal GPS has already done so. Similarly, it can tell you when a friend is nearby through apps like Facebook, Foursquare and so on.
There are a number of interesting examples where libraries and others have exploited these inherent benefits of mobile devices. Mendeley, the reference manager, is a good example of a service that is explicitly looking to build a social layer on top of the bibliographic data they have crowdsourced from the academic community in the form of bibliographies. You can follow academics with similar research interests, build groups and curate and build your own, personalised discovery network.
Increasingly, the discovery experience unfolds and is led by the content itself. What used to be the destination, the content or resource, is now the beginning of the journey.
For example, projects like Bomb Sight, from the National Archives, have taken bomb site map data and made it available as a responsive website so that academics, researchers and members of the public can explore where bombs fell. The data is overlaid on a map and includes images, descriptions and people’s memories.
Similarly, the PhoneBooth project from the London School of Economics mobilised the Charles Booth poverty maps of London so that students and researchers could use and annotate the maps in context, i.e., on the streets of London as part of their learning experience.
Increasingly the discovery process will find itself facilitating peer-to-peer and social recommendation experiences.
The traditional catalogue will itself begin to disappear from these interactions. Instead, the discovery experience will have an intimacy and personalisation associated with it that mirrors the intimately personal experience of the mobile device itself.
The web provides unparalleled opportunities for scale. The local bric-a-brac shop becomes eBay, the bookshop becomes Amazon, the university becomes the massive open online course (MOOC), such as Coursera. Similarly the library begins to operate at ‘web-scale’ with its systems and services.
Yet, the mobile experience is an intimately personal one. It challenges libraries and information providers to find a balance between these two types of scale: the singular (the personal) and the ‘web-scale’. It is not enough simply to adopt web-scale systems and services: mobile challenges us to think about how that web-based interaction is transformed into real-world action.
One opportunity for libraries is in the data that circulates through their systems, both the management data and the user-generated interaction data. There are an increasing number of services and projects looking at exploiting this data for the personalisation of the user experience. These include commercial offerings, of which the best known is bX from Ex Libris.
There are also a number of academic libraries exploring the use of this data, including: SALT (surfacing the academic long tail) and RISE (Recommendations improve the search experience) which are exploring how different sets of data can be used to enhance and personalise the library experience.
The ability of libraries to exploit this data will grow increasingly important. The data provides a way for libraries to continue delivering services to hundreds of thousands of users, while providing the personalised experience that users have come to expect from web-based services.
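The kind of item-to-item recommendation these services build on can be sketched in a few lines. The loan records and item names below are invented for illustration; real services such as bX work over much larger, anonymised and aggregated usage logs:

```python
# A minimal sketch of "borrowers of this item also borrowed..." built
# from circulation data. All loan records here are invented examples.
from collections import Counter
from itertools import combinations

loans = [  # (user, item) pairs from anonymised circulation data
    ("u1", "intro-to-gis"), ("u1", "urban-history"),
    ("u2", "intro-to-gis"), ("u2", "urban-history"), ("u2", "victorian-london"),
    ("u3", "intro-to-gis"), ("u3", "victorian-london"),
]

# Group each user's loans, then count how often pairs of items
# are borrowed by the same user.
borrowed = {}
for user, item in loans:
    borrowed.setdefault(user, set()).add(item)

cooccurrence = Counter()
for items in borrowed.values():
    for a, b in combinations(sorted(items), 2):
        cooccurrence[(a, b)] += 1
        cooccurrence[(b, a)] += 1

def recommend(item, n=2):
    """Return the n items most often co-borrowed with `item`."""
    scores = {b: c for (a, b), c in cooccurrence.items() if a == item}
    return sorted(scores, key=scores.get, reverse=True)[:n]
```

Here `recommend("intro-to-gis")` surfaces the titles most often borrowed alongside it; production systems add weighting, decay and privacy safeguards, but the underlying signal is the same co-occurrence count.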
If the mobile shift challenges libraries to invent new experiences, it also invites us to rethink how we develop and implement these.
As information becomes abundant and digital, the models for how libraries develop and implement new services and systems will radically change too. Libraries are no longer comparing themselves and their services to other libraries; instead they are being compared to the web, and the types of services and resources users can access there. Increasingly libraries will find themselves needing to adopt approaches that would normally be more associated with web start-ups.
This implies a greater focus on ideas (ideas from everywhere: librarians, users and others), rapid iteration and testing, and implementation of ideas (or their quick relegation). This more entrepreneurial approach recognises that there is no simple crossing between how things are now and the future. There is no simple roadmap from the complexities of the information environment as it is now to some stable future; disruption is a feature of the system, not a bug.
While the change in a library’s approach to the user and the work it undertakes is significant, and not easy, there are some straightforward starting points. There are already great examples and case studies of mobile innovation in libraries. The M-Libraries community support blog, for example, includes a large amount of information, including case studies, best practice guides and inspiration from other organisations on how they have transformed services with mobile technology.
Indeed, as many of the examples on the M-Libraries blog demonstrate, the financial overhead for this type of change should be low. Rethinking your approach to the design of mobile services shouldn’t involve significant barriers, either financial or technical. A good place to start is by borrowing ideas from other domains, like software development and design; paper prototyping, used in a recent mobile development workshop, is one such example.
What many of these examples share is a renewed focus on the user. This moves us away from a focus on internal systems and processes, toward the behaviours and requirements of the user. The centre of gravity moves away from the technology and toward the user; the mobile turn is one where the technology is overshadowed by the needs of the user.
The challenges mobile technologies present to libraries are ones drenched in paradox. The hardware (the phone, tablet, ereader) gradually fades from view, and it is the user, with their intricate behaviours and requirements, who remains the focus of our attention.
Unlike so many other technologies, mobile enables the library to rethink its services, systems and processes to ensure that it is the user that remains at their heart. This does not mean business as usual, however. But it does mean that by understanding these challenges and their implications, libraries are in a position to design and deliver mobile experiences that users will want to engage with.
I am very pleased to announce fifteen new projects to enhance the digital infrastructure to support open content in education.
The call for proposals was released in November 2011. We received 34 proposals, and the competition was very tough; I’m grateful to all the expert reviewers who helped evaluate bids. Because of the high standard of proposals we were able to allocate more funding than anticipated: approximately £350,000 of HEA/JISC OER Programme funds.
These projects will be completed by November 2012. As Rapid Innovation projects they use open innovation methods: plenty of blogging, lots of user involvement, and a focus on delivering new tools and functionality.
Here is a taste of what they cover.
OER Rapid Innovation Projects: the full list:
| Project | Description | Lead institution |
| --- | --- | --- |
| Attribute Images | Further developing a tool that allows users to upload images (singly or in bulk), select a Creative Commons licence and specify the name of the copyright holder, publication date and a URL. The tool will then embed a licence attribution statement in the image. It will integrate with Flickr. | University of Nottingham |
| Bebop | The main outcome of this work will be a WordPress plugin that can be used with BuddyPress to extend an individual’s profile to re-present resources held on disparate websites such as Slideshare, Jorum, etc. | University of Lincoln |
| Breaking Down Barriers | Developing open options for Landmap and geo-aware functionality in Jorum, to enable easier and richer sharing of geo-based resources. | University of Manchester, MIMAS |
| CAMILOE | Reclaiming and updating 1,800 quality-assured, evidence-informed reviews of education research, guidance and practice that were produced and updated between 2003 and 2010 and are now archived and difficult to access. | University of Canterbury Christchurch |
| Improving Accessibility to Mathematics | Turning an existing research prototype into an assistive technology tool that will aid accessibility support officers in preparing fully accessible teaching and assessment material in mathematical subjects by translating it into suitable markup. | University of Birmingham |
| Linked Data Approaches to OERs | Extending MIT’s Exhibit tool to allow users to construct bundles of OERs and other online content centred around playback of online video. | Liverpool John Moores University |
| Portfolio Commons | Creating a plugin for the Mahara open source e-portfolio software that will enable users to select content from their portfolio and deposit it into Jorum and EdShare. | University of the Arts London |
| RedFeather | Providing users with a lightweight resource exhibition and discovery platform for the annotation and distribution of teaching materials. | University of Southampton |
| RIDLR | Dynamic Learning Maps meets the Learning Registry UK node (JLeRN): harvesting OERs for specific topics within curriculum and personal learning maps, and sharing paradata. | University of Newcastle |
| | Using cheap/free automatic transcription services to transform video to text, enabling richer subject-specific metadata for cataloguing purposes, using recognised standards and data formats. | University of Oxford |
| SupOERGlue | Piloting the integration of Tatamae’s OER Glue with Dynamic Learning Maps, enabling teachers and learners to generate custom content by aggregating and sequencing OERs related to specific topics. | University of Newcastle |
| SWAP (sharing paradata across widget stores) | Using the Learning Registry infrastructure to share paradata about widgets across multiple widget stores, improving the information available to users for selecting widgets and improving discovery by pooling usage information across stores. | University of Bolton |
| Synote Mobile | Creating a new HTML5 mobile version of Synote to meet the important user need of making web-based OER recordings easier to access, search, manage and exploit for learners, teachers and others. | University of Southampton |
| TRACK OER | OERs in the wild can get lost. This project will add a tracer so they can be found wherever they go, for attribution, research and remix. | Open University |
| Xenith | Adding HTML5 as a delivery platform to Xerte Online Toolkits, allowing content to reach a much greater range of devices. | University of Nottingham |
See the JISC strand page for more detail.
Humanities and the social sciences have traditionally been disciplines aligned closely with the institutional library and its resources and services. Increasingly, in my conversations with librarians, there is a concern that while the library as a space remains popular, this masks a growing distance between the services the library provides and the needs and expectations of researchers (to say nothing of undergrads).
As subjects like digital humanities find themselves transformed by their engagement with technology, is the library facing the threat of redundancy?
There has been a flurry of research recently, including the RLUK report Re-skilling for Research and JISC Collections’ UK Scholarly Reading and the Value of Library Resources, exploring the evolving role of the library in supporting researchers.
Similarly, Ithaka S+R in the US is exploring the changing support needs of scholars across a variety of disciplines. The researcher-centric programme has recently published a ‘memo’ on the interim findings of their NEH-funded History project (they are also exploring Chemistry, funded by JISC). And, as the report makes clear:
To many in the history field and in libraries, it is unclear what the role of the library should be in digital humanities. This is not to imply that there is no role for libraries – only that this role has not yet been widely developed and adopted effectively. Libraries remain very much in transition when it comes to expanding models for supporting research on campus
So, I wanted to explore some of the roles that libraries might have in the Digital Humanities:
- Managing Data: This has undoubtedly become a cliche, but it’s the transformative factor changing research practice. Humanities researchers are increasingly interacting with large corpora; how do libraries support them in this, and the data that is an output from this type of research? This might involve libraries supporting the data management infrastructure, or providing one-to-one support for departments and researchers on best practice. I see libraries playing a role in the collection, re-purposing and organising of data that may lead to further analysis by individual researchers or (sub)departments. What’s critical is that libraries work collaboratively with the researchers/departments: This is not ‘selling’ library services; it is about understanding researchers’ needs and providing the right support.
- Closely connected to this point is the idea of the ‘embedded’ librarian: Providing the support wherever the researcher is; a distributed approach to library services. The librarian becomes the campus Flaneur: Inhabiting the campus and acquiring an understanding of its practices. This active role participates in the activity of the academic metropolis, while always maintaining a distance. The embedded librarian provides immediate support, while always maintaining an eye on the evolution of research practice and relevant support.
- Digitisation and Curation: The examples above assume that much of the data being managed by the library will, in some way, be created by the researchers themselves. Libraries are, of course, great sources of content, and this means they often hold the expertise and infrastructure for digitisation. Libraries have a very meaningful role in the digitisation and curation of that content.
- Digital Preservation: Libraries, probably better than anywhere else on campus, understand preservation. Developers and researchers involved in a DH project are unlikely to, although they will acknowledge its importance. Closely linked with sustainability, this is a significant area for libraries to play a role in. Close collaboration early on will ensure the library is able to provide advice and guidance on standards and best practice. However, as the Preservation of Complex Objects Symposia makes clear, digital resources tend to be complex and their preservation far from straightforward. This is an area that libraries can build on to start having a real impact on these research outputs and their ongoing preservation.
- Discovery and Dissemination: Libraries are increasingly judged by the services they provide, not by the size of their content stores. This means that for digital humanists the library can play a critical role in enabling the discovery of content from across academia and cultural heritage. Furthermore, this role may evolve into one of disseminating scholarly outputs: through campus-based publishing or aggregation of research outputs, advising on metadata and formats to enable dissemination and discovery, and tracking impact across new platforms and interactions (what is increasingly being termed altmetrics).
Questions remain around the ability of the library, and the wider institution, to adapt to the changes that are affecting scholarly practice. While much of the focus of research has been on the library services and how these can be made attractive to researchers, it is clear that a researcher-centric approach needs to be adopted to ensure requirements and future needs are clearly understood.
Finally, I wonder if the values the library represents (openness, access, contemplation etc…) might also be something that needs ‘capturing’. If we only focus on researcher needs, is there a danger that what they see as the value of the library is lost? Is the library an expression of knowledge and prestige within the research community, and does this have a value in itself?
Last week saw a two-day workshop, held at Warwick University, exploring the future of library systems. I wanted to briefly highlight the format of the two days, and reflect on some of the outcomes from the event. In particular, how the workshop has helped inform a new funding call that will be published in early February.
Not so long ago the library management system was the neglected sibling of the library world; but the landscape is changing and it is starting to take centre stage once again. Yet this is a very different world to even just a few years ago. While it regains its moment in the limelight, it is constrained on either side by the emerging importance of resource discovery and e-resource management.
Entitled ‘The Squeezed Middle’, the JISC- and SCONUL-sponsored event was a chance for directors and senior library managers to review the evolving role and requirements of the institutional Library Management System (LMS).
Specifically, the workshop focused on the key developments impacting the shape of library systems, given the current work taking place in both resource discovery (discovery.ac.uk) and the management of subscription and e-resources (Knowledge Base+).
Since 2008 and the publication of the JISC LMS landscape report and the jiscLMS programme, things have changed significantly in the library systems environment. A number of open source systems are emerging, including Evergreen, Koha and Kuali OLE. More importantly, UK higher education has seen the first implementation of an open source LMS, at Staffordshire University – open source library systems have become a viable option.
The workshop aimed to explore this complex landscape, and end the two days with a clear direction of travel for what the future of library systems might look like (and some concrete ways to get there).
The role and functions of the LMS are, to say the least, fairly well embedded in the workflows and everyday business of the academic library. It’s a cliche to invoke the paradigm word, but it could be argued that much of the discussion within this space is caught up in a historic paradigm that has, for a long time, prevented the evolution (let alone revolution) of this business critical system.
The format of the workshop aimed to disrupt this paradigm.
The workshop began with some contextual information on the current library systems landscape. The first day of the workshop was divided into two group discussion sessions focused around four themes: Space, Collections, Systems and Expertise.
The workshop watched a short video presentation by Lorcan Dempsey of OCLC that provided some business modelling context to the discussions. Lorcan’s full video is available here: http://www.youtube.com/watch?v=zzxA4vdJYok&context=C3b48ce9ADOEgsToPDskJJB-K_kohdSvm4fK0yprv9
The breakout discussion sessions were punctuated by four ‘provocations’ from within and beyond the library world. These short, provocative presentations were designed to help extend the discussions around library systems, and prevent the groups from falling back on long-held assumptions and arguments. These future visions (they were meant to be visions of the library world in 2020) were both very creative and helped provide talking points for the groups.
Examples of the presentations can be found on Paul Walk’s blog and Paul Stainthorp’s blog. The other two were by Ken Chad (Ken’s provocation can be found here) and David Kay, and all the presentations will be made available shortly.
The day ended with some ‘homework’ where delegates were asked to prioritise and comment upon some 60 ‘objectives’ on the future role and functionality of the LMS.
The second day was focused on cementing the discussions and explorations of the first day – groups prioritised some of the identified objectives from the homework exercise and slowly a number of critical themes emerged.
Emerging Themes and Priorities
A number of core themes emerged during the two-day workshop. Below I have, very subjectively, chosen a couple to highlight. The full prioritised list of library systems ‘objectives’, the main outcome from the workshop, can be found here. It was very kindly collated by David Kay, who helped facilitate the second day of the workshop.
Data, data everywhere, and not a drop to…
I agree with Richard Nurse from the Open University, who attended the workshop and blogged about the event here:
It also struck me that a lot of the issues, concerns and priorities were about data rather than systems or processes… I do find it particularly interesting that despite the effort that goes into the data that libraries consume, there are some really big tasks to address to flow data around our systems without duplication or unnecessary activity.
I think this is an interesting point. In the conversations I joined it was clear that a lot of discussion was taking place around the data across the library (and the campus) and how a library system might bring this together. Someone mentioned the LMS as a dashboard that aggregated disparate data sets from across the library and campus. The system becomes secondary to the data.
This also came out in the discussions around ‘non-traditional assets’ and how libraries are able to integrate services such as reading lists with resource discovery, VLEs and repositories.
Skills and roles
This was a theme that seemed to run throughout the two days. In particular there was significant discussion around the future transformation of library systems and its impact on current and future staff roles and the skills required.
This issue runs through the library, from practitioner librarians and the new skills and roles that are developing, to managers and senior managers and how they adjust to managing and recruiting for these new roles. These new roles may also frequently sit outside the physical library, or may not be traditionally recognised as part of the library skill-set, and so new ways of working and adaptation to those roles will be required.
Furthermore, there may be a tension between another of the themes, sharing services and systems, and the ability to develop, maintain and justify the relevant skills locally. There was a lot of discussion around whether the outsourcing or sharing of infrastructure (systems in this case), actually affects the local skills the library has. Infrastructure and skills are often thought of as separate, yet the two are more intimately connected than might be expected.
The reality, however, is I suspect more complex than this. Institutions may have already outsourced or shared services and systems; the question is then whether they are still able to develop skills and new roles. Furthermore, there might be some potential for shared services to become central ‘pools’ for developing and deploying these new roles and skills, deployed locally when necessary, enabling institutions to continue to innovate and collaborate.
Shared services and systems
Unsurprisingly this was a big topic of discussion – both in terms of skills, as discussed above, and in terms of defining those services and functions that are maintained locally and those that can benefit from above-campus infrastructure.
There were also some interesting suggestions around a UK research reserve for monographs (something that has been discussed at JISC as well), and considerations around national union catalogues and similar initiatives. Resurrecting the notion of a national union catalogue did somewhat divide the delegates; it was clear that discussions around such infrastructure should be driven by requirements, rather than the assumption that a union catalogue is the answer.
While I don’t think it was ever articulated openly, there seemed to be a sense that the large, one-size-fits-all LMS (whether local or shared) was no longer viable, or particularly attractive. Instead new models are needed – I don’t know what these are necessarily, but they seem to demand a new vision of shared infrastructure around library systems (and services).
Personalisation
It was clear that any future library system (whether local, shared, above campus, etc.) would provide users with the ability to personalise, and to a greater extent control, their library experience. This relates back to the considerations of data earlier; more significantly, the user is able to take that data with them as they progress within the institution and move beyond it (warning: I may be straying slightly into Paul Walk’s future vision of the library!).
JISC has done a significant amount of work around personalisation; in particular the activity data work could be very instrumental in understanding this area further. Important work still needs to be done on simple issues around ownership of the data and legal questions, before the more technical issues can start to be addressed more fundamentally.
The discussion was far richer than my comments above might lead one to believe, but I just wanted to outline some of the highlights.
One of the critical things I took away with me was the need to constantly place these kinds of discussions within wider institutional strategic contexts (research etc). It is easy to deal with these types of issues as if they are hermetically sealed, whereas the reality is much more complex, with various different drivers and barriers.
As I mentioned above, the workshop had a very clear purpose: to help shape a new vision for library systems. This aim was made concrete in a recent funding call I have written, which will be published in very early February: see here for details. This workshop therefore provides a baseline that I can look back on in 12 months’ time to see what the landscape looked like in early 2012!
[All the presentations and provocations will be made available online as part of the forthcoming Library Systems Programme on the JISC webpages].
Over the last couple of weeks three very interesting reports on libraries and linked data have drifted through my news feeds:
- The Library of Congress has announced plans for pursuing a replacement for MARC, and these plans “will be focused on the Web environment, Linked Data principles and mechanisms, and the Resource Description Framework (RDF) as a basic data model”.
- The W3C Library Linked Data Incubator Group released their report. This report recommends that librarians experiment more with linked data by releasing data, building on top of linked data sets, engaging with standards bodies and bringing their preservation skills to bear on datasets and vocabularies.
- A CLIR report has been published on a linked data workshop and survey run by Stanford. The purpose of the workshop was to discuss “the prospects for a large scale, multi-national, multi-institutional prototype of a Linked Data environment for discovery of and navigation among the rapidly, chaotically expanding array of academic information resources.” The report itself is useful for everyone, as it contains sections on the value of a linked data approach for library content and talks about potential killer apps linked data could support.
These seem significant to me and I am inclined to believe that they represent a growing interest in linked data in libraries. Naturally I have some observational bias in this area since JISC has been funding a fair bit of work investigating the potential for linked library data.
- The Discovery programme funded eight projects that made metadata openly available; most of these took a linked data approach. Summaries of lessons from these projects will be available very soon from the Discovery website.
- Andy Powell and Pete Johnston produced a discussion document on a possible metadata approach to support discovery of library, museum and archive content based on linked data. This attracted detailed and passionate discussion from metadata experts.
- The OpenBib project investigated the issues and possibilities offered by open linked data for bibliographic metadata. We have recently funded this team to build on their initial work to show how this approach to bibliographic data can benefit researchers.
- The Archives Hub is engaged in a project called Linking Lives, which will use linked data to enable researchers to explore the relationships between the people and things contained in the archives metadata that the Archives Hub aggregates. This builds on the earlier Locah project.
- SUNCAT is making its journal bibliographic information available as linked data.
There is lots of interesting linked data work happening in the wider world of cultural heritage:
- Europeana is taking a linked data approach
- The British Museum is up to some very interesting stuff as part of the Mellon funded ResearchSpace project
- The British Library is engaged in some exciting experiments with a linked data version of the British National Bibliography
- The BBC Digital Public Space project is making use of RDF data to produce a very exciting aggregation of content with many possibilities
This is just a flavour of some of the developments I am aware of; there are many more, and I don’t doubt that I’ve missed some of the most interesting ones.
So why are so many organisations putting resources into engaging with linked data? Well the advantages of linked data at a very simple level are:
- It enables us to make links between items in different collections, supporting the development of new interfaces and new ways of exploring collections.
- It can make aggregating and exploring very different types of data and resources easier.
- It works very well on the web, enabling clever people to reuse the data to create new tools for engaging with the resources.
- It breaks down the concept of a record of a resource into its component fields, such as people’s names, place names and dates, so we can make better use of them.
- It can potentially reduce duplication of effort if key datasets are shared: you could simply link to a trusted dataset rather than devoting effort to creating that data yourself.
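To make the first and last of these advantages concrete, here is a toy sketch of the core idea: every statement is a subject-predicate-object triple, and shared URIs let records from different collections be joined. All the URIs and datasets below are invented for illustration, and triples are shown as plain Python tuples rather than using a full RDF library:

```python
# Linked data at its simplest: each fact is a (subject, predicate, object)
# triple, and shared URIs let us join records from different collections.
# All URIs below are hypothetical examples.

library_triples = [
    ("http://example.org/book/42", "dc:title", "On the Origin of Species"),
    ("http://example.org/book/42", "dc:creator", "http://example.org/person/darwin"),
]

archive_triples = [
    ("http://example.org/letter/7", "dc:title", "Letter to J. D. Hooker"),
    ("http://example.org/letter/7", "dc:creator", "http://example.org/person/darwin"),
]

# Because both datasets identify the author with the same URI, merging them
# immediately links the book and the letter through their shared creator.
merged = library_triples + archive_triples

def works_by(creator_uri, triples):
    """Find all subjects connected to a given creator URI."""
    return [s for s, p, o in triples if p == "dc:creator" and o == creator_uri]

print(works_by("http://example.org/person/darwin", merged))
# Both the library item and the archive item are returned.
```

In a real deployment the triples would be serialised as RDF and queried with SPARQL, but the principle of joining collections on shared identifiers is the same.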
However, it is far from certain whether linked data will transform the way libraries work or simply become a tool used for some datasets. Many people I trust still have reservations: the skills required to model and create linked data are not commonly held by people in most libraries, and it is not yet clear that there is an obvious return on the investment required to create and exploit linked data.
My personal opinion is that, judging by the amount of effort that influential organisations are putting into linked data projects, it is not something that is going away soon. It seems likely that linked data will develop into a useful tool for at least some of the metadata, or sets of metadata, that librarians use. Senior librarians, and those interested in personal development, will probably need to think about the skills required to engage with this emerging technology.
As part of the JISC Discovery project we will be dedicating effort to making sure that librarians can learn from the projects we fund to investigate linked data. We hope this will be a useful learning resource for those with an interest in developing their linked data knowledge or skills. It should include high-level messages on the value of the approach and detailed lessons on the technical and licensing issues involved. All of our resources will be made available on the Discovery website. We are also planning to provide training on some key topics, so keep your eyes peeled for developments.
If any UK libraries are interested in experimenting in this space or in following the innovations of others, they may want to look at our current funding call which makes money available for UK HE libraries, museums and archives to make metadata openly available. There may be just enough time to put a bid together before the deadline of the 21st of November.
Finally, if you are interested in linked data it is worth watching this blog as my colleague David Flanders is planning some further posts to talk about the possibilities linked data offers for higher education.
What better way to welcome the freshly rebranded Digital Infrastructure team blog than to announce a new funding call that spans nearly all the activities that the team is involved in.
The call is available now from the JISC site and the deadline for submissions is 12 noon on Monday 21st of November.
The call seeks projects in the following areas:
- Resource Discovery – up to 10 projects to implement the Resource Discovery Taskforce vision by funding higher education libraries, archives and museums to make open metadata about their collections available in a sustainable way. Funding of up to £250,000 is available for this work.
- Enhancing the Sustainability of Digital Collections – up to 10 projects to investigate and measure how effectively action can be taken to increase the prospects of sustainability for specified digital resources. Funding of up to £500,000 is available for this work.
- Research Information Management – 3 projects to explore the feasibility, and pilot the delivery, of a national shared service for the reporting of research information from research organisations to funders and other sector agencies; to increase the availability of validated evidence of research impact for research organisations, funders and policy bodies; and to formally evaluate JISC-funded activities in the Research Information Management programme, gathering robust evidence of any benefits accruing to the sector from these activities. Funding of up to £450,000 is available for this work.
- Research Tools – 5 to 10 projects on exploiting technologies and infrastructure in the research process as well as innovating and extending the boundaries to determine the future demands of research on infrastructures. Funding of up to £350,000 is available for this work.
- Applications of the Linking You Toolkit – up to 10 projects investigating the implementation and improvement of the ‘Linking You Toolkit’ for the purpose of demonstrating the benefits that management of institutional URLs can bring to students, researchers, lecturers and other university staff. Funding of up to £140,000 is available for this work.
- Access and Identity Management – 5 to 10 projects investigating the embedding of Access and Identity Management outputs and technological solutions within institutions. Funding of up to £200,000 is available for this work.
As always, JISC programme managers are keen to speak to prospective bidders. We’re always keen to talk ideas through and clarify the finer points of the call document. We have set aside 2 specific days for these conversations, the 26th and 27th of October, so if you are considering a bid please do get in touch to arrange a conversation. If those days aren’t good for you then my team mates and I will be happy to arrange alternative times.
This is always an exciting process for JISC staff as we get to hear lots of exciting ideas so I’m really looking forward to seeing what you clever people come up with this time.
Back in May 2011 I blogged an update on OER and Jorum; here are some progress updates for readers interested in digital infrastructure issues around online educational resources.
The Jorum team have been really busy over the summer transferring the service to a new hosting environment.
Due out imminently is the report on the Value of Re-Use from the OER Impact study, and we also intend to release the report of the Literature Review of Learners’ Use of Online Educational Resources shortly. Both reports deepen our understanding of how open educational resources are used, what evidence we have about their use, and how to realise the potential of open content in education.
Looking back at a post I did last August on OER eInfrastructure Update, there were a number of issues I’d raised as challenges that needed to be addressed in the OER space. I think I’m able to say that things have progressed in the last year, so I’ve taken the opportunity to review how my thinking has developed, and more importantly how the open content world has moved on in the last twelve months.
“Rights – how effectively are creative commons licenses being used, are they being accompanied by attribution information, are they being used by machine services to help find and filter content?” … PROGRESS? In a post on choosing open licences I described the range of support available to ensure you choose the right licence, and highlighted the importance of embedding licensing metadata. Developers have been busy creating the Creative Commons OpenAttribute tool, which is the most promising direction I have seen, forming a building block for potential services like CaPRet (mentioned above), and hopefully many more. I hope we can support the implementation of these tools.
“Platforms – what’s the mix of institutionally-, JISC- and commercially-managed services that best support the range of OERs produced within the UK FE/HE community?” … PROGRESS? John Robertson’s post on UKOER 2 content management platforms explores this. One of the surprises of OER Phase 2 for me has been the potential of the WordPress blogging platform for developers in this space, and web 2.0 services more broadly are being used widely. One question here is which services can be most useful to institutions, and services such as OpenLearn are exploring which the most effective social media platforms are. Another is how this sort of content can best be managed efficiently in the longer term. That’s something we hope to explore further this year. It’s also worth a specific plug for Nick Sheppard’s blog, where he is at the sharp end of implementing innovative approaches in institutional services.
“Aggregation – how is the distributed content drawn back together, by whom, for what purpose? Will people use search to source content that is then packaged into e-textbooks, courses, journals, wikis and blogs?” … PROGRESS? Well, I explored the aggregation question a bit further. We have a project planned to create a prototype of the JISC content portal for OER collections. The UK Discovery initiative is making progress on agreeing ways to share open metadata. And in the US the Learning Registry project is tackling the aggregation problem in an innovative way; we’ve funded some UK contributions to that.
“Data model – will content be embedded, rendered, mirrored, copied? Do we want or need to track it? Is the virtuous circle of use, reuse, feedback an idealised process rather than a reality?” … PROGRESS? These questions are explored in the forthcoming report on the Value of Reuse. As I described in making the most of open content, I think many organisations providing open content will need to understand better the way content is used in order to make effective decisions about provision. I think embedded metadata allowing auto-attribution will really help this become a loop, but we’re still a way off.
“Scope and scale – how far do we need to zoom out to find the most effective points of critical mass for presenting content? Should we only focus on open resources? Is granularity an issue for aggregation and resource discovery?” … PROGRESS? I need to reflect further on this. I tried to explore the supply chain in connecting people through open content, but I am still hearing mixed messages about the points in resource discovery at which open licensing is a key filter, especially given the issues around the ‘O’ in OER.
“Curation and sustainability – how do we sustain subject collections not owned by individual institutions? What needs preserving? Who pays for the long-term hosting?” … PROGRESS? Hmmm. There is a lot of thinking about curation, but I’m not sure how sustainable the models are: it feels like the interest is in an ephemeral curation flow of Twitter-overlay services like paper.li and Scoop.it. The hard decisions about how we pay for it seem to be happening in a space outside OER, and I’m not sure what that means for the sustainability of open practices.
The above isn’t comprehensive, but it’s been useful to reflect on where we have got to on these issues around digital infrastructure for OER.
Trying to see the wood for the trees, I think some paths are emerging.
To move these digital infrastructure issues forward within the OER Programme Phase 3, I have some exciting work planned for this academic year. I am planning a Rapid Innovation Call for November and some kind of developer challenge for spring 2012.
The issues I think might be worth exploring in small technical projects are:
- OER on the move: collating and transforming open assets into “cooked” open educational resources suitable for mobile devices and readers, using appropriate formats such as EPUB and HTML5, taking reuse and licensing into account by providing open source versions where necessary.
- OER and courses: linking content to courses, particularly for sample or taster content, for users to be able to move between content and course information and vice versa. JISC’s programme on course data should provide opportunities for linkage.
- OER and academic profiles: building scalable automatic / semi-automatic profiles of academics’ OER releases, aligned with emerging approaches to managing researcher profiles, e-portfolios and other CPD and CV systems.
- Improving release and aggregation platforms: custom open source / reusable improvements to platforms (e.g. wiki, blog, CMS and repository software) arising from use cases identified in OER phases 1 and 2
- Recommendation, favouriting and liking services: scalable approaches to integrating web-based services into OER platforms and content, building on existing initiatives
- Web analytics: dashboards and visualisation to show patterns for OER release, connections and usage tracking. This would be about using existing tools to meet clear use cases, in ways that can be replicated, and build on the work of JISC’s activity data programme.
I’ll be working to develop and refine these targets for small rapid innovation projects, but I hope that sharing these ideas now will help us identify the best candidates for development funding. Details of funding amounts and timescales will be agreed nearer the time.
If you are interested in these sorts of issues please join the oer-discuss mailing list!
OER Digital Infrastructure Update by Amber Thomas is licensed under a Creative Commons Attribution 3.0 Unported License.
Permissions beyond the scope of this license may be available at http://www.jisc.ac.uk/contactus
Image Credits: See embedded credits thanks to Xpert Attribution tool
Recently the library, museum and archive world has taken to experimenting with open data with a vengeance. It seems an interesting new dataset is released under an open licence most weeks.
There are many motivations behind these data releases but one of the major ones is the hope that someone else will think of something cool to do with the data (to mangle a Rufus Pollock quote).
The rules of the competition are laid out in detail on the Discovery site but in essence all that’s needed to enter the competition is to develop something using one of 10 recommended datasets. You can use other datasets too but you have to do it in conjunction with one or more of the 10 datasets listed on the Discovery site.
I’m probably revealing my nerdy librarian hand here but the 10 datasets are really rich and exciting:
- There is library data from the British Library, Cambridge and Lincoln
- There is archives data from the National Archives and the Archives Hub
- Museum data from the Tyne and Wear Museums collections
- English Heritage places data
- Circulation data from a few UK university libraries
- The musicnet codex
- And search data from the OpenURL router service
There are 13 prizes to be won so there is every incentive to enter even if you are somehow able to resist the siren call of all that exciting data!
The competition is open now and closes on the 1st of August.
CETIS has recently updated our guidance to OER projects. To supplement this, we’ve been thinking about how to ensure that your content is as findable as possible. There are many routes to finding the content released by UK OER projects, including through Jorum, Xpert and search engines. David Kernohan and I have drafted the following guidance to help projects think through the best ways to make OER visible and findable.
Who do you want to be able to find your resources? How can you help them do this?
Firstly, know what RSS functionality your platform offers. One lesson that came through clearly from the phase one OER projects was that you choose a platform for all sorts of reasons; it’s rarely a choice from a blank sheet of paper. Whether you are using iTunes U, an institutional EPrints repository or a homegrown solution, make sure you know how it exposes its content to the web so that you can optimise it for discovery. RSS feeds need to be easily findable, both by people and machines, and contain enough information about the items to be useful in identifying them.
1. What sorts of RSS feed functionality does your chosen platform offer? Do you understand the options?
2. Have you got an RSS reader set up to display the feeds, so that you understand how they work?
3. Do you have an RSS logo clearly visible on your homepage, and does it indicate that a feed of OER content is available (whether from your own website or elsewhere)?
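To make the feed questions above concrete, here is a minimal sketch, using only the Python standard library, of the basic shape an RSS 2.0 feed needs: a channel whose items each carry a title, a direct link to the resource, and a description rich enough to identify it. All project names and URLs below are invented examples:

```python
import xml.etree.ElementTree as ET

# A minimal RSS 2.0 feed: a channel whose items each carry a title, a link
# direct to the resource, and a description that identifies it.
# All names and URLs are hypothetical examples.

def build_feed(channel_title, channel_link, items):
    """Build an RSS 2.0 document from a list of item dictionaries."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = channel_title
    ET.SubElement(channel, "link").text = channel_link
    for item in items:
        node = ET.SubElement(channel, "item")
        for field in ("title", "link", "description"):
            ET.SubElement(node, field).text = item[field]
    return ET.tostring(rss, encoding="unicode")

feed = build_feed(
    "Example OER Project",                       # hypothetical project
    "http://example.ac.uk/oer/",
    [{"title": "Lecture 1 slides",
      "link": "http://example.ac.uk/oer/lecture1",
      "description": "Openly licensed introductory slides."}],
)
print(feed)
```

Most platforms generate feeds like this for you; the point of the sketch is simply to show what aggregators and readers expect to find in each item.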
Imagine someone has heard of your project: via a mailing list, a casual comment from a colleague, a tip from a tutor, or your own publicity. The first thing they would do is head for your website, perhaps via a query to a search engine if they don’t have your URL.
4. Is your content findable using major search engines?
5. If your content is hosted outside your institution, what comes up first in the search results, the content or the project website? Do you want to optimise both for search engines, or choose one as your main presence?
6. Is your project website findable from the home page of the institution or network you are linked to (by browsing and/or by searching)?
If they find your project website, they would immediately ignore all of your fine project documentation, your “about the team” list and your contact details. They would head straight for the resources. So it matters how your project website links to your content (wherever the content is held).
7. Are your open educational resources visible from your project website? Is there an obvious link?
8. When you get to the resources (either from the index of your website or a specific “resources” page) do they look attractive? Is it easy to browse, search and identify resources?
So, someone has found the resource, whether on your own site or a third party site. On finding a resource that interests them, our user may want to view, bookmark or download the resource for later use.
9. Is it possible to view your resource (or at least enough information about the resource to allow people to understand what it is) from your website?
10. Is it possible to directly link to a resource for bookmarking or sharing?
11. Is it obvious how someone can download a resource? Is it obvious what someone needs to do with a resource when they have downloaded it?
The use of the downloaded or direct-linked resource may happen some time after its discovery. At the point of use, our user would need to know how the resource can be used, and what conditions need to be fulfilled to meet the requirements of the license.
12. Is license information available within the actual resource once it has been downloaded?
13. If a license requiring attribution has been used, is full attribution information available within the resource?
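One lightweight way to address both questions is to embed a human- and machine-readable attribution block in the resource itself. The sketch below, in which all the resource details are hypothetical, builds an HTML fragment with a rel="license" link of the kind licence-detection tools can pick up:

```python
# A sketch of generating an embeddable attribution fragment: attribution
# text for humans plus a rel="license" link that machines can detect.
# All resource details below are hypothetical examples.

def attribution_html(title, author, source_url, licence_url, licence_name):
    """Build an HTML fragment embedding licence and attribution metadata."""
    return (
        f'<p>"{title}" by {author}, from <a href="{source_url}">{source_url}</a>, '
        f'licensed under <a rel="license" href="{licence_url}">{licence_name}</a>.</p>'
    )

snippet = attribution_html(
    "Introductory Statistics Slides",              # hypothetical resource
    "A. Lecturer",                                 # hypothetical author
    "http://example.ac.uk/oer/stats-slides",
    "http://creativecommons.org/licenses/by/3.0/",
    "CC BY 3.0",
)
print(snippet)
```

Because the fragment travels inside the resource, the licence and attribution information survive download and reuse, which is exactly what questions 12 and 13 ask for.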
We are increasingly seeing the rise of OER aggregators, automated tools that collect information about OER wherever it is stored and allow people to search across it. Aggregators have the potential to become a key means of discovery for OER. Examples of aggregators include:
- The UKOER Strand Ci (Collections) Projects – see http://www.jisc.ac.uk/oer
- OER Commons – http://www.oercommons.org/
- Xpert – http://www.nottingham.ac.uk/xpert/
- OCW search – http://www.ocwsearch.com
- DiscoverEd – http://discovered.labs.creativecommons.org/search/en/
Different aggregators have different requirements and it is worth properly examining these before you attempt to have your material included. CETIS is working on some advice about registering with aggregators. There are some needs that all aggregators have in common. First of all, an aggregator will need an RSS feed of your material (or a choice of feeds). As with an RSS feed for a blog or a news site, each “item” has its own address that leads directly to that item. The feed also describes the item, allowing aggregators and people to know what the item is and how it should be described to others.
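Before approaching an aggregator, it is worth checking that every item in your feed really does carry its own title, direct link and description. A small validation sketch (the sample feed below is invented for illustration):

```python
import xml.etree.ElementTree as ET

# Before submitting a feed to an aggregator, check that every item carries
# its own direct link and enough description to identify the resource.
# The sample feed below is a hypothetical example.

def check_feed_items(feed_xml):
    """Return a list of problems found in an RSS feed's items."""
    problems = []
    root = ET.fromstring(feed_xml)
    for i, item in enumerate(root.iter("item")):
        for field in ("title", "link", "description"):
            node = item.find(field)
            if node is None or not (node.text or "").strip():
                problems.append(f"item {i}: missing {field}")
    return problems

sample = """<rss version="2.0"><channel><title>OER feed</title>
<item><title>Lecture 1</title><link>http://example.ac.uk/oer/1</link>
<description>Openly licensed slides.</description></item>
<item><title>Lecture 2</title></item>
</channel></rss>"""

print(check_feed_items(sample))
# The second item is flagged for its missing link and description.
```

A check like this catches the most common reason feeds fail at aggregators: items that point only at the homepage, or carry no description at all.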
Machine-readable feeds are important, and it’s also important that humans find your OER entry pages attractive. We’d also like to have a showcase like the JISC content portal: for very little money we could create one for UK OER content, but for it to work we need good, attractive homepages describing the nature of the content you have released and linking clearly to the content, wherever it lives.
We think the wealth of content created so far would benefit from a little extra push to make sure it is found in all the right places.
Post by Amber Thomas and David Kernohan. With particular thanks to Phil Barker’s recent post on sharing service information, Pat Lockley’s tireless crusade for better quality RSS to feed Xpert, and colleagues across the OER projects.