Research Data Sharing without barriers…get involved?

Research Data Alliance (RDA) – second plenary meeting – Washington DC – 16th – 18th September.

Many readers of this blog will know about the Research Data Alliance already, but I guess there will also be a lot of people who don't. I am using this post as an introduction to the RDA, having been in Washington DC this week to attend the second plenary meeting of the organisation.

With all of the interest, and some urgency, around research data publishing, management and re-use at government, university and disciplinary level, and with an eye on research being global, there is a need to join the data up with shared practices, standards, policies and infrastructure. That's where the RDA comes in.

The RDA builds on initiatives such as DataONE in the US; the initiatives across Europe, such as the Jisc research data activity, that take place in many member states and have collectively informed the EC's direction on research data infrastructure as part of the forthcoming Horizon 2020; and the Australian National Data Service. It has been formed to address the 'joining-up' challenges and to build a global community that can contribute to shared practice and, ultimately, a more sustainable way to build the infrastructure and intersections required to support data-driven research and innovation.

The founding members from funding-type agencies are the US National Science Foundation (working also with Chris Greer from NIST), the European Commission and the Australian National Data Service (ANDS). Over the past year these partners have carefully consulted and built a community that is global and encourages bottom-up sharing and agreement. I have been to some prior gatherings, and had discussions with Ross Wilkinson from ANDS, Carlos Morais-Pires from the EC, Juan Bicarregui from STFC, and others, and witnessed their planning and progress. In Europe, engagement is overseen by RDA Europe; Norman Wiseman from Jisc is on the Strategic Forum that oversees this on behalf of the Knowledge Exchange (KE), which does a lot of work on research data. It's a big ask, forming a structure that can collaboratively take on the research data challenge, but I have to say the meeting this week in Washington demonstrated pretty impressive progress.

In short, over the past year a set of working groups and interest groups have been formed to work collectively on key issues, and Washington was really the first time they were face to face to develop their work. There was a first plenary meeting from 18th–20th March 2013 in Gothenburg, Sweden, where the initiative was formally launched and groups started to form their case statements for work, but in Washington these groups were able to show early outcomes and to form firmer priorities and plans.

So what are they (we) working on? It's a long list [see here for the current list]. Some of the areas the groups are tackling: metadata and a metadata standards directory; legal interoperability; data citation; a community capability model; persistent identifiers; practical policy; data foundation and terminology; big data and analytics; and more, including interest groups that cover disciplinary areas such as agriculture, and history and ethnography.

This Alliance is still forming, but from what I experienced in Washington it certainly has a lot of potential and should be an essential vehicle for research data interoperability. In Washington this week, following the group discussions, there was a plenary update from all of the groups highlighting their priorities (given in the grand setting of the US National Academy of Sciences), and Mark Parsons, RDA/US Managing Director, facilitated a discussion on the scope and ways of working. It was a really useful discussion, and one where I think there was consensus that RDA isn't a standards body but more of a clearing house for best practice, standards and approaches. So if you're interested, join up. I think it is an important initiative that will help to address the organisational, social and technical infrastructure required for real research data sharing. Jisc and the Digital Curation Centre (DCC) are engaged in the initiative and will continue to be so, and we will tie in UK activities as best we can, so that we learn from others and also input the lessons and emerging practice from the UK, and get to that utopia: a global research data infrastructure (note: there are many UK participants already).

We will continue to give updates on progress to try to keep people in the loop. But if it is your bag, go ahead and join in the discussions. Currently there are 800 members from over 50 countries, and having been there this week I can say it's an impressive crowd…

Yes it is early days – but it’s important and thus far very positive. Looking forward to seeing more progress – I think there will be!

Services to support UK repositories

In the UK, the repository network is well established, and supports access to research papers and other digital assets created within universities. Increasingly, repositories are part of a research information management solution helping to track and report on research.

Over the past few years, Jisc has worked with a number of partners, including the University of Nottingham (Sherpa Services), EDINA, Mimas and the Open University (Knowledge Media Institute) to develop a range of services that benefit UK research by making institutional repositories more efficient and effective. Estimates put the annual net benefit, simply in terms of time saved by universities, at around £1.4m. Following a comprehensive review of the business case for these services, Jisc now intends to build on the RepositoryNet+ project led by EDINA, to put key services onto a more sustainable footing, including financial, organisational and technical aspects of their operation.

Sherpa Services run Sherpa-RoMEO and Sherpa-Juliet, the former providing trusted information about the rights of authors to deposit their papers into repositories, the latter providing a list of research funders' open access policies, with which grant holders should comply. Together, these underpin the new Sherpa-FACT service, whose initial development has been supported by the Research Councils and the Wellcome Trust. Jisc proposes to work with these services over the next few months to identify a medium-term strategy for each, and to support them thereafter, working in partnership where appropriate with others such as the Councils and Wellcome.

EDINA have developed the Repository Junction Broker, which promises to support mass deposit of papers from publishers and subject repositories into institutional repositories. Should the proposed HEFCE policy with respect to OA and the REF be confirmed, this will be a key service enabling institutional repositories to play their role in submissions to the next REF. Again, the proposed plan is to work with EDINA to develop a medium term strategy for RJB, and support it thereafter, while exploring a range of sustainability options with other stakeholders.

Mimas provide IRUS-UK, which enables institutional and other repositories to share usage data in a way that complies with international standards, so that usage reports can be reliably compared. Over 30 UK repositories are already part of the IRUS-UK network, with more joining all the time. Jisc intends to continue its support for IRUS-UK.

Other services have grown up alongside these, to varying levels of maturity. For example, EDINA have developed an organisational and repository identifier service, and explored a disaster recovery service for repositories. The Open University Knowledge Media Institute has developed CORE, a sophisticated aggregation and discovery service for repositories worldwide. Jisc looks forward to working with these services too, to ensure that UK repositories, their host organisations, and the people who use them benefit from greater efficiencies and a more responsive infrastructure.

As we move forward with this work, we will post updates here and elsewhere. If you have any questions or comments on this work, please contact Neil Jacobs or Balviar Notay.

UKOER: what’s in a tag?

Tuesday 13th November saw the final programme meeting for the UK OER Programme 2009-2012. An aim of the programme had been to find sustainable practices for the release of OER, and there were many success stories shared. It seems to me that the funded programme marks the start of a general move towards greater OER practice.

There was a mandated requirement for projects within the Programme to tag their content with "ukoer". Whether the content is images on Flickr, courseware on institutional web pages or videos on YouTube, it should be tagged as ukoer. The tag also got used for discussions about OER on Twitter and in blog posts. As with many mandated requirements, it was not universally or consistently applied, despite our best efforts otherwise.

It soon became clear that it can be hard to distinguish between the content that *is* OER and the content that is *about* OER. Particularly because openly licensed materials designed for other people to reuse in a training/learning context can be both about OER and OER themselves: OER squared! There’s quite a lot of content like that.

Recently members of the oer-discuss jiscmail list have been debating whether we should make continued use of the ukoer tag, and whether we can even control tag use post-funding now it is out there in the wild.

What does the tag mean? Is it …

Sometimes the tag might be about the contributor, the OER, or about activities such as workshops.

Focusing on its use for the OER content itself: each of these meanings might suggest different use cases for how people might wish to slice and present content. It's worth noting that the tag is only one metadata item: each piece of content also has a publish/release date (often relating to when it went live on the platform being used) and an owner/author/contributor (sometimes an institution, a team, an individual, or a combination). Using these variables we can imagine use cases such as:
– see all content tagged ukoer dated 2009-2012
– see all content tagged ukoer
– see all content before 2012 and all content after 2013 (two searches to compare)
and of course, to look at the usage of that content too.
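These use cases amount to simple filters over tag and date metadata. A minimal sketch (the record structure and sample records are invented for illustration; no actual Jorum data model is implied):

```python
from datetime import date

# Hypothetical aggregated records: tags, release date, contributor.
records = [
    {"title": "Intro to Statistics", "tags": ["ukoer", "maths"],
     "released": date(2010, 5, 1), "contributor": "University A"},
    {"title": "OER Workshop Slides", "tags": ["ukoer"],
     "released": date(2013, 2, 14), "contributor": "Team B"},
    {"title": "Chemistry Lab Video", "tags": ["chemistry"],
     "released": date(2011, 9, 30), "contributor": "Dr C"},
]

def tagged(items, tag):
    """All items carrying a given tag."""
    return [r for r in items if tag in r["tags"]]

def released_between(items, start, end):
    """All items released within a date range (inclusive)."""
    return [r for r in items if start <= r["released"] <= end]

# Use case: all content tagged ukoer dated 2009-2012.
programme_era = released_between(tagged(records, "ukoer"),
                                 date(2009, 1, 1), date(2012, 12, 31))
print([r["title"] for r in programme_era])  # ['Intro to Statistics']
```

The point of the sketch is that the combined filters only give a clean answer if both the tag and the date carry the meaning we assume they do, which is exactly what the discussion below questions.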

If we take the ukoer tag as a single identifier for content released as part of the programme, that might still be messy – “as part of”, “as a result of”, “with an awareness of”. Those latter meanings could continue to be true. Many people might still see benefits of signifying their content is contributing to a UK OER commons. That commons is the real impact of the programme and it would be healthy to see that continue.

However that does make it harder for people to derive clear narratives / patterns from the data in Jorum (or any other aggregation).  As Sarah Currier puts it “it’s harder to disambiguate a large number of resources with the same tag expressing different properties (“funded by UKOER” *and* “produced by member of UK OER community”), than to just have a new tag that expresses the new property”. “It is very bad data management practice to munge together two concepts in one tag. It is very easy to agree a new tag; data from both can be brought together for analysis much more easily than disambiguating data about two things from one tag”.
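Sarah Currier's point can be illustrated in a few lines (the tag names "ukoer-funded" and "ukoer-community" are hypothetical, invented purely for the example): with two distinct tags, both properties are trivially recoverable; with one overloaded tag, they are not.

```python
# Hypothetical items carrying two distinct tags for two distinct properties.
items = [
    {"id": 1, "tags": {"ukoer-funded", "ukoer-community"}},
    {"id": 2, "tags": {"ukoer-community"}},
    {"id": 3, "tags": {"ukoer-funded"}},
]

# Each property can be queried independently, or combined...
funded    = [i["id"] for i in items if "ukoer-funded" in i["tags"]]
community = [i["id"] for i in items if "ukoer-community" in i["tags"]]
both      = [i["id"] for i in items
             if {"ukoer-funded", "ukoer-community"} <= i["tags"]]

print(funded, community, both)  # [1, 3] [1, 2] [1]
# ...whereas a single "ukoer" tag on all three items would leave
# no way to tell the two meanings apart after the fact.
```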

However our decision about whether to encourage continued use of the “ukoer” tag will not just be about best practice. It is about weighing up best practice against common practice and the cultural considerations. At the risk of sounding like I’m overcomplicating things: it is a socio-technical issue. There is a balance to be made between the stated or tacit requirements of funders, the role Jorum plays for the funders, the role of Jorum for contributors, and the effort of people involved with OER. Of course by contributors, we are talking about the deposit/share point within an institution or team, who need to keep messages and requirements as simple as possible.

The list members have therefore looked to JISC to say whether and how we will want to draw on these figures as evidence of the impact of the programme. In a sense the measurement of the impact of the programme is inherently fuzzy, and that causes complexities for service providers like Jorum, who are rightly trying to anticipate future use cases.

We are lucky to have experts in this field, including two members of the Jorum team who wrote about the challenges of metadata in learning object repositories, and members of JISC Cetis who are writing about resource description in OER. I have gathered their input into this post so that we can start articulating the issues here. It is through this exchange that we can make the right decisions for JISC, the HEA and the wider community.

The point I make here is that we have before us a classic problem space. It illustrates that metadata decisions are about current and future use, that they are about balancing the needs of contributors and users, and that these things require discussion and the unpacking of assumptions. There are solutions out there, involving the sources, the aggregations … but it depends on what we want.

What's the answer? Should we continue using ukoer as a community tag for a fuzzy concept, or try to restrict use to a controlled tag for a funding stream? If we choose the latter (for any reason), could it actually be controlled in that way?

We would be interested to know what people think. The oer-discuss list leans towards the former but there can be many other perspectives and those of you who have been at the sharp end of evidencing impact may have some valuable war stories to share.


Amber Thomas

Post written with input from Sarah Currier, David Kernohan, Martin Hawksey, Lorna Campbell, Jackie Carter.

Data-Driven Library Infrastructure: UKSG 2012 Presentation

Below is a copy of the plenary presentation I gave to the UKSG conference 2012. I have also included a much reduced transcript of the talk to provide some context to the slides.

My presentation was about looking at library services and systems from a data-centric point of view. Specifically, it was about the potential that library data has for the creation of new services and improved systems.

This isn't a radically new vision – indeed the idea of being data-driven seems all-pervasive at the moment (data-driven journalism etc.). Rather, it is a way to refocus, or possibly realign, our thinking so that what may appear to be problems at present are viewed as new opportunities.

There is also a video of the presentation available:

I began my presentation with a video. The video was made by University of Lincoln students without formal permission from the university and uploaded to YouTube.

Incredibly, it got over 2.1 million hits. Is this the most watched university recruitment film ever? Even more impressive is that it will be watched by exactly the demographic Lincoln would want to appeal to: young people. Lincoln recognised the potential of the film and officially branded it, and it is now part of their advertising.

So, I think the film highlights nicely the three main themes of my presentation:

  1. Situating services and infrastructure within the wider ecosystem (this might be institution; community; society etc) – allow innovation to flourish anywhere, and ensure you’re in a position to take advantage of it;
  2. Redistributing effort – focus on the services that have an impact for users, ensure you have the talent to recognise those emergent opportunities and embrace them;
  3. Covering all eventualities: future-proofing – become agile and more entrepreneurial. The barriers for students creating the video were incredibly low: a flip cam and YouTube. Barriers to students using library data should be low too.

Why is this so important to libraries? Well, I think there are three compelling reasons why libraries should take a data-centric approach to their systems and services.

1. Ecosystem

Taking a data centric approach enables the library to affect the entire ecosystem that they inhabit.

Focusing on the data forces us to think about the other sources of important data within the institution: the repository, VLEs, student records etc. The wider data ecosystem becomes evident, and the potential of the data underpinning those systems can be realised.

A really good example of this is the Discovery work currently being undertaken by JISC, RLUK and Mimas at the University of Manchester. Discovery's aim is to provide a 'metadata ecology' for UK education and research, and it does this by focusing on open and accessible data.

When you start to think like this you realise there is incredibly rich and important metadata describing content outside of HE libraries – in museums, galleries and archives. Researchers and students want this content as much as anything in the library – so why wouldn't you include it?

What was largely hidden or difficult to find becomes visible. The possibilities of those large, cross-sector discovery tools can be realised, as well as those small, un-thought-of possibilities for individual researchers to create their own unique discovery tools, searching a corpus of data they have curated that is specific to their research.

What happens, suddenly, is the data ecosystem starts to mingle with the human ecosystems libraries are inevitably a part of. The free flow of data provides the fertile ground for new ideas and services to grow – Innovation is allowed to flourish everywhere on campus – not just within the confines of the traditional walls of the library.

Libraries and their institutions need to ensure an environment where this flourishing of innovation can happen, and that there are the right skills and people to recognise those opportunities and help develop further the ideas and prototypes.

2. Effort

If you think of shared services only as a way to reduce effort, then you lose the ability to respond to and build on the opportunities that may emerge from your fertile data ecosystem. It's not about reduction of effort, but redistribution of effort.

The aim is to reduce the effort on those chore jobs – admin, back office functions – critical, but not what the user sees as having an impact. This refocus allows you to redistribute effort to the core services you provide.

Shared services such as JUSP and Raptor are great examples of how you can stop doing the administration, and use those services to provide you with the data to make changes.  

These shared services also demonstrate the way data begets data: the way data is used produces more data, that enables better understanding of how the data is used and how it can be improved.

Data seems to bestow the need for iterative thinking – to constantly revisit, act, think, revisit.  Providing a virtuous circle of data, action, data.

Another example is Knowledge Base+: a shared academic knowledge base for electronic resources – a great example of how you can enable libraries to do something once and share it with everyone, so they don't have to repeat it locally. KB+ also recognises that the innovative services built on top of data do not necessarily have to be undertaken by the project itself, but can emerge from the community as well as from third parties and commercial suppliers.

One interesting aspect of KB+ is that its focus on data means the use cases for it can develop over time. While the focus at the moment is on e-resources and their management, it may enable innovations around inter-library sharing of content, collection management and so on.

Indeed, the use cases can quite happily vary between individual institutions: from those that envisage the use case as an ERM, to those who see it as a backup for their local holdings, helping to facilitate easy movement between external systems.

As librarians we're very aware that the past needs protection from the future; but we need to recognise that the future also needs protection from the past. I don't know what will be needed in a few months' time, but it probably won't be the thing I think it will be, based on my past experiences.

3. Eventualities

So, this leads me onto my third E! How a data-driven approach ensures that services and systems are Future proofed.

This is about libraries being able to be, at a fundamental level, more entrepreneurial. I want to pick up on an earlier point about the iterative imperative of data – data, action, data: the process is similar to how a small start-up company might work.

There are some wonderful examples of libraries playing with this kind of innovative approach. At the University of Huddersfield they have taken the generally unappealing library circulation data and turned it into a game for students: Lemon Tree.

The library experience is gamified, and in a way that engages students and enhances their experience.

Supporting this kind of thinking are technical infrastructures like the JISC Elevator: a platform for new ideas to be posted, for members of the community to vote on them, and for JISC potentially to fund.

This is about agility, and moving quickly.

This is again where the redistribution of effort is so essential – the negotiation between shared above-campus services and local capabilities is incredibly important. The negotiation between what is shared and what is kept local defines the institution, and how effective it can be in meeting the rapidly changing needs of its students and researchers.

As libraries begin to understand and curate their own data effectively, they begin to demonstrate the library's potential role in an increasingly data-driven academic environment.

As data management and curation become the next big problem for institutions, libraries can position themselves as the experts.

Enhancing Digital Infrastructure to Support Open Content in Education: announcing 15 new projects

I am very pleased to announce fifteen new projects to enhance the digital infrastructure to support open content in education.

The Call for proposals was released in November 2011. We received 34 proposals, and the competition was very tough. I'm grateful to all the expert reviewers who helped evaluate bids. Because of the high standard of proposals we were able to allocate more funding than anticipated: approximately £350,000 of HEA/JISC OER Programme funds.

These projects will be completed by November 2012; hence they are Rapid Innovation projects using open innovation methods: plenty of blogging, lots of user involvement, and a focus on delivering new tools and functionality.

Here is a taste of what they cover:


[Wordcloud created with the free tool]

OER Rapid Innovation Projects: the full list:

Attribute Images – Further developing a tool that allows users to upload images (singly or in bulk), select a Creative Commons licence and specify the name of the copyright holder, publication date and a URL. The tool will then embed a licence attribution statement in the image, and will integrate with Flickr. University of Nottingham
Bebop – The main outcome of this work will be a WordPress plugin that can be used with BuddyPress to extend an individual's profile to re-present resources held on disparate websites such as Slideshare, Jorum, etc. University of Lincoln
Breaking Down Barriers – Developing open options for Landmap and geo-aware functionality in Jorum, to enable easier and richer sharing of geo-based resources. University of Manchester, MIMAS
CAMILOE – This project reclaims and updates 1800 quality-assured, evidence-informed reviews of education research, guidance and practice that were produced and updated between 2003 and 2010 and which are now archived and difficult to access. Canterbury Christ Church University
Improving Accessibility to Mathematics – Turning an existing research prototype into an assistive technology tool that will aid accessibility support officers in preparing fully accessible teaching and assessment material in mathematical subjects by translating it into suitable markup. University of Birmingham
Linked Data Approaches to OERs – Extending MIT's Exhibit tool to allow users to construct bundles of OERs and other online content centred around playback of online video. Liverpool John Moores University
Portfolio Commons – Creating a plugin for the Mahara open source e-portfolio software that will enable users to select content from their portfolio and deposit it into Jorum and EdShare. University of the Arts London
RedFeather – RedFeather aims to provide users with a lightweight Resource Exhibition and Discovery platform for the annotation and distribution of teaching materials. University of Southampton
RIDLR – Dynamic Learning Maps meets the Learning Registry UK node (JLeRN): harvesting OERs for specific topics within curriculum and personal learning maps, and sharing paradata. University of Newcastle

This will be using cheap/free automatic transcription services to transform video to text, to enable richer subject-specific metadata for cataloguing purposes, using recognised standards and data formats. University of Oxford
SupOERGlue – Will pilot the integration of Tatamae's OER Glue with Dynamic Learning Maps, enabling teachers and learners to generate custom content by aggregating and sequencing OERs related to specific topics. University of Newcastle
SWAP (sharing paradata across widget stores) – Using the Learning Registry infrastructure to share paradata about widgets across multiple widget stores, improving the information available to users for selecting widgets and improving discovery by pooling usage information across stores. University of Bolton
Synote Mobile – Creating a new HTML5 mobile version of Synote that will meet the important user need to make web-based OER recordings easier to access, search, manage and exploit for learners, teachers and others. University of Southampton
TRACK OER – OER in the wild can get lost. This project will add a tracer to find where resources go, for attribution, research and remix. Open University
Xenith – Adding HTML5 as a delivery platform to Xerte Online Toolkits, allowing content to reach a much greater range of devices. University of Nottingham

See the JISC strand page for more detail.

Digital Infrastructure to Support Open Content for Education

Background to this blog post

The OER Rapid Innovation Call for Proposals was announced in November 2011. It is open to HEFCE-funded institutions to bid.

I am very aware that the issues in scope for this Call are broader than the UK. It includes a snapshot of the digital infrastructure space as of November 2011; it builds on the understanding and experiences of projects within the UKOER Programme and beyond, and is particularly informed by the expertise at JISC CETIS. It therefore seems useful to make the snapshot available as a blog post so that it is more accessible to people working in open content for education around the world.

The following is taken from Paragraphs 25-75 of the Call, but with added headings to enable easier reading online. Please read the full Call for further understanding of what the requirements are for projects.

The Global Picture

The OLnet initiative has recently identified Key Challenges for the OER Movement. These challenges include:

  • How can we improve access to OER?
  • What are the issues surrounding Copyright and Licensing and how can they be overcome?
  • What technologies and infrastructure are needed/in place to help the OER movement?

It is these global challenges that underpin this Call for projects to enhance the digital infrastructure to support open content.

The Story so Far

Through the JISC Digital Infrastructure Team, JISC supports the creation and use of a layer of scholarly resources for education and research across the network. This includes the development of infrastructure, technology, practice and policy to support processes from creation and access through to re-use of resources. Major activities include sharing and storing content, providing access to content (via licences and technologies), developing solutions for curation, and delivering data and content resources via data centres and distributed solutions.

Through the OER Technology Support Project, the OER IPR Project, the evaluation and synthesis work, and the experiences of funded projects, and aided particularly by JISC CETIS's technology synthesis work, JISC is developing a clearer understanding of the role of technologies and infrastructure in supporting open practice and open content.

In particular, JISC has funded a number of elements that support the sharing of learning materials, including Jorum, the Repositories Infokit, previous rapid innovation funding for the Xpert search, the SWORD protocol, the CaPRet project, and an OER Programme-funded prototype showcase of UKOER content that is currently under development.

Opportunities and Challenges

There are some key areas JISC has identified where developments under this Call are encouraged. What follows is a description of some of the opportunities and challenges identified in this space. However, this list is not exhaustive, and bidders are welcome to submit proposals that address different areas if they fulfil the main aims of the Call.

Open licensing is key to open content, and fertile ground for developing digital infrastructure. Tools built around Creative Commons licences may provide a useful backbone, so the Open Attribute tool and projects using those conventions, such as OER Glue and CaPRet, are useful in that they provide benefits to users (easy attribution) matched by benefits to content providers (analytics). Tools such as the Xpert Attribution Tool help the flow of rights. Implementation of Open Attribute into tools and services, and a set of services around embedded licences, are potential areas that proposals could tackle.

Improved resource description, both machine-readable and human-readable, is important to enable content to be effectively found, shared and selected. CETIS have provided a summary of the key initiatives to track, namely the Learning Resources Metadata Initiative, a profile for improving HTML markup of learning resources. HTML5 may offer promise in this area. Including provenance and licensing information when sharing resources is important to digital literacies, as well as to meeting the requirements of attribution such as the Creative Commons BY clause.

Aggregation and discovery is another area of interest for open content (see OER aggregation blog post). The OER Thematic Collections projects have explored a range of approaches. The Content Clustering and Sustaining Resources publication provides a good description of the approaches in this area generally. The Shuttleworth-funded OER Roadmap Project proposes an ecosystem of repositories and services, characterised by the use of APIs and shared protocols such as JISC-funded SWORD. The Discovery Initiative promotes an open metadata ecology to enable better use and aggregation of content. The Learning Registry approach explores the use of activity data to enhance the metadata and discovery of resources and the OER Programme is funding a UK experimental node. Solutions might be developed that build on these initiatives, specifically to enhance the digital infrastructure for open content in education.

Many sites hosting collections of educational materials keep logs of the search terms used by visitors when searching for resources. There might be solutions that could be developed to aid the understanding of search activity. For example, a project could deliver a tool that facilitates the analysis of search logs, classifying the search terms used with reference to the characteristics of a resource that may be described in the metadata. Such information should assist a collection manager in building their collection (e.g. by showing what resources were in demand) and in describing their resources in a way that helps users find them.

The analysis tool should be shown to work with search logs from a number of sites, and should produce reports in a format that is readily understood, for example a breakdown of how many searches were for "subjects" and which were the most popular subjects searched for. A degree of manual classification will be required, but ideally the system would be capable of learning how to handle certain terms, with that learning shared between users: a user should not have to tell the system that "Biology" is a subject once they or any other user has done so. Further information on the sort of data that is available and what it might mean is outlined in CETIS's blog post on Metadata Requirements from the Analysis of Search Logs.

Solutions should be developed as open source software and made free to use or install without restriction, with full documentation. The tool proposed above is one way we could improve the understanding of search; other suggested solutions are welcome.
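A minimal sketch of the kind of tool described here, assuming an invented term-to-category store and invented sample terms (nothing here reflects a real system; it only illustrates the "classify once, share the learning" idea):

```python
from collections import Counter

# Shared, persistent term -> category mapping: once any user classifies
# "biology" as a subject, no one has to do so again.
learned = {"biology": "subject", "physics": "subject",
           "pdf": "format", "video": "format"}

def classify(term, interactive_fallback=None):
    """Return a category for a search term, learning from manual input."""
    category = learned.get(term.lower())
    if category is None and interactive_fallback is not None:
        category = interactive_fallback(term)   # e.g. ask the collection manager
        learned[term.lower()] = category        # share the answer with all users
    return category or "unclassified"

def report(search_log):
    """Break down a search log by category, and rank subject searches."""
    categories = Counter(classify(t) for t in search_log)
    subjects = Counter(t.lower() for t in search_log
                       if classify(t) == "subject")
    return categories, subjects.most_common()

log = ["Biology", "physics", "video", "biology", "quantum"]
cats, top_subjects = report(log)
print(cats)           # subject: 3, format: 1, unclassified: 1
print(top_subjects)   # biology is the most searched subject
```

A real tool would persist the learned mapping centrally and parse actual log files, but the report shape (category breakdown plus popular subjects) is the one described in the Call text above.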

Effective search engine optimisation (SEO) is key to open educational resources delivering benefits in discoverability, reach, reputation and marketing. Guidance on “improving your online presence” needs applying to the wide range of platforms and content types used for OER, as described in JISC CETIS’ UKOER technical synthesis. Projects have explored SEO in several ways: for example, the SCOOTER project has produced guidance on its chosen approach to search engine optimisation, and the MMTV project experimented with Google AdWords to improve SEO. The variations in format types and platforms mean that open content is exposed to web search in a variety of ways. A key issue is how “repositories” compare to “web 2.0 services” in terms of search engine optimisation. To answer that, we may need to go beyond theory and run a structured experiment. For example, a technical investigation/tool for the SEO of common platforms and formats for OER would be very useful. Such a project would develop a repeatable approach, using technical tools to run the SEO work and to capture and present the findings in a useful way. The outputs of such an investigation would include the methodology, a findings report to JISC, and an accessible set of outputs aimed at OER projects. Other solutions to improving SEO for open content would also be very welcome.

Understanding use has been a major theme of Phase Two of the OER Programme. The Value of Reuse report and the Literature Review of Learners’ Use of Open Educational Resources captured what is known about use of open educational resources. The Learning Registry is relevant here. The Listening for Impact study analysed the feedback on and usage of some open content collections. Further useful resources are available from the Activity Data Programme. Analytics may be an important way to provide evidence of the benefits of open educational resources, so enhancing content and platforms to enable better usage tracking, exploiting APIs of third-party systems, exploring ways of capturing and visualising use, and providing dashboards to manage analytics data may all be very useful.
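As a purely illustrative sketch of the usage-tracking idea above, the following rolls raw usage events up into the kind of per-resource summary a dashboard might display. The event data, resource identifiers and action names are all hypothetical.

```python
# Hypothetical sketch: aggregating raw usage events (views, downloads,
# etc.) per open content resource into a dashboard-style summary.

from collections import defaultdict

def summarise(events):
    """Roll up (resource_id, action) events into per-resource counts."""
    summary = defaultdict(lambda: defaultdict(int))
    for resource_id, action in events:
        summary[resource_id][action] += 1
    # Convert nested defaultdicts to plain dicts for display.
    return {r: dict(a) for r, a in summary.items()}

events = [
    ("oer-101", "view"), ("oer-101", "view"),
    ("oer-101", "download"), ("oer-202", "view"),
]
dashboard = summarise(events)
```

A real project would of course be feeding events in from platform logs or third-party APIs rather than an in-memory list, but the aggregation step is essentially this.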

Online profiles are becoming a part of academic identity, and open content provides a significant opportunity for academics to enhance their profile, alongside managing and reflecting on their professional work. To date, many efforts at creating academic profiles built on institutional information and open content have focused exclusively on profiles of publications and the provision of open access to scholarly communications. However, other forms of open content can play a significant role in academic identity and professional development. A key opportunity is therefore linking a broader range of open content to academic profiles. This might involve fully or semi-automated integration of the publication/release/record of multiple types of open content into academic staff profiles. This is not about creating new platforms but about using feeds and APIs to enhance existing systems that handle continuing professional development, CVs, ePortfolios and so on. Examples of this sort of functionality can be found in Humbox’s profiles of contributing authors, which also allow users to embed an author’s content list elsewhere, and Rice Connexions offers author profiles. Services such as Slideshare and YouTube, which host user-generated content, are well used as platforms for open content. Proposals could demonstrate fully or semi-automated approaches that can flexibly draw on multiple distributed sources of open access articles, OER, blog posts and so on. Proposals to address this opportunity are very welcome.
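The "feeds and APIs rather than new platforms" approach might look something like the following sketch: merging items from several RSS feeds (a slide-hosting feed, a repository feed, and so on) into a single profile list. The feed contents here are inline stand-ins for what would normally be fetched over HTTP, and the example URLs are invented.

```python
# Illustrative sketch: populating an academic profile by merging items
# from multiple distributed RSS feeds. Feed XML is inlined here purely
# so the example is self-contained.

import xml.etree.ElementTree as ET

def profile_items(feed_xml, source):
    """Extract (source, title, link) tuples from one RSS 2.0 feed."""
    root = ET.fromstring(feed_xml)
    for item in root.iter("item"):
        yield (source, item.findtext("title"), item.findtext("link"))

slides_feed = """<rss><channel>
  <item><title>OER and identity</title><link>http://example.org/s1</link></item>
</channel></rss>"""
repo_feed = """<rss><channel>
  <item><title>Open content paper</title><link>http://example.org/p1</link></item>
</channel></rss>"""

profile = list(profile_items(slides_feed, "slides")) + \
          list(profile_items(repo_feed, "repository"))
```

An existing CV or ePortfolio system could consume a merged list like this without needing to know anything about the individual source platforms, which is the design point the opportunity is making.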

One mechanism that connects people to content is social recommendation. This includes favouriting, liking, bookmarking, reviewing, and social curation tools such as Scoop.it, Zite, Storify, Pearltrees and so on. Often this involves browser-based tools such as bookmarklets, making it very easy for people to capture, share and store useful resources. There are two OER-specific bookmarking tools available that handle the licensing characteristics of open content: FavOERites, developed at Newcastle University (as a UKOER-funded project), and the OER Commons tool, both of which have APIs and have open-sourced their code. The implementation and enhancement of these tools to handle open content may be a useful area for projects to explore. For example, projects might develop solutions for making content “share-friendly” to these tools, explore how the tools can use automatically generated metadata about licences, the user and their context, and consider how shared tags and vocabularies might enable more effective sharing for educational purposes.
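One concrete piece of the "automatically generated metadata about licences" idea: Creative Commons licences are commonly embedded in pages as `rel="license"` links, which a bookmarking tool could detect. Here is a minimal sketch; the page markup is a made-up example, not taken from any of the tools mentioned above.

```python
# Sketch of how a bookmarking tool might detect a machine-readable
# licence: look for rel="license" links in the page HTML (the common
# Creative Commons markup convention). The sample page is hypothetical.

from html.parser import HTMLParser

class LicenceFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.licences = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        # rel can hold multiple space-separated tokens, so split it.
        if tag in ("a", "link") and "license" in (a.get("rel") or "").split():
            self.licences.append(a.get("href"))

page = ('<html><body><a rel="license" '
        'href="http://creativecommons.org/licenses/by/3.0/">CC BY</a>'
        '</body></html>')
finder = LicenceFinder()
finder.feed(page)
```

Having extracted the licence URI, a tool could then decide automatically whether and how a resource may be shared onward.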

The growth in e-books and e-readers, both open and proprietary, is of interest to education. Books are a familiar format to use in teaching, but digital technologies also afford new ways of creating, sharing and using books. For example, the College Open Textbooks initiative states that “We have found that open textbooks should be:

  • easy to use, get and pass around
  • editable so instructors can customize content
  • cross-platform compatible
  • printable
  • and accessible so they work with adaptive technology”

In the UK, JISC Collections have been running the e-books observatory and examining business models for e-textbooks. Developments from the research world are emerging around Enhanced Publications, which combine research text, data and rich media. There is a recently announced Pressbooks platform. International initiatives such as the Saylor Open Textbook Challenge and the WA State Open Course Library e-textbook initiative have raised the profile of open textbooks. JISC CETIS have described the use case for open e-textbooks. There is guidance on e-books available from JISC Digital Media, and JISC has funded the #jiscpub R&D projects. Several campus-based publishing projects have piloted reusable approaches, including Epicure, CampusROAR and Larkin Press; another useful example to look at is “living books about life”.

Phases 1 and 2 of the OER Programme made use of a wide range of platforms, blogs, wikis and repositories, and often made modifications to the software to fully support OER use cases. Enhancing platforms is likely to mean improving ingest and expose mechanisms, handling licence information, and addressing syndicated feeds, APIs, widgets and apps. Examples of platform enhancement include the work Oxford University and others have done with WordPress, and the CUNY Academic Commons in a Box work. Proposals are welcome to enhance platforms for open content. Bidders may wish to create enhancements to existing release, aggregation and remix platforms to improve the transfer of open content for educational purposes. Projects may wish to combine existing tools to provide enhanced functionality. The outcomes of these projects should be a richer exchange of metadata between publishing platforms, aggregators and other services used in the sharing of openly licensed content.

The opportunities and challenges above are only indicative and not exhaustive.

Please read the full Call for further understanding of what the requirements are for projects.

Bidders are welcome to use the oer-discuss mailing list to refine ideas and identify potential collaborators. JISC will not provide a matchmaking service, but commercial and overseas experts are welcome to use the mailing list to express an interest in collaborating.

I hope you find this useful. Comments very welcome.

Amber Thomas

JISC Programme Manager: digital infrastructure for learning and teaching materials


Enhancing platforms for open content: the project cited is from City University New York not State University New York (now corrected, thanks to Matthew Gold, CUNY for spotting the error)

OER Rapid Innovation Call

***THIS CALL FOR PROPOSALS CLOSED ON 27TH JANUARY 2012 and this blog post will no longer be updated***

The Joint Information Systems Committee (JISC) and the Higher Education Academy (HEA) invite institutions to submit funding proposals for projects to enhance digital infrastructure to support open content for education.

Read the Call for Proposals.

Supplementary Information


CLARIFICATION: Proposals can be up to 6 pages long; the coversheet does not count as part of the 6 pages, and neither does the Use Case.

REMINDER: Bidders are strongly advised to ask a peer with “fresh eyes” to read through the Call and Proposal before submission.

An online briefing session was held on Friday 9th December 2011, 10:00-11:00.  A recording of the briefing and the Slides are available. I also ran a skype surgery on Wednesday 11th January 2012. Further queries are very welcome.

An extract of the Call is available: Digital Infrastructure to Support Open Content for Education

Further information on Use Case Requirement is available.

Summary of the Call

Eligible institutions (HEFCE capital) may request between £10,000 and £25,000 per project.  A total of £200,000 is available for this strand. Between 10 and 18 projects are likely to be funded.

I previewed the Call earlier in November 2011.

[Wordcloud showing the scope of the OER Rapid Innovation Call]

The OLnet initiative has recently identified Key Challenges for the OER Movement. These challenges include:

  • How can we improve access to OER?
  • What are the issues surrounding Copyright and Licensing and how can they be overcome?
  • What technologies and infrastructure are needed/in place to help the OER movement?

It is these global challenges that underpin this Call for projects to enhance the digital infrastructure to support open content. The Call outlines some of the opportunities and challenges that have been identified in this space; proposals are welcome that meet these, or more generally the main aims of the Call.

Intended benefits of these projects are:

  • A clearly identified use case will be met by the solution provided;
  • Increased understanding about how to identify and implement digital infrastructure solutions to support open content for education;
  • An informed developer community, more aware of the target groups they are developing for;
  • Enhanced capacity, knowledge and skills to enable positive and informed change in the sector (through piloting new technologies and approaches);
  • Ideas for new or enhanced services, infrastructure, standards or applications that may be used at departmental, institutional, regional or national levels.

These are Rapid Innovation projects.  In keeping with the size of the grants and short duration of the projects, the bidding process is lightweight and the reporting process will be blog-based.

Bidders are welcome to use the oer-discuss mailing list to refine ideas and identify potential collaborators. JISC will not provide a matchmaking service, but commercial and overseas experts are welcome to use the mailing list to express an interest in collaborating.

The outputs of these projects will be made available open access and open source.

Key Dates

  • Call Released: Tuesday 29th November 2011
  • Online Briefing Session: 10-12 GMT Friday 9th December 2011
  • Bid Deadline: Friday 27th January 2012
  • Projects should start by Monday 19th March 2012 for 4-6 months and complete by Friday 19th October 2012

Please do post questions as comments to this blog post, join oer-discuss, or contact me direct.

Amber Thomas

JISC Programme Manager: digital infrastructure for learning and teaching materials (CONTACT INFO)

OER Rapid Innovation Call: Preview

Released later this month, with a deadline of mid January, this Call will be for short (max 6 month) projects to develop solutions to enhance the digital infrastructure to support the use of open content in education.

Eligible institutions (HEFCE capital) can bid for between £10,000 and £25,000. Technical staff should already be in place. Existing partnerships with commercial and overseas organisations are welcome. Proposals should be focussed on a clear use case and have user involvement built in. In keeping with the relatively small grants and tight timeframe, there will be a lightweight reporting process based on blog posts.

Open Education, open academic practice, open scholarship and open content all need digital infrastructure to thrive. The emphasis in this Call is on making use of existing tools, services and standards, to meet clearly articulated use cases.

Areas to bid in will include:

A: Open content and academic profiles

B: Enhancing platforms for open content

C: Enhancing tools and services for open e-books

D: Search log analysis

E: SEO of common platforms and format types for OER

F: Open Call, including:

  • recommending, bookmarking, favouriting and liking
  • aggregations of open content
  • analytics tools and approaches for open content and open practice
  • usage tracking
  • presentation / visualisation of aggregations
  • embedded machine-readable licences
  • use of OAI ORE
  • validation and test tools for metadata and standards
  • sustainable approaches to RSS endpoint registries
  • common formats for sharing search logs
  • analysis of use of advanced search facilities
  • other areas, in keeping with the scope of this Call

As you can see, the scope is broad. It includes discovery, analytics, social web and platform work, so don’t be put off if you haven’t been involved in the OER Programme so far. Read my latest programme update, join the oer-discuss mailing list, follow #ukoer on Twitter, check out the work of the programme and start making connections. Bidders are welcome to use the oer-discuss mailing list to refine ideas and identify potential collaborators. JISC will not provide a matchmaking service, but commercial and overseas experts are welcome to use the mailing list to express an interest in collaborating.

We have high hopes for the technical outputs of this strand. The CETIS OER mini-projects call, which this supersedes, funded the CaPRet project for £10k, which may now become a core part of Creative Commons licensing technology. The SWORD protocol was originally funded in this way, and is now used all over the world. Great solutions can come from humble beginnings.

Get your thinking caps on and watch this space!

Amber Thomas

Programme Manager, JISC


Shared Academic Knowledge Base (KB+) – Library Directors event

Yesterday saw the shared academic knowledge base (KB+) briefing day for approximately 60 library directors and senior managers take place at the Wellcome Trust in London.

The project, known as KB+, is developing a shared community service that will improve the quality, accuracy, coverage and availability of data for the management, selection, licensing, negotiation, review and access of electronic resources for UK HE.

The aims of the day were to:

  • Provide an update on the progress of the shared academic knowledge base project;
  • Surface and share some of the questions, concerns and ideas participants may have about the project and the management of electronic resources in general;
  • Let participants know what will be happening next with the project and how they can get involved if they would like.

The day began with Ben Showers (JISC Programme Manager) providing some context to the work and situating the project within the wider subscriptions management landscape. The presentation can be found here: Shared Academic Knowledge Base: Context and Landscape

Liam Earney, the project lead from JISC Collections, then went on to outline the vision and approach that the project will be adopting, as well as providing the participants with an idea of what will happen next and how, if institutions wish, they can get involved.
Liam’s presentation can be found here:  shared academic knowledge base: Approach and Vision

The meeting engendered a large amount of discussion about the project, with participants freely sharing concerns, ideas and possible solutions to some of the issues that surfaced.

Extensive notes were taken from the Q&A sessions to help inform the project, but instead of repeating verbatim the questions and answers I have tried to highlight some of the themes that emerged during the meeting below.


A number of themes emerged during the day and, while this is not an exhaustive list, these are some of the recurring or critical issues that were surfaced:

Transformation of current practice

It was acknowledged that this project was potentially transformative; it has the potential to change what might be termed the bread and butter of library work.  Therefore its impact on the community, and how it works, could be significant.

This means that the community, from senior managers to practitioners and beyond, will be keenly interested in the developments, and the project will need to build trust and facilitate the involvement of the whole library community. Which brings me on to another of the day’s themes:


Communication

This was a theme that seemed to surface at regular intervals during the day. There was a clear message that the project needs to communicate regularly with the library community on both progress and developments as they take place. This might manifest itself in a newsletter such as that employed by the Discovery programme, or in utilising existing communication channels from JISC, JISC Collections and other sector bodies (or indeed a combination).

The combination of communications channels is also important given the range of stakeholders interested in the developments, from commercial vendors and publishers to librarians in the UK and internationally.

Under this theme there was also discussion of how the communication channels could allow for more interactivity than might otherwise be usual in a JISC-funded project, given both the high-profile nature of the project and the need for ongoing community engagement in the work.


Engagement

Closely related to communications was the topic of engagement.

Specifically a lot emerged on how the community, especially ERM librarians and similar, could be engaged in the project in a useful and meaningful way.

In his presentation Liam made it clear that the project hopes to ‘recruit’ a number of embedded librarians, with the project paying for a proportion of their time to work on it. Participants stressed, however, that the expectations of any involvement would need to be spelled out, from the skill levels and expertise required through to the length of time people might be involved.

Clarity on these issues would be key to maintaining sector engagement.

It was also suggested there might be the need for something like an ‘advocacy pack’ so that library directors had the arguments to convince senior staff of the benefits of engaging with the project.

An interesting sub-theme within engagement was the power of the institutions themselves to engage with the commercial companies and organisations they work with, putting pressure on them both to work with the project and to implement the recommendations and standards the project might propose.

The message was clearly that this was a partnership.

Collaboration and leveraging other work

It was expressed a number of times how important it will be for the project to leverage this work and funding alongside other initiatives and projects that can help KB+ deliver its outputs.

It was acknowledged how much work is currently taking place in this area: national projects such as KBART and TERMs, JISC-funded projects such as the journal usage statistics portal and the e-journal archiving work (including Peprs and the entitlement registry), and international projects such as the Open Library Environment.

This helped reinforce the project’s own ambition of engaging with, and where possible working with, these complementary projects and initiatives.

Sharing problems

This is a shared service, so it will be important that when issues are surfaced by an individual institution, or indeed a problem is resolved by someone, the whole community can be made aware of it.

What tends to happen now is that a problem is reported to a supplier and then resolved, but no one other than the originating institution knows about it.

Further points of discussion

There were a number of other points of discussion including:

  • The potential conflict between aiming for quality of data and ensuring its timeliness.  It’s essential that quality doesn’t impact on the ability of libraries to deliver services to users as and when they want them.
  • Print subscriptions: The briefing day concentrated largely on electronic resources, but it was clear that participants wanted to see print incorporated in the work.  The project is taking a unified approach so this won’t be an issue, although electronic will remain the focus for much of the work.
  • The identifiers elephant! It was clear that participants also felt that how the project deals with identifiers (be those institutional, journal title etc.) will be critical.
  • Decision making and workflows will be two potential aspects of the final service. It is important to recognise that a focus on the decision-making components the service will deliver could help strengthen potential business models and demonstrate real value to institutions.

As the event demonstrated, there will be a lot more work going on over the next few months to get the project into a position where it can successfully transfer to a service.

In the meantime, this won’t be the last you’ll hear of the project, with plans already in place to start communicating and engaging with the community over this important shared service.

If you would like to find out more about the project, or have any questions, then please feel free to contact Liam Earney at JISC Collections.

What has the inf11 programme achieved?

The information environment programme 2009-11 (mercifully shortened to inf11) is drawing to a close and we are starting to reflect on what it has achieved.

We chose to manage this programme as one very broad programme rather than a number of smaller programmes and it has included work on:

This represents a lot of work that has produced some exciting outputs and interesting results. To try and help people see what outputs and results are relevant to them, we have prepared a list of 27 questions that the programme has addressed or started to address. This was put together by Jo Alcock from Evidence Base who are evaluating the programme.

The programme won’t finish until July so we will continue to add to these questions. If you have any suggestions for things to be included, please let me know.

For our next programme of work we will have 4 separate programmes:

  • Information and Library Infrastructure
  • Research Management
  • Research
  • Digital Infrastructure Directions

We will be blogging more about these programmes soon.
