We are pleased to announce a new report that explores how activity data and analytics can benefit universities and proposes how institutions can cope with the associated challenges and opportunities. The report is called Activity data – delivering benefits from the data deluge and is available on the Jisc website now. The eagle-eyed will have spotted a link to it in the current issue of Jisc inform.
The report was written by David Kay of Sero Consulting and Mark van Harmelen of Headtek, and it builds on the work we have been doing with activity data over the last couple of years. Over those two years, activity data has moved from being a relatively fringe and immature area in universities to something that is likely to be of vital importance in the next few years.
I think this is emphasised by a flurry of exciting new developments. My colleague Myles Danson has worked with CETIS to release the Analytics Series, a set of seven useful and interesting reports that explore analytics from a number of different angles, including the implications for research and for teaching and learning.
I’ll pause here to explain what I see as the difference between analytics and activity data. Analytics is a broad heading for the mining of data to inform business decisions or provide improved services to end users. Activity data is one type of data that falls under the analytics heading: specifically, the data recorded about a user’s actions when they interact with a website, a piece of software or even a physical space.
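To make the distinction concrete, here is a minimal sketch in Python. The field names and values are entirely hypothetical; the point is that activity data is the raw record of individual actions, while analytics is the mining of many such records into something decision-ready:

```python
from collections import Counter
from datetime import datetime

# Activity data: records of what individual users did, where and when.
# Field names here are purely illustrative, not from any real system.
events = [
    {"user": "s123", "action": "search", "resource": "catalogue",
     "timestamp": datetime(2012, 11, 5, 14, 1)},
    {"user": "s123", "action": "download", "resource": "e-journal/4411",
     "timestamp": datetime(2012, 11, 5, 14, 3)},
    {"user": "s456", "action": "download", "resource": "e-journal/4411",
     "timestamp": datetime(2012, 11, 6, 9, 30)},
]

# An analytics step mines many such records into an aggregate that can
# inform a decision, e.g. which resources are most heavily downloaded.
downloads = Counter(e["resource"] for e in events if e["action"] == "download")
print(downloads.most_common(1))  # → [('e-journal/4411', 2)]
```

A library might use exactly this kind of aggregate to inform resource allocation decisions, while the raw events underneath remain the activity data.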
Another exciting development is a project to explore a shared library analytics service. This project is seeking to develop a pilot shared service that builds on some of the experiments we have been doing in our activity data work. It is expected to complete in Autumn 2013 and should provide libraries with a useful new way to study how their services are working and to gather data to inform crucial decisions over allocation of resources. More detail will be available on this soon.
One project that will be an important part of delivering the library analytics suite will be Huddersfield’s Library Impact Data project. They released version 2 of their toolkit last week. So if you can’t wait for the library analytics suite to start exploring your library activity data then head over to their blog for more information.
So, there is a lot going on. That makes the Activity Data report even more timely since it provides an accessible and useful introduction to the topic. The report discusses the benefits that are on offer to institutions. It includes case studies on UK and US institutions who are leading the way with activity data. It finishes by offering some pointers on strategies that may be useful in getting ready to seize the opportunities offered by activity data.
This is a fast moving area and it looks like 2013 should see some even more exciting developments.
Further / Higher Education (F/HE) in the UK is in the fortunate position of having talented and experienced developers working in its organisations, driving both service development and applied research. Because of this, developers in F/HE are a particularly rich source of technical innovation for the sector.
At ALT-C Paul Walk and I ran a session exploring the concept of the Strategic Developer. Paul is the Deputy Director of UKOLN and oversees DevCSI, a JISC-funded initiative to focus on the development of technical talent within the UK FE/HE/Research sectors. An experienced manager of developers himself, Paul has been looking at the ways in which technology staff are situated in decision-making processes. Back in the spring, his colleague Mahendra Mahey and I ran a discussion session at Dev8eD about the role of developers in e-learning, and the ideas UKOLN are exploring clearly had some resonance. This session was a discussion of the issues around in-house technology expertise in a learning and teaching systems context. Our focus was on the hard technical skills end of the technology spectrum: the coders, hackers and integrators, the people who build and develop software solutions.
Paul’s slides below describe the ideas around local developers, connected developers and strategic developers which underpin the DevCSI initiative.
We had a small but very experienced group of participants.
A-Z by surname: Suzanne Hardy (Newcastle), Martin Hawksey (JISC CETIS), Jo Matthews (UCL), Mark Stubbs (MMU), Jim Turner (JMU), Scott Wilson (JISC OSSWatch).
My take-home messages from our discussions are below, and I hope that other participants will add their thoughts.
The cloud and software-as-a-service models are often conflated in people’s minds with the outsourced model. In truth there are many models of SaaS, with greater and lesser levels of control for the client. This reminded me of one of my favourite talks from Dev8eD: Alex Iaconni on different sorts of hosting. Paul’s observation is that the push into the cloud is sometimes mistakenly associated with a reduction in the expertise required from the client. Cloud and SaaS just make some aspects of the system remote, not necessarily all; they certainly don’t always negate the need for in-house expertise.
That said, there are some trends where complexity moves up to the “above campus level”. The sorts of shared services that libraries use are changing the division of labour between technical experts within libraries and those working at a vendor/supplier level. Certainly in JISC’s work on repository and curation infrastructures we are seeing potential for abstracting some functionality (and its expert design) up to a network level. I am interested to see whether e-learning will see similar trends: with some specialisms focussed at the shared service level rather than locally.
In open source, we also see that pooling of technical expertise across employer boundaries. Moodle is a really good example of technical skills distributed between institutions, service providers and the developer community in its own right; the recent case study on MMU’s use of ULCC’s hosted Moodle solution illustrates this well. The point was also made that open source coders are connected developers out of necessity, which brings the benefit of greater awareness of other software and approaches.
Thinking now about big contracts for outsourced services, we discussed how an institution needs in-house technical expertise to:
- specify technical requirements to vendors
- evaluate proposals
- negotiate technical detail
- oversee technical delivery
- integrate the external service with local systems
and so on. In short, to act as an “intelligent client”/”intelligent customer” to ensure that institutions are getting value for money from their suppliers. The complexity of university technical infrastructures means that vendors who overpromise or underperform are hugely costly to universities. When we’re talking about huge contracts like that at London Met, the potential for inefficiency is enormous and those suppliers must be carefully managed.
I think Paul’s diagram is worth reproducing here:
Incidentally, I’m not suggesting a crude “them and us” characterisation of suppliers and customers. I’m arguing that for IT contracts to deliver effective solutions there needs to be a meeting in the middle. I would argue that it is a good test of a vendor that they are happy to get “their guys” talking to “your guys” as soon as possible. Any supplier who is happy to be judged on results will want to get it right, and they would rather have frequent access to accurate technical information than to a contract manager with no mandate for decisions. I would love to hear from developers working for suppliers on whether that rings true, but in all my experience, they need to be met halfway by the client on getting the technical implementation right.
We also discussed the way in which in-house technical expertise is managed, and on reflection we were describing some common variations, each of which combines to make institutional set-ups quite diverse:
- institution size matters: a small institution may provide more space for a networked and strategic developer
- seniority of developers matters: some will be more involved in procurement decisions described above
- e-learning developers might be central or embedded in departments
- VLEs may be treated as part of the enterprise suite or as specialist applications supported by the e-learning team
- Whether IT/library is converged or not also impacts on where e-learning developers sit in the organisation
- Patterns of home-grown systems/tools becoming integrated or discarded
- In-house open source solutions mean in-house expertise, but externally hosted OSS has more variation
- Mix of core staff and contracted staff (both long and short term)
- Mix of external technical consultants and coders and the ways in which their knowledge is sustained
- Extent of tactical use of internal and external project funding to enhance in-house technical capacity
- Extent to which developers’ technical skills and approaches are actively nurtured
- Extent to which developers’ soft skills are developed in areas like pitching, presenting, supporting users, business analysis, cost assessments, etc.
Even within our small group there was considerable variation. That certainly suggests that in sharing our emerging models of managing distributed and cloudy infrastructures, we need to clearly state our local contexts.
It was a thought-provoking discussion. It emphasised to me the value of JISC’s support for connecting developers, and the need to continue investing in in-house technology expertise.
Amber Thomas, JISC
The Knowledge Exchange, of which JISC is a member, has just released a report on the sustainability of OA services and infrastructure. The report identifies services that are considered critical, and what they are critical for. It then considers the perspectives of a range of stakeholders, and considers the value offered to them by these services. It is the first part of a series of reports, with the next one being undertaken now under the auspices of SPARC in the US, focusing on discrete business models and related issues (governance, etc).
The KE report is important for JISC, as we are working with the UK and wider repositories community to develop a repositories service infrastructure. You may know that this is based around RepositoryNet+ at Edina, includes an “innovation zone” at UKOLN, and strong relations with centres of excellence such as the Universities of Nottingham and Southampton, and MIMAS. The repositories infrastructure, including services such as Sherpa/RoMEO, needs to be sustainable and cost-effective, and the KE report helps us understand what that means in particular cases. In a time of constrained resources, and a strong policy direction in favour of open access, we will need to be creative in sustaining the repositories infrastructure. We need business planning perspectives to complement vital technical and academic expertise and understanding.
The same challenges, in a more commercial context, face Gold OA. The UK Open Access Implementation Group is already working hard with others on interoperability and service models, eg for an “intermediary” in Gold OA transactions.
There was an interesting report released yesterday on the UK Government’s progress towards their open data goals. The report revealed that the Government was on track in terms of the volume of data released but it highlighted some significant challenges that need to be addressed in order to ensure the data is useful to those who want to use it. In the higher education sector we are facing similar challenges.
The report highlights a lot of challenges but in my reading of it these resolve into the following main questions that need to be answered:
- How can the data released be made easier for the public to browse, consume and reuse?
- How can the Government ensure that the right information is released and that it is adequate to meet the transparency objectives?
- How can the Government ensure that the data released allows for easy comparison and analysis?
- What are the costs and benefits involved in the release of the data?
As a result of the JISC linked data projects we scoped and commissioned a report that Curtis and Cartwright produced on the benefits of linked data to higher education. This report includes a benefits map which provides a useful outline of the potential of engaging with linked data (pages 13 and 15).
We are seeing widespread engagement with open and linked data in the library, archive and museum world, with the British Library, Harvard University, OCLC, the British Museum and many, many others all releasing data and conducting experiments. In JISC we have been active in this area through the Discovery programme and are learning how and when open data can deliver benefits to libraries, museums and archives.
Southampton University and Lincoln University are making interesting progress with open data. Both institutions have dedicated open data sites:
- data.southampton.ac.uk
- data.lincoln.ac.uk (currently being prepared for a relaunch in August 2012)
And both have interesting examples of the ways in which this data can be used. In Lincoln, students have used the open data to develop tools that they need. In Southampton open data has been used to develop a whole raft of apps including a catering search to find out where the snack you want can be bought on campus.
I think it is fair to say that the Government has so far focused on getting data released and is now starting to deal with the challenges that this open data is posing. In JISC we have had a similar focus, as we believe a useful first step is to work out the processes involved in releasing data and to build up a decent corpus before starting to address these difficult challenges in earnest. Any future work we scope in this area will start to grapple with these key questions. The work we have done so far, and work done by others in higher education, indicates that there are significant benefits on offer if we get this right, so these questions are well worth the effort required to develop answers.
It will be interesting to see how the Open Data Institute helps to address these challenges once it is up and running. We’ll be keeping a close eye on its work and other Government efforts, as any progress made is bound to be relevant to our work in Higher Education.
Great progress is being made on the emerging UK community working with CASRAI (Consortia Advancing Standards in Research Administration Information). We are drawing together members of our existing research information management network, other subject experts and our CERIF friends to drive this exciting collaboration forward.
We now have the kernel for a UK standards committee to help adapt the existing CASRAI data dictionary, as well as the beginnings of a dedicated standards committee for the research impact parts of the global dictionary. A real buzz is building around the proposed new Canada/UK joint standards committee looking to add research dataset metadata to the dictionary, and Simon Hodson and David Baker are identifying and will soon be inviting the stakeholders and experts to form this joint committee. The vision at the moment is of a common global dictionary (with discipline and national extensions) that will start by targeting known needs, for example incorporating the information needed for a data management plan template to meet funder requirements.
These joint initiatives are community-centred, and their activities have caught the eye of research funders in the UK, as well as in Canada and the US. Interest also exists in developing shared models for designing research data management programmes across Europe, as illustrated by the JISC-led EC-funded sim4rdm project. This international attention really reinforces the value of this initiative to cross-border research collaboration. The focus on the re-use, sharing and comparing of research information is more important than ever in an era of global research in which the best researchers are themselves globally active. This is borne out by the recent analysis of the UK’s research performance, which found that the most effective researchers not only collaborated internationally, but worked and moved across borders. In this context, the value of this initiative to researchers, and to funders, is clear.
As more UK collaborations take shape, and the CASRAI/UK dictionary begins to be used in earnest in UK research management, I think we’ll see real benefits from, for instance, the full integration of CASRAI and CERIF. These initial collaborations around targeted areas of research management will help us to identify the next UK priorities for developing further the common global dictionary, as well as demonstrating the power of the expertise that has been built up in research management in our respective communities.
David Baker, CASRAI Executive Director, will be providing an update soon on the newest developments, so keep an eye out for his blog post for more news…
As part of its implementation work following the Hargreaves Review of IP, the Intellectual Property Office www.ipo.gov.uk has just published the Government’s policy on modernising copyright licensing http://www.ipo.gov.uk/response-2011-copyright.pdf following the latest consultation, to which JISC amongst others has submitted various responses. In this policy, the Government has signalled its intention to publish draft legislation to facilitate new schemes for commercial and non-commercial use of Orphan Works (works in copyright for which the rights holders are unknown or cannot be traced) and voluntary extended collective licensing of copyright works, subject to a number of important safeguards, and to require collecting societies to adopt codes of conduct based on minimum standards. There are likely to be further opportunities to respond to any outlined measures once the draft legislation has been published.
The Government has also indicated that it will be publishing similar policy statements relating to extending the copyright exceptions later on in the year.
JISC will be working to ensure that the needs of the FE/HE sectors are represented.
Amber Thomas, on behalf of JISC
2011-12-02. British Library, London. The EU (FP7) funded Digoiduna project has come out with its recommendations on what it is calling “digital identifiers”, which (for lack of a better phrase) seems to be ‘a re-branding exercise’ for the “Persistent Identifiers” community. However (as I understand it), “Digital Identifier” as used by the Digoiduna project is actually an umbrella term that includes “persistent identifiers” as just one of the layers in the identifiers stack; the additional layers they have put atop the technology stack of PIDs include:
- A.) interoperability (both machine and human), e.g. do URNs and DOIs both do content negotiation to machine-readable data, and do humans even agree that ‘content negotiation’ is the correct method to expose machine-readable metadata from an identifier?
- B.) stakeholder engagement, e.g. what reputation does the identifier have: do scholars think bit.ly links are academic, or are DOI links more academic?
- C.) cultural influences, e.g. does the UK respect centralised big businesses as the sole proprietor of its most important links, in comparison to how the US feels about government providing centralised leadership?
- D.) temporal status, e.g. what are realistic models for persisting citable links over time, not just flippant statements like “forever” but real cost models for 10 years, 50 years, 100 years, 500 years and so on; how do we actually start to compare models over time?
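The content negotiation mentioned in the interoperability layer above can be sketched briefly. The idea is that the same identifier can serve either a human-readable landing page or machine-readable metadata, depending on the Accept header the client sends. A minimal Python sketch, using the public doi.org resolver (the exact media types supported vary by registrant, so treat this as illustrative):

```python
import urllib.request

def doi_request(doi, media_type="application/vnd.citationstyles.csl+json"):
    """Build an HTTP request that asks the DOI resolver for machine-readable
    metadata via content negotiation (the Accept header), rather than the
    default human-readable landing page."""
    return urllib.request.Request(
        "https://doi.org/" + doi,
        headers={"Accept": media_type},
    )

# The DOI below is illustrative; substitute any resolvable DOI.
req = doi_request("10.1000/182")
print(req.get_header("Accept"))

# Sending this request with urllib.request.urlopen(req) would, where the
# registrant supports content negotiation, return citation metadata as JSON
# instead of an HTML page; with no Accept header you get the landing page.
```

Whether URN resolvers behave the same way, and whether this is the *right* way to expose metadata, is exactly the kind of human-interoperability question the Digoiduna layering raises.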
While we still need to look over the full Digoiduna report in depth, this change in perspective (the new ‘DI’ brand) for PIDs is a welcome one from JISC’s point of view, as our previous reports in the area support a complex view of identifiers driven primarily by user need. (Personally, I think Digoiduna could be a bit more user-centric in its presentation of this new identifier stack.) On the whole, though, their call to action to “mobilise resources” is welcome:
“…promote actions to mobilize technical, human, financial resources aiming at triggering a wider demand of usage…”
This recommendation clearly supports the previous work JISC has done on persistent identifiers, and in fact we are hoping to take more real-world action in supporting further end-user technologies. To quote from our own report:
“JISC should draw a line under long-running arguments about particular persistent identifier schemes and instead should focus its efforts on enabling HEI’s to choose and implement schemes appropriate to their needs… [support] should be provided on how an HEI might choose between identifier schemes based on their own needs and contexts…the pros and cons of various approaches in different circumstances, for different purposes, should be outlined… [especially on how] the adoption and management on the various identifier schemes available.” -JISC Consultation on Identifiers 2010-
The other encouraging aspect of the Digoiduna work is that it highlights efforts such as the Den Haag Manifesto, which *is* ‘drawing a line under long-running arguments’ and embracing the potential to be had from the persistent identifier and linked data communities coming together. While the Den Haag Manifesto might still have some technical difficulties, what matters is not arguing endlessly about the correct way forward but moving forward in areas where we can collaborate and interoperate, without trying to claim that one approach is better (aka more persistent) than the other (just do it).
This hope for the community adopting a “fail fast; fail soon” attitude was further supported by the announcement by Salvatore Mele of CERN and Jan Brase of DataCite and the German National Technical Library that they would be looking to work together to make author identifiers (ORCIDs) and scholarly resource identifiers (DOIs) interoperate (hopefully via linked data methods). Naturally, the kind of bibliographic metadata profile that DOIs can provide, cross-linked to author profile metadata (ORCID), is one where real value could be generated on behalf of the scholarly community by using both linked data and persistent digital identifier techniques (e.g. content negotiation, redirection, abstraction, etc.).
Finally, I’ll end this post with a bit of gossip: JISC is itself hoping to launch a couple of new projects in the identifier space that will take action in providing end-user tools that easily integrate “Digital Identifiers” into scholarly workflows (alongside the ongoing work we already have in this space).
In short, the PID arena has been ‘too much chatter and not enough action’ for some time, and that needs to change; accordingly, we are currently looking at taking forward some new efforts in the space that could really help make scholars’ lives easier in their day-to-day use of identifiers. These projects are in planning and as yet not guaranteed to happen… but fingers crossed they will. Stay tuned.
Post written by David F. Flanders (with help from his Digital Infrastructure team colleagues, special thanks to Rachel Bruce and Neil Jacobs for suggested amendments). David is an Innovation Programme Manager for the Digital Infrastructure Team.
This blog post is a supplement to the requirement in the Call for Proposals for OER Rapid Innovation: enhancing digital infrastructure to support open content in education.
Paragraph 24 states that bidders must submit a Use Case.
“24. Bidders should note the requirement detailed in the Bid Form to produce a Use Case to accompany the proposal. These use cases must be made available as Creative Commons BY SA. Please see examples of Use Cases. “
As the Wikipedia definition shows, “Use Case” has a range of meanings. Depending on the context it can mean explaining what something is for (using a key to open a lock), through to a specification of a problem and description of the solution, through to a specified methodology as part of a software development approach such as agile.
In software engineering, a use case is a technique for capturing the potential requirements of a new system or software change. Each use case provides one or more scenarios that convey how the system should interact with the end user or another system to achieve a specific business goal. Use cases typically avoid technical jargon, preferring instead the language of the end user or domain expert.
It is always about describing how a solution will solve a problem. It always has measures of success defined within it: if the key breaks in the lock, it doesn’t meet the use case. There are other terms, such as user stories or scenarios, that can also be used to describe issues being tackled; in some contexts they are used interchangeably with use case.
In terms of the OER Rapid Innovation Call, then, this is what I mean by “Use Case”:
- What is it that users want to be able to do and currently can’t?
- What will you change to make it possible for them to do it?
- How will you know if you have succeeded?
This is NOT a job to be done AFTER you have written your proposal: this is a key task in scoping your project. If you can’t articulate a clear use case at the point you are granted project funding, you will struggle to deliver useful technical solutions within 6 months. To increase the quality of bids and resulting outputs, it is a requirement of this Call that a use case be submitted with every proposal (as part of it or as a link).
The use case should be made available as Creative Commons Attribution (CC BY). This is to ensure that the thinking done by bidders does not go to waste. It is possible that bidders may identify a crucial use case but not have the technical skills or resources to solve it. I therefore want to be able to share the use cases and make them available to others who may be able to create the technical solutions. Digital infrastructure for open content is global and distributed; there are experts all around the world that we could collaborate on solutions with. (Feedback on this approach is welcome, I recognise it is unusual.)
There is no template provided for the Use Case. It is for bidders to identify the best way to structure and describe the problem the project will tackle. As a rough guide for this Call, aim for one page of text / diagrams.
Useful links given in the Call:
In addition, here are some further examples of useful approaches:
- British History Online: ‘There are no shortcuts within a source other than to the volumes therein’
- Rescript at the IHR: ‘There is no method for users to initiate queries using statistical tools’
- An extensive list of repository use cases from 2007, many of which have since been addressed
- ALUIAR Project working through an issues list
- Rave in Context open innovation blog
Readers of this blog will know of good guidance and examples of Use Cases – comments and links would be very welcome, please do suggest further reading!
JISC Programme Manager: digital infrastructure for learning and teaching materials
Background to this blog post
The OER Rapid Innovation Call for Proposals was announced in November 2011. It is open to HEFCE-funded institutions to bid.
I am very aware that the issues in scope for this Call are broader than the UK. The Call includes a snapshot of the digital infrastructure space at November 2011; it builds on the understanding and experiences of projects within the UKOER Programme and beyond, and is particularly informed by the expertise at JISC CETIS. It therefore seems useful to make the snapshot available as a blog post so that it is more accessible to people working in open content for education around the world.
The following is taken from Paragraphs 25-75 of the Call, but with added headings to enable easier reading online. Please read the full Call for further understanding of what the requirements are for projects.
The Global Picture
The OLnet initiative has recently identified Key Challenges for the OER Movement. These challenges include:
- How can we improve access to OER?
- What are the issues surrounding Copyright and Licensing and how can they be overcome?
- What technologies and infrastructure are needed/in place to help the OER movement?
It is these global challenges that underpin this Call for projects to enhance the digital infrastructure to support open content.
The Story so Far
Through the JISC Digital Infrastructure Team, JISC supports the creation and use of a layer of scholarly resources for education and research across the network. This includes the development of infrastructure, technology, practice and policy to support processes from creation and access to re-use of resources. Major activities include sharing and storing content, providing access to content (via licences and technologies), developing solutions for curation and delivering data and content resources via data centres and distributed solutions.
Through the OER Technology Support Project, the OER IPR Project, the evaluation and synthesis, and the experiences of funded projects, and aided particularly by JISC CETIS’s technology synthesis work, JISC is developing a clearer understanding of the role of technologies and infrastructure in supporting open practice and open content.
In particular JISC has funded a number of elements that support the sharing of learning materials including Jorum, the Repositories Infokit, previous rapid innovation funding for the Xpert search, the SWORD protocol, the CaPRet project and an OER Programme-funded prototype showcase of UKOER content that is currently under development.
Opportunities and Challenges
There are some key areas that JISC has identified where developments under this call are encouraged. What follows is a description of some of the opportunities and challenges that have been identified in this space. However this list is not exhaustive and bidders are welcome to submit proposals that address different areas if they fulfil the main aims of the call.
Open licensing is key to open content, and fertile ground for developing digital infrastructure. Tools built around Creative Commons licences may provide a useful backbone, so the Open Attribute tool and projects using those conventions, such as OERGlue and CaPRet, are useful in that they provide benefits to users (easy attribution) matched by benefits to content providers (analytics). Tools such as the Xpert Attribution Tool help the flow of rights. Implementation of Open Attribute into tools and services, and a set of services around embedded licences, are potential areas that proposals could tackle.
Improved resource description, both machine-readable and human-readable, is important to enable content to be effectively found, shared and selected. CETIS have provided a summary of the key initiatives to track, namely the Learning Resources Metadata Initiative, which is a profile of the schema.org initiative for improving HTML markup. HTML5 may offer promise in this area. Including provenance and licensing information when sharing resources is important to digital literacies, as well as to meeting attribution requirements such as the Creative Commons BY clause.
Aggregation and discovery is another area of interest for open content (see OER aggregation blog post). The OER Thematic Collections projects have explored a range of approaches. The Content Clustering and Sustaining Resources publication provides a good description of the approaches in this area generally. The Shuttleworth-funded OER Roadmap Project proposes an ecosystem of repositories and services, characterised by the use of APIs and shared protocols such as JISC-funded SWORD. The Discovery Initiative promotes an open metadata ecology to enable better use and aggregation of content. The Learning Registry approach explores the use of activity data to enhance the metadata and discovery of resources and the OER Programme is funding a UK experimental node. Solutions might be developed that build on these initiatives, specifically to enhance the digital infrastructure for open content in education.
Many sites hosting collections of educational materials keep logs of the search terms used by visitors to the site when searching for resources. There might be solutions that could be developed to aid the understanding of search activity. For example, a project could deliver a tool that facilitates the analysis of search logs to classify the search terms used with reference to the characteristics of a resource that may be described in the metadata. Such information should assist a collection manager in building their collection (e.g. by showing what resources were in demand) and in describing their resources in such a way that helps users find them. The analysis tool should be shown to work with search logs from a number of sites, and should produce reports in a format that is readily understood, for example a breakdown of how many searches were for “subjects” and which were the most popular subjects searched for. A degree of manual classification will be required, but ideally the system would learn how to handle certain terms and share that learning between users: a user should not have to tell the system that “Biology” is a subject once they or any other user has done so. Further information on the sort of data that is available and what it might mean is outlined in CETIS’s blog post on Metadata Requirements from the Analysis of Search Logs. Solutions should be developed as open source software then made free to use or install without restriction, with full documentation. The tool proposed above is one way that we could improve the understanding of search; other suggested solutions are welcome.
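The classify-and-learn loop described above can be sketched in a few lines. This is a hedged illustration, not a proposed implementation: the category names, the toy subject list and the sample log are all assumptions, and a real tool would persist the shared classifications and report across many collections.

```python
# A sketch of the search-log classifier: terms are matched against known
# metadata categories, manual decisions are remembered and shared, and a
# report breaks a log down by category. All data here is illustrative.
from collections import Counter

class SearchLogClassifier:
    def __init__(self, known_subjects):
        # shared store of learned classifications (lower-cased terms)
        self.learned = {term.lower(): "subject" for term in known_subjects}

    def classify(self, term):
        return self.learned.get(term.lower(), "unclassified")

    def teach(self, term, category):
        """Record a manual classification so it is shared with all users."""
        self.learned[term.lower()] = category

    def report(self, search_log):
        """Break a list of search terms down by category."""
        return Counter(self.classify(term) for term in search_log)

clf = SearchLogClassifier(known_subjects=["Biology", "History"])
clf.teach("mitosis", "topic")   # one manual decision, remembered for everyone
log = ["biology", "Mitosis", "history", "jmol applet"]
print(clf.report(log))
```

Because `teach` writes into the shared store, nobody has to classify “Biology” as a subject twice, which is the learning behaviour the paragraph asks for.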
Effective search engine optimisation (SEO) is key to open educational resources providing benefits of discoverability, reach, reputation and marketing. Guidance on “improving your online presence” needs applying to the wide range of platforms and content types used for OER, as described in JISC CETIS’ UKOER technical synthesis. Projects have explored SEO in several ways: for example, the SCOOTER project has produced guidance on its chosen approach to search engine optimisation, and the MMTV project experimented with Google AdWords to improve SEO. The variations in format types and platforms mean that open content is exposed to web search in a variety of ways. A particularly key issue is how “repositories” compare to “web 2.0 services” in terms of search engine optimisation. To answer that, we may need to go beyond theory into running a structured experiment. For example, a technical investigation/tool for the SEO of common platforms and formats for OER would be very useful. Such a project would develop a repeatable approach, using technical tools to run the SEO work and to capture and present the findings in a useful way. The outputs of such an investigation would include the methodology, a findings report to JISC, and an accessible set of outputs aimed at OER projects. Other solutions to improving SEO for open content would also be very welcome.
Understanding use has been a major theme of the OER Programme Phase Two. The Value of Reuse report and the Literature Review of Learners’ Use of Open Educational Resources captured what is known about use of open educational resources. The Learning Registry is relevant here. The Listening for Impact study analysed the feedback and usage of some open content collections. Further useful resources are available from the Activity Data Programme. Analytics may be an important way to provide evidence of the benefits of open educational resources, so enhancing content and platforms to enable enhanced usage tracking, exploiting APIs of third party systems, exploring ways of capturing and visualising use, and providing dashboards to manage analytics data may all be very useful.
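As a small illustration of the dashboard idea, raw activity events can be rolled up into per-resource counts that a visualisation could then present. The event structure here (resource id, action, date) is an assumption made for the sketch, not a description of any particular collection’s data.

```python
# A minimal sketch of usage tracking for open content: raw activity events
# rolled up into per-resource action counts for a dashboard. The event
# structure and identifiers are illustrative assumptions.
from collections import defaultdict
from datetime import date

events = [
    {"resource": "oer-101", "action": "view",     "when": date(2012, 3, 1)},
    {"resource": "oer-101", "action": "download", "when": date(2012, 3, 2)},
    {"resource": "oer-205", "action": "view",     "when": date(2012, 3, 2)},
    {"resource": "oer-101", "action": "view",     "when": date(2012, 3, 5)},
]

def summarise(events):
    """Roll raw events up into {resource: {action: count}}."""
    summary = defaultdict(lambda: defaultdict(int))
    for e in events:
        summary[e["resource"]][e["action"]] += 1
    return {resource: dict(actions) for resource, actions in summary.items()}

print(summarise(events))
# {'oer-101': {'view': 2, 'download': 1}, 'oer-205': {'view': 1}}
```

The same rollup could be fed by a third-party API rather than a local log, which is where the point about exploiting APIs of other systems comes in.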
Online profiles are becoming a part of academic identity, and open content provides a significant opportunity for academics to enhance their profile, alongside managing and reflecting on their professional work. To this point many efforts at creating academic profiles building on institutional information and open content have focused exclusively on profiles of publications and the provision of open access to scholarly communications. However, other forms of open content can play a significant role in academic identity and professional development. A key opportunity is therefore linking a broader range of open content to academic profiles. This might involve fully or semi-automated integration of the publication, release or record of multiple types of open content into academic staff profiles. This is not about creating new platforms but about using feeds and APIs to enhance existing systems that handle continuing professional development, CVs, ePortfolios and so on. Examples of this sort of functionality can be found in Humbox’s profiles of contributing authors, which also allow users to embed an author’s content list elsewhere, and Rice Connexions offers author profiles. Services such as Slideshare and YouTube, which host user-generated content, are well used as platforms for open content. Proposals could demonstrate fully or semi-automated approaches that can flexibly draw on multiple distributed sources of open access articles, OER, blog posts and so on. Proposals to address this opportunity are very welcome.
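Since the suggestion is to use feeds and APIs rather than build new platforms, here is a sketch of the aggregation step: pulling items from several RSS feeds (say, a slide-hosting service and a blog) into one list for a profile page. The feed XML is inlined for illustration; a real aggregator would fetch the feeds over HTTP, and the URLs and source labels are made up.

```python
# A sketch of feed-driven profile aggregation: items from multiple RSS 2.0
# feeds combined into one list for an academic's profile. Feeds, URLs and
# source labels are illustrative assumptions.
import xml.etree.ElementTree as ET

def items_from_rss(rss_xml, source):
    """Extract (title, link, source) tuples from an RSS 2.0 document."""
    root = ET.fromstring(rss_xml)
    return [
        (item.findtext("title"), item.findtext("link"), source)
        for item in root.iter("item")
    ]

slides_feed = """<rss version="2.0"><channel>
  <item><title>Open licensing 101</title><link>http://example.org/s1</link></item>
</channel></rss>"""

blog_feed = """<rss version="2.0"><channel>
  <item><title>Reflections on OER</title><link>http://example.org/b1</link></item>
</channel></rss>"""

profile = (items_from_rss(slides_feed, "slides")
           + items_from_rss(blog_feed, "blog"))
for title, link, source in profile:
    print(f"[{source}] {title} - {link}")
```

The design point is that the profile system only consumes standard feeds, so adding a new content type means adding a feed URL, not modifying the platform.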
One mechanism that connects people to content is social recommendation. This includes favouriting, liking, bookmarking, reviewing, and social curation tools such as Scoopit, paper.li, zite, storify, pearltrees and so on. Often this involves browser-based tools such as bookmarklets, making it very easy for people to capture, share and store useful resources. There are two OER-specific bookmarking tools available that handle the licensing characteristics of open content: FavOERites, developed at Newcastle University (as a UKOER funded project), and the OER Commons tool, both of which have APIs and have open sourced their code. The implementation and enhancement of these tools to handle open content may be a useful area for projects to explore. For example, projects might develop solutions for making content “share-friendly” to these tools, explore how the tools can use automatically generated metadata about licences, the user and their context, and investigate how shared tags and vocabularies might enable more effective sharing for educational purposes.
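To show what makes an OER-aware bookmark different from a plain one, here is a generic sketch of a bookmark record that carries licence information and shared tags. This is not the FavOERites or OER Commons data model, just an assumed illustration of the idea that downstream tools can filter and attribute content because the licence travels with the bookmark.

```python
# A generic sketch (not any real tool's API) of an OER-aware bookmark:
# unlike a plain bookmark it carries the licence and shared tags, so tools
# can filter for openly licensed content and attribute it correctly.
from dataclasses import dataclass, field

@dataclass
class OERBookmark:
    url: str
    title: str
    licence: str                      # e.g. a Creative Commons licence URL
    tags: set = field(default_factory=set)

    def is_openly_licensed(self):
        # crude check, sufficient for the sketch
        return "creativecommons.org" in self.licence

bm = OERBookmark(
    url="http://example.org/cell-biology",
    title="Cell biology slides",
    licence="http://creativecommons.org/licenses/by/3.0/",
    tags={"biology", "slides"},
)
print(bm.is_openly_licensed())  # True
```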
The growth in e-books and e-readers, both open and proprietary, is of interest to education. Books are a familiar format to use in teaching, but digital technologies also afford new ways of creating, sharing and using books. For example, the College Open Textbooks initiative states that “We have found that open textbooks should be:
- easy to use, get and pass around
- editable so instructors can customize content
- cross-platform compatible
- and accessible so they work with adaptive technology”
In the UK, JISC Collections have been running the ebooks observatory and examining business models for etextbooks. Developments from the research world are emerging around Enhanced Publications, which combine research text, data and rich media. There is also the recently announced PressBooks platform. International initiatives such as the Saylor Open Textbook Challenge and the WA State Open Course Library etextbook initiative have raised the profile of open textbooks. JISC CETIS have described the use case for open e-textbooks. There is guidance on ebooks available from JISC Digital Media, and JISC has funded the #jiscpub R&D projects. Several campus-based publishing projects have piloted reusable approaches, including Epicure, CampusROAR and Larkin Press; another useful example to look at is “living books about life”.
Phases 1 and 2 of the OER programme made use of a wide range of platforms (blogs, wikis and repositories) and often made modifications to the software to fully support OER use cases. Enhancing platforms is likely to mean improving ingest and expose mechanisms, handling licence information, and addressing syndicated feeds, APIs, widgets and apps. Examples of platform enhancement include the work Oxford University and others have done with WordPress, and the CUNY Academic Commons in a Box work. Proposals are welcome to enhance platforms for open content. Bidders may wish to create enhancements to existing release, aggregation and remix platforms to improve the transfer of open content for educational purposes. Projects may wish to combine existing tools to provide enhanced functionality. The outcomes of these projects should be a richer exchange of metadata between publishing platforms, aggregators and other services used in the sharing of openly licensed content.
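One concrete form that “richer exchange of metadata” could take is a syndication feed item that carries the licence alongside the title and link, so aggregators downstream can preserve rights information. The sketch below uses the Creative Commons RSS module namespace; the item content is an illustrative assumption.

```python
# A sketch of licence-carrying metadata exchange: an RSS 2.0 item with a
# Creative Commons RSS module licence element, so aggregators can preserve
# rights information when content is syndicated. Item content is made up.
import xml.etree.ElementTree as ET

CC_NS = "http://backend.userland.com/creativeCommonsRssModule"
ET.register_namespace("cc", CC_NS)

item = ET.Element("item")
ET.SubElement(item, "title").text = "Open content and platforms"
ET.SubElement(item, "link").text = "http://example.org/oer/42"
licence = ET.SubElement(item, "{%s}license" % CC_NS)
licence.text = "http://creativecommons.org/licenses/by/3.0/"

xml_out = ET.tostring(item, encoding="unicode")
print(xml_out)
```

An aggregator that understands the `cc:license` element can then filter, display or attribute the content without ever losing the rights statement, which is the exchange the paragraph above is after.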
The opportunities and challenges above are only indicative and not exhaustive.
Please read the full Call for a complete understanding of the requirements for projects.
Bidders are welcome to use the oer-discuss mailing list to refine ideas and identify potential collaborators. JISC will not provide a matchmaking service, but commercial and overseas experts are welcome to use the mailing list to express an interest in collaborating.
I hope you find this useful. Comments very welcome.
JISC Programme Manager: digital infrastructure for learning and teaching materials
Enhancing platforms for open content: the project cited is from City University New York not State University New York (now corrected, thanks to Matthew Gold, CUNY for spotting the error)
A little bit of buzz going around the (virtual) office today as the Technology Strategy Board announces the ‘Open Data Institute’. Given the Digital Infrastructure Team’s investment and work in open data (not least the recently completed linked data programme, #jiscEXPO), we are hoping that the opportunity to collaborate in pushing forward the Open Data agenda (especially in Universities and Colleges) will be a conversation happening soon.
Stay tuned, you’ll know (in the open) as we know