On behalf of the JISC/HEA UK Open Educational Resources (OER) Programme I am delighted to announce the completion of the OER Rapid Innovation Strand.
(also available as a PDF and downloadable file)
These solutions address a range of issues that we identified in our original call for proposals, and they specifically address:
- how to create rich machine-readable resources that contribute to the global content commons?
- how to simply and cheaply host and display content?
- how to enrich and manage audio-visual content in an educational context?
- how to bring OER content closer to the everyday life of academics?
- how to make use more visible to aid discovery and decision making?
All of the project outputs are reusable and the slidedeck includes links to key information. Final project reports will be available from the strand project pages. Please do try out the solutions provided, feed back to the projects and join the lively oer-discuss list to connect to people supporting open educational resources.
A huge thank you to the projects, and to Martin Hawksey, Phil Barker and Lorna Campbell of JISC CETIS for doing such fantastic work.
Tuesday 13th November saw the final programme meeting for the UK OER Programme 2009-2012. An aim of the programme had been to find sustainable practices for the release of OER, and there were many success stories shared. It seems to me that the funded programme marks the start of a general move towards greater OER practice.
There was a mandated requirement for projects within the Programme to tag their content with “ukoer”. Whether the content is images on flickr, courseware on institutional webpages or videos on youtube, it should be tagged as ukoer. The tag also got used for discussions about OER on twitter and in blogposts. As with many mandated requirements it was not universally or consistently applied, despite our best efforts.
It soon became clear that it can be hard to distinguish between the content that *is* OER and the content that is *about* OER. Particularly because openly licensed materials designed for other people to reuse in a training/learning context can be both about OER and OER themselves: OER squared! There’s quite a lot of content like that.
Recently members of the oer-discuss jiscmail list have been debating whether we should make continued use of the ukoer tag, and whether we can even control tag use post-funding now it is out there in the wild.
What does the tag mean? Is it …
- a signifier that content was produced with funding from the Programme
- a signifier that the releaser has been involved with the Programme
- a signifier that the releaser is contributing to a bigger collection of OER within the UK
- a signifier that the releaser identifies themselves as a part of a UK OER community?
Sometimes the tag might be about the contributor, the OER, or about activities such as workshops.
Focusing on its use for the OER content itself: each of the meanings above might suggest different use cases for how people might wish to slice and present content. It's worth noting that the tag is only one metadata item: each piece of content also has a publish/release date (often relating to when it went live on the platform being used), and an owner/author/contributor (sometimes an institution, a team, an individual, or a combination). Using these variables we can imagine use cases such as:
- see all content tagged ukoer dated 2009-2012
- see all content tagged ukoer
- see all content before 2012 and all content after 2013 (two searches to compare)
and of course, to look at the usage of that content too.
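These slices can be sketched in a few lines of code. The records below are invented for illustration, carrying just the three metadata items mentioned above: tags, a publish/release date, and an owner.

```python
from datetime import date

# Hypothetical resource records with the three metadata items discussed:
# tags, a publish/release date, and an owner/author/contributor.
records = [
    {"title": "Intro to Statistics", "tags": ["ukoer", "maths"],
     "released": date(2010, 6, 1), "owner": "University A"},
    {"title": "Open Licensing Guide", "tags": ["ukoer"],
     "released": date(2013, 3, 15), "owner": "Project Team B"},
    {"title": "Chemistry Slides", "tags": ["chemistry"],
     "released": date(2011, 9, 9), "owner": "Dr C"},
]

def tagged(records, tag):
    """All content carrying a given tag."""
    return [r for r in records if tag in r["tags"]]

def tagged_between(records, tag, start, end):
    """Content with the tag, released within a date range."""
    return [r for r in tagged(records, tag) if start <= r["released"] <= end]

# "see all content tagged ukoer dated 2009-2012"
funded_era = tagged_between(records, "ukoer", date(2009, 1, 1), date(2012, 12, 31))
# "see all content tagged ukoer"
all_ukoer = tagged(records, "ukoer")
```

The point the use cases make is visible even at this toy scale: the date slice and the open-ended tag search return different sets, so which question you can answer depends on the metadata actually being there and being consistent.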
If we take the ukoer tag as a single identifier for content released as part of the programme, that might still be messy – “as part of”, “as a result of”, “with an awareness of”. Those latter meanings could continue to be true. Many people might still see benefits of signifying their content is contributing to a UK OER commons. That commons is the real impact of the programme and it would be healthy to see that continue.
However that does make it harder for people to derive clear narratives / patterns from the data in Jorum (or any other aggregation). As Sarah Currier puts it “it’s harder to disambiguate a large number of resources with the same tag expressing different properties (“funded by UKOER” *and* “produced by member of UK OER community”), than to just have a new tag that expresses the new property”. “It is very bad data management practice to munge together two concepts in one tag. It is very easy to agree a new tag; data from both can be brought together for analysis much more easily than disambiguating data about two things from one tag”.
However our decision about whether to encourage continued use of the “ukoer” tag will not just be about best practice. It is about weighing up best practice against common practice and the cultural considerations. At the risk of sounding like I’m overcomplicating things: it is a socio-technical issue. There is a balance to be made between the stated or tacit requirements of funders, the role Jorum plays for the funders, the role of Jorum for contributors, and the effort of people involved with OER. Of course by contributors, we are talking about the deposit/share point within an institution or team, who need to keep messages and requirements as simple as possible.
The list members have therefore looked to JISC to say whether/how we will want to draw on these figures as evidence of the impact of the programme. In a sense the measurement of the impact of the programme is inherently fuzzy and that causes complexities for service providers like Jorum who are rightly trying to anticipate future use cases.
We are lucky to have experts in this field, including two members of the Jorum team who wrote about the challenges of metadata in learning object repositories and members of JISC Cetis who are writing about resource description in OER. I have gathered their input into this post so that we can try to start articulating the issues here. It is through this exchange that we can make the right decisions for JISC, HEA and the wider community.
The point I make here is that we have before us a classic problem space. It illustrates that metadata decisions are about current and future use, that they are about balancing the needs of contributors and users, and that these things require discussion and the unpacking of assumptions. There are solutions out there, involving the sources, the aggregations … but it depends on what we want.
What’s the answer? Should we continue using ukoer as a community tag for a fuzzy concept or try to restrict use to a controlled tag for a funding stream? If we choose the latter (for any reason) could it actually be controlled in that way?
We would be interested to know what people think. The oer-discuss list leans towards the former but there can be many other perspectives and those of you who have been at the sharp end of evidencing impact may have some valuable war stories to share.
Post written with input from Sarah Currier, David Kernohan, Martin Hawksey, Lorna Campbell, Jackie Carter.
This post is my reflections on the emerging conclusions from the JLeRN Experiment.
Applying a new approach to an old problem
Followers of technology trends will have noticed some of the big themes of recent years include cloud storage, big data, analytics and activity streams, social media. Technologists supporting education and research have been using these approaches in a range of ways, finding where they can help solve critical problems and meet unmet needs. Many of these explorations are investigative: they are about getting a grasp of how the technologies work, what the data looks like, where there are organisational or ethical issues that need to be addressed, and what the skills are that we need to develop in order to fully exploit these emerging opportunities.
The Learning Registry has been described by Dan Rehak as “Social Networking for Metadata” (about learning resources). Imagine pushing into the cloud RSS feeds of all the urls of learning resources you can imagine, from museums, from educational content providers, from libraries. This is about web-scale big data. Imagine that cloud also pulling in data about where those urls have been shared: on facebook, twitter, blogs, mailing lists. If you’ve tried out services like topsy.com or bit.ly analytics you’ll know that finding out information about url shares is possible and potentially interesting. Now imagine being able to interrogate that data, to see meta trends, to provide a widget next to your content item that pulls down the conversation being had around it. That is the vision of the Learning Registry. Anyone who has been involved with sharing learning materials will recognise the scenario on the left below.
Learning Registry Use Case, Amber Thomas, JISC 2012, CC BY
The Learning Registry is about applying the technologies described above to the problem on the left, by making it possible to mine the network for useful context to guide users.
To explore this potential, the JISC/HEA OER Programme has been funding an experiment to run a Learning Registry “node” in the UK. The growth of openly licensed content and the political momentum to encourage the use of that content has been a spur to funding this experiment though it should be noted that the Learning Registry is not designed purely for open content.
See this useful overview for more detail of the project. It has been running on an open innovation model, sharing progress openly, and working with interested people. Headed up by the excellent Sarah Currier, with input from Lorna Campbell and Phil Barker from JISC CETIS, in my view it has been a very effective experiment.
Towards the end of the work, on 22nd October 2012, Mimas hosted an expert meeting of those people that have been working with the Learning Registry, services and projects, contributors and consumers, developers and decision makers. It was a very rich meeting, with participants exchanging details of the way they have used these approaches, and deep discussions on what we have found.
What follows is my analysis of some of the key issues we have uncovered in this experiment.
Networks and Nodes
The structure of the LR is a fairly flat hierarchy: it can expand infinitely to accommodate new nodes, and nodes can cluster. See the overview for a useful diagram.
What this structure means is that it can grow easily, and that it does not require a governance model with large overheads. The rules are the rules of the network rather than of a gate-keeping organisation. This is an attractive model where it is not clear where the business case lies.
One of the ways of running a node is to use an Amazon Web Service instance. That seems a nice pure way of running a distributed network, however university procurement frameworks have still got to adjust to the pricing mechanisms of the cloud. Perhaps in that respect we’re not ready to exploit cloud-based network services quite yet.
However more generally I think what we are seeing is a growth in the profile of services that are brokers and aggregators of web content. Not the hosts, or the presentation layers, but services in between, sometimes invisible. JISC has been supporting the development of these sorts of “middleware” and “machine services” from the early days: the terminology changes but the concept is not new to JISC. What does seem to be developing though (and this is my perception) is an appetite for these intermediary services, and the skills to integrate them. Perhaps there is a readiness for a Learning Registry-ish service now.
Another key architectural characteristic is a reliance on APIs. This enables developers to create services to meet particular needs. Rather than a centralised model that collects feature requests from users, it allows a layer of skilled developers to create services around the APIs. The APIs have to be powerful to enable this though, so getting that first layer of rich API functionality working is key. To that extent the central team has to be fast and responsive to keep up momentum.
However the extent to which the LR is actually a network so far is unclear. There are a handful of nodes, but not to the extent that we can be sure we are seeing any network effects. The lack of growth of nodes may be because the barrier to setting up a node is perceived to be high. It may be too early to tell. But for the purposes of the JLeRN experiment, my conclusion is that we have not seen the network effects that the LR promises.
Pushing the hardest problems out of sight?
It’s easy to fall into a trap of hoping that one technical system will meet everybody’s needs. The Learning Registry might not be THE answer, but there is something of value in the way that it provides some infrastructure to manage a very complex distributed problem.
However the question raised at the workshop by Sarah Currier in her introduction and again by David Kay in his closing reflections is: does it push some of the challenges out of scope, for someone else to solve? The challenges in question include:
- Resource description and keywords
- People identifiers
- Data versioning
To take one problem area: resource description for learning materials. It is very hard to agree on any mandatory metadata beyond Dublin Core. This is partly because of the diversity of resource types and formats: a learning material can be anything, from a photo to a whole website. Within resource types it is possible to have a deeper vocabulary, for example for content packaged resources that may have a nominal “time” or “level” attached. Likewise, different disciplinary areas not only have specialist vocabularies but also use content in different ways. It is technically possible to set useful mandatory metadata BUT in practice it is rarely complied with. When we are talking about a diversity of content providers with different motivations, the carrots and sticks are pretty complicated. So users want rich resource description metadata, to aid search and selection, but that is rarely supplied.
The Learning Registry solution is to be agnostic about metadata: it just sucks it all into the big data cloud. It does not mandate particular fields. What it does is offer developers a huge dataset to prod, to model, to shape, and to pull out in whichever way the users want it. Developers can do anything they want with the data AS LONG AS THE DATA EXISTS. If there is not enough data to play with, or not enough consistency between resources, then it is hard to create meaningful services over the data.
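To make that metadata-agnostic design concrete, here is a rough sketch of the kind of document a node stores. The field names are simplified from the Learning Registry resource-data envelope and the example resource is invented, so treat this as illustrative rather than normative.

```python
import json

# A simplified Learning Registry-style envelope (field names loosely
# follow the spec; the resource and values are invented). The registry
# does not mandate any particular metadata fields: the payload can be
# Dublin Core, LOM, free-text keywords, or anything else.
envelope = {
    "doc_type": "resource_data",
    "resource_locator": "http://example.ac.uk/oer/statistics-intro",
    "keys": ["ukoer", "statistics"],
    "payload_schema": ["DC 1.1"],   # declared by the submitter, not enforced
    "resource_data": {              # arbitrary metadata payload
        "dc:title": "Introduction to Statistics",
        "dc:creator": "University A",
        "dc:rights": "CC BY 3.0",
    },
}

# Developers get documents like this back and must make sense of them;
# if payloads are sparse or inconsistent there is little to build on.
serialised = json.dumps(envelope)
```

The flexibility is the point, and also the problem: nothing in the envelope guarantees that two contributors described the "same kind" of thing the same way.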
I said above that the Learning Registry “provides some infrastructure to manage a very complex distributed problem”. But on reflection does it manage that complexity? Or does it just make it manageable by pushing it out of scope? And if it doesn’t enable developers to build useful services for educators, is it successful?
These are a selection of the issues that the experiment is surfacing. There are certainly plenty of question marks about the effectiveness of this sort of approach. But I still feel sure that there are aspects of these technologies that we should be applying to meeting our needs in education and research. Certainly, this experiment has overlapped with work in JISC’s Activity Data programme, in our analytics work and in the area of cloud solutions. There is something interesting happening here, some glimpses of more elegant ways of sharing content, maybe even a step change.
Opportunity to preview and contribute to a book about the technical themes emerging from three years of the UK OER Programme
Extract from Lorna Campbell’s blogpost:
The OER technology directions book that Amber, Phil, Martin and I drafted during a book sprint at the end of August is now almost complete. We even have a title!
Technology for Open Educational Resources – Into The Wild. Reflections on three years of the UK OER Programmes
We’ve spent the last few weeks polishing, editing and amending the text and we would now like to invite colleagues who have an interest in technology and digital infrastructure for open educational resources to review and comment on the open draft.
We’re looking for short commentaries and feedback, either on the whole book, or on individual chapters. These commentaries will form the final chapter of the book. We want to know what rings true and what doesn’t. Have we missed any important technical directions that you think should be included? What do you think the future technical directions are for OER?
Note that the focus of this book is as much on real world current practice as on recommended or best practice. This book is not intended as a beginners' guide or a technical manual; instead it is a synthesis of the key technical issues arising from three years of the UK OER Programmes. It is intended for people working with technology to support the creation, management, dissemination and tracking of open educational resources, and particularly those who design digital infrastructure and services at institutional and national level.
The chapters cover:
- Defining OER
- Resource management
- Resource description
- Licensing and attribution
- SEO and discoverability
- Tracking OERs
- Accessibility by Terry McAndrew, TechDis
UK OER projects from all phases of the Programme are encouraged to comment, and we would particularly welcome feedback from colleagues that are grounded in experience of designing and running OER services.
For more details, links and guidance on how to contribute, please see Lorna’s blogpost.
We wrote most of this as a booksprint: a writing retreat using collaborative authoring software. As well as continuing to write the book with the help of the community, we also have the fun of choosing cover designs and formats and print-on-demand options … it’s quite a learning curve and very rewarding. So now you all know what my family and friends will be getting for xmas 😉
Amber Thomas, JISC.
One of the findings that has emerged clearly from the UK OER Programme and from the UK Discovery work is that for a healthy content ecosystem, information about the content needs to be available to many different systems, services and users. Appropriately licensing the metadata and feeds is crucial to downstream discovery and use.
The OER IPR Support Project have developed this fabulous animation to introduce the importance of open data licensing in an engaging way.
It was developed out of the UK OER Programme but informed by the work of several other areas including UK Discovery, Managing Research Data, the Strategic Content Alliance, and sharing XCRI course feeds. With thanks to the many people who helped with the storyboarding, scripting and feedback: particularly Phil Barker, Tony Hirst and Martin Hawksey.
You may remember the same OER IPR team produced the Turning a Resource into an Open Educational Resource (1,700+ hits and counting). The team is Web2Rights (Naomi Korn, Alex Dawson), JISC Legal (Jason Miles-Campbell) and the animator is Luke McGowan. The whole animation is (c) HEFCE on behalf of JISC, and Creative Commons Attribution Share Alike 3.0.
We hope it will have wide usefulness and we very much welcome feedback.
Amber Thomas, JISC
At ALT-C in early September I ran a session with David Kernohan on Openness: learning from our history. The theme of the conference was “a confrontation with reality”, so it seemed fitting to explore the trajectories taken by various forms of openness. What follows is just a short thought piece that I contributed to the session about some of the patterns I have observed over the past decade or so.
Curves and Cycles
The first thing to say is that we are all different in our encounters with new approaches, whether they are new technologies like badges or new delivery models like MOOCs. We are each on our own learning curves and change curves, and we meet new ideas and solutions at different points in the hype cycle. That is a lot of variation. So when we meet new ideas, we can respond very differently. My first message is that every response is a real response.
It’s too easy to characterise people as pro- or anti- something. It’s too easy to present things as a debate for or against. But polarisation often masks the real questions, because we don’t hear them properly.
“The use of technology seems to divide people into strong pro- and anti-camps or perhaps utopian and dystopian perspectives” – Martin Weller, The Digital Scholar
Dialectics of Open and Free
There is usually a dialectic around open and free: free as in freedom, free as in beer. “Open as in door or open as in heart”: some courses are open as in door. You can walk in, you can listen for free. Others are open as in heart. You become part of a community, you are accepted and nurtured. I always add: open as in markets?
To follow on from free as in beer… a great example of branching is the trajectory of the open source movement. There were big debates over “gratis vs libre” and that gave birth to the umbrella term Free AND Libre Open Source Software – FLOSS. By enabling the practices of open source to branch off, allowing the community to branch off, we saw a diffusion of innovation: towards profit-making business models in some areas, free culture models in others. It’s interesting how github supports the whole spectrum.
This has also been the approach of the UK OER Programme. We have been quite pluralistic about OER, to let people find their own ways. We have certainly had tensions between the marketing/recruitment aspect and the open practice perspective. What’s important to note is that often it’s not just one model that comes to dominate.
Tipping points into the mainstream
We don’t always understand what brings about mainstreaming. We very rarely control it.
Consider a story from open standards: the rise and fall of RSS aggregation. Was it netvibes or pageflakes that made the difference? Or google reader? At what point did twitter and facebook start to dominate the aggregation game and overtake RSS? The OER programme gave freedom for each project to choose their platform. They didn’t choose a standard, they chose a platform. It’s often when open standards are baked in to platforms that we see take-up without conscious decision making.
I’m not sure we always notice: sometimes when mainstreaming happens we don’t recognise it. When did e-learning become part of the fabric of education?
Finally, change can take a lot longer than we hope. The 10 years since the Budapest Open Access Initiative can feel like geological time. And yet the OA movement has achieved so much. Perhaps we need some time-lapse photography approach to recognising the impact of changes we started back then. So many more people understand OA now. So many more people care.
Change takes longer than you think
We are all unique in our encounters with new things. Polarisation often masks the real questions. There is often a dialectic around open and free. Often it’s not just one model that comes to dominate. Sometimes when mainstreaming happens we don’t recognise it. Change can take a lot longer than we hope.
Last week I attended a very engaging conference organised by Wikimedia, focused on the uses of Wikipedia in Education: EduWiki.
I was on a panel discussing openness in HE and I also gave a keynote on 21st Century Scholarship and the role of Wikipedia. I’d produced a visual (infographic/poster/prettypicture) which I blogged and have included below. The slides of my talk are online. A video of the talk is also available.
It was interesting to bring together Wikipedians with educators and I learnt a great deal about how Wikipedia is being used in HE settings, and what the active debates are in this area. Some of my take-home points:
- An increasing number of countries are running targeted support for educational use of Wikipedia, and there were some great presentations about setting Wikipedia assignments.
- A bit of digital literacy myth-busting: I had thought that Wikipedia wanted to be seen as an acceptable source of citation, but that’s not true. Perhaps old news (2006) but I wasn’t aware that Wikipedia was so clear that it is NOT a primary source and shouldn’t be cited in academic work.
- That said, I realised as the conference went on that I often use Wikipedia articles as unique identifiers for concepts and things. I think of Wikipedia in quite a linked data way, creating URLs for things. So I do reference Wikipedia as a way of helping people find out more, and I especially do this for things like academic theories: I use it as an accessible reference point.
- The role that Wikipedia has as an entry point for academic work is really interesting, I think. It makes it clear to me that those in the open access community should be helping Wikipedians ensure that wherever possible they cite an open access version of an article, monograph or textbook. Wikipedia editors are a very niche group but really key to ensuring that people outside academia can get access to academic outputs.
- Wikipedia articles are written in a different style to academic essays, which are different again from journal articles. Even then, Wikipedia articles vary in their styles. Alannah Fitzgerald demonstrated the use of textmining tools to illustrate the different styles. The discussion surfaced questions about the way that academics communicate with each other and with “the public”.
- I didn’t realise there is a whole set of Wikimedia projects; I will be having a proper look at Wikidata and Wikibooks.
- We heard some interesting angles on the nature of the Wikipedia community: how many are highly educated, and that as a community it is still developing, with space for much more diversity in the contributor base, particularly more women. They are actively seeking engagement, and I love the phrase “pedants welcome”; I can think of many colleagues who fit that description admirably 😉
Overall, a great conference, congratulations to Martin Poulter and Daria Cybulska for organising it. I’m pleased to say that I have already arranged for me and my colleagues to get trained up in Wikipedia, and I already have my eye on some pages I want to contribute to.
As a parting shot, here’s the pretty picture I made (deftly dodging the infographic pedants who argue this isn’t one). (Update: Brian Kelly unpacks that issue in his post on posters and infographics!)
This is a summary of notable developments in work on technology issues around learning materials, mostly by JISC. It’s aimed at the technical and semi-technical, and comments/additions are very welcome.
Back in late May we ran our first Dev8ED. It was a great event, with developers supporting course data, curriculum design and delivery, distributed VLE and OER programmes coming together for two days of technical work and training.
There is a buzz of technical activity at Jorum. They are trialling the beta of their open usage statistics dashboard. To see the many reasons why open usage data is a good idea, see this post by Nick Sheppard. He and Brian Kelly (an advocate of open usage data) are both on the Jorum Steering Group and we’ve long been aware of the importance to users and contributors of being able to see how Jorum is being used. The Jorum team are also upgrading to DSpace 1.8 which will bring a raft of improvements including some clever search/browse interfaces. It’s all part of a re-engineering process to make Jorum work better for institutions. UPDATE: Read more from Jorum.
Talking of usage, the JISC Learning Registry Node Experiment, JLeRN, is exploring how this emerging global architecture can work for the UK. It’s all about surfacing the “context with the content”, as Suzanne Hardy describes it. Think big data approaches for learning resources. So far we have learnt that the local “nodes” are fairly easy to set up and feed with data. The challenge is making use of it in this incubation stage: building tools and interfaces over variable data sets. This is exactly what we intended to explore, so the team is now working closely with some handpicked projects to work our way through the challenge. Some early work by the SPAWS project is making good progress, and it will be great to see the learning start to emerge from these pilots.
If you like the notion of “paradata” (social and contextual data derived about content and its use), then you’ll see how it fits well with the idea of Learning Analytics. JISC Cetis and others have been examining emerging practices and the issues around this concept. Paradata seems to be a bit of a bridge between web content analytics and activity data, so those of us working with digital resources for teaching and learning would do well to catch up on these concepts.
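For a flavour of what a paradata statement looks like, here is a sketch in the general actor/verb/object style used for this kind of data. The resource, actor and exact field names are invented for illustration; the real paradata conventions may name things differently.

```python
# A hypothetical paradata statement: contextual data about *use* of a
# resource, rather than a description of the resource itself. Structure
# follows the broad actor/verb/object pattern; field names are
# illustrative, not taken from any normative schema.
paradata = {
    "activity": {
        "actor": {"objectType": "teacher", "description": ["biology"]},
        "verb": {"action": "taught", "date": "2012-06-01"},
        "object": {"id": "http://example.ac.uk/oer/cell-division"},
        "content": "Used with a first-year undergraduate class",
    }
}
```

Note how this is closer to an activity-stream event than to a catalogue record, which is why paradata sits between web analytics and resource metadata.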
My Open Educational Resources Rapid Innovation (OERRI) Projects are past the halfway mark now. These are all designed to enhance the digital infrastructure for open content in education. There are 15 projects, each with a grant of £25k or less, running for 4-6 months. Summaries follow (in my words, with some links to nutshell descriptions):
- Attribute Images stamp your images with their licence conditions for better reuse and attribution
- Bebop flow third party content services through your buddypress wordpress platform MORE
- Breaking Down Barriers: Building a GeoKnowledge Community with Open Educational Resources (OER) making Landmap more open and Jorum more geo MORE
- CAMILOE (Collation and Moderation of Intriguing Learning Objects in Education) providing rich open reusable texts about educational practice
- Developing Linked Data Infrastructures for OERs use simile exhibit software to enrich videos and connect them to content MORE
- Improving Accessibility to Mathematical Teaching Resources using latex mark-up to increase reuse of open content for maths MORE
- Portfolio Commons connect repository content to mahara e-portfolios MORE
- Rapid Innovation Dynamic Learning Maps-Learning Registry (RIDLR) mapping resources against curricula and learning paths MORE
- RedFeather (Resource Exhibition and Discovery) free lightweight version of e-prints MORE
- Sharing Paradata Across Widget Stores using the learning registry to share usage and recommendation data about software MORE
- SPINDLE: Increasing OER discoverability by improved keyword metadata via automatic speech to text transcription orchestration of low-cost approaches to enhancing metadata for audio
- SupOERGlue licence-aware remix platform for educators
- Synote Mobile enhancing custom software for managing audio files MORE
- Track OER: Tracking Open Educational Resources tracing use of open content beyond the host platform MORE
- Xerte Experience Now Improved: Targeting HTML5 (XENITH) enhancing Xerte online toolkits to output as html5 for improved accessibility and mobile-friendly content MORE
An honourable mention too for PublishOER, an OER Themes project rather than OERRI. It is working with JISC Collections and Publishers, and includes development of improved technical support for permission seeking and licensing requests.
Meanwhile, colleagues have been busy with the WW1 Discovery projects.
Sarah Fahmy worked with the British Library on a WW1 Editathon, nicely summed up in a quick video, and there was an interesting tweet experiment from the WW1 Arras project too. On the more technical side, Andy McGregor updated me that King’s College did some research into what researchers want out of an online WW1 research collection and what the valuable collections are that could be aggregated into such a research collection. Building on this research, Mimas will develop an exemplar research aggregation of WW1 content. The King’s research discovered that not many of the most valuable collections have working APIs. Therefore Mimas will build APIs for a number of the collections identified, then build an aggregation service over them that enables people to build services allowing researchers to work with the aggregated content. The project is expected to deliver in November 2012.
July saw the Open Repositories conference in Edinburgh and, as always, there was a wealth of useful discussion and outputs. For those of us working with learning materials I’d particularly recommend the reflective posts by Kathi Fletcher and Nick Sheppard. I’m keeping a close eye on developments around the digital infrastructure for open access to research, and it’s interesting to see an increase in discussion about formats and licensing. Perhaps, as Laura Czerniewicz, Martin Weller and others have been saying, the next step forward is a more holistic view: building services to support open scholarship that incorporate teaching as well as research. My colleague Balviar Notay is leading JISC’s work on repositories and curation shared infrastructure, and we are very mindful of the delicate balance between building for research use cases and risking scope creep by trying to be inclusive of other uses.
Friday 27th July saw 20 experts gather for an online meeting on schema.org and the Learning Resource Metadata Initiative (LRMI) that Phil Barker is engaged with. The discussion was very rich, and my take-home message was that over the years JISC and its services (especially CETIS, UKOLN and OSS Watch) have developed real expertise in standards development and adoption. Working with innovators and early adopters, we have built a deep understanding of how technology develops and of the relationship between aspiration and implementation. UPDATE: Read a brief summary of the webinar.
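For readers who haven’t met LRMI yet: it proposes education-specific properties layered on schema.org, expressed as microdata in the page itself. The sketch below generates a snippet for a hypothetical resource; the property names (learningResourceType, educationalUse, useRightsUrl) are from the LRMI draft, but the resource, values and rendering are invented for illustration.

```python
# Hypothetical learning resource described with schema.org + LRMI draft properties.
resource = {
    "name": "Introduction to Open Licensing",
    "learningResourceType": "presentation",
    "educationalUse": "group work",
    "useRightsUrl": "http://creativecommons.org/licenses/by/3.0/",
}

# Render as HTML microdata, the embedding syntax schema.org promotes.
items = "\n".join(
    '  <span itemprop="{0}">{1}</span>'.format(prop, value)
    for prop, value in resource.items()
)
snippet = ('<div itemscope itemtype="http://schema.org/CreativeWork">\n'
           + items + "\n</div>")
print(snippet)
```

The point of mark-up like this is that a search engine or aggregator can read the educational intent and the rights statement straight out of the page, without any separate metadata feed.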
On the horizon …
The OER IPR Team are currently producing a follow-up animation to Turning a Resource into an OER, and this one will be about open licensing for your data. More about that when it is released!
Inspired by the booksprint session at Dev8ED, in August I will be working with Lorna Campbell, Phil Barker and Martin Hawksey on our ebook. Our working title is “Small Pieces Loosely Joined: technology stories from three years of the JISC/HEA OER Programme”. Or something like that. It will be our way of drawing out the lessons for future development in this area. We like a challenge.
The theme of ecosystems will inform my contribution to the UK Eduwiki conference in early September. I’ll be on a panel about openness in HE, and I hope to reflect on the many and varied ways that educators and learners can help develop the ecosystem around Wikipedia.
Then I’ll be off to ALT-C, where I’m running two sessions. One, with Paul Walk, is on the role of the strategic developer in HE. I work with so many talented technologists who add great value to their institutions, and this session is a chance to explore the benefits of investing in in-house development expertise. I’d love to hear from anyone in that sort of role who has been doing CMALT (please get in touch!). The other session, with David Kernohan, is about the trajectory of open education, including some of the key concepts around badges, MOOCs and other hot topics. I’m particularly interested in how we can learn from the route taken by the open source and open data movements. I suspect ALT-C will be noisy with talk of MOOCs! From a technical perspective, JISC CETIS is keeping an eye on what platforms and tools people are using to deliver online courses. UPDATE: Read about what technologies people are using for some MOOCs!
Finally, I’m pleased to announce that I will be working with JISC Collections, JISC Digital Media and other experts, led by Ken Chad, on guidance on the Challenge of eBooks. It will be the next step on from the forthcoming JISC Observatory report on ebooks, looking at the creation, curation and consumption of digital books of all types, and at the opportunities for institutions to respond coherently to the challenge. Watch this space for news of both!
With so much going on, I’m sure there are things I’ve missed in this update – do get in touch!
My colleague Ben Showers has recently been looking across the work taking place around digital books in all their forms: open textbooks, digital monographs, EPUB, web-based books. For educational institutions the need to keep up with the content needs of learners and researchers is paramount, but so much is happening at the moment, with hardware, content formats, the emergence of new authoring tools and rising user expectations, that it can be hard to know where to start.
We have pulled together some key information for decision-makers, with a distinct JISC flavour. Particular thanks to Caren Milloy from JISC Collections and Zak Mensah from JISC Digital Media for their help.
Legal (Licensing, IPR, DRM)
- E-books for Skills: Licensing model for e-textbooks for use by ACL, WBL and Offender Learning
- PublishOER is looking at licensing issues and models for incorporating publisher content into OER, building on CASPER
- Legal Aspects of OERs: OER Infokit (more IPR-specific support is on the OER IPR website, but nothing specific on e-textbooks yet)
- OER Report on Open Practice across sectors (implications for textbooks)
- Jorum is now hosting resources from the Saylor Foundation who have been supporting open textbooks
Business Models
- Frances Pinter (Bloomsbury Academic) on the future of the academic monograph: slides and video
- E-textbooks on mobile devices: JISC Collections is working with the University of Lincoln to licence ebooks for use on mobile devices, downloadable via the VLE.
- Pilot of a consortia model for e-books: JISC Collections is working with Swets to pilot the model used by the Max Planck Society
- E-books for Skills: JISC Collections is looking at the business model to support licensing ebooks to ACL, WBL and Offender Learning
- PublishOER is looking at the sorts of negotiations needed between OER producers and publishers and how the business models might work
- The future of the scholarly monograph in humanities and social sciences: OAPEN-UK
- Living Books for Life: A sustainable, low cost model for publishing books
- Economic implications of alternative scholarly publishing models: Exploring the costs and benefits: Houghton Report (not directly linked but the models around research articles could lend themselves to a similar move in the area of books/learning resources)
- e-Textbook business models (JISC Collections)
- E-books for FE eTextbooks Business Models report looked at the barriers to the adoption of textbooks in FE
- JISC eBooks for FE explored the standard subscription/purchase model and, more recently, the Patron-Driven Acquisition model at a national level
- The WikiEducator Open Textbook initiative explores the use and adoption of open textbooks for teaching
Technology and Standards
- Textus project
- HTML5 case studies (to be available shortly)
- JISC Digital Media has a report on HTML5 video
- jiscPUB: Digital Monograph Technical Landscape: Exemplars and Recommendations
- Mobile and Wireless Technologies Review at Edina
- JISC Scholarly Comms: Campus Based Publishing
- TechDis: Access to eBooks
- JISC Digital Media Introduction to eBooks
- JISC PALS TIME ebook metadata work (2006)
- Ebooks metadata in the RDTF (Discovery) vision – see the attachments
- The Discovery initiative on open metadata, focused on open bibliographic metadata
- KB+ will help support institutions in the management and discovery of ebooks
- JISC Collections (Carol Tenopir) study on scholarly reading: UK Scholarly Reading and the Value of Library Resources: Summary Results of the Study Conducted Spring 2011
- PALS group study on Patron Driven Acquisition of eBooks and the role metadata plays in that process: Patron Driven Acquisitions (PDA) and the role of metadata in the discovery, selection and acquisition of ebooks
- JISC eBooks Observatory project: http://observatory.jiscebooks.org/
- Open etextbook Use Case from JISC CETIS
What have we missed?
Please let us know what resources you find most useful, from JISC and elsewhere, in meeting the challenge of ebooks in your institution.
Ben Showers and Amber Thomas, JISC Digital Infrastructure Team
(last updated 28th May 2012)
Last week I had the great pleasure of meeting with Cathy Casserly (Chief Executive Officer) and Diane Cabell (Counsel and Corporate Secretary) of Creative Commons. Over a couple of days I had many conversations about open licensing, open education and the routes ahead.
I was a panel member for a CC Salon on OER Policies for Promotion. The panel was chaired by Joscelyn Upendran of CC UK, and comprised Cathy Casserly (CC), Patrick McAndrew (OU), Victor Henning (Mendeley) and myself.
To prepare, I had mapped out some of my thoughts on how to encourage open content approaches in education, and some ways that we could be thrown off track.
Preview below. View on Prezi.
We talked about what funders and institutions can do to encourage open educational practices. As is often the case, discussions of open access research publishing and of open educational resources blended together.
Some key points percolating from my discussions last week:
- Educational institutions have everything to gain from “open access”; it is mainly publishers who have to adjust and find new models. In contrast, in the case of “open education” it is educational institutions that have to adjust and find new models. In fact, publishers are one of the contenders for providing open education.
- The most successful “open” approach since the birth of the web has so far been open source. What we saw there was the vision and leadership of the early proponents branching off into a wide range of business models, both pure and hybrid. I anticipate a similar hybridisation emerging in the “OER” space: the purist approaches will continue and mature, but there will also be hybrid approaches taking parts of the model: open processes with closed products (collaborative textbook authoring), or open products with closed processes (open courses with paid-for accreditation), etc. Expect a diffusion of implementation.
- I am increasingly thinking that OER, as a term that marks out reusable, adaptable teaching resources, is one thing, while open content that is available for anyone to freely copy and remix is a slightly different thing, and a much larger Venn circle. Trying to meet both needs in one platform and one definition might mean too much compromise and frustration. To draw on the parallel above, the structure of the open source ecosystem is hidden from most end users: GitHub and SourceForge are for developers, who reuse the code in ways end users can be unaware of. If we are to deepen approaches to educational reusability, we may have to branch those platforms off from the places where end users find the content. (We are currently exploring this on the oer-discuss JISCMail list: join in!)
- Speaking of platforms, it has recently come onto my radar that there is a strong dependency, in the way the web works, between the terms and conditions of service of something like YouTube, SlideShare or Prezi, the rights and responsibilities that lie with the content contributor, those that lie with the user, and where the choice of content licence fits into all that. There is potential for all kinds of overrides between them. I’m also very aware that app-style software is often pared down to a minimal interface, so where is the small print in every little “put” or “get” action? We already know through various JISC innovation projects that RSS feeds, APIs and open data models implicitly encourage particular types of content use, but the licensing is rarely explicit. If an item of content is easily embedded into a third-party platform through code snippets or widgets, doesn’t that imply such uses are allowed? Pinterest’s technology made it too easy to override the licence! I suspect this is going to be an increasing focus: the role of platform T&Cs and functionality in facilitating content licences.
- Creative Commons are hearing over and over that privacy is now a key concern in the flow of content on the web. The question of how the licensing backbone that CC provides might support, or sit parallel to, activities regarding consent, takedown requests and ethical considerations is coming up a lot. The team at Newcastle Medical School have been exploring the concept of a consent commons for a while now, and I anticipate we’ll hear more of that sort of issue.
- Mark-up and embedding of licence terms into the vast range of digital formats is really important. I have been hoping to commission some work on that, and will be exploring ways of helping map out the options, at content, platform and ecosystem level.
- As the open educational resources space grows, we need to look for how to support the infrastructure in sustainable ways. CC are forming an OER Policy Registry. Here at JISC we are assessing the network of services required to support open access to research: what lessons can we share from that? What services can we share? The potential for alignment is there, but avoiding dilution and scope creep are always concerns.
- Lastly, thinking of ecosystems, I’ve been noticing that there may be lessons from the green movement in how to mainstream openness and build business models around it. Think of reuse as recycling. Manufacturers mark products as made from a particular percentage of recycled materials, and they also indicate which aspects of a product are themselves recyclable. As consumers we can use the logos to know which products can be recycled through the schemes that collect our recycling. Shared services, like rubbish collections and council waste-processing contracts, take our recycling to specialist recyclers, who then supply the recycled materials on to manufacturers. We all have a part to play in the ecosystem of reuse. So it goes, I think, with trying to make open content sustainable (except that the most valuable commodity is the labour behind the content rather than the content itself!). There is a whole other blog post in that (with a bit of fairtrade and organic thrown in!).
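On the mark-up point above: the simplest machine-readable licence embedding already exists in ccREL, where a page declares its licence with a rel="license" link that software can pick out. A minimal sketch, assuming the licence is declared that way; the page below is invented for the example, using only Python’s standard-library HTML parser.

```python
from html.parser import HTMLParser

class LicenseFinder(HTMLParser):
    """Collect href values from <a>/<link> elements marked rel="license" (ccREL style)."""
    def __init__(self):
        super().__init__()
        self.licenses = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag in ("a", "link") and "license" in (attrs.get("rel") or "").split():
            self.licenses.append(attrs.get("href"))

# A hypothetical page carrying a ccREL licence declaration.
page = """
<html><body>
  <p>Some open content.</p>
  <a rel="license" href="http://creativecommons.org/licenses/by-sa/3.0/">CC BY-SA</a>
</body></html>
"""

finder = LicenseFinder()
finder.feed(page)
print(finder.licenses)
```

If platforms emitted this sort of declaration in their embed snippets and feeds, the licence would travel with the content rather than being stranded on the page it came from.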
These thoughts, and more, will be framing my contribution to the Creative Commons consultation on v4 of the licences over the next month or so.
“Creative Commons staff, board and community have to date identified several goals for the next version of its core license suite tied to achieving CC’s goal and mission. These include:
Internationalization – further adapt the core suite of international licenses to operate globally, ensuring they are robust, enforceable and easily adopted worldwide;
Interoperability – maximize interoperability between CC licenses and other licenses to reduce friction within the commons, promote standards and stem license proliferation;
Long-lasting — anticipate new and changing adoption opportunities and legal challenges, allowing the new suite of licenses to endure for the foreseeable future;
Data/PSI/Science/Education — recognize and address impediments to adoption of CC by governments as well as other important, publicly-minded institutions in these and other critical arenas; and
Supporting Existing Adoption Models and Frameworks – remain mindful of and accommodate the needs of our existing community of adopters leveraging pre-4.0 licenses, including governments but also other important constituencies.”
Creative Commons has asked me to promote this consultation to you. They would love to hear from you, as providers, users and facilitators of openly licensed content.