15 reusable technology solutions for OER

On behalf of the JISC/HEA UK Open Educational Resources (OER) Programme I am delighted to announce the completion of the OER Rapid Innovation Strand.

(also available as a PDF and downloadable file)

These solutions address a range of issues that we identified in our original call for proposals, and they specifically address:

All of the project outputs are reusable and the slidedeck includes links to key information. Final project reports will be available from the strand project pages. Please do try out the solutions provided, feed back to the projects and join the lively oer-discuss list to connect to people supporting open educational resources.

A huge thank you to the projects, and to Martin Hawksey, Phil Barker and Lorna Campbell of JISC CETIS for doing such fantastic work.

UKOER: what’s in a tag?

Tuesday 13th November saw the final programme meeting for the UK OER Programme 2009-2012. An aim of the programme had been to find sustainable practices for the release of OER, and there were many success stories shared. It seems to me that the funded programme marks the start of a general move towards greater OER practice.

There was a mandated requirement for projects within the Programme to tag their content with “ukoer”. Whether the content is images on Flickr, courseware on institutional webpages or videos on YouTube, it should be tagged as ukoer. The tag also got used for discussions about OER on Twitter and in blog posts. As with many mandated requirements it was not universally or consistently applied, despite our best efforts otherwise.

It soon became clear that it can be hard to distinguish between content that *is* OER and content that is *about* OER, particularly because openly licensed materials designed for other people to reuse in a training/learning context can be both about OER and OER themselves: OER squared! There’s quite a lot of content like that.

Recently members of the oer-discuss jiscmail list have been debating whether we should make continued use of the ukoer tag, and whether we can even control tag use post-funding now that it is out there in the wild.

What does the tag mean? Is it …

Sometimes the tag might be about the contributor, the OER, or about activities such as workshops.

Focusing on its use for the OER content itself: each of the meanings above might suggest different use cases for how people might wish to slice and present content. It’s worth noting that the tag is only one metadata item: each piece of content also has a publish/release date (often relating to when it went live on the platform being used) and an owner/author/contributor (sometimes an institution, a team, an individual, or a combination). Using these variables we can imagine use cases such as:
- see all content tagged ukoer dated 2009-2012
- see all content tagged ukoer
- see all content before 2012 and all content after 2013 (two searches to compare)
and of course, to look at the usage of that content too.
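These use cases amount to simple filters over tag, date and owner fields. A minimal sketch, using hypothetical records rather than real repository data:

```python
from datetime import date

# Hypothetical records (not real Jorum data): each item carries tags,
# a release date and an owner.
items = [
    {"title": "Chemistry OER pack", "tags": ["ukoer", "chemistry"],
     "released": date(2010, 6, 1), "owner": "University A"},
    {"title": "OER workshop slides", "tags": ["ukoer", "workshop"],
     "released": date(2013, 3, 15), "owner": "Jane Smith"},
]

def tagged(records, tag):
    """All records carrying the given tag."""
    return [r for r in records if tag in r["tags"]]

def released_between(records, start, end):
    """All records released within the (inclusive) date range."""
    return [r for r in records if start <= r["released"] <= end]

# "see all content tagged ukoer dated 2009-2012"
programme_era = released_between(tagged(items, "ukoer"),
                                 date(2009, 1, 1), date(2012, 12, 31))

# "two searches to compare": before 2012 vs after 2013
before_2012 = released_between(tagged(items, "ukoer"), date.min, date(2011, 12, 31))
after_2013 = released_between(tagged(items, "ukoer"), date(2013, 1, 1), date.max)
```

The point of the sketch is that each slicing question only works if the tag, date and owner fields are present and used consistently, which is exactly what the mandate could not guarantee.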

If we take the ukoer tag as a single identifier for content released as part of the programme, that might still be messy – “as part of”, “as a result of”, “with an awareness of”. Those latter meanings could continue to be true. Many people might still see benefits of signifying their content is contributing to a UK OER commons. That commons is the real impact of the programme and it would be healthy to see that continue.

However that does make it harder for people to derive clear narratives / patterns from the data in Jorum (or any other aggregation).  As Sarah Currier puts it “it’s harder to disambiguate a large number of resources with the same tag expressing different properties (“funded by UKOER” *and* “produced by member of UK OER community”), than to just have a new tag that expresses the new property”. “It is very bad data management practice to munge together two concepts in one tag. It is very easy to agree a new tag; data from both can be brought together for analysis much more easily than disambiguating data about two things from one tag”.
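Currier’s point can be illustrated with a toy example: if the funding property and the community property each have their own tag, combining them for analysis is trivial, whereas a single overloaded tag cannot be split apart afterwards. (The second tag name and the resource identifiers below are made up.)

```python
# Resources tagged "ukoer" (funded by the programme) vs a separate,
# hypothetical community tag such as "ukoercommunity".
funded = {"res-001", "res-002"}
community = {"res-002", "res-003"}

# With two tags, bringing the data together for analysis is a simple union:
all_oer_related = funded | community

# And each property stays individually answerable:
funded_only = funded - community

# With one overloaded tag we would only ever have the combined set
# {"res-001", "res-002", "res-003"}, with no way to recover which
# property each resource was originally tagged for.
```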

However our decision about whether to encourage continued use of the “ukoer” tag will not just be about best practice. It is about weighing best practice against common practice and the cultural considerations. At the risk of sounding like I’m overcomplicating things: it is a socio-technical issue. There is a balance to be struck between the stated or tacit requirements of funders, the role Jorum plays for the funders, the role of Jorum for contributors, and the effort of people involved with OER. By contributors we mean the people at the deposit/share point within an institution or team, who need messages and requirements kept as simple as possible.

The list members have therefore looked to JISC to say whether and how we will want to draw on these figures as evidence of the impact of the programme. In a sense the measurement of the programme’s impact is inherently fuzzy, and that causes complexities for service providers like Jorum, who are rightly trying to anticipate future use cases.

We are lucky to have experts in this field, including two members of the Jorum team who wrote about the challenges of metadata in learning object repositories, and members of JISC Cetis who are writing about resource description in OER. I have gathered their input into this post so that we can start articulating the issues here. It is through this exchange that we can make the right decisions for JISC, the HEA and the wider community.

The point I make here is that we have before us a classic problem space. It illustrates that metadata decisions are about current and future use, that they are about balancing the needs of contributors and users, and that these things require discussion and the unpacking of assumptions. There are solutions out there, involving the sources, the aggregations … but it depends on what we want.

What’s the answer? Should we continue using ukoer as a community tag for a fuzzy concept or try to restrict use to a controlled tag for a funding stream? If we chose the latter (for any reason) could it actually be controlled in that way?

We would be interested to know what people think. The oer-discuss list leans towards the former but there can be many other perspectives and those of you who have been at the sharp end of evidencing impact may have some valuable war stories to share.

 

Amber Thomas

Post written with input from Sarah Currier, David Kernohan, Martin Hawksey, Lorna Campbell, Jackie Carter.

Experimenting with the Learning Registry

This post is my reflections on the emerging conclusions from the JLeRN Experiment.

Applying a new approach to an old problem

Followers of technology trends will have noticed that the big themes of recent years include cloud storage, big data, analytics and activity streams, and social media. Technologists supporting education and research have been using these approaches in a range of ways, finding where they can help solve critical problems and meet unmet needs. Many of these explorations are investigative: they are about getting a grasp of how the technologies work, what the data looks like, where there are organisational or ethical issues that need to be addressed, and what the skills are that we need to develop in order to fully exploit these emerging opportunities.

The Learning Registry has been described by Dan Rehak as “Social Networking for Metadata” (about learning resources). Imagine pushing into the cloud RSS feeds of all the URLs of learning resources you can imagine: from museums, from educational content providers, from libraries. This is about web-scale big data. Imagine that cloud also pulling in data about where those URLs have been shared, on Facebook, Twitter, blogs, mailing lists. If you’ve tried out services like topsy.com or bit.ly analytics you’ll know that finding out information about URL shares is possible and potentially interesting. Now imagine being able to interrogate that data, to see meta trends, to provide a widget next to your content item that pulls down the conversation being had around it. That is the vision of the Learning Registry. Anyone who has been involved with sharing learning materials will recognise the scenario on the left below.
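The registry is driven by a simple HTTP/JSON API: resource descriptions are wrapped in JSON “envelopes” and POSTed to a node. The sketch below is a simplified illustration only; the node URL is hypothetical and the field set is trimmed, so check the current Learning Registry specification before relying on these names.

```python
import json

# A simplified resource "envelope" (field names loosely based on the
# Learning Registry spec; treat them as illustrative, not authoritative).
envelope = {
    "doc_type": "resource_data",
    "resource_locator": "http://example.org/oer/chemistry-pack",  # the resource URL
    "keys": ["ukoer", "chemistry"],                               # free-form tags
    "payload_placement": "inline",
    "resource_data": "<any metadata or paradata describing the resource>",
}

# Nodes accept a batch of envelopes in one request body.
body = json.dumps({"documents": [envelope]})

# Publishing would then be an HTTP POST to a node, e.g. (hypothetical URL):
#   import urllib.request
#   req = urllib.request.Request("http://lr-node.example.org/publish",
#                                body.encode("utf-8"),
#                                {"Content-Type": "application/json"})
#   urllib.request.urlopen(req)
```

Because the envelope carries the resource’s URL plus whatever description is to hand, anyone can feed the network without first agreeing a shared schema; that design choice resurfaces as a problem later in this post.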

jokey sketch of the use case for the Learning Registry

 Learning Registry Use Case, Amber Thomas, JISC 2012, CC BY

The Learning Registry is about applying the technologies described above to the problem on the left, by making it possible to mine the network for useful context to guide users.

The Experiment

To explore this potential, the JISC/HEA OER Programme has been funding an experiment to run a Learning Registry “node” in the UK. The growth of openly licensed content and the political momentum to encourage the use of that content has been a spur to funding this experiment though it should be noted that the Learning Registry is not designed purely for open content.

See this useful overview for more detail of the project. It has been running on an open innovation model, sharing progress openly, and working with interested people. Headed up by the excellent Sarah Currier, with input from Lorna Campbell and Phil Barker from JISC CETIS, in my view it has been a very effective experiment.

Towards the end of the work, on 22nd October 2012, Mimas hosted an expert meeting of those people that have been working with the Learning Registry, services and projects, contributors and consumers, developers and decision makers. It was a very rich meeting, with participants exchanging details of the way they have used these approaches, and deep discussions on what we have found.

What follows is my analysis of some of the key issues we have uncovered in this experiment.

Networks and Nodes

The structure of the LR is a fairly flat hierarchy: it can expand infinitely to accommodate new nodes, and nodes can cluster. See the overview for a useful diagram.

What this structure means is that it can grow easily, and that it does not require a governance model with large overheads. The rules are the rules of the network rather than of a gate-keeping organisation. This is an attractive model where it is not clear where the business case lies.

One of the ways of running a node is to use an Amazon Web Services instance. That seems a nice, pure way of running a distributed network; however, university procurement frameworks have yet to adjust to the pricing mechanisms of the cloud. Perhaps in that respect we’re not ready to exploit cloud-based network services quite yet.

However, more generally, I think what we are seeing is a growth in the profile of services that are brokers and aggregators of web content. Not the hosts, or the presentation layers, but services in between, sometimes invisible. JISC has been supporting the development of these sorts of “middleware” or “machine services” from the early days: the terminology changes but the concept is not new to JISC. What does seem to be developing (and this is my perception) is an appetite for these intermediary services, and the skills to integrate them. Perhaps there is a readiness for a Learning Registry-ish service now.

Another key architectural characteristic is a reliance on APIs. This enables developers to create services to meet particular needs. Rather than a centralised model that collects feature requests from users, it allows a layer of skilled developers to create services around the APIs. The APIs have to be powerful to enable this though, so getting that first layer of rich API functionality working is key. To that extent the central team has to be fast and responsive to keep up momentum.

However the extent to which the LR is actually a network so far is unclear. There are a handful of nodes, but not to the extent that we can be sure we are seeing any network effects. The lack of growth of nodes may be because the barrier to setting up a node is perceived to be high. It may be too early to tell. But for the purposes of the JLeRN experiment, my conclusion is that we have not seen the network effects that the LR promises.

 

Pushing the hardest problems out of sight?

It’s easy to fall into a trap of hoping that one technical system will meet everybody’s needs. The Learning Registry might not be THE answer, but there is something of value in the way that it provides some infrastructure to manage a very complex distributed problem.

However the question raised at the workshop by Sarah Currier in her introduction and again by David Kay in his closing reflections is: does it push some of the challenges out of scope, for someone else to solve? The challenges in question include:

To take one problem area: resource description for learning materials. It is very hard to agree on any mandatory metadata beyond Dublin Core. This is partly because of the diversity of resource types and formats: a learning material can be anything, from a photo to a whole website. Within resource types it is possible to have a deeper vocabulary, for example for content packaged resources that may have a nominal “time” or “level” attached. Likewise, different disciplinary areas not only have specialist vocabularies but also use content in different ways. It is technically possible to set useful mandatory metadata BUT in practice it is rarely complied with. When we are talking about a diversity of content providers with different motivations, the carrots and sticks are pretty complicated. So users want rich resource description metadata, to aid search and selection, but that is rarely supplied.
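To see why agreement stalls at Dublin Core, compare a minimal DC description, which fits anything from a photo to a whole website, with the richer fields users actually want. A sketch, in which the extended field names are invented for illustration:

```python
# A minimal Dublin Core style description: valid for almost any resource,
# which is exactly why it is easy to mandate and weak for search.
dc_record = {
    "dc:title": "Introductory chemistry materials",
    "dc:creator": "University A",
    "dc:date": "2011-05-20",
    "dc:rights": "http://creativecommons.org/licenses/by/3.0/",
}

# The richer fields users want for search and selection are type- and
# discipline-specific (these field names are hypothetical):
extended = dict(dc_record)
extended.update({
    "x:level": "undergraduate",   # only meaningful for some resource types
    "x:nominalHours": 10,         # only meaningful for packaged courseware
})
```

The minimal record is a subset of every extended one, but no single extended vocabulary fits all resource types, which is why mandating the extensions fails in practice.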

The Learning Registry solution is to be agnostic about metadata: it just sucks it all into the big data cloud. It does not mandate particular fields. What it does is offer developers a huge dataset to prod, to model, to shape, and to pull out in whichever way the users want it. Developers can do anything they want with the data AS LONG AS THE DATA EXISTS. If there is not enough data to play with, or not enough consistency between resources, then it is hard to create meaningful services over the data.
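The consequence of this metadata agnosticism is that any service built over the aggregation is limited by what the data actually contains. A toy illustration, with hypothetical harvested records:

```python
# Hypothetical harvested records: the registry accepts whatever arrives,
# so fields are present or absent unpredictably.
harvested = [
    {"url": "http://example.org/a", "title": "Algebra notes", "subject": "maths"},
    {"url": "http://example.org/b", "description": "A lecture photo"},
    {"url": "http://example.org/c"},
]

def coverage(records, field):
    """Fraction of records that carry a given field at all."""
    return sum(field in r for r in records) / len(records)

# A developer planning a subject-based browse interface hits the data
# problem immediately: only a third of these records could appear in it.
subject_coverage = coverage(harvested, "subject")
```

Measuring field coverage like this is a plausible first step for any developer deciding which services the aggregated data can actually support.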

I said above that the Learning Registry “provides some infrastructure to manage a very complex distributed problem”. But on reflection does it manage that complexity? Or does it just make it manageable by pushing it out of scope? And if it doesn’t enable developers to build useful services for educators, is it successful?

Final Thoughts

These are a selection of the issues that the experiment is surfacing. There are certainly plenty of question marks about the effectiveness of this sort of approach. But I still feel sure that there are aspects of these technologies that we should be applying to meeting our needs in education and research. Certainly, this experiment has overlapped with work in JISC’s Activity Data programme, in our analytics work and in the area of cloud solutions. There is something interesting happening here, some glimpses of more elegant ways of sharing content, maybe even a step change.

 

 

Draft Book on OER Technologies: Into The Wild

Opportunity to preview and contribute to a book about the technical themes emerging from three years of the UK OER Programme

Extract from Lorna Campbell’s blogpost:

The OER technology directions book that Amber, Phil, Martin and I drafted during a book sprint at the end of August is now almost complete. We even have a title!

Technology for Open Educational Resources – Into The Wild. Reflections on three years of the UK OER Programmes

We’ve spent the last few weeks polishing, editing and amending the text, and we would now like to invite colleagues who have an interest in technology and digital infrastructure for open educational resources to review and comment on the open draft.

We’re looking for short commentaries and feedback, either on the whole book, or on individual chapters. These commentaries will form the final chapter of the book. We want to know what rings true and what doesn’t. Have we missed any important technical directions that you think should be included? What do you think the future technical directions are for OER?

Note that the focus of this book is as much on real world current practice as on recommended or best practice. This book is not intended as a beginner’s guide or a technical manual; instead it is a synthesis of the key technical issues arising from three years of the UK OER Programmes. It is intended for people working with technology to support the creation, management, dissemination and tracking of open educational resources, and particularly those who design digital infrastructure and services at institutional and national level.

The chapters cover:

  • Defining OER
  • Resource management
  • Resource description
  • Licensing and attribution
  • SEO and discoverability
  • Tracking OERs
  • Paradata
  • Accessibility by Terry McAndrew, TechDis

UK OER projects from all phases of the Programme are encouraged to comment, and we would particularly welcome feedback from colleagues grounded in experience of designing and running OER services.

For more details, links and guidance on how to contribute, please see Lorna’s blogpost.

We wrote most of this as a booksprint: a writing retreat using collaborative authoring software. As well as continuing to write the book with the help of the community, we also have the fun of choosing cover designs and formats and print-on-demand options … it’s quite a learning curve and very rewarding. So now you all know what my family and friends will be getting for xmas ;-)

Amber Thomas, JISC.

Licensing Data as Open Data

One of the findings that has emerged clearly from the UK OER Programme and from the UK Discovery work is that for a healthy content ecosystem, information about the content needs to be available to many different systems, services and users. Appropriately licensing the metadata and feeds is crucial to downstream discovery and use.

The OER IPR Support Project have developed this fabulous animation to introduce the importance of open data licensing in an engaging way.

It was developed out of the UK OER Programme but informed by the work of several other areas including UK Discovery, Managing Research Data, the Strategic Content Alliance, and sharing XCRI course feeds. With thanks to the many people who helped in the storyboarding, scripting and feedback: particularly Phil Barker, Tony Hirst and Martin Hawksey.

You may remember the same OER IPR team produced the Turning a Resource into an Open Educational Resource (1,700+ hits and counting). The team is Web2Rights (Naomi Korn, Alex Dawson), JISC Legal (Jason Miles-Campbell) and the animator is Luke McGowan. The whole animation is (c) HEFCE on behalf of JISC, and Creative Commons Attribution Share Alike 3.0.

We hope it will have wide usefulness and we very much welcome feedback.

Amber Thomas, JISC

When ideals meet reality

At ALT-C in early September I ran a session with David Kernohan on Openness: learning from our history. The theme of the conference was “a confrontation with reality”, so it seemed fitting to explore the trajectories taken by various forms of openness. What follows is just a short thought piece that I contributed to the session about some of the patterns I have observed over the past decade or so.

Altc openness idealsmeetreality from JISC


 

Curves and Cycles

The first thing to say is that we are all different in our encounters with new approaches, whether they are new technologies like badges or new delivery models like MOOCs. We are each on our own learning curves and change curves, and we meet new ideas and solutions at different points in the hype cycle. That is a lot of variation. So when we meet new ideas, we can respond very differently. My first message is that every response is a real response.

Polarisation

It’s too easy to characterise people as pro- or anti-something. It’s too easy to present things as a debate for or against. But polarisation often masks the real questions, because we don’t hear them properly.

“The use of technology seems to divide people into strong pro- and anti-camps or perhaps utopian and dystopian perspectives” Martin Weller, The Digital Scholar [1]

Dialectics of Open and Free

There is usually a dialectic around open and free: free as in freedom, free as in beer [2].  “Open as in door or open as in heart”: Some courses are open as in door. You can walk in, you can listen for free. Others are open as in heart. You become part of a community, you are accepted and nurtured [3]. I always add: open as in markets?

Branching

To follow on from free as in beer: a great example of branching is the trajectory of the open source movement. There were big debates over “gratis vs libre”, and that gave birth to the umbrella term FLOSS – Free/Libre and Open Source Software. By enabling the practices of open source to branch off, by allowing the community to branch off, we saw a diffusion of innovation: towards profit-making business models in some areas, free culture models in others. It’s interesting how GitHub supports the whole spectrum.

This has also been the approach of the UK OER Programme. We have been quite pluralistic about OER, to let people find their own ways. We have certainly had tensions between the marketing/recruitment aspect and the open practice perspective. What’s important to note is that often it’s not just one model that comes to dominate.

Tipping points into the mainstream

We don’t always understand what brings about mainstreaming. We very rarely control it.

Consider a story from open standards: the rise and fall of RSS aggregation. Was it Netvibes or Pageflakes that made the difference? Or Google Reader? At what point did Twitter and Facebook start to dominate the aggregation game and overtake RSS? The OER programme gave each project the freedom to choose their platform. They didn’t choose a standard, they chose a platform. It’s often when open standards are baked into platforms that we see take-up without conscious decision making.

I’m not sure we always notice: sometimes when mainstreaming happens we don’t recognise it. When did e-learning become part of the fabric of education?

Pace

Finally, change can take a lot longer than we hope. The 10 years since the Budapest Open Access Initiative [4] can feel like geological time. And yet the OA movement has achieved so much. Perhaps we need a time-lapse photography approach to recognising the impact of changes we started back then. So many more people understand OA now. So many more people care.

Change takes longer than you think

Key Messages

We are all unique in our encounters with new things. Polarisation often masks the real questions. There is often a dialectic around open and free. Often it’s not just one model that comes to dominate. Sometimes when mainstreaming happens we don’t recognise it. Change can take a lot longer than we hope.

 

 

References:

[1] http://www.bloomsburyacademic.com/view/DigitalScholar_9781849666275/chapter-ba-9781849666275-chapter-013.xml

[2] http://www.wired.com/wired/archive/14.09/posts.html?pg=6

[3] http://followersoftheapocalyp.se/open-as-in-door-or-open-as-in-heart-mooc/

[4] http://en.wikipedia.org/wiki/Budapest_Open_Access_Initiative

EduWiki Conference Reflections

Last week I attended a very engaging conference organised by Wikimedia, focused on the uses of Wikipedia in education: EduWiki.

I was on a panel discussing openness in HE and I also gave a keynote on 21st Century Scholarship and the role of Wikipedia. I’d produced a visual (infographic/poster/pretty picture) which I blogged and have included below. The slides of my talk are online. A video of the talk is also available.

It was interesting to bring together Wikipedians with educators and I learnt a great deal about how Wikipedia is being used in HE settings, and what the active debates are in this area. Some of my take-home points:

Overall, a great conference; congratulations to Martin Poulter and Daria Cybulska for organising it. I’m pleased to say that I have already arranged for me and my colleagues to get trained up in Wikipedia, and I already have my eye on some pages I want to contribute to.

As a parting shot, here’s the pretty picture I made (deftly dodges the infographic pedants who argue this isn’t one). (Update: Brian Kelly unpacks that issue in his post on posters and infographics !)

Visual from JISC Wikipedia talk; view in full at easel.ly

Digital infrastructure for learning materials: update July 2012

This is a summary of notable developments in work on technology issues around learning materials, mostly by JISC. It’s aimed at the technical and semi-technical, and comments/additions are very welcome.

Update

Back in late May we ran our first Dev8ED. It was a great event, with developers supporting course data, curriculum design and delivery, distributed VLE and OER programmes coming together for two days of technical work and training.

There is a buzz of technical activity at Jorum. They are trialling the beta of their open usage statistics dashboard. To see the many reasons why open usage data is a good idea, see this post by Nick Sheppard. He and Brian Kelly (an advocate of open usage data) are both on the Jorum Steering Group, and we’ve long been aware of the importance to users and contributors of being able to see how Jorum is being used. The Jorum team are also upgrading to DSpace 1.8, which will bring a raft of improvements including some clever search/browse interfaces. It’s all part of a re-engineering process to make Jorum work better for institutions. UPDATE: Read more from Jorum.

Talking of usage, the JISC Learning Registry Node Experiment, JLeRN, is exploring how this emerging global architecture can work for the UK. It’s all about surfacing the “context with the content”, as Suzanne Hardy describes it. Think big data approaches for learning resources. So far we have learnt that the local “nodes” are fairly easy to set up and feed with data. The challenge is making use of it in this incubation stage: building tools and interfaces over variable data sets. This is exactly what we intended to explore, so the team is now working closely with some handpicked projects to work our way through the challenge. Some early work by the SPAWS project is making good progress, and it will be great to see the learning start to emerge from these pilots.

If you like the notion of “paradata” (social and contextual data derived about content and its use), then you’ll see how it fits well with the idea of Learning Analytics. JISC Cetis and others have been examining emerging practices and the issues around this concept. Paradata seems to be a bit of a bridge between web content analytics and activity data, so those of us working with digital resources for teaching and learning would do well to catch up on these concepts.
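Paradata is usually expressed as actor/verb/object statements about a resource. A rough sketch of the shape (field names are illustrative, not the exact Learning Registry paradata schema):

```python
# An illustrative paradata statement: who did what, to which resource, when.
statement = {
    "actor": {"objectType": "teacher"},                      # who
    "verb": {"action": "shared", "date": "2012-06-01"},      # did what, when
    "object": "http://example.org/oer/chemistry-pack",       # to which resource
}

# Aggregated statements like this sit between content analytics
# (page views) and activity data (what people actually did).
action = statement["verb"]["action"]
```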

My Open Educational Resources Rapid Innovation (OERRI) projects are past the halfway mark now. They are all designed to enhance the digital infrastructure for open content in education. There are 15 projects, each with a grant of £25k or less, running for 4-6 months. Summaries follow (in my words, with some links to nutshell descriptions).

An honorable mention too for PublishOER, an OER Themes project rather than OERRI. It is working with JISC Collections and Publishers, and includes development of improved technical support for permission seeking and licensing requests.

Meanwhile, colleagues have been busy with the WW1 Discovery projects.
Sarah Fahmy worked with the British Library on a WW1 editathon, nicely summed up in a quick video, and there was an interesting tweet experiment from the WW1 Arras project too. On the more technical side, Andy McGregor updated me that King’s College did some research into what researchers want out of an online WW1 research collection and which valuable collections could be aggregated into such a collection. Building on this research, Mimas will develop an exemplar research aggregation of WW1 content. The King’s research discovered that not many of the most valuable collections have working APIs. Mimas will therefore build APIs for a number of the collections identified, then build a service that aggregates these collections and enables people to build services that allow researchers to work with the aggregated content. The project is expected to deliver in November 2012.

July saw the Open Repositories conference in Edinburgh, and there was a wealth of useful discussion and outputs as always. For those of us working with learning materials I’d particularly recommend the reflective posts by Kathi Fletcher and Nick Sheppard. I’m keeping a close eye on developments around the digital infrastructure for open access to research. It’s interesting to see an increase in discussion about formats and licensing. Perhaps, as Laura Czerniewicz, Martin Weller and others have been saying, the next step forward is a more holistic view of building services to support open scholarship that incorporate teaching as well as research. My colleague Balviar Notay is leading JISC’s work on repositories and curation shared infrastructure, and we are very mindful of the delicate balance between building for research use cases and risking scope creep by trying to be inclusive of other uses.

Friday 27th July saw 20 experts gathered together for an online meeting on schema.org and the Learning Resource Metadata Initiative (LRMI) that Phil Barker is engaged with. The discussion was very rich, and my take-home message was that over the years JISC and its services (especially CETIS, UKOLN and OSS Watch) have developed real expertise in standards development and adoption. Working with innovators and early adopters, we have gained a deep understanding of how technology develops and of the relationship between aspiration and implementation. UPDATE: Read a brief summary of the webinar.

On the horizon …

The OER IPR Team are currently producing a follow-up animation to Turning a Resource into an OER and this one will be about open licensing for your data. More about that when it is released!

Inspired by the booksprint session at Dev8ED, in August I will be working with Lorna Campbell, Phil Barker and Martin Hawksey on our ebook. Our working title is “Small Pieces Loosely Joined: technology stories from three years of the JISC/HEA OER Programme”. Or something like that. It will be our way of drawing out the lessons for future development in this area. We like a challenge.

The theme of ecosystems will inform my contribution to the UK EduWiki conference in early September. I’ll be on a panel about openness in HE, and I hope to reflect on the many varied ways that educators and learners can help develop the ecosystem around Wikipedia.

Then I’ll be off to ALT-C, where I’m running two sessions. One, with Paul Walk, is on the role of the strategic developer in HE. I work with so many talented technologists who add great value to their institutions, and this session is a chance to explore the benefits of investing in in-house development expertise. I’d love to hear from anyone in that sort of role who has been doing CMALT (please get in touch!). The other session, with David Kernohan, is about the trajectory of open education, including some of the key concepts around badges, MOOCs and other hot topics. I’m particularly interested in how we can learn lessons from the routes taken by the open source and open data movements. I suspect ALT-C will be noisy with talk of MOOCs! From a technical perspective, JISC Cetis is keeping an eye on what platforms and tools people are using to deliver online courses. UPDATE: Read about what technologies people are using for some MOOCs!

Finally, I’m pleased to announce that I will be working with JISC Collections, JISC Digital Media and other experts led by Ken Chad, on guidance on the Challenge of eBooks. It will be the next step on from the forthcoming JISC Observatory report on ebooks: it will look at the creation, curation and consumption of digital books of all types, and the opportunities for institutions to respond coherently to the challenge. Watch this space for news of both!

With so much going on, I’m sure there are things I’ve missed in this update – do get in touch!

 

Amber Thomas

July 2012

 

The JISC ebooks universe

My colleague Ben Showers has recently been looking across the work taking place around digital books in all their forms: open textbooks, digital monographs, epub, web-based books. For educational institutions the need to keep up with the content needs of learners and researchers is paramount, but so much is happening at the moment, with hardware, content formats, the emergence of new authoring tools and rising user expectations. So where do you start?

We have pulled together some key information for decision-makers, with a distinct JISC flavour. Particular thanks to Caren Milloy from JISC Collections and Zak Mensah from JISC Digital Media for their help.

 

Legal (Licensing, IPR, DRM)

Business Models

 

Technology and Standards

 

User Behaviour/Requirements

What have we missed?

Please let us know what resources you find most useful, from JISC and elsewhere, in meeting the challenge of ebooks in your institution.

 

Ben Showers and Amber Thomas, JISC Digital Infrastructure Team

May 2012

(last updated 28th May 2012)

Developing our Creative Commons

Last week I had the great pleasure of meeting with Cathy Casserly (Chief Executive Officer) and Diane Cabell (Counsel and Corporate Secretary) of Creative Commons. Over a couple of days I had many conversations about open licensing, open education and the routes ahead.

I was a panel member for a CC Salon on OER Policies for Promotion. The panel was chaired by Joscelyn Upendran of CC UK, and comprised Cathy Casserly (CC), Patrick McAndrew (OU), Victor Henning (Mendeley) and myself.

ccSalon London Panel: Victor Henning, Amber Thomas, Cathy Casserly, Patrick McAndrew, Joscelyn Upendran (photo by David Percy)

To prepare, I had mapped out some of my thoughts on how to encourage open content approaches in education, and some ways that we could be thrown off track.

Preview below. View on Prezi.

[Screengrab of the Prezi]

We talked about what funders and institutions can do to encourage open educational practices. As is often the case, discussion of open access research publishing and open educational resources blended together.

Some key points percolating from my discussions last week:

These thoughts, and more, will be framing my contribution to the Creative Commons consultation on v4 of the licences over the next month or so.

“Creative Commons staff, board and community have to date identified several goals for the next version of its core license suite tied to achieving CC’s goal and mission. These include:

Internationalization – further adapt the core suite of international licenses to operate globally, ensuring they are robust, enforceable and easily adopted worldwide;

Interoperability – maximize interoperability between CC licenses and other licenses to reduce friction within the commons, promote standards and stem license proliferation;

Long-lasting — anticipate new and changing adoption opportunities and legal challenges, allowing the new suite of licenses to endure for the foreseeable future;

Data/PSI/Science/Education — recognize and address impediments to adoption of CC by governments as well as other important, publicly-minded institutions in these and other critical arenas; and

Supporting Existing Adoption Models and Frameworks – remain mindful of and accommodate the needs of our existing community of adopters leveraging pre-4.0 licenses, including governments but also other important constituencies.”

Creative Commons has asked me to promote this consultation to you. They would love to hear from you, as providers, users and facilitators of openly licensed content.
