At ALT-C in early September I ran a session with David Kernohan on Openness: learning from our history. The theme of the conference was “a confrontation with reality”, so it seemed fitting to explore the trajectories taken by various forms of openness. What follows is a short thought piece that I contributed to the session about some of the patterns I have observed over the past decade or so.
Curves and Cycles
The first thing to say is that we are all different in our encounters with new approaches, whether they are new technologies like badges or new delivery models like MOOCs. We are each on our own learning curves and change curves, and we meet new ideas and solutions at different points in the hype cycle. That is a lot of variation. So when we meet new ideas, we can respond very differently. My first message is that every response is a real response.
It’s too easy to characterise people as pro- or anti- something. It’s too easy to present things as a debate for or against. But polarisation often masks the real questions, because we don’t hear them properly.
“The use of technology seems to divide people into strong pro- and anti-camps or perhaps utopian and dystopian perspectives.” (Martin Weller, The Digital Scholar)
Dialectics of Open and Free
There is usually a dialectic around open and free: free as in freedom, free as in beer. “Open as in door or open as in heart”: some courses are open as in door. You can walk in, you can listen for free. Others are open as in heart. You become part of a community, you are accepted and nurtured. I always add: open as in markets?
To follow on from free as in beer, a great example of branching is the trajectory of the open source movement. There were big debates over “gratis vs libre”, and that gave birth to the umbrella term FLOSS (Free and Libre Open Source Software). By enabling the practices of open source to branch off, by allowing the community to branch off, we saw a diffusion of innovation: towards profit-making business models in some areas, free culture models in others. It’s interesting how GitHub supports the whole spectrum.
This has also been the approach of the UK OER Programme. We have been quite pluralistic about OER, to let people find their own ways. We have certainly had tensions between the marketing/recruitment aspect and the open practice perspective. What’s important to note is that often it’s not just one model that comes to dominate.
Tipping points into the mainstream
We don’t always understand what brings about mainstreaming. We very rarely control it.
Consider a story from open standards: the rise and fall of RSS aggregation. Was it Netvibes or Pageflakes that made the difference? Or Google Reader? At what point did Twitter and Facebook start to dominate the aggregation game and overtake RSS? The OER programme gave each project the freedom to choose their platform. They didn’t choose a standard, they chose a platform. It’s often when open standards are baked in to platforms that we see take-up without conscious decision-making.
I’m not sure we always notice: sometimes when mainstreaming happens we don’t recognise it. When did e-learning become part of the fabric of education?
Finally, change can take a lot longer than we hope. The 10 years since the Budapest Open Access Initiative can feel like geological time. And yet the OA movement has achieved so much. Perhaps we need a time-lapse photography approach to recognising the impact of changes we started back then. So many more people understand OA now. So many more people care.
Change takes longer than you think
We are all unique in our encounters with new things. Polarisation often masks the real questions. There is often a dialectic around open and free. Often it’s not just one model that comes to dominate. Sometimes when mainstreaming happens we don’t recognise it. Change can take a lot longer than we hope.
Here at JISC we think a lot about openness: what it means, how to support it, where it takes us.
This is my contribution to that thinking. It is very much my individual views, but informed by the work we do at JISC, and by the Open Knowledge Foundation, amongst others.
My open narrative
Open makes things visible.
The everyday sense of “open” is open rather than closed – letting people see what is there, what is happening.
The web enables you to:
- do some of your processes/practices online, visible to others
- share some of your products/outputs online, visible to others
Open makes access easy.
This is where open-as-in-open-access comes in: open without needing to log in, and open without payment.
Open is social.
The “many eyes” principle of sharing open data and the open innovation model encourage others not only to view but to comment, to feed back, to engage. This speeds up the process in hand and improves the quality of the resulting work.
Open makes things usable by others.
Open standards exist to encourage as many developers as possible to adopt them.
This is where open licensing comes in: granting others explicit and generous permissions to use your content.
Open can be a way of working.
Working openly and releasing outputs openly can make a person feel differently about what they do. Researchers might call this collection of activities open scholarship, technologists might call their activities open development, and project teams might call it open innovation. Each of these types of open practice has elements in common and elements specific to the sorts of activities the practice involves.
Open is not exclusive
Open source can mean both the open development process and the open source software. They are not always found together: open development processes can produce non-open software, and closed development processes can produce open source software.
Opens are mutually beneficial
There is a virtuous cycle when open processes and open products combine. In open scholarship, which means both creating and using open content and working in open ways, the content feeds the practice and the practice feeds the content.
I’m watching the Openness in Education course with interest and I expect this whole meta open concept to deepen in 2012.
A Diagram of Opens
It’s important to note that this is an abstracted diagram: in my view, open is not a replacement for the way things currently work. There is never going to be a total transformation to open. The reality is a mixed economy. Business models matter. Practice models matter.
Open can be good for business and open can be good for practice, but it exists in a bigger ecosystem of technologies and behaviours. Good is not enough; it needs to be useful. That’s what JISC and other advocates of openness are working hard to surface.
Ultimately I think open is good because it is a good way of working.
My Story of O(pen) by Amber Thomas is licensed under a Creative Commons Attribution 3.0 Unported License.
Permissions beyond the scope of this license may be available at http://www.jisc.ac.uk/contactus
I am just back from the Berlin 9 conference. The “Berlin” series of conferences is named after the Berlin Declaration on Open Access, and this was the first time the annual conference had been held in North America. It is very hard to summarise my reactions to the conference: there were so many stories showing how opening up scholarship can lead to real benefits, in health, development, innovation and our quality of life. For example, Cyril Muller from the World Bank described how that organisation has adopted an open approach to the work it funds, and to its own operations, and is encouraging the governments with whom it works to do the same. Laura Czerniewicz from the University of Cape Town showed how open educational resources, configured for SIM-enabled mobile devices, can make a real difference to some quite seriously disadvantaged students. And Elliot Maxwell highlighted some wonderfully elegant research studies, showing clearly how, when scientific findings and resources are made open, it leads to a greater diversity, quality and application of knowledge. Of course, there are implications. Michael Crow of Arizona State University argued that all this requires us to re-think the university as a social technology, and Philip Bourne highlighted some of the challenges we have in moving to a research practice that is native to the digital environment, genuinely reproducible, and that rewards researchers who move in that direction. The overwhelming impression, though, was of a scholarly community now adopting more open approaches, and beginning to see tangible benefits from that. Berlin 10 will be on the African continent for the first time. I hope it will bring new voices to be heard in this community.
At ALT-C I participated in several discussions around open content, and then this week we ran the closing programme meeting for Phase Two of the HEA/JISC OER programme. I feel that a new perspective on academic-created open content is emerging.
I think it’s sometimes useful to think in terms of foreground and background: most of the elements are there and have been there all along, but some take centre stage. It’s a question of weight and attention given to the different activities, a question of where the discussions happen.
| 2009 Foreground | 2011 Foreground |
| --- | --- |
| Focus on provision | Focus on use |
| Focus on educator as provider and user | Focus on non-educators as users |
| Open courseware, the course online | Rich media, beyond text |
| Embedding in courses or free searching online | Tutor or peer or social recommendation |
| CC BY NC ND for editing by educators | CC BY for remixing by anyone |
| Focus on licensing is key | Focus on licensing might be distracting |
| Institutional workflows | Institutional support for open practice |
| Storage and presentation services | Brokerage services |
Just to stress, all of these views are evident in part in 2009, 2010 and 2011, and even before that. My interest is just in what is shifting in the foreground. This is just my first take and I hope it will stimulate discussion. It would be really interesting for others to write their own lists.
I think that enough of the focus has shifted that the space has changed. I christen this The OER Turn. The OER Turn is characterised by the working through of the following topics:
- visible and invisible use
- the importance of licensing
- the role of the university
Visible and Invisible Use
The Value of Reuse Report by Marion Manton and Dave White of Oxford, produced for the OER programme, uses an iceberg analogy of visible and invisible use. It helpfully guided our discussions at the end-of-phase-two programme meeting. Before this, David Wiley’s “toothbrush” post was the example cited for this conundrum. The fact that it was so divisive amongst OER people shows that it hit a nerve (excuse the pun). I think the iceberg diagram draws on this and illustrates the problem of visibility.
The general consensus amongst people involved in UK OER Programme is that reuse of web-based resources does happen, all the time, but it is usually private, mostly invisible to the providers and often not strictly legal. So there is above waterline use and below the waterline use.
The visible use of open content tends to be institutional-level use, which is itself more visible because it is openly shared again: it is relicensed out again. Where institutions want to reuse content in aggregate open resources, it may influence the types of content they reuse, and the way in which they do it.
- Visible use has characteristics that may not be shared by invisible use: we should not extrapolate too far from the visible uses to the invisible uses
- Institutional content might make more use of resources that are clearly pedagogically described and fit with their structured course provision. So that might be the most visible reuse we see. But it might not be the main use case for open content.
On reflection, perhaps the top of the iceberg fits most closely with the 2009 foregrounded conception of OER, while the majority of the iceberg is what is being given more attention in the 2011 conception.
The importance of licensing?
Naomi Korn’s video introduction to licensing and IPR issues illustrates the concept of rights in > rights out. The more rights you want to grant to users, the more you have to restrict yourself to content for which you have obtained broad rights. As the risk management calculator illustrates, if you use content licensed as CC BY NC ND, you cannot licence it out as CC BY NC, because that would drop the ND clause. And CC BY SA can only be licensed out as CC BY SA, so it cannot be remixed with anything other than CC BY or CC BY SA content. Share-Alike is quite restrictive. This is counter-intuitive but true. Play with the calculator and find out for yourself.
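The rights in > rights out principle can be sketched as a small piece of code. This is purely an illustrative model: the licence set and the rules below are my simplified assumptions for the sake of the example, not a full statement of Creative Commons terms.

```python
# Illustrative model of "rights in > rights out". The licences and rules
# here are simplified assumptions, not a complete account of CC terms.

LICENCES = {
    "CC BY": set(),
    "CC BY SA": {"SA"},
    "CC BY NC": {"NC"},
    "CC BY NC SA": {"NC", "SA"},
    "CC BY ND": {"ND"},
    "CC BY NC ND": {"NC", "ND"},
}

def can_licence_out(rights_in, rights_out):
    """Can content obtained under rights_in be released under rights_out?"""
    restrictions_in = LICENCES[rights_in]
    restrictions_out = LICENCES[rights_out]
    if "ND" in restrictions_in:
        # No Derivatives: only verbatim redistribution under the same licence.
        return rights_out == rights_in
    if "SA" in restrictions_in:
        # Share-Alike pins the outgoing licence to the incoming one.
        return rights_out == rights_in
    # Otherwise every incoming restriction must be kept on the way out.
    return restrictions_in <= restrictions_out

print(can_licence_out("CC BY NC ND", "CC BY NC"))  # False: the ND clause is lost
print(can_licence_out("CC BY SA", "CC BY NC SA"))  # False: Share-Alike pins the licence
print(can_licence_out("CC BY", "CC BY SA"))        # True: CC BY carries no restrictions
```

The asymmetry the calculator demonstrates falls out of the rules: the fewer restrictions on the content coming in, the more licensing choices you have going out.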
It may be only when reuse is more visible, such as formal institutional adoption of third party resources (above the waterline), that the risk of reusing un-cleared content is high enough to make the open licence a key aspect of OER. Institutions as providers of content may wish to choose different licences from those that institutions as users of content want. They may want to publish content as (c) all rights reserved. If it is a choice between that or nothing, what should they choose? Note that they could publish something as (c) all rights reserved unless otherwise stated, with clearly marked CC elements. Polarising into a simple open or not open isn’t helpful. As Naomi Korn put it, “what is the opposite of open”? Paid-for? Authenticated? Not editable? Open content needs to be viewed as part of the wider ecosystem of content. What it “affords” will be specific to the use case. Interestingly, this is a good parallel with accessibility: “accessible to whom, to do what?”
Reflecting with Naomi Korn we shifted our recommendation from “whichever CC licence suits” (1) in phase one to “ideally CC BY” in phase two. John Robertson has summarised licensing choices made by the projects. This is an example of how the programme has pushed forward our understanding of this area, including that of the legal experts. If we knew the answers at the start, it wouldn’t be an innovation programme!
Thinking back to the points about visibility: if the open content is not shared out again under an open licence, then it might be being used, but not so visibly. It might show up in web stats, but even then, once the content has been copied to another place, as any CC licence lets you do, the content proliferates, and so does the usage. Usage becomes even harder to see.
Another implication of describing use as above or below the waterline is that below-the-waterline use carries significantly less risk. The feeling was that this is an appropriate risk evaluation. So open licensing matters for formal institutional use, and less so for practice that is “under the waterline”.
- Mixed economy of content is the reality for most end use cases.
- The benefits of licensing to providers and users are different; of course users would like as many rights as they can have, but which use cases really need those rights? Can content be made available under a more restrictive licence and still be useful for the majority of use cases?
- There is an emerging use case of intermediary/brokering services: aggregation, fusion, curation, which perhaps does require CC BY. Not because the end user needs them but because the middleware needs them in order to remix and represent content. Often though I suspect it is the feed or metadata that needs to be licensed rather than the content. Open metadata might turn out to be more important to open academic content than open content licensing.
We are genuinely learning together on this: as open licensing spreads and models of use develop, we will need to be alert to the mechanics of remixing content. And also open to the possibility that the end point of the supply chain need not be open: not all content will be licensed out again.
However … I still want to hear more about the rights that Creative Commons licenses grant to translate and shift formats, for above the waterline activities. Examples please! This could be key.
The role of the university
Alongside the previously dominant narrative of OERs as exchanges between educators, and between educators and their learners, there is also the potential for academic open content to connect academics with the public. I explored that a little in a previous post; Tony Hirst explored it further in his post on OERs and public service education; and the University of Nottingham, Oxford University and other institutions also see their open content as part of their public engagement.
In some scenarios opening up universities through open practice is about OER, in the top-of-the-iceberg sense. But much of the benefit of opening up knowledge to the public can be achieved without open content licensing (unless the brokering services need it). Initiatives like Warwick Knowledge also draw on the notion of the university as a public knowledge organisation.
So there is a strong argument for focussing more on the public and global use of open content.
Sidenote: critics of the toothbrush analogy might say that this is what they meant all along. I’m not sure that is true; if it is, it wasn’t very well articulated. We still need to understand the drivers behind provision and how the benefits of public engagement can be articulated. Academic staff are a university’s most precious resource. The needs of institutions, who pay academics’ wages, need to be factored in to how open practice can be supported.
The pursuit of global knowledge is not owned by universities. Wikipedia, Slideshare, YouTube, Twitter, Delicious have all seen a blossoming of thoughtful quality input from a huge range of sources. The role that academics play in this open content space is as one type of contribution. Learners contribute too. But so do millions who are outside of formal education.
OER is dead. Long live academically-created content appropriately licensed and formatted to support intended users
Not quite as catchy, is it? However, I am increasingly hearing suggestions that OER is no longer a useful term, except as a supply-side term relating to the visible tip of the iceberg. I have recommended for some time that we drop the term and focus instead on open content and open practice.
Having asked a year ago what the “O” in OER means, now I find myself asking what the “Open” in Open Content means. Well, it definitely means free (not paid). And it means easily linkable, which means not authenticated (not closed). But what about near-ubiquitous controlled access through Gmail or Facebook? Sometimes the format matters, sometimes the licensing matters. Maybe this matters a lot for content to cross language boundaries, maybe it matters a lot for accessibility. In which case, do we need to articulate the costs and benefits of open content for those use cases? We don’t want to kill open practice by focusing too strictly on definitions of openness, any more than we want to kill open content by diluting the promise to users seeking editable, re-licensable content. What questions should we be asking about open content?
What do you think?
OER Digital Infrastructure Update by Amber Thomas is licensed under a Creative Commons Attribution 3.0 Unported License.
Permissions beyond the scope of this license may be available at http://www.jisc.ac.uk/contactus
Footnote (1) The wording was “Examples of suitable licences include those with the “Attribution” and “Share-Alike” clauses. We would encourage projects not to use the non-free Creative Commons variants (such as “Non Commercial” and “No Derivatives”), as these negatively affect the reusability of resources. However, we do recognise that in some specific circumstances this may not be possible.” (Thanks to David Kernohan for extracting)
In open educational resources and open access research, there is a strong emphasis on how they can support the exchange of content and understanding between people, with a focus on academic-to-academic, teacher-to-learner, and sometimes learner-to-learner. In this blog post I’d like to unpack how OER and OA can help connect academics with the public, and what implications that might have.
If, as a thought experiment, we were to take as our primary use case members of the public: how can we best support that?
Health warning: I used an old-fashioned method of flipchart and felt-tips, so I hope these are legible enough to get my point across. Also, I have used the metaphor of an open content cloud to mean something nebulous with fuzzy edges; I don’t mean cloud as in cloud computing. And I have a very weak definition of open: at a minimum I just mean freely available without authentication.
Let’s start with the basics …
1) Academics contribute content (tweets, slides, papers, courses …) to the web. As well as being available to other academics, it is also on the open web, so it is available to the public. It is also available to journalists to use in their reporting to the public.
2) Of course, the cloud of open content is huge, the range of journalistic media is huge: no individual can read it all. Which is why social media is so important to our filtering and selection of resources …
3) … because it is social media that enables members of the public, journalists and academics to make connections between content items, for those connections to be visible to everyone else, and for people to be able to make rich networks of connections.
4) So far so good. But for this to deepen and build over time, within topic areas, and within individuals, for provenance to be trusted, and for this to become normalised as the way we use the web, we need to properly attribute and cite resources, to make this flow reliable and visible …
5) … and to do that we need to be able to clearly identify people and content
Hopefully this all looks familiar.
I think there might be some interesting implications for how we manage the digital infrastructure to underpin this public<>academic use case. Many of these implications are bubbling away in blog posts about OA and OER, so I think there is some value in articulating what these implications are.
- Academics should embrace, and many are embracing, the possibilities of contributing content to the web, beyond an audience of other academics
- We need to understand what journalists need in order to use academic-created content. Data journalism and journalistic use of social media could be key to how academic-created content reaches the public. The rights that journalists need to share, reuse and remix content need to be granted, but in a way that meets the motivations of the academic (often attribution)
- We need to understand better how the public (i.e people) use the web and social media: digital literacy is important for academic content to get informed take-up from the public
- Likewise rights that need to be granted to the public to share, reuse and remix content need to be given as appropriate. But they need to be simple, hence the value of sticking to some core licenses such as the Creative Commons suite even where better niche licenses might be available
- So how much does open licensing matter? Perhaps it matters most in the academic<>academic exchange, next in the academic<>journalist exchange, and least of all in the academic<>public exchange.
- However how much does attribution matter? A lot. Embeddable machine readable licenses are key to these chains of attribution because they offer the possibility of automatic attribution. (I am convinced this is very important, whilst I recognise it doesn’t work for big data, it is necessary for a lot of other use cases)
- When we think about identifying people, we shouldn’t just think about identifying academics: journalists and the public matter too, and our systems of identifiers must work across domain boundaries; where do Twitter, Facebook and WordPress fit with this rich linkage?
Given that researchers should be thinking about public impact, and teaching academics are starting to think about open education, perhaps infrastructure providers and institutions should be weighting the academic-to-public use case more heavily?
Connecting people through open content by Amber Thomas is licensed under a Creative Commons Attribution 3.0 Unported License.
Based on a work at infteam.jiscinvolve.org.
Permissions beyond the scope of this license may be available at http://www.jisc.ac.uk/contactus
I have been thinking a lot recently about how to move beyond the rhetoric of “open equals good” towards identifying where open approaches help us meet key business cases. A notable quote from the Power of Open book launch was that “open isn’t a business model, it’s a part of a business model”. I’m seeing this trend in open educational resources, open access repositories and open innovation. It’s how open source became more mainstream, and we need to be learning from that journey. If we want to see open approaches sustained, we need to get businesslike about how we make the case, however contradictory that might sound.
Earlier this month I spoke at a UKOLN event on metrics and the social web, and the discussion there reinforced the potential of using the web more effectively to underpin our key business goals in further and higher education.
On 26th July I am presenting at the Institutional Web Managers Workshop 2011 and I will be developing this theme further, paying particular attention to the way that web managers can support open access, open educational resources and open social scholarship.
In reflecting on how open access and OER can contribute to the core business cases of universities, I think that activities particularly worthy of more attention include:
- Profiling academic expertise
- Supporting REF impact metrics
- Enhanced research publications
- Cross-linking open content to open course data
- Social media listening tools
- Web analytics and visualisation
My presentation on slideshare: Marketing and other dirty words
The information environment programme 2009-11 (mercifully shortened to inf11) is drawing to a close and we are starting to reflect on what it has achieved.
We chose to manage this programme as one very broad programme rather than a number of smaller programmes and it has included work on:
- Activity data
- Automatic metadata generation
- Infrastructure for resource discovery
- Repositories – enhancement, take up and embedding and improving deposit
- Linked data
- Scholarly communication
- Rapid Innovation
- Library management systems – includes work on a shared ERM system with SCONUL
- Research Information management
- Developer community
This represents a lot of work that has produced some exciting outputs and interesting results. To try and help people see what outputs and results are relevant to them, we have prepared a list of 27 questions that the programme has addressed or started to address. This was put together by Jo Alcock from Evidence Base who are evaluating the programme.
The programme won’t finish until July so we will continue to add to these questions. If you have any suggestions for things to be included, please let me know.
For our next programme of work we will have 4 separate programmes:
- Information and Library Infrastructure
- Research Management
- Digital Infrastructure Directions
We will be blogging more about these programmes soon.
There are a few places up for grabs for Innovation Takeaway – our event to discuss some of the lessons from the information environment programme 2009-11.
The event is free to attend and will take place at Aston University’s Lakeside Conference Centre on Thursday April 7th.
This event is a chance for programme participants and others to reflect on major lessons and how these can be applied to challenging institutional issues in Higher Education such as how to reduce or avoid costs in managing digital assets, how local innovators can benefit the institution, and how institutions can realise the value of an ‘open’ approach.
The information environment programme has included work on preservation, repositories, linked data, library systems, research management, developer communities and various flavours of open. The event will focus on case studies from the programme and will offer opportunities for discussion around each topic. Margaret Coutts will be the keynote speaker and will be giving her view on how we should be addressing the challenges the sector faces. We’ll provide a takeaway resource pack on each of the topics the event covers.
Places for the event will be assigned on a first come first served basis, so if this interests you, please register now. An agenda for the day, travel instructions and a contact email are all available from the registration page.
The hashtag for the programme and the event is #inf11
There is a huge variety of free content on the web of use for teaching, learning and research. In my recent post on Making the most of open content I argued that we need to understand use in order to make open content release more sustainable. This post is part two, an attempt to deepen the argument that use matters.
JISC funds a range of work to support innovation in open access and open content, and open comes in many flavours. So … what do open access, open data and open educational resources have in common? What makes content ‘open content’?
- Free at the point of use
- Not password-protected
- Available under an open licence
- often academic/user-generated (OER, OA)
- often repurposable/editable (OER, open data)
but after that, it gets a little more complicated … there is debate around what makes content “open”:
- Open data: 5 Stars of Linked Data, and there are discussions about how far open data and linked data should be aligned
- OERs: definitions: the OECD definition, the 4 Rs (reuse, revise, remix, redistribute), and there are discussions about what makes an educational resource open, or a resource useful for open education
- Open Access: Research papers tend to be pdfs or text but there are interesting issues around the limitations of PDFs: and the potential for enhanced publications
- Open Licensing: Even within creative commons licences there is a spectrum of openness and people draw the lines in different places, especially around non commercial and non derivative clauses
It’s worth saying at this point that open content isn’t just about the content, because it is also a manifestation of a way of working. The benefits of the open way of working are:
- knowing that content will be public is an incentive to improve the content
- collaborative development improves the work: the many eyes principle
- the best thing to do with your data/idea will be thought of by someone else
- if the public have paid, the public should benefit
- clarity of licensing makes re-use easier
- free at the point of use can save £cash and time
- it can invite commercial exploitation downstream
- visibility increases reputation, brand awareness, recruitment …
So given all these benefits of release, does it matter if content gets used? I often hear concerns that if we focus too much on use of content over the benefits of release, then we risk putting people off releasing content that might not get used. People make the long tail argument: your content might be just perfect for someone. They ask: what about the need to preserve content for access in the future rather than now?
There are some interesting perspectives to consider on this …
David Wiley makes an analogy between OERs and learning on the one hand, and toothbrushes and good oral hygiene on the other, arguing that “OERs are like toothbrushes”. The analogy includes:
- “A free toothbrush doesn’t insure that people will actually engage in the behavior of brushing their teeth.
- Toothbrushing normally takes place in a private space (like a bathroom), so direct observation isn’t practical.
- Because the organization has no idea who picked up the toothbrushes, they can’t reach back out to people later to find out if people’s oral hygiene actually improved or not.”
There are other issues around the relationship between release and use too:
- In open access, research papers have an established and understood use model where publishing brings benefits, whereas teaching resources don't have such an established model
- Open data is partly driven by the transparency agenda, and partly by being a cost-efficient way to handle freedom of information requirements, so it can be argued that making the data available is enough to justify release
- Andy Beggan (Nottingham) observes that reusing web-based content already happens a lot, hence the need to strip third-party materials out of existing teaching resources to make them into OER. Yet people still ask whether OERs are getting used.
- Melissa Highton (Oxford) suggests that open content literacy means using content in an ethical manner
- Les Carr (Southampton), reflecting on OER and open data, argues that re-use is the enemy of access: designing for reuse rather than just access sets the bar to entry very high
So I think that understanding use should inform the way we release content. This is not to imply that unused content is useless content, but that through understanding use we can release content that is optimised for it. Optimising based on evidence means making informed decisions about the costs and benefits of release, so that the effort taken to optimise content is worthwhile and results in more benefits arising from usage. This will help make open content more sustainable.
At the JISC conference next week I have organised a session on “Making the most of open content: stories from the frontier”
“Over the years JISC has funded many projects to support sustainable open working and release of content. So what are people doing with all this “stuff”? This session will bring together ideas of digital scholarship, open science, open data, open education and open educational resources to look at it all from the point of view of the user. It will be a lively set of stories of how people have found, used and shared free open content, opportunity for discussion and reflection, and participants will help compile top tips for making the most of it all”
I have brought together five people with different stories to tell about how they make use of open content/data, and in the process of sourcing those stories I have been struck again by how much more we understand about the release of content than about its use. The question of "what do you want to do with it" is ever shifting, and the implications for how content is released on the web are playing out in a number of areas, particularly in open educational resources and open data.
The OER Impact Study is exploring use and re-use, including modelling the landscape; there is also a learner-voice literature review out to tender. For understanding audiences in order to provide quality content services and collections, there are the Strategic Content Alliance audience publications. For models of optimising research data for use, there are the current Managing Research Data projects on citing, linking, integrating and publishing research data (CLIP). For two-way engagement with external communities in the co-development of digital content, there are the Developing Community Content projects. Work on ways of collecting, analysing and reusing data about the way that staff and students interact with institutional systems is being done in the Activity Data Programme.
In preparing, I have been thinking about how much we really understand about use, about what people actually do with the content they find. There is huge potential, and plenty of models, but the task before us now is to understand actual use, so that we can support release models that really accrue the benefits promised. We can optimise content for potential use, but unless it gets used, the affordances we've worked so hard to provide don't translate into benefits for users.
The big issues around open content/data are whether the effort needed to release it will be rewarded with enough benefits to justify continued release. Open data is an organisational decision, with some parallels to "Big OER": enough benefits have to be realised by the organisation to justify continued release. I think this is different from "small OER": if individuals have the rights and the drive to release their content, whether as researchers, teaching academics, managers or learners, then they have the option to release it as individuals; issues around individual motivation to release are not what I am talking about here. For these organisation-level decisions the stakes are higher, and that's why the question of benefits is in sharp focus. It's quite possible that benefits may be long-tail and/or long-term, or difficult to measure, such as process change. I'm really impressed by the approach taken in the Open Bibliographic Data Guide to articulating the nuances of supply-driven and demand-driven use cases: we need more of this pragmatic approach to openness.
Business models for open content can be abused more easily than those for paid-for, proprietary, all-rights-reserved content, so it's important that users help bring benefits to releasers; at the very least they shouldn't undermine the business model for release. The need for users to act responsibly in demanding open data is nicely outlined in this post by Tom Steinberg. The need to assume a spectrum of practice and use, rather than building for the purest use case, is highlighted in this post by Les Carr.
What might a similar OER pragmatism look like? Certainly an emphasis on attribution. Learning from citation practices, we need to make teaching resources citable and attributed, and one route towards that is smarter use of embedded licences for expressing rights, and ideally machine-readable licences for tracking use. There are also things that open licences enable, such as the right to make alternative formats and the right to translate, both of which make open resources useful to open education, as this post by Terese Bird argues. In looking at the trade-offs between what releasers want and what users want, we also need to consider branding, as described in this post by Suzanne Hardy. I'd really like to see more thinking like this about how understanding real use of open content/data can help us make pragmatic decisions about which use cases to optimise for.
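To make the machine-readable licensing idea concrete: Creative Commons recommends marking up licence statements in HTML with a `rel="license"` link, which tools can then detect automatically. Here is a minimal Python sketch (standard library only; the sample HTML fragment is invented for illustration) of how a crawler or repository might pick out such licence links:

```python
from html.parser import HTMLParser

class LicenceFinder(HTMLParser):
    """Collect href values of <a rel="license"> links, the pattern
    Creative Commons recommends for machine-readable licence markup."""
    def __init__(self):
        super().__init__()
        self.licences = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        # rel can hold multiple space-separated tokens, e.g. "license nofollow"
        if tag == "a" and "license" in a.get("rel", "").split():
            self.licences.append(a.get("href"))

# Invented sample page fragment, for illustration only
sample = """
<p>This teaching resource is licensed under a
<a rel="license" href="https://creativecommons.org/licenses/by-nc/4.0/">
CC BY-NC 4.0</a> licence.</p>
"""

finder = LicenceFinder()
finder.feed(sample)
print(finder.licences)
```

A tool built along these lines could aggregate licence information across a collection, which is a small step towards the kind of use-tracking and attribution practice discussed above.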
We need to know more about use! The OER Impact Study is a key focus for open resources for teaching and learning, but the question is bigger than that: how can we make open content sustainable?
Follow the conference session online, tweet #jisc11 #ocstories, or blog.
Stop Press: published today: sharing, reuse and frameworks by Mike Caulfield.
More! This post by Peter Robinson about feedback from users illustrates what users value, the potential for reaching the long tail, and listening to podcasts as an alternative to afternoon TV. This post by Andy Beggan highlights the irony of bemoaning "a lack of reuse" whilst simultaneously struggling with the licensing of materials full of third-party content: reuse is happening, it's just mostly unattributed/illegal and unknown/untrackable.