"Prediction is very difficult, especially about the future." – Niels Bohr
Recently I have been thinking a bit about ‘mobile’ within an academic context. The M-Libraries projects are coming to an end, and we’re currently working on a report exploring the future impact of mobile and ubiquitous technologies on the HE sector called Mobile Futures.
Mobile technologies are a bridge between our online social connections (the hallmark of recent web innovations such as Facebook and Twitter) and our physical, real-world social interactions. Institutions increasingly recognise mobile as an extension of the journey from online social connectivity to real-world reciprocity. This transformation provides opportunities for institutions to engage with their students in significant new ways, and to exploit mobile technologies to enhance this engagement and experience.
So, with these thoughts in mind I thought I would have some fun and consider five ‘trends’ around the future of mobile in academia and education. These thoughts are very much rough drafts, and shipped far too early!
Mobile as a Platform
For institutions mobile ‘products’ are often the focus of attention – the campus app or the mobile website. Yet, these discrete developments feel increasingly like a means to something greater – stepping stones – rather than as ends in themselves.
Can mobile (services, and the development of those services) itself be a platform for other institutional and student/researcher benefits? Are our future mobile interactions a platform for a new form of engagement with students and researchers?
This is a question that is emerging from current work exploring the future impact of mobile on the academic sector. Here platform embodies something much closer to the ‘platforms’ represented by Facebook or other social network platforms.
The potential of this new ‘platform’ seems to overrun our current conceptions, pointing to something decentralised, data-centric, open, social, intuitive… In many ways the technology begins to drop out of our considerations of the future, and it’s the mobility of the student or researcher that becomes the critical factor.
It is the mobility of the individual that also highlights the fragmentation of the mobile device into forms much more intimately connected to ourselves.
The mobile devices that we have upon us will, increasingly, also be the filters through which we view reality. Augmented Reality (AR) will be the next transformative technology to change the way in which we interact with the world, and our institutions.
Using visual cues in the environment, AR overlays a digital world on top of the real world through the mobile device. Projects like HealthCARe at City University enable students to gain contextual information on health care issues simply by pointing their phone at an object or space.
SCARLET, a project from Mimas at the University of Manchester, uses augmented reality to connect students to extremely rare books, along with the relevant contextual information for each resource, to radically enhance the student learning experience.
AR will play an increasing role in teaching and learning, as well as in the way institutions provide support services and information to students and staff. The interactions between the physical and virtual environments of the student will become increasingly blurred, as will the boundaries between the body and the device.
The history of our recent technologies is one of carefully repackaging the artifacts of our lives in smaller and smaller boxes. The zenith of this miniaturisation is mobile computing. Increasingly, however, these boxes are being unpacked, and the technologies of mobile computing are being reconfigured in new forms.
The emergence of Augmented Reality as a serious mobile trend for education also marks the growing intimacy between the device and our bodies. The augmentation of reality will be mirrored by an augmentation of the body with the device. Increasingly, the ‘form factors’ we are used to (the mobile phone, the tablet) will gradually be superseded by new forms: earpieces, glasses and sensors.
This evolution of form could have some interesting implications for institutions. If BYOD (bring your own device) creates issues for institutions supporting user-owned technologies, then a fragmented, decentralised mobile form could increase those problems of support exponentially.
Devices will become hyper-personalised, and this will impact on the experience students will expect from the institutions that deliver their education.
There are huge opportunities for scale when it comes to educational technologies and mobile learning initiatives: the worldwide demand for education is growing exponentially. Yet, mobile offers an intriguing opportunity for institutions to scale downwards; effectively scaling down to the singular – to the individual level of experience.
Imagine an institutional information service that is scaled to you – not the institution. Echoing Paul Walk’s futuristic vision of library services this might place the mobile device in the role of ‘educational concierge’, delivering relevant information and resources, wherever and whenever. Indeed, it is a small leap of the imagination for this ‘library service’ to deliver information before you know you need it: precognitive services!
This notion of scale rolls over into areas such as course work and accreditation: micro-tasks combined with micro-accreditation. A couple of small tasks to do on the bus into campus, accruing towards your final exam. With the rise of MOOCs and online learning, this future is fast becoming a reality!
Mobile technologies, given their ubiquity, encourage a focus on the opportunities for constant connectivity that they offer. An academic world always on. However, it is clear that there will be an increasing need for spaces, places and strategies that enable students and staff to go ‘dark’. As institutions attend to enabling wifi everywhere, there may be an increasing requirement for wifi ‘coldspots’.
Some of the research emerging from the Visitors and Residents project highlights students’ awareness of the addictive and distracting nature of online social media, and suggests they may increasingly require wifi-free areas. Indeed, it may be that around periods of intense ‘visitor’-type activity, for example examinations and paper deadlines, institutions provide entirely ‘blacked out’ environments.
Institutions will need to build this ‘graceful disconnection’ capacity into their systems and services to enable students and researchers to step away from the ‘internet of things’ while they study.
This is my attempt to have some fun with the implications of mobile computing and technologies on education and academic institutions. It’s personal, HE-centric and hopelessly optimistic (I don’t touch on issues of your ‘data shadow’, issues of privacy or protection etc).
So… What would you articulate as the important trends in mobile for education? What are your #mobilefutures?
On behalf of the JISC/HEA UK Open Educational Resources (OER) Programme I am delighted to announce the completion of the OER Rapid Innovation Strand.
(also available as a PDF and downloadable file)
These solutions address a range of issues that we identified in our original call for proposals, and they specifically address:
- how to create rich machine-readable resources that contribute to the global content commons?
- how to simply and cheaply host and display content?
- how to enrich and manage audio-visual content in an educational context?
- how to bring OER content closer to the everyday life of academics?
- how to make use more visible to aid discovery and decision making?
All of the project outputs are reusable and the slidedeck includes links to key information. Final project reports will be available from the strand project pages. Please do try out the solutions provided, feed back to the projects and join the lively oer-discuss list to connect to people supporting open educational resources.
A huge thank you to the projects, and to Martin Hawksey, Phil Barker and Lorna Campbell of JISC CETIS for doing such fantastic work.
Tuesday 13th November saw the final programme meeting for the UK OER Programme 2009-2012. An aim of the programme had been to find sustainable practices for the release of OER, and there were many success stories shared. It seems to me that the funded programme marks the start of a general move towards greater OER practice.
There was a mandated requirement for projects within the Programme to tag their content with “ukoer”. Whether the content is images on Flickr, courseware on institutional webpages or videos on YouTube, it should be tagged as ukoer. The tag also got used for discussions about OER on Twitter and in blog posts. As with many mandated requirements it was not universally or consistently applied, despite our best efforts otherwise.
It soon became clear that it can be hard to distinguish between the content that *is* OER and the content that is *about* OER. Particularly because openly licensed materials designed for other people to reuse in a training/learning context can be both about OER and OER themselves: OER squared! There’s quite a lot of content like that.
Recently members of the oer-discuss jiscmail list have been debating whether we should make continued use of the ukoer tag, and whether we can even control tag use post-funding now it is out there in the wild.
What does the tag mean? Is it …
- a signifier that content was produced with funding from the Programme
- a signifier that the releaser has been involved with the Programme
- a signifier that the releaser is contributing to a bigger collection of OER within the UK
- a signifier that the releaser identifies themselves as a part of a UK OER community?
Sometimes the tag might be about the contributor, the OER, or about activities such as workshops.
Focusing on its use for the OER content itself: each of the meanings above might suggest different use cases for how people might wish to slice and present content. It’s worth noting that the tag is only one metadata item: each piece of content also has a publish/release date (often relating to when it went live on the platform being used), and an owner/author/contributor (sometimes an institution, a team, an individual, or a combination). Using these variables we can imagine use cases such as:
- see all content tagged ukoer dated 2009-2012
- see all content tagged ukoer
- see all content before 2012 and all content after 2013 (two searches to compare)
and of course, to look at the usage of that content too.
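To make those slices concrete, here is a minimal sketch in Python of how an aggregator might filter tagged content by date. The sample records and field names are invented for illustration, not a real Jorum or aggregator schema:

```python
from datetime import date

# Invented sample records; the field names are illustrative only,
# not a real aggregator schema.
records = [
    {"title": "Chemistry slides", "tags": ["ukoer", "chemistry"], "released": date(2010, 5, 1)},
    {"title": "History podcast", "tags": ["ukoer"], "released": date(2013, 2, 14)},
    {"title": "Maths worksheet", "tags": ["oer"], "released": date(2011, 9, 30)},
]

def tagged(records, tag):
    """All records carrying a given tag."""
    return [r for r in records if tag in r["tags"]]

def tagged_between(records, tag, start, end):
    """Records with the tag released within [start, end] inclusive."""
    return [r for r in tagged(records, tag) if start <= r["released"] <= end]

# "see all content tagged ukoer"
all_ukoer = tagged(records, "ukoer")

# "see all content tagged ukoer dated 2009-2012"
programme_era = tagged_between(records, "ukoer", date(2009, 1, 1), date(2012, 12, 31))

# two searches to compare: before 2012 vs after 2013
before = [r for r in tagged(records, "ukoer") if r["released"].year < 2012]
after = [r for r in tagged(records, "ukoer") if r["released"].year > 2013]
```

Even this toy example shows the munging problem: the "ukoer" tag alone cannot distinguish programme-funded content from community contributions, so the date field has to carry that distinction, imperfectly.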
If we take the ukoer tag as a single identifier for content released as part of the programme, that might still be messy – “as part of”, “as a result of”, “with an awareness of”. Those latter meanings could continue to be true. Many people might still see benefits of signifying their content is contributing to a UK OER commons. That commons is the real impact of the programme and it would be healthy to see that continue.
However that does make it harder for people to derive clear narratives / patterns from the data in Jorum (or any other aggregation). As Sarah Currier puts it “it’s harder to disambiguate a large number of resources with the same tag expressing different properties (“funded by UKOER” *and* “produced by member of UK OER community”), than to just have a new tag that expresses the new property”. “It is very bad data management practice to munge together two concepts in one tag. It is very easy to agree a new tag; data from both can be brought together for analysis much more easily than disambiguating data about two things from one tag”.
However our decision about whether to encourage continued use of the “ukoer” tag will not just be about best practice. It is about weighing up best practice against common practice and the cultural considerations. At the risk of sounding like I’m overcomplicating things: it is a socio-technical issue. There is a balance to be made between the stated or tacit requirements of funders, the role Jorum plays for the funders, the role of Jorum for contributors, and the effort of people involved with OER. Of course by contributors, we are talking about the deposit/share point within an institution or team, who need to keep messages and requirements as simple as possible.
The list members have therefore looked to JISC to say whether/how we will want to draw on these figures as evidence of the impact of the programme. In a sense the measurement of the impact of the programme is inherently fuzzy and that causes complexities for service providers like Jorum who are rightly trying to anticipate future use cases.
We are lucky to have experts in this field, including two members of the Jorum team who wrote about the challenges of metadata in learning object repositories and members of JISC Cetis who are writing about resource description in OER. I have gathered their input into this post so that we can try to start articulating the issues here. It is through this exchange that we can make the right decisions for JISC, HEA and the wider community.
The point I make here is that we have before us a classic problem space. It illustrates that metadata decisions are about current and future use, that they are about balancing the needs of contributors and users, and that these things require discussion and the unpacking of assumptions. There are solutions out there, involving the sources, the aggregations … but it depends on what we want.
What’s the answer? Should we continue using ukoer as a community tag for a fuzzy concept or try to restrict use to a controlled tag for a funding stream? If we chose the latter (for any reason) could it actually be controlled in that way?
We would be interested to know what people think. The oer-discuss list leans towards the former but there can be many other perspectives and those of you who have been at the sharp end of evidencing impact may have some valuable war stories to share.
Post written with input from Sarah Currier, David Kernohan, Martin Hawksey, Lorna Campbell, Jackie Carter.
This post is my reflections on the emerging conclusions from the JLeRN Experiment.
Applying a new approach to an old problem
Followers of technology trends will have noticed some of the big themes of recent years include cloud storage, big data, analytics and activity streams, social media. Technologists supporting education and research have been using these approaches in a range of ways, finding where they can help solve critical problems and meet unmet needs. Many of these explorations are investigative: they are about getting a grasp of how the technologies work, what the data looks like, where there are organisational or ethical issues that need to be addressed, and what the skills are that we need to develop in order to fully exploit these emerging opportunities.
The Learning Registry has been described by Dan Rehak as “Social Networking for Metadata” (about learning resources). Imagine pushing into the cloud RSS feeds of all the URLs of learning resources you can imagine, from museums, from educational content providers, from libraries. This is about web-scale big data. Imagine that cloud also pulling in data about where those URLs have been shared: on Facebook, Twitter, blogs, mailing lists. If you’ve tried out services like topsy.com or bit.ly analytics you’ll know that finding out information about URL shares is possible and potentially interesting. Now imagine being able to interrogate that data, to see meta trends, or to provide a widget next to your content item that pulls down the conversation being had around it. That is the vision of the Learning Registry. Anyone who has been involved with sharing learning materials will recognise the scenario on the left below.
Learning Registry Use Case, Amber Thomas, JISC 2012, CC BY
The Learning Registry is about applying the technologies described above to the problem on the left, by making it possible to mine the network for useful context to guide users.
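The publishing side of that vision amounts to pushing a small JSON “envelope” per resource URL to a node. The sketch below is a simplified illustration based on my reading of the Learning Registry documentation; the field names, version string and terms-of-service URL are assumptions, so check the specification before relying on them:

```python
import json

# A simplified Learning Registry-style envelope for one resource URL.
# Field names are illustrative; consult the LR specification for the
# authoritative envelope schema.
def make_envelope(url, metadata, submitter):
    return {
        "doc_type": "resource_data",
        "doc_version": "0.23.0",              # assumed spec version
        "resource_locator": url,              # the URL being described
        "payload_placement": "inline",
        "resource_data": metadata,            # whatever metadata the source holds
        "identity": {"submitter": submitter, "submitter_type": "agent"},
        "TOS": {"submission_TOS": "http://example.org/tos"},  # hypothetical
    }

envelope = make_envelope(
    "http://example.ac.uk/oer/chemistry-101",
    {"title": "Chemistry 101 slides", "license": "CC BY"},
    "Example University",
)

# A publish request would wrap one or more envelopes in a documents list:
publish_body = json.dumps({"documents": [envelope]})
```

The notable design point is that `resource_data` is opaque to the node: the network transports and replicates whatever metadata the contributor supplies, which is exactly the agnosticism discussed later in this post.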
To explore this potential, the JISC/HEA OER Programme has been funding an experiment to run a Learning Registry “node” in the UK. The growth of openly licensed content and the political momentum to encourage the use of that content has been a spur to funding this experiment though it should be noted that the Learning Registry is not designed purely for open content.
See this useful overview for more detail of the project. It has been running on an open innovation model, sharing progress openly, and working with interested people. Headed up by the excellent Sarah Currier, with input from Lorna Campbell and Phil Barker from JISC CETIS, in my view it has been a very effective experiment.
Towards the end of the work, on 22nd October 2012, Mimas hosted an expert meeting of those people that have been working with the Learning Registry, services and projects, contributors and consumers, developers and decision makers. It was a very rich meeting, with participants exchanging details of the way they have used these approaches, and deep discussions on what we have found.
What follows is my analysis of some of the key issues we have uncovered in this experiment.
Networks and Nodes
The structure of the LR is a fairly flat hierarchy: it can expand infinitely to accommodate new nodes, and nodes can cluster. See the overview for a useful diagram.
What this structure means is that it can grow easily, and that it does not require a governance model with large overheads. The rules are the rules of the network rather than of a gate-keeping organisation. This is an attractive model where it is not clear who the business case lies with.
One of the ways of running a node is to use an Amazon Web Services instance. That seems a nice, pure way of running a distributed network; however, university procurement frameworks have yet to adjust to the pricing mechanisms of the cloud. Perhaps in that respect we’re not ready to exploit cloud-based network services quite yet.
However, more generally, I think what we are seeing is a growth in the profile of services that are brokers and aggregators of web content. Not the hosts, or the presentation layers, but services in between, sometimes invisible. JISC has been supporting the development of these sorts of “middleware” or “machine services” from the early days: the terminology changes but the concept is not new to JISC. What does seem to be developing (and this is my perception) is an appetite for these intermediary services, and the skills to integrate them. Perhaps there is a readiness for a Learning Registry-ish service now.
Another key architectural characteristic is a reliance on APIs. This enables developers to create services to meet particular needs. Rather than a centralised model that collects feature requests from users, it allows a layer of skilled developers to create services around the APIs. The APIs have to be powerful to enable this though, so getting that first layer of rich API functionality working is key. To that extent the central team has to be fast and responsive to keep up momentum.
However the extent to which the LR is actually a network so far is unclear. There are a handful of nodes, but not to the extent that we can be sure we are seeing any network effects. The lack of growth of nodes may be because the barrier to setting up a node is perceived to be high. It may be too early to tell. But for the purposes of the JLeRN experiment, my conclusion is that we have not seen the network effects that the LR promises.
Pushing the hardest problems out of sight?
It’s easy to fall into a trap of hoping that one technical system will meet everybody’s needs. The Learning Registry might not be THE answer, but there is something of value in the way that it provides some infrastructure to manage a very complex distributed problem.
However the question raised at the workshop by Sarah Currier in her introduction and again by David Kay in his closing reflections is: does it push some of the challenges out of scope, for someone else to solve? The challenges in question include:
- Resource description and keywords
- People identifiers
- Data versioning
To take one problem area: resource description for learning materials. It is very hard to agree on any mandatory metadata beyond Dublin Core. This is partly because of the diversity of resource types and formats: a learning material can be anything, from a photo to a whole website. Within resource types it is possible to have a deeper vocabulary, for example for content packaged resources that may have a nominal “time” or “level” attached. Likewise, different disciplinary areas not only have specialist vocabularies but also use content in different ways. It is technically possible to set useful mandatory metadata BUT in practice it is rarely complied with. When we are talking about a diversity of content providers with different motivations, the carrots and sticks are pretty complicated. So users want rich resource description metadata, to aid search and selection, but that is rarely supplied.
The Learning Registry solution is to be agnostic about metadata: it just sucks it all into the big data cloud. It does not mandate particular fields. What it does is offer developers a huge dataset to prod, to model, to shape, and to pull out in whichever way the users want it. Developers can do anything they want with the data AS LONG AS THE DATA EXISTS. If there is not enough data to play with, or not enough consistency between resources, then it is hard to create meaningful services over the data.
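One practical consequence of that agnosticism is that a developer’s first job is usually to survey what the harvested data actually contains before designing any service on top of it. A rough sketch, using invented and deliberately inconsistent records, of measuring field coverage across heterogeneous metadata:

```python
from collections import Counter

# Invented, deliberately inconsistent metadata records, as a cloud of
# harvested descriptions from many contributors might look.
records = [
    {"title": "Cell biology video", "subject": "biology", "level": "HE"},
    {"title": "Essay writing guide", "keywords": ["writing", "skills"]},
    {"name": "Photo of a leaf", "licence": "CC BY"},
]

def field_coverage(records):
    """Fraction of records supplying each field name."""
    counts = Counter(k for r in records for k in r)
    return {field: n / len(records) for field, n in counts.items()}

coverage = field_coverage(records)

# Anything below a usable threshold cannot drive a faceted search or
# a meaningful service; here only 'title' clears 50% coverage.
usable = [f for f, frac in coverage.items() if frac >= 0.5]
```

A survey like this makes the “AS LONG AS THE DATA EXISTS” caveat measurable: if coverage of the fields users care about is low, no amount of clever development on top of the node will produce a rich search experience.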
I said above that the Learning Registry “provides some infrastructure to manage a very complex distributed problem”. But on reflection does it manage that complexity? Or does it just make it manageable by pushing it out of scope? And if it doesn’t enable developers to build useful services for educators, is it successful?
These are a selection of the issues that the experiment is surfacing. There are certainly plenty of question marks about the effectiveness of this sort of approach. But I still feel sure that there are aspects of these technologies that we should be applying to meet our needs in education and research. Certainly, this experiment has overlapped with work in JISC’s Activity Data programme, in our analytics work and in the area of cloud solutions. There is something interesting happening here, some glimpses of more elegant ways of sharing content, maybe even a step change.
Opportunity to preview and contribute to a book about the technical themes emerging from three years of the UK OER Programme
Extract from Lorna Campbell’s blogpost:
The OER technology directions book that Amber, Phil, Martin and I drafted during a book sprint at the end of August is now almost complete. We even have a title!
Technology for Open Educational Resources – Into The Wild. Reflections on three years of the UK OER Programmes
We’ve spent the last few weeks, polishing, editing and amending the text and we would now like to invite colleagues who have an interest in technology and digital infrastructure for open educational resources to review and comment on the open draft.
We’re looking for short commentaries and feedback, either on the whole book, or on individual chapters. These commentaries will form the final chapter of the book. We want to know what rings true and what doesn’t. Have we missed any important technical directions that you think should be included? What do you think the future technical directions are for OER?
Note that the focus of this book is as much on real-world current practice as on recommended or best practice. This book is not intended as a beginner’s guide or a technical manual; instead it is a synthesis of the key technical issues arising from three years of the UK OER Programmes. It is intended for people working with technology to support the creation, management, dissemination and tracking of open educational resources, and particularly those who design digital infrastructure and services at institutional and national level.
The chapters cover:
- Defining OER
- Resource management
- Resource description
- Licensing and attribution
- SEO and discoverability
- Tracking OERs
- Accessibility by Terry McAndrew, TechDis
UK OER projects from all phases of the Programme are encouraged to comment, and we would particularly welcome feedback from colleagues that are grounded in experience of designing and running OER services.
For more details, links and guidance on how to contribute, please see Lorna’s blogpost.
We wrote most of this as a booksprint: a writing retreat using collaborative authoring software. As well as continuing to write the book with the help of the community, we also have the fun of choosing cover designs and formats and print-on-demand options … it’s quite a learning curve and very rewarding. So now you all know what my family and friends will be getting for xmas
Amber Thomas, JISC.
One of the findings that has emerged clearly from the UK OER Programme and from the UK Discovery work is that for a healthy content ecosystem, information about the content needs to be available to many different systems, services and users. Appropriately licensing the metadata and feeds is crucial to downstream discovery and use.
The OER IPR Support Project have developed this fabulous animation to introduce the importance of open data licensing in an engaging way.
It was developed out of the UK OER Programme but informed by the work of several other areas including UK Discovery, Managing Research Data, the Strategic Content Alliance, and sharing XCRI course feeds. With thanks to the many people who helped in the storyboarding, scripting and feedback: particularly Phil Barker, Tony Hirst and Martin Hawksey.
You may remember the same OER IPR team produced the Turning a Resource into an Open Educational Resource (1,700+ hits and counting). The team is Web2Rights (Naomi Korn, Alex Dawson), JISC Legal (Jason Miles-Campbell) and the animator is Luke McGowan. The whole animation is (c) HEFCE on behalf of JISC, and Creative Commons Attribution Share Alike 3.0.
We hope it will have wide usefulness and we very much welcome feedback.
Amber Thomas, JISC
Further / Higher Education (F/HE) in the UK is in the fortunate position of having talented and experienced developers working in its organisations, driving both service development and applied research. Because of this, developers in F/HE are frequently a particularly rich source of technical innovation for the sector.
At ALT-C Paul Walk and I ran a session exploring the concept of the Strategic Developer. Paul is the Deputy Director of UKOLN and oversees DevCSI, a JISC-funded initiative focused on the development of technical talent within the UK FE/HE/Research sectors. An experienced manager of developers himself, Paul has been looking at the ways in which technology staff are situated in decision-making processes. Back in the spring, his colleague Mahendra Mahey and I ran a discussion session at Dev8eD about the role of developers in e-learning, and the ideas UKOLN are exploring clearly had some resonance. This session was a discussion of the issues around in-house technology expertise in a learning and teaching systems context. Our focus was on the hard technical skills end of the technology spectrum: the coders, hackers and integrators, the people who build and develop software solutions.
Paul’s slides below describe the ideas around local developers, connected developers and strategic developers which underpin the DevCSI initiative.
We had a small but very experienced group of participants.
A-Z by surname: Suzanne Hardy (Newcastle), Martin Hawksey (JISC CETIS), Jo Matthews (UCL), Mark Stubbs (MMU), Jim Turner (JMU), Scott Wilson (JISC OSSWatch).
My take-home messages from our discussions are below, and I hope that other participants will add their thoughts.
The cloud and the software-as-a-service (SaaS) model are often conflated in people’s minds with the outsourced model. In truth there are many models of SaaS, offering greater and lesser levels of control for the client. This reminded me of one of my favourite talks at Dev8eD: Alex Iaconni on different sorts of hosting. Paul’s observation is that the push into the cloud is sometimes mistakenly associated with a reduction in the expertise required from the client. Cloud and SaaS just make some aspects of the system remote, not necessarily all, and they certainly don’t always negate the need for in-house expertise.
That said, there are some trends where complexity moves up to the “above campus level”. The sorts of shared services that libraries use are changing the division of labour between technical experts within libraries and those working at a vendor/supplier level. Certainly in JISC’s work on repository and curation infrastructures we are seeing potential for abstracting some functionality (and its expert design) up to a network level. I am interested to see whether e-learning will see similar trends: with some specialisms focussed at the shared service level rather than locally.
In open source, we also see that pooling of technical expertise across employer boundaries. Certainly Moodle is a really good example of technical skills distributed between institutions, service providers and the developer community in its own right. The recent case study on MMU’s use of ULCC’s hosted Moodle solution is a good example of that. The point was also made that OS coders are connected developers out of necessity, and that brings the benefits of greater awareness of other software and approaches.
Thinking now about big contracts for outsourced services, we discussed how an institution needs in-house technical expertise to:
- specify technical requirements to vendors
- evaluate proposals
- negotiate technical detail
- oversee technical delivery
- integrate the external service with local systems
and so on. In short, to act as an “intelligent client” or “intelligent customer” to ensure that institutions are getting value for money from their suppliers. The complexity of university technical infrastructures means that vendors who overpromise or underperform are hugely costly to universities. When we’re talking about huge contracts like that at London Met the potential for inefficiency is huge, and those suppliers must be carefully managed.
I think Paul’s diagram is worthy of reproducing here:
Incidentally I’m not suggesting here a crude “them and us” characterisation of suppliers and customers. I’m arguing that for IT contracts to deliver effective solutions there needs to be a meeting in the middle. I would argue that it is a good test of a vendor that they are happy to get “their guys” talking to “your guys” as soon as possible. Any supplier who is happy to be judged on results will want to get it right and they would rather have frequent access to accurate technical information than to a contract manager with no mandate for decisions. I would love to hear from developers working for suppliers on whether that rings true, but in all my experience, they need to be met half way by the client on getting the technical implementation right.
We also discussed the way in which in-house technical expertise is managed, and on reflection we were describing some common variations, each of which combines to make institutional set-ups quite diverse:
- institution size matters: a small institution may provide more space for a networked and strategic developer
- seniority of developers matters: some will be more involved in procurement decisions described above
- e-learning developers might be central or embedded in departments
- VLEs treated as part of the enterprise suite or as specialist applications supported by e-learning team
- Whether IT/library is converged or not also impacts on where e-learning developers sit in the organisation
- Patterns of home-grown systems/tools becoming integrated or discarded
- In-house open source solutions mean in-house expertise, but externally hosted OSS has more variation
- Mix of core staff and contracted staff (both long and short term)
- Mix of external technical consultants and coders and the ways in which their knowledge is sustained
- Extent of tactical use of internal and external project funding to enhance in-house technical capacity
- Extent to which developers’ technical skills and approaches are actively nurtured
- Extent to which developers’ soft skills are developed in areas like pitching, presenting, supporting users, business analysis, cost assessments etc
Even within our small group there was considerable variation. That certainly suggests that in sharing our emerging models of managing distributed and cloudy infrastructures, we need to clearly state our local contexts.
It was a thought-provoking discussion. It emphasised to me the value of JISC’s support for connecting developers, and the need to continue investing in in-house technology expertise.
Amber Thomas, JISC
JISC Observatory have launched the draft version of a new report on eBooks in Education.
This report updates previous work researching the usage and adoption of ebooks within academic institutions and examines recent developments that are shaping how academic institutions can respond to growing interest in ebooks:
As ebooks become mainstream and the percentage of academic publications delivered as ebooks rises steadily, this report explains the importance of preparing for the increasing adoption and usage of ebooks in academic institutions. Specifically, this report: 1) introduces the historical and present context of ebooks; 2) reviews the basics of ebooks; 3) considers scenarios for ebook adoption and usage; 4) addresses current challenges; and 5) considers the future. This report also provides a glossary to help clarify key terms and a ‘References’ section listing works cited.
The preview version of this report is open for public comments from 27 September to 8 October 2012. A final version, taking into account feedback received, is scheduled for publication around the end of October.
See news item for more information and an opportunity to comment.
Alongside this report, JISC is developing further practical guidance. Building on JISC Collections ebook expertise, the Digital Monographs Study and JISC Digital Media expertise, later this year we will be releasing Digital Infrastructure Directions guidance on the Challenge of eBooks in Academic Institutions.
The guidance is being co-designed with experts in the sector using a project development wiki. The project covers the creation, curation and consumption of eBooks. There are many issues to unpick: the Bring Your Own Device (BYOD) trend, the role of libraries, changes in publishing and purchasing models, accessibility and access considerations, and so on. We will make this a helpful tool for institutional managers to navigate the choices signposted in the JISC Observatory report.
Some prototype guidance will be available soon and we will be seeking input on how to ensure that it meets the needs of decision-makers in institutions. The guidance authors will be watching the comments on the JISC Observatory report preview to ensure that they address the challenges surfaced.
In summary please comment on the JISC Observatory Draft Report: Preparing for Effective Adoption and Use of eBooks in Education and watch this space for more!
Amber Thomas, JISC
At ALT-C in early September I ran a session with David Kernohan on Openness: learning from our history. The theme of the conference was “a confrontation with reality”, so it seemed fitting to explore the trajectories taken by various forms of openness. What follows is a short thought piece that I contributed to the session about some of the patterns I have observed over the past decade or so.
Curves and Cycles
The first thing to say is that we are all different in our encounters with new approaches, whether they are new technologies like badges or new delivery models like MOOCs. We are each on our own learning curves and change curves, and we meet new ideas and solutions at different points in the hype cycle. That is a lot of variation. So when we meet new ideas, we can respond very differently. My first message is that every response is a real response.
It’s too easy to characterise people as pro- or anti- something. It’s too easy to present things as a debate for or against. But polarisation often masks the real questions, because we don’t hear them properly.
“The use of technology seems to divide people into strong pro- and anti-camps or perhaps utopian and dystopian perspectives” – Martin Weller, The Digital Scholar
Dialectics of Open and Free
There is usually a dialectic around open and free: free as in freedom, free as in beer. “Open as in door or open as in heart”: some courses are open as in door. You can walk in, you can listen for free. Others are open as in heart. You become part of a community, you are accepted and nurtured. I always add: open as in markets?
To follow on from free as in beer: a great example of branching is the trajectory of the open source movement. There were big debates over “gratis vs libre”, and these gave birth to the umbrella term Free/Libre Open Source Software – FLOSS. By enabling the practices of open source to branch off, by allowing the community to branch off, we saw a diffusion of innovation: towards profit-making business models in some areas, free culture models in others. It’s interesting how GitHub supports the whole spectrum.
This has also been the approach of the UK OER Programme. We have been quite pluralistic about OER, to let people find their own ways. We have certainly had tensions between the marketing/recruitment aspect and the open practice perspective. What’s important to note is that often it’s not just one model that comes to dominate.
Tipping points into the mainstream
We don’t always understand what brings about mainstreaming. We very rarely control it.
Consider a story from open standards: the rise and fall of RSS aggregation. Was it Netvibes or Pageflakes that made the difference? Or Google Reader? At what point did Twitter and Facebook start to dominate the aggregation game and overtake RSS? The OER programme gave each project the freedom to choose their platform. They didn’t choose a standard, they chose a platform. It’s often when open standards are baked into platforms that we see take-up without conscious decision-making.
I’m not sure we always notice: sometimes when mainstreaming happens we don’t recognise it. When did e-learning become part of the fabric of education?
Finally, change can take a lot longer than we hope. The 10 years since the Budapest Open Access Initiative can feel like geological time. And yet the OA movement has achieved so much. Perhaps we need a time-lapse photography approach to recognising the impact of the changes we started back then. So many more people understand OA now. So many more people care.
Change takes longer than you think
We are all unique in our encounters with new things. Polarisation often masks the real questions. There is often a dialectic around open and free. Often it’s not just one model that comes to dominate. Sometimes when mainstreaming happens we don’t recognise it. Change can take a lot longer than we hope.
The Knowledge Exchange, of which JISC is a member, has just released a report on the sustainability of OA services and infrastructure. The report identifies services that are considered critical, and what they are critical for. It then considers the perspectives of a range of stakeholders, and considers the value offered to them by these services. It is the first part of a series of reports, with the next one being undertaken now under the auspices of SPARC in the US, focusing on discrete business models and related issues (governance, etc).
The KE report is important for JISC, as we are working with the UK and wider repositories community to develop a repositories service infrastructure. You may know that this is based around RepositoryNet+ at Edina, includes an “innovation zone” at UKOLN, and strong relations with centres of excellence such as the Universities of Nottingham and Southampton, and MIMAS. The repositories infrastructure, including services such as Sherpa/RoMEO, needs to be sustainable and cost-effective, and the KE report helps us understand what that means in particular cases. In a time of constrained resources, and a strong policy direction in favour of open access, we will need to be creative in sustaining the repositories infrastructure. We need business planning perspectives to complement vital technical and academic expertise and understanding.
The same challenges, in a more commercial context, face Gold OA. The UK Open Access Implementation Group is already working hard with others on interoperability and service models, e.g. for an “intermediary” in Gold OA transactions.