Experimenting with the Learning Registry

This post sets out my reflections on the emerging conclusions from the JLeRN Experiment.

Applying a new approach to an old problem

Followers of technology trends will have noticed that the big themes of recent years include cloud storage, big data, analytics and activity streams, and social media. Technologists supporting education and research have been using these approaches in a range of ways, finding where they can help solve critical problems and meet unmet needs. Many of these explorations are investigative: they are about getting a grasp of how the technologies work, what the data looks like, where there are organisational or ethical issues that need to be addressed, and what skills we need to develop in order to fully exploit these emerging opportunities.

The Learning Registry has been described by Dan Rehak as “social networking for metadata” (about learning resources). Imagine pushing RSS feeds into the cloud containing all the URLs of learning resources you can imagine: from museums, from educational content providers, from libraries. This is about web-scale big data. Imagine that cloud also pulling in data about where those URLs have been shared: on Facebook, Twitter, blogs, mailing lists. If you’ve tried out services like topsy.com or bit.ly analytics you’ll know that finding out information about URL shares is possible and potentially interesting. Now imagine being able to interrogate that data, to see meta-trends, or to provide a widget next to your content item that pulls down the conversation being had around it. That is the vision of the Learning Registry. Anyone who has been involved with sharing learning materials will recognise the scenario on the left below.

Jokey sketch of the use case for the Learning Registry

Learning Registry Use Case, Amber Thomas, JISC 2012, CC BY

The Learning Registry is about applying the technologies described above to the problem on the left, by making it possible to mine the network for useful context to guide users.
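
To make that vision a little more concrete, here is a rough sketch (in Python, with invented values) of the kind of record such a cloud might hold about a single URL: who did what with it, where, and when. The actor/verb/object shape loosely follows the Activity Streams style that the Learning Registry paradata work drew on, but the field names and values here are illustrative rather than taken verbatim from any specification.

```python
# Illustrative only: a "share event" about one learning resource URL, in the
# loose actor/verb/object shape used for Learning Registry paradata.
# All names and values below are invented for the sake of the example.
share_event = {
    "activity": {
        "actor": {"objectType": "educator", "description": ["secondary", "physics"]},
        "verb": {
            "action": "shared",
            "context": {"objectType": "Twitter"},  # where the URL was shared
            "date": "2012-10-22",
        },
        "object": "http://example.ac.uk/resources/forces-and-motion",
    }
}

# Aggregate enough of these and you can start to answer questions like
# "where is this resource being talked about?" or "what do teachers share most?"
```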

The Experiment

To explore this potential, the JISC/HEA OER Programme has been funding an experiment to run a Learning Registry “node” in the UK. The growth of openly licensed content, and the political momentum to encourage the use of that content, have been a spur to funding this experiment, though it should be noted that the Learning Registry is not designed purely for open content.

See this useful overview for more detail of the project. It has been running on an open innovation model, sharing progress openly and working with interested people. Headed up by the excellent Sarah Currier, with input from Lorna Campbell and Phil Barker of JISC CETIS, it has in my view been a very effective experiment.

Towards the end of the work, on 22nd October 2012, Mimas hosted an expert meeting of the people who have been working with the Learning Registry: services and projects, contributors and consumers, developers and decision makers. It was a very rich meeting, with participants exchanging details of the ways they have used these approaches, and deep discussions of what we have found.

What follows is my analysis of some of the key issues we have uncovered in this experiment.

Networks and Nodes

The structure of the LR is a fairly flat hierarchy: it can expand infinitely to accommodate new nodes, and nodes can cluster. See the overview for a useful diagram.

What this structure means is that it can grow easily, and that it does not require a governance model with large overheads. The rules are the rules of the network rather than of a gate-keeping organisation. This is an attractive model where it is not clear where the business case lies.

One of the ways of running a node is to use an Amazon Web Services instance. That seems a nicely pure way of running a distributed network; however, university procurement frameworks have still got to adjust to the pricing mechanisms of the cloud. Perhaps in that respect we’re not ready to exploit cloud-based network services quite yet.

More generally, however, I think what we are seeing is a growth in the profile of services that are brokers and aggregators of web content. Not the hosts, or the presentation layers, but services in between, sometimes invisible. JISC has been supporting the development of these sorts of “middleware” or “machine services” from the early days: the terminology changes but the concept is not new to JISC. What does seem to be developing (and this is my perception) is an appetite for these intermediary services, and the skills to integrate them. Perhaps there is a readiness for a Learning Registry-ish service now.

Another key architectural characteristic is the reliance on APIs. This enables developers to create services to meet particular needs. Rather than a centralised model that collects feature requests from users, it allows a layer of skilled developers to create services around the APIs. The APIs have to be powerful to enable this, though, so getting that first layer of rich API functionality working is key. To that extent the central team has to be fast and responsive to keep up momentum.
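
As an illustration of what building on those APIs might look like, here is a minimal Python sketch of a service asking a node what it holds about a given resource URL. The node address is hypothetical, and the endpoint and parameter names follow my reading of the node API documentation of the time, so treat this as a sketch to check against the current spec rather than a working client.

```python
# A minimal sketch of a downstream service built over a node's public API.
# NODE is a hypothetical address; the "obtain" endpoint and its parameters
# are recalled from the Learning Registry node spec of the time and should
# be verified against current documentation.
import requests

NODE = "http://node.example.org"

def documents_about(resource_url):
    """Ask the node for the documents it holds about one resource URL."""
    response = requests.get(
        NODE + "/obtain",
        params={"request_ID": resource_url, "by_resource_ID": "true"},
    )
    response.raise_for_status()
    return response.json()

# A service layer might then count shares, extract keywords, or render a
# "conversation around this resource" widget from whatever comes back.
```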

However, the extent to which the LR is actually a network so far is unclear. There are a handful of nodes, but not enough that we can be sure we are seeing any network effects. The lack of growth in nodes may be because the barrier to setting up a node is perceived to be high; it may simply be too early to tell. But for the purposes of the JLeRN Experiment, my conclusion is that we have not yet seen the network effects that the LR promises.

 

Pushing the hardest problems out of sight?

It’s easy to fall into a trap of hoping that one technical system will meet everybody’s needs. The Learning Registry might not be THE answer, but there is something of value in the way that it provides some infrastructure to manage a very complex distributed problem.

However the question raised at the workshop by Sarah Currier in her introduction and again by David Kay in his closing reflections is: does it push some of the challenges out of scope, for someone else to solve? The challenges in question include:

  • Resource description and keywords
  • De-duplication
  • People identifiers
  • Data versioning

To take one problem area: resource description for learning materials. It is very hard to agree on any mandatory metadata beyond Dublin Core. This is partly because of the diversity of resource types and formats: a learning material can be anything, from a photo to a whole website. Within resource types it is possible to have a deeper vocabulary, for example for content-packaged resources that may have a nominal “time” or “level” attached. Likewise, different disciplinary areas not only have specialist vocabularies but also use content in different ways. It is technically possible to set useful mandatory metadata, BUT in practice it is rarely complied with. When we are talking about a diversity of content providers with different motivations, the carrots and sticks are pretty complicated. So users want rich resource description metadata, to aid search and selection, but it is rarely supplied.

The Learning Registry solution is to be agnostic about metadata: it just sucks it all into the big data cloud. It does not mandate particular fields. What it does is offer developers a huge dataset to prod, to model, to shape, and to pull out in whichever way the users want it. Developers can do anything they want with the data AS LONG AS THE DATA EXISTS. If there is not enough data to play with, or not enough consistency between resources, then it is hard to create meaningful services over the data.
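
A rough sketch of that agnostic publish step is below, under clearly stated assumptions: the envelope fields mirror the general shape of the resource-data format described in the Learning Registry documentation of the period, while the node URL and the metadata payload itself are invented for illustration. The point is that the payload can be whatever the provider has; the node does not enforce a schema.

```python
# Illustrative publish call: wrap whatever metadata exists in an envelope and
# push it to a node. The node URL and payload are invented; the envelope field
# names approximate the resource-data format of the time and may need checking.
import requests

envelope = {
    "doc_type": "resource_data",
    "resource_data_type": "metadata",
    "active": True,
    "resource_locator": "http://example.ac.uk/resources/forces-and-motion",
    "payload_placement": "inline",
    "resource_data": {
        # Whatever the provider happens to hold: Dublin Core, LOM, free text...
        "title": "Forces and motion",
        "description": "Interactive simulation for KS4 physics",
        "keywords": ["physics", "forces"],
    },
}

response = requests.post("http://node.example.org/publish",
                         json={"documents": [envelope]})
print(response.status_code, response.json())
```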

I said above that the Learning Registry “provides some infrastructure to manage a very complex distributed problem”. But on reflection does it manage that complexity? Or does it just make it manageable by pushing it out of scope? And if it doesn’t enable developers to build useful services for educators, is it successful?

Final Thoughts

These are a selection of the issues that the experiment is surfacing. There are certainly plenty of question marks over the effectiveness of this sort of approach. But I still feel sure that there are aspects of these technologies that we should be applying to meeting our needs in education and research. Certainly, this experiment has overlapped with work in JISC’s Activity Data programme, in our analytics work and in the area of cloud solutions. There is something interesting happening here, some glimpses of more elegant ways of sharing content, maybe even a step change.

 

 

Draft Book on OER Technologies: Into The Wild

Opportunity to preview and contribute to a book about the technical themes emerging from three years of the UK OER Programme

Extract from Lorna Campbell’s blogpost:

The OER technology directions book that Amber, Phil, Martin and I drafted during a book sprint at the end of August is now almost complete. We even have a title!

Technology for Open Educational Resources – Into The Wild. Reflections on three years of the UK OER Programmes

We’ve spent the last few weeks polishing, editing and amending the text, and we would now like to invite colleagues who have an interest in technology and digital infrastructure for open educational resources to review and comment on the open draft.

We’re looking for short commentaries and feedback, either on the whole book, or on individual chapters. These commentaries will form the final chapter of the book. We want to know what rings true and what doesn’t. Have we missed any important technical directions that you think should be included? What do you think the future technical directions are for OER?

Note that the focus of this book is as much on real-world current practice as on recommended or best practice. This book is not intended as a beginners’ guide or a technical manual; instead it is a synthesis of the key technical issues arising from three years of the UK OER Programmes. It is intended for people working with technology to support the creation, management, dissemination and tracking of open educational resources, and particularly those who design digital infrastructure and services at institutional and national level.

The chapters cover:

  • Defining OER
  • Resource management
  • Resource description
  • Licensing and attribution
  • SEO and discoverability
  • Tracking OERs
  • Paradata
  • Accessibility by Terry McAndrew, TechDis

UK OER projects from all phases of the Programme are encouraged to comment, and we would particularly welcome feedback from colleagues who are grounded in experience of designing and running OER services.

For more details, links and guidance on how to contribute, please see Lorna’s blogpost.

We wrote most of this as a booksprint: a writing retreat using collaborative authoring software. As well as continuing to write the book with the help of the community, we also have the fun of choosing cover designs and formats and print-on-demand options … it’s quite a learning curve and very rewarding. So now you all know what my family and friends will be getting for xmas 😉

Amber Thomas, JISC.

Licensing Data as Open Data

One of the findings that has emerged clearly from the UK OER Programme and from the UK Discovery work is that for a healthy content ecosystem, information about the content needs to be available to many different systems, services and users. Appropriately licensing the metadata and feeds is crucial to downstream discovery and use.

The OER IPR Support Project have developed this fabulous animation to introduce the importance of open data licensing in an engaging way.

It was developed out of the UK OER Programme but informed by the work of several other areas, including UK Discovery, Managing Research Data, the Strategic Content Alliance, and sharing XCRI course feeds. With thanks to the many people who helped with the storyboarding, scripting and feedback: particularly Phil Barker, Tony Hirst and Martin Hawksey.

You may remember the same OER IPR team produced the Turning a Resource into an Open Educational Resource animation (1,700+ hits and counting). The team is Web2Rights (Naomi Korn, Alex Dawson) and JISC Legal (Jason Miles-Campbell), and the animator is Luke McGowan. The whole animation is (c) HEFCE on behalf of JISC, and licensed under Creative Commons Attribution Share Alike 3.0.

We hope it will have wide usefulness and we very much welcome feedback.

Amber Thomas, JISC

Technology developers: local, connected and strategic

Further / Higher Education (F/HE) in the UK is in the fortunate position of having talented and experienced developers working in its organisations, driving both service development and applied research. Because of this, developers in F/HE are a particularly rich source of technical innovation for the sector.

At ALT-C Paul Walk and I ran a session exploring the concept of the Strategic Developer. Paul is the Deputy Director of UKOLN and oversees DevCSI, a JISC-funded initiative to focus on the development of technical talent within the UK FE/HE/Research sectors. An experienced manager of developers himself, Paul has been looking at the ways in which technology staff are situated in decision-making processes. Back in the spring, his colleague Mahendra Mahey and I ran a discussion session at Dev8eD about the role of developers in e-learning, and the ideas UKOLN are exploring clearly had some resonance. This session was a discussion of the issues around in-house technology expertise in a learning and teaching systems context. Our focus was on the hard technical skills end of the technology spectrum: it is about the coders, hackers and integrators, the people who build and develop software solutions.

Paul’s slides below describe the ideas around local developers, connected developers and strategic developers which underpin the DevCSI initiative.


 

We had a small but very experienced group of participants.

A-Z by surname: Suzanne Hardy (Newcastle), Martin Hawksey (JISC CETIS), Jo Matthews (UCL), Mark Stubbs (MMU), Jim Turner (JMU), Scott Wilson (JISC OSSWatch).

My take-home messages from our discussions are below, and I hope that other participants will add their thoughts.

Key Points

The cloud and the software-as-a-service (SaaS) model are often conflated in people’s minds with the outsourced model. In truth there are many models of SaaS, with greater and lesser levels of control for the client. This reminded me of one of my favourite talks at Dev8eD: Alex Iaconni on different sorts of hosting. Paul’s observation is that the push into the cloud is sometimes mistakenly associated with a reduction in the expertise required from the client. Cloud and SaaS just make some aspects of the system remote, not necessarily all; they certainly don’t always negate the need for in-house expertise.

That said, there are some trends where complexity moves up to the “above campus level”. The sorts of shared services that libraries use are changing the division of labour between technical experts within libraries and those working at a vendor/supplier level. Certainly in JISC’s work on repository and curation infrastructures we are seeing potential for abstracting some functionality (and its expert design) up to a network level. I am interested to see whether e-learning will see similar trends: with some specialisms focussed at the shared service level rather than locally.

In open source, we also see that pooling of technical expertise across employer boundaries. Certainly Moodle is a really good example of technical skills distributed between institutions, service providers and a developer community in its own right. The recent case study on MMU’s use of ULCC’s hosted Moodle solution is a good example of that. The point was also made that open source coders are connected developers out of necessity, and that brings the benefit of greater awareness of other software and approaches.

Thinking now about big contracts for outsourced services, we discussed how an institution needs in-house technical expertise to:

  • specify technical requirements to vendors
  • evaluate proposals
  • negotiate technical detail
  • oversee technical delivery
  • integrate the external service with local systems

and so on. In short, to act as an “intelligent client”/”intelligent customer” to ensure that institutions are getting value for money from their suppliers. The complexity of university technical infrastructures means that vendors who overpromise or underperform are hugely costly to universities. When we’re talking about huge contracts like that at London Met, the potential for inefficiency is huge and those suppliers must be carefully managed.

I think Paul’s diagram is worthy of reproducing here:

the role of the developer when the outlook is cloudy (Paul Walk, UKOLN)

Incidentally, I’m not suggesting here a crude “them and us” characterisation of suppliers and customers. I’m arguing that for IT contracts to deliver effective solutions there needs to be a meeting in the middle. I would argue that it is a good test of a vendor that they are happy to get “their guys” talking to “your guys” as soon as possible. Any supplier who is happy to be judged on results will want to get it right, and they would rather have frequent access to accurate technical information than to a contract manager with no mandate for decisions. I would love to hear from developers working for suppliers on whether that rings true, but in all my experience they need to be met halfway by the client on getting the technical implementation right.

We also discussed the way in which in-house technical expertise is managed, and on reflection we were describing some common variations, each of which combines to make institutional set-ups quite diverse:

  • institution size matters: a small institution may provide more space for a networked and strategic developer
  • seniority of developers matters: some will be more involved in procurement decisions described above
  • e-learning developers might be central or embedded in departments
  • VLEs treated as part of the enterprise suite or as specialist applications supported by the e-learning team
  • Whether IT/library is converged or not also impacts on where e-learning developers sit in the organisation
  • Patterns of home-grown systems/tools becoming integrated or discarded
  • In-house open source solutions mean in-house expertise, but externally hosted OSS has more variation
  • Mix of core staff and contracted staff (both long and short term)
  • Mix of external technical consultants and coders and the ways in which their knowledge is sustained
  • Extent of tactical use of internal and external project funding to enhance in-house technical capacity
  • Extent to which developers’ technical skills and approaches are actively nurtured
  • Extent to which developers’ soft skills are developed in areas like pitching, presenting, supporting users, business analysis, cost assessments etc

Even within our small group there was considerable variation. That certainly suggests that in sharing our emerging models of managing distributed and cloudy infrastructures, we need to clearly state our local contexts.

It was a thought-provoking discussion. It emphasised to me the value of JISC’s support for connecting developers, and the need to continue investing in in-house technology expertise.

 

Amber Thomas, JISC

JISC Guidance on eBooks

JISC Observatory have launched the draft version of a new report on eBooks in Education.

This report updates previous work researching the usage and adoption of ebooks within academic institutions and examines recent developments that are shaping how academic institutions can respond to growing interest in ebooks:

As ebooks become mainstream and the percentage of academic publications delivered as ebooks rises steadily, this report explains the importance of preparing for the increasing adoption and usage of ebooks in academic institutions. Specifically, this report: 1) introduces the historical and present context of ebooks; 2) reviews the basics of ebooks; 3) considers scenarios for ebook adoption and usage; 4) addresses current challenges; and 5) considers the future. This report also provides a glossary to help clarify key terms and a ‘References’ section listing works cited.

The preview version of this report is open for public comments from 27 September to 8 October 2012. A final version, taking into account feedback received, is scheduled for publication around the end of October.

See news item for more information and an opportunity to comment.

Alongside this report, JISC is developing further practical guidance. Building on JISC Collections’ ebook expertise, the Digital Monographs Study and JISC Digital Media expertise, later this year we will be releasing Digital Infrastructure Directions guidance on the Challenge of eBooks in Academic Institutions.

The guidance is being co-designed with experts in the sector using a project development wiki. The project covers the creation, curation and consumption of eBooks. There are many issues to unpick: the Bring Your Own Device (BYOD) trend, the role of libraries, changes in publishing and purchasing models, accessibility and access considerations, and so on. We will make this a helpful tool for institutional managers to navigate the choices signposted in the JISC Observatory report.

Some prototype guidance will be available soon and we will be seeking input on how to ensure that it meets the needs of decision-makers in institutions. The guidance authors will be watching the comments on the JISC Observatory report preview to ensure that they address the challenges surfaced.

In summary, please comment on the JISC Observatory Draft Report: Preparing for Effective Adoption and Use of eBooks in Education, and watch this space for more!

 

Amber Thomas, JISC

When ideals meet reality

At ALT-C in early September I ran a session with David Kernohan on Openness: learning from our history. The theme of the conference was “a confrontation with reality”, so it seemed fitting to explore the trajectories taken by various forms of openness. What follows is just a short thought piece that I contributed to the session about some of the patterns I have observed over the past decade or so.


 

Curves and Cycles

The first thing to say is that we are all different in our encounters with new approaches, whether they are new technologies like badges or new delivery models like MOOCs. We are each on our own learning curves and change curves, and we meet new ideas and solutions at different points in the hype cycle. That is a lot of variation. So when we meet new ideas, we can respond very differently. My first message is that every response is a real response.

Polarisation

It’s too easy to characterise people as pro- or anti- something. It’s too easy to present things as a debate for or against. But polarisation often masks the real questions, because we don’t hear them properly.

“The use of technology seems to divide people into strong pro- and anti-camps or perhaps utopian and dystopian perspectives” Martin Weller, The Digital Scholar [1]

Dialectics of Open and Free

There is usually a dialectic around open and free: free as in freedom, free as in beer [2]. “Open as in door or open as in heart”: some courses are open as in door. You can walk in, you can listen for free. Others are open as in heart. You become part of a community, you are accepted and nurtured [3]. I always add: open as in markets?

Branching

To follow on from free as in beer … a great example of branching is the trajectory of the open source movement. There were big debates over “gratis vs libre”, and that gave birth to the umbrella term Free AND Libre Open Source Software – FLOSS. By enabling the practices of open source to branch off, by allowing the community to branch off, we saw a diffusion of innovation: towards profit-making business models in some areas, free culture models in others. It’s interesting how GitHub supports the whole spectrum.

This has also been the approach of the UK OER Programme. We have been quite pluralistic about OER, to let people find their own ways. We have certainly had tensions between the marketing/recruitment aspect and the open practice perspective. What’s important to note is that often it’s not just one model that comes to dominate.

Tipping points into the mainstream

We don’t always understand what brings about mainstreaming. We very rarely control it.

Consider a story from open standards: the rise and fall of RSS aggregation. Was it Netvibes or Pageflakes that made the difference? Or Google Reader? At what point did Twitter and Facebook start to dominate the aggregation game and overtake RSS? The OER Programme gave each project the freedom to choose their platform. They didn’t choose a standard, they chose a platform. It’s often when open standards are baked in to platforms that we see take-up without conscious decision-making.

I’m not sure we always notice: sometimes when mainstreaming happens we don’t recognise it. When did e-learning become part of the fabric of education?

Pace

Finally, change can take a lot longer than we hope. The 10 years since the Budapest Open Access Initiative [4] can feel like geological time. And yet the OA movement has achieved so much. Perhaps we need some time-lapse photography approach to recognising the impact of changes we started back then. So many more people understand OA now. So many more people care.

Change takes longer than you think

Key Messages

We are all unique in our encounters with new things. Polarisation often masks the real questions. There is often a dialectic around open and free. Often it’s not just one model that comes to dominate. Sometimes when mainstreaming happens we don’t recognise it. Change can take a lot longer than we hope.

 

 

References:

[1] http://www.bloomsburyacademic.com/view/DigitalScholar_9781849666275/chapter-ba-9781849666275-chapter-013.xml

[2] http://www.wired.com/wired/archive/14.09/posts.html?pg=6

[3] http://followersoftheapocalyp.se/open-as-in-door-or-open-as-in-heart-mooc/

[4] http://en.wikipedia.org/wiki/Budapest_Open_Access_Initiative

Sustaining OA infrastructure

The Knowledge Exchange, of which JISC is a member, has just released a report on the sustainability of OA services and infrastructure. The report identifies services that are considered critical, and what they are critical for. It then considers the perspectives of a range of stakeholders, and considers the value offered to them by these services. It is the first part of a series of reports, with the next one being undertaken now under the auspices of SPARC in the US, focusing on discrete business models and related issues (governance, etc).
The KE report is important for JISC, as we are working with the UK and wider repositories community to develop a repositories service infrastructure. You may know that this is based around RepositoryNet+ at Edina, includes an “innovation zone” at UKOLN, and strong relations with centres of excellence such as the Universities of Nottingham and Southampton, and MIMAS. The repositories infrastructure, including services such as Sherpa/RoMEO, needs to be sustainable and cost-effective, and the KE report helps us understand what that means in particular cases. In a time of constrained resources, and a strong policy direction in favour of open access, we will need to be creative in sustaining the repositories infrastructure. We need business planning perspectives to complement vital technical and academic expertise and understanding.
The same challenges, in a more commercial context, face Gold OA. The UK Open Access Implementation Group is already working hard with others on interoperability and service models, eg for an “intermediary” in Gold OA transactions.

Redesigning Library Systems and Services

Who’d have thought that a redesigned library website could attract quite so much attention.

Yet the recent announcement by Stanford University Library that it has redesigned its website seems to have triggered a significant amount of interest.

stanford library website

At JISC, colleagues have been discussing it for a number of reasons, from the development and UX approach to the fact that it has been blogged throughout the redesign process on the library website.

The changes in the website also provoked an interesting blog post from Lorcan Dempsey reflecting on two consequences of the redesign, which Lorcan terms:

  • Space as a service, and;
  • Full library discovery.

What the Stanford website clearly highlights is that the traditional (siloed) library systems can no longer be conceived of as separate from the range of physical and virtual spaces.

The library web presence offers an opportunity to go beyond the binary opposition of online and physical, to one in which the library (website) itself becomes a navigation tool between a range of spaces, systems and services.

The distinction between online and physical becomes increasingly blurred – instead the focus is on appropriate services and resources wherever they may reside.

In some ways Lorcan’s second point, ‘full library discovery’, is an extension of these issues – the discovery experience itself flows beyond the traditional confines of the catalogue. It spills over into searching the website itself, guides, staff pages and so on.

The design of the site, with its central navigation banner, is also very mobile friendly – it is surely not long until the library web presence provides a Siri-like experience… is it?

These considerations are particularly interesting in terms of the current work JISC is undertaking on the future of library systems, in particular the ‘pathfinder’ projects that make up the programme and the range of system challenges they’re exploring, from shared LMS systems to patron-driven acquisition and shared collection management tools.

This work follows up some of the themes and motivations that emerged from the Library Management Systems programme a few years ago. That programme was an explicit attempt to address some of the issues library systems faced in terms of usability, user experience (UX), and integrating with the wider web and other institutional systems.

Indeed, a number of the projects in that programme explicitly explored the potential for library systems to crossover into more social online spaces, like Facebook, and collaborative academic spaces, such as VLEs.

The current Library Systems programme is trying to make sure it captures interesting developments as they occur, on the LMS Change blog, to inform the programme as a whole.

Stanford’s website redevelopment certainly poses a number of important questions for other libraries in how they design and deploy their services and systems.

For more background to the development there is an interesting series of posts on the redevelopment from Chris Bourg, a Librarian at Stanford University.

EduWiki Conference Reflections

Last week I attended a very engaging conference organised by Wikimedia, focused on the uses of Wikipedia in education: EduWiki.

I was on a panel discussing openness in HE, and I also gave a keynote on 21st Century Scholarship and the role of Wikipedia. I’d produced a visual (infographic/poster/prettypicture) which I blogged and have included below. The slides of my talk are online. A video of the talk is also available.

It was interesting to bring together Wikipedians with educators and I learnt a great deal about how Wikipedia is being used in HE settings, and what the active debates are in this area. Some of my take-home points:

  • An increasing number of countries are running targeted support for educational use of Wikipedia, and there were some great presentations about setting Wikipedia assignments.
  • A bit of digital literacy myth-busting: I had thought that Wikipedia wanted to be seen as an acceptable source to cite, but that’s not true. Perhaps this is old news (2006), but I wasn’t aware that Wikipedia was so clear that it is NOT a primary source and shouldn’t be cited in academic work.
  • That said, I realised as the conference went on that I often use Wikipedia articles as unique identifiers for concepts and things. I think of Wikipedia in quite a linked data way, creating URLs for things. So I do reference Wikipedia as a way of helping people find out more, and I especially do this for things like academic theories: I use it as an accessible reference point.
  • The role that Wikipedia has as an entry point for academic work is really interesting, I think. It makes it clear to me that those in the open access community should be helping Wikipedians ensure that wherever possible they cite an open access version of an article, monograph or textbook. Wikipedia editors are a very niche group but really key to ensuring that people outside academia can get access to academic outputs.
  • Wikipedia articles are written in a different style to academic essays, which are different again from journal articles. Even then, Wikipedia articles vary in their styles. Alannah Fitzgerald demonstrated the use of text-mining tools to illustrate the different styles. The discussion surfaced questions about the way that academics communicate with each other and with “the public”.
  • I didn’t realise there is a whole set of Wikimedia projects; I will be having a proper look at Wikidata and Wikibooks.
  • We heard some interesting angles on the nature of the Wikipedia community: how many contributors are highly educated, and how, as a community, it is still developing, with space for much more diversity in the contributor base, particularly more women. They are actively seeking engagement, and I love the phrase “pedants welcome”; I can think of many colleagues who fit that description admirably 😉

Overall, a great conference; congratulations to Martin Poulter and Daria Cybulska for organising it. I’m pleased to say that I have already arranged for me and my colleagues to get trained up in Wikipedia, and I already have my eye on some pages I want to contribute to.

As a parting shot, here’s the pretty picture I made (deftly dodging the infographic pedants who argue this isn’t one). (Update: Brian Kelly unpacks that issue in his post on posters and infographics!)

Visual from JISC Wikipedia talk – view in full at easel.ly