E-Learning 3.0, Part 1: Data

The premise of this series of articles is that we are entering the third major phase of the World Wide Web, and that it will redefine online learning, as previous phases have. The first phase of the web, as it was originally developed in 1994, was based on the client-server model and focused on pages and files. The second phase, popularly called Web 2.0, created a web based on data and on interoperability between platforms. In what is now being called web3, the central role played by platforms is diminished in favour of direct interactions between peers; that is, a distributed web. The result is what we are calling E-Learning 3.0.

We’ve been seeing it take shape gradually since the 1990s: the shift in our understanding of content from documents to data. In some places – like libraries and classrooms and offices – the document may still prevail. That’s what you have if you’re reading a PDF e-book or filling out an Excel form or (as I am now) typing an article into MS-Word.

The first sign of the new paradigm, in the educational world at least, began with learning object metadata. When we created a learning resource, we created data about that resource, and this contained fields like ‘title’ and ‘typical age range’. From there it is a very small step to putting our content into the database as well, and completely converting our document into data. Most web-based content today comes from some sort of database.

By storing our content as data, we made it more flexible and more useful. One piece of data, such as an article, could be inserted into another piece of data, such as a template. We could insert data that changed from day to day, like the date, a stock price, or the weather. And as we gradually migrated to Web 2.0 we began inserting data from multiple locations into a single web page. This was all handled behind the scenes, by web platforms.
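The idea of inserting one piece of data into another can be sketched with a simple template function. The template syntax, field names and article content below are invented for illustration; they are not taken from any particular platform:

```javascript
// A minimal illustration of assembling a page from data rather than
// storing it as a finished document. The template and article are
// hypothetical examples.
const template = "<h1>{{title}}</h1><p>{{body}}</p><footer>{{date}}</footer>";

const article = {
  title: "E-Learning 3.0",
  body: "Content stored as data can be merged into any template.",
  date: new Date().toDateString() // dynamic data, changing from day to day
};

// Replace each {{field}} placeholder with the matching value from the data
function render(template, data) {
  return template.replace(/\{\{(\w+)\}\}/g, (_, field) => data[field] ?? "");
}

console.log(render(template, article));
```

The point of the sketch is that the same article data could just as easily be poured into a different template, or combined with data from other sources, without touching the content itself.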

From the perspective of the browser, everything looks pretty much the same. It doesn’t matter whether a web page was created from one data source or a dozen. The browser still visits a web page and still receives content from that single source to assemble and display to the viewer.

But what if we accessed the data directly? What if we used our browsers to tap into these databases directly to let us choose for ourselves how to organize, merge and display this data? That’s what’s beginning to happen today. Canada’s Open Government Portal, to name just one example, makes raw data directly available to the reading public. For example, you can read raw data from an ocean climate monitoring system made available as JavaScript Object Notation (JSON) data, here.
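To make concrete what "raw JSON data" means here, the snippet below parses a made-up sample of the kind of payload an ocean climate monitoring API might return. The field names and values are invented for illustration, not taken from the actual portal:

```javascript
// Hypothetical sample of raw JSON from an ocean monitoring system.
// The station name, field names and readings are invented here.
const raw = `{
  "station": "Halifax Harbour",
  "readings": [
    { "time": "2018-06-01T00:00:00Z", "tempC": 8.4 },
    { "time": "2018-06-01T01:00:00Z", "tempC": 8.1 }
  ]
}`;

// JSON.parse turns the raw text into a structure code can work with
const data = JSON.parse(raw);
const temps = data.readings.map(r => r.tempC);
const average = temps.reduce((a, b) => a + b, 0) / temps.length;

console.log(`${data.station}: average ${average.toFixed(2)} °C`);
// → Halifax Harbour: average 8.25 °C
```

Once the data is parsed, how it is organized, merged and displayed is entirely up to the reader's own code, not the publisher's.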

This data might not look very useful on its own, but your browser can use JavaScript code to store the data and present it however you want. Here is an example from Studio Ghibli. The JavaScript on this web page accesses data (here is the raw JSON data) from an application programming interface (API) and displays it as an easily readable web page. More complex JavaScript applications can manipulate the data and, depending on the API, update or add to the source data.
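A sketch of what such a page does: take records from an API and turn them into readable HTML. The film objects below mimic the shape of the Studio Ghibli API's responses but are typed in here rather than fetched, and the endpoint shown in the trailing comment is an assumption that may have moved:

```javascript
// Records shaped like the Studio Ghibli API's film objects,
// hard-coded here so the sketch runs without a network connection.
const films = [
  { title: "Castle in the Sky", director: "Hayao Miyazaki", release_date: "1986" },
  { title: "Grave of the Fireflies", director: "Isao Takahata", release_date: "1988" }
];

// Build one HTML list item per record
function toHtml(films) {
  return "<ul>" + films.map(f =>
    `<li><strong>${f.title}</strong> (${f.release_date}), dir. ${f.director}</li>`
  ).join("") + "</ul>";
}

console.log(toHtml(films));

// In a browser the same data would come from the API itself, e.g.:
// fetch("https://ghibliapi.vercel.app/films")        // endpoint is an assumption
//   .then(res => res.json())
//   .then(films => { document.body.innerHTML = toHtml(films); });
```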

Taking this one step further is the idea that data like this can be linked. This was the original idea behind the Semantic Web. Different types of data are associated with each other to create a web of data; for example, a book is linked to an author, who is linked to another book as well, and the books may be linked to a publisher, and to a bookstore, and so on. For example, the Online Computer Library Center (OCLC) linked data initiative is looking at Wikipedia as a linked data source.
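The book-and-author example can be sketched as a toy web of linked data, where each resource has a URI-like identifier and refers to other resources by their identifiers. All of the identifiers, titles and names below are invented for illustration:

```javascript
// A toy linked-data graph: nodes keyed by identifier, with links
// expressed as references to other nodes' identifiers.
const graph = {
  "book:neuromancer": { type: "Book", title: "Neuromancer", author: "person:gibson", publisher: "org:ace" },
  "book:count-zero":  { type: "Book", title: "Count Zero",  author: "person:gibson", publisher: "org:ace" },
  "person:gibson":    { type: "Person", name: "William Gibson" },
  "org:ace":          { type: "Publisher", name: "Ace Books" }
};

// Follow the links: find other books by the same author as a given book
function otherBooksBySameAuthor(graph, bookId) {
  const author = graph[bookId].author;
  return Object.entries(graph)
    .filter(([id, node]) => node.type === "Book" && node.author === author && id !== bookId)
    .map(([, node]) => node.title);
}

console.log(otherBooksBySameAuthor(graph, "book:neuromancer")); // → [ 'Count Zero' ]
```

In real Semantic Web data the identifiers are full web URIs and the links carry formal vocabularies, but the principle is the same: answers come from following links across the graph rather than from any single document.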

There are hundreds of linked data sets. The Linked Data Cloud, for example, lists 1,224 datasets with 16,113 links (as of June 2018). This presentation from the European Data Portal traces the evolution from documents to linked data over the last ten years. This programmers’ guide outlines some of the major principles (for example: using web URIs to link to data sets and resources).

Thus far, linked data has been the domain of large enterprises such as governments, institutions and universities. This is beginning to change. The dependence on centralized sources for linked data has led to the rise of platforms like Facebook and Twitter, with the result that people no longer feel in control of their own data and, even worse, have difficulty accessing and sharing it. It has also become increasingly difficult to read this data without being tracked and without being forced to view advertisements and unwelcome messages.

So what we’re seeing now is a trend toward decentralized linked data. This is the idea that each person can manage their own data, storing it wherever they want and using it whenever they like. Most notably, Tim Berners-Lee is working on this idea in his Social Linked Data (Solid) project. Other projects, such as IndieWeb, are also looking at ways to enable people to create and curate their own linked data.

What does this do to education? Right now, most applications of data to education focus on student management and assessment. For example, the Learning for Action Framework recommends using data to track student progress toward an identifiable goal. When data is linked, working with it becomes much more than measurement. “When you tie that information to other information you have, your information becomes knowledge - for example, when you connect what you know about a student’s performance with what you know about the instruction provided to them,” they write.
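The "information becomes knowledge" idea amounts to joining two data sets on a shared field. The sketch below links student results with the instruction each student received; all names, methods and scores are invented for illustration:

```javascript
// Two hypothetical data sets sharing a "student" field.
const results = [
  { student: "A", score: 62 },
  { student: "B", score: 91 }
];
const instruction = [
  { student: "A", method: "lecture" },
  { student: "B", method: "project-based" }
];

// Join the sets on the shared field, tying each result to the
// instruction behind it
const linked = results.map(r => ({
  ...r,
  method: instruction.find(i => i.student === r.student)?.method
}));

console.log(linked);
// → [ { student: 'A', score: 62, method: 'lecture' },
//     { student: 'B', score: 91, method: 'project-based' } ]
```

Separately, each set only measures; linked, the combination can suggest which instruction is associated with which outcomes.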

More importantly, learning with data isn’t the same as working with documents. Documents are pre-packaged and pre-curated organizations of information, suitable mostly for consumption. Learning in dynamic distributed data networks becomes a process of creating and curating our own data. We will think of our learning resources as something we create, own and share, and not just as rentals from college textbook or online publishers. We are just beginning to tap into this, for example, with projects like the Big Data Challenge for high school students.

Learning also becomes a process of being able to comprehend data: to look at representations of data through dashboards and visualizations, and to identify patterns and draw conclusions. It’s interactive, immersive and engaging, a process of learning how to perceive and comprehend rather than to decode and store. It’s hard to find contemporary examples of this outside critical thinking, but it’s what Sesame Street was trying to do when it asked children to find “three of these things that belong together.”

When we create and share our own data, we are looking at and learning about these associations from the other side. By linking our own data and creating our own patterns, we can become more sensitive to them in the world. We see and manage our connections with each other, with the world of things, and with our own learning records and accomplishments with, say, a personal learning record.