The open internet began as email lists and Usenet groups. It grew through blogs and personal websites, from sites as humble as my own to those as sprawling in their ambition as Wikipedia. And it thrived in the age of social networks, online classrooms, and massive open online courses. When we think of the internet, we usually think of it as 'open', and this openness is most often found in the form of open resources.
In today's internet, however, we see companies and institutions pushing back against openness. Proprietors of copyrighted content such as music, videos, articles and research publications have demanded that internet services block access to free copies of this content, and that they require payment for access to these resources. In addition, content owners and vendors have turned to advertising to make money. Both subscription-based and advertising-based models encouraged the growth of technology that herded users into content silos and that tracked and analyzed their behaviour.
What we are calling Web3 is to a large degree a reaction against these trends. As Tim Berners-Lee wrote recently, "for all the good we've achieved, the web has evolved into an engine of inequity and division; swayed by powerful forces who use it for their own agendas." His own project, Solid, is a tentative first step toward re-decentralizing the web.
The philosophy of 'open' that characterized the early internet was also reflected in the concept of open education. "Open education is a philosophy about the way people should produce, share, and build on knowledge. Proponents of open education believe everyone in the world should have access to high-quality educational experiences and resources, and they work to eliminate barriers to this goal." (opensource.com) This access is often supported by means of OER.
Open Educational Resources (OER) are teaching, learning and research materials that reside in the public domain or have been released under an open license that permits no-cost access, use, adaptation and redistribution by others with no or limited restrictions. There is a large base of literature and practice associated with OER. Numerous repositories containing OER have been developed.
As I stated in a recent talk on OER, the potential of open learning to have a worldwide impact is clear. Digital technology has transformed society, created entire industries, informed minds young and old, and given most of us an opportunity to connect with others in our communities or across oceans. Almost five billion people today use mobile phones, and many more have access if they want or need it. Almost equally significant is the worldwide access to millions of videos and other resources on sites like YouTube and Facebook. OER evolved into MOOCs, and these made open learning mainstream.
Yet the challenges faced by the open web are reflected in the challenges faced by OER. It didn't take long for major MOOC providers to create barriers, charging first for certification and then for access to content itself. In the world of OER the same thing happened when open textbook publisher Flat World Knowledge started charging for access. The temptation to monetize OER is always present for centralized services like OpenStax, Alison, Top Hat and Lumen Learning.
By dint of subscription fees, value-added services, or advertising and surveillance, these services must contemplate one business model after another, each based on enclosing open content and requiring some form of authentication for access. And as David Bollier says, the enclosure of open content is one of the greatest threats to the internet. "Enclosure is about dispossession. It privatizes and commodifies resources that belong to a community or to everyone, and dismantles a commons-based culture."
The practical application of OER in education today faces numerous challenges, a number of which were described by Sukaina Walji and Cheryl Hodgkinson-Williams.
- While OER are being created, we are seeing limited re-use, and almost no adaptation to create new or localized resources.
- Licensing remains a mystery to many people, and there isn't clarity about what license to use, how to license a resource, or even whether some licenses actually qualify as open.
- It is not easy to create and upload OER to repositories, nor is it easy to use OER in the context of a course or the creation of course materials.
- Models for support and sustainability of OER remain elusive, and projects continue to depend on uncertain sources such as institutional funding, foundations and national or international bodies.
Additional problems exist, for example:
- OER remain hard to discover; there isn't a good way to search for OER, and learning object metadata (LOM) is difficult to use and has not actually facilitated discovery.
- Individual OER often lack educational support materials such as quizzes, assignment banks, and the like.
- There is no mechanism for ensuring the quality of OER or the appropriateness of OER in a given educational context.
These are the sorts of problems that challenged the open web and led to the creation of enclosed content networks such as Blogger, Twitter, Facebook and LinkedIn. And while access policies vary from service to service, they all monetize content and resources. These trends are evident in the world of educational resources - indeed, the same companies are often involved with products like Google Classroom, Facebook Education, and LinkedIn Learning.
The proposal to re-decentralize the web is reflective of a trend that began almost as soon as the challenges posed by centralization became clear. The first challenge is traffic, which overloads a single server. A second issue is latency, or the lag created by accessing resources half a world away. Additionally, some resources may be subject to national policies creating the need to differentiate access. And finally, if the centralized source is unavailable for some reason, then access for the entire world is disrupted.
These challenges were addressed with Content Distribution Networks (CDNs). In essence, a CDN creates a local version of a website in different geographical regions. When a person in that region requests a resource, they are served a copy of the resource from the local server, rather than the original from a server much further away. This reduces traffic on the home server and makes access faster for the end user. Companies such as Cloudflare and Akamai now serve as much as half the content traffic on the internet (yet they are almost invisible to end-users).
The solution proposed by advocates of the distributed web is in many respects very similar. Instead of being stored on a single server, content is stored on multiple servers. And when a web user requests that content, it is served from the nearest server. The main difference is that, in the distributed web, these servers are each other's computers. These are called 'peers' and the system as a whole is called a 'peer-to-peer' (P2P) network. "Peers make a portion of their resources, such as processing power, disk storage or network bandwidth, directly available to other network participants, without the need for central coordination by servers or stable hosts. Peers are both suppliers and consumers of resources." (Wikipedia)
Another big difference lies in how these resources are addressed. On the traditional web and in CDNs, we address a resource by its location. The URL corresponds to an IP address (for example, http://www.downes.ca corresponds to 167.99.39.236) and to retrieve a resource, the browser sends a request to that address. In the distributed web, by contrast, we use content-based addressing. In essence, we search for resources based on what they are rather than where they are.
Here's how it works: the content of a resource (whether it's text, a web page, an image, whatever) is used as input to a hash algorithm. This is a cryptographic function that produces a scrambled string of characters - the hash - of the resource. Depending on the algorithm and the length of the hash produced, each hash is an essentially unique identifier for that resource. So instead of using a URL to request a resource, we use this unique identifier. Our peer sends a request to the closest peer, which either sends us the resource, or passes the request along to more peers. When we receive the content, we can check it by using the hash algorithm ourselves to ensure the hash of the content received is the same as the hash of the content we asked for.
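Here is a minimal sketch of that cycle in Python. The choice of SHA-256 and the dictionary standing in for the network of peers are illustrative assumptions; actual protocols such as dat use their own hash functions and peer discovery mechanisms.

```python
import hashlib

def content_address(data):
    """Derive an address for a resource by hashing the resource itself."""
    return hashlib.sha256(data).hexdigest()

# A toy 'network': one dictionary stands in for all the peers,
# mapping each hash to the content it identifies.
peer_store = {}

def publish(data):
    address = content_address(data)
    peer_store[address] = data
    return address

def fetch(address):
    data = peer_store[address]  # a real peer would forward unknown requests
    # Verify: re-hash what we received and compare it with what we asked for.
    assert content_address(data) == address, "content does not match its hash"
    return data

page = b"<html><body>Hello, distributed web!</body></html>"
address = publish(page)
print(address)         # the identifier, derived entirely from the content
print(fetch(address))  # the same bytes, verified against the address
```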
Peer-to-peer networking goes a long way to solve the issues created by centralization on the web in general and with OER in particular:
- They are cheaper - there's no need to set up a large internet server, since each person's computer shares part of the load.
- Resources cannot be enclosed - a peer-to-peer network resists attempts to monetize resources through subscription barriers and surveillance.
- They are resilient - a local peer-to-peer network can continue to operate even when access to the wider internet has been disabled for some reason.
- They are equitable - each member of a peer-to-peer network is at the same time a content consumer and a content creator.
Peer-to-peer networks also preserve the provenance and fidelity of content. Because the content is identified by means of a hash, the content cannot be changed without changing the hash, which means that a request for a specific hash will always result in receiving the same content. Additionally, content can be associated with other contents, or previous versions of the same content, by means of chaining. This is done by embedding the hash of the previous content into the text of the next content, and then hashing the next content.
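As a rough sketch of this chaining (again assuming SHA-256 and a simple record structure invented here for illustration; real systems define their own formats), each new version embeds the hash of the one before it:

```python
import hashlib
import json

def sha256(data):
    return hashlib.sha256(data).hexdigest()

def make_version(body, previous_hash):
    """Embed the previous version's hash in the record, then hash the record."""
    record = {"body": body, "previous": previous_hash}
    record_hash = sha256(json.dumps(record, sort_keys=True).encode())
    return {**record, "hash": record_hash}

v1 = make_version("First draft of the resource.", None)
v2 = make_version("Second draft, with corrections.", v1["hash"])

# v2's identity now depends on v1: altering v1 changes v1's hash,
# which no longer matches the 'previous' value embedded in v2.
print(v2["previous"] == v1["hash"])  # True
```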
The first file-sharing networks - services such as Napster and Gnutella - were peer-to-peer networks. Additionally, BitTorrent (see also uTorrent) - which is used to share large files - also operates as a peer-to-peer network (the large files are broken into pieces, and the pieces are hosted and shared by peers, and reassembled by the BitTorrent client).
One significant current project is called Dweb (for 'distributed web' or 'decentralized web'). There's a good recent introduction to the project from Mozilla. It's being called the next big step for the World Wide Web. The Dweb is based on the dat protocol, which is essentially a mechanism for finding and distributing content-addressable resources by their hash. You may see more and more resources with addresses like this in the future:
dat://502bdf152d00a35f9785f78d107b9037b5eca9354bcf593e7b4995f9be97a614/
This address is in fact the dat:// address for the first Content Addressable Resource for Education (CARE). If you access this resource using a peer-to-peer Dweb application you will find a set of pages containing the National Research Council's Vision and Principles statement (in both official languages, set to photos I took myself). CARE, along with the associated concepts of CARE Packages and CARENet, is a new type of Open Educational Resource.
In order to participate in the distributed web, it is necessary to have a peer application. This is an application that runs on your computer and communicates with other nodes in a P2P network to share resources. One such application is the Beaker Browser, which has versions available for Windows, Mac and Linux. The browser allows you to explore Dweb resources, 'clone' those resources locally, and create or edit new resources. Beaker manages Dweb functionality like creating hashes and chaining resources together.
Beaker also helps users with a dat name service. Hash addresses (like the one above) are long and difficult to remember. A name service allows us to associate a simple string with a hash address (in much the same way the Domain Name System (DNS) associates domain names with IP addresses). So an address in Beaker might look like this: dat://enoki.site/ For more Dweb resources, open a Beaker browser to this website: dat://taravancil.com/explore-the-p2p-web.md
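A rough sketch of how a name like this can be resolved, assuming the site publishes its dat key in a /.well-known/dat file (one of the resolution mechanisms used by dat-based browsers; Beaker normally handles this for you):

```python
from urllib.request import urlopen

def resolve_dat_name(hostname):
    """Look up the dat:// address published for a human-readable name.

    Assumes the site serves its key at /.well-known/dat, a convention
    used by dat-based browsers such as Beaker.
    """
    with urlopen(f"https://{hostname}/.well-known/dat") as response:
        first_line = response.read().decode().splitlines()[0].strip()
    if first_line.startswith("dat://"):
        return first_line
    raise ValueError(f"no dat key published for {hostname}")

# resolve_dat_name("enoki.site") would return something like
# dat://<64-character-hash>/
```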
The dat:// protocol is only one of a number of current projects based on creating a content-addressable distributed web. One of the other major initiatives is the blockchain. In the case of the blockchain, the resources in question are entries in financial ledgers. Each entry is given its own hash, and blocks of these entries are chained together by embedding the hash of the previous block into the text of the next block. This ensures that the contents of earlier entries cannot be changed without invalidating every subsequent block in the chain. Another initiative is called Git (with services based on the protocol like GitHub and GitLab). In the case of Git, the resources chained together are different versions or branches of a software development project.
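The tamper-evidence this provides can be seen in a toy ledger (the block format here is invented for illustration and is far simpler than any real blockchain):

```python
import hashlib

def block_hash(entry, previous_hash):
    """Hash a ledger entry together with the previous block's hash."""
    return hashlib.sha256((previous_hash + entry).encode()).hexdigest()

def build_chain(entries):
    chain, previous = [], ""
    for entry in entries:
        digest = block_hash(entry, previous)
        chain.append({"entry": entry, "previous": previous, "hash": digest})
        previous = digest
    return chain

def verify_chain(chain):
    previous = ""
    for block in chain:
        if (block["previous"] != previous
                or block["hash"] != block_hash(block["entry"], previous)):
            return False
        previous = block["hash"]
    return True

ledger = build_chain(["Alice pays Bob 5", "Bob pays Carol 2"])
print(verify_chain(ledger))                # True
ledger[0]["entry"] = "Alice pays Bob 500"  # tamper with history
print(verify_chain(ledger))                # False: later hashes no longer match
```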
An ambitious project to bring all these under a single umbrella is called the Interplanetary File System (IPFS), along with the associated project, InterPlanetary Linked Data (IPLD). "IPLD is the data model of the content-addressable web. It allows us to treat all hash-linked data structures as subsets of a unified information space, unifying all data models that link data with hashes as instances of IPLD." If you install and run an IPFS node on your own computer you can see this for yourself by using your browser and accessing http://localhost:5001/webui and viewing GitHub code, an Ethereum transaction and some XKCD comics all in the same application.
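The same local node also exposes an HTTP API, which makes it easy to experiment with content addressing from a script. A minimal sketch, assuming an IPFS daemon is running with its API on the default port 5001 (the file name and content here are placeholders):

```python
import requests  # third-party library: pip install requests

API = "http://127.0.0.1:5001/api/v0"  # the local IPFS node's HTTP API

# Add a small resource; the node returns its content identifier (CID).
response = requests.post(
    f"{API}/add",
    files={"file": ("hello.txt", b"an open educational resource")},
)
cid = response.json()["Hash"]
print(cid)

# Any node can now serve the same bytes when asked for that CID.
content = requests.post(f"{API}/cat", params={"arg": cid}).content
print(content)
```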
It is arguable that the future of open resources - including OER - looks like this. For a sense of what has already been done, you can open your Beaker Browser and access content libraries from the Internet Archive on a number of P2P networks, including IPFS, YJS (peer-to-peer shared editing), WebTorrent (like BitTorrent, but for the web) and GUN (a distributed graph database). It is only a matter of time before resources currently stored in OER repositories are added to the distributed web, creating a single searchable source of OER rather than the content silos in which they exist today.
CARE (or something like CARE) will be the new medium for free and open learning resources, essentially replacing OER as we know it today. The differences will be as follows:
- Because CARE are content-addressable, they are stored and accessed across the web as a whole, rather than in a specific location, and hence cannot be blocked or paywalled.
- As part of the distributed web, CARE are also associated with each other (for example, as links in a single site, or as newer versions of existing resources), creating what is essentially an Open Resource Graph (ORG).
- Accessed through applications such as Beaker Browser, CARE can be cloned and edited by any user to create and share new resources.
While we have seen more traditional contents, such as books, media and music, being distributed through IPFS and Dweb, it is important to underline that CARE consist not only of educational content, but of interactive applications and service interfaces as well. Tools you can explore using Beaker include Fritter, a peer-to-peer social networking application (dat://fritter.hashbase.io), Enoki, a P2P publishing system (dat://enoki.site/), Ridder, an RSS reader (dat://ridder-kodedninja.hashbase.io/), hypercast, a P2P broadcasting application, and more. Elsewhere, similar technologies are being deployed to support more complex content and services, for example, distributed applications (dApps), subscriptions and lists, contract networks, and even distributed organizations such as the DAO (Decentralized Autonomous Organization).
All of that said, the distributed web is very much in flux and practical applications will depend on the resolution of some significant issues. Among them are:
- Speed - though the distributed web can be very fast, in practice it often isn't, partially because of the time it takes to locate individual content-addressed resources, and partially because upload speeds can be very slow for average users. In response, many people look to the cloud to host Dweb or IPFS nodes.
- Ease of use - while it may seem that creating and sharing a web resource using Beaker or IPFS should be easy, in practice (as E-Learning 3.0 participants experienced first-hand) it can be daunting, especially since applications don't always work and guides are minimal.
- Finding resources - there isn't yet a good Dweb search engine. Additionally, resources can disappear when a host goes offline. This has led to the development of semi-centralized intermediaries such as Hashbase (which make money by offering always-online nodes).
- Acceptance - many institutions officially disapprove of peer-to-peer services and block .torrent and other P2P traffic; additionally, many P2P sites are associated with blockchain and may therefore also be blocked by institutional internet services.
- Appropriation for questionable and possibly illegal content and services - with no central point of origin, there is no means to control these types of content, which raises questions about both their legality and their vulnerability.
Education is about more than resources, of course, but these considerations about resources bring us back to the idea of open educational practices.
At a certain point, the infrastructure of repositories and licensing regimes and sustainability issues fades into the background. Then distributed web resources (including not only CARE but the wide range of things people on an open network share with each other) become the currency of an ongoing conversation between people. These resources become like words in this conversation, unfettered by questions of whether we are allowed to use them, and employed constructively to create new visions and new pathways into the future.
Though individual resources are unchanging, the network of these resources as a whole is in a constant state of flux as new versions are created, new links are forged, and new applications and services are built on top of them. Learning becomes less about retaining contents and skills, less about best practice and methodology, and even less about jobs and employment. It becomes the development of a new literacy predicated on life and living in a state of flux. Open pedagogy becomes less about being 'open' and more about how to be open.