E-Learning 3.0, Part 6 - Recognition



The use of the word 'Recognition' to talk about assessment and credentials is intended to suggest the process involved. The outcome of an assessment process is that we are able to determine whether or not a person has certain types of skills, experiences, or knowledge. It allows us, in short, to recognize that a person is, say, a qualified pilot.

 

The first question that arises is, "what, exactly, are we recognizing?" The most obvious answer is that we are recognizing learning success, and here the focus is on the learner. But in the process of online learning, we should be assessing the learning environment as well, and perhaps also individual resources and instructors or facilitators.

 

The second question is, "what counts as success?" The go-to answer for learner assessments is 'test scores', an answer that is as inadequate as it is easy. A more comprehensive consideration might be that we are assessing for competencies; this is the approach being taken by numerous learning providers today. In the current course we are taking a more pragmatic approach: task-completion.

 

In the assessment of learning environments (courses, for example) we often fall back on the four-level Kirkpatrick scale. This ranges from simple satisfaction with the course material, to knowledge retained, to application of the learning in the workplace, to improvements in organizational performance. A fifth level, return on investment, is sometimes also added. Yet a lot of MOOC evaluation has been process-oriented: number of participants and completion rates. The evaluation of instructors, meanwhile, might be based on anything from student grades to course evaluations to peer review.

 

The third question is, "who decides?" This is where the original MOOCs challenged the status quo. We said, "it's the course participant who decides what counts as success, and how success is to be measured." Course participants have a variety of choices: they can value the course completion certificate most highly, or the new skills they learned, or the community created or facilitated by the course, or the task they were able to accomplish using the learning resources.

 

In the wider community, we often define 'who decides' in terms of "stakeholders". The most obvious stakeholder is the funder, which may be a government, a company, or a private agency, and which will have certain goals and expectations related to outcomes (bringing us back, for example, to the Kirkpatrick scale). The course participants' parents or families may have expectations as well, especially for younger learners. And the community as a whole will want to weigh in with concerns ranging from the need for literacy, civics and public stewardship to standards of acceptable behaviour, and more.

 

The fourth question is, "how do we know?" This has two parts. First, how do we make the actual determination of success? Is it a public and social process or a personal and private process? Are evaluation standards expected to be fair and objective? What, precisely, do we measure (if anything) in order to make this determination? Second, how do we communicate that knowledge? Most education is accompanied by a certificate or degree. Recently we've begun using badges. We can present evidence of our learning and experience in a c.v. or resumé or display it with badges. Or we can present the artifacts themselves: blog posts, a personal portfolio, open source software, or contributions to other civic projects.

 

We can't answer all these questions in one module but we can look at what will give us the framework for an answer.

 

Two major technical approaches were considered: first, competencies and competency frameworks, and second, the badge infrastructure. It should be clear from the outset that neither of these will provide satisfactory answers to the questions we have. Nor is either of them exactly a future technology. But they give us a mechanism for considering what the answers might look like.

 

In recent years we have seen renewed focus on the idea of competencies and competency definitions. According to various definitions, competencies are "the knowledge, skills, abilities, and behaviors that contribute to…" something. The 'something' in question might be "individual and organizational performance," or it might be "successful learning, living and working," or it might be "highly effective performance within a particular job."

 

The terms defining 'competency' are also troublesome. The NIH site defines them as follows: "Knowledge is information developed or learned through experience, study or investigation.  Skill is the result of repeatedly applying knowledge or ability. Ability is an innate potential to perform mental and physical actions or tasks." None of these definitions should remain unchallenged.

 

One major competencies initiative comes out of the U.S. The Advanced Distributed Learning (ADL) initiative has launched the Competency and Skills System (CaSS) program as part of its wider Total Learning Architecture (TLA). The ADL approach is based on a mechanism for recording and tracking learning activities. The Experience API (xAPI), as it is called, defines how activity reports are created by various learning applications and stored in a learning record store (LRS) for analysis and evaluation (see this cmi5 page for a helpful diagram).
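
To make the mechanism concrete, here is a minimal sketch of a single xAPI statement being reported to an LRS. The 'actor, verb, object' structure follows the xAPI specification; the endpoint URL and credentials are hypothetical placeholders.

    # A minimal xAPI statement reported to a Learning Record Store (LRS).
    # The endpoint URL and credentials below are hypothetical placeholders.
    import requests

    statement = {
        "actor": {"mbox": "mailto:learner@example.com", "name": "Example Learner"},
        "verb": {"id": "http://adlnet.gov/expapi/verbs/completed",
                 "display": {"en-US": "completed"}},
        "object": {"id": "https://example.com/el30/module6/task",
                   "definition": {"name": {"en-US": "Create a badge"}}}
    }

    response = requests.post(
        "https://lrs.example.com/xapi/statements",      # hypothetical LRS endpoint
        json=statement,
        headers={"X-Experience-API-Version": "1.0.3"},  # version header required by the spec
        auth=("lrs_user", "lrs_password")               # hypothetical credentials
    )
    print(response.status_code, response.text)          # the LRS returns the statement id(s)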

 

Within this framework we can work with the numerous competency definition standards defined for different specializations, everything from Australia's National Competency Standards to the NIH's Nursing Competency standard (and hundreds more). These vary but most more or less comply with the IMS reusable competencies information model where a 'competency definition' is "an optional structured description that provides a more complete definition of the competency or educational objective, usually using attributes taken from a specific model of how a competency or educational objective should be structured or defined. Typically, such models define a competency or educational objective in terms of a 'statement, conditions, criteria', 'proficiency, criteria, indicators', 'standards, performance indicators, outcomes', 'abilities, basic skills, content, process', and similar sets of statements."
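
As an illustration (and only that), a structured competency definition along the 'statement, conditions, criteria' lines quoted above might look something like this. The field names and content are invented for the example, not drawn from any particular standard.

    # An illustrative competency definition, loosely following the
    # 'statement, conditions, criteria' pattern quoted above. Field
    # names and content are invented, not taken from a specific standard.
    competency = {
        "id": "https://example.com/competencies/administer-medication",
        "statement": "Safely administer prescribed medication to a patient",
        "conditions": "In a supervised clinical setting with standard equipment",
        "criteria": [
            "Verifies patient identity and prescription",
            "Calculates and measures the correct dosage",
            "Documents the administration in the patient record",
        ],
        "proficiency_levels": ["novice", "competent", "proficient"],
    }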

 

A badge can be thought of as a token given to a person in recognition of their having satisfied the criteria specified in a competency definition (we say 'can be' because in practice, badges are often much less rigorously defined). A badge API, such as that provided by Badgr, in essence describes the workflow for this process (Badgr has assumed responsibility for the now-retired Mozilla Backpack project).
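
Concretely, an Open Badges assertion is a small JSON document. The sketch below (written as a Python dict, with hypothetical URLs) shows the main fields of an Open Badges 2.0 assertion, including a salted hash of the recipient's email so the published record doesn't expose the identity directly.

    # An abbreviated Open Badges 2.0 assertion; the URLs are hypothetical.
    # 'badge' points to the badgeclass, which in turn names its issuer
    # and the criteria for earning the badge.
    import hashlib

    salt = "deadsea"  # arbitrary example salt
    identity = "sha256$" + hashlib.sha256(("learner@example.com" + salt).encode()).hexdigest()

    assertion = {
        "@context": "https://w3id.org/openbadges/v2",
        "type": "Assertion",
        "id": "https://badges.example.com/assertions/123",
        "recipient": {"type": "email", "hashed": True, "salt": salt, "identity": identity},
        "badge": "https://badges.example.com/badgeclasses/el30-task",
        "verification": {"type": "hosted"},
        "issuedOn": "2018-12-03T00:00:00Z",
        "evidence": "https://learner.example.com/blog/my-badge-task",
    }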

 

The core process contains these steps (a code sketch follows the list):

  • create an 'issuer', who is responsible for creating and awarding badges

  • create a badge (more formally, create a 'badgeclass'), defining the criteria for earning it

  • award a badge (more formally, create a badge 'assertion') to a person based (optionally) on evidence

  • display the badge on a website, in a c.v., or in a 'backpack'
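
As a rough sketch, the four steps map onto a badge-issuing API something like this. The endpoint paths and payload fields below are placeholders in the spirit of an API like Badgr's, not its actual interface.

    # The four-step workflow against a hypothetical badge-issuing API.
    # Paths and fields are placeholders, not Badgr's actual interface.
    import requests

    API = "https://badges.example.com/api"
    AUTH = {"Authorization": "Bearer <access-token>"}  # hypothetical token

    # 1. Create an issuer, responsible for creating and awarding badges.
    issuer = requests.post(f"{API}/issuers", headers=AUTH, json={
        "name": "E-Learning 3.0", "url": "https://example.com/el30"
    }).json()

    # 2. Create a badgeclass, defining the criteria for earning it.
    badgeclass = requests.post(f"{API}/badgeclasses", headers=AUTH, json={
        "issuer": issuer["id"],
        "name": "Module 6 Task",
        "criteria": "Create and award a badge using a badge API."
    }).json()

    # 3. Award the badge: create an assertion (like the one sketched
    #    earlier) tying it to a person, optionally with evidence.
    assertion = requests.post(f"{API}/assertions", headers=AUTH, json={
        "badgeclass": badgeclass["id"],
        "recipient": {"type": "email", "identity": "learner@example.com"},
        "evidence": "https://learner.example.com/blog/my-badge-task"
    }).json()

    # 4. Display: the assertion id is a URL the learner can embed in
    #    a web page, c.v. or backpack, where anyone can verify it.
    print(assertion["id"])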

 

Much of the work this week focused on badges, and the task was to create a badge. Badges were created back in 2012, as one person pointed out. Why focus on badges?

 

Badges are a proxy for what could be a recognition entity of any sort. We have already seen in xAPI the use of 'activities' as recognition entities. Badges, certificates and awards are recognition entities. So are endorsements, references, and plaudits. I have said in the past that the recognition entity of the future will be a job offer. (I can't find the term 'recognition entities' used elsewhere but I'm hoping the meaning is clear in this context). Already software is being developed to map directly from a person's online profile to job and work opportunities (this is how one of our projects, MicroMissions, works in the Government of Canada). These profiles today are unreliable and superficial, but with trustworthy data from distributed networks we will be able to much more accurately determine the skills - and potential - of every individual.

 

Recognition entities are and will continue to be valued by some course participants. So the next generation of learning technologies will embody some mechanism for generating them. In gRSShopper I've created a simple mechanism defining the major elements (i.e., the major media types) of such a system (a code sketch follows the list):

 

  • Modules (today's answer to 'learning objects'), which describe the knowledge or skills intended to be captured by these recognition entities. Modules are themselves complex entities. In the future I will probably add 'competencies' to the list of elements associated with a module, but I think we need much greater latitude here.

  • Tasks, which define the performance required to demonstrate comprehension (or 'understanding', or 'learning', or 'knowledge') of the module. A task is intended to produce an artifact or evidence of that comprehension.

  • Badges, which associate a person with the successful completion of a task belonging to some module.
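
A minimal sketch of how those three media types might relate. The field names are illustrative only, not gRSShopper's actual schema.

    # Illustrative sketch of the three media types and their relations.
    # Field names are examples only, not gRSShopper's actual schema.
    from dataclasses import dataclass, field

    @dataclass
    class Module:
        title: str
        description: str                  # the knowledge or skills addressed
        resources: list = field(default_factory=list)

    @dataclass
    class Task:
        module: Module                    # the module this task demonstrates
        instructions: str                 # completing it should produce an artifact

    @dataclass
    class Badge:
        task: Task                        # the task that was completed
        person: str                       # who completed it
        evidence: str = ""                # link to the artifact produced

    module = Module("Recognition", "Assessment and credentials in e-learning 3.0")
    task = Task(module, "Create a badge and link it from a blog post")
    badge = Badge(task, "learner@example.com", "https://learner.example.com/post")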

 

Inside a course (such as the E-Learning 3.0 MOOC) these may be defined by the course designer. However, in an open and distributed network, it is allowed and expected that modules, tasks and badges would be developed by multiple participants, including course participants. This is, for example, the intent of the DS106 assignment bank (and the purpose of an earlier task in el30, that of creating a task). But there is no reason why participants should not also create modules that help people complete tasks, or create their own badges that associate people with tasks in different ways.

 

How this is all applied in the next generation of learning will have a profound impact.

 

First, the nature of knowledge and skills is changing. I've tried to capture this in the way I've designed the modules in E-Learning 3.0. Instead of a document or narration or story, knowledge of the future is gradually migrating toward decentralized linked data models. A domain or discipline, for example, may be represented with a graph of associated concepts, actions and activities, background assumptions and environments. As I suggested earlier in the course, it doesn't make sense to depict 'knowledge' of the discipline as the remembering of these data points (which themselves will be constantly changing).
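
As a toy illustration (the nodes and edges here are invented), a fragment of such a domain graph might look like this:

    # A toy fragment of a discipline represented as a graph of linked
    # concepts, activities and roles rather than as a document.
    domain_graph = {
        "nodes": {
            "assessment":   {"type": "concept"},
            "badge":        {"type": "concept"},
            "create-badge": {"type": "activity"},
            "issuer":       {"type": "role"},
        },
        "edges": [
            ("badge", "is-a-form-of", "assessment"),
            ("create-badge", "produces", "badge"),
            ("issuer", "performs", "create-badge"),
        ],
    }
    # 'Knowing' the domain is not memorizing these triples (the graph
    # itself keeps changing) but being able to act within it.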

 

The key here is in how tasks (and therefore competencies) are defined. As the Random Access Learning article says, "Open Badges can be highly effective in capturing learning and linking new learning to changes in work practices. The key to this is the criteria you set and the expectations and guidance you give regarding the evidence you will require of the learner before awarding the badge."

 

This means, second, that the knowledge is in the doing. We associate tasks with elements of the domain model, but completion of the task itself is the objective, not acquisition of the data model. And we need to begin to think of these tasks more broadly. As a result, we need to think of the content of assessments more broadly. The traditional educational model is based on tests and assignments, grades, degrees and professional certifications. But with xAPI activity data we can begin tracking things like which resources a person read, who they spoke to, and what questions they asked - anything.
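
For instance, the xAPI query interface supports retrieving a learner's activity stream filtered by agent and verb. A minimal sketch, again with a hypothetical endpoint and credentials ('asked' is one of the verbs in the ADL registry):

    # Sketch: retrieving a learner's activity stream from an LRS,
    # filtered by agent and verb. Endpoint and credentials are
    # hypothetical; 'asked' is a verb from the ADL registry.
    import json
    import requests

    agent = {"mbox": "mailto:learner@example.com"}
    response = requests.get(
        "https://lrs.example.com/xapi/statements",
        params={"agent": json.dumps(agent),
                "verb": "http://adlnet.gov/expapi/verbs/asked"},
        headers={"X-Experience-API-Version": "1.0.3"},
        auth=("lrs_user", "lrs_password")
    )
    for s in response.json().get("statements", []):
        print(s["verb"]["id"], s["object"]["id"])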

 

Actual AI-based assessment of competent performance will be used to create competency models - these in turn can inform AI-based speech-raters, competency systems, and professional evaluation. The danger here is that an automated system might associate incidental characteristics (such as race or gender) with proficiency. Actual authentic tasks designed (or contributed) by humans may be needed to balance the possibility of biased algorithms.

 

One aspect of this is captured in the Random Access Blog post describing the use of simulations in learning. "in my experience of designing four online simulations using only HTML, video and audio material, it isn't really necessary. In the simulations we created, it was the authenticity of the situations and learner tasks (ie how close they felt to the reality of the job) which created the immersion. As Jan Herrington points out, 'the use of authentic tasks encourages and supports immersion in self-directed and independent learning.'"

 

We can also gather data from tasks and activities completed outside the school or program, looking at actual results and feedback from the workplace (or even social media). In corporate learning, as described by the Kirkpatrick levels, this is actually necessary and expected. Much of this assessment is performed manually (in, for example, 360 interviews) but as work environments shift from documents to data, much more assessment data will be collected automatically.

 

In the world of centralized platforms, such data collection would be risky and intrusive. It's hard to imagine that anyone would want all their activities tracked by a single central entity (though we see precursors in things like insurance adjustment, credit ratings and China's social credit system). These are all cases where a third party examines your data (whether you want them to or not) and makes decisions about your qualifications. This may be socially unavoidable, but it is not an attractive model for an education system.

 

In a distributed data network where people manage their own data, greater opportunities are afforded. There is no central repository, so the opportunities for third parties to mine data are limited. While no doubt people will continue to collect badges, degrees and certificates, these will play a much smaller role in how we comprehend how and whether a person has learned. The same data set may be analyzed in any number of different ways and can be used by learners as input to evaluation services that use zero-knowledge methods (which mask or encrypt identities) to calculate an individual's status against any number of defined (or implicit) employment or position requirements.
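
Full zero-knowledge proofs are considerably more elaborate, but the simplest form of identity masking, already allowed in Open Badges, gives the flavour: the published record stores only a salted hash of the recipient's email, and a verifier recomputes the hash to confirm a claim.

    # The simplest flavour of identity masking: a verifier confirms
    # 'this record belongs to this person' by recomputing a salted
    # hash, while the published record never exposes the identity.
    # Real zero-knowledge proofs are far more elaborate than this.
    import hashlib

    def hash_identity(email: str, salt: str) -> str:
        return "sha256$" + hashlib.sha256((email + salt).encode()).hexdigest()

    stored = hash_identity("learner@example.com", "deadsea")  # kept in the record

    def verify(claimed_email: str, salt: str, stored_hash: str) -> bool:
        return hash_identity(claimed_email, salt) == stored_hash

    print(verify("learner@example.com", "deadsea", stored))   # True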

 

An individual can (but should not be required to) display their learning accomplishments. As suggested above, this is currently done through badges, resumés or portfolios. In order to be useful, these records need to be trustworthy. Currently, trust in learning records is achieved through (expensive) centralized intermediaries. But new decentralized network technologies will enable individuals to manage their own credentials.

 

One simple example of this is the use of blockchain technology to encode recognition data such as badges. A badge consists of a (potentially signed) assertion by an issuer that a person completed a task. This assertion is data that can be added to a cryptographic data structure, so that it is not possible to alter the record once it has been entered. This enables the individual to point to a trusted assertion of their skill or competency (we'll talk more about how this trust is established in the next module).
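
A toy illustration of why such a record can't be silently altered: each entry's hash covers the previous entry's hash, so changing an earlier assertion invalidates everything after it. A real blockchain adds signatures, consensus and distribution on top of this idea.

    # Toy hash chain: each entry's hash covers the previous hash, so
    # altering an earlier badge assertion breaks every later hash.
    import hashlib
    import json

    def entry_hash(record: dict, prev_hash: str) -> str:
        payload = json.dumps(record, sort_keys=True) + prev_hash
        return hashlib.sha256(payload.encode()).hexdigest()

    chain, prev = [], "0" * 64  # arbitrary genesis value
    for record in [
        {"issuer": "el30", "recipient": "learner@example.com", "badge": "module-6-task"},
        {"issuer": "el30", "recipient": "learner@example.com", "badge": "course-badge"},
    ]:
        prev = entry_hash(record, prev)
        chain.append({"record": record, "hash": prev})

    print(chain[-1]["hash"])  # changing the first record changes this value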

 

These developments represent a signal change in the deployment of both learning analytics and artificial intelligence in education in the years to come. Today, such systems focus on process, are centrally and institutionally designed, and benefit teachers and employers far more than they do individual learners. Indeed, the only people not benefiting from learners' own data are the learners themselves. And that's what can and must change.

 
