News aggregator
Terry Anderson sent me this item (thanks Terry). It identifies completion rates as a problem for MOOCs, suggests that a response to that is to increase student interaction, and then proposes a Knowledge Stock Exchange (KSX) as a type of gamification to achieve this result. On this model the 'stocks' being traded are ideas related to the course; students can purchase a share of an idea in order to edit it and improve its value, and then a period of judging creates the payoff. I have mixed feelings about the model; I've never seen the appeal of stock markets, virtual or otherwise. They just seem like legalized gambling, which to me is a poor foundation on which to base an economy (or incentive system generally). Still, the authors reported much improved retention. Web: [Direct Link] [This Post]
I had a brief surge of excitement when I saw this article (21 page PDF), thinking that it addressed connectivism and the new developments of web3, but I was disappointed to find it referred back to the definition of e-learning 3.0 used back in 2011 by people like Steve Wheeler in this presentation to talk about the semantic web applied to e-learning. So what we get here is a relatively straightforward application of semantically-based analytics applied to e-learning. It presents this in the context of an application and framework called i-SoLearn, and reports some experimentation and results. I would rate the results as 'mixed' - students who were disinterested in social networks didn't really benefit, and students who had negative sentiments found the technology difficult. Web: [Direct Link] [This Post]
This isn't receiving as much publicity (at least in my communities) as the recent UNESCO motion on OERs, but it could have an even greater importance to the future of education. UNESCO adopted a policy "that establishes universal principles for recognition of studies and degrees and will give signatory states an obligation to recognise studies or qualifications from outside of their region." Here is the final draft text of the agreement. According to the UNESCO page, "the Convention aims to facilitate academic mobility, improve quality of higher education institutions and enhance international cooperation in higher education." Web: [Direct Link] [This Post]
This article combines two parts mythology with one part fact to come to a conclusion that is accidentally a good one. The purpose of the mythology (in my view) is to appeal to readers who already believe these things. The myths? Carol Dweck's comment that, "When teachers are judging [students], [they] will sabotage the teacher by not trying." Also, Jessica Lahey's assertion that "parents and teachers have become adversaries." Add to these the assertion that "teacher grades, for example, are subject to grade inflation." You can paint a picture of who the article is appealing to here.
But the conclusion - that the acts of teaching and evaluation should be separated - is a good one. Not for any of the reasons listed here, but because it allows students to learn in any way they wish. The danger of this model is that it could create a commercial evaluation industry. And this article (whose author is an advisor for one such company) is an example of the spin being produced to support this idea. So we need to be careful about how we separate the functions of teaching and evaluation. There needs to be an ethical standard here that the commercial sector has heretofore failed to demonstrate that it can meet. Web: [Direct Link] [This Post]
This article looks at consciousness from the perspective of “Rethinking Consciousness: A Scientific Theory of Subjective Experience.” Neuroscientist and psychologist Michael Graziano writes that "consciousness is simply a mental illusion, a simplified interface that humans evolved as a survival strategy in order to model the processes of the brain." I think that the very idea of saying that consciousness is an illusion suggests that there is something else which it actually is, an account that I find difficult to sustain. My own view is that consciousness is real, that it is experience, and nothing more or less. Web: [Direct Link] [This Post]
It's hard to find a topic that is more timely or more entangled in current events. This set of slides (second source here) from Doug Belshaw makes clear the need to get on top of the various problems of surveillance capitalism and misinformation, and recommends an approach to digital literacies as a response. It's hard to argue against that, but it's also hard to say exactly what should be done. Professional trolls (as reported in this Rolling Stone article) are expert at dodging your cognitive defenses. "The professionals know you catch more flies with honey. They don’t go to social media looking for a fight; they go looking for new best friends. And they have found them. Disinformation operations aren’t typically fake news or outright lies. Disinformation is most often simply spin." So, what to do then? It's no panacea, but this article in the Verge is better than most. It describes how to check facts, look at sources, and weigh the evidence. It's still not perfect - but it's better than just trusting the authorities. Web: [Direct Link] [This Post]
When Hollywood film director Steven Spielberg acquired the film rights, he was forced to use the more anodyne title Schindler's List. This was because the Ark suffix had already been claimed by Noah, or more likely, by his publicist. At first Spielberg didn't really want to make the film, and offered it to several other directors, and also to a taxi driver by mistake, but eventually he agreed to direct the movie when he realised that one Oskar could be multiplied into several more. At the time, Hollywood couldn't get enough of Nazis running about in jackboots and any film that featured them tended to be a big hit. These included Spielberg's earlier movie Raiders of the Lost Ark (1981 - confusingly not the same ark as Noah's ark), Das Boot (1981 - a story about a lost jackboot) and ultimately the brutally violent The Sound of Music (1965), which featured a volatile mix of Nazis and nuns (there are misleading statements here that need to be corrected - Editor).
Schindler didn't really create a list, nor did he build an ark, mainly because he was too busy running his enamelware factory and raking in the money. If Schindler had created a real list - an alternative one to the one we all know he didn't create - what would be contained within that list? Would it be a list of all his Swiss bank accounts? Or a list of the number of times he restrained himself from hitting Gestapo officials when they called around for a drink? Or later in life, a list of all the book and film deals he was offered?
I think it would be a shopping list, because Oskar had plenty of money to splash about, owning a large factory (and also a huge advance from MGM studios for a film of his life that was never made, that he spent pretty quickly). You could imagine him nipping off down to his local Tesco Express (more likely to be Lidl or Aldi - Editor) to get his Drambuie, Camembert and a couple of baguettes, along with a copy of Völkischer Beobachter, to keep up appearances as a loyal Nazi party member.
You see, Oskar Schindler was a real anomaly. Although he was a fully paid-up, card-carrying member of the Nazi Party, which was responsible for the wholesale slaughter of millions of innocent people, he was also considered by the Jewish people to be righteous among the nations, and several gave him financial aid after the war when he became bankrupt. Although Schindler was in the Party, he was not of the Party. He wasn't really a party person at all, not even at Christmas time. Schindler was actually a positive deviant. He was an oxymoron - a Nazi who was also a philanthropist (I suspect he wasn't a Nazi at all - Editor).
Schindler did things differently to all those around him, and would be considered a deviant if he had been caught out, but he deviated in a positive way. Some might call him a 'mole' but he didn't infiltrate the Nazi Party. He worked his way from the inside out to protect his workers from the fascists.
We can all learn a lot from Oskar Schindler, not least about standing up to the mob, swimming against the tide, and making great shopping lists. In spite of his many faults, he still tried to do some good in the midst of an appalling situation. I want to be a good positive deviant a little like Schindler. I want to make a positive difference in any organisation I work within, because it's the right thing to do. In fact, it's on my bucket list.
Next time: 26: Victoria's horrible secret
Previous posts in the #TwistedTropes series
1. Pavlov's drooling dog
2. Chekhov's smoking gun
3. Occam's bloody razor
4. Schrödinger’s undead cat
5. Pandora's closed box
6. Frankenstein's well-meaning monster
7. Thor's lost hammer
8. Noah's character ark
9. Hobson's multiple choice
10. Fibonacci's annoying sequence
11. Plato's empty cave
12. Dante's lukewarm inferno
13. Sod's unlucky law
14. Aladdin's miserly lamp
15. Batman's tangled cape
16. Cupid's bent arrow
17. Fermat's dodgy last theorem
18. Moore's obsolete law
19. Lucifer's idiotic fall
20. Adam's poisoned apple
21. Hadrian's busted wall
22. Montezuma's terrible revenge
23. Dale's shameful cone
24. Maslow's awkward hierarchy
Schindler's shopping list by Steve Wheeler was written in Plymouth, England and is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License. Posted by Steve Wheeler from Learning with e's
Here are Steve Hargadon's responses to a questionnaire he submitted in order to speak at a TEDx event about the "learning revolution." It's pretty funny, but at the same time, somewhat sobering. "The culture we've created with schooling has allowed many of us to feel OK about our positions of privilege, to no longer really believe in the value of the common man or woman, and to look away when millions are killed in overseas engagements that have nothing to do with making the world safe for democracy." Something to think about. Web: [Direct Link] [This Post]
How do Online Learning Networks Emerge? A Review Study of Self-Organizing Network Effects in the Field of Networked Learning
This is a very nice study looking at the literature on self-organizing networks in online and networked learning. Starting from the (apparently correct) presumption that there is no comprehensive understanding of the subject, they look at literature on major forms of self-organization (preferential attachment, reciprocity, and transitivity) and ask "how these network effects can be enhanced or frustrated by the design elements of different networked learning environments." The authors observe that in the literature "self-organizing or endogenous network effects are never explicitly described as self-organizing or emergent." But they are present, and the authors "found factors related to the people, the physical environment and the task of the learning networks (illustrated)."
As an aside, I found myself thinking of the different systems of modal logic - S4, S5, K - while reading this article, as these are also defined by such things as reciprocity and transitivity (of a sort). Relevant? I don't know. I offer only the idea that there may be a relation between systems of modal logic and self-organizing networks; smarter minds than mine will have to determine whether such a relation exists (the closest I could find was connectionist modal reasoning, p. 273 here, but this isn't really what I'm suggesting here; my idea would make self-organization inherently modal, without the need for symbolic representation - that would be something). Web: [Direct Link] [This Post]
The point being made in this article is that "Conversations on surveillance tend to be problematic because much of the canon encourages readers to believe in two false premises—that all surveillance is equal and that surveillance is inescapable." An ethics of care, argues the author, requires that this perspective be challenged. "Failing to address the harms of surveillance in our communities only exacerbates their effects by encouraging their continuation and intensification. Modern surveillance tools make it challenging, if not impossible, to pinpoint the characteristic(s) against which the tool has been programmed to discriminate." Web: [Direct Link] [This Post]
This is a terrific article digging into an insight that is notoriously difficult to grasp. Here it is: "The meaning is underdetermined by the data." Text alone cannot tell us the author's intent. Data alone does not contain its own interpretation. It is only from a point of view that we can extract meaning from data. In this article, Michael Feldstein defines that point of view as 'pedagogical intent', which is one way of doing it. But of course, there are many other players in the system than just the teachers. Most notably, there are the students, who will (no matter what the pedagogical intent) perceive, recognize, and interpret the data in their own way. Feldstein draws a number of other insights out of this one basic fact - for example, that "interoperability without intent creates chaos," and "the 'semantic web' is all about intent." There's more - but this is enough for now. Web: [Direct Link] [This Post]
There has been a lot of coverage of the idea that blockchain will disrupt education. Martin Weller (for some reason) picks a paywalled article as his foil to make the case that blockchain will not be disruptive. "How will blockchain do it better? How will it overcome the problems that over a decade of eportfolio work has not quite managed to address?" Fair enough, but other (more open) work has addressed those questions. In the current post, D'Arcy Norman picks up on Weller's critique, finding the right path forward. "Will it disrupt higher ed? I don't think so. Will it transform how some services are run? Absolutely." Web: [Direct Link] [This Post]
The conference itself, held at the expansive Hotel Intercontinental in Budapester Straße was attended by thousands of delegates from around the world, and it was a delight to meet up and spend some time chatting with old friends from decades of working in educational technology, including Julian Stodd, Mike Sharples, Paul Kirschner, Donald Clark, Don Taylor, Michelle Selinger, Tom Wambeke, Ulf-Daniel Ehlers, Heike Philp, Gilly Salmon, Shannon Tipton, Laura Overton, Armin Hopp, Rhona Sharp, Maren Deepwell, Martin Hawksey, Jef Staes, Christian Glahn, Morten Paulsen, Charles Jennings, Bryan Alexander, Eric Sheninger, Alejandro Armellini, Paul Bacsich, Audrey Watters, Rolf Rheinhardt and Mirjam Neelen. It was also great to hook up again with the EADL crew including Jens Greefe, John Trasler, Ellen Gunning, Chris Wolstenholme and Tony Horsfield, and of course, the chair for my session - the wonderful Ildiko Mazar.
The great thing about this being in Berlin at this time of the year - other than the crisp weather, gluhwein, great food, Christmas markets, lights and shopping - is that OEB Global represents all the sectors of learning, and each has specific sessions and strands that focus on primary, secondary, tertiary and professional education as well as unified sessions where the keynotes and spotlight speeches connect every idea together. I met several colleagues for the first time, even though I had probably 'known' and interacted with them on Twitter for years. These included Benjamin Doxtdator, Mar Camacho, Andreas Keller, Paul Hearn, Neil Lasher, Tarkan Gürbüz, Laura Czerniewicz and Inge de Waard. If I have left anyone out, I apologise, but OEB Global is a very large conference. Below is the recording of my Spotlight presentation, complete with slides.
The abstract: In this talk we will explore the issues and challenges of learning within the workplace. What are the benefits and limitations? How can we achieve direct learning transfer, while maintaining a healthy work/life balance? We will explore micro-learning, personal and personalised learning as current methods and will discuss some key case studies about how large organisations successfully support and promote L&D. We will also examine the role new technology can play in promoting effective L&D and critically evaluate practices that are rooted in traditional learning/training environments.
OEB Global will return to Berlin between December 2-4, 2020.
Is there time for learning in the workplace? by Steve Wheeler was written in Plymouth, England and is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License. Posted by Steve Wheeler from Learning with e's
I've been asked by several folks to write up some version of the talk I gave at the recent IMS learning analytics summit. The focus was on how, going forward, interoperability standards will need to capture pedagogical intent if we are going to develop meaningful learning analytics.
This isn't a word-for-word transcription of that talk, but it does capture the gist. The subtitles and literary quotes are mostly from the original presentation. Many thanks to Rob Abel and Cary Brown for inviting me and giving me the opportunity to speak to the IMS community about this important inflection point.

Interoperability is communication
We are not here to curse the darkness, but to light the candle that can guide us through that darkness to a safe and sane future.
- John F. Kennedy
Ten or fifteen years ago, when we talked about what we wanted from EdTech software interoperability, most of the time the things we wanted seemed like they ought to be simple. Mostly, we just wanted to transfer student roster and grade information from one system to another. When we got a little more ambitious, we asked for single sign-on and the ability to put a little window of one application into another one. This was foundational interoperability. It wasn't "disruptive" or "revolutionary" or otherwise life-altering, but it was important.
My first close encounter with an IMS interoperability effort was when I worked at Oracle on the PeopleSoft Campus Solutions team. We just wanted to be able to send a course roster to the LMS and have the LMS send a final grade for each student back. Going in, I couldn't understand why that was hard or why it hadn't already been done. All I knew was that (a) it was a problem that IMS had tried and failed to solve at least once before, since we were working on the revision of an existing standard, and (b) a consequence of that failure was that IT professionals on campuses everywhere had to cook up their own, often time-intensive, duct-tape-and-chewing-gum solutions to getting roster data into the LMS. As for grades, it made no sense to me that faculty had to copy grades from their electronic LMS grade book and manually re-enter them into their electronic SIS grade book. It seemed weird that this was a thing.
It turned out that there was a translation problem. Registrars think about classes very differently than instructors and students do. For a registrar, if a student is taking a "statistics for psychology majors" course, and that course can be taken for credit in either the psychology or the math department, then "statistics for psychology" is actually two completely separate courses. And the decision of which of the two courses a student registers for may make the difference between that student meeting the requirements for graduation or not. On the other hand, the students and instructor experience "statistics for psychology" as one course meeting in one place on one schedule with one syllabus and one group of people.
Good software reflects and supports the needs and expectations of its users. Accordingly, SIS software represented "statistics for psychology" as two courses, while LMS software wanted to create one course space for it. In order to have roster information show up as instructors and students expect it in the LMS and then return final grades as the registrar expects them in the SIS, the standards committee had to recognize that there was a translation problem and design a specification that could function as a two-way translator.
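The translation problem described above can be sketched in code. This is purely illustrative: the class and field names are invented for this post, and the real IMS specifications are far more detailed. The point is that the mapping between two registrar records and one course space has to be remembered in both directions, so grades can be routed back to the record the student actually enrolled in.

```python
# Hypothetical sketch of the SIS<->LMS translation problem.
# All names here are invented for illustration, not from any IMS spec.
from dataclasses import dataclass

@dataclass(frozen=True)
class SisSection:
    sis_id: str        # the registrar's record, one per cross-listed department
    department: str
    title: str

def to_lms_course(sections: list) -> dict:
    """Merge cross-listed SIS sections into one LMS course space,
    remembering which registrar records it came from."""
    return {
        "lms_title": sections[0].title,
        "source_sections": {s.sis_id: s.department for s in sections},
    }

def route_grade(lms_course: dict, student_sis_id: str, section_id: str, grade: str) -> tuple:
    """The 'two-way translation' step: send a final grade back to the
    specific registrar record the student enrolled in."""
    assert section_id in lms_course["source_sections"]
    return (section_id, student_sis_id, grade)

psych = SisSection("PSY-340-01", "Psychology", "Statistics for Psychology")
math_ = SisSection("MTH-340-01", "Mathematics", "Statistics for Psychology")
course = to_lms_course([psych, math_])
print(route_grade(course, "student-42", "PSY-340-01", "A-"))
```

The design choice worth noticing is that the LMS-side object keeps the registrar's two-course view alive inside it; throw that mapping away and the grade can no longer find its way home.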
This was an important lesson for me. Even seemingly simple interoperability challenges can be complicated because they often aren't about the software so much as they are about the people who use the software. Interoperability design is at least partly a liberal art.
Now, today, we would have a different possible way of solving that particular interoperability problem than the one we came up with over a decade ago. We could take a large data set of roster information exported from the SIS, both before and after the IT professionals massaged it for import into the LMS, and aim a machine learning algorithm at it. We then could use that algorithm as a translator. Could we solve such an interoperability problem this way? I think that we probably could. I would have been a weaker product manager had we done it that way, because I wouldn't have gone through the learning experience that resulted from the conversations we had to develop the specification. As a general principle, I think we need to be wary of machine learning applications in which the machines are the only ones doing the learning. That said, we could have probably solved such a problem this way and might have been able to do it in a lot less time than it took for the humans to work it out.
I will argue that today's EdTech interoperability challenges are different. That if we want to design interoperability for the purposes of insight into the teaching and learning process, then we cannot simply use clever algorithms to magically draw insights from the data, like a dehumidifier extracting water from thin air. Because the water isn't there to be extracted. The insights we seek will not be anywhere in the data unless we make a conscious effort to put them there through design of our applications. In order to get real teaching and learning insights, we need to understand the intent of the students. And in order to understand that, we need insight into the learning design. We need to understand pedagogical intent.
That new need, in turn, will require new approaches in interoperability standards-making. As hard as the challenges of the last decade have been, the challenges of the next one are much harder. They will require different people at the table having different conversations.

Data and communication are not the same
How many goodly creatures are there here!
How beauteous mankind is! O brave new world
That has such people in’t!
The quote above is from The Tempest. Here's the scene: Miranda, the speaker, is a young woman who has lived her entire life on an island with nobody but her father and a strange creature who she may think of as a brother, a friend, or a pet. One day, a ship becomes grounded on the shore of the island. And out of it comes, literally, a handsome prince, followed by a collection of strange (and presumably virile) sailors. It is this sight that prompts Miranda's exclamation.
As with much of Shakespeare, there are multiple possible interpretations of her words, at least one of which is off-color. Miranda could be commenting on the hunka hunka manhood walking toward her.
"How beauteous mankind is!"
Or. She could be commenting on how her entire world has just shifted on its axis. Until that moment, she knew of only two other people in all of existence, each of whom she had known her entire life and with each of whom she had a relationship that she understood so well that she took it for granted. Suddenly, there was literally a whole world of possible people and possible relationships that she had never considered before that moment.
"O brave new world / That has such people in't"
So what is on Miranda's mind when she speaks these lines? Is it lust? Wonder? Some combination of the two? Something else?
The text alone cannot tell us. The meaning is underdetermined by the data. Only with the metadata supplied by the actor (or the reader) can we arrive at a useful interpretation. That generative ambiguity is one of the aspects of Shakespeare's work that makes it art.
But Miranda is a fictional character. There is no fact of the matter about what she is thinking. When we are trying to understand the mental state of a real-life human learner, then making up our own answer because the data are not dispositive is not OK. As educators, we have a moral responsibility to understand a real-life Miranda having a real-life learning experience so that we can support her on her journey.

Intention matters in education
With regard to moral rules, the child submits more or less completely in intention to the rules laid down for him, but these, remaining, as it were, external to the subject's conscience, do not really transform his conduct.
- Jean Piaget
The challenge that we face as educators is that learning, which happens completely inside the heads of the learners, is invisible. We cannot observe it directly. Accordingly, there are no direct constructs that represent it in the data. This isn't a data science problem. It's an education problem. The learning that is or isn't happening in the students' heads is invisible even in a face-to-face classroom. And the indirect traces we see of it are often highly ambiguous. Did the student correctly solve the physics problem because she understands the forces involved? Because she memorized a formula and recognized a situation in which it should be applied? Because she guessed right? The instructor can't know the answer to this question unless she has designed a series of assessments that can disambiguate the student's internal mental state.
In turn, if we want to find traces of the student's learning (or lack thereof) in the data, we must understand the instructor's pedagogical intent that motivates her learning design. What competency is the assessment question that the student answered incorrectly intended to assess? Is the question intended to be a formative assessment? Or summative? If it's formative, is it a pre-test, where the instructor is trying to discover what the student knows before the lesson begins? Is it a check for understanding? A learn-by-doing exercise? Or maybe something that's a little more complex to define because it's embedded in a simulation? The answers to these questions can radically change the meaning we assign to a student's incorrect answer to the assessment question. We can't fully and confidently interpret what her answer means in terms of her learning progress without understanding the pedagogical intent of the assessment design.
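One way to make this concrete is to imagine how differently an analytics system should treat the same incorrect answer depending on the intent metadata attached to the item. The field names and categories below are invented for illustration, not taken from any real standard:

```python
# Illustrative only: the same wrong answer means different things
# depending on the pedagogical intent recorded for the item.
# "intent" and its values are hypothetical metadata, not a real spec.
def interpret_incorrect_answer(item_metadata: dict) -> str:
    intent = item_metadata.get("intent")
    if intent == "pre-test":
        return "expected gap: instructor is measuring prior knowledge"
    if intent == "formative":
        return "learning signal: offer feedback or another practice item"
    if intent == "summative":
        return "competency not yet demonstrated: affects the grade"
    return "uninterpretable: no pedagogical intent recorded"

item = {"competency": "forces-and-motion", "intent": "formative"}
print(interpret_incorrect_answer(item))
print(interpret_incorrect_answer({}))  # no intent recorded: analytics can't say much
```

The last branch is the important one: without the intent metadata, the system has data but no defensible interpretation of it.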
But it's very easy to pretend that we understand what the students' answers mean. I could have chosen any one of many Shakespeare quotes to open this section, but the one I picked happens to be the very one from which Aldous Huxley derived the title of his dystopian novel Brave New World. In that story, intent was flattened through drugs, peer pressure, and conditioning. It was reduced to a small set of possible reactions that were useful in running the machine of society. Miranda's words appear in the book in a bitterly ironic fashion from the mouth of the character John, a "savage" who has grown up outside of societal conditioning.
We can easily develop "analytics" that tell us whether students consistently answer assessment questions correctly. And we can pretend that "correct answer analytics" are equivalent to "learning analytics." But they are not. If our educational technology is going to enable rich and authentic vision of learning rather than a dystopian reductivist parody of it, then our learning analytics must capture the nuances of pedagogical intent rather than flattening it.
This is hard.

Some more examples
The lesson assessment example is easy enough to understand. (My post series on content as infrastructure explores it in more detail.) But the more one looks around at the full range of analytics that are truly aimed at supporting student success, the clearer the lesson about capturing intent becomes.
Take, for example, summer melt. I recently hosted an entire hour-long Standard of Proof webinar on this topic. (You should watch it. It's good.) Here's a slide from that webinar which illustrates the obstacles that first-generation students face in getting from their college acceptance letter to the first day of class:
The Summer Melt Maze (credit: Lindsay Page)
Each of the text labels represents an obstacle that first-generation students in particular may struggle to overcome, because their circumstances are complex (e.g., no parent to provide a parental signature), because they don't have parents or guardians who have the training and knowledge to help them, and/or because they're seventeen-year-old kids. I don't know about you, but I don't think I had to navigate a single one of these obstacles without parental help, and I don't know if I could or would have done so without it.
Now think about developing a software solution to identify the specific barrier a student is struggling with and provide appropriate help. Could you solve the problem simply by sucking in enough data and running a machine learning algorithm? I very much doubt it. And even if you could, think about how invasive you would have to be to do so. Think about the privacy implications. The cure would be worse than the disease.
Georgia State University took a different approach: they invited students to share their intent. They use a chatbot, a conversational interface through which students can ask directly about the problems they are encountering. Once students specify their intent, machine learning can be used to further disambiguate it. For example, the software can figure out that a student who writes "I have no money" may be asking for help obtaining financial aid. But it can do so only because there is a conversational interface, and because the software behind that interface has been programmed to anticipate and respond to a certain range of questions that students might bring to it. Through a combination of the interface layer, the data layer, and the usage context, the educational intent was encoded into the system.
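A toy sketch can show the pattern, if not the sophistication, of such a system: the software anticipates a fixed range of student intents and disambiguates free text against them. Georgia State's actual chatbot is far more capable; the categories and keywords below are entirely invented.

```python
# Toy intent disambiguation: anticipate a range of student intents,
# then match free text against them. Categories and keywords are
# invented for illustration, not from any real advising system.
INTENT_KEYWORDS = {
    "financial_aid": {"money", "aid", "fafsa", "tuition", "afford"},
    "parental_signature": {"signature", "parent", "guardian", "sign"},
    "registration": {"register", "enroll", "classes", "schedule"},
}

def classify_intent(message: str) -> str:
    """Pick the anticipated intent with the most keyword overlap;
    fall back to 'unknown' (i.e., hand off to a human) if none match."""
    words = set(message.lower().split())
    scores = {intent: len(words & kws) for intent, kws in INTENT_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

print(classify_intent("I have no money for tuition"))  # -> financial_aid
```

Notice that the "intelligence" here lives in the hand-built table of anticipated intents, not in the matching logic; that table is where the educational intent gets encoded into the system.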
Here's another example that postdates my talk. ACT recently released a paper on detecting non-cognitive education-relevant factors like grit and curiosity through LMS activity data logs. This is a really interesting study that I hope to write more about in a separate post in the near future, but for now, I want to focus on how labor-intensive it was to conduct. First author John Whitmer, formerly of Blackboard, is one of the people in the learning analytics community who I turn to first when I need an expert to help me understand the nuances. He's top-drawer, and he's particularly good at squeezing blood from a stone in terms of drawing credible and useful insights from LMS data.
Here's what he and his colleagues had to do in order to draw blood from this particular stone:
The online interaction features were generated from the LMS clickstream data. After manual inspection, we determined that the action field alone (e.g., “opened”) was insufficient to address our research questions and needed to be joined with the label of the item that the action was taken in reference to, which was a complex pairing. For example, course item values in the LMS data include “Week 12: Electrochemistry” or “CHEM 102 Practice Exam 4B,” which were easily interpretable from the course syllabus, while others (e.g., “2/27 CL” or “18.7 RQ”) required confirmation from the instructor. Hence, we created broader activity categories for these activity events in the LMS data using the course syllabus with confirmation from the instructor which resulted in 110 unique activity events in the LMS data that were recoded to a total of 21 activity categories as described in Table 4.
First, they had to look at the syllabi. With human eyeballs. Then they had to interview the instructors. You know, humans having conversations with other humans. Then the humans who interviewed the other humans in order to annotate the syllabi that they looked at with their human eyeballs labeled the items being accessed in the LMS with metadata labels that encoded the pedagogical intent of the instructors. Only after they did all that human work of understanding and encoding pedagogical intent could they usefully apply machine learning algorithms to identify patterns of intentional behavior by the students.
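The recoding step the paper describes amounts to a human-built lookup table. A minimal sketch, using the item labels quoted from the study (the category names here are my illustrative guesses, not the paper's actual Table 4):

```python
# Human-built lookup table mapping cryptic LMS item labels to
# pedagogically meaningful categories. Building this table required
# reading syllabi and interviewing instructors; the categories below
# are hypothetical stand-ins for the study's 21 activity categories.
LABEL_TO_CATEGORY = {
    "Week 12: Electrochemistry": "lecture_content",
    "CHEM 102 Practice Exam 4B": "practice_exam",
    "2/27 CL": "class_lecture",  # meaning confirmed with the instructor
    "18.7 RQ": "reading_quiz",   # meaning confirmed with the instructor
}

def recode(event):
    """Join a raw clickstream action with its item's category,
    since the action alone (e.g., 'opened') is uninterpretable."""
    action, item_label = event
    category = LABEL_TO_CATEGORY.get(item_label, "uncategorized")
    return (action, category)

print(recode(("opened", "18.7 RQ")))  # ('opened', 'reading_quiz')
```

Every entry in that dictionary represents human interpretive work; no algorithm operating on the clickstream alone could have produced it.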
LMSs are often promoted as being "pedagogically neutral." (And no, I don't believe that Moodle is any different.) Another way of putting this is that they do not encode pedagogical intent. This means it is devilishly hard to get pedagogically meaningful learning analytics data out of them without additional encoding work of one kind or another.

Interoperability without intent creates chaos
If Jorge Luis Borges' Library of Babel could have existed in reality, it would have been something like the Long Room of Trinity College.
—Christopher de Hamel
I want to underscore the point that simply collecting more data and writing more clever algorithms will not help us find a way out of the problem in that last example. It is a problem of epistemic closure. Data and knowledge are not the same, and more data do not necessarily unlock more knowledge.
"The Library of Babel" is a short story by the great Jorge Luis Borges. (It's only nine pages long. You should read it.) The story describes a world that perfectly captures the nature of the problem we face:
The universe (which others call the Library) is composed of an indefinite and perhaps infinite number of hexagonal galleries, with vast air shafts between, surrounded by very low railings. From any of the hexagons one can see, interminably, the upper and lower floors. The distribution of the galleries is invariable. Twenty shelves, five long shelves per side, cover all the sides except two; their height, which is the distance from floor to ceiling, scarcely exceeds that of a normal bookcase. One of the free sides leads to a narrow hallway which opens onto another gallery, identical to the first and to all the rest. To the left and right of the hallway there are two very small closets. In the first, one may sleep standing up; in the other, satisfy one's fecal necessities. Also through here passes a spiral stairway, which sinks abysmally and soars upwards to remote distances....
There are five shelves for each of the hexagon's walls; each shelf contains thirty-five books of uniform format; each book is of four hundred and ten pages; each page, of forty lines, each line, of some eighty letters which are black in color. There are also letters on the spine of each book; these letters do not indicate or prefigure what the pages will say....
The orthographical symbols are twenty-five in number. This finding made it possible, three hundred years ago, to formulate a general theory of the Library and solve satisfactorily the problem which no conjecture had deciphered: the formless and chaotic nature of almost all the books. One which my father saw in a hexagon on circuit fifteen ninety-four was made up of the letters MCV, perversely repeated from the first line to the last. Another (very much consulted in this area) is a mere labyrinth of letters, but the next-to-last page says Oh time thy pyramids. This much is already known: for every sensible line of straightforward statement, there are leagues of senseless cacophonies, verbal jumbles and incoherences....
Five hundred years ago, the chief of an upper hexagon came upon a book as confusing as the others, but which had nearly two pages of homogeneous lines. He showed his find to a wandering decoder who told him the lines were written in Portuguese; others said they were Yiddish. Within a century, the language was established: a Samoyedic Lithuanian dialect of Guarani, with classical Arabian inflections. The content was also deciphered: some notions of combinative analysis, illustrated with examples of variations with unlimited repetition. These examples made it possible for a librarian of genius to discover the fundamental law of the Library. This thinker observed that all the books, no matter how diverse they might be, are made up of the same elements: the space, the period, the comma, the twenty-two letters of the alphabet. He also alleged a fact which travelers have confirmed: In the vast Library there are no two identical books. From these two incontrovertible premises he deduced that the Library is total and that its shelves register all the possible combinations of the twenty-odd orthographical symbols (a number which, though extremely vast, is not infinite): Everything: the minutely detailed history of the future, the archangels' autobiographies, the faithful catalogues of the Library, thousands and thousands of false catalogues, the demonstration of the fallacy of those catalogues, the demonstration of the fallacy of the true catalogue, the Gnostic gospel of Basilides, the commentary on that gospel, the commentary on the commentary on that gospel, the true story of your death, the translation of every book in all languages, the interpolations of every book in all books.
—Jorge Luis Borges
The rest of the story is a rumination on how humans might make sense of this infinite series of rooms—this "data lake," in modern parlance—in the absence of any information about the intent of its creator.
Since the Library contains every possible book with that number of pages, lines and characters, somewhere in all these rooms must exist a book that explains exactly what it all means. So it's a data search problem, right?
Wrong. Because there also exist books that are extremely similar but differ in minor but critical details. And books that argue why the book with the Truth is actually false. For that matter, there are books that contain many of the exact same words in the exact same order but are written in languages in which the words mean different things. How can one tell which account is the Truth?
If we are going to make progress toward educationally useful analytics, then we must ruthlessly expunge all traces of magical thinking about data. There are fundamental limits to what the data can tell us. Even systems that are designed with pedagogical intent do not necessarily encode it in a way that is useful for interoperable analytics. In some cases, the intent may be encoded only at the user interface layer. A courseware authoring platform may never label an assessment as "formative" or "summative" in the data because the intended distinction is obvious to the users. In other cases, the data may be encoded in an idiosyncratic manner that does not map well to other systems (either of software or of thought). In still other cases, the system may be designed badly, so that the data incorrectly or misleadingly reflect the pedagogical intent. It would be relatively easy to create a data lake of Babel which we could explore infinitely in a fruitless search for meaning.
That way lies madness.
If we want useful educational analytics, then we cannot simply worship the data and the algorithms. The humans must do some of the learning.

The "semantic web" is all about intent
I love you. You are the object of my affection and the object of my sentence.
—Mignon Fogarty
One of the triggers for my being invited to speak at IMS about learning analytics in the first place was a previous post I wrote which was (partly) on how the structure of Caliper, which is borrowed from the structure of the semantic web, supports new sorts of interoperability conversations. Since you can read that blog post, I won't repeat the argument in detail here. But the gist is that we both have to and can boil down chains of inference that combine pedagogical intent into simple human language that educators can understand and articulate for themselves. The triple structure of the semantic web—a simple three-word sentence with a subject, a verb, and a direct object—is designed to enable non-technical humans to string thoughts and inferences together in ways that enable more technical humans to translate those inference patterns into data structures and interoperability requirements.
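The triple structure described above can be sketched in a few lines of code. This is an illustration of the subject-verb-object shape that Caliper borrows from the semantic web, not the actual Caliper API; the class, event names, and inference are hypothetical.

```python
from dataclasses import dataclass

# A semantic-web-style triple: subject, verb (predicate), direct object.
# Caliper events follow this actor-action-object shape; everything
# below is an illustrative sketch, not real Caliper data structures.
@dataclass(frozen=True)
class Triple:
    subject: str
    verb: str
    obj: str

    def sentence(self) -> str:
        """Render the triple as the simple three-part sentence
        a non-technical educator could read and write."""
        return f"{self.subject} {self.verb} {self.obj}"

# An educator states observations as plain sentences...
chain = [
    Triple("Student", "submitted", "PracticeExam4B"),
    Triple("PracticeExam4B", "assesses", "Electrochemistry"),
]

# ...which a technical reader can chain into an inference: the
# submission is evidence about the student's work on Electrochemistry.
for t in chain:
    print(t.sentence())
```

The payoff is that each link in the inference chain is legible to both the educator (as a sentence) and the engineer (as a data structure), which is what makes it a shared vocabulary rather than just another schema.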
Right now, IMS Caliper adopters are largely using this vernacular as just one more IT tool for old-school data centralization. So now they use a "lake" instead of a "warehouse." It's still a centralized and IT-specialized mindset which is not suited for thinking about making meaning from multiple applications, never mind talking with other humans about interpreting the intent of users working across multiple applications.
This is the problem that must be solved over the next decade to make real progress on educational analytics. It is at least partly a liberal arts problem. And it will be at least a decade's worth of work, though we don't have to wait that long to see early results.

You get to choose the world we live in
How many goodly creatures are there here!
How beauteous mankind is! O brave new world
That has such people in’t!
So which will it be? A brave new world in which we experience continuous wonder at how beauteous mankind is, or one in which "learning" is defined by the ability to correctly answer a series of questions and earn some digital badges? The difference between these possible futures is not whether we embrace or reject technology, or data. It's whether we embrace or ignore the complexity of human learning and the reality that we must make a conscious effort to ensure that some of this complexity is encoded into the data if we are going to design analytics systems that are educationally useful. As expert educators, we need to elicit specific reactions from our students and encode the pedagogical intent behind eliciting those reactions along with the reactions themselves.
The play's the thing!
—William Shakespeare
The agreement allows open access publishing in 44 IOPP journals and removes article publication charges for authors.
Researchers at up to 58 UK universities will soon benefit from a new open access (OA) ‘read and publish’ agreement between Jisc and IOP Publishing (IOPP), a pioneer in open access physics publishing.
The four-year agreement begins on 1 January 2020. It enables unlimited open access publishing for affiliated corresponding authors in 44 of IOPP’s subscription journals, without barriers or charges to authors.
Members will also have reading access to 75 of IOPP’s journals, covering physics, materials science, biosciences, astronomy and astrophysics, environmental sciences, mathematics and education.
Anna Vernon, Jisc Collection’s head of licensing, said:
“For the Jisc consortium, the agreement constitutes an important next step towards rapidly increasing immediate access to scientific research under transparent conditions and pricing. This contract offers highly optimised workflows for different institutions, giving access to even more open access journals.”
Fully aligned to and compatible with open access mandates in the UK, the transitional agreement means there are no article publication charges (APCs) for qualifying articles at the point of publication.
Steven Hall, managing director at IOP Publishing, said:
“We’re delighted to have come to this agreement. We already have several transformative agreements in place in Europe, and there are more being agreed soon. We want to make publishing open access as easy as possible for our authors and help them comply with funder requirements. This agreement enables us to do this.”
The model offers a streamlined process for researchers and librarians; researchers can publish open access in IOPP journals without administrative burden or payment thanks to article identification, and unnecessary overheads are removed for librarians. Additionally, all accepted articles will be published under an open licence (CC-BY), which allows authors to retain copyright.
The agreement builds on one of the UK's first OA agreements, established in 2014. It will drive up OA publishing from participating institutions in 44 IOPP subscription journals, from 33% in 2018 toward a target of 100% open access in year one, to the benefit of researchers, students and the global scientific community.
IOPP and Jisc are also forming an advisory group comprising representatives from IOPP, Jisc and institutions to evaluate progress and support the OA transition in the UK. Institutions will also benefit from transparent and comprehensive reporting at an institutional and national level.
Copies of OA articles published under the agreement will be automatically deposited in institutional repositories, via Jisc’s Publications Router. The full agreement terms will be made public shortly on our licence subscriptions manager service site.
Jisc and five UK-based society publishers have signed pilot transitional open access (OA) agreements, now available to UK universities.
The agreements are the first to result from work undertaken by Jisc Collections, negotiating with smaller publishers to offer a sustainable transition to OA.
The Microbiology Society, Portland Press, IWA Publishing, the Company of Biologists and the European Respiratory Society all now offer transitional journal agreements through the national Jisc consortium.
These ‘read and publish’ two-year pilots allow 100% of UK scholarly output to be published OA in the societies’ hybrid journals, with some including fully OA titles in the fixed-price deals.
Kathryn Spiller, licensing manager at Jisc, who has worked with the societies to negotiate the agreements, says:
“We are delighted to offer smaller publishers a chance to negotiate with a national consortium. OA publishing is coming within reach, especially now Wellcome has confirmed that these agreements are in compliance with their policy and that their funds can be used to support these agreements. Together we’ll continue to explore new ways in which small learned societies can transition to OA in a sustainable way.”
The Charity Open Access Fund (COAF), a partnership between six health research charities, including the Wellcome Trust, invested £1.3m in OA publishing fees (APCs) with UK-based self-publishing learned societies between 2016 and 2018. These five publishers account for just under a third of this investment.
Through these agreements, the sector transitions away from hundreds of individual APC payments to a fixed annual payment between the institution and the publisher, significantly reducing the administrative burden on researchers, institutions, funders and publishers.
Robert Kiley, head of open research at Wellcome, comments:
“I am delighted that Jisc Collections has successfully negotiated transformative agreements with a number of learned society publishers at no extra cost to institutions. Institutions in receipt of COAF OA funding are able to use these funds to offset the publishing costs for COAF-attributed research published under these agreements.”