News aggregator
It has been quite a while (years, really) since we've seen such an outburst of fresh writing in the edublogosphere. The current deluge is courtesy of the #blimage (blog image) challenge issued by Amy Burvall, which she explains in a video: one person sends another an image, and the recipient writes a blog post about education inspired by that image. HJ DeWaard explains more. Here are just some of the posts compiled by Steve Wheeler:
Space to make ideas your own by Jeff Merrell
Organic Growth by Andrew Jacobs
The #blimage challenge by Jane Bozarth
Fortunate Learning and Learning Fortunes by Sue Beckingham
Desks of Doom by David Hopkins
Taking up the #blimage challenge by Ignatia de Waard
Not just a waiting room by Rachel Challen
Human Writes by Simon Finch
It's only a jigsaw puzzle by Sandra Sinfield
Playing chess with the enemy by Steve Wheeler
Learning while wandering by Tracy Parish
The #blimage challenge by Charles Jennings
Learning in limbo by Wayne Barry
Time for a fresh perspective by Sukh Pabial
The colours of active learning by Anna Wood
Breaking bread with Steve Wheeler by Amy Burvall
The joy of learning #blimage by Jane Hart
The Web: Network, dreamcatcher, patterns by Whitney Kilgore
The #blimage challenge by Sheila MacNeill
In my recent keynote for the Online Teaching Conference, the core argument was as follows:
While there will be (significant) unbundling around the edges, the bigger potential impact [of ed innovation] is how existing colleges and universities allow technology-enabled change to enter the mainstream of the academic mission.
Let’s look at one example. Back in December the New York Times published an article highlighting work done at the University of California at Davis to transform large lecture classes into active learning formats.
Hundreds of students fill the seats, but the lecture hall stays quiet enough for everyone to hear each cough and crumpling piece of paper. The instructor speaks from a podium for nearly the entire 80 minutes. Most students take notes. Some scan the Internet. A few doze.
In a nearby hall, an instructor, Catherine Uvarov, peppers students with questions and presses them to explain and expand on their answers. Every few minutes, she has them solve problems in small groups. Running up and down the aisles, she sticks a microphone in front of a startled face, looking for an answer. Students dare not nod off or show up without doing the reading.
Both are introductory chemistry classes at the University of California campus here in Davis, but they present a sharp contrast — the traditional and orderly but dull versus the experimental and engaging but noisy. Breaking from practices that many educators say have proved ineffectual, Dr. Uvarov’s class is part of an effort at a small but growing number of colleges to transform the way science is taught.
This article follows the same argument laid out in the Washington Post nearly three years earlier.
Science, math and engineering departments at many universities are abandoning or retooling the lecture as a style of teaching, worried that it’s driving students away. [snip]
Lecture classrooms are the big-box retailers of academia, paragons of efficiency. One professor can teach hundreds of students in a single room, trailed by a retinue of teaching assistants.
But higher-education leaders increasingly blame the format for high attrition in science and math classes. They say the lecture is a turn-off, higher education at its most passive, leading to frustration and bad grades in highly challenging disciplines.
What do these large lecture transformations look like? Our recent e-Literate TV case study gave us an inside look at the work done at UC Davis (episode 1, episode 2, episode 3), including first-person accounts from faculty members and students.
The organizing idea is to apply active learning principles such as the flipped classroom to large introductory science classes.
Phil Hill: It sounds to me like you have common learning design principles that are being implemented, but they get implemented in different ways. So, you have common things of making students accountable, having the classes much more interactive where students have to react and try to apply what they’re learning.
Chris Pagliarulo: Yeah, the main general principle here is we’re trying to get—if you want to learn something complex, which is what we try to do at an R1 university, that takes a lot of practice and feedback. Until recently, much of that was supposed to be going on at home with homework or whatnot, but it’s difficult to get feedback at home when the smart people who would help you—either your peers or your professor—aren’t there.
So, that’s the whole idea of the flipped classroom: you come prepared with some basic understanding and then take that time when you’re all together to do the high-quality practice and get the feedback. Everything that we’re doing is focused on that sort of principle—getting that principle into the classroom.
Professor Mitch Singer then describes his background in the redesign.
Phil Hill: Several years ago, the iAMSTEM group started working with the biology and chemistry departments to apply some of these learning concepts in an iterative fashion.
Mitch Singer: My (hopefully) permanent assignment now, at least for the next five years, will be what we call “BIS 2A,” which is the first introductory course of biology here at UC Davis. It’s part of a series, and its primary goal is to teach fundamentals of cellular and molecular biology going from origins up to the formation of a cell. We teach all the fundamentals in this class: the stuff that’s used for future ones.
About three to four years ago, I got involved in this class to sort of help redesign it, come up with a stronger curriculum, and primarily bring in sort of hands-on, interactive learning techniques, and we’ve done a bunch of experiments and changed the course in a variety of ways. It’s still evolving over the last several years. The biggest thing that we did was add a discussion section, which is two hours long where we’ve done a lot of our piloting for this interactive, online, personalized learning (as the new way of saying things, I guess).
This year (last quarter, in the fall) was the first time we really tried to, quote, flip part of the classroom.
That is, make the students take a little bit more responsibility for their own reading and learning, and then the classic lecture is more about asking questions, trying to get them to put A and B together to come up with C. It’s that sort of process that we’d like to emphasize and get them to actually learn, and that’s what we want to test them on, not so much the facts, and that’s the biggest challenge.
If you want to see the potential transformation of this academic core, it is crucial to look at the large lecture classes and how to make them more effective. The UC Davis case study highlights what is actually happening in the field, with input from real educators and students.
The post UC Davis: A look inside attempts to make large lecture classes active and personal appeared first on e-Literate.
This article first appeared on Educating Modern Learners in April 2015
"Learning is not a counting noun," says Dave Cormier, "so what should we count?"
His question - a writing prompt, if you will - comes from Week 2 of his latest MOOC on "Rhizomatic Learning." It's an incredibly provocative question, as I think it recognizes that we cannot really count learning and that, at the same time, we find ourselves having to do just that. We do so not simply because of policy demands (although, goodness, there is that) but because we do want to learn something and we want to know that we're making progress, whatever that might look like.
An early advocate for open online learning, Dave Cormier is credited with coining the term "MOOC." His work, broadly speaking, involves this question of "rhizomatic learning." It draws on the work of philosophers Gilles Deleuze and Félix Guattari and the idea of the "rhizome" - that there's a multiplicity to knowledge, information, and data, with a wide array of access points, interpretations, and influences on it.
A connectivist MOOC, "Rhizomatic Learning: A Practical View" - #rhizo15 for short - is something worth paying attention to as it both enacts and explores modern learning for educators-as-learners. Anyone can participate. You can sign up for the mailing list here.
The course description:
Rhizomatic learning is a story for learning that starts from the idea that this standard doesn't exist. It posits a learning experience where the curriculum of the course is the people that are in it. Given access to an abundance of content, how can we design a learning experience that celebrates complexity and creativity, rather than an artificial standard of knowing? A course experience where each student is encouraged to map their own learning?
This open course will tackle the practical realities of teaching this way. The participants of this course will be the curriculum.
The participants are the curriculum, and as Week 1's discussion made clear, the goal is to think about "learning subjectives" rather than "learning objectives." Learning is something developed by and for the self - with influence from others to be sure; but it means something quite different to have a stake in saying what that learning will entail than in having someone else's dictates about what you must learn imposed upon you.
What Counts?
So what counts?
We can reject our society's obsession with education data and try to construct alternatives that are much less fixated on quantification. Indeed, we should. But that's a lot easier said than done. We still are faced - practically, if not philosophically - with the question that Cormier himself is frequently asked when he advocates for tossing aside rigid goals and "outcomes": "How will we measure this?"
Of course, that question demands we think about what "this" is. What can we measure? And how does the ability to measure something tend to give that thing priority? (Easy to measure: attendance, the score on a quiz where answers can be marked "right" or "wrong." Harder to measure: curiosity, critical thinking, creativity, passion.)
Even if we reject our society's maniacal focus on the quantifiable signals of schooling - that word "counts" in the phrase "what counts" probably does make us look to numbers - how do we identify what matters? And how then do we cultivate and then assess what matters in learners?
Even if we believe learners should do this for themselves, how do we help them - particularly novice learners - think through a framework to do just that? How can we design a "framework" with the least amount of restriction but paradoxically the most amount of support? How do we help learners decide "what matters" - that is, help them develop their own learning interests and goals? How do we guide them so that their goals remain theirs? How do we ensure they have the necessary resources to reach these goals without being too heavy-handed in imposing our notions of "what matters"? (And how do we support learners when and if they change their mind about "what counts"?)
Counting for Yourself
From Walt Whitman's "Song of Myself":
Have you reckon'd a thousand acres much? have you reckon'd the earth much?
Have you practis'd so long to learn to read?
Have you felt so proud to get at the meaning of poems?
Stop this day and night with me and you shall possess the origin of all poems,
You shall possess the good of the earth and sun, (there are millions of suns left,)
You shall no longer take things at second or third hand, nor look through the eyes of the dead, nor feed on the spectres in books,
You shall not look through my eyes either, nor take things from me,
You shall listen to all sides and filter them from your self.
"What counts" for you, when you think about your own personal learning? In what ways - subtle or overt - has "what counts" for you been skewed by an obsession with quantification? How can you best help students think through these questions?
Donald H Taylor, Jul 27, 2015
Good post which to me shows why we can't simply rely on mechanical generalizations to understand learning. Donald Taylor writes on Hermann Ebbinghaus's 'forgetting curve', which basically shows how memories decline over time (and how retention can be extended by refreshing memories at increasingly long intervals). But as Taylor points out, "in memory experiments, the content you learn is meaningless." Indeed, such experiments deliberately use nonsense syllables in order to control for the effect of meaning and context. All very fine, but learning is all about meaning and context. It's about how what we are remembering fits into a pattern. Taylor points to another well-known investigation, in which Chase and Simon (1973) showed that expert chess players remember the positions of pieces much better than novices do, simply because they recognize patterns. If you're not testing for pattern recognition, you're not testing for knowledge and learning. [Link] [Comment]
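The forgetting curve Taylor discusses is usually modelled as simple exponential decay, R = exp(-t/S), where S is the "stability" of the memory. Here is a minimal sketch of that model; the specific stability values, and the assumption that each review doubles stability, are illustrative choices of mine, not Ebbinghaus's or Taylor's:

```python
import math

def retention(t_days: float, stability: float) -> float:
    """Ebbinghaus-style forgetting curve: R = exp(-t / S).

    t_days    time elapsed since the material was learned (days)
    stability S, how resistant the memory is to forgetting
    """
    return math.exp(-t_days / stability)

# Without refreshing, recall of rote material drops off steeply.
print(f"after 1 day:  {retention(1, stability=2.0):.2f}")
print(f"after 7 days: {retention(7, stability=2.0):.2f}")

# Spaced refreshing: assume (illustratively) that each review doubles
# the memory's stability, so the same 3-day gap costs less each time.
stability = 2.0
for review in range(1, 5):
    print(f"review {review}: retention 3 days later = "
          f"{retention(3, stability):.2f}")
    stability *= 2
```

The numbers illustrate the mechanics, but Taylor's caveat still applies: this describes retention of meaningless material, and says nothing about the pattern recognition that constitutes real learning.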
16th Biennial Conference of the European Association for Research in Learning and Instruction (EARLI 2015)
The 16th Biennial Conference of the European Association for Research in Learning and Instruction (EARLI) will take place in Limassol, Cyprus at the Cyprus University of Technology (CUT) from the 25th to the 29th of August 2015.
It can be more difficult to teach in an open environment. Jenny Mackness observes, "being 'in the open' raises security alarm bells for some tutors. What if their students post the less than perfect (in their eyes) videos they have made on Facebook? What if synchronous sessions with students, which are not intended to be viewed by anyone other than the student group involved, suddenly find their way onto the open web?" [Link] [Comment]
This is a small thing, but illustrative: the correct expression is "struck a chord", not "struck a cord". Why does that even matter? The former shows that you understand what the words mean, while the latter shows that you are parroting by rote. And this - not "a vested interest in maintaining an intellectual hegemony" - is what the three or four years of an undergraduate education are intended to produce. These minor differences in expression and presentation (citing people by their first name, use of generalizations like "no interest in transformation", out-of-place employment of clichés like "wax lyrical") are very obvious to a person with a formal education and invisible to a person without one. The result is the difference between learning on one's own and learning through immersion in a knowing community, the difference between remembering what words mean and being able to speak a language. I have nothing but sympathy for Graham Brown-Martin, but it's hard, especially if it wasn't part of your early life, and you can't learn to speak a language by reading books. This - and not just a bunch of stuff to remember - is what needs to be produced by online learning. [Link] [Comment]
This list of posts will grow, so if you know of any that are missing, please let me know via the comments box at the foot of this post. Also, if there are any *notable names* missing from this list, please challenge them to participate! Oh, and if you need an image, there's one here to get you started.
Amy Burvall's YouTube video explaining #blimage
Simon Ensor's #blimage Pinterest Board
Simon Ensor's #blimage Flipboard
The #blimage blog posts
Maha Bali – Rethinking puzzles: A post-modernist view
Wayne Barry – Learning in limbo
Sue Beckingham – Buck the status quo
Sue Beckingham – Learning anytime, anywhere, any social space
Sue Beckingham – Fortunate learning and learning fortunes
Jane Bozarth – The #Blimage Challenge
Mark Britz – Embraceable Me
Amy Burvall – Drawing on the Desk: Clues about Personalized and Visual Learning
Amy Burvall – Breaking bread with Steve Wheeler
Catherine Cronin – Alone and Together, Moving and Learning
Ada Czerwonogora – Un tiro en el blanco (Spanish)
Ignatia de Waard – Airports, legal papers, cultures and arts
HJ DeWaard – Stories and spaces within a #blimage challenge
Phil Denman – Everything is Not Awesome
@debsnet – Viva la boredom?
Matt Estermann – A part of a greater holonomy
Juan Domingo – Las dos caras de la #blimage! (Spanish)
Juan Domingo – #blimage-2 Cuando learning y aprender, no siempre es lo mismo! (Spanish)
Frans Droog – Een uitdaging! (Dutch)
Terry Elliott – #blimage: A game, an image of learning, a challenge
Simon Ensor – Let them eat (binned) cake
Simon Ensor – Puzzling memory loss
Simon Ensor – A bridge too low
Simon Finch – Human Writes
Sharon Flynn – Summer Holidays
Terry Freedman – My first #blimage
Jane Hart – The joy of learning #blimage
Katherine Haxton – Can’t see the wood for the trees!
Josie Holford – The efflorescence of learning
Sarah Honeychurch – A puzzle
Sarah Honeychurch – Learning technology cats
David Hopkins – Surfer Dude vs. Shark!
David Hopkins – Desks of Doom
Andrew Jacobs – Organic Growth
David Kelly – Learning Fortunes
Whitney Kilgore – The Web: Network, dreamcatcher, patterns #blimage
Martin King – #blimage 'Back to the Future' The network is out of the bottle again
Simon Lancaster – #blimage challenge
Claire Lotriet – Darth Vader, flow and learning
Colin Madland – The #blimage challenge
Connie Malamed – Fortune Cookies For eLearning Designers
Sheila MacNeill – What Sheila's seen this week #blimage
Jeff Merrell – Space to make ideas your own
Tracy Parish – Learning While Wandering
Nick Shackleton-Jones – Picture this
Sandra Sinfield – It's only a puzzle... it's a system
Sukhvinder Pabial – Time for a fresh perspective?
Michelle Parry-Slater – Is the Learning Landscape Desserted?
Brayley Pearce – apple for teacher
Andrea Stringer – See think wonder
Donald H Taylor – Memory is more than Ebbinghaus
Clark Quinn – A Nurturing Culture
Hartger Wassink – Een leeg schoolplein #blimageNL (Dutch)
Susan Watson – #blimage: A game, an image of learning, a challenge
Steve Wheeler – Blimey, it's #blimage!
Steve Wheeler – Piece by piece
Steve Wheeler – Off the rails?
Steve Wheeler – Playing chess with the enemy
Anna Wood – The colours of active learning
Kandy Woodfield – Persistence and learning #blimage
Photograph by Steve Wheeler
The #blimage list by Steve Wheeler was written in Plymouth, England and is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.
Posted by Steve Wheeler from Learning with e's
SAIDE (2015) Siyaphumelela Inaugural Conference, May 14th – 15th 2015, SAIDE Newsletter, Vol. 21, No. 3
Reading sources in the right order can save you from having to eat humble pie. Immediately after posting Privacy and the Use of Learning Analytics, in which I questioned the ability of learning analytics to suggest appropriate interventions, I came across this article in the South African Institute of Distance Education’s (SAIDE) newsletter about a conference in South Africa on Exploring the potential of data analytics to inform improved practice in higher education: connecting data and people.
At this conference, Professor Tim Renick, Vice-President of Georgia State University in the USA, reported on his institution’s accomplishment of eliminating race and income as a predictor of student success.
This has been achieved through implementing various initiatives based on data mining of twelve years’ worth of student data. The university’s early warning system, based on predictive analysis, has spawned a number of tested and refined low cost, scalable, innovative programmes such as:
- supplemental instruction by former successful students;
- formation of freshman learning communities which entail groups of 25 students enrolled in “meta-majors”;
- block scheduling of courses;
- re-tooled pedagogies involving adaptive learning software;
- and small, prudent financial retention grants.
The combination of the above has resulted in phenomenally reduced student attrition.
I have no further comment (for once!). I would though be interested in yours.
Incidentally, there were other interesting articles in the SAIDE newsletter, including:
- a report on Learning from Implementation: Key Issues for the African Storybook Project, which aims to increase literacy and development through telling African stories in children’s native languages;
- a report on a quality assurance workshop for institutional management and their Quality Assurance Committee staff at the National Open University of Nigeria, which has over 100,000 students scattered across Nigeria;
- a report on an OER Institutional Analysis Workshop for the Open University of Tanzania.
Each of these reports has important lessons for those interested in these issues that go far beyond the individual cases themselves. Well worth reading.
Warrell, H. (2015) Students under surveillance, Financial Times, July 24
Applications of learning analytics
This is a thoughtful article in the Financial Times about the pros and cons of using learning analytics, drawing on applications from the U.K. Open University, Dartmouth College in the USA, student monitoring service Skyfactor, and CourseSmart, a Silicon Valley start-up that gives universities a window into exactly how e-textbooks are being read.
The UK Open University is using learning analytics to identify students at risk as early as a week into a course.
An algorithm monitoring how much the new recruits have read of their online textbooks, and how keenly they have engaged with web learning forums, will cross-reference this information against data on each person’s socio-economic background. It will identify those likely to founder and pinpoint when they will start struggling. Throughout the course, the university will know how hard students are working by continuing to scrutinise their online reading habits and test scores.
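The article does not describe the OU's actual model, but the general technique it gestures at - combining behavioural signals with background data into a single risk score - can be sketched as a toy logistic model. All feature names and weights below are invented for illustration; a real system would fit them to historical outcome data:

```python
import math

def at_risk_probability(pages_read: float, forum_posts: int,
                        background_risk: float) -> float:
    """Toy early-warning score: a hand-weighted logistic model.

    pages_read      fraction of the online textbook read so far (0..1)
    forum_posts     posts in the web learning forums this week
    background_risk prior risk estimate from socio-economic data (0..1)

    The weights are illustrative assumptions, not the OU's.
    """
    z = (2.0                        # baseline log-odds
         - 3.0 * pages_read         # reading lowers risk
         - 0.5 * forum_posts        # forum engagement lowers risk
         + 2.0 * background_risk)   # prior background factor raises risk
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid -> probability in (0, 1)

# Two students with the same background but very different engagement:
print(f"engaged:    {at_risk_probability(0.8, 4, 0.3):.2f}")
print(f"disengaged: {at_risk_probability(0.1, 0, 0.3):.2f}")
```

Even in this toy form, the model only flags students; it says nothing about what intervention the flag should trigger, which is the harder problem.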
The article also discusses Dartmouth College’s mobile phone app which:
tracks how long students spend working, socialising, exercising and sleeping. The information is used to understand how behaviour affects grades, and to tailor feedback on how students can improve their results.
The article also tries to get a handle on student attitudes to this form of monitoring or surveillance. Not surprisingly, students appear to be somewhat ambivalent about learning analytics and differ in their acceptance of being monitored.
Rationalisations
What was particularly interesting is the range of justifications given in this article for monitoring student behaviour through data analysis:
- the most obvious is to identify students at risk, so that appropriate interventions can be made. However, no examples of appropriate interventions were given in the article, highlighting the fact that it is one thing to identify a problem and quite another to know what to do about it. For instance, we know from previous research that students from particular socio-economic or ethnic backgrounds are potentially more at risk than others. What does this mean, though, in terms of teaching and learning? If you know this is a challenge before students start studying, why wait for learning analytics to identify it as a problem?
- the next argument is the need to ensure that the high investment each student (or their parents) makes in higher education is not wasted by a failure to complete a program. Because of the high cost, fear of failure is increasing student stress. At Dartmouth, a third of the undergraduate student body saw mental health counsellors last year. However, the solution to that may not be better learning analytics, but finding ways to finance students that don’t lead to such stress in the first place;
- another rationale is to reduce the financial risk to an institution. The Chief Technology Officer at Skyfactor argues that with revenues from tuition fees of around $25,000+ per student per annum in the USA, avoiding student drop-out is a financial necessity for many U.S. institutions. However, surely there is a moral necessity as well in ensuring that your students don’t fail.
The Open University has collected data on students since it started. In fact, McIntosh, Calder and Swift (1976) found that, statistically, the best predictor of success was whether a student returned a questionnaire in the first week of a course, as this indicated their commitment. It still didn’t tell you what to do about the students who didn’t return the questionnaire. (In fact, the OU’s solution at the time was not to count anyone as an enrolment until they had completed an assignment two weeks into the course – advice that MOOC proponents might pay attention to.)
As with so many technology developments, the issue is not so much the technology but how the technology is used, and for what purposes. Conscientious instructors have always tried to track or monitor the progress of individual students and learning analytics merely provides a more quantitative and measurable way of tracking progress. The issue though is whether the data you can track and measure can offer solutions when students do run into trouble.
My fear is that learning analytics will replace the qualitative assessment that an instructor gets from, for instance, participating in a live student discussion, monitoring an online discussion forum, or marking assignments. This is more likely to identify the actual conceptual or learning problems that students are having and is more likely to provide clues to the instructor about what needs to be done to address the learning issues. Indeed, in a discussion the instructor may be able to deal with it on the spot and not wait for the data analysis. Whether a student chooses to study late at night, for instance, or only reads part of a textbook, might provide a relatively weak correlation with poorer student performance, but recommending students not to stay up late or to read all the textbook may not be the appropriate response for any individual student, and more importantly may well fail to identify key problems with the teaching or learning.
Who gets to use the data?
Which brings me to my last point. Ruth Tudor, president of the Open University’s Students’ Association, reported that:
when the data analytics programme was first mooted, participants were “naturally” anxious about the university selling the information it collected to a third party.
The OU has given strong assurances that it will not do this, but there is growing concern that as higher education institutions come to rely more on direct funding and less government support, they will be tempted to raise revenues by selling data to third parties such as advertisers. As Andrew Keen has argued, this is a particular concern about MOOCs, which rely on other means than direct fees for financial support.
Thus it is incumbent on institutions using learning analytics to have very strong and well-enforced policies about student privacy and the use of student data. The problem, though, is that this can easily lead to instructors being denied access to the very data which is of most value in identifying student learning difficulties and possible solutions. Finding the right balance, or applying common sense, is not going to be easy in this area.
Reference
McIntosh, N., Calder, J. and Swift, B. (1976) A Degree of Difference, New York: Praeger
There aren’t many ed-tech products that have developed a cult following, a phrase that appears at the foot of the Wikipedia entry for AlphaSmart, a “smart keyboard” first marketed to schools in the 1990s. Indeed, while the AlphaSmart product line was discontinued a couple of years ago, aftermarket sales of the devices continue, accompanied by blog posts like these: “4 sucky things about this $19 piece of junk that make it AMAZING for writing” and “Is Alphasmart STILL the ultimate writers’ tool?”
The word-processing keyboard was designed to be just that – a device that would strip away the distractions that had come to accompany writing on a computer – fonts, colors, layout. Instead, says one of the AlphaSmart inventors Ketan Kothari, students should be able to “focus on the words.” So should writers, of course, which goes a long way in explaining why this remains such a beloved product in certain circles.An Ed-Tech “Garage” Story
Kothari was, at the time, an engineer at Apple. He along with fellow engineer Joe Barrus had heard the demands of educators who would visit Cupertino: desktop publishing on Macs meant that students were spending a lot of time on, well, publishing and not on the writing process itself.
So Barrus built a prototype of a portable word-processor that would enable more focus on that. Kothari posted the idea onto an ed-tech discussion board on FidoNet and got an immediate response from teachers in Seattle. The two traveled to Seattle to demo the keyboard to a small group. The feedback: it would need to be smaller, it would need to be battery-operated, and the 10-key Barrus had initially included was unnecessary. One of the teachers at the demo suggested that the two hold a workshop at the upcoming NCCE (Northwest Council for Computer Education) conference and pitch their idea to more local ed-tech enthusiasts. They did; it was standing room only, and teachers were ready to buy, pulling out their personal checkbooks to do so, Kothari jokes.
But first, Barrus and Kothari had to move from prototype to production. And they had to figure out what rights, if any, their employer Apple was going to claim to the idea.
But Apple, it seemed, viewed the machine as a peripheral – not really an accurate or even imaginative categorization. (This was the Sculley era, not known as one of Apple’s most innovative.) While Barrus and Kothari’s prototype wasn’t really a computer, it wasn’t a dumb keyboard either. Apple eventually decided that it wasn’t interested in producing the device – “people didn’t really get it” in a pre-Palm OS world, says Kothari – and the company gave the two engineers a release to work on their idea. (Other Apple engineers helped with designing the plastics.)
Unlike today’s education technology startups that rush to obtain venture capital at the first inkling of an idea, AlphaSmart was largely bootstrapped. (The company’s original name was Intelligent Peripheral Devices.) The team tried, but failed, to obtain an SBA loan. They pooled all their savings, kept their day jobs, and recruited Ketan Kothari’s brother, Manish, to head up marketing efforts, working out of Ketan’s spare bedroom – one of ed-tech’s few “garage” origin stories. After several points when the company almost folded due to financial and production issues, AlphaSmart started shipping in the fall of 1993. It couldn’t build much inventory simply because it had little capital. However, the devices were in high demand, and within six months, AlphaSmart found itself cash-flow positive (again, unlike today’s education technology startups, even those that have raised venture capital).
With their connections to Apple and its education distributors and resellers, AlphaSmart’s founders had access to several key inroads into schools. But the company decided instead to build direct sales channels, which, they said, gave them a closer relationship with customers. AlphaSmart focused its limited marketing dollars on trade shows, which enabled it to cut out the middle-man and, of course, to get feedback directly from teachers, both new and existing customers alike.
“Is that all it does?” a teacher asked Barrus when he was demoing the smart keyboard’s word-processing functionality. He replied sheepishly, “Yes.” And her response: “That’s wonderful!”
That anecdote underscores one of the great appeals of the AlphaSmart device: it was plug-and-play and, as such, incredibly easy to use – “a word processing toaster,” in Barrus’ words. From Apple, AlphaSmart had borrowed the mantra “it just works” as a design principle – more specifically “it just works in the elementary school classroom.”
That principle hasn’t been the guiding one for much of education technology, as a case study about AlphaSmart published by the Innosight Institute notes:
The elementary school teachers had made an important distinction that had helped guide Joe, Ketan, and Manish in developing the product. The teachers did not view the device they had requested as a computer in the traditional sense; they were asking for a “smart keyboard.” Manish stressed the importance of that distinction: “The key innovation in my mind … was that they turned it from being a computer substitute or a ‘low-end laptop’ to being a ‘smart keyboard,’ and there’s a big difference between the two.”
That difference is exemplified by the “low-end laptops” that a few desktop computer manufacturers had begun to offer. In the team’s view, these products were still not addressing the teachers’ needs well. The laptops sold in the $300 to $500 range, but they did not feature full-size keyboards, adequate durability, or suitable battery life. Additionally, they were complicated to deploy. One low-end laptop, the Tandy WP–2, had a manual that was 150 pages long, whereas the AlphaSmart manual was only 11 pages. Using such devices required downloading and configuring software, which often caused compatibility issues. They required drivers and configuration of the serial port’s baud rate and other settings. They also included extra software, such as a terminal emulator and calendar that was a distraction. “In our minds, low-end laptops were clearly way too complicated,” Ketan said.
Personal computers were not only seen by teachers as too complex (and more than a little intimidating); they were also expensive. The early AlphaSmart devices sold for about $270 – cheaper than either laptops or desktops, which ostensibly meant that classrooms could distribute one per student.
They were also quite rugged, with a lengthy battery life (the importance of which cannot be overstated in schools) and no internal moving parts – something that Barrus would demonstrate, memorably for anyone who ever saw AlphaSmart at a trade show, by dropkicking a keyboard.
The Complexification of Ed-Tech
“The arc of the technology universe is long and it bends towards bloat.”
Although AlphaSmart had successfully found a niche selling its simple, smart keyboard to elementary school teachers, it soon began to explore other markets. In 2002, for example, it released the “Dana,” a higher-end product that functioned much more like a low-end laptop. The Dana ran the Palm OS mobile operating system, had 8 MB of memory and a touchscreen, and was not only WiFi-enabled but had infrared “beaming” capabilities to move text from the AlphaSmart to another PC. The Dana’s battery life suffered with these advancements – the device offered only about 25 hours of use. It retailed for about $400 or more, which put it beyond the budget of many schools.
In addition to feeling pressured to produce devices that met new demands for mobile computing, AlphaSmart also heard from teachers that the smart keyboards should offer more writing and editing curriculum. In other words, there were pressures on the AlphaSmart to become more complex and more robust in terms of hardware and software.
The company had its IPO in 2004, raising $24 million with a market cap of about $90 million; a little over a year later, it was acquired by Renaissance Learning for $57 million (Renaissance itself was bought by a private equity firm last year). In 2013, the AlphaSmart product line was discontinued. Ketan and Manish Kothari continue to work in education technology; their startup Root–1 was acquired by Edmodo in 2013.
Disruptive Innovation: The Wrong Lesson?
Perhaps it’s no surprise that the Innosight Institute, now known as the Clayton Christensen Institute, has penned a case study on AlphaSmart. It argues that the company fits into its model/myth/mantra of “disruptive innovation,” the phrase coined by Clayton Christensen to describe how a low-priced entrant in a market eventually displaces a more expensive, established competitor:
When a large set of consumers becomes over-served by existing products, the door is opened for disruptive products to enter the market and serve those consumers better. Such a disruptive product is more likely to be successful at capturing the market if it is focused on a specific job those consumers need to get done. This focus allows development of a product that has the right performance metrics at the right cost, and it allows the producer to target the right customer base with a clear message.
The founders of AlphaSmart, Inc. created one such product and followed several key tenets of disruptive innovation in the process. Their product targeted customers trying to do a specific job, but for whom a simple, inexpensive solution was unattainable.
With this “disruptive innovation” analysis in mind, it is probably worth considering here how the end of the AlphaSmart product line coincided with the introduction of the Google Chromebook – another low-end, low-cost, low-functionality device that has also taken the education market by storm and which is, let’s be honest, little more than a “smart keyboard” centered on the Google Apps productivity suite.
But there’s more to the story of ed-tech adoption here than how AlphaSmart fits into some business school model about how entrants on the low-end of the market come to displace those on the high-end. Indeed, this highlights the flaws of looking at education technology simply through the lens of “the market.” Remember: neither the AlphaSmart nor the Chromebook can even remotely replace a laptop – no, not even for elementary school students. Choosing a “smart keyboard” (or tablet) over a laptop means choosing a device with less functionality; and in turn, students will be able to do less.
It’s a trade-off that education seems to have readily made: efforts to give every student a computer – Seymour Papert’s vision for powerful machines enabling powerful learning – are reduced to giving every student a word-processing (or increasingly these days, online standardized testing) gadget.
This happens in part, as Papert observed, because of the conservative tendencies of school:
Thus, little by little the subversive features of the computer were eroded away: Instead of cutting across and so challenging the very idea of subject boundaries, the computer now defined a new subject; instead of changing the emphasis from impersonal curriculum to excited live exploration by students, the computer was now used to reinforce School’s ways. – Seymour Papert, The Children’s Machine
That is to say, machines are not “disruptive innovations” in schools; rather, schools often simply look to replicate traditional practices with these new devices. Thus, “writing” becomes simply “word processing” – the processing of words via machine rather than paper and pencil – and perhaps as such, the appeal of the AlphaSmart smart keyboard becomes pretty clear.
Yet the ongoing appeal of the keyboard to writers reveals something else about technology writ large. There is a tendency, thanks to “markets” of course, toward constantly upgraded iterations of increasingly complicated tools that never quite succeed in meeting users’ needs (because, as Steve Jobs famously suggested, why bother to ask). And all of that makes writing – already a painful and challenging process – even harder to actually accomplish. The elementary school teachers that inspired AlphaSmart were right: writing isn’t editing, and writing isn’t publishing. Writing is writing, and the computer – particularly one connected to the Internet – can be a quite lousy machine for aiding that.
Long live the AlphaSmart.
Phil and I spent much of this past week at BbWorld trying to understand what is going on there. The fact that their next-generation Ultra user experience is a year behind is deservedly getting a lot of attention, so one of our goals going into the conference was to understand why this happened, where the development is now, and how confident we could be in the company’s development promises going forward. Blackboard, to their credit, gave us tons of access to their top executives and technical folks. Despite the impression that a casual observer might have, there is actually a ton going on at the company. I’m going to try to break down much of the major news at a high level in this post.
The News
Ultra is a year late: Let’s start with the obvious. The company showed off some cool demos at last year’s BbWorld, promising that the new experience would be Coming Soon to a Campus Near You. Since then, we haven’t really heard anything. So it wasn’t surprising to get confirmation that it is indeed behind schedule. What was more surprising was to see CEO Jay Bhatt state bluntly in the keynote that yes, Ultra is behind schedule because it was harder than they thought it would be. We don’t see that kind of no-spin honesty from ed tech vendors all that often.
Ultra isn’t finished yet: The product has been in use by a couple of dozen early adopter schools. (Phil and I haven’t spoken with any of the early adopters yet, but we intend to.) It will be available to all customers this summer. But Blackboard is calling it a “technical preview,” largely because there are large swathes of important functionality that have not yet been added to the Ultra experience–things like tests and groups. It’s probably fine to use it for simple (and fairly common) on-campus use cases, but there are still some open manholes here.
Ultra is only available in SaaS at the moment and will not be available for on-premise installations any time soon: This was a surprise both to us and to a number of Blackboard customers we spoke to. It’s available now for SaaS customers and will be available for managed-hosting customers, but the company is making no promises about self-hosted installations. The main reason is that Ultra adds some pretty bleeding-edge new components to the architecture that are hard to wrap up into an easily installable and maintainable bundle. The technical team believes this situation may change as the technologies they are using mature – to be clear, we’re talking about third-party technologies like server containers rather than homegrown Blackboard technologies – at which point it may become practical for schools that still want to self-host Ultra to do so. But don’t expect to see this happen in the next two years.
Ultra is much more than a usability makeover and much more ambitious than is commonly understood: There is a sense in the market that Ultra is Blackboard’s attempt to catch up with Instructure’s ease of use. While there is some truth to that, it would be a mistake to think of Ultra as just that. In fact, it is a very ambitious re-architecture that, for example, has the ability to capture a rich array of real-time learning analytics data. These substantial and ambitious under-the-hood changes, which Phil and I were briefed on extensively and which were also shared publicly at Blackboard’s Devcon, are the reason why Ultra is late and the reason why it can’t be locally installed at the moment. I’m not going to have room to go into the details here, but I may write more about it in a future post.
Blackboard “Classic” 9.x is continuing under active development: If you’re self-hosted, you will not be left behind. Blackboard claims that the 9.x code line will continue to be under active development for some time to come, and Phil and I found their claims to be fairly convincing. To begin with, Jay got burned at Autodesk when he tried to push customers onto a next-generation platform and they didn’t want to go. So he has a personal conviction that it’s a bad idea to try that again. But also, Blackboard gets close to a quarter of its revenue and most of its growth from international markets now, and for a variety of reasons, Ultra is not yet a good fit for those markets and probably won’t be any time soon. So self-hosted customers on Learn 9.x will likely get some love. This doesn’t mean development will be as fast as they would like; the company is pushing hard in a number of directions, and we get the definite sense that there is a strain on developer resources. But 9.x will not be abandoned or put into maintenance mode in the near future.
If you want to get a sense of what Ultra feels like, try out the Blackboard Student mobile app: The way Blackboard uses the term “Ultra” is confusing, because sometimes it means the user experience and sometimes it means the next-generation architecture for Learn. If you want to try Ultra the user experience, then play with the Student mobile app, which is in production today and which will work with Learn 9.x as well as Learn Ultra. Personally, I think it represents some really solid thinking about designing for students.
Moodle may make a comeback: One of the reasons that Moodle adoption has suffered in the United States the past few years is that it has lacked an advocate with a loud voice. Moodlerooms used to be the biggest promoter of the platform, and when Blackboard acquired them, they went quiet in the US. But, as I already mentioned, the international market is hugely important for Blackboard now, and Moodle is the cornerstone of the company’s international strategy. They have been quietly investing in the platform, making significant code contributions and acquisitions. There are signs that Blackboard may unleash Moodlerooms to compete robustly in the US market again. This would entail taking the risk that Moodle, a cheaper and lower-margin product, would cannibalize their Learn business, so file this under “we’ll believe it when we see it,” but Apple has killed the taboo of self-cannibalization when the circumstances are right, and they seem like they may be right in this situation.
Collaborate Ultra is more mature than Learn Ultra but still not mature: This is another case where thinking about Ultra as a usability facelift would be hugely underestimating the ambition of what Blackboard is trying to do. The new version of Collaborate is built on a new standard called WebRTC, which enables webconferencing over naked HTML rather than through Flash or Java. This is extremely hard stuff that big companies like Google, Microsoft, and Apple are still in the process of working out right now. It is just this side of crazy for a company the size of Blackboard to try to release a collaboration product based heavily on this technology. (And the only reason it’s not on the other side of crazy is because Blackboard acquired a company that has one of the world’s foremost experts on WebRTC.) Phil and I have used Collaborate Ultra a little bit. It’s very cool but a little buggy. And, like Learn Ultra, it’s still missing some features. At the moment, the sweet spot for the app appears to be online office hours.
My Quick Take
I’m trying to restrain myself from writing a 10,000-word epic; there is just a ton to say here. I’ll give a high-level framework here and come back to some aspects in later posts. Bottom line: If you think that Ultra is all about playing catch-up with Instructure on usability, then the company’s late delivery, functionality gaps, and weird restrictions on where the product can and cannot be run look pretty terrible. But that’s probably not the right way to think about Ultra. The best analogy I can come up with is Apple’s Mac OS X. In both cases, we have a company that is trying to bring a large installed base of customers onto a substantially new architecture and new user experience without sending them running for the hills (or the competitors). This is a really hard challenge. Hardcore OS X early adopters will remember that 10.0 was essentially an unusable technology preview, 10.1 was usable but painful, 10.2 was starting to feel pretty good, and 10.3 was when we really began to see why the new world was going to be so much better than the old one. If I am right, Ultra will go through the same sort of evolution. I don’t know whether these stages will each be a year long; I suspect that they may be shorter than that. But right now we are probably in the 10.0 era for Ultra. As I mentioned earlier in the post, Phil and I still need to talk to some Ultra customers to get a sense of real usage and, of course, since it will be generally available to SaaS customers for use in the fall semester, we’ll have more folks to talk to soon. We will be watching closely to see how big the gaps are and how quickly they are filled. For example, how long will it take Blackboard to get to the items labeled as “In Development” on their slides? Does that mean in a few months? More? And what about the “Research” column?
Based on these slides and our conversations, I think the best-case scenario is that we reach the 10.2 era – where the platform is reasonably feature-complete, usable, and feeling pretty good overall – by BbWorld 2016, with some 10.3-type new and strongly differentiating features starting to creep into the picture. Or they could fall flat and utterly fail to deliver. Or something in between. I’m pretty excited by the scope of the company’s ambition and am willing to cut them some slack, partly because they persuaded me that what they are trying to do is pretty big and partly because they persuaded me that they probably know what they are doing. But they have had their mulligan. As the saying goes (when properly remembered), the proof of the pudding is in the eating. We’ll see what they deliver to customers in the next 6–12 months.
Watch this space.
The post Blackboard Ultra and Other Product and Company Updates appeared first on e-Literate.