When we want people to behave appropriately, either online or offline, to what exactly are we appealing? One school of thought argues that morality is based on reason - this, for example, gives us utilitarianism or the categorical imperative. But what if morality is more like a sensation, as Hume argued, rather than reason? What might it look like? This paper examines a number of possibilities suggested by recent neuroscience:
- Altruism - "a neurological adaptation of the mechanisms that support maternal-infant bonding"
- Emotional contagion - "a form of somatic mimicry; i.e., the tendency to automatically mimic and synchronize facial expressions, vocalizations, postures and movements..."
- Attachment theory - "the initial caring by a parent for an infant and of filial and pair-bonding" and extensible to groups or ideas
- Empathy - "to experience the psychological life of that person… "
- Empathy-altruism - "empathic concern: other-oriented emotion elicited by and congruent with the perceived welfare of others in need"
- Fear - "tendency to freeze in place in mid-task was tightly correlated with their bodily reactions such as speeded-up heartbeat and elevated levels of stress hormones"
Clearly the different explanations of moral behaviour suggest very different strategies to elicit it. Until recently the primary mechanism was fear. But perhaps we can find our way to evoking less traumatic mechanisms.
I saw my first selfie stick last September and the reaction since then has been, well, negative. Rob Watson questions this. "It seems that users of selfie sticks have broken some kind of taboo? A taboo that says that we shouldn’t be so obvious when we take our self-images using our phones?" They've been banned from some museums and are the subject of scorn. But Watson resists this response, and I agree. "Just consider for a moment what you would be trashing," he says. "The active participation of people as a social group who have strong social ties, and that are embedded in a location or a venue. How can anyone complain about that?" Photo: me! [Link] [Comment]
Last Friday, former Star-Ledger education reporter Bob Braun posted a screenshot on his blog of an email by a New Jersey superintendent detailing a “Priority 1 Alert” issued by Pearson and the state department of education, alleging that a student had tweeted about a test question on the PARCC, causing a security breach. The email expressed several concerns: the potential for more parental outcry about student data and testing, particularly as the DOE had requested the student be disciplined. Also, through the incident, the superintendent had learned that “Pearson is monitoring all social media during PARCC testing.”
And with that sentence, Braun’s story went viral, temporarily knocking his website offline (prompting, in certain circles, conspiracy theories that it had been DDoSed by a foreign corporation).
There’s already been a lot of ink spilled about this, with criticisms and justifications coming from a number of different angles. (See: Bill Fitzgerald, Alice Mercer, Cynthia Liu, for example). The AFT has since weighed in, demanding an end to the monitoring; and New Jersey Department of Education responded today, justifying its “vigilance” in “safeguarding test questions.”
But all of this strikes me as much more complicated than simply an act to protect the security of Common Core assessments. There are a number of important and interconnected questions raised by this story:

What Do We Mean By “Spying” on Students?
Bob Braun’s initial post used the word “spying” in its headline. No doubt, that’s a pretty loaded word that some, like Cynthia Liu, have objected to. Alternatives, the thesaurus tells me: “surveilling” or “monitoring.” All these words carry slightly different weights, slightly different meanings. “To spy” means “to watch.” But it also means “to watch secretly” – and that’s where a lot of the concern comes from, I think.
Do students know they’re being watched? Yes, I think they (mostly) do when it comes to their interactions (offline) at school. They know they’re being watched in class, in the halls, in the cafeteria, on the playground. We've socialized them to conform, to "behave" there. But no, I don’t think students (necessarily) know they're being watched when it comes to their after-school updates on social media. And it’s worth asking: what expectation of privacy from school surveillance should students have while at home? What behaviors are we going to compel from students in their personal lives? Who gets to decide what that looks like?
I have heard a lot of adults sneer that “if you post it on Twitter, you should realize it’s public,” but I’m not sure all of us who use social media – and it’s not simply teens who get the societal finger-wag here – think about our social media updates that way. Nor should we, I’d argue.
“Public” and “private” are not simple binaries. As danah boyd has argued, “there’s a big difference between something being publicly available and being publicized. I worry about how others are going to publicize this publicly available [social media] data and, more importantly, who will get hurt in the cross-fire.” (Her original quote said “Facebook,” but I think the same holds across a number of platforms: Twitter, Facebook, and so on.)

What Do We Mean By “Privacy”?
As boyd and others contend, “private” is not the opposite of “public.” There are things that we do out in public – conversations that we have in the coffee shop or bar or park, for example – that, we still expect to be private. We don’t anticipate that our conversations in these settings are recorded or broadcast or data-mined.
You can argue that that’s changing, that that’s naive for us to think that when those conversations in a “public space” are in an online public space, that we’d expect them not to be tracked or monitored. Perhaps that’s true. That doesn’t mean we shouldn’t fight corporations' compulsion to track us. And that doesn’t mean we shouldn’t move to defend the least powerful among us from having their lives monitored – and yes, that includes our students. (And a side note: I really do hate the whole “I can’t believe you didn’t know this was already happening” line that accompanies a lot of tech surveillance revelations. This sort of dismissive attitude offers nothing but smugness.)
I do think there are differing levels of “publicness” online – differing based on a number of factors: the popularity of the site, the user, the topic, for starters. As someone who has 28K followers on Twitter, I experience this often: a casual comment or RT echoes in ways that I hadn’t really expected. And even more importantly, it’s wrong to assume that we all get to move about – in either physical or virtual spaces – with the same assurances about our personal privacy, integrity, and safety. For a woman to lose elements of personal privacy has different ramifications than it does for a man; for an African American woman to lose personal privacy, more so. For a teen... etc. "Public" and "private" are descriptions of power and privilege. They are not social absolutes. They are not "given."
Surveillance does not affect all students equally. Privacy is increasingly a premium feature: which students can afford it?
What is privacy? As Helen Nissenbaum has argued,
Attempts to define [privacy] have been notoriously controversial and have been accused of vagueness and internal inconsistency — of being overly inclusive, excessively narrow, or insufficiently distinct from other value concepts. Believing conceptual murkiness to be a key obstacle to resolving problems, many have embarked on the treacherous path of defining privacy. As a prelude to addressing crucial substantive questions, they have sought to establish whether privacy is a claim, a right, an interest, a value, a preference, or merely a state of existence. They have defended accounts of privacy as a descriptive concept, a normative concept, a legal concept, or all three. They have taken positions on whether privacy applies only to information, to actions and decisions (the so-called constitutional rights to privacy), to special seclusion, or to all three. They have declared privacy relevant to all information, or only to a rarefied subset of personal, sensitive, or intimate information, and they have disagreed over whether it is a right to control and limit access or merely a measure of the degree of access others have to us and to information about us. They have posited links between privacy and anonymity, privacy and secrecy, privacy and confidentiality, and privacy and solitude.
We fail to have much nuance when we talk about student data and privacy, and here Nissenbaum’s work is particularly helpful: context matters. Privacy shouldn’t mean “never share.” Or “never share student data without parent’s consent.” These sorts of assertions are particularly irksome to me because they highlight the ways in which so many of our privacy conversations fail to recognize student agency at all. Students and their data are objects in many of these formulations. Too often conversations about privacy fail to give students a voice, for starters, in what pieces of their personal data are shared (or why they're not). Indeed, students are compelled -- by the syllabus and the TOS -- to share. They have little choice in opting in or opting out. Policies and parents often fail to recognize that students might have – should have – a voice in determining what’s worth opening up to aggregation and analysis and what’s something not really meant for that. (Many so-called privacy advocates in education reinforce this, assuming they always speak for students, assuming that they know better than students. Again, students just end up as objects of a different sort of paternalism.)
Discussions of privacy are rarely framed around personal integrity – about how identity is performed in certain venues and how surveillance and punishment in those venues might be detrimental to experimentation, exploration, or personal growth. Yet these factors - these vulnerabilities even - seem particularly important to consider in education technology circles. What happens to students’ personal growth if we’re going to watch them and collect all their clicks and updates and images and videos during and after school? Who do you get to be, what identities do you get to try out and perform, if you know you're always watched -- by your teachers, by brands? That is, what do intellectual freedom and personal identity development look like under total data surveillance? How much do we want to monitor students as they figure out how to express themselves, as they figure out who they are – again, on and offline?
How much of students' behavior do we want to give a side-eye, how much do we want to squint at, how much do we want to scrutinize algorithmically?
These are such important questions when we’re dealing with K–12 students and college students alike. But mostly, instead of talking about identity formation and social media monitoring, it seems we want to wring our hands about “cheating.” We let that drive the conversation...

Why Monitor Social Media?
There’s a longstanding debate over whether or not teachers should “friend” their students on social media. My 2 cents: it depends. It depends on what educators’ relationship with students looks like. It depends on what students want out of that relationship. It depends on what educators want. I know teachers who have been able to provide counseling and support in teens’ most dire moments, thanks to their being attuned to social media. (This is complicated by the tools we use too: take Twitter: I'd argue its infrastructure is built on “watching” not “friending.” It's different than Facebook's mechanisms - not that those folks are really your "friends.")
The whole "it depends" thing governs a lot of what "monitoring" looks like, doesn't it? Whether it's done out of caring or done out of concern.
So what is Pearson doing in this particular case? Pearson doesn’t care about individual students’ struggles with queer identity, homework, cyberbullying, college applications and college affordability, after-school jobs, homecoming king drama, the basketball team’s season, band tryouts, drama tryouts, drama, a parent’s death, parents’ divorce, or standardized testing. Wait. No. Pearson “cares” about that last one.
Pearson is involved in social media monitoring, as is almost every major corporation, not because they care about students. It’s because they care about their brand. They care about their intellectual property. Corporations like Pearson monitor social media, in part, so they can provide customer service. Pearson monitors social media so it can glean insights based on social media sentiment about its brand. (You suck, Pearson!) And when it comes to assessment, Pearson monitors social media so it can identify – and based on its interaction with the NJDOE, punish – those who post status updates about its tests.
In this case, so we’re told, the social media monitoring falls under the umbrella of "test security," which isn’t a new concern by any means. Students have long been told to not bring anything into standardized tests but a number 2 pencil and the pre-approved calculator. Eyes on the exam. No talking. Etc. The same test that's given in New Jersey this week is going to be given in Iowa (or somewhere) next week; so no one can talk about it. Like, ever.
For what it's worth, the technology tools used to monitor testing are only increasing. Rutgers, for example, uses a tool called ProctorTrack to verify student identity (i.e. to prevent cheating), which demands that students hand over biometric data, including facial and knuckle scans.
As Jessy Irwin has argued, we are grooming students for a lifetime of surveillance.
Social media monitoring, so we’re told, provides a new and powerful way to monitor and punish those students who talk about the test after the test. (That’s different than punishing those who talk during the test.) Students have always done so, let’s be honest. Some of this talk, schools long have decreed, counted as cheating – particularly if there were cheat-sheets that others could work from. But some of the talk about tests, we shrugged off merely as banter.
“That one reading comprehension passage about electricity was so easy, ya know, because we just talked about that on Thursday.” – if that’s the content of a tweet, is that cheating? What if it’s a conversation in the cafeteria? What if it’s the topic of a student’s phone call? Which do we monitor? Why?

What Does Social Media Monitoring Track?
Again, a common response to the Pearson social media tracking here is that students’ tweets are public, so they’re fair game. But it’s not actually clear what Pearson is tracking. The social media firm Tracx had posted a case study about its work with Pearson, but immediately following Braun’s revelations, that link went dead. (Pearson is still listed as a client.) Tracx boasts, among other things, a “capability which automatically stitches together a user’s social profiles.” Many services (such as Full Contact) let you look up an email address and identify all the social media profiles associated with that. So even if a student’s Twitter account doesn’t have his or her name attached, it’s still discoverable with these monitoring tools as long as it was created with a known address.
Tracx also says it can “visualize social posts at the street level.” Again, if students haven’t turned off geolocation on Google Maps, Twitter, Foursquare, Yelp or the like, they’re pretty easy to find. And these are just a few public clues that a social media monitoring company can use to identify and monitor the Twitter accounts of students who’re currently sitting PARCC exams. That is, it’s not just that there was a tweet about the test; it’s that, thanks to data analytics, Pearson can immediately know who tweeted it.
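To make the deanonymization point above concrete, here is a minimal sketch in Python of how two public clues could together attribute an "anonymous" account: a reverse email-to-handle lookup of the kind services like Full Contact advertise, plus a geotag near the school. Every address, handle, and coordinate below is invented for illustration; no real API is being called.

```python
# Hedged sketch: linking an "anonymous" social account to a student using
# only public clues. All data here is hypothetical toy data.

from math import radians, sin, cos, asin, sqrt

# Hypothetical stand-in for a reverse-lookup service: known school email
# addresses mapped to the social handles registered with them.
EMAIL_TO_HANDLES = {
    "student123@example.org": ["@anon_parcc_taker"],
}

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def near_school(post, school_lat, school_lon, radius_km=1.0):
    """A geotagged post within a short radius of the school is easy to attribute."""
    return haversine_km(post["lat"], post["lon"], school_lat, school_lon) <= radius_km

post = {"handle": "@anon_parcc_taker", "lat": 40.7359, "lon": -74.1724, "text": "that test tho"}
school = (40.7357, -74.1720)  # hypothetical school coordinates

# The handle carries no name, but the email lookup plus the geotag identify it.
linked = [email for email, handles in EMAIL_TO_HANDLES.items() if post["handle"] in handles]
print(linked, near_school(post, *school))
```

The point of the sketch is the combination: neither clue alone names the student, but a monitoring firm that holds both can do the join in one pass.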
Social media monitoring is an incredibly sophisticated and incredibly lucrative business. Social media monitoring is not, as I’ve seen some suggest, simply a matter of looking at all the #PARCC hashtags. Social media monitoring involves data collection and data analysis at such a level that schools are told they cannot do this in-house.
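For contrast, the naive approach the paragraph above dismisses (just looking at all the #PARCC hashtags) really does fit in a few lines, which is exactly why it is not what the monitoring industry sells. The posts here are invented:

```python
# The naive "just watch the hashtag" approach -- a keyword filter, nothing more.
# Toy posts, for illustration only.

posts = [
    "ugh #PARCC today was brutal",
    "band tryouts after school!",
    "that reading passage on the #parcc was so easy",
]

def hashtag_scan(posts, tag="#parcc"):
    """Flag any post containing the hashtag, case-insensitively."""
    return [p for p in posts if tag.lower() in p.lower()]

flagged = hashtag_scan(posts)
print(len(flagged))  # 2 -- and it already can't tell banter from a breach
```

Commercial monitoring adds what this cannot do: cross-platform profile stitching, sentiment scoring, and geolocation at scale.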
This whole process is based on algorithms that surface certain “insights” that the algorithm designers deem important. What metrics does Pearson care about? Even if it’s sucking in all sorts of data about teenagers – across platforms, across locations – what signals matter? What signals does it ignore? (We do not know because Pearson has not shared details of its monitoring algorithms.)

Is This a Free Speech Issue?
Cynthia Liu has argued that “this is not a student data privacy issue, but a student free speech issue.” I disagree. It’s both. These issues are not either/or. Indeed, surveillance chills free speech.
In the light of this week’s revelations, students - particularly savvy ones in New Jersey - will move their conversations elsewhere. They will not stop talking about testing. They will just do so in venues in which they do not think adults are listening. They will whisper rather than tweet. They will Yik Yak or SnapChat; they will text. (Will Pearson try to monitor those too?)
Liu argues that the crackdown on students' social media updates about the Common Core tests is a violation of the First Amendment. It's worth noting that, according to the initial report out of New Jersey, the student’s tweet that prompted this whole brouhaha was made at 3:18 – that is, after school. Do schools have a right to monitor and discipline students’ behavior and speech in school? – yup. Do schools have a right to monitor and discipline students’ online behavior and speech after school? – not so clear.
And again: what happens when, thanks to Internet technologies, schools and their corporate ed-tech providers opt to surveil students 24–7?

Who Benefits? Who Loses?
The response to the news from New Jersey – in certain circles at least – was shock. The blame for all this was placed on Pearson. But let’s be honest: many schools already engage in social media monitoring. Schools, not just ed-tech providers, hire social media monitoring companies. And many standardized tests – Smarter Balanced, the SAT, AP exams – have similar procedures and policies in place that also involve paying attention to what’s said about the assessments on social media. As such, focusing on Pearson or PARCC misses the point.
Late last year, news broke that the Huntsville, Alabama school district had paid over $150,000 to a security firm to investigate students’ online activity. As a result of this activity, 14 students were expelled. 12 of those were African American. And while the data involves more than this particular sting, it’s worth noting that in a school district where only 40% of students were Black, almost 80% of expulsions in that year involved African American students.
What does the school-to-prison pipeline look like when we bring it online?
What does the school-to-prison pipeline look like if we base it on Twitter updates? 24% of teens, according to Pew, use Twitter. But not equally: teen girls use Twitter more than teen boys. And 39% of Black teens do versus 23% of white teens.
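The arithmetic behind those two disparities, using only the figures quoted above, is worth spelling out:

```python
# Arithmetic behind the disparity claims, using the figures from the text.

# Huntsville: Black students were ~40% of enrollment but ~80% of expulsions.
expulsion_share, enrollment_share = 0.80, 0.40
overrepresentation = expulsion_share / enrollment_share
print(overrepresentation)  # 2.0 -- expelled at twice their share of enrollment

# Pew: Twitter use among teens, by race.
black_teen_use, white_teen_use = 0.39, 0.23
exposure_ratio = black_teen_use / white_teen_use
print(round(exposure_ratio, 2))  # ~1.7x -- a Twitter dragnet reaches Black teens disproportionately
```

In other words, even a perfectly "neutral" Twitter dragnet starts from a population in which Black teens are roughly 1.7 times more likely to be visible to it at all.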
As such, who’s going to be caught up in the Pearson dragnet? If schools and ed-tech companies are going to use social media to track behavior, whose behavior exactly will they track? Who's most likely to get caught up in these social media monitoring dragnets for "inappropriateness"? Who's too loud in class? Who's too loud online? Who's talking out of line? Social media monitoring algorithms are written by people. (Who writes them? Can we see them? Can we review the data these algorithms gather?) Crucially: none of this is neutral.

Policy Says...
"This will go down on your permanent record"... Except none of this is protected by FERPA.
"It's covered by an NDA"... Except I'm not sure students ever signed a non-disclosure agreement, agreeing they'd never speak of the test.
Students should just know to never talk about the test... And the lesson here, for students: you have no rights to speak publicly about your education. It's all covered by some bullshit policy decree - most of it made up once something goes awry, once someone dares complain. You're just a cog, an object. Fill in the blanks. You don't matter. And we're watching to make sure you know that.

How Should We Rethink Assessment?
In an age of ubiquitous technology and social media, shouldn't we rethink assessment instead of opting to surveil students more severely?
Perhaps if a single tweet – 140 characters – can so easily destroy a test’s security and validity, the whole science of testing needs to be reconsidered? Because that’s pretty fragile.

Vulnerability and Trust
Here’s an excerpt from Part 1 of my contribution to the Speaking Openly conversation about education, privacy, and risk (the other videos – from Cory Doctorow, Dan Gillmor, and others – are well worth watching):
Learning requires a certain vulnerability. We have to recognize we don’t know things; we have to be open to not knowing things; we have to listen and experiment and sometimes stumble and fail. We have to be open to learning.
But that vulnerability can play out in lots of different ways, depending on the setting for our learning, for example, and on the role we get to play in deciding what that learning looks like, the way we are treated as learners. Whether we like it or not, we are vulnerable when we’re enrolled in formal educational institutions, for example. That vulnerability is different for a five year old than a fifteen year old than a fifty year old returning to college.
In some ways, school is designed to do something to you – it tells you what you should know, it tells you how you should behave. So we are vulnerable not just intellectually and not just in ways that might open us up to new ideas – a good thing, generally, right? – but in ways that might open us up to less pleasant experiences as well.
Do you trust school? Do you trust your instructors? Your peers? Do you have a choice?
How we answer those questions will vary greatly based on any number of factors.
Trust, vulnerability, choice, control, power – these are all interconnected when it comes to learning. And they’re all connected to issues of privacy as well. What’s key to remember: privacy isn’t really the opposite of publicness. To have privacy isn’t the same as to be hidden – and by extension, privacy is not the opposite of “openness.” We have to recognize that privacy isn’t this universal “thing” society has always respected – or that all members of society have benefited from equally – that is now suddenly under attack by virtue of new technologies. Context matters. Again, power matters. But we do also have to recognize how much new technologies reshape these issues – they reshape practices, contexts, and power – in ways that are both obvious and subtle.
How much privacy do you have to hand over in school? How much have you had to do so historically? It’s one thing for a teacher to recognize that you’re still struggling with your 8 times table, for example. It’s another thing entirely for a piece of software, that the school mandates you use, to track massive quantities of other data about your “progress” – not just how well you score on various math exercises or math quizzes, but all the mouse clicks, all the videos you watch, all the times you rewound a video or fast-forwarded. All this data and metadata represents an unprecedented opportunity to learn more about how students learn, we’re often told. But what does this data collection and data-mining mean in terms of power and privacy and vulnerability? What does it mean in terms of how students have already been surveilled and shaped by school? Do students know this data is being collected? What sort of trust relationship is expected between a student, a school (or an informal learning environment too, I should add), and technology when it comes to learning data?
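As a sketch of what "all this data and metadata" can look like per interaction, here is a hypothetical event record for a single video rewind. The field names and values are invented for illustration, not any vendor's actual schema:

```python
# A hypothetical clickstream record: what one "rewind" click might carry.
# Field names are invented, not any real product's schema.

from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class LearningEvent:
    student_id: str    # persistent identifier, linkable across sessions and tools
    event: str         # e.g. "video_rewind", "quiz_answer", "page_view"
    resource: str      # what was being watched or read
    position_s: float  # where in the video the student was
    timestamp: str     # when, to the second

event = LearningEvent(
    student_id="s-00123",
    event="video_rewind",
    resource="math/times-tables-8",
    position_s=47.5,
    timestamp=datetime(2015, 3, 20, 16, 42, 11, tzinfo=timezone.utc).isoformat(),
)
print(asdict(event))  # one click; multiply by every click, every student, every day
```

One record is innocuous; a school year of them, joined on `student_id`, is a behavioral profile.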
My report card, even when I was learning my math facts 35 years ago, might have said “she’s getting better at the 8 times table, but she tends to talk a lot in art class.” Or “she can do all the math times tables in some arbitrary time we’ve decided you need to know your math facts – good for her, bravo, but she tends to push to the front of the line in library.” To some extent, students have always been watched and observed as they learn. And we have to think about what that looks like in terms of their autonomy and their agency – are they objects or subjects?
We have to recognize too that this surveillance has never been applied equally – some bodies – “marked bodies,” if you will – have been seen as more “undisciplined.” They’ve always been watched more closely.
Will technology change this? Will technology put even more scrutiny on students? On which students? Which students are in a position to resist that scrutiny? Which students will be granted privacy?
These are questions of power, not simply questions of policy or of technology.
Ideally, of course, open education breaks open some of the control and power, because it recognizes that the learner is the driver here, not the instructor, not the institution. I think we need to do more, however, to make sure that open education, when paired with various Internet technologies, isn’t re-inscribing new forms of control and power – it is not just a matter of the control of education institutions, but of surveillance and control by the technology sector. Do learners trust technology? Why? Why not? Has that trust been earned? What sorts of privacy should learners demand? How do we reconcile the need for a certain amount of vulnerability in order to learn with the vulnerability of having so much more of ourselves – our data – exposed as we turn to technologies to do that very learning?

Who Tracks Learners Online? Why?
It's not just Pearson. Pearson is a red herring here...
Photo courtesy of INTED 2015
Unevenly Distributed by Steve Wheeler, written in Istanbul, Turkey, is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License. Posted by Steve Wheeler from Learning with e's.
The WSJ article is inaccessible due to a paywall, but the opinions expressed in this Slashdot discussion are more worth reading in any case (scroll down to read them). Here's the summary: "The WSJ reports an army of teachers wielding Nook tablets and backed by investors including Bill Gates and Mark Zuckerberg is on a mission to bring cheap [$6.50/month], internet-based, private education to millions of the world's poorest children in Africa and Asia. In Kenya, 126,000 students are enrolled at 400+ Bridge International Academies that have sprung up across the country since the company was founded in 2009. Bridge's founders are challenging the long-held assumption that governments rather than companies should lead mass education programs. The Nook tablets are used to deliver lesson plans used by teachers (aka "scripted instruction"), as well as to collect test results from students to monitor their progress."
Note that Slashdot discussions might offend some people. But some comments are quite good. For example: "Looking at our own educational systems, both in the US and Europe, I'm not too sure that we're the right one's to show the Africans how to do it properly." And, "Gates and Zuck want to farm the entire human race for wage slaves. The oligarchs want to pluck the best and brightest from wherever they may be and utilize them."
According to this article, the long-established 'no significant difference' between online and in-class learning outcomes is upheld. Yet "there remains a need for greater methodological rigor in the research on learning outcomes associated with online and hybrid instruction." On the one hand, I agree that academic research in the field is often poor (one of the studies cited is your typical "class of psychology students at a midwestern university"). On the other hand, I don't think the author is fair in his assessment of some of the better studies - the critiques, for example, of Xu and Jaggars (2014, "500,000 online and face-to-face courses taken by more than 40,000 degree-seeking students") are picky and pedantic.
Finally, I would add my usual caution that with online learning, we don't expect merely the same outcome, we expect different outcomes. A study like this is like comparing air and rail by distance travelled and on-time ratings, and finding no significant difference in the outcome. But when travelling by air, we travel much faster, and to locations not accessible by train, and a controlled point-for-point comparison would miss this result.
In August 2013 Michael described Ray Henderson’s departure from an operational role at Blackboard. As of the end of 2014, Ray is no longer on the board of directors at Blackboard either. He is focusing on his board activity (including In The Telling, our partner for e-Literate TV) and helping with other ed tech companies. While Ray’s departure from the board did not come as a surprise to me, I have been noting the surprising number of other high-level departures from Blackboard recently.
As of December 24, 2014, Blackboard listed 12 company executives in their About > Leadership page. Of those 12 people, 4 have left the company since early January. Below is the list of the leadership team at that time along with notes on changes:
- Jay Bhatt, CEO
- Maurice Heiblum, SVP Higher Education, Corporate And Government Markets (DEPARTED February, new job unlisted)
- Mark Belles, SVP K-12 (DEPARTED March, now President & COO at Teaching Strategies, LLC)
- David Marr, SVP Transact
- Matthew Small, SVP & Managing Director, International
- Gary Lang, SVP Product Development, Support And Cloud Services (DEPARTED January, now VP B2B Technology, Amazon Supply)
- Katie Blot, SVP Educational Services (now SVP Corporate Strategy & Business Development)
- Mark Strassman, SVP Industry and Product Management
- Bill Davis, CFO
- Michael Bisignano, SVP General Counsel, Secretary (DEPARTED February, now EVP & General Counsel at CA Technologies)
- Denise Haselhorst, SVP Human Resources
- Tracey Stout, SVP Marketing
Beyond the leadership team, there are three others worth highlighting.
- Brad Koch, VP Product Management (DEPARTED January, now at Instructure)
- David Ashman, VP Chief Architect, Cloud Architecture (DEPARTED February, now CTO at Teaching Strategies, LLC)
- Mark Drechsler, Senior Director, Consulting (APAC) (DEPARTED March, now at Flinders University)
I mentioned Brad’s departure already and the significance in this post. Mark is significant in terms of his influence in the Australian market, as he came aboard from the acquisition of NetSpot.
David is significant as he was Chief Architect and had the primary vision for Blackboard’s impending move into the cloud. Michael described this move in his post last July.
Phil and I are still trying to nail down some of the details on this one, particularly since the term “cloud” is used particularly loosely in ed tech. For example, we don’t consider D2L’s virtualization to be a cloud implementation. But from what we can tell so far, it looks like a true elastic, single-instance multi-tenant implementation on top of Amazon Web Services. It’s kind of incredible. And by “kind of incredible,” I mean I have a hard time believing it. Re-engineering a legacy platform to a cloud architecture takes some serious technical mojo, not to mention a lot of pain. If it is true, then the Blackboard technical team has to have been working on this for a long time, laying the groundwork long before Jay and his team arrived. But who cares? If they are able to deliver a true cloud solution while still maintaining managed hosting and self-hosted options, that will be a major technical accomplishment and a significant differentiator.
This seems like the real deal as far as we can tell, but it definitely merits some more investigation and validation. We’ll let you know more as we learn it.
This rollout of new cloud architecture has taken a while, and I believe it is hitting select customers this year. Will David’s departure add risk to this move? I talked to David a few weeks ago, and he said that he was leaving for a great opportunity at Teaching Strategies, and that while he was perhaps the most visible face of the cloud at Blackboard, others behind the scenes are keeping the vision. He does not see added risk. While I appreciate the direct answers David gave me to my questions, I still cannot see how the departure of Gary Lang and David Ashman will not add risk.
So why are so many people leaving? From initial research and questions, the general answer seems to be ‘great opportunity for me professionally or personally, loved working at Blackboard, time to move on’. There is no smoking gun that I can find, and most departures are going to very good jobs.
Jay Bhatt, Blackboard’s CEO, provided the following statement based on my questions.
As part of the natural evolution of business, there have been some transitions that have taken place. A handful of executives have moved onto new roles, motivated by both personal and professional reasons. With these transitions, we have had the opportunity to add some great new executive talent to our company as well. Individuals who bring the experience and expertise we need to truly capture the growth opportunity we have in front of us. This includes Mark Gruzin, our new NAHE/ProEd GTM lead, Peter George, our new head of product development and a new general counsel who will be starting later this month. The amazing feedback we continue to receive from customers and others in the industry reinforces how far we’ve come and that we are on the right path. As Blackboard continues to evolve, our leaders remain dedicated to moving the company forward into the next stage of our transformation.
While Jay’s statement matches what I have heard, I would note the following:
- The percentage of leadership changes within a 3 month period rises above the level of “natural evolution of business”. Correlation does not imply causation, but neither does it imply a coincidence.
- The people leaving have a long history in educational technology (Gary Lang being the exception), but I have not seen the same in the reverse direction. Mark Gruzin comes from a background in worldwide sales and the federal software group at IBM. Peter George comes from a background in Identity & Access Management as well as Workforce Management companies. They both seem to be heavy hitters, but not in ed tech. Likewise, Jay himself along with Mark Strassman and Gary Lang had no ed tech experience when they joined Blackboard. This is not necessarily a mistake, as fresh ideas and approaches were needed, but it is worth noting the stark differences between the people leaving and the people coming in.
- These changes come in the middle of Blackboard making huge bets on a completely new user experience and a move into the cloud. These changes were announced last year, but they have not been completed. This is the most important area to watch – whether Blackboard completes these changes and successfully rolls them out to the market.
We’ll keep watching and update where appropriate.
The post Blackboard Brain Drain: One third of executive team leaves in past 3 months appeared first on e-Literate.