News aggregator
I’m not sure that there is much of an immediate ‘market’ for these posts on issues around developing an open textbook (or maybe everyone is sensibly enjoying the summer), but for those who may in the future be contemplating self-publishing a textbook, I thought it might be helpful to draw on my experience in authoring both commercial and open textbooks to lay out some guidelines for reviewers of open textbooks.

The issue
I discussed in my previous post the need for independent reviews of a self-published open or academic textbook, and the criteria I used in selecting reviewers.
Commercial publishers, when commissioning reviewers, usually send a letter or a standard document that sets out guidelines for reviewing a book in its first, full draft before printing and distribution, both to ensure consistency between reviewers and to identify for reviewers what the publisher is looking for. Although sometimes the publishing editor will require responses to elements that are specific to a particular book, a number of guidelines are fairly generic.
The situation is somewhat different for a self-published textbook, where it is the responsibility of the author to decide whether to get independent reviews and if so, to provide appropriate guidelines to the reviewers. At the same time, many guidelines will be similar for both types of book, but there are also some aspects of open publishing that require specific guidelines. I have outlined in blue below those that are specific to open textbooks or to my book in particular.
Of course, many reviewers will have their own criteria in assessing a textbook, and they are to be encouraged to use such criteria and make them explicit in the review. At the same time, it is my experience that most reviewers welcome guidelines as to what to comment on, and this is particularly true for an open textbook.
I contacted BCcampus, which obtains independent reviews of all its open textbooks before making them available, and they provided me with a set of questions for reviewers, to which I have added some of my own.

Target audience
It is important first of all for the author to be clear as to the primary audience that is being targeted by the book. In my case, it was faculty and instructors in post-secondary education wishing to ensure that their teaching is relevant to the needs of contemporary learners and students. In another case, it may be first year undergraduate students. So one general question for reviewers is:
To what extent is the book successful in meeting the needs of its primary market?

Other questions for reviewers
- Does the book meet the requirements of a scholarly work? Is it research and evidence-based, and does it provide a critical analysis of the key issues in the field?
- Does it provide evidence-based, practical guidelines for faculty and instructors that will help them improve their teaching?
- Does it cover adequately the main contemporary issues in teaching in a digital age?
- Is the book well written? Does it read well? Is it well organized and structured? Are there errors of grammar or serious typographical errors? Are the graphics and cases appropriately chosen?
- What major changes, if any, are needed before you can recommend this book? What minor changes would you like to see?
- If this book were to be offered to a commercial publisher, would you recommend it for publication?
The last question may seem a little odd, but my aim here is to ensure that the book meets the same standards as commercial publishing, where there is the added risk of financial loss for a commercial publisher if there is no market for the book, or if the book is not good enough to attract new readers over a period of time. While these risks do not apply to free, open textbooks, the fact that a book is judged suitable for commercial publication will carry weight with those looking to ensure that it meets quality standards.
There may be other questions or guidelines specific to your book on which you may want feedback.

Practical considerations
I specified a length for the reviews of between 800 and 1,500 words, and that each review would be covered by a Creative Commons CC-ND license. This means the reviews cannot be changed without permission of the writer of the review. However, reviewers would be free to publish the same review in an academic journal, if they wished, and the review could be re-used, for instance by the author for marketing purposes (but unedited).
I sent out invitations to reviewers within two months of the full publication of the book. Ideally, in hindsight, the invitation should go out almost immediately after full publication, but not before, as it is important for reviewers to see the whole book in context. I gave a suggested deadline of two months to complete the review.
I did not offer a fee for the review, but a small fee may be appreciated, as it is a substantial piece of work if the review is done properly.
I am waiting until all three reviews are submitted before posting them, so that the reviewers act independently and are not influenced by someone else’s review.

Over to you
Do you feel that this process (including selection of reviewers, as covered in the previous post) ensures the same degree of independence and quality of peer assessment as you would find for a commercially published book? If not, what suggestions do you have to improve the process?
Even if this process were followed, do you think that there would still be concerns about adopting an open textbook or referencing it in student work?
Two years ago, I wrote about how D2L’s analytics package looked serious and potentially ground-breaking, but that there were serious architectural issues with the underlying platform that were preventing the product from working properly for customers. Since then, we’ve been looking for signs that the company has dealt with these issues and is ready to deliver something interesting and powerful. And what we’ve seen is…uh…
Well, the silence has ended. I didn’t get to go to FUSION this year, but I did look at the highlights of the analytics announcements, and they were…
OK, I’ll be honest. They were incredibly disappointing in almost every way possible, and good examples of a really bad pattern of hype and misdirection that we’ve been seeing from D2L lately.
You can see a presentation of the “NEW Brightspace Insights(TM) Analytics Suite” here. I would embed the video for you but, naturally, D2L uses a custom player from which they have apparently stripped embedding capabilities. Anyway, one of the first things we learn from the talk is that, with their new, space-age, cold-fusion-powered platform, they “deliver the data to you 20 times faster than before.” Wow! Twenty times faster?! That’s…like…they’re giving us the data even before the students click or something. THEY ARE READING THE STUDENTS’ MINDS!
Uh, no. Not really.
A little later on in the presentation, if you listen closely, you’ll learn that D2L was running a batch process to update the data once every 24 hours. Now, two years after announcing their supposed breakthrough data analytics platform, they are proud to tell us that they can run a batch process every hour. As I write this, I am looking at my real-time analytics feed on my blog, watching people come and go. Which I’ve had for a while. For free. Of course, saying it that way, a batch process every hour, doesn’t sound quite as awesome as TWENTY TIMES FASTER!!!!!
So they go with that.
There was an honest way in which they could have made the announcement and still sounded great. They could have said something like this:
You know, when LMSs were first developed, nobody was really thinking about analytics, and the technology to do analytics well really wasn’t at a level where it was practical for education anyway. Times have changed, and so we have had to rebuild Brightspace from the inside out to accommodate this new world. This is an ongoing process, but we’re here to announce a milestone. By delivering regular, intra-day updates, we can now make analytics far more valuable to you. You can respond more quickly to student needs. We are going to show you a few examples of it today, but the bigger deal is that we have this new structural capability that will enable us to provide you with more timely analytics as we go.
That’s not a whole lot different in substance than what they actually said. And they really needed to communicate in a hype-free way, because what was the example that they gave for this blazing fast analytics capability? Why, the ability to see if students had watched a video.
Really. That was it.
Now, here again, D2L could have scored real points for this incredibly underwhelming example if they had talked honestly about Caliper and its role in this demo. The big deal here is that they are getting analytics not from Brightspace but from a third-party tool (Kaltura) using IMS Caliper. Regular readers know that I am a big fan of the standard-in-development. I think it’s fantastic that an LMS company has made an early commitment to implement the standard and is pushing it hard as a differentiator. That can make the difference between a standard getting traction or remaining an academic exercise. How does D2L position this move? From their announcement:
With our previous analytics products, D2L clients received information on student success even before they took their first test. This has helped them improve student success in many ways, but the data is limited to Brightspace tools. The new Brightspace Insights is able to aggregate student data, leveraging IMS Caliper data, across a wide variety of learning tools within an institution’s technology ecosystem.
We’ve seen explosive growth in the use of external learning tools hooked into Brightspace over the past eighteen months. In fact, we are trending toward 200% growth over 2014. [Emphasis added.] That’s a lot of missing data.
This helps create a more complete view of the student. All of their progress and experiences are captured and delivered through high performance reports, comprehensive data visualizations, and predictive analytics.
Let’s think about an example like a student’s experiences with publisher content and applications. Until now, Brightspace was able to capture final grades but wouldn’t track things like practice quizzes or other assessments a student has taken. It wouldn’t know if a student didn’t get past the table of contents in a digital textbook. Now, the new Brightspace Insights captures all of this data and creates a more complete, living, breathing view of a student’s performance.
This is a big milestone for edtech. No other LMS provider is able to capture data across the learning technology ecosystem like this. [Emphasis added.]
I have no problem with D2L crowing about being early to market with a Caliper implementation. But let’s look at how they positioned it. First, they talked about 200% growth in use of external learning tools in 2015. But what does that mean? Going from one tool to three tools? And what kind of tools are they? And what do we know about how they are being used? OK, on that last question, maybe analytics are needed to answer it. But the point is that D2L has a pattern of punctuating every announcement or talk with an impressive-sounding but meaningless statistic to emphasize how awesome they are. Phil recently caught John Baker using…questionable retention statistics in a speech he gave. In that case, the problem wasn’t that the statistic itself was meaningless but rather that there was no reason to believe that D2L had anything to do with the improvement in the case being cited. And then there’s the sleight of hand that Phil just called out regarding their LeaP marketing. It’s not as bad as some of the other examples, in my opinion, but still disturbingly consistent with the pattern we are seeing. I am starting to suspect that somebody in the company literally made a rule: Every talk or announcement must have a statistic in it. Doesn’t matter what the statistic is, or whether it means anything. Make one up if you have to, but get it in there.
But back to analytics. The more egregious claim in the quote above is that “no other LMS provider is able to capture data across the learning technology ecosystem like this [example that we just gave],” because D2L can’t either yet. They have implemented a pre-final draft of a standard which requires both sides to implement it in order for it to work. I don’t know of any publishers who have announced they are ready to provide data in the way described in D2L’s example. In fact, there are darned few app providers of any kind who are there yet. (Apparently, Kaltura is one of them.) Again, this could have been presented honestly in a way that made D2L look fantastic. Implementing first puts them in a leadership position, even if that leadership will take a while to pay practical dividends for the customer. But they went for hype instead.
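To make the Caliper point concrete, here is a rough sketch of the kind of event a third-party tool like Kaltura might emit when a student pauses a video. The general shape (actor, action, object, event time) follows IMS Caliper, but the specific URIs, field names, and values below are illustrative assumptions, not taken from D2L's or Kaltura's actual implementations; check them against the spec version in use.

```python
import json

# Illustrative Caliper-style media event, as a third-party video tool
# might emit it to an LMS's event store. Exact context/type URIs and
# required fields vary by spec version; treat these as placeholders.
event = {
    "@context": "http://purl.imsglobal.org/ctx/caliper/v1/Context",  # assumed
    "@type": "MediaEvent",
    "action": "Paused",
    "actor": {"@id": "https://example.edu/users/554433", "@type": "Person"},
    "object": {
        "@id": "https://example.edu/videos/1225",
        "@type": "VideoObject",
        "duration": 1420,  # total length in seconds
    },
    "target": {"@type": "MediaLocation", "currentTime": 710},  # pause point
    "eventTime": "2015-07-20T14:05:00Z",
}

# Serialized for transmission to the consuming platform's event endpoint.
payload = json.dumps(event)
print(payload[:60])
```

The point of the example is that both sides must speak this format: the tool has to emit events like this, and the LMS has to consume and store them, which is why a one-sided implementation delivers little until partner tools catch up.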
I can’t remember the last time I read one of D2L’s announcements without rolling my eyes. I used to have respect for the company, but now I have to make a conscious effort not to dismiss any of their pronouncements out-of-hand. Not because I think it’s impossible that they might be doing good work, but because they force me to dive into a mountain of horseshit in the hopes of finding a nugget of gold at the bottom. Every. Single. Time. I’m not sure how much of the problem is that they have decided that they need to be disingenuous because they are under threat from Instructure or under pressure from investors and how much of it is that they are genuinely deluding themselves. Sadly, there have been some signs that at least part of the problem is the latter situation, which is a lot harder to fix. But there is also a fundamental dishonesty in the way that these statistics have been presented.
I don’t like writing this harshly about a company—particularly one that I have had reason to praise highly in the past. I don’t do it very often. But enough is enough already.
Recently I wrote a post checking up on a claim by D2L that seems to imply that their learning platform leads to measurable improvements in academic performance. The genesis of this thread is a panel discussion at the IMS Global conference where I argued that LMS usage in aggregate has not improved academic performance but is important, or even necessary, infrastructure with a critical role. Unfortunately, I found that D2L’s claim from Lone Star was misleading:
That’s right – D2L is taking a program where there is no evidence that LMS usage was a primary intervention and using the results to market and strongly suggest that using their LMS can “help schools go beyond simply managing learning to actually improving it”. There is no evidence presented of D2L’s LMS being “foundational” – it happened to be the LMS during the pilot that centered on ECPS usage.
Subsequently I found a press release at D2L with a claim that appeared to be more rigorous and credible (written in an awful protected web page that prevents select – copy – paste).
D2L Launches the Next Generation of BrightSpace and Strives to Accelerate the Nation’s Path to 60% Attainment
D2L, the EdTech company that created Brightspace, today announces the next generation of its learning platform, designed to develop smarter learners and increase graduation rates. By featuring a new faculty user interface (UI) and bringing adaptive learning to the masses, Brightspace is more flexible, smarter, and easier to use. [snip]
D2L is changing the EdTech landscape by enabling students to learn more with Brightspace LeaP adaptive learning technology that brings personalized learning to the masses, and will help both increase graduation rates and produce smarter learners. The National Scientific Research Council of Canada (NSERC) produced a recent unpublished study that states: “After collating and processing the results, the results were very favourable for LeaP; the study demonstrates, with statistical significance, a 24% absolute gain and a 34% relative gain in final test scores over a traditional LMS while shortening the time on task by 30% all while maintaining a high subjective score on perceived usefulness.”
I asked the company to provide more information on this “unpublished study”, and I got no response.
Hello, Internet search and phone calls – time to do some investigation to see if there is real data to back up the claims.

Details on the Study
The Natural Sciences and Engineering Research Council of Canada (NSERC) is somewhat similar to the National Science Foundation in the US – they are a funding agency. When I called them they made it perfectly clear that they don’t produce any studies as claimed, they only fund them. I would have to find the appropriate study and contact the lead researcher. Luckily they shared the link to their awards database, and I did some searching on relevant terms. I eventually found some candidate studies and contacted the lead researchers. It turns out that the study in question was led by none other than Dragan Gasevic, founding program co-chair of the International Conference on Learning Analytics & Knowledge (LAK) in 2011 and 2012, and he is now at the University of Edinburgh.
The grant was one of NSERC’s Engage grants, which pair researchers with companies, and Knowillage was the partner – they have an adaptive learning platform. D2L acquired Knowillage in the middle of the study, and they currently offer the technology as LeaP. LeaP is integrated into the main D2L learning platform (LMS).
The reason the study was not published was simply that Dragan was too busy, including his move to Edinburgh, to complete and publish, but he was happy to share information by Skype.
The study was done on an Introduction to Chemistry course at an unnamed Canadian university. Following ~130 students, the study looked at test scores and time to complete, with two objectives reported – from the class midterm and class final. This was a controlled experiment looking at three groupings:
- A control group with no LMS, using just search tools and loosely organized content;
- A group using Moodle as an LMS with no adaptive learning; and
- A group using Moodle as an LMS with Knowillage / LeaP integrated following LTI standards.
Of note, this study did not even use D2L’s core learning platform, now branded as BrightSpace. It used Moodle as the LMS, but the study was not about the LMS – it was about the pedagogical usage of the adaptive engine used on top of Moodle. It is important to call out that to date, LeaP has been an add-on application that works with multiple LMSs. I have noticed that D2L now redirects their web pages that called out such integrations (e.g. this one showing integration with Canvas and this one with Blackboard) to new marketing just talking about BrightSpace. I do not know whether this means D2L no longer allows LeaP integration with other LMSs.
The study found evidence that Knowillage / LeaP allows students to have better test scores than students using just Moodle or no learning platform. This finding was significant even when controlling for students’ prior knowledge and for students’ dispositions (using a questionnaire commonly used in Psychology for motivational strategies and skills). The majority of the variability (a moderate effect size) was still explained by the test condition – use of adaptive learning software.
Dragan regrets the research team’s terminology of “absolute gain” and “relative gain”, but the research did clearly show increased test score gains by use of the adaptive software.
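For what it's worth, the two quoted figures are mutually consistent under simple arithmetic. The study's actual group means were not published, so the control-group mean below is a hypothetical value chosen only to show how a 24-point absolute gain and a 34% relative gain fit together:

```python
# Illustrative only: the study's real group means were not released.
# A 24-point absolute gain over a control mean near 70.6 corresponds
# to roughly a 34% relative gain, matching the two figures D2L quoted.
control_mean = 70.6   # hypothetical control-group final test score
absolute_gain = 24.0  # treatment mean minus control mean (points)
treatment_mean = control_mean + absolute_gain

relative_gain = absolute_gain / control_mean
print(round(relative_gain * 100))  # → 34
```

Read this way, "relative gain" is just the absolute gain expressed as a fraction of the control group's score, which is why Dragan's regret about the terminology is understandable: the two numbers describe the same difference.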
The results were quite different between the mid-term (no significant difference between Moodle+LeaP group and Moodle only group or control group) and the final (significant improvements for Moodle+LeaP well over other groups). Furthermore, the Moodle only group and control group with no LMS reversed gains between midterms and finals. To Dragan, these are study limitations and should be investigated in future research. He still would like to publish these results soon.
Overall, this is an interesting study, and I hope we get a published version soon – it could tell us a bit about adaptive learning, at least in the context of an Intro to Chemistry course.

Back to D2L Claim
Like the Lone Star example, I find a real problem with misleading marketing. D2L could have been more precise and said something like the following:
We acquired a tool, LeaP, that when integrated with another LMS was shown to improve academic performance in a controlled experiment funded by NSERC. We are now offering this tool with deep integration into our learning platform, BrightSpace, as we hope to see similar gains with our clients in the future.
Instead, D2L chose to use imprecise marketing language that implies, or allows the reader to conclude that their next-generation LMS has been proven to work better than a traditional LMS. They never come out and say “it was our LMS”, but they also don’t say enough for the reader to understand the context.
What is clear is that D2L’s LMS (the core of the BrightSpace learning platform) had nothing to do with the study, the actual gains were recorded by LeaP integrated with Moodle, and that the study was encouraging for adaptive learning and LeaP but limited in scope. We also have no evidence that the BrightSpace integration gives any different results than Moodle or Canvas or Blackboard Learn integrations with LeaP. For all we know given the scope of the study, it is entirely possible that there was something unique about the Moodle / LeaP integration that enabled the positive results. We don’t know that, but we can’t rule it out, either.
Kudos to D2L for acquiring Knowillage and for working to make it more available to customers, but once again the company needs to be more accurate in their marketing claims.
The post About The D2L Claim Of BrightSpace LeaP And Academic Improvements appeared first on e-Literate.
Michael Feldstein addresses "the EDUCAUSE Learning Initiative’s (ELI’s) paper on a Next-Generation Digital Learning Environment (NGDLE) (OLDaily) and Tony Bates’ thoughtful response to it." He also mixes in copious reference to Jim Groom and the Domain of One's Own project, because it's consistent with the ELI paper. There are three major arguments from Bates that he weighs in on (the wording is Feldstein's, lightly edited by me):
- A potentially heavy and bureaucratic standards-making process vulnerable to undue corporate influence.
- LEGO is a poor metaphor that suggests an industrialized model.
- NGDLE will push us further in the direction of computer-driven rather than human-driven classes.
His response to Bates is pretty much encapsulated in this slightly condescending overview: "Folks who are non-technical tend to think of software as a direct implementation of their functional needs, and their understanding of technical standards flows from that view of the world. As a result, it’s easy to overgeneralize the lesson of the learning object metadata standards failures. But the history of computing is one of building up successive layers of abstraction." The thing is, in most areas, increasing levels of abstraction made it simple to do difficult things, but in education, increasing abstraction made it difficult to do simple things. And that's the core of Bates's argument, and I think Feldstein misses it.
I have talked in the past about how we as a society are developing a new multimedia language (and in the process, reshaping what 'language of thought' theories could possibly mean). We are seeing more and more evidence of this, beginning with this lead story. It's a great set of thought-experiments on how authors could respond to specific audience needs with more useful and informative multimedia responses. Do they work? Yes - as Poynter points out, the most popular features on the New York Times web site were interactives and multimedia, not stories. And the upstart (and excellent) news site Quartz has just launched Atlas, a site for charts and graphics. We won't recognize what we think of as 'learning content' in just a few years, as we move beyond texts and courses and toward engaging and interactive multimedia.[Link] [Comment]
I think this much is true: "The disruptive effect of MOOCs will be felt most significantly in the development of new forms of provision that go beyond the traditional HE market such as professional and corporate training that appeals to employers." And "There is great potential for add-on content services and the creation of new revenue models through building partnerships with institutions and other educational service providers." The big grey box in the diagram, I think, means "we're not sure what happened" and the other box says "this is where we think it's going".[Link] [Comment]
Jorum, self-described as "the UK's largest repository for discovering and sharing Open Educational Resources for HE, FE and Skills," is being closed by Jisc as of September, 2016. There is no solid announcement of what will replace Jorum, if anything - "Jisc will be testing and looking into... (and) exploring..." but not promising anything solid. Jisc explains, "We have consulted with stakeholders, users and the Jorum technical team who agree that with the evolution of apps and platforms which give greater user functionality and interactivity a next generation version will be welcomed." Jorum contains some 16,000 resources. More details are available from Siobhan Burke on the Jorum-Updates list.[Link] [Comment]
No, this is not Dale's Cone (though you'd be forgiven for thinking it is). It is "a framework – for content curation." I've criticized educational researchers' over-reliance on taxonomies in the past; this old saw is equally the villain. What we see here is very similar to Gagne's 9 events model. And like so many models before and after, it's a step-by-step model of how education or learning does or should work. It's very procedural, it's very prescriptive - and it's so utterly wrong. Education is not a linear process. It's not even something we can flow-chart. It's a constant complex and adaptive process, involving and balancing feedback from dozens of elements, pursuing a strange attractor of varying motivations, means and methods.[Link] [Comment]
Alex Reid responds to the push to convert university degrees - even post-graduate degrees, and those in the humanities - into workforce training. I think that nobody disputes the idea that graduates should be properly prepared for post-graduate life. But what does that mean? Reid raises a couple of ideas worth pondering. One, posted by Eric Johnson in the Chronicle, is that business should pay for its own workforce training. This is not a new idea; it has been discussed in these pages here and here, for example. The other is that "rather than creating more hyper-specialized humanities PhDs... we should produce more flexible intellectuals: not 'generalists' mind you, but adaptive thinkers and actors." I think it's hard to argue against this.[Link] [Comment]
Labour Market Information (LMI) is not perhaps the most popular subject to talk about. But with the advent of open and linked data, LMI is increasingly being opened up to wider audiences and has considerable potential for helping people choose and plan future careers and education programmes, as well as for use in research, exploring future skills needs, and social and economic planning.
This is a video version of a presentation by Graham Attwell at the Slovenian ZRSZ Analytical Office conference on “Short-term Skills Anticipations and Mismatch in the Labour Market”. Graham Attwell examines ongoing work on mid and long term skills anticipation in the UK. He draws on work being undertaken by the UK Commission for Employment and Skills and the European EmployID project, looking in the mid term at future skills needs and in the longer term at the future of work. He explains the motivation for undertaking these studies and their potential uses. He also explains briefly the data sources, statistical background and barriers to the work on skills projections, whilst emphasising that they are not the only possible futures and can best serve as a benchmark for debate and reflection that can be used to inform policy development and other choices and decisions. He goes on to look at how open and linked data is opening up more academic research to wider user groups, and presents the work of the UKCES LMI for All project, which has developed an open API allowing the development of applications for different user groups concerned with future jobs and future skills. Finally he briefly discusses the policy implications of this work and the choices and influence of policymakers in influencing different futures.