News aggregator

Philosophers On a Physics Experiment that “Suggests There’s No Such Thing As Objective Reality”

OLDaily - 23 March, 2019 - 06:37
Justin Weinberg, Daily Nous, Mar 23, 2019

MIT Technology Review recently published an article entitled “A quantum experiment suggests there’s no such thing as objective reality.” Of course, the suggestion fell far short of a proof, but that didn't stop people from speculating. I personally fall into the 'no objective reality' camp, but not because of quantum physics. Anyhow, what's interesting about this set of short articles is that they make clear how important the interpretation of any objective evidence is. No data, no matter how concrete, speaks for itself; what it means depends on what we recognize it to be.

Web: [Direct Link] [This Post]

EthicalML/xai

OLDaily - 23 March, 2019 - 06:37
GitHub, Mar 23, 2019

From the website: "XAI is a Machine Learning library that is designed with AI explainability in its core. XAI contains various tools that enable for analysis and evaluation of data and models. The XAI library is maintained by The Institute for Ethical AI & ML, and it was developed based on the 8 principles for Responsible Machine Learning." This may all seem pretty esoteric, but I can imagine a student assignment of the (far) future: use your AI to automatically generate a response to a Reddit discussion and then employ the AI to explain the reasoning for your AI-generated response.

Web: [Direct Link] [This Post]

Could Remixing Old MOOCs Give New Life to Free Online Education?

OLDaily - 23 March, 2019 - 06:37
Jeffrey R. Young, EdSurge, Mar 23, 2019

Remixing open educational resources has been invented by a Harvard professor. As EdSurge reports, "The idea comes from Robert Lue, a biology professor at Harvard University who was the founding faculty director of HarvardX, the college’s effort to build MOOCs. He’s leading a new platform called LabXChange that aims to let professors, teachers or anyone mix together their own free online course from pieces of other courses." Until this point, we have not really had any idea what to do with our old MOOCs, and could only rebuild them from scratch. Thanks to a $6.5-million grant from the Amgen Foundation, that's all changed.

Web: [Direct Link] [This Post]

16 Open Learning Resources that Go Beyond the Textbook

Campus Technology - 22 March, 2019 - 20:22
The concept of "open learning" encompasses far more than what's found in a textbook. These sources provide other kinds of resources that will boost your students' learning.

Smart Assistants Helping to Drive Growth of Wearables

Campus Technology - 22 March, 2019 - 18:59
Wearable devices will continue to see healthy growth over the next five years, driven by new use cases, new devices and the rise of smart assistants.

Scientists rise up against statistical significance

OLDaily - 22 March, 2019 - 02:37
Valentin Amrhein, Sander Greenland, Blake McShane, Nature, Mar 21, 2019

I am definitely of the same sentiment as these authors, especially when it comes to the 'proofs' of non-hypotheses (like: there's no such thing as learning styles) to which we have been subjected over the years. Here's what the authors say: "We’re frankly sick of seeing such nonsensical ‘proofs of the null’ and claims of non-association in presentations, research articles, reviews and instructional materials." Underlying this is a reaction against the idea of a statistical 'proof' of an all-or-nothing statement. "Inferences should be scientific, and that goes far beyond the merely statistical...  eradicating categorization will help to halt overconfident claims, unwarranted declarations of ‘no difference’ and absurd statements about ‘replication failure’ when the results from the original and replication studies are highly compatible."

Web: [Direct Link] [This Post]

How do you do that online thing?

OLDaily - 22 March, 2019 - 02:37
Laura Ritchie, Mar 21, 2019

You start either with an audience to speak to, or something to say. From there, you either tailor your message to your audience, or (better) you find an audience for what you have to say. Most of this video interview is focused on the latter, and people teaching students how to connect online will find this video, and the post it's attached to, to be quite a useful resource. Having watched the video, I now see how I could maybe have been more useful, for example, by referring to something I wrote ages ago, How to be Heard. I will say, though, that Jonathan Worth has provided the right advice.

Web: [Direct Link] [This Post]

The New Social Network That Isn’t New at All

OLDaily - 22 March, 2019 - 02:37
Mike Isaac, New York Times, Mar 21, 2019

It's a bit funny to be posting a reference to this article in an email newsletter that has been publishing for almost two decades, but there you have it: "My new social network is an email newsletter," writes Mike Isaac. "Every week or so, I blast it out to a few thousand people who have signed up to read my musings." This is part of the trend away from sharing on public platforms, he writes. "Now, more of us are moving toward private modes of sharing: a Slack group instead of a tweet; an encrypted Signal message instead of a status update." Of course that might be because most newsletters (including mine) are redirected into oblivion by platform-based (and ad-supported) algorithms.

Web: [Direct Link] [This Post]

Online Course Design Rubrics, Part 3: Now what?

e-Literate - 21 March, 2019 - 23:35
Part 3 of 3: NOW WHAT?

In Part 1 of this series we looked at and compared seven online course design rubrics as they are today. In Part 2 we looked at why these rubrics have become more important to individuals, programs, institutions, and higher education systems. In this segment, Part 3, I'll review what's missing from the rubrics, what's next, and how various stakeholders are going, or should go, beyond what the current course design rubrics assess.

Ease of use (product)

Most of the rubrics have been shared with a Creative Commons license, making it easy to use or adapt them as a part of your institution's online course redesign or professional development efforts. However, many of the rubrics also pose some challenges, two of which I'll cover here: barreling and organization.

Criterion Barreling: Possibly to reduce the number of criteria and/or to shorten their length, some of the rubrics use criteria that evaluate more than one aspect of a course. This issue, known as barreling, can make it difficult to use the rubric accurately to review a course, since a course may meet one part of a criterion, but not another. The Blackboard Exemplary Course Program Rubric addresses this challenge by assigning higher scores "for courses that are strong across all items in [a given] subcategory." In their next round of revisions, rubric providers should investigate ways to decouple barreled criteria or move to holistic rubrics that ask reviewers to count the number of checkboxes checked in a certain category.
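
To make the checkbox-count idea concrete, here is a minimal Python sketch of scoring a subcategory by counting the criteria a course meets and mapping that count to a holistic rating, in the spirit of the Blackboard approach described above. The criteria and rating labels are hypothetical, not drawn from any actual rubric.

    # Hypothetical example: score a rubric subcategory holistically by counting
    # met criteria (cf. Blackboard's higher scores for courses that are strong
    # across all items in a subcategory). Criteria and labels are illustrative.
    subcategory = "Course Orientation"
    criteria = {
        "Welcome message present": True,
        "Instructions are clear": True,
        "Links to relevant campus policies": False,
    }

    met = sum(criteria.values())
    if met == len(criteria):
        rating = "Exemplary"
    elif met >= len(criteria) / 2:
        rating = "Accomplished"
    else:
        rating = "Needs improvement"

    print(f"{subcategory}: {met}/{len(criteria)} criteria met -> {rating}")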

Categorical Organization: As discussed when comparing the rubrics in Part 1 (WHAT?), each rubric provider organizes their course design criteria differently and has a different number of categories. This makes it difficult to go back and find what you need to fix. For example, most of the rubrics put criteria about clear instructions and links to relevant campus policies at the beginning, but Blackboard puts them at the end in its Learner Support category. Ultimately, the categories should be seen as a messy Venn diagram rather than a linear list.

To increase meaning and motivation among online teachers and instructional designers, rubric providers should provide online versions of the rubrics to allow people to sort the criteria in different ways. For example, instructors and course designers could follow the backward design process as they build or redesign an online course. They would start with the objectives or outcomes, then confirm they ask students to demonstrate achievement of those objectives (assessment), then make sure they provide opportunities to practice (activities, interactivity, and assignments), then create and find course materials to support reaching the outcomes (content).
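
As a sketch of what a sortable rubric might look like, the short Python snippet below tags each criterion with a category and re-orders the list to follow the backward design sequence just described. The criteria and category names are hypothetical placeholders, not items from any of the rubrics compared in this series.

    # Hypothetical example: re-sort rubric criteria to follow a backward design
    # sequence (objectives -> assessment -> activities -> content).
    criteria = [
        ("content", "Course materials support the stated outcomes"),
        ("objectives", "Module objectives are measurable"),
        ("activities", "Learners can practice before high-stakes assessments"),
        ("assessment", "Assessments align with the stated objectives"),
    ]

    backward_design_order = ["objectives", "assessment", "activities", "content"]
    rank = {category: i for i, category in enumerate(backward_design_order)}

    for category, criterion in sorted(criteria, key=lambda pair: rank[pair[0]]):
        print(f"[{category}] {criterion}")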

Ease of use (process)

Time for course review: According to one of the CCC system's trained faculty peer reviewers, the course review process involves a) going through an entire online course, b) scoring the rubric, and c) providing useful and actionable written feedback. This takes her an average of ten hours per course, which matches my own experience--if you are going through an entire course, it's going to take time. Like the CCC, institutions should train and compensate peer reviewers for the considerable time it takes to support their colleagues.

Tools for course review: Earlier I mentioned that online course design rubrics a) take time to use, and b) do not make it easy to link from document-based rubrics and feedback to elements within an online course. LMS vendors should support online course review by repurposing existing LMS rubric tools and allowing reviewers to share feedback with online instructors. This would involve creating a rubric that sits above a course to review the course as a whole, as opposed to rubrics that sit within a course to review assignments. Reviewers should also be able to tag specific course materials, activities, and assessments in reference to the rubric scores and/or written feedback. If we can annotate PDF documents, images, and video clips (e.g., Classroom Salon, Timelinely), we should be able to do the same with an online course.

Time for course revision: The course revision process takes time—several initiatives tied to the rubrics ask faculty to go through the review and redesign process over an entire academic term or summer break before implementing the changes. Unless they are given release time, faculty complete this work in addition to their typical full load. For itinerant lecturers, this workload is spread across multiple institutions. Stipends can motivate people to do the work, but release time may actually be more valuable for reaching the desired redesign goals quickly.

Tools for course revision: Most of the organizations that have created these rubrics also offer related professional development and/or course redesign support—Quality Matters, SUNY, the CCC system's @ONE unit, the CSU system's Quality Assurance project, and UW LaCrosse all offer some level of training and support for people redesigning online courses. Systems like the CCC have moved to local Peer Online Course Review processes to address the bottleneck effect of one central organization having to support everyone.

Exemplars

Only one of the rubrics—the Cal State system's QLT rubric—has a criterion related to showcasing samples of exemplary student work so students know what their work should look like. However, all of the online course design rubric providers should showcase what meeting and exceeding each rubric criterion looks like. By providing exemplars—courses, modules, materials, assessments, and activities—to novice and veteran online faculty alike, these initiatives would make it easier for those faculty to design or redesign their courses. For example, SUNY's OSCQR initiative devotes a section of its website to Explanations, Evidence & Examples--there are links to examples for some (but not all) of the 50 criteria and a call for visitors to share their own examples through the OSCQR Examples Contribution Form. Other rubric providers may fold examples into their professional development workshops and/or resources, but public libraries of examples would allow a larger number of faculty to benefit.

Engagement

In the Limitations and Strengths section in Part 1, I mentioned that the majority of the online course design rubric criteria focus on reviewing a course before any student activity begins. Even criteria related to interaction and collaboration measure whether or not participation requirements are explained clearly or collaboration structures have been set up. However, when my program at SF State completed an accreditation application to create a fully online Master's degree, one application reviewer made this comment and request: "Substantive faculty initiated interaction is required by the Federal government for all distance modalities. Please specifically describe how interaction is monitored and by whom." Some institutions have created tools to estimate the time that will be devoted to student engagement.
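
As a rough sketch of how such an estimator might work, the Python function below converts a week's planned workload into student hours. The reading and writing rates are illustrative assumptions, not figures from any published tool.

    # Hypothetical example: estimate weekly student engagement time from a
    # course's planned workload. All rates below are illustrative assumptions.
    READING_PAGES_PER_HOUR = 20   # assumed average reading rate
    WRITING_WORDS_PER_HOUR = 250  # assumed average drafting rate

    def weekly_hours(reading_pages, writing_words, discussion_posts,
                     minutes_per_post=30, media_minutes=0):
        """Return an estimate of student hours for one week of the course."""
        hours = reading_pages / READING_PAGES_PER_HOUR
        hours += writing_words / WRITING_WORDS_PER_HOUR
        hours += discussion_posts * minutes_per_post / 60
        hours += media_minutes / 60
        return round(hours, 1)

    # e.g., 40 pages of reading, a 500-word reflection, two discussion posts,
    # and 45 minutes of lecture video:
    print(weekly_hours(40, 500, 2, media_minutes=45))  # -> 5.8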

If both the research literature and the accreditation bodies state that interaction, community, and the like are critical to online student persistence and success, then the online course design rubric providers should provide more criteria for and guidance about reviewing faculty-student and student-student interaction after the course has begun.

Further still, LMS providers need to make it possible for instructors to see an average feedback response time. The research shows that timely feedback is critical, but instructors do not have a dashboard that lets them see how long they take to rate or reply to students' discussion posts, or post grades for assignments.
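
Until such a dashboard exists, the underlying metric itself is simple. Here is a minimal Python sketch that computes an average feedback response time from pairs of timestamps; it assumes a hypothetical export of student-post and instructor-reply times and is not tied to any real LMS API.

    # Hypothetical example: average feedback response time from
    # (student_post_time, instructor_reply_time) pairs, e.g., parsed from a
    # discussion-forum export. Not based on any actual LMS data format.
    from datetime import datetime, timedelta

    pairs = [
        (datetime(2019, 3, 18, 9, 0), datetime(2019, 3, 18, 21, 30)),
        (datetime(2019, 3, 19, 14, 15), datetime(2019, 3, 20, 8, 45)),
        (datetime(2019, 3, 20, 19, 0), datetime(2019, 3, 21, 7, 0)),
    ]

    total = sum((reply - post for post, reply in pairs), timedelta())
    average = total / len(pairs)
    print(f"Average feedback response time: {average}")  # -> 14:20:00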

Empowerment

For the most part, the rubrics and related course redesign efforts focus on the instructor's side of the equation. However, some research shows that online learners benefit from online readiness orientations (e.g., Cintrón & Lang, 2012; Lorenzi, MacKeogh & Fox, 2004; Lynch, 2001), and need higher levels of self-directed learning skills to succeed (e.g., Azevedo, Cromley, & Seibert, 2004). Therefore, rubric providers should add more criteria related to things like online learner preparation, scaffolding to increase self-direction, and online learner support.

Equity

While eliminating online achievement gaps is a goal for several state-wide initiatives, none of the rubrics compared in this article address equity specifically and/or comprehensively. In a future post I will outline how Peralta Community College District (Oakland, CA) developed the Peralta Equity Rubric in response to this void. As part of the CCC system, the Peralta team has already begun working with the CVC-OEI's core team [1] and its newest cohort that is focused on equity. Judging by the highly positive (and thankful) reactions the Peralta team receives as it shares the rubric via conference presentations and virtual events, expect to see more institutions add equity to their rubrics in the near future.

Efficacy

As stated in the Evidence of impact section in Part 2, more rubric providers need to go beyond what got us to this point—i.e., "research supports these rubric criteria"—and validate these instruments further. It also would help the entire field for existing research to be made more visible. Reports like the Quality Matters updates to "What We're Learning" (Shattuck, 2015) are a start. Now we need to see more research at higher levels of the Kirkpatrick scale, as well as more granular studies of how course improvements impact different sets of students (e.g., first-generation, Latinx, African-American, academically underprepared). Here is a list of impact research efforts to watch in the near future:

  • The CCC's newly branded California Virtual Campus-Online Education Initiative plans to conduct more research in its second funding period (2018-2023), which just began last fall.
  • Fourteen CSU campuses are participating in the SQuAIR project—Student Quality Assurance Impact Research—to determine "the impact of QA professional development and course certification on teaching performance and student success in 2018-19 courses." The SQuAIR project will analyze course completion, pass rates, and grade distribution data, along with student and faculty survey results.

Enforcement

Colleges and universities, community college districts, and state-wide higher education systems should review these rubrics (if they do not already use one) and kick off adoption initiatives that include training for faculty and staff alike. (Kudos if you are already doing this!) These efforts take time, money, and institution-level buy-in, but if online course enrollments continue to increase at the current rate, then institutions must invest in increasing student success across the board, not just with early adopters and interested online teachers.

I'm not sure why all of the NOW WHAT? elements above begin with E, but I am sure that this list is not exhaustive. Further, while I primarily have focused on the rubrics themselves to maintain a reasonable scope for a blog post series, these rubrics are rarely used in a vacuum--the rubric providers and the institutions that adopt them have built robust professional development efforts that use the rubrics in different ways to increase student success. Keep an eye on the MindWires blog for more installments related to online course quality and supporting online student success.

References for citations in this three-part series

  1. Disclosure: OEI is a client of MindWires, and I have been working directly with Peralta CCD.

XSEDE and Aristotle Cloud Federation Offer Joint Cloud Implementation Service

Campus Technology - 21 March, 2019 - 21:23
The Extreme Science and Engineering Discovery Environment is partnering with the Aristotle Cloud Federation to help institutions deploy the OpenStack cloud on their campuses.

Makerbot Method Aims to Bridge Gap Between Desktop and Industrial 3D Printers

Campus Technology - 21 March, 2019 - 17:39
Makerbot will begin shipping its Method 3D printer tomorrow. Starting at $6,499, the new model offers roughly double the print speed of a conventional desktop 3D printer with improved precision.

Kaltura Intros Video Messaging Tool

Campus Technology - 21 March, 2019 - 17:22
Video platform provider Kaltura has introduced Kaltura Pitch, a video messaging tool that allows users to create and share personalized video messages via desktop or mobile device, and provides data reports for measuring recipient engagement.

Can technology help bridge the gap between study and employment?

JISC News - 21 March, 2019 - 15:59
21 March 2019

Launched in 2018 with investment from Jisc, the Placer app does just that - and it’s been nominated for the UK Top 100 Social Entrepreneur Index.

How many graduates does it take to change a lightbulb? Four. One to design a nuclear-powered bulb that never needs changing, one to write an essay on the significance of lightbulbs in the 20th century, one to choreograph an interpretive dance on the theme of darkness, and one to call an electrician.

It’s a bad joke but it illustrates a point: graduate recruiters often highlight the gap between academic qualifications and workplace experience. While most students emerge from higher education (HE) with a solid academic grasp of their discipline, all too often they are not deemed ‘work-ready’ by prospective employers.

The value of work experience

Great effort has gone into addressing this problem in recent years. Within universities, strengthening links with industry has made undergraduate study more relevant and meaningful while also helping young people develop contacts in their chosen sector.

Students find that work experience placements are increasingly important too, giving them the chance to explore careers that may suit them, to learn how to take the initiative and collaborate in the workplace, and to build a CV that stands out from the crowd.

Creating opportunities

Placer can help. Matching students with work experience opportunities, the app began with a real problem and some solid research.

Four years ago, the social entrepreneur David Barker was working with the National Centre for Universities and Business to explore how tech might help address problems in graduate employment. He picked up on the 2013 government FutureTrack research, which found that students who gain work experience during their time at university are both more likely to get a better degree result and less likely to be unemployed or underemployed. David says,

“The problem was, there weren’t enough placement opportunities for young people. We needed to innovate a new model.”

Making a positive impact

What emerged is an app and platform that partners with universities and engages employers of all sizes, locally and nationally, to publish opportunities for work experience. These range from insight days to summer internships and full years in industry. David comments,

“We aim to give all students in HE access to meaningful work experience to increase their employability and business network."

Initially funded by the Cisco Foundation, Placer won further investment from Jisc and Unite Students in 2017. The app went live with six universities in October 2018, featuring employers including the BBC, Unilever, Sage, and HSBC.

Placer now has plans for expansion, looking to bring in more universities and work with a greater number of SMEs. It’s going from strength to strength. As well as attracting support from ambassadors Peter Estlin, Lord Mayor of the City of London, and membership bodies the Institution of Engineering and Technology and the Creative Industries Federation, Placer Ltd has just been nominated for the UK Top 100 Social Entrepreneur Index.

“We’re delighted to see Placer gaining recognition for making a positive impact on students’ lives and graduates’ employability,”

says Sue Attewell, Jisc’s head of change.

Shining a light on societal problems as well as practical ones, David describes the company’s ‘triple-bottom-line’ of growth, profit, and doing good: “We want to prove to employers the value UK graduates can bring to their business, and help students of all backgrounds find meaningful work.”

Find out more about Placer on their website.

The form of shame

Martin Weller - 21 March, 2019 - 10:29

As the latest Brexit crises (it is not just one single crisis, but a series of crises now) unfolded this week, each more worrying, bizarre and removed from rationality than the previous one, I’ve noticed one overriding emotion emerging in myself. From the sludgy mix of anger, depression, puzzlement, hysteria, the one that emerged like a taste of celery overriding everything else was shame. I have never felt so ashamed to be British. I appreciate that nationality is a social, even imaginary construct, and I have never held romantic notions about Britain’s past. But I am, in my way, quite “British” in character – reserved, emotionally crippled, polite, fond of beer and pie. Like most people, I am a product of my culture, and if you’ve met me, you will know that there’s a streak of “British” running through my personality.

Every nation has its characteristics, and they are always a mixture of positive and negative elements. Having worked on many European projects, one sees that although national stereotypes are too simplistic, there is also an element of truth in them. In most European bids the British partner is usually seen as hard working, not necessarily imaginative, collegiate, humorous, but also usually a monoglot and a bit off to one side.

But this week more than any other, all of the counters I might have given to the negative aspects of Britishness and British history, have finally evaporated. All that remains is shame: shame that we inflicted this devastating crisis on ourselves; shame that we gave charlatans, racists and fools such prominence; shame that we have diminished the future for my daughter and her generation; shame that we have been so utterly rude and contemptuous to our European neighbours; shame that our cherished political systems have been so incapable of preventing the fiasco from continuing; shame that we look to the past, Empire and war instead of to the future; shame that the only arguments people have left are based on selfishness and delusion. And finally shame that I am part of that mix. I would like to think “this is not who we are”, but it seems that in fact, this is exactly who we are. It has now been revealed, the UK has taken on its final form, and it’s not attractive. It is cushioned somewhat by being in Wales, as most of the bluster comes from England. But Wales still voted to leave, still commits the sin of thinking there is some rational debate to be had with extremists. And Wales will suffer (more) the same fate, as part of Britain. And when it comes to Britain, I’ve finally come to feel that I no longer have any relationship with this pompous, ridiculous nation.

Anyway, here’s an Irish comedian capturing many aspects of Britishness:

Online Course Design Rubrics, Part 2: So what?

e-Literate - 21 March, 2019 - 01:00

Part 2 of 3: SO WHAT?

Overview

In Part 1 of this series, I compared seven online course design rubrics that are used by multiple institutions to improve the quality, accessibility, and consistency of individual courses. The institutions do this with an eye toward offering online degree programs, credentials, and certificates. Rubric comparisons are all well and good, but why is this an important topic now?

Demand > Persistence > Success

First, it's about the numbers. By now, most people watching distance education have heard the statistics—while overall college enrollment is static or declining, enrollment in online courses and programs is growing dramatically. In Fall 2015, 5.95 million students—almost a third (29.8%) of higher education students in the United States—enrolled in at least one distance education course (NCES, 2018).

Similarly, enrollment in online courses at a CCC district I work with doubled in five years, from roughly 9% to 18%. However, that growth in enrollment is tempered by the district's rates of student retention (roughly 75% complete a course) and student success (roughly 67% pass a course). Until the district can improve those rates (e.g., through improving course quality and supporting online learners), it is reluctant to add more online course offerings.

In that Feb 12 article referenced in Part 1, Phil Hill noted that the improvement in online student success rates throughout the CCC system can be only partially attributed to the Online Education Initiative. Looking at that data further, success rates increased by 13% over a ten-year span. With only two in three students passing online courses, though, there is room for more improvement. (The fact that face-to-face success rates did not change over those ten years is a topic for another blog post.)

Student Success = $

It's also (or soon going to be) about the money. As more students choose online courses and higher education funding models begin to stress successful completion as much as or more than enrollment, institutions cannot just leave it up to chance that online learners will persist and succeed on their own. If one in five course enrollments is online, and one in three students will take online courses, and the numbers keep growing, the rubrics and related efforts will play an even larger role.

Moreover, while the rubrics focus at the course level, successful completion of online courses contributes to completion of degrees. The CSU system's Graduation Initiative 2025, the CCC system's Vision for Success, and the SUNY Completion agenda all point toward system-wide efforts to increase degree completion and eliminate equity and achievement gaps. It should be no surprise, then, that all three of these systems—three of the largest higher education systems in the country—have launched online course quality projects featuring rubrics.

Course design rubrics and related professional development are becoming as important to the institutions themselves as to the students they serve, and the large-scale initiatives that support them all still have work to do. The Public Policy Institute of California summed up the situation fairly well:

Our research suggests that a more data-driven, integrated, and systematic approach is needed to improve online learning. It is critical to move away from the isolated, faculty-driven model toward a more systematic approach that supports faculty with course development and course delivery. A systematic approach better ensures quality by creating teams of experts with a range of skills that a single instructor is unlikely to have completely. (Johnson, Cuellar Mejia, & Cook, 2015, p. 3)

Multiple achievement gaps exist

As colleges and universities address rapidly increasing distance education enrollments, they must also address two achievement gaps that often appear in the research:

  • Overall, online learners have lower retention and success rates than learners in face-to-face courses (Xu & Jaggars, 2014).
  • Achievement gaps are larger for some subpopulations of online learners—e.g., students who are male, who are academically underprepared, or who belong to specific ethnicity groups (Jaggars, 2014).

Evidence of scale

This comparison has become important for another reason: scale. Quality Matters and Blackboard serve international networks of systems and individual institutions with their rubrics, related professional development, and awards. Meanwhile, the California Community College (CCC) system, the California State University (CSU) system, the State University of New York (SUNY) system, and the Illinois Online Network (ION) all have led far-reaching efforts with their own internally created rubrics and professional development. Within the CCC system alone, the Online Education Initiative's Consortium serves 56 campuses [1], all of which have committed to reviewing and redesigning—with the rubric—20% of their online course offerings over two years.

Evidence of impact (and the need for much more)

A number of the rubric providers have made an effort to evaluate their respective rubrics' impact. Currently, Quality Matters publishes "What We're Learning" reports to synthesize the research about the impact of its rubric (e.g., Shattuck, 2015). Quality Matters also shares the largest number of studies focused on its impact—through the Quality Matters Research Library, which can be searched by standard or keyword, and a set of Curated Resources. Of these 25 curated studies, four appear to look at the highest level of Kirkpatrick's Four Levels of Evaluation—i.e., they look at the end results, or to what extent redesigning a course based on the rubric affects students completing and/or passing a course. An equal number of studies investigate Kirkpatrick's third level, or changes in faculty behavior as a result of training and exposure to the rubric. The largest subset of these curated studies focuses on learner or teacher perceptions, motivation, and satisfaction, but I have not reviewed the entire library!

In its 2018 grant proposal, the Foothill De Anza CCD team stated that "the OEI courses that are aligned to the [OEI Course Design] rubric, checked for accessibility and fully resourced have an average student success rate of 67.4%, which is 4.9 percentage points higher than the statewide average online success rate of 62.5%" (Nguyen, as cited in Foothill De Anza CCD, 2018, p. 4). Anecdotally, interviews with executive CCC stakeholders have identified that some online faculty also improved the quality of their face-to-face courses based on what they learned from the rubric.

That said, the evaluation of OEI's impact on student success is just a start. While that evaluation showed that rubric-reviewed and redesigned courses had better success rates compared to other online courses, the study did not identify a) which learners performed well or b) what aspects of the course design and/or facilitation helped specific subgroups of students who do not persist or succeed as much in online courses.

Overall, however, we still know very little about the impact of these rubrics. With access to learner analytics capabilities in online learning environments, institutions, programs, and individual instructors should be able to track online course activity and results in real time. In a recent email exchange with a colleague who works with big data, I proposed tags for learner analytics data within a learning management system. Here are three sets of tags that correlate to three of the rubric comparison categories with the most criteria—Instructional Design & Course Materials (Content), Collaboration and Interaction (Interactivity), and Assessment—followed by a short sketch of how such tagged events might be recorded:

  • find/create/share/review/reflect on content
    • content + access: student downloads files, accesses online media via links, visits an LMS content page
    • content + share: student shares a new, unique, external resource related to course topics--e.g., via discussion post or common page (Google doc)--that he or she has found or created
    • content + review: student plays media--e.g., while you cannot guarantee the student is actually watching a video, you can tell that it has played for X minutes and/or it has been played Y times
    • content + engage: student has used digital tools to highlight, annotate, leave questions about text or media
  • engage in an individual activity or interactivity
    • activity + engage: student starts an individual learning activity
    • activity + complete: student completes an individual learning activity, such as a simulation
    • activity + share: student shares a new, unique, external learning activity related to course topics
    • interactivity + initiate: a) student initiates contact with other students--e.g., student sends a message to a group or posts a new discussion thread in a group space (or a general forum for the entire class)--and/or b) student creates an activity or environment for working with other students--e.g., creates a Facebook group, creates a group homepage in Canvas
    • interactivity + support: student helps a peer who has identified a personal obstacle or challenge
    • interactivity + contribute: student completes a task as part of a whole-class activity, group activity, or group project--e.g., reply to a peer in a discussion, submit a file in a group area for others to review
    • interactivity + summarize: student creates a summary of a group discussion, virtual meeting, or project
  • complete assessment (or self-assessment) activities
    • assessment + self: student completes a prescribed self-assessment activity (e.g., practice quiz)
    • assessment + complete: student completes a low-stakes assessment--e.g., quiz--or high stakes assessment--e.g., essay
    • assessment + peer: student provides feedback to another student--e.g., Turnitin peermark assignment
    • assessment + course: student submits feedback about the course--e.g., completes a mid-semester evaluation survey or student evaluation of teaching effectiveness survey, posts feedback for instructor in a general forum
    • assessment + reflect: student submits a reflection in an assessment context--e.g., posts a reflection with an ePortfolio artifact

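To make the tagging scheme above concrete, here is a minimal Python sketch of how such category + action tags might be attached to learner events. The event record is a hypothetical schema for illustration, not an actual LMS data model or API.

    # Hypothetical example: represent learner events with the category + action
    # tags proposed above. Illustrative schema only, not a real LMS API.
    from dataclasses import dataclass, field
    from datetime import datetime

    CATEGORIES = {"content", "activity", "interactivity", "assessment"}

    @dataclass
    class LearnerEvent:
        student_id: str
        category: str   # e.g., "content"
        action: str     # e.g., "share"
        detail: str = ""
        timestamp: datetime = field(default_factory=datetime.utcnow)

        def __post_init__(self):
            if self.category not in CATEGORIES:
                raise ValueError(f"unknown category: {self.category}")

        @property
        def tag(self):
            return f"{self.category} + {self.action}"

    event = LearnerEvent("s123", "content", "share",
                         detail="shared an external article in the class forum")
    print(event.tag)  # -> content + share

Counting these events per student and per tag would let an analyst correlate activity in specific rubric categories with course completion and success.
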
Of course, using LMS analytics data is not the only way to evaluate the effectiveness of rubric-guided course redesign (and universal design) efforts, but it's an avenue that holds promise—especially if we can determine what leads to student success for individual students and different student subpopulations. In my own online class, I emailed a student who earned an A the semester after failing the course to congratulate him on his efforts. He told me one of the biggest factors in his success the second time around was how I had redesigned the instructions for everything—it made everything so much clearer to him. It turns out that between the two semesters I had redone ALL of my content review prompts and discussion and assignment instructions after learning about the Transparent Assignment Template by Mary-Ann Winkelmes from the University of Nevada, Las Vegas. It made me wonder—how many times do those types of changes make an impact without the instructor knowing? It's time we started finding out.

In Part 3 of this three-part series, NOW WHAT?, I point out some new or upcoming research, as well as call for much, much more evidence about the impact of these rubrics and their individual criteria a) at the highest levels of Kirkpatrick's model, b) over time, and c) on specific populations of students.

References for citations in this three-part series

  1. Disclosure: OEI is a client of MindWires.

Los Angeles to Host City of STEM Science Festival

THE Journal - 20 March, 2019 - 22:52
The city of Los Angeles will kick off a major event in April highlighting STEM resources and providing attendees with activities, entertainment and access to companies focused on everything from aerospace to makerspaces.

This is What I Keep Trying to Say…

OLDaily - 20 March, 2019 - 22:37
Tony Hirst, OUseful Info, Mar 20, 2019

Tony Hirst outlines how to set up a remote desktop environment in a Docker container that can be distributed to students, relieving everybody of the need to do things like install Java for the desktop. "We could ship a single container that would allow students to run notebooks, the Genie web UI application, and the DaisyWorld Java application via a browser viewable desktop from a single container and via a single UI," he writes.

Web: [Direct Link] [This Post]

Using Linked Data for Discovery and Preservation

OLDaily - 20 March, 2019 - 22:37
Sayeed Choudhury, EDUCAUSE Review, Mar 20, 2019

The idea of linked data has been around almost as long as the web itself, but the uptake has not been nearly as rapid. Part of the reason for this is that it hasn't been nearly as easy to adopt linked data as it was to adopt HTML. But as this article notes, this is beginning to change with more lightweight approaches to linked data, such as RMap, and with more tools able to take advantage of it, such as Archaeology of Reading and the Arches project. The author urges galleries, libraries, archives, and museums (GLAM) to adopt linked data for their collections, lest they be invisible to the wider community.

Web: [Direct Link] [This Post]
