News aggregator
This picture was taken in April 2011 at the start of the Plymouth Enhanced Learning conference (Pelecon), which was an annual learning technology event I chaired at Plymouth University. Our opening keynote speaker that year was Professor Stephen Heppell, and our Deputy Vice Chancellor was Bill Rammell.
Professor Stephen Heppell has been a mainstay on the keynote circuit for some time, thanks to his wide-ranging and innovative research on education environments and learning technology. He has influenced my own work, encouraging me to be more aware of the entire learning environment. Stephen's research has resulted in some very useful insights into how children learn and why they don't. Consider, for example, his claim that red lighting in the morning wakes students up, while blue lighting in the afternoon calms them down after the lunch break. He also suggests that the entire sensory experience of school, including odours and the configuration of wall spaces, can positively influence children's learning. What resonates most for me, though, is his statement that 'everything technology touches grows exponentially'.
Before joining Plymouth University as Deputy Vice Chancellor, Bill Rammell served as Member of Parliament for Harlow from 1997 to 2010. Among his other roles in government was his tenure as Minister of State for Higher Education under Labour prime ministers Tony Blair and Gordon Brown. During his all-too-short stay at Plymouth, Bill developed our student experience services and was also responsible for international developments. I will never forget his great spirit of service and his willingness to go the extra mile. He not only opened our conference that day, but also returned twice more during the three-day event to see how we were doing. He left the university in 2012 to become Vice Chancellor of the University of Bedfordshire.
Coming soon: Selfie number 8.
Photo by Jason Truscott on Flickr
Selfie number 9 by Steve Wheeler was written in Plymouth, England and is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License. Posted by Steve Wheeler from Learning with e's
But it is not final and may still undergo many changes.
After countless efforts (at times we doubted we would manage to get it online), we can now say that issue 46 of RED, a special issue devoted to "computational thinking", will be published. It was preceded by a series of posts on Hypotheses, Redes Abiertas and other blogs, by sessions on Academia.edu, ResearchGate and Mendeley, and by debates and comments on general-purpose social networks.
The planned titles and authors are:
Presentación
Walter Bender. Squeakland Foundation, Sugar Labs, Massachusetts Institute of Technology. USA.
Claudia Urrea. Squeakland Foundation, Sugar Labs, Massachusetts Institute of Technology. USA.
Miguel Zapata-Ros. Universidad de Murcia. España.

Pensamiento computacional y alfabetización digital
Miguel Zapata-Ros. Universidad de Murcia. España.

Turtle Sensors: How open hardware and software can empower students and communities
Tony Forster, Guzmán Trinidad, Andrés Aguirre, Facundo Benavides, Federico Andrade, Alan Aguiar, Gonzalo Tejera y Walter Bender. Squeakland Foundation, Sugar Labs, Massachusetts Institute of Technology. USA.

Robotics in voxel sandbox games: opportunities for mixing body-syntonic reasoning and game-based learning
Miguel Ángel Sicilia. Universidad de Alcalá. España.

Pensamiento Computacional a través de la Programación: Paradigma de Aprendizaje
Xabier Basogain Olabe. Universidad del País Vasco / Euskal Herriko Unibertsitatea. España.
Miguel Ángel Olabe Basogain. Universidad del País Vasco / Euskal Herriko Unibertsitatea. España.
Juan Carlos Olabe Basogain. Christian Brothers University. Estados Unidos de América.

Going from Bits to Atoms: Programming in Turtle Blocks JS and Personal Fabrication in Youth Maker Projects
Josh Burker.

Robótica Educativa. La programación como proceso.
José Miguel García. CODICEN – ANEP. Uruguay.

Entornos de programación no mediados simbólicamente para el desarrollo del pensamiento computacional. Una experiencia en la formación de profesores de Informática de la Universidad Central del Ecuador
Hamilton Omar Pérez Narváez. Universidad Central. Ecuador.
Rosabel Roig-Vila. Universidad de Alicante. España.

Dr. Scratch: Análisis Automático de Proyectos Scratch para Evaluar y Fomentar el Pensamiento Computacional
Jesús Moreno-León. Universidad Rey Juan Carlos. España.
Gregorio Robles. Universidad Rey Juan Carlos. España.
Marcos Román-González. Universidad Nacional de Educación a Distancia (UNED). España.

El pensamiento computacional y las nuevas ecologías del aprendizaje
Jesús Valverde Berrocoso. Universidad de Extremadura. España.
María Rosa Fernández Sánchez. Universidad de Extremadura. España.
María del Carmen Garrido Arroyo. Universidad de Extremadura. España.

Enseñando a programar: un camino directo para desarrollar el pensamiento computacional
Patricia Compañ-Rosique. Universidad de Alicante. España.
Rosana Satorre-Cuerda. Universidad de Alicante. España.
Faraón Llorens-Largo. Universidad de Alicante. España.
Rafael Molina-Carmona. Universidad de Alicante. España.

Estudio sobre diferencias de género en las competencias y las estrategias educativas para el desarrollo del pensamiento computacional
Elisenda Eva Espino Espino. Instituto Universitario de Estudios de las Mujeres (IUEM). Universidad de La Laguna. España.
Carina Soledad González González. Instituto Universitario de Estudios de las Mujeres (IUEM). Universidad de La Laguna. España.

Representaciones de estudiantes de primaria y secundaria sobre las Ciencias de la Computación y su oficio
Ma. Cecilia Martinez. CONICET – Universidad Nacional de Córdoba. Argentina.
Ma. Emilia Echeveste. CONICET – Universidad Nacional de Córdoba. Argentina.

Rendimiento de los alumnos en el modelo 1 a 1
Luis Rodolfo Lara. Universidad Nacional de Catamarca. Argentina.
Marisa Elizabeth Krenz. Universidad Nacional de Catamarca. Argentina.
Héctor Fernando Ortiz Avendaño. Universidad Nacional de Catamarca. Argentina.
Once again, many thanks for your collaboration.
Non-disclosure, plausible deniability and lack of transparency in leadership: UBC and the Duffy trial
Even if you have been holidaying in Outer Mongolia, you are probably aware (if you are Canadian) of the trial of Senator Duffy and the sudden resignation of the President of the University of British Columbia. These two seemingly unrelated events, however, have common themes which I wish to explore.
First, let me be clear. I have no inside information on either event. I don’t know whether or not the Prime Minister knew about the $90,000 payment to Senator Duffy by his chief of staff, nor the ‘real’ reason for Arvind Gupta’s resignation from his position as President of UBC, only 13 months into a five-year term. But that is exactly my point. Other than those on the ‘inside’, no-one knows. And we should.

Plausible deniability
We don’t know whether Stephen Harper was a party to the deception perpetrated by the Prime Minister’s Office in making it appear that the Senator had repaid his expenses, because the whole premise of the PMO is to enable ‘plausible deniability’ for the Prime Minister if anything should go wrong with the various schemes carried out by his office to protect the ‘brand’ of the Conservative Party. Damage control is the prime mandate of this office. The less the public knows of what it does and what the Prime Minister knows, the better – for the Conservative Party.

Non-disclosure
The Board of Governors at UBC has also used a common damage-control tool: a non-disclosure agreement, which prevents anyone involved in the decision-making that led to the President's resignation from speaking about it. To give some idea of the legal power of a non-disclosure agreement, not one of the more than 20 members of the Board, including student, staff and faculty representatives, has given any hint of a comment about this very unusual decision. Clearly, from the Board's perspective too, the less the public knows about it, the better.
So here we have two clear instances of leaders hiding behind damage-control tools to avoid explaining their decisions, in essence denying their responsibility for those decisions. And it looks like both will get away with accepting no responsibility and offering no explanations if they can sit tight and keep quiet until the public gets tired, or gets distracted by other events.

The consequences
I am angry about this, not because I feel I have a right to know what the Prime Minister or UBC’s Board of Governors does or why they did it, but because without the acceptance of responsibility for their decisions, our ‘governors’ have carte blanche to do what they like without restraint. Power corrupts, and absolute power corrupts absolutely.

The UBC case
With specific respect to the UBC context, it stretches credibility that the President voluntarily stepped down after only 13 months, and so soon after setting out a bold and personal vision for the university. The reason given in the only public statement by UBC is as follows:
This leave will enable him to focus on his research and scholarly work that will be of mutual benefit to Dr. Gupta and UBC.
If you believe that, then you believe the Toronto Maple Leafs will win the Stanley Cup next season. There aren’t many plausible reasons why he would resign:
- overwhelming personal circumstances, such as a terminal sickness in the family
- malfeasance of some kind
- a sharp difference of views with at least the more powerful members of the board about the President’s policies or management decisions.
Let’s look at each of these reasons. It is hard to see why a non-disclosure agreement would be necessary for overwhelming personal circumstances. Most people would understand and feel great sympathy for the President in such circumstances, and the Board would really have no reason to feel responsible for this.
There has been no suggestion of malfeasance – wrongdoing by the President. However, in the unlikely and hypothetical case that there was malfeasance, the Board might want to cover it up to protect the university’s reputation, but this would be totally the wrong decision – a perversion of justice. I personally do not think this could possibly have been the reason. No Board would be that stupid.
So we are left with the most plausible reason – a disagreement between the Board and the President about policy and/or management. Now maybe the public and students (who after all pay the taxes and tuition fees that keep the university running) may not be in a good position to judge who is right on such issues, but certainly the faculty need to know whether or not there was a basic disagreement between Board and President, because faculty are tasked with moving the university in the direction set by the Board and President.
To give just one instance: two or so years ago, under the previous President, the university launched a visionary and ambitious flexible learning strategy intended to transform teaching and learning at UBC. Should faculty continue to move in this direction? Was it supported by the new President, or was it supported by the Board but not the President? The disagreement may of course have been over something completely different, but we don’t know, and in such circumstances the university is on hold with regard to all its previous initiatives until a new (permanent) President and administration are in place.

What should we do?
What can the public do about these decisions? In the case of the PMO’s office, I will vote for any of the opposition parties that comes forward with a practical plan that will make the Prime Minister and his/her office more accountable for the consequences of their decisions, and will put in place policies and procedures that will make government more transparent.
UBC is more difficult. I no longer work there, although I have a complex love/hate relationship with the institution. It is easy to be an armchair quarterback over someone else’s decisions. Personally, though, I think there were problems with the new President, such as his firing the VP Administration within days of taking office (see here). If so, the Board should be commended for making the right decision in difficult circumstances (after all, they are the ones who hired him in the first place). However, the Board needs to come clean and give its reasons, not hide behind a non-disclosure agreement.
Lastly, I think politicians should look carefully at the use of non-disclosure agreements. They are too often used as a tool for covering up the paying off of incompetent leaders, or for covering up arbitrary firings when there are personal issues between a board chair and the CEO or President. Non-disclosure agreements too often encourage bad governance decisions and, above all, a lack of transparency over how tax dollars are being used. But it will be a brave and clever government that finds a way to get rid of non-disclosure agreements while still protecting the Charter rights of those involved.
In the meantime, both the Duffy and UBC cases point to a lack of transparency in decision-making at the highest levels in Canada. We should do better.
On the left of the picture is Harold Jarche, who is chair of the Internet Time Alliance and a champion of social learning. I had followed his work for some time, but the first time I met Harold in person was at the Learning Technologies event in London in 2013. He specialises in work-based learning and has made quite an impact with his work on innovative leadership, networked business solutions, and personal knowledge management. I often quote from his work or retweet his blog posts, because he thinks deeply, writes clearly and has a unique approach to workplace learning.
In the centre of the photo is Joyce Seitzinger. I first met Joyce at the first Personal Learning Environment (PLE) conference in Barcelona in 2009 and we have been friends ever since. We were sitting just a couple of seats apart, and were already friends on social media. I quickly realised that Joyce and I had a lot in common, including our passion for great learning, technology integration and creative applications in higher education. I have subsequently worked with her in New Zealand and Australia. Joyce is best known for her Moodle Tools for Teachers model, which has been translated into more than a dozen languages. She now runs her own consultancy, Academic Tribe, which offers training and professional development for educators.
Far right is George Couros. I also met George for the first time at the Barcelona PLE event. He works as a divisional principal in Canada and is a regular speaker on the international conference circuit. On his blog 'Principal of Change' George is constantly posting articles and videos about innovative learning, and is a strong advocate for change and reform in school systems. I often share and repost content from George on social media. As the younger brother of Alec Couros, George has a big shadow to emerge from, but he is very much a keynote speaker, author and scholar in his own right, and deserves his reputation as a key mover and shaker in the world of education.
You may also recognise the other people in this selfie. Answers below in the comments box please :)
If you have any selfies with people you value, I encourage you to share them in a blogpost of your own, and say how they have inspired you.
Coming up: Selfie number 9
Photo by George Couros
Selfie number 10 by Steve Wheeler was written in Plymouth, England and is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License. Posted by Steve Wheeler from Learning with e's
Brief post outlining the importance of balance in game design. Balance refers to the fairness of a game (especially among multiple players) and the fine line between being too easy and too difficult. "Balancing a game is not an algorithmic calculation," writes Kapp. "As a fascinating article on Gamasutra states: A game being 'balanced' is also always, at best, a rough approximation."[Link] [Comment]
Frances Bell asks, "is ‘connection’ an unequivocal good in human learning? and in machine learning?" Not necessarily, she suggests. It depends on intentions. "The difference between Google and the ‘good teacher’ is that Google wants to sell ads and demonstrate its influence on my purchasing (so it can sell more ads), whereas the ‘good teacher’ wants us to learn more than they want to teach."[Link] [Comment]
Material matters for learning in virtual networks: A case study of a professional learning programme hosted in a Google+ online community.
The major impact of this article, for me, is that it offers proof that people are actually using Google+. OK, just kidding. But it does raise the question of why they would choose this environment over others. The article talks about the difficulties participants had accessing Google+ (including one case where a warning said "your activities have been logged") and in using the site (one comment says it would be difficult without a software background). Frankly, I don't see how the authors' appeals to actor-network theory (ANT) help explain the comments and interactions described in the text. This is, to me, a classic example of over-theorizing.[Link] [Comment]
Interestingly, it would be very easy for me to insert ads into OLDaily that no ad blocker could block. Here's one: Drink Coke. The way ad blockers work is that they pick up on features that annoy, interrupt or violate your privacy. The blockers look for URLs that are different from the page's, and for types of content associated with ads. That's why this article suggests that widespread use of ad blockers might result in better ads in the long run. Facebook needs to tread lightly here.[Link] [Comment]
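The mechanism described above can be sketched in a few lines. This is a toy illustration only: the hostnames and filter patterns are invented, and real blockers such as uBlock Origin rely on large community-maintained filter lists (e.g. EasyList) with far more sophisticated matching rules.

```python
# Toy sketch of ad-blocker-style request filtering.
# Rules and URLs are illustrative, not any real blocker's filter list.
from urllib.parse import urlparse

BLOCKED_SUBSTRINGS = ["doubleclick", "/ads/", "ad.js", "tracker"]

def is_blocked(request_url: str, page_url: str) -> bool:
    """Flag requests that look like ads: third-party hosts whose
    URL contains a known ad-related pattern."""
    req_host = urlparse(request_url).hostname or ""
    page_host = urlparse(page_url).hostname or ""
    third_party = not req_host.endswith(page_host)
    looks_like_ad = any(s in request_url for s in BLOCKED_SUBSTRINGS)
    return third_party and looks_like_ad

# A third-party ad-network script gets blocked...
print(is_blocked("https://ads.doubleclick.net/ads/banner.js",
                 "https://example.com/article"))   # True
# ...but first-party page content passes through untouched.
print(is_blocked("https://example.com/story.html",
                 "https://example.com/article"))   # False
```

This is why "Drink Coke" in the body text is unblockable: plain first-party text triggers no separate request for the filter to intercept.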
Richard Culatta from the US Department of Education (DOE, ED, never sure of proper acronym) wrote a Medium post today describing a new DOE initiative to evaluate ed tech app effectiveness.
As increasingly more apps and digital tools for education become available, families and teachers are rightly asking how they can know if an app actually lives up to the claims made by its creators. The field of educational technology changes rapidly with apps launched daily; app creators often claim that their technologies are effective when there is no high-quality evidence to support these claims. Every app sounds world-changing in its app store description, but how do we know if an app really makes a difference for teaching and learning?
He then describes the traditional one-shot studies of the past (control group, control variables, year or so of studies, get results) and notes:
This traditional approach is appropriate in many circumstances, but just does not work well in the rapidly changing world of educational technology for a variety of reasons.
- Takes too long
- Costs too much and can’t keep up
- Not iterative
- Different purpose
This last one is worth calling out in detail, as it underlies the assumptions behind this initiative.
Traditional research approaches are useful in demonstrating causal connections. Rapid cycle tech evaluations have a different purpose. Most school leaders, for example, don’t require absolute certainty that an app is the key factor for improving student achievement. Instead, they want to know if an app is likely to work with their students and teachers. If a tool’s use is limited to an after-school program, for example, the evaluation could be adjusted to meet this more targeted need in these cases. The collection of some evidence is better than no evidence and definitely better than an over-reliance on the opinions of a small group of peers or well-designed marketing materials.
The DOE's aims are good in terms of improving the ability to evaluate effectiveness in a manner that accounts for rapid technology evolution. The general idea of the DOE investing in the ability to provide better decision-making information is a sound one. It's also very useful to see the DOE recognize the context of effectiveness claims.
The problem I see, and it could be a fatal one, is that DOE is asking the wrong question for any technology or apps related to teaching and learning. [emphasis added]
The important questions to be asked of an app or tool are: does it work? with whom? and in what circumstances? Some tools work better with different populations; educators want to know if a study included students and schools similar to their own to know if the tool will likely work in their situations.
Ed tech apps by themselves do not “work” in terms of improving academic performance. What “works” are pedagogical innovations and/or student support structures that are often enabled by ed tech apps. Asking whether apps work is looking at the question inside out. The real question should be: “Do pedagogical innovations or student support structures work, under which conditions, and which technology or apps support these innovations?”
Consider our e-Literate TV coverage of Middlebury College and one professor’s independent discovery of flipped classroom methods.
How do you get valuable information by asking the question “Does YouTube work?” to increase academic performance? You can’t. YouTube is a tool that the professor used. You could, however, get valuable information by asking “Does the flipped classroom work for science courses, and which tools work in this context?” You could even ask “For the tools that support this flipped classroom usage, does the choice of tool (YouTube, Vimeo, etc.) correlate with changes in student success in the course?”
I could see that for certain studies you could use the DOE template and accomplish the same goal inside out (defining the conditions as specific pedagogical usage or student support structures), thus giving valuable information. What I fear is that the pervasive assumption embedded in the program setup, asking over and over “does this app work”, will prove fatal. You cannot put technology at the center of understanding academic performance.
I’ll post this as a comment to Richard’s Medium post as well. With a small change in the framing of the problem, this could be a valuable initiative from DOE.
- And yes, they throw in a line that it is not just about academic performance but also administrative claims. But the whole setup is on teaching and learning usage, which is the primary focus of my comments.
The post US Department of Education: Almost a good idea on ed tech evaluation appeared first on e-Literate.
I don’t know of any other way to put this. Purdue University is harming higher education by knowingly peddling questionable research for the purpose of institutional self-aggrandizement. Purdue leadership should issue a retraction and an apology.
We have covered Purdue’s Course Signals extensively here at e-Literate. It is a pioneering program, and evidence does suggest that it helps at-risk students pass courses. That said, Purdue came out with a later study that is suspect. The study in question claimed that students who used Course Signals in consecutive classes were more likely to see improved performance over time, even in courses that did not use the tool.

Mike Caulfield looked at the results and had an intuition that the result of the study was actually caused by selection bias. Students who stuck around to take courses in consecutive semesters were more likely to…stick around and take more courses in consecutive semesters. So students who stuck around to take more Course Signals courses in consecutive semesters would, like their peers, be more likely to stick around and take more courses. Al Essa did a mathematical simulation and proved Mike’s intuition that Purdue’s results could be the result of selection bias. Mike wrote up a great explainer here on e-Literate that goes into all the details.

If there was indeed a mistake in the research, it was almost certainly an honest one. Nevertheless, there was an obligation on Purdue’s part to re-examine the research in light of the new critique. After all, the school was getting positive press from the research and had licensed the platform to SunGard (now Ellucian). Furthermore, as a pioneering and high-profile foray into learning analytics, Course Signals was getting a lot of attention and influencing future research and product development in the field. We needed a clearer answer regarding the validity of the findings.
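The selection-bias critique is easy to demonstrate with a toy Monte Carlo simulation (all numbers below are invented for illustration; this is not Purdue's data, nor Al Essa's actual model). Course Signals is given zero effect here, yet students who happened to take more Signals-enabled courses still "graduate" at far higher rates, simply because surviving more semesters means both more chances to hit a Signals course and a higher chance of finishing.

```python
# Toy simulation of selection bias: Signals has NO causal effect,
# yet more Signals courses correlate with higher graduation rates.
import random

random.seed(42)

N_STUDENTS = 100_000
N_SEMESTERS = 8
P_SIGNALS = 0.25   # chance a surviving semester includes a Signals course (invented)
P_DROP = 0.15      # per-semester dropout probability, same for everyone (invented)

# results[bucket] = [graduates, total], bucket = min(signals_courses, 2)
results = {0: [0, 0], 1: [0, 0], 2: [0, 0]}

for _ in range(N_STUDENTS):
    signals_courses = 0
    graduated = True
    for _semester in range(N_SEMESTERS):
        if random.random() < P_DROP:        # student drops out this semester
            graduated = False
            break
        if random.random() < P_SIGNALS:     # survived it; maybe a Signals course
            signals_courses += 1
    bucket = results[min(signals_courses, 2)]
    bucket[0] += graduated
    bucket[1] += 1

for k, label in [(0, "0 Signals courses"), (1, "1 Signals course"),
                 (2, "2+ Signals courses")]:
    grads, total = results[k]
    print(f"{label}: {100 * grads / total:.1f}% graduated")
```

The dropout coin is identical for every student, so the large graduation-rate gap between the "2+ Signals" and "0 Signals" groups is pure selection: the causal arrow runs from persistence to Signals exposure, not the other way around.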
Despite our calls here on the blog, our efforts to contact Purdue directly, and the attention the issue got in the academic press, Purdue chose to remain silent. Our sources informed us at the time that Purdue leadership was aware of the controversy surrounding the study and made a decision not to respond. Keep in mind that the research was conducted by Purdue staff rather than faculty. As a result, those researchers did not have the cover of academic freedom and were not free to address the study on their own without first getting a green light from their employer. To make matters more complicated, none of the researchers on that project still works at Purdue. So the onus was on the institution to respond. They chose not to do so.
That was bad enough. Today it became clear that Purdue is actively promoting that questionable research. In a piece published today in Education Dive, Purdue’s “senior communications and marketing specialist” Steve Tally said:
the initial five- and six-year raw data about the impact of Signals showed students who took at least two Signals-enabled courses had graduation rates that were 20% higher. Tally said the program is most effective in freshman and sophomore year classes.
“We’re changing students’ academic behaviors,” Tally said, “which is why the effect is so much stronger after two courses with Signals rather than one.” A second semester with Signals early on in students’ degree programs could set behaviors for the rest of their academic careers.
It’s hard to read this as anything other than a reference to the study that Mike and Al challenged. Furthermore, the comment about “raw data” suggests that Purdue has made no effort to control for the selection bias in question. Two years after the study was challenged, they have not responded, have not looked into it, and continue to use it to promote the image of the university.
This is unconscionable. If an academic scholar behaved that way, she would be ostracized in her field. And if a big vendor like Pearson or Blackboard behaved that way, it would be broadly vilified in the academic press and academic community. Purdue needs to come clean. They need to defend the basis on which they continue to make claims about their program the same way a scholar applying for tenure at their institution would be expected to be responsible for her claims. Purdue’s peer institutions likewise need to hold the school accountable and let them know that their reputation for integrity and credibility is at stake.
The post 68 Percent of Statistics Are Meaningless, Purdue University Edition appeared first on e-Literate.
My first reaction after seeing this was that Preston, a member of the elite law enforcement agency, is steeped in the traditions of his society. He is fully invested in the requirement to repress his emotions, and knows all of the 'reasons' why this is demanded by his superiors. And then, during a house search, he discovers an old recording of a Beethoven symphony and in an unguarded moment, he decides to play it - on an old gramophone. As he waits for the music to start, he toys with a snow globe - a piece of trivia which holds his attention briefly. Then the music starts. Preston has an epiphany. In surprise, he drops the snow globe, which shatters on the floor... and he begins to weep as the music washes over him.
This speaks to me of the trivia of life, the irrelevant objects that grab our attention and in which we invest so much of our energy. We often waste precious time on the less important things in life, when in reality we should be seeking the creative and emotional experiences that might forever shape our characters and define our lives. How many hours do children waste in school, studying subjects or content that will be useless or irrelevant when they finally leave school? How much content do schools deliver that resembles the snow globe? How much is a Beethoven symphony?
Another thought is that the record and the gramophone were old technology, from a bygone age. And yet the music they produced was able to reduce a man to tears. Music has the power to reach deep down into our very souls, and it does so because it bypasses our intellect and aims directly for our hearts, our emotions. Regardless of the technology used, powerful experiences can be life changing. This is what all educators should seek to facilitate. It's a tipping of the balance for better learning experiences.
Photo by NCinDC on Flickr
Tipping the balance #blideo by Steve Wheeler was written in Plymouth, England and is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License. Posted by Steve Wheeler from Learning with e's
This presentation uses the same slides as the presentation delivered in Banff in April, the text of which may be found here. The content is updated a bit based on the last four months' experience building the system being described. I outline major aspects of the learning and performance support systems (LPSS) program as it relates to open education environments. In particular I focus on understanding OERs as words, aggregating and analyzing OERs, data representation, and learner production and sharing of OERs. I conclude with a number of brief case studies of how work in LPSS supports this perspective.

Lunch and Learn Workshop, Tempe, Arizona (Keynote) Aug 20, 2015 [Comment]
Last week the Hechinger Report profiled an innovative charter school in San Diego called High Tech High (insert surfer jokes here) that follows an active, project based learning (PBL) model. The school doesn’t use textbooks, and they don’t base the curriculum on testing. The question they ask is whether this approach prepares students for college.
As a result, for [former HTH student Grace] Shefcik, college – with its large classes and lecture-based materials – came as a bit of a shock at first. At the University of California, Santa Cruz, she is one of more than 15,000 undergraduates, and her assignments now usually consist of essays and exams. At High Tech High, Shefcik had just 127 students in her graduating class, allowing her to form close relationships with peers and teachers.
The premise of the article is that PBL prepares students for life but maybe not for college. Grace described the big difference between high school, with its constant feedback and encouragement, and college, where you rarely get feedback. Other students describe their frustration at not knowing how to study for tests once they get to college.
After a recent screening of “Most Likely to Succeed” at the New Schools Summit in Burlingame, California, High Tech High CEO Larry Rosenstock told an audience, “We actually find that many of our students find themselves bored when they get to college.”
Teachers and administrators at High Tech High don’t tell many stories about their students reporting boredom, but they do hear about experiences like Shefcik’s. They say students find themselves overwhelmed by the different environment at college and have a difficult time making the transition to lecture-hall learning.
Students do tend to adjust, but this process can take longer than it does for traditionally-taught students.
But sometimes it takes High Tech High graduates a semester or a year at college or university before they feel like they’ve cracked the code.
“I had a harder time transitioning than other students,” said Mara Jacobs, a High Tech High graduate who just finished her second year at Cornell University in Ithaca, New York, and is the daughter of major donors Gary and Jerri-Ann Jacobs. “I couldn’t just do the work if I wasn’t bought into how I was being taught.”
My problem with the article is that it assumes all colleges outside of small private institutions base their entire curriculum on passive lectures and testing, without acknowledging the many innovations and changes coming from these same colleges. We have profiled personalized learning approaches in our e-Literate TV series, including a PBL approach at Arizona State University in the Habitable Worlds course (see this episode for in-depth coverage).
Nevertheless, the general point remains that it is difficult for students to transition between active learning models and passive lecture and test models. The Hechinger Report calls out the example of K-12 students moving into college, but we talked to faculty and staff at UC Davis who saw the flip side of that coin – students used to passive learning at high school trying to adapt to an active learning science course in college.
Phil Hill: While the team at UC Davis is seeing some encouraging initial results from their course redesign, these changes are not easy. In our discussions, the faculty and staff provided insight into the primary barriers that they face when looking to build on their success and get other faculty members to redesign their courses.
Catherine Uvarov: Well, I have had some very interesting experiences with students. Last quarter, my class was mostly incoming freshmen, and it’s like their very first quarter at UC Davis, so they have never taken a UC Davis class before. My class is pretty different from either the classes they’ve taken in high school or the other classes they were taking in their first quarter at Davis, because these changes are not as widespread as they could be.
Some students push back at first, and they’re like, “Oh, my, gosh, I have to read the book. Oh, my, gosh, I have to open the textbook. Oh, my, gosh, I have to do homework every week. I have to do homework every day.” They kind of freaked out a little bit in the beginning, but as the quarter progressed, they realized that they are capable of doing this type of learning style.
There’s more info in both the Hechinger Report article and the ASU and UC Davis case studies, but taken together they point to the challenges students face when transitioning between pedagogical models. These transitions can occur between high school and college, but more often they occur from course to course. Active learning and PBL are not just minor departures from lecture and test – they require a new mindset and set of habits from students.
The post Challenge Of Student Transition Between Active And Passive Learning Models appeared first on e-Literate.
Catching up with the Learning Layers news – Part Two: Lessons from parallel work in healthcare sector
With my previous post I started a series of “catching up” blogs to report on the newest developments in the EU-funded Learning Layers (LL) project. The first post reported on consortium-wide discussions that pave the way for the Year Three review and guide R&D activities and fieldwork in the months ahead. This second post looks over the fence (or the Channel) and reports on some interesting developments in the LL project work in the healthcare sector. Here again, I have to start with the exploitation journey poster that was presented in the consortium meeting in Tallinn (which I missed because I left early). Yet I think it is worthwhile to take a second look and consider how the work with the exploitation journey and the stocktaking on specific issues can support our work with the construction sector partners.
1. Updating the sectoral exploitation journeys (with posters based on common format)
The exploitation journey poster of the LL healthcare sector has been praised by other LL partners time and again. Indeed, the poster is well structured and uses good visualisations. The thematic blocks are mostly based on an earlier exploitation workshop (the game exercise in the Y3 Design Conference in Espoo). Yet, as I see it now, the poster also gives a good basis for further development of the exploitation activities. Here are some comments on the thematic blocks:
a) User needs/ working issues: Here we need to address needs, obstacles and possibilities with a focus on construction sites, companies, intermediate training centres and supporting service providers.
b) Products/ Services: Here we also need to formulate value propositions that take into account infrastructural improvements (Layers Box), integrative toolsets (Learning Toolbox), complementary (LL) tools and capacity building (training concepts).
c) First customers/ Future customers: Here we need to take into account multiple layers of partnership and customer relations that are emerging during the project and after the project.
d) The team/ Key partners: Here we need to take into account the differentiation of developmental teams and partnership constellations with different exploitation initiatives.
e) Getting out of the Building (= initial pilot context): Here we also need to give a picture of how the initial pilot activities with construction partners have prepared the ground for successor activities.
f) External resources: Here we need to give an overview of the proposals for external funding that we have prepared and will prepare (and highlight the ways in which they continue the work of the LL project).
g) Timeline: Here we also need to give a visualised picture of stakeholder/customer engagement, maturing of products/services and milestones in exploitation activities.
(In general, we had similar elements in the exploitation journey posters for the construction sector, but not in such a systematic overview. It is clearly helpful for the consortium and for the reviewers to have similar overviews of both pilot sectors.)
2. “Mixing and Matching event” – towards integrative toolsets in the healthcare sector
So far the LL field activities in the healthcare sector have been separate pilots, with one particular tool at each pilot venue. Now the most recent exploitation meeting gave the application partners an overview of the parallel tools and opened up the prospect of integrative pilots (by mixing and matching the parallel tools). As I understand it, this was well received by the application partners.
By contrast, the construction sector pilot has been developing an integrative toolset – the Learning Toolbox (LTB). Yet with this toolset we can also see our next field tests taking up different tools (other LL tools or third-party tools and apps) to be integrated into LTB. Here we have to think of ways to spread the use of such tools and share experiences.
Also at this “Mixing and Matching” event, the LL healthcare colleagues made contact with a health education network known as the “Improvement Academy”, which works with communities of practice, networks and projects. As I understand it, this encounter has led to further cooperation between the LL project and this network.
Here I see an interesting parallel between the work of this Improvement Academy and a recent capacity-building initiative of the training centre Bau-ABC in the construction sector. The Bau-ABC colleagues have developed an internal training model based on “Theme rooms” (virtual and real) to engage their whole training staff with digital media and LL tools. To me, this model looks like a prototype for developing “Improvement Academy” services in the construction sector.
3. UYOS – Use your own solutions (adapted for the Learning Layers project itself)!
The third point that I find interesting in the newest LL healthcare activities is the commitment to use our own LL tools. In her e-mail to other LL partners, Tamsin Treasure-Jones indicates that she has several ideas on how we can use LL tools (that have been piloted in the field activities of the healthcare sector) within the project work itself. She has now started with an initiative to use the Confer tool to support the preparation of Report 5 (on the sectoral pilots) for the Year Three review meeting.
Here the role of the Confer tool is
1) to support the gathering of input from different people (examples of using digital media and LL tools to support work and learning in the healthcare sector), and
2) to provide its process steps as a joint framework for the team that drafts the sectoral report for the Year 3 Deliverable.
As I see it, this is a very interesting initiative, and it will give new visibility to users’ views. In the German construction sector pilot, we need to consider whether we can develop a similar approach.
I guess this is enough for the moment – both regarding the lessons from the healthcare sector and the ‘catching up’ posts on the newest developments in our project altogether. Now it is time for us to take further steps.
More blogs to come …