How to harness the predictive power of e-learning platforms
co-founder and chief executive officer, OnlineMedEd
E-learning platforms generate data that lets us determine which students will succeed
With enrollments increasing and costs rising even faster, both universities and students are looking for a better learning experience. They demand a system that keeps students engaged while preparing them for success both now and beyond graduation.
A new kind of platform
Universities want to know: Which of our students will succeed in their coursework, and how can we enhance their opportunity for success? Attendance and performance data (quiz and test scores) give an incomplete picture at best, as new technologies change the learning experience. Where can universities look for help? The login and download records of a learning management system are a start, but they offer no insight once the materials have been downloaded.
Enter e-learning platforms, where the bulk of the instruction takes place online. These environments are fully interactive, with developed curricula in multiple modalities and metrics throughout: metrics tracking not only performance but activity. Here, all kinds of meaningful data are generated, with predictive power.
Universities are partnering with companies like mine to get access to a whole new suite of tools for tracking and analyzing what happens between the initial login and the final exam.
The type and scale of data captured through an e-learning platform like OnlineMedEd allow for accurate prediction of student performance throughout the semester, and especially during critical exam periods. In this ecosystem, it is possible not only to track which lectures or parts of lectures were viewed (the online equivalent of traditional "attendance") but also to reveal the quality of engagement. Activity and performance, taken together, create the baseline for predicting outcomes.
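As a minimal illustration of this baseline, activity and performance signals can be combined into a single at-risk score. The feature names, weights, and 0-1 scaling below are illustrative assumptions, not OnlineMedEd's actual model.

```python
def at_risk_score(pct_lectures_viewed, avg_quiz_score, weights=(0.5, 0.5)):
    """Combine an activity signal and a performance signal into a 0-1 score,
    where higher means greater risk of a poor outcome. Both inputs are
    assumed to be on a 0-1 scale."""
    w_activity, w_performance = weights
    # Risk rises as engagement or quiz performance falls.
    return w_activity * (1 - pct_lectures_viewed) + w_performance * (1 - avg_quiz_score)

# An engaged, well-performing student scores low; a disengaged one scores high.
print(at_risk_score(0.9, 0.8))
print(at_risk_score(0.2, 0.4))
```

In practice, a platform would learn the weights from historical outcomes rather than fix them by hand, but the principle is the same: activity and performance jointly drive the prediction.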
Keeping students engaged with content becomes easier with mobile-first tools designed to look more like Facebook than a textbook. The data captured lets us ask questions like: How long did the student spend on each section? How does that compare to global and class-specific averages? Did the screen sit idle for a long time, suggesting the student wasn't actively engaged? Did the device sit idle after a video finished, suggesting the student wasn't present to stop it or move on to the next piece of material? Metrics and technologies that originated on YouTube and Netflix are now helping train machine learning algorithms to evaluate active engagement with material.
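The questions above translate directly into simple session-level metrics. The sketch below uses a hypothetical event-log format (all field names and the idle threshold are assumptions) to compare each student's time-on-section to the class average and flag long idle periods.

```python
from statistics import mean

def engagement_flags(sessions, idle_threshold_s=300):
    """For each session record, report time-on-section relative to the class
    average and flag sessions whose longest idle period exceeds the threshold."""
    avg_time = mean(s["seconds_on_section"] for s in sessions)
    return [
        {
            "student": s["student"],
            "vs_class_avg": s["seconds_on_section"] - avg_time,
            "idle_flag": s["max_idle_s"] > idle_threshold_s,
        }
        for s in sessions
    ]

sessions = [
    {"student": "A", "seconds_on_section": 600, "max_idle_s": 30},
    {"student": "B", "seconds_on_section": 1400, "max_idle_s": 900},
]
print(engagement_flags(sessions))
```

Note that student B's long time-on-section looks like deep engagement until the idle flag reveals the screen simply sat untouched, which is exactly why activity metrics need more nuance than raw duration.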
With e-learning, lessons can be broken into multiple segments. These can be digested at the user's own pace and reviewed as needed for full comprehension. This opens up the possibility of more granular assessment: the student can be assessed in short bursts rather than on total content mastery at once. Complementary educational strategies, such as running or in-line assessments (questions that appear throughout the material rather than bunched at the end), enhance the analysis. Students are measured on the fly, without the stress of the exam environment, making the very notion of exams less daunting, because answering questions as they arise feels natural and beneficial.
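Scoring those in-line questions per segment, as they are answered, is what makes the assessment granular. A minimal sketch, assuming a hypothetical response log of (segment, correct) pairs:

```python
from collections import defaultdict

def segment_scores(responses):
    """responses: iterable of (segment_id, correct) pairs collected from
    in-line questions. Returns per-segment accuracy as answers accumulate."""
    totals = defaultdict(lambda: [0, 0])  # segment -> [correct, attempted]
    for segment, correct in responses:
        totals[segment][0] += int(correct)
        totals[segment][1] += 1
    return {seg: c / n for seg, (c, n) in totals.items()}

# Two questions answered in the "intro" segment, one in "renal".
print(segment_scores([("intro", True), ("intro", False), ("renal", True)]))
```

Because the score is computed per segment rather than per exam, weak spots surface while the student is still in the material, not weeks later.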
Of course, e-learning environments can also mimic the exam experience in ways that use the particular characteristics of tablets, phones, and other devices, offering more ways to test with rich options (video, text, flash cards, charts, pictures) included. One can see not only how a student is doing, but what kind of instruction and content type works best for them. The technology enables personalized learning, which can become adaptive and time-efficient. Learning algorithms offer a clear picture of what a student is good at, and continuous assessment produces a real-time view of student progression, so the student can be moved on to the next concept once mastery is demonstrated.
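The "move on once mastery is demonstrated" logic can be sketched very simply. The curriculum order, concept names, and the 80% mastery threshold below are illustrative assumptions:

```python
def next_concept(curriculum, accuracies, mastery=0.8):
    """Walk the curriculum in order and return the first concept whose
    running accuracy has not yet crossed the mastery threshold."""
    for concept in curriculum:
        if accuracies.get(concept, 0.0) < mastery:
            return concept
    return None  # everything mastered

curriculum = ["cardio", "renal", "pulm"]
# Cardio is mastered (0.9 >= 0.8); renal is not, so the student works there next.
print(next_concept(curriculum, {"cardio": 0.9, "renal": 0.6}))  # → "renal"
```

Fed by the continuous assessment data described above, a loop like this is what turns static content into an adaptive pathway.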
In short, deeper data from lesson activity and performance make it possible to predict student success or failure.
At this point, experienced teachers may be thinking, "Activity data in conjunction with performance measures make it possible for anyone to predict student success, not just computers!" This is true, to a point. E-learning and data tracking stand apart from traditional assessment by individual professors in a couple of ways. First, the computer does not bring personal bias (which underscores the importance of creating fair, objective, and standardized metrics and rules to govern the algorithms). Second, while a teacher can track a small number of students the old way (talk to them, correct their work), technology scales with virtually no limit and can track and assess all students simultaneously. Moreover, in some settings (clinical training in medical school, for example), the instruction does not take place in a classroom but consists of hands-on experience at a job site. The instructor may not even be present, which is yet another gap in analysis that e-learning can help close.
While technology and media sites like Facebook and YouTube have added to the myriad distractions facing students today, e-learning platforms have enabled education to evolve by keeping students engaged, providing advanced analytics to educators, and identifying at-risk students before it's too late. Whether universities team up with companies like OnlineMedEd or build tools to harness data themselves, data's potential in education has never been more promising.