Every day I am asked the same question by prospective customers: How does FedLearn’s artificial intelligence work?
In the simplest of terms, the 250+ algorithms that make up our machine learning model are running “behind the scenes” to deliver a wealth of advanced learning analytics across several dimensions via the FedLearn online experience platform. One of these dimensions is our “motifs meter” (accessible via the “My Learning Data” tab on the platform).
In this blog post, I will explain what the FedLearn motifs meter is and its value to learners and customers.
Understanding learner behaviors
The motifs meter is a graphical representation of learner behaviors. The algorithms in our ML model are grouped into five primary motif groups, each concerned with sequences of actions that:
- Reflect. Alternate between playing and pausing content while on the FedLearn platform
- Review. Move backward to return to previously covered content
- Skim. Skip over content
- Speed. Change the playback speed of audio or video
- Search. Move back and forth across content
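To make the idea concrete, here is a minimal sketch of how player events might be mapped to these five motifs. FedLearn's actual algorithms are proprietary; the event names (`play`, `pause`, `seek`, `speed_change`), the rules, and the detection heuristics below are illustrative assumptions only.

```python
from collections import Counter

def classify_motifs(events):
    """Count behavior motifs in an ordered list of (action, position) events.

    position = seconds into the content. Event names and rules are
    illustrative assumptions, not FedLearn's actual algorithms.
    """
    motifs = Counter()
    seek_dirs = []  # +1 = forward seek, -1 = backward seek, in order
    for (prev_act, prev_pos), (act, pos) in zip(events, events[1:]):
        if prev_act == "play" and act == "pause":
            motifs["reflect"] += 1              # play/pause alternation
        elif act == "seek" and pos < prev_pos:
            motifs["review"] += 1               # moved backward
            seek_dirs.append(-1)
        elif act == "seek" and pos > prev_pos:
            motifs["skim"] += 1                 # skipped ahead
            seek_dirs.append(+1)
        elif act == "speed_change":
            motifs["speed"] += 1                # playback-rate change
    # "search": back-and-forth movement, i.e. adjacent seeks in
    # opposite directions
    motifs["search"] = sum(
        1 for a, b in zip(seek_dirs, seek_dirs[1:]) if a != b
    )
    return motifs
```

For example, a session that pauses once, jumps back to earlier material, then skips ahead would register one reflect, one review, one skim and one search event.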
Deeper insights into online learning
The insights gained from the motifs meter go well beyond individual learners. For U.S. Department of Defense and Intelligence Community agencies and government contractors, FedLearn’s AI can determine trends and patterns across organizations, teams, learning paths and individual courses. For example:
- Identifying underperforming course content. Low reviewing and high skimming scores for an individual course indicate that the content lacks engagement, confuses learners or is pitched at the wrong level (too basic or too advanced). Such content should be reviewed to confirm its value and relevance and to determine what measures will drive better engagement scores.
- Determining content already known by learners. In personalized learning, the goal is to provide the specific content learners need. A high skimming motif score for a course likely indicates that certain content is already well understood by learners and can be skipped.
- Pinpointing content of interest. A high searching score indicates content or topics of particular interest to learners.
- Identifying confusing content. A high reviewing score signals content that is confusing or misunderstood by learners.
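A simple sketch of the first use case above: flagging courses whose aggregate motif scores suggest weak engagement (high skimming, low reviewing). The threshold values and the score dictionary shape are hypothetical assumptions for illustration, not FedLearn's scoring model.

```python
def flag_underperforming(course_scores, skim_threshold=0.7,
                         review_threshold=0.2):
    """Flag courses whose motif scores suggest underperforming content.

    course_scores maps course_id -> {"review": float, "skim": float, ...}
    with scores normalized to [0, 1]. Thresholds are illustrative
    assumptions, not FedLearn's actual scoring rules.
    """
    return [
        course_id
        for course_id, scores in course_scores.items()
        if scores.get("skim", 0.0) >= skim_threshold
        and scores.get("review", 0.0) <= review_threshold
    ]
```

With hypothetical scores for two courses, only the one that is heavily skimmed and rarely reviewed would be flagged for a content review.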
This learning “intelligence” clearly transcends “traditional” online learning metrics such as simple course completions, time in training and quiz grades.
So, if you are a chief AI officer, chief learning officer, chief human resource officer or other mission or business leader, would you rather rely on simple learning metrics or learning intelligence that tells a richer, more robust story of what is happening in online training when making learning investment decisions?
Food for thought…
Dr. J. Keith Dunbar
Founder and Chief Executive Officer
FedLearn