U.S. patent application number 14/712,108 was published by the patent office on 2016-03-03 as publication number 20160063881 for systems and methods to assist an instructor of a course.
The applicant listed for this patent is Zoomi, Inc. Invention is credited to Christopher Greg Brinton, Mung Chiang, Sangtae Ha, William D. Ju, Stefan Rudiger Rill, and James Craig Walker.
Application Number: 14/712,108
Publication Number: 20160063881
Kind Code: A1
Family ID: 55403146
Publication Date: March 3, 2016
United States Patent Application 20160063881
Brinton, Christopher Greg; et al.
March 3, 2016
SYSTEMS AND METHODS TO ASSIST AN INSTRUCTOR OF A COURSE
Abstract
A system and method to assist an instructor in managing,
understanding, and drawing conclusions about the learning behavior
of users in a course. The system includes methods for processing
data collected about users as they interact with the various
modalities of learning that may be integrated into the course, and
rendering this information into visualizations that are displayed
on an instructor dashboard and updated in real time. This
instructor interface includes a plurality of modules targeted
towards the management of and interaction with users, and the
analysis and visualization of a different aspect of their learning.
Inventors: Brinton, Christopher Greg (Berkeley Heights, NJ);
Chiang, Mung (Princeton, NJ); Ha, Sangtae (Superior, CO);
Ju, William D. (Mendham, NJ); Rill, Stefan Rudiger (Augsburg, DE);
Walker, James Craig (Chester Springs, PA)
Applicant: Zoomi, Inc. (Malvern, PA, US)
Family ID: 55403146
Appl. No.: 14/712,108
Filed: May 14, 2015
Related U.S. Patent Documents
Application Number: 62/041,655
Filing Date: Aug 26, 2014
Current U.S. Class: 434/353
Current CPC Class: G09B 7/00 20130101; G09B 5/02 20130101
International Class: G09B 7/00 20060101 G09B007/00; G09B 5/02 20060101 G09B005/02
Claims
1. A method for adjusting an in-process module delivery sequence of
an online course directed to a learner based on student usage of
modules in the sequence, said online course formed as a collection
of modules, said delivery using a web server in communication with
a content repository and including interfaces to student-controlled
processing workstations comprising the steps of: capturing location
and time stamp data regarding a learner's clicks and mouse
movements; analyzing said data for determining duration of use of
each module of said course and assessing learner performance based
on performance metrics embedded in said course; fitting a motif to
said data; providing a visual display of said durations and
performance; presenting a graphical display of recommended
adjustments of delivery of subsequent modules to said learner based
on said fit motif and differences from said fit motif; and
adjusting the module delivery directed to said learner.
2. The method of claim 1, wherein said visual display includes at
least one histogram and at least one set of recommendations showing
potential improvement for adjustments in course delivery to at
least one learner.
3. The method of claim 1, wherein said visual display includes at
least one scatter plot and a set of recommendations showing
potential improvement for adjustments in course delivery to at
least one learner.
4. The method of claim 1, wherein said analysis includes learner
assessment.
5. The method of claim 1, wherein said motif is determined at least
in part based on data collected regarding other learners in the
same course.
6. The method of claim 1, wherein said motif is determined at least
in part based on data collected regarding the same learner in other
courses.
7. A method for an instructor to individualize content directed to
a learner in an ongoing online course, said online course formed as
a collection of modules, to a learner using a web server and
including interfaces to learner-controlled processing workstations,
comprising the steps of: capturing location and time stamp data
regarding a learner's clicks and mouse movements; determining the
duration of use of modules of said course and assessing learner
performance based on performance metrics embedded in said course;
providing a visual display of said durations and performance;
fitting a motif to said data; recommending to an instructor
adjustments to delivery of subsequent modules to said learner based
on said motif and differences from said motif; and recommending to
an instructor introduction of or modification to existing course
modules.
8. The method of claim 7, wherein said visual display includes at
least one histogram and a set of recommendations showing potential
improvement for adjustments in course delivery to at least one
user.
9. The method of claim 7, wherein said visual display includes at
least one scatter plot and a set of recommendations showing
potential improvement for adjustments in course delivery to at
least one user.
10. The method of claim 7, wherein said data includes results of
user assessment.
11. The method of claim 7, wherein said motif is determined at
least in part based on data collected regarding other learners in
the same course.
12. The method of claim 7, wherein said motif is determined at
least in part based on data collected regarding the same learner in
other courses.
13. A method for improving module-based course delivery using a web
server including interfaces to learner-controlled processing
workstations comprising the steps of: capturing time stamp and
location data of a learner's clicks and mouse movements; analyzing
said data for determining duration of use of modules of said
course; assessing student performance based on performance metrics
embedded in said course; providing a visual display of said
durations and performance; fitting a motif to said data;
recommending adjustments to delivery of subsequent modules to said
learner based on said motif and differences from said motif; and
recommending creation of or modification to existing course
content.
14. The method of claim 13, wherein said visual display includes at
least one histogram and a set of recommendations showing potential
improvement for adjustments in course delivery to at least one
user.
15. The method of claim 13, wherein said visual display includes at
least one scatter plot and a set of recommendations showing
potential improvement for adjustments in course delivery to at
least one user.
16. The method of claim 13, wherein said data includes results of
user assessment.
17. The method of claim 13, wherein said motif is determined at
least in part based on data collected regarding other learners in
the same course.
18. The method of claim 13, wherein said motif is determined at
least in part based on data collected regarding the same learner in
other courses.
Description
CROSS-REFERENCE TO RELATED APPLICATION(S)
[0001] This patent application claims benefit under 35 U.S.C.
§ 119 to U.S. Provisional Patent Application No. 62/041,655,
filed Aug. 26, 2014, which is hereby incorporated by reference in
its entirety.
FIELD OF THE INVENTION
[0002] The present invention relates to a system and method for
managing, analyzing, and visualizing learning and content within a
course where the respective course, such as a college or a
corporate training course, has been designed to individualize the
content, even during a student's use of a course, based on a user
model, and in particular where there are various types of content
that are integrated into the course.
BACKGROUND OF THE INVENTION
[0003] Courses that integrate a wide set of learning modes and
materials in various learning and education settings are well
known. These courses are increasingly being made adaptive, such
that their content can change automatically and/or dynamically
based on a specific user's responses and/or consumption of the
course content. For purposes of adaptation, these courses generally
define, and continually update, a user model based on different
inputs that are collected regarding the user and the inputs
subsequently analyzed.
[0004] When deployed in a learning setting, the instructor of such
a course will likely not be the same person who authored the
learning materials and defined the adaptation for the course. This
can make it difficult for the instructor to manage and understand
the learning process for the course, especially if there are a
plurality of learning materials and if the adaptation structure
created by the author is complex. Further, if the course is
delivered online without any face-to-face interaction between the
instructor and the learners, it becomes difficult for the
instructor to determine when and how his/her assistance to
individual learners is most beneficial.
[0005] The systems that deliver these courses typically have the
ability to track a user's activities as the user interacts with
both the various learning materials and with other users. Such
tracked data, when collected, are usually captured in fine
granularity. These systems can also provide usage information about
specific course adaptations that were assigned user-to-user.
However, known systems do not have the ability to process, analyze,
or visualize the tracked data in real time, nor do they provide
methods of depicting the real time automated adaptivity
user-to-user or of recommending how adjustments should be made by
the instructor to benefit individual users. Such developments would
greatly assist an instructor in gaining a better understanding of
the learning process of the students, as well as of the methods
used by the course for content adaptation.
SUMMARY OF THE INVENTION
[0006] The present invention is directed to a plurality of systems
and methods that can overcome the above limitations by providing an
instructor with real time data, real time analysis of the data, and
real time visualization of the data, as well as recommended and/or
implemented adjustments to a course directed to the instructor
through an interface, where the data are directed to student
usage.
[0007] The present invention further comprises an interface that
can be used by an instructor to manage, facilitate, edit, analyze,
visualize, and ultimately draw conclusions about the course that
he/she is instructing related to the class as a whole or a subset
of students. In an embodiment, a course can be integrated (i.e., it
can contain various types of content, such as in an online course
for which an author has provided both recorded lecture videos and
excerpts from a textbook for the students to learn from) or
individualized (i.e., the content from the author is adapted to
each individual user, either through machine or human intelligence
or a combination thereof). In the present invention, the interface
can transform data to an aggregated and easy to comprehend form,
deliver conclusions based on the data, and deliver specific
suggestions for course adjustments by the instructor, among other
items. Such data and its analysis can be delivered based on
individual students or an aggregation of students.
[0008] The instructor interface is preferably organized as an
interrelated plurality of modules. These modules include, but are
not limited to, the interaction and scheduling of course material
for end users (i.e., the learners, or the consumers of the
content). They also include modules that relate to real time
processing, analysis, and visualization of various aspects of the
user learning experience collected as a user interacts with the
course material. Such modules include user concept proficiency of a
set of author-specified course concepts, user learning behavior of
the various forms of material (i.e., video, text) integrated into
the course, user learning paths traversed as a result of an
adaptation, and one or more networks related to social learning
networks formed by the users as they interact with each other and
the instructor on the various forms of discussion media integrated
into the course. The present invention also includes methods to
process user inputs in real time, and a multitude of methods by
which outputs of each module can be visualized.
[0009] By providing an instructor with real time analysis and
visualizations of user interaction and responses to course
material, the present invention streamlines the review, analysis
and interaction between an instructor and course users such that an
instructor can draw conclusions about where and when modifications
to a course for a user or a collection of users would be
beneficial.
BRIEF DESCRIPTION OF THE FIGURES
[0010] FIG. 1 is a schematic diagram of system components
integrated with an instructor interface to generate real time
analysis and visuals and support communication between an
instructor and end users;
[0011] FIGS. 2A and 2B are exemplary displays of an embodiment of
the Concept Proficiency Tracker module integrated as part of the
instructor interface;
[0012] FIGS. 3A-3D are visual displays of an embodiment of the
Learning Behavior Tracker module that are integrated as part of the
instructor interface to track user video-watching behavior;
[0013] FIGS. 4A and 4B are embodiments of visuals of the Social
Learning Network Tracker module displaying user interaction;
and
[0014] FIGS. 5A and 5B are embodiments of visuals of the Learning
Path Tracker module that can be available for display to an
instructor.
DETAILED DESCRIPTION OF EMBODIMENTS OF THE INVENTION
[0015] The present invention relates to a plurality of systems and
methods for assisting an instructor in managing a course generally
and individual students in particular. In the context of the
present invention, it is assumed that most if not all of the
material for the course has already been created by a course
author, and that an instructor is using this material as the basis
for his/her own course teaching. The particular content and its
delivery sequence may be determined by an instructor, and in the
online environment of the present invention, may differ by student.
This is analogous to how a teacher may use a standard textbook to
teach a course being hosted at an educational institution and
whereby the teacher can provide remedial assistance to selected
students or whereby the teacher can suggest alternative materials
for an excelling student.
[0016] A course in the context of this disclosure can be deployed
in any learning setting, such as, but not limited to, corporate
training, professional certification, private tutoring, primary,
secondary, or higher education, and open learning/continuing
education (i.e., those taken for leisure). The term "user" refers
generally to any consumer of course content, which could be an
employee, student, or more generally a learner, depending on the
course setting. The term "instructor" refers to one or more
individuals who deliver and manage the training, tutoring,
teaching, or more generally the instruction for the course.
[0017] In an embodiment, a prepared course is delivered directly to
end users. In the preferred embodiment, the course is delivered
online such that each student can self-pace and the student's
clicks, including the precise screen location and clock time of
each click, are captured for analysis. Such clicks include clicks
related to progress in the course, progress in one or more course
elements such as videos, and clicks related to course-associated
materials such as blogs or web searches. In addition to the clicks,
mouse movements and other similar activities may be treated and
characterized similarly.
[0018] Further, the course delivery may be individualized for each
student based on any number of parameters such as but not limited
to known student skills or an assessment of the class as a whole.
These courses may integrate a wide range of learning modes. Such
learning modes can include different ways by which the users can
learn the material, including both the course material and forums
for social interaction with the instructor or other students. The
course material can include, but is not limited to, videos, texts,
assessments, presentations (with both static images and interactive
animations), and audio files, chat rooms of prior administrations
of the course, as well as links to external content. Student social
interactions can occur through media including, but not limited to,
discussion forums and public, private, or group note sharing.
[0019] The baseline course may also contain automated
individualization, meaning that the various modes are adapted to
each specific user. The specific adaptation for a user is generally
determined through the application of rules based on a user model.
A user model is, generally speaking, a collection of information
associated with a particular user that serves as an internal
representation of the user, and may evolve over time. In the
context of the present invention, the user model captures a
learner's usage and proficiency with the course content, and tracks
his/her behavior with the various learning modes in the course, in
terms of both the use of the modes and performance in relation to
use. One can think of a user model as encompassing demographic
information about a user, prior course results, the process by
which the user traverses a course, the material presented to the
user for the course, and external material the user may rely upon
during the course, as well as any user's test results obtained
during the course. The test results may be compared with the user's
use of the modes offered to her/him, including clicks in the modes
and time spent.
[0020] The course material intended to be delivered to a user may
then vary from person to person based on the user models. In
particular, the initial individualization for a course will specify
a set of rules developed and implemented within the system of the
present invention (using either an authoring tool or having worked
with the platform provider of the course), which are used to
determine precisely how the user model is mapped to decisions on
how to adapt the content for particular users. With
individualization, each user traverses one of multiple potential
learning paths, each of which contains some combination of
different and/or modified content with which the user is presented.
Many users can traverse the same learning path. However, in the
context of the present invention, the initial individualization may
be adjusted in at least two fundamental ways: tracking user behavior
can result in automatic changes to material presented to the user,
and tracked data can be presented to an instructor, together with
recommendations, whereby the instructor can adjust the subsequent
material to be presented to the user.
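By way of illustration, the mapping from a user model to a learning path can be sketched as a small rule function. This is a minimal, hypothetical sketch, not the claimed rule engine: the rule thresholds, module names, and the shape of the user model are all illustrative assumptions.

```python
# Illustrative sketch: adaptation rules map a user model to a learning
# path (the ordered set of modules shown to that user). All names and
# thresholds here are assumptions for illustration only.

def choose_learning_path(user_model):
    """Map a simple user model to a module list via hypothetical rules."""
    path = ["intro"]
    # Rule: low proficiency on a concept adds remedial material for it.
    for concept, score in user_model["proficiency"].items():
        if score < 0.5:
            path.append(f"remedial_{concept}")
        path.append(f"core_{concept}")
    # Rule: excelling students (all scores high) get enrichment content,
    # analogous to suggesting alternative materials for a strong student.
    if all(s >= 0.8 for s in user_model["proficiency"].values()):
        path.append("enrichment")
    # The syllabus constraint: requisite elements are always covered.
    path.append("final_assessment")
    return path
```

Two users with different proficiency profiles would thus traverse different learning paths, while many users with similar profiles can traverse the same path.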
[0021] Examples of learning paths are shown in FIG. 5. In this
illustration, the paths are depicted as different ways of
traversing through a set of units that form the course. As an
alternate representation, one can imagine all of the content in a
course forming a single "linear array" containing all of the
material, with the adaptation rules choosing whether to show or not
show certain subsets of this material based on the user model. This
is analogous to an instructor in a course working from a
comprehensive textbook, and choosing only to cover the most
important parts of the information in class; in this case, what is
'important' (i.e., what is shown) is different for each user.
[0022] In an alternate example, one can envision a non-linear
approach, also evident in FIG. 5, whereby the determination of
which next module to undertake is determined based on the user's
recent history, where the history may include duration of portions
of previous modules, clicks, locations of clicks, interaction with
an instructor, review of ancillary material, interim test results,
and so on. Importantly, the rules encompass a syllabus of the
course to assure all requisite elements of the course are
covered.
[0023] In short, the present invention is directed in part to
selection of materials for presentation to a student such that the
likelihood of the student's success on a final examination or
assessment is optimized, where that optimization is determined
based on student progress.
[0024] As a result of the potential for integration and
individualization in these courses, they are sometimes referred to
as Integrated and Individualized Courses (IICs). In an embodiment,
the present invention helps an instructor become aware of
changes and provides the instructor with insights into the learning
process of a course, which helps the instructor make decisions and
draw conclusions about the users, the course and its materials, and
where and when intervention would be most beneficial. When the
course is an IIC, the instructor may be far removed from the
process of creating the various learning modes and the
individualization aspects of the content, and may also be far
removed geographically from the learners. As a result, the present
invention helps the instructor to manually differentiate learning,
by providing them with visuals of the learning process to help them
make decisions, and also by making recommendations to them as to
how to adjust the learning process for different end users. Both of
these can be seen as methods to supplement machine-based
individualization with human intelligence in the case where the
baseline IIC is already individualized.
[0025] FIG. 1 shows a schematic diagram of an embodiment of a
system in which an instructor is hosting a course for a group of
users. The instructor accesses a corresponding interface from a
workstation, which could be any computing device (e.g., laptop,
desktop, tablet, etc.). External assessments created by the
instructor can be stored locally on the workstation shown, but
could be stored remotely as well. The tracking information stored
on a workstation can include data collected about users as they
interact with a course.
[0026] In an embodiment, this tracking information is obtained from
a web server, which is connected to the instructor workstation over
an appropriate and known network interface, such as one using the
IP protocol. The web server hosts and stores information about the
course, including the content and user modeling. This includes
instructor presets for the course, user assignment submissions, and
user behavior, and more generally provides end to end connectivity
between the instructor workstation and the end user devices.
[0027] The end user devices depicted in FIG. 1 are connected to the
web server over an appropriate and known network interface. As with
the instructor workstation, these devices can be any known types of
computing device such as laptop computers or tablets running
Windows or iOS operating systems (or equivalent). In an embodiment,
as shown in FIG. 1, the end user devices can store a course
application that contains all modes of learning for the course. For
an IIC, this preferably will be a single application which
integrates all forms of learning, but in general the course could
be delivered as a series of applications. These devices also may
store some of the course content itself, while some larger files
(e.g., videos) may be streamed directly from the web server.
[0028] In an embodiment, each end user device has an interaction
recorder (IR) loaded into memory, to monitor user interaction with
the various learning modalities. For example, in a video, the time
interval between two successive click actions (e.g., play, pause,
jump, end of video, switching away from the video view, or closing
the course application) is measured by the IR, as well as the UNIX
Epoch time, starting position, and interval duration for each case.
The specific type of click is captured as well including, for
example, clicks away from the course material. As another example,
for textual content, the time the user has spent viewing a page
will be recorded by the IR each time she flips the page or switches
away from the current text view.
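The event capture described above can be sketched as a simple log of typed records. This is a hedged sketch of what an interaction recorder (IR) might store; the field and class names are illustrative assumptions, not taken from the patent's implementation.

```python
import time
from dataclasses import dataclass

# Hypothetical sketch of an interaction-recorder (IR) event log.
# Field names are illustrative assumptions.

@dataclass
class ClickEvent:
    user_id: str
    content_id: str
    event_type: str   # e.g., "play", "pause", "jump", "page_flip"
    position: float   # starting position (seconds of video, or page)
    epoch_time: float # UNIX Epoch time when the event fired

class InteractionRecorder:
    def __init__(self):
        self.events = []

    def record(self, user_id, content_id, event_type, position,
               epoch_time=None):
        self.events.append(ClickEvent(
            user_id, content_id, event_type, position,
            time.time() if epoch_time is None else epoch_time))

    def intervals(self):
        """Durations between successive click actions, as the IR measures
        them (e.g., time spent playing before the next event)."""
        ts = [e.epoch_time for e in self.events]
        return [b - a for a, b in zip(ts, ts[1:])]
```

A play event followed twelve seconds later by a pause would yield an interval of 12.0 between those two records, matching the measured play duration.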
[0029] Once collected, these behavioral measurements are preferably
sent to the web server over the network connection. More
preferably, they are sent as they are collected, which also is the
time at which they are processed for adaptation. In the context of
the present invention, the measurements are in turn sent to and
stored at the instructor workstation, where they are processed by
the plurality of modules that constitute the instructor interface
application, for both visualization and recommendation purposes.
Six of these modules are depicted in FIG. 1 and will be elaborated
on here.
[0030] In an embodiment, a set of inputs collected about each user
includes, but is not limited to, the following: [0031]
a. Play, pause, stop, fast forward, rewind, playback rate change,
exit, and any other video player events, as well as corresponding
timestamps, durations, and any other information that specifies
user interaction with the video player. [0032] b. Page, font size,
exit, and other text viewer events, as well as corresponding
timestamps and durations that specify user interaction with the
text viewer. [0033] c. Slide change, completion, button press, and
other events triggered from viewing a set of slides, as well as
corresponding timestamps and durations that specify user
interaction with the presentation viewer. [0034] d. Position and
length of highlights placed on video or text at specific locations,
or on a particular slide, where the video length is measured in
time of video and the text length in number of objects from the
starting position. [0035] e. Position and content of bookmarks
placed on video or text at specific locations, or on a particular
slide. [0036] f. Position and content of notes taken on video or
text at specific locations, or on a slide, as well as whether these
notes were either shared publicly, shared with a specific set of
users, or not shared. [0037] g. Information on each post made in
discussion forums, including its content, whether it was meant as a
question, answer, or comment, and the number of up-votes it
received from other users or the instructor. [0038] h. Submission,
time spent, and number of attempts made for each assessment
submitted, as well as the points awarded if the assessment was
machine gradable. [0039] i. Individualization structure, both from
automated and human intelligence, including the learning paths for
the course and the paths traversed by each user, and the user
modeling dimensions (concepts).
[0040] There are other examples of user data collection and use as
well. The method of the present invention may involve one or more
of the following non-exclusive approaches for tracking student
behavior and making recommendations based on the tracked behavior.
Described below are a series of tracked behaviors, ranging from
clicks, durations between clicks, clicks in a series, duration at
particular videos, clicks of varying types during video play, and
so on. In some cases, tracked behaviors may be analyzed as
individual behaviors, as collections of behaviors, or as a sequence
behaviors, any or all of which can be used to generate
recommendations.
[0041] Behaviors can correlate to potential test results,
particularly correct on first attempt (CFA) results. More
specifically, CFA is a binary measure, equal to 1 if the user
answered a question correctly on the first attempt, and 0
otherwise. A goal of the present invention is to improve each
user's overall percentage of CFA results. As such, recommendations
for implementation are based on improving such results.
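The CFA measure defined above can be sketched directly: CFA is 1 if the first attempt on a question was correct and 0 otherwise, and a user's overall score is the percentage of questions answered correctly on the first attempt. The data layout below (a mapping from question to an ordered list of attempt outcomes) is an illustrative assumption.

```python
# Sketch of the CFA (correct on first attempt) measure: CFA is a binary
# measure per question, and the user-level score is the percentage of
# questions with CFA = 1. Data layout is an illustrative assumption.

def cfa_score(attempts):
    """attempts: {question_id: [bool, ...]} in submission order.

    Returns the user's overall CFA percentage (0-100)."""
    if not attempts:
        return 0.0
    cfa = [1 if tries and tries[0] else 0 for tries in attempts.values()]
    return 100.0 * sum(cfa) / len(cfa)
```

A user who answers two of four questions correctly on the first try scores 50.0, regardless of whether later attempts on the other questions eventually succeeded.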
[0042] Consequently, the present invention tracks behaviors,
compares behaviors to those of a known population, and identifies
adjustments (recommendations) for a user based on a combination of
behaviors of populations with high and low average CFA scores so as
to determine how to adjust material delivered to a student. Of
course, at least some of those changes are automatically
implemented and the instructor is given indication of those changes
as well as recommendations for other changes.
[0043] For instance, we have identified motifs, i.e., sequences of
events that form recurring patterns of user behavior, which are
significantly associated with CFA (i.e., correct answer) or non-CFA
(i.e., incorrect answer) submissions on questions corresponding to
the material. The events that form a motif can consist of any
combination of behavioral action collected from a learner as he/she
interacts with the course application, such as, but not limited to,
play, pause, skip backwards, skip forward, rate change faster, rate
change slower on a video or interactive slide presentation,
scrolling up or down in an article or resizing the view, creating
or sharing a note, and mouse movements. In this way, a motif can be
based on recurring patterns either within one particular learning
mode (e.g., sequences of actions in a video), or across multiple
modes (e.g., sequences of actions in a video, followed by a switch
to an article). One example is a series of behaviors which are
indicative of students reflecting on material, which are
significantly associated with the CFA sequences in at least one
course we tested. As another example, we have identified motifs
that are consistent with rapid-paced skimming through the material,
and have revealed that these are discriminatory in favor of non-CFA
(i.e., submitted incorrect answer) in different courses.
Incorporating the lengths (e.g., duration of play before the next
event is fired) in addition to the events themselves was essential
to these findings, because motif extraction with the events alone
does not reveal these insights. These findings can further be used
by an instructor to determine which patterns in behavior are
associated with successful results in his/her course; without the
data to back up the correlations, it is unclear whether a given
motif would be associated with CFA, non-CFA, or neither.
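Motif extraction over events plus their lengths can be sketched as counting recurring n-grams of (event, discretized-duration) symbols. This is a minimal sketch, not the extraction method actually used: the duration bins and n-gram size are illustrative assumptions, but it shows why incorporating lengths matters, since "play" followed by a long pause and "play" followed by a short pause become distinct symbols.

```python
from collections import Counter

# Minimal sketch of motif counting: recurring n-grams of (event, length)
# pairs, where the length (e.g., duration of play before the next event
# fires) is discretized so similar durations map to the same symbol.
# Binning thresholds and n-gram size are illustrative assumptions.

def discretize(duration):
    return "short" if duration < 10 else "medium" if duration < 60 else "long"

def count_motifs(session, n=3):
    """session: list of (event, duration_until_next_event) tuples."""
    symbols = [(ev, discretize(d)) for ev, d in session]
    grams = [tuple(symbols[i:i + n]) for i in range(len(symbols) - n + 1)]
    return Counter(grams)
```

Frequent motifs found this way could then be tested for significant association with CFA or non-CFA submissions across a population of learners.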
[0044] Specifically with respect to video, we have determined that
clickstreams may be analyzed so as to determine a likelihood of CFA
performance. Certain clickstreams are more indicative of improved
understanding and other clickstreams are more indicative of less
understanding of content. As stated, clickstream logs may be
generated as one of four types: play, pause, rate change, and skip.
Each time one of these events is fired, a data entry is recorded
that specifies the user and video IDs, event type, playback
position, playback speed, and timestamp for the event. In general,
we define each of these in a particular way and use collected data
to determine recommendations toward improving CFA.
[0045] In addition, the analysis may be performed relative to each
user, to a collection of users, or all users.
[0046] To result in a properly usable set of data, it is important
to denoise clickstreams. In order to remove noise associated with
unintentional user behavior, we preferably denoise in two ways.
First, we consider combining repeated, sequential events such as
those that occur within a short duration (e.g., 5 sec) of one another,
since this indicates that the user was adjusting to a final state.
For example, if a series of skip back or skip forward events occur
within a few seconds of each other, then likely the user was simply
looking for the final position, so it should be treated as a single
skip to that final location. Similarly, if a series of rate change
faster or rate change slower events occur in close proximity, then
the user was likely in the process of adjusting the rate to the
final value. Second, we consider discounting certain unnecessary
intervals between events, when the elapsed time between the two
events is extremely long (e.g., greater than 20 minutes), which
indicates the user was engaged in off-task behavior.
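The two denoising steps above can be sketched as follows: (1) collapse runs of repeated events fired within a short window into their final state, and (2) discount extremely long gaps as off-task time. The 5-second and 20-minute values come from the text; the event-tuple layout and function names are illustrative assumptions.

```python
# Sketch of the two denoising steps: merge bursts of repeated events
# into the final state, and discount off-task gaps. Event layout
# (event_type, position, timestamp) is an illustrative assumption.

MERGE_WINDOW = 5.0        # seconds: repeated events this close are merged
OFF_TASK_CAP = 20 * 60.0  # seconds: longer gaps indicate off-task time

def merge_repeats(events):
    """Collapse e.g. a burst of skips into a single skip to the final
    location the user was looking for."""
    cleaned = []
    for ev in events:
        if (cleaned and cleaned[-1][0] == ev[0]
                and ev[2] - cleaned[-1][2] <= MERGE_WINDOW):
            cleaned[-1] = ev   # keep only the final state of the burst
        else:
            cleaned.append(ev)
    return cleaned

def on_task_time(events):
    """Total elapsed time, discounting extremely long (off-task) gaps."""
    ts = sorted(e[2] for e in events)
    return sum(g for g in (b - a for a, b in zip(ts, ts[1:]))
               if g <= OFF_TASK_CAP)
```

Three skip events within a few seconds thus reduce to a single skip to the final position, and a 30-minute gap between events contributes nothing to the on-task total.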
[0047] In one embodiment of the present invention, we fit collected
data to a variety of known or determined motifs, e.g., reflecting,
revising, and skimming behaviors. Each motif has a known
methodology for user improvement and, based on the motifs that a
user exhibits while interacting with the course material,
conclusions may be drawn relative to recommendations. For example,
a reflecting motif within a segment of content will consist of a
series of plays interspersed with long pauses; an instructor may be
recommended to divide content into chunks according to the play
events where this motif occurs, and then create additional content
within these chunks, because an instructor-generated summary may be
more efficient than a user spending a longer time to recap the same
content (as would be dictated by the motif).
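Detecting the reflecting motif described above — plays interspersed with long pauses — can be sketched as below. The pause-length threshold and the event format are illustrative assumptions, not values from the specification.

```python
LONG_PAUSE = 60.0   # seconds of paused time treated as a "long" pause (assumed)

def reflecting_segments(events):
    """events: time-ordered list of (event_type, timestamp) tuples.
    Returns timestamps of long pauses between plays, i.e., candidate
    chunk boundaries where instructor-generated summaries could be added."""
    boundaries = []
    for (etype, ts), (next_etype, next_ts) in zip(events, events[1:]):
        if etype == "pause" and next_etype == "play" and next_ts - ts >= LONG_PAUSE:
            boundaries.append(ts)   # user paused here for a long reflection
    return boundaries
```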
[0048] As stated, at least some of these motifs are significantly
associated with performance, which can similarly be used to
generate recommendations about how content can be modified to
create a more effective learning experience. Some basic analysis,
as an example, shows that repeatedly pausing to reflect on material
(including playing it back) is the most commonly recurring behavior. If
the time spent reflecting is not too long, but longer than the time
spent watching, then a positive outcome is most likely.
[0049] In another example, the present invention factors in a
position-based sequence representation, which factors in the
location in videos that a user visited. These data may be used to
better define the student's motif, and lead to recommendations
within specific video intervals, rather than at the level of a
single video.
[0050] In addition, transitions between clickstream events may be
tracked and modeled as well.
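One simple way to model the transitions between clickstream events is a first-order transition matrix estimated from event counts; the normalization into probabilities here is an illustrative choice, not a method prescribed by the specification.

```python
from collections import defaultdict

def transition_probabilities(event_types):
    """event_types: time-ordered list of event-type strings for one user.
    Returns {from_event: {to_event: probability}}."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(event_types, event_types[1:]):
        counts[a][b] += 1          # count each observed transition
    probs = {}
    for a, row in counts.items():
        total = sum(row.values())  # normalize each row into probabilities
        probs[a] = {b: c / total for b, c in row.items()}
    return probs
```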
[0051] Measurements collected about user interaction with a
specific learning mode can also be translated into intuitive
quantities that summarize behavior, such as the fraction completed
and time spent (relative to the length of the content).
Particularly with respect to video, we have computed the following
non-exclusive list of nine summary quantities (behaviors) of
interest:
[0052] 1. Fraction spent (fracSpent): The fraction of (real) time
the user spent playing the video, relative to its length.
[0053] 2. Fraction completed (fracComp): The fraction of the
video that the user played, not counting repeated play position
intervals; hence, it must be between 0 and 1.
[0054] 3. Fraction played (fracPlayed): The amount of the video
that the user played, with repetition, relative to its length.
[0055] 4. Number of pauses (numPaused): The number of times the
user paused the video.
[0056] 5. Fraction paused (fracPaused): The fraction of time the
user spent paused on the video, relative to its length.
[0057] 6. Average playback rate (avgPBR): The time-average of the
playback rates selected by the user.
[0058] 7. Standard deviation of playback rate (stdPBR): The
standard deviation of the playback rates selected over time.
[0059] 8. Number of rewinds (numRWs): The number of times the user
jumped backward in the video.
[0060] 9. Number of fast forwards (numFFs): The number of times the
user jumped forward in the video.
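Several of the nine quantities above can be computed from a user's played intervals and pause records, as in the sketch below; the interval representation is an assumption for illustration. Note how fracPlayed counts repetition while fracComp merges overlapping intervals, so only the latter is bounded by 1.

```python
def summary_quantities(video_length, played_intervals, num_pauses, paused_time):
    """played_intervals: list of (start, end) playback-position intervals
    the user played, possibly overlapping due to repeated viewing."""
    # fracPlayed: total played length, with repetition
    frac_played = sum(e - s for s, e in played_intervals) / video_length
    # fracComp: merge overlapping intervals so repeats are not double-counted
    merged = []
    for s, e in sorted(played_intervals):
        if merged and s <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], e))
        else:
            merged.append((s, e))
    frac_comp = sum(e - s for s, e in merged) / video_length
    return {
        "fracPlayed": frac_played,
        "fracComp": frac_comp,
        "numPaused": num_pauses,
        "fracPaused": paused_time / video_length,
    }
```

The remaining quantities (fracSpent, avgPBR, stdPBR, numRWs, numFFs) follow the same pattern from the corresponding event records.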
[0061] These quantities can also form a special type of motif,
where each "action" becomes a summary of actions on a specific
learning mode; e.g., completing 50% of a video, followed by fast
forwarding on the video twice, followed by skipping over 20% of an
article.
[0062] Machine learning algorithms, among others, are implemented
on these inputs so as to discern and categorize the types of human
interaction. Machine learning is a branch of artificial
intelligence (i.e., intelligence exhibited by software) where there
is an inductive step in which the algorithm learns from and is
augmented by the data. In this context, the algorithms for the
interface include both those required to process the data for
visualization, and those to recognize patterns within, make
predictions about, and generate recommendations from the data to
assist the instructor.
[0063] Since each course will typically be instructed in terms of a
set of learning concepts (e.g., in an algebra course, some concepts
may be "factoring polynomials," "solving quadratic equations,"
and/or "simplifying expressions"), the machine learning algorithms
may also be applied on a concept-by-concept basis, and may leverage
similarities detected between these concepts in monitoring user
interaction, which can further improve the quality of the interface
outputs. These concepts could either be pre-defined and labeled by
the author of the course, or in turn extracted through machine
learning to find the set of concepts that are optimal in the sense
of identifying the key factors affecting user performance. At
times, the system of the present invention may suggest
recommendations to the instructor. For example, the data collected
regarding a particular student might be inconsistent with known
patterns or might not yield sufficient confidence to implement a
change. In such circumstances, the system of the present invention
might present data regarding a student or regarding an entire
class, or something in between, indicating confidence intervals
around various options. For example, if students are spending an
inordinate amount of time on one lecture, the system may recommend
a number of alternatives.
[0064] The output of the instructor interface is then a processed,
analyzed, and visualized version of the inputs described above,
with recommendations made to the instructor as appropriate. These
include, but are not limited to, the following: [0065] a.
Depictions of video-watching quantities, such as percent completion
(i.e., percent played), time spent, and frequency of different
events for each user, both in aggregate across the video and for
individual intervals. [0066] b. Depictions of text-viewing
quantities, such as percent completion, time spent, and frequency
of different events for each user, both in aggregate across the
text document and for individual segments of the text. [0067] c.
Visualizations of similar quantities of behavior collected on other
forms of media, such as audio and presentations. [0068] d.
Recommendations to revisit specific portions of a learning mode
where the level of focus, as dictated by the quantities in (a)-(c),
is exceedingly high or low. [0069] e. Depictions of learning style
preferences, including the percentage of focus placed on each of
the different modes (video, text, audio, and/or social learning), and
clusters of users based on these preferences. [0070] f. Depictions
of progress or proficiency on each of the learning concepts for the
course for each user, measured by performance on the corresponding
assessments, considering all assessments up to the present, those
through a selected time, or even future predictions. [0071] g.
Recommendations as to which users and/or course concepts are in
need of intervention by the instructor, through the quantities in
(f) that identify particularly weak users or challenging material.
[0072] h. Early detection of users and/or content that may prove
particularly challenging in the future, based on proficiency
prediction and forecasting. [0073] i. Depictions of the social
network of users, obtained from their post and comment relations on
the discussion forums, and their sharing of notes, both in
aggregate across all material and for individual sections of
content. [0074] j. Recommendations as to which users can be
suggested to form study groups, based on their frequency of
interaction determined in (i), and their set of proficiencies in
(f), which should be mutually reinforcing. [0075] k. Depictions of
user learning paths, the level of mastery and/or learning style
preference required for each of the paths, the specific users
traversing each path, and aggregate information about behavior and
performance of users on the respective paths. [0076] l. Depiction
of the identified motifs (e.g., reflecting, revising, speeding,
skimming), which users/content modes have exhibited these motifs,
and how often they occur.
[0077] The output can be customized by/for an instructor so as to,
for example, provide further granularity. That is, an instructor
may pre-set displays.
[0078] The present invention includes a plurality of ways to
visualize these outputs on the instructor interface. These include,
but are not limited to, the following: [0079] a. Scatterplot of
points, in 2 or 3 dimensions, where the dimensions of interest are
selected by the instructor. [0080] b. Time-series plots of a
quantity, where the time interval and granularity of measurement
are selected by the instructor. [0081] c. Histogram plots, which
are a graphical representation of the distribution of a quantity of
interest. There can be one or two independent variables on top of
which this variation is measured, and they must take continuous
values (e.g., intervals of a video). [0082] d. Bar graphs, which
are representations of how a quantity of interest varies over one
or two discrete sets (e.g., set of students). [0083] e. Box and
whisker plots, which show the distribution of a set of points and
emphasize the median, quartiles, and outliers of the dataset. They
are typically depicted side-by-side for multiple datasets, to show
the difference in distributions. [0084] f. Network graph
structures, consisting of nodes, links between the nodes (either
directed or undirected), and possibly weights on the links, which
may be color coded to represent different ranges of values. These
graphs can emphasize various network substructures, such as
clusters, cliques, or the most central nodes. [0085] g. Popups and
notifications, which are included in the various modules for
recommendations and early detection as appropriate. [0086] h. Heat
maps, which indicate the level of focus of learners at specific
points within the content modes, and annotations on top of these
heat maps to depict motifs.
[0087] In an embodiment, each of these visuals is interactive,
meaning that the instructor can select the quantities, dimensions,
datasets, and graph plotting properties specified above. They are
also real-time in two senses: the displays may update
instantaneously when the instructor makes a new selection, and any
new input data will be processed immediately and the corresponding
display re-rendered.
[0088] Returning now to the instructor interface in FIG. 1, the six
modules included here are: (1) Assessment Manager; (2) Interaction
and Office Hours; (3) Concept Proficiency; (4) Learning Behavior;
(5) Learning Paths; and (6) Social Learning Networks. The latter
four are tracking modules, and each of them implements some
combination of input, output, and visualization discussed above.
Hence, they make extensive use of the tracking information
collected about the users.
[0089] In an embodiment, Assessment Manager relates to the creation
and management of a course assessment, including an evaluation of
user knowledge of the material at different points during the
course. The assessment material can include, for example,
weekly assignments, comprehensive exams, or quizzes appearing
within or at the end of a course module. In an embodiment, the
Assessment Manager can allow instructors to create, import, edit,
schedule, assign, and distribute customized assessments to users,
as well as view and grade the submissions as needed.
[0090] To facilitate assessment creation, the Assessment Manager
module includes a graphical user interface (GUI), which can be used
to author multiple choice (e.g., radio response or checkmark
selection) or free response questions. Additionally, instructors
can import documents that include questions created with other
software that also reside on the instructor workstation, and
include the answers in an assessment for the course as needed. In
an embodiment, the instructor can edit assessments embedded in the
baseline course by the author. The module will also provide
recommendations for these edits, which are based on concept
proficiencies identified for individuals or groups of users
(methods for determining these proficiencies will be explained in
the context of the Concept Proficiency module), identifying which
of the concepts specific users require more practice with.
[0091] In an embodiment, assessments (such as exams) can be
scheduled for deployment to all users, a specific group of users,
or a single user to differentiate learning either at a certain date
and time or once the user has completed a certain portion of the
material. A group of users to which an assessment is deployed can
be specified in a number of ways, such as those users who are
following a specific learning path or one in a specific set of
paths, those who meet proficiency on a certain set of features, or
those in a list of names specified by the instructor.
[0092] From time to time, users can be delivered assessment
questions as a means for, at least in part, measuring proficiency.
For assessment questions that are machine graded (e.g., multiple
choice or programming assignments), the instructor need not review
user submissions but may review results. However, other assessment
questions, including those created by the instructor and those in
the baseline course, may require manual grading. In an embodiment,
a method allows an instructor to grade assignments and attach
scores to assignments as needed (e.g., in a manner similar to that
provided by a Portable Document Format (PDF) editor), with
functions such as adding comment boxes, highlighting, replacing,
inserting, and underlining text, equations, or images. The
instructor can also select a specific time at which grades and
markups should be viewable by the users.
[0093] In an embodiment, the Interaction and Office Hour module
supports a plurality of methods by which the instructor can
interact with the users. These include, but are not limited to,
one-way communication mechanisms of posting announcements, sending
emails, and including comments in the course material (i.e., video
or text documents) for users to read, and two-way communication
mechanisms for posting on discussion forums included in the course,
sending private messages, and holding Virtual Office Hour (VOH)
sessions. In an embodiment, the instructor has the ability to
handle this communication on a per-user or per-group basis.
[0094] VOHs require the support of real-time streaming from the
instructor workstation to the target devices running the course
application. In an embodiment, the instructor can have video and
audio streaming support and the end users can write comments in a
chat box that exists for the VOH sessions, similar to the interface
used by Ustream. In another embodiment, the end users may have
video and/or audio support as well, similar to a group chat like on
Skype or Google Hangout except that (i) many more users can be
supported, and (ii) the instructor has a master control over the
users' ability to speak and show video at different times, with
his/her audio and video output taking precedence over the others.
In either embodiment, the web server in FIG. 1 acts as the proxy
for streaming data between the instructor and the students.
[0095] Information about user participation during these sessions,
such as how long they spent logged into the VOH and their activity
level in terms of the number of asked and answered questions, can
be recorded, for use, if desired, by an instructor, for example, as
another factor in the grade given for the class and to help
differentiate instruction for those who seem to struggle
during the sessions. In an embodiment, these sessions may also be
held on a per-group or even per-user basis, such as scheduling an
extra help or advanced material discussion session for users on a
given learning path. As with the Assessment Manager module,
recommendations are made based on concept proficiency for different
users, which will help guide the decision as to which VOH sessions
should be held.
[0096] The remaining four modules depicted for the instructor
interface in FIG. 1, Concept Proficiency, Learning Behavior, Social
Learning Networks, and Learning Paths, each track a different
aspect of user learning. A description of each module is given
first, followed by an example of (1) how an instructor can use
these modules to make useful conclusions about learning behavior in
their respective courses, and (2) the recommendation and early
detection aspects of the system, as appropriate.
[0097] In an embodiment, the Concept Proficiency tracker module
reveals information about the proficiency levels of individual
users, groups of users, and/or a class of users as a whole. Concept
proficiency here refers to the level of mastery a user has obtained
with, or his/her tendency towards, a given course concept. Recall
that these concepts may be pre-defined and labeled by the author,
or may be discovered through machine learning.
[0098] With these concepts in hand, and the association between
these concepts and the content learning material and assessments,
the proficiency levels can be determined through a variety of
methods. For example, the average score that a given user obtained
on all the assessments related to a given concept can be taken as a
measure of proficiency on that concept. This method can be enhanced
by the application of a number of machine learning algorithms as
well. For one, since a user may not have filled out all assessments
related to a concept, an algorithm can be applied to predict the
score that a user would have achieved on those assessments. This
algorithm could be of collaborative filtering in nature, where
similarities between users and assessments (e.g., quizzes) are
extracted from the available data, and in turn used to build models
on a per-user and per-quiz basis, the combination of which leads to
the desired prediction. This method could also leverage
correlations identified between behavioral information (e.g.,
fraction of time or number of pauses registered for that user on a
video related to the assessment) to enhance the proficiency
determination, by applying a supervised learning algorithm such as
a Support Vector Machine (SVM) that can readily identify such
correlations and apply them to prediction when they exist; this is
especially useful early on in a course where there is not yet much
information about specific users or quizzes for standard
collaborative filtering to be effective. Behavioral motifs can also
be used as machine learning features that enhance prediction quality.
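The first method described above — proficiency as the average score on all assessments related to a concept — can be sketched as follows. The data shapes (dicts keyed by assessment ID) are illustrative assumptions; the collaborative-filtering and SVM enhancements would replace the simple average with predicted scores.

```python
def concept_proficiency(scores, concept_of):
    """scores: {assessment_id: score} for one user.
    concept_of: {assessment_id: concept label}.
    Returns {concept: average score on that concept's assessments}."""
    totals, counts = {}, {}
    for aid, score in scores.items():
        c = concept_of.get(aid)
        if c is None:
            continue  # assessment not tagged with any concept
        totals[c] = totals.get(c, 0.0) + score
        counts[c] = counts.get(c, 0) + 1
    return {c: totals[c] / counts[c] for c in totals}
```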
[0099] Note that these concepts will typically be the same as the
dimensions by which the course is individualized, but in an
embodiment the instructor will have the ability to define his/her
own dimensions to monitor as well, especially if the baseline
course is not individualized already.
[0100] In an embodiment of the Concept Proficiency tracker, an
instructor can choose the type and time interval of the
visualization, depending on preference and the conclusions that
need to be drawn. The possible visualization types for this module
include, but are not limited to, scatterplots of users in which
each dimension corresponds to a different concept, boxplots of
users in which there is a separate box for each concept, and a time
series plot of how the proficiencies of different users vary for a
given concept. The time intervals could be up to and including the
present, and in a preferred embodiment, through some point in the
future, in which case a sophisticated prediction algorithm (which
could leverage the collaborative filtering and SVM techniques, or
those similar, given above) would be applied that analyzes trends
in user proficiency over time and for different concepts (both
individually and collectively) up to the present to give a forecast
of the future.
[0101] With the proficiencies and predictions computed
concept-by-concept for different users, an embodiment of the
Concept Proficiency tracker will provide recommendations to the
instructor about where intervening in the learning process will be
most useful and how that intervention may be deployed. This can be
accomplished in a number of ways. For example, the instructor may
input percentages for each of the concepts that define a level to
be obtained in order for a user to be considered "proficient."
Then, users who have not met this level on one or more concepts
would be flagged, ranked in descending order by the sum of their
deviation from the proficiency mark on each of the concepts that
they are not proficient in (users proficient on all concepts would
get a score of 0). These are the users that the instructor would be
recommended to focus his/her attention on. By the same logic, this
could be applied over all course concepts by finding the total
deviation across different users from the proficiency marks, in
order to generate recommendations as to which concepts the
instructor should give attention to.
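The ranking described above can be sketched as below: users who miss the instructor-set level on any concept are scored by the sum of their shortfalls and ranked in descending order, with users proficient on all concepts scoring 0. The data shapes are illustrative assumptions.

```python
def rank_users_for_attention(proficiencies, thresholds):
    """proficiencies: {user: {concept: score}}; thresholds: {concept: level}.
    Returns [(user, total_deviation)] with the neediest users first."""
    scored = []
    for user, per_concept in proficiencies.items():
        # sum shortfalls only; concepts at or above the mark contribute 0
        deviation = sum(
            max(0.0, thresholds[c] - per_concept.get(c, 0.0))
            for c in thresholds
        )
        scored.append((user, deviation))
    return sorted(scored, key=lambda t: -t[1])   # largest shortfall first
```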
[0102] On the other hand, if the instructor did not have set
proficiency levels, then these rankings could be generated on a
relative scale, considering the distribution of proficiencies
obtained by all users. For example, the proficiency on a concept
could be taken as the current average of all user performance on
the concepts, and the deviations computed accordingly; in this
case, the instructor would be recommended to focus on those users
who have the lowest overall performances relative to the
sample.
[0103] These recommendations, made early in the course, also
provide a method for early detection of users and/or concepts
requiring attention before proceeding with the remainder of the
course.
[0104] In an embodiment, the Learning Behavior tracker module
relates to tracking and visualizing how users of a course interact
with the content. Similar to the Concept Proficiency module, in an
embodiment the instructor will have the ability to see the results
for individual users or groups of them (e.g., those on a given
learning path), and for any time interval, which can include a
future point if appropriate prediction schemes are included. This
module leverages the inputs and visualizations for all of the
course material aside from the assessments. The possible
visualizations here include, but are not limited to, histograms of
the number of times an event occurs (e.g., pauses or plays for
videos) within an interval (measured in playback time for a video,
in number of lines for text, and so on); boxplots of the completion rates of users for
different types of material; and scatter plots of learning style
tendencies (e.g., visual, verbal, auditory) for each user. There
are a number of options for specifying the intervals of lengths at
which some of these plots (e.g., the histograms) are generated. For
example, they can simply be uniform across the full length of the
content, where the instructor inputs the increment (e.g., 15 second
chunks of a 3 minute video, for a total of 12 chunks), as depicted
in FIGS. 3b and 3d. Another possibility is to have these intervals
preset by the instructor in advance.
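The uniform-interval option above — e.g., a 15-second increment over a 3-minute (180-second) video yielding 12 chunks — amounts to bucketing each event position by the increment. The function and variable names are illustrative.

```python
def chunk_index(position, increment, content_length):
    """Map a playback position (seconds) to its uniform interval index."""
    num_chunks = -(-content_length // increment)   # ceiling division
    idx = int(position // increment)
    return min(idx, num_chunks - 1)                # clamp the final boundary
```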
[0105] Recommendations generated from the Learning Behavior tracker
module serve to direct instructors to specific intervals of content
that are either too simple or may require additional explanation,
as opposed to the Concept Proficiency tracker which seeks to make
recommendations on a per-user or per-concept level. In an
embodiment, this is accomplished by analyzing the distributions of
the time spent at different intervals of the content, averaged
across users, and comparing these times across each of the
intervals. These distributions can be created for each separate
content file, can combine all content files of a given mode (e.g.,
all intervals across video files), or can combine all modes within
a given learning unit (e.g., all intervals across the files in a
given unit). Those intervals that qualify statistically as outliers
with respect to a distribution would be the ones of interest; those
on the low end (i.e., below the first quartile of the data) would
be those that the instructor is recommended to check for purposes
of identifying whether the content was too simple and perhaps not
necessary to include in the course, while those on the high end
(i.e., above the third quartile of the data) would be those that the
instructor is recommended to check for purposes of identifying
particularly confusing or difficult content.
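The outlier test above can be sketched with the conventional quartile-based fences; the 1.5-IQR multiplier is the standard choice, assumed here rather than specified by the text.

```python
import statistics

def outlier_intervals(avg_time_per_interval):
    """avg_time_per_interval: {interval_id: mean time spent across users}.
    Returns (too_simple, too_difficult) lists of interval IDs."""
    values = sorted(avg_time_per_interval.values())
    q1, _, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1
    low, high = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    # low-end outliers: content possibly too simple; high-end: possibly confusing
    too_simple = [i for i, t in avg_time_per_interval.items() if t < low]
    too_difficult = [i for i, t in avg_time_per_interval.items() if t > high]
    return too_simple, too_difficult
```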
[0106] In an embodiment, the Social Learning Network tracker module
analyzes and displays information about users' interaction with one
another through various social networking functions that are
integrated into the course. As with the other tracking modules,
there are numerous ways to depict this behavior and relate it to
projected positive assessments, and in an embodiment, the
instructor can choose from a wide variety of display types and time
interval ranges, updated in real time as new information is
collected about user interaction. In visualizing the network of
interaction among users, many of the display types in this module
will take the form of a network graph structure, with users as
nodes and links indicating that some level of interaction has taken
place among them. These graphs can have directed edges, indicating
the flow of information, i.e., a link from A to B means that A sent
a message to B, or can be undirected, simply to indicate some
communication has taken place. The links can also be weighted, to
indicate the frequency of these interactions, i.e., a link from A
to B with weight of 3 could indicate that A commented on a post
made by B a total of three times.
[0107] In an embodiment, analysis will also be performed on the
graphs in the Social Learning Network tracker in order to generate
recommendations for the instructor. For example, centrality
measures can be computed on each of the users forming a graph,
which are direct functions of the graph structure, to identify
those who are the most influential. Then, the top-K (e.g., top 10)
most influential users can be divided into three groups: those who
are information seekers (i.e., those with highest in-degree, which
is the number of incoming links to a node), those who are
information providers (i.e., those with highest out-degree, or the
number of outgoing links), and those who are both (i.e., those with
highest total degree). Those who have the highest information
provider scores can be recommended to be rewarded for their
participation.
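The degree-based classification above can be sketched as below: in-degree identifies information seekers, out-degree identifies providers, and total degree identifies users who are both, with the top-K taken in each category. The edge representation is an illustrative assumption.

```python
from collections import Counter

def top_influencers(edges, k=10):
    """edges: list of (sender, receiver) directed interaction pairs.
    Returns (top seekers by in-degree, top providers by out-degree,
    top overall by total degree), each a list of up to k users."""
    out_deg = Counter(a for a, _ in edges)   # outgoing links per user
    in_deg = Counter(b for _, b in edges)    # incoming links per user
    total = in_deg + out_deg
    seekers = [u for u, _ in in_deg.most_common(k)]
    providers = [u for u, _ in out_deg.most_common(k)]
    both = [u for u, _ in total.most_common(k)]
    return seekers, providers, both
```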
[0108] Further, the system can recommend pairs or groups of users
to work together based on combining the information about their
seeking and providing scores with information from the Concept
Proficiency tracker module. Ideally, a group would consist of a
complementary set of information providers and seekers, with the
providers having high proficiency in certain learning concepts, and
the seekers needing to obtain proficiency in those same concepts.
In an embodiment, this is accomplished as follows, for different
concepts separately. First, the provider and seeker scores are
normalized across users, as a percentage of the maximum in each
case. Second, for each concept, the proficiency level is subtracted
from each user's performance (if the user is proficient, this will
be positive, if not, it will be negative). Third, each user's
provider score and seeker score are multiplied by the relative
proficiency, to get two numbers. Finally, those users with highest
provider product (i.e., most positive) are grouped with those that
have lowest seeker product (i.e., most negative), so that the
former can teach the latter about the given concepts. The
recommended groups are displayed to the instructor, who can in turn
choose to create these groups, or modify them as desired.
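The four steps above can be sketched for a single concept as follows: normalize the provider and seeker scores, compute each user's relative proficiency, form the two products, and pair the most positive provider products with the most negative seeker products. The data shapes are illustrative assumptions.

```python
def recommend_pairs(provider, seeker, performance, proficiency_level):
    """provider/seeker: {user: raw score}; performance: {user: concept score}.
    Returns [(provider_user, seeker_user)] pairings for this concept."""
    # Step 1: normalize scores as a percentage of the maximum in each case
    max_p = max(provider.values()) or 1.0
    max_s = max(seeker.values()) or 1.0
    # Step 2: relative proficiency (positive if proficient, negative if not)
    rel = {u: performance[u] - proficiency_level for u in performance}
    # Step 3: multiply normalized scores by relative proficiency
    provider_product = {u: (provider[u] / max_p) * rel[u] for u in performance}
    seeker_product = {u: (seeker[u] / max_s) * rel[u] for u in performance}
    # Step 4: pair highest (most positive) provider products with lowest
    # (most negative) seeker products, so the former can teach the latter
    providers = sorted(performance, key=lambda u: -provider_product[u])
    seekers = sorted(performance, key=lambda u: seeker_product[u])
    pairs = []
    for p, s in zip(providers, seekers):
        if provider_product[p] > 0 > seeker_product[s] and p != s:
            pairs.append((p, s))
    return pairs
```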
[0109] In an embodiment, the Learning Path Tracker module focuses
on visualizing the learning paths that are taken by users, and is
only applicable where the course is individualized, either by
machine or human intelligence or some combination. In an
embodiment, the visualizations available to the instructor will
include a description of the different paths that are traversed,
the number of users who traversed each path, and the exact paths
traversed by each individual user. Note that in general, an
individualized course supports two types of adaptation:
navigation-based, which is concerned with determining the segment
of content to display next based on the current user model, and
presentation-based, which is concerned with lower-level adaptation
of the individual content within the current segment. The logic
within the web server in FIG. 1 will enumerate all combinations of
these potential variations to determine the set of learning paths,
and associate users with them accordingly.
[0110] Below is an example of a trainer using the present invention
to host a compliance course at a company for a number of users
where the course is individualized and the primary course material
is a set of lecture videos, and discussion forums are the primary
mode of communication between the users. The set of learning
concepts for this course may correspond to key compliance topics
that all users must be proficient in, and the course author (e.g.,
someone from a compliance board) may have set a number of
competency values for each of these concepts that the users are
required to meet. The trainer can use the Concept Proficiency
tracker module to chart user progress towards meeting these goals.
In doing so, he/she is also given a number of recommendations, both
in terms of users, such as who would benefit from intervention from
the instructor and which groups may benefit from studying together,
and in terms of the course content, such as which concepts may need
to be explained more thoroughly, from which the trainer can draw
conclusions accordingly.
[0111] FIGS. 2A and 2B show a 3D scatterplot (FIG. 2A) and boxplots
of proficiency visualizations (FIG. 2B). In each of these figures,
the instructor has selected exactly which concepts to show, and
which users to highlight for each visual. As shown, users Bob and
Alice were selected, allowing the instructor to see that Bob has a
proficiency of 10, 3, and 3 on concepts A, C, and D, respectively.
In the example, Bob is proficient in A, surpassing the competency
value of 7 required, but is lagging behind in C and D. On the other
hand, Alice is proficient with C, but is lagging behind in A and D,
and is actually at the lowest point for A (judging by the
corresponding boxplot). One conclusion that the instructor could
make from this is that Alice and Bob may benefit from working
together as study partners, since Bob could explain concept A to
Alice, and Alice could explain concept C to Bob. The instructor may
then send a private message (e.g., email) to both Bob and Alice
through the Interaction and Office Hour module and make this
recommendation. Moreover, the instructor can clearly see from the
boxplot of concept D and a class average (3) that many of the users
need more assistance in learning the corresponding compliance
material for concept D. As a result, the instructor could prepare
supplementary material for that concept. This same recommendation
would also be made to the instructor through a popup notification
on the interface, for not only this concept, but also any of those
in the course for which the average scores were below proficiency.
The same would also be done on a per-user basis, with those lagging
behind in the most concepts recommended to be reached out to
first.
[0112] Upon analyzing the class scores across each concept (a
subset of which are shown in FIGS. 2A and 2B), the interface may
detect that the class as a whole is lagging behind in three of the
compliance topics, and convey this information to the instructor.
As a result, the instructor may turn to the Behavior Tracking
module to gain insights into whether the users may be confused by
or skipping past certain lectures. This module can also give the
instructor useful information about the learning process as a
whole, such as which of the videos users tend to focus on or skip
over the most, either for the video overall or within a given
interval of the videos, as well as actionable recommendations
(and/or algorithmic adjustments) based on this analysis. By
reviewing the user results, the instructor can determine which
chunks of material should be addressed specifically with the users
and how the author of the course can improve the course to increase
user engagement. That is, instructors have the ability to assess
the course, as configured by the author. Also, the instructor could
look for specific learning styles among the users (e.g., visual,
verbal, or auditory learners, or some combination) by having the
interface analyze how much of each mode was available on a user's
learning path and how much of that mode the user actually focused
on.
[0113] FIGS. 3A through 3D illustrate four types of visuals that an
instructor may select to view user behavior, including, for
example, boxplots of the overall fractions of videos completed by
each user who watched the respective videos (FIG. 3A) where the
trainer has chosen which videos to show in the plot, and which
users to highlight; a histogram of the average number of times
users paused within particular intervals of a video (FIG. 3B),
where the video, granularity (i.e., width of each interval), and
overall interval of interest have been set by the trainer; a
histogram of the average number of times users skipped past
particular intervals of the video (FIG. 3C) where the same
parameters are set; and clusters of users by their auditory and
visual learning dimensions (FIG. 3D) where the trainer has chosen
the number of clusters to extract from the data for the plot.
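The interval histograms of FIGS. 3B and 3C could be computed by binning event timestamps into fixed-width intervals and averaging over viewers, as in the hypothetical sketch below. The function name, data shapes, and parameters are illustrative assumptions, not the patent's actual implementation.

```python
def pause_histogram(pause_times_per_user, video_len, bin_width):
    """Average number of pauses per user within each interval of a video.

    pause_times_per_user: one list of pause timestamps (seconds) per user.
    video_len: total video length in seconds.
    bin_width: interval width in seconds (the trainer-chosen granularity).
    Returns a list with one average count per interval.
    """
    n_bins = -(-video_len // bin_width)  # ceiling division
    counts = [0] * n_bins
    for pauses in pause_times_per_user:
        for t in pauses:
            # clamp timestamps at the video end into the last bin
            counts[min(int(t // bin_width), n_bins - 1)] += 1
    n_users = max(len(pause_times_per_user), 1)
    return [c / n_users for c in counts]
```

The same binning applies to skip events for a FIG. 3C-style histogram; only the event source changes.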
[0114] From these graphs, the instructor could conclude that video
3 has a particularly low completion rate relative to some of the
other videos (FIG. 3A), with users skipping over the portion
between 2:30 and 4:00 quite frequently (FIG. 3C), with an average
of almost one skip per user. Detections such as this would also be
displayed automatically by the interface, using an algorithm to
determine the outliers of the distributions of time spent across
the intervals as described previously. If the material in video 3
corresponds to one of the concepts depicted in FIG. 2, then the
instructor could determine that either this material must be
improved, or that users must be directed to spend more time with
this video, and could communicate this to them accordingly. On the
other hand, the trainer can see that video 4 has a particularly
high completion rate (FIG. 3A), with users pausing well over once
on average between 2:00 and 2:30 (FIG. 3B); however, the
instructor can see that Bob was one of the users who focused on
this video much less than the others did (FIG. 3A). If this
material corresponds to concept C in FIG. 2, then the trainer could
instruct Bob to pay attention to that material more carefully
before revisiting the corresponding assessments. Additionally, the
instructor could prepare supplementary material to explain more
thoroughly what was taught between 2:00 and 2:30, if in fact the
reason for pausing seemed to be confusion with that content.
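The patent does not specify the outlier algorithm used for automatic detections like the one above; as one concrete possibility, the sketch below flags heavily skipped intervals using the common 1.5×IQR rule over the per-interval average time spent. All names here are illustrative assumptions.

```python
def iqr_outliers(values):
    """Return indices of low outliers under the 1.5*IQR rule.

    values: average time spent per interval of a video. Intervals whose
    averages fall far below the interquartile range (e.g., a frequently
    skipped portion such as 2:30-4:00 in video 3) are flagged.
    """
    s = sorted(values)
    n = len(s)
    # simple index-based quartiles; adequate for a sketch
    q1 = s[n // 4]
    q3 = s[(3 * n) // 4]
    low_fence = q1 - 1.5 * (q3 - q1)
    return [i for i, v in enumerate(values) if v < low_fence]
```

The same fence applied on the high side would surface intervals where users dwell unusually long, e.g., a confusing passage that triggers repeated pausing.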
[0115] In an embodiment, the instructor may also be interested in
the amount of interactions that are occurring between the users on
the discussion forums. Rather than having to peruse the multitude
of question, answer, and comment discussions created by the users
on the forums themselves, the instructor may desire a more
convenient visualization, as well as recommendations, through the
Social Learning Network tracker module. An example of such a
display is shown in FIGS. 4A-B, where the instructor has the
ability to vary the number of interactions required for there to be
a link present between two users. Here, the links are assumed to be
undirected. The visual on the left (FIG. 4A), which requires only a
single interaction for a link to be present, appears noisy, while
the one on the right (FIG. 4B) requires at least three interactions
and paints a much clearer picture. The trainer would be able to
draw a number of conclusions from this; for instance, since the
users lying at the center of a group of nodes are those that
interact with the largest number of others, they are likely either
asking (i.e., information seekers) or answering (i.e., information
givers) many of the questions posed by other users. Through further
investigation, the
instructor could either task these users with assisting those in
need and reward them accordingly (e.g., with a statement of
distinction for the course), or reach out to them to offer
assistance on an individual basis, depending on which category they
fall into. The connected groups shown in FIG. 4B also may correspond
to efficient study groups; recommendations on how to form these
groups, on a concept-by-concept basis, would also be provided
through the interface using the methods described previously to
compare user proficiency with their information providing and
seeking scores.
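The thresholded link construction behind FIGS. 4A-B can be sketched as follows: an undirected link is kept only when two users have interacted at least a trainer-chosen number of times. The data shape (an iterable of user pairs, one per forum interaction) and the function name are assumptions for illustration.

```python
from collections import Counter


def build_links(interactions, min_interactions):
    """Build the undirected link set of a social learning network.

    interactions: iterable of (user_a, user_b) pairs, one per forum
    interaction; direction is ignored, so (a, b) and (b, a) accumulate
    into the same undirected link.
    Returns a set of alphabetically ordered (user_a, user_b) tuples.
    """
    counts = Counter(frozenset(pair) for pair in interactions)
    return {tuple(sorted(pair)) for pair, n in counts.items()
            if n >= min_interactions}
```

Raising `min_interactions` from 1 to 3 reproduces the denoising effect described above: weak, incidental links drop out and only persistent interaction patterns remain.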
[0116] Finally, the instructor may be interested in which learning
paths individual users are following, as well as the structure of
the course in general, in which case he/she would turn to the
Learning Path Tracker module.
[0117] FIGS. 5A and 5B show embodiments of visuals that can be
generated from this module, from which the trainer can see that the
course consists of 12 units, and that the adaptation is based
entirely on navigation (as opposed to presentation adaptation,
described in the corresponding description of this module). In the
visual at the top (FIG. 5A), the trainer has chosen
to see the top 3 learning paths traversed, from which he/she can
see that these encompass 75% of the users. Depending on the number
of users in the course, this may be too dense in the sense that a
single learning path encapsulates a large number of users, and the
trainer could conclude that the paths in the course should be
further separated to better individualize for each user, perhaps
through presentation-based adaptation. The instructor could further
differentiate learning to accomplish this through human
intelligence, or work with the author to define more paths via
automated adaptation. At the bottom (FIG. 5B), the instructor is
able to visually compare the learning paths taken by individual
users. Here, he/she has chosen to visualize those taken by Alice
and Bob. Clearly, these two users are not on the standard paths
taken by the majority of others, and the instructor may be able to
determine that the material these users were struggling with
(concepts A and C, respectively) is not as well represented on
these paths.
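The top-path summary of FIG. 5A amounts to counting how many users traversed each distinct path and reporting the most common ones together with the fraction of users they cover. The sketch below illustrates one way to do this; the function name and the representation of a path as a tuple of unit identifiers are assumptions, not the disclosed implementation.

```python
from collections import Counter


def top_paths(user_paths, k):
    """Summarize the k most-traversed learning paths.

    user_paths: mapping of user -> tuple of unit IDs visited, in order.
    Returns (top, coverage): the k most common (path, count) pairs and
    the fraction of all users those paths encompass.
    """
    counts = Counter(user_paths.values())
    top = counts.most_common(k)
    coverage = sum(n for _, n in top) / len(user_paths)
    return top, coverage
```

A high coverage by very few paths, like the 75% over 3 paths in the example, is the signal the trainer would use to conclude that the course's paths should be further individualized.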
[0118] This module could assist the instructor in several other
ways as well. First, he/she would be able to send an email to only those
users who are on a specific learning path to advise them to peruse
some supplementary material that is likely to benefit this group in
particular. Second, if the instructor is able to add comments to
specific portions of the material for individuals to see, he/she
could choose to only have certain comments on a particular video
appear to users who follow a certain learning path, when those
comments are meant for this group in particular. Third, an
instructor could decide to focus attention on providing additional
explanation for the paths that have the most users traversing
them.
[0119] These four tracking modules have been described largely as
operating independently of one another. However, the combination of
information provided by them could be useful to an instructor as
well, and in an embodiment of the present invention, such visuals
and/or descriptive statistics would also be available. For example,
the invention can provide a visual of how concept proficiency or
learning behavior varies depending on the learning path chosen, or
on how different social learning network clusters of users tend to
indicate varying levels of concept proficiency. Also, these
modules, and the Learning Behavior tracker in particular, emphasize
the importance of the proposed method of real time communication
between the end user devices, the web server, and the instructor
workstation over the respective network interfaces, so that the
data displays can be updated and re-rendered in a fine-grained
fashion. Finally, it should be emphasized that the six modules
shown in FIG. 1 are only examples of what will typically be
included in an embodiment of the interface. In an embodiment, the
interface can have other modules as well as or in place of those
shown in the embodiment in FIG. 1 for additional management,
tracking, and recommendation functionalities.
[0120] Although the description above and the accompanying drawings
contain much specificity, the details provided should not be
construed as limiting the scope of the embodiments, but merely as
describing some of the features of the embodiments. The description
and figures should not be taken as restrictive, and are
understood as broad and general teachings in accordance with the
present invention. While the embodiments have been described using
specific terms, such description is for illustrative purposes only,
and it is to be understood that modifications and variations to
such embodiments, including, but not limited to, the substitutions
of equivalent features and terminology may be readily apparent to
those of skill in the art based upon this disclosure without
departing from the spirit and scope of the invention.
* * * * *