U.S. patent application number 15/941545 was filed with the patent office on 2018-03-30 and published on 2018-12-13 for apparatus and method for assessing and tracking user competency in augmented/virtual reality-based training in industrial automation systems and other systems.
The applicant listed for this patent is Honeywell International Inc. The invention is credited to Arif Shuail Ahamed, Subhan Basha Dudekula, Manas Dutta, Ramesh Babu Koniki, and Mark Phillips.
Publication Number | 20180357922 |
Application Number | 15/941545 |
Family ID | 64563592 |
Filed Date | 2018-03-30 |
Publication Date | 2018-12-13 |
United States Patent Application | 20180357922 |
Kind Code | A1 |
Dutta; Manas; et al. |
December 13, 2018 |
APPARATUS AND METHOD FOR ASSESSING AND TRACKING USER COMPETENCY IN
AUGMENTED/VIRTUAL REALITY-BASED TRAINING IN INDUSTRIAL AUTOMATION
SYSTEMS AND OTHER SYSTEMS
Abstract
A method includes receiving one or more records containing
commands, an association of the commands with visual objects in an
augmented reality/virtual reality (AR/VR) space, and an AR/VR
environment setup. The commands correspond to user actions taken in
the AR/VR space. The method also includes analyzing the user
actions based on the one or more records and assessing the user
actions based on the analysis. The one or more records could have a
portable file format. The commands could correspond to one or more
gestures made by a user, one or more voice commands or voice
annotations spoken by the user, one or more textual messages
provided by the user, and/or one or more pointing actions taken by
the user using at least one pointing device.
Inventors: | Dutta; Manas; (Bangalore, IN); Koniki; Ramesh Babu; (Bangalore, IN); Dudekula; Subhan Basha; (Bangalore, IN); Ahamed; Arif Shuail; (Bangalore, IN); Phillips; Mark; (Sammamish, WA) |
Applicant: |
Name | City | State | Country | Type |
Honeywell International Inc. | Morris Plains | NJ | US | |
Family ID: | 64563592 |
Appl. No.: | 15/941545 |
Filed: | March 30, 2018 |
Related U.S. Patent Documents

Application Number | Filing Date | Patent Number |
62517006 | Jun 8, 2017 | |
62517015 | Jun 8, 2017 | |
62517037 | Jun 8, 2017 | |
Current U.S. Class: | 1/1 |
Current CPC Class: | G06Q 10/1053 20130101; G06F 16/13 20190101; G06F 3/017 20130101; H04L 67/38 20130101; G09B 19/003 20130101; G06F 3/014 20130101; G06T 19/006 20130101; G09B 9/00 20130101; G06F 16/116 20190101; H04L 67/22 20130101; G06F 3/167 20130101; G09B 5/02 20130101; G06F 3/033 20130101; G09B 7/02 20130101; G06F 3/011 20130101 |
International Class: | G09B 19/00 20060101 G09B019/00; G09B 9/00 20060101 G09B009/00; G09B 5/02 20060101 G09B005/02; G06F 17/30 20060101 G06F017/30; G06Q 10/10 20060101 G06Q010/10 |
Claims
1. A method comprising: receiving one or more records containing
commands, an association of the commands with visual objects in an
augmented reality/virtual reality (AR/VR) space, and an AR/VR
environment setup, wherein the commands correspond to user actions
taken in the AR/VR space; analyzing the user actions based on the
one or more records; and assessing the user actions based on the
analysis.
2. The method of claim 1, wherein the one or more records have a
portable file format.
3. The method of claim 1, wherein the commands correspond to at
least one of: one or more gestures made by a user; one or more
voice commands or voice annotations spoken by the user; one or more
textual messages provided by the user; and one or more pointing
actions taken by the user using at least one pointing device.
4. The method of claim 1, wherein assessing the user actions
comprises determining whether each user action or group of user
actions was correct, partially correct, wrong, invalid, or
damaging.
5. The method of claim 1, wherein assessing the user actions
comprises using a set of validation rules, different sets of
validation rules used to validate different user actions or groups
of user actions.
6. The method of claim 1, wherein assessing the user actions
comprises using feedback from system software configured to manage
an industrial process, the feedback used to verify whether an
expected or desired outcome was achieved by a user.
7. The method of claim 1, further comprising: outputting an
assessment of the user actions to a learning management system.
8. The method of claim 1, further comprising: outputting an
assessment of the user actions to an analytics engine; analyzing
the assessment and past historical performance of a user with the
analytics engine to identify recommended training for the user; and
outputting an identification of the recommended training to a
learning management system.
9. An apparatus comprising: at least one processing device
configured to: receive one or more records containing commands, an
association of the commands with visual objects in an augmented
reality/virtual reality (AR/VR) space, and an AR/VR environment
setup, wherein the commands correspond to user actions taken in the
AR/VR space; analyze the user actions based on the one or more
records; and assess the user actions based on the analysis.
10. The apparatus of claim 9, wherein the one or more records have
a portable file format.
11. The apparatus of claim 9, wherein the commands correspond to at
least one of: one or more gestures made by a user; one or more
voice commands or voice annotations spoken by the user; one or more
textual messages provided by the user; and one or more pointing
actions taken by the user using at least one pointing device.
12. The apparatus of claim 9, wherein, to assess the user actions,
the at least one processing device is configured to determine
whether each user action or group of user actions was correct,
partially correct, wrong, invalid, or damaging.
13. The apparatus of claim 9, wherein, to assess the user actions,
the at least one processing device is configured to use a set of
validation rules, different sets of validation rules associated
with different user actions or groups of user actions.
14. The apparatus of claim 9, wherein, to assess the user actions,
the at least one processing device is configured to use a set of
validation rules, the set of validation rules being configured for
a specific type of user, a specific type of equipment, or a
specific type of operational scenario.
15. The apparatus of claim 9, wherein, to assess the user actions,
the at least one processing device is configured to use feedback
from system software that is configured to manage an industrial
process, the feedback used to verify whether an expected or desired
outcome was achieved by a user.
16. The apparatus of claim 9, wherein the at least one processing
device is further configured to at least one of: output an
assessment of the user actions; and analyze the assessment and past
historical performance of a user to identify recommended training
for the user and output an identification of the recommended
training.
17. A method comprising: receiving data defining user actions
associated with an augmented reality/virtual reality (AR/VR) space;
translating the user actions into associated commands; identifying
associations of the commands with visual objects in the AR/VR
space; aggregating the commands, the associations of the commands
with the visual objects, and an AR/VR environment setup into at
least one record; and transmitting the at least one record for
assessment of the user actions.
18. The method of claim 17, wherein the data defining the user
actions comprises one or more of: data defining one or more
gestures made by a user; data defining one or more voice commands
or voice annotations spoken by the user; data defining one or more
textual messages provided by the user; and data defining one or
more pointing actions taken by the user using at least one pointing
device.
19. The method of claim 17, wherein translating the user actions
into the associated commands comprises using a grammar reference
that associates different user input actions with different
commands.
20. The method of claim 17, wherein: the AR/VR space supports
dynamics of hardware modules associated with control or safety
system hardware used for industrial process control; and the AR/VR
space interfaces with at least one supervisory system used for
industrial process control.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS AND PRIORITY CLAIM
[0001] This application claims priority under 35 U.S.C. §
119(e) to U.S. Provisional Patent Application No. 62/517,006, U.S.
Provisional Patent Application No. 62/517,015, and U.S. Provisional
Patent Application No. 62/517,037, all filed on Jun. 8, 2017. These
provisional applications are hereby incorporated by reference in
their entirety.
TECHNICAL FIELD
[0002] This disclosure generally relates to augmented reality and
virtual reality systems. More specifically, this disclosure relates
to an apparatus and method for assessing and tracking user
competency in augmented/virtual reality-based training in
industrial automation systems and other systems.
BACKGROUND
[0003] Augmented reality and virtual reality technologies are
advancing rapidly and becoming more and more common in various
industries. Augmented reality generally refers to technology in
which computer-generated content is superimposed over a real-world
environment. Examples of augmented reality include games that
superimpose objects or characters over real-world images and
navigation tools that superimpose information over real-world
images. Virtual reality generally refers to technology that creates
an artificial simulation or recreation of an environment, which may
or may not be a real-world environment. An example of virtual
reality includes games that create fantasy or alien environments
that can be explored by users.
SUMMARY
[0004] This disclosure provides an apparatus and method for
assessing and tracking user competency in augmented/virtual
reality-based training in industrial automation systems and other
systems.
[0005] In a first embodiment, a method includes receiving one or
more records containing commands, an association of the commands
with visual objects in an augmented reality/virtual reality (AR/VR)
space, and an AR/VR environment setup. The commands correspond to
user actions taken in the AR/VR space. The method also includes
analyzing the user actions based on the one or more records and
assessing the user actions based on the analysis.
[0006] In a second embodiment, an apparatus includes at least one
processing device configured to receive one or more records
containing commands, an association of the commands with visual
objects in an AR/VR space, and an AR/VR environment setup. The
commands correspond to user actions taken in the AR/VR space. The
at least one processing device is also configured to analyze the
user actions based on the one or more records and assess the user
actions based on the analysis.
[0007] In a third embodiment, a method includes receiving data
defining user actions associated with an AR/VR space. The method
also includes translating the user actions into associated commands
and identifying associations of the commands with visual objects in
the AR/VR space. The method further includes aggregating the
commands, the associations of the commands with the visual objects,
and an AR/VR environment setup into at least one record. In
addition, the method includes transmitting the at least one record
for assessment of the user actions.
[0008] In a fourth embodiment, an apparatus includes at least one
processing device configured to perform the method of the third
embodiment or any of its dependent claims. In a fifth embodiment, a
non-transitory computer readable medium contains instructions that
when executed cause at least one processing device to perform the
method of the first embodiment or any of its dependent claims. In a
sixth embodiment, a non-transitory computer readable medium
contains instructions that when executed cause at least one
processing device to perform the method of the third embodiment or
any of its dependent claims.
[0009] Other technical features may be readily apparent to one
skilled in the art from the following figures, descriptions, and
claims.
BRIEF DESCRIPTION OF THE DRAWINGS
[0010] For a more complete understanding of this disclosure,
reference is now made to the following description, taken in
conjunction with the accompanying drawings, in which:
[0011] FIG. 1 illustrates an example architecture for capturing
user actions in augmented/virtual reality and assessing user
competency according to this disclosure;
[0012] FIG. 2 illustrates an example device that supports capturing
user actions in augmented/virtual reality or assessing user
competency according to this disclosure; and
[0013] FIGS. 3 and 4 illustrate example methods for capturing user
actions in augmented/virtual reality and assessing user competency
according to this disclosure.
DETAILED DESCRIPTION
[0014] FIGS. 1 through 4, discussed below, and the various
embodiments used to describe the principles of the present
invention in this patent document are by way of illustration only
and should not be construed in any way to limit the scope of the
invention. Those skilled in the art will understand that the
principles of the invention may be implemented in any type of
suitably arranged device or system.
[0015] In conventional training and skill development environments, trainee competency is often assessed using questionnaires, pictorial/image/Flash-based evaluations, or multiple objective questions. These standard assessment techniques no longer suffice because they do not validate a trainee's skills or performance interactively "on the job." Also, an assessment of a user's abilities is often based only on end results, making it difficult to suggest improvement opportunities quickly.
[0016] As augmented/virtual reality solutions for skill development and training proliferate, it is typically difficult, in the absence of any external monitoring mechanism, to monitor and assess a trainee's progress and the impact of training in the augmented/virtual space. Ideally, a system could validate a user's skills by tracking each user action, thereby assessing the user's competency and real-world problem-solving skills.
[0017] This disclosure provides techniques for tracking and assessing the actions of an industrial automation user or other user in an augmented/virtual environment. These techniques overcome challenges with respect to tracking unwanted steps, tracking impacts on underlying industrial systems or other systems, assessing intermediate steps, performing behavioral assessments, and identifying responses to panic situations or other situations. Among other things, this disclosure describes a portable file format that captures content such as user inputs, data formats, and training setups. The portable file format allows for easier storage, computation, and distribution of content and addresses technical constraints with respect to space, computation, and bandwidth.
[0018] FIG. 1 illustrates an example architecture 100 for capturing
user actions in augmented/virtual reality and assessing user
competency according to this disclosure. As shown in FIG. 1, the
architecture 100 includes a training environment 102, which denotes
a visualization layer that allows interaction with an augmented
reality/virtual reality (AR/VR) space. In this example, the
training environment 102 can include one or more end user devices,
such as at least one AR/VR headset 104, at least one computing
device 106, or at least one interactive AR/VR system 108. Each
headset 104 generally denotes a device that is worn by a user and
that displays an AR/VR space. The headset 104 in FIG. 1 is a
MICROSOFT HOLOLENS device, although any other suitable AR/VR device
could be used. Each computing device 106 generally denotes a device
that processes data to present an AR/VR space (although not
necessarily in a 3D format) to a user. Each computing device 106
denotes any suitable computing device, such as a desktop computer,
laptop computer, tablet computer, or smartphone. Each interactive
AR/VR system 108 includes a headset and one or more user input
devices, such as interactive or smart gloves. Although not shown,
one or more input devices could also be used with the headset 104
or the computing device 106.
[0019] The architecture 100 also includes at least one processor,
such as in a server 110, that is used to record content. The server
110 generally denotes a computing device that receives content from
the training environment 102 and records and processes the content.
The server 110 includes various functions or modules to support the
recording and processing of training or other interactive content.
Each of these functions or modules could be implemented in any
suitable manner, such as with software/firmware instructions
executed by one or more processors. The server 110 could be
positioned locally with or remote from the training environment
102.
[0020] Functionally, the server 110 includes a user input receiver
112, which receives, processes, and filters user inputs made by the
user. The user inputs could include any suitable inputs, such as
gestures made by the user, voice commands or voice annotations
spoken by the user, textual messages provided by the user, or
pointing actions taken by the user using a pointing device (such as
a smart glove). Any other or additional user inputs could also be
received. The user inputs can be filtered in any suitable manner
and are output to an input translator 114. To support the use of
the architecture 100 by a wide range of users, input variants (like
voice/text in different languages) could be supported. The user
input receiver 112 includes any suitable logic for receiving and
processing user inputs.
[0021] The input translator 114 translates the various user inputs
into specific commands by referring to a standard action grammar
reference 116. The grammar reference 116 represents an
actions-to-commands mapping dictionary that associates different
user input actions with different commands. For example, the
grammar reference 116 could associate certain spoken words, text
messages, or physical actions with specific commands. The grammar
reference 116 could support one or multiple possibilities for
commands where applicable, such as when different commands may be
associated with the same spoken words or text messages but
different physical actions. The grammar reference 116 includes any
suitable mapping or other association of actions and commands. The
input translator 114 includes any suitable logic for identifying
commands associated with received user inputs.
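As a non-authoritative sketch of this actions-to-commands idea, the mapping below pairs an input kind and its recognized content with a command name; every entry and command name is an invented example rather than the grammar reference actually used.

    # Illustrative actions-to-commands mapping in the spirit of the grammar
    # reference 116; all entries and command names are invented examples.
    GRAMMAR_REFERENCE = {
        ("voice",   "open controller"): "OPEN_MODULE",
        ("text",    "open controller"): "OPEN_MODULE",
        ("gesture", "tap"):             "SELECT_OBJECT",
        ("gesture", "twist"):           "ROTATE_OBJECT",
        ("pointer", "press"):           "ACTIVATE_OBJECT",
    }

    def translate(kind: str, payload: str):
        """Map one filtered user input to a system-understandable command,
        or return None if the action is not recognized."""
        return GRAMMAR_REFERENCE.get((kind, payload.strip().lower()))

Under these assumptions, translate("voice", "Open Controller") would yield "OPEN_MODULE", while an unrecognized gesture would yield None and could be logged or discarded.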
[0022] The input translator 114 outputs identified commands to an
aggregator 118. The aggregator 118 associates the commands with
visual objects in the AR/VR space being presented to the user into
one or more records 120. The aggregator 118 also embeds an AR/VR
environment setup into the one or more records 120. The AR/VR
environment setup can define what visual objects are to be
presented in the AR/VR space. The records 120 therefore associate
specific commands (which were generated based on user inputs) with
specific visual objects in the AR/VR space as defined by the
environment setup. The aggregator 118 includes any suitable logic
for aggregating data.
[0023] The records 120 are created in a portable file format, which
allows the records 120 to be used by various other devices. For
example, the data in the records 120 can be processed to assess the
user's skills and identify whether additional training might be
needed. This can be accomplished without requiring the transport of
larger data files like video files. The portable file format could
be defined in any suitable manner, such as by using XML or
JSON.
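To make the idea of a portable file format concrete, the snippet below writes one hypothetical JSON-encoded record; the field names and values are assumptions about what a record 120 could contain, not the actual schema.

    # Hypothetical example of a JSON-encoded record 120; the field names and
    # values are assumptions, not the actual portable file format.
    import json

    record = {
        "environment_setup": {
            "scene": "controller_cabinet_demo",
            "objects": [{"id": "obj-7", "type": "io_module"}],
        },
        "commands": [
            {"command": "SELECT_OBJECT", "object_id": "obj-7", "timestamp_ms": 1200},
            {"command": "OPEN_MODULE",   "object_id": "obj-7", "timestamp_ms": 3400},
        ],
    }

    with open("session_record.json", "w") as f:
        json.dump(record, f, indent=2)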
[0024] The records 120 could be used in various ways. In this
example, the records 120 are provided (such as via a local intranet
or a public network like the Internet) to a cloud computing
environment 122, which implements various functions to support
analysis of the records 120 and assessment of the user. Note,
however, that the analysis and assessment functions could be
implemented in other ways and need not be performed by a cloud
computing environment. For instance, the analysis and assessment
functions could be implemented using the server 110.
[0025] As shown in FIG. 1, an assessment service application
programming interface (API) 124 is used to receive incoming records
120. The API 124 denotes a web interface that allows uploading of
records 120. The records 120 received through the API 124 can be
stored in a database 126 for analysis.
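Purely as an illustration of such a web upload interface, the sketch below uses Flask and SQLite as stand-ins for the assessment service API and its backing database; the route, table layout, and payload handling are assumptions and do not describe the actual API 124 or database 126.

    # Illustrative stand-in for a record upload API backed by a database;
    # Flask and SQLite are used only as examples, and the route and table
    # layout are assumptions.
    import json
    import sqlite3
    from flask import Flask, request

    app = Flask(__name__)
    db = sqlite3.connect("records.db", check_same_thread=False)
    db.execute("CREATE TABLE IF NOT EXISTS records (id INTEGER PRIMARY KEY, body TEXT)")

    @app.route("/records", methods=["POST"])
    def upload_record():
        body = request.get_json(force=True)   # a portable-format record upload
        db.execute("INSERT INTO records (body) VALUES (?)", (json.dumps(body),))
        db.commit()
        return {"status": "stored"}, 201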
[0026] Records 120 from the API 124 or the database 126 can be
provided to an action validator 128, which has access to one or
more sets of validation rules 130. Different sets of validation
rules 130 could be provided, such as for different types of users,
different types of equipment, or different types of operational
scenarios. The validation rules 130 can therefore be configurable
in order to provide the desired functionality based on the user
actions being evaluated. The action validator 128 processes one or
more records 120 based on the appropriate set of validation rules
130. The action validator 128 can also receive and use feedback
from system software 132, which generally denotes software used to
control one or more industrial processes (such as EXPERION software
from HONEYWELL INTERNATIONAL INC. or safety system software) or
other processes. The feedback can be used to verify whether an
expected or desired outcome was achieved by the user. Based on this
information, the action validator 128 determines a result for each
action or group of actions taken by the user and identified in the
record(s) 120. Example results could include correct, partially
correct, wrong, invalid, or damaging. The action validator 128
includes any suitable logic for evaluating user actions.
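A minimal sketch of such rule-driven validation appears below, assuming a simple expected command sequence and a set of known-damaging commands; the rule structure, command names, and the way system feedback is applied are illustrative guesses rather than the validation rules 130 themselves.

    # Illustrative rule-driven validation in the spirit of an action validator;
    # the expected sequence, damaging-command set, and feedback handling are
    # invented examples.
    EXPECTED_SEQUENCE = ["SELECT_OBJECT", "OPEN_MODULE", "REPLACE_MODULE"]
    DAMAGING_COMMANDS = {"FORCE_OUTPUT_WHILE_RUNNING"}

    def validate_actions(commands, system_feedback_ok=True):
        """Label each command correct, partially correct, wrong, invalid, or damaging."""
        results = []
        for step, cmd in enumerate(commands):
            name = cmd.get("command")
            if name in DAMAGING_COMMANDS:
                results.append("damaging")
            elif name not in EXPECTED_SEQUENCE:
                results.append("invalid")
            elif step < len(EXPECTED_SEQUENCE) and name == EXPECTED_SEQUENCE[step]:
                results.append("correct")
            else:
                results.append("partially correct")  # right action, wrong position
        if not system_feedback_ok and results:
            results[-1] = "wrong"  # expected outcome not confirmed by system feedback
        return results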
[0027] An assessment engine 134 uses the results from the action
validator 128 to generate an assessment for the user. The
assessment could take any suitable form, such as a pass/fail score
for each action or collection of actions, reward points, or any
other measurement for each action or collection of actions. The
assessment engine 134 includes any suitable logic for assessing a
user's competencies.
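One way such measurements could be derived, again only as a hedged sketch with invented point values and an invented pass threshold, is shown below.

    # Illustrative scoring of validation results into a pass/fail outcome and
    # reward points; the point values and pass threshold are invented.
    def assess(results, pass_threshold=0.8):
        points = {"correct": 10, "partially correct": 5, "wrong": 0,
                  "invalid": 0, "damaging": -10}
        score = sum(points[r] for r in results)
        ratio = results.count("correct") / len(results) if results else 0.0
        return {"passed": ratio >= pass_threshold and "damaging" not in results,
                "reward_points": max(score, 0)}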
[0028] The measurements from the assessment engine 134 can be
provided to a learning management system (LMS) 136. The user can be
enrolled in the LMS 136 for competency development, and the LMS 136
can use the measurements to identify areas where the user is
competent and areas where the user may require further training. An
analytics engine 138 could use the measurements from the assessment
engine 134, along with past historical performance of the user over
a period of time, to gain insights into the user's competencies.
The analytics engine 138 could then recommend training courses to
help improve the user's skills. The LMS 136 includes any suitable
logic for interacting with and providing information to users for
training or other purposes. The analytics engine 138 includes any
suitable logic for analyzing user information and identifying
training information or other information to be provided to the
user.
[0029] Based on this, the following process could be performed
using the various components of the server 110 in FIG. 1. A user
initiates a training assessment module or other function in an
AR/VR application, on a mobile device, or on any other suitable
device. The application begins recording and sends the user input
action details (such as gestures, voice, and textual messages) to
the user input receiver 112. The user input receiver 112 detects
and tracks the user input actions (such as gestures, voice, textual
messages, and pointing device actions), filters the actions as
needed, and passes the selected/filtered actions to the input
translator 114. The input translator 114 converts the user actions
into system-understandable commands by referring to the grammar
reference 116, and the input translator 114 passes these commands
to the aggregator 118. The aggregator 118 associates the
system-understandable commands to visual objects, embeds the AR/VR
environment setup, and prepares one or more records 120 in a
portable file format, which identifies the user actions against a
task being assessed. The records 120 are transmitted for training
assessment.
[0030] Moreover, based on this, the following process could be
performed using the various components of the cloud computing
environment 122 in FIG. 1. The API 124 stores incoming records 120
in the database 126 for later review or reassessment. The records
120 are also passed from the API 124 or the database 126 to the
action validator 128 for validation. The action validator 128 uses
the validation rules 130 to validate each action or group of
actions taken by the user. The action validator 128 can optionally
use feedback from the system software 132. The action validator 128
determines a result for each step or collection of steps taken by
the user. The results from the action validator 128 are provided to
the assessment engine 134, which determines pass/fail scores,
rewards points, or other measurements. The user is informed of the
measurements through the LMS 136. The measurements can be used by
the analytics engine 138 to gain insights into the user's
competencies by analyzing his or her past performance over a period
of time and to recommend any relevant training courses to "upskill"
the user.
[0031] In this way, the architecture 100 can be used to capture and
store users' actions in AR/VR environments. As a result, data
associated with the AR/VR environments can be easily captured,
stored, and distributed in the records 120. Other devices and
systems can use the records 120 to analyze the users' actions and
possibly recommend training for the users. The records 120 can
occupy significantly less space in memory and require significantly
less bandwidth for transmission, reception, storage, and analysis
compared to alternatives such as video/image recording. These
features can provide significant technical advantages, such as in
systems that collect and analyze large amounts of interactive data
related to a number of AR/VR environments.
[0032] This technology can find use in a number of ways in
industrial automation settings or other settings. For example,
control and safety systems and related instrumentation used in
industrial plants (such as refinery, petrochemical, and
pharmaceutical plants) are often very complex in nature. It may
take a lengthy period of time (such as more than five years) to
train new system maintenance personnel to become proficient in
managing plant and system upsets independently. Combining such long
delays with a growing number of experienced personnel retiring in
the coming years means that industries are facing acute skill
shortages and increased plant upsets due to the lack of experience
and skill.
[0033] Traditional classroom training, whether face-to-face or
online, often requires personnel to be away from the field for an
extended time (such as 20 to 40 hours). In many cases, this is not
practical, particularly for plants that are already facing resource
and funding challenges due to overtime, travel, or other issues.
Also, few sites have powered-on and functioning control hardware
for training. Due to the fast rate of change for technology, it may
no longer be cost-effective to procure and maintain live training
systems.
[0034] Simulating control and safety system hardware in the AR/VR
space, building dynamics of real hardware modules in virtual
objects, and interfacing the AR/VR space with real supervisory
systems (such as engineering and operator stations) can provide
various benefits. For example, it can reduce or eliminate any
dependency on real hardware for competency management. It can also
"gamify" the learning of complex and mundane control and safety
system concepts, which can help to keep trainees engaged. It can
further decrease the time needed to become proficient in control
and safety system maintenance through more hands-on practice
sessions and higher retention of the training being imparted.
[0035] This represents example ways in which the devices and
techniques described above could be used. However, these examples
are non-limiting, and the devices and techniques described above
could be used in any other suitable manner. In general, the devices
and techniques described in this patent document could be
applicable whenever one or more user actions in an AR/VR space are
to be recorded, stored, and analyzed (for whatever purpose).
[0036] Although FIG. 1 illustrates one example of an architecture
100 for capturing user actions in augmented/virtual reality and
assessing user competency, various changes may be made to FIG. 1.
For example, the architecture 100 could support any number of
training environments 102, headsets 104, computing devices 106,
AR/VR systems 108, servers 110, or other components. Also, the
records 120 could be used in any other suitable manner. In
addition, while described as being used with or including a
training environment 102 and generating records 120, the
architecture 100 could be used with or include any suitable
environment 102 and be used to generate any suitable records 120
containing interactive content (whether or not used for training
purposes).
[0037] FIG. 2 illustrates an example device 200 that supports
capturing user actions in augmented/virtual reality or assessing
user competency according to this disclosure. The device 200 could,
for example, represent a device that implements the functionality
of the server 110 in FIG. 1 and/or the functionality of the cloud
computing environment 122 or any of its components in FIG. 1.
[0038] As shown in FIG. 2, the device 200 includes at least one
processing device 202, at least one storage device 204, at least
one communications unit 206, and at least one input/output (I/O)
unit 208. The processing device 202 executes instructions that may
be loaded into a memory 210, such as instructions that (when
executed by the processing device 202) implement the functions of
the server 110 and/or the cloud computing environment 122 or any of
its components. The processing device 202 includes any suitable
number(s) and type(s) of processors or other devices in any
suitable arrangement. Example types of processing devices 202
include microprocessors, microcontrollers, digital signal
processors, field programmable gate arrays, application specific
integrated circuits, and discrete circuitry.
[0039] The memory 210 and a persistent storage 212 are examples of
storage devices 204, which represent any structure(s) capable of
storing and facilitating retrieval of information (such as data,
program code, and/or other suitable information on a temporary or
permanent basis). The memory 210 may represent a random access
memory or any other suitable volatile or non-volatile storage
device(s). The persistent storage 212 may contain one or more
components or devices supporting longer-term storage of data, such
as a read only memory, hard drive, Flash memory, or optical
disc.
[0040] The communications unit 206 supports communications with
other systems or devices. For example, the communications unit 206
could include a network interface card or a wireless transceiver
facilitating communications over a wired or wireless network (such
as a local intranet or a public network like the Internet). The
communications unit 206 may support communications through any
suitable physical or wireless communication link(s).
[0041] The I/O unit 208 allows for input and output of data. For
example, the I/O unit 208 may provide a connection for user input
through a keyboard, mouse, keypad, touchscreen, or other suitable
input device. The I/O unit 208 may also send output to a display,
printer, or other suitable output device.
[0042] Although FIG. 2 illustrates one example of a device 200 that
supports capturing user actions in augmented/virtual reality or
assessing user competency, various changes may be made to FIG. 2.
For example, computing devices come in a wide variety of
configurations, and FIG. 2 does not limit this disclosure to any
particular computing device.
[0043] FIGS. 3 and 4 illustrate example methods for capturing user
actions in augmented/virtual reality and assessing user competency
according to this disclosure. In particular, FIG. 3 illustrates an
example method 300 for capturing user actions in augmented/virtual
reality, and FIG. 4 illustrates an example method 400 for assessing
user competency based on captured user actions in augmented/virtual
reality. For ease of explanation, the methods 300 and 400 are
described as being performed using the device 200 operating as the
server 110 in FIG. 1 (method 300) or as the cloud computing
environment 122 or any of its components in FIG. 1 (method 400).
However, the methods 300 and 400 could be used with any suitable
devices and in any suitable systems.
[0044] As shown in FIG. 3, a recording of user actions related to
an AR/VR space is initiated at step 302. This could include, for
example, the processing device 202 of the server 110 receiving an
indication from a user device 104-108 that a user wishes to
initiate the recording. As a particular example, the user could be
engaged in an AR/VR training session designed to identify the
user's competency at performing one or more tasks or how the user
responds to one or more situations. The user, a manager, or other
personnel could initiate the recording before or after the user has
initiated the AR/VR training session.
[0045] Information defining an AR/VR environment setup is received
at step 304. This could include, for example, the processing device
202 of the server 110 receiving information identifying the overall
visual environment of the AR/VR space being presented to the user
by the user device 104-108 and information identifying visual
objects in the AR/VR space being presented to the user by the user
device 104-108.
[0046] Information defining user actions associated with the AR/VR
environment is received at step 306. This could include, for
example, the processing device 202 of the server 110 receiving
information identifying how the user is interacting with one or
more of the visual objects presented in the AR/VR space by the user
device 104-108. The interactions could take on various forms, such
as the user making physical gestures, speaking voice commands,
speaking voice annotations, or providing textual messages. This
information is used to detect, track, and filter the user actions
at step 308. This could include, for example, the processing device
202 of the server 110 processing the received information to
identify distinct gestures, voice commands, voice annotations, or
textual messages that occur. This could also include the processing
device 202 of the server 110 processing the received information to
identify visual objects presented in the AR/VR space that are
associated with those user actions.
[0047] The user actions are translated into commands at step 310.
This could include, for example, the processing device 202 of the
server 110 using the standard action grammar reference 116 and its
actions-to-commands mapping dictionary to associate different user
actions with different commands. Specific commands are associated
with specific visual objects presented in the AR/VR space at step
312. This could include, for example, the processing device 202 of
the server 110 associating specific ones of the identified commands
with specific ones of the visual objects presented in the AR/VR
space. This allows the server 110 to identify which visual objects
are associated with the identified commands.
[0048] At least one file is generated that contains the commands,
the associations of the commands with the visual objects, and the
AR/VR environment setup at step 314. This could include, for
example, the processing device 202 of the server 110 generating a
record 120 containing this information. The at least one file is
output, stored, or used in some manner at step 316. This could
include, for example, the processing device 202 of the server 110
providing the record 120 to the API 124 for storage in the database
126 or analysis by the action validator 128.
[0049] As shown in FIG. 4, at least one file associated with a
user's actions in an AR/VR space is received at step 402. This
could include, for example, the processing device 202 implementing
the API 124 receiving a record 120 identifying commands, an
association of the commands with visual objects in the user's AR/VR
space, and an AR/VR environment setup for the user's AR/VR space.
The record 120 could have been generated using the method 300 shown
in FIG. 3 and described above. This could also include the
processing device 202 implementing the API 124 storing the record
120 in the database 126 and/or passing the record to the action
validator 128.
[0050] Applicable validation rules are obtained at step 404. This
could include, for example, the processing device 202 implementing
the action validator 128 obtaining one or more sets of validation
rules 130. The validation rules 130 could be selected in any
suitable manner. Example selection criteria could include the type
of activity being performed by the user in the AR/VR space, the
type of user being evaluated, the type of equipment being simulated
in the AR/VR space, or the type of operational scenario being
simulated in the AR/VR space.
[0051] One or more actions or group of actions identified by the
received file are analyzed using the selected validation rules at
step 406, and results assessing the user's actions are determined
at step 408. This could include, for example, the processing device
202 implementing the action validator 128 using the validation
rules to determine whether the user performed correct or incorrect
actions within the user's AR/VR space. This could also include the
processing device 202 implementing the action validator 128
determining whether the desired outcome or result was obtained by
the user as a result of the user actions within the user's AR/VR
space. In some cases, the action validator 128 can use feedback,
such as from one or more devices used for industrial process
control, to determine whether the user's actions would have
resulted in the desired outcome or result.
[0052] The user can be informed of the results at step 410. This
could include, for example, the action validator 128 providing the
results to the LMS 136 for delivery to the user. The results can
also be analyzed to determine whether the user might require or
benefit from additional training at step 412, and the user can be
informed of any additional training opportunities at step 414. This
could include, for example, the processing device 202 implementing
the analytics engine 138 analyzing the user's current results and
possibly the user's prior results in order to recommend relevant
training courses that might benefit the user. This could also
include the analytics engine 138 providing the results to the LMS
136 for delivery to the user.
[0053] Although FIGS. 3 and 4 illustrate examples of methods for
capturing user actions in augmented/virtual reality and assessing
user competency, various changes may be made to FIGS. 3 and 4. For
example, while each figure illustrates a series of steps, various
steps in each figure could overlap, occur in parallel, occur in a
different order, or occur any number of times.
[0054] In some embodiments, various functions described in this
patent document are implemented or supported by a computer program
that is formed from computer readable program code and that is
embodied in a computer readable medium. The phrase "computer
readable program code" includes any type of computer code,
including source code, object code, and executable code. The phrase
"computer readable medium" includes any type of medium capable of
being accessed by a computer, such as read only memory (ROM),
random access memory (RAM), a hard disk drive, a compact disc (CD),
a digital video disc (DVD), or any other type of memory. A
"non-transitory" computer readable medium excludes wired, wireless,
optical, or other communication links that transport transitory
electrical or other signals. A non-transitory computer readable
medium includes media where data can be permanently stored and
media where data can be stored and later overwritten, such as a
rewritable optical disc or an erasable storage device.
[0055] It may be advantageous to set forth definitions of certain
words and phrases used throughout this patent document. The terms
"application" and "program" refer to one or more computer programs,
software components, sets of instructions, procedures, functions,
objects, classes, instances, related data, or a portion thereof
adapted for implementation in a suitable computer code (including
source code, object code, or executable code). The term
"communicate," as well as derivatives thereof, encompasses both
direct and indirect communication. The terms "include" and
"comprise," as well as derivatives thereof, mean inclusion without
limitation. The term "or" is inclusive, meaning and/or. The phrase
"associated with," as well as derivatives thereof, may mean to
include, be included within, interconnect with, contain, be
contained within, connect to or with, couple to or with, be
communicable with, cooperate with, interleave, juxtapose, be
proximate to, be bound to or with, have, have a property of, have a
relationship to or with, or the like. The phrases "at least one of"
and "one or more of," when used with a list of items, mean that
different combinations of one or more of the listed items may be
used, and only one item in the list may be needed. For example, "at
least one of: A, B, and C" includes any of the following
combinations: A, B, C, A and B, A and C, B and C, and A and B and
C.
[0056] The description in the present application should not be
read as implying that any particular element, step, or function is
an essential or critical element that must be included in the claim
scope. The scope of patented subject matter is defined only by the
allowed claims. Moreover, none of the claims invokes 35 U.S.C.
§ 112(f) with respect to any of the appended claims or claim
elements unless the exact words "means for" or "step for" are
explicitly used in the particular claim, followed by a participle
phrase identifying a function. Use of terms such as (but not
limited to) "mechanism," "module," "device," "unit," "component,"
"element," "member," "apparatus," "machine," "system," "processor,"
or "controller" within a claim is understood and intended to refer
to structures known to those skilled in the relevant art, as
further modified or enhanced by the features of the claims
themselves, and is not intended to invoke 35 U.S.C. § 112(f).
[0057] While this disclosure has described certain embodiments and
generally associated methods, alterations and permutations of these
embodiments and methods will be apparent to those skilled in the
art. Accordingly, the above description of example embodiments does
not define or constrain this disclosure. Other changes,
substitutions, and alterations are also possible without departing
from the spirit and scope of this disclosure, as defined by the
following claims.
* * * * *