U.S. patent application number 16/729876, for hybrid conversations with human and virtual assistants, was filed on December 30, 2019 and published by the patent office on 2021-07-01.
The applicant listed for this patent is International Business Machines Corporation. The invention is credited to Jingshi LI, Mukesh Kumar MOHANIA, Jaysen OLLERENSHAW, and Khoi-Nguyen Dao TRAN.
Publication Number: 20210201896
Application Number: 16/729876
Kind Code: A1
Family ID: 1000004589623
Publication Date: 2021-07-01

United States Patent Application 20210201896
TRAN; Khoi-Nguyen Dao; et al.
July 1, 2021
HYBRID CONVERSATIONS WITH HUMAN AND VIRTUAL ASSISTANTS
Abstract
In some examples, a user, either a customer or potential
customer of a business, engages in conversations with a virtual
assistant (VA) provided by the business. The virtual assistant (VA)
is further supported by one or more human assistants (HA), if
needed. In embodiments, to facilitate seamless transitions between
a VA and a HA, when needed, an intelligent decision maker (IDM) is
provided. The IDM receives a user question and a proposed answer to
the question from a VA, evaluates the proposed answer in the
context of the conversation, and determines if the proposed answer
requires further review by an HA. In response to a determination
that the proposed answer requires further review, the IDM sends the
proposed answer to an HA, and, in response to an indication by the
HA, takes further action in the conversation.
Inventors: TRAN; Khoi-Nguyen Dao; (Southbank, AU); LI; Jingshi; (Yarralumla, AU); MOHANIA; Mukesh Kumar; (Waramanga, AU); OLLERENSHAW; Jaysen; (Kaleen, AU)

Applicant:
Name: International Business Machines Corporation
City: Armonk
State: NY
Country: US

Family ID: 1000004589623
Appl. No.: 16/729876
Filed: December 30, 2019
Current U.S. Class: 1/1
Current CPC Class: G10L 2015/223 20130101; G10L 15/1815 20130101; G10L 15/22 20130101; G10L 15/1822 20130101; G10L 15/16 20130101
International Class: G10L 15/16 20060101 G10L015/16; G10L 15/22 20060101 G10L015/22; G10L 15/18 20060101 G10L015/18
Claims
1. A method comprising: receiving a user question and a proposed
answer from a conversation managed by a virtual assistant (VA);
evaluating the proposed answer in a context of the conversation and
determining if the proposed answer requires further review by a
human assistant (HA); in response to a determination that the
proposed answer requires further review, sending the proposed
answer to an HA for review; in response to an indication from the
HA, taking further action in the conversation.
2. The method of claim 1, wherein the indication from the HA
includes a modified answer, and further comprising: outputting the
modified answer to the user.
3. The method of claim 1, wherein the indication from the HA is to
send the original proposed answer, and further comprising:
outputting the proposed answer to the user.
4. The method of claim 1, wherein the indication from the HA
advises that the HA is taking over the conversation, and further
comprising: ceasing to respond to the user's questions.
5. The method of claim 1, wherein evaluating the proposed answer
further comprises using one or more of statistical and rule-based
predictive models to determine if the VA is not able to provide a
satisfactory answer to the user.
6. The method of claim 1, further comprising continuously
monitoring the conversation between the VA and the user and storing
the conversation in a chat log.
7. The method of claim 1, wherein sending the proposed answer to
the HA for review further comprises sending a message to the user
that additional time is required to provide the user with an
answer.
8. The method of claim 1, wherein sending the proposed answer to an
HA for review further comprises providing the HA a time limit in
which to respond.
9. The method of claim 8, wherein if the HA does not respond within
the time limit, further comprising outputting the original answer
to the user.
10. The method of claim 1, wherein evaluating the proposed answer
further comprises at least one of: determining a priority of the
conversation from a user or business pre-defined configuration; or
determining an urgency of the conversation from a quality of the
conversation.
11. A system, comprising: a user interface configured to receive a
question from a user and provide a corresponding answer to the
user; a VA, coupled to the user interface, to propose an answer to
the user question; and a conversation evaluator, coupled to the VA
and to the user interface, configured to: evaluate the proposed
answer in a context of the conversation; determine if the VA's
proposed answer requires further review by an HA; and in response
to a determination that the proposed answer requires further
review, send the proposed answer to an HA for review.
12. The system of claim 11, wherein the conversation evaluator is
further configured to, in response to an indication from the HA,
take further action in the conversation.
13. The system of claim 11, further comprising a memory, in which
is stored at least one of: one or more statistical or rule-based
models configured to predict whether a VA is able to provide a
satisfactory answer to the user; one or more user profiles
generated for users of the system; or one or more chat logs that
record past chats between users and respective ones of the one or
more VAs.
14. The system of claim 11, wherein to evaluate the proposed answer
further comprises at least one of: determine a priority of the
conversation from a user or business pre-defined configuration; or
determine an urgency of the conversation from a quality of the
conversation.
15. The system of claim 14, wherein at least one of: the priority
of the conversation is higher if the user is a paying customer, or
is likely to become a paying customer of a business hosting the VA;
and the urgency of the conversation is higher the higher the
probability that the user is likely to be dissatisfied with one of:
the proposed answer, or the proposed answer in the context of the
last few answers from the VA.
16. A computer program product to manage hybrid VA-HA
conversations, the computer program product comprising: a
computer-readable storage medium having computer-readable program
code embodied therewith, the computer-readable program code
executable by one or more computer processors to: receive a user
question and a proposed answer from a conversation managed by a VA;
evaluate the proposed answer in a context of the conversation and
determine if the proposed answer requires further review by an HA;
in response to a determination that the proposed answer requires
further review, send the proposed answer to an HA for review; in
response to an indication from the HA, take further action in the
conversation.
17. The computer program product of claim 16, wherein the
computer-readable program code is further executable to: when the
indication from the HA includes a modified answer, output the
modified answer to the user, and, when the indication from the HA
advises that the HA is taking over the conversation, cease to
respond to the user's questions.
18. The computer program product of claim 17, wherein the context
of the conversation includes one or more of a priority of the user,
an urgency of the conversation and a summary of the
conversation.
19. The computer program product of claim 16, wherein to evaluate
the proposed answer further comprises to use one or more of
statistical and rule-based predictive models to determine if the VA
is not able to provide a satisfactory answer to the user.
20. The computer program product of claim 16, wherein to send the
proposed answer to the HA for review further comprises to send a
message to the user that additional time is required to provide an
answer to the user.
Description
BACKGROUND
[0001] The present invention relates to automated conversation
systems, and more specifically to hybrid conversations with both
human and virtual assistants.
[0002] Current conversation systems are designed to be fully
automated. In such systems, companies attempt to create exhaustive
conversation paths for their customers. While this often takes care of the most common customer needs, there are still complex queries where a human assistant is needed. In these cases, conversation
systems are designed to fail gracefully and direct the customer to
other sources for human assistance. Or, for example, the customer
is put in a queue where he or she waits for a human assistant. The
current systems have tradeoffs that inconvenience the customer,
such as the limited serviceability of virtual assistants and the
limited availability of human assistants.
[0003] It is useful to provide solutions to these problems of
current conversation systems.
SUMMARY
[0004] According to one embodiment of the present disclosure, a
method is provided. The method includes receiving a user question
and a proposed answer from a conversation managed by a virtual
assistant (VA), evaluating the proposed answer in the context of
the conversation and determining if the proposed answer requires
further review by a human assistant (HA). The method further
includes, in response to a determination that the proposed answer
requires further review, sending the proposed answer to an HA for
review, and, in response to an indication from the HA, taking
further action in the conversation.
[0005] According to a second embodiment of the present disclosure,
a system is provided. The system includes a user interface
configured to receive a question from a user and provide a
corresponding answer to the user, and one or more virtual
assistants (VAs), coupled to the user interface, to propose an
answer to the user question. The system further includes, a
conversation evaluator, coupled to the VAs and to the user
interface, configured to evaluate the proposed answer in a context
of the conversation, determine if the proposed answer requires
further review by an HA, and, in response to a determination that
the proposed answer requires further review, send the proposed
answer to an HA for review.
[0006] According to a third embodiment of the present disclosure, a
computer-readable storage medium is provided. The computer-readable
storage medium has computer-readable program code embodied
therewith, the computer-readable program code executable by one or
more computer processors to perform an operation. The operation
includes receive a user question and a proposed answer from a
conversation managed by a virtual assistant (VA), evaluate the
proposed answer in a context of the conversation and determine if
the proposed answer requires further review, and in response to a
determination that the proposed answer requires further review,
send the proposed answer to an HA for review. The operation further
includes to take further action in the conversation, in response to
an indication from the HA, such as, for example, to provide an
answer as modified by the HA to the user, or cause the VA to cease
responding altogether, if the HA takes over the conversation.
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS
[0007] FIG. 1 illustrates a schematic drawing of an example system,
according to one embodiment disclosed herein.
[0008] FIG. 2 is a block diagram illustrating a system node
configured to provide hybrid VA-HA conversation management,
according to one embodiment disclosed herein.
[0009] FIG. 3 illustrates an example hybrid VA-HA conversation,
where the HA eventually takes over the entire conversation,
according to one embodiment disclosed herein.
[0010] FIG. 4 depicts process flow of an example hybrid
conversation management method, according to one embodiment
disclosed herein.
[0011] FIG. 5 depicts process flow of an alternate example hybrid
conversation management method, according to one embodiment
disclosed herein.
DETAILED DESCRIPTION
[0012] Embodiments and examples described herein relate to a hybrid
conversation system where human and virtual assistants can
seamlessly participate in a conversation with a user, depending on
the user's needs, their profile, and the flow of the conversation.
In one or more embodiments, an example system is configured to
leverage the benefits of both HAs and VAs in conversation systems.
These include, for example, on the one hand, the on-demand
availability of VAs, and on the other hand, the ability of HAs to
handle all types of queries. In some embodiments a targeted
scenario may be a customer portal at a company's website, where
customers first encounter a VA. As the customer chats with the VA,
an intelligent decision maker continuously analyzes the chat and
develops a customer profile, which may include any previous chats.
Based on a configuration, which may include, for example, a
customer's profile and business rules, in one or more embodiments a
customer is assigned a priority. The priority may reflect a high
likelihood that the user of the chat is a paying customer, a
premium customer, or a customer in an executive role.
[0013] Additionally, in one or more embodiments, an urgency rating
and summary may also be dynamically created for the conversation.
The urgency rating reflects how necessary it appears to be that an HA intervene in the VA's conversation with the user, and may be a function of, for example, the complexity of the user's inquiries, user complaints, or the VA simply being unfamiliar with technology that the user is asking about. Urgency may be high, for example, in a customer assistance conversation where a user has purchased a complicated piece of equipment and cannot figure out how to use it, or feels that something is wrong with the machine. In one or more embodiments, the priority, urgency rating and summary together provide an intelligent assessment of the customer in the context of the business configuration, allowing the system to accurately determine the need to have an HA review the conversation, potentially modify one or more answers proposed by the VA, or, if needed, take over the conversation completely.
[0014] Thus, in one or more embodiments, the missed opportunities
that occur when customers engage with a wholly autonomous
conversation system may be avoided.
[0015] It is noted that in the VA realm there are many stories
regarding the failure of VAs (also known as "chatbots", and
sometimes so referred to herein) based on the fact that they do not
have sufficient intelligence to address the diverse range of
customer queries. This results in the business using the VA losing
potential customers, or having dissatisfied and unengaged
customers, eventually leading to churn. It is therefore imperative
that such missed opportunities in chatbot conversations are
identified in real time, and the customer routed to a qualified HA
to pick up the slack. Using one or more embodiments as disclosed
herein, companies may scale out their customer support operations
through the use of VAs, but at the same time maintain the quality
of the experience by identifying opportunities where an HA can make
a significant impact to the customer engagement. In one or more
embodiments, this may be accomplished using a hybrid conversation
manager that includes an intelligent decision maker configured to
know when to ask an HA to step in.
[0016] Thus, in one or more embodiments, a conversation interface
is provided that allows an HA or a VA to interject at any time,
with interjection control performed by the HA. Logs of each
conversation are continuously monitored and analyzed, by means of
which new conversation paths and examples can be extracted to
further improve the conversation capabilities of the VA.
[0017] In embodiments, in the event that a given VA completely fails to address the needs of a customer, or the customer cannot progress, and there are no HAs then available, the VA may engage a contextual filler conversation system to keep the user engaged. For example, in such a situation the VA may say "I need advice from a specialist. I've just sent a request." and then engage in other conversation topics such as "Did you need help with something else?" or "Have you used our product X?"
[0018] Thus, in one or more embodiments it is necessary to have HAs
available when needed. In some embodiments, in order to keep a pool
of active supporting HAs, an incentive system may be implemented,
where additional bonus pay for an HA is dynamically adjusted
depending on the available trained HAs and the quantity and quality
of service that each HA delivers.
[0019] FIG. 1 illustrates a schematic drawing of an example system
100, according to one embodiment disclosed herein. With reference
to FIG. 1 there is shown hybrid conversation manager 100,
configured to implement one or more embodiments of hybrid virtual
assistant and human assistant conversations. Hybrid conversation
manager 100 includes user interface 120 which receives questions
from a user 110 and also provides answers 111 to the user. The user
may be, for example, a customer or prospective customer of a
business. The business supports online automated conversations for
users to engage with it, and most of these conversations are
handled by a VA. However, for instances when a VA simply cannot
provide a satisfactory answer, in one or more embodiments, the
business may implement an intelligent decision maker 141 to
automatically monitor conversations and call in HAs when
needed.
[0020] User interface 120 is connected to a set of VAs 130, via
links 121. The set of VAs may include a single VA, but in the
depicted example there are N VAs shown, namely VA 1 131, VA 2 132
and on through VA N 138. Having a set of VAs allows for different
calls to be routed to a VA specialized to respond to user queries
in a given knowledge domain. This may be particularly useful when
the business is large and provides a wide range of goods and
services. In one embodiment, one of the VAs engages in a
conversation with a user. As part of that conversation the user
submits a series of questions 110, and the VA assigned to the
conversation, in response to each user question 110, proposes an
answer. In one or more embodiments, the VAs may be local to the
hybrid conversation manager 100, or, for example, they may be in
the cloud, and thus communications links 123 and 130 are links that
run through a data communications network. Similarly, links 110 and
111 may be, and generally are, through a data communications
network, where a user accesses the business' website to carry on a
chat. Finally, hybrid conversation manager 100 may also be remote,
for example, from one or more of HAs 150, who may also communicate
with the hybrid conversation manager 100 over a data communications
network.
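The routing of calls to a domain-specialized VA described above can be sketched in simplified form. This is an illustrative example only; the domain names, keywords, and function names are assumptions, not part of the application:

```python
# Hypothetical keyword-based routing of a user question to one of a
# set of domain-specialized VAs; a production system might instead use
# an intent classifier. Domains and keywords are illustrative.
VA_DOMAINS = {
    "billing": ["invoice", "payment", "refund"],
    "hardware": ["device", "install", "cable"],
}

def route_to_va(question, default_va="general"):
    """Return the name of the first VA whose keyword list matches the
    question, or a default general-purpose VA if none match."""
    text = question.lower()
    for va_name, keywords in VA_DOMAINS.items():
        if any(keyword in text for keyword in keywords):
            return va_name
    return default_va
```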
[0021] Continuing with reference to FIG. 1, the set of VAs 130 are
coupled to intelligent decision maker (IDM) 141. IDM 141
continuously monitors, through conversation evaluator 143, each
conversation between a user and a VA 130. Moreover, the chat logs
147 of all VA conversations, originally generated and stored within
each VA 130, are saved in a memory 145 of IDM 141, described more
fully below. In embodiments, saving the chat logs 147 allows for
the extraction of new conversation paths and examples, which may,
in some embodiments, be used to better train the VAs and thereby
improve their conversational capabilities. Additionally, in
embodiments, the chat logs 147 may also be used to train models
144, also described below, which the conversation evaluator 143
uses in referring, or deciding to refer, a proposed VA answer to an
HA for review.
[0022] As shown in FIG. 1, IDM 141 includes conversation evaluator
143 and memory 145. Conversation evaluator 143 includes models 144,
trainer 144A, human assistant interface (HAI) 148, and timer 149.
These elements of IDM 141 are next described.
[0023] Conversation evaluator 143 analyzes each conversation
between a VA and a user, in the context of the conversation, using
a hybrid of statistical and rule-based models 144. In one or more
embodiments, these models may be specified by, for example, an
administrator. For example, there may be a rule that stipulates
that a VA issuing three "I don't know" responses in a row in any
conversation requires an HA to review the response. This is because such repetition likely indicates that the VA is out of its depth, and lacks sufficient knowledge to continue to meaningfully respond to the user's queries. Thus, for example, in
that example scenario, IDM 141 by virtue of monitoring the
conversation is aware that two "I don't knows" have already been
issued. The third "I don't know" then appears as the VA's next
proposed answer. Upon seeing this next proposed answer, IDM 141,
following the rule, sends the conversation to an HA for review.
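The rule in this example can be sketched as a simple streak counter over the VA's responses. This is a minimal illustration, not the application's implementation; the names FALLBACK_TEXT and needs_review are assumptions:

```python
# Flag the proposed answer for HA review once it would be the Nth
# consecutive "I don't know" response (N = 3 in the rule above).
FALLBACK_TEXT = "i don't know"

def needs_review(recent_answers, proposed_answer, threshold=3):
    """Return True if the proposed answer would extend a run of
    consecutive fallback responses to the configured threshold."""
    streak = 0
    for answer in reversed(list(recent_answers) + [proposed_answer]):
        if FALLBACK_TEXT in answer.lower():
            streak += 1
        else:
            break
    return streak >= threshold
```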
[0024] Another similar example may be one that defies being expressed in a single rule, but where, if the conversation is properly analyzed, it is nonetheless clear that an HA should be flagged. This
may occur, for example, when the VA uses a semantically similar
term to "I don't know" and, in context of the conversation thus
far, it is also clear that the VA lacks sufficient knowledge to
adequately respond to the user's query. This latter example is
illustrated in FIG. 3, described in detail below. In this latter
example, it is hard to craft a hard and fast rule to capture the VA
coming very close to the edge of its knowledge base. Thus, in some
embodiments, models 144 may include one or more statistical models,
such as, for example, machine learning classification models that
use deep neural networks or support vector machines, that are
trained to predict when a VA is no longer able to handle a
conversation, and to flag an HA for review. In one or more
embodiments, these models may be trained, at least in part, by
trainer 144A.
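A statistical model of the kind described above can be sketched, in a heavily simplified form, as a linear classifier over hand-crafted conversation features; in practice a trained deep neural network or support vector machine would take its place. The feature names, weights, and threshold below are assumptions for illustration:

```python
import math

# Illustrative stand-in for models 144: a logistic scorer over a few
# conversation features, predicting whether an HA should review.
WEIGHTS = {"fallback_ratio": 3.0, "repeated_user_question": 1.5, "negative_sentiment": 2.0}
BIAS = -2.5

def review_probability(features):
    """Probability (via the logistic function) that the VA can no
    longer handle the conversation."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

def flag_for_ha(features, threshold=0.5):
    """Flag the proposed answer for HA review above the threshold."""
    return review_probability(features) >= threshold
```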
[0025] In one or more embodiments, in making its decision, IDM 141
may also take into account a user's priority determined from a user
configuration or user profile 146, as well as an urgency determined
from the quality of the conversation itself. For example, different
types of users may contact a VA for information. Some of these are
actual customers, some are important customers, and some are not
yet customers. Thus, in one or more embodiments, as a user chats
with the VA, for example VA 131, IDM 141 continuously analyzes the
chat and develops a user profile 146, including any previous chats
with that user, which, in one or more embodiments, are stored in
chat logs 147. Based on the user configuration, which may include,
for example, a user profile 146 and be determined in part by
business rules, the user is assigned a priority. Some examples of
priorities may include "high likelihood of conversion to a paying
customer", "premium customer", or "a customer in an executive
role", etc., each of which would have a higher priority than a
regular customer, or than someone "just looking."
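The priority assignment described in this paragraph might be expressed as a set of business rules over the user profile. The profile fields, rule conditions, and score values here are illustrative assumptions, not from the application:

```python
# Hypothetical business rules mapping a user profile 146 to a priority.
# A score of 0 corresponds to a casual visitor ("just looking").
PRIORITY_RULES = [
    (lambda p: p.get("role") == "executive", 3),
    (lambda p: p.get("tier") == "premium", 3),
    (lambda p: p.get("conversion_likelihood", 0.0) > 0.8, 2),
    (lambda p: p.get("is_customer", False), 1),
]

def assign_priority(profile):
    """Return the highest priority granted by any matching rule."""
    scores = [score for rule, score in PRIORITY_RULES if rule(profile)]
    return max(scores, default=0)
```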
[0026] Additionally, in one or more embodiments, an urgency rating
and summary is also dynamically created for the conversation. This
is so that an on-call HA may prioritize their intervention into a
VA's conversation as a function of, for example, complexity of user
inquiries, user complaints, or, for example, the VA's unfamiliarity
with the technology being discussed. In one or more embodiments,
the priority score, and the urgency rating and summary provide the
IDM 141 with an intelligent assessment of the user in the context
of the business configuration to determine if there is a need to
handover the conversation to an HA for review.
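The urgency rating and handover decision described above can be sketched as follows. The weighting of the three signals, and the combination of urgency with priority, are assumptions for the sketch:

```python
# Illustrative urgency rating: rises with inquiry complexity, user
# complaints, and the VA's unfamiliarity with the topic (each signal
# normalized to 0..1; complaints capped at three).
def urgency_rating(complexity, complaint_count, topic_unfamiliarity):
    raw = (0.4 * complexity
           + 0.2 * min(complaint_count, 3) / 3
           + 0.4 * topic_unfamiliarity)
    return min(1.0, raw)

def should_hand_over(priority, urgency, priority_weight=0.1):
    """Combine priority and urgency into a simple handover decision."""
    return urgency + priority_weight * priority >= 0.7
```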
[0027] In one or more embodiments, once the IDM 141 determines that
the VA's proposed response requires further review, it flags the
response to an HA 150 for further evaluation. IDM 141 is thus
coupled to HA 1 151 and HA 2 152 (in the example of FIG. 1; in
embodiments, there may be more or fewer HAs in the set of available
HAs 150) through HAI 148, which sends and receives messages and
data to and from HAs 150 over communication links 134. As noted
above, the HAs may be remote from the IDM 141, and thus
communications links 134 and 135 may be over a data communications
network.
[0028] In one or more embodiments, IDM 141 sends the proposed VA
response, with the context of recent parts of the conversation
(e.g., the summary), a user priority and an urgency, under a
response timer. The timer is managed by timer 149 on IDM 141's
side. In addition to sending the proposed VA response to the HA,
IDM 141, at the same time, sends a message to the user through user
interface 120 that additional time is required to process a
response to the user's last question, so that the user is not left
waiting and feeling ignored or underserved. The HA, for example HA
1 151 or HA 2 152, then reviews the response under the response
timer, and makes a decision among three possible actions. These
include, for example, to deliver the proposed response in its
original form, to deliver a modified response, or to take over the
conversation from the VA.
[0029] In the event that the HA cannot submit a response under the
timer deadline, then IDM 141 will trigger a second timer, again
managed by timer 149, which gives the HA the option to ask for still more time while continuing to work on the response (a delay that will be relayed to the user), to take over the conversation completely, or to submit the original response as proposed by the
VA. In one or more embodiments, if the second timer expires before
the HA responds, the original VA proposed response is sent by IDM
141 to the user, and the interaction between IDM 141 and the HA
logged in a chat log for the conversation, such as, for example, in
chat logs 147.
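The two-timer review flow of the preceding two paragraphs can be sketched as follows, with the HA modeled as a callback that returns a decision tuple or None on timeout. All names here are illustrative assumptions:

```python
# Sketch of the first-timer / second-timer review flow: if the HA makes
# no decision under either timer, fall back to the VA's original answer
# and log the interaction.
def review_with_timers(proposed_answer, ask_ha, log):
    """ask_ha(answer, timer=...) returns a decision tuple or None."""
    decision = ask_ha(proposed_answer, timer="first")
    if decision is None:
        # First timer expired: second timer lets the HA ask for more
        # time, take over the conversation, or approve the original.
        decision = ask_ha(proposed_answer, timer="second")
    if decision is None:
        # Second timer also expired: send the VA's original answer.
        log("HA review timed out; original answer sent")
        return ("send", proposed_answer)
    return decision
```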
[0030] However, if the HA can make a decision within either the
first timer or the second timer, then, in one or more embodiments,
whether the HA approves the proposed response as is or modifies it, that data is sent back to IDM 141, via HAI
148, across communication links 134, for forwarding the chosen
response to the user, through user interface 120. If, however, the
HA decides to take over the conversation completely from the VA,
then, in one or more embodiments, there are two options. A first
option is to have the HA communicate directly with the user, via
user interface 120, as shown by dashed communications link 135A,
or, for example, the messaging can continue to run through IDM 141,
through dashed communications link 135. In the latter case the HA's
side of the conversation can be logged, for example in chat logs
147, and trainer 144A can later be used to train one or both of the
VAs 130 and models 144 based on the log.
[0031] FIG. 2 is a block diagram illustrating a System Node 210
configured to provide hybrid conversation management according to
one embodiment disclosed herein. System Node 210 is functionally
equivalent to the hybrid conversation manager 100 schematically
depicted in FIG. 1, but, for ease of illustration, without showing
in FIG. 2 all of the detail, and all of the various internal (or
external) communications pathways, depicted in FIG. 1.
[0032] In the illustrated embodiment, the system node 210 includes
a processor 210, memory 215, storage 220, and a network interface
225. In the illustrated embodiment, the processor 210 retrieves and
executes programming instructions stored in memory 215, as well as
stores and retrieves application data residing in storage 220. The
processor 210 is generally representative of a single CPU, multiple
CPUs, a single CPU having multiple processing cores, and the like.
The memory 215 is generally included to be representative of a
random access memory. Storage 220 may be disk drives or flash-based
storage devices, and may include fixed and/or removable storage
devices, such as fixed disk drives, removable memory cards, or
optical storage, network attached storage (NAS), or storage area
network (SAN). Storage 220 may include one or more databases, including IASPs. Via the network interface 225, the System Node 210
can be communicatively coupled with one or more other devices and
components, such as cloud servers, other System Nodes 210,
monitoring nodes, storage nodes, and the like.
[0033] In the illustrated embodiment, Storage 220 includes a set of
objects 221. Although depicted as residing in Storage 220, in
embodiments, the objects 221 may reside in any suitable location.
In embodiments, the Objects 221 are generally representative of any
data (e.g., application data, saved files, databases, training data
and the like) that is maintained and/or operated on by the system
node 210. Objects 221 may include user profiles of users of the
conversation, chat logs of one or more of the conversations handled
through the hybrid conversation management system, as well as
summaries of conversations forwarded to the HAs for evaluating a
proposed VA response, which may include a priority and an urgency
calculated for a given user in a given conversation, all as
described above with reference to FIG. 1.
[0034] Additionally, objects 221 may further include one or more
models, including rule-based as well as multivariate statistical
models, or the like, which are trained to, and then used to,
predict whether, given the priority of a user and the configuration
and summary of a chat, a HA should be flagged for further review of
a proposed answer by a VA to the pending query of a user in a
current conversation. Thus, such models may include one or more
statistical models, such as, for example, machine learning
classification models that use deep neural networks or support
vector machines, that are trained to predict when a VA is no longer
able to handle a conversation, and to flag an HA for review.
Objects 221 may still further include a set of training data used
to train one or more of the predictive models, such as, for
example, may be generated by a training data generator component
245 of hybrid conversation management application 230, as described
more fully below with reference to FIG. 3. As illustrated, the
memory 215 includes a hybrid conversation management application
230. Although depicted as software in memory 215, in embodiments,
the functionality of the hybrid conversation management application
230 can be implemented in any location using hardware, software,
firmware, or a combination of hardware, software and firmware.
Although not illustrated, the memory 215 may include any number of
other applications used to create and modify the objects 221 and
perform system tasks on the System Node 210.
[0035] As illustrated, the hybrid conversation management
application 230 includes a graphical user interface (GUI) component
235, a virtual assistant interface component 240, a conversation
evaluator component 241, a human assistant interface component 243,
a training data generation component 245, and, optionally, virtual
assistants 247. Although depicted as discrete components for
conceptual clarity, in embodiments, the operations and
functionality of the GUI component 235, the virtual assistant interface component 240, the conversation evaluator component 241, the human assistant interface component 243, the training data generation component 245, and the virtual assistants 247, if implemented in the
system node 210, may be combined, wholly or partially, or
distributed across any number of components. In an embodiment, the
hybrid conversation management application 230 is generally used to
analyze a proposed VA response to a user question in a conversation
managed by the VA and decide whether the response should be
forwarded to an HA for review and possible further action. In an
embodiment, the hybrid conversation management application 230 is
also used to train, via the training data generator component 245,
the models used in evaluating a proposed VA response, as described
above, to make the decision whether to forward the proposed
response and associated data to an HA for review and possible
further action.
[0036] In an embodiment, the GUI component 235 is used to provide
user interfaces through which the VAs (or HAs, if they take over a
conversation) communicate with users, so as to receive user
questions and provide responses to those questions. In some
embodiments, the GUI is, or is part of, a website maintained by the
business. In some embodiments, the GUI pops up as a user is
browsing certain internal pages of the business' website, according
to an algorithm that predicts when a user may want to ask a
question but may be unaware that a chat facility exists.
[0037] In the illustrated embodiment, the VA interface component
240 interfaces between the GUI component and the VAs, as well as
between the VAs and the conversation evaluator component 241. As
noted above, either of these interfaces maintained by the VA
interface component 240 may operate over a data communications
network, as the VAs may be remote from both the users and the
hybrid conversation management application 230. In the illustrated
embodiment, the conversation evaluator component 241 receives user
questions from the GUI component 235, along with recent chat
activity in the conversation and the current proposed answer from a
VA. Based on these inputs, together with a user configuration and a
user priority score that it dynamically generates, it decides
whether, in response to the VA's proposed answer to the user
question, to refer the conversation to an HA, and it then receives
a response from the HA regarding further action via the human
assistant interface component 243.
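The decision made by the conversation evaluator component 241 may be sketched as follows. This is a minimal illustration only: the class and function names, the configuration keys, and the weighted-sum scoring are assumptions standing in for the trained models the disclosure actually contemplates.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class EvaluationInput:
    """Inputs the conversation evaluator receives, per the text above."""
    user_question: str
    recent_activity: List[str]   # recent chat turns (e.g., a summary)
    proposed_answer: str         # the VA's current proposed answer
    user_config: dict            # configuration defined by user or business

def should_refer_to_ha(inp: EvaluationInput,
                       priority_score: float,
                       urgency_score: float,
                       threshold: float = 1.0) -> bool:
    """Decide whether the proposed VA answer needs HA review.

    A weighted combination of the dynamically generated priority and
    urgency scores stands in for the trained model described in the
    text; the weights, keys, and threshold are all assumptions.
    """
    weight_p = inp.user_config.get("priority_weight", 0.5)
    weight_u = inp.user_config.get("urgency_weight", 0.5)
    return weight_p * priority_score + weight_u * urgency_score >= threshold
```

In this sketch a "Yes" (True) result routes the proposed answer to an HA, and a "No" (False) result lets it pass through to the user unchanged.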
[0038] In the illustrated embodiment, the models used by the
conversation evaluator component 241 to make its decision may be
trained on training data generated by the training data generator
component 245. As noted above, the training data may include user
profiles and chat logs and, more importantly, chat logs for
conversations where the proposed VA response was referred to an HA
and the HA either modified the response or took over the
conversation. In the latter case, for example, the training data
further includes the HA's interaction with a user once that HA took
over the conversation from the VA. As further noted above, the
training data may also be used to train the VAs so as to improve
their knowledge and their conversational abilities.
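One hypothetical way to package such a conversation into a training record is sketched below; the function name, field names, and record layout are all assumptions for illustration.

```python
from typing import List, Optional

def build_training_example(user_profile: dict,
                           chat_log: List[dict],
                           referred_to_ha: bool,
                           ha_action: Optional[str] = None) -> dict:
    """Package one conversation into a single training record.

    Conversations where the proposed VA response was referred to an
    HA, and the HA modified the response or took over, are the most
    informative, so the record keeps the HA action and the HA's own
    turns.
    """
    return {
        "profile": user_profile,        # the user profile
        "turns": chat_log,              # the full chat log
        "referred": referred_to_ha,     # was an HA involved?
        "ha_action": ha_action,         # e.g. "modified", "took_over", or None
        # when the HA took over, its interaction with the user is kept
        # separately, since it can also be used to train the VAs
        "ha_turns": [t for t in chat_log if t.get("speaker") == "HA"],
    }
```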
[0039] Finally, with reference to FIG. 2, in some embodiments,
System Node 210 may communicate with both users and cloud servers,
in which cloud based versions of the models may be stored, or in
which VAs may be hosted, via Network Interface 225.
[0040] To better illustrate the context of one or more embodiments
of the present disclosure, FIG. 3 illustrates a snippet of an
example hybrid VA-HA conversation, according to one embodiment
disclosed herein. In the example of FIG. 3, initially a VA responds
to a user's question. However, given the VA's proposed response to
the second question posed by the user, an example IDM forwards the
proposed response to a HA. Upon reviewing the response and the
conversation as a whole, the HA takes over the conversation. In
FIG. 3 the user's questions and decisions by the IDM and the HA are
indexed using even numbers, and the answers, whether provided by
the VA or the HA, are indexed using odd numbers, and are provided
in rounded corner boxes. Moreover, of these boxes, as indicated in
the key to the figure, solid lined boxes denote the VA's answers
and dashed lined boxes indicate the HA's answers. Finally, arrows
connect the various boxes to indicate the direction of
conversational flow.
[0041] With reference to FIG. 3, the snippet of the conversation as
shown begins with Question1 at 306, posed by a user: "I am
remodeling a bathroom, and I need an extender for your Model XYZ123
shower valve stub." To this, VA1, which is handling this
conversation, responds with Answer1 at 315: "Let me check. How much
extension from the normal stub length do you need?" The user
follows this up with Question2 at 308: "I am not sure; what
increments does it come in?" The VA seems unable to fully answer
this question, and proposes the following answer at 317: "I am sorry,
madam. I see that this model number is now discontinued, so we do
not have the extenders in stock. Let me check further." Prior to
the proposed Answer at 317 being sent to the user, it is at this
point in the conversation, at 312, that the IDM decides, given the
recent activity in the conversation (e.g., a summary), the priority
of the customer (e.g., she is an actual customer who has made a
purchase of one of the business' valves), and the urgency of the
conversation (e.g., the VA is getting into a very specific,
somewhat uncommon area it does not have knowledge of), to hand the
conversation to an HA for further review. At the same time, the IDM
sends a message to the user that there will be a short delay.
[0042] It is here noteworthy that no specific rule was broken by
the VA that triggered the scrutiny of the IDM. Rather, a few key
inputs were at play at proposed Answer2A at 317, and these were
used by one of the predictive models that the IDM had available to
it (such as, for example, models 144 of FIG. 1), to sense a
potential problem in the conversation. First, the VA used the term
"let me check" at Answer1 315, followed by a more emphatic version
of the statement, "let me check further" in its proposed Answer2A
at 317. This indicates an escalating need for information that is
not immediately available to the VA. Moreover, the subject of the
conversation is Model XYZ123, which the proposed Answer2A notes is
discontinued. Here the IDM knows that while a VA for the business
has access to a current stock list, and can also tell if a given
product has been discontinued via a notation on that list, the VA's
knowledge stops there, as it does not have access to ways to find
replacement parts or accessories for a discontinued product. The
business simply did not think it a useful consumption of resources
to train the VAs with this level of infrequently sought detail.
Thus, the IDM knew that if the user pressed for more information
along that line of questioning, it was highly likely that she would
become dissatisfied and have a poor customer service experience.
Accordingly, at 312, the IDM refers proposed Answer2A to an HA for review.
Because there may be a perceptible delay on the user's end, at 312
the IDM also sends the user a message that there will be a short
delay.
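A simple heuristic version of this signal detection might look as follows; the phrase list and function name are assumptions, and in practice a predictive model, rather than pattern matching, would sense the potential problem.

```python
import re
from typing import List

# assumed lack-of-knowledge phrases; a deployed system would use a model
CHECK_PATTERNS = [r"\blet me check\b", r"\bi don'?t know\b"]

def knowledge_gap_signals(va_answers: List[str]) -> int:
    """Count VA answers containing a lack-of-knowledge phrase.

    An escalation such as "let me check" followed by "let me check
    further" shows up as two consecutive hits, which can be treated
    as a warning sign of information the VA does not have.
    """
    hits = 0
    for answer in va_answers:
        text = answer.lower()
        if any(re.search(p, text) for p in CHECK_PATTERNS):
            hits += 1
    return hits
```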
[0043] Continuing with reference to FIG. 3, at 314, the HA, having
reviewed the proposed response in the context of the conversation
thus far, and thus verifying the automated decision made by the
IDM, decides that the VA will not know much, if anything, about
replacement parts for discontinued items, such as the requested
Model XYZ123 extenders, or how to obtain them. Accordingly, as
shown at 314, the HA decides to take over the conversation from the
VA, so informs the IDM, and the VA, now advised by the IDM, ceases
to respond to the user's questions. It is noted that in FIG. 3,
once the proposed Answer2A was submitted to the HA, the arrows that
indicate the direction of conversational flow change from solid
arrows to slightly thicker dashed arrows, as shown. They remain of
the thicker dashed type for the remainder of the chat, inasmuch as
the entire chat is now in the HA's domain through to its end at
Answer3 at 321.
[0044] Thus, at 319, the HA provides the actual modified Answer2B
to the user's Question2 shown at 308. The HA tells the user, at
Answer2B: "Thank you for waiting, madam. Model XYZ123 is
discontinued. There is a special website through which we sell hard
to find replacement parts, including extenders, for as long as we
have them. It is not visible on our main site, so please log on now
and I will walk you through it." Off-screen (not shown), the HA
guides the user through a purchase of the extenders for the
discontinued valve. Following Answer2B, and the subsequent purchase
by the user, the user has a final question, Question3 at 316, which
is actually a closing statement seeking a confirmation: "Thank you
for all of your help. I see the extenders will be sent out
tomorrow." HA responds with Answer3 at 321, which includes a final
bit of advice: "Glad we could be of service. If your plumber feels
you may need more of these, better get some extras now, they are no
longer being manufactured, as I indicated." As may be surmised,
this latter piece of advice is not likely to be offered by a VA,
which does not have a full command of the relatively arcane subject
matter of this chat. In embodiments, this entire snippet may be
saved and highlighted to use in subsequent training of an IDM model
or models.
[0045] FIG. 4 depicts a process flow diagram of an example hybrid
conversation management method, according to one embodiment
disclosed herein. Method 400 includes blocks 410 through 448. In
alternate embodiments, method 400 may have more, or fewer, blocks.
In one embodiment, method 400 may be performed, for example, by
hybrid conversation manager 100 of FIG. 1, in particular
intelligent assessment 141, or, for example, by system node 210 of
FIG. 2, and in particular, hybrid conversation management
application 230.
[0046] Continuing with reference to FIG. 4, method 400 begins at
block 410, where a user's question and a VA's proposed answer is
received. For example, the question may be Question2 of the user as
shown at block 308 of FIG. 3, and the user may be a customer of the
business, a plumbing manufacturer, who needs an accessory part for
a faucet that she has previously purchased.
[0047] From block 410 method 400 proceeds to block 420, where the
proposed answer is evaluated in the context of the conversation
(e.g., summary, priority of user and urgency). For example, again
with reference to FIG. 3, Question1 and its response Answer1 may be
compared with Question2 and proposed Answer2A to detect any
escalating lack of knowledge on the part of the VA, as described
above.
[0048] From block 420, method 400 proceeds to query block 425,
where it is determined if the VA's proposed answer requires further
review. If a "No" is returned at query block 425, and the VA's
proposed answer is not predicted by an example IDM, using the
models available to it, and a calculated priority score and
urgency, to require further review by an HA, then method 400 moves
to block 427, where the VA's original proposed response is output
to the user, and method 400 then ends at block 428.
[0049] If, however, the return to query block 425 is a "Yes", and
thus the IDM predicts that there is enough uncertainty in the VA's
knowledge level to provide a satisfactory answer to the user, then
method 400 proceeds to block 430, where the VA's proposed answer is
sent to an HA with a time limit to respond. In some embodiments, at
this juncture a message may also be sent to the user advising that
there will be a slight delay in providing them with an answer, as
shown, for example, in FIG. 3 at 312.
[0050] From block 430, method 400 proceeds to query block 435,
where it is determined if the HA has responded within the time
limit provided to it at block 430. In real world embodiments there
may actually be multiple time limits, as described above, and thus,
as an example, the IDM will always offer a second time limit to an
HA at this juncture, and may be requested to provide a third time
limit, if the HA is still working on preparing an answer. However,
for ease of description, all of the available time limits, which
may be three, are treated as a single overall time limit in FIG. 4,
to illustrate what happens when the last time limit has expired and
the HA has not yet responded. In one or more embodiments, this is a
useful backstop against human error on the part of the HA, or, for
example, against a communications path going down so that a remote
HA cannot provide any response to an example IDM, which may reside
in the cloud, for example.
[0051] Thus, if the return to query block 435 is a "Yes", and thus
the HA has responded prior to the expiry of the time limit, then method
400 moves to block 437, where the HA's directive is followed,
whatever it may be, such as, for example, to output a modified
response to the user, and method 400 then ends at block 438. If,
however, the return to query block 435 is a "No", and thus the HA
has not responded prior to the expiry of the time limit, then
method 400 moves to block 440, where, as a default, in the absence of
any decision by the HA, the VA's original proposed answer is output
to the user, and the interaction with the HA at blocks 425, 430,
435 and 440 is logged and made a part of the chat log for this
conversation. Method 400 then ends at block 448.
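The single round described by method 400 can be sketched as follows, under assumed interfaces: `needs_review` stands in for the IDM's model-based decision at query block 425, and `ha_review` returns the HA's directive or None if the HA misses the time limit. Both callables, their names, and the three escalating time limits are assumptions for illustration.

```python
from typing import Callable, Optional

def hybrid_round(question: str,
                 proposed_answer: str,
                 needs_review: Callable[[str, str], bool],
                 ha_review: Callable[[str, float], Optional[str]],
                 time_limits: tuple = (30.0, 30.0, 60.0)) -> str:
    """One round of method 400 (blocks 410 through 448)."""
    if not needs_review(question, proposed_answer):
        return proposed_answer                   # block 427: output VA's answer
    overall_limit = sum(time_limits)             # the up-to-three limits, combined
    directive = ha_review(proposed_answer, overall_limit)  # block 430
    if directive is None:                        # query block 435: HA timed out
        return proposed_answer                   # block 440: default to VA's answer
    return directive                             # block 437: follow HA's directive
```

Note the timeout backstop: if the HA never responds, the user still receives the VA's original proposed answer rather than silence.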
[0052] It is noted that in the above description, at each of blocks
428, 438 and 448, method 400 ended. That is because method 400 only
describes a single round of the conversation, based on a single
proposed VA's answer for which a decision to refer or not is made.
In other versions of the method 400, at each of blocks 428, 438 and
448 process flow may return to block 410, for continued monitoring
of the next proposed answer in the chat. Such a continued
monitoring example is shown in FIG. 5, next described.
[0053] FIG. 5 depicts a process flow diagram of an alternate
example hybrid conversation management method, according to one
embodiment disclosed herein, where an IDM, for example, decides to
refer a proposed answer to an HA for review, and various possible
indications are received from the reviewing HA. Method 500 includes
blocks 510 through 571. In alternate embodiments, method 500 may
have more, or fewer, blocks. In one embodiment, method 500 may be
performed, for example, by hybrid conversation manager 100 of FIG.
1, or, for example, by IDM 141 of FIG. 1, or, for example, system
node 210 of FIG. 2.
[0054] Continuing with reference to FIG. 5, method 500 begins at
block 510, where a user's online conversation with a VA is
monitored, including a last proposed response to the user by the
VA. From block 510 method 500 proceeds to block 520, where the
priority of the conversation is determined from a configuration
defined by the user or a business hosting the VA. For example, the
priority may be expressed as a score on a pre-defined scale.
[0055] From block 520, method 500 proceeds to block 530, where an
urgency of the conversation is determined from a quality of the
conversation. For example, as described above with reference to
FIG. 3, if the VA repeats a phrase indicating lack of knowledge two
or more times, with additional emphasis on the second utterance of
the phrase, such as "let me check" and then "let me check further",
or, for example, as described above, the VA responds to three
sequential user questions with "I don't know", the urgency is
rather high, and a cause for potential concern. Or, for example,
even if there are not three sequential expressions of lack of
knowledge but there are N responses containing "I don't know" (or
its semantic equivalent) within, say, the last N+K answers output
by the VA, where K is a pre-defined integer specific to the VA,
here as well the urgency is rather high and a cause for potential
concern.
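This N-within-N+K urgency heuristic can be sketched as follows; the phrase list and the default values of N and K are assumptions, with K being the pre-defined, VA-specific integer from the text.

```python
from typing import List

# assumed phrases treated as semantic equivalents of "I don't know"
IDK_PHRASES = ("i don't know", "let me check")

def urgency_is_high(recent_answers: List[str], n: int = 3, k: int = 2) -> bool:
    """Return True if at least n of the VA's last n + k answers
    contain a lack-of-knowledge phrase."""
    window = recent_answers[-(n + k):]           # the last n + k VA answers
    hits = sum(1 for a in window
               if any(p in a.lower() for p in IDK_PHRASES))
    return hits >= n
```

Unlike a strict "three in a row" rule, this windowed count also flags conversations where the lack-of-knowledge responses are interleaved with ordinary answers.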
[0056] From block 530, method 500 proceeds to query block 540,
where it is determined, based at least in part on the user's
priority and the conversation's urgency, whether to flag the VA's
last proposed response for further evaluation by an HA. If the
return at query block 540 is a "No" then the VA's proposed response
is fine, and there is no reason to involve an HA. In that case
method 500 proceeds all the way to block 566, described below,
where the VA's proposed response is output to the user without any
HA intervention. If, on the other hand, the return at query block
540 is a "Yes" then method 500 proceeds to block 550, where an
indication is received from the HA regarding further action in the
conversation. As noted above, the further action may be to output
the VA's original answer, output a modified answer, or have the HA
completely take over the conversation with this user. All three
possibilities are tested for and appropriate actions taken, as
shown in blocks 557, 565 and 566. These are next described.
[0057] From block 550, method 500 proceeds to query block 555,
where it is determined if, in its response, the HA decided to take
over the conversation. If the return to query block 555 is a "Yes",
then method 500 proceeds to block 557, where the VA is directed to
stop responding to the user's questions in this chat. Because the
HA has taken over, there are no additional VA proposed responses,
and method 500 ends at block 559. That does not mean that the IDM
ceases to log the interaction between the HA and the user; as noted
above, it just means that the VA is no longer involved.
[0058] If, however, a "No" is returned at query block 555, and the
HA has not taken over the conversation, then the VA remains
involved, it being a matter of whether the VA's original proposed
response is sent to the user, or a modified one. Thus, method 500
proceeds to a second query block, namely query block 560, where it
is determined if the HA provided a modified answer. If the return
to query block 560 is a "Yes", then method 500 proceeds to block
565, where the modified answer is output to the user. If,
however, the return to query block 560 is a "No", then method 500
proceeds to block 566, where the VA's original answer is output to
the user. Method 500 then proceeds in parallel from blocks 565 and
566 to a final query block 570, where it is determined if the
conversation is over. If it is, and the return to final query block
570 is a "Yes", then method 500 proceeds to block 571, and method
500 ends. If, however, the return to query block 570 is a "No", and
the conversation is not over, then the user has additional
questions. Because at query block 555 the HA did not take over the
conversation, the VA is still handling it, and process flow returns
to the beginning, at block 510, where method 500 starts anew.
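The HA-indication branching of query blocks 555 and 560 can be sketched as follows, assuming (purely for illustration) that the HA's indication arrives encoded as a small dictionary.

```python
from typing import Optional, Tuple

def handle_ha_indication(indication: dict,
                         va_answer: str) -> Tuple[Optional[str], bool]:
    """Dispatch the HA's indication (query blocks 555 and 560).

    Returns (text to output to the user, whether the VA stays
    active). The dictionary keys are an assumed encoding of the
    three possible HA actions described above.
    """
    if indication.get("take_over"):          # query block 555: HA takes over
        return None, False                   # block 557: VA stops responding
    modified = indication.get("modified_answer")
    if modified is not None:                 # query block 560: modified answer
        return modified, True                # block 565: output modified answer
    return va_answer, True                   # block 566: output original answer
```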
[0059] The descriptions of the various embodiments of the present
invention have been presented for purposes of illustration, but are
not intended to be exhaustive or limited to the embodiments
disclosed. Many modifications and variations will be apparent to
those of ordinary skill in the art without departing from the scope
and spirit of the described embodiments. The terminology used
herein was chosen to best explain the principles of the
embodiments, the practical application or technical improvement
over technologies found in the marketplace, or to enable others of
ordinary skill in the art to understand the embodiments disclosed
herein.
[0060] In the present disclosure, reference is made to one or more
embodiments. However, the scope of the present disclosure is not
limited to specific described embodiments. Instead, any combination
of the following features and elements, whether related to
different embodiments or not, is contemplated to implement and
practice contemplated embodiments. Furthermore, although
embodiments disclosed herein may achieve advantages over other
possible solutions or over the prior art, whether or not a
particular advantage is achieved by a given embodiment is not
limiting of the scope of the present disclosure. Thus, the
following aspects, features, embodiments and advantages are merely
illustrative and are not considered elements or limitations of the
appended claims except where explicitly recited in a claim(s).
Likewise, reference to "the invention" shall not be construed as a
generalization of any inventive subject matter disclosed herein and
shall not be considered to be an element or limitation of the
appended claims except where explicitly recited in a claim(s).
[0061] Aspects of the present invention may take the form of an
entirely hardware embodiment, an entirely software embodiment
(including firmware, resident software, microcode, etc.) or an
embodiment combining software and hardware aspects that may all
generally be referred to herein as a "circuit," "module" or
"system."
[0062] The present invention may be a system, a method, and/or a
computer program product. The computer program product may include
a computer readable storage medium (or media) having computer
readable program instructions thereon for causing a processor to
carry out aspects of the present invention.
[0063] The computer readable storage medium can be a tangible
device that can retain and store instructions for use by an
instruction execution device. The computer readable storage medium
may be, for example, but is not limited to, an electronic storage
device, a magnetic storage device, an optical storage device, an
electromagnetic storage device, a semiconductor storage device, or
any suitable combination of the foregoing. A non-exhaustive list of
more specific examples of the computer readable storage medium
includes the following: a portable computer diskette, a hard disk,
a random access memory (RAM), a read-only memory (ROM), an erasable
programmable read-only memory (EPROM or Flash memory), a static
random access memory (SRAM), a portable compact disc read-only
memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a
floppy disk, a mechanically encoded device such as punch-cards or
raised structures in a groove having instructions recorded thereon,
and any suitable combination of the foregoing. A computer readable
storage medium, as used herein, is not to be construed as being
transitory signals per se, such as radio waves or other freely
propagating electromagnetic waves, electromagnetic waves
propagating through a waveguide or other transmission media (e.g.,
light pulses passing through a fiber-optic cable), or electrical
signals transmitted through a wire.
[0064] Computer readable program instructions described herein can
be downloaded to respective computing/processing devices from a
computer readable storage medium or to an external computer or
external storage device via a network, for example, the Internet, a
local area network, a wide area network and/or a wireless network.
The network may comprise copper transmission cables, optical
transmission fibers, wireless transmission, routers, firewalls,
switches, gateway computers and/or edge servers. A network adapter
card or network interface in each computing/processing device
receives computer readable program instructions from the network
and forwards the computer readable program instructions for storage
in a computer readable storage medium within the respective
computing/processing device.
[0065] Computer readable program instructions for carrying out
operations of the present invention may be assembler instructions,
instruction-set-architecture (ISA) instructions, machine
instructions, machine dependent instructions, microcode, firmware
instructions, state-setting data, or either source code or object
code written in any combination of one or more programming
languages, including an object oriented programming language such
as Smalltalk, C++ or the like, and conventional procedural
programming languages, such as the "C" programming language or
similar programming languages. The computer readable program
instructions may execute entirely on the user's computer, partly on
the user's computer, as a stand-alone software package, partly on
the user's computer and partly on a remote computer or entirely on
the remote computer or server. In the latter scenario, the remote
computer may be connected to the user's computer through any type
of network, including a local area network (LAN) or a wide area
network (WAN), or the connection may be made to an external
computer (for example, through the Internet using an Internet
Service Provider). In some embodiments, electronic circuitry
including, for example, programmable logic circuitry,
field-programmable gate arrays (FPGA), or programmable logic arrays
(PLA) may execute the computer readable program instructions by
utilizing state information of the computer readable program
instructions to personalize the electronic circuitry, in order to
perform aspects of the present invention.
[0066] Aspects of the present invention are described herein with
reference to flowchart illustrations and/or block diagrams of
methods, apparatus (systems), and computer program products
according to embodiments of the invention. It will be understood
that each block of the flowchart illustrations and/or block
diagrams, and combinations of blocks in the flowchart illustrations
and/or block diagrams, can be implemented by computer readable
program instructions.
[0067] These computer readable program instructions may be provided
to a processor of a general purpose computer, special purpose
computer, or other programmable data processing apparatus to
produce a machine, such that the instructions, which execute via
the processor of the computer or other programmable data processing
apparatus, create means for implementing the functions/acts
specified in the flowchart and/or block diagram block or blocks.
These computer readable program instructions may also be stored in
a computer readable storage medium that can direct a computer, a
programmable data processing apparatus, and/or other devices to
function in a particular manner, such that the computer readable
storage medium having instructions stored therein comprises an
article of manufacture including instructions which implement
aspects of the function/act specified in the flowchart and/or block
diagram block or blocks.
[0068] The computer readable program instructions may also be
loaded onto a computer, other programmable data processing
apparatus, or other device to cause a series of operational steps
to be performed on the computer, other programmable apparatus or
other device to produce a computer implemented process, such that
the instructions which execute on the computer, other programmable
apparatus, or other device implement the functions/acts specified
in the flowchart and/or block diagram block or blocks.
[0069] The flowchart and block diagrams in the Figures illustrate
the architecture, functionality, and operation of possible
implementations of systems, methods, and computer program products
according to various embodiments of the present invention. In this
regard, each block in the flowchart or block diagrams may represent
a module, segment, or portion of instructions, which comprises one
or more executable instructions for implementing the specified
logical function(s). In some alternative implementations, the
functions noted in the block may occur out of the order noted in
the figures. For example, two blocks shown in succession may, in
fact, be executed substantially concurrently, or the blocks may
sometimes be executed in the reverse order, depending upon the
functionality involved. It will also be noted that each block of
the block diagrams and/or flowchart illustration, and combinations
of blocks in the block diagrams and/or flowchart illustration, can
be implemented by special purpose hardware-based systems that
perform the specified functions or acts or carry out combinations
of special purpose hardware and computer instructions.
[0070] Embodiments of the invention may be provided to end users
through a cloud computing infrastructure. Cloud computing generally
refers to the provision of scalable computing resources as a
service over a network. More formally, cloud computing may be
defined as a computing capability that provides an abstraction
between the computing resource and its underlying technical
architecture (e.g., servers, storage, networks), enabling
convenient, on-demand network access to a shared pool of
configurable computing resources that can be rapidly provisioned
and released with minimal management effort or service provider
interaction. Thus, cloud computing allows a user to access virtual
computing resources (e.g., storage, data, applications, and even
complete virtualized computing systems) in "the cloud," without
regard for the underlying physical systems (or locations of those
systems) used to provide the computing resources.
[0071] Typically, cloud computing resources are provided to a user
on a pay-per-use basis, where users are charged only for the
computing resources actually used (e.g. an amount of storage space
consumed by a user or a number of virtualized systems instantiated
by the user). A user can access any of the resources that reside in
the cloud at any time, and from anywhere across the Internet. In
the context of the present invention, a user may access a web-based
virtual assistant through a website, which is likely hosted in the
cloud. The website supports virtual chats, which, when necessary,
may seamlessly change to conversations where human assistants write
the chat responses. The HA may be at an operations center of the
business where the business' website is managed by an IT staff, but
the operations center may or may not be where the website or the
VAs are actually hosted, and the VAs may or may not run on the same
servers as the website. The HAs themselves are generally remote
from the operations center and the VAs. Moreover, in some
embodiments, the IDM may be remote from the business' website, and
may also be remote from the models accessed by the IDM to make
decisions, for example. Thus, some or all of the various elements
of an entire hybrid VA-HA conversation management
system as shown in FIG. 1 may each be remote from all of the
others, and thus each element connected to one or more of the other
elements over the cloud.
[0072] While the foregoing is directed to embodiments of the
present invention, other and further embodiments of the invention
may be devised without departing from the basic scope thereof, and
the scope thereof is determined by the claims that follow.
* * * * *