U.S. patent application number 15/083,079 was filed with the patent office on 2016-03-28 and published on 2017-03-09 as publication number 20170069325, for a system and method for providing words or phrases to be uttered by members of a crowd and processing the utterances in crowd-sourced campaigns to facilitate speech analysis.
This patent application is currently assigned to VOICEBOX TECHNOLOGIES CORPORATION. The applicant listed for this patent is VOICEBOX TECHNOLOGIES CORPORATION. Invention is credited to Daniela BRAGA, Ahmad Khamis ELSHENAWY, Michael KENNEWICK, Faraz ROMANI.
Application Number: 15/083,079
Publication Number: 20170069325
Family ID: 56083189
Publication Date: 2017-03-09
United States Patent Application 20170069325
Kind Code: A1
BRAGA; Daniela; et al.
March 9, 2017
SYSTEM AND METHOD FOR PROVIDING WORDS OR PHRASES TO BE UTTERED BY
MEMBERS OF A CROWD AND PROCESSING THE UTTERANCES IN CROWD-SOURCED
CAMPAIGNS TO FACILITATE SPEECH ANALYSIS
Abstract
Systems and methods of providing text related to utterances, and gathering voice data in response to the text, are provided herein. In various implementations, an identification token that identifies a first file for a voice data collection campaign and a second file for a session script may be received from a natural language processing training device. The first file and the second file may be used to configure a mobile application to display a sequence of screens, each of the sequence of screens containing text of at least one utterance specified in the voice data collection campaign. Voice data may be received from the natural language processing training device in response to user interaction with the text of the at least one utterance. The voice data and the text may be stored in a transcription library.
Inventors: BRAGA; Daniela (Bellevue, WA); ROMANI; Faraz (Renton, WA); ELSHENAWY; Ahmad Khamis (Lynnwood, WA); KENNEWICK; Michael (Bellevue, WA)

Applicant: VOICEBOX TECHNOLOGIES CORPORATION, Bellevue, WA, US
Assignee: VOICEBOX TECHNOLOGIES CORPORATION (Bellevue, WA)
Family ID: 56083189
Appl. No.: 15/083,079
Filed: March 28, 2016
Related U.S. Patent Documents

Application Number   Filing Date   Patent Number
14/846,925           Sep 7, 2015   9,361,887
15/083,079 (present application)
Current U.S. Class: 1/1
Current CPC Class: G06Q 10/10 (20130101); G10L 15/30 (20130101); G06F 3/0481 (20130101); G10L 15/063 (20130101); G10L 15/01 (20130101); G06F 3/0488 (20130101); G10L 17/22 (20130101); G06F 3/04842 (20130101); G10L 15/00 (20130101); H04W 4/21 (20180201); G06F 3/162 (20130101)
International Class: G10L 15/30 (20060101); G10L 15/06 (20060101); G10L 15/01 (20060101); G06F 3/16 (20060101); G06F 3/0488 (20060101); G06F 3/0484 (20060101); G06F 3/0481 (20060101); G10L 17/22 (20060101); H04W 4/20 (20060101)
Claims
1. A computer-implemented method of providing crowd-sourced
campaigns to facilitate speech analysis, the method being
implemented in a computer system having one or more physical
processors programmed with computer program instructions that, when
executed by the one or more physical processors, program the
computer system to perform the method, the method comprising:
receiving, by the computer system, from a natural language
processing training device, an identification token containing a
first portion and a second portion, the first portion identifying a
first file associated with a voice data collection campaign, and
the second portion identifying a second file associated with a
session script, the session script comprising one or more prompts
to be provided at a mobile application on the natural language
processing training device; identifying, by the computer system,
one or more audits based on the identification token; conducting,
by the computer system, the identified one or more audits based on
the identification token via the mobile application; receiving, by
the computer system, a plurality of audit responses from the mobile
application; determining, by the computer system, a number of
failed audit responses based on the received plurality of audit
responses; storing, by the computer system, the number of failed
audits in association with a device identifier that identifies the
natural language processing training device; determining, by the
computer system, whether the number of failed audits for the
natural language processing training device exceeds a failed audits
threshold; and responsive to a determination that the number of
failed audits exceeds a failed audits threshold, barring, by the
computer system, the natural language processing training device
from the voice data collection campaign.
2. The method of claim 1, wherein conducting the one or more audits
comprises providing gold standard questions, captions that are not
machine-readable, and/or audio that is not understandable to
machines.
3. The method of claim 1, the method further comprising: gathering,
by the computer system, a first filename corresponding to the first
file; gathering, by the computer system, a second filename
corresponding to the second file; and creating, by the computer
system, the identification token using the first filename and the
second filename.
4. The method of claim 1, wherein one or more of the first file and
the second file comprises a JavaScript Object Notation (JSON)
file.
5. The method of claim 1, wherein the session script identifies one
or more prompts, and wherein each prompt comprises text of at least
one utterance, the method further comprising: providing, by the
computer system, the one or more prompts to a user via the mobile
application on the natural language processing training device;
receiving, by the computer system, voice data from the natural
language processing training device in response to user interaction
with the prompt; and storing, by the computer system, the voice
data and the prompt in a transcription library.
6. The method of claim 5, wherein the at least one utterance
comprises one or more of a syllable, a word, a phrase, or a variant
thereof.
7. The method of claim 5, wherein the user interaction comprises a
selection of a touch-screen button instructing the mobile
application to record the voice data.
8. The method of claim 5, the method further comprising: obtaining,
by the computer system, a decibel level of the received voice data;
obtaining, by the computer system, campaign data based on the first
file, wherein the campaign data includes a specification of a
minimum decibel level; determining, by the computer system, whether
the decibel level for the received voice data exceeds the specified
minimum decibel level; responsive to a determination that the
decibel level exceeds the specified minimum decibel level, storing,
by the computer system, the voice data and the text of the at least
one utterance in the transcription library; and responsive to a
determination that the decibel level does not exceed the specified
minimum decibel level, not storing, by the computer system, the
voice data and the text of the at least one utterance in the
transcription library.
9. The method of claim 5, the method further comprising: obtaining,
by the computer system, a decibel level of the received voice data;
obtaining, by the computer system, campaign data based on the first
file, wherein the campaign data includes a specification of a
maximum decibel level; determining, by the computer system, whether
the decibel level for the received voice data exceeds the specified
maximum decibel level; responsive to a determination that the
decibel level does not exceed the specified maximum decibel level,
storing, by the computer system, the voice data and the text of the
at least one utterance in the transcription library; and responsive
to a determination that the decibel level exceeds the specified
maximum decibel level, not storing, by the computer system, the
voice data and the text of the at least one utterance in the
transcription library.
10. The method of claim 5, the method further comprising:
obtaining, by the computer system, an audio duration of the
received voice data; obtaining, by the computer system, campaign
data based on the first file, wherein the campaign data includes a
specification of a maximum audio duration; determining, by the
computer system, whether the audio duration exceeds the specified
maximum audio duration; responsive to a determination that the
audio duration does not exceed the specified maximum audio
duration, storing, by the computer system, the voice data and the
text of the at least one utterance in the transcription library;
and responsive to a determination that the audio duration exceeds
the specified maximum audio duration, not storing, by the computer
system, the voice data and the text of the at least one utterance
in the transcription library.
11. The method of claim 5, the method further comprising:
receiving, by the computer system, a request to repeat the one or
more prompts to a second user via the mobile application on the
natural language processing training device; obtaining, by the
computer system, campaign data based on the first file, wherein the
campaign data specifies whether the natural language processing
training device can repeat the one or more prompts; and
determining, by the computer system, whether the natural language
processing training device can repeat the one or more prompts based
on the campaign data.
12. The method of claim 5, the method further comprising:
obtaining, by the computer system, campaign data based on the first
file, wherein the campaign data specifies whether a calibration
screen should be displayed via the mobile application on the
natural language processing training device; and instructing, by
the computer system, the natural language processing training
device to display the calibration screen via the mobile application
based on the campaign data.
13. The method of claim 1, wherein the natural language processing
training device comprises one or more of a mobile phone, a tablet
computing device, a laptop, and a desktop.
14. The method of claim 1, wherein the voice data collection
campaign is configured to collect demographic information related
to a user of the natural language processing training device.
15. A system for providing crowd-sourced campaigns to facilitate
speech analysis, the system comprising: one or more physical
processors programmed with computer program instructions that, when
executed by the one or more physical processors, program the one or
more physical processors to: receive from a natural language
processing training device an identification token containing a
first portion and a second portion, the first portion identifying a
first file associated with a voice data collection campaign, and
the second portion identifying a second file associated with a
session script, the session script comprising one or more prompts
to be provided at a mobile application on the natural language
processing training device; identify one or more audits based on
the identification token; conduct the identified one or more audits
based on the identification token via the mobile application;
receive a plurality of audit responses from the mobile application;
determine a number of failed audit responses based on the received
plurality of audit responses; store the number of failed audits in
association with a device identifier that identifies the natural
language processing training device; determine whether the number
of failed audits for the natural language processing training
device exceeds a failed audits threshold; and responsive to a
determination that the number of failed audits exceeds a failed
audits threshold, bar the natural language processing training
device from the voice data collection campaign.
16. The system of claim 15, wherein to conduct the one or more
audits, the one or more processors are further programmed to:
provide gold standard questions, captions that are not
machine-readable, and/or audio that is not understandable to
machines.
17. The system of claim 15, wherein the one or more processors are
further programmed to: gather a first filename corresponding to the
first file; gather a second filename corresponding to the second
file; and create the identification token using the first filename
and the second filename.
18. The system of claim 15, wherein one or more of the first file
and the second file comprises a JavaScript Object Notation (JSON)
file.
19. The system of claim 15, wherein the session script identifies
one or more prompts, each prompt comprising text of at least one
utterance, wherein the one or more processors are further
programmed to: provide the one or more prompts to a user via the
mobile application on the natural language processing training
device; receive voice data from the natural language processing
training device in response to user interaction with the prompt;
and store the voice data and the prompt in a transcription
library.
20. The system of claim 19, wherein the at least one utterance
comprises one or more of a syllable, a word, a phrase, or a variant
thereof.
21. The system of claim 19, wherein the user interaction comprises
a selection of a touch-screen button instructing the mobile
application to record the voice data.
22. The system of claim 19, wherein the one or more processors are
further programmed to: obtain a decibel level of the received voice
data; obtain campaign data based on the first file, wherein the
campaign data includes a specification of a minimum decibel level;
determine whether the decibel level for the received voice data
exceeds the specified minimum decibel level; responsive to a
determination that the decibel level exceeds the specified minimum
decibel level, store the voice data and the text of the at least
one utterance in the transcription library; and responsive to a
determination that the decibel level does not exceed the specified
minimum decibel level, not store the voice data and the text of the
at least one utterance in the transcription library.
23. The system of claim 19, wherein the one or more processors are
further programmed to: obtain a decibel level of the received voice
data; obtain campaign data based on the first file, wherein the
campaign data includes a specification of a maximum decibel level;
determine whether the decibel level for the received voice data
exceeds the specified maximum decibel level; responsive to a
determination that the decibel level does not exceed the specified
maximum decibel level, store the voice data and the text of the at
least one utterance in the transcription library; and responsive to
a determination that the decibel level exceeds the specified
maximum decibel level, not store the voice data and the text of the
at least one utterance in the transcription library.
24. The system of claim 19, wherein the one or more processors are
further programmed to: obtain an audio duration of the received
voice data; obtain campaign data based on the first file, wherein
the campaign data includes a specification of a maximum audio
duration; determine whether the audio duration exceeds the specified
maximum audio duration; responsive to a determination that the
audio duration does not exceed the specified maximum audio
duration, store the voice data and the text of the at least one
utterance in the transcription library; and responsive to a
determination that the audio duration exceeds the specified maximum
audio duration, not store the voice data and the text of the at
least one utterance in the transcription library.
25. The system of claim 19, wherein the one or more processors are
further programmed to: receive a request to repeat the one or more
prompts to a second user via the mobile application on the natural
language processing training device; obtain campaign data based on
the first file, wherein the campaign data specifies whether the
natural language processing training device can repeat the one or
more prompts; and determine whether the natural language processing
training device can repeat the one or more prompts based on the
campaign data.
26. The system of claim 19, wherein the one or more processors are
further programmed to: obtain campaign data based on the first
file, wherein the campaign data specifies whether a calibration
screen should be displayed via the mobile application on the
natural language processing training device; and instruct the
natural language processing training device to display the
calibration screen via the mobile application based on the campaign
data.
27. The system of claim 15, wherein the natural language processing
training device comprises one or more of a mobile phone, a tablet
computing device, a laptop, and a desktop.
28. The system of claim 15, wherein the voice data collection
campaign is configured to collect demographic information related
to a user of the natural language processing training device.
Description
RELATED APPLICATIONS
[0001] This application is a continuation of U.S. patent
application Ser. No. 14/846,925, filed Sep. 7, 2015, entitled
"SYSTEM AND METHOD FOR PROVIDING WORDS OR PHRASES TO BE UTTERED BY
MEMBERS OF A CROWD AND PROCESSING THE UTTERANCES IN CROWD-SOURCED
CAMPAIGNS TO FACILITATE SPEECH ANALYSIS," the entirety of which is
incorporated herein by reference.
FIELD OF THE INVENTION
[0002] The field of the invention relates to collecting natural
language content for natural language content transcriptions, and
creating and distributing voice data collection campaigns and
session scripts that allow natural language processing trainers to
provide voice data used for natural language transcriptions.
BACKGROUND OF THE INVENTION
[0003] By translating voice data into text, speech recognition has
played an important part in many Natural Language Processing (NLP)
technologies. For instance, speech recognition has proven useful to
technologies involving vehicles (e.g., in-car speech recognition
systems), technologies involving health care, technologies
involving the military and/or law enforcement, technologies
involving telephony, and technologies that assist people with
disabilities. Speech recognition systems are often trained and
deployed to end-users.
[0004] The end-user deployment phase typically includes using a
trained acoustic model to identify text in voice data provided by
end-users. The training phase typically involves training an
acoustic model in the speech recognition system to recognize text
in voice data. The training phase often includes capturing voice
data, transcribing the voice data into text, and storing pairs of
voice data and text in transcription libraries. Capturing voice
data in the training phase typically involves collecting different
syllables, words, and/or phrases commonly used in speech. Depending
on the context, these utterances may form the basis of commands to
a computer system, requests to gather information, portions of
dictations, or other end-user actions.
[0005] Conventionally, NLP systems captured voice data using teams
of trained Natural Language Processing (NLP) trainers who were
housed in a recording studio or other facility having audio
recording equipment therein. The voice data capture process often
involved providing the NLP trainers with a list of utterances, and
recording the utterances using the audio recording equipment. Teams
of trained transcribers in dedicated transcription facilities
typically listened to the utterances, and manually transcribed the
utterances into text.
[0006] Though useful, conventional NLP systems have problems
accommodating the wide variety of utterances present in a given
language. More specifically, teams of trained NLP trainers may not
be able to generate the wide variety of syllables, words, phrases,
etc. that are used in many technological contexts. Moreover,
conventional recording studios and/or audio recording equipment are
often not cost-effective to deploy on a large scale. Additionally,
attempts to distribute collection of voice data to untrained NLP
trainers often results in noise (inaccuracies, spam, etc.) being
added to the voice data. As a result, it is often difficult to
cost-effectively collect voice data with conventional NLP systems.
It would be desirable to provide systems and methods that
effectively collect voice data for NLP technologies without
significant noise.
SUMMARY OF THE INVENTION
[0007] Systems and methods of providing text related to utterances,
and gathering voice data in response to the text are provided
herein. In various implementations, an identification token that
identifies a first file for a voice data collection campaign, and a
second file for a session script may be received from a natural
language processing training device. The first file and the second
file may be used to configure a mobile application to display a
sequence of screens, each of the sequence of screens containing
text of at least one utterance specified in the voice data
collection campaign. Voice data may be received from the natural
language processing training device in response to user interaction
with the text of the at least one utterance. The voice data and the
text may be stored in a transcription library.
[0008] In some implementations, a first filename corresponding to
the first file may be gathered. A second filename corresponding to
the second file may further be gathered. The identification token
may be created using the first filename and the second
filename.
[0009] In some implementations, the identification token comprises an
alphanumeric character string. The identification token may
comprise a concatenation of the first portion and the second
portion. One or more of the first file and the second file may
comprise a JavaScript Object Notation (JSON) file.
[0010] In some implementations, the utterance comprises one or more
of a syllable, a word, a phrase, or a variant thereof. The user
interaction may comprise a selection of a touch-screen button
instructing the mobile application to record the voice data.
[0011] The natural language processing training device may comprise
one or more of a mobile phone, a tablet computing device, a laptop,
and a desktop. The voice data collection campaign may be configured
to collect demographic information related to a user of the natural
language processing training device.
BRIEF DESCRIPTION OF THE DRAWINGS
[0012] FIG. 1 illustrates a block diagram of a natural language
processing environment, according to some implementations.
[0013] FIG. 2 illustrates a block diagram of a transcription
engine, according to some implementations.
[0014] FIG. 3 illustrates a block diagram of an example of a data
flow relating to operation of the natural language processing
environment during the training phase, according to some
implementations.
[0015] FIG. 4 illustrates a block diagram of an example of a data
flow relating to operation of the natural language processing
environment during the end-user deployment phase, according to some
implementations.
[0016] FIG. 5 illustrates a block diagram of an example of a data
flow relating to transcription of voice data by the natural
language processing environment during the training phase,
according to some implementations.
[0017] FIG. 6 illustrates a flowchart of a process for collecting
voice data for a voice data collection campaign, according to some
implementations.
[0018] FIG. 7 illustrates a flowchart of a process for creating and
distributing identification tokens for a natural language training
process, according to some implementations.
[0019] FIG. 8 illustrates a screenshot of a screen of a mobile
application of one of the NLP training device(s), according to some
implementations.
[0020] FIG. 9 shows an example of a computer system, according to
some implementations.
DETAILED DESCRIPTION
[0021] The system and method described herein measure and assure high quality when performing large-scale crowdsourced data
collections for acoustic model training. The system and method may
mitigate different types of spam encountered while collecting and
validating speech audio from unmanaged crowds. The system and
method may provide, to a mobile application executing on a client
device, prompts of utterances to be uttered. The mobile application
may collect audio from crowd members and control conditions of the
audio collection. Crowd members may be compensated for completing
their prompts. An example of a mobile application for collecting
utterances is described in U.S. patent application Ser. No.
14/846,926, filed on Sep. 7, 2015, entitled "SYSTEM AND METHOD OF
RECORDING UTTERANCES USING UNMANAGED CROWDS FOR NATURAL LANGUAGE
PROCESSING," the entirety of which is hereby incorporated herein in
its entirety.
[0022] The recordings may be validated using a two-step validation
process which ensures that workers are paid only when they have
actually used the application to complete their tasks. For example,
the collected audio may be run through a second crowdsourcing job
designed to validate that the speech matches the text with which
the speakers were prompted. For the validation task, gold-standard
test questions are used in combination with expected answer
distribution rules and monitoring of worker activity levels over
time to detect and expel likely spammers. Inter-annotator agreement
is used to ensure high confidence of validated judgments. This
process yielded millions of recordings with matching transcriptions
in American English. The resulting set is 96% accurate with only
minor errors. An example of the validation process is described in U.S. patent application Ser. No. 14/846,935, filed on Sep. 7, 2015, entitled "SYSTEM AND METHOD FOR VALIDATING NATURAL LANGUAGE CONTENT USING CROWDSOURCED VALIDATION JOBS," which is hereby incorporated herein by reference in its entirety.
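The two-step validation above is described only at the level of policy. As a rough illustration, the Python sketch below shows one way gold-standard screening and inter-annotator agreement might be combined, assuming a simple majority vote; the function names, thresholds, and data shapes are hypothetical and are not taken from the application.

    from collections import Counter

    GOLD_ACCURACY_THRESHOLD = 0.7  # hypothetical cutoff for expelling likely spammers
    MIN_AGREEING_JUDGMENTS = 2     # hypothetical agreement requirement

    def worker_passes_gold(answers, gold_answers):
        """Score a worker's answers to gold-standard test questions."""
        correct = sum(1 for q, a in answers.items() if gold_answers.get(q) == a)
        return correct / len(answers) >= GOLD_ACCURACY_THRESHOLD

    def recording_is_valid(judgments):
        """Accept a recording when enough validators agree it matches its prompt."""
        label, count = Counter(judgments).most_common(1)[0]
        return label == "match" and count >= MIN_AGREEING_JUDGMENTS

    # Usage: keep only judgments from workers who passed the gold questions, then vote.
    gold = {"g1": "match", "g2": "no_match"}
    workers = {"w1": {"g1": "match", "g2": "no_match"},   # passes the gold screen
               "w2": {"g1": "no_match", "g2": "match"}}   # fails and would be expelled
    trusted = [w for w, answers in workers.items() if worker_passes_gold(answers, gold)]
    print(trusted, recording_is_valid(["match", "match", "no_match"]))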
[0023] Example of a System Architecture
[0024] The Structures of the Natural Language Processing
Environment 100
[0025] FIG. 1 illustrates a block diagram of a natural language
processing environment 100, according to some implementations. The
natural language processing environment 100 may include Natural
Language Processing (NLP) end-user device(s) 102, NLP training
device(s) 104, transcription device(s) 106, a network 108,
validation device(s) 110, and a transcription validation server
112. The NLP end-user device(s) 102, the NLP training device(s)
104, the transcription device(s) 106, the validation device(s) 110,
and the transcription validation server 112 are shown coupled to
the network 108.
[0026] The NLP end-user device(s) 102 may include one or more
digital devices configured to provide an end-user with natural
language transcription services. A "natural language transcription
service," as used herein, may include a service that converts audio
contents into a textual format. A natural language transcription
service may recognize words in audio contents, and may provide an
end-user with a textual representation of those words. The natural
language transcription service may be incorporated into an
application, process, or run-time element that is executed on the
NLP end-user device(s) 102. In an implementation, the natural
language transcription service is incorporated into a mobile
application that executes on the NLP end-user device(s) 102 or a
process maintained by the operating system of the NLP end-user
device(s) 102. In various implementations, the natural language
transcription service may be incorporated into applications,
processes, run-time elements, etc. related to technologies
involving vehicles, technologies involving health care,
technologies involving the military and/or law enforcement,
technologies involving telephony, technologies that assist people
with disabilities, etc. The natural language transcription service
may be supported by the transcription device(s) 106, the validation
device(s) 110, and the transcription validation server 112, as
discussed further herein.
[0027] The NLP end-user device(s) 102 may include components, such
as memory and one or more processors, of a computer system. The
memory may further include a physical memory device that stores
instructions, such as the instructions referenced herein. The NLP
end-user device(s) 102 may include one or more audio input
components (e.g., one or more microphones), one or more display
components (e.g., one or more screens), one or more audio output
components (e.g., one or more speakers), etc. In some
implementations, the audio input components of the NLP end-user
device(s) 102 may receive audio content from an end-user, the
display components of the NLP end-user device(s) 102 may display
text corresponding to transcriptions of the audio contents, and the
audio output components of the NLP end-user device(s) 102 may play
audio contents to the end-user. It is noted that in various
implementations, however, the NLP end-user device(s) 102 need not
display transcribed audio contents, and may use transcribed audio
contents in other ways, such as to provide commands that are not
displayed on the NLP end-user device(s) 102, use application
functionalities that are not displayed on the NLP end-user
device(s) 102, etc. The NLP end-user device(s) 102 may include one
or more of a networked phone, a tablet computing device, a laptop
computer, a desktop computer, a server, or some combination
thereof.
[0028] The NLP training device(s) 104 may include one or more
digital device(s) configured to receive voice data from an NLP
trainer. An "NLP trainer," as used herein, may refer to a person
who provides voice data during a training phase of the natural
language processing environment 100. The voice data provided by the
NLP trainer may be used as the basis of transcription libraries
that are used during an end-user deployment phase of the natural
language processing environment 100. The NLP training device(s) 104
may include components, such as memory and one or more processors,
of a computer system. The memory may further include a physical
memory device that stores instructions, such as the instructions
referenced herein. The NLP training device(s) 104 may include one
or more audio input components (e.g., one or more microphones), one
or more display components (e.g., one or more screens), one or more
audio output components (e.g., one or more speakers), etc. The NLP
training device(s) 104 may support a mobile application, process,
etc. that is used to capture voice data during the training phase
of the natural language processing environment 100. The NLP training device(s) 104 may include one or more of a networked
phone, a tablet computing device, a laptop computer, a desktop
computer, a server, or some combination thereof.
[0029] In some implementations, the mobile application, process,
etc. on the NLP training device(s) 104 supports a sequence of
screens that allow an NLP trainer (e.g., a person operating the NLP
training device(s) 104) to view utterances and to record voice data
corresponding to those utterances. The NLP training device(s) 104
may correspond to recording devices discussed in U.S. patent
application Ser. No. 14/846,926, entitled, "SYSTEM AND METHOD OF
RECORDING UTTERANCES USING UNMANAGED CROWDS FOR NATURAL LANGUAGE
PROCESSING," which is hereby incorporated by reference herein.
[0030] The transcription device(s) 106 may include one or more
digital devices configured to support natural language
transcription services. The transcription device(s) 106 may receive
transcription job data from the transcription validation server
112. A "transcription job," as described herein, may refer to a
request to transcribe audio content into text. "Transcription job
data" may refer to data related to a completed transcription job.
Transcription job data may include audio content that is to be
transcribed, as well as other information (transcription timelines,
formats of text output files, etc.) related to transcription. The
transcription device(s) 106 may further provide transcription job
data, such as text related to a transcription of audio contents, to
the transcription validation server 112. In some implementations,
the transcription device(s) 106 gather voice data from the NLP
training device(s) 104 during a training phase of the natural
language processing environment 100.
[0031] In some implementations, the transcription device(s) 106
implement crowdsourced transcription processes. In these
implementations, an application or process executing on the
transcription device(s) 106 may receive transcription job data from
the transcription validation server 112 (e.g., from the
transcription engine 114). The transcription job data may specify
particular items of audio content an end-user is to transcribe. The
transcribers need not, but may, be trained transcribers.
[0032] In various implementations, the transcription device(s) 106
comprise digital devices that perform transcription jobs using
dedicated transcribers. In these implementations, the transcription
device(s) 106 may comprise networked phone(s), tablet computing
device(s), laptop computer(s), desktop computer(s), etc. that are
operated by trained transcribers. As an example of these
implementations, the transcription device(s) 106 may include
computer terminals in a transcription facility that are operated by
trained transcription teams.
[0033] The network 108 may comprise any computer network. The
network 108 may include a networked system that includes several
computer systems coupled together, such as the Internet. The term
"Internet" as used herein refers to a network of networks that uses
certain protocols, such as the TCP/IP protocol, and possibly other
protocols such as the hypertext transfer protocol (HTTP) for
hypertext markup language (HTML) documents that make up the World
Wide Web (the web). Content is often provided by content servers,
which are referred to as being "on" the Internet. A web server,
which is one type of content server, is typically at least one
computer system which operates as a server computer system and is
configured to operate with the protocols of the web and is coupled
to the Internet. The physical connections of the Internet and the
protocols and communication procedures of the Internet and the web
are well known to those of skill in the relevant art. In various
implementations, the network 108 may be implemented as a
computer-readable medium, such as a bus, that couples components of
a single computer together. For illustrative purposes, it is
assumed the network 108 broadly includes, as understood from
relevant context, anything from a minimalist coupling of the
components illustrated in the example of FIG. 1, to every component
of the Internet and networks coupled to the Internet.
[0034] In various implementations, the network 108 may include
technologies such as Ethernet, 802.11, worldwide interoperability
for microwave access (WiMAX), 3G, 4G, CDMA, GSM, LTE, digital
subscriber line (DSL), etc. The network 108 may further include
networking protocols such as multiprotocol label switching (MPLS),
transmission control protocol/Internet protocol (TCP/IP), User
Datagram Protocol (UDP), hypertext transport protocol (HTTP),
simple mail transfer protocol (SMTP), file transfer protocol (FTP),
and the like. The data exchanged over the network 108 can be
represented using technologies and/or formats including hypertext
markup language (HTML) and extensible markup language (XML). In
addition, all or some links can be encrypted using conventional
encryption technologies such as secure sockets layer (SSL),
transport layer security (TLS), and Internet Protocol security
(IPsec). In some implementations, the network 108 comprises secure
portions. The secure portions of the network 108 may correspond to
networked resources managed by an enterprise, networked resources
that reside behind a specific gateway/router/switch, networked
resources associated with a specific Internet domain name, and/or
networked resources managed by a common Information Technology
("IT") unit.
[0035] The validation device(s) 110 may include one or more digital
devices configured to validate natural language transcriptions. The
validation device(s) 110 may receive validation job data from the
transcription validation server 112 (e.g., from the validation
engine 116). A "validation job," as used herein, may refer to a
request to verify the outcome of a transcription job. "Validation
job data" or a "validation unit," as described herein, may refer to
data related to a crowdsourced validation job. In various
implementations, the validation device(s) 110 implement
crowdsourced validation processes. A "crowdsourced validation
process," as described herein, may include a process that
distributes a plurality of validation jobs to a plurality of
validators. The validation device(s) 110 may correspond to the
validation devices described in U.S. patent application Ser. No.
14/846,935, entitled, "SYSTEM AND METHOD FOR VALIDATING NATURAL
LANGUAGE CONTENT USING CROWDSOURCED VALIDATION JOBS," which is
hereby incorporated by reference herein.
[0036] The transcription validation server 112 may comprise one or
more digital devices configured to support natural language
transcription services. The transcription validation server 112 may
include a transcription engine 114, a validation engine 116, and an
end-user deployment engine 118.
[0037] The transcription engine 114 may transcribe voice data into
text during a training phase of the natural language processing
environment 100. More specifically, the transcription engine 114
may collect audio content from the NLP training device(s) 104, and
may create and/or manage transcription jobs. The transcription
engine 114 may further receive transcription job data related to
transcription jobs from the transcription device(s) 106. According
to various implementations disclosed herein, the transcription
engine 114 distributes collection of voice data to untrained
speakers who provide utterances into the mobile application,
process, etc. executing on the NLP training device(s) 104. The
transcription engine 114 may allow providers of an NLP service to
create voice data collection campaigns that identify NLP utterances
to be collected from the speakers. A "voice data collection
campaign," as described herein, may refer to an instance of an
effort to collect voice data for specific utterances.
[0038] The transcription engine 114 may provide each of the NLP
training device(s) 104 with a session script that guides the
speakers through a series of utterances represented in the voice
data collection campaigns. A "session script," as described herein,
may refer to a script that provides an NLP trainer with one or more
prompts into which the NLP trainer can provide voice data in
response to text of utterances. The transcription engine 114 may
further collect voice data related to those utterances. The voice
data may be stored along with text of the utterances in a
transcription datastore that is used for crowdsourced transcription
jobs, crowdsourced validation jobs, and/or other processes
described herein. FIG. 2 shows the transcription engine 114 in
greater detail.
[0039] The validation engine 116 may manage validation of
transcriptions of voice data during a training phase of the natural
language processing environment 100. In various implementations,
the validation engine 116 provides validation jobs and/or
validation job scoring data to the validation device(s) 110. The
validation engine 116 may further receive validation job outcomes
from the validation device(s) 110. The validation engine 116 may
store validated transcription data in a validated transcription
data datastore. The validated transcription data datastore may be
used during an end-user deployment phase of the natural language
processing environment 100. The validation engine 116 may
correspond to the validation engine described in U.S. patent
application Ser. No. 14/846,935, entitled, "System and Method for
Validating Natural Language Content Using Crowdsourced Validation
Jobs."
[0040] The end-user deployment engine 118 may provide natural
language transcription services to the NLP end-user device(s) 102
during an end-user deployment phase of the natural language
processing environment 100. In various implementations, the
end-user deployment engine 118 uses a validated transcription data
datastore. Transcriptions in the validated transcription data
datastore may have been initially transcribed by the transcription
device(s) 106 and the transcription engine 114, and validated by
the validation device(s) 110 and the validation engine 116.
[0041] Though FIG. 1 shows the NLP end-user device(s) 102, the NLP
training device(s) 104, the transcription device(s) 106, and the
validation device(s) 110 as distinct sets of devices, it is noted
that in various implementations, one or more of the NLP end-user
device(s) 102, the NLP training device(s) 104, the transcription
device(s) 106, and the validation device(s) 110 may reside on a
common set of devices. For example, in some implementations,
devices used as the basis of the transcription device(s) 106 may
correspond to devices used as the basis of the NLP training
device(s) 104. In these implementations, a person may use a digital device configured both as an NLP training device 104 to provide voice data and as a transcription device 106 to transcribe voice data provided by other people.
[0042] The Structures of the Transcription Engine 114
[0043] FIG. 2 illustrates a block diagram of a transcription engine
114, according to some implementations. The transcription engine
114 may include a network interface engine 202, a mobile
application management engine 204, a campaign engine 206, a session
script management engine 208, an identification token management
engine 210, a demographic data datastore 212, a campaign data
datastore 214, a session script data datastore 216, and an
identification token data datastore 218. One or more of the network
interface engine 202, the mobile application management engine 204,
the campaign engine 206, the session script management engine 208,
the identification token management engine 210, the demographic
data datastore 212, the campaign data datastore 214, the session
script data datastore 216, and the identification token data
datastore 218 may be coupled to one another or to modules not shown
in FIG. 2.
[0044] The network interface engine 202 may be configured to send
data to and receive data from the network 108. In some
implementations, the network interface engine 202 is implemented as
part of a network card (wired or wireless) that supports a network
connection to the network 108. The network interface engine 202 may
control the network card and/or take other actions to send data to
and receive data from the network 108.
[0045] The mobile application management engine 204 may be
configured to manage a mobile application on the NLP training device(s) 104. In some implementations, the mobile application management engine 204 may provide installation instructions (e.g., an executable installation file, a link to an application store, etc.) to the NLP training device(s) 104. The mobile application management engine 204 may instruct a mobile application on the NLP training device(s) 104 to render on the screens of the NLP training device(s) 104 a sequence of prompts that allow NLP trainers to
record utterances that are provided to them. In some
implementations, the utterances in the sequence of prompts follow
session scripts for a particular NLP trainer. The mobile
application management engine 204 may further configure the mobile
application in accordance with a voice data collection campaign.
For example, the mobile application management engine 204 may use
campaign data to configure relevant sound levels, tutorials,
session scripts, etc. of the mobile application.
[0046] The campaign engine 206 may collect campaign data from the
campaign data datastore 214. In some implementations, the campaign
engine 206 queries the campaign data datastore 214 for specific
items of campaign data for a particular voice data collection
campaign. The campaign engine 206 may provide campaign data to the
mobile application management engine 204 and/or other modules of
the transcription engine 114.
[0047] The session script management engine 208 may collect session
scripts from the session script data datastore 216. In various
implementations, the session script management engine 208 queries the session script data
datastore 216 for specific session scripts. The session script
management engine 208 may provide the session scripts to the mobile
application management engine 204 and/or other modules of the
transcription engine 114.
[0048] The identification token management engine 210 may manage
identification tokens. In some implementations, the identification token management engine 210 gathers identification tokens and stores them in the identification token data datastore 218. The identification token management engine 210 may also create identification tokens using information related to
campaign data and/or session scripts. As an example, the
identification token management engine 210 may create an
identification token that includes a file name of specific campaign
data in a first part (e.g., in the first five characters), and
includes a file name of specific session scripts in a second part
(e.g., in the last four characters). The identification token
management engine 210 may provide identification tokens to and/or
receive identification tokens from the mobile application
management engine 204 and/or other modules of the transcription
engine 114.
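As a concrete sketch of the example in the preceding paragraph, the Python fragment below builds a token from a five-character campaign-data filename stem and a four-character session-script filename stem. The split widths follow the "e.g." above (the application elsewhere gives a four/five split, so the widths are illustrative only), and the helper name is hypothetical.

    CAMPAIGN_PART_LEN = 5  # illustrative: first five characters name the campaign data file
    SESSION_PART_LEN = 4   # illustrative: last four characters name the session script file

    def create_identification_token(campaign_filename, session_filename):
        """Concatenate filename stems from the campaign data and session script files."""
        token = campaign_filename[:CAMPAIGN_PART_LEN] + session_filename[:SESSION_PART_LEN]
        if len(token) != CAMPAIGN_PART_LEN + SESSION_PART_LEN:
            raise ValueError("filename stems are too short to form a token")
        return token

    # e.g., campaign stem "ao9hz" and session script stem "3f7q" yield "ao9hz3f7q"
    print(create_identification_token("ao9hz", "3f7q"))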
[0049] The demographic data datastore 212 may store demographic
data related to NLP trainers. In some implementations, the
demographic data may be specified in a demographic
file referenced in campaign data stored in the campaign data
datastore 214. Demographic data may include usernames, first and
last names, addresses, phone numbers, emails, payment information,
etc. associated with NLP trainers. Table 1 shows an example of how
demographic data in the demographic data datastore 212 may be
structured:
TABLE 1. Example of Demographic Data.

version -- Schema version of the demographic file. (Type: Double; Example: 1.0; Required: Yes)
data_points -- Container for the demographic field definitions below. (Required: Yes)
data_points.label -- Name of the field; will be displayed beside the input field. (Type: String; Example: "First name"; Required: Yes)
data_points.required -- Whether the participant is required to provide this information. (Type: Boolean; Example: true|false; Required: No, default True)
data_points.type -- Identifies what type of input field to display: 0 = Text field (EditText), 1 = List (Spinner), 2 = Checkboxes, 3 = Radio buttons, 4 = Switch. (Type: Integer; Example: 0|1|2|3|4; Required: Yes)
data_points.hint -- A hint of what the participant should input; only used for type 0. (Type: String; Example: "e.g., Bob"; Required: No)
data_points.options -- Options to populate types 1, 2, and 3. (Type: List<String>; Example: ["Male", "Female", "Other"]; Required: Yes, if type is 1, 2, or 3)
data_points.default -- Whether the Switch should be set to true or false by default. (Type: Boolean; Example: true|false; Required: Yes, if type is 4)
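To make the schema in Table 1 concrete, the following is a hypothetical demographic file expressed as a Python dictionary (serializable with json.dumps); the particular labels and options are invented, and only the keys come from the table.

    import json

    demographic_file = {
        "version": 1.0,
        "data_points": [
            {"label": "First name", "required": True, "type": 0, "hint": "e.g., Bob"},
            {"label": "Gender", "required": False, "type": 3,
             "options": ["Male", "Female", "Other"]},          # type 3 = radio buttons
            {"label": "Shared device", "required": True, "type": 4, "default": False},
        ],
    }
    print(json.dumps(demographic_file, indent=2))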
[0050] The campaign data datastore 214 may store campaign data. The
campaign data may comprise a file compatible with JavaScript Object
Notation (JSON) or other relevant format. Each item of campaign
data may be identified with campaign code, e.g., an alphanumeric
string that can be used to download a file corresponding to the
campaign data. In various implementations, the campaign data
includes information related to configuring the mobile application
on the NLP training device(s) 104 to record voice data. The
campaign data may include, for instance, definitions of different
sound levels that may be used by the mobile application on the NLP
training device(s) 104 during prompt recording and/or audio
calibration processes. Table 2 shows an example of how the campaign
data in the campaign data datastore 214 may be structured:
TABLE 2. Example of Campaign Data.

version -- Schema version of the settings file. (Type: Double; Example: 1.0; Required: Yes)
id -- Same as the filename; used to easily identify the settings file. (Type: String; Example: "ao9hz3"; Required: Yes)
db_levels -- Defines the different decibel levels to be used by the VU meter during prompt recording and calibration. (Required: Yes)
db_levels.min -- The minimum decibel level. (Type: Double; Example: 0; Required: Yes)
db_levels.grn_max_ylw_min -- The maximum decibel level for the green range and the minimum for the yellow. (Type: Double; Example: 10.5; Required: Yes)
db_levels.ylw_max_red_min -- The maximum decibel level for the yellow range and the minimum for the red. (Type: Double; Example: 20; Required: Yes)
db_levels.max -- The maximum decibel level. (Type: Double; Example: 30; Required: Yes)
tutorial -- Name of the .tut file from which to download tutorials from S3; exclude to skip tutorials; should not contain spaces. (Type: String; Example: "test_tut"; Required: No)
demographic -- Name of the .dem file from which to download demographic options from S3; exclude to skip demographics; should not contain spaces. (Type: String; Example: "test_dem"; Required: No)
session_script_dir -- Name of the folder, within jibe.data/sessions/scripts, in which the campaign's session scripts are kept. (Type: String; Example: "test_campaign")
number_of_sessions -- The number of sessions [1-n] a participant can do. (Type: Integer; Example: 1; Required: Yes)
duplicate_sessions -- Whether a device can repeat a session; should be true if a device is shared. (Type: Boolean; Example: true|false; Required: Yes)
do_calibration -- Whether the calibration screen should be displayed. (Type: Boolean; Example: true|false; Required: No, default False)
external_storage -- Whether temporary, completed, and zip files should be saved in the app's internal storage (more secure) or an external folder; if external, a Jibe folder will be created. (Type: Boolean; Example: true|false; Required: No, default False)
failed_audits_threshold -- Total number of failed audits a given device is allowed; if a device's total equals this value, the device will not be able to continue with the campaign. (Type: Integer; Example: 10; Required: No)
upload -- Whether to automatically upload audio files and other information. (Type: Boolean; Example: true|false; Required: No, default True)
generate_completion_code -- Whether to generate an 8-digit alphanumerical string at the end of each completed session. (Type: Boolean; Example: true|false; Required: No, default False)
max_audio_length -- The maximum length an audio recording can be, in milliseconds. (Type: Long; Example: 12000; Required: Yes)
silence -- Silence to precede and follow each audio recording. (Required: No)
silence.leading -- Duration of silence prepending the audio recording, in milliseconds; default is 500. (Type: Long; Example: 1000; Required: No)
silence.trailing -- Duration of silence appended to the audio recording, in milliseconds; default is 1000. (Type: Long; Example: 5000; Required: No)
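Likewise, a campaign data file conforming to Table 2 might look like the following sketch; the values are invented, and the nesting of the dotted keys (db_levels.min, silence.leading, etc.) into JSON objects is an assumption, since Table 2 lists only flat key paths.

    import json

    campaign_file = {
        "version": 1.0,
        "id": "ao9hz3",  # per Table 2, matches the settings filename
        "db_levels": {"min": 0, "grn_max_ylw_min": 10.5,
                      "ylw_max_red_min": 20, "max": 30},
        "tutorial": "test_tut",
        "demographic": "test_dem",
        "session_script_dir": "test_campaign",
        "number_of_sessions": 1,
        "duplicate_sessions": False,
        "do_calibration": True,
        "failed_audits_threshold": 10,
        "upload": True,
        "max_audio_length": 12000,          # milliseconds
        "silence": {"leading": 1000, "trailing": 5000},
    }
    print(json.dumps(campaign_file, indent=2))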
[0051] The session script data datastore 216 may store data related
to session scripts. More particularly, in various implementations,
the session script data datastore 216 may store information related
to the specific sequence of screens used on one of the NLP training
device(s) 104 to collect voice data. Table 3 shows an example of
how the data related to session scripts in the session script data
datastore 216 may be structured:
TABLE 3. Example of Data Related to Session Scripts.

version -- Schema version of the .ss file. (Type: Double; Example: 1.2; Required: Yes)
id -- A unique 12-character alphanumerical string used to identify the session script. (Type: String; Example: "5C3D6D4E2981"; Required: Yes)
name -- Name given to the session; not displayed to the participant and used only internally. (Type: String; Example: "POI-Seattle-WA-USA"; Required: Yes)
language-culture -- Language-culture information about the script; not used by Jibe but kept for post-processing. (Type: String; Example: "en-us"; Required: No)
prompts -- Array that contains a list of Prompts. (Type: List<Prompt>; Required: Yes)
audio_config -- Configurations for all the audio files generated during the session; if this field is missing, the app will use the defaults noted below. (Required: No)
audio_config.sample_rate -- The sample rate for the audio recording; default is 16000. (Type: Integer; Example: 16000; Required: Yes)
audio_config.format -- The format for the audio recording; default is "WAV". (Type: String; Example: "WAV"|"RAW"|"PCM"; Required: Yes)
audio_config.channel -- Integer representing mono (1) or stereo (2); default is 1. (Type: Integer; Example: 1 = Mono, 2 = Stereo; Required: Yes)
audio_config.bit_rate -- The bit rate for the recording; default is 8. (Type: Integer; Example: 16; Required: Yes)
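A session script file conforming to Table 3 might then look like the sketch below; the shape of each Prompt entry is an assumption (the table does not define the Prompt structure), and the prompt texts are invented.

    import json

    session_script = {
        "version": 1.2,
        "id": "5C3D6D4E2981",
        "name": "POI-Seattle-WA-USA",
        "language-culture": "en-us",
        "prompts": [
            {"text": "Navigate to the nearest coffee shop"},  # hypothetical Prompt shape
            {"text": "Find parking near Pike Place Market"},
        ],
        "audio_config": {"sample_rate": 16000, "format": "WAV",
                         "channel": 1, "bit_rate": 16},
    }
    print(json.dumps(session_script, indent=2))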
[0052] The identification token data datastore 218 may store
identification tokens. In various implementations, each
identification token may comprise an alphanumeric string that
contains therein identifiers of campaign data and session script
data. As an example, identification tokens in the identification
token data datastore 218 may comprise a nine-character alphanumeric string. In these implementations, the first four characters of an identification token may correspond to a campaign code for specific campaign data. Moreover, the last five characters of an identification token may correspond to the first five characters of a filename of session script data.
[0053] The Natural Language Processing Environment 100 in
Operation
[0054] The natural language processing environment 100 may operate
to collect voice data using campaign data and session scripts
delivered to the NLP training device(s) 104, as discussed further
herein. The natural language processing environment 100 may also
operate to transcribe voice data into text, and validate
transcriptions of voice data as discussed further herein. As
discussed herein, the natural language processing environment 100
may operate to support an end-user deployment phase in which voice data from NLP end users is collected and transcribed using validated transcription libraries. The natural language processing
environment 100 may also operate to support a training phase in
which voice data is gathered through campaign data and session
scripts, and assembled into transcription libraries.
[0055] Operation when Collecting Voice Data Pursuant to a Voice
Data Collection Campaign
[0056] The natural language processing environment 100 may operate
to collect voice data pursuant to voice data collection campaigns
provided to the NLP training device(s) 104 and managed by the
transcription engine 114. A mobile application on the NLP training
device(s) 104 may present NLP trainer(s) with a series of prompts
that display utterances in accordance with the voice data
collection campaigns and/or session scripts. The NLP trainers may
use the series of prompts to provide voice data that corresponds to
text related to the voice data collection campaigns.
[0057] FIG. 3 illustrates a block diagram of an example of a data
flow 300 relating to operation of the natural language processing
environment 100 during the training phase, according to some
implementations. FIG. 3 includes the transcription engine 114, the
network 108, the NLP training device(s) 104, and an NLP trainer
302.
[0058] The transcription engine 114 may receive and/or store a
voice data collection campaign file. The transcription engine 114
may also receive and/or store a session script file that is sent to
and/or stored in the transcription engine 114. In various
implementations, the transcription engine 114 creates
identification tokens that identify the voice data collection
campaign file and the session script file. As an example, the
transcription engine 114 may create nine-character identification tokens in which a first part (e.g., the first five characters) represents a voice data collection campaign file and a second part (e.g., the last four characters) represents a session script file. The transcription engine 114 may include the identification token in installation instructions, and may
incorporate the installation instructions into a network-compatible
transmission.
[0059] At an operation 304, the transcription engine 114 may send
the network-compatible transmissions to the network 108. At an
operation 306, the NLP training device(s) 104 may receive the
installation instructions. In response to the installation
instructions, the NLP training device(s) 104 may install and/or
configure a mobile application that supports gathering voice data.
The mobile application may specify identification tokens related to
particular voice data collection campaign and session scripts to be
assigned to the NLP trainer(s) 302. The NLP training device(s) 104
may incorporate the identification token into a network-compatible
transmission.
[0060] When the NLP trainer 302 accesses the mobile application on
the NLP training device(s) 104, one or more of the identification
tokens may be returned to the transcription engine 114. At an
operation 308, the NLP training device(s) 104 may provide the
network-compatible transmission to the network 108. At an operation
310, the transcription engine 114 may receive the
network-compatible transmission.
[0061] The one or more identification tokens may be used to look up
a voice data collection campaign and session scripts for the NLP
trainer 302. In some implementations, the transcription engine 114
parses the identification token. The transcription engine 114 may
identify a file corresponding to a voice data collection campaign
from a first part of the identification token. The transcription
engine 114 may also identify a file corresponding to session scripts
from a second part of the identification token. The transcription
engine 114 may incorporate the file corresponding to the voice data
collection campaign, or a link to that file, into a
network-compatible transmission. The transcription engine 114 may
also incorporate the file corresponding to the session scripts, or a
link to that file, into the network-compatible transmission. At an
operation 312, the transcription engine 114 may provide the
network-compatible transmission to the network 108.
[0062] At an operation 314, the NLP training device(s) 104 may
receive the network-compatible transmission. Using the voice data
collection campaign and the session scripts, the NLP training
device(s) 104 may guide the NLP trainer(s) 302 through a series of
prompts that specify utterances for which voice data is to be
collected. More
specifically, each prompt may provide the NLP trainer(s) 302 with
one or more utterances to be recorded. Each prompt may provide the
NLP trainer(s) 302 with the ability to erase, record, control
recording parameters, etc. The NLP training device(s) 104 may
capture voice data from the NLP trainer(s) 302.
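The prompt sequence described above can be pictured with the
following sketch, which assumes the session script reduces to an
ordered list of utterance texts; the function names and the simple
console display are hypothetical stand-ins for the mobile
application's screens (Python):

    def run_prompts(utterance_texts, record_voice):
        """Step an NLP trainer through one prompt per utterance,
        capturing a recording for each. A real prompt would also
        offer erase, playback, and recording-parameter controls."""
        captured = []
        for text in utterance_texts:
            print(text)                      # display the utterance to be read
            captured.append(record_voice())  # capture the trainer's voice data
        return captured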
[0063] The NLP training device(s) 104 may further incorporate the
voice data into a network-compatible transmission that, at an
operation 316, is provided to the network 108. At an operation 318,
the transcription engine 114 receives the network-compatible
transmission. The transcription engine 114 may store the voice data
along with the text corresponding to the voice data in one or more
transcription libraries used for the end-user deployment phase.
[0064] Operation when Implementing an End-User Deployment Phase
[0065] The natural language processing environment 100 may operate
to transcribe voice data from end-users during an end-user
deployment phase. In the end-user deployment phase, NLP end-user
device(s) 102 may provide voice data over the network 108 to the
transcription validation server 112. The end-user deployment engine
118 may use trained transcription libraries that were created
during the training phase of the natural language processing
environment 100 to provide validated transcription data to the NLP
end-user device(s) 102. In an implementation, the NLP end-user
device(s) 102 streams the voice data to the transcription
validation server 112, and the end-user deployment engine 118
returns real-time transcriptions of the voice data to the NLP
end-user device(s) 102.
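In outline, the streaming exchange described in this implementation
might look like the following sketch; the chunking granularity and
the per-chunk transcription callback are assumptions rather than
details from the disclosure (Python):

    def stream_transcriptions(audio_chunks, transcribe_chunk):
        """Send captured voice data to the transcription validation
        server chunk by chunk, yielding each transcription as it is
        returned, to approximate the real-time round trip."""
        for chunk in audio_chunks:
            yield transcribe_chunk(chunk)  # one network round trip per chunk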
[0066] FIG. 4 illustrates a block diagram of an example of a data
flow 400 relating to operation of the natural language processing
environment 100 during the end-user deployment phase, according to
some implementations. FIG. 4 includes end-user(s) 402, the NLP
end-user device 102, the network 108, and the end-user deployment
engine 118.
[0067] At an operation 404, the end-user(s) 402 provide voice data
to the NLP end-user device(s) 102. The NLP end-user device(s) 102
may capture the voice data using an audio input device thereon. The
NLP end-user device(s) 102 may incorporate the voice data into
network-compatible data transmissions, and at an operation 406, may
send the network-compatible data transmissions to the network
108.
[0068] At an operation 408, the end-user deployment engine 118 may
receive the network-compatible data transmissions. The end-user
deployment engine 118 may further extract and transcribe the voice
data using trained transcription libraries stored in the end-user
deployment engine 118. More specifically, the end-user deployment
engine 118 may identify validated transcription data corresponding
to the voice data in trained transcription libraries. The end-user
deployment engine 118 may incorporate the validated transcription
data into network-compatible data transmissions. At an operation
410, the end-user deployment engine 118 may provide the validated
transcription data to the network 108.
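As a loose illustration of the lookup step, a trained transcription
library can be pictured as a mapping from features of the voice data
to validated text; the fingerprint key and the in-memory mapping
below are hypothetical stand-ins, not the disclosed storage format
(Python):

    # Hypothetical stand-in for a trained transcription library:
    # a mapping from a voice-data fingerprint to validated text.
    trained_library = {
        "a1b2c3d4": "what is the weather today",
    }

    def find_validated_transcription(voice_fingerprint: str):
        """Return validated transcription data for the given voice
        data, or None if the library has no validated entry."""
        return trained_library.get(voice_fingerprint)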
[0069] At an operation 412, the NLP end-user device(s) 102 may
receive the validated transcription data. The NLP end-user
device(s) 102 may further extract the validated transcription data
from the network-compatible transmissions. At an operation 414, the
NLP end-user device(s) 102 provide the validated transcription data
to the end-user(s) 402. In some implementations, the NLP end-user
device(s) 102 display the validated transcription data on a display
component (e.g., a screen). The NLP end-user device(s) 102 may also
use the validated transcription data internally (e.g., in place of
keyboard input for a specific function or in a specific
application/document).
[0070] Operation when Gathering Transcription Data in a Training
Phase
[0071] The natural language processing environment 100 may operate
to gather voice data from NLP trainers during a training phase.
More particularly, in a training phase, NLP trainers provide the
NLP training device(s) 104 with voice data. The voice data may
comprise words, syllables, and/or combinations of words and/or
syllables that commonly appear in a particular language. In some
implementations, the NLP trainers use a mobile application on the
NLP training device(s) 104 to input the voice data. In the training
phase, the NLP training device(s) 104 may provide the voice data to
the transcription engine 114. The transcription engine 114 may
provide the voice data to the transcription device(s) 106. In some
implementations, the transcription engine 114 provides the voice
data as part of crowdsourced transcription jobs to
transcribers. Transcribers may use the transcription device(s) 106
to perform these crowdsourced transcription jobs. The transcription
device(s) 106 may provide the transcription data to the
transcription engine 114. In various implementations, the
transcription data is validated and/or used in an end-user
deployment phase, using the techniques described herein.
[0072] FIG. 5 illustrates a block diagram of an example of a data
flow 500 relating to transcription of voice data by the natural
language processing environment 100 during the training phase,
according to some implementations. FIG. 5 includes NLP trainer(s)
502, the NLP training device(s) 104, the network 108, the
transcription engine 114, the transcription device(s) 106, and
transcriber(s) 504.
[0073] At an operation 506, the NLP trainer(s) 502 provide voice
data to the NLP training device(s) 104. The NLP training device(s)
104 may capture the voice data using an audio input device thereon.
A first mobile application may facilitate capture of the voice
data. The NLP training device(s) 104 may incorporate the voice data
into network-compatible data transmissions, and at an operation
508, may send the network-compatible data transmissions to the
network 108. In some implementations, the NLP trainer(s) 502
receive compensation (inducements, incentives, payments, etc.) for
voice data provided through the first mobile application.
[0074] At an operation 510, the transcription engine 114 may
receive the network-compatible data transmissions and extract the
voice data. The transcription engine 114 may package the voice data
into crowdsourced transcription jobs, incorporate those jobs into
network-compatible data transmissions, and, at an operation 512, may
send the network-compatible data transmissions to the network 108.
[0075] At an operation 514, the transcription device(s) 106 may
receive the network-compatible data transmissions from the network
108. The transcription device(s) 106 may play the voice data to the
transcriber(s) 504 on a second mobile application. In an
implementation, the transcription device(s) 106 play an audio
recording of the voice data and ask the transcriber(s) 504 to
return text corresponding to the voice data. The transcription
device(s) 106 may incorporate the text into crowdsourced
transcription job data that is incorporated into network-compatible
data transmissions, which in turn are sent, at an operation 516, to
the network 108. In some implementations, the transcriber(s) 504
receive compensation (inducements, incentives, payments, etc.) for
transcribing voice data.
[0076] At an operation 518, the transcription engine 114 receives
the network-compatible transmissions. The transcription engine 114
may extract crowdsourced transcription job data from the
network-compatible transmissions and may store the voice data and
the text corresponding to the voice data as unvalidated
transcription data. In various implementations, the unvalidated
transcription data is validated by crowdsourced validation jobs as
discussed herein.
[0077] The transcription engine 114 may identify compensation owed
to the NLP trainer(s) 502 for voice data that has been successfully
transcribed. The transcription engine 114 may format the
compensation into network-compatible data transmissions, and at an
operation 520, may send the network-compatible data transmissions to
the network 108. At an operation 522, the NLP training device(s) 104
may receive the network-compatible data transmissions having the
compensation.
[0078] FIG. 6 illustrates a flowchart of a process 600 for
collecting voice data for a voice data collection campaign,
according to some implementations. The process 600 is discussed in
conjunction with the transcription engine 114 discussed herein. It
is noted that other structures may perform the process 600, and that
the process 600 may include more or fewer operations than explicitly
shown in FIG. 6.
[0079] At an operation 602, a file for a voice data collection
campaign may be gathered. The voice data collection campaign may
identify utterances to be collected from one or more NLP trainers.
In some implementations, the campaign engine 206 gathers campaign
data from the campaign data datastore 214. As discussed herein, the
file may comprise a JSON file. The campaign data may identify a set
of raw utterances that are to be collected from NLP trainers. As an
example, the voice data collection campaign may specify syllables,
words, phrases, etc. that are to be read and recorded by NLP
trainers into a mobile application on the NLP training device(s)
104. The campaign data may specify other information, such as
recording parameters, etc.
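For illustration, a campaign file along these lines might look as
follows; the disclosure specifies only that the file may be JSON and
may identify utterances and other information such as recording
parameters, so every field name and value below is hypothetical:

    {
      "campaign_name": "weather_commands_en_us",
      "utterances": [
        "what is the weather today",
        "will it rain tomorrow"
      ],
      "recording_parameters": {
        "sample_rate_hz": 16000,
        "max_duration_seconds": 10
      }
    }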
[0080] At an operation 604, a file related to one or more session
scripts may be gathered. Each of the session scripts may identify a
sequence of prompts for NLP trainers to view text of the utterances
and provide natural language content in response to the utterances,
as discussed further herein. In various implementations, the session
script management engine 208 gathers session scripts from the
session script data datastore 216.
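A session script might similarly be expressed as JSON that orders
the prompts for a session; again, the shape and every field name
here are hypothetical:

    {
      "script_name": "session_script_01",
      "prompts": [
        { "order": 1, "utterance_index": 0, "instruction": "Tap record and read the text aloud" },
        { "order": 2, "utterance_index": 1, "instruction": "Tap record and read the text aloud" }
      ]
    }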
[0081] At an operation 606, identifiers of the voice data
collection campaign and identifiers of the session scripts may be
incorporated into one or more identification tokens. In various
implementations, the identification token management engine 210
creates one or more identification tokens using file names of
campaign data and file names of session scripts. As discussed
herein, each identification token may comprise an alphanumeric
character string that incorporates the file names of campaign data
and file names of session scripts therein.
[0082] At an operation 608, NLP trainers for the voice data
collection campaign may be identified. In various implementations,
the mobile application management engine 204 receives notifications
related to people who have installed a mobile application on one of
the NLP training device(s) 104 and want to provide voice data. At
an operation 610, identification tokens may be distributed to the
NLP trainers. The mobile application management engine 204 may
gather from the identification token management engine 210
identification tokens to be provided to the identified NLP
trainers. The mobile application management engine 204 may
distribute these identification tokens in any known or convenient
manner, e.g., over the network 108.
[0083] FIG. 7 illustrates a flowchart of a process 700 for creating
and distributing identification tokens for a natural language
training process, according to some implementations. The process
700 is discussed in conjunction with the transcription engine 114
discussed herein. It is noted that other structures may perform the
process 700, and that the process 700 may include more or fewer
operations than explicitly shown in FIG. 7.
[0084] At an operation 702, an identification token that identifies
a voice data collection campaign and a session script may be
received from an NLP training device. In various implementations,
the mobile application management engine 204 may receive an
identification token from the NLP training device(s) 104 over the
network 108. The identification token may comprise a character
string that identifies a voice data collection campaign and a
session script. As an example, the identification token may
comprise a first portion that specifies a file name associated with
a voice data collection campaign and a second portion that
specifies a file name associated with a session script. The mobile
application management engine 204 may provide the identification
token to the identification token management engine 210.
[0085] At an operation 704, the identification token may be parsed
for an identifier of the voice data collection campaign and an
identifier of the session script. For example, the identification
token management engine 210 may identify a first portion (e.g., the
first five characters) of the identification token that corresponds
to a filename of a voice data collection campaign file. The
identification token management engine 210 may also identify a
second portion (e.g., the last four characters) of the
identification token that corresponds to a filename of a session
script.
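A sketch of this parsing step, under the same five/four character
split used in the example; the function and variable names are
illustrative only (Python):

    def parse_identification_token(token: str):
        """Split a nine-character identification token into the
        campaign file identifier (first five characters) and the
        session script identifier (last four characters)."""
        if len(token) != 9:
            raise ValueError("expected a nine-character token")
        return token[:5], token[5:]

    campaign_id, script_id = parse_identification_token("CMP01S001")
    # campaign_id == "CMP01" -> used to gather the campaign file
    # script_id   == "S001"  -> used to gather the session script file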
[0086] At an operation 706, a voice data collection campaign file
may be gathered based on the identifier of the voice data
collection campaign. More specifically, the campaign engine 206 may
gather from the campaign data datastore 214 a voice data collection
campaign file based on the identifier of the voice data collection
campaign. At an operation 708, a session script file may be
gathered based on the identifier of the session script. In various
implementations, the session script management engine 208 gathers
from the session script data datastore 216 a session script file
based on the identifier of the session script.
[0087] At an operation 710, the voice data collection campaign file
is provided to the NLP training device. At an operation 712, the
session script file is provided to the NLP training device. The
mobile application management engine 204 may format the voice data
collection campaign file and/or the session script file in a format
that can be provided to the mobile application on the NLP training
device(s) 104. The network interface engine 202 may format the
voice data collection campaign file and/or the session script file
into network-compatible transmissions.
[0088] At an operation 714, text corresponding to the utterances in
the voice data collection campaign may be received. The network
interface engine 202 and/or the mobile application management
engine 204 may receive text corresponding to the utterances in the
voice data collection campaign. At an operation 716, the text and
the utterances may be stored as pairs for a crowdsourced validation
job.
[0089] FIG. 8 illustrates a screenshot 800 of a screen of a mobile
application of one of the NLP training device(s) 104, according to
some implementations. The screen in FIG. 8 may include a recording
banner 802, a display area 804, an erase button 806, a record
button 808, and a play button 810. In various implementations, the
recording banner 802 and the display area 804 may display one or
more predetermined prompts to an NLP trainer during a training
phase of the natural language processing environment 100. The erase
button 806 may allow the NLP trainer to erase voice data that the
NLP trainer has previously recorded. The record button 808 may
allow the NLP trainer to record voice data. The play button 810 may
allow the NLP trainer to play back voice data that the NLP trainer
has recorded.
[0090] In some implementations, the screen in FIG. 8 is provided to
an NLP trainer as part of a training phase of the natural language
processing environment 100. More specifically, when an NLP trainer
logs into the mobile application, the NLP trainer may receive a
unique identification token that is mapped to a specific collection
of prompts and audits to be used. Upon entering the identification
token, the NLP trainer may be guided through an arbitrary number of
prompts. The NLP training device(s) 104 may provide the voice data
to the transcription engine 114 using the techniques described
herein. In some implementations, the NLP trainer may be provided
with one or more audits (e.g., gold standard questions, captions
that are not machine-readable, audio that is not understandable to
machines, etc.). Upon completing a session, the NLP trainer may be
provided with a completion code that the NLP trainer can use for a
variety of purposes, such as obtaining compensation for the jobs
the NLP trainer has performed.
[0091] FIG. 9 shows an example of a computer system 900, according
to some implementations. In the example of FIG. 9, the computer
system 900 can be a conventional computer system that can be used
as a client computer system, such as a wireless client or a
workstation, or a server computer system. The computer system 900
includes a computer 902, I/O devices 904, and a display device 906.
The computer 902 includes a processor 908, a communications
interface 910, memory 912, display controller 914, non-volatile
storage 916, and I/O controller 918. The computer 902 can be
coupled to or include the I/O devices 904 and display device
906.
[0092] The computer 902 interfaces to external systems through the
communications interface 910, which can include a modem or network
interface. It will be appreciated that the communications interface
910 can be considered to be part of the computer system 900 or a
part of the computer 902. The communications interface 910 can be
an analog modem, ISDN modem, cable modem, token ring interface,
satellite transmission interface (e.g. "direct PC"), or other
interfaces for coupling a computer system to other computer
systems.
[0093] The processor 908 can be, for example, a conventional
microprocessor such as an Intel Pentium microprocessor or a Motorola
PowerPC microprocessor. The memory 912 is coupled to the processor
908 by a bus 920. The memory 912 can be Dynamic Random Access
Memory (DRAM) and can also include Static RAM (SRAM). The bus 920
couples the processor 908 to the memory 912, also to the
non-volatile storage 916, to the display controller 914, and to the
I/O controller 918.
[0094] The I/O devices 904 can include a keyboard, disk drives,
printers, a scanner, and other input and output devices, including
a mouse or other pointing device. The display controller 914 can
control in the conventional manner a display on the display device
906, which can be, for example, a cathode ray tube (CRT) or liquid
crystal display (LCD). The display controller 914 and the I/O
controller 918 can be implemented with conventional, well-known
technology.
[0095] The non-volatile storage 916 is often a magnetic hard disk,
an optical disk, or another form of storage for large amounts of
data. Some of this data is often written, by a direct memory access
process, into memory 912 during execution of software in the
computer 902. One of skill in the art will immediately recognize
that the terms "machine-readable medium" and "computer-readable
medium" include any type of storage device that is accessible by
the processor 908 and also encompass a carrier wave that encodes
a data signal.
[0096] The computer system 900 is one example of many possible
computer systems which have different architectures. For example,
personal computers based on an Intel microprocessor often have
multiple buses, one of which can be an I/O bus for the peripherals
and one that directly connects the processor 908 and the memory 912
(often referred to as a memory bus). The buses are connected
together through bridge components that perform any necessary
translation due to differing bus protocols.
[0097] Network computers are another type of computer system that
can be used in conjunction with the teachings provided herein.
Network computers do not usually include a hard disk or other mass
storage, and the executable programs are loaded from a network
connection into the memory 912 for execution by the processor 908.
A Web TV system, which is known in the art, is also considered to
be a computer system, but it can lack some of the features shown in
FIG. 9, such as certain input or output devices. A typical computer
system will usually include at least a processor, memory, and a bus
coupling the memory to the processor.
[0098] Some portions of the detailed description are presented in
terms of algorithms and symbolic representations of operations on
data bits within a computer memory. These algorithmic descriptions
and representations are the means used by those skilled in the data
processing arts to most effectively convey the substance of their
work to others skilled in the art. An algorithm is here, and
generally, conceived to be a self-consistent sequence of operations
leading to a desired result. The operations are those requiring
physical manipulations of physical quantities. Usually, though not
necessarily, these quantities take the form of electrical or
magnetic signals capable of being stored, transferred, combined,
compared, and otherwise manipulated. It has proven convenient at
times, principally for reasons of common usage, to refer to these
signals as bits, values, elements, symbols, characters, terms,
numbers, or the like.
[0099] It should be borne in mind, however, that all of these and
similar terms are to be associated with the appropriate physical
quantities and are merely convenient labels applied to these
quantities. Unless specifically stated otherwise as apparent from
the following discussion, it is appreciated that throughout the
description, discussions utilizing terms such as "processing" or
"computing" or "calculating" or "determining" or "displaying" or
the like, refer to the action and processes of a computer system,
or similar electronic computing device, that manipulates and
transforms data represented as physical (electronic) quantities
within the computer system's registers and memories into other data
similarly represented as physical quantities within the computer
system memories or registers or other such information storage,
transmission or display devices.
[0100] Techniques described in this paper relate to apparatus for
performing the operations. The apparatus can be specially
constructed for the required purposes, or it can comprise a general
purpose computer selectively activated or reconfigured by a
computer program stored in the computer. Such a computer program
can be stored in a computer readable storage medium, such as, but
is not limited to, read-only memories (ROMs), random access
memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, any
type of disk including floppy disks, optical disks, CD-ROMs, and
magnetic-optical disks, or any type of media suitable for storing
electronic instructions, and each coupled to a computer system
bus.
[0101] For purposes of explanation, numerous specific details are
set forth in order to provide a thorough understanding of the
description. It will be apparent, however, to one skilled in the
art that implementations of the disclosure can be practiced without
these specific details. In some instances, modules, structures,
processes, features, and devices are shown in block diagram form in
order to avoid obscuring the description. In other instances,
functional block diagrams and flow diagrams are shown to represent
data and logic flows. The components of block diagrams and flow
diagrams (e.g., modules, blocks, structures, devices, features,
etc.) may be variously combined, separated, removed, reordered, and
replaced in a manner other than as expressly described and depicted
herein.
[0102] Reference in this specification to "one implementation", "an
implementation", "some implementations", "various implementations",
"certain implementations", "other implementations", "one series of
implementations", or the like means that a particular feature,
design, structure, or characteristic described in connection with
the implementation is included in at least one implementation of
the disclosure. The appearances of, for example, the phrase "in one
implementation" or "in an implementation" in various places in the
specification are not necessarily all referring to the same
implementation, nor are separate or alternative implementations
mutually exclusive of other implementations. Moreover, whether or
not there is express reference to an "implementation" or the like,
various features are described, which may be variously combined and
included in some implementations, but also variously omitted in
other implementations. Similarly, various features are described
that may be preferences or requirements for some implementations,
but not other implementations.
[0103] The language used herein has been principally selected for
readability and instructional purposes, and it may not have been
selected to delineate or circumscribe the inventive subject matter.
It is therefore intended that the scope be limited not by this
detailed description, but rather by any claims that issue on an
application based hereon. Accordingly, the disclosure of the
implementations is intended to be illustrative, but not limiting,
of the scope, which is set forth in the claims recited herein.
* * * * *