U.S. patent application number 13/836605 was filed with the patent office on 2013-03-15 and published on 2013-10-24 for a system for annotating media content for automatic content understanding.
The applicants listed for this patent are Eric David Petajan, Douglas W. Vunic, and David Eugene Weite. The invention is credited to Eric David Petajan, Douglas W. Vunic, and David Eugene Weite.
Publication Number | 20130283143
Application Number | 13/836605
Document ID | /
Family ID | 49381315
Publication Date | 2013-10-24
United States Patent Application 20130283143
Kind Code: A1
Petajan; Eric David; et al.
October 24, 2013
System for Annotating Media Content for Automatic Content
Understanding
Abstract
A system for annotating frames in a media stream includes a pattern
recognition system (PRS) to generate PRS output metadata for a
frame; an archive for storing ground truth metadata (GTM); a device
to merge the GTM and PRS output metadata and thereby generate
proposed annotation data (PAD); and a user interface for use by a
human annotator (HA). The user interface includes an editor and an
input device used by the HA to approve GTM for the frame. An
optimization system receives the approved GTM and metadata output
by the PRS, and adjusts input parameters for the PRS to minimize a
distance metric corresponding to a difference between the GTM and
PRS output metadata.
Inventors: Petajan; Eric David (Watchung, NJ); Weite; David Eugene (Woodcliff Lake, NJ); Vunic; Douglas W. (Darien, CT)

Applicant:
Name | City | State | Country | Type
Petajan; Eric David | Watchung | NJ | US |
Weite; David Eugene | Woodcliff Lake | NJ | US |
Vunic; Douglas W. | Darien | CT | US |
Family ID: 49381315
Appl. No.: 13/836605
Filed: March 15, 2013

Related U.S. Patent Documents

Application Number | Filing Date | Patent Number
61637344 | Apr 24, 2012 |
Current U.S. Class: 715/230
Current CPC Class: G06F 16/48 20190101; G11B 27/28 20130101; H04N 21/854 20130101; H04N 21/23424 20130101; H04N 21/84 20130101; G11B 27/036 20130101; G06F 40/169 20200101; H04N 21/23418 20130101; G11B 27/19 20130101
Class at Publication: 715/230
International Class: G06F 17/24 20060101 G06F017/24
Claims
1. A system to annotate media content, comprising: a pattern
recognition system (PRS) having an initial set of input parameters
that generates PRS output metadata associated with a frame of a
media stream; an archive for storing ground truth metadata (GTM)
associated with the same frame of the media stream; a device to
merge the GTM and the PRS output metadata and thereby generate
proposed annotation data (PAD); a user interface for use by a
human annotator (HA) including an editor and an input device to
approve or edit the PAD for the frame; and an optimization system
to adjust input parameters for the PRS to minimize a distance
metric corresponding to a difference between the GTM and PRS output
metadata.
2. The system of claim 1 wherein the GTM is obtained from one or
more of third party metadata, an archived media stream, and the HA.
3. The system of claim 2 wherein a time delay between third party
metadata and the media stream is corrected by alignment.
4. The system of claim 2 including a communication network to
enable a plurality of HAs to interface with the same media
stream.
5. The system of claim 2 wherein when the PAD is approved it is
converted to GTM.
6. The system of claim 5 wherein when the PAD is approved, it is
graphically overlaid on the media stream.
7. The system of claim 1 wherein the optimization system adjusts
the PRS initial set of input parameters to minimize the difference
between the GTM and PRS output metadata thereby increasing
accuracy.
8. The system of claim 1 wherein the PRS includes a set of state
variables stored as a temporal group adjustable as a group in
response to GTM.
9. A method comprising: receiving data from a media stream, the
data organized into frames; processing the data using a pattern
recognition system (PRS); storing a state of the PRS; generating
metadata associated with a frame using the PRS; receiving input
characterized as ground truth metadata (GTM), into an optimization
system; adjusting input parameters for the PRS to minimize a
distance metric corresponding to a difference between the GTM and
PRS output metadata.
10. The method of claim 9 wherein said input is obtained from one
or more of archived media streams, third party metadata and one or
more human annotators.
11. The method of claim 10 wherein subsequent to receiving said
input, said GTM and said metadata associated with said PRS are
temporally aligned.
12. The method of claim 10 wherein said GTM and said metadata
associated with said PRS are continuously stored in memory and
periodically stored to disk, thereby enabling fast recovery from
system failure.
13. A method comprising receiving from a human annotator (HA), via
a human annotator user interface (HAUI), information regarding a
time point selected by the HA on a timeline of a media stream;
merging existing ground truth metadata (GTM) relating to a media
frame corresponding to the selected time point with pattern
recognition system (PRS) output metadata relating to said media
frame, thereby generating proposed annotation data (PAD) for the
media frame; displaying the media frame and the PAD to the HA;
receiving input from the HA including correction and/or approval of
the PAD, where approved PAD is characterized as new GTM related to
the selected time point; storing the new GTM; comparing the PRS
output metadata and the new GTM related to the selected time point;
and adjusting PRS input parameters so that a distance metric
corresponding to a difference between the new GTM and PRS output
metadata related to the selected time point is minimized.
14. The method of claim 13 wherein said GTM is obtained from one or
more of archived media streams, third party metadata, said human
annotator, and other human annotators.
15. The method of claim 14 wherein when said human annotator
approves said PAD, said PAD is graphically overlaid on said media
stream.
16. A method comprising: generating output metadata associated with
a frame of a media stream, output by a pattern recognition system
(PRS); storing in an archive input from a human annotator (HA)
related to the frame, characterized as ground truth metadata (GTM);
merging the GTM and the PRS output metadata to thereby generate
proposed annotation data (PAD); displaying the PAD to the HA via
a user interface; receiving via the user interface an input from
the HA indicating approval of the GTM for the frame; and adjusting
input parameters for the PRS using an optimization system, to
minimize a distance metric corresponding to a difference between
the GTM and the PRS output metadata.
17. The method of claim 16 wherein said GTM is obtained from one or
more of archived media streams, third party metadata, said human
annotator, and other human annotators.
18. The method of claim 17 wherein when said human annotator
approves said PAD, said PAD is graphically overlaid on said media
stream.
Description
CROSS REFERENCE TO RELATED PATENT APPLICATION
[0001] This patent application claims the benefit of the filing
date of U.S. Provisional Patent Application Ser. No. 61/637,344,
titled "System for Annotating Media Content for Improved Automatic
Content Understanding Performance," by Petajan et al., filed on
Apr. 24, 2012. The disclosure of U.S. 61/637,344 is incorporated by
reference herein in its entirety.
FIELD OF THE DISCLOSURE
[0002] This disclosure relates to media presentations (e.g. live
sports events), and more particularly to a system for improving
performance by generating annotations for the media stream.
BACKGROUND OF THE DISCLOSURE
[0003] A media presentation, such as a broadcast of an event, may
be understood as a stream of audio/video frames (live media
stream). It is desirable to add information to the media stream to
enhance the viewer's experience; this is generally referred to as
annotating the media stream. The annotation of a media stream is a
tedious and time-consuming task for a human. Visual inspection of
text, players, balls, and field/court position is mentally taxing
and error prone. Keyboard and mouse entry are needed to enter
annotation data but are also error prone and mentally taxing.
Accordingly, systems have been developed to at least partially
automate the annotation process.
[0004] Pattern Recognition Systems (PRS), e.g. computer vision or
Automatic Speech Recognition (ASR), process media streams in order
to generate meaningful metadata. Recognition systems operating on
natural media streams always perform with less than absolute
accuracy due to the presence of noise. Computer Vision (CV) is
notoriously error prone and ASR is only useable under constrained
conditions. The measurement of system accuracy requires knowledge
of the correct PRS result, referred to here as Ground Truth
Metadata (GTM). The development of a PRS requires the generation of
GTM that must be validated by Human Annotators (HA). GTM can
consist of positions in space or time, labeled features, events,
text, region boundaries, or any data with a unique label that
allows referencing and comparison.
[0005] A compilation of acronyms used herein is appended to this
Specification.
[0006] There remains a need for a system that can reduce the human
time and effort required to create the GTM.
SUMMARY OF THE DISCLOSURE
[0007] We refer to a system for labeling features in a given frame
of video (or audio) or events at a given point in time as a Media
Stream Annotator (MSA). If accurate enough, a given PRS
automatically generates metadata from the media streams that can be
used to reduce the human time and effort required to create the
GTM. According to an aspect of the disclosure, an MSA system and
process, with a Human-Computer Interface (HCI), provides more
efficient GTM generation and PRS input parameter adjustment.
[0008] GTM is used to verify PRS accuracy and adjust PRS input
parameters or to guide algorithm development for optimal
recognition accuracy. The GTM can be generated at low levels of
detail in space and time, or at higher levels as events or states
with start times and durations that may be imprecise compared to
low-level video frame timing.
[0009] Adjustments to PRS input parameters that are designed to be
static during a program should be applied to all sections of a
program with associated GTM in order to maximize the average
recognition accuracy and not just the accuracy of the given section
or video frame. If the MSA processes live media, the effect of any
automated PRS input parameter adjustments must be measured on all
sections with (past and present) GTM before committing the changes
for generation of final production output.
[0010] A system embodying the disclosure may be applied to both
live and archived media programs and has the following features:
[0011] Random access into a given frame or section of the archived media stream and associated metadata
[0012] Real-time display or graphic overlay of PRS-generated metadata on or near the video frame display
[0013] Single-click approval of conversion of Proposed Annotation Data (PAD) into GTM
[0014] PRS recomputes all metadata when GTM changes
[0015] Merge metadata from 3rd parties with human annotations
[0016] Graphic overlay of compressed and decoded metadata on or near decoded low bit-rate video to enable real-time operation on mobile devices and consumer-grade internet connections
[0017] The foregoing has outlined, rather broadly, the preferred
features of the present disclosure so that those skilled in the art
may better understand the detailed description of the disclosure
that follows. Additional features of the disclosure will be
described hereinafter that form the subject of the claims of the
disclosure. Those skilled in the art should appreciate that they
can readily use the disclosed conception and specific embodiment as
a basis for designing or modifying other structures for carrying
out the same purposes of the present disclosure and that such other
structures do not depart from the spirit and scope of the
disclosure in its broadest form.
BRIEF DESCRIPTION OF THE DRAWINGS
[0018] FIG. 1 is a schematic illustration of the Media Stream
Annotator (MSA), according to an embodiment of the disclosure.
[0019] FIG. 2 is a schematic illustration of the Media Annotator
flow chart during Third Party Metadata (TPM) ingest, according to
an embodiment of the disclosure.
[0020] FIG. 3 is a schematic illustration of the Media Annotator
flow chart during Human Annotation, according to an embodiment of
the disclosure.
[0021] FIG. 4 is a schematic illustration of a football miniboard,
according to an embodiment of the disclosure.
DETAILED DESCRIPTION
[0022] The accuracy of any PRS depends on the application of
constraints that reduce the number or range of possible results.
These constraints can take the form of a priori information,
physical and logical constraints, or partial recognition results
with high reliability. A priori information for sports includes the
type of sport, stadium architecture and location, date and time,
teams, players, broadcaster, language, and the media ingest process
(e.g., original A/V resolution and transcoding). Physical
constraints include camera inertia, camera mount type, lighting,
and the physics of players, balls, equipment, courts, fields, and
boundaries. Logical constraints include the rules of the game,
sports production methods, uniform colors and patterns, and
scoreboard operation. Some information can be reliably extracted
from the media stream with minimal a priori information and can be
used to "bootstrap" subsequent recognition processes. For example,
the presence of the graphical miniboard overlaid on the game video
(shown in FIG. 4) can be detected with only knowledge of the sport
and the broadcaster (e.g., ESPN, FOX Sports, etc.).
[0023] If a live media sporting event is processed in real time,
only the current and past media streams are available for pattern
recognition and metadata generation. A recorded sporting event can
be processed with access to any frame in the entire program. The
PRS processing a live event can become more accurate as time
progresses since more information is available over time, while any
frame from a recorded event can be analyzed repeatedly from the
past or the future until maximum accuracy is achieved.
[0024] The annotation of a media stream is a tedious and
time-consuming task for a human. Visual inspection of text,
players, balls, and field/court position is mentally taxing and
error prone. Keyboard and mouse entry are needed to enter
annotation data but are also error prone and mentally taxing. Human
annotation productivity (speed and accuracy) is greatly improved by
properly displaying available automatically generated Proposed
Annotation Data (PAD) and thereby minimizing the mouse and keyboard
input needed to edit and approve the PAD. If the PAD is correct,
the Human Annotator (HA) can simultaneously approve the current
frame and select the next frame for annotation with only one press
of a key or mouse button. The PAD is the current best automatically
generated metadata that can be delivered to the user without
significant delay. Waiting for the system to maximize the accuracy
of the PAD may decrease editing by the HA but will also delay the
approval of the given frame.
[0025] FIG. 1 shows a Media Stream Annotator (MSA) system according
to an embodiment of the disclosure. The MSA ingests both live and
archived media streams (LMS 114 and AMS 115), and optional Third
Party Metadata (TPM) 101 and input from the HA 118. The PAD is
derived from a combination of PRS 108 result metadata and TPM 101.
Metadata output by PRS 108 is archived in Metadata Archive 109. If
the TPM 101 is available during live events the system can convert
the TPM 101 to GTM via the Metadata Mapper 102 and then use the
Performance Optimization System (POS) 105 to adjust PRS Input
Parameters to improve metadata accuracy for both past (AMS 115) and
presently ingested media (LMS 114). The PAD Encoder 110 merges GTM
with metadata for each media frame and encodes the PAD into a
compressed form suitable for transmission to the Human Annotator
User Interface (HAUI) 104 via a suitable network, e.g. Internet
103. This information is subsequently decoded and displayed to the
HA, in a form the HA can edit, by a Media Stream and PAD Decoder,
Display and Editor (MSPDE) 111. The HAUI also includes a Media
Stream Navigator (MSN) 117 which the HA uses to select time points
in the media stream whose corresponding frames are to be annotated.
A low bit-rate version of the media stream is transcoded from the
AMS by a Media Transcoder 116 and then transmitted to the HAUI.
[0026] As GTM is generated by the HA 118 and stored in the GTM
Archive 106, the POS 105 compares the PRS 108 output metadata to
the GTM and detects significant differences between them. During
the design and development of the PRS 108, input parameters are set
with initial estimated values that produce accurate results on an
example set of media streams and associated GTM. These parameter
values are adjusted by the POS 105 until the difference between
all GTM and the PRS 108 generated metadata is minimized.
[0027] During development (as opposed to live production) the POS
105 does not need to operate in real time and exhaustive
optimization algorithms may be used. During a live program the POS
105 should operate as fast as possible to improve PRS 108
performance each time new GTM is generated by the HA 118; faster
optimization algorithms are therefore used during a live program.
The POS 105 is also invoked when new TPM 101 is converted to
GTM.
[0028] The choice of distance metric between PRS 108 output
metadata and GTM depends on the type of data and the allowable
variation. For example, in a presentation of a football game the
score information extracted from the miniboard must be absolutely
accurate while the spatial position of a player on the field can
vary. If one PRS input parameter affects multiple types of results,
then the distance values for each type can be weighted in a linear
combination of distances in order to calculate a single distance
for a given frame or time segment of the game.
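A minimal sketch of such a weighted linear combination, assuming a hypothetical frame containing miniboard score text (exact match required) and a player position (spatial variation allowed); the field names and weight values are illustrative, not taken from the disclosure.

```python
def frame_distance(prs, gtm, weights):
    """Weighted linear combination of per-type distances for one frame.
    Score text must match exactly (0/1 distance); player position may
    vary, so Euclidean distance is used. Weights are illustrative."""
    d_score = 0.0 if prs["score_text"] == gtm["score_text"] else 1.0
    dx = prs["player_xy"][0] - gtm["player_xy"][0]
    dy = prs["player_xy"][1] - gtm["player_xy"][1]
    d_pos = (dx * dx + dy * dy) ** 0.5
    return weights["score"] * d_score + weights["position"] * d_pos
```

Weighting the score term heavily reflects the requirement that miniboard information be absolutely accurate while positions can tolerate some error.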
[0029] A variety of TPM 101 (e.g. from stats.com) is available
after a delay period from the live action that can be used as GTM
either during development or after the delay period during a live
program. Since the TPM is delayed by a non-specific period of time,
it must be aligned in time with the program. Alignment can either
be done manually, or the GTM can be aligned with TPM 101, and/or
the PRS 108 result metadata can be aligned using fuzzy matching
techniques.
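One possible fuzzy-alignment sketch: score each candidate time offset by how many delayed TPM events land within a tolerance of a PRS-detected event, and keep the best-scoring offset. The function name, candidate set, and tolerance are assumptions for illustration.

```python
def best_offset(tpm_times, prs_times, candidates, tol=0.5):
    """Pick the time offset that aligns the most TPM event times with
    PRS-detected event times, within a tolerance (a crude fuzzy match)."""
    def matches(offset):
        return sum(
            any(abs((t + offset) - p) <= tol for p in prs_times)
            for t in tpm_times)
    return max(candidates, key=matches)
```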
[0030] The PRS 108 maintains a set of state variables that change
over time as models of the environment, players, overlay graphics,
cameras, and weather are updated. The arrival of TPM 101 and, in
turn, GTM can drive changes to both current and past state
variables. If the history of the state variables is not stored
persistently, the POS 105 would have to start the media stream from
the beginning in order to use the PRS 108 to regenerate metadata
using new PRS 108 Input Parameters. The amount of PRS 108 state
variable information can be large, and is compressed using State
Codec 112 into one or more sequences of Group Of States (GOS) such
that a temporal section of PRS States is encoded and decoded as a
group for greater compression efficiency and retrieval speed. The
GOS is stored in a GOS Archive 113. The number of media frames in a
GOS can be as few as one.
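A rough sketch of a Group Of States codec along these lines, assuming PRS state can be serialized as JSON and using zlib as the statistical compressor; the GOS size and all names are illustrative assumptions.

```python
import json, zlib

GOS_SIZE = 30  # frames of PRS state per Group Of States (illustrative)

def encode_gos(states):
    """Compress a temporal group of PRS state dicts as one unit."""
    return zlib.compress(json.dumps(states).encode("utf-8"))

def decode_gos(blob):
    """Decode the whole group; random access is per-group, not per-frame."""
    return json.loads(zlib.decompress(blob).decode("utf-8"))

def archive_states(state_stream):
    """Chunk a state sequence into GOS blobs for the GOS Archive."""
    return [encode_gos(state_stream[i:i + GOS_SIZE])
            for i in range(0, len(state_stream), GOS_SIZE)]
```

Encoding a temporal group as one unit lets the compressor exploit frame-to-frame redundancy, which is the efficiency the text attributes to the GOS.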
[0031] If the PRS 108 result metadata is stored persistently, the
HA can navigate to a past point in time and immediately retrieve
the associated metadata or GTM via the PAD Encoder 110, which
formats and compresses the PAD for delivery to the HA 118 over the
network.
[0032] FIG. 2 shows a flow chart for MSA operation, according to an
embodiment of the disclosure in which both a live media stream
(LMS) and TPM are ingested. All LMS is archived in the AMS (step
201). At system startup, the initial or default values of the GOS
are input to the PRS which then starts processing the LMS in real
time (step 202). If the PRS does not have sufficient resources to
process every LMS frame, the PRS will skip frames to minimize the
latency between a given LMS frame and its associated result
Metadata (step 203). Periodically, the internal state variable
values of the PRS are encoded into GOS and archived (step 204).
Finally, the PRS generates metadata which is archived (step 205);
the process returns to step 201 and the next or most recent media
frame is ingested. The processing loop 201-205 may iterate
indefinitely.
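The loop of steps 201-205 might be sketched as follows; the backlog-based skip rule, the snapshot period, and all function and parameter names are assumptions, since the disclosure does not specify them.

```python
def ingest_loop(frames, prs_process, state, gos_archive, meta_archive,
                ams, gos_period=30, max_backlog=2):
    """Sketch of steps 201-205: archive each LMS frame, skip frames when
    the PRS falls behind, periodically snapshot PRS state into the GOS
    archive, and archive the generated metadata."""
    for i, frame in enumerate(frames):
        ams.append(frame)                      # step 201: archive LMS
        if frame.get("backlog", 0) > max_backlog:
            continue                           # step 203: skip to cut latency
        state, metadata = prs_process(state, frame)   # step 202: run PRS
        if i % gos_period == 0:
            gos_archive.append(dict(state))    # step 204: encode/archive GOS
        meta_archive.append(metadata)          # step 205: archive metadata
    return state
```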
[0033] When TPM arrives via the Internet, it is merged with any GTM
that exists for that media frame via the Metadata Mapper (step
206). The POS is then notified of the new GTM and generates new
sets of PRS Input Parameters, while comparing all resulting
Metadata to any corresponding GTM for each set until an optimal set
of PRS Input Parameters is found that minimizes the global distance
between all GTM and the corresponding Metadata (step 207).
[0034] FIG. 3 shows a flow chart for MSA operation while the HA
approves new GTM. This process operates in parallel with the
process shown in the flowchart of FIG. 2. The HA must first select
a point on the media stream timeline for annotation (step 301). The
HA can find a point in time by dragging a graphical cursor on a
media player while viewing a low bit-rate version of the media
stream transcoded from the AMS (step 302). The Metadata and any
existing GTM associated with the selected time point are retrieved
from their respective archives 109, 106 and encoded into the PAD
(step 303); transmitted with the Media Stream to the HAUI over the
Internet (step 304); and presented to the HA via the HAUI after
decoding both PAD and low bit-rate Media Stream (step 305). The
HAUI displays the PAD on or near the displayed Media Frame (step
306). The HA compares the PAD with the Media Frame and either
clicks on an Approve button 107 or corrects the PAD using an editor
and approves the PAD (step 307). After approval of the PAD, the
HAUI transmits the corrected and/or approved PAD as new GTM for
storage in the GTM Archive (step 308). The POS is then notified of
the new GTM and generates new sets of PRS Input Parameters, while
comparing all resulting Metadata to any corresponding GTM for each
set (step 309) until an optimal set of PRS Input Parameters is
found that minimizes the global distance between all GTM and the
corresponding Metadata (step 310).
[0035] If the MSA is operating only on the AMS (and not on the
LMS), the POS can perform more exhaustive and time consuming
algorithms to minimize the distance between GTM and Metadata; the
consequence of incomplete or less accurate Metadata is more editing
time for the HA. If the MSA is operating on LMS during live
production, the POS is constrained to not update the PRS Input
Parameters for live production until the Metadata accuracy is
maximized.
[0036] The HA does not need any special skills other than a basic
knowledge of the media stream content (e.g. rules of the sporting
event) and facility with a basic computer interface. PRS
performance depends on the collection of large amounts of GTM to
ensure that optimization by the POS will result in optimal PRS
performance on new media streams. Accordingly, it is usually
advantageous to employ multiple HAs for a given media stream. The
pool of HAs is increased if the HAUI client can communicate with
the rest of the system over the consumer-grade internet or mobile
internet connections which have limited capacity. The main consumer
of internet capacity is the media stream that is delivered to the
HAUI for decoding and display. Fortunately, the bit-rate of the
media stream can be greatly lowered to allow carriage over consumer
or mobile internet connections by transcoding the video to a lower
resolution and quality. Much of the bit-rate needed for high
quality compression of sporting events is applied to complex
regions in the video, such as views containing the numerous
spectators at the event; however, the HA does not need high quality
video of the spectators for annotation. Instead, the HA needs a
minimal visual quality for the miniboard, player identification,
ball tracking, and field markings which is easily achieved with a
minimal compressed bit-rate.
[0037] The PAD is also transmitted to the HAUI, but this
information is easily compressed as text, graphical coordinates,
geometric objects, color properties or animation data. All PAD can
be losslessly compressed using statistical compression techniques
(e.g. zip), but animation data can be highly compressed using lossy
animation stream codecs such as can be found in the MPEG-4 SNHC
standard tools (e.g. Face and Body Animation and 3D Mesh
Coding).
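A minimal sketch of lossless PAD compression using zlib, as a stand-in for the "zip"-style statistical compression the text mentions; the lossy animation codecs (e.g. the MPEG-4 SNHC tools) are outside the scope of this sketch, and the function names are assumptions.

```python
import json, zlib

def encode_pad(pad):
    """Losslessly compress PAD (text, coordinates, colors) for delivery
    to the HAUI over a low bit-rate connection."""
    return zlib.compress(json.dumps(pad, separators=(",", ":")).encode())

def decode_pad(blob):
    """Exact reconstruction of the PAD on the HAUI side."""
    return json.loads(zlib.decompress(blob))
```

Because PAD content such as graphical coordinates is highly repetitive, the compressed form is typically much smaller than the raw serialization.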
[0038] The display of the transmitted and decoded PAD to the HA is
arranged for clearest viewing and comparison between the video and
the PAD. For example, as shown in FIG. 4, the miniboard content
from the PAD should be displayed below the video frame in its own
window pane 402 and vertically aligned with the miniboard in the
video 401. PAD content relating to natural (non-graphical) objects
in the video should be graphically overlaid on the video.
[0039] Editing of the PAD by the HA can be done either in the
miniboard text window directly for miniboard data or by dragging
spatial location data directly on the video into the correct
position (e.g. field lines or player IDs). The combined use of low
bit-rate, adequate quality video and compressed text, graphics and
animation data which is composited on the video results in a HAUI
that can be used with low bit-rate internet connections.
[0040] Referring back to FIG. 1, the Metadata Archive 109 and the
GTM Archive 106 are ideally designed and implemented to provide
fast in-memory access to metadata while writing archive contents to
disk as often as needed to allow fast recovery after system failure
(power outage, etc.). In addition to the inherent speed of memory
access (vs disk access), the metadata archives should ideally be
architected to provide fast search and data derivation operations.
Fast search is needed to find corresponding entries in the GTM 106
vs Metadata 109 archives, and to support the asynchronous writes to
the GTM Archive 106 from the Metadata Mapper 102. Preferred designs
of the data structures in the archives that support fast search
include the use of linked lists and hash tables. Linked lists
enable insert edit operations without the need to move blocks of
data to accommodate new data. Hash tables provide fast address
lookup of sparse datasets.
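An illustrative in-memory archive along these lines, using a Python dict as the hash table for fast lookup of sparse timestamps and a periodic flush to disk for crash recovery; the flush policy, file format, and class name are assumptions, not the disclosed design.

```python
import json, os

class MetadataArchive:
    """In-memory metadata archive keyed by timestamp, with periodic
    persistence so contents can be recovered after a system failure."""

    def __init__(self, path, flush_every=100):
        self.path, self.flush_every = path, flush_every
        self.entries = {}          # timestamp -> metadata (hash table)
        self._dirty = 0

    def put(self, timestamp, metadata):
        self.entries[timestamp] = metadata
        self._dirty += 1
        if self._dirty >= self.flush_every:
            self.flush()           # write to disk as often as needed

    def get(self, timestamp):
        return self.entries.get(timestamp)

    def flush(self):
        tmp = self.path + ".tmp"
        with open(tmp, "w") as f:  # write-then-replace for safety
            json.dump(list(self.entries.items()), f)
        os.replace(tmp, self.path)
        self._dirty = 0

    @classmethod
    def recover(cls, path):
        """Rebuild the in-memory table from the last on-disk snapshot."""
        archive = cls(path)
        with open(path) as f:
            archive.entries = {t: m for t, m in json.load(f)}
        return archive
```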
[0041] The ingest of TPM 101 requires that the TPM timestamps be
aligned with the GTM 106 and Metadata 109 Archive timestamps. This
alignment operation may involve multiple passes over all datasets
while calculating accumulated distance metrics to guide the
alignment. The ingest of multiple overlapping/redundant TPM
requires that a policy be established for dealing with conflicting
or inconsistent metadata. In case there is conflict between TPMs
101, the Metadata Mapper 102 should ideally compare the PRS 108
generated Metadata 109 to the conflicting TPMs 101 in case other
prior knowledge does not resolve the conflict. If the conflict
cannot be reliably resolved, then a confidence value should ideally
be established for the given metadata which is also stored in the
GTM 106. Alternatively, conflicting data can be omitted from the
GTM 106.
[0042] The GTM 106 and Metadata 109 Archives should ideally contain
processes for efficiently performing common operations on the
archives. For example, if the time base of the metadata needs
adjustment, an internal archive process could adjust each timestamp
in the whole archive without impacting other communication
channels, or tying up other processing resources.
[0043] An example of TPM is the game clock from a live sporting
event. TPM game clocks typically consist of an individual message
for each tick/second of the clock containing the clock value. The
delay between the live clock value at the sports venue and the
delivered clock value message can be seconds or tens of seconds
with variation. The PRS recognizes the clock from the live video
feed, and the start time of the game is published in advance.
The Metadata Mapper 102 should use all of this information to
accurately align the TPM clock ticks with the time base of the GTM
106 and Metadata 109 Archives. At the beginning of the game, there
might not be enough data to determine this alignment very
accurately, but as time moves forward, more metadata is accumulated
and past alignments can be updated to greater accuracy.
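The alignment of delayed TPM clock ticks with the archive time base might be estimated by voting over matching clock values, as in this sketch; the tick format and function name are assumptions, and, as the text notes, the estimate sharpens as more ticks accumulate.

```python
from collections import Counter

def estimate_clock_delay(tpm_ticks, prs_ticks):
    """Estimate the TPM delivery delay: for every pair of ticks showing
    the same game-clock value, record the arrival-time difference; the
    most common difference is taken as the delay. Each tick is an
    (arrival_time, clock_value) pair (illustrative format)."""
    prs_by_value = {}
    for arrival, value in prs_ticks:
        prs_by_value.setdefault(value, []).append(arrival)
    votes = Counter()
    for arrival, value in tpm_ticks:
        for prs_arrival in prs_by_value.get(value, []):
            votes[arrival - prs_arrival] += 1
    return votes.most_common(1)[0][0] if votes else None
```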
[0044] Another desirable feature of the GTM 106 and Metadata 109
archives is the ability to virtually repopulate the archives as an
emulation of replaying of the original ingest and processing of the
TPM. This emulation feature is useful for system tuning and
debugging.
[0045] While the disclosure has been described in terms of specific
embodiments, it is evident in view of the foregoing description
that numerous alternatives, modifications and variations will be
apparent to those skilled in the art. Accordingly, the disclosure
is intended to encompass all such alternatives, modifications and
variations which fall within the scope and spirit of the disclosure
and the following claims.
COMPILATION OF ACRONYMS
[0046] AMS Archived Media Stream
[0047] ASR Automatic Speech Recognition
[0048] CV Computer Vision
[0049] GOS Group Of States
[0050] GTM Ground Truth Metadata
[0051] HA Human Annotator
[0052] HAUI Human Annotator User Interface
[0053] HCI Human-Computer Interface
[0054] LMS Live Media Stream
[0055] MSA Media Stream Annotator
[0056] MSN Media Stream Navigator
[0057] MSPDE Media Stream and PAD Decoder, Display and Editor
[0058] PAD Proposed Annotation Data
[0059] POS Performance Optimization System
[0060] PRS Pattern Recognition System
[0061] TPM Third Party Metadata
* * * * *