U.S. patent application number 11/699032 was filed with the patent office on 2007-05-31 for recording medium and method and apparatus for reproducing and recording text subtitle streams.
Invention is credited to Byung Jin Kim, Kang Soo Seo, Jea Yong Yoo.
Application Number: 20070122119 / 11/699032
Family ID: 34889502
Filed Date: 2007-05-31
United States Patent Application: 20070122119
Kind Code: A1
Seo; Kang Soo; et al.
May 31, 2007
Recording medium and method and apparatus for reproducing and
recording text subtitle streams
Abstract
In one embodiment, a data structure recorded on the recording
medium includes a text subtitle stream including one or more
presentation segments. Each of the presentation segments specifies
its own presentation start and end times. An initial value of a
system time clock of the text subtitle stream is a presentation
start time of the first presentation segment in the text subtitle
stream.
Inventors: Seo; Kang Soo (Anyang-si, KR); Kim; Byung Jin (Seongnam-si, KR); Yoo; Jea Yong (Seoul, KR)
Correspondence Address: HARNESS, DICKEY & PIERCE, P.L.C., P.O. BOX 8910, RESTON, VA 20195, US
Family ID: 34889502
Appl. No.: 11/699032
Filed: January 29, 2007
Related U.S. Patent Documents
Application Number   Filing Date    Patent Number
11062792             Feb 23, 2005
11699032             Jan 29, 2007
Current U.S. Class: 386/241; 386/244; 386/337
Current CPC Class: G11B 27/329 20130101; G11B 2220/2541 20130101; G11B 27/10 20130101
Class at Publication: 386/095
International Class: H04N 7/00 20060101 H04N007/00

Foreign Application Data
Date          Code  Application Number
Feb 26, 2004  KR    10-2004-0013098
Mar 17, 2004  KR    10-2004-0018091
Claims
1. A recording medium comprising: a text subtitle stream including
one or more presentation segments, each of the presentation
segments specifying its own presentation start and end times,
wherein an initial value of a system time clock of the text
subtitle stream is a presentation start time of the first
presentation segment in the text subtitle stream.
2. The recording medium of claim 1, wherein the presentation start
and end times are defined on the system time clock.
3. The recording medium of claim 1, wherein each of the
presentation segments contains at least one region of dialog
text.
4. The recording medium of claim 1, further comprising: a clip
information file corresponding to the text subtitle stream, the
clip information file including a data field indicating a source
packet number of a source packet where a STC-sequence starts, the
STC-sequence representing a sequence of source packets where the
system time clock is continuous.
5. The recording medium of claim 4, wherein the text subtitle
stream includes only one STC-sequence.
6. The recording medium of claim 4, wherein the clip information
file specifies presentation start and end times of the text
subtitle stream, wherein the presentation start time of the text
subtitle stream points to the presentation start time of the first
presentation segment in the text subtitle stream and the
presentation end time of the text subtitle stream points to a
presentation end time of the last presentation segment in the text
subtitle stream.
7. A method of reproducing data recorded on a recording medium, the
method comprising: reading a text subtitle stream including one or
more presentation segments from the recording medium, each of the
presentation segments specifying its own presentation start and end
times; and setting an initial value of a system time clock of the
text subtitle stream to a presentation start time of the first
presentation segment in the text subtitle stream.
8. The method of claim 7, further comprising: presenting at least
one presentation segment in the text subtitle stream in accordance
with its presentation start and end times referring to the system
time clock.
9. The method of claim 7, wherein each of the presentation segments
contains at least one region of dialog text and the at least one
region of dialog text is presented in accordance with its
presentation start and end times referring to the system time
clock.
10. The method of claim 7, wherein the text subtitle stream
includes only one STC-sequence, the STC-sequence representing a
sequence of source packets where the system time clock is
continuous.
11. The method of claim 7, wherein the text subtitle stream is
presented between presentation start and end times of the text
subtitle stream which are specified by a clip information file
corresponding to the text subtitle stream, wherein the presentation
start time of the text subtitle stream points to the presentation
start time of the first presentation segment in the text subtitle
stream and the presentation end time of the text subtitle stream
points to a presentation end time of the last presentation segment
in the text subtitle stream.
12. An apparatus for reproducing data recorded on a recording
medium, the apparatus comprising: a controller controlling a text
subtitle stream including one or more presentation segments to be
read from the recording medium, each of the presentation segments
specifying its own presentation start and end times, and the
controller setting an initial value of a system time clock of the text subtitle stream to a
presentation start time of the first presentation segment in the
text subtitle stream.
13. The apparatus of claim 12, wherein the controller controls at
least one presentation segment in the text subtitle stream to be
presented in accordance with its presentation start and end times
referring to the system time clock.
14. The apparatus of claim 12, wherein the text subtitle stream
includes only one STC-sequence, the STC-sequence representing a
sequence of source packets where the system time clock is
continuous.
15. The apparatus of claim 12, wherein the controller refers to
only one system time clock in the text subtitle stream.
16. The apparatus of claim 12, wherein the controller controls the
text subtitle stream to be presented between presentation start and
end times of the text subtitle stream which are specified by a clip
information file corresponding to the text subtitle stream, wherein
the presentation start time of the text subtitle stream points to
the presentation start time of the first presentation segment in
the text subtitle stream and the presentation end time of the text
subtitle stream points to a presentation end time of the last
presentation segment in the text subtitle stream.
17. The apparatus of claim 12, further comprising: a reading unit
reading data from the recording medium, wherein the controller
controls the reading unit to read the text subtitle stream from the
recording medium.
18. A method of creating a text subtitle stream, the method
comprising: creating the text subtitle stream including one or more
presentation segments, each of the
presentation segments specifying its own presentation start and end
times which are defined on a system time clock of the text subtitle
stream, wherein an initial value of the system time clock is a
presentation start time of the first presentation segment in the
text subtitle stream.
19. The method of claim 18, wherein the text subtitle stream is
created to include only one STC-sequence, the STC-sequence
representing a sequence of source packets where the system time
clock is continuous.
20. The method of claim 18, further comprising: creating a clip
information file corresponding to the text subtitle stream, the
clip information file specifying presentation start and end times
of the text subtitle stream, wherein the presentation start time of
the text subtitle stream points to the presentation start time of
the first presentation segment in the text subtitle stream and the
presentation end time of the text subtitle stream points to a
presentation end time of the last presentation segment in the text
subtitle stream.
21. An apparatus for creating a text subtitle stream, the apparatus
comprising: a controller creating the text subtitle stream
including one or more presentation segments, each of the
presentation segments specifying its own
presentation start and end times which are defined on a system time
clock of the text subtitle stream, wherein an initial value of the
system time clock is a presentation start time of the first
presentation segment in the text subtitle stream.
22. The apparatus of claim 21, wherein the controller creates the
text subtitle stream to include only one STC-sequence, the
STC-sequence representing a sequence of source packets where the
system time clock is continuous.
23. The apparatus of claim 21, wherein the controller further
creates a clip information file corresponding to the text subtitle
stream, the clip information file specifying presentation start and
end times of the text subtitle stream, wherein the presentation
start time of the text subtitle stream points to the presentation
start time of the first presentation segment in the text subtitle
stream and the presentation end time of the text subtitle stream
points to a presentation end time of the last presentation segment
in the text subtitle stream.
24. A method of recording data on a recording medium, the method
comprising: creating a text subtitle stream including one or more
presentation segments, each of the
presentation segments specifying its own presentation start and end
times which are defined on a system time clock of the text subtitle
stream; and recording the text subtitle stream on the recording
medium, wherein an initial value of the system time clock is a
presentation start time of the first presentation segment in the
text subtitle stream.
25. The method of claim 24, wherein the text subtitle stream is
created to include only one STC-sequence, the STC-sequence
representing a sequence of source packets where the system time
clock is continuous.
26. The method of claim 24, further comprising: creating a clip
information file corresponding to the text subtitle stream, the
clip information file specifying presentation start and end times
of the text subtitle stream, and recording the clip information
file on the recording medium, wherein the presentation start time
of the text subtitle stream points to the presentation start time
of the first presentation segment in the text subtitle stream and
the presentation end time of the text subtitle stream points to a
presentation end time of the last presentation segment in the text
subtitle stream.
27. An apparatus for recording data on a recording medium, the
apparatus comprising: a controller creating a text subtitle stream
including one or more presentation segments, each of the
presentation segments specifying its own
presentation start and end times which are defined on a system time
clock of the text subtitle stream, and the controller controlling
the text subtitle stream to be recorded on the recording medium,
wherein an initial value of the system time clock is a presentation
start time of the first presentation segment in the text subtitle
stream.
28. The apparatus of claim 27, wherein the controller creates the
text subtitle stream to include only one STC-sequence, the
STC-sequence representing a sequence of source packets where the
system time clock is continuous.
29. The apparatus of claim 27, wherein the controller further
creates a clip information file corresponding to the text subtitle
stream, the clip information file specifying presentation start and
end times of the text subtitle stream, and controls the clip
information file to be recorded on the recording medium, wherein
the presentation start time of the text subtitle stream points to
the presentation start time of the first presentation segment in
the text subtitle stream and the presentation end time of the text
subtitle stream points to a presentation end time of the last
presentation segment in the text subtitle stream.
30. The apparatus of claim 27, further comprising: a recording unit
recording data on the recording medium, wherein the controller
controls the recording unit to record the text subtitle stream on
the recording medium.
Description
DOMESTIC PRIORITY INFORMATION
[0001] This is a continuation application of application Ser. No.
11/062,792 filed Feb. 23, 2005, the entire contents of which are
hereby incorporated by reference.
FOREIGN PRIORITY INFORMATION
[0002] This application claims the benefit of the Korean Patent
Application No. 10-2004-0013098, filed on Feb. 26, 2004, and No.
10-2004-0018091, filed on Mar. 17, 2004, which are hereby
incorporated by reference as if fully set forth herein.
BACKGROUND OF THE INVENTION
[0003] 1. Field of the Invention
[0004] The present invention relates to a recording medium, and
more particularly, to a recording medium and method and apparatus
for reproducing and recording text subtitle streams. Although the
present invention is suitable for a wide scope of applications, it
is particularly suitable for recording the text subtitle stream
file within the recording medium and effectively reproducing the
recorded text subtitle stream.
[0005] 2. Discussion of the Related Art
[0006] Optical discs are widely used as an optical recording medium
for recording mass data. Presently, among a wide range of optical
discs, a new high-density optical recording medium (hereinafter
referred to as "HD-DVD"), such as a Blu-ray Disc (hereafter
referred to as "BD"), is under development for writing and storing
high definition video and audio data. Currently, global standard
technical specifications of the Blu-ray Disc (BD), which is known
to be the next generation technology, are being established as a
next generation optical recording solution able to store data
capacities significantly surpassing those of the conventional DVD,
along with many other digital apparatuses.
[0007] Accordingly, optical reproducing apparatuses having the
Blu-ray Disc (BD) standards applied thereto are also being
developed. However, since the Blu-ray Disc (BD) standards are yet
to be completed, there have been many difficulties in developing a
complete optical reproducing apparatus. Particularly, in order to
effectively reproduce data from the Blu-ray Disc (BD), not only
must the main AV data and various data required for a user's
convenience (such as subtitle information, the supplementary data
related to the main AV data) be provided, but management
information for reproducing the main data and the subtitle data
recorded on the optical disc must also be systematized and
provided.
[0008] However, in the present Blu-ray Disc (BD) standards, since
the standards of the supplementary data, particularly the subtitle
stream file, are not completely consolidated, there are many
restrictions in the full-scale development of a Blu-ray Disc
(BD)-based optical reproducing apparatus. Such restrictions cause
problems in providing the supplementary data such as subtitles to
the user.
SUMMARY OF THE INVENTION
[0009] The present invention relates to a recording medium.
[0010] In one embodiment, a data structure recorded on the
recording medium includes a text subtitle stream including one or
more presentation segments. Each of the presentation segments
specifies its own presentation start and end times. An initial
value of a system time clock of the text subtitle stream is a
presentation start time of the first presentation segment in the
text subtitle stream.
[0011] For example, the presentation start and end times may be
defined on the system time clock.
[0012] In one embodiment, each of the presentation segments
contains at least one region of dialog text.
[0013] In another embodiment, the recording medium further includes
a clip information file corresponding to the text subtitle stream.
The clip information file includes a data field indicating a source
packet number of a source packet where a STC-sequence starts. The
STC-sequence represents a sequence of source packets where the
system time clock is continuous.
[0014] The present invention also relates to a method of
reproducing data recorded on a recording medium.
[0015] In one embodiment, the method includes reading a text
subtitle stream including one or more presentation segments from
the recording medium. Each of the presentation segments specifies
its own presentation start and end times. An initial value of a
system time clock of the text subtitle stream is set to a
presentation start time of the first presentation segment in the
text subtitle stream.
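The initialization described in this embodiment can be sketched as follows. This is an illustrative sketch only, not the patent's or the Blu-ray specification's actual syntax; the function name, segment representation, and PTS values are assumptions.

```python
# Hypothetical sketch: the initial value of a text subtitle stream's system
# time clock (STC) is the presentation start time of its first presentation
# segment, as described above. Segment representation is illustrative only.

def init_text_subtitle_stc(presentation_segments):
    """Return the initial STC value for a text subtitle stream.

    Each segment is a (start_pts, end_pts) pair specifying its own
    presentation start and end times.
    """
    if not presentation_segments:
        raise ValueError("text subtitle stream contains no presentation segments")
    first_start, _ = presentation_segments[0]
    return first_start

# Example: three segments, with PTS values assumed to be 90 kHz clock ticks.
segments = [(900_000, 1_350_000), (1_800_000, 2_250_000), (2_700_000, 3_150_000)]
stc_initial = init_text_subtitle_stc(segments)
```

With the STC initialized this way, each segment's start and end times can be interpreted directly on that clock, which is the point of defining them on a single continuous STC-sequence.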
[0016] The present invention further relates to an apparatus for
reproducing data recorded on a recording medium.
[0017] In one embodiment, the apparatus includes a controller
controlling a text subtitle stream including one or more
presentation segments to be read from the recording medium. Each of
the presentation segments specifies its own presentation start and
end times, and the controller sets an initial value of the system
time clock to a presentation start time of the first presentation
segment in the text subtitle stream.
[0018] The present invention may also relate to methods and
apparatuses for creating a text subtitle stream.
[0019] Still further, the present invention may relate to methods
and apparatuses for recording data on a recording medium.
BRIEF DESCRIPTION OF THE DRAWINGS
[0020] The accompanying drawings, which are included to provide a
further understanding of the invention and are incorporated in and
constitute a part of this application, illustrate embodiments of
the invention and together with the description serve to explain
the principle of the invention. In the drawings:
[0021] FIG. 1 illustrates a structure of data files recorded in a
high density optical disc according to the present invention;
[0022] FIG. 2 illustrates data storage areas of the high density
optical disc according to the present invention;
[0023] FIG. 3 illustrates a text subtitle and a main image
presented on a display screen according to the present
invention;
[0024] FIG. 4 is a schematic diagram illustrating
reproduction control of a text subtitle stream according to the
present invention;
[0025] FIGS. 5A to 5C illustrate applications of the reproduction
control information for reproducing the text subtitle stream
according to the present invention;
[0026] FIG. 6 illustrates a dialog, which forms a text subtitle
stream according to the present invention, and its relation with a
presentation time;
[0027] FIG. 7 illustrates a structure of the text subtitle stream
according to the present invention;
[0028] FIG. 8 illustrates a set of SubPath information among
reproduction control file information for controlling reproduction
of the text subtitle stream according to the present invention;
[0029] FIG. 9 illustrates a method of synchronizing the text
subtitle stream and a main AV stream according to the present
invention;
[0030] FIG. 10 illustrates a set of type information included in a
general AV stream;
[0031] FIG. 11 illustrates a ClipInfo file among reproduction
control file information for controlling reproduction of the text
subtitle stream according to the present invention; and
[0032] FIG. 12 illustrates an optical recording and/or reproducing
apparatus for reproducing the text subtitle stream file according
to the present invention.
DETAILED DESCRIPTION OF THE INVENTION
[0033] Reference will now be made in detail to the preferred
embodiments of the present invention, examples of which are
illustrated in the accompanying drawings. Wherever possible, the
same reference numbers will be used throughout the drawings to
refer to the same or like parts. In addition, although the terms
used in the present invention are selected from generally known and
used terms, some of the terms mentioned in the description of the
present invention have been selected by the applicant at his or her
discretion, the detailed meanings of which are described in
relevant parts of the description herein. Furthermore, the present
invention must be understood not simply by the actual terms used
but by the meaning that each term carries.
[0034] In this detailed description, "recording medium" refers to
all types of media that can record data, broadly including all
types of media regardless of the recording method, such as an
optical disc, a magnetic tape, and so on. Hereinafter, for
simplicity of description, the optical disc and, more
specifically, the "Blu-ray disc (BD)" will be given as an example
of the recording medium proposed herein. However, it will be
apparent that the spirit or scope of the present invention may be
equally applied to other types of recording media.
[0035] In this detailed description, "main data" represent
audio/video (AV) data that belong to a title (e.g., a movie title)
recorded in an optical disc by an author. In general, the AV data
are recorded in MPEG2 format and are often called AV streams or
main AV streams. In addition, "supplementary data" represent all
other data required for reproducing the main data, examples of
which are text subtitle streams, interactive graphic streams,
presentation graphic streams, and supplementary audio streams
(e.g., for a browsable slideshow). These supplementary data streams
may be recorded in MPEG2 format or in any other data format. They
could be multiplexed with the AV streams or could exist as
independent data files within the optical disc.
[0036] A "subtitle" represents caption information corresponding to
video (image) data being reproduced, and it may be represented in a
predetermined language. For example, when a user selects an option
for viewing one of a plurality of subtitles represented in various
languages while viewing images on a display screen, the caption
information corresponding to the selected subtitle is displayed on
a predetermined portion of the display screen. If the displayed
caption information is text data (e.g., characters), the selected
subtitle is often called a "text subtitle". According to one aspect
of the present invention, a plurality of text subtitle streams in
MPEG2 format may be recorded in an optical disc, and they may exist
as a plurality of independent stream files. Each "text subtitle
stream file" is created and recorded within an optical disc. The
purpose of the present invention is to provide a method and
apparatus for reproducing the recorded text subtitle stream file.
Most particularly, the present invention proposes a method of
providing users with a text subtitle stream synchronized with the
main data (AV stream).
[0037] FIG. 1 illustrates a file structure of the data files
recorded in a Blu-ray disc (hereinafter referred to as "BD")
according to the present invention. Referring to FIG. 1, at least
one BD directory (BDMV) is included in a root directory (root).
Each BD directory includes an index file (index.bdmv) and an object
file (MovieObject.bdmv), which are used for interacting with one or
more users. For example, the index file may contain data
representing an index table having a plurality of selectable menus
and movie titles. Each BD directory further includes four file
directories that include audio/video (AV) data to be reproduced and
various data required for reproduction of the AV data.
[0038] The file directories included in each BD directory are a
stream directory (STREAM), a clip information directory (CLIPINF),
a playlist directory (PLAYLIST), and an auxiliary data directory
(AUX DATA). First of all, the stream directory (STREAM) includes
audio/video (AV) stream files having a particular data format. For
example, the AV stream files may be in the form of MPEG2 transport
packets and be named as "*.m2ts", as shown in FIG. 1. The stream
directory may further include one or more text subtitle stream
files, where each text subtitle stream file includes text (e.g.,
characters) data for a text subtitle represented in a particular
language and reproduction control information of the text data. The
text subtitle stream files exist as independent stream files within
the stream directory and may be named as "*.m2ts" or "*.txtst", as
shown in FIG. 1. An AV stream file or text subtitle stream file
included in the stream directory is often called a clip stream
file.
[0039] Next, the clip information directory (CLIPINF) includes clip
information files that correspond to the stream files (AV or text
subtitle) included in the stream directory, respectively. Each clip
information file contains property and reproduction timing
information of a corresponding stream file. For example, a clip
information file may include mapping information, in which
presentation time stamps (PTS) and source packet numbers (SPN) are
in a one-to-one correspondence and are mapped by an entry point map
(EPM), depending upon the clip type. Accordingly, the ClipInfo file
(*.clpi) related to the text subtitle stream according to the
present invention will be described in detail later with reference
to FIG. 11.
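The one-to-one PTS-to-SPN mapping described above can be sketched as a sorted lookup table. This is a minimal illustration of the idea, not the actual entry point map (EPM) binary layout; the function name and entry values are assumptions.

```python
# Hedged sketch of an entry point map (EPM) lookup: presentation time stamps
# (PTS) and source packet numbers (SPN) are in one-to-one correspondence, and
# a requested PTS is resolved to the last entry point at or before it.

import bisect

def lookup_spn(entry_point_map, pts):
    """entry_point_map: list of (pts, spn) pairs sorted by pts.

    Returns the SPN of the last entry whose PTS is <= the requested pts.
    """
    pts_values = [p for p, _ in entry_point_map]
    i = bisect.bisect_right(pts_values, pts) - 1
    if i < 0:
        raise ValueError("requested PTS precedes the first entry point")
    return entry_point_map[i][1]

# Illustrative entries (PTS values assumed to be 90 kHz ticks).
epm = [(0, 0), (90_000, 120), (180_000, 260)]
spn = lookup_spn(epm, 100_000)  # falls between entries; resolves to SPN 120
```

This is how a set of timing information can be converted into a particular packet location within a stream file.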
[0040] Using the mapping information, a particular location of a
stream file may be determined from a set of timing information
(In-Time and Out-Time) provided by a PlayItem or SubPlayItem, which
will be discussed later in more detail. In the industry standard,
each pair of a stream file and its corresponding clip information
file is designated as a clip. For example, 01000.clpi included in
CLIPINF includes property and reproduction timing information of
01000.m2ts included in STREAM, and 01000.clpi and 01000.m2ts form a
clip.
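The clip pairing in the example above (01000.m2ts with 01000.clpi) follows a simple base-name convention, sketched below. The helper name is hypothetical; only the naming convention comes from the text.

```python
# Illustrative helper: a clip stream file and its clip information file share
# a base name and together form a clip (e.g., "01000.m2ts" and "01000.clpi").

from pathlib import PurePosixPath

def clip_info_name(stream_file_name: str) -> str:
    """Return the expected clip information file name for a clip stream file."""
    return str(PurePosixPath(stream_file_name).with_suffix(".clpi"))

paired = clip_info_name("01000.m2ts")
```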
[0041] Referring back to FIG. 1, the playlist directory (PLAYLIST)
includes one or more PlayList files (*.mpls), where each PlayList
file includes at least one PlayItem that designates at least one
main AV clip and the reproduction time of the main AV clip. More
specifically, a PlayItem contains information designating In-Time
and Out-Time, which represent reproduction begin and end times for
a main AV clip designated by Clip_Information_File_Name within the
PlayItem. Therefore, a PlayList file represents the basic
reproduction control information for one or more main AV clips. In
addition, the PlayList file may further include a SubPlayItem,
which represents the basic reproduction control information for a
text subtitle stream file. When a SubPlayItem is included in a
PlayList file to reproduce one or more text subtitle stream files,
the SubPlayItem is synchronized with the PlayItem(s). On the other
hand, when the SubPlayItem is used to reproduce a browsable
slideshow, it may not be synchronized with the PlayItem(s).
According to the present invention, the main function of a
SubPlayItem is to control reproduction of one or more text subtitle
stream files.
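The PlayList organization described above can be sketched as nested data structures. The field names below are assumptions for illustration, not the BD specification's actual syntax; only the relationships (a PlayList holds PlayItems as the main path and optionally a SubPlayItem for text subtitles) come from the text.

```python
# Hedged data-structure sketch of a PlayList: at least one PlayItem designates
# a main AV clip with In-Time/Out-Time, and an optional SubPlayItem controls
# reproduction of one or more text subtitle stream files.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class PlayItem:
    clip_information_file_name: str  # designates the main AV clip
    in_time: int                     # reproduction begin time
    out_time: int                    # reproduction end time

@dataclass
class SubPlayItem:
    clip_names: List[str]            # e.g., text subtitle clips
    synchronized: bool = True        # synced with the PlayItem(s) for subtitles

@dataclass
class PlayList:
    play_items: List[PlayItem]                   # main path (required)
    sub_play_item: Optional[SubPlayItem] = None  # sub path (optional)

pl = PlayList(
    play_items=[PlayItem("01000.clpi", in_time=0, out_time=5_400_000)],
    sub_play_item=SubPlayItem(["10001.clpi", "10002.clpi"]),
)
```

A browsable-slideshow SubPlayItem would set `synchronized=False`, matching the distinction drawn above.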
[0042] Accordingly, a main path refers to a path reproducing the
main data by using at least one PlayItem included in a PlayList
file, and a sub path refers to a path reproducing the text subtitle
stream as the supplementary data. More specifically, a main path
must be included in a PlayList file, and at least one sub path for
each attribute of a corresponding set of supplementary data is
included, when the supplementary data exist. Thus, a reproduction
using the main path and the sub path will be described in detail
later with reference to FIG. 4, and more specifically, a
sub path syntax for reproducing the text subtitle stream according
to the present invention will be described in detail with reference
to FIG. 8.
[0043] Lastly, the auxiliary data directory (AUX DATA) may include
supplementary data stream files, examples of which are font files
(e.g., aaaaa.font or aaaaa.otf), pop-up menu files (not shown), and
sound files (e.g., Sound.bdmv) for generating click sound. The text
subtitle stream files mentioned earlier may be included in the
auxiliary data directory instead of the stream directory.
[0044] FIG. 2 illustrates data storage areas of an optical disc
according to the present invention. Referring to FIG. 2, the
optical disc includes a file system information area occupying the
innermost portion of the disc volume, a stream area occupying the
outermost portion of the disc volume, and a database area located
between the file system information area and the stream area. In
the file system information area, system information for managing
the entire data files shown in FIG. 1 is stored. Next, main data
and supplementary data (i.e., AV streams and one or more text
subtitle streams) are stored in the stream area. The main data may
include audio data, video data, and graphic data. And, the
supplementary data (i.e., the text subtitle) is independently
stored in the stream area without being multiplexed with the main
data. The general files, PlayList files, and clip information files
shown in FIG. 1 are stored in the database area of the disc volume.
As discussed above, the general files include an index file and an
object file, and the PlayList files and clip information files
include information required to reproduce the AV streams and the
text subtitle streams stored in the stream area. Using the
information stored in the database area and/or stream area, a user
is able to select a specific playback mode and to reproduce the
main AV and text subtitle streams in the selected playback
mode.
[0045] Hereinafter, the structure of the text subtitle stream file
according to the present invention will be described in detail.
First of all, the control information for reproducing the text
subtitle stream will be newly defined. Then, a detailed
description of the method of creating the text subtitle stream
file including the newly defined control information, and of the
method and apparatus for reproducing the recorded stream file,
will follow. FIG. 3 illustrates a text
subtitle and a main image presented on a display screen according
to the present invention. The main image and the text subtitle are
simultaneously displayed on the display screen when a main AV
stream and a corresponding text subtitle stream are reproduced in
synchronization. Accordingly, the text subtitle stream must be
provided synchronized with the main data, and the method of
synchronizing the text subtitle stream with the main data will be
proposed in the present invention.
[0046] FIG. 4 is a schematic diagram illustrating reproduction
control of a main AV clip and text subtitle clips according to the
present invention. Referring to FIG. 4, a PlayList file includes at
least one PlayItem controlling reproduction of at least one main AV
clip and a SubPlayItem controlling reproduction of a plurality of
text subtitle clips. More specifically, at least one PlayItem is
included in the PlayList file as a main path for controlling
reproduction of the main data (i.e., main clip). And, when a
corresponding text subtitle stream exists within the main data, the
text subtitle stream is controlled by a SubPlayItem as the sub
path. For example, referring to FIG. 4, a text subtitle clip 1
(English) and a text subtitle clip 2 (Korean) are each reproduced
and controlled by a single SubPlayItem. And, each of the text
subtitle clip 1 and the text subtitle clip 2 is synchronized with
the main data, thereby enabling the text subtitle and the main data
to be displayed on a display screen simultaneously at a desired
presentation time.
[0047] In order to display the text subtitle on the display screen,
display control information (e.g., position and size information)
and presentation time information, examples of which are
illustrated in FIG. 5A to FIG. 5C, are required. Hereinafter,
diverse information included in the text subtitle stream will be
described in detail with reference to FIG. 5A to FIG. 7. And, the
method of synchronizing the text subtitle stream with the main data
will be described in detail with reference to FIG. 8 to FIG.
11.
[0048] FIG. 5A illustrates a dialog presented on a display screen
according to the present invention. A dialog represents entire text
subtitle data displayed on a display screen during a given
presentation time, so as to facilitate reproduction control of the
text subtitle stream. In general, presentation times of the dialog
may be represented in presentation time stamps (PTS). More
specifically, a PTS section for reproducing one dialog includes a
"dialog_start_PTS" and a "dialog_end_PTS" for each dialog. Also,
for example, presentation of the dialog shown in FIG. 5A starts at
PTS (k) (i.e., dialog_start_PTS) and ends at PTS (k+1) (i.e.,
dialog_end_PTS). Therefore, when the dialog shown in FIG. 5A
represents an entire unit of text subtitle data which are displayed
on the display screen between PTS (k) and PTS (k+1), all of the
text subtitle data is defined by the same dialog. Herein, a dialog
includes a maximum of 100 character codes in one text subtitle.
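Purely as an illustration (the class and tick values below are hypothetical, not part of the recorded syntax), the dialog and its PTS section described above might be modeled as follows:

```python
from dataclasses import dataclass

@dataclass
class Dialog:
    """Hypothetical model of one dialog in a text subtitle stream."""
    dialog_start_PTS: int  # presentation start time of the dialog, in PTS ticks
    dialog_end_PTS: int    # presentation end time of the dialog, in PTS ticks
    text: str              # the entire text subtitle data shown during the PTS section

    def is_valid(self) -> bool:
        # the dialog is presented over a forward-running PTS section and is
        # limited to a maximum of 100 character codes, as described above
        return (self.dialog_start_PTS < self.dialog_end_PTS
                and len(self.text) <= 100)

d = Dialog(dialog_start_PTS=900_000, dialog_end_PTS=1_350_000, text="Text #1")
print(d.is_valid())  # → True
```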
[0049] In addition, FIG. 5B illustrates regions of a dialog
according to the present invention. A region represents a divided
portion of text subtitle data (dialog) displayed on a display
screen during a given presentation time. In other words, a dialog
includes at least one region, and each region may include at least
one line of subtitle text. The entire text subtitle data
representing a region may be displayed on the display screen
according to a region style (global style) assigned to the region.
The maximum number of regions included in a dialog should be
determined based on a desired decoding rate of the subtitle data
because a greater number of regions generally results in a lower
decoding rate. For example, the maximum number of regions for a
dialog may be limited to two in order to achieve a reasonably high
decoding rate.
[0050] FIG. 5C illustrates style information for regions of a
dialog according to the present invention. Style information
represents information defining properties required for displaying
at least a portion of a region included in a dialog. Some of the
examples of the style information are position, region size,
background color, text alignment, text flow information, and many
others. The style information may be classified into region style
information (global style information) and inline style information
(local style information).
[0051] Region style information defines a region style (global
style) which is applied to an entire region of a dialog. For
example, the region style information may contain at least one of a
region position, region size, font color, background color, text
flow, text alignment, line space, font name, font style, and font
size of the region. For example, two different region styles are
applied to region 1 and region 2, as shown in FIG. 5C. A region
style with position 1, size 1, and blue background color is applied
to Region 1, and a different region style with position 2, size 2,
and red background color is applied to Region 2.
[0052] On the other hand, inline style information defines an
inline style (local style) which is applied to a particular portion
of text strings included in a region. For example, the inline style
information may contain at least one of a font type, font size,
font style, and font color. The particular portion of text strings
may be an entire text line within a region or a particular portion
of the text line. Referring to FIG. 5C, a particular inline style
is applied to the text portion "mountain" included in Region 1. In
other words, at least one of the font type, font size, font style,
and font color of the particular portion of text strings is
different from the remaining portion of the text strings within
Region 1.
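The precedence of the two style classes can be sketched as follows; the dictionary keys mirror the region style properties listed above, while the concrete values are hypothetical examples:

```python
# region style (global style) applied to the entire region
region_style = {
    "region_position": (40, 800), "region_size": (1200, 160),
    "font_color": "white", "background_color": "blue",
    "text_flow": "left-to-right", "text_alignment": "center",
    "line_space": 8, "font_name": "serif",
    "font_style": "normal", "font_size": 36,
}

# inline style (local style) applied only to a particular portion of the
# text strings, e.g. the word "mountain" in Region 1 of FIG. 5C
inline_style = {"font_style": "italic", "font_color": "yellow"}

def effective_style(region, inline):
    # local (inline) properties take precedence over the global (region)
    # properties for the portion of text they cover
    merged = dict(region)
    merged.update(inline)
    return merged

style = effective_style(region_style, inline_style)
print(style["font_style"], style["font_size"])  # → italic 36
```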
[0053] FIG. 6 illustrates a method of creating each dialog applied
to each presentation time (PTS) section. For example, when 4
dialogs exist between PTS1 and PTS6, Dialog #1 is displayed as text
data Text #1 between PTS1 and PTS2. And, Dialog #2 includes 2
regions (Region 1 and Region 2) between PTS2 and PTS3, wherein
Region 1 is displayed as text data Text #1 and Region 2 is
displayed as text data Text #2. Further, Dialog #3 is displayed as
text data Text #2 between PTS3 and PTS4, and Dialog #4 is displayed
as text data Text #3 between PTS5 and PTS6. Referring to FIG. 6,
text subtitle data does not exist between PTS4 and PTS5. In the
method for creating each of the above-described dialog information,
each of the dialogs must include timing information (i.e., PTS set)
for displaying the corresponding dialog, style information, and
actual text data (hereinafter referred to as "dialog data").
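The timeline of FIG. 6 can be sketched as follows; the numeric tick values standing in for PTS1 through PTS6 are hypothetical, and the style names are placeholders:

```python
# hypothetical tick values standing in for PTS1..PTS6 of FIG. 6
PTS = {1: 100, 2: 200, 3: 300, 4: 400, 5: 500, 6: 600}

dialogs = [
    {"start": PTS[1], "end": PTS[2],
     "regions": [{"style": "style_a", "text": "Text #1"}]},
    {"start": PTS[2], "end": PTS[3],  # Dialog #2 carries two regions
     "regions": [{"style": "style_a", "text": "Text #1"},
                 {"style": "style_b", "text": "Text #2"}]},
    {"start": PTS[3], "end": PTS[4],
     "regions": [{"style": "style_a", "text": "Text #2"}]},
    {"start": PTS[5], "end": PTS[6],  # no subtitle exists between PTS4 and PTS5
     "regions": [{"style": "style_a", "text": "Text #3"}]},
]

def texts_at(pts):
    # all text displayed at a given PTS (empty when no dialog covers it)
    return [r["text"] for d in dialogs
            if d["start"] <= pts < d["end"] for r in d["regions"]]

print(texts_at(250))  # → ['Text #1', 'Text #2']
print(texts_at(450))  # → []
```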
[0054] Accordingly, as described above, the timing information
(i.e., PTS set) for displaying each dialog is recorded as
"dialog_start_PTS" and "dialog_end_PTS". The style information is defined as the
above-described Global_Style_Info and Local_Style_Info. However, in
the present invention, the style information will be recorded as
region_styles and inline_styles. Furthermore, text data that is
displayed on the actual display screen is recorded in the dialog
data. More specifically, since Dialog #2 includes 2 regions (Region
1 and Region 2), a set of style information and dialog data is
recorded for each of Region 1 and Region 2.
[0055] FIG. 7 illustrates a text subtitle stream file (e.g.,
10001.m2ts shown in FIG. 1) according to the present invention. The
text subtitle stream file may be formed of an MPEG2 transport
stream including a plurality of transport packets (TP), all of
which have a same packet identifier (e.g., PID=0x18xx). When a disc
player receives many input streams including a particular text
subtitle stream, it finds all the transport packets that belong to
the text subtitle stream using their PIDs. Referring to FIG. 7,
each sub-set of transport packets forms a packetized elementary
stream (PES) packet. The first of the PES packets shown in FIG. 7
corresponds to a dialog style segment (DSS) defining a group of
region styles, and all the PES packets from the second onward
correspond to dialog presentation segments (DPSs).
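The PID filtering performed by the disc player can be sketched as follows; the packets are modeled simply as (PID, payload) pairs rather than full 188-byte transport packets, and the concrete PID and payload values are hypothetical:

```python
# hypothetical PID in the 0x18xx range mentioned above
TEXT_SUBTITLE_PID = 0x1800

# a mix of input streams modeled as (pid, payload) pairs
packets = [
    (0x1011, b"video"),
    (TEXT_SUBTITLE_PID, b"DSS"),
    (0x1100, b"audio"),
    (TEXT_SUBTITLE_PID, b"DPS#1"),
    (TEXT_SUBTITLE_PID, b"DPS#2"),
]

def extract_stream(packets, pid):
    # the player keeps only the transport packets whose PID matches the
    # text subtitle stream, preserving their arrival order
    return [payload for p, payload in packets if p == pid]

print(extract_stream(packets, TEXT_SUBTITLE_PID))  # → [b'DSS', b'DPS#1', b'DPS#2']
```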
[0056] In the above-described text subtitle stream structure of
FIG. 7, each set of dialog information shown in FIGS. 5A to 5C and
FIG. 6 represents a dialog presentation segment (DPS). And, the
style information included in the dialog information represents a
set of information that links any one of the plurality of region
style sets defined in the dialog style segment (DSS), which can
also be referred to as "region_style_id", and inline styles. A
standardized limited number of region style sets is recorded in the
dialog style segment (DSS). For example, a maximum of 60 sets of
specific style information is recorded, each of which is described
by a region_style_id.
[0057] FIG. 8 illustrates a syntax structure of a sub path and a
SubPlayItem according to the present invention. Herein, the sub
path and the SubPlayItem are not used to control reproduction of
the text subtitle stream only. Rather, an object of the sub path
and the SubPlayItem is to define and reproduce diverse
supplementary data including the text subtitle stream, depending
upon the sub path type, which will be described later.
Therefore, a specific field within the syntax structure of the sub
path and the SubPlayItem will be defined to have a specific value,
when the specific field is irrelevant to the text subtitle stream
or when the specific field is used in the text subtitle stream.
[0058] More specifically, referring to FIG. 8, a SubPath( ) syntax
designates a path of the supplementary data that is associated with
the main data included in a single PlayList. The SubPath( ) syntax
includes a SubPath_type field, an is_repeat_SubPath field, a
number_of_SubPlayItems field, and a SubPlayItem(i) field. Herein,
the SubPath_type field designates the type of a sub path.
SubPath_type=2 represents a supplementary audio browsable
slideshow, SubPath_type=3 represents an interactive graphic
presentation menu, and SubPath_type=4 represents a text subtitle
presentation. Therefore, the optical recording and reproducing
apparatus according to the present invention can determine which
type of clip is being controlled by each corresponding sub path
through the SubPath_type field. Also, the number_of_SubPlayItems
field represents the number of SubPlayItems included within the sub
path. Herein, the optical recording and reproducing apparatus can
verify the number of SubPlayItems that are being controlled in the
specific sub path through the number_of_SubPlayItems field. The
is_repeat_SubPath field represents a set of 1-byte flag information
for verifying whether the corresponding SubPath is to be used
repeatedly. More specifically, when is_repeat_SubPath=0b the
SubPath is not used repeatedly, and when is_repeat_SubPath=1b the
SubPath is used repeatedly.
[0059] Accordingly, when controlling reproduction of the text
subtitle stream by using the SubPath, SubPath_type=4,
number_of_SubPlayItems=1, and is_repeat_SubPath=0b. In other words,
in the SubPath controlling the text subtitle stream, a single
SubPlayItem controls reproduction of a plurality of text subtitle
streams (e.g., as shown in FIG. 4), and the SubPath is not used
repeatedly. Therefore, when reproducing the supplementary audio
browsable slideshow as SubPath_type=2, and when reproducing the
interactive graphic presentation menu stream as SubPath_type=3, the
number_of_SubPlayItems field and the is_repeat_SubPath field may be
defined differently and used.
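The constraints stated above for a SubPath controlling a text subtitle stream can be checked as in the following sketch; the dictionary representation of the SubPath( ) fields is an assumption for illustration only:

```python
def is_valid_text_subtitle_subpath(subpath):
    # constraints described above for a text subtitle SubPath:
    # SubPath_type=4, a single SubPlayItem, and no repetition (0b)
    return (subpath["SubPath_type"] == 4
            and subpath["number_of_SubPlayItems"] == 1
            and subpath["is_repeat_SubPath"] == 0)

text_sub_path = {
    "SubPath_type": 4,            # text subtitle presentation
    "number_of_SubPlayItems": 1,  # one SubPlayItem controls all subtitle clips
    "is_repeat_SubPath": 0,       # the SubPath is not used repeatedly
}
print(is_valid_text_subtitle_subpath(text_sub_path))  # → True
```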
[0060] In addition, a detailed syntax of the SubPlayItem(i) field
will now be described. A Clip_Information_file_name field is used
as information designating a file name of a stream that is
controlled by the corresponding SubPlayItem, and a
Clip_codec_identifier field represents a coding format of the
designated clip. As described above, since the text subtitle
information according to the present invention is encoded in an
MPEG-2 format, the Clip_codec_identifier field is defined as
Clip_codec_identifier=M2TS. Also, a SubPlayItem_IN_time field and a
SubPlayItem_OUT_time field are used as information for designating
the reproduction begin time and reproduction end time within the
designated clip. Accordingly, as described in FIG. 1, the
SubPlayItem_IN_time and the SubPlayItem_OUT_time are changed to a
set of address information (also referred to as a source packet
number (SPN)) within the designated ClipInfo file (*.clpi), so as
to decide the reproduction section within the actual clip. Further,
a ref_to_STC_id field represents information deciding a position of
a seamless reproduction unit, which is applied to the reproduction
section, within the designated clip, which will be described in
detail with reference to FIG. 10.
[0061] An is_multi_clip_entries field is a set of 1-byte flag
information representing a number of clip entries being controlled
by the corresponding SubPlayItem. For example, referring to FIG. 4,
since a plurality of clips exists when 2 text subtitle clips are
controlled by a single SubPlayItem, is_multi_clip_entries=1b.
Therefore, when is_multi_clip_entries=1b, at least 2 clip entries
exist, and in this case, a subclip_entry_id is assigned to each
clip entry, and a specific Clip_Information_file_name field,
Clip_codec_identifier field and a ref_to_STC_id field for each
subclip_entry_id are defined.
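The multi-clip case of FIG. 4 might be sketched as follows; the clip file names and the dictionary layout are hypothetical stand-ins for the fields named above:

```python
sub_play_item = {
    "Clip_codec_identifier": "M2TS",
    "is_multi_clip_entries": 1,  # 1b: at least two clip entries exist
    # one entry per subclip_entry_id, e.g. an English and a Korean
    # text subtitle clip controlled by the single SubPlayItem
    "clip_entries": {
        0: {"Clip_Information_file_name": "10001",  # hypothetical name
            "Clip_codec_identifier": "M2TS", "ref_to_STC_id": 0},
        1: {"Clip_Information_file_name": "10002",  # hypothetical name
            "Clip_codec_identifier": "M2TS", "ref_to_STC_id": 0},
    },
}

# when is_multi_clip_entries=1b, at least 2 clip entries must exist
ok = (sub_play_item["is_multi_clip_entries"] == 0
      or len(sub_play_item["clip_entries"]) >= 2)
print(ok)  # → True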
[0062] Finally, a set of information for a synchronization with the
main data is included within the SubPlayItem syntax, wherein the
information includes a sync_PlayItem_id field and a
sync_start_PTS_of_PlayItem field. More specifically, the
sync_PlayItem_id field and the sync_start_PTS_of_PlayItem field are
used only when reproduction control of the text subtitle stream is
performed by using the SubPlayItem. Therefore, when reproducing the
supplementary audio browsable slideshow as SubPath_type=2, and when
reproducing the interactive graphic presentation menu stream as
SubPath_type=3, the sync_PlayItem_id field and the
sync_start_PTS_of_PlayItem field become unnecessary information,
and in this case, the corresponding field is set as `00h`.
[0063] In describing the sync_PlayItem_id field and the
sync_start_PTS_of_PlayItem field, the sync information for
reproducing the text subtitle stream according to the present
invention will now be described with reference to FIG. 9. First of
all, the text subtitle stream is synchronized with the main data,
and the reproduction begin time of the text subtitle stream being
synchronized with the main data is decided by the sync_PlayItem_id
field and the sync_start_PTS_of_PlayItem field. More specifically,
the sync_PlayItem_id field is a set of information designating a
specific PlayItem among the at least one PlayItem within the PlayList
file. And, the sync_start_PTS_of_PlayItem field is a set of
information indicating the reproduction begin time of the text
subtitle stream as a PlayItem PTS, within the designated specific
PlayItem. For example, referring to FIG. 9, in the SubPlayItem for
reproducing the text subtitle stream, as information for deciding
the reproduction begin time of the text subtitle stream being
synchronized with the main data, the sync_PlayItem_id field and the
sync_start_PTS_of_PlayItem field may be defined as
sync_PlayItem_id=0 and sync_start_PTS_of_PlayItem=t1. More
specifically, reproduction of the text subtitle stream using the
SubPlayItem begins at a specific point of "t1(PTS)" within PlayItem
1 (PlayItem_id=0).
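The lookup of the reproduction begin time described above can be sketched as follows; the PlayList layout and the tick value standing in for t1 are hypothetical:

```python
playlist = {
    "PlayItems": [{"PlayItem_id": 0}, {"PlayItem_id": 1}],
    "SubPlayItem": {
        "sync_PlayItem_id": 0,                  # designates PlayItem 1 (id=0)
        "sync_start_PTS_of_PlayItem": 1_234_567,  # hypothetical t1, in PTS ticks
    },
}

def sync_point(playlist):
    # resolve the designated PlayItem and the PTS within it at which
    # reproduction of the text subtitle stream begins
    spi = playlist["SubPlayItem"]
    item = next(p for p in playlist["PlayItems"]
                if p["PlayItem_id"] == spi["sync_PlayItem_id"])
    return item["PlayItem_id"], spi["sync_start_PTS_of_PlayItem"]

print(sync_point(playlist))  # → (0, 1234567)
```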
[0064] Secondly, the text subtitle stream uses a counter of 90
kilohertz (kHz) as a system time clock. And, an initial value of
the counter is the dialog_start_PTS within the first dialog
presentation segment (DPS #1), shown in FIG. 7. More specifically,
unlike other streams, the text subtitle stream does not have a
program clock reference (PCR) as the initial value. Accordingly,
all PTS (dialog_start_PTS and dialog_end_PTS) values within
subsequent dialog presentation segments (DPS) are decided by
counting from the dialog_start_PTS within the first dialog
presentation segment (DPS #1).
[0065] Therefore, reproduction of the text subtitle begins starting
from the time decided by the sync_PlayItem_id field and the
sync_start_PTS_of_PlayItem field. Afterwards, a system time clock
counter of 90 kilohertz (kHz), having an initial value identical to
that of the dialog_start_PTS within the first dialog presentation
segment (DPS #1), is used so as to reproduce each dialog in
accordance with the PTS (dialog_start_PTS and dialog_end_PTS) value
for each dialog defined within the dialog presentation segment
(DPS) included in the text subtitle stream. Thereafter,
reproduction ends at dialog_end_PTS within the last dialog
presentation segment (last DPS) included in the text subtitle
stream.
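The clocking scheme described above can be sketched as follows; the DPS tick values are hypothetical, and the point of the sketch is only that every PTS is measured by counting from the first dialog_start_PTS, since no PCR exists:

```python
CLOCK_HZ = 90_000  # the text subtitle stream uses a 90 kHz system time clock

# hypothetical PTS tick values for three dialog presentation segments
dps_list = [
    {"dialog_start_PTS": 900_000, "dialog_end_PTS": 1_080_000},
    {"dialog_start_PTS": 1_080_000, "dialog_end_PTS": 1_350_000},
    {"dialog_start_PTS": 1_440_000, "dialog_end_PTS": 1_800_000},
]

# the counter's initial value is the dialog_start_PTS of the first DPS;
# unlike other streams there is no program clock reference (PCR)
stc_initial = dps_list[0]["dialog_start_PTS"]

def seconds_from_start(pts):
    # elapsed time of a PTS value, counted from the initial value
    return (pts - stc_initial) / CLOCK_HZ

# reproduction ends at the dialog_end_PTS of the last DPS
print(seconds_from_start(dps_list[-1]["dialog_end_PTS"]))  # → 10.0
```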
[0066] FIG. 10 illustrates a system time clock sequence
(STC_sequence) and an arrival time clock sequence (ATC_sequence)
within the clip according to the present invention. Accordingly,
the STC_sequence refers to a continuous section of a clip that is
decided by a time reference. More specifically, a new STC_sequence
is created
starting from a packet, which includes a program clock reference
(PCR) as the time reference, among the transport packets being
inputted. In the example shown in FIG. 10, a total of 3
STC_sequences (i.e., STC #0, STC #1, and STC #2) exist. Therefore,
a STC_discontinuity may occur between each STC_sequence. Also, at
least one STC_sequence (e.g., STC #0, STC #1, and STC #2)
configures a single ATC_sequence. More specifically, each of the
clips within the optical disc is formed of an ATC_sequence
including at least one STC_sequence.
[0067] However, the number of ATC_sequences is not limited to only
one, as shown in the example of FIG. 10. In
other words, the clip may include a plurality of ATC_sequences.
Accordingly, in case of the text subtitle stream, the system time
clock is decided by using the dialog_start_PTS within the first
dialog presentation segment (DPS #1) as an initial value. Thus, the
text subtitle stream is formed of an ATC_sequence including a
STC_sequence.
[0068] FIG. 11 illustrates a set of information recorded within a
ClipInfo file (*.clpi) according to the present invention and, more
specifically, illustrates information included in a SequenceInfo( )
area. Referring to FIG. 11, a field for recording STC_sequence and
ATC_sequence information of each corresponding clip is included in
the SequenceInfo( ) area within the ClipInfo file (*.clpi). More
specifically, information on the number of ATC_sequences is
recorded in a number_of_ATC_sequences field. And, in the present
invention, one ATC_sequence is included in the text subtitle clip.
Therefore, since only one ATC_sequence is included in the text
subtitle clip, only one ATC ID exists, namely atc_id=0. Additionally, a
number_of_STC_sequences(atc_id) field includes information on the
number of STC_sequences within the ATC_sequence (e.g., atc_id=0).
And, as described above, only one STC_sequence is included in the
text subtitle clip according to the present invention. Further, a
stc_id is assigned to each STC_sequence, and each stc_id includes a
PCR_PID field, a SPN_STC_start field, a presentation_start_time
field, and a presentation_end_time field.
[0069] Herein, a set of information designating packet
identification (PID) including a program clock reference (PCR),
which is a time reference of the STC_sequence, is recorded in the
PCR_PID field. And, a set of information designating a start source
packet number (SPN) of the STC_sequence is recorded in the
SPN_STC_start field. Accordingly, since a PCR does not exist in the
text subtitle clip according to the present invention, either
dummy data having no meaning is recorded in the PCR_PID field, or
the PCR_PID field is set as `00h`. Finally, a set of information
designating the starting time and the ending time of each
corresponding clip is respectively recorded in the
presentation_start_time field and the presentation_end_time field.
Accordingly, as described above, the presentation_start_time of the
clip becomes the dialog_start_PTS within the first dialog
presentation segment (DPS #1), and the presentation_end_time
becomes the dialog_end_PTS within the last dialog presentation
segment (last DPS).
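The derivation of the SequenceInfo( ) entry for a text subtitle clip, as described above, can be sketched as follows; the dictionary layout and the DPS tick values are hypothetical:

```python
def sequence_info_for_text_subtitle(dps_list):
    # a text subtitle clip holds exactly one ATC_sequence and one
    # STC_sequence; no PCR exists, so PCR_PID holds a dummy value (00h)
    return {
        "atc_id": 0,
        "stc_id": 0,
        "PCR_PID": 0x00,  # dummy: the text subtitle clip carries no PCR
        "presentation_start_time": dps_list[0]["dialog_start_PTS"],
        "presentation_end_time": dps_list[-1]["dialog_end_PTS"],
    }

# hypothetical first and last dialog presentation segments of the clip
dps_list = [{"dialog_start_PTS": 900_000, "dialog_end_PTS": 1_080_000},
            {"dialog_start_PTS": 1_440_000, "dialog_end_PTS": 1_800_000}]
info = sequence_info_for_text_subtitle(dps_list)
print(info["presentation_start_time"], info["presentation_end_time"])  # → 900000 1800000
```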
[0070] FIG. 12 illustrates a detailed view of an optical recording
and/or reproducing apparatus 10 according to the present invention,
including the reproduction of the text subtitle data. The optical
recording and/or reproducing apparatus 10 basically includes a
pick-up unit 11 for reproducing the main data, the text subtitle
stream, and corresponding reproduction control information recorded
on the optical disc, a servo 14 controlling the operations of the
pick-up unit 11, a signal processor 13 either restoring the
reproduction signal received from the pick-up unit 11 to a desired
signal value or modulating a signal to be recorded into a signal
recordable on the optical disc and transmitting the modulated
signal, and a microcomputer
16 controlling the above operations.
[0071] In addition, an AV decoder and text subtitle (Text ST)
decoder 17 performs final decoding of output data depending upon
the controls of the controller 12. And, in order to perform the
function of recording a signal on the optical disc, an AV encoder
18 converts an input signal into a signal of a specific format
(e.g., an MPEG-2 transport stream) depending upon the controls of
the controller 12 and, then, provides the converted signal to the
signal processor 13. Accordingly, the AV decoder and text subtitle
(Text ST) decoder 17 is included in the present invention as a
single decoder, for simplicity of the description. However, it is
apparent that only the text subtitle (Text ST) decoder can be
independently included as an element of the present invention.
[0072] A buffer 18 is used for preloading and storing the text
subtitle stream in advance, in order to decode the text subtitle
stream according to the present invention. The controller 12
controls the operations of the optical recording and/or reproducing
apparatus. And, when the user inputs a command requesting a text
subtitle of a specific language to be displayed, the
corresponding text subtitle stream is preloaded and stored in the
buffer 18. Subsequently, among the text subtitle stream data that
is preloaded and stored in the buffer 18, the controller 12 refers
to the above-described dialog information, region information,
style information, and so on, and controls the text subtitle
decoder 17 so that the actual text data are displayed with a
specific size and at a specific position on the screen.
[0073] More specifically, the text subtitle decoder 17 reproduces
the text subtitle stream preloaded in the buffer 18. However, the
text subtitle decoder 17 includes a counter 17a, which sets the
dialog_start_PTS within the first dialog presentation segment (DPS
#1) as an initial value so as to create the system time clock
(e.g., by using a frequency of 90 kHz). Furthermore, the text
subtitle decoder 17 verifies a synchronization point of the text
subtitle stream with the main data from the SubPlayItem within the
PlayList file associated with the reproduction of the text subtitle
clip. For example, information included in the sync_PlayItem_id
field and the sync_start_PTS_of_PlayItem field within the
above-described SubPlayItem is read, and based upon the read
information, the text subtitle stream is reproduced starting from a
specific time within a specific PlayItem.
[0074] As described above, the recording medium and method and
apparatus for reproducing and recording text subtitle streams have
the following advantages. The text subtitle stream is synchronized
with the main data, and therefore, the text subtitle stream and the
main data are reproduced simultaneously.
[0075] It will be apparent to those skilled in the art that various
modifications and variations can be made in the present invention
without departing from the spirit or scope of the invention. Thus,
it is intended that the present invention covers the modifications
and variations of this invention provided they come within the
scope of the appended claims and their equivalents.
* * * * *