U.S. patent application number 14/337198 was published on 2015-02-19 for data processing apparatus for transmitting/receiving compressed pixel data groups of picture over display interface and related data processing method.
The applicant listed for this patent is MEDIATEK INC. The invention is credited to Chi-Cheng Ju and Tsu-Ming Liu.
Application Number: 20150049099 (14/337198)
Document ID: /
Family ID: 52466526
Publication Date: 2015-02-19

United States Patent Application 20150049099
Kind Code: A1
Ju; Chi-Cheng; et al.
February 19, 2015
DATA PROCESSING APPARATUS FOR TRANSMITTING/RECEIVING COMPRESSED
PIXEL DATA GROUPS OF PICTURE OVER DISPLAY INTERFACE AND RELATED
DATA PROCESSING METHOD
Abstract
A data processing apparatus has a mapper, a plurality of
compressors, and an output interface. The mapper receives pixel
data of a plurality of pixels of a picture, and splits the pixel
data of the pixels of the picture into a plurality of pixel data
groups. The compressors compress the pixel data groups and generate
a plurality of compressed pixel data groups, respectively. The
output interface packs the compressed pixel data groups into at
least one output bitstream, and outputs the at least one output
bitstream via a display interface.
Inventors: Ju; Chi-Cheng (Hsinchu City, TW); Liu; Tsu-Ming (Hsinchu City, TW)

Applicant:
Name: MEDIATEK INC.
City: Hsin-Chu
Country: TW

Family ID: 52466526
Appl. No.: 14/337198
Filed: July 21, 2014
Related U.S. Patent Documents

Application Number: 61/865,345
Filing Date: Aug 13, 2013
Current U.S. Class: 345/520
Current CPC Class: G06T 9/00 20130101; G09G 5/006 20130101; G09G 2310/0297 20130101; G09G 2350/00 20130101; G09G 3/20 20130101; H04N 19/42 20141101; G09G 2310/027 20130101; G09G 2340/02 20130101; G09G 2310/0218 20130101; G09G 2370/04 20130101; G09G 3/2096 20130101; G09G 2370/08 20130101; G06T 1/60 20130101; G09G 2370/10 20130101; H04N 21/8455 20130101; G09G 2360/08 20130101
Class at Publication: 345/520
International Class: G09G 5/00 20060101 G09G005/00
Claims
1. A data processing apparatus, comprising: a mapper, configured to
receive pixel data of a plurality of pixels of a picture, and split
the pixel data of the pixels of the picture into a plurality of
pixel data groups; a plurality of compressors, configured to
compress the pixel data groups and generate a plurality of
compressed pixel data groups, respectively; and an output
interface, configured to pack the compressed pixel data groups into
at least one output bitstream, and output the at least one output
bitstream via a display interface.
2. The data processing apparatus of claim 1, wherein compression
operations performed by the compressors are independent of each
other.
3. The data processing apparatus of claim 2, further comprising: a
rate controller, configured to apply bit rate control to the
compressors, respectively.
4. The data processing apparatus of claim 1, wherein the display
interface is a display serial interface (DSI) standardized by a
Mobile Industry Processor Interface (MIPI) or an embedded display
port (eDP) standardized by a Video Electronics Standards
Association (VESA).
5. The data processing apparatus of claim 1, wherein pixel data of
each pixel of the picture includes a plurality of bits
corresponding to different bit planes, and the mapper is configured
to split the bits of the pixel data of each pixel of the picture
into a plurality of bit groups, and distribute the bit groups to
the pixel data groups, respectively.
6. The data processing apparatus of claim 1, wherein the mapper is
configured to split the pixels of the picture into a plurality of
pixel groups, and distribute pixel data of the pixel groups to the
pixel data groups, respectively.
7. The data processing apparatus of claim 6, wherein adjacent
pixels located at a same pixel line of the picture are distributed
to different pixel groups, respectively.
8. The data processing apparatus of claim 6, wherein adjacent pixel
segments located at a same pixel line of the picture are
distributed to different pixel groups, respectively, and each of
the adjacent pixel segments includes a plurality of successive
pixels.
9. The data processing apparatus of claim 8, wherein at least one
pixel line of the picture is divided into a plurality of pixel
segments, and a number of the pixel segments is equal to a number
of the pixel data groups.
10. The data processing apparatus of claim 8, wherein at least one
pixel line of the picture is divided into a plurality of pixel
segments, and a number of the pixel segments is larger than a
number of the pixel data groups.
11. The data processing apparatus of claim 6, further comprising: a
rate controller, configured to apply bit rate control to the
compressors, respectively; wherein the rate controller adjusts the
bit rate control according to a position of each pixel boundary
between different pixel groups.
12. The data processing apparatus of claim 11, wherein concerning a
specific pixel boundary between a first pixel group and a second
pixel group, the rate controller is configured to increase an
original bit budget assigned to a first compression unit by an
adjustment value and decrease an original bit budget assigned to a
second compression unit by the adjustment value; the first
compression unit and the second compression unit are adjacent
compression units in any of the first pixel group and the second
pixel group; and the first compression unit is nearer to the
specific pixel boundary than the second compression unit.
13. The data processing apparatus of claim 6, wherein each of the
compressors is further configured to set a compression order
according to a position of each pixel boundary between different
pixel groups.
14. The data processing apparatus of claim 13, wherein concerning a
specific pixel boundary between a first pixel group and a second
pixel group, a first compressor is configured to compress a first
compression unit prior to compressing a second compression unit,
and a second compressor is configured to compress a third
compression unit prior to compressing a fourth compression unit;
the first compression unit and the second compression unit are
adjacent compression units in the first pixel group, and the first
compression unit is nearer to the specific pixel boundary than the
second compression unit; and the third compression unit and the
fourth compression unit are adjacent compression units in
the second pixel group, and the third compression unit is nearer to
the specific pixel boundary than the fourth compression unit.
15. The data processing apparatus of claim 1, wherein the data
processing apparatus is coupled to another data processing
apparatus via the display interface; and the data processing
apparatus informs the another data processing apparatus of a pixel
data grouping setting employed to split the pixel data of the
pixels of the picture.
16. The data processing apparatus of claim 1, wherein the data
processing apparatus is coupled to another data processing
apparatus via the display interface, and the data processing
apparatus further comprises: a controller, configured to check a
de-compression capability and requirement of the another data
processing apparatus, and determine a number of the pixel data
groups in response to a checking result.
17. A data processing apparatus, comprising: an input interface,
configured to receive at least one input bitstream from a display
interface, and un-pack the at least one input bitstream into a
plurality of compressed pixel data groups of a picture; a plurality
of de-compressors, configured to de-compress the compressed pixel
data groups and generate a plurality of de-compressed pixel data
groups, respectively; and a de-mapper, configured to merge the
de-compressed pixel data groups into pixel data of a plurality of
pixels of the picture.
18. The data processing apparatus of claim 17, wherein
de-compression operations performed by the de-compressors are
independent of each other.
19. The data processing apparatus of claim 17, wherein the display
interface is a display serial interface (DSI) standardized by a
Mobile Industry Processor Interface (MIPI) or an embedded display
port (eDP) standardized by a Video Electronics Standards
Association (VESA).
20. The data processing apparatus of claim 17, wherein pixel data
of each pixel of the picture includes a plurality of bits
corresponding to different bit planes, and the de-mapper is
configured to obtain a plurality of bit groups from the
de-compressed pixel data groups, respectively, and merge the bit
groups to obtain the bits of the pixel data of each pixel of the
picture.
21. The data processing apparatus of claim 17, wherein the
de-mapper is configured to obtain pixel data of a plurality of
pixel groups from the de-compressed pixel data groups,
respectively, and merge the pixel data of the pixel groups to
obtain the pixel data of the pixels of the picture.
22. The data processing apparatus of claim 21, wherein adjacent
pixels located at a same pixel line of the picture are obtained
from different pixel groups, respectively.
23. The data processing apparatus of claim 21, wherein adjacent
pixel segments located at a same pixel line of the picture are
obtained from different pixel groups, respectively, and each of the
adjacent pixel segments includes a plurality of successive
pixels.
24. The data processing apparatus of claim 23, wherein at least one
pixel line of the picture is obtained by merging a plurality of
pixel segments, and a number of the pixel segments is equal to a
number of the de-compressed pixel data groups.
25. The data processing apparatus of claim 23, wherein at least one
pixel line of the picture is obtained by merging a plurality of
pixel segments, and a number of the pixel segments is larger than a
number of the de-compressed pixel data groups.
26. The data processing apparatus of claim 17, wherein the data
processing apparatus is coupled to another data processing
apparatus via the display interface; and the data processing
apparatus receives a pixel data grouping setting of splitting the
pixel data of the pixels of the picture from the another data
processing apparatus.
27. The data processing apparatus of claim 17, wherein the data
processing apparatus is coupled to another data processing
apparatus via the display interface, and the data processing
apparatus further comprises: a controller, configured to inform the
another data processing apparatus of a de-compression capability
and requirement of the data processing apparatus.
28. A data processing method, comprising: receiving pixel data of a
plurality of pixels of a picture, and splitting the pixel data of
the pixels of the picture into a plurality of pixel data groups;
compressing the pixel data groups to generate a plurality of
compressed pixel data groups, respectively; and packing the
compressed pixel data groups into at least one output bitstream,
and outputting the at least one output bitstream via a display
interface.
29. A data processing method, comprising: receiving at least one
input bitstream from a display interface, and un-packing the at
least one input bitstream into a plurality of compressed pixel data
groups of a picture; de-compressing the compressed pixel data
groups to generate a plurality of de-compressed pixel data groups,
respectively; and merging the de-compressed pixel data groups into
pixel data of a plurality of pixels of the picture.
Description
CROSS REFERENCE TO RELATED APPLICATIONS
[0001] This application claims the benefit of U.S. provisional
application No. 61/865,345, filed on Aug. 13, 2013 and incorporated
herein by reference.
BACKGROUND
[0002] The disclosed embodiments of the present invention relate to
transmitting and receiving data over a display interface, and more
particularly, to a data processing apparatus for
transmitting/receiving compressed pixel data groups of a picture
over a display interface and a related data processing method.
[0003] A display interface is disposed between a first chip and a
second chip to transmit display data from the first chip to the
second chip for further processing. For example, the first chip may
be a host application processor, and the second chip may be a
driver integrated circuit (IC). The display data may be single view
data for two-dimensional (2D) display or multiple view data for
three-dimensional (3D) display. When a display panel supports a
higher display resolution, 2D/3D display with higher resolution can
be realized. Hence, the display data transmitted over the display
interface would have a larger data size/data rate, which increases
the power consumption of the display interface inevitably. If the
host application processor and the driver IC are both located at a
portable device (e.g., a smartphone) powered by a battery device,
the battery life is shortened due to the increased power
consumption of the display interface. Thus, there is a need for an
innovative design which can effectively reduce the power
consumption of the display interface.
SUMMARY
[0004] In accordance with exemplary embodiments of the present
invention, a data processing apparatus for transmitting/receiving
compressed pixel data groups of a picture over a display interface
and a related data processing method are proposed.
[0005] According to a first aspect of the present invention, an
exemplary data processing apparatus is disclosed. The exemplary
data processing apparatus includes a mapper, a plurality of
compressors, and an output interface. The mapper is configured to
receive pixel data of a plurality of pixels of a picture, and split
the pixel data of the pixels of the picture into a plurality of
pixel data groups. The compressors are configured to compress the
pixel data groups and generate a plurality of compressed pixel data
groups, respectively. The output interface is configured to pack
the compressed pixel data groups into at least one output
bitstream, and output the at least one output bitstream via a
display interface.
[0006] According to a second aspect of the present invention, an
exemplary data processing apparatus is disclosed. The exemplary
data processing apparatus includes an input interface, a plurality
of de-compressors, and a de-mapper. The input interface is
configured to receive at least one input bitstream from a display
interface, and un-pack the at least one input bitstream into a
plurality of compressed pixel data groups of a picture. The
de-compressors are configured to de-compress the compressed pixel
data groups and generate a plurality of de-compressed pixel data
groups, respectively. The de-mapper is configured to merge the
de-compressed pixel data groups into pixel data of a plurality of
pixels of the picture.
[0007] According to a third aspect of the present invention, an
exemplary data processing method is disclosed. The exemplary data
processing method includes: receiving pixel data of a plurality of
pixels of a picture, and splitting the pixel data of the pixels of
the picture into a plurality of pixel data groups; compressing the
pixel data groups to generate a plurality of compressed pixel data
groups, respectively; and packing the compressed pixel data groups
into at least one output bitstream, and outputting the at least one
output bitstream via a display interface.
[0008] According to a fourth aspect of the present invention, an
exemplary data processing method is disclosed. The exemplary data
processing method includes: receiving at least one input bitstream
from a display interface, and un-packing the at least one input
bitstream into a plurality of compressed pixel data groups of a
picture; de-compressing the compressed pixel data groups to
generate a plurality of de-compressed pixel data groups,
respectively; and merging the de-compressed pixel data groups into
pixel data of a plurality of pixels of the picture.
[0009] These and other objectives of the present invention will no
doubt become obvious to those of ordinary skill in the art after
reading the following detailed description of the preferred
embodiment that is illustrated in the various figures and
drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
[0010] FIG. 1 is a block diagram illustrating a data processing
system according to an embodiment of the present invention.
[0011] FIG. 2 is a diagram illustrating a pixel data splitting
operation performed by a mapper based on a first pixel data
grouping design.
[0012] FIG. 3 is a diagram illustrating a pixel data merging
operation performed by a de-mapper based on the first pixel data
grouping design.
[0013] FIG. 4 is a diagram illustrating a pixel data splitting
operation performed by a mapper based on a second pixel data
grouping design.
[0014] FIG. 5 is a diagram illustrating a pixel data merging
operation performed by a de-mapper based on the second pixel data
grouping design.
[0015] FIG. 6 is a diagram illustrating a first pixel section based
pixel data grouping design according to an embodiment of the
present invention.
[0016] FIG. 7 is a diagram illustrating a second pixel section
based pixel data grouping design according to an embodiment of the
present invention.
[0017] FIG. 8 is a flowchart illustrating a control and data flow
of the data processing system shown in FIG. 1 according to an
embodiment of the present invention.
[0018] FIG. 9 is a diagram illustrating a position-aware rate
control mechanism according to an embodiment of the present
invention.
[0019] FIG. 10 is a diagram illustrating an alternative design of
step 806 in FIG. 8.
[0020] FIG. 11 is a diagram illustrating a modified compression
mechanism according to an embodiment of the present invention.
[0021] FIG. 12 is a diagram illustrating an alternative design of
step 808 in FIG. 8.
DETAILED DESCRIPTION
[0022] Certain terms are used throughout the description and
following claims to refer to particular components. As one skilled
in the art will appreciate, manufacturers may refer to a component
by different names. This document does not intend to distinguish
between components that differ in name but not function. In the
following description and in the claims, the terms "include" and
"comprise" are used in an open-ended fashion, and thus should be
interpreted to mean "include, but not limited to . . . ". Also, the
term "couple" is intended to mean either an indirect or direct
electrical connection. Accordingly, if one device is coupled to
another device, that connection may be through a direct electrical
connection, or through an indirect electrical connection via other
devices and connections.
[0023] The present invention proposes applying data compression to
a display data and then transmitting a compressed display data over
a display interface. As the data size/data rate of the compressed
display data is smaller than that of the original un-compressed
display data, the power consumption of the display interface is
reduced correspondingly. However, there may be a throughput
bottleneck for a compression/de-compression system due to long data
dependency of previous compressed/reconstructed data. To minimize
or eliminate the throughput bottleneck of the
compression/de-compression system, the present invention further
proposes a data parallelism design. For example, the rate control
intends to optimally or sub-optimally adjust the bit rate of each
compression unit so as to achieve content-aware bit budget
allocation and therefore improve the visual quality.
rate control generally suffers from the long data dependency. When
the proposed data parallelism design is employed, there will be a
compromise between the processing throughput and the rate control
performance. It should be noted that the proposed data parallelism
design is not limited to enhancement of the rate control; any
compression/de-compression system using the proposed data
parallelism design falls within the scope of the present invention.
Further details are described below.
[0024] FIG. 1 is a block diagram illustrating a data processing
system according to an embodiment of the present invention. The
data processing system 100 includes a plurality of data processing
apparatuses such as an application processor 102 and a driver
integrated circuit (IC) 104. The application processor 102 and the
driver IC 104 may be implemented in different chips, and the
application processor 102 may communicate with the driver IC 104
via a display interface 103. In this embodiment, the display
interface 103 may be a display serial interface (DSI) standardized
by a Mobile Industry Processor Interface (MIPI) or an embedded
display port (eDP) standardized by a Video Electronics Standards
Association (VESA).
[0025] The application processor 102 is coupled between a data
source 105 and the display interface 103, and supports compressed
data transmission. The application processor 102 receives an input
display data from the external data source 105, where the input
display data may be image data or video data that includes pixel
data DI of a plurality of pixels of a picture to be processed. By
way of example, but not limitation, the data source 105 may be a
camera sensor, a memory card or a wireless receiver. As shown in
FIG. 1, the application processor 102 includes a display controller
111, an output interface 112 and a processing circuit 113. The
processing circuit 113 includes circuit elements required for
processing the pixel data DI to generate a plurality of compressed
pixel data groups (e.g., two compressed pixel data groups DG.sub.1'
and DG.sub.2' in this embodiment). For example, the processing
circuit 113 has a mapper 114, a plurality of compressors (e.g., two
compressors 115_1 and 115_2 in this embodiment), a rate controller
116, and other circuitry 117, where the other circuitry 117 may
have a display processor, additional image processing element(s),
etc. The display processor may perform image processing operations,
including scaling, rotating, etc. For example, the input display
data provided by the data source 105 may be bypassed or processed
by the additional image processing element(s) located before the
display processor to generate a source display data, and then the
display processor may process the source display data to generate
the pixel data DI to the mapper 114. In other words, the pixel data
DI to be processed by the mapper 114 may be directly provided from
the data source 105 or indirectly obtained from the input display
data provided by the data source 105. The present invention has no
limitation on the source of the pixel data DI.
[0026] The mapper 114 acts as a splitter, and is configured to
receive the pixel data DI of one picture and split the pixel data
DI of one picture into a plurality of pixel data groups (e.g., two
pixel data groups DG.sub.1 and DG.sub.2 in this embodiment)
according to a pixel data group setting DG.sub.SET. Further details
of the mapper 114 will be described later. Since the pixel data DI
is split into two pixel data groups DG.sub.1 and DG.sub.2, two
compressors 115_1 and 115_2 are selected from multiple compressors
implemented in the processing circuit 113, and enabled to compress
the pixel data groups DG.sub.1 and DG.sub.2 to generate compressed
pixel data groups DG.sub.1' and DG.sub.2', respectively. In other
words, the number of enabled compressors depends on the number of
pixel data groups.
[0027] Each of the compressors 115_1 and 115_2 may employ a
lossless compression algorithm or a lossy compression algorithm,
depending upon the actual design consideration. The rate controller
116 is configured to apply bit rate control (i.e., bit budget
allocation) to the compressors 115_1 and 115_2, respectively. In
this way, each of the compressed pixel data groups DG.sub.1' and
DG.sub.2' is generated at a desired bit rate. In this embodiment,
compression operations performed by the compressors 115_1 and 115_2
are independent of each other, thus enabling rate control with data
parallelism. Since the long data dependency is alleviated, the rate
control performance can be improved.
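The independent, per-group compression described above can be sketched as follows. This is an illustrative assumption, not the patent's method: the patent does not fix a specific codec, so zlib (a lossless compressor) stands in for the compressors 115_1 and 115_2, and the compression level stands in for the bit budget set by the rate controller 116.

```python
import zlib

def compress_groups(pixel_data_groups, levels):
    """Compress each pixel data group independently.

    Stand-in sketch: zlib's compression level models per-compressor
    bit rate control; the actual codec is left open by the patent.
    """
    # No compressor reads state produced by another compressor, so
    # these calls could run in parallel without changing the output.
    return [zlib.compress(group, level)
            for group, level in zip(pixel_data_groups, levels)]

# Two pixel data groups (DG1, DG2) as produced by the mapper.
dg1 = bytes(range(256)) * 8
dg2 = bytes(reversed(range(256))) * 8
dg1_c, dg2_c = compress_groups([dg1, dg2], levels=[6, 6])

# Lossless round trip holds for each group on its own.
assert zlib.decompress(dg1_c) == dg1
assert zlib.decompress(dg2_c) == dg2
```

Because each group is compressed with no reference to the other, removing the inter-group data dependency is what permits the rate control and throughput gains described above.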
[0028] The output interface 112 is configured to pack/packetize the
compressed pixel data groups DG.sub.1' and DG.sub.2' into at least
one output bitstream according to the transmission protocol of the
display interface 103, and transmit the at least one output
bitstream to the driver IC 104 via the display interface 103. By
way of example, one bitstream BS may be generated from the
application processor 102 to the driver IC 104 via one display port
of the display interface 103.
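The pack/un-pack step can be illustrated with a hypothetical length-prefixed framing; the actual DSI/eDP packet formats are defined by the MIPI and VESA standards and are not reproduced here, so the header layout below is purely an assumption for illustration.

```python
import struct

def pack_groups(compressed_groups):
    """Pack compressed pixel data groups into one output bitstream.

    Hypothetical framing: a 1-byte group index plus a 4-byte
    big-endian length precede each payload.
    """
    stream = bytearray()
    for idx, payload in enumerate(compressed_groups):
        stream += struct.pack(">BI", idx, len(payload))
        stream += payload
    return bytes(stream)

def unpack_groups(stream):
    """Inverse of pack_groups: recover the compressed groups in order."""
    groups = {}
    pos = 0
    while pos < len(stream):
        idx, length = struct.unpack_from(">BI", stream, pos)
        pos += 5  # size of the ">BI" header
        groups[idx] = stream[pos:pos + length]
        pos += length
    return [groups[i] for i in sorted(groups)]

bs = pack_groups([b"group-one", b"group-two"])
assert unpack_groups(bs) == [b"group-one", b"group-two"]
```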
[0029] Regarding the driver IC 104, it communicates with the
application processor 102 via the display interface 103. In this
embodiment, the driver IC 104 is coupled between the display
interface 103 and a display panel 106, and supports compressed data
reception. By way of example, the display panel 106 may be
implemented using any 2D/3D display device. When the application
processor 102 transmits compressed display data (e.g., compressed
pixel data groups DG.sub.1' and DG.sub.2' packed in the bitstream
BS) to the driver IC 104, the driver IC 104 is configured to
receive the compressed display data from the display interface 103
and drive the display panel 106 according to de-compressed display
data derived from de-compressing the compressed display data.
[0030] As shown in FIG. 1, the driver IC 104 includes a driver IC
controller 121, an input interface 122 and a processing circuit
123. The input interface 122 is configured to receive at least one
input bitstream from the display interface 103 (e.g., the bitstream
BS received by one display port of the display interface 103), and
un-pack/un-packetize the at least one input bitstream into a
plurality of compressed pixel data groups of a picture (e.g., two
compressed pixel data groups DG.sub.3' and DG.sub.4' in this
embodiment). It should be noted that, if there is no error
introduced during the data transmission, the compressed pixel data
group DG.sub.3' generated from the input interface 122 should be
identical to the compressed pixel data group DG.sub.1' received by
the output interface 112, and the compressed pixel data group
DG.sub.4' generated from the input interface 122 should be
identical to the compressed pixel data group DG.sub.2' received by
the output interface 112.
[0031] The processing circuit 123 may include circuit elements
required for driving the display panel 106. For example, the
processing circuit 123 has a de-mapper 124, a plurality of
de-compressors (e.g., two de-compressors 125_1 and 125_2 in this
embodiment), and other circuitry 127, where the other circuitry 127
may have a display buffer, additional image processing element(s),
etc. The de-compressor 125_1 is configured to de-compress the
compressed pixel data group DG.sub.3' to generate a de-compressed
pixel data group DG.sub.3, and the de-compressor 125_2 is
configured to de-compress the compressed pixel data group DG.sub.4'
to generate a de-compressed pixel data group DG.sub.4. In this
embodiment, the de-compression operations performed by the
de-compressors 125_1 and 125_2 are independent of each other. In
this way, the de-compression throughput is improved due to data
parallelism.
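A minimal sketch of this data-parallel de-compression follows, again with zlib standing in for the unspecified de-compression algorithm; the thread pool is only one possible way to exploit the independence of the de-compressors 125_1 and 125_2.

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

def decompress_groups_parallel(compressed_groups):
    """De-compress the compressed pixel data groups concurrently.

    Because no de-compressor depends on another group's output, each
    group can be handed to a separate worker.
    """
    with ThreadPoolExecutor(max_workers=len(compressed_groups)) as pool:
        return list(pool.map(zlib.decompress, compressed_groups))

# Two compressed pixel data groups (DG3', DG4') from the input interface.
dg3_c = zlib.compress(b"\x00\x01" * 1000)
dg4_c = zlib.compress(b"\xff\xfe" * 1000)
dg3, dg4 = decompress_groups_parallel([dg3_c, dg4_c])
assert dg3 == b"\x00\x01" * 1000
assert dg4 == b"\xff\xfe" * 1000
```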
[0032] The de-compression algorithm employed by each of the
de-compressors 125_1 and 125_2 should be properly configured to
match the compression algorithm employed by each of the compressors
115_1 and 115_2. In other words, the de-compressors 125_1 and 125_2
are configured to perform lossless de-compression when the
compressors 115_1 and 115_2 are configured to perform lossless
compression; and the de-compressors 125_1 and 125_2 are configured
to perform lossy de-compression when the compressors 115_1 and
115_2 are configured to perform lossy compression. If there is no
error introduced during the data transmission and a lossless
compression algorithm is employed by the compressors 115_1 and
115_2, the de-compressed pixel data group DG.sub.3 fed into the
de-mapper 124 should be identical to the pixel data group DG.sub.1
generated from the mapper 114, and the de-compressed pixel data
group DG.sub.4 fed into the de-mapper 124 should be identical to
the pixel data group DG.sub.2 generated from the mapper 114.
[0033] The de-mapper 124 acts as a combiner, and is configured to
merge the de-compressed pixel data groups into pixel data DO of a
plurality of pixels of a reconstructed picture based on the pixel
data grouping setting DG.sub.SET that is employed by the mapper
114. The pixel data grouping setting DG.sub.SET employed by the
mapper 114 may be transmitted from the application processor 102 to
the driver IC 104 via an in-band channel (i.e., display interface
103) or an out-of-band channel 107 (e.g., an I.sup.2C
(Inter-Integrated Circuit) bus). Specifically, the display
controller 111 controls the operation of the application processor
102, and the driver IC controller 121 controls the operation of the
driver IC 104. Hence, the display controller 111 may first check a
de-compression capability and requirement of the driver IC 104, and
then determine the number of pixel data groups in response to a
checking result. In addition, the display controller 111 may
further determine the pixel data grouping setting DG.sub.SET
employed by the mapper 114 to generate the pixel data groups that
satisfy the de-compression capability and requirement of the driver
IC 104, and transmit the pixel data grouping setting DG.sub.SET
over the display interface 103 or the out-of-band channel 107. When
receiving a query issued from the display controller 111, the
driver IC controller 121 may inform the display controller 111 of
the de-compression capability and requirement of the driver IC 104.
In addition, when receiving the pixel data grouping setting
DG.sub.SET from the display interface 103 or the out-of-band channel 107,
the driver IC controller 121 may control the de-mapper 124 to
perform the pixel data merging operation based on the received
pixel data grouping setting DG.sub.SET.
[0034] The present invention proposes several pixel data grouping
designs that can be used to split pixel data of a plurality of
pixels of one picture into multiple pixel data groups. Examples of
the proposed pixel data grouping designs are detailed as below.
[0035] In a first pixel data grouping design, the mapper 114 splits
the pixel data DI of pixels of one picture by dividing bit
depths/bit planes into different groups. FIG. 2 is a diagram
illustrating a pixel data splitting operation performed by the
mapper 114 based on the first pixel data grouping design. As shown
in FIG. 2, the width of a picture 200 is W, and the height of the
picture 200 is H. Thus, the picture 200 has W.times.H pixels 201.
In this embodiment, pixel data of each pixel 201 has a plurality of
bits corresponding to different bit planes. For example, each pixel
201 has 12 bits B.sub.0-B.sub.11 for each color channel R/G/B. The
bits B.sub.0-B.sub.11 correspond to different bit planes
Bit-plane[0]-Bit-plane[11]. Specifically, the least significant bit
(LSB) B.sub.0 corresponds to the bit plane Bit-plane[0], and the
most significant bit (MSB) B.sub.11 corresponds to the bit plane
Bit-plane[11]. When the first pixel data grouping design is
employed, the display controller 111 controls the pixel data
grouping setting DG.sub.SET to instruct the mapper 114 to split
bits of the pixel data of each pixel into a plurality of bit groups
(e.g., two bit groups BG.sub.1 and BG.sub.2 in this embodiment),
and distribute the bit groups to the pixel data groups (e.g., pixel
data groups DG.sub.1 and DG.sub.2 in this embodiment),
respectively. Concerning bits B.sub.0-B.sub.11 of color channels R,
G, B of each pixel 201, the mapper 114 may categorize even bits
B.sub.0, B.sub.2, B.sub.4, B.sub.6, B.sub.8, B.sub.10 as one bit
group BG.sub.1, and categorize odd bits B.sub.1, B.sub.3, B.sub.5,
B.sub.7, B.sub.9, B.sub.11 as another bit group BG.sub.2. However,
this is for illustrative purposes only, and is not meant to be a
limitation of the present invention. In an alternative design, the
mapper 114 may categorize more significant bits B.sub.6-B.sub.11 as
one bit group BG.sub.1, and categorize less significant bits
B.sub.0-B.sub.5 as another bit group BG.sub.2. In short, any bit
interleaving manner capable of splitting bits of pixel data of each
pixel 201 of the picture 200 into multiple bit groups may be
employed by the mapper 114.
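As a minimal sketch of the even/odd bit interleaving described above (the function name and the fixed 12-bit depth per channel are illustrative assumptions, not part of the disclosure):

```python
def split_bit_groups(channel12):
    """Even/odd bit interleaving per the first grouping design:
    even bit planes B0, B2, ..., B10 go to bit group BG1 and odd
    bit planes B1, B3, ..., B11 go to bit group BG2 (12 bits
    per color channel assumed)."""
    bg1 = bg2 = 0
    for i in range(6):
        bg1 |= ((channel12 >> (2 * i)) & 1) << i      # even bits -> BG1
        bg2 |= ((channel12 >> (2 * i + 1)) & 1) << i  # odd bits -> BG2
    return bg1, bg2
```

Applying this to each color channel R/G/B of every pixel yields the two pixel data groups DG.sub.1 and DG.sub.2.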
[0036] As mentioned above, the pixel data groups DG.sub.1 and
DG.sub.2 are transmitted from the application processor 102 to the
driver IC 104 after undergoing data compression. Hence, the driver
IC 104 obtains one de-compressed pixel data group DG.sub.3
corresponding to the pixel data group DG.sub.1 and another
de-compressed pixel data group DG.sub.4 corresponding to the pixel
data group DG.sub.2 after data de-compression is performed. FIG. 3
is a diagram illustrating a pixel data merging operation performed
by the de-mapper 124 based on the first pixel data grouping design.
The operation of the de-mapper 124 may be regarded as an inverse of
the operation of the mapper 114. Hence, based on the pixel data
grouping setting DG.sub.SET employed by the mapper 114, the
de-mapper 124 obtains a plurality of bit groups (e.g., two bit
groups BG.sub.1 and BG.sub.2 in this embodiment) from the
de-compressed pixel data groups (e.g., two de-compressed pixel data
groups DG.sub.3 and DG.sub.4 in this embodiment), respectively, and
merges the bit groups to obtain bits of pixel data of each pixel
201' of a reconstructed picture 200'. The resolution of the
reconstructed picture 200' generated at the driver IC 104 is
identical to the resolution of the picture 200 processed in the
application processor 102. Hence, the width of the reconstructed
picture 200' is W, and the height of the reconstructed picture 200'
is H. The pixel data of each pixel 201' of the reconstructed
picture 200' includes a plurality of bits B.sub.0-B.sub.11
corresponding to different bit planes Bit-plane[0]-Bit-plane[11].
For example, each color channel R/G/B of one pixel 201' in the
reconstructed picture 200' includes 12 bits B.sub.0-B.sub.11. The
de-mapper 124 may obtain the bit group BG.sub.1 composed of even
bits B.sub.0, B.sub.2, B.sub.4, B.sub.6, B.sub.8, B.sub.10 of color
channels R, G, B of a pixel 201', obtain another bit group BG.sub.2
composed of odd bits B.sub.1, B.sub.3, B.sub.5, B.sub.7, B.sub.9,
B.sub.11 of color channels R, G, B of the pixel 201', and merge the
bit groups BG.sub.1 and BG.sub.2 to recover all bits
B.sub.0-B.sub.11 of the pixel data of the pixel 201'. However, this
is for illustrative purposes only, and is not meant to be a
limitation of the present invention. In another case where the
mapper 114 categorizes more significant bits B.sub.6-B.sub.11 as
one bit group BG.sub.1 and categorizes less significant bits
B.sub.0-B.sub.5 as another bit group BG.sub.2, the de-mapper 124
may obtain the bit group BG.sub.1 composed of more significant bits
B.sub.6-B.sub.11 of color channels R, G, B of a pixel 201', obtain
another bit group BG.sub.2 composed of less significant bits
B.sub.0-B.sub.5 of color channels R, G, B of the pixel 201', and
merge the bit groups BG.sub.1 and BG.sub.2 to recover all bits
B.sub.0-B.sub.11 of the pixel data of the pixel 201'. To put it
simply, the bit de-interleaving manner employed by the de-mapper
124 depends on the bit interleaving manner employed by the mapper
114.
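The bit de-interleaving at the de-mapper side can be sketched as the exact inverse of the even/odd split (the function name and the 12-bit depth are illustrative assumptions):

```python
def merge_bit_groups(bg1, bg2):
    """Inverse of the even/odd bit split: interleave a 6-bit even
    group BG1 and a 6-bit odd group BG2 back into one 12-bit
    channel value."""
    value = 0
    for i in range(6):
        value |= ((bg1 >> i) & 1) << (2 * i)      # even bit planes
        value |= ((bg2 >> i) & 1) << (2 * i + 1)  # odd bit planes
    return value
```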
[0037] In a second pixel data grouping design, the mapper 114
splits the pixel data DI of pixels of one picture by dividing
complete pixels into different groups. FIG. 4 is a diagram
illustrating a pixel data splitting operation performed by the
mapper 114 based on the second pixel data grouping design. As shown
in FIG. 4, the width of a picture 400 is W, and the height of the
picture 400 is H. Thus, the picture 400 has W.times.H pixels. As
shown in FIG. 4, pixels located at the same pixel line (e.g., the
same pixel row in this embodiment) include a plurality of pixels
P.sub.0, P.sub.1, P.sub.2, P.sub.3 . . . P.sub.W-2, P.sub.W-1. When
the second pixel data grouping design is employed, the display
controller 111 controls the pixel data grouping setting DG.sub.SET
to instruct the mapper 114 to split pixels of the picture 400 into
a plurality of pixel groups (e.g., two pixel groups PG.sub.1 and
PG.sub.2 in this embodiment), and distribute pixel data of the
pixel groups to the pixel data groups (e.g., two pixel data groups
DG.sub.1 and DG.sub.2 in this embodiment), respectively. For
example, adjacent pixels located at the same pixel line (e.g., the
same pixel row) are distributed to different groups, respectively.
Hence, the pixel group PG.sub.1 includes all pixels of even pixel
columns C.sub.0, C.sub.2 . . . C.sub.W-2 of the picture 400, and
the pixel group PG.sub.1 includes all pixels of the odd pixel
columns C.sub.1, C.sub.3 . . . C.sub.W-1 of the picture 400. As
shown in FIG. 4, the pixel data group DG.sub.1 includes pixel data
of H.times.(W/2) pixels, and the pixel data group DG.sub.2 includes
pixel data of H.times.(W/2) pixels. However, this is for
illustrative purposes only, and is not meant to be a limitation of
the present invention. In an alternative design, the aforementioned
pixel line may be a pixel column. Hence, adjacent pixels located at
the same pixel column are distributed to different groups,
respectively. In this case, the pixel group PG.sub.1 may include all
pixels of even pixel rows of the picture 400, and the pixel group
PG.sub.2 may include all pixels of the odd pixel rows of the
picture 400. In other words, the pixel data group DG.sub.1 may be
formed by gathering pixel data of (H/2).times.W pixels, and the
pixel data group DG.sub.2 may be formed by gathering pixel data of
(H/2).times.W pixels. To put it simply, any pixel interleaving
manner capable of splitting adjacent pixels of the picture 400 into
different pixel groups may be employed by the mapper 114.
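A minimal sketch of the single-pixel column interleaving above, assuming a picture represented as a list of rows (the function name is hypothetical):

```python
def split_columns(picture):
    """Distribute adjacent pixels of each row to different groups:
    PG1 takes the even columns C0, C2, ... and PG2 takes the odd
    columns C1, C3, ... of every row."""
    pg1 = [row[0::2] for row in picture]  # even columns
    pg2 = [row[1::2] for row in picture]  # odd columns
    return pg1, pg2
```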
[0038] As mentioned above, the pixel data groups DG.sub.1 and
DG.sub.2 are transmitted from the application processor 102 to the
driver IC 104 after undergoing data compression. Hence, the driver
IC 104 obtains one de-compressed pixel data group DG.sub.3
corresponding to the pixel data group DG.sub.1 and another
de-compressed pixel data group DG.sub.4 corresponding to the pixel
data group DG.sub.2 after data de-compression is performed. FIG. 5
is a diagram illustrating a pixel data merging operation performed
by the de-mapper 124 based on the second pixel data grouping
design. The operation of the de-mapper 124 may be regarded as an
inverse of the operation of the mapper 114. Hence, based on the
pixel data grouping setting DG.sub.SET employed by the mapper 114,
the de-mapper 124 obtains pixel data of a plurality of pixel groups
(e.g., two pixel groups PG.sub.1 and PG.sub.2 in this embodiment)
from the de-compressed pixel data groups (e.g., two pixel data
groups DG.sub.3 and DG.sub.4 in this embodiment), respectively, and
merges the pixel data of the pixel groups to obtain pixel data of
pixels of a reconstructed picture 400', where adjacent pixels
located at the same pixel line (e.g., the same pixel row) of the
reconstructed picture 400' are obtained from different pixel
groups, respectively. However, this is for illustrative purposes
only, and is not meant to be a limitation of the present invention.
In another case where the mapper 114 distributes adjacent pixels
located at the same pixel column to different groups, respectively,
the de-mapper 124 may obtain pixel data of a plurality of pixel
groups from the de-compressed pixel data groups, respectively, and
merge the pixel data of the pixel groups to obtain pixel data of
pixels of the reconstructed picture 400', where adjacent pixels
located at the same pixel column of the reconstructed picture 400'
are obtained from different pixel groups, respectively. To put it
simply, the pixel de-interleaving manner employed by the de-mapper
124 depends on the pixel interleaving manner employed by the mapper
114.
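The corresponding pixel de-interleaving at the de-mapper side can be sketched as follows (hypothetical helper, assuming the even/odd column split):

```python
def merge_columns(pg1, pg2):
    """Interleave two column-split pixel groups back into full rows:
    within each row, pixels alternate PG1, PG2, PG1, PG2, ..."""
    merged = []
    for r1, r2 in zip(pg1, pg2):
        row = []
        for a, b in zip(r1, r2):
            row += [a, b]  # restore even-column, odd-column order
        merged.append(row)
    return merged
```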
[0039] Regarding the second pixel data grouping design mentioned
above, the pixels are categorized into different pixel groups in a
single-pixel based manner. In one alternative design, the pixels
may be categorized into different pixel groups in a pixel section
based manner, where each pixel section includes a plurality of
successive pixels located at the same pixel line (e.g., the same
pixel row or the same pixel column). FIG. 6 is a diagram
illustrating a first pixel section based pixel data grouping design
according to an embodiment of the present invention. Each of the
pixel lines (e.g., pixel rows R.sub.0-R.sub.H-1 in this embodiment)
is divided into a plurality of pixel segments (e.g., two pixel
sections S.sub.1 and S.sub.2 in this embodiment), and the number of
the pixel segments located at the same pixel line is equal to the
number of pixel data groups (e.g., two pixel data groups DG.sub.1
and DG.sub.2 in this embodiment). Concerning the pixel data
splitting operation, adjacent pixel segments located at the same
pixel line (e.g., the same pixel row in this embodiment) are
distributed to different pixel groups (e.g., two pixel groups
PG.sub.1 and PG.sub.2 in this embodiment), respectively. Hence, as
shown in FIG. 6, the pixel group PG.sub.1 is composed of pixel
sections S.sub.1, each extracted from one of the pixel rows
R.sub.0-R.sub.H-1 of the picture 400, and the pixel group PG.sub.2
is composed of pixel sections S.sub.2, each extracted from one of the
pixel rows R.sub.0-R.sub.H-1 of the picture 400.
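A sketch of the FIG. 6 style grouping, under the illustrative assumption that each row is cut at W/2 into sections S.sub.1 and S.sub.2:

```python
def split_sections(picture):
    """Section-based grouping: each row is cut into a left section S1
    and a right section S2; PG1 collects every S1 and PG2 collects
    every S2 (boundary assumed at W/2 for illustration)."""
    mid = len(picture[0]) // 2
    pg1 = [row[:mid] for row in picture]  # sections S1
    pg2 = [row[mid:] for row in picture]  # sections S2
    return pg1, pg2
```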
[0040] Concerning the pixel data merging operation, adjacent pixel
segments located at the same pixel line (e.g., the same pixel row
in this embodiment) are obtained from different pixel groups (e.g.,
two pixel groups PG.sub.1 and PG.sub.2 in this embodiment),
respectively. Hence, as shown in FIG. 6, the reconstructed picture
400' has pixel rows R.sub.0-R.sub.H-1 each reconstructed by merging
one pixel section S.sub.1 obtained from the pixel group PG.sub.1
and another pixel section S.sub.2 obtained from the pixel group
PG.sub.2.
[0041] It should be noted that the aforementioned pixel line may be
a pixel column in another exemplary implementation. Therefore, each
of the pixel columns is divided into a plurality of pixel segments,
and the number of the pixel segments located at the same pixel
column is equal to the number of pixel data groups. Concerning the
pixel data splitting operation, adjacent pixel segments located at
the same pixel column are distributed to different pixel groups,
respectively. Concerning the pixel data merging operation, adjacent
pixel segments located at the same pixel column are obtained from
different pixel groups, respectively.
[0042] FIG. 7 is a diagram illustrating a second pixel section
based pixel data grouping design according to an embodiment of the
present invention. Each of the pixel lines (e.g., pixel rows
R.sub.0-R.sub.H-1 in this embodiment) is divided into a plurality
of pixel segments (e.g., four pixel sections S.sub.1, S.sub.2,
S.sub.3 and S.sub.4 in this embodiment), and the number of the
pixel segments located at the same pixel line is larger than the
number of pixel data groups (e.g., two pixel data groups DG.sub.1
and DG.sub.2 in this embodiment). Concerning the pixel data
splitting operation, adjacent pixel segments located at the same
pixel line (e.g., the same pixel row in this embodiment) are
distributed to different pixel groups (e.g., two pixel groups
PG.sub.1 and PG.sub.2 in this embodiment), respectively. Hence, as
shown in FIG. 7, the pixel group PG.sub.1 is composed of pixel
sections S.sub.1, each extracted from one of the pixel rows
R.sub.0-R.sub.H-1 of the picture 400, and pixel sections S.sub.3,
each extracted from one of the pixel rows R.sub.0-R.sub.H-1 of the
picture 400; and the pixel group PG.sub.2 is composed of pixel
sections S.sub.2, each extracted from one of the pixel rows
R.sub.0-R.sub.H-1 of the picture 400, and pixel sections S.sub.4,
each extracted from one of the pixel rows R.sub.0-R.sub.H-1 of the
picture 400. Concerning the pixel data merging operation, adjacent
pixel segments located at the same pixel line (e.g., the same pixel
row in this embodiment) are obtained from different pixel groups
(e.g., two pixel groups PG.sub.1 and PG.sub.2 in this embodiment),
respectively. Hence, as shown in FIG. 7, the reconstructed picture
400' has pixel rows R.sub.0-R.sub.H-1 each reconstructed by merging
pixel sections S.sub.1 and S.sub.3 both obtained from the pixel
group PG.sub.1 and pixel sections S.sub.2 and S.sub.4 both obtained
from the pixel group PG.sub.2.
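The FIG. 7 style grouping, where the number of sections per line exceeds the number of groups, can be sketched as follows (equal W/4 section lengths are an illustrative assumption):

```python
def split_four_sections(picture):
    """Each row is cut into four sections S1..S4; PG1 collects S1 and
    S3, PG2 collects S2 and S4, so adjacent sections land in
    different pixel groups."""
    q = len(picture[0]) // 4  # assumed section length W/4
    pg1 = [row[0:q] + row[2*q:3*q] for row in picture]    # S1 + S3
    pg2 = [row[q:2*q] + row[3*q:4*q] for row in picture]  # S2 + S4
    return pg1, pg2
```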
[0043] It should be noted that the aforementioned pixel line may be
a pixel column in another exemplary implementation. Therefore, each
of the pixel columns is divided into a plurality of pixel segments,
and the number of the pixel segments located at the same pixel
column is larger than the number of pixel data groups. Concerning
the pixel data splitting operation, adjacent pixel segments located
at the same pixel column are distributed to different pixel groups,
respectively. Concerning the pixel data merging operation, adjacent
pixel segments located at the same pixel column are obtained from
different pixel groups, respectively.
[0044] FIG. 8 is a flowchart illustrating a control and data flow
of the data processing system shown in FIG. 1 according to an
embodiment of the present invention. Provided that the result is
substantially the same, the steps are not required to be executed
in the exact order shown in FIG. 8. The exemplary control and data
flow may be briefly summarized by following steps.
[0045] Step 802: Check a de-compression capability and requirement
of a driver IC.
[0046] Step 803: Inform an application processor of the
de-compression capability and requirement.
[0047] Step 804: Determine a pixel data grouping setting according
to a checking result.
[0048] Step 806: Apply rate control to a plurality of compressors,
independently.
[0049] Step 808: Generate a plurality of compressed pixel data
groups by using the compressors to compress a plurality of pixel
data groups obtained from pixel data of a plurality of pixels of a
picture based on the pixel data grouping setting. For example, the
pixel data groups may be generated based on any of the proposed
pixel data grouping designs shown in FIG. 2, FIG. 4, FIG. 6 and
FIG. 7.
[0050] Step 810: Pack/packetize the compressed pixel data groups
into an output bitstream.
[0051] Step 812: Transmit the output bitstream via a display
interface.
[0052] Step 814: Transmit the pixel data grouping setting via an
in-band channel (i.e., display interface) or an out-of-band channel
(e.g., I.sup.2C bus).
[0053] Step 816: Receive the pixel data grouping setting from the
in-band channel (i.e., display interface) or the out-of-band
channel (e.g., I.sup.2C bus).
[0054] Step 818: Receive an input bitstream from the display
interface.
[0055] Step 820: Un-pack/un-packetize the input bitstream into a
plurality of compressed pixel data groups.
[0056] Step 822: Generate pixel data of a plurality of pixels of a
reconstructed picture by using a plurality of de-compressors to
de-compress the compressed pixel data groups, independently, and
then merging a plurality of de-compressed pixel data groups based
on the pixel data grouping setting.
[0057] It should be noted that steps 802 and 804-814 are performed
by the application processor (AP) 102, and steps 803 and 816-822
are performed by the driver IC 104. As a person skilled in the art
can readily understand details of each step shown in FIG. 8 after
reading above paragraphs, further description is omitted here for
brevity.
[0058] Moreover, the proposed data parallelism scheme may be
deactivated when a single compressor at the AP side and a single
de-compressor at the driver IC side are capable of meeting the
throughput requirement. For example, the application processor may
refer to the de-compression capability and requirement information
reported by the driver IC to determine the throughput M (pixels per
clock cycle) of one de-compressor in the driver IC and the target
throughput requirement N (pixels per clock cycle) of the display
panel driven by the driver IC. Assume that the throughput
of one compressor in the application processor is also M (pixels
per clock cycle). When N/M is not greater than one, a single
compressor at the AP side and a single de-compressor at the driver
IC side are capable of meeting the throughput requirement. Hence,
the proposed data parallelism scheme is deactivated, and the
conventional rate-controlled compression and de-compression is
performed. When N/M is greater than one, a single compressor at the
AP side and a single de-compressor at the driver IC side are unable
to meet the throughput requirement. Hence, the proposed data
parallelism scheme
is activated. In addition, the number of compressors enabled in the
application processor and the number of de-compressors enabled in
the driver IC may be determined based on the value of N/M.
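The N/M decision above can be sketched as follows (the function name is hypothetical; both throughputs are in pixels per clock cycle):

```python
import math

def codec_count(n_panel, m_codec):
    """Return the number of compressor/de-compressor pairs to enable:
    one pair when N/M <= 1 (data parallelism deactivated), otherwise
    ceil(N/M) pairs to meet the panel throughput requirement."""
    if n_panel <= m_codec:
        return 1  # a single pair meets the throughput requirement
    return math.ceil(n_panel / m_codec)
```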
[0059] The pixel data splitting operation performed by the mapper
114 is to generate multiple pixel data groups that will undergo
rate-controlled compression independently. However, it is possible
that pixel data of adjacent pixel lines (e.g., pixel rows or pixel
columns) in the original picture are categorized into different
pixel data groups. The rate control generally optimizes the bit
rate in terms of pixel context rather than pixel positions. The
pixel boundary may introduce artifacts since the rate control is
not aware of the boundary position. Taking the pixel data grouping
design shown in FIG. 6 for example, the rate control applied to the
pixel section S.sub.1 of the pixel row R.sub.0 is independent of
the rate control applied to the pixel section S.sub.2 of the same
pixel row R.sub.0. Specifically, the pixel section S.sub.1 is
compressed in an order from P.sub.0 to P.sub.M, and the pixel
section S.sub.2 is compressed in an order from P.sub.M+1 to
P.sub.W-1. Concerning the pixels P.sub.M and P.sub.M+1 on opposite
sides of the pixel boundary between pixel sections S.sub.1 and
S.sub.2, the pixel P.sub.M may be part of a compression unit with a
first bit budget allocation, and the pixel P.sub.M+1 may be part of
another compression unit with a second bit budget allocation
different from the first bit budget allocation. The difference
between the first bit budget allocation and the second bit budget
allocation may be large. As a result, the rate controller 116 may
allocate bit rates un-evenly on the pixel boundary, thus resulting
in degraded image quality on the pixel boundary in a reconstructed
picture. To avoid or mitigate the image quality degradation caused
by artifacts on the pixel boundary, the present invention further
proposes a position-aware rate control mechanism which optimizes
the bit budget allocation in terms of pixel positions.
[0060] FIG. 9 is a diagram illustrating a position-aware rate
control mechanism according to an embodiment of the present
invention. As shown in FIG. 9, there are compression units CU.sub.1
and CU.sub.2 on one side of a pixel boundary and compression units
CU.sub.3 and CU.sub.4 on the other side of the pixel boundary. The
compression units CU.sub.1 and CU.sub.2 belong to one pixel group
PG.sub.1, and the compression unit CU.sub.1 is nearer to the pixel
boundary than the compression unit CU.sub.2. The compression units
CU.sub.3 and CU.sub.4 belong to another pixel group PG.sub.2, and
the compression unit CU.sub.3 is nearer to the pixel boundary than
the compression unit CU.sub.4. In one exemplary embodiment, each of
the compression units CU.sub.1-CU.sub.4 may include 4.times.2
pixels, and the compression units CU.sub.1-CU.sub.4 may be
horizontally or vertically adjacent in a picture. When the
position-aware rate control mechanism is activated, the rate
controller 116 may be configured to adjust the bit rate control
according to a position of each pixel boundary between different
pixel groups. For example, the rate controller 116 increases an
original bit budget BBori_CU.sub.1 assigned to the compression unit
CU.sub.1 by an adjustment value .DELTA.1 (.DELTA.1>0) to thereby
determine a final bit budget BBtar_CU.sub.1, and decreases an
original bit budget BBori_CU.sub.2 assigned to the compression unit
CU.sub.2 by the adjustment value .DELTA.1 to thereby determine a
final bit budget BBtar_CU.sub.2. In addition, the rate controller
116 increases an original bit budget BBori_CU.sub.3 assigned to the
compression unit CU.sub.3 by an adjustment value .DELTA.2
(.DELTA.2>0) to thereby determine a final bit budget
BBtar_CU.sub.3, and decreases an original bit budget BBori_CU.sub.4
assigned to the compression unit CU.sub.4 by the adjustment value
.DELTA.2 to thereby determine a final bit budget BBtar_CU.sub.4.
The adjustment value .DELTA.2 may be equal to or different from the
adjustment value .DELTA.1, depending upon actual design
considerations. Since the proposed position-aware rate control tends
to set a larger bit budget near the pixel boundary, the artifacts
on the pixel boundary can be reduced. In this way, the image
quality around the pixel boundary in a reconstructed picture can be
improved.
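The budget adjustment above can be sketched for one pixel group as a zero-sum transfer of .DELTA. bits toward the boundary (names are illustrative):

```python
def position_aware_budgets(bb_near, bb_far, delta):
    """Position-aware rate control for one pixel group: move `delta`
    bits of budget from the compression unit far from the pixel
    boundary (e.g., CU2) to the unit near it (e.g., CU1), keeping
    the group's total budget unchanged."""
    assert delta > 0
    return bb_near + delta, bb_far - delta
```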
[0061] In a case where the position-aware rate control is employed,
the flow shown in FIG. 8 may be modified to have step 806 replaced
with the following step shown in FIG. 10.
[0062] Step 1002: Apply rate control to a plurality of compressors
according to pixel boundary positions, independently.
[0063] As a person skilled in the art can readily understand
details of step 1002 after reading above paragraphs, further
description is omitted here for brevity.
[0064] Taking the pixel data grouping design shown in FIG. 6 for
example, the rate control applied to the pixel section S.sub.1 of
the pixel row R.sub.0 is independent of the rate control applied to
the pixel section S.sub.2 of the same pixel row R.sub.0. The pixel
section S.sub.1 is compressed in an order from P.sub.0 to P.sub.M,
and the pixel section S.sub.2 is compressed in an order from
P.sub.M+1 to P.sub.W-1. As a result, the bit budget allocation
condition for the pixel P.sub.M (which is the last compressed pixel
in the pixel section S.sub.1) may be different from the bit budget
allocation condition for the pixel P.sub.M+1 (which is the first
compressed pixel in the pixel section S.sub.2). To avoid or reduce
artifacts on the pixel boundary, the present invention further
proposes a modified compression mechanism with compression orders
set based on pixel boundary positions. FIG. 11 is a diagram
illustrating a modified compression mechanism according to an
embodiment of the present invention. As shown in FIG. 11, there are
compression units CU.sub.1 and CU.sub.2 on one side of a pixel
boundary and compression units CU.sub.3 and CU.sub.4 on the other
side of the pixel boundary. The compression units CU.sub.1 and CU.sub.2
belong to one pixel group PG.sub.1, and the compression unit
CU.sub.1 is nearer to the pixel boundary than the compression unit
CU.sub.2. The compression units CU.sub.3 and CU.sub.4 belong to
another pixel group PG.sub.2, and the compression unit CU.sub.3 is
nearer to the pixel boundary than the compression unit CU.sub.4. In
one exemplary embodiment, each of the compression units
CU.sub.1-CU.sub.4 may include 4.times.2 pixels, and the compression
units CU.sub.1-CU.sub.4 may be horizontally or vertically adjacent
in a picture. When the modified compression mechanism is activated,
each of the compressors 115_1 and 115_2 may be configured to set a
compression order according to a position of each pixel boundary
between different pixel groups. For example, the compressor 115_1
compresses the compression unit CU.sub.1 prior to compressing the
compression unit CU.sub.2, and the compressor 115_2 compresses the
compression unit CU.sub.3 prior to compressing the compression unit
CU.sub.4. In other words, two adjacent pixel sections located at
the same pixel line are compressed in opposite compression orders.
Since the modified compression scheme starts the compression from
compression units near the pixel boundary between adjacent pixel
groups, the bit budget allocation conditions near the pixel
boundary may be more similar. In this way, the image quality around
the pixel boundary in a reconstructed picture can be improved. When
the modified compression mechanism is activated at the AP side, the
de-mapper 124 at the driver IC side may be configured to further
consider the compression orders when merging the de-compressed
pixel data groups DG.sub.3 and DG.sub.4.
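The opposite compression orders can be sketched as ordering each section's compression units boundary-first (hypothetical helper):

```python
def boundary_first_order(units, boundary_on_right):
    """Order a section's compression units (listed left to right) so
    that units nearer the inter-group pixel boundary are compressed
    first: reverse the order when the boundary is on the section's
    right edge, keep it otherwise."""
    return list(reversed(units)) if boundary_on_right else list(units)
```

For the FIG. 11 case, the PG.sub.1 section (boundary on its right) is compressed CU.sub.1 then CU.sub.2, while the PG.sub.2 section keeps its normal order CU.sub.3 then CU.sub.4.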
[0065] In a case where the modified compression mechanism is
employed, the flow shown in FIG. 8 may be modified to have step 808
replaced with the following step shown in FIG. 12.
[0066] Step 1202: Generate a plurality of compressed pixel data
groups by splitting pixel data of a plurality of pixels of a
picture into a plurality of pixel data groups based on the pixel
data grouping setting and using the compressors to compress the
pixel data groups according to compression orders set based on
pixel boundary positions.
[0067] As a person skilled in the art can readily understand
details of step 1202 after reading above paragraphs, further
description is omitted here for brevity.
[0068] Those skilled in the art will readily observe that numerous
modifications and alterations of the device and method may be made
while retaining the teachings of the invention. Accordingly, the
above disclosure should be construed as limited only by the metes
and bounds of the appended claims.
* * * * *