U.S. patent application number 17/423125 was published by the patent office on 2022-03-10 as publication number 20220078416, for an image processing device and image processing method.
This patent application is currently assigned to Sony Group Corporation. The applicant listed for this patent is Sony Group Corporation. Invention is credited to Masaru IKEDA.
United States Patent Application 20220078416
Kind Code: A1
IKEDA; Masaru
March 10, 2022
IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD
Abstract
A decision unit decides whether or not a deblocking filter is
applied to color difference components of pixels belonging to a
line orthogonal to a block boundary in two adjacent blocks adjacent
to each other sandwiching the block boundary of the decoded image,
by using color difference components of pixels belonging to a line
identical to a line used to decide whether or not the deblocking
filter is applied to luminance components of the pixels belonging
to the line orthogonal to the block boundary. The filtering unit
applies the deblocking filter to the color difference components of the
pixels for which it is decided that the deblocking filter is
applied. The present technology can be applied to, for example, a
case where encoding and decoding of an image are performed.
Inventors: IKEDA; Masaru (Tokyo, JP)
Applicant: Sony Group Corporation, Tokyo, JP
Assignee: Sony Group Corporation, Tokyo, JP
Appl. No.: 17/423125
Filed: February 13, 2020
PCT Filed: February 13, 2020
PCT No.: PCT/JP2020/005473
371 Date: July 15, 2021
Related U.S. Patent Documents

Application Number | Filing Date | Patent Number
62811104 | Feb 27, 2019 |
International Class: H04N 19/117 (20060101); H04N 19/80 (20060101); H04N 19/86 (20060101); H04N 19/186 (20060101); H04N 19/176 (20060101)
Claims
1. An image processing device comprising: a decoding unit that
decodes a bitstream to generate a decoded image; a decision unit
that decides whether or not a deblocking filter is applied to color
difference components of pixels belonging to a line orthogonal to a
block boundary in two adjacent blocks adjacent to each other
sandwiching the block boundary of the decoded image generated by
the decoding unit, by using color difference components of pixels
belonging to a line identical to a line used to decide whether or
not the deblocking filter is applied to luminance components of the
pixels belonging to the line orthogonal to the block boundary; and
a filtering unit that applies the deblocking filter to the color
difference components of the pixels for which it is decided by the
decision unit that the deblocking filter is applied.
2. The image processing device according to claim 1, wherein in a
case where a color format of the decoded image is a YUV422 format,
the decision unit performs decision by using color difference
components of pixels belonging to two horizontal lines located at
both ends of a partial vertical block boundary in the adjacent
blocks, among horizontal lines orthogonal to the partial vertical
block boundary that is a unit of processing when it is decided
whether or not the deblocking filter is applied to a vertical block
boundary that is a block boundary in a vertical direction.
3. The image processing device according to claim 2, wherein a
length of the partial vertical block boundary is four pixels, and
the decision unit decides whether or not the deblocking filter is
applied to color difference components of pixels belonging to four
horizontal lines of from first to fourth horizontal lines
orthogonal to the partial vertical block boundary, by using color
difference components of pixels belonging to the first and fourth
horizontal lines.
4. The image processing device according to claim 2, wherein the
decision unit decides whether or not the deblocking filter is
applied, by using color difference components of pixels belonging
to two horizontal lines located at both ends of the partial
vertical block boundary in the adjacent blocks and two horizontal
lines adjacent to each other at a center of the partial vertical
block boundary in the adjacent blocks, among horizontal lines
orthogonal to the vertical block boundary that is a block boundary
in the vertical direction.
5. The image processing device according to claim 4, wherein a
length of the vertical block boundary is eight pixels, and the
decision unit decides whether or not the deblocking filter is
applied to color difference components of pixels belonging to eight
horizontal lines of from first to eighth horizontal lines
orthogonal to the vertical block boundary, by using color
difference components of pixels belonging to the first, fourth,
fifth, and eighth horizontal lines.
6. The image processing device according to claim 1, wherein in a
case where a color format of the decoded image is a YUV444 format,
the decision unit decides whether or not the deblocking filter is
applied, by using color difference components of pixels belonging
to two horizontal lines located at both ends of a partial vertical
block boundary in the adjacent blocks, among horizontal lines
orthogonal to the partial vertical block boundary that is a unit of
processing when it is decided whether or not the deblocking filter
is applied to a vertical block boundary that is a block boundary in
a vertical direction.
7. The image processing device according to claim 6, wherein the
decision unit decides whether or not the deblocking filter is
applied, by using color difference components of pixels belonging
to two horizontal lines located at both ends of the partial
vertical block boundary in the adjacent blocks and two horizontal
lines adjacent to each other at a center of the partial vertical
block boundary in the adjacent blocks, among horizontal lines
orthogonal to the vertical block boundary that is a block boundary
in the vertical direction.
8. The image processing device according to claim 1, wherein in a
case where a color format of the decoded image is a YUV444 format,
the decision unit decides whether or not the deblocking filter is
applied, by using color difference components of pixels belonging
to two vertical lines located at both ends of a partial horizontal
block boundary in the adjacent blocks, among vertical lines
orthogonal to the partial horizontal block boundary that is a unit
of processing when it is decided whether or not the deblocking
filter is applied to a horizontal block boundary that is a block
boundary in a horizontal direction.
9. The image processing device according to claim 8, wherein the
decision unit decides whether or not the deblocking filter is
applied, by using color difference components of pixels belonging
to two vertical lines located at both ends of the partial
horizontal block boundary in the adjacent blocks and two vertical
lines adjacent to each other at a center of the partial horizontal
block boundary in the adjacent blocks, among vertical lines
orthogonal to the horizontal block boundary that is a block
boundary in the horizontal direction.
10. An image processing method comprising: decoding a bitstream to
generate a decoded image; deciding whether or not a deblocking
filter is applied to color difference components of pixels
belonging to a line orthogonal to a block boundary in two adjacent
blocks adjacent to each other sandwiching the block boundary of the
decoded image, by using color difference components of pixels
belonging to a line identical to a line used to decide whether or
not the deblocking filter is applied to luminance components of the
pixels belonging to the line orthogonal to the block boundary; and
applying the deblocking filter to the color difference components of the
pixels for which it is decided that the deblocking filter is
applied.
11. An image processing device comprising: a decision unit that
decides whether or not a deblocking filter is applied to color
difference components of pixels belonging to a line orthogonal to a
block boundary in adjacent blocks adjacent to each other
sandwiching the block boundary of a locally decoded image locally
decoded when an image is encoded, by using color difference
components of pixels belonging to a line identical to a line used
to decide whether or not the deblocking filter is applied to
luminance components of the pixels belonging to the line orthogonal
to the block boundary; a filtering unit that applies the deblocking filter
to the color difference components of the pixels for which it is
decided by the decision unit that the deblocking filter is applied,
to generate a filter image; and an encoding unit that encodes the
image by using the filter image generated by the filtering
unit.
12. An image processing method comprising: deciding whether or not
a deblocking filter is applied to color difference components of
pixels belonging to a line orthogonal to a block boundary in
adjacent blocks adjacent to each other sandwiching the block
boundary of a locally decoded image locally decoded when an image
is encoded, by using color difference components of pixels
belonging to a line identical to a line used to decide whether or
not the deblocking filter is applied to luminance components of the
pixels belonging to the line orthogonal to the block boundary;
applying the deblocking filter to the color difference components of the
pixels for which it is decided that the deblocking filter is
applied, to generate a filter image; and encoding the image by
using the filter image.
Description
TECHNICAL FIELD
[0001] The present technology relates to an image processing device
and an image processing method, and more particularly to an image
processing device and an image processing method that make it
possible to unify processes of a luminance component and a color
difference component, for example.
BACKGROUND ART
[0002] In Joint Video Experts Team (JVET) that is a joint
standardization organization of ITU-T and ISO/IEC, for the purpose
of further improving coding efficiency compared to H.265/HEVC,
standardization work of Versatile Video Coding (VVC) is underway
that is a next-generation image coding method.
[0003] In the standardization work of VVC, in Non-Patent Document
1, a method has been devised that the deblocking filter that can be
applied to the color difference component is changed to two types
similarly to the deblocking filter that can be applied to the
luminance component, and the strong filter can be applied also to
the color difference component.
CITATION LIST
Non-Patent Document
[0004] Non-patent document 1: Jianle Chen, Yan Ye, Seung Hwan Kim:
Algorithm description for Versatile Video Coding and Test Model 2
(VTM 2), Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and
ISO/IEC JTC 1/SC 29/WG 11 11th Meeting, Ljubljana, SI, 10-18 Jul.
2018.
SUMMARY OF THE INVENTION
Problems to be Solved by the Invention
[0005] In Non-Patent Document 1, the processes of the luminance
component and the color difference component are not unified.
[0006] The present technology has been made in view of such a
situation, and makes it possible to unify the processes of the
luminance component and the color difference component.
Solutions to Problems
[0007] A first image processing device of the present technology is
an image processing device including: a decoding unit that decodes
a bitstream to generate a decoded image; a decision unit that
decides whether or not a deblocking filter is applied to color
difference components of pixels belonging to a line orthogonal to a
block boundary in two adjacent blocks adjacent to each other
sandwiching the block boundary of the decoded image generated by
the decoding unit, by using color difference components of pixels
belonging to a line identical to a line used to decide whether or
not the deblocking filter is applied to luminance components of the
pixels belonging to the line orthogonal to the block boundary; and
a filtering unit that applies the deblocking filter to the color
difference components of the pixels for which it is decided by the
decision unit that the deblocking filter is applied.
[0008] A first image processing method of the present technology is
an image processing method including: decoding a bitstream to
generate a decoded image; deciding whether or not a deblocking
filter is applied to color difference components of pixels
belonging to a line orthogonal to a block boundary in two adjacent
blocks adjacent to each other sandwiching the block boundary of the
decoded image, by using color difference components of pixels
belonging to a line identical to a line used to decide whether or
not the deblocking filter is applied to luminance components of the
pixels belonging to the line orthogonal to the block boundary; and
applying the deblocking filter to the color difference components
of the pixels for which it is decided that the deblocking filter is
applied.
[0009] In the first image processing device and image processing
method of the present technology, a bitstream is decoded to
generate a decoded image. Furthermore, whether or not a deblocking
filter is applied to color difference components of pixels
belonging to a line orthogonal to a block boundary in two adjacent
blocks adjacent to each other sandwiching the block boundary of the
decoded image, is decided by using color difference components of
pixels belonging to a line identical to a line used to decide
whether or not the deblocking filter is applied to luminance
components of the pixels belonging to the line orthogonal to the
block boundary. Then, the deblocking filter is applied to the color
difference components of the pixels for which it is decided that
the deblocking filter is applied.
[0010] A second image processing device of the present technology
is an image processing device including: a decision unit that
decides whether or not a deblocking filter is applied to color
difference components of pixels belonging to a line orthogonal to a
block boundary in adjacent blocks adjacent to each other
sandwiching the block boundary of a locally decoded image locally
decoded when an image is encoded, by using color difference
components of pixels belonging to a line identical to a line used
to decide whether or not the deblocking filter is applied to
luminance components of the pixels belonging to the line orthogonal
to the block boundary; a filtering unit that applies the deblocking
filter to the color difference components of the pixels for which
it is decided by the decision unit that the deblocking filter is
applied, to generate a filter image; and an encoding unit that
encodes the image by using the filter image generated by the
filtering unit.
[0011] A second image processing method of the present technology
is an image processing method including: deciding whether or not a
deblocking filter is applied to color difference components of
pixels belonging to a line orthogonal to a block boundary in
adjacent blocks adjacent to each other sandwiching the block
boundary of a locally decoded image locally decoded when an image
is encoded, by using color difference components of pixels
belonging to a line identical to a line used to decide whether or
not the deblocking filter is applied to luminance components of the
pixels belonging to the line orthogonal to the block boundary;
applying the deblocking filter to the color difference components
of the pixels for which it is decided that the deblocking filter is
applied, to generate a filter image; and encoding the image by
using the filter image.
[0012] In the second image processing device and image processing
method of the present technology, whether or not a deblocking
filter is applied to color difference components of pixels
belonging to a line orthogonal to a block boundary in adjacent
blocks adjacent to each other sandwiching the block boundary of a
locally decoded image locally decoded when an image is encoded, is
decided by using color difference components of pixels belonging to
a line identical to a line used to decide whether or not the
deblocking filter is applied to luminance components of the pixels
belonging to the line orthogonal to the block boundary.
Furthermore, the deblocking filter is applied to the color
difference components of the pixels for which it is decided that
the deblocking filter is applied, and a filter image is generated.
Then, the image is encoded by using the filter image.
[0013] Note that, the image processing device can be implemented by
causing a computer to execute a program. The program can be
provided by being recorded on a recording medium or by being
transmitted via a transmission medium.
BRIEF DESCRIPTION OF DRAWINGS
[0014] FIG. 1 is a diagram explaining a method of calculating bS in
HEVC.
[0015] FIG. 2 is a diagram explaining a method of calculating
bS.
[0016] FIG. 3 is an explanatory diagram illustrating an example of
color difference components (U component and V component) in two
blocks Bp and Bq adjacent to each other sandwiching a vertical
block boundary BB.
[0017] FIG. 4 is a diagram illustrating an example of a color
format (chroma format) of an image.
[0018] FIG. 5 is a diagram explaining filtering decision for a
deblocking filter applied to (pixels in the horizontal direction
orthogonal to) a vertical block boundary.
[0019] FIG. 6 is a diagram explaining filtering decision of a
deblocking filter applied to (pixels in the vertical direction
orthogonal to) a horizontal block boundary.
[0020] FIG. 7 is a block diagram illustrating a configuration
example of an embodiment of an image processing system to which the
present technology is applied.
[0021] FIG. 8 is a block diagram illustrating a detailed
configuration example of an encoder 11.
[0022] FIG. 9 is a flowchart explaining an example of an encoding
process by the encoder 11.
[0023] FIG. 10 is a block diagram illustrating a detailed
configuration example of a decoder 51.
[0024] FIG. 11 is a flowchart explaining an example of a decoding
process by the decoder 51.
[0025] FIG. 12 is a block diagram illustrating a configuration
example of a deblocking filter 31a.
[0026] FIG. 13 is a flowchart explaining a process of the
deblocking filter 31a.
[0027] FIG. 14 is a diagram explaining filtering decision in a case
where the color format is the YUV420 format.
[0028] FIG. 15 is a diagram explaining filtering decision in a case
where the color format is the YUV444 format.
[0029] FIG. 16 is a diagram explaining filtering decision in a case
where the color format is the YUV422 format.
[0030] FIG. 17 is a diagram explaining filtering decision in a case
where the color format is the YUV422 format.
[0031] FIG. 18 is a block diagram illustrating a configuration
example of an embodiment of a computer.
MODE FOR CARRYING OUT THE INVENTION
[0032] The scope disclosed in the present specification is not
limited to the content of the embodiments, and the content of the
following reference documents REF1 to REF8 known at the time of
filing is also incorporated herein by reference. That is, the
content described in the following reference documents REF1 to REF8
is also a basis for determining support requirements. For example,
even in a case where a Quad-Tree Block Structure described in the
reference document REF2, a Quad Tree Plus Binary Tree (QTBT) Block
Structure described in the reference document REF3, and a
Multi-type Tree (MTT) Block Structure described in the reference
documents REF4, REF5, and REF8 are not directly defined in the
detailed description of the invention, they are still within the
scope of the present disclosure and shall meet the support
requirements of the claims. Furthermore, similarly, even in a case
where technical terms, for example, parsing, syntax, semantics, and
the like are not directly defined in the detailed description of
the invention, they are still within the scope of the present
disclosure and shall meet the support requirements of the
claims.
[0033] REF1: Recommendation ITU-T H.264 (April 2017) "Advanced
video coding for generic audiovisual services", April 2017
[0034] REF2: Recommendation ITU-T H.265 (December 2016) "High
efficiency video coding", Dec. 2016
[0035] REF3: J. Chen, E. Alshina, G. J. Sullivan, J.-R. Ohm, J.
Boyce, "Algorithm Description of Joint Exploration Test Model
(JEM7)", JVET-G1001, Joint Video Exploration Team (JVET) of ITU-T
SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11 7th Meeting: Torino, IT,
13-21 Jul. 2017
[0036] REF4: B. Bross, J. Chen, S. Liu, "Versatile Video Coding
(Draft 3)," JVET-L1001, Joint Video Experts Team (JVET) of ITU-T SG
16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11 12th Meeting: Macau, CN, 3-12
October 2018
[0037] REF5: J. Chen, Y. Ye, S. Kim, "Algorithm description for
Versatile Video Coding and Test Model 3 (VTM 3)", JVET-L1002, Joint
Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC
29/WG 11 12th Meeting: Macau, CN, 3-12 October 2018
[0038] REF6: J. Boyce (Intel), Y. Ye (InterDigital), Y.-W. Huang
(Mediatek), M. Karczewicz (Qualcomm), E. Francois (Technicolor), W.
Husak (Dolby), J. Ridge (Nokia), A. Abbas (GoPro), "Two tier test
model", JVET-J0093, Joint Video Experts Team (JVET) of ITU-T SG 16
WP 3 and ISO/IEC JTC 1/SC 29/WG 11 10th Meeting: San Diego, US,
10-20 April 2018
[0039] REF7: S. De-Luxan-Hernandez, V. George, J. Ma, T. Nguyen, H.
Schwarz, D. Marpe, T. Wiegand (HHI), "CE3: Intra Sub-Partitions
Coding Mode (Tests 1.1.1 and 1.1.2)", JVET-M0102, Joint Video
Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG
11 13th Meeting: Marrakech, MA, 9-18 January 2019
[0040] REF8: M. Ikeda, T. Suzuki (Sony), D. Rusanovskyy, M.
Karczewicz (Qualcomm), W. Zhu, K. Misra, P. Cowan, A. Segall (Sharp
Labs of America), K. Andersson, J. Enhorn, Z. Zhang, R. Sjoberg
(Ericsson), "CE11.1.6, CE11.1.7 and CE11.1.8: Joint proposals for
long deblocking from Sony, Qualcomm, Sharp, Ericsson", JVET-M0471,
Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC
1/SC 29/WG 11 13th Meeting: Marrakesh, MA, 9-18 January 2019
[0041] <Definition>
[0042] In this application, the following terms are defined as
follows.
[0043] Color difference-related parameters mean all parameters
related to color difference. For example, the color
difference-related parameters may include information regarding
orthogonal transform coefficients of color difference component,
for example, orthogonal transform coefficients (quantization
coefficients) of the color difference component included in any
blocks such as a Transform Unit (TU), a Prediction Unit (PU), a
Coding Unit (CU), and others, a flag indicating presence or absence
of a significant coefficient (non-zero orthogonal transform
coefficient) of the color difference component in each block, and
the like. The color difference-related parameters are not limited
to such examples, and may be various parameters related to the
color difference.
[0044] Application necessity of a deblocking filter means whether
or not the deblocking filter is applied. The application necessity
decision of the deblocking filter means deciding whether or not the
deblocking filter is applied. Furthermore, a decision result of the
application necessity decision is a result of deciding whether or
not the deblocking filter is applied. The decision result of the
application necessity decision may be information indicating either
"apply" or "not apply".
[0045] The filtering strength decision means deciding (determining)
filtering strength of a deblocking filter in a case where the
deblocking filter is applied. For example, in a case where there
are a weak filter, and a chroma long filter having a larger number
of taps, that is, stronger filtering strength, than the weak
filter, as a deblocking filter of the color difference component,
in the filtering strength decision, it is decided (determined)
which of the weak filter and the chroma long filter is used as the
deblocking filter to be applied to the color difference
component.
[0046] Regarding the deblocking filter of the color difference
component, the application necessity decision and the filtering
strength decision determine whether the deblocking filter is not
applied, or which type of deblocking filter is to be applied.
[0047] For example, in a case where there are the weak filter and
the chroma long filter as the deblocking filter of the color
difference component, in the application necessity decision and the
filtering strength decision, it is decided that the deblocking
filter is not applied to the color difference component, the weak
filter is applied, or the chroma long filter is applied.
[0048] Hereinafter, the application necessity decision and the
filtering strength decision are also collectively referred to as
filtering decision.
[0049] <Overview of Deblocking Filter>
[0050] A process related to a deblocking filter in an existing
image coding method such as HEVC includes filtering decision
(application necessity decision and filtering strength decision)
and filtering (application of a filter). In the following, an
overview of the deblocking filter will be described using HEVC as
an example.
[0051] Note that, in the following, the deblocking filter for the
color difference component of the decoded image (including a
locally decoded image locally decoded at the time of encoding) will
be described, and the description of the deblocking filter for the
luminance component will be omitted as appropriate.
[0052] As a process related to the deblocking filter, first,
filtering decision is performed. In the filtering decision, first,
application necessity decision is performed for deciding whether or
not the deblocking filter is applied to the block boundary of the
decoded image.
[0053] Note that, in HEVC, the block boundary is identified on the
basis of a block structure of a Quad-Tree Block Structure described
in the reference document REF2. Specifically, among edges of an
8×8 pixel block (sample grid) that is the minimum block unit,
an edge that satisfies a condition that the edge is at least one of
a Transform Unit (TU) boundary or a Prediction Unit (PU) boundary
is identified as the block boundary in HEVC.
[0054] The application necessity decision is performed on the basis
of boundary strength (hereinafter also referred to as bS) of the
block boundary. In HEVC, when four lines in a direction orthogonal
to a partial block boundary (a part of the block boundary), which
is a unit of processing when the filtering decision (application
necessity decision) of the deblocking filter is performed on the
block boundary, are defined as a unit of filter application to
which the deblocking filter is applied, the bS is calculated every
four lines that are the unit of filter application. In a case where
the block boundary is a vertical boundary, a line of the unit of
filter application is a line (row) in the horizontal direction
orthogonal to the vertical boundary.
[0055] Furthermore, in a case where the block boundary is a
horizontal boundary, a line of the unit of filter application is a
line (column) in the vertical direction orthogonal to the
horizontal boundary.
[0056] FIG. 1 is a diagram explaining a method of calculating bS in
HEVC.
[0057] As illustrated in FIG. 1, in HEVC, the bS is calculated on
the basis of the truth or falsehood (satisfied or not satisfied) of
a condition A that is a condition related to intra prediction, a
condition B1 that is a condition related to a significant
coefficient of the Y component, and a condition B2 that is a
condition related to a motion vector (MV) and a reference picture.
Referring to FIG. 1, the bS is set to 2 in a case where the
condition A is true. Furthermore, in a case where the condition A
is false and at least one of the condition B1 or the condition B2
is true, the bS is set to 1. Then, in a case where the condition A,
the condition B1, and the condition B2 are all false, the bS is set
to 0. Note that, the conditions A, B1, and B2 illustrated in FIG. 1
are as follows. Furthermore, here, for the sake of simplicity, the
block boundary is assumed to be a vertical boundary.
[0058] Condition A: Among Coding Units (CUs) including pixels of
the uppermost line among lines orthogonal to the block boundary
that is a calculation target of the bS and sandwiching the block
boundary, an encoding mode of at least one of the CUs is an intra
prediction mode.
[0059] Condition B1: The block boundary is the TU boundary, and
among two TUs including pixels of the uppermost line among lines
orthogonal to the block boundary that is a calculation target of
the bS and sandwiching the block boundary, the significant
coefficient of the Y component exists in at least one of the
TUs.
[0060] Condition B2: Between two CUs including pixels of the
uppermost line among lines orthogonal to the block boundary that is
a calculation target of the bS and sandwiching the block boundary,
an absolute value of a difference between MVs is one pixel or more,
or reference pictures of motion compensation are different from
each other or the numbers of MVs are different from each other.
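For reference, the bS derivation described above with reference to FIG. 1 can be summarized by the following sketch in Python. It only illustrates the decision order (condition A, then conditions B1 and B2); the three boolean arguments stand in for the checks described in the conditions above and are not part of any actual codec implementation.

def calc_bs(cond_a, cond_b1, cond_b2):
    # Boundary strength for one unit of filter application (four lines in HEVC).
    if cond_a:                 # condition A: at least one adjacent CU is intra coded
        return 2
    if cond_b1 or cond_b2:     # condition B1 or B2: significant Y coefficient at a TU boundary, or MV/reference mismatch
        return 1
    return 0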
[0061] Moreover, in HEVC, a deblocking filter for the luminance
component (Y component) of the decoded image can be applied to the
block boundary for which the bS set as described above is set to
greater than or equal to 1. For that reason, in HEVC, the decision
result of the application necessity decision of the deblocking
filter for the luminance component of the decoded image may differ
depending on whether or not the condition B1 and the condition B2
are satisfied.
[0062] Note that, in HEVC, as the deblocking filter for the
luminance component of the decoded image, the strong filter having
a high filtering strength and the weak filter having a low
filtering strength are prepared. In a case where the bS is greater
than or equal to 1, in a process related to the deblocking filter
for the luminance component of the decoded image, application
necessity decision based on additional conditions is further
performed, and then decision of the filtering strength and
filtering are performed. Details of these processes are described
in the reference document REF2, and the description thereof is
omitted here.
[0063] On the other hand, a deblocking filter for the color
difference components (U component, V component) of the decoded
image in HEVC is applied only to the block boundary whose bS is 2.
For that reason, as illustrated in FIG. 1, whether or not the
conditions B1 and B2 are satisfied does not affect the application
necessity decision of the deblocking filter to the color difference
components of the decoded image, in HEVC.
[0064] Furthermore, in HEVC, the deblocking filter that can be
applied to the color difference components of the decoded image is
only the weak filter. For that reason, a filtering strength
decision process is not necessary for the color difference
components of the decoded image, and in a case where the bS is 2,
the weak filter is applied to the color difference components of
the decoded image.
[0065] By the way, as described in the reference document REF3, in
the block division by the QTBT Block Structure in VVC, a block
having a larger size can be selected than that in the block
division by the Quad-Tree Block Structure in HEVC. In a case where
the block size is large in a flat area (an area in which a change
in pixel value in the area is small), block distortion is likely to
occur. For that reason, in VVC in which a block having a larger
size can be selected, in a case where the deblocking filter that
can be applied to the color difference components of the decoded
image is only the weak filter similarly to HEVC, there has been a
possibility that a remarkable block distortion remains in the color
difference components. In view of such a situation, it is desired
to improve the deblocking filter for the color difference
components of the decoded image.
[0066] Thus, in Non-Patent Document 1, a method of applying the
deblocking filter has been devised, which is different from that in
HEVC. In the method of applying the deblocking filter of Non-Patent
Document 1, it has been devised that, for example, the deblocking
filter that can be applied to the color difference component is
changed to two types similarly to the deblocking filter that can be
applied to the luminance component, and the strong filter can be
applied also to the color difference component. Furthermore, it has
also been devised that the deblocking filter can be applied to the
color difference components of the decoded image not only in a case
where the bS is 2 but also in a case where the bS is 1.
[0067] FIG. 2 is a diagram explaining a method of calculating bS in
the method of applying the deblocking filter of Non-Patent Document
1.
[0068] In the method of applying the deblocking filter of
Non-Patent Document 1, the bS is calculated on the basis of the
conditions A, B1, and B2 described above, similarly to the example
in HEVC illustrated in FIG. 1. However, as described above, the
deblocking filter can be applied to the color difference components
of the decoded image not only in the case where the bS is 2 but
also in the case where the bS is 1. For that reason, as illustrated
in FIG. 2, the decision result of the application necessity
decision of the deblocking filter for the color difference
components (U component, V component) of the decoded image may
differ depending on whether or not the condition B1 and the
condition B2 are satisfied.
[0069] Hereinafter, a description will be given of filtering
decision (application necessity decision and filtering strength
decision) and filtering regarding a deblocking filter that can be
applied to the color difference component of the decoded image in
Non-Patent Document 1.
[0070] FIG. 3 is a diagram illustrating an example of pixels of
color difference components (U component and V component) of a
block Bp and a block Bq as two adjacent blocks adjacent to each
other sandwiching a vertical block boundary BB that is a block
boundary in the vertical direction.
[0071] Note that, here, the vertical block boundary will be
described as an example, but the matters described for the vertical
block boundary can be similarly applied to the horizontal block
boundary that is the block boundary in the horizontal direction
unless otherwise specified. Furthermore, although FIG. 3
illustrates an example in which the block Bp and the block Bq of
the color difference component are blocks of 4×4 pixels, the
matters described here can be similarly applied to blocks of other
sizes.
[0072] In the example of FIG. 3, the color difference components
(and pixels of the color difference components) in the block Bp are
indicated by symbols p_{i,j}, where i is a column index and j is a
row index. The column indexes i are numbered 0, 1, 2, and 3 in order
from the column closest to the vertical block boundary BB (from left
to right in the figure). The row indexes j are numbered 0, 1, 2, and
3 from top to bottom. On the other hand, the color difference
components (and pixels of the color difference components) in the
block Bq are indicated by symbols q_{k,j}, where k is a column index
and j is a row index. The column indexes k are numbered 0, 1, 2, and
3 in order from the column closest to the vertical block boundary BB
(from right to left in the figure).
[0073] Note that, here, the block boundary BB is assumed to be the
vertical block boundary, but the block boundary BB can be regarded
as the horizontal block boundary, and the block Bp and the block Bq
can be regarded as two adjacent blocks adjacent to each other
sandwiching the horizontal block boundary BB. In this case, in
p_{i,j}, i is a row index and j is a column index. The same applies
to q_{k,j}.
[0074] After the bS is calculated as described with reference to
FIG. 2, the filtering decision is performed using the following
three conditions. In a case where the color format of the decoded
image is, for example, the YUV420 format, the filtering decision is
performed every two lines of the color difference components.
[0075] That is, in the case where the color format of the decoded
image is the YUV420 format, a partial vertical block boundary that
is a unit of processing when it is decided whether or not the
deblocking filter is applied to (pixels in the horizontal direction
orthogonal to) the vertical block boundary BB, is a vertical block
boundary for two lines of the color difference components
continuous in the vertical direction, and orthogonal to two lines
of the color difference components.
[0076] The filtering decision for the vertical block boundary BB is
performed for each partial vertical block boundary.
[0077] In the example illustrated in FIG. 3, the filtering decision
is performed separately for a partial vertical block boundary b1
for two lines of the line L11 and the line L12, and a partial
vertical block boundary b2 for two lines of the line L21 and the
line L22.
[0078] The filtering decision for the partial vertical block
boundary b1 is performed using the line L11 and the line L12 (of
the color difference components) in the horizontal direction
orthogonal to the partial vertical block boundary b1. Similarly,
the filtering decision for the partial vertical block boundary b2
is performed using the line L21 and the line L22 in the horizontal
direction orthogonal to the partial vertical block boundary b2.
[0079] In the following, a description will be given of filtering
decision and filtering performed for the partial vertical block
boundary b1.
[0080] In the filtering decision, in the application necessity
decision, it is decided in order whether or not a condition C91 and
a condition C92 below are true.
(bS == 2 || (bS == 1 && (block_width > 16 && block_height > 16)))   Condition C91
d < beta   Condition C92
[0081] Note that, in the condition C91, the block_width and the
block_height are the horizontal size and the vertical size of a
block (for example, a CU) over the partial vertical block boundary
b1 to be subjected to the filtering decision, as illustrated in FIG.
3. The operator || represents a logical sum (OR) operation, and &&
represents a logical product (AND) operation.
[0082] Furthermore, the variable beta in the condition C92 is an
edge decision threshold value, and the variable beta is given
depending on a quantization parameter. Furthermore, the variable d
in the condition C92 is calculated by the following equations (1)
to (7).
dp0 = Abs(p_{2,0} - 2*p_{1,0} + p_{0,0})   (1)
dp1 = Abs(p_{2,1} - 2*p_{1,1} + p_{0,1})   (2)
dq0 = Abs(q_{2,0} - 2*q_{1,0} + q_{0,0})   (3)
dq1 = Abs(q_{2,1} - 2*q_{1,1} + q_{0,1})   (4)
dpq0 = dp0 + dq0   (5)
dpq1 = dp1 + dq1   (6)
d = dpq0 + dpq1   (7)
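As an illustrative sketch only, the calculation of the variable d in equations (1) to (7) can be written in Python as follows, with the chroma samples passed in as two-dimensional arrays p[i][j] and q[k][j] indexed as in FIG. 3; the function name and argument layout are assumptions made for this example, not part of the specification.

def calc_d(p, q):
    dp0 = abs(p[2][0] - 2 * p[1][0] + p[0][0])   # equation (1)
    dp1 = abs(p[2][1] - 2 * p[1][1] + p[0][1])   # equation (2)
    dq0 = abs(q[2][0] - 2 * q[1][0] + q[0][0])   # equation (3)
    dq1 = abs(q[2][1] - 2 * q[1][1] + q[0][1])   # equation (4)
    dpq0 = dp0 + dq0                             # equation (5)
    dpq1 = dp1 + dq1                             # equation (6)
    return dpq0 + dpq1                           # equation (7)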
[0083] Note that, the condition C92 is similar to a condition used
in the filtering decision of the deblocking filter applied to the
luminance component in HEVC (hereinafter, referred to as a
condition in the luminance component) except that a line referred
to is different. In the condition of the luminance component,
pixels of the first line and pixels of the fourth line are referred
to, and the decision is performed every four lines (segments). In
the YUV420 format, the pixel density in each of the horizontal
direction and vertical direction of the color difference components
(U component and V component) is half the pixel density of the
luminance component, so that four lines of the luminance components
correspond to two lines of the color difference components.
Regarding the condition C92, pixels of the two lines L11 and L12 of
the color difference components corresponding to the four lines of
the luminance components are referred to, and the decision is
performed every two lines.
[0084] In a case where at least one of the condition C91 or the
condition C92 is false, the deblocking filter is not applied to the
color difference components of the decoded image. On the other
hand, in a case where both the condition C91 and the condition C92
are true, the filtering strength decision is performed in the
filtering decision.
[0085] In the filtering strength decision, as a decision of which
of the strong filter and the weak filter is applied, it is decided
whether or not a condition C93 below is true.
[0086] Condition C93: (block_width>16 &&
block_height>16)
[0087] Note that, the block_width and block_height in the condition
C93 are the horizontal size and the vertical size of a block over
the partial vertical block boundary b1 to be subjected to the
filtering decision, similarly to the block_width and block_height
in the condition C91.
[0088] In a case where the condition C93 is true, the strong filter
is applied to the color difference components of the decoded image
at the partial vertical block boundary b1, and in a case where the
condition C93 is false, the weak filter is applied to the color
difference components of the decoded image at the partial vertical
block boundary b1.
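The decision flow described in paragraphs [0080] to [0088] (conditions C91 and C92 for application necessity, followed by condition C93 for filtering strength) can be sketched as follows. The function and its return values are illustrative only and assume that bS, block_width, block_height, d, and beta have already been derived as described above.

def chroma_filter_decision(bs, block_width, block_height, d, beta):
    c91 = bs == 2 or (bs == 1 and block_width > 16 and block_height > 16)
    c92 = d < beta
    if not (c91 and c92):
        return "not applied"                 # deblocking filter is not applied to the chroma components
    c93 = block_width > 16 and block_height > 16
    return "strong" if c93 else "weak"       # filtering strength decision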
[0089] The strong filter applied to the color difference component
in Non-Patent Document 1 is similar to the strong filter applied to
the luminance component in HEVC, and is represented by the
following equations (8) to (13).
p_0' = Clip3(p_0 - 2*t_c, p_0 + 2*t_c, (p_2 + 2*p_1 + 2*p_0 + 2*q_0 + q_1 + 4) >> 3)   (8)
p_1' = Clip3(p_1 - 2*t_c, p_1 + 2*t_c, (p_2 + p_1 + p_0 + q_0 + 2) >> 2)   (9)
p_2' = Clip3(p_2 - 2*t_c, p_2 + 2*t_c, (2*p_3 + 3*p_2 + p_1 + p_0 + q_0 + 4) >> 3)   (10)
q_0' = Clip3(q_0 - 2*t_c, q_0 + 2*t_c, (p_1 + 2*p_0 + 2*q_0 + 2*q_1 + q_2 + 4) >> 3)   (11)
q_1' = Clip3(q_1 - 2*t_c, q_1 + 2*t_c, (p_0 + q_0 + q_1 + q_2 + 2) >> 2)   (12)
q_2' = Clip3(q_2 - 2*t_c, q_2 + 2*t_c, (p_0 + q_0 + q_1 + 3*q_2 + 2*q_3 + 4) >> 3)   (13)
[0090] Note that, in the equations (8) to (13), p_i and q_k are
pixel values (color difference components) of the pixels of the
color difference components (hereinafter, also referred to as color
difference pixels) before the application of the deblocking filter.
Furthermore, p_i' and q_k' are the color difference components of
the color difference pixels after the deblocking filter is applied.
Here, i and k are column indexes in the block Bp and the block Bq
described above, respectively, and row indexes are omitted since
they are the same in equations (8) to (13). Furthermore, t_c is a
parameter given depending on the quantization parameter.
Furthermore, Clip3(a, b, c) represents a clipping process in which
the value c is clipped to the range a ≤ c ≤ b.
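As a sketch of equations (8) to (13), the chroma strong filter for one line of samples can be written as follows, where p and q are lists of four pre-filter chroma values on each side of the boundary (index 0 nearest the boundary) and tc is the parameter described above; the helper clip3 mirrors the Clip3 operation, and all names are assumptions made for this example.

def clip3(a, b, c):
    # clip the value c into the range a <= c <= b
    return max(a, min(b, c))

def chroma_strong_filter(p, q, tc):
    p0 = clip3(p[0] - 2 * tc, p[0] + 2 * tc, (p[2] + 2 * p[1] + 2 * p[0] + 2 * q[0] + q[1] + 4) >> 3)  # (8)
    p1 = clip3(p[1] - 2 * tc, p[1] + 2 * tc, (p[2] + p[1] + p[0] + q[0] + 2) >> 2)                     # (9)
    p2 = clip3(p[2] - 2 * tc, p[2] + 2 * tc, (2 * p[3] + 3 * p[2] + p[1] + p[0] + q[0] + 4) >> 3)      # (10)
    q0 = clip3(q[0] - 2 * tc, q[0] + 2 * tc, (p[1] + 2 * p[0] + 2 * q[0] + 2 * q[1] + q[2] + 4) >> 3)  # (11)
    q1 = clip3(q[1] - 2 * tc, q[1] + 2 * tc, (p[0] + q[0] + q[1] + q[2] + 2) >> 2)                     # (12)
    q2 = clip3(q[2] - 2 * tc, q[2] + 2 * tc, (p[0] + q[0] + q[1] + 3 * q[2] + 2 * q[3] + 4) >> 3)      # (13)
    return [p0, p1, p2, p[3]], [q0, q1, q2, q[3]]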
[0091] The weak filter applied to the color difference components
in Non-Patent Document 1 is the same as the weak filter applied to
the color difference components in HEVC.
[0092] In the above, the process has been described related to the
deblocking filter that can be applied to the color difference
components of the decoded image in Non-Patent Document 1. According
to the method described above, the strong filter is applied not
only to the luminance component but also to the color difference
components depending on the conditions.
[0093] <Color Format>
[0094] FIG. 4 is a diagram illustrating an example of a color
format (chroma format) of an image.
[0095] Examples of the color format of the image to be encoded
include the YUV420 format, the YUV422 format, the YUV444 format,
and the like. Note that, the color format of the image to be
encoded is not limited to these.
[0096] In the YUV420 format, the densities in the horizontal
direction and the vertical direction of (the pixels of) the color
difference components (chroma) are down-sampled to 1/2 of the
densities in the horizontal direction and the vertical direction of
(the pixels of) the luminance component (luminance), respectively.
In the YUV422 format, the density in the vertical direction of the
color difference component is the same as the density in the
vertical direction of the luminance component, but the density in
the horizontal direction of the color difference component is
down-sampled to 1/2 of the density in the horizontal direction of
the luminance component. In the YUV444 format, the densities in the
horizontal direction and the vertical direction of the color
difference components are the same as the densities in the
horizontal direction and the vertical direction of the luminance
component, respectively.
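A minimal sketch of the sampling relationships described in paragraph [0096]: given the size of the luminance plane, the size of the chroma plane for each color format follows directly from the down-sampling factors. The function is an illustration only.

def chroma_plane_size(luma_width, luma_height, color_format):
    if color_format == "YUV420":   # halved horizontally and vertically
        return luma_width // 2, luma_height // 2
    if color_format == "YUV422":   # halved horizontally only
        return luma_width // 2, luma_height
    if color_format == "YUV444":   # same density as the luminance component
        return luma_width, luma_height
    raise ValueError("unsupported color format")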
[0097] Note that, in FIG. 4, the arrows indicate the scanning order
of the luminance component and the color difference component.
[0098] In the reference document REF4, regarding images in the
YUV420 format, the YUV422 format, and the YUV444 format, it has
been devised to perform the filtering decision using two lines, the
first line and the fourth line, of the four lines (segments) for
the luminance component, and perform the filtering decision using
two lines, the first line and the second line, of the two lines
(segments) or the four lines (segments) for the color difference
component.
[0099] Here, since the density of the color difference component in
the vertical direction in the YUV422 format is the same as the
density of the luminance component in the vertical direction, the
filtering decision for the color difference component at the
vertical block boundary (block boundary in the vertical direction)
is performed in units of four lines similarly to the filtering
decision for the luminance component. The same applies to the color
difference component in the horizontal direction and the vertical
direction in the YUV444 format.
[0100] On the other hand, since the density of the color difference
component in the horizontal direction in the YUV422 format is 1/2
of the density of the luminance component in the horizontal
direction, the filtering decision for the color difference
component at the horizontal block boundary (block boundary in the
horizontal direction) is performed in units of two lines that is
1/2 of the units of four lines of the filtering decision. The same
applies to the color difference component in the horizontal
direction and the vertical direction in the YUV420 format.
[0101] Thus, in the reference document REF4, for the YUV420 format,
although the densities of the color difference component in the
horizontal direction and the vertical direction are 1/2 of the
densities of the luminance component in the horizontal direction
and the vertical direction, respectively, the filtering decision
for the color difference component in the vertical direction is
performed using two lines similarly to the filtering decision for
the luminance component in the horizontal direction and the
vertical direction. For this reason, there is a possibility that a
difference occurs in accuracy between the filtering decision for
the color difference component and the filtering decision for the
luminance component, and the image quality degrades. The same
applies to the color difference component and the luminance
component in the horizontal direction in the YUV422 format.
[0102] To make the accuracy of the filtering decision for the color
difference component and the filtering decision for the luminance
component about the same, for the horizontal direction and the
vertical direction in the YUV420 format and the horizontal
direction in the YUV422 format, it is desirable to use one line out
of the two lines for the filtering decision for the color
difference component in correspondence with the use of the two
lines out of the four lines for the filtering decision for the
luminance component.
[0103] Furthermore, although the density of the color difference
component in the vertical direction in the YUV422 format is the
same as the density of the luminance component in the vertical
direction, the filtering decision for the luminance component in
the vertical direction is performed using two lines, the first line
and the fourth line, of the four lines, whereas the filtering
decision for the color difference component in the vertical
direction is performed using the two lines different from the case
of the luminance component of the four lines, that is, the two
lines, the first line and the second line. For this reason, there
is a possibility that a difference occurs in accuracy between the
filtering decision for the color difference component and the
filtering decision for the luminance component, and the image
quality degrades. The same applies to the color difference
component and the luminance component in the horizontal direction
and the vertical direction in the YUV444 format.
[0104] To make the accuracy of the filtering decision for the color
difference component and the filtering decision for the luminance
component about the same, for the vertical direction in the YUV422
format and the horizontal direction and the vertical direction of
the YUV444 format, it is desirable to use the first line and the
fourth line out of the four lines for the filtering decision for
the color difference component in correspondence with the use of
the first line and the fourth line out of the four lines for the
filtering decision for the luminance component.
[0105] Thus, in the present technology, the number of reference
lines to be referred to in Deblocking filter decision is changed
depending on the color format (YUV420/422/444). That is, in the
present technology, the number of reference lines used for the
filtering decision of the color difference component is set
depending on the color format.
[0106] For example, in a case where the color format is the YUV420
format, down-sampling is performed in the horizontal direction and
the vertical direction, so that one line is set as the reference
line for both the horizontal and vertical block boundaries. For
example, in a case where the color format is YUV422 format,
down-sampling is performed in the horizontal direction, so that one
line is set as the reference line at the block boundary in the
horizontal direction, and two lines (Luma (luminance component) are
set as the reference lines at the block boundary in the vertical
direction. For example, in a case where the color format is YUV444
format, two lines (same as Luma) are set as the reference lines for
both the horizontal and vertical block boundaries.
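The rule stated in paragraphs [0105] and [0106] can be summarized by the following sketch, where boundary is either "vertical" or "horizontal" and the returned value is the number of chroma lines referred to in the filtering decision; the function name and argument conventions are assumptions made for this illustration.

def num_chroma_reference_lines(color_format, boundary):
    if color_format == "YUV420":
        return 1                                      # down-sampled in both directions
    if color_format == "YUV422":
        return 2 if boundary == "vertical" else 1     # down-sampled in the horizontal direction only
    if color_format == "YUV444":
        return 2                                      # same density as the luminance component
    raise ValueError("unsupported color format")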
[0107] Moreover, in the present technology, by matching the line
(reference line) used for the filtering decision for the color
difference component with the line used for the filtering decision
for the luminance component, correspondence is made between the
line used for the filtering decision for the color difference
component and the line used for the filtering decision for the
luminance component, whereby the processes of the luminance
component and the color difference component are unified.
[0108] FIGS. 5 and 6 are diagrams explaining an overview of the
present technology.
[0109] FIG. 5 is a diagram explaining filtering decision
(hereinafter, also referred to as vertical block boundary filtering
decision) for a deblocking filter applied to (pixels in the
horizontal direction orthogonal to) a vertical block boundary.
[0110] A of FIG. 5 illustrates luminance components and color
difference components in the YUV420 format. B of FIG. 5
illustrates luminance components and color difference components in
the YUV422 format. C of FIG. 5 illustrates luminance components and
color difference components in the YUV444 format.
[0111] In the present technology, for the luminance component, for
example, in any color format of the YUV420 format, the YUV422
format, and the YUV444 format, as devised in the reference document
REF4, with a vertical block boundary for four lines in the
horizontal direction as a partial vertical block boundary, the
vertical block boundary filtering decision is performed using two
lines, a first line D1 and a fourth line D4, of the four lines in
the horizontal direction orthogonal to the partial vertical block
boundary, for each partial vertical block boundary.
[0112] Furthermore, in the present technology, for the color
difference component in the YUV420 format, the density in the
vertical direction is 1/2 of that of the luminance component, so
that the vertical block boundary filtering decision is performed
using only a first line D11 out of two lines in the horizontal
direction of the color difference component corresponding to the
four lines in the horizontal direction of the luminance
component.
[0113] Moreover, in the present technology, for the color
difference component of the YUV422 format or the YUV444 format, the
density in the vertical direction is the same as that of the
luminance component, so that the vertical block boundary filtering
decision is performed using two lines, a first line D21 and a
fourth line D24, or a first line D31 and a fourth line D34, of the
four lines in the horizontal direction of the color difference
component corresponding to the four lines in the horizontal
direction of the luminance component, similarly to the
luminance component.
[0114] FIG. 6 is a diagram explaining filtering decision
(hereinafter, also referred to as horizontal block boundary
filtering decision) for a deblocking filter applied to (pixels in
the vertical direction orthogonal to) a horizontal block
boundary.
[0115] A of FIG. 6 illustrates luminance components and color
difference components in the YUV420 format. B of FIG. 6 illustrates
luminance components and color difference components in the YUV422
format. C of FIG. 6 illustrates luminance components and color
difference components in the YUV444 format.
[0116] In the present technology, for the luminance component, in
any color format of the YUV420 format, the YUV422 format, and the
YUV444 format, as devised in the reference document REF4, with a
horizontal block boundary for four lines in the vertical direction
as a partial horizontal block boundary, the horizontal block
boundary filtering decision is performed using two lines, a first
line D51 and a fourth line D54, of the four lines in the vertical
direction orthogonal to the partial horizontal block boundary, for
each partial horizontal block boundary.
[0117] The partial horizontal block boundary is a unit of
processing when it is decided whether or not the deblocking filter
is applied to (the pixels in the vertical direction orthogonal to) the
horizontal block boundary, similarly to the partial vertical block
boundary.
[0118] Furthermore, in the present technology, for the color
difference component in the YUV420 format or the YUV422 format, the
density in the horizontal direction is 1/2 of that of the luminance
component, so that the horizontal block boundary filtering decision
is performed using only a first line D61 or D71 of the two lines in
the vertical direction of the color difference component
corresponding to the four lines in the vertical direction of the
luminance component.
[0119] Moreover, in the present technology, for the color
difference component in the YUV444 format, the density in the
horizontal direction is the same as that of the luminance
component, so that the horizontal block boundary filtering decision
is performed using two lines, a first line D81 and a fourth line
D84, of the four lines in the vertical direction of the color
difference component corresponding to the four lines in the
vertical direction of the luminance component, similarly to the
luminance component.
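Tying FIGS. 5 and 6 together, the lines referred to within one decision segment can be sketched as follows: a four-line chroma segment uses its first and fourth lines (D21/D24, D31/D34, D81/D84), matching the luminance decision, while a two-line chroma segment uses only its first line (D11, D61, D71). The helper reuses num_chroma_reference_lines from the sketch above and, like it, is illustrative only.

def chroma_decision_line_indices(color_format, boundary):
    if num_chroma_reference_lines(color_format, boundary) == 2:
        return [0, 3]   # first and fourth line of a four-line segment
    return [0]          # first line of a two-line segment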
[0120] <Image Processing System to which the Present Technology
is Applied>
[0121] FIG. 7 is a block diagram illustrating a configuration
example of an embodiment of an image processing system to which the
present technology is applied.
[0122] An image processing system 10 includes an image processing
device as an encoder 11, and an image processing device as a
decoder 51.
[0123] The encoder 11 encodes an original image to be encoded
supplied to the encoder 11 and outputs an encoded bitstream
obtained by the encoding. The encoded bitstream is supplied to the
decoder 51 via a recording medium or a transmission medium (not
illustrated).
[0124] The decoder 51 decodes the encoded bitstream supplied to the
decoder 51 and outputs a decoded image obtained by the
decoding.
[0125] <Configuration Example of Encoder 11>
[0126] FIG. 8 is a block diagram illustrating a detailed
configuration example of the encoder 11 of FIG. 7.
[0127] Note that, in the block diagram described below, lines for
supplying information (data) necessary for a process for each block
are omitted as appropriate to avoid complicating the drawing.
[0128] In FIG. 8, the encoder 11 includes an A/D conversion unit
21, a screen rearrangement buffer 22, a calculation unit 23, an
orthogonal transform unit 24, a quantization unit 25, a lossless
encoding unit 26, and an accumulation buffer 27. Moreover, the
encoder 11 includes an inverse quantization unit 28, an inverse
orthogonal transform unit 29, a calculation unit 30, a frame memory
32, a selection unit 33, an intra prediction unit 34, a motion
prediction/compensation unit 35, a predicted image selection unit
36, and a rate control unit 37. Furthermore, the encoder 11
includes a deblocking filter 31a, an adaptive offset filter 41, and
an adaptive loop filter (ALF) 42.
[0129] The A/D conversion unit 21 performs A/D conversion of an
original image of an analog signal (encoding target) into an
original image of a digital signal, and supplies the original image
to the screen rearrangement buffer 22 for storage. Note that, in a
case where the original image of the digital signal is supplied to
the encoder 11, the encoder 11 can be configured without being
provided with the A/D conversion unit 21.
[0130] The screen rearrangement buffer 22 rearranges frames of the
original image into the encoding (decoding) order from the display
order depending on a Group Of Picture (GOP), and supplies the
frames to the calculation unit 23, the intra prediction unit 34,
and the motion prediction/compensation unit 35.
[0131] The calculation unit 23 subtracts a predicted image supplied
from the intra prediction unit 34 or the motion
prediction/compensation unit 35 via the predicted image selection
unit 36 from the original image from the screen rearrangement
buffer 22, and supplies a residual (prediction residual) obtained
by the subtraction to the orthogonal transform unit 24.
[0132] The orthogonal transform unit 24 performs an orthogonal
transform such as a discrete cosine transform or a Karhunen-Loeve
transform on the residual supplied from the calculation unit 23,
and supplies orthogonal transform coefficients obtained by the
orthogonal transform to the quantization unit 25.
[0133] The quantization unit 25 quantizes the orthogonal transform
coefficients supplied from the orthogonal transform unit 24. The
quantization unit 25 sets a quantization parameter on the basis of
a target value of the code amount (code amount target value)
supplied from the rate control unit 37, and quantizes the
orthogonal transform coefficients. The quantization unit 25
supplies coded data that is the quantized orthogonal transform
coefficients to the lossless encoding unit 26.
[0134] The lossless encoding unit 26 encodes the quantized
orthogonal transform coefficients as coded data from the
quantization unit 25 with a predetermined lossless encoding
method.
[0135] Furthermore, the lossless encoding unit 26 acquires, from
each block, encoding information necessary for decoding by the
decoder 51 out of the encoding information regarding
predictive encoding by the encoder 11.
[0136] Here, the encoding information includes, for example, a
prediction mode of intra prediction or inter-prediction, motion
information such as a motion vector, the code amount target value,
the quantization parameter, a picture type (I, P, B), filter
parameters of the deblocking filter 31a and the adaptive offset
filter 41, and the like.
[0137] The prediction mode can be acquired from the intra
prediction unit 34 or the motion prediction/compensation unit 35.
The motion information can be acquired from the motion
prediction/compensation unit 35. The filter parameters of the
deblocking filter 31a and the adaptive offset filter 41 can be
acquired from the deblocking filter 31a and the adaptive offset
filter 41, respectively.
[0138] The lossless encoding unit 26 encodes the encoding
information with a lossless encoding method, for example,
variable-length encoding or arithmetic encoding such as
Context-Adaptive Variable Length Coding (CAVLC) or Context-Adaptive
Binary Arithmetic Coding (CABAC), or others, generates a
(multiplexed) encoded bitstream including the encoding information
after encoding, and the coded data from the quantization unit 25,
and supplies the encoded bitstream to the accumulation buffer
27.
[0139] The accumulation buffer 27 temporarily stores the encoded
bitstream supplied from the lossless encoding unit 26. The encoded
bitstream accumulated in the accumulation buffer 27 is read and
transmitted at a predetermined timing.
[0140] The coded data that is the orthogonal transform coefficients
quantized by the quantization unit 25 is supplied to the lossless
encoding unit 26 and also to the inverse quantization unit 28. The
inverse quantization unit 28 performs inverse quantization on the
quantized orthogonal transform coefficients with a method
corresponding to the quantization by the quantization unit 25, and
supplies the orthogonal transform coefficients obtained by the
inverse quantization to the inverse orthogonal transform unit
29.
[0141] The inverse orthogonal transform unit 29 performs inverse
orthogonal transform on the orthogonal transform coefficients
supplied from the inverse quantization unit 28 with a method
corresponding to an orthogonal transform process performed by the
orthogonal transform unit 24, and supplies a residual obtained as a
result of the inverse orthogonal transform to the calculation unit
30.
[0142] The calculation unit 30 adds the predicted image supplied
from the intra prediction unit 34 or the motion
prediction/compensation unit 35 via the predicted image selection
unit 36 to the residual supplied from the inverse orthogonal
transform unit 29, and therefore obtains and outputs (a part of) a
decoded image obtained by decoding the original image.
[0143] The decoded image output by the calculation unit 30 is
supplied to the deblocking filter 31a or the frame memory 32.
[0144] The frame memory 32 temporarily stores the decoded image
supplied from the calculation unit 30, and a decoded image (filter
image) to which the deblocking filter 31a, the adaptive offset
filter 41, and the ALF 42 are applied, supplied from the ALF 42.
The decoded image stored in the frame memory 32 is supplied to the
selection unit 33 at a necessary timing, as a reference image used
for generating the predicted image.
[0145] The selection unit 33 selects a supply destination of the
reference image supplied from the frame memory 32.
[0146] In a case where the intra prediction is performed in the
intra prediction unit 34, the selection unit 33 supplies the
reference image supplied from the frame memory 32 to the intra
prediction unit 34. In a case where inter-prediction is performed
in the motion prediction/compensation unit 35, the selection unit
33 supplies the reference image supplied from the frame memory 32
to the motion prediction/compensation unit 35.
[0147] The intra prediction unit 34 performs intra prediction
(in-screen prediction) using the original image supplied from the
screen rearrangement buffer 22 and the reference image supplied
from the frame memory 32 via the selection unit 33. The intra
prediction unit 34 selects an optimal intra prediction mode on the
basis of a predetermined cost function (for example, RD cost, or
the like), and supplies a predicted image generated from the
reference image in the optimal intra prediction mode to the
predicted image selection unit 36. Furthermore, as described above,
the intra prediction unit 34 appropriately supplies the prediction
mode indicating the intra prediction mode selected on the basis of
the cost function to the lossless encoding unit 26 and the like.
[0148] The motion prediction/compensation unit 35 performs motion
prediction (inter-prediction) using the original image supplied
from the screen rearrangement buffer 22, and the reference image
supplied from the frame memory 32 via the selection unit 33.
Moreover, the motion prediction/compensation unit 35 performs
motion compensation depending on the motion vector detected by the
motion prediction, to generate the predicted image. The motion
prediction/compensation unit 35 performs inter-prediction in a
plurality of inter-prediction modes prepared in advance, to
generate a predicted image from the reference image.
[0149] The motion prediction/compensation unit 35 selects an
optimal inter-prediction mode on the basis of a predetermined cost
function of the predicted image obtained for each of the plurality
of inter-prediction modes. Moreover, the motion
prediction/compensation unit 35 supplies the predicted image
generated in the optimal inter-prediction mode to the predicted
image selection unit 36.
[0150] Furthermore, the motion prediction/compensation unit 35
supplies, to the lossless encoding unit 26, a prediction mode
indicating the inter-prediction mode selected on the basis of the
cost function, and motion information such as a motion vector
required in decoding of the coded data encoded in the
inter-prediction mode, and the like.
[0151] The predicted image selection unit 36 selects a supply
source of the predicted image to be supplied to the calculation
units 23 and 30 from the intra prediction unit 34 and the motion
prediction/compensation unit 35, and supplies the predicted image
supplied from the selected supply source to the calculation units
23 and 30.
[0152] The rate control unit 37 controls a rate of quantization
operation in the quantization unit 25 on the basis of the code
amount of the encoded bitstream accumulated in the accumulation
buffer 27 so that overflow or underflow does not occur. That is,
the rate control unit 37 sets a target code amount of the encoded
bitstream not to cause overflow and underflow of the accumulation
buffer 27, and supplies the target code amount to the quantization
unit 25.
[0153] The deblocking filter 31a applies the deblocking filter to
the decoded image from the calculation unit 30 as necessary, and
supplies, to the adaptive offset filter 41, the decoded image
(filter image) to which the deblocking filter is applied, or the
decoded image to which the deblocking filter is not applied.
[0154] The adaptive offset filter 41 applies the adaptive offset
filter to the decoded image from the deblocking filter 31a as
necessary, and supplies, to the ALF 42, the decoded image (filter
image) to which the adaptive offset filter is applied, or the
decoded image to which the adaptive offset filter is not
applied.
[0155] The ALF 42 applies ALF to the decoded image from the
adaptive offset filter 41 as necessary, and supplies, to the frame
memory 32, the decoded image to which the ALF is applied or the
decoded image to which the ALF is not applied.
[0156] <Encoding Process>
[0157] FIG. 9 is a flowchart explaining an example of an encoding
process by the encoder 11 in FIG. 8.
[0158] Note that, the order of the steps of the encoding process
illustrated in FIG. 9 is an order for convenience of description,
and the steps of the actual encoding process are appropriately
performed in parallel and in a necessary order. The same applies to
processes described later.
[0159] In the encoder 11, in step S11, the A/D conversion unit 21
performs A/D conversion on the original image and supplies the
converted original image to the screen rearrangement buffer 22, and
the process proceeds to step S12.
[0160] In step S12, the screen rearrangement buffer 22 stores the
original image from the A/D conversion unit 21 and performs
rearrangement in the encoding order to output the original image,
and the process proceeds to step S13.
[0161] In step S13, the intra prediction unit 34 performs an intra
prediction process in the intra prediction mode, and the process
proceeds to step S14. In step S14, the motion
prediction/compensation unit 35 performs an inter-motion prediction
process of performing motion prediction and motion compensation in
the inter-prediction mode, and the process proceeds to step
S15.
[0162] In the intra prediction process by the intra prediction unit
34 and the inter-motion prediction process by the motion
prediction/compensation unit 35, cost functions of various
prediction modes are calculated, and a predicted image is
generated.
[0163] In step S15, the predicted image selection unit 36
determines an optimal prediction mode on the basis of each cost
function obtained by the intra prediction unit 34 and the motion
prediction/compensation unit 35. Then, the predicted image
selection unit 36 selects and outputs a predicted image in the
optimal prediction mode from the predicted image generated by the
intra prediction unit 34 and the predicted image generated by the
motion prediction/compensation unit 35, and the process proceeds
from step S15 to step S16.
[0164] In step S16, the calculation unit 23 calculates a residual
between a target image to be encoded that is the original image
output from the screen rearrangement buffer 22, and the predicted
image output from the predicted image selection unit 36, and
supplies the residual to the orthogonal transform unit 24, and the
process proceeds to step S17.
[0165] In step S17, the orthogonal transform unit 24 performs
orthogonal transform on the residual from the calculation unit 23,
and supplies orthogonal transform coefficients obtained as a result
of the orthogonal transform, to the quantization unit 25, and the
process proceeds to step S18.
[0166] In step S18, the quantization unit 25 quantizes the
orthogonal transform coefficients from the orthogonal transform
unit 24, and supplies quantization coefficients obtained by the
quantization to the lossless encoding unit 26 and the inverse
quantization unit 28, and the process proceeds to step S19.
[0167] In step S19, the inverse quantization unit 28 performs
inverse quantization on the quantization coefficients from the
quantization unit 25, and supplies orthogonal transform
coefficients obtained as a result of the inverse quantization, to
the inverse orthogonal transform unit 29, and the process proceeds
to step S20. In step S20, the inverse orthogonal transform unit 29
performs inverse orthogonal transform on the orthogonal transform
coefficients from the inverse quantization unit 28, and supplies a
residual obtained as a result of the inverse orthogonal transform,
to the calculation unit 30, and the process proceeds to step
S21.
[0168] In step S21, the calculation unit 30 adds the residual from
the inverse orthogonal transform unit 29 and the predicted image
output from the predicted image selection unit 36 together, to
generate a decoded image corresponding to the original image
subjected to residual calculation in the calculation unit 23. The
calculation unit 30 supplies the decoded image to the deblocking
filter 31a, and the process proceeds from step S21 to step S22.
[0169] In step S22, the deblocking filter 31a applies the
deblocking filter to the decoded image from the calculation unit
30, supplies a filter image obtained as a result of the
application, to the adaptive offset filter 41, and the process
proceeds to step S23.
[0170] In step S23, the adaptive offset filter 41 applies the
adaptive offset filter to the filter image from the deblocking
filter 31a, supplies a filter image obtained as a result of the
application, to the ALF 42, and the process proceeds to step
S24.
[0171] In step S24, the ALF 42 applies the ALF to the filter image
from the adaptive offset filter 41, supplies a filter image
obtained as a result of the application, to the frame memory 32,
and the process proceeds to step S25.
[0172] In step S25, the frame memory 32 stores the filter image
supplied from the ALF 42, and the process proceeds to step S26. The
filter image stored in the frame memory 32 is used as a reference
image that is a source for generating the predicted image, in steps
S13 and S14.
[0173] In step S26, the lossless encoding unit 26 encodes the coded
data that is the quantization coefficients from the quantization
unit 25, and generates an encoded bitstream including the coded
data. Moreover, the lossless encoding unit 26 encodes encoding
information as necessary, such as the quantization parameter used
for quantization in the quantization unit 25, the prediction mode
obtained in the intra prediction process in the intra prediction
unit 34, the prediction mode and motion information obtained in the
inter-motion prediction process in the motion
prediction/compensation unit 35, and the filter parameters of the
deblocking filter 31a and the adaptive offset filter 41, and
includes the encoding information in the encoded bitstream.
[0174] Then, the lossless encoding unit 26 supplies the encoded
bitstream to the accumulation buffer 27, and the process proceeds
from step S26 to step S27.
[0175] In step S27, the accumulation buffer 27 accumulates the
encoded bitstream from the lossless encoding unit 26, and the
process proceeds to step S28. The encoded bitstream accumulated in
the accumulation buffer 27 is appropriately read and
transmitted.
[0176] In step S28, the rate control unit 37 controls the rate of
the quantization operation in the quantization unit 25 on the basis
of the code amount (generated code amount) of the encoded bitstream
accumulated in the accumulation buffer 27 so that overflow or
underflow does not occur, and the encoding process ends.
<Configuration Example of Decoder 51>
[0177] FIG. 10 is a block diagram illustrating a detailed
configuration example of the decoder 51 of FIG. 7.
[0178] In FIG. 10, the decoder 51 includes an accumulation buffer
61, a lossless decoding unit 62, an inverse quantization unit 63,
an inverse orthogonal transform unit 64, a calculation unit 65, a
screen rearrangement buffer 67, and a D/A conversion unit 68.
Moreover, the decoder 51 includes a frame memory 69, a selection
unit 70, an intra prediction unit 71, a motion
prediction/compensation unit 72, and a selection unit 73.
Furthermore, the decoder 51 includes a deblocking filter 31b, an
adaptive offset filter 81, and an ALF 82.
[0179] The accumulation buffer 61 temporarily accumulates an
encoded bitstream transmitted from the encoder 11, and supplies the
encoded bitstream to the lossless decoding unit 62 at a
predetermined timing.
[0180] The lossless decoding unit 62 receives the encoded bitstream
from the accumulation buffer 61, and decodes the encoded bitstream
with a method corresponding to the encoding method of the lossless
encoding unit 26 in FIG. 8.
[0181] Then, the lossless decoding unit 62 supplies quantization
coefficients as coded data included in a decoding result of the
encoded bitstream to the inverse quantization unit 63.
[0182] Furthermore, the lossless decoding unit 62 has a function of
performing parsing. The lossless decoding unit 62 parses the
necessary encoding information included in the decoding result of
the encoded bitstream, and supplies the encoding information to the
intra prediction unit 71, the motion prediction/compensation unit
72, the deblocking filter 31b, the adaptive offset filter 81, and
other necessary blocks.
[0183] The inverse quantization unit 63 performs inverse
quantization on the quantization coefficients as the coded data
from the lossless decoding unit 62 with a method corresponding to
the quantization method of the quantization unit 25 in FIG. 8, and
supplies orthogonal transform coefficients obtained by the inverse
quantization to the inverse orthogonal transform unit 64.
[0184] The inverse orthogonal transform unit 64 performs inverse
orthogonal transform on the orthogonal transform coefficients
supplied from the inverse quantization unit 63 with a method
corresponding to the orthogonal transform method of the orthogonal
transform unit 24 in FIG. 8, and supplies a residual obtained as a
result of the inverse orthogonal transform, to the calculation unit
65.
[0185] To the calculation unit 65, the residual is supplied from
the inverse orthogonal transform unit 64, and also a predicted
image is supplied from the intra prediction unit 71 or the motion
prediction/compensation unit 72 via the selection unit 73.
[0186] The calculation unit 65 adds the residual from the inverse
orthogonal transform unit 64 and the predicted image from the
selection unit 73 together, to generate a decoded image, and
supplies the decoded image to the deblocking filter 31b.
[0187] The screen rearrangement buffer 67 temporarily stores the
decoded image supplied from the ALF 82, rearranges frames
(pictures) of the decoded image into the display order from the
encoding (decoding) order, and supplies the frames to the D/A
conversion unit 68.
[0188] The D/A conversion unit 68 performs D/A conversion on the
decoded image supplied from the screen rearrangement buffer 67, and
outputs the converted decoded image to a display (not illustrated)
for display. Note that, in a case where a device connected to the
decoder 51 accepts an image of a digital signal, the decoder 51 can
be configured without being provided with the D/A conversion unit
68.
[0189] The frame memory 69 temporarily stores the decoded image
supplied from the ALF 82. Moreover, the frame memory 69 supplies,
to the selection unit 70, the decoded image as a reference image to
be used for generating the predicted image, at a predetermined
timing or on the basis of an external request from the intra
prediction unit 71, the motion prediction/compensation unit 72, or
the like.
[0190] The selection unit 70 selects a supply destination of the
reference image supplied from the frame memory 69. In a case where
an image encoded in the intra prediction is decoded, the selection
unit 70 supplies the reference image supplied from the frame memory
69 to the intra prediction unit 71. Furthermore, in a case where an
image encoded in the inter-prediction is decoded, the selection
unit 70 supplies the reference image supplied from the frame memory
69 to the motion prediction/compensation unit 72.
[0191] In accordance with the prediction mode included in the
encoding information supplied from the lossless decoding unit 62,
in the intra prediction mode used in the intra prediction unit 34
in FIG. 8, the intra prediction unit 71 performs intra prediction
by using the reference image supplied via the selection unit 70
from the frame memory 69. Then, the intra prediction unit 71
supplies the predicted image obtained by the intra prediction to
the selection unit 73.
[0192] In accordance with the prediction mode included in the
encoding information supplied from the lossless decoding unit 62,
in the inter-prediction mode used in the motion
prediction/compensation unit 35 in FIG. 8, the motion
prediction/compensation unit 72 performs inter-prediction by using
the reference image supplied via the selection unit 70 from the
frame memory 69. The inter-prediction is performed using the motion
information and the like included in the encoding information
supplied from the lossless decoding unit 62, as necessary.
[0193] The motion prediction/compensation unit 72 supplies the
predicted image obtained by the inter-prediction to the selection
unit 73.
[0194] The selection unit 73 selects the predicted image supplied
from the intra prediction unit 71 or the predicted image supplied
from the motion prediction/compensation unit 72, and supplies the
selected predicted image to the calculation unit 65.
[0195] The deblocking filter 31b applies the deblocking filter to
the decoded image from the calculation unit 65 in accordance with
the filter parameters included in the encoding information supplied
from the lossless decoding unit 62, and supplies, to the adaptive
offset filter 81, the decoded image (filter image) to which the
deblocking filter is applied, or the decoded image to which the
deblocking filter is not applied.
[0196] The adaptive offset filter 81 applies the adaptive offset
filter to the decoded image from the deblocking filter 31b as
necessary in accordance with the filter parameters included in the
encoding information supplied from the lossless decoding unit 62,
and supplies, to the ALF 82, the decoded image (filter image) to
which the adaptive offset filter is applied, or the decoded image
to which the adaptive offset filter is not applied.
[0197] The ALF 82 applies the ALF to the decoded image from the
adaptive offset filter 81 as necessary, and supplies the decoded
image to which the ALF is applied or the decoded image to which the
ALF is not applied, to the screen rearrangement buffer 67 and the
frame memory 69.
[0198] <Decoding Process>
[0199] FIG. 11 is a flowchart explaining an example of a decoding
process by the decoder 51 of FIG. 10.
[0200] In the decoding process, in step S51, the accumulation
buffer 61 temporarily accumulates an encoded bitstream transmitted
from the encoder 11, and appropriately supplies the encoded
bitstream to the lossless decoding unit 62, and the process
proceeds to step S52.
[0201] In step S52, the lossless decoding unit 62 receives and
decodes the encoded bitstream supplied from the accumulation buffer
61, and supplies the quantization coefficients as the coded data
included in the decoding result of the encoded bitstream to the
inverse quantization unit 63.
[0202] Furthermore, the lossless decoding unit 62 parses the
encoding information included in the decoding result of the encoded
bitstream. Then, the lossless decoding unit 62 supplies the
necessary encoding information to the intra prediction unit 71, the
motion prediction/compensation unit 72, the deblocking filter 31b,
the adaptive offset filter 81, and other necessary blocks.
[0203] Then, the process proceeds from step S52 to step S53, and
the intra prediction unit 71 or the motion prediction/compensation
unit 72 performs an intra prediction process or an inter-motion
prediction process of generating a predicted image, in accordance
with the reference image supplied via the selection unit 70 from
the frame memory 69, and the encoding information supplied from the
lossless decoding unit 62. Then, the intra prediction unit 71 or
the motion prediction/compensation unit 72 supplies the predicted
image obtained by the intra prediction process or the inter-motion
prediction process to the selection unit 73, and the process
proceeds from step S53 to step S54.
[0204] In step S54, the selection unit 73 selects the predicted
image supplied from the intra prediction unit 71 or the motion
prediction/compensation unit 72, and supplies the predicted image
to the calculation unit 65, and the process proceeds to step
S55.
[0205] In step S55, the inverse quantization unit 63 performs
inverse quantization on the quantization coefficients from the
lossless decoding unit 62, and supplies orthogonal transform
coefficients obtained as a result of the inverse quantization, to
the inverse orthogonal transform unit 64, and the process proceeds
to step S56.
[0206] In step S56, the inverse orthogonal transform unit 64
performs inverse orthogonal transform on the orthogonal transform
coefficients from the inverse quantization unit 63, and supplies a
residual obtained as a result of the inverse orthogonal transform,
to the calculation unit 65, and the process proceeds to step
S57.
[0207] In step S57, the calculation unit 65 generates a decoded
image by adding the residual from the inverse orthogonal transform
unit 64 and the predicted image from the selection unit 73
together. Then, the calculation unit 65 supplies the decoded image
to the deblocking filter 31b, and the process proceeds from step
S57 to step S58.
[0208] In step S58, the deblocking filter 31b applies the
deblocking filter to the decoded image from the calculation unit 65
in accordance with the filter parameters included in the encoding
information supplied from the lossless decoding unit 62, and
supplies a filter image obtained as a result of the application, to
the adaptive offset filter 81, and the process proceeds to step
S59.
[0209] In step S59, the adaptive offset filter 81 applies the
adaptive offset filter to the filter image from the deblocking
filter 31b in accordance with the filter parameters included in the
encoding information supplied from the lossless decoding unit 62,
and supplies a filter image obtained as a result of the
application, to the ALF 82, and the process proceeds to step
S60.
[0210] The ALF 82 applies the ALF to the filter image from the
adaptive offset filter 81, and supplies the filter image obtained
as a result of the application, to the screen rearrangement buffer
67 and the frame memory 69, and the process proceeds to step
S61.
[0211] In step S61, the frame memory 69 temporarily stores the
filter image supplied from the ALF 82, and the process proceeds to
step S62. The filter image (decoded image) stored in the frame
memory 69 is used as a reference image that is a source for
generating the predicted image, in the intra prediction process or
the inter-motion prediction process in step S53.
[0212] In step S62, the screen rearrangement buffer 67 performs
rearrangement of the filter image supplied from the ALF 82 in the
display order, and supplies the filter image to the D/A conversion
unit 68, and the process proceeds to step S63.
[0213] In step S63, the D/A conversion unit 68 performs D/A
conversion on the filter image from the screen rearrangement buffer
67, and the decoding process ends. The filter image (decoded image)
after the D/A conversion is output and displayed on a display (not
illustrated).
[0214] <Configuration Example of Deblocking Filter 31a>
[0215] FIG. 12 is a block diagram illustrating a configuration
example of the deblocking filter 31a.
[0216] Note that, the deblocking filter 31b is configured similarly
to the deblocking filter 31a.
[0217] In FIG. 12, the deblocking filter 31a includes a boundary
strength calculation unit 261, a decision unit 310, a filtering
unit 320, a line buffer 330, and a controller 340.
[0218] The boundary strength calculation unit 261 calculates bS
(boundary strength) using the color difference-related parameters
related to the color difference, targeting the block boundary of
the decoded image. In a case where a signal in the YUV420 format is
a calculation target of the bS, the boundary strength calculation
unit 261 calculates the bS in units of four lines in the luminance
component of the decoded image, that is, in units of two lines in
the color difference component of the decoded image.
[0219] The color difference-related parameters used by the boundary
strength calculation unit 261 to calculate the bS include a flag
indicating the presence or absence of the significant coefficient
of the U component in the block whose block boundary is located on
a grid, and a flag indicating the presence or absence of the
significant coefficient of the V component in the block. The color
difference-related parameters including a flag indicating the
presence or absence of the significant coefficient of each
component (Y component, U component, V component) in each block are
supplied to the boundary strength calculation unit 261 from the
controller 340.
[0220] The boundary strength calculation unit 261 calculates the bS
using the color difference-related parameters and the like from the
controller 340. The boundary strength calculation unit 261
calculates the bS on the basis of whether or not the significant
coefficient of the color difference component exists in two
adjacent blocks sandwiching the block boundary that is a
calculation target of the bS, and the like. The boundary strength
calculation unit 261 supplies the bS to the decision unit 310.
[0221] Note that, as the method of calculating the bS, for example,
the method described in the reference document REF4 or any other
method can be adopted. Furthermore, as the bS, any value
representing the boundary strength can be adopted. Here, as the bS,
values 0, 1, and 2 that divide the boundary strength into three
stages are adopted, and the stronger the boundary strength, the
larger the value of the bS.
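A minimal Python sketch of such a three-level bS is given below. The exact derivation is left to the reference document REF4; the rule shown here (bS = 2 for an intra-coded neighbor, bS = 1 when a significant color difference coefficient exists in either adjacent block, bS = 0 otherwise) and the attribute names are assumptions for illustration only.

def boundary_strength(block_p, block_q) -> int:
    """Return an illustrative bS in {0, 1, 2} for the boundary between block_p and block_q."""
    if block_p.is_intra or block_q.is_intra:
        return 2          # strongest boundary strength
    if block_p.has_significant_chroma_coeff or block_q.has_significant_chroma_coeff:
        return 1          # significant color difference coefficient in an adjacent block
    return 0              # weakest boundary strength; no filtering candidate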
[0222] The decision unit 310 includes a filtering decision unit 311
and a filtering strength decision unit 312, and performs filtering
decision.
[0223] The bS is supplied from the boundary strength calculation
unit 261 to the filtering decision unit 311. Furthermore, the
decoded image is supplied to the filtering decision unit 311 from
the outside of the deblocking filter 31a (the calculation unit 30
in FIG. 8 or the calculation unit 65 in FIG. 10) or the line buffer
330.
[0224] The filtering decision unit 311 performs the application
necessity decision using the bS from the boundary strength
calculation unit 261 and, further, using the decoded image from the
outside of the deblocking filter 31a or the line buffer 330, and
the like.
[0225] The filtering decision unit 311 supplies the decision result
of the application necessity decision to the filtering strength
decision unit 312.
[0226] To the filtering strength decision unit 312, the decision
result of the application necessity decision is supplied from the
filtering decision unit 311, and also the decoded image is supplied
from the outside of the deblocking filter 31a or the line buffer
330.
[0227] In a case where the decision result of the application
necessity decision from the filtering decision unit 311 indicates
that the deblocking filter is applied, the filtering strength
decision unit 312 performs the filtering strength decision for
deciding the filtering strength of the deblocking filter applied to
the color difference component of the decoded image by using the
decoded image from the outside of the deblocking filter 31a or the
line buffer 330. Then, the filtering strength decision unit 312
supplies the decision result of the filtering strength decision to
the filtering unit 320 as the decision result of the filtering
decision.
[0228] In the deblocking filter 31a, as the filter types of the
deblocking filter applied to the color difference component of the
decoded image, there are two filter types, for example, a weak
filter and a chroma long filter having a larger number of taps than
the weak filter, that is, having a stronger filtering strength. The
decision result of the filtering strength indicates the weak filter
or the chroma long filter.
[0229] Furthermore, in a case where the decision result of the
application necessity decision from the filtering decision unit 311
indicates that the deblocking filter is not applied, the filtering
strength decision unit 312 supplies the decision result of the
application necessity decision to the filtering unit 320 as the
decision result of the filtering decision.
[0230] To the filtering unit 320, the decision result of the
filtering decision is supplied from the filtering strength decision
unit 312, and also the decoded image is supplied from the outside
of the deblocking filter 31a or the line buffer 330.
[0231] In a case where the decision result of the filtering
decision from (the filtering strength decision unit 312 of) the
decision unit 310 indicates that the deblocking filter is not
applied, the filtering unit 320 outputs the decoded image as it is
without applying the deblocking filter to the decoded image.
[0232] Furthermore, in a case where the decision result of the
filtering decision from the filtering strength decision unit 312
indicates the chroma long filter or the weak filter, the filtering
unit 320 performs a filtering process of applying the chroma long
filter or the weak filter indicated by the decision result of the
filtering decision to the decoded image.
[0233] That is, the filtering unit 320 performs calculation as a
filtering process of the target pixels that are color difference
pixels to be subjected to the filtering process, in the decoded
image from the outside of the deblocking filter 31a or the line
buffer 330, by using color difference pixels in the vicinity of the
target pixels.
[0234] Here, a pixel used for the filtering decision of the
decision unit 310 (a pixel referred to for the filtering decision)
is also referred to as a filter reference pixel. Furthermore, a
pixel used for the calculation as the filtering process of the
filtering unit 320 is also referred to as a filter constituent
pixel.
[0235] The filtering unit 320 outputs the color difference
components obtained by the filtering process of the target pixels
as the color difference components of the filter pixels (the pixels
constituting the filter image after the filtering process).
[0236] A decoded image is supplied to the line buffer 330 from the
outside of the deblocking filter 31a. The line buffer 330
appropriately stores the color difference components of the decoded
image from the outside of the deblocking filter 31a. Note that, the
line buffer 330 has a storage capacity for storing the color
difference components for a predetermined number of lines (number
of rows), and when the color difference components for the storage
capacity are stored, a new color difference component is stored in
the form of being overwritten on the oldest color difference
component.
[0237] Here, it is assumed that the deblocking filter 31a processes
the decoded image in the order of raster scan.
[0238] In the deblocking filter 31a, the process is performed in
units of a predetermined block (which may be, for example, a block
of a unit in which orthogonal transform is performed, or a block
including a unit in which orthogonal transform is performed). In
the deblocking filter 31a, for example, a plurality of blocks such
as those for one line can be processed in the order of raster scan,
and can also be processed in parallel.
[0239] The decision unit 310 and the filtering unit 320 include a
built-in internal buffer having a capacity capable of storing color
difference components of a line in the horizontal direction
included in a target block that is a block to be processed by the
deblocking filter 31a. The decision unit 310 and the filtering unit
320 store the color difference components of the line in the
horizontal direction included in the target block in the internal
buffer, and use the color difference components stored in the
internal buffer as color difference components of the filter
reference pixel and the filter constituent pixel, to process the
target block.
[0240] In a case where the deblocking filter 31a is applied to the
horizontal block boundary on the upper side of the target block,
color difference components of pixels in the target block and color
difference components of pixels in a block adjacent to the upper
side of the target block are required.
[0241] The color difference components of the pixels in the target
block are stored in the internal buffer when the target block is
processed. On the other hand, since the color difference components
of the pixels in the block adjacent to the upper side of the target
block are not the color difference components of the pixels in the
target block, the values are not stored in the internal buffer when
the target block is processed.
[0242] Thus, the line buffer 330 stores color difference components
of pixels of a line (pixels belonging to a line) necessary for
applying the deblocking filter 31a to the horizontal block boundary
on the upper side of the target block among the lines in the
horizontal direction included in the block adjacent to the upper
side of the target block. The pixels of the line necessary for
applying the deblocking filter 31a are pixels that are used as the
filter reference pixels and the filter constituent pixels.
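The storage behavior of the line buffer 330 described above can be pictured with the following minimal Python sketch, in which the class name and method names are illustrative only: the buffer keeps the color difference components of the most recent lines and discards the oldest line once its capacity is reached.

from collections import deque

class ChromaLineBuffer:
    def __init__(self, num_lines: int):
        # The oldest stored line is dropped automatically when the capacity is reached.
        self._lines = deque(maxlen=num_lines)

    def push_line(self, chroma_line):
        """Store the color difference components of one horizontal line."""
        self._lines.append(list(chroma_line))

    def lines_above_boundary(self, count: int):
        """Return the last `count` stored lines, i.e. the lines of the block adjacent
        to the upper side of the target block that are needed at its horizontal
        block boundary."""
        return list(self._lines)[-count:]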
[0243] The controller 340 controls each block constituting the
deblocking filter 31a. Furthermore, the controller 340 acquires, by
generating them or the like, the color difference-related
parameters and the like necessary for calculating the bS, and
supplies the parameters to the boundary strength calculation unit
261.
[0244] Note that, in the present embodiment, it is assumed that the
deblocking filter 31a processes, for example, the decoded images in
the order of raster scan. However, the deblocking filter 31a can
perform the decoded image in an order other than the order of
raster scan. For example, the deblocking filter 31a can repeat
processing the decoded image from top to bottom, from left to
right. In this case, the horizontal (lateral) (left and right) and
vertical (longitudinal) (up and down) described below are reversed
(swapped).
[0245] FIG. 13 is a flowchart explaining the process of the
deblocking filter 31a of FIG. 12.
[0246] In the deblocking filter 31a, the line buffer 330
appropriately stores the color difference components of the decoded
image supplied from the outside of the deblocking filter 31a.
[0247] Then, in step S101, the boundary strength calculation unit
261 calculates the bS for the block boundary located on the grid
and supplies the bS to the decision unit 310, and the process
proceeds to step S102.
[0248] In steps S102 to S104, the decision unit 310 performs the
filtering decision for each partial block boundary (partial
vertical block boundary and partial horizontal block boundary).
[0249] That is, in step S102, the decision unit 310 decides whether
or not a condition 1 described later is satisfied.
[0250] In a case where it is decided in step S102 that the
condition 1 is not satisfied, the decision unit 310 decides that
the deblocking filter 31a is not applied, and the filtering unit
320 does not perform the filtering process to (pixels of a line
orthogonal to) the partial block boundary for which it is decided
that the condition 1 is not satisfied, and the process ends.
[0251] Furthermore, in a case where it is decided in step S102 that
the condition 1 is satisfied, the process proceeds to step S103,
and the decision unit 310 decides whether or not a condition 2
described later is satisfied.
[0252] In a case where it is decided in step S103 that the
condition 2 is not satisfied, the decision unit 310 decides that
the deblocking filter 31a is not applied, and the filtering unit
320 does not perform the filtering process to the partial block
boundary for which it is decided that the condition 2 is not
satisfied, and the process ends.
[0253] Furthermore, in a case where it is decided in step S103 that
the condition 2 is satisfied, the process proceeds to step S104,
and the decision unit 310 decides whether or not a condition 3
described later is satisfied.
[0254] In a case where it is decided in step S104 that the
condition 3 is not satisfied, the process proceeds to step S105,
and the decision unit 310 decides that the weak filter is applied.
Then, the filtering unit 320 performs a filtering process of the
weak filter to the partial block boundary for which it is decided
that the condition 3 is not satisfied, and the process ends.
[0255] Furthermore, in a case where it is decided in step S104 that
the condition 3 is satisfied, the process proceeds to step S106,
and the decision unit 310 decides that the chroma long filter is
applied. Then, the filtering unit 320 performs a filtering process
of the chroma long filter to the partial block boundary for which
it is decided that the condition 3 is satisfied, and the process
ends.
[0256] In FIG. 13, for example, the decision in steps S102 and S103
corresponds to the application necessity decision, and the decision
in step S104 corresponds to the filtering strength decision.
[0257] Note that, even in a case where it is decided in step S103
that the condition 2 is not satisfied, when the bS is 2, which
indicates that the boundary strength is the strongest, it can be
decided in the decision unit 310 that the weak filter is applied.
Then, in the filtering unit 320, the filtering process of the weak
filter can be performed on the partial block boundary whose bS is 2
although it is decided that the condition 2 is not satisfied.
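The overall flow of FIG. 13 can be outlined by the following Python sketch. The arguments cond1, cond2, and cond3 stand for the already-evaluated results of the conditions 1 to 3 described later; the function name and the string return values are illustrative only.

def decide_chroma_filter(cond1: bool, cond2: bool, cond3: bool, bs: int) -> str:
    """Return 'none', 'weak' or 'chroma_long' for one partial block boundary."""
    if not cond1:                 # step S102: condition 1 not satisfied
        return "none"
    if not cond2:                 # step S103: condition 2 not satisfied
        # A variant described above still applies the weak filter here when bs == 2.
        return "none"
    if not cond3:                 # step S104: condition 3 not satisfied
        return "weak"             # step S105
    return "chroma_long"          # step S106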
[0258] <Filtering Decision of YUV420 Format>
[0259] FIG. 14 is a diagram explaining filtering decision in a case
where the color format is the YUV420 format.
[0260] That is, FIG. 14 is a diagram explaining the filtering
decision (vertical block boundary filtering decision) for the
partial vertical block boundary of the decoded image in the YUV420
format.
[0261] In the YUV420 format, if the partial vertical block boundary
of the luminance component is the vertical block boundary for four
lines in the horizontal direction, the partial vertical block
boundary of the color difference component is 1/2 of the partial
vertical block boundary of the luminance component, that is, a
vertical block boundary for two lines in the horizontal
direction.
[0262] For example, if the partial vertical block boundary of the
luminance component is a partial vertical block boundary b as a
combined portion of the partial vertical block boundaries b1 and b2
illustrated in FIG. 3, the vertical block boundary of the color
difference component is the partial vertical block boundary b1 and
the partial vertical block boundary b2.
[0263] In this case, the length of the partial vertical block
boundary of the luminance component is four pixels (for four
lines), and the length of the partial vertical block boundary of
the color difference component is two pixels (for two lines).
[0264] For the luminance component in the YUV420 format, the
deblocking filter 31a performs the vertical block boundary
filtering decision for the partial vertical block boundary of the
luminance component by using the luminance components of pixels of
two lines, the first line and the fourth line, which are
(horizontal) lines located at both ends of the partial vertical
block boundary of the luminance component, out of four lines in the
horizontal direction of the luminance component orthogonal to the
partial vertical block boundary of the luminance component.
[0265] The vertical block boundary filtering decision for the
partial vertical block boundary of the luminance component here is
filtering decision for deciding whether or not a deblocking filter
is applied to the luminance components of pixels of four
(horizontal) lines, the first line to the fourth line, orthogonal
to the partial vertical block boundary of the luminance
component.
[0266] Furthermore, the deblocking filter 31a performs the vertical
block boundary filtering decision for the partial vertical block
boundary of the color difference component by using the color
difference component of the color difference pixel of the first
line of two lines in the horizontal direction of the color
difference component orthogonal to the partial vertical block
boundary of the color difference component.
[0267] The vertical block boundary filtering decision for the
partial vertical block boundary of the color difference component
here is filtering decision for deciding whether or not a deblocking
filter is applied to the color difference components of pixels of
two (horizontal) lines, the first line to the second line,
orthogonal to the partial vertical block boundary of the color
difference component.
[0268] For the YUV420 format, the truth or falsehood (1 or 0) of
the equation (14) is decided, as the condition 1, in the vertical
block boundary filtering decision of the color difference
component.
(bS == 2 || (bS == 1 && Large block decision))
Large block decision: (EDGE_VER && block_width > 8) || (EDGE_HOR && block_height > 8)
(14)
[0269] Here, the bS in the equation (14) is the bS calculated from
two adjacent blocks sandwiching the partial vertical block
boundary. EDGE_VER is true (1) in a case where the partial block
boundary that is a target of the filtering decision is a partial
vertical block boundary, and false (0) otherwise (in a case where
it is a partial horizontal block boundary). EDGE_HOR is true in a
case where the partial block boundary that is a target of the
filtering decision is a partial horizontal block boundary, and
false otherwise (in a case where it is a partial vertical block
boundary).
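The condition 1 of the equation (14) can be written as the following Python sketch; the parameter names are illustrative, and block_width and block_height are the sizes relevant to the boundary being checked.

def condition_1(bs: int, edge_ver: bool, edge_hor: bool,
                block_width: int, block_height: int) -> bool:
    """Truth or falsehood of the equation (14)."""
    large_block = (edge_ver and block_width > 8) or (edge_hor and block_height > 8)
    return bs == 2 or (bs == 1 and large_block)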
[0270] For the YUV420 format, as the condition 2, the truth or
falsehood of the equation (15) is decided.
d < (beta >> 1) (15)
[0271] A>>B represents that A is shifted to the right by B
bits.
[0272] The d in the equation (15) is calculated in accordance with
the equations (16) to (19).
dp0 = Abs(p_{2,0} - 2*p_{1,0} + p_{0,0}) (16)
dq0 = Abs(q_{2,0} - 2*q_{1,0} + q_{0,0}) (17)
dpq0 = dp0 + dq0 (18)
d = dpq0 (19)
[0273] Here, in the filtering decision of the reference document
REF4, the decision of the condition C92 similar to that of the
Non-Patent Document 1 is performed. In calculation of the d of the
condition C92, as indicated in the equations (1) to (7), the color
difference components p_{2,0}, p_{1,0}, p_{0,0}, q_{2,0}, q_{1,0},
and q_{0,0}, and p_{2,1}, p_{1,1}, p_{0,1}, q_{2,1}, q_{1,1}, and
q_{0,1} of the color difference pixels of the two lines L11 and L12
of the color difference component orthogonal to the partial
vertical block boundary b1 (FIG. 3) are used.
[0274] On the other hand, in the vertical block boundary filtering
decision of the deblocking filter 31a, for the color difference
component, in calculation of the d of the condition 2, as indicated
in the equations (16) to (19), only the color difference components
p_{2,0}, p_{1,0}, p_{0,0}, q_{2,0}, q_{1,0}, and q_{0,0} of the
color difference pixels of the first line L11 of the two lines L11
and L12 of the color difference component orthogonal to the partial
vertical block boundary b1 are used.
[0275] For that reason, the equations (16) to (19) are equations
obtained by deleting, from the equations (1) to (7), the portions
related to the color difference components p_{2,1}, p_{1,1},
p_{0,1}, q_{2,1}, q_{1,1}, and q_{0,1} of the color difference
pixels of the second line L12.
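A Python sketch of the condition 2 for the YUV420 format (the equations (15) to (19)) is given below; p_row0 and q_row0 are assumed to hold the three color difference components nearest the boundary on each side of the first line L11, ordered (p_{2,0}, p_{1,0}, p_{0,0}) and (q_{2,0}, q_{1,0}, q_{0,0}).

def condition_2_yuv420(p_row0, q_row0, beta: int) -> bool:
    """Truth or falsehood of the equation (15), with d from the equations (16) to (19)."""
    p2, p1, p0 = p_row0
    q2, q1, q0 = q_row0
    dp0 = abs(p2 - 2 * p1 + p0)   # equation (16)
    dq0 = abs(q2 - 2 * q1 + q0)   # equation (17)
    d = dp0 + dq0                 # equations (18) and (19)
    return d < (beta >> 1)        # equation (15)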
[0276] For the YUV420 format, as the condition 3, the truth or
falsehood of the equation (20) is decided.
xUseStrongFiltering(LinePos-#0) (20)
[0277] LinePos-#(j-1) represents the j-th line of the two lines L11
and L12 of the color difference component orthogonal to the partial
vertical block boundary b1.
[0278] The function xUseStrongFiltering(LinePos-#(j-1)) in the
equation (20) is a function similar to that used for the filtering
decision of the luminance component, and returns a value of the
truth or falsehood (1 or 0) depending on whether or not the
equations (21) to (23) are satisfied.
|p3 - p0| + |q3 - q0| < (beta >> 3) (21)
|p2 - 2*p1 + p0| + |q2 - 2*q1 + q0| < (beta >> 2) (22)
|p0 - q0| < ((tc*5 + 1) >> 1) (23)
[0279] In the equations (21) to (23), p_i and q_k represent
the color difference components of the color difference pixels
p_{i,j} and q_{k,j} of the i-th and k-th columns from the
partial vertical block boundary b1, in the j-th row of the two
adjacent blocks Bp and Bq sandwiching the partial vertical block
boundary b1, and the index j of p_{i,j} and q_{k,j} is
omitted.
[0280] Furthermore, tc is a parameter given depending on the
quantization parameter.
[0281] |p3 - p0| + |q3 - q0| in the equation (21) represents flatness
of the partial vertical block boundary b1.
|p2 - 2*p1 + p0| + |q2 - 2*q1 + q0| in the equation (22) represents
continuity of the partial vertical block boundary b1. |p0 - q0| in
the equation (23) represents a gap at the partial vertical block
boundary b1.
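The function xUseStrongFiltering of the equations (21) to (23) can be sketched in Python as follows; p and q are assumed to hold the four color difference components of one line on each side of the boundary, ordered (p3, p2, p1, p0) and (q3, q2, q1, q0).

def x_use_strong_filtering(p, q, beta: int, tc: int) -> bool:
    """Truth or falsehood of the equations (21) to (23) for one line."""
    p3, p2, p1, p0 = p
    q3, q2, q1, q0 = q
    flatness = abs(p3 - p0) + abs(q3 - q0) < (beta >> 3)                      # equation (21)
    continuity = abs(p2 - 2 * p1 + p0) + abs(q2 - 2 * q1 + q0) < (beta >> 2)  # equation (22)
    gap = abs(p0 - q0) < ((tc * 5 + 1) >> 1)                                  # equation (23)
    return flatness and continuity and gap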
[0282] Here, in the filtering decision of the color difference
component of the reference document REF4, the truth or falsehood of
the equation (24) is decided.
xUseStrongFiltering(LinePos-#0) &&
xUseStrongFiltering(LinePos-#1) (24)
[0283] In decision of the truth or falsehood of the equation (24),
the color difference components p_{3,0}, p_{2,0}, p_{1,0}, p_{0,0},
q_{3,0}, q_{2,0}, q_{1,0}, and q_{0,0}, and p_{3,1}, p_{2,1},
p_{1,1}, p_{0,1}, q_{3,1}, q_{2,1}, q_{1,1}, and q_{0,1} of the
color difference pixels of the two lines L11 and L12 of the color
difference component orthogonal to the partial vertical block
boundary b1 are used.
[0284] On the other hand, in the filtering decision of the color
difference component of the deblocking filter 31a, in decision of
the truth or falsehood of the condition 3 of the equation (20), only
the color difference components p_{3,0}, p_{2,0}, p_{1,0}, p_{0,0},
q_{3,0}, q_{2,0}, q_{1,0}, and q_{0,0} of the color difference
pixels of the first line L11 of the two lines L11 and L12 of the
color difference component orthogonal to the partial vertical block
boundary b1 are used.
[0285] For that reason, the equation (20) is an equation obtained
by deleting, from the equation (24), the portion
xUseStrongFiltering(LinePos-#1) related to the color difference
components p_{3,1}, p_{2,1}, p_{1,1}, p_{0,1}, q_{3,1}, q_{2,1},
q_{1,1}, and q_{0,1} of the color difference pixels of the
second line L12.
[0286] For the YUV420 format, the filtering decision (horizontal
block boundary filtering decision) for the partial horizontal block
boundary of the decoded image is performed similarly to the
vertical block boundary filtering decision, and thus the
description thereof will be omitted.
[0287] Note that, here, for the color difference component of the
YUV420 format, the vertical block boundary filtering decision is
performed by using the color difference component of the color
difference pixels of the first line of the two lines in the
horizontal direction of the color difference component orthogonal
to the partial vertical block boundary of the color difference
component.
[0288] For the color difference component of the YUV420 format, the
vertical block boundary filtering decision can be performed by
using the color difference components of the color difference
pixels of the second line, not the first line of the two lines in
the horizontal direction of the color difference component
orthogonal to the partial vertical block boundary of the color
difference component. The same applies to the horizontal block
boundary filtering decision.
<Filtering Decision of YUV444 Format>
[0289] FIG. 15 is a diagram explaining filtering decision in a case
where the color format is the YUV444 format.
[0290] That is, FIG. 15 is a diagram explaining the vertical block
boundary filtering decision for the partial vertical block boundary
of the decoded image in the YUV444 format.
[0291] In the YUV444 format, if the partial vertical block boundary
of the luminance component is the vertical block boundary for four
lines in the horizontal direction, the partial vertical block
boundary of the color difference component is a vertical block
boundary for four lines in the horizontal direction, similarly to
the partial vertical block boundary of the luminance component.
[0292] For example, the partial vertical block boundaries of the
luminance component and the color difference component both are the
partial vertical block boundary b as the combined portion of the
partial vertical block boundaries b1 and b2 illustrated in FIG.
3.
[0293] In this case, the lengths of the partial vertical block
boundaries of the luminance component and the color difference
component both are four pixels (for four lines).
[0294] For the luminance component in the YUV444 format, the
deblocking filter 31a performs the vertical block boundary
filtering decision for the partial vertical block boundary of the
luminance component by using the luminance components of pixels of
two lines, the first line and the fourth line, which are
(horizontal) lines located at both ends of the partial vertical
block boundary of the luminance component, out of four lines in the
horizontal direction of the luminance component orthogonal to the
partial vertical block boundary of the luminance component.
[0295] The vertical block boundary filtering decision for the
partial vertical block boundary of the luminance component here is
filtering decision for deciding whether or not a deblocking filter
is applied to the luminance components of pixels of four lines, the
first line to the fourth line, orthogonal to the partial vertical
block boundary of the luminance component.
[0296] Furthermore, the deblocking filter 31a performs the vertical
block boundary filtering decision for the partial vertical block
boundary of the color difference component by using the color
difference components of the color difference pixels of a line
identical to a line used when performing the vertical block
boundary filtering decision that decides whether or not a
deblocking filter is applied to the luminance component.
[0297] That is, the deblocking filter 31a performs the vertical
block boundary filtering decision for the partial vertical block
boundary of the color difference component by using the color
difference components of the color difference pixels of two lines,
the first line and the fourth line, which are (horizontal) lines
located at both ends of the partial vertical block boundary of the
color difference component, out of four lines in the horizontal
direction of the color difference component orthogonal to the
partial vertical block boundary of the color difference
component.
[0298] The vertical block boundary filtering decision for the
partial vertical block boundary of the color difference component
here is filtering decision for deciding whether or not a deblocking
filter is applied to the color difference components of pixels of
four lines, the first line to the fourth line, orthogonal to the
partial vertical block boundary of the color difference
component.
[0299] For the YUV444 format, in the vertical block boundary
filtering decision of the color difference component, as the
condition 1, the truth or falsehood of the equation (14) is
decided, similarly to the YUV420 format.
[0300] For the YUV444 format, as the condition 2, the truth or
falsehood of the equation (25) is decided.
d<beta (25)
[0301] The d in the equation (25) is calculated in accordance with
the equations (26) to (32).
dp0 = Abs(p.sub.2,0 - 2*p.sub.1,0 + p.sub.0,0) (26)
dp3 = Abs(p.sub.2,3 - 2*p.sub.1,3 + p.sub.0,3) (27)
dq0 = Abs(q.sub.2,0 - 2*q.sub.1,0 + q.sub.0,0) (28)
dq3 = Abs(q.sub.2,3 - 2*q.sub.1,3 + q.sub.0,3) (29)
dpq0 = dp0 + dq0 (30)
dpq3 = dp3 + dq3 (31)
d = dpq0 + dpq3 (32)
[0302] In the vertical block boundary filtering decision of the
deblocking filter 31a, for the color difference component, in
calculation of the d in the condition 2, as indicated in the
equations (26) to (32), out of the four lines L11, L12, L21, and
L22 of the color difference component orthogonal to the partial
vertical block boundary b, the color difference components p.sub.2,
0, p.sub.1, 0, p.sub.0, 0, q.sub.2, 0, q.sub.1, 0, and q.sub.0, 0
of the color difference pixels of the first line L11, and the color
difference components p.sub.2, 3, p.sub.1, 3, p.sub.0, 3, q.sub.2,
3, q.sub.1, 3, and q.sub.0, 3 of the color difference pixels of the
fourth line L22 are used.
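The calculation of the d in the condition 2 can be illustrated by the
following sketch, written in Python for readability. The array layout
p[i][j] and q[i][j] (i: distance from the block boundary, j: line index
along the partial vertical block boundary) and the function names are
assumptions made only for this illustration.

def abs_second_diff(a, b, c):
    # |a - 2*b + c|, the activity measure used in the equations (26) to (29)
    return abs(a - 2 * b + c)

def compute_d(p, q):
    # d per the equations (26) to (32): only the first line (j = 0) and the
    # fourth line (j = 3) of the four lines orthogonal to the partial
    # vertical block boundary are read.
    dp0 = abs_second_diff(p[2][0], p[1][0], p[0][0])  # (26)
    dp3 = abs_second_diff(p[2][3], p[1][3], p[0][3])  # (27)
    dq0 = abs_second_diff(q[2][0], q[1][0], q[0][0])  # (28)
    dq3 = abs_second_diff(q[2][3], q[1][3], q[0][3])  # (29)
    dpq0 = dp0 + dq0                                  # (30)
    dpq3 = dp3 + dq3                                  # (31)
    return dpq0 + dpq3                                # (32)

# The condition 2 of the equation (25) is then simply: compute_d(p, q) < beta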
[0303] For the YUV444 format, as the condition 3, the truth or
falsehood of the equation (33) is decided.
xUseStrongFiltering(LinePos-#0) &&
xUseStrongFiltering(LinePos-#3) (33)
[0304] The function xUseStrongFiltering(LinePos-#(j-1)), evaluated for
the j-th line, returns true or false depending on whether or not the
equations (21) to (23) are satisfied, as described in FIG. 14.
[0305] Thus, in the vertical block boundary filtering decision of
the deblocking filter 31a, in decision of the truth or falsehood of
the condition 3 of the equation (33), out of the four lines L11,
L12, L21, and L22 of the color difference component orthogonal to
the partial vertical block boundary b, the color difference
components p.sub.3, 0, p.sub.2, 0, p.sub.1, 0, p.sub.0, 0, q.sub.3,
0, q.sub.2, 0, q.sub.1, 0, and q.sub.0, 0 of the color difference
pixels of the first line L11, and the color difference components
p.sub.3, 3, p.sub.2, 3, p.sub.1, 3, p.sub.0, 3, q.sub.3, 3,
q.sub.2, 3, q.sub.1, 3, and q.sub.0, 3 of the color difference
pixels of the fourth line L22 are used.
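Combining the three conditions, the chroma vertical block boundary
filtering decision for the YUV444 format can be sketched as follows
(Python). The treatment of the condition 1 as bS >= 1 and the
use_strong_filtering callback standing in for xUseStrongFiltering of the
equations (21) to (23) are assumptions, since the equation (14) and FIG.
14 are described earlier in this document.

def chroma_vertical_decision_yuv444(bs, d, beta, use_strong_filtering):
    # Condition 1: placeholder for the equation (14) (boundary strength bS);
    # the bS >= 1 form is an assumption for illustration only.
    cond1 = bs >= 1
    # Condition 2: the equation (25), with d computed per (26) to (32).
    cond2 = d < beta
    # Condition 3: the equation (33), evaluated on the first line (#0) and
    # the fourth line (#3) of the partial vertical block boundary.
    cond3 = use_strong_filtering(0) and use_strong_filtering(3)
    return cond1 and cond2 and cond3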
[0306] For the YUV444 format, the filtering decision (horizontal
block boundary filtering decision) for the partial horizontal block
boundary of the decoded image is performed similarly to the
vertical block boundary filtering decision, and thus the
description thereof will be omitted.
[0307] Note that, here, for the luminance component and color
difference component of the YUV444 format, the vertical block
boundary filtering decision is performed by using the pixels of the
first line and the fourth line of the four lines in the horizontal
direction orthogonal to the partial vertical block boundary.
[0308] For the YUV444 format, the vertical block boundary filtering
decision can be performed by using pixels of any one or more lines
other than the first line and fourth line of the four lines in the
horizontal direction orthogonal to the partial vertical block
boundary. However, in the vertical block boundary filtering
decision of the color difference component, the pixels of the same
line as the line used in the vertical block boundary filtering
decision of the luminance component are used. The same applies to
the horizontal block boundary filtering decision.
<Filtering Decision of YUV422 Format>
[0309] FIG. 16 is a diagram explaining filtering decision in a case
where the color format is the YUV422 format.
[0310] That is, FIG. 16 is a diagram explaining the vertical block
boundary filtering decision for the partial vertical block boundary
of the decoded image in the YUV422 format.
[0311] In the YUV422 format, if the partial vertical block boundary
of the luminance component is the vertical block boundary for four
lines in the horizontal direction, the partial vertical block
boundary of the color difference component is a vertical block
boundary for four lines in the horizontal direction, similarly to
the partial vertical block boundary of the luminance component.
[0312] For example, the partial vertical block boundaries of the
luminance component and the color difference component both are the
partial vertical block boundary b as the combined portion of the
partial vertical block boundaries b1 and b2 illustrated in FIG.
3.
[0313] In this case, the lengths of the partial vertical block
boundaries of the luminance component and the color difference
component both are four pixels (for four lines).
[0314] For the luminance component in the YUV422 format, the
deblocking filter 31a performs the vertical block boundary
filtering decision for the partial vertical block boundary of the
luminance component by using the luminance components of pixels of
two lines, the first line and the fourth line, which are
(horizontal) lines located at both ends of the partial vertical
block boundary of the luminance component, out of four lines in the
horizontal direction of the luminance component orthogonal to the
partial vertical block boundary of the luminance component.
[0315] The vertical block boundary filtering decision for the
partial vertical block boundary of the luminance component here is
filtering decision for deciding whether or not a deblocking filter
is applied to the luminance components of pixels of four lines, the
first line to the fourth line, orthogonal to the partial vertical
block boundary of the luminance component.
[0316] Furthermore, the deblocking filter 31a performs the vertical
block boundary filtering decision for the partial vertical block
boundary of the color difference component by using the color
difference components of the color difference pixels of a line
identical to a line used when performing the vertical block
boundary filtering decision that decides whether or not a
deblocking filter is applied to the luminance component.
[0317] That is, the deblocking filter 31a performs the vertical
block boundary filtering decision for the partial vertical block
boundary of the color difference component by using the color
difference components of the color difference pixels of two lines,
the first line and the fourth line, which are (horizontal) lines
located at both ends of the partial vertical block boundary of the
color difference component, out of four lines in the horizontal
direction of the color difference component orthogonal to the
partial vertical block boundary of the color difference
component.
[0318] The vertical block boundary filtering decision for the
partial vertical block boundary of the color difference component
here is filtering decision for deciding whether or not a deblocking
filter is applied to the color difference components of pixels of
four lines, the first line to the fourth line, orthogonal to the
partial vertical block boundary of the color difference
component.
[0319] For the YUV422 format, in the vertical block boundary
filtering decision of the color difference component, as the
condition 1, the truth or falsehood of the equation (14) is
decided, similarly to the YUV420 format.
[0320] For the YUV422 format, as the condition 2, the truth or
falsehood of the equations (34) and (35) is decided.
edgeDir==EDGE_VER (34)
d<beta (35)
[0321] In edgeDir, EDGE_VER is set in a case where the partial
block boundary that is a target of the filtering decision is the
partial vertical block boundary, and EDGE_HOR is set in a case
where the partial block boundary that is the target of the
filtering decision is the partial horizontal block boundary.
[0322] The d in the equation (35) is calculated in accordance with
the equations (26) to (32) described in FIG. 15.
[0323] In the vertical block boundary filtering decision of the
deblocking filter 31a, for the color difference component, in
calculation of the d in the condition 2, as indicated in the
equations (26) to (32), out of the four lines L11, L12, L21, and
L22 of the color difference component orthogonal to the partial
vertical block boundary b, the color difference components p.sub.2,
0, p.sub.1, 0, p.sub.0, 0, q.sub.2, 0, q.sub.1, 0, and q.sub.0, 0
of the color difference pixels of the first line L11, and the color
difference components p.sub.2, 3, p.sub.1, 3, p.sub.0, 3, q.sub.2,
3, q.sub.1, 3, and q.sub.0, 3 of the color difference pixels of the
fourth line L22 are used.
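For the YUV422 format, the condition 2 of the chroma vertical block
boundary decision adds the edge direction check of the equation (34) to
the threshold test of the equation (35). A minimal Python sketch
follows; the EDGE_VER and EDGE_HOR constants are illustrative encodings.

EDGE_VER = 0  # target is a partial vertical block boundary
EDGE_HOR = 1  # target is a partial horizontal block boundary

def yuv422_chroma_cond2_vertical(edge_dir, d, beta):
    # d is computed from the first and fourth lines per the equations
    # (26) to (32), exactly as in the YUV444 case.
    return edge_dir == EDGE_VER and d < beta  # (34) and (35)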
[0324] For the YUV422 format, as the condition 3, the truth or
falsehood of the equation (33) described in FIG. 15 is decided.
[0325] Thus, for the YUV422 format, in the vertical block boundary
filtering decision of the deblocking filter 31a, similarly to the
case of the YUV444 described in FIG. 15, in decision of the truth
or falsehood of the condition 3 of the equation (33), out of the
four lines L11, L12, L21, and L22 of the color difference component
orthogonal to the partial vertical block boundary b, the color
difference components p.sub.3, 0, p.sub.2, 0, p.sub.1, 0, p.sub.0,
0, q.sub.3, 0, q.sub.2, 0, q.sub.1, 0, and q.sub.0, 0 of the color
difference pixels of the first line L11, and the color difference
components p.sub.3, 3, p.sub.2, 3, p.sub.1, 3, p.sub.0, 3, q.sub.3,
3, q.sub.2, 3, q.sub.1, 3, and q.sub.0, 3 of the color difference
pixels of the fourth line L22 are used.
[0326] Note that, here, for the luminance component and color
difference component of the YUV422 format, the vertical block
boundary filtering decision is performed by using the pixels of the
first line and the fourth line of the four lines in the horizontal
direction orthogonal to the partial vertical block boundary.
[0327] For the YUV422 format, the vertical block boundary filtering
decision can be performed by using pixels of any one or more lines
other than the first line and fourth line of the four lines in the
horizontal direction orthogonal to the partial vertical block
boundary. However, in the vertical block boundary filtering
decision of the color difference component, the pixels of the same
line as the line used in the vertical block boundary filtering
decision of the luminance component are used.
[0328] FIG. 17 is a diagram explaining filtering decision in a case
where the color format is the YUV422 format.
[0329] That is, FIG. 17 is a diagram explaining the filtering
decision (horizontal block boundary filtering decision) for the
partial horizontal block boundary of the decoded image in the
YUV422 format.
[0330] Note that, in FIG. 17, a description will be given assuming
that the block boundary BB is not a vertical block boundary but a
horizontal block boundary in FIG. 3. For example, it is assumed
that the block Bp and the block Bq are the blocks above and below
the (horizontal) block boundary BB, respectively. In this case, as
described in FIG. 3, in p.sub.i, j and q.sub.k, j, i and k are row
indexes and j is a column index.
[0331] Furthermore, the partial block boundaries b1, b2, and b are
partial horizontal block boundaries.
[0332] In the YUV422 format, assuming that the partial horizontal
block boundary of the luminance component is the horizontal block
boundary for four lines in the vertical direction, the partial
horizontal block boundary of the color difference component is 1/2
of the partial horizontal block boundary of the luminance
component, that is, the horizontal block boundary for two lines in
the vertical direction.
[0333] For example, if the partial horizontal block boundary of the
luminance component is the partial horizontal block boundary b as
the combined portion of the partial horizontal block boundaries b1
and b2 illustrated in FIG. 3, the horizontal block boundary of the
color difference component is the partial horizontal block boundary
b1 and the partial horizontal block boundary b2.
[0334] In this case, the length of the partial horizontal block
boundary of the luminance component is four pixels (for four
lines), and the length of the partial horizontal block boundary of
the color difference component is two pixels (for two lines).
[0335] For the luminance component in the YUV422 format, the
deblocking filter 31a performs the horizontal block boundary
filtering decision for the partial horizontal block boundary of the
luminance component by using the luminance components of pixels of
two lines, the first line and the fourth line, which are (vertical)
lines located at both ends of the partial horizontal block boundary
of the luminance component, out of four lines in the vertical
direction of the luminance component orthogonal to the partial
horizontal block boundary of the luminance component.
[0336] The horizontal block boundary filtering decision for the
partial horizontal block boundary of the luminance component here
is filtering decision for deciding whether or not a deblocking
filter is applied to the luminance component of pixels of four
(vertical) lines, the first line to the fourth line, orthogonal to
the partial horizontal block boundary of the luminance
component.
[0337] Furthermore, the deblocking filter 31a performs the
horizontal block boundary filtering decision for the partial
horizontal block boundary of the color difference component by
using the color difference component of the color difference pixel
of the first line of two lines in the vertical direction of the
color difference component orthogonal to the partial horizontal
block boundary of the color difference component.
[0338] The horizontal block boundary filtering decision for the
partial horizontal block boundary of the color difference component
here is filtering decision for deciding whether or not a deblocking
filter is applied to the color difference component of pixels of
two (vertical) lines, the first line to the second line, orthogonal
to the partial horizontal block boundary of the color difference
component.
[0339] For the YUV422 format, in the horizontal block boundary
filtering decision of the color difference component, as the
condition 1, the truth or falsehood of the equation (14) is
decided, similarly to the YUV420 format. However, for the YUV422
format, the bS of the equation (14) calculated in the horizontal
block boundary filtering decision of the color difference component
is the bS calculated from two adjacent blocks sandwiching the
partial horizontal block boundary.
[0340] For the YUV422 format, in the horizontal block boundary
filtering decision of the color difference component, as the
condition 2, the truth or falsehood of the equations (36) and (37)
is decided.
edgeDir==EDGE_HOR (36)
d<(beta>>1) (37)
[0341] In edgeDir, EDGE_VER is set in a case where the partial
block boundary that is a target of the filtering decision is the
partial vertical block boundary, and EDGE_HOR is set in a case
where the partial block boundary that is the target of the
filtering decision is the partial horizontal block boundary.
[0342] The d in the equation (37) is calculated in accordance with
the equations (16) to (19) described in FIG. 14.
[0343] In the horizontal block boundary filtering decision of the
deblocking filter 31a, for the color difference component, in
calculation of the d of the condition 2, as indicated in the
equations (16) to (19), only the color difference components
p.sub.2, 0, p.sub.1, 0, p.sub.0, 0, q.sub.2, 0, q.sub.1, 0, and
q.sub.0, 0 of the color difference pixels of the first line L11 of
the two lines L11 and L12 of the color difference component
orthogonal to the partial horizontal block boundary b1 are used.
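The condition 2 of the YUV422 chroma horizontal block boundary decision
can be sketched as follows (Python). The single-line form of d written
here follows the spirit of the equations (16) to (19) described in FIG.
14, in which only the first line L11 is read; that single-line formula,
the array layout, and the constant encoding of EDGE_HOR are assumptions
for this illustration.

EDGE_HOR = 1  # target is a partial horizontal block boundary

def yuv422_chroma_cond2_horizontal(edge_dir, p, q, beta):
    # p[i][j] / q[k][j]: i and k are the row distances from the horizontal
    # boundary, j is the column index; only column 0 (line L11) is read.
    dp0 = abs(p[2][0] - 2 * p[1][0] + p[0][0])
    dq0 = abs(q[2][0] - 2 * q[1][0] + q[0][0])
    d = dp0 + dq0
    return edge_dir == EDGE_HOR and d < (beta >> 1)  # (36) and (37)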
[0344] For the YUV422 format, in the horizontal block boundary
filtering decision of the color difference component, as the
condition 3, the truth or falsehood of the equation (20) is
decided.
[0345] Thus, in the filtering decision of the color difference
component of the deblocking filter 31a, for the color difference
component, in decision of the truth or falsehood of the condition 3
of the equation (20), only the color difference components
p.sub.3, 0, p.sub.2, 0, p.sub.1, 0, p.sub.0, 0, q.sub.3, 0,
q.sub.2, 0, q.sub.1, 0, and q.sub.0, 0 of the color difference
pixels of the first line L11 of the two lines L11 and L12 of the
color difference component orthogonal to the partial horizontal
block boundary b1 are used.
[0346] Note that, here, for the color difference component of the
YUV422 format, the horizontal block boundary filtering decision is
performed by using the color difference component of the color
difference pixels of the first line of the two lines in the
vertical direction of the color difference component orthogonal to
the partial horizontal block boundary of the color difference
component.
[0347] For the color difference component of the YUV422 format, the
horizontal block boundary filtering decision can be performed by
using the color difference components of the color difference
pixels of the second line, not the first line of the two lines in
the vertical direction of the color difference component orthogonal
to the partial horizontal block boundary of the color difference
component.
[0348] Note that, here, although the length of the partial block
boundary of the luminance component is set to four pixels, the
number of pixels (the number of lines) exceeding four pixels can be
adopted as the length of the partial block boundary of the
luminance component.
[0349] For example, as the length of the partial block boundary of
the luminance component, 8 pixels, 16 pixels, or the like can be
adopted.
[0350] In a case where a length of eight pixels is adopted as the
length of the partial block boundary of the luminance component,
for example, the lengths of the partial horizontal block boundary
and the partial vertical block boundary of the color difference
component in the YUV444 format, and the partial vertical block
boundary in the YUV422 format are eight pixels similar to the case
of the luminance component. Furthermore, the lengths of the partial
horizontal block boundary in the YUV422 format and the partial
horizontal block boundary and the partial vertical block boundary
of the color difference component of the YUV420 format are four
pixels, which is 1/2 of the case of the luminance component.
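The relationship between the length of the partial block boundary of
the luminance component and that of the color difference component
described above can be summarized by the following Python sketch. The
function name and the string encodings of the color format and edge
direction are assumptions.

def chroma_partial_boundary_length(luma_length, color_format, edge_dir):
    # edge_dir: 'VER' for a vertical block boundary, 'HOR' for a horizontal one.
    if color_format == 'YUV444':
        return luma_length                 # same length as the luminance component
    if color_format == 'YUV422':
        # vertical boundary: same as luminance; horizontal boundary: halved
        return luma_length if edge_dir == 'VER' else luma_length // 2
    if color_format == 'YUV420':
        return luma_length // 2            # halved in both directions
    raise ValueError('unknown color format: ' + color_format)

# Example: chroma_partial_boundary_length(8, 'YUV422', 'HOR') returns 4.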
[0351] Moreover, the vertical block boundary filtering decision of
the luminance component and the color difference component in the
YUV422 format can be performed by using pixels of two lines
adjacent to each other in the center of the partial vertical block
boundary, in addition to the pixels of the lines located at both
ends of the partial vertical block boundary among the lines
orthogonal to the partial vertical block boundary.
[0352] For example, in a case where the length of the partial
vertical block boundary of the luminance component and color
difference component in the YUV422 format is eight pixels, it is
possible to perform the vertical block boundary filtering decision
that decides whether or not a deblocking filter is applied to
pixels of each of the luminance component and color difference
component of eight lines orthogonal to the partial vertical block
boundary, by using the luminance component and color difference
component of pixels of four lines, the first, fourth, fifth, and
eighth lines, out of the eight lines orthogonal to the partial
vertical block boundary.
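The selection of the lines used for the decision when longer partial
block boundaries are adopted can be sketched as follows (Python): both
end lines of each four-line half are used, which reproduces the first
and fourth lines for a four-line boundary and the first, fourth, fifth,
and eighth lines for an eight-line boundary. The generalization to
lengths other than four and eight pixels is an assumption.

def decision_line_indices(segment_length):
    # Zero-based indices of the lines read by the filtering decision.
    indices = []
    for start in range(0, segment_length, 4):
        indices.extend([start, start + 3])  # both ends of each 4-line half
    return indices

# decision_line_indices(4) -> [0, 3]
# decision_line_indices(8) -> [0, 3, 4, 7]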
[0353] The same applies to the horizontal block boundary filtering
decision and the vertical block boundary filtering decision of the
luminance component and the color difference component of the
YUV444 format in the above points.
[0354] <Others>
[0355] (Application Target of the Present Technology)
[0356] The present technology can be applied to any image
coding/decoding method. That is, unless inconsistent with the
present technology described above, the specifications of various
processes related to image coding/decoding, such as transform
(inverse transform), quantization (inverse quantization), coding
(decoding), and prediction, are arbitrary, and are not limited to
the examples described above. Furthermore, unless inconsistent with
the present technology described above, some of these processes may
be omitted.
[0357] (Block)
[0358] Furthermore, in the present specification, "block" (not a
block indicating a processing unit) used for description as a
partial area or a unit of processing of an image (picture)
indicates an arbitrary partial area in a picture, unless otherwise
specified, and the size, shape, characteristics, and the like are
not limited. For example, the "block" includes arbitrary partial
areas (units of processing) such as the transform block (TB),
transform unit (TU), prediction block (PB), prediction unit (PU),
smallest coding unit (SCU), coding unit (CU), largest coding unit
(LCU), coding tree block (CTB), coding tree unit (CTU), sub-block,
macroblock, tile, or slice described in the
reference documents REF1 to REF3 and the like.
[0359] (Unit of Processing)
[0360] A unit of data in which the various types of information
described above is set, and a unit of data targeted by the various
processes each are arbitrary and are not limited to the examples
described above. For example, these information and processes each
may be set for each Transform Unit (TU), Transform Block (TB),
Prediction Unit (PU), Prediction Block (PB), Coding Unit (CU),
Largest Coding Unit (LCU), sub-block, block, tile, slice, picture,
sequence, or component, or data in units of data of those may be
targeted. Of course, the unit of data can be set for each piece of
information or process, and it is not necessary that the units of
data of all the information and processes are unified. Note that, a
storage location of these pieces of information is arbitrary, and
may be stored in the header, parameter set, or the like of the unit
of data described above. Furthermore, those may be stored in a
plurality of locations.
[0361] (Control Information)
[0362] Control information related to the present technology
described above may be transmitted from the coding side to the
decoding side. For example, control information (for example,
enabled_flag) may be transmitted that controls whether or not the
application of the present technology described above is permitted
(or prohibited). Furthermore, for example, control information may
be transmitted indicating an object to which the present technology
is applied (or an object to which the present technology is not
applied). For example, control information may be transmitted that
specifies the block size (upper limit, lower limit, or both),
frame, component, layer, or the like to which the present
technology is applied (or for which application is permitted or
prohibited).
[0363] (Block Size Information)
[0364] In specification of the size of the block to which the
present technology is applied, the block size may not only be
directly specified, but also be specified indirectly. For example,
the block size may be specified by using identification information
for identifying the size. Furthermore, for example, the block size
may be specified by a ratio to or a difference from the size of a
reference block (for example, the LCU, the SCU, and the like). For
example, in a case where information for specifying the block size
is transmitted as a syntax element or the like, the information for
indirectly specifying the size as described above may be used as
the information. By doing so, the amount of information can be
reduced, and the coding efficiency can be improved in some cases.
Furthermore, the specification of the block size also includes
specification of a block size range (for example, specification of
an allowable block size range, or the like).
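As one hypothetical illustration of such indirect specification, a
block size limit could be signaled as a log2 difference from the LCU
size, as in the following Python sketch; the scheme and names here are
assumptions, not the concrete syntax of any standard.

import math

def encode_size_as_log2_delta(block_size, reference_size):
    # Signal the block size as the log2 difference from a reference block
    # (e.g., the LCU); this yields smaller numbers to code than the size itself.
    return int(math.log2(reference_size)) - int(math.log2(block_size))

def decode_size_from_log2_delta(delta, reference_size):
    return reference_size >> delta

# Example: a 16-pixel limit relative to a 128-pixel LCU is coded as 3,
# and decoded back as 128 >> 3 = 16.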
[0365] (Others)
[0366] Note that, in the present specification, the "flag" is
information for identifying a plurality of states, and includes not
only information used for identifying two states of true (1) or
false (0), but also information capable of identifying three or
more states. Thus, values that can be taken by the "flag" may be,
for example, two values of 1/0, or three or more values. That is,
the number of bits constituting the "flag" is arbitrary, and may be
1 bit or a plurality of bits. Furthermore, the identification
information (including the flag) is assumed to include not only the
identification information in the bitstream but also difference
information of the identification information with respect to a
certain reference information in the bitstream, so that the "flag"
and "identification information" include not only the information
but also the difference information with respect to the reference
information, in the present specification.
[0367] Furthermore, various types of information (metadata, and the
like) regarding the coded data (bitstream) may be transmitted or
recorded in any form as long as those are associated with the coded
data. Here, a term "associate" means that, for example, when
processing one data, the other data is made to be usable
(linkable). That is, the data associated with each other may be
collected as one data, or may be individual data. For example,
information associated with coded data (image) may be transmitted
on a transmission line different from that for the coded data
(image). Furthermore, for example, the information associated with
the coded data (image) may be recorded in a recording medium
different from that for the coded data (image) (or in a different
recording area of the same recording medium). Note that, this
"association" may be a part of data, not the entire data. For
example, an image and information corresponding to the image may be
associated with each other in an arbitrary unit such as a plurality
of frames, one frame, or a portion within a frame.
[0368] Note that, in this specification, terms "combine",
"multiplex", "add", "integrate", "include", "store", "put in",
"enclose", "insert", and the like mean to combine a plurality of
objects into one, for example, to combine coded data and metadata
into one, and the terms mean one method of the "associate"
described above.
[0369] The present technology can also be implemented as any
configuration constituting a device or system, for example, a
processor as a system large scale integration (LSI) or the like, a
module using a plurality of processors and the like, a unit using a
plurality of modules and the like, a set in which other functions
are further added to the unit, or the like (that is, a
configuration of a part of the device).
[0370] <Description of Computer to which the Present Technology
is Applied>
[0371] Next, a series of processes described above can be performed
by hardware or software. In a case where the series of processes is
performed by software, a program constituting the software is
installed in a general-purpose computer or the like.
[0372] FIG. 18 is a block diagram illustrating a configuration
example of an embodiment of a computer in which a program for
executing the series of processes described above is installed.
[0373] The program can be recorded in advance on a hard disk 905 or
a ROM 903 as a recording medium incorporated in the computer.
[0374] Alternatively, the program can be stored (recorded) in a
removable recording medium 911 driven by a drive 909. Such a
removable recording medium 911 can be provided as so-called
packaged software. Here, examples of the removable recording medium
911 include a flexible disk, a Compact Disc Read Only Memory
(CD-ROM), a Magneto Optical (MO) disk, a Digital Versatile Disc
(DVD), a magnetic disk, a semiconductor memory, and the like.
[0375] Note that, the program can be installed on the computer from
the removable recording medium 911 as described above, or can be
downloaded to the computer via a communications network or a
broadcast network and installed on the hard disk 905 incorporated in
the computer.
That is, for example, the program can be wirelessly transferred
from a download site to the computer via an artificial satellite
for digital satellite broadcasting, or can be transmitted to the
computer via a network such as a Local Area Network (LAN) or the
Internet by wire.
[0376] The computer incorporates a Central Processing Unit (CPU)
902, and an input/output interface 910 is connected to the CPU 902
via a bus 901.
[0377] The CPU 902 executes the program stored in the Read Only
Memory (ROM) 903 according to a command when the command is input
by a user operating an input unit 907 or the like via the
input/output interface 910. Alternatively, the CPU 902 loads the
program stored in the hard disk 905 into a random access memory
(RAM) 904 and executes the program.
[0378] The CPU 902 therefore performs the processing according to
the above-described flowchart or the processing performed by the
configuration of the above-described block diagram. Then, the CPU
902 causes the processing result to be output from an output unit
906 or transmitted from a communication unit 908 via the
input/output interface 910 as necessary, and further, recorded on
the hard disk 905, for example.
[0379] Note that, the input unit 907 includes a keyboard, a mouse,
a microphone, and the like. Furthermore, the output unit 906
includes a Liquid Crystal Display (LCD), a speaker, and the
like.
[0380] Here, in the present specification, the process performed by
the computer in accordance with the program does not necessarily
have to be performed chronologically in the order described as the
flowchart. That is, the process performed by the computer in
accordance with the program also includes processes executed in
parallel or individually (for example, parallel process or process
by an object).
[0381] Furthermore, the program may be processed by one computer
(processor) or may be distributed and processed by a plurality of
computers. Moreover, the program may be transferred to a remote
computer and executed.
[0382] Moreover, in the present specification, a system means a set
of a plurality of constituents (device, module (component), and the
like), and it does not matter whether or not all of the
constituents are in the same cabinet. Thus, a plurality of devices
that is accommodated in a separate cabinet and connected to each
other via a network and one device that accommodates a plurality of
modules in one cabinet are both systems.
[0383] Note that, the embodiment of the present technology is not
limited to the embodiments described above, and various
modifications are possible without departing from the scope of the
present technology.
[0384] For example, the present technology can adopt a
configuration of cloud computing that shares one function in a
plurality of devices via a network to process the function in
cooperation.
[0385] Furthermore, each step described in the above flowchart can
be executed by sharing in a plurality of devices, other than being
executed by one device.
[0386] Moreover, in a case where a plurality of processes is
included in one step, the plurality of processes included in the
one step can be executed by being shared in a plurality of devices,
other than being executed by one device.
[0387] Furthermore, the advantageous effects described in the
present specification are merely examples and are not limited to
them, and other effects may be included.
REFERENCE SIGNS LIST
[0388] 10 Image processing system [0389] 11 Encoder [0390] 21 A/D
conversion unit [0391] 22 Screen rearrangement buffer [0392] 23
Calculation unit [0393] 24 Orthogonal transform unit [0394] 25
Quantization unit [0395] 26 Lossless encoding unit [0396] 27
Accumulation buffer [0397] 28 Inverse quantization unit [0398] 29
Inverse orthogonal transform unit [0399] 30 Calculation unit [0400]
31a, 31b Deblocking filter [0401] 32 Frame memory [0402] 33
Selection unit [0403] 34 Intra prediction unit [0404] 35 Motion
prediction/compensation unit [0405] 36 Predicted image selection
unit [0406] 37 Rate control unit [0407] 41 Adaptive offset filter
[0408] 42 ALF [0409] 51 Decoder [0410] 61 Accumulation buffer
[0411] 62 Lossless decoding unit [0412] 63 Inverse quantization
unit [0413] 64 Inverse orthogonal transform unit [0414] 65
Calculation unit [0415] 67 Screen rearrangement buffer [0416] 68
D/A conversion unit [0417] 69 Frame memory [0418] 70 Selection unit
[0419] 71 Intra prediction unit [0420] 72 Motion
prediction/compensation unit [0421] 73 Selection unit [0422] 81
Adaptive offset filter [0423] 82 ALF [0424] 261 Boundary strength
calculation unit [0425] 310 Decision unit [0426] 311 Filtering
decision unit [0427] 312 Filtering strength decision unit [0428]
320 Filtering unit [0429] 330 Line buffer [0430] 340 Controller
[0431] 901 Bus [0432] 902 CPU [0433] 903 ROM [0434] 904 RAM [0435]
905 Hard disk [0436] 906 Output unit [0437] 907 Input unit [0438]
908 Communication unit [0439] 909 Drive [0440] 910 Input/output
interface [0441] 911 Removable recording medium
* * * * *