System and process for image rescaling using adaptive interpolation kernel with sharpness and de-ringing control

Huang; Yong

Patent Application Summary

U.S. patent application number 12/802382 was filed with the patent office on 2010-06-04 and published on 2011-12-08 for system and process for image rescaling using adaptive interpolation kernel with sharpness and de-ringing control. This patent application is currently assigned to STMicroelectronics Asia Pacific Pte. Ltd. Invention is credited to Yong Huang.

Publication Number: 20110298972
Application Number: 12/802382
Family ID: 45064198
Filed: 2010-06-04
Published: 2011-12-08

United States Patent Application 20110298972
Kind Code A1
Huang; Yong December 8, 2011

System and process for image rescaling using adaptive interpolation kernel with sharpness and de-ringing control

Abstract

A digital video rescaling system is provided. The system includes an image data input configured to receive input support pixels y_1 to y_n and a sharpness control module configured to generate a sharpness control parameter Kshp. The system further includes an interpolated pixel generator configured to use an adaptive interpolation kernel to generate an interpolated pixel y_s based on the input support pixels, and to adjust a sharpness of the interpolated pixel y_s based at least partly upon the sharpness control parameter Kshp. The system also includes a de-ringing control unit configured to adjust the ringing effect of the interpolated pixel based on a local image feature Kfreq, and an output module configured to output the adjusted interpolated pixel for display.


Inventors: Huang; Yong; (Singapore, SG)
Assignee: STMicroelectronics Asia Pacific Pte. Ltd.

Family ID: 45064198
Appl. No.: 12/802382
Filed: June 4, 2010

Current U.S. Class: 348/441 ; 348/E7.003
Current CPC Class: G06T 3/4007 20130101
Class at Publication: 348/441 ; 348/E07.003
International Class: H04N 7/01 20060101 H04N007/01

Claims



1. A digital video rescaling system comprising: an image data input configured to receive input support pixels y_1 to y_n; a sharpness control module configured to generate a sharpness control parameter Kshp; an interpolated pixel generator configured to use an 8-tap filter to generate an interpolated pixel y_s based on the input support pixels, and adjust a sharpness of the interpolated pixel y_s based at least partly upon the sharpness control parameter Kshp; and an output module configured to output the adjusted interpolated pixel y_s for display.

2. A system in accordance with claim 1 wherein the interpolated pixel generator is configured to generate the interpolated pixel y_s using third order polynomial functions based at least partly upon eight input support pixels y_1 to y_8.

3. A system in accordance with claim 1 wherein the interpolated pixel y_s is generated as follows: y_s(s) = \sum_{n=1}^{8} y_n * f_n(s, Kshp), where y_n, n = 1 . . . 8, are eight support pixels, s is the phase of the interpolation, which is the distance from the interpolation position to the position of the support pixel y_4, and f_n(s, Kshp), n = 1 . . . 8, are eight control synthesis functions that can be expressed as follows: f_1(s, Kshp) = (a(0,0)+Kshp*b(0,0))*s^3 + (a(0,1)+Kshp*b(0,1))*s^2 + (a(0,2)+Kshp*b(0,2))*s + (a(0,3)+Kshp*b(0,3)), f_2(s, Kshp) = (a(1,0)+Kshp*b(1,0))*s^3 + (a(1,1)+Kshp*b(1,1))*s^2 + (a(1,2)+Kshp*b(1,2))*s + (a(1,3)+Kshp*b(1,3)), f_3(s, Kshp) = (a(2,0)+Kshp*b(2,0))*s^3 + (a(2,1)+Kshp*b(2,1))*s^2 + (a(2,2)+Kshp*b(2,2))*s + (a(2,3)+Kshp*b(2,3)), f_4(s, Kshp) = (a(3,0)+Kshp*b(3,0))*s^3 + (a(3,1)+Kshp*b(3,1))*s^2 + (a(3,2)+Kshp*b(3,2))*s + (a(3,3)+Kshp*b(3,3)), f_5(s, Kshp) = f_4((1-s), Kshp), f_6(s, Kshp) = f_3((1-s), Kshp), f_7(s, Kshp) = f_2((1-s), Kshp), and f_8(s, Kshp) = f_1((1-s), Kshp), and where A = \begin{bmatrix} a(0,0) & a(0,1) & a(0,2) & a(0,3) \\ a(1,0) & a(1,1) & a(1,2) & a(1,3) \\ a(2,0) & a(2,1) & a(2,2) & a(2,3) \\ a(3,0) & a(3,1) & a(3,2) & a(3,3) \end{bmatrix} and B = \begin{bmatrix} b(0,0) & b(0,1) & b(0,2) & b(0,3) \\ b(1,0) & b(1,1) & b(1,2) & b(1,3) \\ b(2,0) & b(2,1) & b(2,2) & b(2,3) \\ b(3,0) & b(3,1) & b(3,2) & b(3,3) \end{bmatrix} are two coefficient matrices.

4. A system in accordance with claim 3 wherein the coefficient matrices A and B are defined as follows: A = \begin{bmatrix} -21 & 52 & -32 & 0 \\ 52 & -150 & 97 & 1 \\ -154 & 412 & -256 & 0 \\ 304 & -587 & 28 & 254 \end{bmatrix}, and B = \begin{bmatrix} -9 & 21 & -11 & -2 \\ 15 & -38 & 18 & 3 \\ -32 & 69 & -23 & -11 \\ 51 & -88 & 5 & 21 \end{bmatrix}.

5. A method of rescaling digital video, the method comprising: receiving input support pixels y_1 to y_n at an image data input; generating a sharpness control parameter Kshp at a sharpness control module; generating an interpolated pixel y_s based on the input support pixels y_1 to y_n using an 8-tap filter at an interpolated pixel generator; adjusting a sharpness of the interpolated pixel y_s based at least partly upon the sharpness control parameter Kshp at the interpolated pixel generator; and outputting the adjusted interpolated pixel y_s for display.

6. A method in accordance with claim 5 wherein generating the interpolated pixel y_s comprises using third order polynomial functions based at least partly upon eight input support pixels y_1 to y_8.

7. A method in accordance with claim 5 wherein generating the interpolated pixel y_s comprises using the following relationship: y_s(s) = \sum_{n=1}^{8} y_n * f_n(s, Kshp), where y_n, n = 1 . . . 8, are eight support pixels, s is the phase of the interpolation, which is the distance from the interpolation position to the position of the support pixel y_4, and f_n(s, Kshp), n = 1 . . . 8, are eight control synthesis functions that can be expressed as follows: f_1(s, Kshp) = (a(0,0)+Kshp*b(0,0))*s^3 + (a(0,1)+Kshp*b(0,1))*s^2 + (a(0,2)+Kshp*b(0,2))*s + (a(0,3)+Kshp*b(0,3)), f_2(s, Kshp) = (a(1,0)+Kshp*b(1,0))*s^3 + (a(1,1)+Kshp*b(1,1))*s^2 + (a(1,2)+Kshp*b(1,2))*s + (a(1,3)+Kshp*b(1,3)), f_3(s, Kshp) = (a(2,0)+Kshp*b(2,0))*s^3 + (a(2,1)+Kshp*b(2,1))*s^2 + (a(2,2)+Kshp*b(2,2))*s + (a(2,3)+Kshp*b(2,3)), f_4(s, Kshp) = (a(3,0)+Kshp*b(3,0))*s^3 + (a(3,1)+Kshp*b(3,1))*s^2 + (a(3,2)+Kshp*b(3,2))*s + (a(3,3)+Kshp*b(3,3)), f_5(s, Kshp) = f_4((1-s), Kshp), f_6(s, Kshp) = f_3((1-s), Kshp), f_7(s, Kshp) = f_2((1-s), Kshp), and f_8(s, Kshp) = f_1((1-s), Kshp), and where A = \begin{bmatrix} a(0,0) & a(0,1) & a(0,2) & a(0,3) \\ a(1,0) & a(1,1) & a(1,2) & a(1,3) \\ a(2,0) & a(2,1) & a(2,2) & a(2,3) \\ a(3,0) & a(3,1) & a(3,2) & a(3,3) \end{bmatrix} and B = \begin{bmatrix} b(0,0) & b(0,1) & b(0,2) & b(0,3) \\ b(1,0) & b(1,1) & b(1,2) & b(1,3) \\ b(2,0) & b(2,1) & b(2,2) & b(2,3) \\ b(3,0) & b(3,1) & b(3,2) & b(3,3) \end{bmatrix} are two coefficient matrices.

8. A method in accordance with claim 7 wherein the coefficient matrices A and B are defined as follows: A = \begin{bmatrix} -21 & 52 & -32 & 0 \\ 52 & -150 & 97 & 1 \\ -154 & 412 & -256 & 0 \\ 304 & -587 & 28 & 254 \end{bmatrix}, and B = \begin{bmatrix} -9 & 21 & -11 & -2 \\ 15 & -38 & 18 & 3 \\ -32 & 69 & -23 & -11 \\ 51 & -88 & 5 & 21 \end{bmatrix}.

9. A digital video rescaling system comprising: an image data input configured to receive input support pixels y_1 to y_n; an interpolated pixel generator configured to use an 8-tap filter to generate an interpolated pixel value y_s based on the input support pixels y_1 to y_n; a de-ringing control unit configured to modify the interpolated pixel y_s adaptively to a local image feature Kfreq to generate an output y_out; and an output module configured to output the output y_out for display.

10. A system in accordance with claim 9 wherein the local image feature Kfreq is related to local frequency characteristics.

11. A system in accordance with claim 9 further comprising: a local frequency analysis unit configured to calculate the local image feature Kfreq; a local max/min analysis unit configured to distinguish between a larger and a smaller value of two support pixels y_a and y_b and generate an output Lmax and Lmin; and a comparator configured to compare the interpolated pixel value y_s with the output Lmax and Lmin and generate a comparison result y_m.

12. A system in accordance with claim 11 wherein the comparator is configured to generate the comparison result y_m as follows: y_m = \begin{cases} Lmax, & \text{if } y_s > Lmax \\ Lmin, & \text{if } y_s < Lmin \\ y_s, & \text{otherwise} \end{cases}.

13. A system in accordance with claim 11 wherein the de-ringing control unit is further configured to: subtract the comparison result y_m from the interpolated pixel value y_s; multiply the difference by the local image feature Kfreq; and add the product to the comparison result y_m to generate y_out.

14. A system in accordance with claim 9 wherein the local frequency analysis unit is configured to calculate the local image feature Kfreq as follows: Kfreq = min(dev1, dev2, dev3, dev4) / N, where dev1, dev2, dev3 and dev4 are defined as follows: dev1 = max(|y_1 - 2*y_2 + y_3|, |y_2 - 2*y_3 + y_4|), dev2 = max(|y_3 - 2*y_4 + y_5|, |y_4 - 2*y_5 + y_6|), dev3 = max(|y_5 - 2*y_6 + y_7|, |y_6 - 2*y_7 + y_8|), and dev4 = min(|y_2 - y_4|, |y_3 - y_5|), and where N is a constant value used to normalize Kfreq so that Kfreq is in the range of [0, 1].

15. A method of rescaling digital video, the method comprising: receiving support pixels y_1 to y_n at an image data input; using an 8-tap filter to generate an interpolated pixel y_s based on the input support pixels y_1 to y_n at an interpolated pixel generator; modifying the interpolated pixel y_s adaptively to a local image feature Kfreq to generate an output y_out at a de-ringing control unit; and outputting the output y_out for display.

16. A method in accordance with claim 15 wherein the local image feature Kfreq is related to local frequency characteristics.

17. A method in accordance with claim 15 further comprising: distinguishing between a larger and a smaller value of two support pixels y_a and y_b and generating an output Lmax and Lmin at a local max/min analysis unit; and comparing the interpolated pixel value y_s with the output Lmax and Lmin and generating a comparison result y_m at a comparator.

18. A method in accordance with claim 17 wherein the comparison result y_m is generated as follows: y_m = \begin{cases} Lmax, & \text{if } y_s > Lmax \\ Lmin, & \text{if } y_s < Lmin \\ y_s, & \text{otherwise} \end{cases}.

19. A method in accordance with claim 17 further comprising: subtracting the comparison result y_m from the interpolated pixel value y_s; multiplying the difference by the local image feature Kfreq; and adding the product to the comparison result y_m to generate y_out at the de-ringing control unit.

20. A method in accordance with claim 15 wherein calculating the local image feature Kfreq comprises using the following relationship: Kfreq = min(dev1, dev2, dev3, dev4) / N, where dev1, dev2, dev3 and dev4 are defined as follows: dev1 = max(|y_1 - 2*y_2 + y_3|, |y_2 - 2*y_3 + y_4|), dev2 = max(|y_3 - 2*y_4 + y_5|, |y_4 - 2*y_5 + y_6|), dev3 = max(|y_5 - 2*y_6 + y_7|, |y_6 - 2*y_7 + y_8|), and dev4 = min(|y_2 - y_4|, |y_3 - y_5|), and where N is a constant value used to normalize Kfreq so that Kfreq is in the range of [0, 1].
Description



TECHNICAL FIELD OF THE INVENTION

[0001] The present invention generally relates to the field of digital image processing, and more particularly to a system and process for rescaling digital images for display.

BACKGROUND OF THE INVENTION

[0002] Digital images have become more popular in the field of image display because they offer clarity and less distortion during processing. Furthermore, a wider range of image processing algorithms can be applied to digital images. Interpolation is a common stage in image processing, used to improve the appearance of the processed image on the output imaging medium. Interpolation is often performed during rescaling or resizing of digital images.

[0003] Rescaling or resizing of digital images includes magnification or reduction of an image. For example, large screen displays have a native resolution that reaches or exceeds the well-known high-definition TV (HDTV) standard. In order to display a low-resolution digital image on a large screen display, it is desirable to rescale the image to a full screen resolution.

[0004] Traditionally, linear interpolation techniques such as bilinear or bicubic interpolation are used to rescale digital images. The bilinear interpolation method interpolates an input signal using a 2-tap filter. In this method, only the two pixels immediately on either side of the location of the new pixel are used. The bicubic interpolation method interpolates an input signal using a 4-tap filter. In this method, two pixels on either side of the location of the new pixel are used.
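
To make the 2-tap case concrete, the sketch below shows plain linear interpolation between the two neighboring pixels; it is an illustration only, and the function name is not part of this disclosure. The 4-tap bicubic case extends the same idea to two pixels on either side of the new pixel location.

# Minimal sketch of 2-tap (linear) interpolation between two neighboring
# pixels; illustrative only, not part of the disclosed system.
def linear_interpolate(y_left: float, y_right: float, s: float) -> float:
    """Interpolate at phase s (0 <= s <= 1) between two support pixels."""
    return (1.0 - s) * y_left + s * y_right

# Halfway between pixel values 10 and 20:
print(linear_interpolate(10.0, 20.0, 0.5))  # 15.0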

[0005] Both 2-tap and 4-tap filters suffer degradation in the high frequency region. These filters often suffer from image quality issues, such as blurring, aliasing, and staircase edges. 8-tap interpolation, such as that performed by an 8-tap polyphase filter, improves reconstruction in the high frequency region and reduces the staircase and aliasing issues. However, 8-tap interpolation introduces ringing artifacts along the edges, and conventional 8-tap interpolation is not flexible in sharpness control.

SUMMARY OF THE INVENTION

[0006] A digital video rescaling system is provided. The system includes an image data input configured to receive input support pixels y_1 to y_n and a sharpness control module configured to generate a sharpness control parameter Kshp. The system further includes an interpolated pixel generator configured to use an adaptive interpolation kernel to generate an interpolated pixel y_s based on the input support pixels, and adjust a sharpness of the interpolated pixel y_s based at least partly upon the sharpness control parameter Kshp. The system also includes an output module configured to output the adjusted interpolated pixel y_s for display.

[0007] A method of rescaling digital video is provided. The method includes receiving input support pixels y_1 to y_n at an image data input, generating a sharpness control parameter Kshp at a sharpness control module, and generating an interpolated pixel y_s based on the input support pixels y_1 to y_n at an interpolated pixel generator. The method further includes adjusting a sharpness of the interpolated pixel y_s based at least partly upon the sharpness control parameter Kshp at the interpolated pixel generator, and outputting the adjusted interpolated pixel y_s for display.

[0008] A digital video rescaling system is provided. The system includes an image data input configured to receive input support pixels y_1 to y_n, and an interpolated pixel generator configured to use an adaptive interpolation kernel to generate an interpolated pixel value y_s based on the input support pixels y_1 to y_n. The system also includes a de-ringing control unit configured to modify the interpolated pixel value y_s adaptively to a local image feature Kfreq to generate an output y_out, and an output module configured to output the output y_out for display.

[0009] A method of rescaling digital video is provided. The method includes receiving support pixels y_1 to y_n at an image data input, and using an adaptive interpolation kernel to generate an interpolated pixel y_s based on the input support pixels y_1 to y_n at an interpolated pixel generator. The method also includes modifying the interpolated pixel y_s adaptively to a local image feature Kfreq to generate an output y_out at a de-ringing control unit, and outputting the output y_out for display.

[0010] Before undertaking the DETAILED DESCRIPTION OF THE INVENTION below, it may be advantageous to set forth definitions of certain words and phrases used throughout this patent document: the terms "include" and "comprise," as well as derivatives thereof, mean inclusion without limitation; the term "or" is inclusive, meaning and/or; the phrases "associated with" and "associated therewith," as well as derivatives thereof, may mean to include, be included within, interconnect with, contain, be contained within, connect to or with, couple to or with, be communicable with, cooperate with, interleave, juxtapose, be proximate to, be bound to or with, have, have a property of, or the like; and the term "controller" means any device, system or part thereof that controls at least one operation; such a device may be implemented in hardware, firmware or software, or some combination of at least two of the same. It should be noted that the functionality associated with any particular controller may be centralized or distributed, whether locally or remotely. Definitions for certain words and phrases are provided throughout this patent document; those of ordinary skill in the art should understand that in many, if not most, instances such definitions apply to prior, as well as future, uses of such defined words and phrases.

BRIEF DESCRIPTION OF THE DRAWINGS

[0011] For a more complete understanding of the present disclosure and its advantages, reference is now made to the following description taken in conjunction with the accompanying drawings, in which like reference numerals represent like parts:

[0012] FIG. 1 illustrates a digital video rescaling system according to an embodiment of this disclosure;

[0013] FIG. 2 illustrates the generation of an interpolated pixel based on eight input support pixels according to an embodiment of the present disclosure;

[0014] FIG. 3 illustrates an interpolation kernel generated according to an embodiment of the present disclosure;

[0015] FIG. 4 illustrates an interpolation kernel driven by a sharpness control kernel according to an embodiment of the present disclosure;

[0016] FIG. 5 illustrates interpolation kernels having varying sharpness control values according to an embodiment of the present disclosure;

[0017] FIG. 6 illustrates the frequency responses of interpolation kernels having varying sharpness control values according to an embodiment of the present disclosure;

[0018] FIG. 7 illustrates an implementation of an adaptive 8-tap interpolation according to an embodiment of the present disclosure;

[0019] FIG. 8 illustrates an implementation of a de-ringing system according to an embodiment of the present disclosure; and

[0020] FIG. 9 illustrates a method of rescaling digital video according to an embodiment of the present disclosure.

DETAILED DESCRIPTION OF THE INVENTION

[0021] FIGS. 1 through 9, discussed below, and the various embodiments used to describe the principles of the present disclosure in this patent document are by way of illustration only and should not be construed in any way to limit the scope of the disclosure. Those skilled in the art will understand that the principles of the present disclosure may be implemented in any suitably arranged system.

[0022] The present disclosure provides an effective system and method of video image rescaling. The present disclosure describes the use of adaptive interpolation kernels with sharpness and de-ringing control to reduce ringing artifacts while maintaining the quality of the reconstruction in the high frequency region. The sharpness and ringing effect of the interpolated image are controlled using a sharpness control parameter and a de-ringing control parameter.

[0023] In some embodiments, a controllable interpolation kernel is used to generate the interpolated outputs, which solves the issues of aliasing and staircase edges. With the adaptive sharpness and de-ringing control, the present disclosure provides a system and method to improve the sharpness of the output image without introducing ringing artifacts that are a common issue in conventional 8-tap interpolation methods.

[0024] In particular embodiments, a controllable interpolation kernel with the following key components is provided:

[0025] (1) an 8-tap interpolation filter to maintain the quality of the reconstruction in the high frequency region and solve aliasing and staircase problems in the interpolated images;

[0026] (2) sharpness control functionality to generate interpolated images with visual qualities ranging from relatively sharp to soft; and

[0027] (3) de-ringing control functionality to adjust the levels of ringing effects along the edges in the interpolated images.

[0028] The system and method of the present disclosure can be applied to a generic video image processing system and can be used to both upscale and downscale images with controllable sharpness levels.

[0029] FIG. 1 illustrates a digital video rescaling system 100 according to an embodiment of this disclosure.

[0030] As shown in FIG. 1, the rescaling system 100 includes an image data input 101, an adaptive 8-tap interpolation unit 102, a de-ringing control unit 103, and a local feature analysis unit 104. The final output of the system 100 is sent from an image data output module 105. The image data input 101 receives a plurality of discrete sample values and sends the sample values to the adaptive 8-tap interpolation unit 102 and the local feature analysis unit 104. The adaptive 8-tap interpolation unit 102 uses the discrete sample values received from the image data input 101 to generate an interpolated pixel with a controllable sharpness value. The de-ringing control unit 103 receives the interpolated pixel from the adaptive 8-tap interpolation unit 102 and modifies the interpolated pixel according to the local feature that was estimated by the local feature analysis unit 104. The local feature analysis unit 104 uses the discrete sample values received from the image data input 101 to estimate the local features used by the de-ringing control unit 103.

[0031] FIG. 2 illustrates the generation of an interpolated pixel 200 based on eight input support pixels 201-208 according to an embodiment of the present disclosure.

[0032] As shown in FIG. 2, in particular embodiments the adaptive 8-tap interpolation unit 102 uses controllable third order polynomial functions based on the eight input support pixels 201-208 to generate the interpolated pixel 200.

[0033] In one embodiment, the interpolated pixel 200 can be calculated, for example, by Equation 1 below:

y_s = \sum_{n=1}^{8} y_n * f_n(s). [Eqn. 1]

[0034] The eight control synthesis functions f_n(s) can be expressed, for example, by Equations 2-9 below:

f_1(s) = C(0,0)*s^3 + C(0,1)*s^2 + C(0,2)*s + C(0,3), [Eqn. 2]

f_2(s) = C(1,0)*s^3 + C(1,1)*s^2 + C(1,2)*s + C(1,3), [Eqn. 3]

f_3(s) = C(2,0)*s^3 + C(2,1)*s^2 + C(2,2)*s + C(2,3), [Eqn. 4]

f_4(s) = C(3,0)*s^3 + C(3,1)*s^2 + C(3,2)*s + C(3,3), [Eqn. 5]

f_5(s) = f_4(1-s), [Eqn. 6]

f_6(s) = f_3(1-s), [Eqn. 7]

f_7(s) = f_2(1-s), and [Eqn. 8]

f_8(s) = f_1(1-s). [Eqn. 9]

[0035] The C(i,j) coefficients of the above control synthesis functions can be calculated, for example, by Equation 10 below:

C(i,j) = A(i,j) + Kshp*B(i,j), [Eqn. 10]

[0036] In Equation 10,

A = \begin{bmatrix} a(0,0) & a(0,1) & a(0,2) & a(0,3) \\ a(1,0) & a(1,1) & a(1,2) & a(1,3) \\ a(2,0) & a(2,1) & a(2,2) & a(2,3) \\ a(3,0) & a(3,1) & a(3,2) & a(3,3) \end{bmatrix} and B = \begin{bmatrix} b(0,0) & b(0,1) & b(0,2) & b(0,3) \\ b(1,0) & b(1,1) & b(1,2) & b(1,3) \\ b(2,0) & b(2,1) & b(2,2) & b(2,3) \\ b(3,0) & b(3,1) & b(3,2) & b(3,3) \end{bmatrix}

[0037] are two coefficient matrices used, for example, to generate the interpolation kernel and the sharpness control kernel.

[0038] The coefficient matrices A and B are defined, for example, as shown in Equations 11 and 12 below:

A = \begin{bmatrix} -21 & 52 & -32 & 0 \\ 52 & -150 & 97 & 1 \\ -154 & 412 & -256 & 0 \\ 304 & -587 & 28 & 254 \end{bmatrix}, and [Eqn. 11]

B = \begin{bmatrix} -9 & 21 & -11 & -2 \\ 15 & -38 & 18 & 3 \\ -32 & 69 & -23 & -11 \\ 51 & -88 & 5 & 21 \end{bmatrix}. [Eqn. 12]

[0039] Of course one of ordinary skill in the art would recognize that matrices A and B are just one example of coefficient matrices that may be used to generate an 8-tap interpolation kernel and an 8-tap sharpness control kernel, respectively, and that any number of coefficient matrices may be used without departing from the scope or spirit of the present disclosure.
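
As a worked numerical example (the value Kshp = 0.5 is chosen here purely for illustration and is not prescribed by the disclosure), substituting these example matrices into Equation 10 entrywise gives:

C = A + 0.5*B = \begin{bmatrix} -25.5 & 62.5 & -37.5 & -1 \\ 59.5 & -169 & 106 & 2.5 \\ -170 & 446.5 & -267.5 & -5.5 \\ 329.5 & -631 & 30.5 & 264.5 \end{bmatrix}.

These C(i,j) entries are the cubic coefficients used by the synthesis functions of Equations 2-9 at that sharpness setting.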

[0040] Accordingly, the interpolated pixel 200 also can be calculated, for example, by Equation 13 below:

y_s(s) = \sum_{n=1}^{8} y_n * f_n(s, Kshp), [Eqn. 13]

[0041] where y_n, n = 1 . . . 8, are the eight support pixels 201-208 from the image data input 101, and s is the phase of the interpolation, which is the distance from the interpolation position to the position of the support pixel y_4. The range of the phase is from 0 to 1, and the number of phases can be defined by the precision of the interpolation. f_n(s, Kshp), n = 1 . . . 8, are eight control synthesis functions that can be expressed, for example, by Equations 14-21 below:

f_1(s, Kshp) = (a(0,0)+Kshp*b(0,0))*s^3 + (a(0,1)+Kshp*b(0,1))*s^2 + (a(0,2)+Kshp*b(0,2))*s + (a(0,3)+Kshp*b(0,3)), [Eqn. 14]

f_2(s, Kshp) = (a(1,0)+Kshp*b(1,0))*s^3 + (a(1,1)+Kshp*b(1,1))*s^2 + (a(1,2)+Kshp*b(1,2))*s + (a(1,3)+Kshp*b(1,3)), [Eqn. 15]

f_3(s, Kshp) = (a(2,0)+Kshp*b(2,0))*s^3 + (a(2,1)+Kshp*b(2,1))*s^2 + (a(2,2)+Kshp*b(2,2))*s + (a(2,3)+Kshp*b(2,3)), [Eqn. 16]

f_4(s, Kshp) = (a(3,0)+Kshp*b(3,0))*s^3 + (a(3,1)+Kshp*b(3,1))*s^2 + (a(3,2)+Kshp*b(3,2))*s + (a(3,3)+Kshp*b(3,3)), [Eqn. 17]

f_5(s, Kshp) = f_4((1-s), Kshp), [Eqn. 18]

f_6(s, Kshp) = f_3((1-s), Kshp), [Eqn. 19]

f_7(s, Kshp) = f_2((1-s), Kshp), and [Eqn. 20]

f_8(s, Kshp) = f_1((1-s), Kshp). [Eqn. 21]

[0042] FIG. 3 illustrates an interpolation kernel 300 generated according to an embodiment of the present disclosure.

[0043] In this particular embodiment, the interpolation kernel 300 was generated by the coefficient matrix A.

[0044] FIG. 4 illustrates an interpolation kernel 401 driven by a sharpness control kernel 403 according to an embodiment of the present disclosure.

[0045] As shown in FIG. 4, an interpolation kernel 401 is driven by a sharpness control kernel 403. The interpolation kernel 401 may be generated, for example, by coefficient matrix A, and the sharpness control kernel 403 may be generated, for example, by coefficient matrix B. The sharpness control kernel 403 is combined with the interpolation kernel 401 to generate a resulting interpolation kernel 405 having sharpness control. In a particular embodiment, Kshp in Equation 13 is the sharpness control parameter used to adjust the sharpness of the interpolated pixel.

[0046] FIG. 5 illustrates adaptive interpolation kernels having varying sharpness control values according to an embodiment of the present disclosure.

[0047] FIG. 6 illustrates the frequency responses of adaptive interpolation kernels having varying sharpness control values according to an embodiment of the present disclosure.

[0048] As shown in FIGS. 5 and 6, the adaptive interpolation kernels are driven by varying sharpness control values, and their frequency responses vary with the sharpness control values. From the frequency responses, it can be seen that the magnitudes in the high frequency region are adjusted according to the sharpness control parameter.

[0049] FIG. 7 illustrates an implementation of an adaptive 8-tap interpolation according to an embodiment of the present disclosure.

[0050] In this embodiment, coefficient matrices A and B are stored in two register arrays 701 and 703. The coefficient matrix C is calculated in calculation unit 705, for example, by C = A + Kshp*B. Kshp is provided by a sharpness control module 707. The coefficient matrix C is passed to synthesis function unit 709 to generate the synthesis functions f_i(s) (i = 1 . . . 8). The f_i(s) are then used as filter coefficients in interpolated pixel generator 711 to generate the interpolated pixel y_s by an 8-tap filter.
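
A compact software model of this data path is sketched below. It follows Equations 10 through 21 with the example matrices of Equations 11 and 12. The function names, the use of NumPy, and the normalization of the output by the sum of the tap weights (the disclosure does not state how the integer coefficients are scaled) are assumptions made for illustration; this is a sketch of the computation, not the patented hardware implementation.

import numpy as np

# Example coefficient matrices from Eqns. 11 and 12: A generates the base
# interpolation kernel and B the sharpness control kernel.
A = np.array([[ -21.0,   52.0,  -32.0,    0.0],
              [  52.0, -150.0,   97.0,    1.0],
              [-154.0,  412.0, -256.0,    0.0],
              [ 304.0, -587.0,   28.0,  254.0]])
B = np.array([[ -9.0,  21.0, -11.0,  -2.0],
              [ 15.0, -38.0,  18.0,   3.0],
              [-32.0,  69.0, -23.0, -11.0],
              [ 51.0, -88.0,   5.0,  21.0]])

def synthesis_functions(s, kshp):
    """Evaluate the eight control synthesis functions f_1..f_8 of
    Eqns. 14-21 at phase s (0 <= s <= 1) for sharpness parameter Kshp."""
    C = A + kshp * B                                   # Eqn. 10: C = A + Kshp*B
    p = np.array([s**3, s**2, s, 1.0])                 # powers of the phase
    pm = np.array([(1 - s)**3, (1 - s)**2, 1 - s, 1.0])
    f1_to_f4 = C @ p                                   # Eqns. 14-17
    f5_to_f8 = (C @ pm)[::-1]                          # Eqns. 18-21 (mirrored taps)
    return np.concatenate([f1_to_f4, f5_to_f8])

def interpolate(support, s, kshp):
    """Eqn. 13: 8-tap adaptive interpolation from support pixels y_1..y_8.
    The result is divided by the tap sum, an assumed normalization."""
    f = synthesis_functions(s, kshp)
    return float(np.dot(support, f) / f.sum())

# Usage: interpolate halfway between y_4 and y_5 of a linear ramp.
y = np.array([10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 80.0])
print(interpolate(y, s=0.5, kshp=0.5))                 # 45.0 on a ramp

On a linear ramp the mirrored taps reproduce the midpoint exactly at s = 0.5, which provides a quick sanity check of the synthesis functions.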

[0051] As shown in FIG. 1, the process at the de-ringing control unit 103 is used to modify the interpolated pixel adaptively to the local image feature, the local image feature being related to the local frequency characteristics. In particular embodiments, the de-ringing control should be weaker in the high frequency region to maintain quality reconstruction there, and stronger in edge or low frequency regions to reduce the ringing effect.

[0052] FIG. 8 illustrates an implementation of a de-ringing process according to an embodiment of the present disclosure.

[0053] FIG. 8 shows a local frequency analysis unit 801, a local max/min analysis unit 802, a comparator 803 and a de-ringing control unit 804. The local frequency analysis unit 801 is used to calculate a feature value that is related to the local frequency. In some embodiments, the local feature is estimated, for example, using Equation 22 below:

Kfreq=min(dev1,dev2,dev3,dev4)/N, [Eqn. 22]

[0054] where dev1, dev2, dev3 and dev4 are defined as shown in Equations 23-26 below:

dev1 = max(|y_1 - 2*y_2 + y_3|, |y_2 - 2*y_3 + y_4|), [Eqn. 23]

dev2 = max(|y_3 - 2*y_4 + y_5|, |y_4 - 2*y_5 + y_6|), [Eqn. 24]

dev3 = max(|y_5 - 2*y_6 + y_7|, |y_6 - 2*y_7 + y_8|), and [Eqn. 25]

dev4 = min(|y_2 - y_4|, |y_3 - y_5|). [Eqn. 26]

[0055] N is a constant value used to normalize Kfreq so that Kfreq is in the range of [0,1]. Of course one of ordinary skill in the art would recognize that this is just one way of determining the local feature and that other means of determining the local feature may be utilized without departing from the scope or spirit of the present disclosure.
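
A direct transcription of Equations 22 through 26 is sketched below. The plain-Python function name and the example normalization constant N = 255 (chosen for 8-bit pixels so that Kfreq stays within [0, 1]) are illustrative assumptions; the disclosure only requires that N map Kfreq into that range.

def local_frequency_feature(y, n_const: float = 255.0) -> float:
    """Estimate the local image feature Kfreq of Eqns. 22-26 from the eight
    support pixels y_1..y_8 (passed as a length-8 sequence).

    n_const plays the role of the constant N; 255.0 is an assumed value for
    8-bit pixels, chosen only so that Kfreq falls in [0, 1]."""
    y1, y2, y3, y4, y5, y6, y7, y8 = y
    dev1 = max(abs(y1 - 2 * y2 + y3), abs(y2 - 2 * y3 + y4))   # Eqn. 23
    dev2 = max(abs(y3 - 2 * y4 + y5), abs(y4 - 2 * y5 + y6))   # Eqn. 24
    dev3 = max(abs(y5 - 2 * y6 + y7), abs(y6 - 2 * y7 + y8))   # Eqn. 25
    dev4 = min(abs(y2 - y4), abs(y3 - y5))                     # Eqn. 26
    return min(dev1, dev2, dev3, dev4) / n_const               # Eqn. 22

# An ideal step edge yields Kfreq = 0 (full de-ringing control), while
# busy, high-frequency detail yields a larger Kfreq (lighter control).
print(local_frequency_feature([10, 10, 10, 10, 200, 200, 200, 200]))  # 0.0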

[0056] The local max/min analysis unit 802 is used to determine the larger and the smaller of the support pixels y_4 and y_5, as shown in Equations 27 and 28 below:

Lmax = max(y_4, y_5), and [Eqn. 27]

Lmin = min(y_4, y_5). [Eqn. 28]

[0057] The outputs of the local max/min analysis unit 802 are then compared with the output of the adaptive 8-tap interpolation unit 102 (y_s) in the comparator 803 to generate the output (y_m) as shown in Equation 29 below:

y_m = \begin{cases} Lmax, & \text{if } y_s > Lmax \\ Lmin, & \text{if } y_s < Lmin \\ y_s, & \text{otherwise} \end{cases} [Eqn. 29]

[0058] The de-ringing control unit 804 then subtracts y_m from y_s, multiplies the difference by the local image feature Kfreq, and adds y_m back to generate the final output y_out, as shown in Equation 30 below:

y_out = Kfreq*(y_s - y_m) + y_m. [Eqn. 30]
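
Taken together, Equations 27 through 30 amount to a soft clamp of the interpolated value toward the range spanned by the two nearest support pixels, with Kfreq controlling how much of the original overshoot is kept. A minimal sketch is shown below; the function name and plain-Python style are assumptions for illustration.

def dering(y_s: float, y4: float, y5: float, kfreq: float) -> float:
    """Apply the de-ringing control of Eqns. 27-30 to an interpolated pixel
    y_s lying between support pixels y_4 and y_5."""
    lmax, lmin = max(y4, y5), min(y4, y5)          # Eqns. 27-28
    y_m = min(max(y_s, lmin), lmax)                # Eqn. 29: clamp y_s to [Lmin, Lmax]
    return kfreq * (y_s - y_m) + y_m               # Eqn. 30: blend according to Kfreq

# Example: an overshoot of 130 between pixels 100 and 120 is pulled back
# toward 120; with kfreq = 0.2 the output is 0.2*(130 - 120) + 120 = 122.
print(dering(130.0, 100.0, 120.0, kfreq=0.2))      # 122.0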

[0059] FIG. 9 illustrates a method 900 of rescaling digital video according to an embodiment of the present disclosure.

[0060] As shown in FIG. 9, the method 900 includes receiving input support pixels y_1 to y_n (block 901), generating a sharpness control parameter Kshp (block 903), and generating an interpolated pixel y_s based on the input support pixels y_1 to y_n (block 905). The method 900 also includes adjusting a sharpness of the interpolated pixel y_s based at least partly upon the sharpness control parameter Kshp (block 907). The method 900 further includes generating a local image feature Kfreq (block 909) and modifying the interpolated pixel y_s adaptively to the local image feature Kfreq to generate an output y_out (block 911). The method 900 also includes outputting the output y_out for display (block 913).

[0061] Although the present disclosure has been described with an exemplary embodiment, various changes and modifications may be suggested to one skilled in the art. It is intended that the present disclosure encompass such changes and modifications as fall within the scope of the appended claims.

* * * * *

