U.S. patent application number 11/898909, for methods and apparatuses providing noise reduction while preserving edges for imagers, was filed on 2007-09-17 and published on 2008-02-21. The application is currently assigned to Micron Technology, Inc. The invention is credited to Igor Subbotin.
United States Patent Application 20080043124
Kind Code: A1
Application Number: 11/898909
Family ID: 37839118
Inventor: Subbotin, Igor
Published: February 21, 2008
Methods and apparatuses providing noise reduction while preserving
edges for imagers
Abstract
Methods and apparatuses for reducing noise in an image by obtaining a first value for a target pixel; obtaining a respective second value for each neighboring pixel surrounding the target pixel; for each neighboring pixel, comparing the difference between the first value and the second value to a threshold value; and, based on the result of the comparison, selectively replacing the first value with an average value obtained from the first value and at least a subset of the second values from the neighboring pixels whose associated differences are less than the threshold value. In a further modification, fewer than all of the neighboring pixels whose associated differences are less than the threshold value are used in the averaging.
Inventors: Subbotin, Igor (South Pasadena, CA)
Correspondence Address: DICKSTEIN SHAPIRO LLP, 1825 EYE STREET NW, Washington, DC 20006-5403, US
Assignee: Micron Technology, Inc.
Family ID: 37839118
Appl. No.: 11/898909
Filed: September 17, 2007
Related U.S. Patent Documents

Application Number | Filing Date  | Patent Number
11/295,445         | Dec 7, 2005  |
11/898,909         | Sep 17, 2007 |
Current U.S. Class: 348/250; 348/E9.002; 382/260
Current CPC Class: G06T 5/20 20130101; H04N 1/409 20130101; G06T 2207/10024 20130101; G06T 5/002 20130101
Class at Publication: 348/250; 382/260; 348/E09.002
International Class: G03B 19/02 20060101 G03B019/02; G06K 9/40 20060101 G06K009/40
Claims
1. A method of processing an image, comprising the steps of:
selecting a set of pixel signals from pixels surrounding an
identified target pixel having a target pixel signal; for each of
the surrounding pixel signals of the set, determining a respective
difference value between the target pixel signal and the
surrounding pixel signal; for each of the surrounding pixel signals
of the set, determining if the respective difference value is less
than a predetermined threshold; and substituting an average value
as a value for the target pixel signal, the average value being based
on the target pixel signal and at least a selected subset of the
pixel signals having a respective difference value less than a
predetermined threshold.
2. The method of claim 1, wherein the average value is an average
of 2.sup.n pixel signals where n is an integer.
3. The method of claim 1, wherein the target pixel is one of a red,
greenred, greenblue, or blue pixel and each of the pixels in the
selected set is the same color as the target pixel.
4. The method of claim 1, wherein at least one of the surrounding
pixels has been previously denoised.
5. The method of claim 1, wherein the predetermined threshold is
based on at least one of an analog and digital gain used in image
capture.
6. The method of claim 1, wherein the predetermined threshold is
based on the color of the target pixel.
7. A method of processing an image comprising the steps of:
selecting a target pixel having a first signal value; replacing a
first register value with a sum of the first register value and the
first signal value; incrementing a first counter; selecting a
correction kernel having a number of pixels surrounding the target
pixel, each of the kernel pixels having a respective second signal
value; grouping the selected kernel pixels into at least one pixel
group; comparing a difference between the first signal value and a
one of the respective second signal values to a threshold value;
replacing the first register value with a sum of the first register
value and one of the respective second signal values based on a
result of the comparing the difference step; incrementing the first
counter based on a result of the comparing the difference step;
comparing a value of the first counter to a set of at least one
predetermined number; replacing a second register value with the
first register value based on a result of the comparing the value
step; replacing a value of a second counter with the value of the
first counter based on a result of the comparing the value step;
and replacing the first signal value with the result of a division
of the second register value by the second counter.
8. The method of claim 7, further comprising repeating the
comparing the difference step through replacing the value of the
second counter step for each of the respective second signal
values.
9. The method of claim 7, further comprising repeating the
comparing the difference step through replacing the value of the
second counter step for each of the respective second signal values
in a first pixel group before repeating the comparing the
difference step through replacing the value of the second counter
step for each of the respective second signal values in a second
pixel group.
10. The method of claim 7, wherein each selected kernel pixel has a
respective distance from the target pixel and the kernel pixels are
grouped based on each kernel pixel's distance from the target pixel.
11. The method of claim 7, wherein the set of at least one
predetermined number is comprised of integer powers of the number
two.
12. The method of claim 7, wherein the target pixel is one of a
red, green, or blue pixel and each of the selected kernel pixels
is the same color as the target pixel.
13. A method of processing an image, comprising the steps of:
selecting a target pixel having a signal value; selecting a
correction kernel associated with the target pixel containing a set
of correction kernel pixels each having a respective pixel signal
and a respective distance from the target pixel; for each of the
correction kernel pixel signals, determining a respective
difference value between the target pixel signal and the correction
kernel pixel signal; for each of the correction kernel pixel
signals, determining if the respective difference value is less
than a predetermined threshold; and substituting an average value
as a value for the target pixel signal, the average value being an
average of 2.sup.n pixel signals where n is an integer, wherein the
2.sup.n signals are comprised of the target pixel signal and at
least a subset of the correction kernel pixel signals having a
respective difference value less than a predetermined
threshold.
14. The method of claim 13, wherein the at least a subset of
correction kernel pixel signals is selected based on the correction
kernel pixel signals' respective distances from the target
pixel.
15. An imager comprising: a pixel array for capturing an image and
comprising a plurality of pixels, each pixel outputting a signal
representing an amount of light received; and a circuit for
denoising an image captured by the array, the circuit being
configured to: select from the captured image a target pixel having
a signal value; select a correction kernel associated with the
target pixel containing a set of correction kernel pixels each
having a respective pixel signal and a respective distance from the
target pixel; for each of the correction kernel pixel signals,
determine a respective difference value between the target pixel
signal and the correction kernel pixel signal; for each of the
correction kernel pixel signals, determine if the respective
difference value is less than a predetermined threshold; and
substitute an average value as a value for the target pixel signal,
the average value being an average of 2.sup.n pixel signals where n
is an integer, wherein the 2.sup.n signals are comprised of the
target pixel signal and at least a subset of the correction kernel
pixel signals having a respective difference value less than a
predetermined threshold.
16. The imager of claim 15, wherein the at least a subset of
correction kernel pixel signals is selected based on the correction
kernel pixel signals' respective distances from the target
pixel.
17. The imager of claim 15, wherein the imager is part of a camera
system.
18. An imager comprising: a pixel array for capturing an image and
comprising a plurality of pixels, each pixel outputting a signal
representing an amount of light received; and a processing circuit
for denoising an image captured by the array, the processing
circuit being configured to: select from the captured image a set
of pixel signals from pixels surrounding an identified target pixel
having a target pixel signal; for each of the surrounding pixel
signals of the set, determine a respective difference value between
the target pixel signal and the surrounding pixel signal; for each
of the surrounding pixel signals of the set, determine if the
respective difference value is less than a predetermined threshold;
and substitute an average value as a value for the target pixel
signal, the average value being based on the target pixel signal and
at least a selected subset of the pixel signals having a respective
difference value less than a predetermined threshold.
19. The imager of claim 18, wherein the average value is an average
of 2.sup.n pixel signals where n is an integer.
20. The imager of claim 18, wherein the target pixel and each pixel
in the selected set are the same color.
21. The imager of claim 18, wherein at least one of the surrounding
pixels has been previously denoised.
22. The imager of claim 18, wherein the imager is part of a camera
system.
23. A storage medium containing a program for execution by a
processor, the processor, when executing the program, performing the
steps of: selecting a set of pixel signals from pixels surrounding
an identified target pixel having a target pixel signal; for each
of the surrounding pixel signals of the set, determining a
respective difference value between the target pixel signal and the
surrounding pixel signal; for each of the surrounding pixel signals
of the set, determining if the respective difference value is less
than a predetermined threshold; and substituting an average value
as a value for the target pixel signal, the average value being based
on the target pixel signal and at least a selected subset of the
pixel signals having a respective difference value less than a
predetermined threshold.
Description
[0001] This application is a continuation-in-part of U.S. patent
application Ser. No. 11/295,445, filed on Dec. 7, 2005, the subject
matter of which is incorporated in its entirety by reference
herein.
FIELD OF THE INVENTION
[0002] The embodiments described herein relate generally to the
field of solid state imager devices, and more particularly to
methods and apparatuses for noise reduction in a solid state imager
device.
BACKGROUND OF THE INVENTION
[0003] Solid state imagers, including charge coupled devices (CCD),
CMOS imagers and others, have been used in photo imaging
applications. A solid state imager circuit includes a focal plane
array of pixels, each one of the pixels including a photosensor,
which may be a photogate, photoconductor or a photodiode having a
doped region for accumulating photo-generated charge.
[0004] One of the most challenging problems for solid state imagers
is noise reduction, especially for imagers with a small pixel size.
The effect of noise increases as pixel sizes continue to decrease
and can severely degrade image quality; the impact is greater in
smaller pixels because of their reduced dynamic range. One way of
addressing this problem is to improve fabrication processes; the
costs associated with such improvements, however, are high.
Accordingly, engineers often focus on other methods of noise
reduction. One such solution applies noise filters during image
processing. Many sophisticated noise reduction algorithms can reduce
noise in the picture without blurring edges; however, they require
substantial computational resources and cannot easily be implemented
in a system-on-a-chip application. Most simple noise reduction
algorithms that can be implemented in system-on-a-chip applications
blur the edges of the images.
[0005] Two known methods that may be used for image denoising are
now briefly discussed. The first method uses local smoothing
filters, which apply a local low-pass filter to reduce the noise
component of the image. Typical examples of such filters include
averaging, median, and Gaussian filters. One
problem associated with local smoothing filters is that they do not
distinguish between high frequency components that are part of the
image and those created due to noise. As a result, these filters
not only remove noise but also blur the edges of the image.
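The edge-blurring drawback described above can be seen in a small sketch (illustrative only, not from the application): a 1D averaging filter applied across a clean step edge smears the transition.

```python
def box_filter(signal, radius=1):
    """A 1D local smoothing filter: average each sample with its
    neighbors within `radius`."""
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        window = signal[lo:hi]
        out.append(sum(window) / len(window))
    return out

edge = [10, 10, 10, 10, 100, 100, 100, 100]   # a clean step edge
print(box_filter(edge))
# the sharp 10 -> 100 transition now passes through 40.0 and 70.0,
# i.e. the filter has blurred the edge along with any noise
```

Because the filter cannot tell a noise spike from a genuine edge, both are smoothed equally.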
[0006] A second group of denoising methods works in the spatial
frequency domain. These methods typically first convert the image
data into a frequency space (forward transform), then filter the
transformed image, and finally convert the image back into the image
space (reverse transform). Typical examples of such filters include
DFT filters and wavelet transform filters. The utilization of
these filters for image denoising, however, is impeded by the large
volume of calculations required to process the image data.
Additionally, block artifacts and oscillations may result from the
use of these filters to reduce noise. Further, these filters are
best implemented in a YUV color space (Y is the luminance component
and U and V are the chrominance components). Accordingly, there is
a need and desire for an efficient image denoising method and
apparatus which does not significantly blur the edges of the
image.
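As a rough illustration of the forward-transform / filter / reverse-transform pipeline described above (a sketch under stated assumptions; the application does not specify any implementation), a toy DFT low-pass filter over one row of samples might look like:

```python
import cmath

def dft(x):
    """Forward transform: image-space samples -> frequency coefficients."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(coeffs):
    """Reverse transform back into the image space."""
    n = len(coeffs)
    return [sum(coeffs[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)).real / n
            for t in range(n)]

noisy = [10, 12, 9, 11, 10, 13, 8, 11]
coeffs = dft(noisy)
# filtering step: keep only the lowest-frequency coefficients
# (k = 0, 1 and the mirrored k = n-1), zero out the rest
n = len(coeffs)
filtered = [c if (k <= 1 or k >= n - 1) else 0 for k, c in enumerate(coeffs)]
smoothed = idft(filtered)
```

Even this toy version hints at the cost problem the paragraph raises: a naive DFT is O(n^2) per row, which is part of why such filters are hard to fit into a system-on-a-chip pipeline.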
BRIEF DESCRIPTION OF THE DRAWINGS
[0007] FIG. 1 is a top-down view of a conventional microlens and
color filter array used in connection with a pixel array.
[0008] FIG. 2A depicts an image correction kernel for a red,
greenred, greenblue, or blue pixel of a pixel array in accordance
with an embodiment.
[0009] FIG. 2B depicts a correction kernel for a green pixel of a
pixel array in accordance with an embodiment.
[0010] FIG. 3 depicts the correction kernel of FIG. 2A in more
detail.
[0011] FIG. 4 shows a flowchart of a method for removing pixel
noise in accordance with an embodiment.
[0012] FIG. 5 shows a flowchart of a method for removing pixel
noise in accordance with another embodiment.
[0013] FIG. 6 shows a block diagram of an imager constructed in
accordance with an embodiment described herein.
[0014] FIG. 7 shows a processor system incorporating at least one
imager constructed in accordance with an embodiment described
herein.
DETAILED DESCRIPTION OF THE INVENTION
[0015] In the following detailed description, reference is made to
the accompanying drawings, which form a part hereof and show by way
of illustration specific embodiments that may be practiced. These
embodiments are described in sufficient detail to enable those of
ordinary skill in the art to make and use them, and it is to be
understood that other embodiments may be utilized, and that
structural, logical, procedural, and electrical changes may be made
to the specific embodiments disclosed. The progression of
processing steps described is an example of the embodiments;
however, the sequence of steps is not limited to that set forth
herein and may be changed as is known in the art, with the
exception of steps necessarily occurring in a certain order.
[0016] The term "pixel," as used herein, refers to a photo-element
unit cell containing a photosensor device and associated structures
for converting photons to an electrical signal. For purposes of
illustration, a small representative three-color pixel array is
illustrated in the figures and description herein. However, the
embodiments may be applied to monochromatic imagers as well as to
imagers for sensing fewer than three or more than three color
components in an array. Accordingly, the following detailed
description is not to be taken in a limiting sense, and the scope
of the present invention is defined only by the appended
claims.
[0017] FIG. 1 depicts one known conventional color filter array,
arranged in a Bayer pattern, covering a pixel array to focus
incoming light. It should be understood that, taken alone, a pixel
generally does not distinguish one incoming color of light from
another and its output signal represents only the intensity of
light received, not any identification of color. However, pixels
80, as discussed herein, are referred to by color (i.e., "red
pixel," "blue pixel," etc.) when a color filter 81 is used in
connection with the pixel array to focus a particular wavelength
range of light, corresponding to a particular color, onto the
pixels 80. Accordingly, when the term "red pixel" is used herein,
it is referring to a pixel associated with and receiving light
through a red color filter; when the term "blue pixel" is used
herein, it is referring to a pixel associated with and receiving
light through a blue color filter; and when the term "green pixel"
is used herein, it is referring to a pixel associated with and
receiving light through a green color filter. It should be
appreciated that the term "green pixel" can refer to a "greenred
pixel," which is a green pixel in the same row with red pixels, and
can refer to a "greenblue pixel," which is a green pixel in the
same row with blue pixels.
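The row/column parity logic behind these color names can be sketched with a hypothetical helper (not from the application; the Bayer phase below, with a red pixel at the top-left corner, is an assumption, since the application does not fix one):

```python
def bayer_color(row, col):
    """Return the color name of the pixel at (row, col) in a Bayer
    array whose top-left 2x2 block is assumed to be:
        R  Gr
        Gb B
    """
    if row % 2 == 0:
        return "red" if col % 2 == 0 else "greenred"
    return "greenblue" if col % 2 == 0 else "blue"

print(bayer_color(0, 1))  # a green pixel in a row of red pixels
print(bayer_color(1, 0))  # a green pixel in a row of blue pixels
```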
[0018] FIGS. 2A and 2B illustrate parts of pixel array 100 having
an identified target pixel 32a, 32b that may undergo a corrective
method in accordance with an embodiment described herein. The
identified target pixel 32a shown in FIG. 2A in pixel array 100 may
be a red, a greenred, a greenblue, or a blue pixel. Pixel array 100
shown in FIG. 2B has an identified pixel 32b that for purposes of
further description is a green pixel (either greenred or
greenblue).
[0019] In the illustrated examples, it is assumed that the pixel
array 100 is associated with a Bayer pattern color filter array 82
(FIG. 1); however, the embodiments may also be used with other
color filter patterns or the color filter array may be omitted for
a monochrome pixel array 100. The color filters 81 focus incoming
light of a particular wavelength range onto the underlying pixels
80. In the Bayer pattern, as illustrated in FIG. 1, every other
pixel array row consists of alternating red (R) and green (G)
colored pixels, while the other rows consist of alternating green
(G) and blue (B) color pixels.
[0020] To denoise identified target pixel 32a, 32b, embodiments
utilize signal values of the nearest neighboring pixels of the
identified target pixel 32a, 32b. The identified target pixel 32a,
32b is the pixel currently being processed. The neighboring pixels
are collectively referred to herein as a correction kernel, shown
in FIGS. 2A and 2B respectively as kernels 101a, 101b. For example,
it may be desirable to select the pixels in the correction kernel
101a to have the same color as the target pixel 32a, such as, for
example, red, greenred, greenblue, or blue, and to select the
pixels in the correction kernel 101b to have the same color as the
target pixel 32b, such as, for example, green (without
differentiating between greenred and greenblue). A total of eight
neighboring pixels are included in each kernel 101a, 101b. It
should be noted that the illustrated correction kernels 101a, 101b
are examples, and that other correction kernels may be chosen for
pixel arrays using color filter patterns other than the Bayer
pattern. In addition, a correction kernel could encompass more or
fewer than eight neighboring pixels, if desired.
[0021] In FIGS. 2A and 2B, the illustrated correction kernels 101a,
101b are outlined with a dotted line. For kernel 101a there are
eight pixels (pixels 10, 12, 14, 34, 54, 52, 50, and 30) having the
same color as the identified target pixel 32a. Although correction
kernel 101a appears to contain sixteen pixels, half of those pixels
are not the same color as the target pixel 32a, and their signals
are not considered when denoising target pixel 32a. The actual
pixels that make up kernel
101a are shown in greater detail in FIG. 3. Kernel 101b also
includes eight pixels (pixels 12, 23, 34, 43, 52, 41, 30, and 21)
having the same green color (without differentiating between
greenred and greenblue) as the identified pixel 32b.
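The two kernels can be expressed as (row, column) offsets from the target pixel. The offsets below are inferred from the pixel numbering in FIGS. 2A/3 and 2B (an interpretation, not text from the application): same-color neighbors of a red/blue target sit two pixels away, while treating green as one plane brings the four diagonal greens to within one row and column.

```python
# Kernel 101a: eight same-color neighbors, all two pixels away.
KERNEL_101A = [(-2, -2), (-2, 0), (-2, 2),
               ( 0, -2),          ( 0, 2),
               ( 2, -2), ( 2, 0), ( 2, 2)]

# Kernel 101b: four greens two pixels away plus the four diagonal
# greens one row and one column away.
KERNEL_101B = [(-2, 0), (-1, 1), (0, 2), (1, 1),
               (2, 0), (1, -1), (0, -2), (-1, -1)]

def kernel_values(image, row, col, offsets):
    """Collect the signal values of the kernel pixels around
    (row, col), skipping offsets that fall outside the image."""
    h, w = len(image), len(image[0])
    return [image[row + dr][col + dc]
            for dr, dc in offsets
            if 0 <= row + dr < h and 0 <= col + dc < w]
```

Near the array border, fewer than eight kernel values are available, which the bounds check above handles by simply omitting them.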
[0022] As described in detail below, the embodiments described
herein may be used to denoise images while preserving edges. Rather
than outputting the actual pixel signal value for the target pixel,
the target pixel's signal value ("value") is averaged with the
signal values of pixels in the correction kernel. This averaging is
done to minimize the effect noise has on an individual pixel. For
example, in a flat-field image, an array of ideal pixels would
output the same signal value for every pixel; because of noise,
however, the actual output signals vary from pixel to pixel. By
averaging the signal values from the surrounding pixels having the
same color as the target pixel, the effect of noise on the target
pixel is reduced.
[0023] In order to preserve edges, it is desirable to set a
threshold such that averaging is only performed if the difference
between the target pixel signal value and the signal values of
pixels in the correction kernel is below a threshold. Only noise
that has an amplitude of dispersion (the difference between the
average maximum and minimum values) lower than a noise amplitude
threshold (TH) will be averaged and reduced. Therefore, the
threshold should be set such that noise is reduced, but pixels
along edges will be subjected to less (or no) averaging thereby
preserving edges. An embodiment described herein sets a noise
amplitude threshold (TH), which may be a function of analog and
digital gains that may have been applied to amplify the original
signal. It should be appreciated that the threshold TH can be
varied based on, for example, pixel color. An embodiment described
herein accomplishes this by processing a central target pixel by
averaging it with all its like color neighbors that produce a
signal difference less than the set threshold. Another embodiment
described herein accomplishes this by processing a central target
pixel by averaging it with a selected subset of its like color
neighbors that produce a signal difference less than the set
threshold. Further, the exemplary noise filter could be applied
to each color separately in a Bayer, Red/Green/Blue (RGB),
Cyan/Magenta/Yellow/Key (CMYK), luminance/chrominance (YUV), or
other color space.
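The threshold idea above can be condensed into a few lines (a minimal sketch, not the claimed implementation; the function and variable names are illustrative):

```python
def denoise_value(target, neighbors, th):
    """Average `target` with the neighbor values whose difference
    from it is below threshold `th`; neighbors whose values differ
    by `th` or more (e.g. across an edge) are excluded."""
    kept = [n for n in neighbors if abs(n - target) < th]
    return (target + sum(kept)) / (1 + len(kept))

# flat region: all neighbors fall within the threshold, full averaging
print(denoise_value(10, [9, 11, 10, 12, 8, 10, 11, 9], th=5))    # 10.0

# near an edge: the bright neighbors (~100) exceed the threshold and
# are excluded, so the edge is not smeared into the target value
print(denoise_value(10, [9, 11, 100, 102, 101, 10, 12, 99], th=5))
```

In the second call only the four dark neighbors are averaged in, so the result stays near 10 instead of being pulled toward the bright side of the edge.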
[0024] With reference to FIG. 4, one example method 200 is now
described. The method can be carried out by a processor circuit,
such as, for example, an image processor circuit 280 (described
below with reference to FIG. 6) which can be implemented in
hardware logic, or as a programmed processor or some combination of
the two. Alternatively, the method can be implemented by a
processor circuit separate from an image processor circuit 280,
such as, for example, a separate hardwired logic or programmed
processor circuit or a separate stand-alone computer.
[0025] It should be understood that each pixel has a value that
represents an amount of light received at the pixel. Although
representative of a readout signal from the pixel, the value is a
digitized representation of the readout analog signal. These values
are represented in the following description as P(pixel) where "P"
is the value and "(pixel)" is the pixel number shown in FIGS. 2A or
2B. For explanation purposes only, the method 200 is described with
reference to the kernel 101a and target pixel 32a as illustrated in
FIG. 2A.
[0026] Initially, at step 201, the target pixel 32a being processed
is identified. Next, at step 202, the kernel 101a associated with
the target pixel 32a is selected/identified. After the associated
kernel 101a is selected, at step 203, the differences between the
value of the central (processed) pixel 32a and the values P(pixel)
of each neighboring pixel 10, 12, 14, 30, 34, 50, 52, 54 in kernel
101a are compared with a threshold value TH. The threshold value TH may be
preselected, for example, using noise levels from current gain
settings, or using other appropriate methods. In the illustrated
example, at step 203, neighboring pixels that have a difference in
value P(pixel) less than or equal to the threshold value TH are
selected. Alternatively, at step 203, a subset of the neighboring
pixels that have a difference in value P(pixel) less than or equal
to the threshold value TH are selected. For example purposes only,
the value could be the red value if target pixel 32a is a red
pixel.
[0027] Next, at step 204, the values P(pixel) of the kernel
pixels located around the target pixel 32a that were selected in
step 203 are added to the value of the target pixel
32a, and an average value A(pixel) is calculated. For example, for
target pixel 32a, the average value
A32=(P10+P12+P14+P30+P32a+P34+P50+P52+P54)/9 is calculated, if all
eight neighboring pixels were selected in step 203. At step 205,
the calculated value A(pixel), which is, in this example, A32,
replaces the original target pixel value P32a.
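Steps 201 through 205 can be sketched for a single target pixel as follows (a self-contained illustration, assuming `image` is a 2D list of digitized pixel values and that the offsets below reproduce kernel 101a of FIG. 2A, where same-color Bayer neighbors are two pixels away):

```python
KERNEL_101A = [(-2, -2), (-2, 0), (-2, 2), (0, -2),
               (0, 2), (2, -2), (2, 0), (2, 2)]

def denoise_pixel(image, row, col, th, offsets=KERNEL_101A):
    target = image[row][col]                 # step 201: identify target
    h, w = len(image), len(image[0])
    # steps 202-203: gather kernel values whose difference from the
    # target is within the threshold TH
    selected = []
    for dr, dc in offsets:
        r, c = row + dr, col + dc
        if 0 <= r < h and 0 <= c < w and abs(image[r][c] - target) <= th:
            selected.append(image[r][c])
    # steps 204-205: average the target with the selected values and
    # substitute the average for the original target value
    return (target + sum(selected)) / (1 + len(selected))
```

When all eight neighbors pass the threshold test, this reproduces the paragraph's example A32 = (P10+P12+P14+P30+P32a+P34+P50+P52+P54)/9.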
[0028] The methods described herein may be carried out on each
pixel signal as it is processed. As pixel values are denoised, the
values of previously denoised pixels may be used to denoise other
pixel values; when this is done, the method and apparatus are
implemented in a partially recursive manner (pixels are denoised
using values from previously denoised pixels). However, the
embodiments are not limited to this
implementation and may be implemented in a fully recursive (pixels
are denoised using values from other denoised pixels) or
non-recursive manner (no pixels having been denoised are used to
denoise subsequent pixels).
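The recursive/non-recursive distinction reduces to where the pass reads its input from (a hedged sketch; `denoise_pixel` stands for any per-pixel denoiser with the signature shown):

```python
def denoise_pass(image, denoise_pixel, th, recursive=True):
    """Apply denoise_pixel to every pixel of `image` in place.
    When `recursive` is True, results are written back immediately,
    so later pixels see already-denoised neighbor values; otherwise
    all reads come from an untouched copy of the input."""
    src = image if recursive else [row[:] for row in image]
    for r in range(len(image)):
        for c in range(len(image[0])):
            image[r][c] = denoise_pixel(src, r, c, th)
    return image
```

A fully recursive implementation would extend this so that every read sees denoised values; the non-recursive variant simply freezes the source frame for the whole pass.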
[0029] The method 200 described above may also be implemented and
carried out, as discussed above, on target pixel 32b and associated
image correction kernel 101b (FIG. 2B). For example, in step 202
the kernel 101b is selected/identified. After the associated kernel
101b is selected for target pixel 32b, the differences in values
between each of the neighboring pixels 12, 21, 23, 30, 34, 41, 43,
52 in kernel 101b located around target pixel 32b and the value of
target pixel 32b are compared to a threshold TH in step 203. The
remaining steps 204, 205 are carried out as discussed above for the
pixels corresponding to kernel 101b.
[0030] The methods described above provide good denoising. It may
be desirable, however, to limit the number of pixels utilized in
the averaging of the target pixel signal value and the correction
kernel signal values to decrease implementation time and/or
decrease die size. For example, as illustrated in the flowchart of
FIG. 5, the number of pixels averaged may, for example, be limited
to an integer power of the number two (e.g., 1, 2, 4, 8, etc.)
which limits the averaging to binary division. In other words, the
average value is an average of 2.sup.n pixel signals where n is an
integer. Binary division may be desirable as it can be implemented
with register shifts, thereby decreasing die size and time
necessary to average the target pixel. The flowchart of FIG. 5
illustrates a method 2000 of noise reduction which can be carried
out by an image processor circuit 280 (described below with
reference to FIG. 6) which can be implemented in hardware logic or
as a programmed processor or some combination of the two.
Alternatively, the method can be implemented by a processor circuit
separate from an image processor circuit 280, such as, for example,
a separate hardwired logic or programmed processor circuit or a
separate stand-alone computer. For explanation purposes only, the
method 2000 is described with reference to the kernel 101a and
target pixel 32a as illustrated in FIG. 2A.
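Why the power-of-two restriction pays off can be shown in two lines (values are arbitrary, chosen for illustration): dividing a running sum by 2.sup.n is a right shift by n bits, which is far cheaper in hardware than a general divider.

```python
pixel_sum = 29 + 31 + 30 + 28   # sum of 2**2 = 4 selected pixel values
average = pixel_sum >> 2        # right shift by 2 == integer division by 4
print(average)                  # 29 (note: shifting truncates any remainder)
```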
[0031] Initially, at step 2010, a target pixel p having a signal
value p.sub.sig is selected/identified, for example, pixel 32a
(FIG. 2A). It should be appreciated that, if a Bayer pattern color
filter array is utilized with pixel array 100 (FIG. 2A), pixel 32a
may be a red, greenred, greenblue, or blue pixel. For explanation
purposes, pixel 32a will be described as and referred to as a
greenblue pixel. Next, first and second register values
Pixel.sub.sum and Pixel.sub.sum.sub.--.sub.new, respectively, are
initialized to be equal to p.sub.sig and first and second counters
Pixel.sub.count and Pixel.sub.count.sub.--.sub.new, respectively,
are initialized to be equal to 1 (step 2020). Then, a correction
kernel associated with the target pixel p containing N pixels is
selected/identified, for example, kernel 101a (FIG. 2A) containing
greenblue pixels 10, 12, 14, 30, 34, 50, 52, 54 (step 2030). The N
pixels from the kernel are grouped at step 2040. It may be
desirable to process the correction kernel pixels that are closest
to the target pixel first, for example, the N pixels can be grouped
into one or more groups g by their distance from target pixel p.
For example, a first group g can be selected to include pixels 12,
52, 30, and 34 that are closest to target pixel 32a and a second
group g can be selected to include pixels 10, 14, 50, and 54 that
are further away from target pixel 32a than the pixels in the first
group g. Then the groups g can be assessed in order of their
distance to target pixel p, such that the pixels in a group closest
to target pixel p can be assessed before pixels in a group further
from target pixel p are assessed. It should be appreciated that all
of the pixels N can alternatively be grouped into one group g. Then
in step 2050, a group g that has not been previously assessed is
selected. For example, it may be desirable to select a group of
pixels that has not been previously assessed that is closest to
target pixel p. Next, a pixel n having a signal value n.sub.sig
from the selected group g is selected (step 2060).
[0032] In step 2070, a determination is made as to whether the
absolute value of the difference between n.sub.sig and p.sub.sig is
less than a threshold TH. The threshold value TH may be preselected, for
example, using noise levels from current gain settings, or using
other appropriate methods. Additionally, the threshold value TH can
be preselected based on the color of the target pixel p. If the
determined difference is not less than the threshold TH (step 2070),
n.sub.sig is not included in the averaging and the method 2000 then
determines if all of the pixels in group g have been assessed (step
2130). However, if the determined difference is less than the
threshold TH (step 2070), a new value for Pixel.sub.sum is
determined by adding n.sub.sig to Pixel.sub.sum (step 2080) and a
new value for Pixel.sub.count is determined by incrementing
Pixel.sub.count (step 2090). The method 2000 then compares the
value of Pixel.sub.count to a set of at least one predetermined
number (step 2100). For example, it may be desirable to compare the
value of Pixel.sub.count to a set of values comprised of integer
powers of the number two. As described below in more detail,
division by Pixel.sub.count is required in step 2150 and when
implementing division in hardware, division by a power of two can
be accomplished with register shifts, thereby making the operation
faster and able to be implemented in a smaller die area. If
Pixel.sub.count is contained in the set of at least one
predetermined number, for example, if Pixel.sub.count is 4 and the
set of at least one predetermined number includes 1, 2, 4, and 8,
Pixel.sub.count.sub.--.sub.new is determined by setting
Pixel.sub.count.sub.--.sub.new=Pixel.sub.count (step 2110) and
Pixel.sub.sum.sub.--.sub.new is determined by setting
Pixel.sub.sum.sub.--.sub.new=Pixel.sub.sum (step 2120). If
Pixel.sub.count is not contained in the set of at least one
predetermined number (step 2100), for example, if Pixel.sub.count
is 7 and the set of at least one predetermined number includes 1,
2, 4, and 8, Pixel.sub.sum.sub.--.sub.new will not be determined
and the method 2000 continues by determining if all pixels in group
g have been assessed (step 2130). It should be appreciated that if
Pixel.sub.count is not in the set of at least one predetermined
number, then Pixel.sub.sum.sub.--.sub.new will not include the
current value for Pixel.sub.sum. In other words,
Pixel.sub.sum.sub.--.sub.new is only set when Pixel.sub.count is
within the set of at least one predetermined number.
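The hardware motivation behind restricting Pixel.sub.count snapshots to powers of two can be illustrated with a short sketch. The following Python fragment is illustrative only and is not taken from the application; it assumes the running sum and count hold the values described above, with the count restricted to a power of two as in step 2100.

```python
# Illustrative sketch: dividing by a power-of-two count reduces to a
# right shift, which is cheap and fast to implement in hardware (step 2150).
def divide_by_pow2(pixel_sum: int, pixel_count: int) -> int:
    """Divide pixel_sum by pixel_count, assumed to be a power of two."""
    assert pixel_count > 0 and pixel_count & (pixel_count - 1) == 0
    shift = pixel_count.bit_length() - 1   # log2 of pixel_count
    return pixel_sum >> shift              # same result as pixel_sum // pixel_count
```

For example, with a running sum of 36 over four accepted pixels, the division is a two-bit register shift rather than a general divide.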
[0033] Then the method 2000 determines if all pixels in group g
have been assessed (step 2130). If not, then the method returns to
step 2060 and selects a next pixel n. If all of the pixels in group
g have been assessed (step 2130), the method 2000 determines if all
groups g have been assessed (step 2140). If all groups g have not
been assessed, the method 2000 continues at step 2050 and selects a
next group g. If all groups g have been assessed, then
p.sub.sig.sub.--.sub.new is determined by dividing
Pixel.sub.sum.sub.--.sub.new by Pixel.sub.count.sub.--.sub.new
(step 2150). The method 2000 can then be repeated for a next target
pixel p at step 2010.
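The flow of steps 2050 through 2150 can be summarized in a short sketch. This Python illustration is not the patented implementation; the names (p_sig, groups, threshold) are invented for clarity, and initializing the running sum and count with the target pixel's own value is an assumption drawn from the abstract, since the steps before 2050 are not reproduced in this excerpt.

```python
def denoise_pixel(p_sig, groups, threshold):
    """Sketch of method 2000: average the target pixel with neighbors whose
    values fall within `threshold`, keeping only the sum/count snapshot taken
    when the running count last equaled a power of two (steps 2070-2150)."""
    allowed = {1, 2, 4, 8}        # the "set of at least one predetermined number"
    pixel_sum = p_sig             # assumed initialization: start with target pixel
    pixel_count = 1
    pixel_sum_new, pixel_count_new = pixel_sum, pixel_count
    for group in groups:                          # step 2050: next group g
        for n_sig in group:                       # step 2060: next pixel n
            if abs(n_sig - p_sig) < threshold:    # step 2070: threshold test
                pixel_sum += n_sig                # step 2080
                pixel_count += 1                  # step 2090
                if pixel_count in allowed:        # step 2100
                    pixel_count_new = pixel_count  # step 2110
                    pixel_sum_new = pixel_sum      # step 2120
    return pixel_sum_new // pixel_count_new       # step 2150
```

Note that if no neighboring pixel passes the threshold test, the snapshot taken at a count of one leaves the target pixel's value unchanged.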
[0034] The method 2000 described above may also be implemented and
carried out, as discussed above, on target pixel 32b (FIG. 2B) and
associated image correction kernel 101b (FIG. 2B). For example, it
may be desirable to average both greenred and greenblue pixels
together. If target pixel 32b is a greenred pixel, the correction
kernel could be selected to include pixels 30, 12, 34, 52, 21, 23,
41, and 43, where pixels 30, 12, 34, and 52 are greenred pixels and
pixels 21, 23, 41, and 43 are greenblue pixels.
[0035] The above described embodiments may not provide sufficient
denoising to remove spurious noise (i.e., noise greater than 6
standard deviations). Accordingly, embodiments of the invention are
better utilized when implemented after the image data has been
processed by a filter which will remove spurious noise.
[0036] In addition to the above-described embodiments, a program
for operating a processor embodying the methods may be stored on a
carrier medium, which may include RAM, a floppy disk, a compact
disk, a data transmission, etc., and executed by an associated
processor. For example, embodiments may be implemented
as a plug-in for existing software applications or may be used on
their own. The embodiments are not limited to the carrier mediums
specified herein and may be implemented using any carrier medium as
known in the art or hereinafter developed.
[0037] FIG. 6 illustrates an example imager 300 having an exemplary
CMOS pixel array 240 with which described embodiments may be used.
Row lines of the array 240 are selectively activated by a row
driver 245 in response to row address decoder 255. A column driver
260 and column address decoder 270 are also included in the imager
300. The imager 300 is operated by the timing and control circuit
250, which controls the address decoders 255, 270. The timing and
control circuit 250 also controls the row and column driver
circuitry 245, 260.
[0038] A sample and hold circuit 261 associated with the column
driver 260 reads a pixel reset signal Vrst and a pixel image signal
Vsig for selected pixels of the array 240. A differential signal
(Vrst-Vsig) is produced by differential amplifier 262 for each
pixel and is digitized by analog-to-digital converter 275 (ADC).
The analog-to-digital converter 275 supplies the digitized pixel
signals to an image processor circuit 280 which forms and may
output a digital image. The method 200 (FIG. 4) and method 2000
(FIG. 5) may be implemented by a processor circuit. For example,
the processor circuit may be the image processor circuit 280 which
is implemented as a digital logic processor pipeline or as a
programmed processor that is capable of performing the method 200
(FIG. 4) or method 2000 (FIG. 5) on the digitized signals from the
pixel array 240. Alternatively, the processor circuit may be
implemented as a hardwired circuit that processes the analog output
of the pixel array and is located between the amplifier 262 and ADC
275 (not shown). Although the imager 300 has been described as a
CMOS imager, this is merely one example imager that may be used.
Embodiments of the invention may also be used with other imagers
having a different readout architecture. While the imager 300 has
been shown as a stand-alone imager, it should be appreciated that
the embodiments are not so limited. For example, the embodiments
may be implemented on a system-on-a-chip or the imager 300 can be
coupled to a separate signal processing chip which implements
disclosed embodiments. Additionally, raw imaging data can be output
from the image processor circuit 280 (which can be implemented in
hardware logic, or as a programmed processor or some combination of
the two) and stored and denoised elsewhere, for example, in a
system as described in relation to FIG. 7 below or in a stand-alone
image processing system.
[0039] FIG. 7 shows system 1100, a typical processor system
modified to include the imager 300 (FIG. 6) of an embodiment. The
system 1100 is an example of a system having digital circuits that
could include imagers. Without being limiting, such a system could
include a computer system, still or video camera system, scanner,
machine vision, video phone, and auto focus system, or other imager
systems.
[0040] System 1100, for example a camera system, may comprise a
central processing unit (CPU) 1102, such as a microprocessor, that
communicates with one or more input/output (I/O) devices 1106 over
a bus 1104. Imager 300 also communicates with the CPU 1102 over the
bus 1104. The processor-based system 1100 also includes random
access memory (RAM) 1110, and can include removable memory 1115,
such as flash memory, which also communicates with the CPU 1102
over the bus 1104. The imager 300 may be combined with a processor,
such as a CPU, digital signal processor, or microprocessor, with or
without memory storage on a single integrated circuit or on a
different chip than the processor. As described above, raw image
data from the pixel array 240 (FIG. 6) can be output from the
imager 300 image processor circuit 280 and stored, for example in
the random access memory 1110 or the CPU 1102. Denoising can then
be performed on the stored data by the CPU 1102, or can be sent
outside the system 1100 and stored and operated on by a stand-alone
processor, e.g., a computer, external to system 1100 in accordance
with the embodiments described herein.
[0041] While the embodiments have been described in detail in
connection with preferred embodiments known at the time, it should
be readily understood that the claimed invention is not limited to
the disclosed embodiments. Rather, the embodiments can be modified
to incorporate any number of variations, alterations,
substitutions, or equivalent arrangements not heretofore described.
For example, the methods can be used with pixels in other patterns
than the described Bayer pattern, and the correction kernels would
be adjusted accordingly. While the embodiments are described in
connection with a CMOS imager, they can be practiced with other
types of imagers. Thus, the claimed invention is not to be seen as
limited by the foregoing description, but is only limited by the
scope of the appended claims.
* * * * *