U.S. patent application number 13/450564, filed April 19, 2012, was published by the patent office on 2012-08-23 for an image processing apparatus and method.
This patent application is currently assigned to CANON KABUSHIKI KAISHA. The invention is credited to Katsuhisa Ogawa.
United States Patent Application: 20120212645 (Kind Code: A1)
Appl. No.: 13/450564
Document ID: /
Family ID: 39704948
Inventor: Ogawa; Katsuhisa
Publication Date: August 23, 2012
IMAGE PROCESSING APPARATUS AND METHOD
Abstract
An image processing apparatus determines a correction gain
factor at a high rate of speed to synthesize an image. The
apparatus includes a long second middle luminance image detecting
unit for detecting a pixel having a pixel value in a first middle
luminance pixel value region of a long second exposure image; a
short second middle luminance image detecting unit for detecting a
pixel having a pixel value in a second middle luminance pixel value
region of a short second exposure image; and a correction gain
factor calculating unit for designating, as a common pixel, a pixel
at a common position that is detected by both the long second middle
luminance image detecting unit and the short second middle luminance
image detecting unit, and for calculating a correction gain factor
based on the pixel values of the common pixels.
Inventors: Ogawa; Katsuhisa (Machida-shi, JP)
Assignee: CANON KABUSHIKI KAISHA (Tokyo, JP)
Family ID: 39704948
Appl. No.: 13/450564
Filed: April 19, 2012
Related U.S. Patent Documents

Application Number | Filing Date | Patent Number
12099563 | Apr 8, 2008 | 8189069
13450564 | |
Current U.S. Class: 348/229.1; 348/E5.037
Current CPC Class: H04N 5/243 20130101; H04N 5/23232 20130101; H04N 5/23245 20130101
Class at Publication: 348/229.1; 348/E05.037
International Class: H04N 5/235 20060101 H04N005/235
Foreign Application Data

Date | Code | Application Number
Apr 13, 2007 | JP | 2007-106121
Claims
1-6. (canceled)
7. An image processing apparatus for synthesizing a first image and
a second image, the first image being derived by a longer exposure
time than the second image, the apparatus comprising: a first
selecting unit configured to select a pixel outputting, as a part
of the first image, a pixel value within a first range from a
plurality of pixels; a second selecting unit configured to select a
pixel outputting, as a part of the second image, a pixel value
within a second range from the plurality of pixels; a calculating
unit configured to calculate a gain factor based on the pixel value
of the first image and the pixel value of the second image, wherein
the pixel values are output from a pixel selected by both of the
first selecting unit and the second selecting unit; and a
synthesizing unit configured to synthesize the first image and the
second image based on the gain factor calculated by the calculating
unit.
8. The image processing apparatus according to claim 7, wherein the
first range is between a first lower threshold value and a first
higher threshold value.
9. The image processing apparatus according to claim 8, wherein the
first selecting unit selects the pixel based on comparing the pixel
value with the first lower threshold value and with the first
higher threshold value.
10. The image processing apparatus according to claim 7, wherein
the second range is between a second lower threshold value and a
second higher threshold value.
11. The image processing apparatus according to claim 10, wherein
the second selecting unit selects the pixel based on comparing the
pixel value with the second lower threshold value and with the
second higher threshold value.
12. The image processing apparatus according to claim 7, wherein,
at the pixel outputting the pixel value within the first range, a
saturation into white and a black sinking do not occur.
13. The image processing apparatus according to claim 7, wherein,
at the pixel outputting the pixel value within the second range, a
saturation into white and a black sinking do not occur.
14. The image processing apparatus according to claim 7, wherein
the first selecting unit assigns a first symbol to the pixel
outputting the pixel value within the first range and a second
symbol to a pixel outputting, as a part of the first image, a pixel
value outside the first range, and wherein the second selecting
unit assigns the first symbol to the pixel outputting the pixel
value within the second range and the second symbol to a pixel
outputting, as a part of the second image, a pixel value outside
the second range.
15. The image processing apparatus according to claim 14, wherein
the calculating unit performs an AND operation to pixels at a same
coordinate position in the first image and in the second image.
16. The image processing apparatus according to claim 14, wherein
the calculating unit excludes the pixel value output from the pixel
assigned with the second symbol from calculation of the gain
factor.
17. The image processing apparatus according to claim 7, wherein
the first selecting unit and the second selecting unit select the
pixel by parallel processing.
18. The image processing apparatus according to claim 7, wherein a
number of the pixels selected by the first or second selecting unit
is converted by the apparatus operating in a first mode and a
second mode.
19. An image processing method for synthesizing a first image and a
second image, the first image being derived by a longer exposure
time than the second image, the method comprising: a first
selecting step for selecting a pixel outputting, as a part of the
first image, a pixel value within a first range from a plurality of
pixels; a second selecting step for selecting a pixel outputting,
as a part of the second image, a pixel value within a second range
from the plurality of pixels; a calculating step for calculating a
gain factor based on the pixel value of the first image and the
pixel value of the second image, wherein the pixel values are
output from a pixel selected by both of the first selecting step
and the second selecting step; and a synthesizing step for
synthesizing the first image and the second image based on the gain
factor calculated by the calculating step.
20. The image processing method according to claim 19, wherein the
first range is between a first lower threshold value and a first
higher threshold value.
21. The image processing method according to claim 19, wherein the
second range is between a second lower threshold value and a second
higher threshold value.
22. The image processing method according to claim 19, wherein the
first selecting step includes a step of assigning a first symbol to
the pixel outputting the pixel value within the first range and a
second symbol to a pixel outputting, as a part of the first image,
a pixel value outside the first range, and wherein the second
selecting step includes a step of assigning the first symbol to the
pixel outputting the pixel value within the second range and the
second symbol to a pixel outputting, as a part of the second image,
a pixel value outside the second range.
23. The image processing method according to claim 19, wherein the
first selecting step and the second selecting step are executed by
parallel processing.
Description
BACKGROUND OF THE INVENTION
[0001] 1. Field of the Invention
[0002] The present invention relates to an image processing
apparatus and method.
[0003] 2. Description of the Related Art
[0004] In recent years, a solid-state imaging device, such as a
charge coupled device (CCD) sensor and a complementary metal oxide
semiconductor (CMOS) sensor, has been used as an imaging device of
an image inputting apparatus, such as a television camera, a video
camera, and a digital camera. However, the dynamic range of the
solid-state imaging device is narrower than that of a film of a
film-based camera, and the image quality of the solid-state imaging
device deteriorates in some imaging conditions.
[0005] Accordingly, as a method of expanding the dynamic range of
the solid-state imaging device, there is a wide dynamic range image
synthesizing technique using a multiple exposure system, in which a
plurality of images of the same scene, having exposure quantities
different from one another, is photographed, and the photographed
images are synthesized to obtain an image having an expanded dynamic
range. The flow of the image synthesizing processing will be
described step by step below.
[0006] (1) In order that the image signals on the lower luminance
side of a subject may take suitable values, a long second exposure
image accumulated for a long exposure time is imaged. However, in
the long second exposure image, the exposure quantity on the higher
luminance side of the subject is large, and the "saturation into
white," in which an image signal is saturated, occurs.
[0007] (2) A short second exposure image, accumulated for a short
time during which the image signals on the higher luminance side of
the subject take suitable values, is imaged. An image is thereby
obtained in which the image signals are not saturated at any of the
imaged pixels, so that no "saturation into white" occurs.
[0008] (3) The pixel signals from the long second exposure pixels
at which the "saturation into white" occurs are abandoned, and the
pixel output values of the short second exposure image at the
coordinate positions of the abandoned image signals are substituted
as new image signals.
[0009] (4) A correction gain G is determined on the basis of the
ratio between the pixel output values of the long second exposure
image and the short second exposure image, and the substituted short
second exposure image is multiplied by the determined correction
gain G to correct the shifts between the luminance levels of the two
images.
[0010] If the ratio of exposure times is simply used as the
correction gain G, then offsets remain in the boundary parts between
the long second exposure pixel output values and the short second
exposure pixel output values, and the offsets are visually observed
as a fake edge. The fake edge occurs because the correction gain
quantity necessary for the correction is not determined by the ratio
of the exposure times alone but is also influenced by the
nonlinearity of the outputs of the sensor, the non-linear gain
errors of a reproducing analog circuit system, and the like.
Accordingly, the correction gain G is calculated on the basis of
real image information, as described with regard to the aforesaid
item (4).
[0011] A method of calculating the correction gain G from the long
second and short second exposure pixel output values is disclosed in
the following first patent document. The correction gain determining
method of Japanese Publication of Patent No. 3420303 (the first
patent document) first calculates the ratio of luminance values at
each of the pixels of an imaged long second exposure image and an
imaged short second exposure image. The calculated ratios are
expressed as luminance level ratios. The method next calculates, for
each luminance value of the long second exposure image (hereinafter
referred to as the long second luminance value), an average of the
luminance level ratios of the pixel group having that long second
luminance value, and the calculated average is set as the luminance
level ratio representative value at that long second luminance
value.
Then, the method produces a graph having an X-axis plotted by long
second luminance values and a Y-axis plotted by the luminance level
ratio representative values. Because the long second exposure pixels
cause the saturation into white in the area in which the long second
luminance values (X-axis) are large, and the short second exposure
pixels cause black sinking in the area in which the long second
luminance values (X-axis) are small, the luminance level ratio
representative values (Y-axis) in these areas have larger errors
than those at long second luminance values (X-axis) near the center,
and the produced graph becomes a curve having a gentle peak at an
intermediate level of the long second luminance values (X-axis). The
range of the long second luminance values (X-axis) to be used for
calculating the correction gain G is set around the long second
luminance value (X-axis) corresponding to the peak position. The
luminance level ratio representative values (Y-axis) corresponding
to the long second luminance values (X-axis) situated in the set
range are extracted, and the average value of the extracted
luminance level ratio representative values (Y-axis) is calculated
to determine the correction gain G.
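As a rough sketch of the first patent document's procedure as summarized above — per-pixel luminance level ratios, a representative ratio for each long second luminance value, and an average over a range around the peak — the following Python fragment may help. The function name and the fixed-width range rule are illustrative assumptions, not details from the first patent document.

```python
import numpy as np


def prior_art_gain(long_img, short_img, half_width=20):
    """Representative luminance level ratio per long second luminance
    value, averaged over a window around the peak (illustrative)."""
    yh = long_img.ravel().astype(np.float64)
    yl = short_img.ravel().astype(np.float64)
    ok = yl > 0
    ratios = yh[ok] / yl[ok]          # luminance level ratio per pixel
    levels = yh[ok].astype(np.int64)  # long second luminance values

    # average ratio for each long second luminance value (X-axis)
    reps = {}
    for lv in np.unique(levels):
        reps[lv] = ratios[levels == lv].mean()

    # peak of the representative-value curve (Y-axis)
    peak = max(reps, key=reps.get)

    # use only the levels within a range around the peak position
    used = [r for lv, r in reps.items() if abs(lv - peak) <= half_width]
    return float(np.mean(used))
```

Note that the per-pixel divisions and the per-level grouping are exactly the "enormous calculations" the next paragraph criticizes.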
[0012] However, the determination method of the correction gain
disclosed in the aforesaid first patent document has a drawback: the
method must execute an enormous amount of calculation. In order to
calculate the correction gain, the following mass operations are
necessary: division operations for obtaining the luminance level
ratios at all of the pixels, the sorting processing and the
product-sum operations for totalizing the calculated luminance level
ratios for every long second luminance value to obtain their
average, and the like. Whether the calculation method is realized by
software or hardware, a long time is needed for the calculation up
to the correction gain determination. In particular, in the case of
a camera for a moving image (monitoring camera) equipped with a wide
dynamic range image synthesizing processing function, the correction
gain determining method disclosed in the first patent document is
required to complete the calculation of the correction gain within
1/60 seconds if the frame rate of the camera for a moving image is
60 fps. However, the hardware of the camera for a moving image has
not been able to complete the calculation within 1/60 seconds.
Consequently, the correction gain has not been calculated from image
signals; instead, the wide dynamic range image has been synthesized
by using a coefficient determined beforehand. As a result, fake
edges caused by the errors of the correction gains have occurred in
final wide dynamic range images.
[0013] Moreover, because the technique disclosed in the first
patent document determines the correction gain on the basis of the
luminance level ratios calculated at all of the pixels, the number
of pixels to be used for the calculation of the correction gain is
uniquely determined. In order to improve the calculation accuracy of
the correction gain and to synthesize a wide dynamic range image
having an ultra-high image quality emphasizing a fake edge reducing
effect, it is advantageous to use as many pixels as possible in the
calculation. However, in an application for which a normal image
quality is sufficient, the ultra-high image quality obtained by
improving the calculation accuracy of the correction gain is
over-specification.
[0014] Because the technique disclosed in the first patent document
determines the correction gain on the basis of the luminance level
ratios calculated at all of the pixels, the method cannot set the
number of pixels to be used for the calculation to an arbitrary
number. Consequently, it is impossible to flexibly set the
correction gain calculation accuracy and the calculation speed
according to each application.
[0015] It is an object of the present invention to provide an image
processing apparatus and method, both capable of determining a
correction gain factor to synthesize an image at a high rate of
speed.
SUMMARY OF THE INVENTION
[0016] An image processing apparatus of the present invention is an
image processing apparatus for synthesizing a long second exposure
image derived by a larger exposure quantity and a short second
exposure image derived by a smaller exposure quantity, for imaging
of the same scene, comprising: a long second middle luminance image
detecting unit for detecting a pixel position of a pixel having a
pixel value in a first middle luminance pixel value region from the
long second exposure image; a short second middle luminance image
detecting unit for detecting a pixel position of a pixel having a
pixel value in a second middle luminance pixel value region from the
short second exposure image; a correction gain factor calculating
unit for designating, as a common pixel, a pixel having a common
pixel position such that the pixel position detected by the long
second middle luminance image detecting unit is the same as the
pixel position detected by the short second middle luminance image
detecting unit, and for calculating a correction gain factor based
on the pixel value of the long second exposure image and the pixel
value of the short second exposure image at the common pixel
position; and an image synthesizing unit for multiplying the short
second exposure image by the correction gain factor, and for
synthesizing the short second exposure image subjected to the
multiplication and the long second exposure image.
[0017] Moreover, an image processing method of the present
invention is an image processing method for synthesizing a long
second exposure image derived by a larger exposure quantity and a
short second exposure image derived by a smaller exposure quantity,
for imaging of the same scene, comprising: a long second middle
luminance image detecting step for detecting a pixel position of a
pixel having a pixel value in a first middle luminance pixel value
region from the long second exposure image; a short second middle
luminance image detecting step for detecting a pixel position of a
pixel having a pixel value in a second middle luminance pixel value
region from the short second exposure image; a correction gain
factor calculating step for designating, as a common pixel, a pixel
having a common pixel position such that the pixel position detected
in the long second middle luminance image detecting step is the same
as the pixel position detected in the short second middle luminance
image detecting step, and for calculating a correction gain factor
based on the pixel value of the long second exposure image and the
pixel value of the short second exposure image at the common pixel
position; and an image synthesizing step for multiplying the short
second exposure image by the correction gain factor, and for
synthesizing the short second exposure image subjected to the
multiplication and the long second exposure image.
[0018] Other features and advantages of the present invention will
be apparent from the following description taken in conjunction
with the accompanying drawings, in which like reference characters
designate the same or similar parts throughout the figures
thereof.
BRIEF DESCRIPTION OF THE DRAWINGS
[0019] FIG. 1 is a flowchart illustrating the flow of the
processing of an image processing method according to an exemplary
embodiment of the present invention.
[0020] FIGS. 2A and 2B are diagrams illustrating the relations
between incident light into an image sensor and pixel output values
at the times of long second exposure image imaging and short second
exposure image imaging.
[0021] FIGS. 3A and 3B are diagrams illustrating the relations
between incident light quantities and pixel output values in a long
second exposure image and a short second exposure image when the
pixel values at a threshold value Th or more are replaced with
zeros.
[0022] FIG. 4 is a diagram illustrating the relation between
incident light quantities and pixel output values of an image
synthesized by addition processing.
[0023] FIG. 5 is a diagram illustrating the relations between
incident light quantities and pixel output values of a synthesized
image at the time of performing the multiplication of a correction
gain factor G.
[0024] FIG. 6 is a diagram illustrating a luminance histogram
illustrating a distribution of the pixel output values of a long
second exposure image.
[0025] FIG. 7 is a diagram illustrating a luminance histogram
illustrating a distribution of the pixel output values of a short
second exposure image.
[0026] FIG. 8 is a diagram illustrating a configuration example of
an image processing apparatus according to an exemplary embodiment
of the present invention.
[0027] The accompanying drawings, which are incorporated in and
constitute a part of the specification, illustrate embodiments of
the invention and, together with the description, serve to explain
the principles of the invention.
DESCRIPTION OF THE EMBODIMENTS
[0028] FIG. 8 is a diagram illustrating a configuration example of
an image processing apparatus according to an exemplary embodiment
of the present invention, and FIG. 1 is a flowchart illustrating
the image processing method of the image processing apparatus. To
put it concretely, FIG. 1 is a flowchart for describing the flow of
the processing of calculating a gain factor to be used at the time
of synthesizing a wide dynamic range image from the luminance
information of the two images of a long second exposure image and a
short second exposure image.
[0029] An image sensor 800 is a solid-state imaging device
including two-dimensionally arranged photodiodes, and generates two
images of a long second exposure image and a short second exposure
image by photoelectric conversion. To put it concretely, the image
sensor 800 photographs a plurality of images, having different
exposure quantities from one another, of the same scene. A long
second exposure image accumulated for a long exposure time is first
imaged in order that the image signals of the lower luminance side
of a subject may take suitable values. A short second exposure
image accumulated for a short time is next imaged in order that the
image signals on the higher luminance side of the subject may take
suitable values.
[0030] At a step S1 of detecting middle luminance pixels under the
long second exposure, a unit 801 for detecting the pixels of the
middle luminance under the long second exposure detects the pixels
having middle luminance values, at which no "saturation into white"
and no "black sinking" occur, among all of the pixel output values
in the long second exposure image. The pixels whose output values
are equal to or more than a threshold value α1 and equal to or less
than a threshold value α2, among all of the output values Yh of the
pixels constituting the long second exposure image, are defined as
the pixels of middle luminance under the long second exposure, and a
numeral 1 is assigned to the coordinates of the corresponding pixel
positions. The pixels whose output values are less than the
threshold value α1 or larger than the threshold value α2 are
determined to include the pixels at which the "saturation into
white" and the "black sinking" occur; these pixels are not defined
as the pixels of middle luminance under the long second exposure,
and a numeral 0 is assigned to the coordinates of the corresponding
pixel positions.
[0031] At a step S2 of forming an image of a middle luminance
region under the long second exposure, a unit 802 for forming an
image of the middle luminance region under the long second exposure
generates an image Yh_b of the middle luminance region under the
long second exposure, which is a binary image in which the pixels of
the middle luminance and the other pixels are separated on the basis
of the coordinate positions of the pixels of the middle luminance
under the long second exposure detected by the processing at the
step S1. The unit 802 then assigns an active symbol to the pixels of
the middle luminance. In the present exemplary embodiment, the
binarization processing is supposed to be performed by positive
logic, and a symbol 1 is assigned to the pixels of middle
luminance.
[0032] At a step S3 of detecting the pixels of the middle luminance
under the short second exposure, a unit 803 for detecting the pixels
of the middle luminance under the short second exposure detects the
pixels that have the middle luminance values and have no occurrence
of the "saturation into white" and the "black sinking" among the
output values of all of the pixels of the short second exposure
image. The unit 803 then defines the pixels having the output values
equal to or more than β1 and equal to or less than β2, among the
output values Yl of all of the pixels constituting the short second
exposure image, as the pixels of the middle luminance under the
short second exposure, and assigns a numeral 1 to the coordinates at
the corresponding pixel positions. The pixels having the output
values less than β1 or larger than β2 are determined to include the
pixels at which the "saturation into white" and the "black sinking"
occur; the unit 803 does not define these pixels as the pixels of
the middle luminance under the short second exposure, and a numeral
0 is assigned to the coordinates of the corresponding pixel
positions.
[0033] At a step S4 of forming an image of the middle luminance
region under the short second exposure, a unit 804 for forming an
image of the middle luminance region under the short second exposure
generates an image Yl_b of the middle luminance region under the
short second exposure, which is a binary image in which the pixels
of the middle luminance are separated from the other pixels, on the
basis of the coordinate positions of the pixels of the middle
luminance under the short second exposure, which pixels have been
detected at the step S3. The unit 804 then assigns a numeral 1 to
the pixels of the middle luminance.
[0034] Although the procedure illustrated in FIG. 1 sequentially
executes the processing at the steps S1 and S2 to specify the pixels
of the middle luminance in the long second exposure image and then
sequentially executes the processing at the steps S3 and S4 to
specify the pixels of the middle luminance in the short second
exposure image, the procedure of the processing is not limited to
this one. The processing at the steps S3 and S4 may be sequentially
executed first to specify the pixels of the middle luminance in the
short second exposure image, and the processing at the steps S1 and
S2 may be sequentially executed second to specify the pixels of the
middle luminance in the long second exposure image. Moreover, the
processing at the steps S1 and S2 and the processing at the steps S3
and S4 may be executed simultaneously by parallel processing.
[0035] At a step S5 of detecting the pixels of the common middle
luminance, a unit 805 for forming an image of the common middle
luminance performs a logical product (AND) operation on every pair
of pixels at the same coordinate positions in the image Yh_b
generated at the step S2 and the image Yl_b generated at the step
S4. The numeral 1 is assigned to the pixels of the middle luminance
in both the images Yh_b and Yl_b. Consequently, the pixels at which
the results of the logical products are 1 are the common pixels of
the middle luminance, which have been detected as the pixels of the
middle luminance in both the images Yl_b and Yh_b.
[0036] At a step S6 of forming an image of a common middle
luminance region, a unit 806 for forming an image of the common
middle luminance region generates an image Yhl_b of the common
middle luminance region, which is a binary image in which the pixels
of the common middle luminance and the other pixels are separated,
on the basis of the coordinate positions of the pixels of the common
middle luminance detected by the processing at the step S5. The unit
806 then assigns a numeral 1 to the pixels of the common middle
luminance.
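Steps S1 through S6 reduce to two threshold tests and a per-pixel logical product. The following is a minimal sketch, assuming 8-bit data and hypothetical values for the thresholds α1, α2, β1, and β2 (the embodiment does not specify concrete values):

```python
import numpy as np

# hypothetical middle-luminance thresholds for 8-bit data
ALPHA1, ALPHA2 = 16, 200   # long second exposure (α1, α2)
BETA1, BETA2 = 16, 200     # short second exposure (β1, β2)


def common_middle_luminance(Yh, Yl):
    """Steps S1/S2: binary image Yh_b of the middle-luminance pixels in
    the long second exposure image; steps S3/S4: likewise Yl_b for the
    short second exposure image; steps S5/S6: their logical product
    Yhl_b (positive logic, 1 = middle luminance)."""
    Yh_b = (Yh >= ALPHA1) & (Yh <= ALPHA2)
    Yl_b = (Yl >= BETA1) & (Yl <= BETA2)
    Yhl_b = Yh_b & Yl_b  # AND at the same coordinate positions
    return Yh_b, Yl_b, Yhl_b
```

Because the two threshold tests are independent, this is also where the parallel execution mentioned in paragraph [0034] would naturally apply.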
[0037] At a step S7 of calculating a luminance level ratio, an
arithmetic operation unit 807 for calculating the luminance level
ratio calculates a luminance level ratio K(z) from a ratio between
the output value of a pixel of the long second exposure image Yh
and the output value of a pixel in the short second exposure image
Yl, both pixels situated at the coordinate position of a pixel of
the common middle luminance detected at the step S6.
[0038] In the following, the procedure of the operation of the
luminance level ratio K(z) will be described. In order to detect the
pixels of the common middle luminance, the arithmetic operation unit
807 scans the image Yhl_b in the horizontal direction from the upper
part of the image in order. If a pixel value is 0, then the
arithmetic operation unit 807 excludes the pixel from the objects of
the calculation and neglects the pixel. If a pixel value is 1, then
the arithmetic operation unit 807 assigns an address z=1 to the
coordinate as an object pixel. The arithmetic operation unit 807
then continues the scanning. If the pixel value is 0, the arithmetic
operation unit 807 neglects the pixel; if it detects 1, the
arithmetic operation unit 807 increments the address and assigns the
address z=2 to the coordinate of that pixel. The arithmetic
operation unit 807 performs, over all of the pixels, the processing
of incrementing the address z each time a pixel having the pixel
value of 1 is detected. If N pixels of the common middle luminance
exist, then the addresses z of 1-N are assigned to the coordinate
positions of the pixels of the common middle luminance. The
arithmetic operation unit 807 next extracts the output value of the
pixel situated at the coordinate position of the address z in the
long second exposure image, Yh(z), and the output value of the pixel
situated at the coordinate position of the address z in the short
second exposure image, Yl(z), and calculates the ratio between the
pixel output values to set the ratio as the luminance level ratio
K(z). The arithmetic operation unit 807 performs the calculation on
the pixels at the coordinates indicated by the addresses z=1-N, and
thus calculates the luminance level ratios K(z) at the N pixels of
the common middle luminance.
[0039] A formula (1) illustrates a calculation formula of the
luminance level ratio K(z).
K(z) = Yh(z) / Yl(z)    (1)
[0040] Hereupon, a sign Yh(z) denotes the pixel value of the long
second exposure image Yh at the address z. A sign Yl(z) denotes the
pixel value of the short second exposure image Yl at the address z.
A sign K(z) denotes the luminance level ratio at the address z. The
address z expresses the addresses 1-N. A sign N denotes the number
of the pixels of the common middle luminance.
[0041] At a step of calculating a correction gain factor, an
arithmetic operation unit 808 for calculating the correction gain
obtains the average value of the N luminance level ratios K(z), and
calculates a correction gain factor G. A formula (2) illustrates the
calculation formula of the correction gain factor G.
G = (1/N) Σ_{z=1}^{N} K(z)    (2)
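Formulas (1) and (2) translate directly into code: gather the pixel output values at the N common middle-luminance coordinates, form the N luminance level ratios K(z), and average them. A sketch, assuming the common middle luminance region Yhl_b from step S6 is available as a boolean array (the raster-scan address assignment z = 1..N corresponds to boolean indexing in row-major order):

```python
import numpy as np


def correction_gain(Yh, Yl, Yhl_b):
    """Formula (1): K(z) = Yh(z) / Yl(z) at each common middle-luminance
    pixel z = 1..N; formula (2): G = average of the N ratios."""
    yh = Yh[Yhl_b].astype(np.float64)  # Yh(z), z = 1..N in scan order
    yl = Yl[Yhl_b].astype(np.float64)  # Yl(z)
    K = yh / yl                        # luminance level ratios K(z)
    return float(K.mean())             # correction gain factor G
```

Unlike the first patent document's method, the divisions here run only over the N common middle-luminance pixels, which is the source of the speed advantage the embodiment claims.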
[0042] A method by which a unit 809 for synthesizing an image
synthesizes a wide dynamic range image by using the correction gain
factor G determined in accordance with the present exemplary
embodiment will be described step by step.
[0043] FIG. 2A is a graph illustrating the relation between the
incident light quantity into the image sensor 800 and the pixel
output value in a long second exposure image. As the incident light
quantity increases, the pixel output value also increases. When the
incident light quantity becomes D0 or more, the photodiodes of the
image sensor 800 are saturated; even if the incident light quantity
further increases, the pixel output value remains constant in the
state of "saturation into white." The gradation of the incident
light that the photodiodes have actually received cannot be
reproduced for incident light quantities larger than D0, and
consequently the dynamic range of the image signal is limited at
D0.
[0044] FIG. 2B is a graph illustrating the relation between the
incident light quantity into the image sensor 800 and the pixel
output value in a short second exposure image. Although the pixel
output value is saturated at the incident light quantity of D0 in
the long second exposure image as illustrated in the graph of FIG.
2A, because the exposure time of the short second exposure image is
shorter than that of the long second exposure image, the pixel
output value is not saturated at D0, and the pixel output value is
linearly output up to the incident light quantity of D1 illustrated
in FIG. 2B. However, a sufficient exposure quantity cannot be
obtained for the incident light on the lower luminance side of the
subject, and the pixel output value decreases by a large margin.
For incident light quantities of D2 or less, the exposure is
insufficient, and the photocharges accumulated in the photodiodes
are at or below the noise level of the photodiodes. Consequently,
the pixel output value cannot be recognized as an image signal. If
the incident light is D2 or less, the gradation of the subject
cannot be reproduced, and the pixel output value is in the "black
sinking" state. The dynamic range of the image signal is limited here.
[0045] In FIG. 2A, the pixel output value for an incident light
quantity near D0 is judged to produce the "saturation into white,"
and the pixel output values at or above a threshold value Th are
replaced with zero. FIG. 3A is a graph illustrating the relation
between the incident light quantity and the pixel output value of
the long second exposure image when the pixel output values at or
above the threshold value Th of FIG. 2A are replaced with zero.
The pixel output values less than the threshold value Th are in an
unsaturated state, and form a signal having no "saturation into
white." Moreover, the pixel output values have sufficient magnitudes
on the lower luminance side, and form a signal having no "black
sinking." Next, in order to replace the zeroed pixel output values
(those at or above the threshold value Th) in the long second
exposure image Yh with the pixel output values of the short second
exposure image Yl, the pixel output values at or below the threshold
value Th in the short second exposure image Yl are likewise replaced
with zero.
[0046] FIG. 3B is a graph illustrating the relation between the
incident light quantity and the pixel output value of the short
second exposure image in which the pixel output value of the
threshold value Th or less in FIG. 2B is replaced with zero. Pixel
output values having no saturation can be obtained for incident
light quantities from the threshold value Th up to D1. The unit 809
next adds the two images to each other: the long second exposure
image of FIG. 3A, in which the pixel output values at or above the
threshold value Th are changed to zero, and the short second
exposure image of FIG. 3B, in which the pixel output values at or
below the threshold value Th are changed to zero.
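The zero-and-add step can be sketched as follows, under one reading of the figures in which the selection at each pixel is decided by whether the long second exposure value reaches Th (so the two zeroed branches never overlap); the function name and values are illustrative:

```python
def add_without_gain(yh, yl, th):
    """Addition synthesis of FIG. 4: below th keep the long second
    exposure pixel value; at or above th take the short second
    exposure pixel value uncorrected, so a large step remains at th."""
    return [h if h < th else l for h, l in zip(yh, yl)]

# Just below and just above th = 200 (exposure ratio about 4), the
# output drops from 199 to 51: the discontinuity described in [0047].
step = add_without_gain([199, 201], [50, 51], 200)  # -> [199, 51]
```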
[0047] FIG. 4 is a graph illustrating the relation between the
incident light quantity and the pixel output value of the image
synthesized by the addition processing. The curve corresponding to
incident light quantities less than the threshold value Th
indicates the pixel output value of the long second exposure image,
and the curve corresponding to incident light quantities at or above
the threshold value Th indicates the pixel output value of the
short second exposure image. A discontinuous point is produced
in the increasing curve of the pixel output value against the
increasing incident light quantity at the pixel output value of the
threshold value Th, and a large step is produced. The step is
caused by the gain difference resulting from the difference of the
exposure times of the two images. In order to make the curve of
the increasing pixel output value continuous near the threshold
value Th, and to synthesize an image having an expanded dynamic
range, the pixel output value of the threshold value Th or more
(the pixel output value of the short second exposure image) is
multiplied by the correction gain factor G calculated by the method
of the present exemplary embodiment.
[0048] FIG. 5 is a graph illustrating the relation between the
incident light quantity and the pixel output value of the
synthesized image produced by the multiplication of the correction
gain factor G to the pixel output value corresponding to the
incident light quantity of the threshold value Th or more (the
pixel output value of the short second exposure image). The pixel
output value increasing curve has no discontinuity and no step at
the threshold value Th. Moreover, the multiplication of the
correction gain factor G to the pixel output value corresponding to
the incident light quantity of the threshold value Th or more (the
pixel output value of the short second exposure image) expands the
dynamic range. The pixel output value corresponding to the incident
light quantity D1 was Fs before the multiplication of the
correction gain factor G, but becomes Fs×G after the multiplication.
Thus, an image having the dynamic range expanded by a factor of G
can be synthesized. Because the correction gain factor G is
calculated by the method of the present exemplary embodiment, the
pixel output value increasing curve has no discontinuity, and a
high image quality wide dynamic range image having no fake edge can
thus be synthesized.
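A minimal sketch of the corrected synthesis of FIG. 5, again under the illustrative assumption that the long second exposure value decides the selection at each pixel; the gain value 4.0 is illustrative:

```python
def synthesize_wdr(yh, yl, th, g):
    """Wide dynamic range synthesis: below th use the long second
    exposure pixel value; at or above th use the short second
    exposure pixel value multiplied by the correction gain factor g,
    which removes the step at th."""
    return [h if h < th else g * l for h, l in zip(yh, yl)]

# With g = 4.0 the short exposure branch now lines up with the long
# exposure branch across the threshold th = 200.
out = synthesize_wdr([199, 201], [50, 51], 200, 4.0)  # -> [199, 204.0]
```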
[0049] The determination method of the threshold values α1,
α2, β1, and β2, which are necessary for calculating the correction
gain factor G in the present exemplary embodiment and determine the
pixels of the middle luminance under the long second exposure and
the pixels of the middle luminance under the short second exposure,
will be described next.
[0050] The set values of the threshold values α1 and α2
to determine the pixels of the middle luminance under the long
second exposure will be described with reference to FIG. 6. FIG. 6
is a luminance histogram illustrating the distribution of the pixel
output values of the long second exposure image Yh. The image used
in FIG. 6 is an image of 8 bits and 256 steps of gradation, and the
saturation into white occurs therein because the exposure time
thereof is long. The image has a luminance distribution in which
the frequency of pixel output values of 240 or more is very
large. If the maximum pixel output value 255 is taken as 100%,
and if the pixels of the middle luminance under the long
second exposure are defined as the ones within a range from 30% to
70%, both inclusive, of the maximum value of the pixel output, then
α1 becomes 77, α2 becomes 178, and the region 4 in
FIG. 6 is the pixel region of the middle luminance under the long
second exposure. In order to calculate the correction gain factor G
with good accuracy at a high rate of speed, the sample number of
the pixels of the middle luminance under the long second exposure
needs to be a predetermined number or more, and the pixel
region 4 of the middle luminance under the long second exposure is
the most suitable range in which both the high speed property
and the enhancement of accuracy are satisfied.
[0051] Incidentally, the present exemplary embodiment defines the
pixels of the middle luminance under the long second exposure as the
ones within the range from 30% to 70%, both inclusive, of the
maximum value of the pixel output, but the range is not limited to
this one. It is desirable to set the lower limit of the threshold
value α1 to 10% of the maximum value of the pixel output. The
reason is that, if the threshold value α1 is set to less than 10%
of the maximum value of the pixel output, then black sinking pixels
and pixels whose SN ratios are worsened by noise are defined as
pixels of the middle luminance under the long second exposure, and
errors would arise in the calculation result of the correction gain
factor G. It is desirable to set the upper limit of the threshold
value α2 to 80% of the maximum value of the pixel output. The
reason is that, otherwise, pixels at which the saturation into
white occurs are defined as pixels of the middle luminance under
the long second exposure, and errors would arise in the calculation
result of the correction gain factor G.
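The percentage-based thresholds and the threshold value processing that detects middle luminance pixels can be sketched as follows. The function names are illustrative; the embodiment quotes 77 and 178 for the 30% and 70% points of 255, so the exact rounding convention may differ from this sketch by one.

```python
def percent_thresholds(low_frac, high_frac, max_value=255):
    """Convert fractional limits of the maximum pixel output value
    into integer thresholds, rounding half up."""
    return (int(low_frac * max_value + 0.5),
            int(high_frac * max_value + 0.5))

def middle_luminance_mask(pixels, lo, hi):
    """True at each address whose pixel value lies in [lo, hi]."""
    return [lo <= p <= hi for p in pixels]

a1, a2 = percent_thresholds(0.30, 0.70)               # alpha1, alpha2
mask = middle_luminance_mask([10, 120, 250], a1, a2)  # -> [False, True, False]
```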
[0052] Next, the set values of the threshold values β1 and
β2 to determine the pixels of the middle luminance under the
short second exposure will be described with reference to FIG. 7.
FIG. 7 is a luminance histogram illustrating the distribution of
the pixel output values of the short second exposure image Yl. The
image used in FIG. 7 is an image of 8 bits and 256 steps of
gradation, and the black sinking occurs therein because the
exposure time thereof is short. The image has a luminance
distribution in which the frequency of pixel output values near
0 is very large. Moreover, because the pixel output values on
the higher luminance side (about 200 or more) do not show any
tendency of a rapid increase in frequency, it can be seen that
the saturation into white does not occur. If the maximum pixel
output value 255 is taken as 100%, and if the pixels of the
middle luminance under the short second exposure are defined as the
ones within a range from 30% to 70%, both inclusive, of the maximum
value of the pixel output, then β1 becomes 77, β2
becomes 178, and the region 5 in FIG. 7 is the pixel region of the
middle luminance under the short second exposure. In order to
calculate the correction gain factor G with good accuracy at a high
rate of speed, the sample number of the pixels of the middle
luminance under the short second exposure needs to be a
predetermined number or more, and the pixel region 5 of the middle
luminance under the short second exposure is the most suitable
range in which both the high speed property and the enhancement
of accuracy are satisfied.
[0053] Incidentally, the present exemplary embodiment defines the
pixels of the middle luminance under the short second exposure as
the ones within the range from 30% to 70%, both inclusive, of the
maximum value of the pixel output, but the range is not limited to
this one. It is desirable to set the lower limit of the threshold
value β1 to 20% of the maximum value of the pixel output. The
reason is that, if the threshold value β1 is set to less than 20%
of the maximum value of the pixel output, then black sinking pixels
and pixels whose SN ratios are worsened by noise are defined as
pixels of the middle luminance under the short second exposure, and
errors would arise in the calculation result of the correction gain
factor G. In particular, because the exposure time of the short
second exposure image is shorter than that of the long second
exposure image, the frequency of the occurrence of the black
sinking pixels increases. Accordingly, 10% (=α1) of the maximum
value of the pixel output of the long second exposure image and
20% (=β1) of the maximum value of the pixel output of the short
second exposure image are set as the lower limit values of the
thresholds to determine the pixel region of the middle luminance.
[0054] It is desirable to set the upper limit of the threshold
value β2 to 90% of the maximum value of the pixel output. The
reason is that, otherwise, pixels at which the saturation into
white occurs are defined as pixels of the middle luminance under
the short second exposure, and errors would arise in the
calculation result of the correction gain factor G. Because the
exposure time of the short second exposure image is shorter than
that of the long second exposure image, the frequency of the
occurrence of the saturation into white decreases. Accordingly,
80% (=α2) of the maximum value of the pixel output of the long
second exposure image and 90% (=β2) of the maximum value of the
pixel output of the short second exposure image are set as the
upper limit values of the thresholds to determine the pixel region
of the middle luminance.
[0055] Moreover, the combination of the threshold values α1,
α2, β1, and β2 enables the sizes of the pixel regions 4 and 5 to
be changed, and thereby the sample number of the pixels of the
common middle luminance used at the time of calculating the
correction gain factor G to be changed.
[0056] By reducing the threshold values α1 and β1, and
by enlarging the threshold values α2 and β2, the ranges
of the pixel regions 4 and 5 are expanded, and the sample numbers
of the common pixels of the middle luminance and of the luminance
level ratios K increase. As a result, the correction gain factor G
is obtained from luminance level ratios K whose sample number has
increased, the errors caused by noise components and the like are
decreased, and the accuracy of the correction gain factor G is
improved.
[0057] Although this setting increases the calculation time up to
the calculation of the correction gain factor G, it improves the
accuracy, and is therefore used at the time of synthesizing an
ultra-high image quality wide dynamic range image having a large
fake edge reducing effect. It can be used for applications such as
an ultra-high image quality still image camera, which can spare a
certain time for the calculation of the correction gain factor G.
[0058] By enlarging the threshold values α1 and β1, and
by reducing the threshold values α2 and β2, the ranges
of the pixel regions 4 and 5 are reduced, and the sample numbers of
the common pixels of the middle luminance and of the luminance
level ratios K decrease. As a result, the correction gain factor G
is obtained from luminance level ratios K whose sample number has
decreased, and the calculation time up to the calculation of the
correction gain factor G is shortened by a large margin.
[0059] Because this setting can shorten the calculation time by a
large margin while securing a certain degree of calculation
accuracy of the correction gain factor G, it is used at the time of
synthesizing a wide dynamic range image requiring high speed
processing. It can be used for applications such as a high speed
imaging moving image camera, which must complete the calculation of
the correction gain factor G within a predetermined time.
[0060] By the execution of the operation procedure described above,
the pixels having middle luminance values in both the long second
exposure image and the short second exposure image are detected at
a high rate of speed, and the correction gain factor G is
calculated using only the detected pixels, so that the operation
quantity can be reduced by a large margin.
[0061] As described above, the image processing apparatus according
to the present exemplary embodiment detects the pixels of the
common middle luminance, which have middle luminance values in both
the long second exposure image and the short second exposure image,
by a logical product operation on the basis of the positional
information of the pixels of the middle luminance of the long
second exposure image and the positional information of the pixels
of the middle luminance of the short second exposure image, both
pieces of positional information being detected by the threshold
value processing. The correction gain G at the time of image
synthesis is then determined on the basis of the ratios of the
luminance values of the detected pixels of the common middle
luminance.
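The logical product operation can be sketched as an elementwise AND of the two positional masks; flat lists stand in for the images, and all names and sample values are illustrative:

```python
def common_addresses(mask_long, mask_short):
    """Addresses z at which both exposure images have pixels of the
    middle luminance (the logical product of the two masks)."""
    return [z for z, (a, b) in enumerate(zip(mask_long, mask_short))
            if a and b]

yh = [10, 120, 250, 140]               # long second exposure image
yl = [2, 90, 200, 30]                  # short second exposure image
mask_h = [77 <= p <= 178 for p in yh]  # alpha1..alpha2 region
mask_l = [77 <= p <= 178 for p in yl]  # beta1..beta2 region
common = common_addresses(mask_h, mask_l)  # -> [1]
```

Only addresses in `common` then feed the luminance level ratio and gain calculation, which is what keeps the operation quantity small.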
[0062] According to the present exemplary embodiment, after the
pixels for determining the correction gain G are extracted by the
threshold value operation and the logical product operation, the
correction gain G is determined from the luminance level ratios
between the long second exposure pixels and the short second
exposure pixels, using only the extracted pixels. Consequently, the
enormous calculations required by the technique disclosed in the
first patent document are not required, and the correction gain G
can be determined at a high rate of speed. Thereby, high speed wide
dynamic range image synthesizing processing can be realized.
[0063] Moreover, by changing the threshold levels of the threshold
value processing that extracts the pixels for correction gain
determination, the total number of the pixels for the correction
gain determination can be changed. Thereby, the control of the
relation between the operation speed of the correction gain and the
operation accuracy thereof is enabled, and flexible wide dynamic
range image synthesizing processing capable of dealing with both
the moving image application requiring high speed processing and
the still image application requiring ultra-high image quality
processing can be realized. That is, the numbers of the pixels
included in the first and second middle luminance pixel value
regions are changed between a moving image mode and a still image
mode, such that the number of the pixels in the moving image mode
is smaller than that in the still image mode.
[0064] The image sensor 800 in the image processing apparatus and
the image processing method of the present exemplary embodiment
images the long second exposure image having the large exposure
quantity and the short second exposure image having the small
exposure quantity of the same scene (subject). The unit 801 for
detecting the pixels of the middle luminance under the long second
exposure detects the pixel positions of the pixels having the pixel
values in the first middle luminance pixel value region 4 from the
long second exposure image at the step S1 of detecting the pixels
of the middle luminance under the long second exposure. The unit
803 for detecting the pixels of the middle luminance under the
short second exposure detects the pixel positions of the pixels
having the pixel values in the second middle luminance pixel value
region 5 from the short second exposure image at the step S3 of
detecting the image of the middle luminance under the short second
exposure. The unit 805 for forming the image of the common middle
luminance detects, as the common pixels, the pixels situated at the
positions common to the pixel positions detected by the unit 801
and the pixel positions detected by the unit 803, at the step S5 of
detecting the pixels of the common middle luminance. The arithmetic
operation unit 808 for calculating the correction gain calculates
the correction gain factor G on the basis of the pixel values of
the long second exposure image and the pixel values of the short
second exposure image at the pixel positions of the common pixels
at the step S8 of calculating the correction gain factor. The unit
809 for synthesizing an image multiplies the short second exposure
image by the correction gain factor G and synthesizes the
multiplied short second exposure image with the long second
exposure image at the image synthesizing step.
[0065] The arithmetic operation unit 807 for calculating the
luminance level ratio and the arithmetic operation unit 808 for
calculating the correction gain calculate the average value of the
ratios K(z) between the pixel values of the long second exposure
image and the pixel values of the short second exposure image at
the pixel positions of the common pixels by the formulae (1) and
(2) as the correction gain factor G.
[0066] The first middle luminance pixel value region can be the
region of the pixel values within the range from 30% to 70%, both
inclusive, of the maximum luminance pixel value, and the second
middle luminance pixel value region can be the region of the pixel
values within the range from 30% to 70%, both inclusive, of the
maximum luminance pixel value.
[0067] Moreover, the first middle luminance pixel value region can
be the region of the pixel values within the range from 10% to 80%,
both inclusive, of the maximum luminance pixel value, and the
second middle luminance pixel value region can be the region of the
pixel values within the range from 20% to 90%, both inclusive, of
the maximum luminance pixel value.
[0068] While the present invention has been described with
reference to exemplary embodiments, it is to be understood that the
invention is not limited to the disclosed exemplary embodiments.
The scope of the following claims is to be accorded the broadest
interpretation so as to encompass all such modifications and
equivalent structures and functions.
[0069] This application claims the benefit of Japanese Patent
Application No. 2007-106121, filed Apr. 13, 2007, which is hereby
incorporated by reference herein in its entirety.
* * * * *