U.S. patent application number 11/727147 was filed with the patent office on 2008-02-28 for image processing apparatus and imaging device.
Invention is credited to Goh Itoh, Ryosuke Nonaka.
Application Number: 20080050031 (11/727147)
Family ID: 39113512
Filed Date: 2008-02-28

United States Patent Application 20080050031
Kind Code: A1
Itoh; Goh; et al.
February 28, 2008
Image processing apparatus and imaging device
Abstract
An image processing apparatus generates a brightness component
image of a low-sensitivity image by applying a smoothing filter to
the low-sensitivity image, generates a brightness component image
of a high-sensitivity image by applying the smoothing filter to the
high-sensitivity image, generates a contrast component image of the
low-sensitivity image by dividing the low-sensitivity image by the
brightness component image of the low-sensitivity image, generates
a contrast component image of the high-sensitivity image by
dividing the high-sensitivity image by the brightness component
image of the high-sensitivity image, generates a combined
brightness component image by combining the brightness component
images of the low-sensitivity image and of the high-sensitivity
image, generates a combined contrast component image by combining
the contrast component images of the low-sensitivity image and of
the high-sensitivity image, and finally generates a
contrast-expanded image by multiplying the combined brightness
component image and the combined contrast component image.
Inventors: Itoh; Goh (Tokyo, JP); Nonaka; Ryosuke (Kanagawa, JP)
Correspondence Address: FINNEGAN, HENDERSON, FARABOW, GARRETT & DUNNER, LLP, 901 New York Avenue, NW, Washington, DC 20001-4413, US
Family ID: 39113512
Appl. No.: 11/727147
Filed: March 23, 2007
Current U.S. Class: 382/260
Current CPC Class: G06T 5/20 20130101; G06T 5/008 20130101; G06T 5/002 20130101; G06T 2207/10024 20130101; G06T 5/50 20130101
Class at Publication: 382/260
International Class: G06K 9/40 20060101 G06K009/40

Foreign Application Data

Date: Aug 24, 2006 | Code: JP | Application Number: 2006-228407
Claims
1. An image processing apparatus comprising: a first image
generator that generates each of a plurality of first images by
applying a smoothing filter to each of a plurality of input images
of a single object picked up with various sensitivities; a second
image generator that generates each of a plurality of second images
by dividing a pixel value of each pixel of each input image by a
pixel value of a collocated pixel of the corresponding first image;
a third image generator that generates a third image by adding pixel
values of collocated pixels in the first images; a fourth image
generator that generates a fourth image by multiplying pixel values
of collocated pixels in the second images; and an output image
generator that generates an output image by multiplying a pixel
value of each pixel of the third image and a pixel value of a
collocated pixel of the fourth image.
2. The apparatus of claim 1, further comprising: an extracting unit
that extracts a luminance component, a first color-difference
component and a second color-difference component from each
input image to generate each of a plurality of luminance images, a
plurality of first color-difference images and a plurality of
second color-difference images; a third color-difference image
combining unit that adds pixel values of collocated pixels in the
first color-difference images to generate a third color-difference
image; and a fourth color-difference image combining unit that adds
pixel values of collocated pixels in the second color-difference
images to generate a fourth color-difference image; wherein the
first image generator generates the first images using the
luminance images as new input images, and the output image
generator generates a new output image by combining the third
color-difference image and the fourth color-difference image with
the output image.
3. The apparatus of claim 1, wherein the first image generator uses
a linear low-pass filter or a non-linear low-pass filter of an edge
preservation type as the smoothing filter.
4. The apparatus of claim 1, wherein the third image generator
generates the third image after having adjusted each first
image to an average gradation level.
5. The apparatus of claim 4, wherein the adjustment by the third
image generator comprises: obtaining a histogram of each first
image; obtaining a weighted average value of each first image
on the basis of the histogram; and adjusting each first image
to the average gradation level by the weighted average value.
6. The apparatus of claim 1, wherein the fourth image is obtained
by applying the smoothing filter to the second images and
multiplying the pixel values of the collocated pixels of the
smoothing filter-applied images.
7. The apparatus of claim 1, wherein the output image generator
performs threshold value processing so that the product of
the third image and the fourth image falls within the gradation
range which can be inputted to a display device that displays the
output image.
8. An imaging device comprising: an image pick-up unit that picks
up the input images of the single object with at least two
sensitivities, wherein the input images are processed by the image
processing apparatus of claim 1.
9. The imaging device of claim 8, wherein the image pick-up unit
comprises: a light-receiving element that acquires a
low-sensitivity image; and a light-receiving element that acquires
a high-sensitivity image, the light-receiving element having an
aperture larger than that of the light-receiving element of the
low-sensitivity image.
10. The imaging device of claim 8, wherein the image pick-up unit
comprises: a light-receiving element that acquires a
low-sensitivity image; and a light-receiving element that acquires
a high-sensitivity image, the light-receiving element having a
light-receiving time longer than a light-receiving time for the
low-sensitivity image.
11. An image processing method comprising: generating each of a
plurality of first images by applying a smoothing filter to each of
a plurality of input images of a single object picked up with
various sensitivities; generating each of a plurality of second
images by dividing a pixel value of each pixel of each input
image by a pixel value of a collocated pixel of the corresponding
first image; generating a third image by adding pixel values of
collocated pixels in the first images; generating a fourth image by
multiplying pixel values of collocated pixels in the second images;
and generating an output image by multiplying a pixel value of each
pixel of the third image and a pixel value of a collocated pixel of
the fourth image.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application is based upon and claims the benefit of
priority from the prior Japanese Patent Application No.
2006-228407, filed on Aug. 24, 2006, the entire contents of which
are incorporated herein by reference.
TECHNICAL FIELD
[0002] The present invention relates to an image processing
apparatus for expanding contrast of a picked-up image to generate
an image with good visibility, which is effective for an imaging
device which can acquire a high-sensitivity picked-up image and a
low-sensitivity picked-up image as input images, and to the imaging
device.
BACKGROUND OF THE INVENTION
[0003] In an image obtained by picking up a dark object, gradation
in the dark area is not sufficiently reproduced, and hence a
blackened image is obtained. In contrast, in an image obtained by
picking up a bright object, since more electric charge is generated
than the charge storage capacity of the imaging element, an image
saturated in white is obtained. Both cases have the problem that
images with inferior visibility are obtained, since the gradation
characteristics of the scene cannot be reproduced.
[0004] In order to solve this problem, a method of expanding
contrast by generating a combined image from two images picked up
under different conditions is proposed in Japanese Application
Kokai No. 2001-352486. Its basic concept is to emphasize a certain
spatial-frequency component of an image: the high-frequency
component of the image picked up with a short exposure time is not
emphasized, while the high-frequency component of the image picked
up with a long exposure time is emphasized, and the two images are
then combined.
[0005] Further description will be made about characteristics of
the image with low charge storage. Such an image has a large
blackened area and has much noise in the dark area as described
above, but the contrast in a bright area is high. Hereinafter, the
image picked up with a low sensitivity as described above is
referred to as "low-sensitivity image".
[0006] Further description will now be made about characteristics
of the image with high charge storage. Such an image has a large
area saturated in white, but the contrast in the dark area is high
as described above. Hereinafter, the image picked up with a high
sensitivity as described above is referred to as "high-sensitivity
image".
[0007] As a method of picking up the low-sensitivity image, there
are a method of reducing the exposure time, and a method of
reducing the surface area of an aperture of a light-receiving
element. On the other hand, as a method of picking up a
high-sensitivity image, there are a method of increasing the
exposure time and a method of increasing the surface area of the
aperture of the light-receiving element.
[0008] Therefore, Japanese Application Kokai No. 2001-352486 aims
to improve the visibility of the image by emphasizing the
high-frequency component of the high-sensitivity image while
reducing noise by not emphasizing the high-frequency component of
the low-sensitivity image, and then combining the respective
images. However, the emphasized content is the edge of the dark
area and the low-frequency component in the bright area of the
object, and no effect appears in the low-frequency component of
the dark area or the edge component of the bright area. Therefore,
the method in Japanese Application Kokai No. 2001-352486 is not,
in essence, a method of emphasizing contrast.
[0009] A method of emphasizing contrast is disclosed in Japanese
Application Kokai No. 2003-8935. In that method, the contrast of a
local area in the screen is emphasized: a smoothed value is
calculated from the pixel values in the periphery of a target
pixel using a smoothing filter (an epsilon filter) that preserves
the edge areas of an input image, and the target pixel value
divided by the smoothed value is taken as the contrast to be
emphasized.
[0010] However, with Japanese Application Kokai No. 2003-8935,
there is no difference between the pixel in question and the
peripheral pixels in the blackened area or the white-saturated
area, so the contrast there is 1 and is not suitable for emphasis.
In other words, with the method disclosed in Japanese Application
Kokai No. 2003-8935, since the information is already lost, the
contrast between the pixel in question and the peripheral pixels
cannot be enhanced.
[0011] As described above, the contrast between the pixel in
question and the peripheral pixels (hereinafter, referred to as
"local contrast") cannot be emphasized adequately from the dark
area to the bright area by using the methods described above.
[0012] Accordingly, it is an object of the invention to provide an
image processing apparatus that can generate an image with good
visibility by emphasizing local contrast using at least a
low-sensitivity image and a high-sensitivity image, and an imaging
device therefor.
BRIEF SUMMARY OF THE INVENTION
[0013] According to a first aspect of the invention, there is
provided an image processing apparatus including: a first image
generator that generates each of a plurality of first images by
applying a smoothing filter to each of a plurality of input images
of a single object picked up with various sensitivities; a second
image generator that generates each of a plurality of second images
by dividing a pixel value of each pixel of each input image by a
pixel value of a collocated pixel of the corresponding first image;
a third image generator that generates a third image by adding
pixel values of collocated pixels in the first images; a fourth
image generator that generates a fourth image by multiplying the
pixel values of collocated pixels in the second images; and an
output image generator that generates an output image by
multiplying a pixel value of each pixel of the third image and a
pixel value of a collocated pixel of the fourth image.
[0014] According to another aspect of the invention, there is
provided an imaging device including an image pick-up unit that
picks up the input images of the single object with at least two
sensitivities, wherein the input images are processed by the image
processing apparatus according to the first aspect of the
invention.
[0015] According to these aspects of the invention, input images
picked up with various sensitivities are acquired, and an image
with good visibility, in which the contrast is expanded, is
generated from the input images.
BRIEF DESCRIPTION OF THE DRAWINGS
[0016] FIG. 1 is a block diagram of an image processing apparatus
according to a first embodiment of the invention;
[0017] FIG. 2 is a block diagram of the image processing apparatus
according to a second embodiment;
[0018] FIG. 3 shows tables showing coefficients of a filtering
process according to a third embodiment;
[0019] FIGS. 4A and 4B are explanatory drawings showing Halo
phenomenon of the same embodiment;
[0020] FIGS. 5A and 5B show another filtering coefficient of the
same embodiment;
[0021] FIGS. 6A and 6B illustrate an ε filter according to
the same embodiment;
[0022] FIGS. 7A and 7B illustrate an ε median filter
according to the same embodiment;
[0023] FIG. 8 illustrates an example of an ε value in an
image processing in the same embodiment;
[0024] FIGS. 9A and 9B are image data to be converted in the image
processing apparatus according to a fifth embodiment;
[0025] FIGS. 10A and 10B are image data to be converted in the
image processing apparatus according to the same embodiment;
[0026] FIGS. 11A and 11B are block diagrams of the image processing
apparatus according to a seventh embodiment;
[0027] FIG. 12 is a block diagram of the image processing apparatus
according to an eighth embodiment;
[0028] FIG. 13 is a block diagram of an imaging device according to
the first embodiment of the invention;
[0029] FIGS. 14A and 14B are drawings showing a configuration of an
imaging device in the imaging device according to the second
embodiment;
[0030] FIGS. 15A and 15B are drawings showing a configuration of
the imaging device in the imaging device according to the third
embodiment;
[0031] FIG. 16 is a table showing the difference between a
low-sensitivity image and a high-sensitivity image in the image
processing apparatus in the first embodiment; and
[0032] FIG. 17 is a table showing characteristics of the various
types of image processing in the image processing apparatus
according to the first embodiment.
DETAILED DESCRIPTION OF THE INVENTION
[0033] An image processing apparatus 10 which generates a
contrast-expanded image from a low-sensitivity image and a
high-sensitivity image picked up with two different sensitivities
will be described in a first to an eighth embodiment, and then an
imaging device 200 for picking up the low-sensitivity image and
the high-sensitivity image will be described in a ninth to an
eleventh embodiment.
First Embodiment
[0034] Referring now to FIG. 1, FIGS. 16 and 17, the image
processing apparatus 10 according to a first embodiment of the
invention will be described.
(1) Description of Low-Sensitivity Image, High-Sensitivity
Image
[0035] The image processing apparatus 10 receives a monochrome
low-sensitivity image and a monochrome high-sensitivity image as
inputs. The low-sensitivity image and the high-sensitivity image
are the images described above.
[0036] In the description of the embodiments shown below, it is
assumed that the image data has gradation levels (levels of
brightness) from 0 to 255 and, in order to simplify the
description, the relation between the gradation levels and the
brightness is assumed to be 1:1. However, the relation between the
gradation level and the brightness level of the image data may
also follow a predetermined curve.
(2) Configuration of Image Processing Apparatus 10
[0037] Referring now to FIG. 1, a configuration of the image
processing apparatus 10 will be described. FIG. 1 is a block
diagram of the image processing apparatus 10 according to the first
embodiment.
[0038] The image processing apparatus 10 includes a smoothing
filter processing unit 12, a smoothing filter processing unit 14, a
divider 16, a divider 18, a contrast component combining unit 20, a
brightness component combining unit 22, a multiplier 24, an L_LCC
image processing unit 26, an L_GLC image processing unit 28, an
H_LCC image processing unit 30, and an H_GLC image processing unit
32. The functions of these units 12 to 32 may also be realized by
a program stored in a computer.
(2-1) Smoothing Filter Processing Units 12, 14
[0039] Firstly, in the smoothing filter processing unit 12, a
brightness component image of the low-sensitivity image shown by an
expression (1) shown below (hereinafter, referred to as L_GLC) is
generated by applying the smoothing filter to the low-sensitivity
image. Subsequently, image processing is applied to the L_GLC by
the L_GLC image processing unit 28. The image processing performed
here will be described later.
[0040] In the same manner, the smoothing filter processing unit 14
generates a brightness component image (hereinafter, referred to as
H_GLC) of the high-sensitivity image shown by an expression (2)
shown below by applying the smoothing filter also to the
high-sensitivity image. As described later, different smoothing
filters may be used here for the low-sensitivity image and the
high-sensitivity image. Subsequently, the image processing is
applied to the H_GLC by the H_GLC image processing unit 32. The
image processing performed here will be described later.
P_L_GLC(x, y) = Σ (i = -k to k) Σ (j = -l to l) T_L_GLC(i, j) · X(x - i, y - j)   (1)

where
[0041] P_L_GLC(x, y): brightness component image data of the low-sensitivity image at a pixel position (x, y)
[0042] T_L_GLC(i, j): filter coefficient of the smoothing filter for the low-sensitivity image
[0043] X(x, y): image data of the low-sensitivity image at the pixel position (x, y)
[0044] k: lateral pixel range to be filtered
[0045] l: vertical pixel range to be filtered

P_H_GLC(x, y) = Σ (i = -k to k) Σ (j = -l to l) T_H_GLC(i, j) · X(x - i, y - j)   (2)

where
[0046] P_H_GLC(x, y): brightness component image data of the high-sensitivity image at the pixel position (x, y)
[0047] T_H_GLC(i, j): filter coefficient of the smoothing filter for the high-sensitivity image
[0048] X(x, y): image data of the high-sensitivity image at the pixel position (x, y)
[0049] k: lateral pixel range to be filtered
[0050] l: vertical pixel range to be filtered
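As a concrete illustration, the weighted sum of expressions (1) and (2) can be sketched in plain Python as follows. This is a hedged sketch, not code from the patent: the function name, the clamped border handling, and the example 3×3 average-value kernel are assumptions introduced here.

```python
def smooth(image, kernel):
    """Expressions (1)/(2): P(x, y) = sum_i sum_j T(i, j) * X(x - i, y - j).

    `image` is a list of rows of gray levels; `kernel` is a
    (2l+1) x (2k+1) table of filter coefficients T(i, j).
    Borders are handled by clamping coordinates (an assumption;
    the patent does not specify border treatment).
    """
    h, w = len(image), len(image[0])
    l = len(kernel) // 2       # vertical pixel range to be filtered
    k = len(kernel[0]) // 2    # lateral pixel range to be filtered
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for j in range(-l, l + 1):
                for i in range(-k, k + 1):
                    yy = min(max(y - j, 0), h - 1)   # clamp at image edges
                    xx = min(max(x - i, 0), w - 1)
                    acc += kernel[j + l][i + k] * image[yy][xx]
            out[y][x] = acc
    return out

# Example: a 3x3 average-value smoothing filter (coefficients sum to 1).
box = [[1 / 9] * 3 for _ in range(3)]
```

Applying `smooth` with `box` to a uniform region leaves it unchanged, while fine detail is averaged away, which is exactly the low spatial-frequency extraction the text describes.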
(2-2) Dividers 16, 18
[0051] Subsequently, the divider 16 divides the low-sensitivity
image by the L_GLC to generate a contrast component image of the
low-sensitivity image shown by an expression (3) (hereinafter,
referred to as L_LCC). Subsequently, the image processing is
applied to the L_LCC by the L_LCC image processing unit 26. The
image processing performed here will be described later.
[0052] In the same manner, the divider 18 divides the
high-sensitivity image by the H_GLC to generate a contrast
component image of the high-sensitivity image shown by an
expression (4) (hereinafter, referred to as H_LCC). Subsequently,
the image processing is applied to the H_LCC by the H_LCC image
processing unit 30. The image processing performed here will be
described later.
[0053] Generally, when processing with a smoothing filter is
performed, the low spatial-frequency component of the original
image is extracted. Therefore, by subtracting the low
spatial-frequency component from the original image, the high
spatial-frequency component can be obtained. However, since
division is performed instead of subtraction in the first
embodiment, the low spatial-frequency component is regarded as an
average luminance of the screen, and the ratio of the pixel in
question to it, that is, the contrast, is obtained. Therefore, the
image after division is regarded as a contrast component image.
Q_L_LCC(x, y) = X(x, y) / P_L_GLC(x, y)   (3)

Q_H_LCC(x, y) = X(x, y) / P_H_GLC(x, y)   (4)

where
[0054] Q_L_LCC(x, y): contrast component image data of the low-sensitivity image at the pixel position (x, y)
[0055] Q_H_LCC(x, y): contrast component image data of the high-sensitivity image at the pixel position (x, y)
and X(x, y) denotes the respective input image, as in expressions (1) and (2).
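The per-pixel division of expressions (3) and (4) can be sketched as follows; the zero-guard `eps` is an assumption added here (the patent does not discuss division by a zero brightness value), and the function name is illustrative.

```python
def contrast_component(image, brightness, eps=1e-6):
    """Expressions (3)/(4): Q(x, y) = X(x, y) / P(x, y).

    `image` is the input image X and `brightness` is its smoothed
    brightness component image P; `eps` guards against division by
    zero in fully black regions (an assumption, not in the patent).
    """
    return [[px / max(pb, eps) for px, pb in zip(row_x, row_p)]
            for row_x, row_p in zip(image, brightness)]
```

A pixel brighter than its local average thus gets a contrast value above 1, and a darker one a value below 1.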
[0056] In FIG. 1, the image processing is performed separately for
P(x, y) and Q(x, y). Various methods can be applied, mainly for
eliminating noise or improving gradation characteristics. The
combining of the brightness components and the combining of the
contrast components are first expressed as expressions; detailed
description will be given later.
(2-3) Brightness Component Combining Unit 22
[0057] Subsequently, the brightness component combining unit 22
combines the L_GLC and the H_GLC as shown in an expression (5),
and forms a combined brightness component image (hereinafter,
referred to as C_GLC).
P_C_GLC(x, y) = α·P_L_GLC(x, y) + β·P_H_GLC(x, y)   (5)

where
[0058] P_C_GLC(x, y): brightness component image data of the combined image at the pixel position (x, y)
[0059] α: weight coefficient for the brightness component image of the low-sensitivity image
[0060] β: weight coefficient for the brightness component image of the high-sensitivity image
(2-4) Contrast Component Combining Unit 20
[0061] Subsequently, the contrast component combining unit 20
combines the L_LCC and the H_LCC to generate a combined contrast
component image (hereinafter, referred to as C_LCC) as shown by an
expression (6).
Q_C_LCC(x, y) = λ·Q_L_LCC(x, y) × μ·Q_H_LCC(x, y)   (6)

where
[0062] Q_C_LCC(x, y): contrast component image data of the combined image at the pixel position (x, y)
[0063] λ: weight coefficient for the contrast component image of the low-sensitivity image
[0064] μ: weight coefficient for the contrast component image of the high-sensitivity image

As is clear from the expression (6), λ and μ reduce to a single
coefficient and can finally be treated as a contrast emphasis
coefficient.
(2-5) Multiplier 24
[0065] Finally, the multiplier 24 multiplies the C_GLC and the
C_LCC to generate a contrast-expanded image as shown by an
expression (7).
R(x, y) = P_C_GLC(x, y) × γ·Q_C_LCC(x, y)   (7)

where
[0066] R(x, y): image data after the contrast-expanding process at the pixel position (x, y)
[0067] γ: contrast emphasis coefficient

[0068] As described above, from the expression (6) and the
expression (7), the relation is expressed as γ = λ·μ.
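Expressions (5) to (7) can be sketched together as one combining step. The function below is a hedged illustration (the function name and the default weights α = β = 0.5 are assumptions introduced here); λ and μ are folded into the single emphasis coefficient γ = λ·μ, as the text notes.

```python
def combine(P_L, P_H, Q_L, Q_H, alpha=0.5, beta=0.5, gamma=1.0):
    """Expressions (5)-(7):
        P_C(x, y) = alpha * P_L(x, y) + beta * P_H(x, y)                  (5)
        Q_C(x, y) = lambda * Q_L(x, y) * mu * Q_H(x, y)                   (6)
        R(x, y)   = P_C(x, y) * gamma * Q_C(x, y), with gamma = lambda*mu (7)
    """
    h, w = len(P_L), len(P_L[0])
    R = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            p_c = alpha * P_L[y][x] + beta * P_H[y][x]
            q_c = Q_L[y][x] * Q_H[y][x]   # lambda and mu folded into gamma
            R[y][x] = p_c * gamma * q_c
    return R
```

With brightness components of 100 and 200 and contrast components of 1.2 and 1.0, the combined brightness is 150 and the output pixel becomes 180, showing how the contrast components modulate the combined brightness.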
(3) Description of Image Processing
[0069] Subsequently, the image processing performed by the L_GLC
image processing unit 28, the H_GLC image processing unit 32, the
L_LCC image processing unit 26, and the H_LCC image processing unit
30 will be described.
[0070] The differences between the low-sensitivity image and the
high-sensitivity image are listed in a table in FIG. 16.
[0071] In view of description shown above, the respective types of
image processing can be characterized as shown in a table in FIG.
17.
[0072] Therefore, it is preferable to design a filter which
satisfies the characteristics as shown above. Impulsive noise may
be contained in addition to Gaussian noise, and hence a non-linear
filter, described later, may also be used effectively.
(4) Effects
[0073] With the image processing apparatus 10 according to the
first embodiment, a monochrome image with high contrast and high
visibility in both the dark area and the bright area can be
generated by expanding the contrast with the low-sensitivity image
having high contrast in the bright area and the high-sensitivity
image having high contrast in the dark area.
Second Embodiment
[0074] Referring now to FIG. 2, the image processing apparatus 10
according to a second embodiment of the invention will be
described.
(1) Description of Low-Sensitivity Image and High-Sensitivity
Image
[0075] The image processing apparatus 10 according to the second
embodiment generates a high-contrast color image from a color
low-sensitivity image and a color high-sensitivity image supplied
to it.
[0076] The image processing apparatus 10 extracts a luminance
component image, a first color-difference component image and a
second color-difference component image from the color
low-sensitivity image and the color high-sensitivity image, and
generates a contrast-expanded image of the luminance component by
performing the same method as the image processing apparatus 10
according to the first embodiment on the luminance component
images. On the other hand, the image processing apparatus 10
processes the first color-difference component images and the
second color-difference component images to obtain their average
values, and combines the processed images again to generate a
contrast-expanded color image.
(2) Configuration of Image Processing Apparatus 10
[0077] FIG. 2 is a block diagram showing the image processing
apparatus 10 according to the second embodiment, and will be
described in sequence below.
(2-1) Separating Units 102, 104
[0078] The separating unit 102 for separating the luminance
component and the color-difference components of the
low-sensitivity image extracts, from the color low-sensitivity
image, a low-sensitivity luminance component image (hereinafter,
referred to as L_LC), a low-sensitivity first color-difference
component image (hereinafter, referred to as L_UC) and a
low-sensitivity second color-difference component image
(hereinafter, referred to as L_VC).
[0079] The separating unit 104 for separating the luminance
component and the color-difference components of the
high-sensitivity image extracts, from the color high-sensitivity
image, a high-sensitivity luminance component image (hereinafter,
referred to as H_LC), a high-sensitivity first color-difference
component image (hereinafter, referred to as H_UC) and a
high-sensitivity second color-difference component image
(hereinafter, referred to as H_VC).
(2-2) Contrast Expanding Unit 106
[0080] The contrast expanding unit 106 performs processing in the
same manner as the image processing apparatus 10 in the first
embodiment, and generates a contrast-expanded image C_LC.
[0081] In detail, the processing is performed as follows.
Description of the expressions for the respective steps on the
luminance component images is omitted because they are the same
as the expressions (1) to (7) in the first embodiment.
[0082] As regards the low-sensitivity image, the smoothing filter
is applied on the L_LC, to generate the brightness component image
L_GLC of the low-sensitivity image.
[0083] In the same manner, as regards the high-sensitivity image,
the smoothing filter is applied on the H_LC to generate the
brightness component image H_GLC of the high-sensitivity image.
[0084] Subsequently, the contrast component image L_LCC of the
low-sensitivity image is generated by dividing the L_LC by the
L_GLC of the low-sensitivity image.
[0085] In the same manner, the contrast component image H_LCC of
the high-sensitivity image is generated by dividing the H_LC by
the H_GLC of the high-sensitivity image.
[0086] Subsequently, the L_GLC and the H_GLC are combined to
generate the combined brightness component image C_GLC.
[0087] Then, the L_LCC and the H_LCC are combined to generate the
combined contrast component image C_LCC.
[0088] Then, the C_GLC and the C_LCC are multiplied to generate the
contrast-expanded image C_LC.
(2-3) First Color-Difference Component Combining Unit 108
[0089] The first color-difference component combining unit 108
combines the L_UC and the H_UC and generates a combined first
color-difference component image C_UC as shown by an expression
(8).
P_C_HC(x, y) = α·P_L_HC(x, y) + β·P_H_HC(x, y)   (8)

where
[0090] P_C_HC(x, y): first color-difference component image data of the combined image at the pixel position (x, y)
[0091] α: weight coefficient for the first color-difference component image of the low-sensitivity image
[0092] β: weight coefficient for the first color-difference component image of the high-sensitivity image
(2-4) Second Color-Difference Component Combining Unit 110
[0093] The second color-difference component combining unit 110
combines the L_VC and the H_VC to generate a combined second
color-difference component image C_VC as shown by an expression
(9).
P_C_SC(x, y) = α·P_L_SC(x, y) + β·P_H_SC(x, y)   (9)

where
[0094] P_C_SC(x, y): second color-difference component image data of the combined image at the pixel position (x, y)
[0095] α: weight coefficient for the second color-difference component image of the low-sensitivity image
[0096] β: weight coefficient for the second color-difference component image of the high-sensitivity image
(2-5) Color Image Generator 112
[0097] Finally, the color image generator 112 combines the C_LC,
the C_UC and the C_VC to generate a contrast-expanded color
image.
(3) Effects
[0098] With the image processing apparatus 10 according to the
second embodiment, the contrast is expanded by the low-sensitivity
image having high contrast in the bright area and the
high-sensitivity image having high contrast in the dark area, so
that a high-contrast color image with high visibility in both the
dark area and the bright area can be generated.
(4) Modification 1
[0099] Although the same symbols are used here as weight
coefficients for the brightness component image, the first
color-difference component image and the second color-difference
component image, it is possible to combine these images after
having changed the respective ratios.
[0100] In the case of the low-sensitivity image, since a bright
object can generally be captured, picking up with good color is
achieved. In contrast, in the case of the high-sensitivity image,
although a dark object can be captured, colors may be off-balance
because, depending on the color, the intensity is not sufficient.
[0101] Therefore, a good color image can be obtained more easily
by weighting the first color-difference component and the second
color-difference component picked up as the low-sensitivity image
with a higher ratio.
(5) Modification 2
[0102] The luminance component image, the first color-difference
component image and the second color-difference component image
used above can be obtained by calculating a luminance component
(Y), a first color-difference component (U), and a second
color-difference component (V) by the following expressions from
the red component (R), green component (G) and blue component (B)
of the input image signals. Such an image conversion may employ
the calculation generally specified by a broadcast standard as
needed.
Y=0.299R+0.587G+0.114B (10)
U=-0.169R-0.331G+0.500B (11)
V=0.500R-0.419G-0.081B (12)
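As a minimal sketch, expressions (10)-(12) can be written directly in code; the function name and the sample pixels below are illustrative choices, not part of the application.

```python
def rgb_to_yuv(r, g, b):
    """Expressions (10)-(12): one RGB pixel to luminance Y and
    color-difference components U and V."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # (10)
    u = -0.169 * r - 0.331 * g + 0.500 * b  # (11)
    v = 0.500 * r - 0.419 * g - 0.081 * b   # (12)
    return y, u, v

# A neutral gray pixel carries no color difference: U and V come out near 0,
# which is a quick sanity check on the coefficients.
y, u, v = rgb_to_yuv(100, 100, 100)
```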
Third Embodiment
[0103] Referring now to FIG. 3 to FIG. 8, the image processing
apparatus 10 according to a third embodiment of the invention will
be described.
(1) Halo Phenomenon
[0104] The brightness component image generally corresponds to
outside light or illumination light in the video content of the input
image, and hence is a nearly uniform luminance component over an
object. In other words, since it corresponds to a low spatial
frequency component, it is effective to pass the input image through
a low-pass filter to generate the brightness component image. This
low-pass filter is referred to as a "smoothing filter"
hereinafter.
[0105] Various types of smoothing filter can be employed; the most
basic is a linear filter, which convolves a set of constant filter
coefficients with the image. Examples of the filter coefficients of a
Gaussian filter and an average-value filter are shown in FIG. 3.
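The two kernel families can be sketched as follows; the 3×3 coefficients are typical textbook values and are only assumed to resemble those of FIG. 3.

```python
def convolve_at(img, kernel, x, y):
    """Apply one odd-sized kernel (list of lists) at pixel (x, y);
    edges are handled by clamping coordinates (an assumption)."""
    k = len(kernel) // 2
    h, w = len(img), len(img[0])
    acc = 0.0
    for j in range(-k, k + 1):
        for i in range(-k, k + 1):
            yy = min(max(y + j, 0), h - 1)
            xx = min(max(x + i, 0), w - 1)
            acc += kernel[j + k][i + k] * img[yy][xx]
    return acc

# 3x3 Gaussian and average (box) kernels; both sum to 1.
gauss = [[1/16, 2/16, 1/16], [2/16, 4/16, 2/16], [1/16, 2/16, 1/16]]
box = [[1/9] * 3 for _ in range(3)]

img = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]  # single bright pixel
g = convolve_at(img, gauss, 1, 1)  # center pixel keeps the largest share
b = convolve_at(img, box, 1, 1)    # spread equally over the block
```

The Gaussian kernel weights the center more heavily, so it blurs less aggressively than the box kernel for the same size.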
[0106] When such a linear filter is employed, luminance fluctuations
(hereinafter referred to as the Halo phenomenon) may occur at an
object boundary, as shown in FIG. 4A, depending on the image. This
phenomenon occurs because the linear filter applies a uniform
filtering process even where the brightness component image crosses
an object boundary, so that signals are mixed between adjacent
objects (the black round object and the white background in this
case), as shown in FIG. 4B.
(2) Method of Improving Halo Phenomenon
(2-1) First Method
[0107] As a first method for reducing the Halo phenomenon, a
plurality of linear filters can be used to generate a plurality of
brightness component images and contrast component images,
respectively, and the respective component images can then be
combined with weights.
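A rough sketch of this first method on a 1-D signal, using box filters of two radii as the plural linear filters; the radii and weights are arbitrary choices for illustration.

```python
def box_smooth(signal, radius):
    """1-D box smoothing with clamped edges, standing in for one
    linear filter scale."""
    n = len(signal)
    out = []
    for x in range(n):
        window = [signal[min(max(x + d, 0), n - 1)]
                  for d in range(-radius, radius + 1)]
        out.append(sum(window) / len(window))
    return out

def multi_scale_brightness(signal, radii, weights):
    """Weighted combination of brightness components from several
    filter scales, as in the first Halo-reduction method."""
    assert abs(sum(weights) - 1.0) < 1e-9
    comps = [box_smooth(signal, r) for r in radii]
    return [sum(w * c[x] for w, c in zip(weights, comps))
            for x in range(len(signal))]

step = [10] * 8 + [200] * 8  # sharp object boundary
bright = multi_scale_brightness(step, radii=[1, 3], weights=[0.5, 0.5])
```

Mixing a small-radius component back in keeps the combined brightness closer to the input near the boundary, which is what limits the halo.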
[0108] In the respective embodiments described above, the
above-described method can be employed for the low-sensitivity
image and the high-sensitivity image respectively.
(2-2) Second Method
[0109] Referring now to FIG. 5, a second method for reducing the Halo
phenomenon will be described. FIG. 5A shows an example of a filter in
which the coefficient of the pixel at the center is sufficiently
larger than the coefficients of the peripheral pixels. FIG. 5B shows
an example of a filter in which the difference between the
coefficient of the pixel at the center and the coefficients of the
peripheral pixels is small.
[0110] As a second method, there is a method of differentiating the
filter coefficient between the low-sensitivity image and the
high-sensitivity image.
[0111] In the low-sensitivity image, the contrast of a bright object
is maintained, whereas a dark object is blackened. Conversely, in the
high-sensitivity image, a bright object is saturated toward white and
hence whitened, while the contrast of a dark object is maintained.
[0112] Therefore, in the low-sensitivity image, in order to make
the brightness component of the bright object closer to the signal
value of the input image, when the signal value of the input image
of the pixel at the center is higher than the threshold value of
the low-sensitivity image, a filter in which the coefficient of the
pixel at the center is sufficiently larger than the coefficients of
the peripheral pixels is used. In the high-sensitivity image, in
order to make the brightness component of the dark object closer to
the pixel value of the input image, when the signal value of the
input image of the pixel at the center is lower than the threshold
value of the high-sensitivity image, a filter in which the
coefficient of the pixel at the center is sufficiently larger than
the coefficients of the peripheral pixels is used.
[0113] In other words, the filter coefficient is changed according
to the luminance level of the center pixel of a block to be
processed with the filter, or the average luminance level in the
block. For example, in the low-sensitivity image, when the
luminance level at the center pixel is high, the filter shown in
FIG. 5A is used, and when the luminance level of the center pixel
is low, the filter shown in FIG. 5B is used. On the other hand, in
the high-sensitivity image, when the luminance level of the center
pixel is low, the filter shown in FIG. 5A is used, and when the
luminance level of the center pixel is high, the filter shown in
FIG. 5B is used.
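The switching rule of this second method can be summarized in a small helper; the concrete kernels and the threshold of 128 are assumptions for illustration, not values from the application.

```python
# Center-heavy kernel (FIG. 5A style) and near-uniform kernel (FIG. 5B
# style); the concrete coefficients are assumed, both sum to 1.
CENTER_HEAVY = [[1/24, 1/24, 1/24],
                [1/24, 16/24, 1/24],
                [1/24, 1/24, 1/24]]
NEAR_UNIFORM = [[1/9] * 3 for _ in range(3)]

def pick_kernel(center_luma, sensitivity, threshold=128):
    """Second method: the low-sensitivity image keeps bright centers
    close to the input, the high-sensitivity image keeps dark ones."""
    if sensitivity == "low":
        return CENTER_HEAVY if center_luma > threshold else NEAR_UNIFORM
    return CENTER_HEAVY if center_luma < threshold else NEAR_UNIFORM
```

In either image, the reliable tonal range gets the center-heavy kernel (so its brightness component stays near the input signal), while the unreliable range is smoothed normally.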
(2-3) Third Method
[0114] Referring now to FIGS. 6A and 6B to FIG. 8, a third method for
reducing the Halo phenomenon will be described.
[0115] In the third method, a non-linear filter which enables the
smoothing process while maintaining the boundary of the object is
used as a method of generating a brightness component image.
[0116] FIGS. 6A and 6B illustrate an example in which an ε filter is
used as the non-linear filter. In the ε filter, the absolute
difference between the signal levels of the center pixel and each
peripheral pixel of the block to be processed is obtained; when the
difference is equal to or smaller than a certain threshold value
(hereinafter referred to as the ε value), the value of the peripheral
pixel is maintained, and when the difference is larger than the
threshold value, the value of the peripheral pixel is replaced by the
value of the pixel at the center.
[0117] FIGS. 6A and 6B show an example in which the center pixel is
gray and the peripheral pixels are white or black. The difference
between the black peripheral pixels and the gray center pixel is
equal to or smaller than the ε value, while the difference between
the white peripheral pixels and the gray center pixel is larger than
the ε value (FIG. 6A). Therefore, the white pixels are replaced by
the value of the center gray pixel (FIG. 6B), and the 5×5 smoothing
process is then performed.
[0118] Since the ε filter has the property of removing
small-amplitude noise (such as Gaussian noise) while preserving
edges, a brightness component in which such noise is suppressed can
be obtained.
[0119] Impulsive noise is another type of noise, and a median filter
is effective for removing it. The median filter arranges the pixels
in the block in order of gradation level and selects the median
value.
[0120] FIGS. 7A and 7B illustrate an example of a median-ε filter, in
which a median filter and the ε filter are combined. The block size
of the median filter is shown by the hatched 3×3 area (FIG. 7A), and
the block size of the ε filter by the dotted 5×5 area (FIG. 7B). A
small block size reduces the cost of the arithmetic operation and,
when the probability of impulsive noise is not very high, a
sufficient effect can be obtained even with a small block.
W(x, y) = Σ_{i=-k}^{k} Σ_{j=-l}^{l} T(i, j) Y(x-i, y-j) (13)

Y(x-i, y-j) = X(x-i, y-j) if |X(x-i, y-j) - med{X(x, y)}| ≤ ε

Y(x-i, y-j) = med{X(x, y)} if |X(x-i, y-j) - med{X(x, y)}| > ε

[0121] Here W(x, y) designates the output value of the filter, T(i,
j) designates a filter coefficient, and Y(x-i, y-j) designates a
pixel value in the block. The term med{X(x, y)} is the median value
in the block: a pixel whose difference from the median is larger than
ε is replaced by the median value.
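Expression (13) can be sketched with uniform coefficients T(i, j); the block radii follow the 3×3/5×5 example of FIGS. 7A and 7B, and the edge clamping is an assumption.

```python
def median_epsilon_at(img, x, y, eps, med_radius=1, smooth_radius=2):
    """Median-ε filter of expression (13): peripheral values farther
    than eps from the local median are replaced by that median, then
    box-averaged (uniform T(i, j) assumed)."""
    h, w = len(img), len(img[0])

    def at(xx, yy):  # clamped pixel access
        return img[min(max(yy, 0), h - 1)][min(max(xx, 0), w - 1)]

    # med{X(x, y)}: median over the inner (hatched 3x3) block
    med_vals = sorted(at(x + i, y + j)
                      for j in range(-med_radius, med_radius + 1)
                      for i in range(-med_radius, med_radius + 1))
    med = med_vals[len(med_vals) // 2]

    # ε stage over the outer (dotted 5x5) block
    acc, count = 0.0, 0
    for j in range(-smooth_radius, smooth_radius + 1):
        for i in range(-smooth_radius, smooth_radius + 1):
            v = at(x + i, y + j)
            acc += v if abs(v - med) <= eps else med
            count += 1
    return acc / count

# A single impulse in a flat region is rejected by the median stage.
img = [[100] * 5 for _ in range(5)]
img[2][2] = 255
out = median_epsilon_at(img, 2, 2, eps=50)
```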
[0122] Subsequently, a method of setting the ε value will be
described. Although it may be set by any method, it can also be
changed according to the data of the target input image.
[0123] For example, when the contrast of the dark area is expanded,
the contrast of the noise component is also expanded, and hence the
visibility may be impaired. In the dark area or the bright area,
blackening or whitening may occur, and the signal component of the
object is lost in many cases. Therefore, by retaining more of the
signal in the brightness component image, the contrast component is
brought to a value close to 1. In other words, the effect on the
combined contrast component image can be reduced.
[0124] More specifically, the ε value is varied according to the
gradation level of the input image as in the expression shown below.
FIG. 8 shows the ε value when the ξ value is 0.5. At dark or bright
gradation levels, since the ε value is small, the input image becomes
the data of the brightness component image as is, without being
affected by the peripheral pixels. At intermediate gradation levels,
since the ε value is large, the data smoothed over the input pixel
and its peripheral pixels becomes the brightness component image
data. Therefore, the data of the contrast component image obtained by
dividing the input image by the brightness component image is close
to 1 at dark or bright gradation levels.

ε = (128 - |128 - X(x, y)|) · ξ (14)

[0125] ξ: arbitrary real number
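Expression (14) is a single line of code; with ξ = 0.5 it reproduces the tent-shaped curve of FIG. 8 (the function name is an illustrative choice).

```python
def epsilon_value(x, xi=0.5):
    """Expression (14): eps = (128 - |128 - X(x, y)|) * xi.
    Small near gradation levels 0 and 255, largest at mid-gray 128."""
    return (128 - abs(128 - x)) * xi
```

Because ε shrinks toward the extremes, dark and bright pixels pass through the filter nearly untouched, which keeps their contrast component close to 1 as described above.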
Fourth Embodiment
[0126] The image processing apparatus 10 according to a fourth
embodiment of the invention will be described.
[0127] In the respective embodiments shown above, the input image is
divided by the brightness component image to generate the contrast
component image. However, when the brightness component image is 0,
the contrast cannot be calculated.

[0128] Therefore, the brightness component image must be larger than
0. Moreover, since erroneous signals may in practice be mixed into
the brightness component image, it is recommended to perform
preprocessing with a threshold value Th. Normally, the threshold
value Th is set to 1, and pixels whose value is 0 are simply clipped
to 1.
[0129] However, it is also possible to differentiate the threshold
value Th respectively for the low-sensitivity image and the
high-sensitivity image.
[0130] For example, in the low-sensitivity image, the contrast of a
bright object is maintained, whereas a dark object is blackened. The
dark object is therefore highly likely to contain much noise and is
recognized as having low reliability as image data, so the threshold
value Th may be set to a larger value (10, for example).
[0131] On the other hand, in the high-sensitivity image, a bright
object is saturated toward white and hence whitened, while the
contrast of a dark object is maintained. The bright object is
therefore highly likely to contain much noise and is recognized as
having low reliability as image data, so it is also possible to clip
values exceeding a threshold value Th (245, for example).
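The preprocessing of this embodiment amounts to clamping the brightness component before division; the default lo=1 and the example thresholds 10 and 245 follow the text, while the function itself is only a sketch.

```python
def clip_brightness(img, lo=1, hi=None):
    """Clamp brightness-component values so the later division is safe:
    lo around 10 for the low-sensitivity image (dark data unreliable),
    hi around 245 for the high-sensitivity image (bright data
    unreliable); lo=1 alone covers the divide-by-zero case."""
    out = []
    for row in img:
        new_row = []
        for v in row:
            v = max(v, lo)
            if hi is not None:
                v = min(v, hi)
            new_row.append(v)
        out.append(new_row)
    return out
```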
Fifth Embodiment
[0132] Referring now to FIGS. 9A and 9B, and FIGS. 10A and 10B, the
image processing apparatus 10 according to a fifth embodiment of
the invention will be described.
[0133] As in the first embodiment, weighted addition can be employed
to generate the combined brightness component image. However, when
the brightness differs significantly between the low-sensitivity
image and the high-sensitivity image, the combined image becomes
entirely dark or bright, and hence blackening (the image data
approaches 0) or whitening (the image data reaches the maximum
gradation, 255) may occur when the contrast component is multiplied.
In that case, the effect of the contrast expansion can hardly be
obtained.
[0134] Therefore, in order to bring the average gradation level of
the combined brightness component image to a desired level, it is
effective to adjust the brightness of the brightness component
images. Methods therefor will be described below.
(1) First Method
[0135] In the first method, in order to bring the average gradation
level of the combined brightness component image to, for example,
120 when the average gradation level of the brightness component
image of the high-sensitivity image is 200 and that of the
low-sensitivity image is 60, the respective pixels of the
high-sensitivity image are multiplied by 0.6 (=120/200) and the
respective pixels of the low-sensitivity image are multiplied by 2.0
(=120/60) as preprocessing, so that both reach the same gradation
level of 120. FIGS. 9A and 9B illustrate an example in which the
image data is converted: FIG. 9A shows the result of processing the
high-sensitivity image, and FIG. 9B the result of processing the
low-sensitivity image.
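The first method is a per-image gain; a sketch follows, where the target of 120 mirrors the example above and the helper name is an illustrative choice.

```python
def match_average(img, target):
    """Scale every pixel so that the image's average gradation level
    becomes `target` (e.g. gain 0.6 = 120/200 for the high-sensitivity
    brightness component in the example)."""
    flat = [v for row in img for v in row]
    avg = sum(flat) / len(flat)
    gain = target / avg
    return [[v * gain for v in row] for row in img], gain

high = [[200, 200], [200, 200]]   # average 200
scaled_high, gain_high = match_average(high, 120)  # gain 0.6
```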
(2) Second Method
[0136] As a second method, the average gradation level can be
obtained using only reliable data, on the grounds that in the
low-sensitivity image the contrast of a bright object is maintained
but a dark object is blackened, while in the high-sensitivity image
a bright object is saturated toward white and whitened but the
contrast of a dark object is maintained.
[0137] For example, the average gradation level of the brightness
component image of the low-sensitivity image can be obtained using
only the image data larger than a preset threshold value Th (10, for
example), and the average gradation level of the brightness component
image of the high-sensitivity image using only the image data smaller
than a preset threshold value Th (245, for example); the respective
images are then modulated so that these averages match the desired
average gradation level of the combined brightness component image.
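The second method simply restricts which pixels enter the average; a sketch with the thresholds of the example (the function and parameter names are illustrative).

```python
def reliable_average(img, lo=None, hi=None):
    """Average gradation level using only trusted pixels: values
    above `lo` for the low-sensitivity image, values below `hi`
    for the high-sensitivity image."""
    vals = [v for row in img for v in row
            if (lo is None or v > lo) and (hi is None or v < hi)]
    return sum(vals) / len(vals) if vals else None

# Low-sensitivity: ignore blackened pixels (<= 10).
avg_low = reliable_average([[5, 100, 200]], lo=10)
# High-sensitivity: ignore whitened pixels (>= 245).
avg_high = reliable_average([[100, 200, 250]], hi=245)
```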
(3) Third Method
[0138] The first method adjusts the brightness of the combined
brightness component image using the average brightness of the
respective brightness component images of the low-sensitivity image
and the high-sensitivity image. Instead, as a third method, it is
also possible to obtain the histograms of the respective images and
perform a histogram smoothing process as preprocessing.
[0139] Although the histogram smoothing process may be performed by
any method, for example, when converting a certain gradation level L1
to L2 by smoothing in the low-sensitivity image, the accumulated
number of pixels C1(L1) up to L1 is set equal to the accumulated
number of pixels C2(L2) up to L2 after smoothing. This conversion may
be achieved with the following expressions.
[0140] FIGS. 10A and 10B illustrate an example in which the image
data is converted. FIG. 10A shows the histogram of the brightness
component image of the low-sensitivity image, and FIG. 10B shows a
result of conversion of the respective signal levels of the
brightness component image in the low-sensitivity image by the
histogram smoothing process.
C2(L2) = ((S - S/256)/255) · L2 + S/256 = (S/256) · (L2 + 1) (15)

C1(L1) = C2(L2) (16)

L2 = C1(L1) · 256/S - 1 (17)

[0141] S: number of pixels of the entire screen
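Expressions (15)-(17) give the equalization mapping directly once the cumulative histogram C1 is known; this sketch assumes C1 is supplied as a 256-entry list of accumulated pixel counts.

```python
def equalize_level(l1, c1, s):
    """Expression (17): L2 = C1(L1) * 256 / S - 1, obtained by equating
    C1(L1) with the flat target cumulative histogram C2(L2) = (S/256) *
    (L2 + 1) of expression (15)."""
    return c1[l1] * 256 / s - 1

# A source whose histogram is already flat maps every level onto itself.
flat_c1 = [level + 1 for level in range(256)]  # one pixel per level, S = 256
same = equalize_level(100, flat_c1, 256)
```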
(4) Fourth Method
[0142] A fourth method will now be described. In the low-sensitivity
image, the contrast of a bright object is maintained, whereas a dark
object is blackened. In the high-sensitivity image, a bright object
is saturated toward white and hence whitened, while the contrast of a
dark object is maintained. Therefore, in the fourth method, only the
reliable data of each image is accumulated, and the images are
modulated so that the luminance levels after accumulation become the
same.
[0143] For example, the numbers of pixels are accumulated from the
maximum luminance level 255 downward in the low-sensitivity image
until the count reaches N at a luminance level LN, and from the
lowest luminance level 0 upward in the high-sensitivity image until
the count reaches M at a luminance level LM; the images are then
modulated so that LN and LM become the same, where N and M are chosen
so that their sum substantially matches the number of pixels S of the
entire screen.
Sixth Embodiment
[0144] The image processing apparatus 10 according to a sixth
embodiment of the invention will be described.
[0145] The combined contrast component image can be obtained by
multiplying the respective contrast component images of the
low-sensitivity image and the high-sensitivity image, as in the first
embodiment.
[0146] The reason is as follows. In the low-sensitivity image, the
contrast of a bright object is maintained, whereas a dark object is
blackened; in the high-sensitivity image, a bright object is
saturated toward white and hence whitened, while the contrast of a
dark object is maintained. The blackened object in the
low-sensitivity image has a contrast of 1, so when it is multiplied
by the contrast component of the high-sensitivity image, the contrast
component of the high-sensitivity image becomes the combined contrast
image there. Likewise, the whitened object in the high-sensitivity
image has a contrast of 1, so when it is multiplied by the contrast
component of the low-sensitivity image, the contrast component of the
low-sensitivity image becomes the combined contrast image there.
Accordingly, the combined contrast component image maintains the
contrast from the dark object to the bright object.
[0147] Therefore, in the sixth embodiment, the contrast component
is modulated according to the luminance level as a method different
from the first embodiment.
[0148] The reliability of the contrast component of a dark object is
low in the low-sensitivity image, while the reliability of the
contrast component of a bright object is low in the high-sensitivity
image. Therefore, the respective data of the contrast component
images are exponentiated so that, in the low-sensitivity image, the
contrast of pixels whose luminance level in the brightness component
image is low approaches 1, and, in the high-sensitivity image, the
contrast of pixels whose luminance level in the brightness component
image is high approaches 1. This procedure can be expressed as shown
below.
Q_L_LCC2 = Q_L_LCC^((L1/255) · L_τ) (18)

Q_H_LCC2 = Q_H_LCC^(((255 - L1)/255) · H_τ) (19)

[0149] Q_L_LCC: contrast component of the low-sensitivity image
before conversion [0150] Q_L_LCC2: contrast component of the
low-sensitivity image after conversion [0151] Q_H_LCC: contrast
component of the high-sensitivity image before conversion [0152]
Q_H_LCC2: contrast component of the high-sensitivity image after
conversion [0153] L_τ, H_τ: arbitrary real
numbers
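Expressions (18) and (19) can be sketched as below. The exponent for the high-sensitivity image is written as (255 - L1)/255 so that it vanishes at the bright end, matching the stated intent; treat that sign convention, like the function name, as an assumption.

```python
def modulate_contrast(q, l1, sensitivity, tau=1.0):
    """Raise contrast component q to a luminance-dependent power so that
    unreliable regions (dark in the low-sensitivity image, bright in the
    high-sensitivity image) are pushed toward a contrast of 1."""
    if sensitivity == "low":
        exponent = (l1 / 255) * tau          # -> 0 as L1 -> 0, per (18)
    else:
        exponent = ((255 - l1) / 255) * tau  # -> 0 as L1 -> 255, per (19)
    return q ** exponent
```

Any value raised to the power 0 is 1, so the unreliable end of each image contributes nothing when the two contrast components are later multiplied.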
Seventh Embodiment
[0154] Referring now to FIG. 11, the image processing apparatus 10
according to a seventh embodiment of the invention will be
described.
[0155] When the combined contrast component image is generated, noise
is mixed into the respective contrast component images of the
low-sensitivity image and the high-sensitivity image owing to the
noise in the input images.
[0156] Therefore, as shown in a block diagram in FIG. 11A, the
noise can be removed also from the combined contrast-expanded image
by performing noise removal also in the contrast component
image.
[0157] Noise may be removed from the respective contrast component
images using the linear filter or the non-linear filter described
above. The non-linear filter is preferable, since it is desirable to
preserve the contrast between adjacent pixels clearly. Because the
contrast component can take extremely large values owing to the
dividing process, removal of impulsive noise is particularly
effective.
[0158] As shown in the block diagram in FIG. 11B, the respective
contrast component images can be multiplied after the noise has been
removed from each of them to obtain the combined contrast image.

[0159] It is also possible to remove the noise after multiplying the
respective contrast component images to obtain the combined contrast
image.
Eighth Embodiment
[0160] Referring now to FIG. 12, the image processing apparatus 10
according to an eighth embodiment of the invention will be
described.
[0161] The contrast-expanded image is generated by multiplying the
combined brightness component image and the combined contrast
component image. However, when the image data of the combined
brightness component image is large and the image data of the
combined contrast component image is also large, the resulting
contrast-expanded image may exceed the maximum gradation level that
can be displayed on a display device.
[0162] In the simplest handling, image data of the contrast-expanded
image that exceeds the displayable maximum gradation level (255, for
example) is replaced by 255. On the other hand, when the image data
of the combined brightness component image is small and the image
data of the combined contrast component image is also small, the
image data of the contrast-expanded image becomes 0. When the data is
thus concentrated at 0 or clipped at 255 in the contrast-expanded
image, the image is blackened or whitened.
[0163] Therefore, it is effective to increase the number of operating
levels (to 1024, for example) in the contrast-expanded image and then
either perform the coefficient operation of the first method of the
fifth embodiment to convert the result into the 255-level range, or
perform the histogram smoothing process of the third method of the
fifth embodiment.
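The level conversion can be sketched as a simple rescaling from the wider operating range into the displayable range; the linear mapping here stands in for the coefficient operation, and the histogram alternative is not shown.

```python
def level_convert(img, in_levels=1024, out_levels=256):
    """Eighth embodiment preprocessing sketch: operate over a wider
    range (e.g. 1024 levels) and linearly rescale into the displayable
    0..out_levels-1 range, clamping any stray values."""
    scale = (out_levels - 1) / (in_levels - 1)
    return [[min(out_levels - 1, max(0, round(v * scale))) for v in row]
            for row in img]

converted = level_convert([[0, 511, 1023]])  # endpoints map to 0 and 255
```

Computing at 1024 levels first keeps intermediate products from collapsing to 0 or saturating at 255 before the final conversion.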
[0164] FIG. 12 shows a block diagram of the eighth embodiment, in
which the coefficient operation and the histogram smoothing process
are performed in a level conversion process.
Ninth Embodiment
[0165] An imaging device 200 according to a ninth embodiment of the
invention will be described.
[0166] In order to generate the contrast-expanded image, the
imaging device 200 for acquiring the low-sensitivity image and the
high-sensitivity image is necessary in addition to the image
processing apparatus 10 described above.
[0167] FIG. 13 is a block diagram of the imaging device 200
including the image processing apparatus 10.
[0168] As shown in FIG. 13, the imaging device 200 basically
includes imaging devices 202, 204 including CMOS that can pick up
two images of the low-sensitivity image and the high-sensitivity
image, AD conversion units 206, 208 that convert the respective
images from the analogue data to digital data, respectively, frame
memories 210, 212 that store the digital data, the image processing
apparatus 10 that reads out the data from the respective frame
memories 210, 212 and performs image processing of the embodiments,
and a frame memory 214 that stores the contrast-expanded image.
Tenth Embodiment
[0169] Referring now to FIG. 14, the imaging device 200 according
to a tenth embodiment of the invention will be described.
[0170] The imaging devices 202, 204 for acquiring the low-sensitivity
image and the high-sensitivity image are configured with elements 216
having a small aperture and elements 218 having a large aperture, so
that the image processing can be matched to the respective
apertures.
[0171] For example, as shown in FIG. 14A, the aperture of the
elements 218 for the high-sensitivity image is twice the size of the
aperture of the elements 216 for the low-sensitivity image, and the
respective elements are arranged in a checkered pattern.
[0172] Assuming that the respective picked-up images are neither
blackened nor whitened and contain no noise, the brightness of the
high-sensitivity image is ideally twice the brightness of the
low-sensitivity image. Therefore, the ratio of the brightness
component images should also be approximately two in the image
processing.
[0173] In practice, since whitening or blackening occurs, the ratio
does not necessarily have to be two, depending on the object. Here,
the intermediate levels of the high-sensitivity image and of the
low-sensitivity image are extracted, and whether or not the adjacent
pixels have image data is inspected.
[0174] For example, when even one of the four high-sensitivity pixels
surrounding a low-sensitivity pixel has image data of the
intermediate level, as shown in FIG. 14B, the data of that
low-sensitivity pixel is considered effective. In contrast, when none
of the four surrounding high-sensitivity pixels has image data of the
intermediate level, it is judged that the low-sensitivity and
high-sensitivity images could not be picked up simultaneously in that
neighborhood, and the data of the low-sensitivity pixel in question
is determined to be ineffective. The same effective/ineffective
determination is performed for the data of the high-sensitivity
pixels. The average value is then obtained using only the data of the
effective pixels, and the processing of the sixth embodiment is
continued. With the processing described thus far, the device and the
image processing can be adjusted effectively.
Eleventh Embodiment
[0175] Referring now to FIG. 15, the imaging device 200 according
to an eleventh embodiment of the invention will be described.
[0176] The imaging devices 202, 204 for acquiring the low-sensitivity
image and the high-sensitivity image are configured with both
elements 216 whose light-receiving time is short and elements 218
whose light-receiving time is long, so that the image processing can
be matched to the respective light-receiving times.
[0177] For example, as shown in FIG. 15, the light-receiving time of
the elements 216 for the low-sensitivity image is set to half the
light-receiving time of the elements 218 for the high-sensitivity
image. Such control of the light-receiving time can be realized by
controlling the timing at which reading of data from the respective
elements is started.
[0178] As in the imaging device 200 of the second embodiment, the
ratio of the light-receiving times can be used as an adjustment
parameter in generating the brightness component image.
[0179] Although two elements can be provided as described above, a
configuration in which image pick-up can be performed for two
light-receiving times with one element is also applicable.
Modification
[0180] The invention is not limited to the embodiments described
above, and may be modified in various manners without departing the
scope of the invention.
[0181] For example, the method is also effective for generating a
contrast-expanded visible image from two or more original images
captured under different exposure conditions and obtained by a
different type of device, such as a CCD.
[0182] It is also possible to use images in R, G and B, or images
in complementary colors of cyan, magenta, yellow and green instead
of the monochrome image or the color image shown above.
[0183] Although the description above uses images picked up at two
sensitivities, that is, the low-sensitivity image and the
high-sensitivity image, the same procedure can also be applied when
images picked up at three or more sensitivities are used.
* * * * *