U.S. patent application number 14/395476 was published by the patent office on 2015-05-14 for a multi-angle spectral imaging measurement method and apparatus.
This patent application is currently assigned to OFFICE COLOR SCIENCE CO., LTD. The applicant listed for this patent is OFFICE COLOR SCIENCE CO., LTD. Invention is credited to Masayuki Osumi.
Publication Number | 20150131090 |
Application Number | 14/395476 |
Family ID | 49383589 |
Publication Date | 2015-05-14 |
United States Patent Application | 20150131090 |
Kind Code | A1 |
Osumi; Masayuki | May 14, 2015 |
MULTI-ANGLE SPECTRAL IMAGING MEASUREMENT METHOD AND APPARATUS
Abstract
A lighting device that emits illumination light from two or more
angular directions onto a sample surface to be measured, an imaging
optical lens, and a monochrome two-dimensional image sensor are
provided. This configuration provides a method and an apparatus
that take a two-dimensional image of the sample surface to be
measured at each measurement wavelength and accurately measure
multi-angle and spectral information on every pixel in the
two-dimensional image in a short time. In particular, a multi-angle
spectral imaging measurement method and apparatus that have
improved accuracy and usefulness are provided.
Inventors: | Osumi; Masayuki (Kumamoto-shi, JP) |
Applicant: | OFFICE COLOR SCIENCE CO., LTD. (Kumamoto-shi, Kumamoto, JP) |
Assignee: | OFFICE COLOR SCIENCE CO., LTD. (Kumamoto-shi, Kumamoto, JP) |
Family ID: | 49383589 |
Appl. No.: | 14/395476 |
Filed: | April 19, 2013 |
PCT Filed: | April 19, 2013 |
PCT No.: | PCT/JP2013/061671 |
371 Date: | October 18, 2014 |
Current U.S. Class: | 356/300 |
Current CPC Class: | G01J 3/46 20130101; G01N 21/276 20130101; G01J 2003/2826 20130101; G01J 3/0264 20130101; G01J 3/42 20130101; G01J 3/501 20130101; G01J 3/12 20130101; G01J 3/502 20130101; G01J 3/524 20130101; G01J 2003/1226 20130101; G01J 3/463 20130101; G01J 3/1256 20130101; G01N 2201/061 20130101; G01N 21/255 20130101; G01J 3/0208 20130101; G01J 3/0297 20130101; G01J 3/504 20130101; G01N 21/57 20130101; G01J 3/2823 20130101; G01N 21/8422 20130101 |
Class at Publication: | 356/300 |
International Class: | G01J 3/28 20060101 G01J003/28; G01J 3/42 20060101 G01J003/42 |
Foreign Application Data
Date | Code | Application Number |
Apr 20, 2012 | JP | 2012-097170 |
Jun 1, 2012 | JP | 2012-126389 |
Claims
1. A multi-angle spectral imaging measurement apparatus comprising:
a linear or spot lighting device capable of emitting white
illumination light perpendicularly onto a sample surface containing
effect materials from two or more fixed angular directions;
spectroscopic means for dispersing light reflected from the sample
surface, the spectroscopic means being disposed above the sample
surface; an imaging lens forming an image of reflected light
dispersed by the spectroscopic means; a fixed two-dimensional image
sensor capable of receiving the reflected light through the imaging
lens to take an image of the sample surface; and a white reference
surface provided around the entire sample surface; the multi-angle
spectral imaging measurement apparatus acquiring spectral
information on the sample surface by using changes in optical
geometrical conditions in an illumination direction and an image
taking direction for each pixel in a two-dimensional image taken
with the two-dimensional image sensor; wherein, during calibration
before measurement, an image of a reference standard white plate
and the white reference surface is taken at the same time, a
calibration coefficient for each pixel and each wavelength is
measured, and exposure time for each of the wavelengths is
determined; a two-dimensional image of the sample surface and the
white reference surface provided around the entire sample surface
is taken at each measurement wavelength by changing a pass
wavelength of the spectroscopic means without changing the relative
positions of the lighting device, the two-dimensional image sensor,
the sample surface and the white reference surface, and spectral
information on all of the pixels of the two-dimensional image at
each measurement wavelength is acquired; and image taking exposure
time for calibration and measurement is changed according to each
of the measurement wavelengths to correct gain differences in a
spectral property of the lighting device, a spectral property in
the spectroscopic means, and a spectral property of the
two-dimensional image sensor within the range of the measurement
wavelengths.
2. A multi-angle spectral imaging measurement apparatus comprising:
a linear or spot lighting device capable of emitting white
illumination light perpendicularly onto a sample surface from two
or more fixed angular directions; an imaging lens forming an image
of light reflected from the sample surface, the imaging lens being
disposed above the sample surface; spectroscopic means for
dispersing light imaged by the imaging lens; a fixed
two-dimensional image sensor capable of receiving light from the
spectroscopic means to take an image of the sample surface; and a
white reference surface provided around the entire sample surface;
the multi-angle spectral imaging measurement apparatus acquiring
spectral information on the sample surface by using changes in
optical geometrical conditions in an illumination direction and an
image taking direction for each pixel in a two-dimensional image
taken with the two-dimensional image sensor; wherein, during
calibration before measurement, an image of a reference standard
white plate and the white reference surface is taken at the same
time, a calibration coefficient for each pixel and each wavelength
is measured, and exposure time for each of the wavelengths is
determined; a two-dimensional image of the sample surface and the
white reference surface provided around the entire sample surface
is taken at each measurement wavelength by changing a pass
wavelength of the spectroscopic means without changing the relative
positions of the lighting device, the two-dimensional image sensor,
the sample surface and the white reference surface, and spectral
information on all of the pixels of the two-dimensional image at
each measurement wavelength is acquired; and image taking exposure
time for calibration and measurement is changed according to each
of the measurement wavelengths to correct gain differences in a
spectral property of the lighting device, a spectral property in
the spectroscopic means, and a spectral property of the
two-dimensional image sensor within the range of the measurement
wavelengths.
3. A multi-angle spectral imaging measurement apparatus comprising:
a linear or spot wavelength-variable lighting device capable of
emitting monochromatic illumination light with each measurement
wavelength perpendicularly onto a sample surface from two or more
fixed angular directions; an imaging lens forming an image of light
reflected from the sample surface, the imaging lens being disposed
above the sample surface; a fixed two-dimensional image sensor
capable of receiving the reflected light through the imaging lens
to take an image of the sample surface; and a white reference
surface provided around the entire sample surface; the multi-angle
spectral imaging measurement apparatus acquiring spectral
information on the sample surface by using changes in optical
geometrical conditions in an illumination direction and an image
taking direction for each pixel in two axis directions in a
two-dimensional image taken with the two-dimensional image sensor;
wherein, during calibration before measurement, an image of a
reference standard white plate and the white reference surface is
taken at the same time, a calibration coefficient for each pixel
and each wavelength is measured, and exposure time for each of the
wavelengths is determined; a two-dimensional image of the sample
surface and the white reference surface provided around the entire
sample surface is taken at each measurement wavelength by changing
an emission wavelength of the wavelength-variable lighting device without
changing the relative positions of the lighting device, the
two-dimensional image sensor, the sample surface and the white
reference surface, and spectral information on all of the pixels of
the two-dimensional image at each measurement wavelength is
acquired; and image taking exposure time for calibration and
measurement is changed according to each of the measurement
wavelengths to correct gain differences in a spectral property of
the lighting device, a spectral property in the spectroscopic
means, and a spectral property of the two-dimensional image sensor
within the range of the measurement wavelengths.
4. A multi-angle spectral imaging measurement apparatus comprising:
a linear or spot wavelength-variable lighting device capable of
emitting monochromatic illumination light with each measurement
wavelength perpendicularly onto a sample surface from two or more
fixed angular directions; spectroscopic means for dispersing light
reflected from the sample surface, the spectroscopic means being
disposed above the sample surface; an imaging lens forming an image
of reflected light dispersed by the spectroscopic means; a fixed
two-dimensional image sensor capable of receiving the reflected
light through the imaging lens to take an image of the sample
surface; and a white reference surface provided around the entire
sample surface; the multi-angle spectral imaging measurement
apparatus acquiring spectral information on the sample surface by
using changes in optical geometrical conditions in an illumination
direction and an image taking direction for each pixel in a
two-dimensional image taken with the two-dimensional image sensor;
wherein, during calibration before measurement, an image of a
reference standard white plate and the white reference surface is
taken at the same time, a calibration coefficient for each pixel
and each wavelength is measured, and exposure time for each of the
wavelengths is determined; an emission wavelength of the lighting
device and a pass wavelength of the spectroscopic means are changed
to take a two-dimensional image of the sample surface and the white
reference surface provided around the entire sample surface with
each of the emission wavelengths and each of the pass wavelengths
without changing the relative positions of the lighting device, the
two-dimensional image sensor, the sample surface and the white
reference surface; spectral information on all of the pixels of the
two-dimensional image for each of the emission wavelengths and each
of the pass wavelengths is acquired; and image taking exposure time
for the calibration and measurement is changed according to each of
the measurement wavelengths to correct gain differences in a
spectral property of the lighting device, a spectral property in
the spectroscopic means, and a spectral property of the
two-dimensional image sensor within the range of the measurement
wavelengths.
5. A multi-angle spectral imaging measurement apparatus comprising:
a linear or spot wavelength-variable lighting device capable of
emitting monochromatic illumination light with each measurement
wavelength perpendicularly onto a sample surface from two or more
fixed angular directions; an imaging lens forming an image of light
reflected from the sample surface, the imaging lens being disposed
above the sample surface; spectroscopic means for dispersing light
imaged by the imaging lens; a fixed two-dimensional image sensor
capable of receiving light from the spectroscopic means to take an
image of the sample surface; and a white reference surface provided
around the entire sample surface; the multi-angle spectral imaging
measurement apparatus acquiring spectral information on the sample
surface by using changes in optical geometrical conditions in an
illumination direction and an image taking direction for each pixel
in a two-dimensional image taken with the two-dimensional image
sensor; wherein the multi-angle spectral imaging measurement
apparatus is capable of taking an image of a reference standard
white plate and the white reference surface at the same time,
measuring a calibration coefficient for each pixel and each
wavelength, and determining exposure time for each of the
wavelengths during calibration before measurement; changing an
emission wavelength of the lighting device and a pass wavelength of
the spectroscopic means to take a two-dimensional image of the
sample surface and the white reference surface provided around the
entire sample surface with each of the emission wavelengths and
each of the pass wavelengths without changing the relative
positions of the lighting device, the two-dimensional image sensor,
the sample surface and the white reference surface; and acquiring
spectral information on all of the pixels of the two-dimensional
image for each of the emission wavelengths and each of the pass
wavelengths; and image taking exposure time for the calibration and
measurement is changed according to each of the measurement
wavelengths to correct gain differences in a spectral property of
the lighting device, a spectral property in the spectroscopic
means, and a spectral property of the two-dimensional image sensor
within the range of the measurement wavelengths.
6. The multi-angle spectral imaging measurement apparatus according
to claim 1, wherein a dynamic range of measurement is extended by
changing the light amount of the lighting device, changing
exposure time during image taking, or a combination of both.
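The dynamic-range extension recited in claim 6 can be illustrated with a short sketch. The merge rule, variable names, and sensor full-scale value below are illustrative assumptions, not taken from the patent: a long and a short exposure of the same scene are combined, falling back to the rescaled short exposure wherever the long one saturates.

```python
import numpy as np

def merge_exposures(long_img, short_img, t_long, t_short, full_scale=4095):
    """Merge two exposures into one radiance-proportional image.

    Where the long exposure is below full scale it is used directly;
    saturated pixels are replaced by the short exposure rescaled by the
    ratio of exposure times.
    """
    scaled_short = short_img * (t_long / t_short)
    return np.where(long_img < full_scale, long_img, scaled_short)

long_f = np.array([[1000.0, 4095.0]])    # second pixel is saturated
short_f = np.array([[100.0, 800.0]])     # taken with 10x shorter exposure
merged = merge_exposures(long_f, short_f, t_long=10.0, t_short=1.0)
```

The same idea applies to modulating the light amount instead of the exposure time; only the rescaling ratio changes.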
7. The multi-angle spectral imaging measurement apparatus according
to claim 1, wherein the spectral information on each pixel for each
measurement wavelength is used for averaging of numerical values
and image calculations such as calculations of distributions of
color numerical values in a color space, spatial frequency analysis
image calculations, and information entropy image calculations.
8. The multi-angle spectral imaging measurement apparatus according
to claim 1 into which three-dimensional geometry measurement means
is incorporated, wherein three-dimensional geometry information on
the sample surface is measured by the three-dimensional geometry
measurement means, the three-dimensional geometry information on
the sample surface is used to obtain the direction normal to each
position on the sample surface, and changes in the optical
geometrical conditions are corrected.
9. A multi-angle imaging measurement method for acquiring spectral
information on all pixels in a two-dimensional image at each
measurement wavelength, wherein an image of a reference standard
white plate and a white reference surface is taken at the same time
by using a linear or spot lighting device, a calibration
coefficient for each pixel and each wavelength is measured, and
exposure time for each wavelength is determined during calibration
before measurement, then a sample surface and a white reference
surface provided around the entire sample surface are illuminated
with illumination light perpendicularly to the sample surface and
the white reference surface from two or more fixed angular
directions, illumination light reflected from the sample surface
and the white reference surface is dispersed by spectroscopic
means, the dispersed reflected light is imaged by an imaging lens,
the reflected light is received through the imaging lens, an image
of the received light is taken by a fixed two-dimensional image
sensor without changing the relative positions of the lighting
device, the two-dimensional image sensor, the sample surface and
the white reference surface, changes in the amount of illumination
light and changes in exposure time during measurement are corrected
with reference to light reflected at the white reference surface,
changes in optical geometrical conditions in an illumination
direction and an image taking direction of each pixel in the
two-dimensional image taken are used to acquire spectral
information on the sample surface; and image taking exposure time
during calibration and measurement are changed according to each
measurement wavelength to correct gain differences in a spectral
property of the lighting device, a spectral property in the
spectroscopic means, and a spectral property of the two-dimensional
image sensor within the range of the measurement wavelengths.
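The white-reference correction in the method claim above can be sketched as follows. All function and variable names are hypothetical; the sketch only shows the idea that, because the white reference surface surrounds the sample and appears in every frame, drift in illumination level or exposure time between calibration and measurement can be divided out per wavelength.

```python
import numpy as np

def normalize_frame(sample_img, white_ref_now, white_ref_cal, cal_coeff):
    """Reflectance-factor image for one measurement wavelength.

    sample_img    : 2-D array of raw sensor counts for the sample surface
    white_ref_now : mean counts on the white reference surface in this frame
    white_ref_cal : mean counts on the white reference surface at calibration
    cal_coeff     : per-pixel calibration coefficients for this wavelength
    """
    # The ratio of calibration-time to current white-reference counts
    # cancels drift in lamp output and exposure time.
    drift = white_ref_cal / white_ref_now
    return sample_img * cal_coeff * drift

# Example: lamp output dropped 10% between calibration and measurement,
# dimming both the sample counts and the white reference by the same factor.
cal = np.ones((4, 4))                    # unity calibration coefficients
frame = 0.9 * 100.0 * np.ones((4, 4))    # sample counts, dimmed by 10%
out = normalize_frame(frame, white_ref_now=0.9 * 200.0,
                      white_ref_cal=200.0, cal_coeff=cal)
```

The corrected image recovers the calibration-time scale even though the raw counts changed.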
10. A multi-angle imaging measurement method for acquiring spectral
information on all pixels in a two-dimensional image at each
measurement wavelength, wherein an image of a reference standard
white plate and a white reference surface is taken at the same time
by using a linear or spot lighting device, a calibration
coefficient for each pixel and each wavelength is measured, and
exposure time for each wavelength is determined during calibration
before measurement, a sample surface and a white reference surface
provided around the entire sample surface are illuminated with
illumination light perpendicularly to the sample surface and the
white reference surface from two or more fixed angular directions,
illumination light reflected from the sample surface and the white
reference surface is imaged by an imaging lens, the light imaged by
the imaging lens is dispersed by spectroscopic means, the reflected
light from the spectroscopic means is received, an image of the
received light is taken by a fixed two-dimensional image sensor
without changing the relative positions of the lighting device, the
two-dimensional image sensor, the sample surface and the white
reference surface, changes in the amount of illumination light and
changes in exposure time during measurement are corrected with
reference to light reflected at the white reference surface,
changes in optical geometrical conditions in an illumination
direction and an image taking direction of each pixel in the
two-dimensional image taken are used to acquire spectral
information on the sample surface; and image taking exposure time
during calibration and measurement are changed according to each
measurement wavelength to correct gain differences in a spectral
property of the lighting device, a spectral property in the
spectroscopic means, and a spectral property of the two-dimensional
image sensor within the range of the measurement wavelengths.
11. The multi-angle imaging measurement method according to claim
9, wherein three-dimensional geometry information on the sample
surface is measured by three-dimensional geometry measurement
means, the three-dimensional geometry information on the sample
surface is used to obtain the direction normal to each position on
the sample surface, and changes in the optical geometrical conditions
are corrected.
12. A method for calculating a spectral reflectance factor for each
measurement wavelength associated with multi-angle information,
image distributions in spatial frequency analysis and fractal
analysis, distributions in a color space that are obtained from a
color value of each pixel, the number of appearing colors, and
information entropy by using a spectral reflectance factor of each
pixel or a numerical color value in a color space calculated from
the spectral reflectance factor on the basis of spectral
information on each pixel obtained by the multi-angle spectral
imaging measurement apparatus according to claim 1.
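As one concrete example of deriving a numerical color value from measured spectral data, the standard CIE XYZ-to-L*a*b* conversion can be sketched as below. This is well-known colorimetry, not a formula specific to this patent, and the D65 white point is an assumption.

```python
def xyz_to_lab(xyz, white=(95.047, 100.0, 108.883)):
    """CIE 1976 L*a*b* from tristimulus values (D65, 2-degree white point)."""
    def f(t):
        # Cube root above the CIE threshold, linear segment below it.
        if t > (6 / 29) ** 3:
            return t ** (1 / 3)
        return t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = (f(v / w) for v, w in zip(xyz, white))
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fz - fy)

# The white point itself must map to L* = 100, a* = b* = 0.
L, a, b = xyz_to_lab((95.047, 100.0, 108.883))
```

Applied per pixel, this yields the CIELAB distributions used in the color-space analyses described in the claims.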
Description
TECHNICAL FIELD
[0001] The present invention relates to fields such as paint,
coating, textiles, printing, and plastics in which the colors of
object surfaces are measured, and to a multi-angle spectral imaging
measurement method and apparatus used in such measurement and
evaluation, and in manufacturing based on that measurement and
evaluation.
BACKGROUND ART
[0002] Design aesthetics are among the most important added values
of industrial products today. Accordingly, many products have
various kinds of textures and surface structures. For example, the
outer coating materials of automobiles contain various kinds of
effect materials, and their surfaces accordingly have extremely
minute glittering reflections and granular texture.
Furthermore, colors significantly change according to variations in
combinations of effect materials, pigments and other materials,
layer structures of coating, and geometrical conditions of lighting
and observation directions. In particular, the advent of
interference effect materials in recent years has added variety to
colors and texture. In addition to outer coating of automobiles,
the surface structures of other products such as interior
materials, furniture, buildings, electric appliances, electronic
devices, cosmetics, and packages have become complex regardless of
inclusion of effect materials. Like outer coating of automobiles,
the colors and texture of products change in various ways depending
on texture and optical geometrical conditions. Furthermore, wood
products and textile products as well as coated products exhibit
color changes and texture changes depending on optical geometrical
conditions. Not only industrial products, but also skin and
biological matters exhibit color changes and texture changes
depending on optical geometrical conditions.
[0003] Product development of designs and materials, manufacturing
and quality control, and marketing activities such as advertising
have required a high level of visual evaluation by experienced
people, and have therefore consumed considerable time, human
resources, and effort. There is demand for methods that accurately
measure and evaluate the colors of product surfaces, and for means
of accurately representing images such as computer graphics.
[0004] Thus, it is desired to establish physical means and an
evaluation method for accurately and stably measuring image color
information at multiple angles with simple operations. Note that as
color measurement, spectral measurement is essential especially for
colors of objects containing interference effect materials because
of its accuracy and capability of measuring a wide range of
colors.
[0005] Patent document 1 discloses a technique that uses light
dispersed by a prism or a diffraction grating to acquire spectral
information that differs among parts of an image.
[0006] Patent document 2 discloses a technique relating to a
multi-angle spectral imaging apparatus including a rotary
illumination light source and a multispectral camera.
[0007] Patent document 3 discloses a technique in which a white
reference surface is provided around an entire sample surface in
order to correct time fluctuations in an illumination light
source.
[0008] Patent document 4 discloses a technique that uses a lighting
device including a plurality of monochromatic LEDs that are
arranged in a row and capable of emitting light rays of different
colors to illuminate a sample surface and take an image when
multi-angle spectral measurement is performed.
[0009] Patent document 5 discloses a technique relating to a
spectral measurement system including a spectrometer that measures
a first spectrum of a subject under ultraviolet illumination light
and a second spectrum of the subject under visible illumination
light.
[0010] Patent document 6 discloses a technique relating to a color
measurement apparatus that measures spectral characteristics of a
sample and performs computations.
CITATION LIST
Patent Literature
[0011] Patent document 1: Japanese Patent No. 3933581 [0012] Patent
document 2: Japanese Patent Laid-Open No. 2005-181038 [0013] Patent
document 3: Japanese Examined Patent Publication No. 60-041731
[0014] Patent document 4: Japanese Patent Laid-Open No. 2004-226262
[0015] Patent document 5: Japanese Patent Laid-Open No. 2003-337067
[0016] Patent document 6: Japanese Patent Laid-Open No.
2007-333726
SUMMARY OF INVENTION
Technical Problems
[0017] The technique disclosed in Patent document 1 provides a
method for acquiring spectral information that differs among parts
of an image. However, the spectral information merely differs from
pixel to pixel, and spectral information at every wavelength cannot
be acquired for all of the pixels. Furthermore, no means is provided
for correcting changes in the amount of light and changes in
exposure time during measurement.
Accordingly, it is difficult for the technique to provide
high-precision measurement that is practical in manufacturing
coating materials or product manufacturing that involves coating.
Moreover, since the lighting device illuminates from only one
angular position that is greater than 40 degrees, the field angle
of a recording device needs to be wide, causing distortion near the
rim of an imaging lens which is generally used in the recording
device.
[0018] The technique disclosed in Patent document 2 relates to a
multi-angle spectral imaging apparatus including an illumination
light source and a multispectral camera. However, the illumination
light source disclosed is a light source that rotates illumination
from one position or one illuminant, or a configuration in which a
sample is rotated. This makes accurate measurement difficult and
requires a rotation mechanism, which adds complexity to the
mechanism of the imaging apparatus.
[0019] The technique disclosed in Patent document 3 applies light
from a light source to light reflected from an object to be
measured and to background light, receives both light rays at a
light receiver, and then compares them with each other, but it does
not acquire spectral information by taking an image of the applied
light.
[0020] The technique disclosed in Patent document 4 uses two fixed
lighting devices for measurement. A problem with the technique is
that since each of the lighting devices has monochrome LEDs that
are arranged in a row and emit light rays of different colors and
the LEDs are turned on and off independently of each other, the
different light-emitting LEDs emit light to an object to be
measured at different angles and information acquired with an image
pickup device differs accordingly.
[0021] The technique disclosed in Patent document 5 provides a
plurality of light emitting parts that have a plurality of
different spectral energy distributions. However, the technique has
a problem that since the light emitting parts are provided
separately, the different light emitting parts emit light to an
object to be measured at different angles and information acquired
with a spectrometer differs accordingly.
[0022] The technique disclosed in Patent document 6 only performs
image computations based on information acquired with a
spectrometer.
[0023] In order to acquire features of and evaluation information
about effect materials containing complex structural coloration,
image measurement that is capable of multi-angle measurement in
which optical geometrical conditions based on lighting and
observation directions can be changed in a wide color gamut is
required and effective. Furthermore, image analysis calculations
for quantifying features of measured spectral imaging information
are required.
[0024] However, such an apparatus has not been in practical use,
and existing methods provide an extremely small amount of
information or require enormous time for data measurement. In
addition, effective means for image analysis calculation combined
with such an apparatus is needed. Meanwhile, special interference
effect materials are often used in modern automotive coating.
Interference effect materials produce extremely fine,
high-brightness glittering reflected colors that have a wide color
representation range. In order to effectively use multi-angle
information and perform accurate image analysis calculations for
evaluating the features of such coating colors, highly precise
measurement needs to be performed with a high dynamic range and in
a wide color gamut over the whole image.
[0025] In light of the problems with the existing techniques
described in the Background Art section, an object of the present
invention is to provide a method and an apparatus that efficiently
measure information such as multi-angle spectral information on
each pixel of a surface of an object in a short time and, more
specifically, to implement a more accurate and practical
multi-angle spectral imaging measurement method and apparatus.
[0026] Another object is to implement a multi-angle spectral
imaging measurement method for correcting changes in optical
geometrical conditions for sample surfaces having a
three-dimensional geometry, and an apparatus therefor.
Solution to Problems
[0027] In order to acquire features of and evaluation information
about effect materials containing complex structural coloration,
image measurement that is capable of multi-angle measurement in
which optical geometrical conditions based on lighting and
observation directions can be changed in a wide color gamut is
required and effective.
[0028] In order to solve the problems, according to the present
invention, a two-dimensional image sensor capable of taking an
image in a direction perpendicular to a sample surface is provided,
lighting devices are placed at certain angles with respect to the
direction perpendicular to the sample surface, and changes in
optical geometrical conditions from pixel to pixel in the X axis
and Y axis directions in an image are used while applying
illumination light from two or more angular directions, thereby
measuring multi-angle spectral imaging information.
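The per-pixel geometry described in [0028] can be illustrated with a small sketch. The positions and distances below are assumed example numbers, not dimensions from the patent: with the camera viewing the sample perpendicularly and a lighting device at a fixed position, each pixel images a different point on the sample, so the local illumination and observation directions, and hence the aspecular angle, vary continuously across the image.

```python
import numpy as np

def aspecular_angles(xs, light_xz, cam_xz):
    """In-plane aspecular angle (degrees) for sample points xs on z = 0.

    xs       : 1-D array of x positions on the sample plane
    light_xz : (x, z) position of the lighting device
    cam_xz   : (x, z) position of the camera entrance pupil
    """
    lx, lz = light_xz
    cx, cz = cam_xz
    theta_i = np.degrees(np.arctan2(lx - xs, lz))   # signed angle toward light
    theta_v = np.degrees(np.arctan2(cx - xs, cz))   # signed angle toward camera
    # The specular direction mirrors the light about the surface normal,
    # so the signed aspecular angle is the sum of the two angles.
    return theta_i + theta_v

xs = np.linspace(-0.02, 0.02, 5)                   # +/- 20 mm field of view
asp = aspecular_angles(xs, light_xz=(-0.2, 0.2), cam_xz=(0.0, 0.3))
```

A single exposure therefore samples a continuous range of aspecular angles, which is what allows multi-angle information to be read out of one two-dimensional image.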
[0029] The spectral measurement uses a white light
illumination/spectral light receiving method with a band-pass
filter having a passband for each constant wavelength, a
liquid-crystal tunable filter that allows passband tuning, an
acousto-optic element, or the like, at a position between the
two-dimensional image sensor and the sample surface. Alternatively,
a spectral illumination method with a spectral light source device
capable of emitting monochromatic light with each measurement
wavelength as illumination light is used. Alternatively, a
combination of a spectral light receiving method and a spectral
illumination method is used so that fluorescent color can be
measured.
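The wavelength-sweep acquisition of [0029] can be sketched as a simple loop. The `capture` function below is a stand-in for real tunable-filter camera hardware (its signature and the per-wavelength exposure table are assumptions, not an API from the patent); one monochrome frame is taken per pass wavelength and stacked into a spectral cube.

```python
import numpy as np

def capture(wavelength_nm, exposure_ms):
    """Stand-in for a tunable-filter camera: returns one monochrome frame.

    A real driver would set the filter passband to wavelength_nm, expose
    the two-dimensional sensor for exposure_ms, and read out the frame.
    """
    return np.full((8, 8), exposure_ms * 1.0)

wavelengths = np.arange(400, 701, 10)            # 400-700 nm in 10 nm steps
exposure = {wl: 20.0 for wl in wavelengths}      # per-wavelength exposure time
cube = np.stack([capture(wl, exposure[wl]) for wl in wavelengths])
```

The result is a cube with axes (wavelength, y, x), from which the spectrum of any pixel can be read as `cube[:, y, x]`.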
[0030] Spectral data measured on each pixel is converted to color
numeric values such as tristimulus values of CIE, CIELAB values,
RGB values, or Munsell color system values, then the color numeric
values are provided to numeric value calculation means that uses
numeric values for a three-dimensional structure of distributions
in a CIELAB space based on optical geometrical conditions, spatial
frequency analysis, information entropy, or the like to compare and
evaluate features of a surface of an object.
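The information-entropy statistic mentioned in [0030] can be computed, in one common formulation (Shannon entropy of the histogram of quantized pixel values; the bin count and value range below are assumptions), as:

```python
import numpy as np

def image_entropy(img, bins=256):
    """Shannon entropy (bits) of the histogram of pixel values in [0, 1]."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]                       # drop empty bins: 0 * log 0 := 0
    return float(-np.sum(p * np.log2(p)))

flat = np.full((16, 16), 0.5)          # uniform surface: a single gray level
rng = np.random.default_rng(0)
noisy = rng.random((16, 16))           # random texture: many gray levels
```

A perfectly uniform surface gives zero entropy, while a highly textured (e.g. glittering) surface gives a large value, which is why the statistic is useful for comparing effect-material textures.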
[0031] Three-dimensional geometry measurement means is incorporated
in the multi-angle spectral imaging apparatus, whereby
three-dimensional geometry information on the sample surface is
used to obtain the direction normal to the sample surface, and
differences in optical geometrical conditions and differences in
irradiation energy of light per unit area due to displacement and
difference in inclination are corrected.
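The inclination correction of [0031] can be sketched under a simple cosine-falloff assumption (an illustrative model, not a formula stated in the patent): irradiance per unit area drops with the cosine of the angle between the local surface normal, recovered from three-dimensional geometry measurement, and the illumination direction, so measured counts are divided by that cosine.

```python
import numpy as np

def irradiance_correction(counts, normal, light_dir):
    """Divide out the cosine falloff for a tilted surface element.

    normal    : local surface normal from 3-D geometry measurement
    light_dir : unit-scale direction from the surface toward the light
    """
    n = np.asarray(normal, dtype=float)
    l = np.asarray(light_dir, dtype=float)
    cos_t = np.dot(n, l) / (np.linalg.norm(n) * np.linalg.norm(l))
    return counts / cos_t

# A patch tilted 60 degrees receives half the irradiance of a flat one,
# so its raw counts must be doubled to compare with flat regions.
tilt = np.radians(60)
corrected = irradiance_correction(50.0, normal=(0.0, 0.0, 1.0),
                                  light_dir=(np.sin(tilt), 0.0, np.cos(tilt)))
```

The same per-pixel normal is also what allows the optical geometrical conditions (illumination and observation angles) to be re-evaluated for curved samples.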
Advantageous Effects of Invention
[0032] According to the present invention, a method and an
apparatus that efficiently measure information such as multi-angle
spectral information on each pixel of a surface of an object in a
short time can be provided and, more specifically, a more accurate
and practical multi-angle spectral imaging measurement method and
apparatus can be implemented. Furthermore, since the field angle
range of the imaging lens can be reduced by using irradiation of
illumination light from two or more angular directions, distortion
near the rim of the imaging lens can be reduced to enable accurate
and extensive spectral measurement of color information. Moreover,
a high spatial resolution can be achieved in a short time by
measuring various and continuous optical geometrical conditions at
a time. Furthermore, since spectral information is obtained as an
image, minute glittering reflection can be captured. In addition,
flexible use of distribution information in a color space and data
processing in combination with multi-angle information is possible,
such as averaging an image partially or for each optical
geometrical condition. Moreover, incorporation of the
three-dimensional geometry measurement means enables measurement of
three-dimensional geometry information on a sample surface, thereby
allowing correction of differences in irradiation energy of light
per unit area due to changes in optical geometrical conditions,
displacement, and differences in inclination. By measuring skin or
biological matter, the present invention can contribute to
research and development in the medical and cosmetics fields.
BRIEF DESCRIPTION OF DRAWINGS
[0033] FIG. 1 is a diagram schematically illustrating a
configuration of the present invention.
[0034] FIG. 2 is a diagram illustrating optical geometrical
conditions in the present invention.
[0035] FIG. 3 is a diagram illustrating an embodiment that uses a
linear lighting device.
[0036] FIG. 4 is a diagram illustrating an embodiment that uses a
spot lighting device.
[0037] FIG. 5 is a flowchart illustrating a calibration procedure
of the present invention.
[0038] FIG. 6 is a flowchart illustrating a measurement procedure
of the present invention.
[0039] FIG. 7 is a graph illustrating distributions of L* when a
lighting device is set at an angle of 20 degrees.
[0040] FIG. 8 is a graph illustrating distributions of L* when the
lighting device is set at an angle of 45 degrees.
[0041] FIG. 9 is a diagram representing changes in spectral
reflectance factor as wavelength and aspecular angle are changed in
the present invention.
[0042] FIG. 10 is a diagram representing distributions of pixels of
sample A in a CIELAB color space.
[0043] FIG. 11 is a diagram representing distributions of pixels of
sample A in the CIELAB color space and representing the L*-a*
relationship and the a*-b* relationship when the lighting device is
set at an angle of 20 degrees and when it is set at 45 degrees, and
the a*-b* relationship when the value of L* is set at 50.
[0044] FIG. 12 is a diagram representing distributions of pixels of
an image of sample A divided into eight regions with respect to
the aspecular angle and representing the L*-a* relationship in each
region when the lighting device is set at 20 degrees and when the
lighting device is set at 45 degrees, and the a*-b* relationship when the value
of L* is set at 50.
[0045] FIG. 13 is a diagram representing distributions of pixels of
sample B in the CIELAB color space.
[0046] FIG. 14 is a diagram representing distributions of pixels of
sample B in the CIELAB color space and representing L*-a*
relationship when the lighting device is set at 20 degrees and when
the lighting device is set at 45 degrees, and the a*-b* relationship
when the value of L* is set at 50.
[0047] FIG. 15 is a diagram representing distributions of pixels of
an image of sample B divided into eight regions with respect to
the aspecular angle and representing the L*-a* relationship in each
region when the lighting device is set at 20 degrees and when the
lighting device is set at 45 degrees, and the a*-b* relationship when the value
of L* is set at 50.
[0048] FIG. 16 is a diagram illustrating another example of the
present invention that incorporates three-dimensional geometry
measurement means.
[0049] FIG. 17 is a diagram illustrating means for acquiring
spectral information when a point light source is used.
[0050] FIG. 18 is a diagram illustrating means for acquiring
spectral information when a linear light source is used.
[0051] FIG. 19 is a diagram illustrating measured value changes
with changes in geometrical conditions due to the orientation of
effect materials.
[0052] FIG. 20 is a diagram illustrating example measurements of
interference effect materials.
[0053] FIG. 21 is a diagram illustrating distributions on L*-a* and
a*-b* surfaces at L*=15.
[0054] FIG. 22 is a diagram illustrating a monochrome
two-dimensional image sensor.
[0055] FIG. 23 is a diagram illustrating a Bayer-arrangement
two-dimensional image sensor.
[0056] FIG. 24 is a diagram illustrating a Foveon two-dimensional
image sensor.
[0057] FIG. 25 is a diagram illustrating a three-CCD
two-dimensional image sensor.
[0058] FIG. 26 is a diagram illustrating an example in which
spectroscopic means is used at the illumination side.
[0059] FIG. 27 is a diagram illustrating an example in which
spectroscopic means is used at the illumination side and the light
receiving side.
[0060] FIG. 28 is a diagram illustrating an arrangement of a
plurality of lighting devices.
[0061] FIG. 29 is a diagram illustrating a state in which light
sources placed symmetrically with respect to a vertical axis from
the center of a sample surface are lit at the same time.
[0062] FIG. 30 is a diagram illustrating an exemplary configuration
of a measurement apparatus.
[0063] FIG. 31 is a diagram illustrating an example in which a
large sample such as a fender or a door is measured.
[0064] FIG. 32 is a diagram illustrating an example in which a
liquid sample or the like is measured.
[0065] FIG. 33 is a diagram illustrating the frequency of
appearance at different levels for explaining calibration
coefficients.
[0066] FIG. 34 is a diagram illustrating a position at which
measured values can be considered identical when a linear light
source is used.
[0067] FIG. 35 is a diagram illustrating a position at which
measured values can be considered identical when a point light
source is used.
[0068] FIG. 36 is a diagram illustrating a three-dimensional curved
surface measurement.
[0069] FIG. 37 is a diagram illustrating correction of a
calibration coefficient.
DETAILED DESCRIPTION OF EMBODIMENTS
[0070] Embodiments of the present invention will be described below
with reference to drawings.
(Measurement Mechanism)
[0071] FIG. 1 is a diagram schematically illustrating a
configuration of the present invention. An embodiment of a
multi-angle spectral imaging apparatus of the present invention
includes a monochrome two-dimensional image sensor 4 that is
disposed in the direction perpendicular to a flat sample surface 2
to be measured and acquires a gray-scale image, an optical lens 8
that forms an image on the monochrome two-dimensional image sensor,
spectroscopic means 6, such as a liquid-crystal tunable filter,
that is disposed in a predetermined position between the sample
surface 2 to be measured and the optical lens 8 and whose passband
is tunable, and a lighting device 10 having energy required for
measurement in a measurement waveband. One embodiment of the
present invention is configured with a white reference surface 12
disposed around the sample surface 2 to be measured in parallel
with the sample surface to be measured. An image of the white
reference surface is taken together with the sample surface 2 to be
measured. This image is used for correction made for accommodating
changes in the amount of light of illumination and changes and
errors in exposure time of an image pickup device during
measurement.
(Optical Geometrical Conditions)
[0072] Optical geometrical conditions of images, illumination and
light reception/observation will be described below. FIG. 2 is a
diagram illustrating optical geometrical conditions in the present
invention. Light emitted from the lighting device 10 illuminates
the sample surface 2 to be measured and the white reference surface
12. The monochrome two-dimensional image sensor 4 and the optical
lens 8 are disposed so that the sample surface 2 to be measured and
the white reference surface 12 are included together in an image
taken. Note that the spectroscopic means 6 is omitted from FIG.
2.
[0073] Light emitted from the lighting device 10 illuminates the
sample surface 2 to be measured and the white reference surface 12.
The illumination direction is dependent on the geometry being
illuminated and varies from position to position on the sample
surface 2 to be measured and the white reference surface 12 with
respect to the direction perpendicular to the sample surface 2 to
be measured and the white reference surface 12. The specular
reflection direction is the mirror image of the illumination
direction about the surface normal. The direction in
which the pixels of the monochrome two-dimensional image sensor 4
taking an image of the sample surface 2 to be measured and the
white reference surface 12 receive light varies from pixel to pixel
and the unit vector 18 is denoted by P.sub.ij. Here, i,j represent
position of a pixel in the X axis and the Y axis of the image,
respectively. In other words, (i, j) represents the coordinates of
each pixel on a two-dimensional image.
[0074] The unit vector 20 on the image in the illumination
direction is denoted by I.sub.ij and the corresponding unit vector
16 in the specular reflection direction is denoted by S.sub.ij. The
angle of the unit vector 18 in the light receiving direction with
respect to the specular reflection direction of illumination is
generally called aspecular angle 14 (hereinafter referred to as the
"aspecular angle"). Metallic colors and pearlescent colors used for
outer coating of automobiles change in color (lightness, chroma and
hue) depending on this angle. The aspecular angle at each pixel on
an image is denoted by .delta..sub.ij and can be written as
Expression 1. Note that, for clarity, positions are described as positions on
the sample surface 2 to be measured or on the white reference
surface 12 that correspond to pixels (i, j) 19 of the monochrome
two-dimensional image sensor 4 in FIG. 2.
.delta..sub.ij=cos.sup.-1(P.sub.ijS.sub.ij) [Expression 1]
[0075] Here, P.sub.ijS.sub.ij represents the inner product of
P.sub.ij and S.sub.ij.
[0076] As can be seen from FIG. 2, the aspecular angle
.delta..sub.ij in the pixels of the monochrome two-dimensional
image sensor 4 differs for different i and j and therefore has
different angle information.
[0077] Based on FIG. 2, a method for calculating the aspecular
angle will be described in detail. Geometrical conditions for point
P are: the zenith angle of illumination denoted by .theta..sub.il,
the azimuth angle of illumination measured counterclockwise with
respect to the X axis, denoted by .phi..sub.il, the zenith angle
of light reception, denoted by .theta..sub.rsv, and the azimuth
angle of light reception measured counterclockwise with respect to
the X axis, denoted by .phi..sub.rsv. Assuming that the geometrical
conditions (.theta..sub.il, .phi..sub.il, .theta..sub.rsv,
.phi..sub.rsv) have been obtained by calculations, which will be
described later, and letting (n.sub.xa, n.sub.ya, n.sub.za) denote
a unit vector in the specular reflection direction corresponding to
the illumination direction and (n.sub.xr, n.sub.yr, n.sub.zr)
denote a unit vector in the light reception direction, then the
angles of specular reflection can be written as
.theta..sub.as=.theta..sub.il
.phi..sub.as=.phi..sub.il+.pi. [Expression 2]
Therefore,
n.sub.xa=sin(.theta..sub.as)cos(.phi..sub.as)
n.sub.ya=sin(.theta..sub.as)sin(.phi..sub.as)
n.sub.za=cos(.theta..sub.as) [Expression 3]
On the other hand, in the case of light reception angle
n.sub.xr=sin(.theta..sub.rsv)cos(.phi..sub.rsv)
n.sub.yr=sin(.theta..sub.rsv)sin(.phi..sub.rsv)
n.sub.zr=cos(.theta..sub.rsv) [Expression 4]
The angle is inversely calculated from the inner product as
.delta..sub.ij=cos.sup.-1(n.sub.xan.sub.xr+n.sub.yan.sub.yr+n.sub.zan.sub.zr) [Expression 5]
[0078] Here, .delta..sub.ij is a typical angle condition for
identifying optical features of effect materials. It is useful to
calculate color space distributions and statistical parameters from
image regions where .delta..sub.ij is the same.
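The chain from Expression 2 through Expression 5 can be sketched in a few lines. This is an illustrative helper with a hypothetical name, not the apparatus's implementation; all angles are in radians, with zenith and azimuth defined as above.

```python
import numpy as np

def aspecular_angle(theta_il, phi_il, theta_rsv, phi_rsv):
    """Aspecular angle at one pixel from the illumination and
    light-reception zenith/azimuth angles (Expressions 2 to 5)."""
    # Specular reflection angles (Expression 2)
    theta_as = theta_il
    phi_as = phi_il + np.pi
    # Unit vector in the specular reflection direction (Expression 3)
    n_a = np.array([np.sin(theta_as) * np.cos(phi_as),
                    np.sin(theta_as) * np.sin(phi_as),
                    np.cos(theta_as)])
    # Unit vector in the light reception direction (Expression 4)
    n_r = np.array([np.sin(theta_rsv) * np.cos(phi_rsv),
                    np.sin(theta_rsv) * np.sin(phi_rsv),
                    np.cos(theta_rsv)])
    # Inverse cosine of the inner product (Expression 5);
    # clip guards against rounding just outside [-1, 1]
    return np.arccos(np.clip(np.dot(n_a, n_r), -1.0, 1.0))
```

For example, illumination at a 45 degree zenith angle observed from directly above gives an aspecular angle of 45 degrees, and observation in the specular direction itself gives 0.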
[0079] The lighting device 10 used is linear (a linear light
source) as illustrated in FIG. 3 or a spot (a point light source)
as illustrated in FIG. 4. The optical lens 8 and the spectroscopic
means 6 may be replaced with each other on condition that they can
fulfill their respective functions. In the case of linear
illumination, different angle information is available in the X
axis or Y axis direction of the image, depending on the direction
in which the lighting device 10 is placed. In the case of spot
illumination, the illumination angle and light reception angle vary
in both the X and Y axis directions of the image, and therefore
different angle information is available in both directions.
[0080] Here, since the sample surface 2 to be measured has a
two-dimensional extent, the aspecular angle varies from position to
position on the sample surface 2 to be measured and the white
reference surface 12. FIG. 7 illustrates a distribution of L*
obtained from an image measured when the lighting device is set at
20 degrees and FIG. 8 illustrates a distribution of L* obtained
from an image measured when the lighting device is set at 45
degrees.
[0081] Means for acquiring spectral information by using light
emitted from a point or linear illumination light source device and
reflected by a sample surface and changes in optical geometrical
conditions depending on positions on a sample surface will be
described in further detail.
[0082] FIG. 17 illustrates the case of the point light source. A
position on the sample surface is denoted by P.sub.XY(x.sub.p,
y.sub.p), the position of an illuminant is denoted by
I(.theta..sub.i, .phi..sub.i, D.sub.i), and a light receiving
position (focal point position) is denoted by R(0, 0, D.sub.r).
Here, x and y are positions in the X axis direction and Y axis
direction, respectively, on the sample surface, .theta. is a zenith
angle, .phi. is an azimuth angle measured counterclockwise from the
X axis, D.sub.i is the distance from the coordinate origin of the
sample surface to the light source, and D.sub.r is the distance
from the coordinate origin of the sample surface to an image pickup
device. It is assumed that the image pickup device has a field
angle large enough to cover an image taking range of the sample
surface. Geometrical conditions of illumination and light receiving
at position P can be represented by the zenith angle of the
illumination, the azimuth angle of the illumination, the zenith
angle of the light reception, and the azimuth angle of the light
reception. Then the position of the illumination can be written
as:
x.sub.i=D.sub.isin(.theta..sub.i)cos(.phi..sub.i)
y.sub.i=D.sub.isin(.theta..sub.i)sin(.phi..sub.i)
z.sub.i=D.sub.icos(.theta..sub.i) [Expression 6]
[0083] A relative coordinate position of the illumination on the xy
plane for point P can be written as:
x.sub.i'=x.sub.i-x.sub.p
y.sub.i'=y.sub.i-y.sub.p [Expression 7]
and the relative coordinate position of light reception in the xy
plane for point P can be written as:
x.sub.r'=-x.sub.p
y.sub.r'=-y.sub.p [Expression 8]
Then geometric conditions (.theta..sub.il, .phi..sub.il,
.theta..sub.rsv, .phi..sub.rsv) are
.theta..sub.il=.pi./2-tan.sup.-1(z.sub.i/sqrt((x.sub.i').sup.2+(y.sub.i').sup.2))
(sqrt represents square root; the same applies to the following
formulas.)
.phi..sub.il=tan.sup.-1(y.sub.i'/x.sub.i')
.theta..sub.rsv=.pi./2-tan.sup.-1(z.sub.r'/sqrt((x.sub.r').sup.2+(y.sub.r').sup.2))
.phi..sub.rsv=tan.sup.-1(y.sub.r'/x.sub.r') [Expression 9]
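Expressions 6 to 9 can be sketched for a single surface position as follows. This hypothetical helper assumes the camera height above the sample plane is D.sub.r, and uses the quadrant-aware arctan2 in place of a plain inverse tangent so that azimuth angles land in the correct quadrant.

```python
import numpy as np

def point_source_geometry(x_p, y_p, theta_i, phi_i, D_i, D_r):
    """Illumination/reception zenith and azimuth angles at surface
    position P (Expressions 6 to 9). The sample lies in the z = 0
    plane; the camera is at (0, 0, D_r) looking straight down."""
    # Illuminant position (Expression 6)
    x_i = D_i * np.sin(theta_i) * np.cos(phi_i)
    y_i = D_i * np.sin(theta_i) * np.sin(phi_i)
    z_i = D_i * np.cos(theta_i)
    # Coordinates relative to point P (Expressions 7 and 8)
    xi, yi = x_i - x_p, y_i - y_p
    xr, yr = -x_p, -y_p
    # Zenith and azimuth angles (Expression 9)
    theta_il = np.pi / 2 - np.arctan2(z_i, np.hypot(xi, yi))
    phi_il = np.arctan2(yi, xi)
    theta_rsv = np.pi / 2 - np.arctan2(D_r, np.hypot(xr, yr))
    phi_rsv = np.arctan2(yr, xr)
    return theta_il, phi_il, theta_rsv, phi_rsv
```

At the coordinate origin the reception zenith angle is zero (the camera looks straight down), and the illumination zenith angle equals the lamp's set angle, as expected.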
[0084] FIG. 18 illustrates the case of a linear light source. If
the light source is disposed parallel to the Y axis direction,
illumination is uniform along the Y axis and optical geometrical
conditions in this case are: a position on the sample surface
denoted by P(x.sub.p), the illumination position denoted by
I(.theta..sub.i, D.sub.i), and the light receiving position (focal
point position) denoted by R(0, D.sub.r). Here, x is a position in
the X axis direction on the sample surface, .theta. is a zenith
angle, D.sub.i is the distance from
the coordinate origin to the illumination and D.sub.r is the
distance from the coordinate origin to an image pickup device. It
is assumed that the image pickup device has a field angle large
enough to cover an image taking range of the sample surface.
Geometrical conditions of illumination and light receiving at
position P can be represented by the zenith angle of the
illumination, the azimuth angle of the illumination, the zenith
angle of the light reception, and the azimuth angle of the light
reception. Then the position of the illumination can be written
as:
x.sub.i=D.sub.isin(.theta..sub.i)
z.sub.i=D.sub.icos(.theta..sub.i) [Expression 10]
[0085] A relative coordinate position of the illumination on the xy
plane for point P can be written as:
x.sub.i'=x.sub.i-x.sub.p
z.sub.i'=z.sub.i [Expression 11]
and the relative coordinate position of light reception in the xy
plane for point P can be written as:
x.sub.r'=-x.sub.p
z.sub.r'=-D.sub.r [Expression 12]
[0086] Then geometric conditions (.theta..sub.il, .theta..sub.rsv)
are
.theta..sub.il=.pi./2-tan.sup.-1(z.sub.i'/|x.sub.i'|)
.theta..sub.rsv=.pi./2-tan.sup.-1(z.sub.r'/|x.sub.r'|) [Expression
13]
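The linear-source case of Expressions 10 to 13 reduces to the X coordinate only. In this hypothetical sketch the reception zenith is computed from the magnitude D.sub.r (Expression 12 writes z.sub.r' = -D.sub.r; using the magnitude keeps the zenith angle in the range 0 to .pi./2, which is assumed to be the intent).

```python
import numpy as np

def linear_source_geometry(x_p, theta_i, D_i, D_r):
    """Illumination/reception zenith angles at surface position
    P(x_p) for a linear light source parallel to the Y axis
    (Expressions 10 to 13); only the X coordinate matters."""
    x_i = D_i * np.sin(theta_i)                    # Expression 10
    z_i = D_i * np.cos(theta_i)
    xi = x_i - x_p                                 # Expression 11
    xr = -x_p                                      # Expression 12
    theta_il = np.pi / 2 - np.arctan2(z_i, abs(xi))   # Expression 13
    theta_rsv = np.pi / 2 - np.arctan2(D_r, abs(xr))
    return theta_il, theta_rsv
```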
[0087] FIG. 19 illustrates changes of measured values with changes
in geometrical conditions due to the orientation of effect
materials. The effect materials have glittering reflection
properties including specular reflection on its surface and
exhibits strong directionality of reflection in the specular
reflection direction. Because of the extremely strong reflection in
the specular reflection direction, the amount of reflected light
rapidly decreases as the distance of the observation position from
the specular reflection direction increases. While glitter additive
sizes are typically in the range of several micrometers to several
tens of micrometers, some effect materials exceed 100 .mu.m in
size. This occurs because there are fluctuations in the
distribution of light in the coating film formed by spray
coating.
[0088] Slight changes in optical geometrical conditions at each
wavelength can lead to significant changes in the amount of
reflected light near the specular reflection, which can prevent
spectroscopic means from accurately measuring glittering reflection
and can lead to a significant difference in distributions in the
color space. For instance, effect materials made of aluminum flakes
are silver in color and actually have a substantially constant
reflectance factor at each wavelength, but if optical geometrical
conditions vary from wavelength to wavelength, the spectral
reflectance factor varies from wavelength to wavelength, which will
produce false colors. In the case of interference effect materials,
changes in optical geometrical conditions will change interference
conditions and will significantly affect the spectral reflectance
factor and color values.
[0089] Spectroscopic means used for measurement needs to have a
wavelength resolution appropriate to an object to be measured.
Interference effect materials, which are often used today, have a
sharp spectral reflectance and a reflection property close to a
saturated, single-wavelength color. Accordingly, measured
glittering reflections are distributed in a wide range in the color
space. Therefore accurate measurement cannot be performed by a
measurement method such as an RGB method, which has a limited color
gamut. FIG. 20 illustrates an example of measurement of
interference effect materials. In this example of measurement,
results of measurements on a product named Xirallic Crystal Silver
from Merck KGaA are shown. When glittering reflection in a small
region enclosed in a rectangle is displayed, a spectral reflectance
factor measured has a sharp and narrow shape as a function of
wavelength. A region of the lightest color that indicates a
marginal domain of the object color and distributions were
compared. Values in the CIELAB color space using artificial
illuminant D65 at an observation view field of 10.degree. were
calculated from the measured spectral reflectance factors. FIG. 21
illustrates a distribution in L*-a* and a*-b* surfaces at L*=15
together with the lightest color region. It is shown that the
results of the measurement are distributed to near the lightest
color. Therefore measurements need to be made with a wavelength
resolution that is sufficient for measuring a wide region.
(Two-Dimensional Image Sensor for Image Taking)
[0090] A two-dimensional image sensor for image taking will be
described below.
[0091] Special interference effect materials are often used in
automobile coating today. The interference effect materials contain
glittering reflection colors that are very fine and bright and have
a wide color representation range. Multi-angle information is
effectively used for such paint colors. In order to perform
accurate image analysis calculations for evaluating features,
measurement with a high resolution, high precision, high dynamic
range and a wide color gamut over an entire image is needed.
Therefore, a monochrome two-dimensional image sensor 4 is used in
embodiments of the present invention to prevent occurrence of a
moire image and the spectroscopic means is provided in a stage
before the image pickup to implement a mechanism that acquires
spectral information in a wide color gamut for every pixel.
[0092] The two-dimensional image sensor 4 preferably includes an
anti-blooming mechanism for identifying a good glittering reflected
color with a high precision. If the two-dimensional image sensor 4
is a CCD image pickup sensor, it is desirable that the image sensor
include means for preventing occurrence of noise in long time
exposure by using a Peltier cooling device, a water cooling
mechanism, an air cooling mechanism or a combination of these. The
two-dimensional image sensor 4 has an output resolution of 8 bits
or greater, preferably 16 bits or greater.
[0093] The two-dimensional image sensor 4 preferably does not
include a parallel arrangement of color RGB elements, in order to
prevent false colors or moire when measuring a minute region.
Monochrome sensors do not cause such problems. There
are devices that are capable of RGB measurement but do not rely on
a parallel arrangement. Such devices do not produce false colors
and are effective in measuring the small reflections from glitter
additives that the present invention particularly targets. RGB
image pickup devices that do not rely on a parallel arrangement
include three-CCD sensors and Foveon sensors.
[0094] FIGS. 22 to 25 illustrate several types of two-dimensional
image sensors. FIG. 22 illustrates a monochrome image sensor, FIG.
23 illustrates a Bayer-arrangement color image sensor, FIG. 24
illustrates a Foveon color image sensor, and FIG. 25 illustrates a
three-CCD two-dimensional image sensor. In the case of the Foveon
image sensor in FIG. 24, one pixel has R, G, B sensitivities. The
monochrome two-dimensional image sensor in FIG. 22 is most
efficient, and allows measurement with a high sensitivity. The
Foveon color image sensor in FIG. 24 and the three-CCD
two-dimensional image sensor in FIG. 25 do not use a parallel pixel
arrangement and are capable of fast measurement. On the other hand,
the Bayer arrangement color image sensor in FIG. 23 uses a parallel
pixel arrangement, which can have adverse effects such as changes
in geometrical conditions and occurrence of false colors.
(Optical Lens)
[0095] The optical lens 8 for image formation disposed before the
monochrome two-dimensional image sensor 4 is preferably an optical
lens with a high resolution, low distortion near the rim and low
attenuation and preferably has a field angle of less than
40.degree. in order to minimize image distortion near the rim. For
illumination, a mechanism capable of switching between at least two
angles is provided to enable measurement at multiple aspecular
angles (see FIGS. 1, 3 and 4).
(Spectral Imaging)
[0096] Spectral imaging for acquiring spectral information for all
image pickup pixels will be described below.
[0097] The spectroscopic means 6 provided immediately before the
optical lens 8 may include a plurality of band-pass filters having
different pass bands according to measurement wavelengths as
illustrated in FIG. 1 and switching may be made from one filter to
another to take images, thereby acquiring spectral information. In
this case, if measurement is made at every 10 nm in the range
from 400 nm to 700 nm, which is the visible light range, 31
band-pass filters are provided and switching is made from one
filter to another while repeating image taking at the same position
to acquire 31 pieces of spectral information for all pixels.
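The filter-switching acquisition loop described in this paragraph might look like the following sketch. `set_bandpass_filter` and `capture_image` are hypothetical stand-ins for the actual filter-switching and camera drivers, which the specification does not name.

```python
import numpy as np

def acquire_spectral_cube(set_bandpass_filter, capture_image,
                          wl_start=400, wl_end=700, wl_step=10):
    """Take one monochrome frame per band-pass filter at the same
    position, yielding spectral information for every pixel."""
    wavelengths = range(wl_start, wl_end + 1, wl_step)   # 31 bands
    frames = []
    for wl in wavelengths:
        set_bandpass_filter(wl)        # switch to the filter for this band
        frames.append(capture_image()) # monochrome frame, sample unmoved
    # shape: (31, height, width) -- 31 spectral values per pixel
    return np.stack(frames)
```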
[0098] Here, the spectroscopic means 6 only needs to disperse light
incident on the two-dimensional image sensor 4 and may be disposed
before the optical lens 8, in a midway position between the
optical lens 8 and the two-dimensional image sensor 4, or
immediately before the two-dimensional image sensor 4.
[0099] Alternatively, spectroscopic means may be provided at the
illumination side, as illustrated in FIG. 26, to apply
monochromatic light to a sample surface. Means for producing
monochromatic light may be a spectral light source device 40 that
combines a white light illuminant, having radiation energy
sufficient for the desired wavelength range in which spectral
information is to be acquired, with a reflective diffraction
element, a transmission diffraction element, a prism or the like; a
spectral light source device in which a plurality of interference
filters that filter light incident on a spectral device are
switched with one another; a liquid-crystal tunable filter or a
band-pass filter combination such as an AOTF; or switching between
single-wavelength illumination light sources. In addition, a
mechanism is provided that uses one of or a combination of a lens,
a mirror, an optical fiber and the like so that the illumination
position and the light distribution pattern do not change while the
wavelength is spectroscopically changed. For receiving light, a
two-dimensional monochrome image pickup device or a two-dimensional
color image pickup device is used.
[0100] Furthermore, spectroscopic means may be used at both of the
illumination side and the light reception side as illustrated in
FIG. 27. This can be used as a bispectral multi-angle spectral
imaging apparatus to apply light to a fluorescent object. In this
case, the spectroscopic means 6 only needs to disperse light
incident on the two-dimensional image sensor 4 and may be provided
before the optical lens 8, in a midway position between the optical
lens 8 and the two-dimensional image sensor 4, or immediately
before the two-dimensional image sensor 4.
(Lighting Device)
[0101] A light source lamp such as a tungsten lamp, a halogen lamp,
a xenon lamp, or a white LED with a mechanism such as optical fiber
and a light guide or a light source lamp with a mirror, a
projection lens and the like that is capable of illuminating the
sample surface 2 to be measured and the white reference surface 12
at the same time may be used as the lighting device 10. If the
light distribution pattern or the spectral distribution provided by
the lighting device 10 varies with changes in temperature, it is
desirable to provide an appropriate cooling mechanism using Peltier
cooling, water cooling, air cooling or a combination of these.
[0102] A specific configuration of the lighting device will now be
described. FIG. 28 is a diagram illustrating an arrangement of a
plurality of illuminants. As illustrated in FIG. 28, the plurality
of lighting devices 10 are arranged along the zenith angle
direction and the azimuth angle direction or the Y axis direction,
so that a point-like or linear illumination pattern can be provided
from a plurality of angles without provision of a moving
member.
[0103] In addition, a combination of a plurality of illuminants can
be lit at the same time to provide a wide variety of optical
geometrical conditions and change spatial distributions of light
sources. Furthermore, shadowless diffused illumination conditions
can be produced by lighting light sources at positions that are
symmetric with respect to the axis (the Z axis) perpendicular to
the center of the sample surface as illustrated in FIG. 29.
[0104] By using various patterns of lighting for measurement of the
same sample in this way, simulation calculations can be performed
to acquire various kinds of information, so that measurement time
and the number of measurements can be reduced.
[0105] Furthermore, by lighting a plurality of light sources at the
same time to provide a sufficient amount of light, precise,
short-time and fast measurement can be performed on a
low-reflectance object to be measured.
(Configuration of Apparatus)
[0106] In an example configuration of the measurement apparatus,
the apparatus is contained in a rectangular housing 50 having a
measurement window 52 in one face for placing a sample to be
measured as illustrated in FIG. 30 so that the sample 2 can be
brought into contact with and fixed on the housing 50 from the
outside. The orientation of the housing 50 can be changed to
position the measurement window at the top, allowing measurement of
a large sample, or at the bottom, allowing contactless measurement
of a liquid sample, a viscous sample, cosmetics and the like.
[0107] When the measurement window 52 is positioned at the top as
illustrated in FIG. 31, measurement of a large sample 54 such as a
fender or a door can be easily made. Furthermore, measurement of
powder or liquid in a glass cell can be made.
[0108] On the other hand, when the measurement window is positioned
at the bottom as illustrated in FIG. 32, contactless measurement of
a sample 56 such as a liquid sample, a viscous sample, powder,
cosmetics or the like can be made. In addition, contactless
measurement of human skin and living tissue can be effectively
made. This enables measurement of translucent living tissue without
the influence of changes in blood flow due to pressure or edge-loss
errors.
(Method of Calibrating Devices)
[0109] A method for calibrating devices performed before actual
measurement, and a method for measurement and image analysis
calculations performed after the calibration will be described
below with reference to FIGS. 5 and 6. First, the method for
calibrating the devices will be described with reference to FIG.
5.
[0110] For stable measurement, it is important to establish
calibration means. Calibration is performed in order to address
environmental changes such as temperature and humidity changes,
instability of power supply, and changes in the light source
devices and optical devices over time. Calibration is performed
immediately after the apparatus is powered on, or several hours or
days after the last calibration, to make corrections so that
measured values always fall within a certain error margin.
[0111] First, a standard white plate for reference is provided
(S102). Its surface is a lusterless diffusing surface and is
uniform over the measurement range. It is desirable that the
spectral reflectance factor of the surface is approximately 80% or
greater in the measurement wavelength region.
typical standard white plate is a plate made of pressed barium
sulfate powder. The spectral reflectance factor of the standard
white plate at a wavelength .lamda. is measured beforehand with
another calibrated spectrophotometer and is stored in a memory. The
measured value is denoted by R.sub.c(.lamda.). The standard white
plate is placed in place of a sample to be measured, an image of
the standard white plate is taken at every measurement wavelength
in illumination p, and the results are recorded in the memory. Each
of the values is denoted by C.sub.ij(p, .lamda.), where (i, j)
represents the position of a pixel in the image of the standard
white plate taken.
[0112] At the same time, an image of the white reference surface
provided around the standard white plate is also taken and
recorded. Each of the values is denoted by W.sub.kl(p, .lamda.),
where (k, l) represents the position of a pixel in an image of the
white reference surface taken. Desirably, the surface structure of
the white reference surface is a lusterless diffusing surface that
is as independent of the light-reception and illumination angles as
possible. It is also desirable that the spectral reflectance
factor of the white reference surface is approximately 80% or
greater in a measurement wavelength region. Since properties of the
spectroscopic means 6 and spectral distributions of the lighting
devices 10 vary over the measurement wavelength range, exposure time
may be optimized for each measurement wavelength in order to correct
for these variations before calibration information is measured
(S103, S104). When calibration with the plurality of illuminants at
all measurement wavelengths is completed, the calibration process
ends (S105, S106). At this time, the exposure
times are stored along with the calibration information. When
measurement of a sample is made, the measurement is performed with
the same exposure times that are stored here. By using the values
obtained by taking pictures of the white reference surface,
variations from location to location of the surface measured due to
instability of power supply and changes in the light source devices
and optical devices over time can be corrected. By controlling
exposure time, spectral distributions of different light sources,
the spectral properties of the spectroscopic means, and the
spectral sensitivity of the image pickup devices that vary with
different measurement wavelengths can be made constant to ensure a
high dynamic range.
[0113] The calibration method will be described in further detail
with reference to FIG. 2.
[0114] The white reference surface 12 is set around a sample
surface 2 to form a double-beam optical system. In one example
configuration, the white reference surface 12 is disposed inside
the measurement window. The white reference surface 12 and the
white calibration plate 3 placed in the sample setting location are
measured at the same time to obtain a calibration coefficient at
that point in time. Since two-dimensional image pickup devices are
used, images of different regions of the white reference surface 12
and the sample surface 2 are taken at the same time. The values of
the results of the image taking of the white reference surface 12
are stored in the memory.
[0115] A standard white diffusion surface to which values are
assigned or an alternative white surface with which the standard
white diffusion surface can be referred to is used in calibration.
The surface is referred to as the white calibration plate 3. During
calibration, the white calibration plate 3 and the white reference
surface 12 are measured at the same time to obtain calibration
coefficients C.sub.I(x, y, .lamda.). The subscript I represents the
type (position) of an illuminant. The calibration coefficients can
be obtained based on a level setting that takes into consideration
the range of spectral reflectance factors that are possible in
measurement.
[0116] The level setting is a setting that associates an output
count value from the two-dimensional image pickup device with a
spectral reflectance factor of the white calibration plate 3. For
example, if the spectral reflectance factor of the white
calibration plate is 90% for an image pickup device capable of
16-bit output and 90% is associated with an output count value of
30000, then the maximum measurable spectral reflectance factor is
90.times.65535/30000=196.6%.
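The level-setting arithmetic above can be sketched in a few lines; the function name and defaults are illustrative, not part of the patent.

```python
# Maximum measurable spectral reflectance factor implied by a level
# setting, as in the 16-bit example above. Names are illustrative.

def max_measurable_reflectance(plate_reflectance_pct, level_count, bit_depth=16):
    """Scale the plate's known reflectance up to the sensor's full range."""
    full_scale = 2 ** bit_depth - 1          # 65535 for a 16-bit device
    return plate_reflectance_pct * full_scale / level_count

# 90% plate associated with an output count of 30000
print(round(max_measurable_reflectance(90.0, 30000), 1))  # about 196.6
```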
[0117] Calculation of a calibration coefficient will be described
with reference to FIG. 33.
[0118] (1) The spectral reflectance factor of a standard white
diffusion surface provided beforehand or an alternative white
surface with which the standard white diffusion surface can be
referred to is denoted by R.sub.W. The white calibration plate 3
has a uniform surface and the spectral reflectance factor R.sub.W
does not change with optical geometrical conditions or regions.
[0119] (2) Likewise, the spectral reflectance factor of the white
reference surface 12 is denoted by R.sub.R. The reference surface
has a uniform surface and the spectral reflectance factor R.sub.R
does not change with optical geometrical conditions or regions.
[0120] (3) The maximum value output from an image pickup device is
denoted by L.sub.max and the level setting value at illumination I
for the white calibration plate 3 is denoted by
L.sub.I(0<L.sub.I<L.sub.max).
[0121] (4) Exposure time and the amount of light are adjusted so
that the distribution of output values V.sub.I(x, y, .lamda.) from
the image pickup device at illumination I and wavelength .lamda.
peaks substantially at the position of L.sub.I (the adjustments will
be described later).
[0122] (5) The white calibration plate 3 and the white reference
surface 12 are measured at the same time. The white calibration
plate measured value is denoted by V.sub.I,W(x.sub.W, y.sub.W,
.lamda.) and the white reference surface measured value is denoted
by V.sub.I,R(x.sub.R, y.sub.R, .lamda.). Here, (x.sub.W,
y.sub.W).noteq.(x.sub.R, y.sub.R). The white reference surface
measured value V.sub.I,R(x.sub.R, y.sub.R, .lamda.) is stored in
the memory. This is used for correction during measurement.
[0123] (6) The calibration coefficient for calculating the spectral
reflectance factor for a part of a sample to be measured at
illumination I and wavelength .lamda. is
C.sub.I(x.sub.W, y.sub.W, .lamda.)=R.sub.W/V.sub.I,W(x.sub.W,
y.sub.W, .lamda.).
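The per-pixel calibration coefficient of step (6) is a simple element-wise ratio; the array shapes and variable names below are assumptions for illustration.

```python
import numpy as np

# Calibration coefficient C_I(x, y, lam) = R_W / V_I,W(x, y, lam),
# per the expression in [0123]. v_white_plate holds the counts from
# imaging the white calibration plate at one illumination I and
# wavelength lam; r_w is the plate's known reflectance factor at lam.

def calibration_coefficients(v_white_plate, r_w):
    return r_w / v_white_plate

counts = np.array([[30000.0, 29500.0],
                   [30200.0, 29800.0]])
c = calibration_coefficients(counts, 90.0)
# A pixel that read 30000 counts now maps 30000 counts -> 90%
assert np.isclose(c[0, 0] * 30000.0, 90.0)
```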
[0124] During calibration, the exposure time of the two-dimensional
image pickup device is changed according to measurement wavelength
and a difference in the light amount of the light source, a
difference in the efficiency of the spectroscopic means (a
difference in transmittance or the like), and a difference in the
spectral sensitivity of the image pickup device at the wavelength
are corrected to ensure a high dynamic range.
[0125] In addition, exposure time may be multiplexed, for example by
setting both a short exposure time and a long exposure time, to
extend the dynamic range. Exposure time is multiplexed by
multiplexing the level setting or by providing a low-reflectance
calibration plate (gray, black, or the like).
[0126] Adjustment of the exposure time during calibration is
automatically optimized by computer control based on the
distribution of measurement count values during the calibration.
Exposure time can be adjusted manually as well.
[0127] A method for extending the dynamic range other than
multiplexing the exposure time is to multiplex the light amount of
illumination. In this case, electrical energy supplied to the
illuminant or the number of illuminants may be multiplexed. In
addition, the calibration plate described above may be replaced or
the level setting may be changed.
[0128] Moreover, multiplexing of exposure time and multiplexing of
the light amount of illumination may be combined.
[0129] During measurement, the sample surface 2 and the white
reference surface 12 are measured at the same time with the
exposure time optimized for each wavelength, the measured values
are compared with the measured count values of the white reference
surface recorded during calibration, and correction is made.
Changes to be corrected include changes in the light amount of
illumination, changes in the spectroscopic means (for example
changes in the efficiency of dispersion due to changes in
transmittance of a filter), changes in the sensitivity of a
two-dimensional image pickup device, errors in exposure time
(especially in short-time exposure) and the like. The white
reference surface measured values V.sub.I,R(x.sub.R, y.sub.R,
.lamda.) at illumination I and wavelength .lamda. at the time of
calibration have been stored in the memory. While an example
measurement method will be described below, the method for
interpolation calculation of a time correction coefficient for a
sample surface is not limited to this; various methods such as
two-dimensional spline interpolation may be used.
[0130] A method for interpolation calculation of a time correction
coefficient for a sample surface 2 will be described. The white
reference surface value measured at measurement time is denoted by
V.sub.I,RM(x.sub.R, y.sub.R, .lamda.). The time correction
coefficient with respect to the time of calibration is
F.sub.I(x.sub.R, y.sub.R, .lamda.)=V.sub.I,RM(x.sub.R, y.sub.R,
.lamda.)/V.sub.I,R(x.sub.R, y.sub.R, .lamda.) [Expression 14]
[0131] The range of x.sub.R is
x.sub.b.ltoreq.x.sub.R.ltoreq.x.sub.e and the range of y.sub.R is
y.sub.b.ltoreq.y.sub.R.ltoreq.y.sub.e. Consider interpolation at
point P(x.sub.p, y.sub.p). First, linear interpolation is performed
in the Y axis direction. In this case,
F.sub.0=F.sub.I(x.sub.p, y.sub.b,
.lamda.)[(y.sub.e-y.sub.p)/(y.sub.e-y.sub.b)]+F.sub.I(x.sub.p,
y.sub.e, .lamda.)[(y.sub.p-y.sub.b)/(y.sub.e-y.sub.b)] [Expression
15]
[0132] Then, a displacement from the linear interpolation along the
Y axis is calculated for x.sub.b and x.sub.e.
.DELTA.F.sub.xb=F.sub.I(x.sub.b, y.sub.p, .lamda.)-F.sub.I(x.sub.b,
y.sub.b,
.lamda.)[(y.sub.e-y.sub.p)/(y.sub.e-y.sub.b)]-F.sub.I(x.sub.b,
y.sub.e, .lamda.)[(y.sub.p-y.sub.b)/(y.sub.e-y.sub.b)]
.DELTA.F.sub.xe=F.sub.I(x.sub.e, y.sub.p, .lamda.)-F.sub.I(x.sub.e,
y.sub.b,
.lamda.)[(y.sub.e-y.sub.p)/(y.sub.e-y.sub.b)]-F.sub.I(x.sub.e,
y.sub.e, .lamda.)[(y.sub.p-y.sub.b)/(y.sub.e-y.sub.b)] [Expression
16]
[0133] The displacement is obtained for each illumination
condition.
[0134] The time correction coefficient for each illumination
condition is denoted by F.sub.ij(I, .lamda.). Here, the subscripts
ij represent a position on the white reference surface (which is
the same as the position on a sample surface) corresponding to
x.sub.R, y.sub.R in the formula given earlier.
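Expressions 14-16 can be sketched as follows. The text stops at the displacement terms of Expression 16, so the way the two displacements are blended into the interior value here is one plausible completion, not the patent's definitive formula; all names are illustrative.

```python
# Interpolation of the time correction coefficient F (Expr. 14) from
# the white reference border to an interior point (xp, yp).

def time_correction(F, xp, yp, xb, xe, yb, ye):
    """F(x, y): correction factor measured on the reference border."""
    ty = (yp - yb) / (ye - yb)               # fractional position along Y

    def lin(x):                              # linear interpolation in Y
        return F(x, yb) * (1 - ty) + F(x, ye) * ty

    f0 = lin(xp)                             # Expression 15
    d_xb = F(xb, yp) - lin(xb)               # Expression 16, at x_b
    d_xe = F(xe, yp) - lin(xe)               # Expression 16, at x_e
    tx = (xp - xb) / (xe - xb)               # blend displacements along X
    return f0 + d_xb * (1 - tx) + d_xe * tx

# For a factor that is itself linear in x and y, the scheme
# reproduces the exact value at the interior point.
F = lambda x, y: 1.0 + 0.001 * x + 0.002 * y
assert abs(time_correction(F, 3.0, 4.0, 0.0, 10.0, 0.0, 10.0) - F(3.0, 4.0)) < 1e-9
```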
(Measurement of Actual Sample Surface to be Measured)
[0135] An actual measurement method will be described with
reference to FIG. 6. A sample to be measured is set (S111), an
image is taken with each measurement wavelength (S112) and the
result is recorded in a memory (S113). The value is denoted by
V.sub.Mij(I, .lamda.). In addition, F.sub.ij(I, .lamda.) described
above is obtained from a measured value of the image of the white
reference surface taken at the same time. In this case, the
spectral reflectance factor R.sub.ij(I, .lamda.) of the sample to
be measured after calibration can be obtained according to Formula
17. Here, the calibration coefficient for the illumination I is
C.sub.ij(I, .lamda.). When these calculations are completed for a
plurality of illumination conditions and different wavelengths, the
measurement ends (S114).
R.sub.ij(I, .lamda.)=V.sub.Mij(I, .lamda.)C.sub.ij(I,
.lamda.)/F.sub.ij(I, .lamda.) [Expression 17]
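Expression 17 is a per-pixel element-wise operation; a minimal sketch, with illustrative array shapes and values, is:

```python
import numpy as np

# Corrected spectral reflectance factor, Expression 17:
# R_ij = V_Mij * C_ij / F_ij at one illumination I and wavelength lam.

def corrected_reflectance(v_measured, c_calib, f_time):
    return v_measured * c_calib / f_time

v = np.array([[15000.0, 16000.0]])   # sample counts V_Mij(I, lam)
c = np.array([[0.003, 0.003]])       # calibration coefficients C_ij(I, lam)
f = np.array([[1.02, 0.98]])         # time correction F_ij(I, lam)
r = corrected_reflectance(v, c, f)   # reflectance factor in %
assert np.allclose(r, [[15000 * 0.003 / 1.02, 16000 * 0.003 / 0.98]])
```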
[0136] In this case, a pixel position on the white reference surface
is obtained by referring to a position that effectively corresponds
to the pixel position (i, j) on the sample surface to be measured
for which correction is made. For example, the positions where lines
drawn from the position to be measured in parallel to the X axis and
to the Y axis intersect the white reference surface may be used as
extrapolation positions for the X axis and the Y axis, or partial
interpolation and averaging may be applied. Measurement of the white
reference surface is made at the same time as the image taking, and
the measured values are used for correction to accommodate changes
in the light amount of illumination and changes and errors in the
exposure time of the image pickup device during measurement. This
enables high-precision and stable measurement. Note that a light
trap or a standard black plate may be used in measurement of
calibration data to make corrections to spectral reflectance factors
near 0.
[0137] Depending on measurement positions in an image, there are
regions for which optical geometrical conditions can be considered
to be the same. Examples of such regions are a region along the Y
axis that has a width dx along the X axis, as depicted in FIG. 34
(the shaded region) when linear light sources are used and a small
region where changes in optical geometrical conditions are
negligible. In that case, image information under substantially
identical optical geometrical conditions can be obtained and can be
used as information for obtaining distributions in a color space
and feature quantities in an image.
[0138] Depending on the relationship between a measurement position
in an image and the illumination, there are regions whose optical
geometrical conditions can be considered symmetric in azimuth angle.
For example, when a point light source is used as illustrated in
FIG. 35, such a relationship holds between positions displaced from
the center of the illuminant in the + direction along the Y axis,
and likewise between positions displaced from the center of the
illuminant in the - direction along the Y axis. In this case the
orientation of effect materials can be measured.
(Methods for Calculating Feature Quantities by Image Analysis)
[0139] Methods for calculating feature quantities by image analysis
using measured spectral imaging information will be described below.
The calculation methods are illustrative only. Since measurements in
the present invention are performed with high resolution and high
precision for every pixel, many types of image analysis calculations
can be used and high reliability of the results of the calculations
can be ensured.
[0140] A method for obtaining the average of measured spectral
reflectance factors R.sub.mij (p, .lamda.) for all pixels with each
illuminant with respect to multi-angle information will be
described (S115). The aspecular angle differs from pixel position
(i, j) to pixel position. The average R.sub.m(p, .lamda.) of
spectral reflectance factors can be written as Formula 18.
R.sub.m(p, .lamda.)=.SIGMA..SIGMA.R.sub.mij(p, .lamda.)/N
[Expression 18]
[0141] Here, .SIGMA. is calculated for the region from the i-th
pixel to the i+I-th pixel in the X axis direction and the region
from the j-th pixel to the j+J-th pixel in the Y axis direction. N
is the number of pixels included in a calculation region and
N=(I+1)(J+1).
[0142] The average aspecular angle .theta.(p) for the calculation
region can be written as Formula 19.
.theta.(p)=.SIGMA..SIGMA..theta..sub.ij(p)/N [Expression 19]
[0143] The average spectral reflectance factor R.sub.mij(p,
.lamda.) at any aspecular angle .theta.(p) can be obtained for a
desired spatial resolution by choosing i, j, I and J appropriately.
When I and J are chosen to be small, changes in
aspecular angle .theta.(p) in the average calculation region will
be small and the average can be calculated with a high spatial
resolution. On the other hand, when I and J are chosen to be large,
the spatial resolution will decrease but the effects of averaging
and smoothing will be high because N is large.
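The windowed averaging of Expressions 18-19 can be sketched as below; the field layout (indexed [y, x]) and names are assumptions for illustration.

```python
import numpy as np

# Region averaging of Expressions 18-19: the mean over pixels
# i..i+I (X axis) and j..j+J (Y axis), N = (I+1)(J+1) samples.
# Choosing small I, J favours spatial resolution; large I, J
# favours smoothing.

def region_average(field, i, j, I, J):
    window = field[j:j + J + 1, i:i + I + 1]
    return window.mean()

refl = np.arange(25.0).reshape(5, 5)    # stand-in R_mij at one (p, lam)
angles = np.linspace(10.0, 30.0, 25).reshape(5, 5)  # stand-in theta_ij(p)
r_avg = region_average(refl, 1, 1, 2, 2)        # Expression 18, 3x3 window
theta_avg = region_average(angles, 1, 1, 2, 2)  # Expression 19
assert np.isclose(r_avg, refl[1:4, 1:4].mean())
```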
[0144] Parameters such as lightness distributions at different
angles, color-space distributions, spatial distributions at each
image position, the number of appearing colors, information entropy,
image-filter outputs, fractal dimensions, and other parameters can
be calculated and displayed from the obtained multi-angle spectral
image information.
[0145] Furthermore, based on the information mentioned above,
reproduction information can be provided by computer graphics.
(Method for Obtaining Color Values)
[0146] A method for obtaining color values from spectral
reflectance factors R.sub.mij(p, .lamda.) of all of the measured
pixels will be described (S116). In the procedure for calculating
color values, tristimulus values XYZ are calculated first. A
typical method established by the Commission Internationale de
l'Eclairage (CIE) is used.
[0147] The tristimulus values X.sub.ij(p), Y.sub.ij(p), Z.sub.ij(p)
of each pixel can be written as Formula 20.
X.sub.ij(p)=k.intg.P(.lamda.)x(.lamda.)R.sub.mij(p,
.lamda.)d.lamda.
Y.sub.ij(p)=k.intg.P(.lamda.)y(.lamda.)R.sub.mij(p,
.lamda.)d.lamda.
Z.sub.ij(p)=k.intg.P(.lamda.)z(.lamda.)R.sub.mij(p,
.lamda.)d.lamda.
k=100/.intg.P(.lamda.)y(.lamda.)d.lamda. [Expression 20]
[0148] Here, P(.lamda.) represents a spectral distribution of
illumination light that is possible when an object is observed and
the range of wavelengths .lamda. to be integrated is a visible
light region used in the measurement. When presented on a computer
display or a printer, the tristimulus values X.sub.ij(p),
Y.sub.ij(p), Z.sub.ij(p) are converted to R.sub.ij(p), G.sub.ij(p),
B.sub.ij(p) values. Methods for converting to sRGB and Adobe RGB
based on standards are known.
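The discrete form of Expression 20 can be sketched as below. The three-band tables are toy stand-ins, NOT CIE data, and the reflectance is expressed as a factor (1.0 = 100%) so that a perfect reflector gives Y = 100 by construction of k.

```python
import numpy as np

# Tristimulus values from a spectral reflectance factor R(lam), an
# illuminant P(lam) and colour-matching functions (Expression 20),
# with the integral replaced by a sum over measurement bands.

lam = np.array([450.0, 550.0, 650.0])      # toy 3-band wavelength grid
P = np.array([1.0, 1.0, 1.0])              # illuminant, flat for the demo
xbar = np.array([0.3, 0.4, 0.3])           # stand-in matching functions,
ybar = np.array([0.1, 0.8, 0.1])           # NOT the real CIE tables
zbar = np.array([0.9, 0.1, 0.0])
dlam = 100.0                               # band width in nm

def tristimulus(R):
    k = 100.0 / np.sum(P * ybar * dlam)    # normalisation constant k
    X = k * np.sum(P * xbar * R * dlam)
    Y = k * np.sum(P * ybar * R * dlam)
    Z = k * np.sum(P * zbar * R * dlam)
    return X, Y, Z

X, Y, Z = tristimulus(np.ones(3))          # perfect reflector, R = 1.0
assert np.isclose(Y, 100.0)                # Y = 100 by construction of k
```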
[0149] As in the calculations of the average of spectral
reflectance factors described above, the average of tristimulus
values can be obtained according to Formula 21.
X(p)=.SIGMA..SIGMA.X.sub.ij(p)/N
Y(p)=.SIGMA..SIGMA.Y.sub.ij(p)/N
Z(p)=.SIGMA..SIGMA.Z.sub.ij(p)/N [Expression 21]
[0150] A method for obtaining color values from the tristimulus
values X.sub.ij(p), Y.sub.ij(p), Z.sub.ij(p) of all pixels measured
will be described. A typical method is to obtain L*a*b*, the values
in the CIELAB color space specified by the CIE, by using Formula
22.
L*.sub.ij(p)=116f(Y.sub.ij(p)/Y.sub.n)-16
a*.sub.ij(p)=500{f(X.sub.ij(p)/X.sub.n)-f(Y.sub.ij(p)/Y.sub.n)}
b*.sub.ij(p)=200{f(Y.sub.ij(p)/Y.sub.n)-f(Z.sub.ij(p)/Z.sub.n)}
[Expression 22]
Here,
[0151] when X.sub.ij(p)/X.sub.n>0.008856,
f(X.sub.ij(p)/X.sub.n)=(X.sub.ij(p)/X.sub.n).sup.1/3 [0152] when
X.sub.ij(p)/X.sub.n<0.008856, f(X.sub.ij(p)/X.sub.n)= [0153]
7.787(X.sub.ij(p)/X.sub.n)+16/116 [0154] when
Y.sub.ij(p)/Y.sub.n>0.008856,
f(Y.sub.ij(p)/Y.sub.n)=(Y.sub.ij(p)/Y.sub.n).sup.1/3 [0155] when
Y.sub.ij(p)/Y.sub.n<0.008856,
f(Y.sub.ij(p)/Y.sub.n)=7.787(Y.sub.ij(p)/Y.sub.n)+16/116 [0156]
when Z.sub.ij(p)/Z.sub.n>0.008856,
f(Z.sub.ij(p)/Z.sub.n)=(Z.sub.ij(p)/Z.sub.n).sup.1/3 [0157] when
Z.sub.ij(p)/Z.sub.n<0.008856,
f(Z.sub.ij(p)/Z.sub.n)=7.787(Z.sub.ij(p)/Z.sub.n)+16/116
[0158] X.sub.n, Y.sub.n, and Z.sub.n are tristimulus values
obtained by measurement of an object having a spectral reflectance
factor of 100%. As in the calculation of the average of spectral
reflectance factors described above, the average of CIELAB values
can be obtained according to Formula 23.
L*(p)=.SIGMA..SIGMA.L*.sub.ij(p)/N
a*(p)=.SIGMA..SIGMA.a*.sub.ij(p)/N
b*(p)=.SIGMA..SIGMA.b*.sub.ij(p)/N [Expression 23]
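Expression 22 can be sketched directly; the D65 white point values passed as defaults are an assumption for the demo (X.sub.n, Y.sub.n, Z.sub.n come from a 100% reflector in the text), and the standard CIE threshold 0.008856 is used.

```python
# CIELAB conversion per Expression 22, one pixel at a time.

def f(t):
    """Piecewise cube-root function of Expression 22."""
    return t ** (1.0 / 3.0) if t > 0.008856 else 7.787 * t + 16.0 / 116.0

def to_cielab(X, Y, Z, Xn=95.047, Yn=100.0, Zn=108.883):
    """Xn, Yn, Zn default to a nominal D65 white point (assumed)."""
    L = 116.0 * f(Y / Yn) - 16.0
    a = 500.0 * (f(X / Xn) - f(Y / Yn))
    b = 200.0 * (f(Y / Yn) - f(Z / Zn))
    return L, a, b

# The reference white itself maps to L* = 100, a* = b* = 0
L, a, b = to_cielab(95.047, 100.0, 108.883)
assert abs(L - 100.0) < 1e-9 and abs(a) < 1e-9 and abs(b) < 1e-9
```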
[0159] An important method for acquiring features of a sample 2 to
be measured by image analysis is to obtain the distribution of
measured values in the CIELAB color space. In this case again, the
calculation is performed over the region from the i-th pixel to the
i+I-th pixel along the X axis of the image and the region from the
j-th pixel to the j+J-th pixel along the Y axis.
[0160] A method for calculating the number of colors that appeared
and a method for calculating information entropy can be used as a
method for representing distributions of measured values in the
CIELAB color space as numerical parameters. Information entropy H
can be calculated according to Formula 24.
H(p)=-.SIGMA.P(p, c)log.sub.2{P(p, c)} [Expression 24]
[0161] Here, P(p, c) is the frequency of appearance of color c in
an image taken with illumination p and is calculated by dividing
the region from the i-th pixel to the i+I-th pixel along the X axis
and the region from the j-th pixel to the j+J-th pixel along the Y
axis into cubes with edges .DELTA.L*, .DELTA.a*, .DELTA.b* of a
certain length.
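The cube-binning behind both the appearing-color count and Expression 24 can be sketched as follows; the function and variable names are illustrative.

```python
import math
from collections import Counter

# Number of appearing colours and information entropy (Expression 24):
# bin L*a*b* values into cubes with edges of a fixed length and count
# how many pixels fall into each occupied cube.

def colour_statistics(lab_values, edge=1.0):
    """lab_values: iterable of (L*, a*, b*) tuples for a region.
    Returns (number of occupied cubes, entropy H in bits)."""
    bins = Counter(
        (int(L // edge), int(a // edge), int(b // edge))
        for L, a, b in lab_values
    )
    n = sum(bins.values())
    H = -sum((c / n) * math.log2(c / n) for c in bins.values())
    return len(bins), H

# Four pixels split evenly over two cubes: 2 colours, H = 1 bit
pixels = [(50.2, 0.1, 0.1), (50.4, 0.2, 0.3),
          (70.5, 5.0, 5.0), (70.1, 5.2, 5.4)]
n_colours, H = colour_statistics(pixels)
assert n_colours == 2 and abs(H - 1.0) < 1e-12
```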
[0162] Various other methods can also be used, such as a method of
using an image filter or differential values, or performing
frequency analysis such as Fourier transform or wavelet transform
on an image position, i.e. an aspecular angle .theta.(p), which is a
geometrical condition of illumination and light reception, and a
method of parameterizing distributions in the CIELAB color space in
further detail. By quantifying features of an object to be
measured, the difference between two different objects to be
measured can be quantified. In addition, efficiency improvement and
advantageous effects in a wide range of application such as quality
control in product manufacturing and determination of manufacturing
methods can be achieved.
(Three-Dimensional Curved Surface)
[0163] Spectral information on a sample surface of a planar object
to be measured is acquired using changes in optical geometrical
conditions from pixel to pixel in the description given above.
Accurate measurement of a curved surface to be measured can be made
as well by incorporating a three-dimensional measurement device in
a multi-angle spectral imaging apparatus and correcting optical
geometrical conditions from the normal to each pixel of the curved
surface.
[0164] Detailed description will be given below. Displacements in
the Z axis direction of the sample surface and the normal direction
are measured from the three-dimensional geometry and corrections
are made to the optical geometrical conditions (.theta..sub.il,
.phi..sub.il, .theta..sub.rsv, .phi..sub.rsv) described above. In
the case of a point light source, a position on the sample surface
is
denoted by P(x.sub.p, y.sub.p), the position of a lighting device
is denoted by I(.theta..sub.i, .phi..sub.i, D.sub.i), and the
position of a light receiving device (focal position) is denoted by
R(0, 0, D.sub.r). Here, x and y are positions in the X axis
direction and Y axis direction, respectively, on the sample
surface, .theta. is a zenith angle, .phi. is an azimuth angle
measured counterclockwise from the X axis, D.sub.i is the distance
from the sample surface to the light source, and D.sub.r is the
distance from the sample surface to an image pickup device. It is
assumed that the image pickup device has a field angle large enough
to cover an image taking range of the sample surface. Geometrical
conditions of illumination and light receiving at position P can be
represented by the zenith angle of the illumination, the azimuth
angle of the illumination, the zenith angle of the light reception,
and the azimuth angle of the light reception. Then the position of
the illumination can be written as:
x.sub.i=D.sub.isin(.theta..sub.i)cos(.phi..sub.i)
y.sub.i=D.sub.isin(.theta..sub.i)sin(.phi..sub.i)
z.sub.i=D.sub.icos(.theta..sub.i) [Expression 25]
[0165] The relative coordinate position of the illumination with
respect to point P is
x.sub.i'=x.sub.i-x.sub.p
y.sub.i'=y.sub.i-y.sub.p
z.sub.i'=z.sub.i [Expression 26]
and the relative coordinate position of the light reception with
respect to point P is
x.sub.r'=-x.sub.p
y.sub.r'=-y.sub.p
z.sub.r'=D.sub.r [Expression 27]
[0166] It is assumed here that the result of measurement of the
three-dimensional geometry shows that there is a displacement of
point P by D.sub.H and its normal vector is inclined at a zenith
angle of .theta..sub.n in the direction of .phi..sub.n. The
relative coordinate position to which the displacement D.sub.H is
given is
x.sub.i''=x.sub.i'
y.sub.i''=y.sub.i'
z.sub.i''=z.sub.i'-D.sub.H
x.sub.r''=x.sub.r'
y.sub.r''=y.sub.r'
z.sub.r''=z.sub.r'-D.sub.H [Expression 28]
[0167] If the normal vector is inclined at a zenith angle of
.theta..sub.n in the .phi..sub.n direction, the same operation as a
rotation by -.theta..sub.n is applied to the illumination position
I'' and the light receiving position R'', where the axis of the
rotation is the straight line that lies in the relative-coordinate
X''Y'' plane and is perpendicular to the azimuth direction
.phi..sub.n. The unit vector along the axis of the rotation is
(-sin(.phi..sub.n), cos(.phi..sub.n), 0). The general expression of
the rotation by .theta. about an arbitrary unit vector (n.sub.x,
n.sub.y, n.sub.z) is
x'=[n.sub.x.sup.2{1-cos(.theta.)}+cos(.theta.)]x+[n.sub.xn.sub.y{1-cos(.theta.)}-n.sub.zsin(.theta.)]y+[n.sub.xn.sub.z{1-cos(.theta.)}+n.sub.ysin(.theta.)]z
y'=[n.sub.xn.sub.y{1-cos(.theta.)}+n.sub.zsin(.theta.)]x+[n.sub.y.sup.2{1-cos(.theta.)}+cos(.theta.)]y+[n.sub.yn.sub.z{1-cos(.theta.)}-n.sub.xsin(.theta.)]z
z'=[n.sub.xn.sub.z{1-cos(.theta.)}-n.sub.ysin(.theta.)]x+[n.sub.yn.sub.z{1-cos(.theta.)}+n.sub.xsin(.theta.)]y+[n.sub.z.sup.2{1-cos(.theta.)}+cos(.theta.)]z [Expression 29]
[0168] Therefore,
x.sub.i'''=[sin(.phi..sub.n).sup.2{1-cos(-.theta..sub.n)}+cos(-.theta..sub.n)]x.sub.i''-sin(.phi..sub.n)cos(.phi..sub.n){1-cos(-.theta..sub.n)}y.sub.i''+cos(.phi..sub.n)sin(-.theta..sub.n)z.sub.i''
y.sub.i'''=-sin(.phi..sub.n)cos(.phi..sub.n){1-cos(-.theta..sub.n)}x.sub.i''+[cos(.phi..sub.n).sup.2{1-cos(-.theta..sub.n)}+cos(-.theta..sub.n)]y.sub.i''+sin(.phi..sub.n)sin(-.theta..sub.n)z.sub.i''
z.sub.i'''=-cos(.phi..sub.n)sin(-.theta..sub.n)x.sub.i''-sin(.phi..sub.n)sin(-.theta..sub.n)y.sub.i''+cos(-.theta..sub.n)z.sub.i''
x.sub.r'''=[sin(.phi..sub.n).sup.2{1-cos(-.theta..sub.n)}+cos(-.theta..sub.n)]x.sub.r''-sin(.phi..sub.n)cos(.phi..sub.n){1-cos(-.theta..sub.n)}y.sub.r''+cos(.phi..sub.n)sin(-.theta..sub.n)z.sub.r''
y.sub.r'''=-sin(.phi..sub.n)cos(.phi..sub.n){1-cos(-.theta..sub.n)}x.sub.r''+[cos(.phi..sub.n).sup.2{1-cos(-.theta..sub.n)}+cos(-.theta..sub.n)]y.sub.r''+sin(.phi..sub.n)sin(-.theta..sub.n)z.sub.r''
z.sub.r'''=-cos(.phi..sub.n)sin(-.theta..sub.n)x.sub.r''-sin(.phi..sub.n)sin(-.theta..sub.n)y.sub.r''+cos(-.theta..sub.n)z.sub.r'' [Expression 30]
[0169] Therefore the geometrical conditions (.theta..sub.il,
.phi..sub.il, .theta..sub.rsv, .phi..sub.rsv) are
.theta..sub.il=.pi./2-tan.sup.-1(z.sub.i'''/sqrt((x.sub.i''').sup.2+(y.sub.i''').sup.2))
.phi..sub.il=tan.sup.-1(y.sub.i'''/x.sub.i''')
.theta..sub.rsv=.pi./2-tan.sup.-1(z.sub.r'''/sqrt((x.sub.r''').sup.2+(y.sub.r''').sup.2))
.phi..sub.rsv=tan.sup.-1(y.sub.r'''/x.sub.r''') [Expression 31]
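The geometry correction of paragraphs [0164]-[0169] can be sketched as below: shift the illumination and receiver into coordinates relative to point P, subtract the height offset D.sub.H, then rotate by -.theta..sub.n about the in-plane axis (-sin .phi..sub.n, cos .phi..sub.n, 0) using Rodrigues' rotation formula. All function and parameter names are illustrative.

```python
import numpy as np

def rotate_about_axis(v, axis, angle):
    """Rodrigues' rotation of vector v by angle about a unit axis."""
    axis = np.asarray(axis, float) / np.linalg.norm(axis)
    c, s = np.cos(angle), np.sin(angle)
    return v * c + np.cross(axis, v) * s + axis * np.dot(axis, v) * (1.0 - c)

def corrected_directions(illum, receiver, p, d_h, theta_n, phi_n):
    """Return (zenith, azimuth) for illumination and light reception
    at surface point p = (x_p, y_p), with height offset d_h and a
    normal tilted by theta_n toward azimuth phi_n."""
    axis = (-np.sin(phi_n), np.cos(phi_n), 0.0)
    out = []
    for pos in (illum, receiver):
        rel = np.asarray(pos, float) - np.array([p[0], p[1], 0.0])
        rel[2] -= d_h                                 # Expression 28
        v = rotate_about_axis(rel, axis, -theta_n)    # Expression 30
        zenith = np.pi / 2 - np.arctan2(v[2], np.hypot(v[0], v[1]))
        azimuth = np.arctan2(v[1], v[0])              # Expression 31
        out.append((zenith, azimuth))
    return out

# Flat, undisplaced surface: a light at 45 deg stays at 45 deg and the
# receiver directly overhead stays at zenith 0.
geo = corrected_directions((1.0, 0.0, 1.0), (0.0, 0.0, 2.0),
                           (0.0, 0.0), 0.0, 0.0, 0.0)
assert abs(geo[0][0] - np.pi / 4) < 1e-12 and abs(geo[1][0]) < 1e-12
```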
[0170] In the case of three-dimensional curved surface measurement,
the calibration coefficients need to be corrected when there are
changes in position and angle. That is, the calibration coefficient
is recalculated using a displacement D.sub.H in the vertical
direction and an inclination with respect to the illumination
position. This is done because the irradiation energy per unit area
changes with the displacement and the inclination.
[0171] FIG. 37 is a diagram illustrating correction of a
calibration coefficient. A position on the XYZ Cartesian
coordinates of the illumination is denoted by I(x.sub.i, y.sub.i,
z.sub.i) and the observation position on the sample surface is
denoted by P(x.sub.p, y.sub.p, 0). The distance D.sub.I from the
observation position to the illumination position is
D.sub.I=[(x.sub.i-x.sub.p).sup.2+(y.sub.i-y.sub.p).sup.2+z.sub.i.sup.2].sup.0.5 [Expression 32]
When there is a change D.sub.H in the distance, the calibration
coefficient C is corrected according to the following formula.
C'=C(D.sub.I+D.sub.H)/D.sub.I [Expression 33]
[0172] The unit vector at point P in the direction of illumination
is denoted by (n.sub.Ix, n.sub.Iy, n.sub.Iz) and the unit normal
vector of the three-dimensional curved surface at point P is
denoted by (n.sub.Hx, n.sub.Hy, n.sub.Hz). Because the unit vector
of point P in the vertical direction is (0, 0, 1), the inner
product of this unit vector and the unit vector in the direction of
illumination is n.sub.Iz. Therefore the zenith angle in the
direction of illumination is
cos(.theta..sub.I)=n.sub.Iz
.theta..sub.I=cos.sup.-1(n.sub.Iz) [Expression 34]
In this case, the projected area of light per unit area is
S.sub.I=1/cos(.theta..sub.I) [Expression 35]
On the other hand, the inner product A of the direction of
illumination and the normal direction is
A=n.sub.Ixn.sub.Hx+n.sub.Iyn.sub.Hy+n.sub.Izn.sub.Hz
.theta..sub.IH=cos.sup.-1(A) [Expression 36]
Therefore, the projected area in this case is
S.sub.IH=1/cos(.theta..sub.IH) [Expression 37]
The correction coefficient in this case is
C''=C'(S.sub.I/S.sub.IH)=C[(D.sub.I+D.sub.H)/D.sub.I][cos(.theta..sub.IH)/cos(.theta..sub.I)] [Expression 38]
The method for obtaining the correction coefficient is not limited
to this; any of various methods can be used.
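Expressions 32-38 combine into one scaling of the calibration coefficient; a minimal sketch, with illustrative vector names, is:

```python
import math
import numpy as np

# Correct a calibration coefficient C for a curved surface: scale by
# the distance change (Expr. 33) and by the ratio of projected areas
# (Expr. 35, 37 and 38).

def corrected_coefficient(C, illum_pos, p_xy, d_h, normal):
    ix, iy, iz = illum_pos
    px, py = p_xy
    d_i = math.sqrt((ix - px) ** 2 + (iy - py) ** 2 + iz ** 2)  # Expr. 32
    n_i = np.array([ix - px, iy - py, iz]) / d_i  # unit vector toward light
    cos_i = n_i[2]             # inner product with vertical (0,0,1), Expr. 34
    cos_ih = float(np.dot(n_i, np.asarray(normal, float)))      # Expr. 36
    return C * (d_i + d_h) / d_i * (cos_ih / cos_i)             # Expr. 38

# Flat, undisplaced surface with normal (0, 0, 1): C is unchanged
c2 = corrected_coefficient(1.5, (0.0, 0.0, 100.0), (0.0, 0.0),
                           0.0, (0.0, 0.0, 1.0))
assert abs(c2 - 1.5) < 1e-12
```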
[0173] A specific example of an embodiment of the present invention
will now be described.
[0174] In this example, the monochrome two-dimensional image sensor
4 including a Peltier cooling mechanism and an anti-blooming
mechanism and having 772 pixels in the X axis direction and 580
pixels in the Y axis direction was used. The imaging lens was a
single-focus C mount lens having a focal length of 25 mm and a
longitudinal field angle of 22 degrees. The spectroscopic means had
a liquid-crystal tunable filter immediately before the lens and was
capable of spectral measurement in the visible light range at
intervals of 10 nm. As the sample surface to be measured, the
central area of an image that has 720 pixels in the X axis
direction and 520 pixels in the Y axis direction was used and the
other, marginal area was a white reference surface. For
illumination, two sets of 10 linearly arranged white LED chips, each
including a projection lens, were provided and disposed so that the
sample surface to be measured was illuminated at angles of 20
degrees and 45 degrees from the direction perpendicular to the
center of the sample surface to be measured.
[0175] In this example, before measurement, a standard white plate,
which is a standard calibration plate, was set in a measurement
place after power-on, then calibration information at each
wavelength was measured with the illuminants at 20 degrees and 45
degrees. In order to make corrections of spectral distribution
properties of the liquid-crystal tunable filter and the lighting
devices, exposure time was optimized for each measurement
wavelength and was stored along with the calibration information in
the memory. Then, the sample to be measured was set at the
measurement place and measurement was made with illuminants at 20
degrees and 45 degrees for each wavelength with the same exposure
times as those in the calibration.
[0176] Sample A (a color card containing aluminum flake pigment
(Alpate 7670 NS from Toyo Aluminium K.K.)) and sample B (a color
card containing interference effect materials (Xirallic Crystal
Silver XT60-10 from Merck)) were measured. After the measurement of
both of the samples, spectral reflectance factors for all pixels
with illuminants at 20 degrees and 45 degrees were obtained based
on the calibration information. From the spectral reflectance
factors, tristimulus values XYZ, RGB values for image display, and
values in the CIELAB color space were obtained. FIG. 9
illustrates a graph of results of the measurement of spectral
reflectance factors obtained by dividing each of images with
illuminants at 20 degrees and 45 degrees into eight regions in the
X axis direction, averaging the measured values of the pixels in
each of the total of 16 regions, and changing the wavelength in
increments of 10 nm in the range of 400 nm to 700 nm.
[0177] In addition, distributions in the CIELAB color space for all
pixels with illuminants at 20 degrees and 45 degrees, the number of
colors that appeared, and information entropy were calculated from
the CIELAB values.
[0178] The number of colors that appeared in the image in the
CIELAB color space can be used as an indicator of a feature of the
color card. In the case of a color card that uses only a pigment as
coloring agent and has a paint color generally called solid color,
measured values in the image are substantially the same and
therefore extremely few colors appear in the image. In the case of
a metallic color, the lightness and chroma particularly vary
depending on the type and blending quantity of aluminum effect
materials contained, pigments used in combination with the effect
materials, and optical geometrical conditions, and therefore more
colors appear. In the case of a pearl color, the hue as well as the
lightness and chroma vary according to optical geometrical
conditions and, in addition, interference reflected light from fine
effect materials is contained; therefore various colors appear in
each individual pixel, which increases the number of appearing
colors. This means that colors distribute in a wider range in the
CIELAB color space. The information entropy is a numerical value
indicating the amount of information in an image and can be used as
an indicator of a feature of a color card, like the number of
appearing colors.
[0179] L*a*b* values of each pixel were calculated using a
calculation method in JIS Z8729:2004 based on Formula 7 with CIE
colorimetric standard illuminant D65 specified in JIS Z8781:1999
using a color-matching function for a view angle of 10 degrees. The
calculation was performed with a precision of two decimal places.
Then, the CIELAB space was divided into virtual cubes with an edge
of 1.00 along each of the L*, a* and b* axes, values of L*a*b*
calculated for 720×520 = 374,400 pixels in the image were
assigned to the cubes, and the number of cubes containing colors of
the pixels that appeared was used as the number of colors that
appeared. This number of colors is denoted by N. This calculation
method was also used for the image divided into eight regions.
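The cube-counting procedure described above can be sketched as follows. This is an illustrative reimplementation, assuming the L*a*b* values are supplied as an array with one row per pixel; the function name and edge-length parameter are assumptions.

```python
import numpy as np

def count_colors(lab, edge=1.00):
    """Count occupied unit cubes in CIELAB space (the number N in the text).

    lab:  array of shape (n_pixels, 3) holding L*, a*, b* for each pixel.
    edge: edge length of the virtual cubes along the L*, a* and b* axes.
    """
    # Assign each pixel to the cube containing it (integer cube coordinates)
    cubes = np.floor(np.asarray(lab, dtype=float) / edge).astype(int)
    # N = the number of distinct cubes that contain at least one pixel
    return np.unique(cubes, axis=0).shape[0]
```

For example, two pixels that differ by less than one unit in every coordinate fall in the same cube and count as one color.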
[0180] The information entropy E was calculated according to
Formula 39.
E = −Σ P(i) log₂{P(i)} [Expression 39]
[0181] Here, P(i) is the relative frequency of appearance of color
i in the image, that is, the fraction of all pixels that fall in the
cube with an edge of 1.00 along each of the L*, a* and b* axes
corresponding to color i.
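Expression 39 can be sketched in code as follows, combining the cube binning of paragraph [0179] with the entropy sum; this is a minimal illustration under the same assumptions (one row of L*a*b* values per pixel; function name assumed).

```python
import numpy as np

def information_entropy(lab, edge=1.00):
    """Information entropy E = -sum P(i) log2 P(i) over occupied CIELAB cubes."""
    # Bin each pixel into the unit cube containing it
    cubes = np.floor(np.asarray(lab, dtype=float) / edge).astype(int)
    # Count how many pixels fall into each occupied cube
    _, counts = np.unique(cubes, axis=0, return_counts=True)
    p = counts / counts.sum()           # relative frequency P(i) of color i
    return float(-np.sum(p * np.log2(p)))
```

For instance, an image whose pixels are spread uniformly over four cubes has E = log₂ 4 = 2 bits, while an image falling entirely in one cube has E = 0.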
[0182] FIG. 10 illustrates distributions in the CIELAB color space
for sample A and FIG. 11 illustrates the L*-a* relationship with
illuminants at 20 degrees and 45 degrees and the a*-b* relationship
when the value of L* was set at 50. FIG. 13 illustrates
distributions in the CIELAB color space for sample B and FIG. 14
illustrates the L*-a* relationship with illuminants at 20 degrees
and 45 degrees and the a*-b* relationship when the value of L* was
set at 50.
[0183] Each of the images with 20-degree and 45-degree illuminants
was divided into eight regions in the X axis direction. The
distribution in the CIELAB color space for each region is
illustrated. FIG. 12 illustrates the L*-a* relationship in the
eight regions of sample A with illuminants at 20 degrees and 45
degrees and the a*-b* relationship when the value of L* was set at
50 and FIG. 15 illustrates the L*-a* relationship in the eight
regions of sample B with illuminants at 20 degrees and 45 degrees
and the a*-b* relationship when the value of L* was set at 50. The
distributions for the interference effect materials of sample A are
wider than the distributions for the aluminum effect materials of
sample B because the interference effect materials of sample A
produce glittering interference reflections. The difference between
the samples is clearly shown.
[0184] While this embodiment has been described with an example in
which a white reference surface 12 is provided around a sample
surface 2 to be measured, the white reference surface 12 may be
omitted and only the sample surface 2 may be measured. While a
white light source is used as the lighting device 10 in this
embodiment, a light source that has a dispersing function in the
lighting device and is capable of emitting single-wavelength light
may be used instead.
[0185] A flat object to be measured is used to acquire spectral
information of a sample surface from changes in optical geometrical
conditions of each pixel in this embodiment. If the surface to be
measured is curved, a three-dimensional geometry measurement device
may be incorporated in the multi-angle spectral imaging apparatus
so that optical geometrical conditions can be accurately corrected
based on the normal to each of the pixels on the curved
surface.
[0186] Another example in which a three-dimensional geometry
measurement device is incorporated will be described with reference
to FIG. 16. Description of the same components as those in the
example described above will be omitted.
[0187] This example includes, as in the first example, a monochrome
two-dimensional image sensor 4, a liquid-crystal tunable filter
which acts as spectroscopic means 6, and a lighting device 10
including two sets of 10 linearly arranged white LED chips. In
addition to these components, this example includes a laser
projector 9 (SHOWWX from Microvision, Inc., USA) as geometry
measurement means for three-dimensional geometry correction of a
sample to be measured. The laser projector 9 is used to project a
grating image 11 onto a surface to be measured, the grating image
11 is captured with the monochrome two-dimensional image sensor 4,
and a geometry is obtained from displacements of the grating image
11.
[0188] In this example, before measurement, a sufficiently flat
standard white plate, which is a standard calibration plate, was
set in a measurement place after power-on and then calibration
information at each wavelength was measured with illuminants at 20
degrees and 45 degrees. Then, the laser projector 9 for
three-dimensional geometry measurement was used to project a
grating image, an image of the projected grating image was taken
with the monochrome two-dimensional image sensor 4, and positions
on the grating image 11 were quantified as data on a reference flat
surface and stored.
[0189] During measurement, the grating image 11 was projected with
the laser projector 9, the grating image was acquired with the
monochrome two-dimensional image sensor 4, and the position
information on the grating image 11 on the reference flat surface
that had been measured and stored beforehand was used to calculate
three-dimensional geometry data by triangulation.
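The triangulation step can be sketched as follows. The patent does not give the exact geometry model, so this is a simplified similar-triangles sketch under assumed names and a small-height approximation: a grating line displaced on the sensor relative to its reference-plane position indicates the height of the surface at that point.

```python
def height_from_shift(shift_px, pixel_pitch, baseline, distance):
    """Hypothetical surface height from the lateral shift of a grating line.

    shift_px:    observed displacement of a grating line versus the stored
                 flat-reference position (pixels)
    pixel_pitch: size of one pixel projected onto the surface (mm/pixel)
    baseline:    projector-to-camera separation (mm)
    distance:    camera-to-reference-plane distance (mm)
    """
    shift = shift_px * pixel_pitch            # displacement in mm on the surface
    # Similar triangles: height / distance ~= shift / baseline
    return shift * distance / baseline
```

Evaluating this per grating line and interpolating between lines would yield the three-dimensional geometry data referred to in the text.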
[0190] Then a normal vector at each of the positions on the sample
surface was calculated from the calculated three-dimensional
geometry data, the specular reflection direction of illumination
from the lighting device was corrected, and the aspecular angles
were then recalculated to correct the optical geometrical conditions
of the three-dimensional surface.
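The aspecular-angle recalculation described above can be sketched as follows: the illumination direction is mirrored about the per-pixel normal obtained from the geometry data, and the angle between that specular-reflection direction and the viewing direction is the aspecular angle. The function and argument names are illustrative.

```python
import numpy as np

def aspecular_angle(normal, illum, view):
    """Aspecular angle (degrees) from per-pixel normal and directions.

    normal: surface normal at the pixel (from the 3-D geometry data)
    illum:  unit direction from the surface toward the light source
    view:   unit direction from the surface toward the image sensor
    """
    n = np.asarray(normal, float); n /= np.linalg.norm(n)
    l = np.asarray(illum, float);  l /= np.linalg.norm(l)
    v = np.asarray(view, float);   v /= np.linalg.norm(v)
    # Mirror the illumination direction about the corrected normal
    r = 2.0 * np.dot(n, l) * n - l
    # Aspecular angle = angle between specular direction and viewing direction
    cos_a = np.clip(np.dot(r, v), -1.0, 1.0)
    return float(np.degrees(np.arccos(cos_a)))
```

For a flat surface with the normal along z, light incident at 45 degrees, and the sensor looking straight down, this returns an aspecular angle of 45 degrees, matching the geometry of the embodiments.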
[0191] In this embodiment, optical geometrical conditions can be
accurately corrected because the three-dimensional geometry
measurement device is incorporated in the multi-angle spectral
imaging apparatus in order to measure a curved surface to be
measured. In particular, when data on aspecular angles is obtained,
normal vectors are calculated from the three-dimensional geometry,
specular reflection directions of illumination are corrected, then
the aspecular angles are recalculated, thereby enabling measurement
unaffected by the geometry of the surface to be measured.
[0192] While a surface of a solid is measured in this embodiment,
objects other than solids, such as fluid surfaces, liquids, and
powders, can also be measured contactlessly.
REFERENCE NUMERALS
[0193] 2 Sample surface (sample surface to be measured)
[0194] 3 White calibration plate
[0195] 4 Two-dimensional image sensor
[0196] 6 Spectroscopic means
[0197] 8 Optical lens
[0198] 9 Laser projector
[0199] 10 Lighting device
[0200] 11 Grating image
[0201] 12 White reference surface
[0202] 14 Aspecular angle
[0203] 16 Unit vector in specular reflection direction
[0204] 18 Unit vector in light receiving direction
[0205] 19 Pixel (i, j)
[0206] 20 Unit vector in illumination direction
[0207] 32 Lighting direction
[0208] 34 Specular reflection direction
[0209] 36 Observation direction
[0210] 38 Normal direction of effect materials
[0211] 40 Spectral light source device
[0212] 50 Housing
[0213] 52 Measurement window
[0214] 54 Large sample such as fender or door
[0215] 56 Liquid sample
* * * * *