United States Patent Application 20160180552
Kind Code: A1
Published: June 23, 2016

Method and Device of Constructing Uniform Color Space Directly from Raw Camera RGB

Inventors: Ying-Yi Li (Taipei, TW); Hsien-Che Lee (Pleasanton, CA)
Applicant: MediaTek Singapore Pte. Ltd. (Singapore, SG)
Appl. No.: 14/582,176
Filed: December 23, 2014
Family ID: 56130035
Abstract
A method of constructing a uniform color space directly from raw
camera RGB, and an associated device, are described. The method may
determine characteristics related to the imaging device, and may
determine a direction and a scale for each of first, second and third
perceptual color axes based at least in part on those
characteristics, such that the first perceptual color axis correlates
with lightness, the second perceptual color axis correlates with
yellow-blue color variations, and the third perceptual color axis
correlates with red-green color variations. The second perceptual
color axis may be substantially aligned with typical daylight
variation.
Current U.S. Class: 345/591
Current CPC Class: G09G 5/06 (20130101); G09G 2340/06 (20130101); G06T 11/001 (20130101)
International Class: G06T 11/00 (20060101); G09G 5/06 (20060101)
Claims
1. A method of constructing a uniform color space from raw
tristimulus values of an imaging device, comprising: obtaining, by
a processor, characteristics related to the imaging device;
computing, by the processor, first, second and third perceptual
color axes, the first perceptual color axis correlated with
lightness, the second perceptual color axis correlated with a first
color variation and substantially aligned with a daylight
variation, the third perceptual color axis correlated with a second
color variation and orthogonal to the second perceptual color axis;
and scaling, by the processor, the first, second and third
perceptual color axes so that a resulting distance in the uniform
color space and a distance of Munsell colors in a CIELAB color
space are substantially the same.
2. The method of claim 1, wherein obtaining characteristics related
to the imaging device comprises receiving parameters associated
with spectral sensitivity functions of the imaging device.
3. The method of claim 1, wherein obtaining characteristics related
to the imaging device comprises: using a color checker with a
plurality of color patches with known Munsell color notations or
known spectral reflectances; and receiving a plurality of images of
the color checker captured under different phases of daylight by
the imaging device.
4. The method of claim 3, wherein the plurality of color patches
comprise a series of patches of neutral colors.
5. The method of claim 1, wherein computing first, second and third
perceptual color axes comprises: computing a daylight plane of the
imaging device, a constant lightness plane of Munsell colors with
the constant lightness plane having a surface normal vector as the
first perceptual color axis, and an intersection line of the
daylight plane and the constant lightness plane as the second
perceptual color axis based at least in part on the characteristics
related to the imaging device; and determining a first line as the
third perceptual color axis, the first line on the constant
lightness plane and orthogonal to the second perceptual color
axis.
6. The method of claim 5, wherein the second perceptual color axis
comprises a yellow-blue axis.
7. The method of claim 5, wherein the third perceptual color axis
comprises an axis of a color correlated with a perceptual red-green
axis.
8. The method of claim 1, wherein the scaling comprises scaling
with weighted errors so that one or more chosen colors are weighted
more heavily relative to other colors to emphasize a fidelity of
the one or more chosen colors as represented in the uniform color
space.
9. The method of claim 8, wherein the one or more chosen colors
comprise at least a color of skin, grass, blue sky, or any other
user-chosen color.
10. A method of computing an inverse transform of a uniform color
space having a perceptual color axis substantially aligned with a
daylight variation, comprising: reducing, by a processor, a
plurality of equations describing the uniform color space into a
nonlinear equation with a single variable; examining, by the
processor, a behavior of the nonlinear equation for a plurality of
input ranges to determine a projection for linear approximation;
solving, by the processor, one or more combinational cases of a
third-degree polynomial and a first-degree polynomial from the
linear approximation in the projection to provide a solution;
determining, by the processor, whether the solution is within a
color gamut of an imaging device; and mapping, by the processor,
an out-of-gamut solution into an in-gamut color according to a
gamut mapping strategy.
11. A device implementable in an imaging device, comprising: a
memory configured to store data representative of characteristics
related to the imaging device; and a processor configured to store
data in and access data from the memory, the processor comprising:
a computation unit configured to compute first, second and third
perceptual color axes of a uniform color space based at least in
part on the characteristics related to the imaging device; and a
scaling unit configured to scale the first, second and third
perceptual color axes.
12. The device of claim 11, wherein the scaling unit is configured
to scale the first, second and third perceptual color axes so that
a resulting distance in the uniform color space is substantially
equal to a distance of Munsell colors in a CIELAB color space.
13. The device of claim 11, wherein the memory is configured to
store parameters associated with spectral sensitivity functions of
the imaging device utilized by the computation unit in computing
the first, second and third perceptual color axes of the uniform
color space.
14. The device of claim 11, wherein the memory is further
configured to store a lookup table containing a plurality of
results of an inverse transformation of the uniform color space
corresponding to a plurality of grid points of the uniform color
space.
15. The device of claim 14, wherein the computation unit is further
configured to interpolate one or more additional inverse colors in
the uniform color space based at least in part on the lookup
table.
16. The device of claim 11, wherein the first perceptual color axis
correlates with lightness, wherein the second perceptual color axis
correlates with yellow-blue color variations, wherein the third
perceptual color axis correlates with red-green color variations,
and wherein the second perceptual color axis is substantially
aligned with daylight variation.
17. The device of claim 11, wherein in computing the first, second
and third perceptual color axes of the uniform color space, the
computation unit is configured to perform operations comprising:
computing a daylight plane of the imaging device, a constant
lightness plane of Munsell colors with the constant lightness plane
having a surface normal vector as the first perceptual color axis,
and an intersection line of the daylight plane and the constant
lightness plane as the second perceptual color axis based at least
in part on the characteristics related to the imaging device; and
determining a first line as the third perceptual color axis, the
first line on the constant lightness plane and orthogonal to the
second perceptual color axis.
18. The device of claim 17, wherein the second perceptual color
axis comprises a yellow-blue axis.
19. The device of claim 17, wherein the third perceptual color axis
comprises an axis of a predefined color correlated with a
perceptual red-green axis.
20. The device of claim 11, wherein in scaling the first, second
and third perceptual color axes, the scaling unit is configured to
scale the first, second and third perceptual color axes with
weighted errors so that one or more chosen colors are weighted more
heavily relative to other colors to emphasize a fidelity of the one
or more chosen colors as represented in the uniform color
space.
21. The device of claim 11, further comprising: a characteristics
obtaining unit configured to obtain the characteristics related to
the imaging device by performing operations comprising: using a
color checker with a plurality of color patches with known Munsell
color notations or known spectral reflectances; and receiving a
plurality of images of the color checker captured under different
phases of daylight by the imaging device.
22. The device of claim 21, wherein the plurality of color patches
comprise a series of patches of neutral colors.
23. The device of claim 11, further comprising: an inverse
transformation unit configured to compute an inverse transformation
of the uniform color space by performing operations comprising:
reducing a plurality of equations describing the uniform color
space into a nonlinear equation with a single variable; examining a
behavior of the nonlinear equation for a plurality of input ranges
to determine a projection for linear approximation; solving one or
more combinational cases of a third-degree polynomial and a
first-degree polynomial from the linear approximation in the
projection to provide a solution; determining whether the solution
is within a color gamut of the imaging device; and mapping an
out-of-gamut solution into an in-gamut color according to a gamut
mapping strategy.
Description
TECHNICAL FIELD
[0001] The inventive concept described herein is generally related
to digital color image processing and, more particularly, to color
conversion from raw camera RGB to a uniform color space with color
attributes that correlate closely with human color perception.
BACKGROUND
[0002] Unless otherwise indicated herein, approaches described in
this section are not prior art to the claims listed below and are
not admitted to be prior art by inclusion in this section.
[0003] Digital color images are typically captured in the raw
camera RGB space and displayed on monitors in a standard RGB space
such as sRGB, for example. There are usually many image processing
steps that take place to correct, manipulate, enhance, and convert
a raw camera RGB image to a standard sRGB image. These image
processing steps typically are done in the image signal processing
(ISP) pipeline of a digital camera. Color manipulation, adjustment,
and conversion are difficult to do effectively in RGB color space
because the RGB color model does not map easily to color attributes
that correlate closely with human color perception such as, for
example, lightness, hue, and chroma. For instance, to avoid
numerical overflow in processing bright and colorful objects, an
approach is to simply clip RGB channels having numerical values
that are too high or too low, such as those greater than 4095 (for
12-bit processing) or those less than 0. Color clipping in RGB
space tends to change hue dramatically. As a result, a bright and
colorful purple flower may turn red when its blue channel is clipped,
while skin color may become bright yellow around specular highlights
when its red channel is clipped. Yet another important step in color
processing is to increase the
colorfulness of natural images, where dull and low-contrast images
may be processed to have enhanced color chroma, thus making such
images look better. This is commonly done in advertisements and is
becoming a common feature of digital cameras.
[0004] A key to doing color adjustment well is to transform colors
from an RGB color space to a color space that is much better
correlated with color appearance description, such as CIE 1976
L*a*b* (CIELAB) or CIE 1976 L*u*v* (CIELUV) uniform color spaces.
For example, in CIELAB space, the angle between a* and b* axes is
well correlated with the hue of a color, the L* is well correlated
with its lightness, and the distance from (a*, b*) to (0, 0) is
well correlated with its perceived chroma.
[0005] Although CIELAB is widely useful and highly successful in
practical applications, it is not directly applicable to the
majority of imaging devices, such as smartphone cameras, digital
cameras, and document scanners. This is because CIELAB is based on
XYZ tristimulus values calculated from CIE 1931 xyz color matching
functions. In order to transform camera RGB to CIE XYZ, a standard
practice is to use a customized 3×3 matrix for the color
conversion. This practice may work well when the camera spectral
sensitivities, herein interchangeably referred to as spectral
sensitivity functions or sensor fundamentals, can be well
approximated by linear combinations of the CIE 1931 xyz color
matching functions. However, most smartphone cameras and consumer
digital cameras fall far short of this condition due to the high
manufacturing cost of color filters that can provide such a good
approximation. Therefore, RGB transformation by a 3×3 matrix
cannot produce good matches for the corresponding object colors in
CIE XYZ values. Errors in such matrix transformation can be
especially large for certain object colors. Furthermore, the output
from a matrix transformation can be negative and not physically
meaningful. When such a condition occurs, it is difficult to make a
perceptually meaningful correction.
BRIEF DESCRIPTION OF THE DRAWINGS
[0006] The accompanying drawings are included to provide a further
understanding of the disclosure, and are incorporated in and
constitute a part of the present disclosure. The drawings
illustrate embodiments of the disclosure and, together with the
description, serve to explain the principles of the disclosure. It
is appreciable that the drawings are not necessarily to scale, as
some components may be shown out of proportion to their size in an
actual implementation in order to clearly illustrate the concept
of the present disclosure.
[0007] FIG. 1 is a diagram showing a comparison between a
conventional technique and a novel and non-obvious technique in
accordance with an embodiment of the present disclosure.
[0008] FIG. 2 is a schematic diagram showing an example logic for
conversion from RGB to a uniform color space, or DLAB, in
accordance with an embodiment of the present disclosure.
[0009] FIG. 3 is a chart showing the CIE daylight locus (4000 to
25000 K) for a first camera.
[0010] FIG. 4A is a chart showing a first perspective of
three-dimensional (3-D) distribution of Munsell color data under
illuminant C in RGB of the first camera of FIG. 3.
[0011] FIG. 4B is a chart showing a second perspective of 3-D
distribution of Munsell color data under illuminant C in RGB of the
first camera of FIG. 3.
[0012] FIG. 4C is a chart showing a third perspective of 3-D
distribution of Munsell color data under illuminant C in RGB of the
first camera of FIG. 3.
[0013] FIG. 5 is a chart showing daylight locus in the logarithmic
space for a second camera.
[0014] FIG. 6A and FIG. 6B are charts that together show the
relation between c and d under different L+, a+, and b+ for a third
camera (linear sRGB).
[0015] FIG. 7 is a block diagram of an example apparatus in
accordance with embodiments of the present disclosure.
[0016] FIG. 8 is a flowchart of an example process related to
constructing a uniform color space from raw tristimulus values of
an imaging device in accordance with an embodiment of the present
disclosure.
[0017] FIG. 9 is a flowchart of an example process related to
constructing a uniform color space from raw tristimulus values of
an imaging device in accordance with another embodiment of the
present disclosure.
[0018] FIG. 10 is a flowchart of an example process related to
computing an inverse transform of a uniform color space having a
perceptual color axis substantially aligned with the daylight
variation in accordance with an embodiment of the present
disclosure.
[0019] FIG. 11 is a block diagram of an example device in
accordance with embodiments of the present disclosure.
DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
Overview
[0020] FIG. 1 is a diagram showing a comparison 100 between a
conventional technique 110 and a novel and non-obvious technique
120 in accordance with an embodiment of the present disclosure.
[0021] The CIE 1976 L*a*b* color space, abbreviated as CIELAB, is
defined as follows:
$$L^* = 116\,f\!\left(\frac{Y}{Y_n}\right) - 16, \qquad a^* = 500\left[f\!\left(\frac{X}{X_n}\right) - f\!\left(\frac{Y}{Y_n}\right)\right], \qquad b^* = 200\left[f\!\left(\frac{Y}{Y_n}\right) - f\!\left(\frac{Z}{Z_n}\right)\right],$$
where f(t) = t^(1/3) if t > (24/116)^3, or else
f(t) = (841/108)t + 16/116, with (X, Y, Z) and (X_n, Y_n, Z_n) being
the CIE 1931 tristimulus values of the test object color stimulus and the
specified white object color stimulus. In order to convert camera
RGB to CIELAB, it is necessary to involve a two-step process of
first converting camera RGB to CIE XYZ and then from CIE XYZ to
CIELAB. As shown in FIG. 1, a conventional approach 110 requires a
first conversion from camera RGB to CIE XYZ, e.g., via a 3×3
matrix or other linear or nonlinear transformations, followed by a
second conversion from CIE XYZ to CIELAB. However, information
associated with the image may potentially be lost in the conversion
from camera RGB to CIE XYZ.
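For concreteness, the CIELAB definition above can be written as a short Python sketch. The vectorized helper below is illustrative only and is not part of the patent text; the breakpoints follow the f(t) definition exactly, and the same nonlinearity is reused by the DLAB sketches further below.

```python
import numpy as np

def f(t):
    """CIELAB nonlinearity: cube root above (24/116)^3, linear below."""
    t = np.asarray(t, dtype=float)
    return np.where(t > (24 / 116) ** 3,
                    np.cbrt(t),
                    (841 / 108) * t + 16 / 116)

def cielab(X, Y, Z, Xn, Yn, Zn):
    """CIE 1976 L*a*b* from tristimulus values and the reference white."""
    L = 116 * f(Y / Yn) - 16
    a = 500 * (f(X / Xn) - f(Y / Yn))
    b = 200 * (f(Y / Yn) - f(Z / Zn))
    return L, a, b
```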
[0022] Embodiments of methods and systems according to the present
disclosure construct a uniform color space (hereinafter referred to
as "DLAB") directly from camera RGB without going through CIE XYZ,
thus avoiding or at least minimizing potential information loss in
the conversion from camera RGB to CIE XYZ. As shown in FIG. 1, a
novel and non-obvious approach 120 according to the present
disclosure converts camera RGB directly to DLAB.
[0023] FIG. 2 is a schematic diagram showing an example logic 200
for conversion from RGB to DLAB in accordance with an embodiment of
the present disclosure. Logic 200 and any variation thereof may be
implemented in a circuit, e.g., in a chip such as an image signal
processor, with R/R_n, G/G_n, and B/B_n as input parameters (where
one image corresponds to one set of R_n, G_n, and B_n, while the
values of R, G, and B differ for each pixel) and L+, a+ and b+ as the
output parameters. The other parameters shown in FIG. 2 are
coefficients that may be pre-computed and saved in storage, e.g.,
registers or memory of a chip, for use and/or modification by the
user.
Detailed computations associated with logic 200 when spectral
sensitivity functions are known and unknown are described
later.
[0024] According to an embodiment of the present disclosure, the
three axes of DLAB are lightness L+, red-green a+ and yellow-blue b+
(the daylight direction or locus). In other embodiments, the L+ axis
may be determined by fitting the constant Munsell value plane, the b+
axis may be determined from the daylight plane, and the a+ axis may
be an axis in the direction orthogonal to the b+ axis.
[0025] The present disclosure uncovers three major observations
that form the basis of constructing a uniform color space directly
from camera RGB: (1) daylight locus is very well approximated as a
plane in log R, log G, and log B color space for almost all digital
imaging devices; (2) daylight locus closely approximates the major
axis of the Munsell colors; and (3) daylight locus is also
approximately a plane in normalized linear RGB color space with a
surface normal vector similar to that in log R, log G, log B color
space.
[0026] Based on these three observations, the present disclosure
provides two methods of constructing DLAB, which is a uniform color
space, directly from camera RGB without going through CIE XYZ
conversion, depending on the availability of the spectral
sensitivity functions of the imaging device, e.g., smartphone
cameras, digital cameras, document scanners, etc. In particular,
one method constructs DLAB when the spectral sensitivity functions
of the imaging device are known, and the other method constructs
DLAB when the spectral sensitivity functions are unknown. The
present disclosure further provides a method for computing the
inverse transform of DLAB so that conversion between the uniform
color space and camera RGB can be carried out back and forth.
[0027] The choice of the yellow-blue axis in the construction of
DLAB is based on at least two reasons. Firstly, the yellow-blue
axis is generally aligned with the human color perception as well
as the daylight locus. Secondly, once DLAB is implemented in
various cameras and imaging devices, each camera or imaging device
has an axis aligned with the daylight locus in the uniform color
space. This allows a direct color transformation from one
camera/imaging device to another camera/imaging device. Each camera
or imaging device in which embodiments of the present disclosure
are implemented converts its RGB to DLAB which can be converted to
a different camera's RGB by using the inverse transform of DLAB.
That is, with conversion of N different cameras/imaging devices,
the present disclosure limits the number of calibrations necessary
to N instead of N(N-1)/2 as with conventional approaches in which
no DLAB is defined or used. Moreover, it will be understood by
those skilled in the art that an axis of another color that is
generally aligned with one of the perceptual opponent color
processes may alternatively be chosen for DLAB in lieu of the
red-green axis. In other words, daylight is fixed at the yellow-blue
direction and, currently, the direction orthogonal to daylight is
chosen as the red-green direction; however, any color that is close
to the red-green axis may be chosen to align with the red-green
axis. For example, an axis of the color of foliage may be used for
DLAB in lieu of the red-green axis.
[0028] Embodiments of the present disclosure may construct a
uniform color space, or DLAB, directly from camera RGB, without
having to rely on any conversion to CIE XYZ. Such a direct
construction can reduce information loss, especially when the
camera spectral sensitivities are very different from the CIE 1931
xyz color matching functions. Embodiments of the present disclosure
may align one of the chromatic axes of DLAB with the CIE daylight
vector, which is also parallel to the b*-axis of CIELAB at a
correlated color temperature of 7200 K. In doing so, all DLAB color
spaces constructed by methods and systems according to the present
disclosure will have at least one well-aligned opponent color
process. Embodiments of the present disclosure may also scale DLAB
lightness and color axes so that their distances correlate well
with the same color differences as those in CIELAB.
[0029] Embodiments of the present disclosure may be implemented in
various applications such as, for example, an electronic apparatus
equipped with an imaging device or a chip or chipset operating with
an imaging device and configured to perform operations according to
the present disclosure.
Example 1
[0030] When the spectral sensitivity functions of the imaging
device are known, embodiments of the present disclosure may compute
the RGB values of Munsell colors from their measured and corrected
spectral reflectance functions. For instance, a chip vendor that
provides chips or chipsets in which one or more embodiments of the
present disclosure are implemented may have information or data
pertaining to the spectral sensitivity functions of various imaging
devices, e.g., sensors or assemblies of sensor and lens. In such
case coefficients of DLAB may be pre-computed for those imaging
devices and programmed into the chips or chipsets by the
vendor.
[0031] Munsell colors are arranged according to value, hue, and
chroma. Colors of the same Munsell value are of the same CIELAB
lightness, and these colors of the same Munsell value may be used
to determine a constant-value plane. Since the spectral sensitivity
functions are known, embodiments of the present disclosure may also
calculate the RGB values for CIE daylight colors at various
correlated color temperatures. These daylight RGB values form a
plane in log R, log G, and log B color space. Moreover, the
normalized linear daylight RGB values also form a slightly curved
surface which is well approximated as a plane, herein referred to
as the daylight plane.
[0032] In some embodiments, an algorithmic procedure for
constructing DLAB may include the following: (1) finding the normal
vector of the daylight plane, n_d; (2) finding the normal vector of
the constant-value plane, v_y; (3) finding the intersection of the
n_d and v_y planes, b+; (4) finding the direction orthogonal to b+,
namely a+; (5) finding the transformation matrix, K; (6) performing
the nonlinear transformation, f(·); and (7) finding the scale
factors, r and τ (an end-to-end sketch of steps (1)-(5) is given
below).
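As a rough illustration of steps (1) through (5), the sketch below fits both planes and assembles the projection matrix K of Equation (6) with numpy. The plane-fitting helper and its through-origin assumption are illustrative choices for this sketch, not details taken from the patent; the nonlinearity f(·) and the scale factors r and τ are treated separately in the text that follows.

```python
import numpy as np

def fit_plane_normal(points):
    """Least-squares normal of a plane through the origin: the right
    singular vector with the smallest singular value (an assumption
    made for this sketch)."""
    return np.linalg.svd(points)[2][-1]

def construct_dlab_axes(daylight_rgb, munsell_rgb, munsell_Y):
    """Steps (1)-(5): daylight_rgb is (m, 3) normalized daylight RGB,
    munsell_rgb is (t, 3) normalized Munsell RGB (R/Rn, G/Gn, B/Bn),
    munsell_Y is (t,) normalized luminances Y_i/Y_n."""
    # (1) surface normal of the daylight plane
    n_d = fit_plane_normal(daylight_rgb)
    # (2) surface normal of the constant-value plane, Equation (1)
    v_y, *_ = np.linalg.lstsq(munsell_rgb, munsell_Y, rcond=None)
    n_p = v_y / np.linalg.norm(v_y)
    # (3) b+ axis: intersection of the two planes, Equation (4)
    b_axis = np.cross(n_d, n_p)
    # (4) a+ axis: orthogonal direction on the plane, Equation (5)
    a_axis = np.cross(n_p, b_axis)
    # (5) transformation matrix K, Equation (6): the centering of
    #     Equation (3) is q' = (I - o n_p^T / v_np) q with o = [1,1,1]^T
    v_np = n_p.sum()
    P = np.eye(3) - np.outer(np.ones(3), n_p) / v_np
    K = np.vstack([a_axis, b_axis]) @ P
    return v_y, n_d, K
```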
[0033] FIG. 3 is a chart 300 showing the CIE daylight locus (4000
to 25000 K) for a first camera.
[0034] In experiments conducted by inventors of the present
disclosure, the first camera (a consumer camera and herein referred
to as Camera A) was used in the derivation although DLAB may be
designed for and implemented in various trichromatic imaging
systems. The input RGB values used in DLAB are normalized by
reference white R_n, G_n, and B_n, e.g., as determined by an
auto-white-balance algorithm. The output of DLAB includes lightness,
L+, and chromatic coordinates, a+ and b+. DLAB may use the same
cube-root nonlinear transform, f(t), as CIELAB. The lightness, L+, is
the cube-root transform of a linear combination of normalized RGB.
The chromatic coordinates, a+ and b+, are differences of cube-root
transforms of linear combinations of normalized RGB as well; however,
the nonlinear mapping functions are applied to the positive terms and
negative terms separately. The input includes normalized sensor
responses of an image sensor, e.g., a CMOS image sensor. The output
includes (L+, a+, b+). The coefficients v_y1, v_y2 and v_y3 are
computed from constant Munsell value planes. The coefficients k'_11,
k'_13, k'_21 and k'_22 are computed from the daylight plane and its
orthogonal direction. Note that positive and negative terms are
grouped separately and then mapped through the nonlinear function.
The scaling factors, r and τ, are determined to match Munsell chromas
and the CIELAB step size.
[0035] The daylight locus in the logarithmic space can be
approximated by a straight line on a constant-intensity plane, and
used as one of the chromatic basis vectors. The CIE daylight locus
from 4000K to 25000K on the intensity-invariant plane is computed
for Camera A. The daylight locus becomes virtually a straight line,
as shown in FIG. 3. The normal vector of the daylight plane, n_d, for
Camera A is n_d = [0.4415, -0.8156, 0.3741]^T.
[0036] FIG. 4A, FIG. 4B and FIG. 4C show three perspectives of 3-D
distribution of Munsell color data under illuminant C in RGB of
Camera A. In particular, chart 410 of FIG. 4A shows the first
perspective, chart 420 of FIG. 4B shows the second perspective, and
chart 430 of FIG. 4C shows the third perspective. In FIG. 4A, FIG.
4B and FIG. 4C, each group of concentric ellipses is associated with
a particular value of Munsell lightness. As can be seen in FIG. 4A,
FIG. 4B and FIG. 4C, the constant-value plane has a certain slope and
is not perpendicular to any of the R/R_n, G/G_n, and B/B_n axes.
[0037] From the distribution of Munsell color data in 3-D plot, as
shown in FIGS. 4A-4C, it can be observed that Munsell RGB under
illuminant C has similar ellipses for each value. Furthermore, for
a given Munsell value, Munsell colors are on a constant-luminance
plane. In order to find the plane of constant Munsell value,
luminance Y is approximated by the linear combination of RGB by
Equation (1) below:
$$\begin{bmatrix} R_1/R_n & G_1/G_n & B_1/B_n \\ R_2/R_n & G_2/G_n & B_2/B_n \\ \vdots & \vdots & \vdots \\ R_t/R_n & G_t/G_n & B_t/B_n \end{bmatrix} v_y = \begin{bmatrix} Y_1/Y_n \\ Y_2/Y_n \\ \vdots \\ Y_t/Y_n \end{bmatrix},$$
where R_i, G_i, B_i, and Y_i are the Munsell RGB responses and the
luminances of the Munsell colors, t is the total number of Munsell
colors (1021 for the experiments conducted by the inventors), and
v_y = [v_y1, v_y2, v_y3]^T is the surface normal vector of the
constant-value plane and the direction for lightness.
[0038] Given a normalized Munsell RGB point
q = [R/R_n, G/G_n, B/B_n]^T, the lightness L+ of DLAB has the same
scale as that of CIELAB, as expressed by Equation (2) below:
$$L^+ = 116\,f(v_y^T q) - 16.$$
[0039] In the normalized RGB space, the neutral colors have equal
normalized RGB values,
i.e., R/R_n = G/G_n = B/B_n. In order to let the reference white be
the center of the Munsell constant-value plane, the center direction
of the plane is set to o = [1, 1, 1]^T. Given a normalized Munsell
RGB point q on the plane with normal vector n_p, the normalized
vector of v_y, where n_p = v_y / sqrt(v_y1^2 + v_y2^2 + v_y3^2), the
center of the plane, o_q, may be shifted to the origin. Since q and
o_q are on the same plane with normal vector n_p,
n_p^T q = n_p^T o_q; in addition, o_q is in the direction of
o = [1, 1, 1]^T. Therefore, the center of the plane can be expressed
as o_q = (n_p^T q / n_p^T o) o. Let n_p = [n_p1, n_p2, n_p3]^T and
v_np = n_p1 + n_p2 + n_p3. The shifted point, q' = q - o_q, is
expressed as Equation (3) below:
$$q' = \begin{bmatrix} R/R_n \\ G/G_n \\ B/B_n \end{bmatrix} - \frac{n_{p1}\frac{R}{R_n} + n_{p2}\frac{G}{G_n} + n_{p3}\frac{B}{B_n}}{v_{n_p}}\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} = \frac{1}{v_{n_p}}\begin{bmatrix} n_{p2}+n_{p3} & -n_{p2} & -n_{p3} \\ -n_{p1} & n_{p1}+n_{p3} & -n_{p3} \\ -n_{p1} & -n_{p2} & n_{p1}+n_{p2} \end{bmatrix} q.$$
[0040] The daylight direction is chosen as the b+ opponent process.
The daylight direction on the Munsell constant-value plane is the
intersection of the daylight plane and the Munsell constant-value
plane. Therefore, the intersection line of these two planes is given
by the cross product of the two normal vectors, as expressed by
Equation (4) below:
$$b^+ = n_d \times n_p.$$
[0041] The other opponent process, a+, is chosen to be orthogonal to
the b+ opponent process. Accordingly, Equation (5) below is obtained:
$$a^+ = n_p \times b^+.$$
[0042] The transformation matrix, K, that projects the RGB values
onto the a+ and b+ axes can be determined by Equation (6) below:
$$\begin{bmatrix} a^+_{\text{linear}} \\ b^+_{\text{linear}} \end{bmatrix} = \begin{bmatrix} a^{+T} \\ b^{+T} \end{bmatrix} q' = \frac{1}{v_{n_p}}\begin{bmatrix} a^{+T} \\ b^{+T} \end{bmatrix}\begin{bmatrix} n_{p2}+n_{p3} & -n_{p2} & -n_{p3} \\ -n_{p1} & n_{p1}+n_{p3} & -n_{p3} \\ -n_{p1} & -n_{p2} & n_{p1}+n_{p2} \end{bmatrix} q = \begin{bmatrix} k_{11} & k_{12} & k_{13} \\ k_{21} & k_{22} & k_{23} \end{bmatrix} q = Kq.$$
[0043] Since it is desirable that the opponents a+ and b+ behave like
CIELAB a* and b*, the nonlinear mapping is performed on positive and
negative k_ij's separately. From the observations made by the
inventors, k_12 and k_23 are negative. Since the weight of each color
channel response is relative, in some embodiments the coefficients of
the negative terms are set to -1 and the weights of the positive
terms are normalized accordingly. Thus, the two opponent color
processes, a+_ns and b+_ns, before global scaling are defined by
Equations (7) and (8) below:
$$a^+_{ns} = f\!\left(k'_{11}\frac{R}{R_n} + k'_{13}\frac{B}{B_n}\right) - f\!\left(\frac{G}{G_n}\right), \qquad b^+_{ns} = f\!\left(k'_{21}\frac{R}{R_n} + k'_{22}\frac{G}{G_n}\right) - f\!\left(\frac{B}{B_n}\right),$$
where k'_1j = -k_1j/k_12 and k'_2j = -k_2j/k_23.
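As a minimal sketch of the sign normalization and of Equations (7) and (8), reusing f(t) from the Overview sketch (the helper names are illustrative):

```python
def opponent_coefficients(K):
    """Normalize K per the sign convention above: the negative entries
    k12 and k23 are set to -1, so k'_1j = -k_1j/k_12, k'_2j = -k_2j/k_23."""
    k11p, k13p = -K[0, 0] / K[0, 1], -K[0, 2] / K[0, 1]
    k21p, k22p = -K[1, 0] / K[1, 2], -K[1, 1] / K[1, 2]
    return k11p, k13p, k21p, k22p

def opponents_ns(R, G, B, coeffs):
    """Pre-scaling opponents of Equations (7) and (8); R, G, B are the
    normalized channels R/Rn, G/Gn, B/Bn."""
    k11p, k13p, k21p, k22p = coeffs
    a_ns = f(k11p * R + k13p * B) - f(G)
    b_ns = f(k21p * R + k22p * G) - f(B)
    return a_ns, b_ns
```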
[0044] The a+_ns and b+_ns of the Munsell colors still form ellipses,
not circles. Therefore, singular value decomposition (SVD) is applied
to a+_ns and b+_ns to find the ratio between them, as shown in
Equation (9) below:
$$\begin{bmatrix} a^+_{ns,1} & b^+_{ns,1} \\ a^+_{ns,2} & b^+_{ns,2} \\ \vdots & \vdots \\ a^+_{ns,t} & b^+_{ns,t} \end{bmatrix} = U_{ns}\,\Sigma_{ns}\,V_{ns}^T, \quad \text{where} \quad \Sigma_{ns} = \begin{bmatrix} \sigma_{ns,1} & 0 \\ 0 & \sigma_{ns,2} \\ 0 & 0 \\ \vdots & \vdots \end{bmatrix}.$$
[0045] The ratio, r, of the first two singular values, σ_ns,1 and
σ_ns,2, is used to adjust the shape of the ellipses so that they
become close to circles. Specifically, b+_ns is rescaled by
multiplying by r, which is defined by Equation (10) below:
$$r = \sigma_{ns,2} / \sigma_{ns,1}.$$
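Equations (9) and (10) amount to a single SVD of the stacked opponent coordinates; a minimal numpy sketch:

```python
import numpy as np

def shape_ratio(a_ns, b_ns):
    """r = sigma_ns2 / sigma_ns1 from Equations (9) and (10);
    a_ns and b_ns are length-t arrays over the Munsell colors."""
    sv = np.linalg.svd(np.column_stack([a_ns, b_ns]), compute_uv=False)
    return sv[1] / sv[0]
```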
[0046] In order for DLAB to have the same scale as CIELAB, SVD is
applied to CIELAB under illuminant C and to the rescaled ellipses,
respectively, as expressed by Equations (11) and (12) below:
$$\begin{bmatrix} a^*_1 & b^*_1 \\ a^*_2 & b^*_2 \\ \vdots & \vdots \\ a^*_{1021} & b^*_{1021} \end{bmatrix} = U_c\,\Sigma_c\,V_c^T,$$
where a*_i and b*_i are the CIELAB chromatic coordinates for Munsell
color i, and
$$\begin{bmatrix} a^+_{ns,1} & r\,b^+_{ns,1} \\ a^+_{ns,2} & r\,b^+_{ns,2} \\ \vdots & \vdots \\ a^+_{ns,1021} & r\,b^+_{ns,1021} \end{bmatrix} = U_s\,\Sigma_s\,V_s^T.$$
[0047] The rescaled ellipses are close to circles, which are close to
the shape of CIELAB. Thus, the scaling factor, τ, is the ratio of the
sums of singular values. Let
$$\Sigma_c = \begin{bmatrix} \sigma_{c1} & 0 \\ 0 & \sigma_{c2} \\ 0 & 0 \\ \vdots & \vdots \end{bmatrix} \quad \text{and} \quad \Sigma_s = \begin{bmatrix} \sigma_{s1} & 0 \\ 0 & \sigma_{s2} \\ 0 & 0 \\ \vdots & \vdots \end{bmatrix}.$$
The scaling factor for DLAB is defined by Equation (13) below:
$$\tau = \frac{\sigma_{c1} + \sigma_{c2}}{\sigma_{s1} + \sigma_{s2}}.$$
Therefore, the two chromatic coordinates for DLAB are expressed by
Equations (14) and (15) below:
$$a^+ = \tau\left[f\!\left(k'_{11}\frac{R}{R_n} + k'_{13}\frac{B}{B_n}\right) - f\!\left(\frac{G}{G_n}\right)\right], \qquad b^+ = r\,\tau\left[f\!\left(k'_{21}\frac{R}{R_n} + k'_{22}\frac{G}{G_n}\right) - f\!\left(\frac{B}{B_n}\right)\right].$$
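Putting Equations (2), (14) and (15) together, a per-pixel forward transform might look like the sketch below, assuming all per-sensor constants (v_y, the k' coefficients, r and τ) have been pre-computed as derived above, and reusing f(t) from the Overview sketch:

```python
import numpy as np

def rgb_to_dlab(q, v_y, coeffs, r, tau):
    """Forward DLAB transform, Equations (2), (14) and (15).
    q is (..., 3) normalized camera RGB, i.e. (R/Rn, G/Gn, B/Bn)."""
    R, G, B = q[..., 0], q[..., 1], q[..., 2]
    k11p, k13p, k21p, k22p = coeffs
    L = 116 * f(v_y[0] * R + v_y[1] * G + v_y[2] * B) - 16    # Eq. (2)
    a = tau * (f(k11p * R + k13p * B) - f(G))                 # Eq. (14)
    b = r * tau * (f(k21p * R + k22p * G) - f(B))             # Eq. (15)
    return np.stack([L, a, b], axis=-1)
```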
Example 2
[0048] When the camera spectral sensitivity functions are unknown,
images of a color target with known Munsell color notations or
spectral reflectance functions, such as a Macbeth color checker or
any other suitable color checker, may be taken to obtain raw camera
RGB values. For instance, a chip vendor that provides chips or
chipsets in which one or more embodiments of the present disclosure
are implemented may not have information or data pertaining to the
spectral sensitivity functions of various imaging devices, e.g.,
sensors or assemblies of sensor and lens. In such case, a user of a
portable electronics apparatus equipped with an imaging device and
a chip or chipset in which one or more embodiments of the present
disclosure are implemented can take a number of images of the color
target. Based on these images, the chip or chipset in the portable
electronic apparatus may be able to construct DLAB, e.g., by least
square fitting using techniques described below regarding FIG.
5.
[0049] Since the CIE xyY values and the Munsell hue, value, and
chroma can be computed from the known spectral reflectance functions
of the color target, such as a Macbeth color checker, such
information may be used for each color patch. For example, CIE Y can
be used to evaluate the normal vector of the constant-value plane,
and hue plus chroma information can be used to evaluate the scale
factors for the two opponent axes.
[0050] Construction of DLAB from a known color target is similar to
the algorithmic procedure described above when camera spectral
sensitivity functions are known. For illustrative purposes, and
without intending to limit the scope of the present disclosure, the
Macbeth color checker is used as the color target to explain the
algorithmic procedure. As those skilled in the art will appreciate,
the same construction process may work for various color targets
that have sufficiently many color patches covering an adequate range
of Munsell hue, value, and chroma. The input RGB values used in DLAB
are normalized by the reference white R_n, G_n, and B_n. For an image
containing the Macbeth color checker, the reference white is the
scaled RGB of the white color patch: since the reflectance of the
white patch is 90%, the reference white is 10/9 of the white patch's
RGB. Details of the procedure are described below.
[0051] FIG. 5 is a chart 500 showing daylight locus in the
logarithmic space for a second camera.
[0052] The daylight locus in the logarithmic space can be
approximated by a straight line on a constant-intensity plane, and
used as one of the chromatic basis vectors. The daylight direction
of a camera sensor can be estimated from images taken under various
phases of daylight. FIG. 5 shows the daylight locus in the
logarithmic space for the second camera (a consumer camera and
herein referred to as Camera B). Each point in FIG. 5 is related to
the RGB values of the gray card in an image taken outdoors. The line
is fitted by minimizing the sum of the squares of the perpendicular
distances from the observed points to the line being determined. The
slope m of the line is used to calculate the daylight direction n_d,
as expressed by Equation (16) below:
$$n_d = \frac{1}{\sqrt{2m^2 + 6}}\begin{bmatrix} 2 \\ -(m+1) \\ m-1 \end{bmatrix}.$$
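A small sketch of the orthogonal line fit and of Equation (16) follows. The 2-D coordinates handed to the fit are assumed to be the log-space chromaticity coordinates plotted in FIG. 5 (the text does not name the axes explicitly); for what it is worth, a slope near 2.7 reproduces the Camera A normal quoted earlier.

```python
import numpy as np

def fit_slope_orthogonal(x, y):
    """Total least-squares slope: minimizes the perpendicular distances
    by taking the first principal direction of the centered points."""
    pts = np.column_stack([x - x.mean(), y - y.mean()])
    direction = np.linalg.svd(pts)[2][0]
    return direction[1] / direction[0]

def daylight_normal(m):
    """Daylight direction n_d from the fitted slope m, Equation (16)."""
    n = np.array([2.0, -(m + 1.0), m - 1.0])
    return n / np.sqrt(2 * m**2 + 6)
```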
[0053] In order to find the plane of constant Munsell value,
luminance Y is approximated by the linear combination of RGB, as
expressed by Equation (17) below:
$$\begin{bmatrix} R_1/R_n & G_1/G_n & B_1/B_n \\ R_2/R_n & G_2/G_n & B_2/B_n \\ \vdots & \vdots & \vdots \\ R_{24}/R_n & G_{24}/G_n & B_{24}/B_n \end{bmatrix} v_y = \begin{bmatrix} Y_1/Y_n \\ Y_2/Y_n \\ \vdots \\ Y_{24}/Y_n \end{bmatrix},$$
where R_i, G_i, and B_i are the RGB values of the Macbeth color
checker taken from the camera sensor, Y_i is the luminance of the
corresponding patch, and i is the color patch index. Here, i = 19 is
the white color patch; therefore,
[R_n G_n B_n] = (10/9)[R_19 G_19 B_19] is the reference white. The
normal vector of the constant-value plane,
v_y = [v_y1, v_y2, v_y3]^T, can be found by least-squares fitting.
[0054] Given a normalized RGB point q = [R/R_n, G/G_n, B/B_n]^T, the
lightness L+ of DLAB has the same scale as that of CIELAB, as
expressed by Equation (18) below:
$$L^+ = 116\,f(v_y^T q) - 16.$$
[0055] When n_d and v_y are known, the two opponent color processes,
a+_ns and b+_ns, before global scaling can be calculated. In order to
estimate the scales of the two opponents, it is assumed that the
radius r of one Munsell chroma step in CIELAB is 5.5524, which is
estimated from CIELAB. The scales of the two opponents, s_1 and s_2,
are estimated by least-squares fitting, as expressed by Equation (19)
below:
$$\begin{bmatrix} a^{+2}_{ns,1} & b^{+2}_{ns,1} \\ a^{+2}_{ns,2} & b^{+2}_{ns,2} \\ \vdots & \vdots \\ a^{+2}_{ns,18} & b^{+2}_{ns,18} \end{bmatrix}\begin{bmatrix} s_1^2 \\ s_2^2 \end{bmatrix} = \begin{bmatrix} r^2\,\mathrm{chroma}_1^2 \\ r^2\,\mathrm{chroma}_2^2 \\ \vdots \\ r^2\,\mathrm{chroma}_{18}^2 \end{bmatrix},$$
where a+_ns,i and b+_ns,i are the non-scaled opponents of the Macbeth
color checker, and chroma_i are its chroma values, with i the color
patch index. Since the six patches on the last row of the Macbeth
color checker are neutral with chroma = 0, eighteen colors are used
to estimate the scalars s_1 and s_2. Therefore, the two chromatic
coordinates for DLAB are expressed by Equations (20) and (21) below:
$$a^+ = s_1\left[f\!\left(k'_{11}\frac{R}{R_n} + k'_{13}\frac{B}{B_n}\right) - f\!\left(\frac{G}{G_n}\right)\right], \qquad b^+ = s_2\left[f\!\left(k'_{21}\frac{R}{R_n} + k'_{22}\frac{G}{G_n}\right) - f\!\left(\frac{B}{B_n}\right)\right].$$
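Equation (19) is a two-unknown linear least-squares problem; a minimal numpy sketch (assuming the fitted squared scales come out positive):

```python
import numpy as np

def estimate_scales(a_ns, b_ns, chroma, r=5.5524):
    """s_1 and s_2 from Equation (19); inputs are the 18 chromatic
    Macbeth patches (the 6 neutral patches have chroma = 0)."""
    A = np.column_stack([np.square(a_ns), np.square(b_ns)])
    target = np.square(r * np.asarray(chroma))      # r^2 * chroma_i^2
    sol, *_ = np.linalg.lstsq(A, target, rcond=None)
    s1, s2 = np.sqrt(sol)                           # assumes sol > 0
    return s1, s2
```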
Example 3
[0056] The present disclosure also provides a method for computing
the inverse transform of DLAB. For example, this method allows
conversion of data associated with an image from DLAB to
corresponding linear sRGB. This also allows adjustment or tuning of
an image (or the data thereof) in DLAB by first converting from RGB
space to DLAB for the adjustment or tuning, followed by conversion
from DLAB back to RGB space (whether camera RGB or sRGB).
[0057] The values of L+, a+ and b+ are defined by Equations (22),
(23) and (24) below:
$$L^+ = 116\,f\!\left(v_{y1}\frac{R}{R_n} + v_{y2}\frac{G}{G_n} + v_{y3}\frac{B}{B_n}\right) - 16, \quad a^+ = s_1\left[f\!\left(k'_{11}\frac{R}{R_n} + k'_{13}\frac{B}{B_n}\right) - f\!\left(\frac{G}{G_n}\right)\right], \quad b^+ = s_2\left[f\!\left(k'_{21}\frac{R}{R_n} + k'_{22}\frac{G}{G_n}\right) - f\!\left(\frac{B}{B_n}\right)\right].$$
Let
$$p_1 = f^{-1}\!\left(\frac{16 + L^+}{116}\right) = v_{y1}\frac{R}{R_n} + v_{y2}\frac{G}{G_n} + v_{y3}\frac{B}{B_n}.$$
[0058] From the equation above, Equation (25) below may be
obtained:
$$\frac{R}{R_n} = \frac{1}{v_{y1}}\left(p_1 - v_{y2}\frac{G}{G_n} - v_{y3}\frac{B}{B_n}\right).$$
Let
$$p_2 = f^{-1}\!\left(f\!\left(\frac{G}{G_n}\right) + \frac{a^+}{s_1}\right) = k'_{11}\frac{R}{R_n} + k'_{13}\frac{B}{B_n}.$$
[0059] From the equation above, if R/R_n is replaced by Equation
(25), Equation (26) below may be obtained:
$$\frac{B}{B_n} = \frac{v_{y1}}{k'_{11}v_{y3} - v_{y1}k'_{13}}\left(\frac{k'_{11}}{v_{y1}}p_1 - \frac{k'_{11}v_{y2}}{v_{y1}}\frac{G}{G_n} - p_2\right) = c_1 p_1 + c_2\frac{G}{G_n} + c_3 p_2 = c,$$
where
$$c_1 = \frac{k'_{11}}{k'_{11}v_{y3} - v_{y1}k'_{13}}, \quad c_2 = \frac{-k'_{11}v_{y2}}{k'_{11}v_{y3} - v_{y1}k'_{13}}, \quad c_3 = \frac{-v_{y1}}{k'_{11}v_{y3} - v_{y1}k'_{13}}.$$
In addition,
$$f^{-1}\!\left(f\!\left(\frac{B}{B_n}\right) + \frac{b^+}{s_2}\right) = \frac{k'_{21}}{v_{y1}}\left(p_1 - v_{y2}\frac{G}{G_n} - v_{y3}\frac{B}{B_n}\right) + k'_{22}\frac{G}{G_n}.$$
[0060] From the equation above, if B/B_n is replaced by Equation
(26), Equation (27) below may be obtained:
$$f^{-1}\!\left(f\!\left(\frac{B}{B_n}\right) + \frac{b^+}{s_2}\right) = \frac{k'_{21}}{v_{y1}}p_1 - \frac{k'_{21}v_{y2} - v_{y1}k'_{22}}{v_{y1}}\frac{G}{G_n} - \frac{k'_{21}v_{y3}}{v_{y1}}\left(c_1 p_1 + c_2\frac{G}{G_n} + c_3 p_2\right) = d_1 p_1 + d_2\frac{G}{G_n} + d_3 p_2 = d,$$
where
$$d_1 = \frac{-k'_{13}k'_{21}}{k'_{11}v_{y3} - v_{y1}k'_{13}}, \quad d_2 = \frac{k'_{13}k'_{21}v_{y2} + k'_{11}k'_{22}v_{y3} - k'_{13}k'_{22}v_{y1}}{k'_{11}v_{y3} - v_{y1}k'_{13}}, \quad d_3 = \frac{k'_{21}v_{y3}}{k'_{11}v_{y3} - v_{y1}k'_{13}}.$$
[0061] From the equation above, it can be seen that
$$f\!\left(\frac{B}{B_n}\right) + \frac{b^+}{s_2} = f(d).$$
In addition, c = B/B_n and p_2 = f^{-1}(f(G/G_n) + a+/s_1).
Therefore, Equation (28) below may be obtained:
$$f\!\left(c_1 p_1 + c_2\frac{G}{G_n} + c_3 f^{-1}\!\left(f\!\left(\frac{G}{G_n}\right) + \frac{a^+}{s_1}\right)\right) - f\!\left(d_1 p_1 + d_2\frac{G}{G_n} + d_3 f^{-1}\!\left(f\!\left(\frac{G}{G_n}\right) + \frac{a^+}{s_1}\right)\right) + \frac{b^+}{s_2} = 0.$$
[0062] Since c_1, c_2, c_3, d_1, d_2, d_3, s_1 and s_2 are fixed for
a given sensor, and a+, b+ and p_1 are known when L+, a+ and b+ are
given, G/G_n is the only variable in the equation. Therefore, the
equation can be treated as h(G/G_n) = 0 and solved by iterative
searching. After solving for G/G_n, B/B_n and R/R_n can be obtained
from Equations (26) and (25).
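A minimal sketch of this single-variable search is given below. The f_inv helper completes the f(t) sketch from the Overview, and the bisection bracket is an illustrative assumption; any standard root finder would do.

```python
import numpy as np

def f_inv(t):
    """Inverse of the CIELAB-style nonlinearity f(t) used earlier."""
    t = np.asarray(t, dtype=float)
    return np.where(t >= 24 / 116, t**3, (108 / 841) * (t - 16 / 116))

def make_h(p1, a_plus, b_plus, consts):
    """h(G/Gn) from Equation (28);
    consts = (c1, c2, c3, d1, d2, d3, s1, s2)."""
    c1, c2, c3, d1, d2, d3, s1, s2 = consts
    def h(g):
        p2 = f_inv(f(g) + a_plus / s1)
        c = c1 * p1 + c2 * g + c3 * p2
        d = d1 * p1 + d2 * g + d3 * p2
        return f(c) - f(d) + b_plus / s2
    return h

def solve_g(h, lo=0.0, hi=4.0, tol=1e-10, max_iter=200):
    """Bisection search for h(G/Gn) = 0; assumes h changes sign on
    [lo, hi] (the bracket is an illustrative choice)."""
    f_lo = h(lo)
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        f_mid = h(mid)
        if abs(f_mid) < tol or (hi - lo) < tol:
            return mid
        if (f_lo < 0) == (f_mid < 0):
            lo, f_lo = mid, f_mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```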
[0063] Charts 610 and 620 of FIG. 6A and FIG. 6B together show the
relation between c and d under different L+, a+ and b+ for a third
camera (linear sRGB).
[0064] Since it is time-consuming to use the iterative method to
solve for G/G_n, a linear approximation may be used to compute the
inverse DLAB. The relation between c and d under different L+, a+,
and b+ for the third camera (a consumer camera and herein referred to
as Camera C) is shown in FIG. 6A and FIG. 6B. When b+ is fixed,
$$d = f^{-1}\!\left(f(c) + \frac{b^+}{s_2}\right).$$
When a+ and L+ are fixed and G/G_n is given,
$$c = c_1 p_1 + c_2\frac{G}{G_n} + c_3 f^{-1}\!\left(f\!\left(\frac{G}{G_n}\right) + \frac{a^+}{s_1}\right) \quad \text{and} \quad d = d_1 p_1 + d_2\frac{G}{G_n} + d_3 f^{-1}\!\left(f\!\left(\frac{G}{G_n}\right) + \frac{a^+}{s_1}\right)$$
can be computed. The inventors discovered that, for fixed a+ and L+,
the relation between c and d is close to linear. Therefore, d can be
approximated from c, as expressed by Equation (29) below:
$$d \approx m_{cd}\,c + q_{cd},$$
where m_cd is the slope and q_cd is the intercept. Two points,
(c_a, d_a) and (c_b, d_b), may be chosen to estimate the slope and
intercept. When G_a is given,
$$c_a = c_1 p_1 + c_2\frac{G_a}{G_n} + c_3 f^{-1}\!\left(f\!\left(\frac{G_a}{G_n}\right) + \frac{a^+}{s_1}\right) \quad \text{and} \quad d_a = d_1 p_1 + d_2\frac{G_a}{G_n} + d_3 f^{-1}\!\left(f\!\left(\frac{G_a}{G_n}\right) + \frac{a^+}{s_1}\right).$$
When G_b is given,
$$c_b = c_1 p_1 + c_2\frac{G_b}{G_n} + c_3 f^{-1}\!\left(f\!\left(\frac{G_b}{G_n}\right) + \frac{a^+}{s_1}\right) \quad \text{and} \quad d_b = d_1 p_1 + d_2\frac{G_b}{G_n} + d_3 f^{-1}\!\left(f\!\left(\frac{G_b}{G_n}\right) + \frac{a^+}{s_1}\right).$$
Since d is approximated from c, Equations (26) and (27) yield
Equation (30) below:
$$m_{cd}\,c + q_{cd} = m_{cd}\frac{B}{B_n} + q_{cd} \approx d = f^{-1}\!\left(f\!\left(\frac{B}{B_n}\right) + \frac{b^+}{s_2}\right).$$
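Estimating the slope and intercept from two probe points is then straightforward; in the sketch below, the probe values G_a and G_b are illustrative assumptions, and f and f_inv are the helpers from the earlier sketches:

```python
def cd_line(p1, a_plus, consts, Ga=0.2, Gb=0.8):
    """m_cd and q_cd of Equation (29) from two probe points (c_a, d_a)
    and (c_b, d_b) computed with Equations (26) and (27)."""
    c1, c2, c3, d1, d2, d3, s1, _ = consts
    def cd(g):
        p2 = f_inv(f(g) + a_plus / s1)
        return (c1 * p1 + c2 * g + c3 * p2,
                d1 * p1 + d2 * g + d3 * p2)
    ca, da = cd(Ga)
    cb, db = cd(Gb)
    m_cd = (db - da) / (cb - ca)
    return m_cd, da - m_cd * ca
```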
[0065] There are four cases for solving
$$m_{cd}\frac{B}{B_n} + q_{cd} = f^{-1}\!\left(f\!\left(\frac{B}{B_n}\right) + \frac{b^+}{s_2}\right)$$
to obtain B/B_n. These four cases are described below.
[0066] When
$$\frac{B}{B_n} \ge \left(\frac{24}{116}\right)^3 \quad \text{and} \quad \left(\frac{B}{B_n}\right)^{1/3} + \frac{b^+}{s_2} \ge \frac{24}{116},$$
herein referred to as Case I, Equation (30) can be expressed as
Equation (31) below:
$$(1 - m_{cd})\frac{B}{B_n} + 3\frac{b^+}{s_2}\left(\frac{B}{B_n}\right)^{2/3} + 3\left(\frac{b^+}{s_2}\right)^2\left(\frac{B}{B_n}\right)^{1/3} + \left(\frac{b^+}{s_2}\right)^3 - q_{cd} = 0.$$
[0067] When
$$\frac{B}{B_n} < \left(\frac{24}{116}\right)^3 \quad \text{and} \quad \frac{841}{108}\,\frac{B}{B_n} + \frac{b^+}{s_2} \ge \frac{8}{116},$$
herein referred to as Case II, Equation (30) can be expressed as
Equation (32) below:
$$\left(\frac{841}{108}\right)^3\left(\frac{B}{B_n}\right)^3 + 3\left(\frac{841}{108}\right)^2\left(\frac{16}{116} + \frac{b^+}{s_2}\right)\left(\frac{B}{B_n}\right)^2 + \left[3\left(\frac{841}{108}\right)\left(\frac{16}{116} + \frac{b^+}{s_2}\right)^2 - m_{cd}\right]\frac{B}{B_n} + \left(\frac{16}{116} + \frac{b^+}{s_2}\right)^3 - q_{cd} = 0.$$
[0068] When
$$\frac{B}{B_n} \ge \left(\frac{24}{116}\right)^3 \quad \text{and} \quad \left(\frac{B}{B_n}\right)^{1/3} + \frac{b^+}{s_2} < \frac{24}{116},$$
herein referred to as Case III, Equation (30) can be expressed as
Equation (33) below:
$$-m_{cd}\frac{B}{B_n} + \frac{108}{841}\left(\frac{B}{B_n}\right)^{1/3} + \frac{108}{841}\left(\frac{b^+}{s_2} - \frac{16}{116}\right) - q_{cd} = 0.$$
[0069] When
$$\frac{B}{B_n} < \left(\frac{24}{116}\right)^3 \quad \text{and} \quad \frac{841}{108}\,\frac{B}{B_n} + \frac{b^+}{s_2} < \frac{8}{116},$$
herein referred to as Case IV, B/B_n can be solved directly by
Equation (34) below:
$$\frac{B}{B_n} = \frac{\frac{108}{841}\,\frac{b^+}{s_2} - q_{cd}}{m_{cd} - 1}.$$
[0070] Cases I, II, and III are cubic equations. The real root of
the cubic equations can be found by Cardano's method.
[0071] Let
x = t - .beta. 3 .alpha. ##EQU00047##
for the cubic equation
.alpha.x.sup.3+.beta.x.sup.2+.gamma.x+.delta.=0. The cubic equation
can be re-written as Equation (35) below:
t 3 + ( .gamma. .alpha. - .beta. 2 3 .alpha. 2 ) t + 2 27 ( .beta.
.alpha. ) 3 - .beta..gamma. 3 .alpha. 2 + .delta. .alpha. = t 3 + k
t + .rho. = 0. ##EQU00048##
[0072] Let t = u + v; the equation can then be written as
(u+v)^3 + κ(u+v) + ρ = 0. Expanding the equation gives
u^3 + v^3 + ρ + (u+v)(3uv + κ) = 0. When 3uv + κ = 0 and
u^3 + v^3 + ρ = 0, the solution of the equation can be found. In this
case, u^3 + v^3 = -ρ and
$$u^3 v^3 = -\frac{\kappa^3}{27}.$$
Therefore, u^3 and v^3 are the roots of
$$X^2 + \rho X - \frac{\kappa^3}{27} = 0.$$
Thus, Equations (36) and (37) below are obtained:
$$u = \sqrt[3]{-\frac{\rho}{2} + \sqrt{\frac{\rho^2}{4} + \frac{\kappa^3}{27}}}, \qquad v = \sqrt[3]{-\frac{\rho}{2} - \sqrt{\frac{\rho^2}{4} + \frac{\kappa^3}{27}}}.$$
[0073] The real root for x is expressed as Equation (38) below:
$$x = u + v - \frac{\beta}{3\alpha}.$$
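A minimal sketch of Equations (35)-(38); it assumes the one-real-root case (non-negative discriminant ρ²/4 + κ³/27), which is the case Equations (36) and (37) address:

```python
import numpy as np

def cardano_real_root(alpha, beta, gamma, delta):
    """Real root of alpha*x^3 + beta*x^2 + gamma*x + delta = 0 via the
    depressed cubic t^3 + kappa*t + rho = 0, Equations (35)-(38)."""
    kappa = gamma / alpha - beta**2 / (3 * alpha**2)
    rho = (2 / 27) * (beta / alpha)**3 \
          - beta * gamma / (3 * alpha**2) + delta / alpha
    disc = rho**2 / 4 + kappa**3 / 27          # assumed non-negative
    u = np.cbrt(-rho / 2 + np.sqrt(disc))      # Equation (36)
    v = np.cbrt(-rho / 2 - np.sqrt(disc))      # Equation (37)
    return u + v - beta / (3 * alpha)          # Equation (38)
```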
[0074] Whichever of these four cases yields a solution satisfying its
own range condition provides the solution for B/B_n. In order to
solve for G/G_n from B/B_n, let Equation (39) below stand:
$$p_3 = f^{-1}\!\left(f\!\left(\frac{B}{B_n}\right) + \frac{b^+}{s_2}\right) = k'_{21}\frac{R}{R_n} + k'_{22}\frac{G}{G_n}.$$
[0075] From Equation (39) above, if R/R_n is replaced by Equation
(25), Equation (40) below can be obtained:
$$\frac{G}{G_n} = \frac{v_{y1}}{k'_{21}v_{y2} - v_{y1}k'_{22}}\left(\frac{k'_{21}}{v_{y1}}p_1 - \frac{k'_{21}v_{y3}}{v_{y1}}\frac{B}{B_n} - p_3\right).$$
[0076] After solving for G/G_n, R/R_n can be obtained from Equation
(25).
Example Implementations
[0077] FIG. 7 is a block diagram of an example apparatus 700
configured to implement techniques, methods and systems in
accordance with embodiments of the present disclosure.
[0078] Example apparatus 700 may perform various functions related
to techniques, methods and systems described herein. In some
embodiments, example apparatus 700 may be a portable electronics
apparatus such as, for example, a smartphone, a personal digital
assistant (PDA) or a portable computing device such as a tablet
computer, a laptop computer, a notebook computer and the like,
which is equipped with an imaging device.
[0079] In some embodiments, example apparatus 700 may include at
least those components enclosed in the solid line of FIG. 7, such
as a camera 710, an image sensor 720, a memory 730 and a processor
740. Although image sensor 720, memory 730 and processor 740 are
illustrated as discrete components separate from each other, in
various embodiments some or all of camera 710, image sensor 720,
memory 730 and processor 740 may be integral parts of a single
module with integrated circuit (IC), chip or chipset. Moreover,
camera 710 and image sensor 720 may be integral parts of a single
module with IC, chip or chipset. Each of image sensor 720, memory
730 and processor 740 may be implemented in the form of a physical
circuit (and optional firmware, middleware, software, or any
combination thereof) configured to perform the respective
function(s) described herein.
[0080] In some other embodiments, example apparatus 700 may be, for
example, a module with an IC, chip, chipset or an assembly of one or
more chips and a printed circuit board (PCB), which may be
implementable in a portable electronics apparatus such as, for
example, a smartphone, a PDA or a portable computing device such as
a tablet computer, a laptop computer, a notebook computer and the
like, which is equipped with an imaging device. In such case,
example apparatus 700 may include at least those components
enclosed in the dashed line of FIG. 7, such as memory 730 and
processor 740. Although memory 730 is illustrated as discrete
components separate from processor 740, in various embodiments,
memory 730 and processor 740 may be integral parts of an IC, chip
or chipset.
[0081] Camera 710 may be an optical instrument and configured to
capture images, which may be still photographs and/or moving images
such as video.
[0082] Image sensor 720 may be configured to sense the images
captured by camera 710 and convert the sensed image to
corresponding electrical data that can be stored in memory 730.
Image sensor 720 may be a charge-coupled device (CCD) or an
active-pixel sensor, e.g., CMOS sensor. In the present disclosure,
image sensor 720 alone or both the image sensor 720 and camera 710
may be considered as an imaging device.
[0083] Memory 730 may be configured to store data, e.g., image
data, and/or one or more sets of processor-executable instructions.
The one or more sets of processor-executable instructions may be
firmware, middleware, software or any combination thereof. Memory
730 may be in the form of any combination of one or more
computer-usable or non-transitory computer-readable media. For
example, memory 730 may be in the form of one or more of a
removable computer diskette, a hard disk, a random access memory
(RAM) device, a read-only memory (ROM) device, an erasable
programmable read-only memory (EPROM or Flash memory) device, a
removable compact disc read-only memory (CDROM), an optical storage
device, a magnetic storage device, or any suitable storage device.
Computer program code for carrying out operations of the present
disclosure may be written in any combination of one or more
programming languages. Such code, or processor-executable
instruction, may be compiled from source code to computer-readable
assembly language or machine code suitable for the device or
computer on which the code will be executed.
[0084] Processor 740 may be an image signal processor, image
processor, media processor, digital signal processor, graphics
processor or the like. Processor 740 may be coupled to camera 710,
image sensor 720 and memory 730 for communication, data access,
control, etc. For example, there may be communication from
processor 740 to image sensor 720 to better interpret the image
captured by camera 710. There may also be communication between
image sensor 720 and memory 730 by way of and/or under the control
of processor 740. Image sensor 720 may also communicate to
processor 740 to improve rendering of the captured image. Processor
740 may also communicate to camera 710 to adjust one or more
parameters of camera 710 such as, for example, focus of lens of
camera 710, zooming in and out of the lens of camera 710, etc.
Processor 740 may store data, e.g., image data, in memory 730 and
retrieve data and/or instructions or code from memory 730.
Processor 740 may execute the one or more sets of instructions
stored in memory 730.
[0085] Processor 740 may be configured to construct a uniform color
space, or DLAB, from raw tristimulus, or RGB, values of an imaging
device, e.g., image sensor 720 or both the image sensor 720 and
camera 710. The techniques described above with respect to
Equations (1)-(21) may be utilized by processor 740 in performing
these operations. For example, processor 740 may execute the one or
more sets of instructions stored in memory 730 to obtain, receive,
retrieve, access or otherwise determine characteristics related to
the imaging device, and determine a direction and a scale of each
of first, second and third perceptual color axes based at least in
part on the characteristics related to the imaging device. The
first perceptual color axis may correlate with lightness, the
second perceptual color axis may correlate with yellow-blue color
variations, and the third perceptual color axis may correlate with
red-green color variations. The second perceptual color axis may be
substantially aligned with the daylight variation.
[0086] When spectral sensitivity functions of the imaging device
are known, in obtaining the characteristics related to the imaging
device, processor 740 may be configured to receive parameters
associated with spectral sensitivity functions of the imaging
device. For example, a vendor of processor 740 may have the
parameters associated with spectral sensitivity functions of one or
more imaging devices, and may store such coefficients in memory 730
for processor 740 to access. Such parameters may be, for example,
those pre-computed coefficients shown in logic 200 of FIG. 2.
[0087] When spectral sensitivity functions of the imaging device
are unknown, in obtaining the characteristics related to the
imaging device, processor 740 may use a color checker with a
plurality of color patches with known Munsell color notations or
known spectral reflectances. Processor 740 may also receive a
plurality of images of the color checker captured under different
phases of daylight by the imaging device. For example, a user of a
portable electronics apparatus equipped with an imaging device and
processor 740 can take a number of images of a color target, and
DLAB can be constructed based on these images, e.g., by least-squares
fitting using the techniques described above regarding FIG. 5.
[0088] In at least some embodiments, the plurality of color patches
may include a series of patches of neutral colors.
[0089] In at least some embodiments, in determining the direction
and the scale of each of the first, second and third perceptual
color axes based at least in part on characteristics related to the
imaging device, processor 740 may compute a daylight plane of the
imaging device, a best-fitting constant lightness plane of Munsell
colors with the constant lightness plane having a surface normal
vector as the first perceptual color axis, and an intersection line
of the daylight plane and the constant lightness plane as the
second perceptual color axis based at least in part on the
characteristics related to the imaging device. Processor 740 may
also determine a first line as the third perceptual color axis, the
first line being on the constant lightness plane and also
orthogonal to the second perceptual color axis. Processor 740 may
further scale the first perceptual color axis, the second
perceptual color axis, and the third perceptual color axis so that
a resulting distance in the uniform color space and a distance of
Munsell colors in a CIELAB color space are substantially the
same.
[0090] In at least some embodiments, the second perceptual color
axis may be the yellow-blue axis and the third perceptual color
axis may be orthogonal to the second perceptual color axis.
Alternatively, the third perceptual color axis may be an axis of a
predefined color correlated with the perceptual red-green axis,
e.g., an axis of the color of foliage.
[0091] In at least some embodiments, in scaling the first
perceptual color axis, the second perceptual color axis, and the
third perceptual color axis, processor 740 may scale the first
perceptual color axis, the second perceptual color axis, and the
third perceptual color axis with weighted errors so that one or
more chosen colors are weighted more heavily relative to other
colors to emphasize a fidelity of the one or more chosen colors as
represented in the uniform color space.
[0092] In at least some embodiments, processor 740 may also be
configured to perform operations to compute an inverse transform of
a uniform color space having a perceptual color axis substantially
aligned with the daylight variation. The techniques described above
with respect to Equations (22)-(40) may be utilized by processor
740 in performing these operations. For example, processor 740 may
reduce a plurality of equations describing the uniform color space
into a nonlinear equation with a single variable. Processor 740 may
examine a behavior of the nonlinear equation for a plurality of
input ranges to determine a proper projection for linear
approximation. Processor 740 may also solve one or more
combinational cases of a third-degree polynomial and a first-degree
polynomial from the linear approximation in the projection to
provide a solution. For example, processor 740 may solve Equations
(31)-(34) for cases I, II, III and IV as described above to arrive
at a solution. For example, processor 740 may determine if the
solution of B/B.sub.n is located within the range for the case. For
the B/B.sub.n calculated from Equation (31), processor 740 may
determine if B/B.sub.n is greater than or equal to the value of
(24/116).sup.3 and if (B/B.sub.n).sup.2/3+(b+s)/2 is greater than or
equal to the value of 24/116. If the result of the determination is
positive (i.e., B/B.sub.n≥(24/116).sup.3 and
(B/B.sub.n).sup.2/3+(b+s)/2≥24/116), then the B/B.sub.n calculated
from Equation (31) is the solution. If not, processor 740 may
determine whether the solution of B/B.sub.n from Equations (32)-(34)
is in the corresponding range.
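A minimal sketch of the case-I range test is given below, assuming b
and s denote the corresponding quantities in Equations (31)-(34),
which are not reproduced here; the roots for cases II, III and IV
would be tested against their own ranges in the same manner.

    def accept_case_I(ratio, b, s):
        """Range test for the case-I root, where ratio is the B/B.sub.n
        value solved from Equation (31). The short-circuit `and` ensures
        the fractional power is only taken for a non-negative ratio."""
        t = 24.0 / 116.0  # breakpoint of the CIELAB-style piecewise transform
        return ratio >= t ** 3 and ratio ** (2.0 / 3.0) + (b + s) / 2.0 >= t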
[0093] Processor 740 may further determine whether the solution is
within a color gamut of the imaging device. In addition, processor
740 may map an out-of-gamut solution into an in-gamut color
according to a gamut mapping strategy. For example, processor 740
may preserve hue information by scaling without clipping, e.g., by
fixing the ratio of a.sup.+ and b.sup.+ as in [L.sup.+ a.sup.+
b.sup.+][L.sup.+ ra.sup.+ rb.sup.+] where r is the maximal scaling
factor that will bring all color channels within valid range.
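One way to realize this strategy, sketched below under the
assumption that shrinking a.sup.+ and b.sup.+ toward the neutral
axis monotonically brings the color into gamut, is a bisection
search for the maximal scaling factor r; the in_gamut callable is a
hypothetical stand-in for the device gamut test.

    def gamut_map_scale(L, a, b, in_gamut, iters=24):
        """Hue-preserving gamut mapping: shrink a and b by a common
        factor r instead of clipping the channels independently."""
        lo, hi = 0.0, 1.0
        for _ in range(iters):        # bisection on the scaling factor r
            r = 0.5 * (lo + hi)
            if in_gamut(L, r * a, r * b):
                lo = r                # r is feasible; try a larger factor
            else:
                hi = r
        return L, lo * a, lo * b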
[0094] FIG. 8 is a flowchart of an example process 800 related to
constructing a uniform color space from raw tristimulus values of
an imaging device in accordance with an embodiment of the present
disclosure.
[0095] Example process 800 may include one or more operations,
actions, or functions as illustrated by one or more of blocks 810
and 820. Although illustrated as discrete blocks, various blocks
may be divided into additional blocks, combined into fewer blocks,
or eliminated, depending on the desired implementation. Example
process 800 may be implemented by processor 740 of example
apparatus 700. For illustrative purposes, the operations described
below are performed by processor 740 of example apparatus 700. The
techniques described above with respect to Equations (1)-(21) may
be utilized by processor 740, or any other suitable one or more
processors, in performing operations pertaining to blocks 810 and
820 of example process 800. Example process 800 may begin at block
810.
[0096] Block 810 (Obtain Characteristics Related To An Imaging
Device) may refer to processor 740 obtaining, receiving,
retrieving, accessing or otherwise determining characteristics
related to the imaging device. Block 810 may be followed by block
820.
[0097] Block 820 (Determine A Direction And A Scale For Each Of
First, Second And Third Axes With The Characteristics Related To
The Imaging Device) may refer to processor 740 determining a
direction and a scale of each of first, second and third perceptual
color axes based at least in part on the characteristics related to
the imaging device. As a result, the first perceptual color axis
may correlate with lightness, the second perceptual color axis may
correlate with yellow-blue color variations, and the third
perceptual color axis may correlate with red-green color
variations. Additionally, the second perceptual color axis may be
substantially aligned with the daylight variation.
[0098] In at least some embodiments, in obtaining the
characteristics related to the imaging device, example process 800
may involve the processor 740 receiving parameters associated with
spectral sensitivity functions of the imaging device.
[0099] Alternatively, in obtaining the characteristics related to
the imaging device, example process 800 may involve the processor
740 performing operations including: using a color checker with a
plurality of color patches with known Munsell color notations or
known spectral reflectances; and receiving a plurality of images of
the color checker captured under different phases of daylight by
the imaging device. In at least some embodiments, the plurality of
color patches may include a series of patches of neutral
colors.
[0100] In at least some embodiments, in determining the direction
and the scale of each of the first, second and third perceptual
color axes based at least in part on characteristics related to the
imaging device, example process 800 may involve the processor 740
performing operations including: computing a daylight plane of the
imaging device, a best-fitting constant lightness plane of Munsell
colors with the constant lightness plane having a surface normal
vector as the first perceptual color axis, and an intersection line
of the daylight plane and the constant lightness plane as the
second perceptual color axis based at least in part on the
characteristics related to the imaging device; determining a first
line as the third perceptual color axis, the first line being on
the constant lightness plane and also orthogonal to a second
perceptual color axis; and scaling the first perceptual color axis,
the second perceptual color axis, and the third perceptual color
axis so that a resulting distance in the uniform color space and a
distance of Munsell colors in a CIELAB color space are
substantially the same.
[0101] In at least some embodiments, the second perceptual color
axis may be the yellow-blue axis, e.g., aligned with daylight, and
the third
perceptual color axis may be orthogonal to the second perceptual
color axis. Alternatively, the third perceptual color axis may be
an axis of a predefined color correlated with the perceptual
red-green axis, e.g., an axis of the color of foliage. That is,
example process 800 may find other colors, such as foliage, to align
with the red-green axis. Such colors need not be orthogonal to the
yellow-blue axis.
[0102] In at least some embodiments, in scaling the first
perceptual color axis, the second perceptual color axis, and the
third perceptual color axis, example process 800 may involve the
processor 740 scaling the first perceptual color axis, the second
perceptual color axis, and the third perceptual color axis with
weighted errors so that one or more chosen colors are weighted more
heavily relative to other colors to emphasize a fidelity of the one
or more chosen colors as represented in the uniform color
space.
[0103] In at least some embodiments, example process 800 may
involve the processor 740 performing additional operations
including: reducing a plurality of equations describing the uniform
color space into a nonlinear equation with a single variable;
examining a behavior of the nonlinear equation for a plurality of
input ranges to determine the best projection for linear
approximation; solving one or more combinational cases of a
third-degree polynomial and a first-degree polynomial from the
linear approximation in the projection to provide a solution;
determining whether the solution is within a color gamut of the
imaging device; and mapping an out-of-gamut solution into an
in-gamut color according to a gamut mapping strategy.
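These five operations compose into a simple control flow, sketched
below with hypothetical callables standing in for the case solver of
Equations (22)-(40) and for the gamut test and mapping strategy.

    def inverse_transform(color, solve_cases, in_gamut, gamut_map):
        """Control-flow skeleton of the five operations listed above."""
        solution = solve_cases(color)   # reduce, examine and solve cases
        if not in_gamut(solution):      # device gamut test
            solution = gamut_map(solution)
        return solution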
[0104] FIG. 9 is a flowchart of an example process 900 related to
constructing a uniform color space from raw tristimulus values of
an imaging device in accordance with another embodiment of the
present disclosure.
[0105] Example process 900 may include one or more operations,
actions, or functions as illustrated by one or more of blocks 910,
920 and 930. Although illustrated as discrete blocks, various
blocks may be divided into additional blocks, combined into fewer
blocks, or eliminated, depending on the desired implementation.
Example process 900 may be implemented by processor 740 of example
apparatus 700. For illustrative purposes, the operations described
below are performed by processor 740 of example apparatus 700. The
techniques described above with respect to Equations (1)-(21) may
be utilized by processor 740, or any other suitable one or more
processors, in performing operations pertaining to blocks 910, 920
and 930 of example process 900. Example process 900 may begin at
block 910.
[0106] Block 910 (Obtain Characteristics Related To An Imaging
Device) may refer to processor 740 obtaining, receiving,
retrieving, accessing or otherwise determining characteristics
related to the imaging device. Block 910 may be followed by block
920.
[0107] Block 920 (Compute First, Second And Third Axes Of A Uniform
Color Space With The Characteristics Related To The Imaging Device)
may refer to processor 740 computing first, second and third
perceptual color axes. The first perceptual color axis may be
correlated with lightness. The second perceptual color axis may be
correlated with a first color variation and substantially aligned
with the daylight variation. The third perceptual color axis may be
correlated with a second color variation and orthogonal to the
second perceptual color axis. Block 920 may be followed by block
930.
[0108] Block 930 (Scale The First, Second And Third Axes) may refer
to processor 740 scaling the first, second and third perceptual
color axes so that a resulting distance in the uniform color space
and a distance of Munsell colors in a CIELAB color space are
substantially the same.
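As a minimal sketch of this scaling step, assuming for illustration
that the Munsell colors are available both as coordinates along the
unscaled axes and in CIELAB with corresponding axes aligned, a
per-axis least-squares scale can be computed from mean-centered
coordinates:

    import numpy as np

    def per_axis_scales(coords_unscaled, coords_cielab):
        """Per-axis scales making Munsell coordinate deviations along
        each unscaled axis match the corresponding CIELAB deviations in
        the least-squares sense.

        coords_unscaled : (N, 3) Munsell colors on the unscaled axes
        coords_cielab   : (N, 3) the same colors in CIELAB, axes aligned
        """
        d_u = coords_unscaled - coords_unscaled.mean(axis=0)
        d_c = coords_cielab - coords_cielab.mean(axis=0)
        return np.sum(d_u * d_c, axis=0) / np.sum(d_u ** 2, axis=0)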
[0109] In at least some embodiments, in obtaining characteristics
related to the imaging device, example process 900 may involve the
processor 740 receiving parameters associated with spectral
sensitivity functions of the imaging device.
[0110] Alternatively, in obtaining characteristics related to the
imaging device, example process 900 may involve the processor 740
using a color checker with a plurality of color patches with known
Munsell color notations or known spectral reflectances.
Additionally, example process 900 may also involve the processor
740 receiving a plurality of images of the color checker captured
under different phases of daylight by the imaging device. In at
least some embodiments, the plurality of color patches may include
a series of patches of neutral colors.
[0111] In at least some embodiments, in computing first, second and
third perceptual color axes, example process 900 may involve the
processor 740 computing a daylight plane of the imaging device, a
constant lightness plane of Munsell colors with the constant
lightness plane having a surface normal vector as the first
perceptual color axis, and an intersection line of the daylight
plane and the constant lightness plane as the second perceptual
color axis based at least in part on the characteristics related to
the imaging device. Additionally, example process 900 may also
involve the processor 740 determining a first line as the third
perceptual color axis, with the first line on the constant
lightness plane and orthogonal to the second perceptual color
axis.
[0112] In at least some embodiments, the second perceptual color
axis may be a yellow-blue axis, e.g., aligned with daylight, and the
third
perceptual color axis may be orthogonal to the second perceptual
color axis. Alternatively, the third perceptual color axis may be
an axis of a color correlated with the perceptual red-green axis,
e.g., an axis of the color of foliage. That is, example process 900
may find other colors, such as foliage, to align with the red-green
axis. Such colors need not be orthogonal to the yellow-blue axis.
[0113] In at least some embodiments, in scaling, example process
900 may involve the processor 740 scaling with weighted errors so
that one or more chosen colors are weighted more heavily relative
to other colors to emphasize a fidelity of the one or more chosen
colors as represented in the uniform color space. In at least some
embodiments, the one or more chosen colors may include at least a
color of skin, grass, blue sky, or any other user-chosen color.
[0114] FIG. 10 is a flowchart of an example process 1000 related to
computing an inverse transform of a uniform color space having a
perceptual color axis substantially aligned with the daylight
variation in accordance with an embodiment of the present
disclosure.
[0115] Example process 1000 may include one or more operations,
actions, or functions as illustrated by one or more of blocks 1010,
1020, 1030, 1040 and 1050. Although illustrated as discrete blocks,
various blocks may be divided into additional blocks, combined into
fewer blocks, or eliminated, depending on the desired
implementation. Example process 1000 may be implemented by
processor 740 of example apparatus 700. For illustrative purposes,
the operations described below are performed by processor 740 of
example apparatus 700. The techniques described above with respect
to Equations (22)-(40) may be utilized by processor 740, or any
other suitable one or more processors, in performing operations
pertaining to blocks 1010, 1020, 1030, 1040 and 1050 of example
process 1000. Example process 1000 may begin at block 1010.
[0116] Block 1010 (Reduce Equations Describing A Uniform Color
Space Into A Nonlinear Equation With A Single Variable) may refer
to processor 740 reducing a plurality of equations describing the
uniform color space into a nonlinear equation with a single
variable. Block 1010 may be followed by block 1020.
[0117] Block 1020 (Examine A Behavior Of The Nonlinear Equation For
Plural Input Ranges To Determine A Projection For Linear
Approximation) may refer to processor 740 examining a behavior of
the nonlinear equation for a plurality of input ranges to determine
the best projection for linear approximation. Block 1020 may be
followed by block 1030.
[0118] Block 1030 (Solve Combinational Cases Of A Third-Degree
Polynomial And A First-Degree Polynomial From The Linear
Approximation In The Projection To Provide A Solution) may refer to
processor 740 solving one or more combinational cases of a
third-degree polynomial and a first-degree polynomial from the
linear approximation in the projection to provide a solution. Block
1030 may be followed by block 1040.
[0119] Block 1040 (Determine Whether The Solution Is Within A Color
Gamut Of The Imaging Device) may refer to processor 740 determining
whether the solution is within a color gamut of the imaging device.
Block 1040 may be followed by block 1050.
[0120] Block 1050 (Map An Out-Of-Gamut Solution Into An In-Gamut
Color According To A Gamut Mapping Strategy) may refer to processor
740 mapping an out-of-gamut solution into an in-gamut color
according to a gamut mapping strategy.
[0121] FIG. 11 is a block diagram of an example device 1100
configured to implement techniques, methods and systems in
accordance with embodiments of the present disclosure.
[0122] In some embodiments, example device 1100 may be, for
example, an IC, chip, chipset or an assembly of one or more chips
and a PCB, which may be implementable in an imaging device such as
a camera. In some other embodiments, example device 1100 may be,
for example, an IC, chip, chipset or an assembly of one or more
chips and a PCB, which may be implementable in a portable
electronics apparatus such as, for example, a smartphone, a
personal digital assistant (PDA) or a portable computing device
such as a tablet computer, a laptop computer, a notebook computer
and the like, where such a portable electronics apparatus is
equipped with an imaging device.
[0123] Example device 1100 may include a memory 1110 and a
processor 1120.
[0124] Memory 1110 may be configured to store data representative
of characteristics related to an imaging device in which example
device 1100 may be implemented. For example, when characteristics
related to the imaging device (e.g., spectral sensitivity functions
of the imaging device) are known, data pertaining to the
characteristics of the imaging device, such as coefficients of
DLAB, may be pre-computed and stored in memory 1110 by a
vendor.
[0125] Processor 1120 may be configured to store data in and access
data from memory 1110. In some embodiments, processor 1120 alone or
both processor 1120 and memory 1110 may be implemented as processor
740 of example apparatus 700 of FIG. 7.
[0126] Processor 1120 may include a computation unit 1130 and a
scaling unit 1140. Optionally, processor 1120 may also include a
characteristics obtaining unit 1150. Still optionally, processor
1120 may further include an inverse transformation unit 1160. Each
of the computation unit 1130, scaling unit 1140, characteristics
obtaining unit 1150 and inverse transformation unit 1160 may be
implemented in the form of a physical circuit (and optional
firmware, middleware, software, or any combination thereof) that is
configured to perform the respective function(s) described herein.
That is, example device 1100 is a special-purpose machine designed
and configured to perform specific operations to achieve novel and
non-obvious results in accordance with embodiments of the present
disclosure.
[0127] Computation unit 1130 may be configured to compute first,
second and third perceptual color axes of a uniform color space
based at least in part on the characteristics related to the
imaging device. Scaling unit 1140 may be configured to scale the
first, second and third perceptual color axes so that a resulting
distance in the uniform color space is substantially equal to a
distance of Munsell colors in a CIELAB color space.
[0128] In at least some embodiments, the first perceptual color
axis may correlate with lightness, the second perceptual color axis
may correlate with yellow-blue color variations, and the third
perceptual color axis may correlate with red-green color
variations. The second perceptual color axis may be substantially
aligned with a daylight variation.
[0129] In at least some embodiments, the characteristics related to
the imaging device may include parameters associated with spectral
sensitivity functions of the imaging device.
[0130] In at least some embodiments, in computing the first, second
and third perceptual color axes of the uniform color space,
computation unit 1130 may, based at least in part on the
characteristics related to the imaging device, compute a daylight
plane of the imaging device, a constant lightness plane of Munsell
colors with the constant lightness plane having a surface normal
vector as the first perceptual color axis, and an intersection line
of the daylight plane and the constant lightness plane as the
second perceptual color axis. Computation unit 1130 may also
determine a first line as the third perceptual color axis, the
first line on the constant lightness plane and orthogonal to the
second perceptual color axis. In at least some embodiments, the
second perceptual color axis may include a yellow-blue axis. In at
least some embodiments, the third perceptual color axis may include
an axis of a predefined color correlated with a perceptual
red-green axis.
[0131] In at least some embodiments, in scaling the first, second
and third perceptual color axes, scaling unit 1140 may be
configured to scale the first, second and third perceptual color
axes with weighted errors so that one or more chosen colors are
weighted more heavily relative to other colors to emphasize a
fidelity of the one or more chosen colors as represented in the
uniform color space.
[0132] Characteristics obtaining unit 1150 may be configured to
obtain the characteristics related to the imaging device by
performing a number of operations. For example, when the
characteristics related to the imaging device, e.g., spectral
sensitivity functions, are unknown, images of a color target with
known Munsell color notations or spectral reflectance functions,
such as a Macbeth color checker or any other suitable color
checker, may be taken by a user with the imaging device to obtain
raw camera RGB values. In such case, a user of a portable
electronics apparatus equipped with an imaging device in which
example device 1100 is implemented can take a number of images of
the color target. Based on these images, characteristics obtaining
unit 1150 of example device 1100 may be able to construct DLAB,
e.g., by least-squares fitting using techniques described above
regarding FIG. 5.
[0133] Characteristics obtaining unit 1150 may use a color checker
with a plurality of color patches with known Munsell color
notations or known spectral reflectances. Characteristics obtaining
unit 1150 may also receive a plurality of images of the color
checker captured under different phases of daylight by the imaging
device. In at least some embodiments, the plurality of color
patches may include a series of patches of neutral colors.
[0134] Inverse transformation unit 1160 may be configured to
compute an inverse transformation of the uniform color space. The
techniques described above with respect to Equations (22)-(40) may
be utilized by inverse transformation unit 1160 in computing the
inverse transformation of the uniform color space. For example,
inverse transformation unit 1160 may compute the inverse
transformation of the uniform color space by performing operations
including: reducing a plurality of equations describing the uniform
color space into a nonlinear equation with a single variable;
examining a behavior of the nonlinear equation for a plurality of
input ranges to determine a projection for linear approximation;
solving one or more combinational cases of a third-degree
polynomial and a first-degree polynomial from the linear
approximation in the projection to provide a solution; determining
whether the solution is within a color gamut of the imaging device;
and mapping an out-of-gamut solution into an in-gamut color
according to a gamut mapping strategy.
[0135] In some embodiments, results of an inverse transformation of
the uniform color space, or DLAB, may be computed offline in
advance (e.g., not by or in processor 1120) for a plurality of grid
points of the uniform color space. For example, memory 1110 may
also be configured to store a lookup table containing a plurality
of results of an inverse transformation of the uniform color space
corresponding to a plurality of grid points of the uniform color
space. Additionally, computation unit 1130 may be configured to
interpolate one or more additional inverse colors in the uniform
color space based at least in part on the lookup table. In this
scenario, inverse transformation unit 1160 may not be required in
processor 1120.
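A sketch of this lookup-table approach is given below, assuming a
hypothetical grid resolution, value range and file name, and using
SciPy's trilinear interpolation for clarity; a fixed-point
implementation would be more typical in a hardware realization.

    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    # Offline: inverse transform precomputed on a regular grid of the
    # uniform color space, stored as one raw RGB triple per grid point.
    L_grid = np.linspace(0.0, 100.0, 33)
    a_grid = np.linspace(-128.0, 128.0, 33)
    b_grid = np.linspace(-128.0, 128.0, 33)
    lut = np.load('dlab_inverse_lut.npy')   # shape (33, 33, 33, 3)

    # Online: trilinear interpolation replaces the analytic inverse.
    _interp = RegularGridInterpolator((L_grid, a_grid, b_grid), lut)

    def inverse_from_lut(L, a, b):
        """Interpolated raw camera RGB for one point of the color space."""
        return _interp(np.array([[L, a, b]]))[0]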
ADDITIONAL NOTES
[0136] The herein-described subject matter sometimes illustrates
different components contained within, or connected with, different
other components. It is to be understood that such depicted
architectures are merely examples, and that in fact many other
architectures can be implemented which achieve the same
functionality. In a conceptual sense, any arrangement of components
to achieve the same functionality is effectively "associated" such
that the desired functionality is achieved. Hence, any two
components herein combined to achieve a particular functionality
can be seen as "associated with" each other such that the desired
functionality is achieved, irrespective of architectures or
intermedial components. Likewise, any two components so associated
can also be viewed as being "operably connected", or "operably
coupled", to each other to achieve the desired functionality, and
any two components capable of being so associated can also be
viewed as being "operably couplable", to each other to achieve the
desired functionality. Specific examples of operably couplable
include but are not limited to physically mateable and/or
physically interacting components and/or wirelessly interactable
and/or wirelessly interacting components and/or logically
interacting and/or logically interactable components.
[0137] Further, with respect to the use of substantially any plural
and/or singular terms herein, those having skill in the art can
translate from the plural to the singular and/or from the singular
to the plural as is appropriate to the context and/or application.
The various singular/plural permutations may be expressly set forth
herein for sake of clarity.
[0138] Moreover, it will be understood by those skilled in the art
that, in general, terms used herein, and especially in the appended
claims, e.g., bodies of the appended claims, are generally intended
as "open" terms, e.g., the term "including" should be interpreted
as "including but not limited to," the term "having" should be
interpreted as "having at least," the term "includes" should be
interpreted as "includes but is not limited to," etc. It will be
further understood by those within the art that if a specific
number of an introduced claim recitation is intended, such an
intent will be explicitly recited in the claim, and in the absence
of such recitation no such intent is present. For example, as an
aid to understanding, the following appended claims may contain
usage of the introductory phrases "at least one" and "one or more"
to introduce claim recitations. However, the use of such phrases
should not be construed to imply that the introduction of a claim
recitation by the indefinite articles "a" or "an" limits any
particular claim containing such introduced claim recitation to
embodiments containing only one such recitation, even when the same
claim includes the introductory phrases "one or more" or "at least
one" and indefinite articles such as "a" or "an," e.g., "a" and/or
"an" should be interpreted to mean "at least one" or "one or more;"
the same holds true for the use of definite articles used to
introduce claim recitations. In addition, even if a specific number
of an introduced claim recitation is explicitly recited, those
skilled in the art will recognize that such recitation should be
interpreted to mean at least the recited number, e.g., the bare
recitation of "two recitations," without other modifiers, means at
least two recitations, or two or more recitations. Furthermore, in
those instances where a convention analogous to "at least one of A,
B, and C, etc." is used, in general such a construction is intended
in the sense one having skill in the art would understand the
convention, e.g., "a system having at least one of A, B, and C"
would include but not be limited to systems that have A alone, B
alone, C alone, A and B together, A and C together, B and C
together, and/or A, B, and C together, etc. In those instances
where a convention analogous to "at least one of A, B, or C, etc."
is used, in general such a construction is intended in the sense
one having skill in the art would understand the convention, e.g.,
"a system having at least one of A, B, or C" would include but not
be limited to systems that have A alone, B alone, C alone, A and B
together, A and C together, B and C together, and/or A, B, and C
together, etc. It will be further understood by those within the
art that virtually any disjunctive word and/or phrase presenting
two or more alternative terms, whether in the description, claims,
or drawings, should be understood to contemplate the possibilities
of including one of the terms, either of the terms, or both terms.
For example, the phrase "A or B" will be understood to include the
possibilities of "A" or "B" or "A and B."
[0139] From the foregoing, it will be appreciated that various
embodiments of the present disclosure have been described herein
for purposes of illustration, and that various modifications may be
made without departing from the scope and spirit of the present
disclosure. Accordingly, the various embodiments disclosed herein
are not intended to be limiting, with the true scope and spirit
being indicated by the following claims.
* * * * *