U.S. patent application number 09/960,276, for dynamic image correction and imaging systems, was filed with the patent office on 2001-09-21 and published on 2002-11-28.
Invention is credited to Edgar, Albert D..
Application Number | 20020176113 09/960276 |
Document ID | / |
Family ID | 27398563 |
Filed Date | 2002-11-28 |
United States Patent Application 20020176113
Kind Code: A1
Edgar, Albert D.
November 28, 2002
Dynamic image correction and imaging systems
Abstract
A method, system and software are disclosed for applying an
image mask for improving image detail in a digital image. An
electronic representation of an image is scanned or captured using
an image capture device. A dynamic image mask is generated from the
electronic representation of the image. The dynamic image mask has
sharp edges which are representative of rapidly changing boundaries
in the original image and blurred regions in less rapidly changing
areas. The dynamic image mask is applied to the electronic
representation of the original image to produce an enhanced image.
The enhanced image may have certain advantages. For example, in
some embodiments, the enhanced image can be viewed on a display with
much more viewing detail than conventional systems provide.
Inventors: Edgar, Albert D.; (Austin, TX)
Correspondence Address: SIMON, GALASSO & FRANTZ PLC., P.O. Box 26503, Austin, TX 78755-0503, US
Family ID: 27398563
Appl. No.: 09/960276
Filed: September 21, 2001
Related U.S. Patent Documents

Application Number | Filing Date | Patent Number
60234408 | Sep 21, 2000 |
60234520 | Sep 21, 2000 |
60285591 | Apr 19, 2001 |
Current U.S. Class: 358/3.27; 382/261; 382/263; 382/264; 382/266
Current CPC Class: H04N 1/407 20130101; G06T 5/20 20130101; H04N 1/4092 20130101; G06T 5/004 20130101
Class at Publication: 358/3.27; 382/261; 382/266; 382/263; 382/264
International Class: G06K 015/02; H04N 001/409; G06T 005/40; H04N 001/58; G06T 005/50; G06T 005/00
Claims
What is claimed is:
1. A method for enhancing a digital image comprising: providing a
digital original image comprised of a plurality of pixels, wherein
each pixel includes an original value corresponding to a
characteristic of the image; calculating a dynamic image mask value
for each pixel by averaging the original value of a pixel with the
original values of the pixels proximate that pixel having original
values lower than a threshold sharpness; and applying the dynamic
image mask value to the original value for each corresponding pixel
using a mathematical function to produce an enhanced image.
2. The method of claim 1, wherein providing a digital original
image comprises capturing a digital original image using a digital
capture device.
3. The method of claim 1, wherein providing a digital original
image comprises capturing a digital original image using an imaging
system.
4. The method of claim 1, wherein the original value corresponding
to a characteristic of the image comprises an intensity value
corresponding to a color.
5. The method of claim 1, wherein the original value corresponding
to a characteristic of the image comprises an intensity value
corresponding to a range of frequencies.
6. The method of claim 1, wherein averaging the original value of a
pixel with only the original values of the pixels proximate that
pixel having original values less than a sharpness threshold
comprises averaging the original value of a pixel with only the
weighted original values of the pixels proximate that pixel having
original values less than a sharpness threshold.
7. The method of claim 6, wherein the weighted original values are
determined according to the following formula: w.sub.N = 1 -
((pixel.sub.N - centerpixel) / Gain), wherein pixel.sub.N is the value
of the pixel being weighed, centerpixel is the value of a central
pixel, and wherein Gain is the threshold sharpness.
8. The method of claim 1, wherein the original values used to
calculate the difference less than the sharpness threshold
correspond to different characteristics than the original values
used in averaging.
9. The method of claim 1, wherein calculating a dynamic image mask
value includes performing a pyramidal decomposition on the original
image.
10. The method of claim 1, wherein the mathematical function
comprises division.
11. The method of claim 1, wherein the mathematical function
comprises: OUT = IN / ((3/4)MASK + (1/4)), wherein OUT is the value of
the pixel being calculated in the enhanced scanned image, IN is the
value of the relative pixel in the original image, and MASK is the
value of the relative pixel in the dynamic image mask.
12. The method of claim 1, further comprising performing histogram
leveling to the enhanced scanned image.
13. The method of claim 1, wherein the enhanced scanned image
includes an image contrast and a grayscale contrast.
14. The method of claim 13, wherein the image contrast and the
grayscale contrast can be controlled independently of each
other.
15. The method of claim 1, wherein the dynamic image mask value may
be proportionally varied by a user.
16. A system comprising: a sensor system operable to produce
electronic signals corresponding to certain characteristics of a
subject; a processor operable to receive the electronic signals and
produce image values for each pixel; and a memory media having
software stored thereon, wherein the software is operable to:
calculate a dynamic image mask value for each pixel by averaging
the image value of a pixel with the image values of the pixels
proximate that pixel having image values lower than a threshold
sharpness; and apply the dynamic image mask value to the image
value for each corresponding pixel using a mathematical function to
produce an enhanced image.
17. The system of claim 16, wherein the sensor system operates to
measure light from the subject.
18. The system of claim 16, wherein the sensor system operates to
measure a magnetic resonance pulse.
19. The system of claim 16, further comprising a printer operable
to print the enhanced image.
20. The system of claim 19, wherein the printer comprises a
photographic printer.
21. The system of claim 16, further comprising a digital output
device operable to store the enhanced image.
22. The system of claim 16, wherein the system comprises a digital
device within the group of a digital camera and a video camera.
23. The system of claim 16, wherein the system comprises an imaging
system within the group of a magnetic resonance imaging system and
a radar system.
24. The system of claim 16, wherein the software is loaded into an
image capturing device.
25. The system of claim 16, wherein the system comprises a printer
device.
26. A software tangibly embodied in a computer readable medium,
said software operable to produce an enhanced image by implementing
a method comprising: generating a dynamic image mask from a digital
original image, the dynamic image mask and the original image each
comprising a plurality of pixels having varying values, wherein the
values of the plurality of dynamic image mask pixels are set to
form sharper edges corresponding to areas of more rapidly changing
pixel values in the original image and less sharp regions
corresponding to areas of less rapidly changing pixel values in the
original image; and combining the dynamic image mask with the
original image to produce the enhanced image.
27. The software of claim 26, wherein: the original image includes
an amount of image detail encoded in a physically reproducible
dynamic range; and wherein the enhanced image includes an increased
amount of detail encoded in the physically reproducible dynamic
range.
28. The software of claim 26, wherein combining the dynamic image
mask with the original image is performed through mathematical
manipulation.
29. The software of claim 28, wherein the mathematical manipulation
includes division.
30. The software of claim 26, wherein the pixels in the dynamic
image mask are generated according to the equation, OUT = IN /
((3/4)MASK + (1/4)), wherein OUT is the value of the pixel being calculated
in the enhanced image, IN is the value of the relative pixel in the
original image, and MASK is the value of the relative pixel in the
dynamic image mask.
31. The software of claim 26, further comprising histogram
leveling.
32. The software of claim 26, wherein the value of a pixel in the
dynamic image mask is generated by averaging the value of a central
pixel corresponding to the pixel in the original image with
weighted values of a plurality of neighboring pixels in the
original image.
33. The software of claim 32, wherein the weighting of the
plurality of neighboring pixels is dependent on a proximity of the
neighboring pixels to the central pixel and a contrast of the
plurality of neighboring pixels to the central pixel.
34. The software of claim 26, wherein the weight of pixels in the
dynamic image mask is determined according to the following
formula: w.sub.N = 1 - ((pixel.sub.N - centerpixel) / Gain), wherein
pixel.sub.N is the value of the pixel being weighed, centerpixel is the
value of the central pixel, and wherein Gain is a threshold
contrast value for determining a sharp edge.
35. The software of claim 26, wherein the value of a pixel in the
dynamic image mask is generated based on a relationship of the
value of a different characteristic.
36. The software of claim 26, wherein the generating the dynamic
image mask includes performing a pyramidal decomposition on the
original image.
37. The software of claim 26, wherein the software is resident on a
computer.
38. The software of claim 26, wherein the software is resident on a
digital camera.
39. A system comprising: an image sensor to convert light reflected
from an image into information representative of the image; a
processor; memory operably coupled to said processor; and a program
of instructions capable of being stored in said memory and executed
by said processor, said program of instructions to manipulate said
processor to: obtain a dynamic image mask, the dynamic image mask
and the information representative of the image each including a
plurality of pixels having varying values, wherein the values of
the plurality of dynamic image mask pixels are set to form sharper
edges corresponding to areas of more rapidly changing pixel values
in the original image and less sharp regions corresponding to areas
of less rapidly changing pixel values in the original image; and
combine the image mask with the information representative of the
image to obtain a masked image.
40. The system of claim 39, further including a color decoder,
operably connected to said image sensor, to generate color
information from the information representative of the image.
41. The system of claim 40, wherein said program of instructions are
executed on an output of said image sensor, and where a result of
said executed program of instructions are input to said color
decoder.
42. The system of claim 39, further including a color management
system, operably connected to said color decoder, to process said
color information.
43. The system of claim 42, wherein said program of instructions
are executed on an output of said color decoder, and where a result
of said executed program of instructions are input to said color
management system.
44. The system of claim 43, wherein said output of said color
decoder is information representative of a red portion of the
image, a green portion of the image, and a blue portion of the
image.
45. The system of claim 42, further including a storage system,
operably connected to said color management system, to store the
color information.
46. The system of claim 45, wherein said program of instructions
are executed on an output of said color management system, and
where a result of said executed program of instructions are input
to said storage system.
47. The system of claim 39, further including a display, operable
to display a representation of the information representative of
the image.
48. The system of claim 47, wherein said program of instructions
are executed on an output of a color management system, and where a
result of said executed program of instructions are input to said
display.
Description
CROSS REFERENCE TO RELATED APPLICATIONS
[0001] This application claims the benefit of the following U.S.
Provisional Patent Applications: Serial No. 60/234,520, filed on
Sep. 21, 2000, and entitled "Method of Generating an Image Mask for
Improving Image Detail;" Serial No. 60/234,408, filed on Sep. 21,
2000, and entitled "Method of Applying An Image Mask For Improving
Image Detail;" and Serial No. 60/285,591, filed on Apr. 19, 2001,
and entitled "Method and System and Software for Applying an Image
Mask for Improving Image Detail;" of common assignee herewith.
FIELD OF THE INVENTION
[0002] The present invention relates generally to imaging systems
and image processing and more particularly to dynamic image
correction and imaging systems.
BACKGROUND OF THE INVENTION
[0003] A variety of methods are commonly employed to capture an
image. For example, photographic film may be exposed to light
reflected from a desired subject to record a latent image within
the film. The film is then developed to generate a "negative" or
"positive" from which prints or transparencies can be made and
delivered to consumers. The negative, positive, or print can be
scanned to produce a digital representation of the subject.
Alternately, digital devices such as digital cameras, video
recorders, and the like, may be used to directly capture a digital
representation of the desired subject by measuring the reflected
light from the subject.
[0004] Lighting is particularly important when capturing images and
care is often taken to ensure the proper lighting of the subject
matter of the image. If too much light is reflected from the
subject, the captured image will be over-exposed, and the final
image will appear washed-out. If too little light is reflected, the captured
image will appear under-exposed, and the final image will appear
dark. Similarly, if the proper lighting is not provided from a
proper angle, for example when one part of an image is in bright
light while another part is in shadow, some of the image might be
properly exposed, while the remainder of the image is either
under-exposed or over-exposed. Conventional digital devices are
particularly prone to having over-exposed and under-exposed
portions of an image.
[0005] If during an image capture process the subject is
over-exposed or under-exposed, the mistake can sometimes be
minimized in the processing (or development) and/or printing
process. Typically, when an image is captured on film, the negative
contains much more image detail than can be reproduced in a
photographic print, and so a photographic print includes only a
portion of the information available to be printed. Similarly,
images captured directly by digital devices often have considerably
more image detail than can be reproduced or output. By choosing the
proper portion of the image detail to print, the final processed
image may be compensated for the mistakes made during image
capture. However, particularly in the case in which some areas of
an image are underexposed and other areas of an image are
over-exposed, it is difficult to correct both the under-exposed and
over-exposed portions of the image.
[0006] Conventional correction techniques for reducing the effects
of over-exposed and under-exposed regions are generally performed
by hand and can be extremely expensive. One conventional correction
technique is to apply a cutout filter. In this technique, the image
is divided into large, homogeneous regions, and a filter is applied
to each of these regions. Referring now to FIG. 1, a
conventional cutout filter 110 is shown. The original image is of a
castle. Assume that in the original image, the sky 160 lacks detail
and is washed out, while the castle 120 is in shadow. The cutout
filter 110 has a dark sky 160 and a light castle 120, so that when
applied to the original image, the sky 160 in the resultant image
will be darker, and the castle 120 will be lighter, thereby
improving "gross" image detail.
[0007] A drawback of cutout filter 110 is that image detail within
the regions is not properly corrected unless the selected region is
truly homogeneous, which is not very likely. As a result, detail
within each region is lost. The number of regions selected for
filtering may be increased, but selecting more regions greatly
increases the time and labor needed to generate the cutout filter
110. In addition, this technique and other conventional techniques
tend to create visually unappealing boundaries between the
regions.
SUMMARY OF THE INVENTION
[0008] In accordance with one implementation of the present
invention a method of enhancing an image is provided. In one
embodiment, the method comprises obtaining an image mask of the
original image. The image mask and the original image each comprise
a plurality of pixels having varying values. The plurality of mask
pixels are set to form sharper edges corresponding to areas of more
rapidly changing pixel values in the original image. The pixels are
further arranged to form areas of less sharp regions corresponding
to areas of less rapidly changing pixel values in the original
image. The method further comprises combining the image mask with
the original image to obtain a masked image.
[0009] Another embodiment of the present invention provides for a
digital file tangibly embodied in a computer readable medium. The
digital file is generated by implementing a method comprising
obtaining an image mask of an original image. The image mask and
the original image each comprise a plurality of pixels having
varying values. The plurality of mask pixels are set to form
sharper edges corresponding to areas of more rapidly changing pixel
values in the original image. The pixels are further arranged to
form areas of less sharp regions corresponding to areas of less
rapidly changing pixel values in the original image. The method
further comprises combining the image mask with the original image
to obtain a masked image.
[0010] An additional embodiment of the present invention provides
for a computer readable medium tangibly embodying a program of
instructions. The program of instructions is capable of obtaining
an image mask of an original image. The image mask and the original
image each comprise a plurality of pixels having varying values.
The plurality of mask pixels are set to form sharper edges
corresponding to areas of more rapidly changing pixel values in the
original image. The pixels are further arranged to form areas of
less sharp regions corresponding to areas of less rapidly changing
pixel values in the original image. The program of instructions is
further capable of combining the image mask with the original image
to obtain a masked image.
[0011] Yet another embodiment of the present invention provides for
a system comprising an image sensor to convert light reflected from
an image into information representative of the image, a processor,
memory operably coupled to the processor, and a program of
instructions capable of being stored in the memory and executed by
the processor. The program of instructions manipulates the processor
to obtain an image mask, the image mask and the information
representative of the image each including a plurality of pixels
having varying values, wherein the values of the plurality of mask
pixels are set to form sharper edges corresponding to areas of more
rapidly changing pixel values in the original image and less sharp
regions corresponding to areas of less rapidly changing pixel
values in the original image. The program of instructions also
manipulates the processor to combine the image mask with the
information representative of the image to obtain a masked
image.
[0012] An advantage of at least one embodiment of the present
invention is that an image mask to improve reproducible detail can be
generated without user intervention.
[0013] An additional advantage of at least one embodiment of the
present invention is that an image mask can be automatically
applied to an original image to generate an image with improved
image detail within a reproducible dynamic range due to the image
detail preserved in the image mask.
[0014] Yet another advantage of at least one embodiment of the
present invention is that calculations to improve the image detail
in scanned images can be performed relatively quickly, due to a
lower processing overhead and less user intervention than
conventional methods.
BRIEF DESCRIPTION OF THE DRAWINGS
[0015] Other objects, advantages, features and characteristics of
the present invention, as well as methods, operation and functions
of related elements of structure, and the combination of parts and
economies of manufacture, will become apparent upon consideration
of the following description and claims with reference to the
accompanying drawings, all of which form a part of this
specification, wherein like reference numerals designate
corresponding parts in the various figures, and wherein:
[0016] FIG. 1 is an illustration showing a conventional cutout
filter;
[0017] FIG. 2 is a block diagram illustrating a method for dynamic
image correction according to one embodiment of the present
invention;
[0018] FIG. 3 is a block diagram of an original image and a dynamic
image mask according to one embodiment of the present
invention;
[0019] FIG. 4 is a set of graphs showing intensity values of pixels
around an edge before and after a blurring algorithm has been
applied according to one embodiment of the present invention;
[0020] FIG. 5 is a block diagram of a method for generating a
dynamic image mask according to at least one embodiment of the
present invention;
[0021] FIG. 6 is a representation of a dynamic image mask with
properties according to at least one embodiment of the present
invention;
[0022] FIG. 7 is a block diagram illustrating a method of applying
a dynamic image mask to an image according to at least one
embodiment of the present invention;
[0023] FIG. 8A is a block diagram illustrating a wrinkle reduction
process in accordance with one embodiment of the invention;
[0024] FIG. 8B-1 is a picture illustrating an original image;
[0025] FIG. 8B-2 is a picture illustrating the image of 8B-1 with
the wrinkle reduction process applied;
[0026] FIG. 9 is a block diagram illustrating an image capture
system according to at least one embodiment of the present
invention; and
[0027] FIG. 10 is a chart illustrating improvements in the dynamic
range of various image representations according to at least one
embodiment of the present invention.
DETAILED DESCRIPTION OF THE INVENTION
[0028] FIGS. 2-9 illustrate a method for dynamic image correction
and imaging systems having enhanced images. As described in greater
detail below, one embodiment of dynamic image correction utilizes a
dynamic image mask that uses a blurring algorithm that maintains
sharp boundaries of the image. The dynamic image mask is then
applied to the image. In some implementations, the dynamic image
mask is used to increase the amount of reproducible detail within
an image. In another implementation, the dynamic image mask is used
to suppress median frequencies and maintain sharp boundaries. In
this implementation, the dynamic image mask can be regionally
applied using an electronic brush. In yet other implementations,
various embodiments of the dynamic image mask can be used as a
correction map for other correction and enhancement functions.
Systems for utilizing digital image correction can include a
variety of image capturing or processing systems, such as digital
cameras, video cameras, scanners, image processing software, and
the like.
[0029] Referring now to FIG. 2, one method of dynamic image
correction 200 is described. In this embodiment, dynamic image
correction 200 includes creating a dynamic image mask B from an
original image A. The dynamic image mask B is then combined with
original image A to generate an enhanced image C. In one
embodiment, the enhanced image C has improved image detail over
original image A, within a reproducible dynamic range. For example,
original image A may contain detail which may not be appropriately
represented when output for display or printing, such as containing
high contrast over-exposed (bright) regions and under-exposed
(shadow) regions. It would be helpful to brighten the detail in the
shadow regions and to decrease the brightness of the bright regions
without losing image detail. At least one embodiment of the present
invention automatically performs this function. In contrast,
conventional methods of simply dividing the original image into
bright and shadow regions will not generally suffice to improve
complex images. Images generally contain complex and diverse
regions of varying contrast levels, and as a result, conventional
methods generally produce inadequate results.
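The three-step flow of dynamic image correction 200 can be sketched roughly as below. This is an illustrative assumption, not the patented method: a plain box blur stands in for the mask generator (it lacks the edge preservation the description requires), and the divisor (3/4)MASK + (1/4) is one reading of the combination equation recited in claim 11.

```python
import numpy as np

def make_mask(image, radius=2):
    # Stand-in mask generator: a box blur over a (2*radius+1)^2
    # neighborhood. The patented mask additionally preserves sharp
    # edges; this simplification only shows the pipeline shape.
    h, w = image.shape
    padded = np.pad(image, radius, mode="edge")
    mask = np.empty_like(image)
    for y in range(h):
        for x in range(w):
            mask[y, x] = padded[y:y + 2 * radius + 1,
                                x:x + 2 * radius + 1].mean()
    return mask

def apply_mask(image, mask):
    # Combine original image A with mask B by division,
    # one reading of the equation in claim 11.
    return image / (0.75 * mask + 0.25)

def dynamic_image_correction(image):
    # Step 220: build the dynamic image mask; step 230: apply it.
    return apply_mask(image, make_mask(image))
```

On a uniform mid-gray input the mask equals the image, so the division simply brightens the flat region, illustrating how dark detail is lifted toward the reproducible range.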
[0030] In step 210, original image A is provided. Original image A
is an electronic representation of a subject and includes one or
more characteristic values corresponding to specific locations, or
pixels. Each pixel has one or more associated values, or planes,
that represent information about a particular location on the
subject. For original image A, the values corresponding to each
pixel can be a measure of any suitable characteristic of the
subject. For example, the values may represent the color, colors,
luminance, incidence angle, x-ray density, or any other value
representing a characteristic or combination of
characteristics.
[0031] Original image A can be obtained in any suitable manner and
need not correlate directly to conventional color images. One
implementation obtains original image A by digitizing an image
using a scanner, such as a flatbed scanner, a film scanner, and the like.
Another implementation obtains original image A by directly
capturing the image using a digital device, such as a digital
camera, video camera, and the like. In yet another implementation,
the original image A is captured using an imaging device, such as a
magnetic resonance imaging system, a radar system, and the like. In
this embodiment, the characteristic values do not correlate to
colors but to other characteristics of the subject matter imaged.
The original image A could also be obtained by computer generation
or other similar technique. Dynamic image correction 200 does not
depend upon how the original image A is obtained, but only that the
original image A includes one or more values that represent the
image.
[0032] In step 220, a dynamic image mask B is generated from
original image A. In the preferred embodiment, the pixel values of
the dynamic image mask B are generated relative to a pixel in the
original image A. In at least one embodiment, the pixels generated
for dynamic image mask B are calculated using weighted averages of
select pixels in original image A, as discussed in greater detail
below. It will be appreciated that the pixels generated for dynamic
image mask B may be calculated using any number of methods without
departing from the spirit or the scope of the present
invention.
[0033] Dynamic image mask B maintains the sharp edges in the
original image A while blurring the regions surrounding the sharp
edges. In effect, rapidly changing characteristics, i.e., values or
contrast, in original image A are used to determine sharp edges in
dynamic image mask B. At the same time, less rapidly changing
values in original image A can be averaged to generate blurred
regions in dynamic image mask B. In effect, the calculations
performed on original image A produce a dynamic image mask B which
preserves the boundaries between dissimilar pixels in original
image A while blurring areas containing similar pixels, as will be
discussed further in FIG. 3.
[0034] A dynamic image mask B is often calculated for each
characteristic value. For example, in the case of an original image
having red, green, and blue values for each pixel, the red values
are used to calculate the blurring and edge parameters of the
dynamic image mask B for the red color, the blue values are used to
calculate the blurring and edge parameters of the dynamic image
mask B for the blue color, and so on. The dynamic image mask B can
use different characteristics, or planes, to establish the regions
and boundaries for different characteristics. For example, in the
case of an original image having red, green, and blue color values
for each pixel, the red values could be used to establish the
blurring and edge parameters that are applied to each of the red,
green, and blue values. Similarly, a calculated luminance value
could be used to calculate the blurring and edge parameters that
are then applied to the red, green, and blue values of each pixel.
In other embodiments, a dynamic image mask B is only calculated for
certain characteristics. Using the same example as above, a dynamic
image mask B for the colors red and green may be calculated, but
the values of the color blue are combined without change, as
described in greater detail below.
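The shared-plane variant above, in which a single calculated luminance drives the mask for all three colors, might be sketched as follows. The Rec. 601 luma coefficients and the division-based combination are assumptions for the sketch; the specification fixes neither.

```python
import numpy as np

def luminance(rgb):
    # One luminance estimate per pixel from the red, green, and blue
    # planes. Rec. 601 coefficients are used here for illustration;
    # the document does not specify particular weights.
    return rgb @ np.array([0.299, 0.587, 0.114])

def apply_shared_mask(rgb, mask):
    # A single mask plane (here, one derived from luminance) is
    # applied identically to the red, green, and blue values of each
    # pixel, per the shared-plane variant described above.
    return rgb / (0.75 * mask + 0.25)[..., None]
```

Using the (unblurred) luminance plane directly as the mask is the degenerate case; in practice the luminance plane would first be turned into an edge-preserving blurred mask.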
[0035] In step 230, dynamic image mask B is applied to original
image A to produce enhanced image C. Dynamic image mask B is
generally applied to original image A by use of an overlay
technique. As discussed further in FIG. 7, a mathematical
operation, such as division between the pixel values of original
image A and the corresponding pixel values in dynamic image mask B,
can be used to generate the pixel values of enhanced image C.
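One reading of that division-based overlay, assuming pixel values normalized to [0, 1], is sketched below; the final clipping step is an added safeguard not stated in the text.

```python
import numpy as np

def apply_dynamic_mask(original, mask):
    # OUT = IN / ((3/4)*MASK + (1/4)), one reading of the combination
    # equation recited in claim 11. Dark regions (small MASK) are
    # divided by a small number and brightened; bright regions are
    # divided by a number near one and left largely alone.
    out = original / (0.75 * mask + 0.25)
    return np.clip(out, 0.0, 1.0)
```

For example, a shadow pixel of 0.2 whose mask value is also 0.2 becomes 0.2 / 0.4 = 0.5, while a bright pixel of 0.9 moves only slightly, compressing the tonal range without flattening local detail.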
[0036] In general, the process of generating and applying the
dynamic image mask B is performed as part of a set of instructions
run by an information processing system. The processes of steps
210, 220, and 230 can be performed within an image processing
system, implemented by photo-lab technicians, in a system used by a
customer without the assistance of a lab technician, incorporated
into a scanner, digital camera, video recorder and the like, or
performed by a computer system external to the image capturing
device. In at least one embodiment, the processes are automated by
a program of executable instructions executed by an information
processing system such that minimal user interaction is
required.
[0037] In step 240, the enhanced image C is delivered in the form
desired. The form in which the enhanced image C is delivered
includes, but is not limited to, a digital file, a photographic
print, or a film record. Digital files can be stored on mass
storage devices, tape drives, CD recorders, DVD recorders, and/or
various forms of volatile or non-volatile memory. Digital files can
also be transferred to other systems using a communications
adapter, where the file can be sent to the Internet, an intranet,
as an e-mail, etc. A digital file can also be prepared for
retrieval at an image processing kiosk which allows customers to
recover their pictures and print them out in a form of their
choosing without the assistance of a film development technician.
The enhanced image C can also be displayed as an image on a display
or printed using a computer printer. The enhanced image C also can
be represented on a form of film record, such as a film negative,
positive image, or photographic print. In conventional printing
processes, when an image is printed, a large portion of the dynamic
range is lost. In contrast, enhanced image C generally contains
desirable detail from original image A so that a larger quantity of
image detail from original image A is effectively compressed into a
dynamic range capable of being reproduced in print and can be
preserved thereby.
[0038] Referring now to FIG. 3, a diagram of an original image and
a blurred image are shown, according to one embodiment of the
present invention. Original image A is composed of a plurality of
pixels, such as pixels numbered 301-325. Dynamic image mask B is
composed of corresponding pixels, such as pixels numbered 351-375,
calculated from the pixels of original image A. As described in
greater detail below, the pixel values of dynamic image mask B are
calculated using an averaging function that accounts for sharp
edges.
[0039] A sharp edge is generally defined by a variation between
pixel values greater than a certain sharpness threshold, or Gain.
In effect, the sharpness threshold allows the pixels to be
differentiated into regions for purposes of averaging calculations.
In some embodiments, the sharpness threshold is varied by a user.
In other embodiments, the sharpness threshold is fixed within the
software.
[0040] The pixels calculated for dynamic image mask B correspond to
averages taken over regions of pixels in original image A, taking
into account the sharpness threshold, or Gain. For example, pixel
363 corresponds to calculations performed around pixel 313. In one
embodiment, provided that pixels 301-325 are similar, i.e., the
differences are below the sharpness threshold, pixel 363 is
calculated by averaging the values of pixels 311-315, 303, 308,
318, and 323. In another embodiment, pixel 363 is calculated by
averaging the values of pixels 307-309, 312-314, and 317-319. Any
suitable number or selection process for the averaging process may
be used without departing from the scope of the present invention.
In the preferred embodiment, the pixels are assigned a weight based
on their relative distance from pixel 313. In this embodiment,
pixels that are relatively closer have a greater impact on the
averaging calculation than pixels that are relatively remote.
[0041] In one embodiment of the present invention, the dynamic
image mask B is calculated using a weighting function as described
by the following equation:

w.sub.N=1-(|pixel.sub.N-centerpixel|/Gain).
[0042] The weight function, w.sub.N, can be used to apply a
separate weight to each of the pixel values. Only values of w.sub.N
between zero and one are accepted. Accordingly, if the value of
w.sub.N is returned as a negative value, the returned weight for
the pixel being weighed is zero. Using the first example above, if
pixel 313 was being calculated, w.sub.N could be used to apply a
weight to each of the pixels 311-315, 303, 308, 318, and 323.
PixelN is the contrast value of the pixel being weighed. Center
pixel is the value of the central pixel, around which the blurring
is being performed. Gain is a threshold value used to determine a
contrast threshold for a sharp edge. For example, if pixel 363 is
being calculated and the difference in contrast between pixel 313
and pixel 308 is 15, with Gain set to 10, the returned value of
w.sub.N is negative. Accordingly, since negative values are not
allowed, pixel 308 is assigned a weight of zero, keeping the value
of pixel 308 from affecting the calculation of pixel 363.
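As a rough illustration (not taken verbatim from the patent; the clamp-to-zero behavior is inferred from the example above), the weight function might be coded as:

```python
def weight(pixel_n, center_pixel, gain):
    """Sandblaster weight for one neighboring pixel: w = 1 - |diff|/Gain,
    with negative results clamped to zero so the neighbor is excluded."""
    w = 1.0 - abs(pixel_n - center_pixel) / gain
    return max(w, 0.0)

# The example from the text: a contrast difference of 15 with Gain = 10
# yields a negative raw weight, so the neighbor gets weight zero.
print(weight(115, 100, 10))  # -> 0.0
print(weight(105, 100, 10))  # -> 0.5
```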
[0043] The value of Gain can be decreased as the pixel being
weighed is further from the central pixel. Lowering the value of
Gain allows small changes in the contrast between pixelN and
centerpixel to result in negative w.sub.N, and thus be weighed to
zero. Accordingly, in one embodiment, the farther the pixel is from
the centerpixel, the smaller Gain gets and the more likely it is
that the value of w.sub.N will be negative and the pixel will be
assigned a weight of zero. Gain is preferably chosen to decrease
slowly as the distance from the central pixel increases. The values
of Gain used can be adapted for the
desired application; however, it has been found that slower changes
in Gain provide images with more pleasing detail than sharper
changes in Gain. Furthermore, the weight function itself can be
altered without departing from the scope of the present
invention.
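The patent gives no formula for how Gain should fall off with distance, so the following is purely an illustrative assumption of a "slowly decreasing" Gain; both parameter names and values are hypothetical:

```python
def gain_at(distance, base_gain=40.0, falloff=0.1):
    """Hypothetical slowly-decreasing Gain; base_gain and falloff are
    illustrative parameters, not values from the patent."""
    return base_gain / (1.0 + falloff * distance)

print(gain_at(0))  # -> 40.0
print(gain_at(4))  # smaller Gain farther out, so weights go negative sooner
```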
[0044] Once the weights of the surrounding pixels have been
calculated, a sum of each of the pixel values, multiplied by their
relative weights, can be calculated. The sum can then be divided by
the sum of the individual weights to generate the weighted average
of the pixels, which can be used for the pixels of dynamic image
mask B. The minimum weight calculated from the pixels adjacent to
the central pixel can also be multiplied into the weight of each of
the pixels surrounding the central pixel. Multiplying by the
minimum weight of an adjacent pixel allows the blurring to be
effectively turned "off" if the contrast around the central pixel
is changing too
rapidly. For example, if the difference in contrast between a
central pixel and an adjacent pixel is large enough to warrant a
sharp edge in dynamic image mask B, the weight of the adjacent
pixel will be zero, forcing all other values to zero and allowing
the central pixel to retain its value, effectively creating a sharp
edge in dynamic image mask B.
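A minimal sketch of this weighted average with the minimum-weight gating, for a one-dimensional window (the exact neighborhood, the adjacency used for the gate, and the treatment of the center weight are assumptions, not the patented implementation):

```python
import numpy as np

def sandblast_pixel(window, gain):
    """Blur the center pixel of a 1-D window with sandblaster weights,
    gated by the minimum weight of the pixels adjacent to the center."""
    window = np.asarray(window, dtype=float)
    mid = len(window) // 2
    center = window[mid]
    weights = np.clip(1.0 - np.abs(window - center) / gain, 0.0, None)
    # If either immediate neighbor marks a sharp edge (weight 0), the gate
    # zeroes every surrounding weight and the center keeps its own value.
    gate = min(weights[mid - 1], weights[mid + 1])
    weights = weights * gate
    weights[mid] = 1.0  # the center always contributes its own value
    return float(np.sum(weights * window) / np.sum(weights))

# Smooth region: the center is pulled toward the neighborhood average.
print(sandblast_pixel([10, 12, 14, 12, 10], gain=40))
# Sharp edge adjacent to the center: blurring shuts off, 14.0 is retained.
print(sandblast_pixel([10, 100, 14, 12, 10], gain=40))  # -> 14.0
```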
[0045] This embodiment of the processes performed to generate
dynamic image mask B can be likened to a sandblaster. A sandblaster
can be used to soften, or blur, the textures it is working over.
Accordingly, the blurring algorithm as described above will be
herein referred to as the sandblaster algorithm. A sandblaster has
an effective radius over which it is used, with the material closer
to the center of the sandblasting radius affected most. In the
blurring algorithm described, a radius is selected and measured
from the central pixel. The pressure of a sandblaster can be
adjusted to affect more change. The Gain value in the described
algorithm can be altered to affect more or less blurring. In at
least one embodiment, the preferred radius is 4 and the preferred
Gain is 40.
[0046] The sandblaster algorithm can be performed in
one-dimensional increments. For example, to calculate the value of
pixel 362, the pixels surrounding pixel 312 are considered. In one
embodiment of the present invention, the averaged pixel values are
determined using the neighboring vertical pixels and then the
neighboring horizontal pixel values, as described above.
Alternatively, windows can be generated and applied to average the
pixels around the central pixel together, in both the horizontal
and vertical directions. Color images can be composed of multiple
image planes, wherein the planes may include a plane for each
color: a red plane, a green plane, and a blue plane. In a
preferred embodiment, the sandblaster algorithm is only performed
on one plane at a time. Alternatively, the sandblaster algorithm
can be calculated taking other image planes into account,
calculating in the values of pixels relative to the central pixel
from different color planes. However, it should be noted that
performing multi-dimensional calculations over an image may
increase the processing time. Additionally, pixels near an image
edge, such as pixel 311, may lack values from pixels lying beyond
the limits of original image A. In one embodiment, the pixels along
the edge replicate their own values to stand in for pixel values
beyond the image edge, for calculation with the sandblaster
algorithm. Alternatively, zeroes may be used for values lying
outside the edges of original image A.
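The two edge strategies mentioned, replicating the edge value versus substituting zeroes, can be sketched with NumPy's padding modes (the mode names are NumPy's, not the patent's):

```python
import numpy as np

row = np.array([50, 60, 70])
# One embodiment: edge pixels replicate their own values outward.
replicated = np.pad(row, 2, mode="edge")
# Alternative embodiment: zeroes stand in for out-of-image values.
zero_padded = np.pad(row, 2, mode="constant")
print(replicated)   # [50 50 50 60 70 70 70]
print(zero_padded)  # [ 0  0 50 60 70  0  0]
```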
[0047] Referring now to FIG. 4, a graph of intensities across a row
of pixels, before and after the sandblaster blurring algorithm has
been applied, is shown, according to at least one embodiment of
the present invention. Graph 450 represents the intensity values in
an original image A around an edge representing contrasting
intensity. Graph 460 represents the intensities for dynamic image
mask B, among the same pixels as graph 450.
[0048] Two distinct intensity levels are identifiable in graph 450.
A low intensity can be identified among pixels 451-454 and a high
intensity region can be identified by pixel 455. The radius used to
blur the pixels around the central pixel described in FIG. 3 is one
factor in how much blurring will be performed. If too large a
radius is used, little blurring may result. For example, if the
pixel considered for blurring were pixel 452 and the radius were
set large enough, the blurred value of pixel 452 may not change much.
With the radius set large enough, pixel 452 will be averaged with
many pixels above its intensity, such as pixel 451. Pixel 452 will
also be averaged with many pixels below its intensity, such as
pixel 453. If the radius is too large, there could be enough pixels
with intensities above pixel 452 and enough pixels with intensities
below pixel 452 that the value for pixel 452 will remain unchanged
since the intensity value of pixel 452 lies between the high and
low extremes.
[0049] Little blurring could also result from selecting too small a
radius for blurring. With a small radius, only the intensity values
of pixels immediately adjacent to the selected pixel will be
considered. For example, with pixel 452 as the central pixel, if
the radius is too small, allowing pixels only as far as pixel 451,
pixels 453 and 454 may not be considered in the blurring around
pixel 452. Selection of the radius has a drastic effect on how much
blurring is accomplished. The blurring radius must be large enough
to average over a sufficient region of pixels while being small
enough to effect actual blurring. In one embodiment, the
blurring radius can be controlled automatically. As shown in FIG.
5, blurring can be performed over decimated representations of an
original image using pyramidal decomposition. By performing a
blurring algorithm and decimating the image, the effective radius
of the blur is automatically increased as the image resolution is
decreased. A decimated representation of the original image A can
contain half the resolution of the original image A. Some of the
detail in the original image is lost in the decimated
representation. Performing blurring on the decimated image with a
specific radius can relate to covering twice the radius in the
original image.
[0050] Graph 460 shows the intensities in dynamic image mask B
using the sandblaster blurring algorithm. As can be seen,
the blurring is enough to bring down the intensity of pixel 452 in
the original image to pixel 462 in the blurred representation.
Pixel 455, in a separate intensity level, is increased in intensity
to pixel 465 in the blurred representation. In at least one
embodiment, the blurring is turned off for pixels along an edge.
Turning off the blurring allows the sharpness among edges to be
preserved in the blurred representation, preserving edges between
regions with a high contrast of intensities. Pixel 454 lies along
an edge, where the intensity for pixels nearby, such as pixel 455,
is much higher. The intensity of pixel 454 is not changed,
preserving the difference in contrast between pixel 454 and the
pixels of higher intensity, such as pixel 455.
[0051] Referring now to FIG. 5, a block diagram of a method for
generating another embodiment of a dynamic image mask B is
illustrated. In this embodiment, the sandblaster algorithm can be
used to create a blurred image with sharp edges and blurred
regions. To improve the detail captured by an image mask
incorporating the sandblaster blurring algorithm, a pyramidal
decomposition is performed on the original image, as shown in FIG.
5. In step 510, the original image A is received.
[0052] In step 535, the image size is reduced. In at least one
embodiment, the image size is reduced in half. The reduction in
image size may be performed using a standard digital image
decimation. In one embodiment, the decimation is performed by
discarding every other pixel in the original image from step
510.
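The decimation of this embodiment, discarding every other pixel (and, by assumption, every other row), can be sketched as:

```python
import numpy as np

def decimate(image):
    """Halve resolution by keeping every other row and column."""
    return image[::2, ::2]

image = np.arange(16).reshape(4, 4)
half = decimate(image)
print(half.shape)  # (2, 2)
```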
[0053] In step 525, the sandblaster algorithm, as discussed in FIG.
3, is performed on the decimated image to create a blurred image.
As previously discussed for FIG. 4, the decimated image contains
half the resolution of the original image A. Some of the detail in
the original image A is lost to the decimated image. By performing
the sandblaster algorithm on the decimated image, the effective
radius covered by the algorithm can relate to twice the radius in
the original image. Since some of the detail from the original
image A is not present in the decimated image, more blurring can
result with the sandblaster algorithm. As the images described
herein are decimated, the effective blur radius and the amount of
detail blurred increase in inverse proportion to the change in
resolution of the decimated images. For example, performing the
sandblast algorithm in step 525 to the reduced image of step 535
has twice the effective radius of performing the same algorithm to
the original image, while the reduced image has half the resolution
of the original image.
[0054] In step 536, the blurred image is decimated once again. In
step 526, the decimated image from step 536 is blurred using the
sandblaster algorithm. Further decimation steps 537-539 and
sandblaster steps 527-529 are consecutively performed on the
outputs of previous steps. In step 550, the blurred image from
sandblaster step 529 is subtracted from the decimated output of
decimation step 539. In step 560, the mixed output from step 550 is
up-sampled. In one embodiment, the image is increased to twice its
pixel resolution. Increasing the image size may be performed by
repeating the image values of present pixels to fill new pixels.
Interpolation may also be performed to determine the values of the
new pixels. In step 552, the up-sampled image from step 560 is
added to the blurred image from step 528. The combined image
information is subtracted from the decimated output from step 538.
The calculations in step 552 are performed to recover image detail
that may have been lost. Mixer steps 554 and 552, consecutively
performed with up-sampling steps 562-566, attempt to generate mask
data. In step 558, a mixer is used to combine the up-sampled image
data from step 566 with the blurred image data from step 525. The
output from the mixer in step 558 is then up-sampled, in step 580,
to produce the image mask of the received image. The dynamic image
mask B is then prepared for delivery and use, as in step 590.
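The decimate/blur/up-sample skeleton of FIG. 5 might be sketched as follows. A plain box blur stands in for the sandblaster pass, and the mixer arithmetic of steps 550-558 is abbreviated to a simple average; treat this as a structural sketch under those assumptions, not the patented method.

```python
import numpy as np

def blur(image):
    """Placeholder 5-point box blur standing in for the sandblaster pass."""
    padded = np.pad(image.astype(float), 1, mode="edge")
    return (padded[:-2, 1:-1] + padded[2:, 1:-1] + padded[1:-1, :-2]
            + padded[1:-1, 2:] + padded[1:-1, 1:-1]) / 5.0

def decimate(image):
    """Halve resolution by keeping every other row and column."""
    return image[::2, ::2]

def upsample(image):
    """Double resolution by repeating each pixel value."""
    return np.repeat(np.repeat(image, 2, axis=0), 2, axis=1)

def pyramid_mask(image, levels=3):
    # Descend: decimate, then blur at each level (blurring the smaller
    # image doubles the effective radius relative to the level above).
    blurred = []
    current = image.astype(float)
    for _ in range(levels):
        current = blur(decimate(current))
        blurred.append(current)
    # Ascend: up-sample the coarsest result and mix it with each finer level.
    result = blurred[-1]
    for finer in reversed(blurred[:-1]):
        result = (upsample(result)[:finer.shape[0], :finer.shape[1]]
                  + finer) / 2.0
    return upsample(result)[:image.shape[0], :image.shape[1]]

pyr = pyramid_mask(np.ones((16, 16)))
print(pyr.shape)  # (16, 16)
```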
[0055] It will be appreciated that additional or less blurring may
be performed among the steps of the pyramidal decomposition
described herein. It should be noted that by not performing the
blurring algorithm on the original image, a significant amount of
processing time may be saved. Calculations based on the decimated
images can be performed faster and with less overhead than
calculations based off the original image, producing detailed image
masks. The image masks produced using the described method
preferably include sharp edges based on rapidly changing boundaries
found in the original image A, and blurred regions among less
rapidly changing boundaries. It should also be appreciated that
more or fewer steps may be performed as part of the pyramidal
decomposition described herein, without departing from the scope of
the present invention.
[0056] In the described embodiment, pyramidal decomposition is
performed along a single image color plane. It will be appreciated
that additional color planes may also be processed in the steps
shown. Furthermore, multi-dimensional processing, wherein
information from different color planes or planes of brightness is
processed concurrently, may also be performed. According to at
least one embodiment of the present invention, the resultant image
mask is a monochrome mask that is applied to the intensities of the
individual image color planes in the original image. A monochrome
image plane can be calculated from separate
image color planes. For example, in one embodiment, the values of
the monochrome image mask are determined using the following
equation:
OUT=MAX(R,G).
[0057] OUT refers to the pixel being calculated in the
monochromatic image mask. MAX(R,G) is a function in which the
maximum intensity between the intensity value of the pixel in the
red plane and the intensity value of the pixel in the green plane
is chosen. In the case of a dynamic image mask pixel which contains
more than 80% of its intensity from the blue plane, the formula can
be appended to include:
OUT=OUT+50% B.
[0058] wherein 50% B is half of the intensity value in the blue
plane. The dynamic image mask B may also be made to represent image
intensities, such as the intensity among black and white values. It
will be appreciated that while full color image masks may be used,
they will require more processing overhead than using monochrome
masks.
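The monochrome-mask rule of paragraphs [0056]-[0058] can be sketched per pixel. The "more than 80% of its intensity from the blue plane" test is interpreted here as blue's share of the R+G+B sum, which is an assumption:

```python
def monochrome_value(r, g, b):
    """OUT = MAX(R, G); add 50% B when the pixel is blue-dominated."""
    out = max(r, g)
    total = r + g + b
    if total > 0 and b / total > 0.8:  # "more than 80% ... from the blue plane"
        out = out + 0.5 * b
    return out

print(monochrome_value(100, 120, 30))  # -> 120 (blue below the threshold)
print(monochrome_value(5, 5, 200))     # -> 105.0 (blue-dominated pixel)
```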
[0059] Referring now to FIG. 6, a dynamic image mask B is
illustrated, in comparison to a prior-art conventional cutout
filter shown in FIG. 1, with properties representative of a
dynamic image mask B created according to at least one embodiment
of the present invention. The dynamic image mask B shown in FIG. 6
will be generally referred to as revelation mask 650. The
conventional image mask shown in FIG. 1 (prior-art) will be
generally referred to as conventional filter 110.
[0060] The revelation mask 650 maintains some of the detail lost to
conventional image masks. Edges are preserved between regions of
rapidly changing contrasts. For example, light region 690,
generated to brighten detail within windows in the original image,
maintains edges to show sharp contrast to the darker region 680,
which is generated to darken details in the walls shown in the
original image. It should be noted that while edges are maintained
between regions of rapidly changing contrasts, blurring is
accomplished within the regions. For example, the details in the
roof of the original image contain dark and light areas with a
gradual shift in contrast. In conventional filter 110, dark region
127 is generated to maintain contrast with the lighter areas in the
tower on the roof. When conventional filter 110 is overlaid with
the original image, the resultant image will show a sharp contrast
difference between dark region 127 and light region 120 which does
not maintain the gradual difference in the original image. In
comparison, revelation mask 650 maintains the gradual shift in
contrast as can be noted by the blurred shift in intensity between
the tower region 655 and the lighter region 670, allowing the roof
in the original image to maintain a gradual shift in intensity
contrast while maintaining the sharp contrast of dark region 655
against the darker region 660, representing the background sky in
the original image.
[0061] Referring now to FIG. 7, a method for generating an enhanced
image C in accordance with one embodiment of the present invention
is illustrated. Image information related to an original image A is
mathematically combined with information from dynamic image mask B.
The combined data is used to create the enhanced image C.
[0062] The enhanced image C is generated on a pixel by pixel basis.
Each corresponding pixel from original image A and dynamic image
mask B is combined to form a pixel in masked image 710. For
example, pixel data from pixel 715, of original image A, is
combined with pixel information from pixel 735, of dynamic image
mask B, using mathematical manipulation, such as overlay function
720. The combined data is used to represent pixel 715 of enhanced
image C.
[0063] Overlay function 720 is a function used to overlay the pixel
information between original image A and dynamic image mask B. In
one embodiment of the present invention, overlay function 720
involves mathematical manipulation and is defined by the equation:
OUT=IN/((3/4)MASK+1/4).
[0064] OUT refers to the value of the corresponding pixel in
enhanced image C. IN refers to the value of the pixel taken from
original image A. MASK refers to the value of the pixel in dynamic
image mask B. For example, to produce the output value of pixel
715, the value of pixel 715 is divided by the sum of 3/4 the value
of pixel 735 and an offset. The offset, 1/4, is chosen to prevent
an error from occurring due to dividing by zero. The offset can
also be chosen to lighten shadows in the resultant masked image 710.
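Overlay function 720, as reconstructed from the description above (OUT = IN/((3/4)MASK + 1/4)), can be sketched per pixel, assuming values normalized to the range 0..1:

```python
def overlay(in_value, mask_value):
    """OUT = IN / ((3/4)*MASK + 1/4); the 1/4 offset keeps the denominator
    nonzero even where the mask is fully dark."""
    return in_value / (0.75 * mask_value + 0.25)

# A full-intensity mask pixel is neutral; a dark mask pixel divides by a
# small denominator, lightening shadows as described in the text.
print(overlay(0.5, 1.0))  # -> 0.5
print(overlay(0.2, 0.0))  # -> 0.8
```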
[0065] In one embodiment, the application of dynamic image mask B
to original image A is performed through software run on a
information processing system. As previously discussed, dynamic
image mask B can be a monochromatic mask. The dynamic image mask B
can be used to control the white and black levels in images.
Grayscale contrast is the contrast over large areas in an image.
Image contrast refers to the contrast of details within an image.
Through manipulation of the proportion of the value of MASK and the
offset used in overlay function 720, the grayscale contrast and the
image contrast can be altered to best enhance the enhanced image
C. In one embodiment of the present invention, overlay function
720 is altered according to settings made by a user. Independent
control of the image contrast and grayscale contrast can be
provided. Control can be used to produce images using low image
contrast in highlights and high image contrast in shadows.
Additionally, functions can be added to control the generation of
the dynamic image mask B. Control can be offered over the pressure
(Gain) and radius (region) applied through the sandblaster
algorithm (described in FIG. 3). Additionally, control over the
histogram of the image can be offered through control over the
image contrast and the grayscale contrast. A normalized image can
be generated in which histogram leveling can be performed without
destroying image contrast. The controls, functions, and algorithms
described herein can be performed within an information processing
system. It will be appreciated that other systems may be employed,
such as through image processing kiosks, to produce enhanced image
C, in keeping with the scope of the present invention.
[0066] Referring to FIG. 8A, a wrinkle reduction process 800 in
accordance with one embodiment of the present invention is
illustrated. As described in greater detail below, this embodiment
of the wrinkle reduction process 800 operates to suppress median
frequencies without suppressing high definition detail or low
frequency contrast. As a result, people have a younger look without
sacrificing detail.
[0067] In the embodiment illustrated, a dynamic image mask B is
calculated from original image A, as shown by block 802. In the
preferred embodiment, the dynamic image mask B is calculated using
a radius of 5 and a Gain of 64, as discussed in FIG. 3. The dynamic
image mask B is then passed through a low pass filter 804. The low
pass filter 804 is preferably a "soft focus" filter. In one
embodiment, the low pass filter 804 is calculated as the average of
a Gaussian average with a radius of one and a Gaussian average with
a radius of three. Other types of low pass filters may be used
without departing from the scope of the present invention.
[0068] The original image A is also passed through a high pass
filter 806. In one embodiment, the high pass filter 806 is
calculated as the inverse of the average of the Gaussian average
with a blur of one and a Gaussian average with a blur of three.
Other types of high pass filters may be used without departing from
the scope of the present invention.
[0069] The results from the low pass filter 804 and the high pass
filter 806 are then added together to form a median mask 808. The
median mask 808 can then be applied to the original image A using,
for example, applicator 810 to produce an enhanced image. In the
preferred embodiment, the applicator 810 is an electronic brush
that can be varied by radius to apply the median mask 808 only to
those areas of the original image A specified by the user. Other
types of applicators 810 may be used to apply the median mask 808
to the original image A.
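The low-pass/high-pass structure of process 800 can be sketched on a one-dimensional signal. Simple moving averages stand in for the Gaussian averages named in the text, and the high-pass is taken as the original minus its blur, so this is a structural sketch only:

```python
def box_blur(signal, radius):
    """Moving average standing in for the Gaussian blurs of steps 804/806."""
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def median_mask(original, mask, radius=2):
    """Median mask 808: low-pass of the dynamic mask plus high-pass of the
    original image (high-pass as original minus its blur, an assumption)."""
    low = box_blur(mask, radius)
    high = [o - b for o, b in zip(original, box_blur(original, radius))]
    return [l + h for l, h in zip(low, high)]

# A flat image with a flat mask passes through as the mask value itself.
print(median_mask([1.0] * 5, [2.0] * 5))  # -> [2.0, 2.0, 2.0, 2.0, 2.0]
```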
[0070] FIG. 8B-1 illustrates an untouched original image 820, and
FIG. 8B-2 illustrates the same image after having the wrinkle
reduction process 800 applied to the image 820. As can be seen, the
wrinkle reduction process 800 reduces the visible effects of age on
the person in the image, without sacrificing the minute detail of
the image and without apparent blurring or softening of the
details. This creates an image more pleasing to the eye and, most
importantly, more pleasing to the person in the picture. The same
process can be applied to other parts of the image to produce
similar results. For example, when applied to clothing, the wrinkle
reduction process 800 produces the appearance of a freshly pressed
shirt or pants without affecting the details or appearing blurry.
Although only a few of the applications of the wrinkle reduction
process 800 and dynamic image mask B have been illustrated, it
should be understood that they may be used for any suitable purpose
or combination without departing from the scope of the present
invention.
[0071] Referring to FIG. 9, an image capture system 900 used to
implement one or more embodiments of the present invention is
illustrated. Image capture system 900 includes any device capable
of capturing data representative of an image and subsequently
processing the data according to the teachings set forth herein.
For example, image capture system 900 could include a digital
camera, video recorder, a scanner, image processing software, and
the like. An embodiment where image capture system 900 includes a
digital camera is discussed subsequently for ease of illustration.
The following discussion may be applied to other embodiments of
image capture system 900 without departing from the spirit or scope
of the present invention.
[0072] Image capture system 900 includes, but is not limited to,
image sensor 910, analog-to-digital (A/D) convertor 920, color
decoder 930, color management system 940, storage system 950,
and/or display 960. In at least one embodiment, image capture
system 900 is connected to printer 980 via a serial cable, printer
cable, universal serial bus, networked connection, and the like.
Image sensor 910, in one embodiment, captures an image and converts
the captured image into electrical information representative of
the image. Image sensor 910 could include an image sensor on a
digital camera, such as a charge coupled device (CCD) sensor,
complementary metal oxide semiconductor sensor, and the like. For
example, a CCD sensor converts photons reflected off of or
transmitted through a subject into stored electrical charge at the
location of each photosite of the CCD sensor. The stored electrical
charge of each photosite is then used to obtain a value associated
with the photosite. Each photosite could have a one-to-one
correspondence with the pixels of the resulting image, or multiple
photosites may be used in conjunction to determine the value of one
or more pixels.
[0073] In one embodiment, image sensor 910 sends electrical
information representing a captured image to A/D convertor 920 in
analog form, which converts the electrical information from an
analog form to a digital form. Alternatively, in one embodiment,
image sensor 910 captures an image and outputs the electrical
information representing the image in digital form. It will be
appreciated that, in this case, A/D convertor 920 would not be
necessary.
[0074] It will be appreciated that photosites on image sensors,
such as CCDs, often only measure the magnitude or intensity of the
light striking a photosite. In this case, a number of methods may
be used to convert the intensity values of the photosites (i.e. a
black and white image) into corresponding color values for each
photosite. For example, one method of obtaining color information
is to use a beam splitter to focus the image onto more than one
image sensor. In this case, each image sensor has a filter
associated with a color. For example, image sensor 910 could
include three CCD sensors, where one CCD sensor is filtered for red
light, another CCD sensor is filtered for green light, and the
third sensor is filtered for blue light. Another method is to use a
rotating device having separate color filters between the light
source (the image) and image sensor 910. As each color filter
rotates in front of image sensor 910, a separate image
corresponding to the color filter is captured. For example, a
rotating disk could have a filter for each of the primary colors
red, blue and green. In this case, the disk would rotate a red
filter, a blue filter, and a green filter sequentially in front of
image sensor 910, and as each filter was rotated in front, a
separate image would be captured.
[0075] Alternatively, a permanent filter could be placed over each
individual photosite. By breaking up image sensor 910 into a
variety of different photosites associated with different colors,
the actual color associated with a specific point or pixel of a
captured element may be interpolated. For example, a common pattern
used is the Bayer filter pattern, where rows of red and green
sensitive photosites are alternated with rows of blue and green
photosites. In the Bayer filter pattern, there are often many more
green-sensitive photosites than there are blue- or red-sensitive
photosites, as the human eye is more sensitive to green than to the
other colors, so more green color information should be present
for a captured image to be perceived as "true color" by the human
eye.
[0076] Accordingly, in one embodiment, color decoder 930 receives
the digital output representing an image from A/D convertor 920 and
converts the information from intensity values (black-and-white) to
color values. For example, image sensor 910 could utilize a Bayer
filter pattern as discussed previously. In this case, the
black-and-white digital output from A/D convertor 920 could be
interpolated or processed to generate data representative of one or
more color images. For example, color decoder 930 could generate
data representative of one or more full color images, one or more
monochrome images, and the like.
[0077] Using the data representative of an image generated by color
decoder 930, in one embodiment, color management system 940
processes the data for output and/or storage. For example, color
management system 940 could attenuate the dynamic range of the data
from color decoder 930. This may be done to reduce the amount of
data associated with a captured image. Color management 940 could
also format the data into a variety of formats, such as a Joint
Picture Experts Group (JPEG) format, a tagged image file format
(TIFF), a bitmap format, and the like. Color management system 940
may perform a number of other processes or methods to prepare the
data representative of an image for display or output, such as
compressing the data, converting the data from an analog to a
digital format, etc.
[0078] After color management system 940 processes data
representative of an image, the data, in one embodiment, is stored
on storage 950 and/or displayed on display 960. Storage 950 could
include memory, such as removable flash memory for a digital
camera, a storage disk, such as a hard drive or a floppy disk, and
the like. Display 960 could include a liquid crystal display (LCD),
a cathode ray tube (CRT) display, and other devices used to display
or preview captured images. In an alternative embodiment, the data
representative of an image could be processed by printer driver 970
to be printed by printer 980. Printer 980 could include a
photograph printer, a desktop printer, a copier machine, a fax
machine, a laser printer, and the like. Printer driver 970 could be
collocated, physically or logically, with printer 980, on a
computer connected to printer 980, and the like. It will be
appreciated that one or more of the elements of image capture
system 900 may be implemented as a state machine, as combinational
logic, as software executable on a data processor, and the like. It
will also be appreciated that the method or processes performed by
one or more of the elements of image capture system 900 may be
performed by a single device or system. For example, color decoder
930 and color management 940 could be implemented as a monolithic
microprocessor or as a combined set of executable instructions.
[0079] Image capture system 900 can be used to implement one or
more methods of various embodiments of the present invention. The
methods, herein referred to collectively as the image mask method,
may be implemented at one or more stages of the image capturing
process of image capture system 900. In one embodiment, the image mask
method may be applied at stage 925 between the output of digital
data from A/D convertor 920 and the input of color decoder 930. In
many cases, stage 925 may be the optimal location for application
of the image mask method. For example, if data representative of an
image output from image sensor 910 is monochrome (or
black-and-white) information yet to be decoded into color
information, less information may need to be processed using the
image mask method than after conversion of the data to color
information. For example, if the data were to be decoded into the
three primary colors (red, blue, green), three times as much
information may need to be processed, as there are three colors associated with
each pixel of a captured image. The image mask method, according to
at least one embodiment discussed previously, does not affect the
accuracy or operation of color decoder 930.
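The data-volume argument above can be illustrated with a short sketch. The code below is a hypothetical approximation only: the patent describes a mask that stays sharp at rapidly changing boundaries and is blurred elsewhere, while for brevity this sketch uses a plain neighbor-averaging blur, and the function names, `strength` parameter, and synthetic frame are all invented for illustration:

```python
import numpy as np

def blur(img, passes=4):
    """Crude low-pass filter: repeated 5-point neighbor averaging.
    (np.roll wraps at the borders; a real implementation would use an
    edge-preserving filter so the mask stays sharp at boundaries.)"""
    out = img.astype(float)
    for _ in range(passes):
        out = (out
               + np.roll(out, 1, axis=0) + np.roll(out, -1, axis=0)
               + np.roll(out, 1, axis=1) + np.roll(out, -1, axis=1)) / 5.0
    return out

def apply_dynamic_mask(raw, strength=0.5):
    """Divide out part of the low-frequency brightness (the mask) so that
    large-scale swings are compressed while local detail is preserved."""
    mask = blur(raw) ** strength      # strength=0 leaves the image unchanged
    return raw / np.maximum(mask, 1e-6)

# At stage 925 the sensor data is a single monochrome plane (H, W);
# after color decoding the same scene occupies (H, W, 3) -- three times
# the samples the mask method would have to touch.
raw = np.tile(np.linspace(1.0, 100.0, 64), (64, 1))   # synthetic raw frame
compressed = apply_dynamic_mask(raw)
```

After the division, `compressed` spans a much narrower range of values than `raw`, which is the dynamic-range compression the later paragraphs rely on; applying the same operation after demosaicing would have to process three color planes instead of one.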
[0080] Alternatively, the image mask method may be applied at stage
935 between color decoder 930 and color management system 940. In
some situations, the location of stage 935 may not be as optimal as
stage 925, since there may be more data to process between color
decoder 930 and color management system 940. For example, color
decoder 930 could generate data for each of the primary colors,
resulting in three times the information to be processed by the
image mask method at stage 935. An image mask method may also be
implemented at stage 945 between color management system 940 and
storage 950 and/or display 960. However, the data output by color
management system 940 often has been processed in ways that may
result in compression and/or loss of information and dynamic range;
therefore, application of the image mask method at stage 945 may not
generate results as favorable as at stages 925, 935.
[0081] If the captured image is to be printed, the image mask
method may be implemented at stages 965, 975. At stage 965, the
image mask method may be implemented by printer driver 970, while
at stage 975, the image mask method may be implemented between
printer driver 970 and printer 980. For example, the connection
between a system connected to printer driver 970, such as a
computer, and printer 980 could include software and/or hardware to
implement the image mask method. However, as discussed with
reference to stage 945, the data representative of a captured image
to be printed may have reduced dynamic range and/or loss of other
information as a result of processing by color management system
940.
[0082] In at least one embodiment, the image mask method performed
at stages 925, 935, 945, 965, and/or 975 is implemented as a set of
instructions executed by processor 942. Processor 942 can include a
microprocessor, a state machine, combinational logic circuitry, and
the like. In one implementation, the set of instructions is stored
in and retrieved from memory 943, where memory 943 can include random
access memory, read only memory, flash memory, a storage device,
and the like. Note that processor 942, in one embodiment, also
executes instructions for performing the operations of one or more
of the elements of image capture system 900. For example, processor
942 could execute instructions to perform the color decoding
operations performed by color decoder 930 and then execute the set
of instructions representative of the image mask method at stage
935.
[0083] It will be appreciated that the cost or effort to implement
the image mask method at an optimal or desired stage (stages
925-975) may be prohibitive, resulting in the implementation of the
image mask method at an alternate stage. For example, although
stage 925 is often the optimal location for implementation of the
image mask method, for reasons discussed previously, it may be
difficult to implement the image mask method at this location. For
example, image sensor 910, A/D convertor 920, and color decoder 930
could be implemented as a monolithic electronic circuit. In this
case it might prove difficult to modify the circuit to implement
the method. Alternatively, more than one element of image capture
system 900, such as color decoder 930 and color management system
940, may be implemented as a single software application. In this
case, the software application may be proprietary software where
modification is prohibited, or the source code of the software may
not be available, making modification of the software application
difficult.
[0084] In the event that the image mask method cannot be
implemented at the optimal location, application of the image mask
method at another suitable location often will still result in improved
image quality and detail. For example, even though the dynamic
range of data representative of an image may be reduced after
processing by color management system 940, application of the image
mask method at stage 945, in one embodiment, results in data
representative of an image having improved quality and/or detail
over the data output by color management system 940. The improved
image data may result in an improved image for display on display
960, subsequent display when retrieved from storage 950, or
physical replication by printer 980. It will be appreciated that
the image mask method may be employed more than once. For example,
the image mask method may be employed at stage 925 to perform an
initial compression of the dynamic range of the image, and then
again at stage 945 to further compress the image's dynamic
range.
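The two-stage use just described can be sketched numerically. The snippet below is a hypothetical stand-in: it replaces the patent's local, mask-based compression with a simple global log-domain compression so the cumulative effect of applying it at two stages is easy to see (the function name, the strength values, and the luminance figures are all invented for illustration):

```python
import numpy as np

def compress_range(img, strength):
    """Toy global range compression: shrink log-luminance toward its mean.
    A stand-in for the mask method's effect on overall dynamic range."""
    log = np.log(img)
    return np.exp(log.mean() + (log - log.mean()) * (1.0 - strength))

scene = np.array([0.01, 0.1, 1.0, 10.0, 100.0])   # 10,000:1 scene range
stage1 = compress_range(scene, 0.4)    # e.g. at stage 925, on raw data
stage2 = compress_range(stage1, 0.4)   # again at stage 945, before display
```

Each pass multiplies the log-range by 0.6, so two passes at strength 0.4 behave like a single pass at strength 0.64, and the final contrast ratio drops from 10,000:1 to roughly 28:1.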
[0085] Referring now to FIG. 10, a chart showing various
improvements in image types is illustrated according to at least
one embodiment of the present invention. As discussed previously,
an implementation of at least one embodiment of the present
invention may be used to improve the dynamic range of
representations of captured images. The horizontal axis of chart
1000 represents the dynamic range of various types of image
representations. The dynamic range of images, as presented to the
human eye (i.e. "real life"), is represented by range 1006.
dynamic range decreases sequentially from real life (range 1006) to
printed transparencies (range 1005), CRT displays (range 1004),
glossy photographic prints (range 1003), matte photographic prints
(range 1002), and LCD displays (range 1001). Note that the sequence
of dynamic ranges of various image representations is a general
comparison and the sequence of dynamic ranges should not be taken
as absolute in all cases. For example, there could exist a CRT
display (range 1004) which could have a dynamic range greater than
printed transparencies (range 1005).
[0086] According to at least one embodiment, by applying an image
mask method disclosed herein, the dynamic range of the
representation of an image may be improved. For example, by
applying an image mask method sometime before data representing an
image is displayed on an LCD monitor, image information having a
dynamic range comparable to a glossy photographic print (range
1003) could be compressed for display on an LCD monitor having a
dynamic range 1001, resulting in an improved display image.
Likewise, image information having a dynamic range equivalent to a
CRT display (range 1004) may be compressed into a dynamic range
usable for matte photographic prints (range 1002), and so on. As a
result, an image mask method, as disclosed herein, may be used to
improve the dynamic range used for display of a captured image,
thereby improving the quality of the display of the captured
image.
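As a concrete sketch of this remapping, the code below compresses an image spanning one medium's contrast ratio into another's. The contrast-ratio figures are invented for illustration (FIG. 10 establishes only the ordering of the media, not numeric values), and the function is a simple global log-domain remap rather than the patent's mask method:

```python
import numpy as np

# Illustrative contrast ratios, ordered as in FIG. 10 (values are assumptions).
RANGES = {
    "lcd": 30.0,            # range 1001
    "matte_print": 50.0,    # range 1002
    "glossy_print": 80.0,   # range 1003
    "crt": 120.0,           # range 1004
    "transparency": 400.0,  # range 1005
}

def fit_to_display(img, source, target):
    """Rescale log-luminance so an image spanning the source medium's
    contrast ratio spans the target medium's ratio instead."""
    k = np.log(RANGES[target]) / np.log(RANGES[source])
    log = np.log(img)
    return np.exp(log.min() + (log - log.min()) * k)

glossy = np.array([1.0, 9.0, 80.0])    # spans glossy's assumed 80:1 ratio
on_lcd = fit_to_display(glossy, "glossy_print", "lcd")
```

After the remap, `on_lcd` spans exactly the 30:1 ratio assumed here for the LCD, mirroring the glossy-print-to-LCD example in the text; the CRT-to-matte-print case works the same way with the corresponding entries.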
[0087] One of the preferred implementations of the invention is as
sets of computer readable instructions resident in the random
access memory of one or more processing systems configured
generally as described in FIGS. 1-10. Until required by the
processing system, the set of instructions may be stored in another
computer readable memory, for example, in a hard disk drive or in a
removable memory such as an optical disk for eventual use in a CD
drive or DVD drive or a floppy disk for eventual use in a floppy
disk drive. Further, the set of instructions can be stored in the
memory of another image processing system and transmitted over a
local area network or a wide area network, such as the Internet,
where the transmitted signal could be a signal propagated through a
medium such as an ISDN line, or the signal may be propagated
through an air medium and received by a local satellite to be
transferred to the processing system. Such a signal may be a
composite signal comprising a carrier signal, and contained within
the carrier signal is the desired information containing at least
one computer program instruction implementing the invention, and
may be downloaded as such when desired by the user. One skilled in
the art would appreciate that the physical storage and/or transfer
of the sets of instructions physically changes the medium upon
which it is stored electrically, magnetically, or chemically so
that the medium carries computer readable information.
[0088] In the preceding detailed description of the figures,
reference has been made to the accompanying drawings which form a
part thereof, and in which is shown by way of illustration specific
preferred embodiments in which the invention may be practiced.
These embodiments are described in sufficient detail to enable
those skilled in the art to practice the invention, and it is to be
understood that other embodiments may be utilized and that logical,
mechanical, chemical and electrical changes may be made without
departing from the spirit or scope of the invention. To avoid
detail not necessary to enable those skilled in the art to practice
the invention, the description may omit certain information known
to those skilled in the art. Furthermore, many other varied
embodiments that incorporate the teachings of the invention may be
easily constructed by those skilled in the art. Accordingly, the
present invention is not intended to be limited to the specific
form set forth herein, but on the contrary, it is intended to cover
such alternatives, modifications, and equivalents, as can be
reasonably included within the spirit and scope of the invention.
The preceding detailed description is, therefore, not to be taken
in a limiting sense, and the scope of the present invention is
defined only by the appended claims.
* * * * *