U.S. patent application number 13/881039 was published by the patent office on 2014-07-10 for a system and method for imaging and image processing.
This patent application is currently assigned to LINX COMPUTATIONAL IMAGING LTD. The applicants listed for this patent are Chen Aharon-Attar, Ziv Attar, and Edwin Maria Wolterink. Invention is credited to Chen Aharon-Attar, Ziv Attar, and Edwin Maria Wolterink.
Application Number: 13/881039 (Publication No. 20140192238)
Family ID: 44925618
Publication Date: 2014-07-10

United States Patent Application 20140192238
Kind Code: A1
Attar; Ziv; et al.
July 10, 2014
System and Method for Imaging and Image Processing
Abstract
One or more objects of interest from a scene are selected. Depth
information of the one or more objects is calculated. Additionally,
depth information of the scene is calculated. The calculated depth
information of the one or more objects is compared with calculated
depth information of the scene. Based on the comparison, a blur is
applied to an image that includes the scene.
Inventors: Attar; Ziv (Rotterdam, NL); Aharon-Attar; Chen (Rotterdam, NL); Wolterink; Edwin Maria (Valkenswaard, NL)

Applicant:
Attar; Ziv, Rotterdam, NL
Aharon-Attar; Chen, Rotterdam, NL
Wolterink; Edwin Maria, Valkenswaard, NL

Assignee: LINX COMPUTATIONAL IMAGING LTD., Zichron Yaakov, IL
Family ID: 44925618
Appl. No.: 13/881039
Filed: April 24, 2011
PCT Filed: April 24, 2011
PCT No.: PCT/NL2011/050726
371 Date: October 9, 2013
Related U.S. Patent Documents

Application Number 61406148, filed Oct 24, 2010
Current U.S. Class: 348/252
Current CPC Class: H04N 9/097 (20130101); H04N 5/262 (20130101); H04N 5/2254 (20130101); H04N 5/265 (20130101); H04N 5/2258 (20130101); H04N 5/3572 (20130101); H04N 9/045 (20130101); H04N 5/23232 (20130101); H04N 5/2351 (20130101); H04N 9/04557 (20180801); H04N 5/3532 (20130101); H04N 5/232 (20130101); H04N 5/2621 (20130101); H04N 5/2253 (20130101); H04N 5/23229 (20130101); H04N 5/2226 (20130101); H04N 5/3415 (20130101)
Class at Publication: 348/252
International Class: H04N 5/232 (20060101) H04N005/232
Claims
1-15. (canceled)
16. A method for creating an image comprising: selecting one or
more objects of interest from a scene; calculating first depth
information of the one or more objects of interest; calculating
second depth information of the scene; comparing the first depth
information with the second depth information; and creating an
image having at least one blurred area and at least one non-blurred
area based on the comparison.
17. The method of claim 16, wherein the first depth information is
calculated using a multi aperture digital camera having a plurality
of imaging channels.
18. The method of claim 17, wherein the plurality of imaging
channels includes filters with identical chromatic transmission
properties.
19. The method of claim 17, wherein the plurality of imaging
channels each includes a filter with proportional chromatic
transmission properties.
20. The method of claim 17, wherein the first depth information is
calculated by comparing a plurality of respective images from the
plurality of imaging channels.
21. The method of claim 16, wherein the first depth information is
calculated using a time-of-flight system.
22. The method of claim 16, wherein the first depth information is calculated by comparing two or more images captured by differently positioned digital cameras.
23. The method of claim 16, wherein the image having at least one
blurred area and at least one non-blurred area has a low depth of
field appearance.
24. A method for creating an image having blurred and non-blurred
areas, the method comprising: capturing an image; calculating a
depth map; selecting one or more objects of interest from the
image; comparing the calculated depth map with depth information of
the selected one or more objects; and applying a blur to the image
based on the comparison.
25. The method of claim 24, wherein responsive to applying the
blur, the image has a low depth of field appearance.
26. A method for creating an image having blurred and non-blurred
areas, the method comprising: capturing an image sequence
comprising sequential images; calculating differences between the
sequential images; selecting one or more pixel areas of interest
from the sequential images based on the calculated differences; and
applying a blur to the image sequence based on the selection of the
one or more pixel areas.
27. The method of claim 26, wherein responsive to applying the
blur, the differences between the sequential images are highlighted
in the image sequence.
Description
[0001] The present invention relates to a system and method for
creating an image having blurred and non-blurred areas using an
image capturing device. Moreover, the invention relates to an
apparatus for creating an image with a low depth of field
appearance, to an apparatus for creating an image with highlighted
areas of interest and to an apparatus for creating an image with
highlighted differences in an image sequence.
BACKGROUND OF THE INVENTION
[0002] WO 2006/039486 relates to a method for digitally imaging a
scene, the method comprising: using a photo sensor array to
simultaneously detect light from the scene that is passed to
different locations on a focal plane; determining the angle of
incidence of the light detected at the different locations on the
focal plane; and using the determined angle of incidence and the
determined depth of field to compute an output image in which at
least a portion of the image is refocused. This International
application discloses a system as well, comprising: a main lens; a
photo sensor array for capturing a set of light rays; a microlens
array between the main lens and the photo sensor array; a data
processor to compute a synthesized refocused image via a virtual
redirection of the set of light rays captured by the photo sensor
array.
[0003] U.S. Pat. No. 7,224,384 relates to an optical imaging system
comprising: a taking lens that collects light from a scene being
imaged with the optical imaging system; a 3D camera comprising at
least one photo surface that receives light from the taking lens
simultaneously from all points in the scene and provides data for
generating a depth map of the scene responsive to the light; and an
imaging camera comprising at least one photo surface that receives
light from the taking lens and provides a picture of the scene
responsive to the light; and a light control system that controls
an amount of light from the taking lens that reaches at least one
of the 3D camera and the imaging camera without affecting an amount
of light that reaches the other of the 3D camera and the imaging
camera.
[0004] WO 2008/087652 relates to a method for mapping an object,
comprising: illuminating the object with at least two beams of
radiation having different beam characteristics; capturing at least
one image of the object under illumination with each of the at
least two beams; processing the at least one image to detect local
differences in an intensity of the illumination cast on the object
by the at least two beams; and analyzing the local differences in
order to generate a three-dimensional (3D) map of the object.
[0005] An object of the present invention is to use information
captured by the camera to blur only selected pixels in the
image.
[0006] Another object of the present invention is to use depth
information captured by the camera and a distance of interest set
by an algorithm or by a user to blur only selected pixels.
[0007] Another object of the present invention is to use chromatic
information captured by the camera and a spectrum of interest set
by an algorithm or by a user to blur only selected pixels.
[0008] Another object of the present invention is to use difference
information between two or more sequential frames to blur only
selected pixels.
[0009] The term multi aperture digital camera, as used herein, means a camera that consists of more than one imaging lens, each having its own aperture and lens elements. The term imaging channel refers to the lens and sensor area of one aperture in a multi aperture digital camera.
[0010] Using a multi lens camera allows us to extract distance information for certain objects in a scene. The distance between the lenses of the different imaging channels creates a parallax effect, causing objects that are not at infinity to appear at different positions in the images of the different imaging channels. Calculating these position shifts using an algorithm such as auto-correlation allows us to determine the distance of each object in the scene. Using a time-of-flight system allows us to calculate depth information of objects in a scene by emitting light toward the scene and measuring the time it takes the light to return to the sensor: the farther away an object is, the longer the light takes to return.
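As a hedged illustration of the parallax calculation described above, the sketch below estimates the position shift with a sum-of-squared-differences block match (a stand-in for the auto-correlation the text mentions) and converts it to distance by triangulation; the function names, baseline, and focal length in pixels are assumptions made for the example. For the time-of-flight case, distance follows directly from d = c*t/2, where t is the round-trip time of the emitted light.

```python
# Illustrative sketch only; names and parameters are assumptions, not the
# patent's implementation. Estimates object distance from the parallax
# between two imaging channels of a multi aperture camera.
import numpy as np

def block_disparity(img_a, img_b, row, col, block=8, max_shift=32):
    """Find the horizontal shift of a block from channel A inside channel B."""
    ref = img_a[row:row + block, col:col + block].astype(np.float64)
    best_shift, best_score = 0, np.inf
    for s in range(max_shift):
        cand = img_b[row:row + block, col + s:col + s + block].astype(np.float64)
        if cand.shape != ref.shape:
            break  # shifted window ran off the image
        score = np.sum((ref - cand) ** 2)  # sum of squared differences
        if score < best_score:
            best_score, best_shift = score, s
    return best_shift  # disparity in pixels; 0 for objects at infinity

def distance_from_disparity(disparity_px, baseline_m, focal_px):
    """Triangulation: nearer objects shift more between the channels."""
    return np.inf if disparity_px == 0 else baseline_m * focal_px / disparity_px
```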
[0011] Using a structured light system to calculate depth information of objects in a scene relies on a light emitting system in which light is emitted in a structured manner, such as a grid of dots. An imaging camera is used to image these dots, and an algorithm measures the positions of these dots in its image. The light emitting system and the imaging camera are separated laterally, so a parallax effect is present; by calculating the positions of the dots, or of any other pattern, the system can determine the distance of the object from which each dot was reflected.
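Under the same parallax principle, a minimal sketch of the structured light triangulation (geometry and names assumed for illustration, not taken from the patent):

```python
# Hedged sketch: the projector and camera are separated by a lateral
# baseline, so a dot reflected from a near object lands at a pixel
# position shifted from its at-infinity reference position.
def structured_light_depth(observed_px, reference_px, baseline_m, focal_px):
    """reference_px: where the dot would land for an object at infinity."""
    shift = abs(observed_px - reference_px)  # parallax of the dot, in pixels
    return float('inf') if shift == 0 else baseline_m * focal_px / shift
```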
[0012] Using multiple cameras positioned differently likewise allows us to extract distance information for certain objects in a scene. The distance between the lenses of the different imaging channels creates a parallax effect, causing objects that are not at infinity to appear at different positions in the images of the different imaging channels. Calculating these position shifts using an algorithm such as auto-correlation allows us to determine the distance of each object in the scene.
[0013] The present inventors found that it is possible to blur selected parts of an image in order to create a low depth of field appearance and to highlight certain areas or objects in an image or image sequence. Humans, when looking at an image, tend to focus their attention on the areas that are sharpest relative to their surroundings; therefore, blurring areas that are of lower interest has a clear advantage.
[0014] With camera lenses of low F/# (focal length divided by aperture diameter), the depth of field becomes smaller as the F/# decreases. Although this effect may be considered a disadvantage, since objects that are not positioned at the focus distance are severely blurred, it can also create a three-dimensional impression of the scene. Using the method described above for obtaining object distances, by calculating the local shift between the images of the different imaging channels, or using another technology described above, we can intentionally blur areas in the image that are far from the object of interest which we want to keep sharp.
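For reference, one standard thin-lens approximation (not stated in the patent) that captures this behavior, valid when the subject distance is much smaller than the hyperfocal distance:

```latex
% N = f-number (F/#), c = circle of confusion,
% s = subject distance, f = focal length.
\mathrm{DOF} \approx \frac{2\,N\,c\,s^{2}}{f^{2}}
```

A smaller F/# (smaller N) thus gives a smaller depth of field, which is the low depth of field appearance the blurring methods below emulate.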
[0015] The present invention relates to a system and method which
may be applied to a variety of imaging systems. This system and
method provide high quality imaging while considerably reducing the
length of the camera as compared to other systems and methods.
[0016] Specifically, an object of the present invention is to provide a system and a method to improve image capturing devices. This may be accomplished by using two or more apertures, each with its own lens. Each lens forms a small image of the scene, transferring light emitted or reflected from objects in the scene onto a proportional area of the detector. The optical track of each lens is proportional to the segment of the detector onto which the emitted or reflected light is projected. Therefore, when using smaller lenses, the area of the detector onto which the emitted or reflected light is projected, referred to hereinafter as the active area of the detector, is smaller. When the detector is active for each lens separately, each initial image formed is significantly smaller compared to using one lens which forms an entire image; a one-lens camera transfers emitted or reflected light onto the entire detector area.
[0017] According to an embodiment, the present invention relates to a method for creating an image having blurred and non-blurred areas using an image capturing device capable of depth mapping, comprising the steps of:
[0018] Selecting one or more objects of interest from the scene,
[0019] Calculating depth information of said one or more objects of interest from the scene,
[0020] Retrieving raw data of the complete scene from the multi aperture camera,
[0021] Calculating depth information of the complete scene,
[0022] Comparing the calculated depth information of the selected objects of interest with the calculated depth of the complete scene,
[0023] Applying a blur that is dependent on the result of the comparison.
[0024] The step of selecting can be done automatically by an algorithm that recognizes areas of interest, such as faces in conventional photography. Blurring can be achieved by means of convolution of an area of the image with a blur filter such as a Gaussian.
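A minimal sketch of that blur step, assuming a grayscale image and a boolean mask of pixels to blur (the helper name and sigma value are illustrative choices, not from the patent):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blur_region(image, mask, sigma=3.0):
    """Blur the whole image with a Gaussian, then keep the blurred pixels
    only where mask is True; everything else stays sharp."""
    blurred = gaussian_filter(image.astype(np.float64), sigma=sigma)
    out = image.astype(np.float64)  # astype copies, so the input is untouched
    out[mask] = blurred[mask]
    return out
```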
[0025] In more detail, if a scene consists of a room with three people standing at 1, 2 and 3 meters from the camera respectively, the object of interest can be chosen as the person standing at 1 meter. According to the embodiment above, we first calculate the distance of the object of interest, then calculate the distances of all other objects and compare them. According to this comparison we decide on the type or size of blur to apply to each object. In this case a small blur will be applied to the person standing at 2 meters and a larger blur will be applied to the person standing at 3 meters. The object of interest, the person standing at 1 meter, will not be blurred at all.
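A worked sketch of this example, assuming a discrete depth map and a linear depth-to-sigma mapping (both assumptions chosen for illustration):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def depth_dependent_blur(image, depth_map, depth_of_interest, gain=2.0):
    """Blur grows with distance from the depth of interest; the object of
    interest itself (sigma 0) is left sharp."""
    out = np.zeros_like(image, dtype=np.float64)
    for d in np.unique(depth_map):  # assumes a quantized depth map
        sigma = gain * abs(float(d) - depth_of_interest)
        layer = (image.astype(np.float64) if sigma == 0
                 else gaussian_filter(image.astype(np.float64), sigma=sigma))
        out[depth_map == d] = layer[depth_map == d]
    return out

# With gain=2.0 and depth_of_interest=1.0: the person at 1 m gets sigma 0
# (sharp), the person at 2 m gets sigma 2, the person at 3 m gets sigma 4.
```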
[0026] The advantage of this embodiment is that a low depth of field appearance is achieved.
[0027] Another advantage is that the selection of the object of interest can be made automatically, or by a user using a touch screen or an input device and a display, in one frame of a preview mode frame sequence; a full resolution image may then be captured and processed to keep the object of interest in focus while blurring other objects according to their distance from the object of interest. This eliminates the need to apply the blur only after the image is captured.
[0028] According to another embodiment, the present invention relates to a method for creating an image having blurred and non-blurred areas using an image capturing device capable of depth mapping, comprising the steps of:
[0029] Capturing an image from the image capturing device,
[0030] Calculating a depth map,
[0031] Selecting one or more objects of interest from the image,
[0032] Comparing the calculated depth information of the selected objects of interest with the calculated depth of the complete scene,
[0033] Applying a blur that is dependent on the result of the comparison.
[0034] Blurring can be achieved by means of convolution of an area
of the image with a blur filter such as a Gaussian.
[0035] The advantage of this embodiment is that the selection of the object of interest is done after the capturing and depth calculation. This allows the user to choose different objects of interest, or to correct the selection, while keeping the non-blurred information and the depth map. Another advantage is that selecting objects of interest, comparing their distances with those of the other objects, and blurring accordingly can be done at a different time than the image capturing, allowing these operations to be performed on a device other than the one used for capturing. For example, the image capturing device could be a multi aperture camera integrated into a mobile phone or tablet computer, and the selection of the object of interest and the blurring can be done on a tablet or laptop computer at a different time. Another advantage is that, by saving the image and the depth information, it is possible to select objects of interest and apply blur multiple times, saving each resulting image as a computer file. Each time, the selection of objects of interest may be different.
[0036] According to another embodiment, the present invention relates to a method for creating an image having blurred and non-blurred areas using an image capturing device, in which the method comprises the following steps:
[0037] Capturing an image from the image capturing device,
[0038] Calculating chromatic properties of objects appearing in the captured image,
[0039] Selecting one or more objects of interest from the image according to the calculated chromatic properties,
[0040] Applying a blur that is dependent on the result of the selection.
[0041] Blurring can be achieved by means of convolution of an area
of the image with a blur filter such as a Gaussian.
[0042] The advantage of this embodiment is that we can highlight objects of a certain chromatic nature, such as tissue suspected of being harmful in an image captured by, for example, an endoscopic camera.
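A hedged sketch of this chromatic selection (the "reddish" test and its thresholds are assumptions made for illustration, not the patent's criteria):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def highlight_spectrum(rgb, sigma=4.0):
    """Keep pixels inside a spectrum of interest sharp; blur the rest."""
    r, g, b = (rgb[..., c].astype(np.float64) for c in range(3))
    of_interest = (r > 1.3 * g) & (r > 1.3 * b)  # crude "reddish" test
    blurred = np.stack(
        [gaussian_filter(rgb[..., c].astype(np.float64), sigma) for c in range(3)],
        axis=-1)
    blurred[of_interest] = rgb[of_interest].astype(np.float64)
    return blurred
```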
[0043] According to another embodiment, the present invention relates to a method for creating an image having blurred and non-blurred areas using an image sequence capturing device, in which the method comprises the following steps:
[0044] Capturing an image sequence from the image sequence capturing device,
[0045] Calculating differences between the sequential images,
[0046] Selecting one or more pixel areas of interest from the images according to the differences calculated between the sequential frames,
[0047] Applying a blur that is dependent on the result of the selection.
[0048] Blurring can be achieved by means of convolution of an area
of the image with a blur filter such as a Gaussian.
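A minimal sketch of this sequence processing for one pair of grayscale frames (the difference threshold and names are assumptions made for illustration):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def highlight_motion(prev_frame, frame, diff_threshold=15, sigma=4.0):
    """Pixels that changed between sequential frames stay sharp; static
    areas are blurred, drawing the eye to motion."""
    diff = np.abs(frame.astype(np.int32) - prev_frame.astype(np.int32))
    moving = diff > diff_threshold
    out = gaussian_filter(frame.astype(np.float64), sigma=sigma)
    out[moving] = frame[moving]
    return out
```

Applied frame by frame over the sequence, this reproduces the highlighting effect described in the following paragraphs.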
[0049] The advantage of this embodiment is that objects that are
moving or changing will be highlighted by the effect of the
blurring of all other areas of the image or image sequence.
[0050] An example of this embodiment is a surveillance camera coupled with a display that is observed by a human. The scene may contain many details and objects which make it more difficult for the human to detect moving objects. By blurring the objects that are not moving, we attract the attention of the observing human to the moving or changing objects.
[0051] The present invention could be integrated into many devices, such as digital cameras, digital video cameras, mobile phones, personal computers, tablets, PDAs, notebooks, gaming consoles, televisions, monitors, displays, automotive cameras, glasses, helmets, projectors, microscopes, imaging endoscopes, imaging medical probes, surveillance systems, inspection systems, speed detection systems, traffic management systems, area access systems, satellite imaging, machine vision and augmented reality systems.
[0052] The invention will be more clearly understood by reference
to the following description of preferred embodiments thereof read
in conjunction with the figures attached hereto. In the figures,
identical structures, elements or parts which appear in more than
one figure are labeled with the same numeral in all the figures in
which they appear. Dimensions of components and features shown in
the figures are chosen for convenience and clarity of presentation
and are not necessarily shown to scale. The figures are listed
below:
BRIEF DESCRIPTION OF THE DRAWINGS
[0053] FIG. 1 illustrates a side view of a single lens camera.
[0054] FIG. 2 illustrates a sensor array (201) having multiple
pixels.
[0055] FIG. 3 illustrates a side view of a three lens camera having
one sensor and three lenses.
[0056] FIG. 4 illustrates an example of a scene as projected on to
the sensor.
[0057] FIG. 5 illustrates a front view of a three lens camera using
one rectangular sensor divided in to three regions.
[0058] FIG. 6 illustrates a front view of a three lens camera
having one sensor, one large lens and two smaller lenses.
[0059] FIG. 7 illustrates a front view of a four lens camera having
one sensor (700) and four lenses.
[0060] FIG. 8 illustrates a 16 lens camera having four regions,
each containing four lenses as illustrated in FIG. 7.
DETAILED DESCRIPTION OF THE DRAWINGS
[0061] FIG. 1 illustrates a side view of a single lens camera
having a single lens (102) that can comprise one or more elements
and a single sensor (101).
[0062] FIG. 2 illustrates a sensor array (201) having multiple pixels, where the positions of the green filter, red filter and blue filter are marked by (202), (203) and (204) respectively. The image taken using this configuration needs to be processed in order to separate the green, red and blue images.
[0063] FIG. 3 illustrates a side view of a three lens camera having one sensor (310) and three lenses (301), (302) and (303). Each of said lenses projects an image of the same scene onto a segment of the sensor, marked (311), (312) and (313) respectively. Each of the three lenses has a different color filter integrated within the lens, in front of it, or between the lens and the sensor (310). Using the described configuration, the image acquired by the sensor is composed of two or more smaller images, each containing information from the scene at a different spectrum.
[0064] FIG. 4 illustrates an example of a scene as projected onto the sensor (401). In each region of the sensor (402), (403) and (404) the same scene is projected, but each region contains information for light at different wavelengths, representing different colors according to the filters integrated within the lens that forms the image on that region.
[0065] The described configuration does not require the use of a color mask, and therefore the maximal spatial frequency that can be resolved by the sensor is higher. On the other hand, using a smaller lens and a smaller active area per channel necessarily means that the focal length of the lens is smaller, and therefore the spatial resolution in object space is decreased. Overall, the maximal resolvable resolution for each color remains the same.
[0066] The image acquired by the sensor is composed of two or more smaller images, each containing information of the same scene but in different colors. The complete image is then processed and separated into three or more smaller images, which are combined into one large color image.
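A sketch of that separation and recombination, assuming three equal side-by-side regions and ignoring the registration and parallax correction a real pipeline would need:

```python
import numpy as np

def combine_three_channel_image(sensor_image):
    """sensor_image: 2D array whose width spans three regions (R, G, B)."""
    h, w = sensor_image.shape
    third = w // 3
    r = sensor_image[:, 0:third]
    g = sensor_image[:, third:2 * third]
    b = sensor_image[:, 2 * third:3 * third]
    return np.stack([r, g, b], axis=-1)  # one large color image
```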
The Described Method of Imaging has Many Advantages:
[0067] 1. Shorter lens track (height): since each one of the lenses used is smaller than a single lens covering the same field of view, the total track (height) of each lens is smaller, allowing the camera to be smaller in height, an important factor for mobile phone cameras, notebook cameras and other applications requiring a short optical track.
[0068] 2. Reduced color artifacts: since each color is captured separately, artifacts originating from the spatial dependency of each color in a color mask will not appear.
[0069] 3. Lens requirements: each lens does not have to be optimal for all spectrums used but only for one spectrum, simplifying the lens design and possibly decreasing the number of elements used in each lens, as no color correction is needed.
[0070] 4. Larger depth of focus: the depth of focus of a system depends on its focal length. Since we use smaller lenses with smaller focal lengths, we increase the depth of focus by the scale factor squared (see the sketch after this list).
[0071] 5. Elimination of the focus mechanism: focus mechanisms change the distance between the lens and the sensor to compensate for the change in object distance and to assure that the desired distance is in focus during the exposure time. Such a mechanism is costly and has many other disadvantages, such as:
[0072] a. Size
[0073] b. Power consumption
[0074] c. Shutter lag
[0075] d. Reliability
[0076] e. Price
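A hedged reading of item 4, using the thin-lens approximation given earlier and assuming the circle of confusion c is held fixed while the focal length is scaled down:

```latex
% Replacing f by f/k (a lens k times shorter) multiplies the depth of
% focus by k^2, i.e. the scale factor squared, when N, c and s are fixed.
\mathrm{DOF}(f/k) \approx \frac{2\,N\,c\,s^{2}}{(f/k)^{2}} = k^{2}\,\mathrm{DOF}(f)
```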
[0077] Using a fourth lens, in addition to the three used for red, green and blue (or other colors), with a broad spectral transmission can allow extension of the sensor's dynamic range and improve the signal-to-noise performance of the camera in low light conditions.
[0078] All configurations described above using a fourth lens can be applied to other configurations having two or more lenses.
[0079] Another proposed configuration uses two or more lenses with one sensor having a color mask integrated with, or placed on top of, the sensor, such as a Bayer mask. In such a configuration no color filter is integrated into each lens channel, and every lens creates a color image on the sensor region corresponding to that lens. The resulting image is processed to form one large image combining the two or more color images projected onto the sensor.
Three Lens Camera:
[0080] Dividing the sensor's active area into three areas, one for each color red, green and blue (or other colors), can be achieved by placing three lenses side by side as described in the drawing below. The resulting image will consist of three small images, each containing information of the same scene in a different color. Such a configuration will comprise three lenses, where the focal length of each lens is 4/9 of that of an equivalent single lens camera that uses a color filter array; these values assume a 4:3 aspect ratio sensor.
[0081] FIG. 5 illustrates a front view of a three lens camera using one rectangular sensor (500) divided into three regions (501), (502) and (503). The three lenses (511), (512) and (513), each having a different color filter integrated within the lens, in front of the lens, or between the lens and the sensor, are used to form images of the same scene in different colors. In this example each region of the sensor (501), (502) and (503) is rectangular, with the longer dimension of the rectangle perpendicular to the long dimension of the complete sensor.
[0082] Other three lens configurations can be used, such as using a larger green filtered lens and two smaller lenses for blue and red; such a configuration will result in higher spatial resolution in the green channel, since more pixels are used for it.
[0083] FIG. 6 illustrates a front view of a three lens camera
having one sensor (600), one large lens (613) and two smaller
lenses (611) and (612). The large lens (613) is used to form an image on the sensor segment marked (603), while the two smaller lenses form images on the sensor segments marked (601) and (602) respectively. The larger lens (613) can use a green color
filter while the two smaller lenses (611) and (612) can use a blue
and red filter respectively. Other color filters could be used for
each lens.
Four Lens Camera:
[0084] FIG. 7 illustrates a front view of a four lens camera having one sensor (700) and four lenses (711), (712), (713) and (714). Each lens forms an image on the corresponding sensor region marked (701), (702), (703) and (704) respectively. Each one of the lenses is integrated with a color filter inside the lens, in front of the lens, or between the lens and the sensor. All four lenses could be integrated with different color filters, or alternatively two of the four lenses could have the same color filter integrated inside the lens, in front of the lens, or between the lens and the sensor. For example, using two green filters, one blue filter and one red filter allows more light collection in the green spectrum.
M×N Lens Camera:
[0085] Using M and/or N larger than 2 allows a higher shortening factor and a greater increase in depth of focus.
[0086] FIG. 8 illustrates a 16 lens camera having four regions (801), (802), (803) and (804), each containing four lenses as illustrated in FIG. 7.
* * * * *