U.S. patent application number 15/159517 was filed with the patent office on 2016-05-19 and published on 2016-11-24 as publication number 20160343119 for "Selective Region-Based Focus with Focal Adjustment Bracketing via Lens/Image Sensor Position Manipulation." The applicant listed for this patent is Searete LLC. The invention is credited to W. Daniel Hillis, Nathan P. Myhrvold, and Lowell L. Wood.
Application Number: 15/159517
Publication Number: 20160343119
Family ID: 44061818
Filed: 2016-05-19
Published: 2016-11-24
United States Patent Application 20160343119
Kind Code: A1
Hillis; W. Daniel; et al.
November 24, 2016

Selective Region-Based Focus with Focal Adjustment Bracketing via Lens/Image Sensor Position Manipulation
Abstract
An image capturing apparatus constructs a composite image from regions of images captured at differing focal distances between the image plane of the apparatus's photo-detector image sensor and the subject of the image, providing selective-focus, background-defocus, and/or lens-blur modes of the image capturing apparatus. Construction of the composite image occurs after the captured images are aligned, the aligning based in part on registration of one or more visual features common to the images; the aligned images are then used to provide an image with the desired focus in response to a selection made via a user interface of the image capturing apparatus. Focus bracketing refers to collecting multiple images of the same scene or object while adjusting the image capturing apparatus's focal parameters between captures so as to focus at distances both nearer to and farther from the desired focus.
Inventors: Hillis; W. Daniel (Encino, CA); Myhrvold; Nathan P. (Medina, WA); Wood; Lowell L. (Bellevue, WA)

Applicant:
  Name         City      State  Country  Type
  Searete LLC  Bellevue  WA     US

Family ID: 44061818
Appl. No.: 15/159517
Filed: May 19, 2016
Related U.S. Patent Documents

Application Number   Filing Date    Patent Number   Continued By
14108003             Dec 16, 2013   9348123         15159517
12925848             Oct 28, 2010   8643955         14108003
12072497             Feb 25, 2008   7826139         12925848
11811356             Jun 7, 2007    7742233         12072497
11804314             May 15, 2007                   11811356
11498427             Aug 2, 2006    7259917         11804314
11221350             Sep 7, 2005    7417797         11498427
10764431             Jan 21, 2004   6967780         11221350
10764340             Jan 21, 2004   7251078         10764431
10738626             Dec 16, 2003   7231097         10764340
Current U.S. Class: 1/1

Current CPC Class: G02B 7/365 (20130101); H04N 5/23212 (20130101); G02B 7/023 (20130101); H04N 5/232123 (20180801); H04N 5/3572 (20130101); G02B 27/40 (20130101); G02B 15/14 (20130101); G02B 27/0075 (20130101); G06T 2207/20221 (20130101); H04N 5/23232 (20130101); G02B 3/0006 (20130101); G06T 5/50 (20130101)

International Class: G06T 5/50 (20060101); H04N 5/357 (20060101); G02B 7/36 (20060101); H04N 5/232 (20060101); G02B 3/00 (20060101); G02B 7/02 (20060101)
Claims
1.-326. (canceled)
327. An image capturing apparatus capable of providing region-based
focus in part using focus bracketing, comprising: circuitry
configured for capturing at least one first image at a first focal
length and at least one second image at a second focal length at
least partially via adjustment of a focal length associated with at
least one lens and at least one image sensor; circuitry configured
for aligning the at least one first image and the at least one
second image at least partially based on correlating one or more
common features present at substantially similar pixel locations of
the at least one first image and the at least one second image; and
circuitry configured for constructing at least one composite image
at least partially via blending at least one region of the at least
one first image at the first focal length with at least one region
of the at least one second image at the second focal length.
328. The image capturing apparatus of claim 327, wherein circuitry
configured for capturing at least one first image at a first focal
length and at least one second image at a second focal length at
least partially via adjustment of a focal length associated with at
least one lens and at least one image sensor of the image capturing
apparatus comprises: circuitry configured for capturing a primary
image with a lens at a primary position; and circuitry configured
for capturing another image with the at least one lens at another
position.
329. The image capturing apparatus of claim 328, wherein circuitry
configured for capturing another image with the at least one lens
at another position comprises: circuitry configured for moving the
at least one lens to the another position while the at least one
image sensor remains stationary; and circuitry configured for
capturing the another image with the at least one lens at the
another position.
330. The image capturing apparatus of claim 328, wherein circuitry
configured for capturing another image with the at least one lens
at another position comprises: circuitry configured for capturing
one or more additional images with the at least one lens at one or
more focal positions.
331. The image capturing apparatus of claim 328, wherein circuitry
configured for capturing another image with the at least one lens
at another position comprises: circuitry configured for capturing
another image with the at least one lens at another position
relative to the at least one image sensor.
332. The image capturing apparatus of claim 331, wherein circuitry
configured for capturing another image with the at least one lens
at another position relative to the at least one image sensor
comprises: circuitry configured for moving the at least one image
sensor while the at least one lens remains stationary; and
circuitry configured for capturing the another image with the at
least one lens at the another position relative to the at
least one image sensor subsequent to moving the at least one image
sensor while the at least one lens remains stationary.
333. The image capturing apparatus of claim 327, wherein circuitry
configured for capturing at least one first image at a first focal
length and at least one second image at a second focal length at
least partially via adjustment of a focal length associated with at
least one lens and at least one image sensor of the image capturing
apparatus comprises: circuitry configured for capturing a primary
image with a lens at a primary position; and circuitry configured
for capturing another image subsequent to moving the at least one
lens to at least another position.
334. The image capturing apparatus of claim 327, wherein circuitry
configured for capturing at least one first image at a first focal
length and at least one second image at a second focal length at
least partially via adjustment of a focal length associated with at
least one lens and at least one image sensor of the image capturing
apparatus comprises: circuitry configured for capturing at least
one first image at a first focal length and at least one second
image at a second focal length at least partially via adjustment of
a focal length associated with at least one lens and at least one
photo-detector of the image capturing apparatus.
335. The image capturing apparatus of claim 327, wherein circuitry
configured for capturing at least one first image at a first focal
length and at least one second image at a second focal length at
least partially via adjustment of a focal length associated with at
least one lens and at least one image sensor of the image capturing
apparatus comprises: circuitry configured for capturing at least
one first image at a first focal length and at least one second
image at a second focal length at least partially via moving the at
least one lens to at least one other position while the at least
one image sensor remains stationary.
336. The image capturing apparatus of claim 327, wherein circuitry
configured for capturing at least one first image at a first focal
length and at least one second image at a second focal length at
least partially via adjustment of a focal length associated with at
least one lens and at least one image sensor of the image capturing
apparatus comprises: circuitry configured for capturing at least
one first image at a first focal position relative to an image
plane of the at least one image sensor and at least one second
image at a second focal position relative to an image plane of the
at least one image sensor.
337. The image capturing apparatus of claim 327, wherein circuitry
configured for aligning the at least one first image and the at
least one second image at least partially based on correlating one
or more common features present at substantially similar pixel
locations of the at least one first image and the at least one
second image comprises: circuitry configured for mapping at least
one region of the at least one first image with at least one region
of the at least one second image.
338. The image capturing apparatus of claim 337, wherein circuitry
configured for mapping at least one region of the at least one
first image with at least one region of the at least one second
image comprises: circuitry configured for mapping at least one
out-of-focus region of the at least one first image to at least one
corresponding region of the at least one second image.
339. The image capturing apparatus of claim 327, wherein circuitry
configured for constructing at least one composite image at least
partially via blending at least one region of the at least one
first image at the first focal length with at least one region of
the at least one second image at the second focal length comprises:
circuitry configured for constructing the at least one composite
image in response to the at least one region of the at least one
first image having a sharper focus relative to the focus of the at
least one region of the at least one second image.
340. The image capturing apparatus of claim 327, wherein circuitry
configured for constructing at least one composite image at least
partially via blending at least one region of the at least one
first image at the first focal length with at least one region of
the at least one second image at the second focal length comprises:
circuitry configured for constructing the at least one composite
image to provide at least one desired focus.
341. The image capturing apparatus of claim 340, wherein circuitry
configured for constructing the at least one composite image to
provide at least one desired focus comprises: circuitry configured
for providing one or more interaction devices for receiving at
least one desired focus for constructing at least one composite
image with the at least one desired focus.
342. The image capturing apparatus of claim 327, wherein circuitry
configured for constructing at least one composite image at least
partially via blending at least one region of the at least one
first image at the first focal length with at least one region of
the at least one second image at the second focal length comprises:
circuitry configured for selecting at least one of the at least one
first image or the at least one second image to provide at least
one image with at least one desired focus at least partially based
on at least one interaction via at least one touch screen of the
image capturing apparatus.
343. The image capturing apparatus of claim 327, wherein circuitry
configured for constructing at least one composite image at least
partially via blending at least one region of the at least one
first image at the first focal length with at least one region of
the at least one second image at the second focal length comprises:
circuitry configured for constructing at least one composite image
including at least one region with at least one desired focus
selected at least partially based on at least one interaction via
at least one touch screen of the image capturing apparatus.
344. The image capturing apparatus of claim 343, wherein circuitry
configured for constructing at least one composite image including
at least one region with at least one desired focus selected at
least partially based on at least one interaction via at least one
touch screen of the image capturing apparatus comprises: circuitry
configured for constructing at least one composite image including
at least one desired in-focus region and at least one desired
out-of-focus region, the at least one composite image constructed
at least partially based on at least one interaction via at least
one touch screen of the image capturing apparatus.
345. A method for an image capturing apparatus capable of providing
region-based focus in part using focus bracketing, comprising:
capturing at least one first image at a first focal length and at
least one second image at a second focal length at least partially
via adjustment of a focal length associated with at least one lens
and at least one image sensor of the image capturing apparatus;
aligning the at least one first image and the at least one second
image at least partially based on correlating one or more common
features present at substantially similar pixel locations of the at
least one first image and the at least one second image; and
constructing at least one composite image at least partially via
blending at least one region of the at least one first image at the
first focal length with at least one region of the at least one
second image at the second focal length.
346. An image capturing apparatus capable of providing region-based
focus in part using focus bracketing, comprising: means for
capturing at least one first image at a first focal length and at
least one second image at a second focal length at least partially
via adjustment of a focal length associated with at least one lens
and at least one image sensor of the image capturing apparatus;
means for aligning the at least one first image and the at least
one second image at least partially based on correlating one or
more common features present at substantially similar pixel
locations of the at least one first image and the at least one
second image; and means for constructing at least one composite
image at least partially via blending at least one region of the at
least one first image at the first focal length with at least one
region of the at least one second image at the second focal length.
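By way of a non-limiting illustration of the aligning recited in claims 327, 345, and 346, the following Python sketch estimates the translation between two grayscale captures by phase correlation, a standard way of correlating features that appear at substantially similar pixel locations in both images. It assumes equal-shape NumPy arrays and purely translational misalignment; the function names are illustrative and are not the claimed circuitry.

    import numpy as np

    def estimate_shift(img_a: np.ndarray, img_b: np.ndarray) -> tuple[int, int]:
        """Estimate the (row, col) translation between two grayscale images
        by phase correlation on their common features."""
        cross_power = np.fft.fft2(img_a) * np.conj(np.fft.fft2(img_b))
        cross_power /= np.abs(cross_power) + 1e-12  # keep phase, drop magnitude
        correlation = np.fft.ifft2(cross_power).real
        dy, dx = np.unravel_index(np.argmax(correlation), correlation.shape)
        # Peaks past the midpoint wrap around to negative shifts.
        if dy > img_a.shape[0] // 2:
            dy -= img_a.shape[0]
        if dx > img_a.shape[1] // 2:
            dx -= img_a.shape[1]
        return int(dy), int(dx)

    def align(img_a: np.ndarray, img_b: np.ndarray) -> np.ndarray:
        """Return img_b shifted so its common features line up with img_a."""
        dy, dx = estimate_shift(img_a, img_b)
        return np.roll(img_b, shift=(dy, dx), axis=(0, 1))

Shifting the second capture by the estimated offset registers the common features before any regions are blended into a composite.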
Description
[0001] If an Application Data Sheet (ADS) has been filed on the
filing date of this application, it is incorporated by reference
herein. Any applications claimed on the ADS for priority under 35
U.S.C. §§ 119, 120, 121, or 365(c), and any and all parent,
grandparent, great-grandparent, etc. applications of such
applications, are also incorporated by reference, including any
priority claims made in those applications and any material
incorporated by reference, to the extent such subject matter is not
inconsistent herewith.
CROSS-REFERENCE TO RELATED APPLICATIONS
[0002] The present application is related to and/or claims the
benefit of the earliest available effective filing date(s) from the
following listed application(s) (the "Priority Applications"), if
any, listed below (e.g., claims earliest available priority dates
for other than provisional patent applications or claims benefits
under 35 USC § 119(e) for provisional patent applications, for
any and all parent, grandparent, great-grandparent, etc.
applications of the Priority Application(s)). In addition, the
present application is related to the "Related Application(s)," if
any, listed below.
PRIORITY APPLICATIONS
[0003] For purposes of the USPTO extra-statutory requirements, the
present application constitutes a continuation of U.S. patent
application Ser. No. 14/108,003, entitled IMAGE CORRECTION USING
INDIVIDUAL MANIPULATION OF MICROLENSES IN A MICROLENS ARRAY, naming
W. Daniel Hillis, Nathan P. Myhrvold, and Lowell L. Wood Jr. as
inventors, filed 16 Dec. 2013, which is currently co-pending or is
an application of which a currently co-pending application is
entitled to the benefit of the filing date;
[0004] For purposes of the USPTO extra-statutory requirements, the
present application constitutes a continuation of U.S. patent
application Ser. No. 12/925,848, entitled IMAGE CORRECTION USING
INDIVIDUAL MANIPULATION OF MICROLENSES IN A MICROLENS ARRAY, naming
W. Daniel Hillis, Nathan P. Myhrvold, and Lowell L. Wood Jr. as
inventors, filed 28 Oct. 2010, now issued as U.S. Pat. No.
8,643,955 on 4 Feb. 2014, which is currently co-pending or is an
application of which a currently co-pending application is entitled
to the benefit of the filing date;
[0005] For purposes of the USPTO extra-statutory requirements, the
present application constitutes a continuation of U.S. patent
application Ser. No. 12/072,497, entitled IMAGE CORRECTION USING
INDIVIDUAL MANIPULATION OF MICROLENSES IN A MICROLENS ARRAY, naming
W. Daniel Hillis, Nathan P. Myhrvold, and Lowell L. Wood Jr. as
inventors, filed 25 Feb. 2008, now issued as U.S. Pat. No.
7,826,139 on 2 Nov. 2010, and which is an application of which a
currently co-pending application is entitled to the benefit of the
filing date;
[0006] For purposes of the USPTO extra-statutory requirements, the
present application constitutes a continuation of U.S. patent
application Ser. No. 11/811,356, entitled IMAGE CORRECTION USING A
MICROLENS ARRAY AS A UNIT, naming W. Daniel Hillis, Nathan P.
Myhrvold, and Lowell L. Wood Jr. as inventors, filed 7 Jun. 2007,
now issued as U.S. Pat. No. 7,742,233 on 22 Jun. 2010, and which is
an application of which a currently co-pending application is
entitled to the benefit of the filing date;
[0007] For purposes of the USPTO extra-statutory requirements, the
present application constitutes a continuation of U.S. patent
application Ser. No. 11/804,314, entitled LENS DEFECT CORRECTION,
naming W. Daniel Hillis, Nathan P. Myhrvold, and Lowell L. Wood Jr.
as inventors, filed 15 May 2007, which is abandoned, and which is
an application of which a currently co-pending application is
entitled to the benefit of the filing date;
[0008] For purposes of the USPTO extra-statutory requirements, the
present application constitutes a continuation of U.S. patent
application Ser. No. 11/498,427, entitled IMAGE CORRECTION USING A
MICROLENS ARRAY AS A UNIT, naming W. Daniel Hillis, Nathan P.
Myhrvold, and Lowell L. Wood Jr. as inventors, filed 2 Aug. 2006,
now issued as U.S. Pat. No. 7,259,917 on 21 Aug. 2007, and which is
an application of which a currently co-pending application is
entitled to the benefit of the filing date;
[0009] For purposes of the USPTO extra-statutory requirements, the
present application constitutes a continuation of U.S. patent
application Ser. No. 11/221,350, entitled IMAGE CORRECTION USING
INDIVIDUAL MANIPULATION OF MICROLENSES IN A MICROLENS ARRAY, naming
W. Daniel Hillis, Nathan P. Myhrvold, and Lowell L. Wood Jr. as
inventors, filed 7 Sep. 2005, now issued as U.S. Pat. No. 7,417,797
on 26 Aug. 2008, and which is an application of which a currently
co-pending application is entitled to the benefit of the filing
date;
[0010] For purposes of the USPTO extra-statutory requirements, the
present application constitutes a continuation of U.S. patent
application Ser. No. 10/764,431, entitled IMAGE CORRECTION USING
INDIVIDUAL MANIPULATION OF MICROLENSES IN A MICROLENS ARRAY, naming
W. Daniel Hillis, Nathan P. Myhrvold, and Lowell L. Wood Jr. as
inventors, filed 21 Jan. 2004, now issued as U.S. Pat. No.
6,967,780 on 22 Nov. 2005, and which is an application of which a
currently co-pending application is entitled to the benefit of the
filing date;
[0011] For purposes of the USPTO extra-statutory requirements, the
present application constitutes a continuation of U.S. patent
application Ser. No. 10/764,340, entitled IMAGE CORRECTION USING A
MICROLENS ARRAY AS A UNIT, naming W. Daniel Hillis, Nathan P.
Myhrvold, and Lowell L. Wood Jr. as inventors, filed 21 Jan. 2004,
now issued as U.S. Pat. No. 7,251,078 on 31 Jul. 2007, and which is
an application of which a currently co-pending application is
entitled to the benefit of the filing date; and
[0012] For purposes of the USPTO extra-statutory requirements, the
present application constitutes a continuation of U.S. patent
application Ser. No. 10/738,626, entitled LENS DEFECT CORRECTION,
naming W. Daniel Hillis, Nathan P. Myhrvold, and Lowell L. Wood Jr.
as inventors, filed 16 Dec. 2003, now issued as U.S. Pat. No.
7,231,097 on 12 Jun. 2007, and which is an application of which a
currently co-pending application is entitled to the benefit of the
filing date.
RELATED APPLICATIONS
[0013] None.
[0014] The United States Patent Office (USPTO) has published a
notice to the effect that the USPTO's computer programs require
that patent applicants both reference a serial number and indicate
whether an application is a continuation, continuation-in-part, or
divisional of a parent application. Stephen G. Kunin, Benefit of
Prior Filed Application, USPTO Official Gazette Mar. 18, 2003. The
USPTO further has provided forms for the Application Data Sheet
which allow automatic loading of bibliographic data but which
require identification of each application as a continuation,
continuation-in-part, or divisional of a parent application. The
present Applicant Entity (hereinafter "Applicant") has provided
above a specific reference to the application(s) from which
priority is being claimed as recited by statute. Applicant
understands that the statute is unambiguous in its specific
reference language and does not require either a serial number or
any characterization, such as "continuation" or
"continuation-in-part," for claiming priority to U.S. patent
applications. Notwithstanding the foregoing, Applicant understands
that the USPTO's computer programs have certain data entry
requirements, and hence Applicant has provided designation(s) of a
relationship between the present application and its parent
application(s) as set forth above and in any ADS filed in this
application, but expressly points out that such designation(s) are
not to be construed in any way as any type of commentary and/or
admission as to whether or not the present application contains any
new matter in addition to the matter of its parent
application(s).
[0015] If the listings of applications provided above are
inconsistent with the listings provided via an ADS, it is the
intent of the Applicant to claim priority to each application that
appears in the Priority Applications section of the ADS and to each
application that appears in the Priority Applications section of
this application.
[0016] All subject matter of the Priority Applications and the
Related Applications and of any and all parent, grandparent,
great-grandparent, etc. applications of the Priority Applications
and the Related Applications, including any priority claims, is
incorporated herein by reference to the extent such subject matter
is not inconsistent herewith.
TECHNICAL FIELD
[0017] The present application relates, in general, to imaging.
SUMMARY
[0018] In one aspect, a method includes but is not limited to:
capturing a primary image with a microlens array at a primary
position, the microlens array having at least one microlens
deviation that exceeds a first tolerance from a target optical
property; determining at least one out-of-focus region of the
primary image; capturing another image with at least one microlens
of the microlens array at another position; determining a focus of
at least one region of the other image relative to a focus of the
at least one out-of-focus region of the primary image; and
constructing a composite image in response to the at least one
region of the other image having a sharper focus relative to the
focus of the at least one out-of-focus region of the primary image.
In addition to the foregoing, other method embodiments and aspects are described in the claims, drawings, and text forming a part of the present application.
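By way of a non-limiting sketch of the method of this aspect, the Python fragment below assumes a hypothetical capture_at(position) callable supplied by the camera hardware and grayscale NumPy frames. It captures a primary and a bracketed image and composites them tile by tile, keeping whichever tile has the sharper focus; the tile size and the Laplacian-variance focus measure are assumptions, not the disclosed implementation.

    import numpy as np

    def sharpness(region: np.ndarray) -> float:
        """Focus proxy: variance of a discrete Laplacian response; sharper
        regions carry more high-frequency content."""
        lap = (np.roll(region, 1, 0) + np.roll(region, -1, 0) +
               np.roll(region, 1, 1) + np.roll(region, -1, 1) - 4 * region)
        return float(lap.var())

    def bracket_and_compose(capture_at, primary_pos, other_pos, tile=32):
        """Capture at two focal positions and build a composite whose tiles
        come from whichever capture is sharper there."""
        primary = capture_at(primary_pos)  # hypothetical hardware callable
        other = capture_at(other_pos)
        composite = primary.copy()
        h, w = primary.shape
        for y in range(0, h, tile):
            for x in range(0, w, tile):
                sl = (slice(y, min(y + tile, h)), slice(x, min(x + tile, w)))
                if sharpness(other[sl]) > sharpness(primary[sl]):
                    composite[sl] = other[sl]  # replace out-of-focus tile
        return composite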
[0019] In one or more various aspects, related systems include but
are not limited to machinery and/or circuitry and/or programming
for effecting the herein referenced method aspects; the machinery
and/or circuitry and/or programming can be virtually any
combination of hardware, software, and/or firmware configured to
effect the foregoing referenced method aspects depending upon the
design choices of the system designer.
[0020] In one aspect, a system includes but is not limited to: a
photo-detector array; a microlens array having at least one
microlens deviation that exceeds a first tolerance from a target
optical property; a controller configured to position at least one
microlens of the microlens array at a primary and another position
relative to the photo-detector array and to cause an image capture
signal at the primary and the other position; and an image
construction unit configured to construct at least one out-of-focus
region of a first image captured at the primary position with a
more in-focus region of another image captured at the other
position. In addition to the foregoing, other system aspects are
described in the claims, drawings, and text forming a part of the
present application.
[0021] In one aspect, a system includes but is not limited to: a
microlens array having at least one microlens deviation that
exceeds a first tolerance from a target optical property; an
electro-mechanical system configurable to capture a primary image
with at least one microlens of the microlens array at a primary
position said electro-mechanical system including at least one of
electrical circuitry operably coupled with a transducer, electrical
circuitry having at least one discrete electrical circuit,
electrical circuitry having at least one integrated circuit,
electrical circuitry having at least one application specific
integrated circuit, electrical circuitry having a general purpose
computing device configured by a computer program, electrical
circuitry having a memory device, and electrical circuitry having a
communications device; an electro-mechanical system configurable to
capture another image with the at least one microlens of the
microlens array at another position said electro-mechanical system
including at least one of electrical circuitry operably coupled
with a transducer, electrical circuitry having at least one
discrete electrical circuit, electrical circuitry having at least
one integrated circuit, electrical circuitry having at least one
application specific integrated circuit, electrical circuitry
having a general purpose computing device configured by a computer
program, electrical circuitry having a memory device, and
electrical circuitry having a communications device; an
electro-mechanical system configurable to determine at least one
out-of-focus region of the primary image said electro-mechanical
system including at least one of electrical circuitry operably
coupled with a transducer, electrical circuitry having at least one
discrete electrical circuit, electrical circuitry having at least
one integrated circuit, electrical circuitry having at least one
application specific integrated circuit, electrical circuitry
having a general purpose computing device configured by a computer
program, electrical circuitry having a memory device, and
electrical circuitry having a communications device; an
electro-mechanical system configurable to determine a focus of at
least one region of the other image relative to a focus of the at
least one out-of-focus region of the primary image said
electro-mechanical system including at least one of electrical
circuitry operably coupled with a transducer, electrical circuitry
having at least one discrete electrical circuit, electrical
circuitry having at least one integrated circuit, electrical
circuitry having at least one application specific integrated
circuit, electrical circuitry having a general purpose computing
device configured by a computer program, electrical circuitry
having a memory device, and electrical circuitry having a
communications device; and an electro-mechanical
system configurable to construct a composite image in response to
the at least one region of the other image having a sharper focus
relative to the focus of the at least one out-of-focus region of
the primary image said electro-mechanical system including at least
one of electrical circuitry operably coupled with a transducer,
electrical circuitry having at least one discrete electrical
circuit, electrical circuitry having at least one integrated
circuit, electrical circuitry having at least one application
specific integrated circuit, electrical circuitry having a general
purpose computing device configured by a computer program,
electrical circuitry having a memory device, and electrical
circuitry having a communications device. In addition to the
foregoing, other system aspects are described in the claims,
drawings, and text forming a part of the present application.
[0022] In one aspect, a method includes but is not limited to:
capturing a primary image with a microlens array at a primary
position, said capturing effected with a photo-detector array
having an imaging surface deviation that exceeds a first tolerance
from a target surface position; determining at least one
out-of-focus region of the primary image; capturing another image
with at least one microlens of the microlens array at another
position; determining a focus of at least one region of the other
image relative to a focus of the at least one out-of-focus region
of the primary image; and constructing a composite image in
response to the at least one region of the other image having a
sharper focus relative to the focus of the at least one
out-of-focus region of the primary image. In addition to the
foregoing, other method aspects are described in the claims,
drawings, and text forming a part of the present application.
[0023] In one embodiment, a method includes but is not limited to:
capturing a primary image with a lens at a primary position, the
lens having at least one deviation that exceeds a first tolerance
from a target optical property; capturing another image with the
lens at another position; determining at least one out-of-focus
region of the primary image; determining a focus of at least one
region of the other image relative to a focus of the at least one
out-of-focus region of the primary image; and constructing a
composite image in response to the at least one region of the other
image having a sharper focus relative to the focus of the at least
one out-of-focus region of the primary image. In addition to the
foregoing, various other method embodiments are set forth and
described in the text (e.g., claims and/or detailed description)
and/or drawings of the present application.
[0024] In one or more various embodiments, related systems include
but are not limited to electro-mechanical systems (e.g., motors,
actuators, circuitry, and/or programming) for effecting the herein
referenced method embodiments); the electrical circuitry can be
virtually any combination of hardware, software, and/or firmware
configured to effect the foregoing referenced method embodiments
depending upon the design choices of the system designer.
[0025] In one embodiment, a system includes but is not limited to:
a photo-detector array; a lens having at least one deviation that
exceeds a first tolerance from a target optical property; a
controller configured to position said lens at a primary and
another position relative to said photo-detector array and to cause
an image capture signal at the primary and the other position; and
an image construction unit configured to construct at least one
out-of-focus region of a first image captured at the primary
position with a more in-focus region of another image captured at
the other position.
[0026] In one aspect, a method includes but is not limited to:
capturing a primary image with a microlens array at a primary
position, the microlens array having at least one microlens
deviation that exceeds a first tolerance from a target optical
property; determining at least one out-of-focus region of the
primary image; capturing another image with the microlens array at
another position; determining a focus of at least one region of the
other image relative to a focus of the at least one out-of-focus
region of the primary image; and constructing a composite image in
response to the at least one region of the other image having a
sharper focus relative to the focus of the at least one
out-of-focus region of the primary image. In addition to the
foregoing, other method aspects are described in the claims,
drawings, and text forming a part of the present application.
[0027] In one or more various aspects, related systems include but
are not limited to machinery and/or circuitry and/or programming
for effecting the herein referenced method aspects; the machinery
and/or circuitry and/or programming can be virtually any
combination of hardware, software, and/or firmware configured to
effect the foregoing referenced method aspects depending upon the
design choices of the system designer.
[0028] In one aspect, a system includes but is not limited to: a
microlens array having at least one microlens deviation that
exceeds a first tolerance from a target optical property; means for
capturing a primary image with a lens at a primary position; means
for determining at least one out-of-focus region of the primary
image; means for capturing another image with the lens at another
position; means for determining a focus of at least one region of
the other image relative to a focus of the at least one
out-of-focus region of the primary image; and means for
constructing a composite image in response to the at least one
region of the other image having a sharper focus relative to the
focus of the at least one out-of-focus region of the primary image.
In addition to the foregoing, other system aspects are described in
the claims, drawings, and text forming a part of the present
application.
[0029] In one aspect, a system includes but is not limited to: a
microlens array having at least one microlens deviation that
exceeds a first tolerance from a target optical property; an
electro-mechanical system configurable to capture a primary image
with the microlens array at a primary position said
electro-mechanical system including at least one of electrical
circuitry operably coupled with a transducer, electrical circuitry
having at least one discrete electrical circuit, electrical
circuitry having at least one integrated circuit, electrical
circuitry having at least one application specific integrated
circuit, electrical circuitry having a general purpose computing
device configured by a computer program, electrical circuitry
having a memory device, and electrical circuitry having a
communications device; an electro-mechanical system configurable to
capture another image with the microlens array at another position
said electro-mechanical system including at least one of electrical
circuitry operably coupled with a transducer, electrical circuitry
having at least one discrete electrical circuit, electrical
circuitry having at least one integrated circuit, electrical
circuitry having at least one application specific integrated
circuit, electrical circuitry having a general purpose computing
device configured by a computer program, electrical circuitry
having a memory device, and electrical circuitry having a
communications device; an electro-mechanical system configurable to
determine at least one out-of-focus region of the primary image
said electro-mechanical system including at least one of electrical
circuitry operably coupled with a transducer, electrical circuitry
having at least one discrete electrical circuit, electrical
circuitry having at least one integrated circuit, electrical
circuitry having at least one application specific integrated
circuit, electrical circuitry having a general purpose computing
device configured by a computer program, electrical circuitry
having a memory device, and electrical circuitry having a
communications device; an electro-mechanical system configurable to
determine a focus of at least one region of the other image
relative to a focus of the at least one out-of-focus region of the
primary image said electro-mechanical system including at least one
of electrical circuitry operably coupled with a transducer,
electrical circuitry having at least one discrete electrical
circuit, electrical circuitry having at least one integrated
circuit, electrical circuitry having at least one application
specific integrated circuit, electrical circuitry having a general
purpose computing device configured by a computer program,
electrical circuitry having a memory device, and electrical
circuitry having a communications device; and an
electro-mechanical system configurable to construct a composite
image in response to the at least one region of the other image
having a sharper focus relative to the focus of the at least one
out-of-focus region of the primary image said electro-mechanical
system including at least one of electrical circuitry operably
coupled with a transducer, electrical circuitry having at least one
discrete electrical circuit, electrical circuitry having at least
one integrated circuit, electrical circuitry having at least one
application specific integrated circuit, electrical circuitry
having a general purpose computing device configured by a computer
program, electrical circuitry having a memory device, and
electrical circuitry having a communications device. In addition to
the foregoing, other system aspects are described in the claims,
drawings, and text forming a part of the present application.
[0030] In one aspect, a method includes but is not limited to:
capturing a primary image with a microlens array at a primary
position, said capturing effected with a photo-detector array
having an imaging surface deviation that exceeds a first tolerance
from a target surface position; determining at least one
out-of-focus region of the primary image; capturing another image
with the microlens array at another position; determining a focus
of at least one region of the other image relative to a focus of
the at least one out-of-focus region of the primary image; and
constructing a composite image in response to the at least one
region of the other image having a sharper focus relative to the
focus of the at least one out-of-focus region of the primary image.
In addition to the foregoing, other method aspects are described in
the claims, drawings, and text forming a part of the present
application.
[0031] In addition to the foregoing, various other method and/or
system aspects are set forth and described in the text (e.g.,
claims and/or detailed description) and/or drawings of the present
application.
[0032] The foregoing is a summary and thus contains, by necessity,
simplifications, generalizations, and omissions of detail;
consequently, those skilled in the art will appreciate that the
summary is illustrative only and is NOT intended to be in any way
limiting. Other aspects, inventive features, and advantages of the
devices and/or processes described herein, as defined solely by the
claims, will become apparent in the non-limiting detailed
description set forth herein.
BRIEF DESCRIPTION OF THE FIGURES
[0033] FIG. 1 shows a front-plan view of image 100 of a person
(e.g., person 202 of FIG. 2) projected onto photo-detector array
102.
[0034] FIG. 2 depicts a side-plan view of lens system 200 that can
give rise to image 100 of FIG. 1.
[0035] FIG. 3 depicts a high level logic flowchart of a
process.
[0036] FIG. 4 depicts a side-plan view of the system of FIG. 2
wherein microlens array 204 has been moved in accordance with
aspects of the process shown and described in relation to FIG.
3.
[0037] FIG. 5 illustrates another side-plan view of the system of
FIG. 2 wherein microlens array 204 has been moved in accordance
with aspects of the process shown and described in relation to FIG.
3.
[0038] FIG. 1A shows a front-plan view of image 100A of a person
(e.g., person 202A of FIG. 2A) projected onto photo-detector array
102A.
[0039] FIG. 2A depicts a side-plan view of lens system 200A that
can give rise to image 100A of FIG. 1A.
[0040] FIG. 3A depicts a high level logic flowchart of a
process.
[0041] FIG. 4A depicts a side-plan view of the system of FIG. 2A
wherein lens 204A has been moved in accordance with aspects of the
process shown and described in relation to FIG. 3A.
[0042] FIG. 5A illustrates another side-plan view of the system of
FIG. 2A wherein lens 204A has been moved in accordance with aspects
of the process shown and described in relation to FIG. 3A.
[0043] FIG. 1B shows a front-plan view of image 100B of a person
(e.g., person 202B of FIG. 2B) projected onto photo-detector array
102B.
[0044] FIG. 2B depicts a side-plan view of lens system 200B that
can give rise to image 100B of FIG. 1B.
[0045] FIG. 3B depicts a high level logic flowchart of a
process.
[0046] FIG. 4B depicts a side-plan view of the system of FIG. 2B
wherein microlens array 204B has been moved in accordance with
aspects of the process shown and described in relation to FIG.
3B.
[0047] FIG. 5B illustrates another side-plan view of the system of
FIG. 2B wherein microlens array 204B has been moved in accordance
with aspects of the process shown and described in relation to FIG.
3B.
[0048] The use of the same symbols in different drawings typically
indicates similar or identical items.
DETAILED DESCRIPTION
[0049] With reference to the figures, and with reference now to
FIG. 1, shown is a front-plan view of image 100 of a person (e.g.,
person 202 of FIG. 2) projected onto photo-detector array 102.
Image 100 is shown as distorted due to defects in a microlens array
through which image 100 has been projected (e.g., microlens array
204 of lens system 200 of FIG. 2). First portion 104 of image 100
is illustrated as large and blurry, which can occur when a
microlens deviation causes first portion 104 of image 100 to come
to a focus in front of a surface of photo-detector array 102.
Second, third, and fourth portions 106 of image 100 are illustrated
as right sized, which can occur when microlenses of the microlens
array cause portions 106 to correctly focus on an imaging surface
of photo-detector array 102. Fifth portion 108 of image 100 is
shown as small and faint, which can occur when a microlens
deviation causes fifth portion 108 to come to a focus (virtual)
behind an imaging surface of photo-detector array 102. In addition,
although not expressly shown, those having skill in the art will
appreciate that various microlens defects could also cause the
image to be distorted in x-y; those having skill in the art will
also appreciate that different colored wavelengths of light can in
and of themselves focus at different positions due to differences
in refraction of the different colored wavelengths of light. In
addition, although not expressly shown herein, those having skill
in the art will appreciate that the subject matter disclosed herein
may serve to remedy misfocusings/distortions arising from defects
other than lens defects, such as, for example, defects in the
imaging surface of photo-detector array 102 and/or defects in
frames that hold microlens arrays.
[0050] Referring now to FIG. 2, depicted is a side-plan view of
lens system 200 that can give rise to image 100 of FIG. 1.
Microlens array 204 of lens system 200 is illustrated as located at
a primary position and having microlens deviations that give rise
to the five different portions of image 100 shown and described in
relation to FIG. 1. First portion 104 of image 100 is illustrated
as misfocused in front of an imaging surface of photo-detector
array 102, where the misfocusing is due to a deviation of microlens
252. Second, third, and fourth portions 106 of image 100 are
illustrated as respectively right sized and focused by microlenses
250, 254, and 258 on an imaging surface of photo-detector array
102. (It is recognized that in side plan view the head and feet of
person 202 would appear as lines; however, for sake of clarity they
are shown in profile in FIG. 2 to help orient the reader relative
to FIG. 1.) Fifth portion 108 is shown as small and faint, and
(virtually) misfocused behind an imaging surface of photo-detector
array 102, where the misfocusing is due to a deviation of microlens
256. In addition, although not expressly shown herein, those having
skill in the art will appreciate that the subject matter of FIG. 2
is also illustrative of those situations in which one or more
individual photo-detectors forming part of the imaging surface of
photo-detector array 102--rather than one or more microlenses of
microlens array 204--deviate from one or more predefined positions
by amounts such that image misfocuses/distortions arising from such
deviations are unacceptable. That is, insofar as image misfocusing
or distortion could just as easily arise from photo-detector array
102 having mispositioned photo-detectors as from microlens array
204 having mispositioned/defective lenses, the subject matter
disclosed herein may serve to remedy misfocusings/distortions
arising from defects in the imaging surface of photo-detector array
102.
[0051] Continuing to refer to FIG. 2, further shown are components
that can serve as an environment for the process shown and
described in relation to FIG. 3. Specifically, controller 208 is
depicted as controlling the position of the various microlenses
250-258 of microlens array 204 of lens system 200 (e.g., via use of
one or more feedback control subsystems). Image capture unit 206 is
illustrated as receiving image data from photo-detector array 102
and receiving control signals from controller 208. Image capture
unit 206 is shown as transmitting captured image information to
focus detection unit 210. Focus detection unit 210 is depicted as
transmitting focus data to image construction unit 212. Image
construction unit 212 is illustrated as transmitting a composite
image to image store/display unit 214.
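The data flow among these units can be mirrored, purely as an editorial illustration and not as the disclosed implementation, by a small Python structure whose member names loosely follow FIG. 2:

    from dataclasses import dataclass
    from typing import Callable
    import numpy as np

    @dataclass
    class FocusPipeline:
        """Toy mirror of FIG. 2's data flow; all names are illustrative."""
        capture: Callable[[float], np.ndarray]            # image capture unit 206
        detect_focus: Callable[[np.ndarray], np.ndarray]  # focus detection unit 210
        compose: Callable[[np.ndarray, np.ndarray, np.ndarray], np.ndarray]  # unit 212

        def run(self, primary_pos: float, other_pos: float) -> np.ndarray:
            primary = self.capture(primary_pos)
            other = self.capture(other_pos)
            focus_map = self.detect_focus(primary)  # per-region focus data
            return self.compose(primary, other, focus_map)  # composite to unit 214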
[0052] With reference now to FIG. 3, depicted is a high level logic
flowchart of a process. Method step 300 shows the start of the
process. Method step 302 depicts capturing a primary image with a
microlens array having one or more microlenses at one or more
primary positions, the microlens array having at least one
microlens deviation that exceeds a first tolerance from a target
optical property. Examples of the array having at least one
microlens deviation that exceeds a first tolerance from a target
optical property include (a) where at least one actual microlens
position exceeds a first tolerance from at least one defined
microlens position, and (b) where at least one microlens of the
microlens array has at least one focal length that exceeds a first
tolerance from a defined focal length (e.g., a microlens deviation
that would produce fifth portion 108 of image 100 at some place
behind an imaging surface of photo-detector array 102 or a
microlens deviation that would produce portion 104 at some place in
front of the imaging surface of photo-detector array 102 where the
distance in front or back of the imaging surface exceeds a defined
tolerance distance where an image captured with photo-detector
array 102 is deemed acceptable). Specific instances of the
foregoing include a microlens of the microlens array having at
least one spherical aberration that exceeds a first tolerance from
a defined spherical aberration, and a microlens of the microlens
array having at least one cylindrical aberration that exceeds a
first tolerance from a defined cylindrical aberration.
Alternatively, the microlens array may have one or more microlenses
having some combination of such defects. In one implementation,
method step 302 includes the sub-step of capturing the primary
image at an average primary focal surface location of the microlens
array (e.g., a defined focal surface of the microlens array where
an image would form if the microlens array had no microlenses
having aberrations outside a specified tolerance). In another
implementation, method step 302 includes the sub-step of capturing
the primary image with a photo-detector array at the average
primary focal surface location of the microlens array (e.g.,
positioning the microlens array such that a defined focal surface
of the microlens array coincides with an imaging surface of a
photo-detector array).
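A minimal sketch of the tolerance test described above, assuming the deviation of interest is focal length and that deviations and tolerances are expressed in millimeters (the numeric values are illustrative only):

    def exceeds_tolerance(actual_focal_mm: float,
                          target_focal_mm: float,
                          tolerance_mm: float) -> bool:
        """True when a microlens deviates from the target optical property
        by more than the first tolerance."""
        return abs(actual_focal_mm - target_focal_mm) > tolerance_mm

    # e.g., a 2.10 mm microlens against a 2.00 mm target with a 0.05 mm
    # tolerance would be flagged for bracketing:
    assert exceeds_tolerance(2.10, 2.00, 0.05)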
[0053] Referring again to FIG. 2, one specific example of method
step 302 (FIG. 3) would be controller 208 directing lens system 200
to position one or more microlenses of microlens array 204 at one
or more primary positions, and thereafter instructing image capture
unit 206 to capture an image from photo-detector array 102.
[0054] With reference again to FIG. 3, method step 304 illustrates
determining at least one out-of-focus region of the primary image
(or determining at least one focused region of the primary image).
In one implementation, method step 304 includes the sub-step of
calculating a Fourier transform of at least a part of the primary
image (e.g., sharp or in-focus images produce abrupt transitions
that often have significant high frequency components).
[0055] Referring again to FIG. 2, one specific example of method
step 304 (FIG. 3) would be focus detection unit 210 performing a
Fourier transform and subsequent analysis on at least a part of an
image captured by image capture unit 206 when the one or more
microlenses of microlens array 204 were at the one or more primary
positions. In this example, focus detection unit 210 could deem
portions of the image having significant high frequency components
as "in focus" images. As a more specific example, the Fourier
transform and analysis may be performed on one or more parts of the
image that are associated with one or more microlenses 250-258 of
microlens array 204.
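One non-limiting way to realize the Fourier-transform analysis attributed to focus detection unit 210 is to measure the share of spectral energy above a normalized frequency cutoff; the cutoff and decision threshold below are placeholder assumptions rather than disclosed values:

    import numpy as np

    def high_frequency_ratio(region: np.ndarray, cutoff: float = 0.25) -> float:
        """Fraction of the region's spectral energy above the cutoff; in-focus
        regions have abrupt transitions and thus a higher ratio."""
        spectrum = np.abs(np.fft.fftshift(np.fft.fft2(region))) ** 2
        h, w = region.shape
        fy = np.fft.fftshift(np.fft.fftfreq(h))[:, None]
        fx = np.fft.fftshift(np.fft.fftfreq(w))[None, :]
        high = np.hypot(fy, fx) > cutoff
        return float(spectrum[high].sum() / (spectrum.sum() + 1e-12))

    def deemed_in_focus(region: np.ndarray, threshold: float = 0.10) -> bool:
        # The threshold is scene-dependent; 0.10 is a placeholder.
        return high_frequency_ratio(region) > threshold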
[0056] With reference again to FIG. 3, method step 305 illustrates
mapping the at least one out-of-focus region to one or more
microlenses of the microlens array. In one implementation, method
step 305 includes the sub-steps of projecting mathematically from a
surface of a photo-detector to the microlens array; and selecting
one or more microlenses of the microlens array in response to said
projecting.
[0057] Referring again to FIG. 2, one specific example of method
step 305 (FIG. 3) would be controller 208 performing a mathematical
mapping based on (a) known geometries of microlenses 250-258
relative to photo-detector array 102 and (b) focus/out-of-focus
information received from focus detection unit 210. In one
exemplary implementation, controller 208 is pre-programmed with
knowledge of the position/orientation of photo-detector array 102
and can thus calculate the mathematical projection based on
controller 208's positioning of microlenses 250-258. In other
exemplary implementations, controller 208 additionally controls
and/or monitors the positioning of photo-detector array 102 through
one or more control and/or monitoring subsystems, and thus has
acquired--rather than pre-programmed--knowledge of the
position/orientation of photo-detector array 102 upon which to base
the calculations.
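As an idealized simplification of that projection, suppose each microlens covers a square patch of the photo-detector with a known pixel pitch; a real mapping would instead use the calibrated position/orientation knowledge that controller 208 holds. The helper below is hypothetical:

    def microlenses_for_region(y0: int, x0: int, y1: int, x1: int,
                               lens_pitch_px: int) -> set[tuple[int, int]]:
        """Project a pixel region [y0:y1, x0:x1] back onto the microlens grid,
        assuming one microlens per lens_pitch_px-square patch."""
        return {(r, c)
                for r in range(y0 // lens_pitch_px, (y1 - 1) // lens_pitch_px + 1)
                for c in range(x0 // lens_pitch_px, (x1 - 1) // lens_pitch_px + 1)}

    # An out-of-focus region spanning pixels (96:160, 32:64) under a 32-pixel
    # pitch maps to microlenses {(3, 1), (4, 1)}, which are then selected for
    # repositioning.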
[0058] With reference again to FIG. 3, method step 306 illustrates
moving at least a part of the mapped one or more microlenses of the
microlens array to one or more other positions.
[0059] Referring again to FIG. 2, one specific example of method
step 306 (FIG. 3) would be controller 208 causing a control
subsystem of lens system 200 to move one or more individual
microlenses 250-258 of microlens array 204. In one exemplary
implementation, MEMS control systems and techniques are used. In
other exemplary implementations, conventional control systems and
techniques are used to effect the movement and control of
microlenses 250-258 of microlens array 204.
[0060] With reference again to FIG. 3, method step 307 shows
capturing another image with the one or more microlenses at the
other positions to which they have been moved. In one exemplary
implementation, method step 307 includes the sub-step of capturing
the other image at the average primary focal surface location of
the microlens array with its individual microlenses at their
primary positions (e.g., one or more microlenses 250-258 of
microlens array 204 are moved, but the image is captured on about
the same surface as that upon which the primary image was captured,
such as shown and described in relation to FIGS. 4 and 5). In
another exemplary implementation, the step of capturing the other
image at a primary focal surface location of the microlens array
with its individual microlenses at their primary positions further
includes the sub-steps of moving at least a part of the microlens
array (e.g., at least one microlens) to the other position; and
capturing the other image with a photo-detector array which remains
stationary at the primary focal surface location of the one or more
microlenses at their one or more primary positions (e.g., one or
more microlenses 250-258 of microlens array 204 are moved to one or
more other positions, while photo-detector array 102 remains
stationary, such as shown and described in relation to FIGS. 4 and
5). In another exemplary implementation, the step of moving at
least a part of the microlens array to the other position further
includes the sub-step of moving the at least a part of the
microlens array to the other position within at least one distance
constrained by a predefined aberration from at least one defined
microlens position.
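The distance-constrained movement in the last sub-step amounts to clamping the requested displacement; a sketch with actuator details omitted, assuming positions expressed in millimeters:

    def constrained_move(requested_mm: float,
                         defined_mm: float,
                         max_deviation_mm: float) -> float:
        """Return the commanded microlens position, limited to within
        max_deviation_mm of the defined microlens position."""
        lo = defined_mm - max_deviation_mm
        hi = defined_mm + max_deviation_mm
        return min(max(requested_mm, lo), hi)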
[0061] Referring now to FIGS. 2, 4 and/or 5, one specific example
of method steps 306 and 307 (FIG. 3) would be controller 208 directing lens
system 200 to position one or more of microlenses 250-258 of
microlens array 204 at one or more positions other than their
primary positions, and thereafter instructing image capture unit
206 to capture an image from photo-detector array 102. FIG. 4 shows
and describes moving at least a portion of microlens array 204
forward of a primary position (e.g., such as by controller 208
causing a MEMS control system to move microlens 256 of microlens
array 204 forward relative to an imaging surface of photo-detector
array 102, or by causing microlens array 204 to be compressed such
that microlens 256 of microlens array 204 moves forward relative to
the imaging surface of photo-detector array 102). FIG. 5 shows and
describes moving at least a portion of the microlens array rearward
of the primary position (e.g., such as by controller 208 causing a
MEMS control system to move microlens 252 of microlens array 204
rearward relative to an imaging surface of photo-detector array
102, or by causing microlens array 204 to be compressed such that
microlens 252 of microlens array 204 moves rearward relative to an
imaging surface of photo-detector array 102).
[0062] With reference again to FIG. 3, method step 308 depicts
determining a focus of at least one region of the other image
relative to a focus of the at least one out-of-focus region of the
primary image. In one implementation, method step 308 includes the
sub-step of calculating a Fourier transform of at least a part of
at least one region of the other image (e.g., sharp or in-focus
images produce abrupt transitions that often have significant high
frequency components). In one implementation, the step of
calculating a Fourier transform of at least a part of at least one
region of the other image (e.g., sharp or in-focus images produce
abrupt transitions that often have significant high frequency
components) includes the sub-step of mapping at least one region of
the primary image with at least one region of the other image
(e.g., mapping an out-of-focus region of the first image to a
corresponding region of the second image). As a more specific
example, the Fourier transform and analysis may be performed on one
or more parts of the image that are associated with one or more
microlenses of the microlens array (e.g., mapping at least one
region of the primary image associated with at least one specific
microlens against the at least one region of the other image
associated with the at least one specific microlens).
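By way of illustration only, the following is a minimal sketch of such a Fourier-transform focus measure, assuming the mapped regions are grayscale numpy arrays; the function names and the high-frequency cutoff are illustrative assumptions, not part of the present subject matter.

    import numpy as np

    def high_frequency_energy(region, cutoff_fraction=0.25):
        # Sharp or in-focus regions produce abrupt transitions and thus
        # concentrate more spectral energy outside a low-frequency disc.
        spectrum = np.abs(np.fft.fftshift(np.fft.fft2(region))) ** 2
        rows, cols = region.shape
        y, x = np.ogrid[:rows, :cols]
        radius = np.hypot(y - rows / 2.0, x - cols / 2.0)
        cutoff = cutoff_fraction * min(rows, cols) / 2.0
        total = spectrum.sum()
        return spectrum[radius > cutoff].sum() / total if total > 0 else 0.0

    def sharper(candidate, reference):
        # Compare a region of the other image against the mapped region of
        # the primary image (method step 308).
        return high_frequency_energy(candidate) > high_frequency_energy(reference)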
[0063] Referring again to FIGS. 2, 4 and/or 5, one specific example
of method step 308 (FIG. 3) would be focus detection unit 210
performing a Fourier transform and subsequent analysis on at least
a part of an image captured by image capture unit 206 when at least
one microlens of microlenses 250-258 of microlens array 204 was at
the other position specified by controller 208.
[0064] With reference again to FIG. 3, method step 310 depicts
constructing a composite image in response to the at least one
region of the other image having a sharper focus relative to the
focus of the at least one out-of-focus region of the primary image.
In one implementation, the step of constructing a composite image
in response to the at least one region of the other image having a
sharper focus relative to the focus of the at least one
out-of-focus region of the primary image includes the sub-step of
replacing at least a part of the out-of-focus region of the primary
image with at least a part of the at least one region of the other
image. In yet another implementation, the step of constructing a
composite image in response to the at least one region of the other
image having a sharper focus relative to the focus of the at least
one out-of-focus region of the primary image includes the sub-step
of utilizing at least one of tiling image processing techniques,
morphing image processing techniques, blending image processing
techniques, and stitching image processing techniques.
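As an illustrative sketch of the replacement sub-step (and not of the tiling, morphing, blending, or stitching techniques themselves), the fragment below assumes registered, same-size images divided into fixed rectangular tiles and reuses the hypothetical sharper() measure sketched above.

    def composite_by_tiles(primary, other, tile=32):
        # Method step 310: keep each primary tile unless the corresponding
        # tile of the other image has a sharper focus measure.
        result = primary.copy()
        rows, cols = primary.shape
        for r in range(0, rows, tile):
            for c in range(0, cols, tile):
                window = (slice(r, min(r + tile, rows)),
                          slice(c, min(c + tile, cols)))
                if sharper(other[window], result[window]):
                    result[window] = other[window]
        return result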
[0065] In yet another implementation, the step of constructing a
composite image in response to the at least one region of the other
image having a sharper focus relative to the focus of the at least
one out-of-focus region of the primary image includes the sub-steps
of correlating a feature of the primary image with a feature of the
other image; detecting at least one of size, color, and
displacement distortion of at least one of the primary image and
the other image; correcting the detected at least one of size,
color, and displacement distortion of the at least one of the
primary image and the other image; and assembling the composite
image using the corrected distortion. In yet another
implementation, the step of constructing a composite image in
response to the at least one region of the other image having a
sharper focus relative to the focus of the at least one
out-of-focus region of the primary image includes the sub-step of
correcting for motion between the primary and the other image.
[0066] Referring again to FIGS. 2, 4 and/or 5, one specific example
of method step 310 (FIG. 3) would be image construction unit 212
creating a composite image by replacing those portions of an image
of person 202 captured at a primary position with more in-focus
portions of an image of person 202 captured by image capture unit
206 when microlens array 204 was at the other position. In one
implementation of the example, image construction unit 212 corrects
for the motion between images using conventional techniques if such
correction is desired. In another implementation of the example,
motion correction is not used.
[0067] With reference again to FIG. 3, method step 312 shows a
determination of whether an aggregate change in focus, relative to
the primary position of method step 302, has exceeded a maximum
expected aberration of at least one lens of the microlens array.
For example, even with a relatively poor quality microlens array,
there will typically be an upper manufacturing limit beyond which
microlens aberrations are not expected to go (e.g., the microlens
array has manufacturing criteria such that each microlens in the
array provides a focal length of 5 mm+/-0.05 mm).
[0068] Referring again to FIGS. 2, 4 and/or 5, one specific example
of method step 312 (FIG. 3) would be controller 208 comparing an
aggregate movement in a defined direction against a pre-stored
upper limit deviation value. In an implementation of the example
illustrated in FIG. 4, if microlens array 204 has manufacturing
criteria such as a focal length of 5 mm+/-0.05 mm, controller 208
will determine whether the total forward movement of microlens 256
of microlens array 204 is greater than 0.05 mm relative to
microlens 256's primary position. In an implementation of the
example illustrated in FIG. 5, if microlens array 204 has
manufacturing criteria such as a focal length of 5 mm+/-0.05 mm,
controller 208 will determine whether the total rearward movement
of microlens 252 of microlens array 204 is greater than 0.05 mm
relative to microlens 252's primary position.
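A minimal sketch of this comparison, assuming controller 208 tracks the aggregate signed movement of each microlens in millimetres and that the pre-stored limit mirrors the 0.05 mm example above:

    MAX_EXPECTED_ABERRATION_MM = 0.05   # pre-stored upper limit deviation

    def exceeded_tolerance(aggregate_movement_mm):
        # Method step 312: true once any microlens has been moved, in a
        # defined direction, past the expected aberration bound.
        return any(abs(total) >= MAX_EXPECTED_ABERRATION_MM
                   for total in aggregate_movement_mm.values())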
[0069] With reference again to FIG. 3, if the inquiry of method
step 312 yields a determination that the aggregate change in focus has met or exceeded the maximum expected aberration of at
least one lens of the microlens array, the process proceeds to
method step 314. Method step 314 illustrates that the current
composite image (e.g., of method step 310) is stored and/or
displayed. One specific example of method step 314 would be image
store/display unit 214 either storing or displaying the composite
image.
[0070] Method step 316 shows the end of the process.
[0071] Returning to method step 312, shown is that in the event
that the upper limit on the tolerance of at least one microlens of the microlens array has not been met or exceeded, the
process proceeds to method step 306 and continues as described
herein.
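Tying the steps together, the following sketch of the FIG. 3 loop uses hypothetical capture_image, move_microlenses, and store_or_display helpers standing in for image capture unit 206, controller 208, and image store/display unit 214, together with the illustrative functions sketched above.

    def bracketing_loop(capture_image, move_microlenses, store_or_display):
        composite = capture_image()                # step 302: primary image
        movement_mm = {}                           # aggregate movement per microlens
        while True:
            move_microlenses(movement_mm)          # step 306: other positions
            other = capture_image()                # step 306: other image
            composite = composite_by_tiles(composite, other)   # steps 308-310
            if exceeded_tolerance(movement_mm):    # step 312
                break
        store_or_display(composite)                # steps 314-316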
[0072] Referring now to FIG. 4, depicted is a side-plan view of the
system of FIG. 2 wherein microlens 256 has been moved in accordance
with aspects of the process shown and described in relation to FIG.
3. Microlens 256 of lens system 200 is illustrated as having been
moved to another position forward of its primary position which
gave rise to microlens 256's respective portion of image 100 shown
and described in relation to FIGS. 1 and 2. Specifically, microlens
256 of microlens array 204 is illustrated as repositioned such that
fifth portion 108 of image 100 is right sized and focused on an
imaging surface of photo-detector array 102 (e.g., as shown and
described in relation to method step 306). In one implementation,
fifth portion 108 of image 100 can be combined with previously
captured in focus and right sized portions 106 (e.g., FIGS. 1 and
2) to create a composite image such that the defects associated
with fifth portion 108 as shown and described in relation to FIGS.
1 and 2 are alleviated (e.g., as shown and described in relation to
method step 310). The remaining components and control aspects of
the various parts of FIG. 4 function as described elsewhere
herein.
[0073] With reference now to FIG. 5, illustrated is another
side-plan view of the system of FIG. 2 wherein microlens 252 has
been moved in accordance with aspects of the process shown and
described in relation to FIG. 3. Microlens 252 of lens system 200
is illustrated as having been moved to another position rearward of
its primary position which gave rise to microlens 252's respective
portion of image 100 shown and described in relation to FIG. 1.
Specifically, microlens 252 of microlens array 204 is illustrated
as positioned such that first portion 104 of image 100 is right
sized and focused on an imaging surface of photo-detector array 102
(e.g., as described in relation to method step 306). In one
implementation, first portion 104 of image 100 can be combined with
previously captured in focus and right sized portions 106 (e.g., FIGS. 1 and 2) and 108 (FIG. 4) to create a composite image such that the
defects associated with first portion 104 as shown and described in
relation to FIGS. 1 and 2 are alleviated (e.g., as shown and
described in relation to method step 310). The remaining components
and control aspects of the various parts of FIG. 5 function as
described elsewhere herein.
II. Lens Defect Correction
[0074] With reference to the figures, and with reference now to
FIG. 1A, shown is a front-plan view of image 100A of a person
(e.g., person 202A of FIG. 2A) projected onto photo-detector array
102A. Image 100A is shown as distorted due to defects in a lens
through which image 100A has been projected (e.g., lens 204A of
lens system 200A of FIG. 2A). First portion 104A of image 100A is
illustrated as large and blurry, which can occur when a lens defect
causes portion 104A of image 100A to come to a focus in front of a
surface of photo-detector array 102A. Second, third, and fourth
portions 106A are illustrated as right sized, which can occur when
the lens causes portions 106A of image 100A to correctly focus on
an imaging surface of photo-detector array 102A. Fifth portion 108A
is shown as small and faint, which can occur when a lens defect
causes portion 108A of image 100A to come to a focus (virtual)
behind an imaging surface of photo-detector array 102A. In
addition, although not expressly shown, those having skill in the
art will appreciate that various lens defects could also cause the
image to be distorted in x-y; those having skill in the art will
also appreciate that different colored wavelengths of light can in
and of themselves focus at different positions due to differences
in refraction of the different colored wavelengths of light.
[0075] Referring now to FIG. 2A, depicted is a side-plan view of
lens system 200A that can give rise to image 100A of FIG. 1A. Lens
204A of lens system 200A is illustrated as located at a primary
position and having defects that give rise to the five different
portions of image 100A shown and described in relation to FIG. 1A.
First portion 104A of image 100A is illustrated as focused in front
of an imaging surface of photo-detector array 102A. Second, third,
and fourth portions 106A are illustrated as right sized and focused
on an imaging surface of photo-detector array 102A. (It is
recognized that in side plan view the head and feet of person 202A
would appear as lines; however, for sake of clarity they are shown
in profile in FIG. 2A to help orient the reader relative to FIG.
1A.) Fifth portion 108A is shown as small and faint, and virtually
focused behind an imaging surface of photo-detector array 102A.
[0076] Continuing to refer to FIG. 2A, further shown are components
that can serve as the environment for the process shown and
described in relation to FIG. 3A.
[0077] Specifically, controller 208A is depicted as controlling the
position of lens 204A of lens system 200A (e.g., via use of a
feedback control subsystem). Image capture unit 206A is illustrated
as receiving image data from photo-detector 102A and receiving
control signals from controller 208A. Image capture unit 206A is
shown as transmitting captured image information to focus detection
unit 210A. Focus detection unit 210A is depicted as transmitting
focus data to image construction unit 212A. Image construction unit
212A is illustrated as transmitting a composite image to image
store/display unit 214A.
[0078] With reference now to FIG. 3A, depicted is a high level
logic flowchart of a process. Method step 300A shows the start of
the process. Method step 302A depicts capturing a primary image
with a lens at a primary position, the lens having at least one
deviation that exceeds a first tolerance from a target optical
property. One example of the lens having at least one deviation
that exceeds a first tolerance from a target optical property would
be where the lens has at least one focal length that exceeds a
first tolerance from a defined focal length (e.g., a defect that
would produce portion 108A of image 100A at some place behind an
imaging surface of photo-detector 102A or a defect that would
produce portion 104A at some place in front of the imaging surface
of photo-detector array 102A where the distance in front or back of
the imaging surface exceeds a defined tolerance distance where an
image captured with the photo-detector array 102A is deemed
acceptable). For instance, the lens may have at least one spherical
aberration that exceeds a first tolerance from a defined spherical
aberration, or the lens may have at least one cylindrical
aberration that exceeds a first tolerance from a defined
cylindrical aberration. Alternatively, the lens may have some
combination of such defects. In one implementation, method step
302A includes the sub-step of capturing the primary image at a
primary focal surface location of the lens (e.g., a defined focal
surface of the lens where an image would form if the lens had no
aberrations). In another implementation, method step 302A includes
the sub-step of capturing the primary image with a photo-detector
array at the primary focal surface location of the lens (e.g.,
positioning the lens such that a defined focal surface of the lens
coincides with an imaging surface of a photo-detector array).
[0079] Referring again to FIG. 2A, one specific example of method
step 302A (FIG. 3A) would be controller 208A directing lens system
200A to position lens 204A at a primary position, and thereafter
instructing image capture unit 206A to capture an image from
photo-detector 102A.
[0080] With reference again to FIG. 3A, method step 304A
illustrates determining at least one out-of-focus region of the
primary image (or determining at least one focused region of the
primary image). In one implementation, method step 304A includes
the sub-step of calculating a Fourier transform of at least a part
of the primary image (e.g., sharp, or in-focus images produce
abrupt transitions that often have significant high frequency
components).
[0081] Referring again to FIG. 2A, one specific example of method
step 304A (FIG. 3A) would be focus detection unit 210A performing a
Fourier transform and subsequent analysis on at least a part of an
image captured by image capture unit 206A when lens 204A was at the
primary position. In this example, focus detection unit 210A could
deem portions of the image having significant high frequency
components as "in focus" images.
[0082] With reference again to FIG. 3A, method step 306A shows
capturing another image with the lens at another position. In one
implementation, method step 306A includes the sub-step of capturing
the other image at the primary focal surface location of the lens
at the primary position (e.g., lens 204A is moved to another
position, while photo-detector 102A remains stationary, such as
shown and described in relation to FIGS. 4A and 5A). In another
implementation, the step of capturing the other image at a primary
focal surface location of the lens at the primary position further
includes the sub-step of moving at least a part of the lens to the
other position; and capturing the other image with a photo-detector
array at the primary focal surface location of the lens at the
primary position. In another implementation, the step of moving at
least a part of the lens to the other position further includes the
sub-step of moving the at least a part of the lens to the other
position within at least one distance constrained by the first
tolerance from the target optical property. In another
implementation, the step of moving at least a part of the lens to
the other position further includes the sub-step of moving an
intermediary lens. In another implementation, the step of moving at
least a part of the lens to the other position further includes the
sub-step of distorting the lens such that the at least a part of
the lens resides at the other position (e.g., a part of lens 204A
is moved to another position, such as might happen if lens 204A
were to be compressed laterally in a controlled manner, while
photo-detector 102A remains stationary, such as shown and described
in relation to FIGS. 4A and 5A).
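A minimal sketch of constraining each such repositioning to the first tolerance, assuming millimetre units and a hypothetical low-level move_lens_actuator() command:

    FIRST_TOLERANCE_MM = 0.05

    def move_lens_within_tolerance(requested_mm, aggregate_mm):
        # Clamp the requested move so the lens never leaves the band of
        # positions allowed by the first tolerance from the target optical
        # property.
        allowed = max(-FIRST_TOLERANCE_MM - aggregate_mm,
                      min(FIRST_TOLERANCE_MM - aggregate_mm, requested_mm))
        move_lens_actuator(allowed)      # hypothetical hardware command
        return aggregate_mm + allowed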
[0083] Referring now to FIGS. 2A, 4A and/or 5A, one specific
example of method step 306A (FIG. 3A) would be controller 208A
directing lens system 200A to position lens 204A at a position
other than the primary position and thereafter instructing image
capture unit 206A to capture an image from photo-detector 102A.
FIG. 4A shows and describes moving at least a portion of the lens
forward of the primary position (e.g., such as by controller 208A
moving lens 204A forward, or causing lens 204A to be compressed
such that a part of lens 204A moves forward relative to an imaging
surface of photo-detector 102A). FIG. 5A shows and describes moving
at least a portion of the lens rearward of the primary position
(e.g., such as by controller 208A moving lens 204A rearward, or
causing lens 204A to be compressed such that a part of lens 204A
moves rearward relative to an imaging surface of photo-detector
102A).
[0084] With reference again to FIG. 3A, method step 308A depicts
determining a focus of at least one region of the other image
relative to a focus of the at least one out-of-focus region of the
primary image. In one implementation, method step 308A includes the
sub-step of calculating a Fourier transform of at least a part of
at least one region of the other image (e.g., sharp or in-focus
images produce abrupt transitions that often have significant high
frequency components). In one implementation, the step of
calculating a Fourier transform of at least a part of at least one
region of the other image (e.g., sharp or in-focus images produce
abrupt transitions that often have significant high frequency
components) includes the sub-step of mapping at least one region of
the primary image with at least one region of the other image
(e.g., mapping an out-of-focus region of the first image to a
corresponding region of the second image).
[0085] Referring again to FIGS. 2A, 4A and/or 5A, one specific
example of method step 308A (FIG. 3A) would be focus detection unit
210A performing a Fourier transform and subsequent analysis on at
least a part of an image captured by image capture unit 206A when
lens 204A was at the other position specified by controller
208A.
[0086] With reference again to FIG. 3A, method step 310A depicts
constructing a composite image in response to the at least one
region of the other image having a sharper focus relative to the
focus of the at least one out-of-focus region of the primary image.
In one implementation, the step of constructing a composite image
in response to the at least one region of the other image having a
sharper focus relative to the focus of the at least one
out-of-focus region of the primary image includes the sub-step of
replacing at least a part of the out-of-focus region of the primary
image with at least a part of the at least one region of the other
image. In yet another implementation, the step of
constructing a composite image in response to the at least one
region of the other image having a sharper focus relative to the
focus of the at least one out-of-focus region of the primary image
includes the sub-step of utilizing at least one of tiling image
processing techniques, morphing image processing techniques,
blending image processing techniques, and stitching image
processing techniques.
[0087] In yet another implementation, the step of constructing a
composite image in response to the at least one region of the other
image having a sharper focus relative to the focus of the at least
one out-of-focus region of the primary image includes the sub-steps
of correlating a feature of the primary image with a feature of the
other image; detecting at least one of size, color, and
displacement distortion of at least one of the primary image and
the other image; correcting the detected at least one of size,
color, and displacement distortion of the at least one of the
primary image and the other image; and assembling the composite image using the corrected distortion. In yet another implementation, the
step of constructing a composite image in response to the at least
one region of the other image having a sharper focus relative to
the focus of the at least one out-of-focus region of the primary
image includes the sub-step of correcting for motion between the
primary and the other image.
[0088] Referring again to FIGS. 2A, 4A and/or 5A, one specific
example of method step 310A (FIG. 3A) would be image construction
unit 212A creating a composite image by replacing those portions of
an image of person 202A captured at a primary position with more
in-focus portions of an image of person 202A captured by image
capture unit 206A when lens 204A was at the other position. In one
implementation of the example, image construction unit 212A
corrects for the motion between images using conventional
techniques if such correction is desired. In another implementation
of the example, motion correction is not used.
[0089] With reference again to FIG. 3A, method step 312A shows a
determination of whether an aggregate change in focus, relative to
the primary position of method step 302A, has exceeded a maximum
expected deviation of a lens. For example, even with a relatively
poor quality lens, there will typically be an upper manufacturing
limit beyond which lens defects are not expected to go (e.g., the
lens has manufacturing criteria such as a focal length of 5
mm+/-0.05 mm).
[0090] Referring again to FIGS. 2A, 4A and/or 5A, one specific
example of method step 312A (FIG. 3A) would be controller 208A
comparing an aggregate movement in a defined direction against a
pre-stored upper limit deviation value. In an implementation of the
example illustrated in FIG. 4A, if lens 204A has manufacturing
criteria such as a focal length of 5 mm+/-0.05 mm, controller 208A
will determine whether the total forward movement of the lens is
greater than 0.05 mm relative to the primary position. In an
implementation of the example illustrated in FIG. 5A, if lens 204A
has manufacturing criteria such as a focal length of 5 mm+/-0.05
mm, controller 208A will determine whether the total rearward
movement of the lens is greater than 0.05 mm relative to the
primary position.
[0091] With reference again to FIG. 3A, if the inquiry of method
step 312A yields a determination that the aggregate change in focus has met or exceeded the maximum expected deviation of the
lens, the process proceeds to method step 314A. Method step 314A
illustrates that the current composite image (e.g., of method step
310A) is stored and/or displayed. One specific example of method
step 314A would be store/display unit 214A either storing or
displaying the composite image.
[0092] Method step 316A shows the end of the process.
[0093] Returning to method step 312A, shown is that in the event
that the upper limit on lens tolerance has not been met or
exceeded, the process proceeds to method step 306A and continues as
described herein.
[0094] Referring now to FIG. 4A, depicted is a side-plan view of
the system of FIG. 2A wherein lens 204A has been moved in
accordance with aspects of the process shown and described in
relation to FIG. 3A. Lens 204A of lens system 200A is illustrated
as having been moved to another position forward of the primary
position which gave rise to the five different portions of image
100A shown and described in relation to FIGS. 1A and 2A.
Specifically, lens 204A of lens system 200A is illustrated as
repositioned such that fifth portion 108A of image 100A is right
sized and focused on an imaging surface of photo-detector array
102A (e.g., as shown and described in relation to method step
306A). In one implementation, fifth portion 108A of image 100A can
be combined with previously captured in focus and right sized
portions 106A (e.g., FIGS. 1A and 2A) to create a composite image
such that the defects associated with fifth portion 108A as shown
and described in relation to FIGS. 1A and 2A are alleviated (e.g.,
as shown and described in relation to method step 310A). The
remaining components and control aspects of the various parts of
FIG. 4A function as described elsewhere herein.
[0095] With reference now to FIG. 5A, illustrated is another
side-plan view of the system of FIG. 2A wherein lens 204A has been
moved in accordance with aspects of the process shown and described
in relation to FIG. 3A. Lens 204A of lens system 200A is
illustrated as having been moved to another position rearward of
the primary position which gave rise to the five different portions
of image 100A shown and described in relation to FIG. 1A.
Specifically, lens 204A of lens system 200A is illustrated as
positioned such that first portion 104A of image 100A is right
sized and focused on an imaging surface of photo-detector array
102A (e.g., as described in relation to method step 306A). In one
implementation, first portion 104A of image 100A can be combined
with previously captured in focus and right sized portions 106A,
108A (e.g., FIGS. 1A, 2A, and 4A) to create a composite image such
that the defects associated with first portion 104A as shown and
described in relation to FIGS. 1A and 2A are alleviated (e.g., as
shown and described in relation to method step 310A). The remaining
components and control aspects of the various parts of FIG. 5A
function as described elsewhere herein.
III. Image Correction Using a Microlens Array as a Unit
[0096] With reference to the figures, and with reference now to
FIG. 1B, shown is a front-plan view of image 100B of a person
(e.g., person 202B of FIG. 2B) projected onto photo-detector array
102B. Image 100B is shown as distorted due to defects in a
microlens array through which image 100B has been projected (e.g.,
microlens array 204B of lens system 200B of FIG. 2B). First portion
104B of image 100B is illustrated as large and blurry, which can
occur when a microlens deviation causes first portion 104B of image
100B to come to a focus in front of an imaging surface of
photo-detector array 102B. Second, third, and fourth portions 106B
of image 100B are illustrated as right sized, which can occur when
microlenses of the microlens array cause portions 106B to correctly
focus on an imaging surface of photo-detector array 102B. Fifth
portion 108B of image 100B is shown as small and faint, which can
occur when a microlens deviation causes fifth portion 108B to come
to a focus (virtual) behind an imaging surface of photo-detector
array 102B. In addition, although not expressly shown, those having
skill in the art will appreciate that various microlens defects
could also cause the image to be distorted in x-y; those having
skill in the art will also appreciate that different colored
wavelengths of light can in and of themselves focus at different
positions due to differences in refraction of the different colored
wavelengths of light. In addition, although not expressly shown
herein, those having skill in the art will appreciate that the
subject matter disclosed herein may serve to remedy
misfocusings/distortions arising from defects other than lens
defects, such as, for example, defects in the imaging surface of
photo-detector array 102B and/or defects in frames that hold
microlens arrays.
[0097] Referring now to FIG. 2B, depicted is a side-plan view of
lens system 200B that can give rise to image 100B of FIG. 1B.
Microlens array 204B of lens system 200B is illustrated as located
at a primary position and having microlens deviations that give
rise to the five different portions of image 100B shown and
described in relation to FIG. 1B. First portion 104B of image 100B
is illustrated as misfocused in front of an imaging surface of
photo-detector array 102B, where the misfocus is due to a deviation
of microlens 252B. Second, third, and fourth portions 106B of image
100B are illustrated as respectively right sized and focused by
microlenses 250B, 254B, and 258B on an imaging surface of
photo-detector array 102B. (It is recognized that in side plan view
the head and feet of person 202B would appear as lines; however,
for sake of clarity they are shown in profile in FIG. 2B to help
orient the reader relative to FIG. 1B.) Fifth portion 108B is shown
as small and faint, and virtually misfocused behind an imaging
surface of photo-detector array 102B, where the misfocus is due to
a deviation of microlens 256B. In addition, although not expressly
shown herein, those having skill in the art will appreciate that
the subject matter of FIG. 2B is also illustrative of those
situations in which one or more individual photo-detectors forming
part of the imaging surface of photo-detector array 102B--rather
than one or more microlenses of microlens array 204B--deviate from
one or more predefined positions by amounts such that image
misfocuses/distortions arising from such deviations are
unacceptable. That is, insofar as image misfocusing and/or
distortion could just as easily arise from photo-detector array
102B having mispositioned photo-detectors as from microlens array
204B having mispositioned/defective lenses, the subject matter
disclosed herein may serve to remedy misfocusings/distortions
arising from defects in the imaging surface of photo-detector array
102B.
[0098] Continuing to refer to FIG. 2B, further shown are components
that can serve as an environment for the process shown and
described in relation to FIG. 3B. Specifically, controller 208B is
depicted as controlling the position of microlens array 204B of
lens system 200B (e.g., via use of a feedback control subsystem).
Image capture unit 206B is illustrated as receiving image data from
photo-detector array 102B and receiving control signals from
controller 208B. Image capture unit 206B is shown as transmitting
captured image information to focus detection unit 210B. Focus
detection unit 210B is depicted as transmitting focus data to image
construction unit 212B. Image construction unit 212B is illustrated
as transmitting a composite image to image store/display unit
214B.
[0099] With reference now to FIG. 3B, depicted is a high level
logic flowchart of a process. Method step 300B shows the start of
the process. Method step 302B depicts capturing a primary image
with a microlens array at a primary position, the microlens array
having at least one microlens deviation that exceeds a first
tolerance from a target optical property. Examples of the array
having at least one microlens deviation that exceeds a first
tolerance from a target optical property include (a) where at least
one microlens position exceeds a first tolerance from at least one
defined microlens position, and (b) where at least one microlens of
the microlens array has at least one focal length that exceeds a
first tolerance from a defined focal length (e.g., a microlens
deviation that would produce portion 108B of image 100B at some
place behind an imaging surface of photo-detector array 102B or a
microlens deviation that would produce portion 104B at some place
in front of the imaging surface of photo-detector array 102B where
the distance in front or back of the imaging surface exceeds a
defined tolerance distance where an image captured with the
photo-detector array 102B is deemed acceptable). Specific instances
of the foregoing include a microlens of the microlens array having
at least one spherical aberration that exceeds a first tolerance
from a defined spherical aberration, and a microlens of the
microlens array having at least one cylindrical aberration that
exceeds a first tolerance from a defined cylindrical aberration.
Alternatively, the microlens array may have some combination of
microlenses having such defects. In one implementation, method step
302B includes the sub-step of capturing the primary image at an
average primary focal surface location of the microlens array
(e.g., a defined focal surface of the microlens array where an
image would form if the microlens array had no microlenses having
aberrations outside a specified tolerance). In another
implementation, method step 302B includes the sub-step of capturing
the primary image with a photo-detector array at the average
primary focal surface location of the microlens array (e.g.,
positioning the microlens array such that a defined focal surface
of the microlens array coincides with an imaging surface of a photo-detector
array).
[0100] Referring again to FIG. 2B, one specific example of method
step 302B (FIG. 3B) would be controller 208B directing lens system
200B to position microlens array 204B at a primary position, and
thereafter instructing image capture unit 206B to capture an image
from photo-detector array 102B.
[0101] With reference again to FIG. 3B, method step 304B
illustrates determining at least one out-of-focus region of the
primary image (or determining at least one focused region of the
primary image). In one implementation, method step 304B includes
the sub-step of calculating a Fourier transform of at least a part
of the primary image (e.g., sharp or in-focus images produce
abrupt transitions that often have significant high frequency
components).
[0102] Referring again to FIG. 2B, one specific example of method
step 304B (FIG. 3B) would be focus detection unit 210B performing a
Fourier transform and subsequent analysis on at least a part of an
image captured by image capture unit 206B when microlens array 204B was at the
primary position. In this example, focus detection unit 210B could
deem portions of the image having significant high frequency
components as "in focus" images. As a more specific example, the
Fourier transform and analysis may be performed on one or more
parts of the image that are associated with one or more microlenses
250B-258B of microlens array 204B.
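As an illustrative sketch of restricting the analysis to the image parts associated with individual microlenses, the fragment below assumes each microlens of microlens array 204B maps to a known rectangular window on photo-detector array 102B; the window table and the high_frequency_energy() measure reuse assumptions from the sketches above.

    MICROLENS_WINDOWS = {
        "250B": (slice(0, 64), slice(0, 64)),
        "252B": (slice(0, 64), slice(64, 128)),
        # ... one window per microlens of microlens array 204B
    }

    def per_microlens_focus(image):
        # Focus measure for the image part behind each microlens.
        return {name: high_frequency_energy(image[window])
                for name, window in MICROLENS_WINDOWS.items()}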
[0103] With reference again to FIG. 3B, method step 306B shows
capturing another image with the microlens array at another
position. In one implementation, method step 306B includes the
sub-step of capturing the other image at the average primary focal
surface location of the microlens array at the primary position. In
another implementation, the step of capturing the other image at a
primary focal surface location of the microlens array at the
primary position further includes the sub-step of moving at least a
part of the microlens array to the other position; and capturing
the other image with a photo-detector array at the primary focal
surface location of the microlens array at the primary position (e.g.,
microlens array 204B is moved to another position, while
photo-detector array 102B remains stationary, such as shown and
described in relation to FIGS. 4B and 5B). In another
implementation, the step of moving at least a part of the microlens
array to the other position further includes the sub-step of moving
the at least a part of the microlens array to the other position
within at least one distance constrained by a predefined variation
from at least one defined microlens position. In another
implementation, the step of moving at least a part of the microlens
array to the other position further includes the sub-step of moving
an intermediary lens. In another implementation, the step of moving
at least a part of the microlens array to the other position
further includes the sub-step of distorting the microlens array
such that the at least a part of the microlens array resides at the
other position (e.g., a part of microlens array 204B is moved to
another position, such as might happen if microlens array 204B were
to be compressed laterally in a controlled manner, while
photo-detector array 102B remains stationary, such as shown and
described in relation to FIGS. 4B and 5B).
[0104] Referring now to FIGS. 2B, 4B and/or 5B, one specific
example of method step 306B (FIG. 3B) would be controller 208B
directing lens system 200B to position microlens array 204B at a
position other than the primary position and thereafter instructing
image capture unit 206B to capture an image from photo-detector
array 102B. FIG. 4B shows and describes moving at least a portion
of microlens array 204B forward of the primary position (e.g., such
as by controller 208B moving microlens array 204B forward, or
causing microlens array 204B to be compressed such that a part of
microlens array 204B moves forward relative to an imaging surface
of photo-detector array 102B). FIG. 5B shows and describes moving
at least a portion of the microlens array rearward of the primary
position (e.g., such as by controller 208B moving microlens array
204B rearward, or causing microlens array 204B to be compressed
such that a part of microlens array 204B moves rearward relative to
an imaging surface of photo-detector array 102B).
[0105] With reference again to FIG. 3B, method step 308B depicts
determining a focus of at least one region of the other image
relative to a focus of the at least one out-of-focus region of the
primary image. In one implementation, method step 308B includes the
sub-step of calculating a Fourier transform of at least a part of
at least one region of the other image (e.g., sharp or in-focus
images produce abrupt transitions that often have significant high
frequency components). In one implementation, the step of
calculating a Fourier transform of at least a part of at least one
region of the other image (e.g., sharp or in-focus images produce
abrupt transitions that often have significant high frequency
components) includes the sub-step of mapping at least one region of
the primary image with at least one region of the other image
(e.g., mapping an out-of-focus region of the first image to a
corresponding region of the second image). As a more specific
example, the Fourier transform and analysis may be performed on one
or more parts of the image that are associated with one or more
microlenses of the microlens array (e.g., mapping at least one
region of the primary image associated with at least one specific
microlens against the at least one region of the other image
associated with the at least one specific microlens).
[0106] Referring again to FIGS. 2B, 4B and/or 5B, one specific
example of method step 308B (FIG. 3B) would be focus detection unit
210B performing a Fourier transform and subsequent analysis on at
least a part of an image captured by image capture unit 206B when
microlens array 204B was at the other position specified by
controller 208B.
[0107] With reference again to FIG. 3B, method step 310B depicts
constructing a composite image in response to the at least one
region of the other image having a sharper focus relative to the
focus of the at least one out-of-focus region of the primary image.
In one implementation, the step of constructing a composite image
in response to the at least one region of the other image having a
sharper focus relative to the focus of the at least one
out-of-focus region of the primary image includes the sub-step of
replacing at least a part of the out-of-focus region of the primary
image with at least a part of the at least one region of the other
image. In yet another implementation, the step of constructing a
composite image in response to the at least one region of the other
image having a sharper focus relative to the focus of the at least
one out-of-focus region of the primary image includes the sub-step
of utilizing at least one of tiling image processing techniques,
morphing image processing techniques, blending image processing
techniques, and stitching image processing techniques.
[0108] In yet another implementation, the step of constructing a
composite image in response to the at least one region of the other
image having a sharper focus relative to the focus of the at least
one out-of-focus region of the primary image includes the sub-steps
of correlating a feature of the primary image with a feature of the
other image; detecting at least one of size, color, and
displacement distortion of at least one of the primary image and
the other image; correcting the detected at least one of size,
color, and displacement distortion of the at least one of the
primary image and the other image; and assembling the composite
image using the corrected distortion. In yet another
implementation, the step of constructing a composite image in
response to the at least one region of the other image having a
sharper focus relative to the focus of the at least one
out-of-focus region of the primary image includes the sub-step of
correcting for motion between the primary and the other image.
[0109] Referring again to FIGS. 2B, 4B and/or 5B, one specific
example of method step 310B (FIG. 3B) would be image construction
unit 212B creating a composite image by replacing those portions of
an image of person 202B captured at a primary position with more
in-focus portions of an image of person 202B captured by image
capture unit 206B when microlens array 204B was at the other
position. In one implementation of the example, image construction
unit 212B corrects for the motion between images using conventional
techniques if such correction is desired. In another implementation
of the example, motion correction is not used.
[0110] With reference again to FIG. 3B, method step 312B shows a
determination of whether an aggregate change in position, relative
to the primary position of method step 302B, has exceeded a maximum
expected deviation of the microlens array. For example, even with a
relatively poor quality microlens array, there will typically be an
upper manufacturing limit beyond which microlens deviations are not
expected to go (e.g., the microlens array has manufacturing
criteria such that each microlens in the array provides a focal
length of 5 mm+/-0.05 mm).
[0111] Referring again to FIGS. 2B, 4B and/or 5B, one specific
example of method step 312B (FIG. 3B) would be controller 208B
comparing an aggregate movement in a defined direction against a
pre-stored upper limit deviation value. In an implementation of the
example illustrated in FIG. 4B, if microlens array 204B has
manufacturing criteria such as a focal length of 5 mm+/-0.05 mm,
controller 208B will determine whether the total forward movement
of the microlens array is greater than 0.05 mm relative to the
primary position. In an implementation of the example illustrated
in FIG. 5B, if microlens array 204B has manufacturing criteria such
as a focal length of 5 mm+/-0.05 mm, controller 208B will determine
whether the total rearward movement of microlens array 204B is
greater than 0.05 mm relative to the primary position.
[0112] With reference again to FIG. 3B, if the inquiry of method
step 312B yields a determination that the aggregate change in
position has met or exceeded the maximum expected deviation of the
microlens array, the process proceeds to method step 314B. Method
step 314B illustrates that the current composite image (e.g., of
method step 310B) is stored and/or displayed. One specific example
of method step 314B would be image store/display unit 214B either
storing or displaying the composite image.
[0113] Method step 316B shows the end of the process.
[0114] Returning to method step 312B, shown is that in the event
that the upper limit on microlens array tolerance has not been met
or exceeded, the process proceeds to method step 306B and continues
as described herein.
[0115] Referring now to FIG. 4B, depicted is a side-plan view of
the system of FIG. 2B wherein microlens array 204B has been moved
in accordance with aspects of the process shown and described in
relation to FIG. 3B. Microlens array 204B of lens system 200B is
illustrated as having been moved to another position forward of the
primary position which gave rise to the five different portions of
image 100B shown and described in relation to FIGS. 1B and 2B.
Specifically, microlens array 204B of lens system 200B is
illustrated as repositioned such that fifth portion 108B of image
100B is right sized and focused on an imaging surface of
photo-detector array 102B (e.g., as shown and described in relation
to method step 306B). In one implementation, fifth portion 108B of
image 100B can be combined with previously captured in focus and
right sized portions 106B (e.g., FIGS. 1B and 2B) to create a
composite image such that the defects associated with fifth portion
108B as shown and described in relation to FIGS. 1B and 2B are
alleviated (e.g., as shown and described in relation to method step
310B). The remaining components and control aspects of the various
parts of FIG. 4B function as described elsewhere herein.
[0116] With reference now to FIG. 5B, illustrated is another
side-plan view of the system of FIG. 2B wherein microlens array
204B has been moved in accordance with aspects of the process shown
and described in relation to FIG. 3B. Microlens array 204B of lens
system 200B is illustrated as having been moved to another position
rearward of the primary position which gave rise to the five
different portions of image 100B shown and described in relation to
FIG. 1B. Specifically, microlens array 204B of lens system 200B is
illustrated as positioned such that first portion 104B of image
100B is right sized and focused on an imaging surface of
photo-detector array 102B (e.g., as described in relation to method
step 306B). In one implementation, first portion 104B of image 100B
can be combined with previously captured in focus and right sized
portions 106B, 108B (e.g., FIGS. 1B, 2B, and 4B) to create a
composite image such that the defects associated with first portion
104B as shown and described in relation to FIGS. 1B and 2B are
alleviated (e.g., as shown and described in relation to method step
310B). The remaining components and control aspects of the various
parts of FIG. 5B function as described elsewhere herein.
[0117] Those skilled in the art will appreciate that the foregoing
specific exemplary processes and/or machines and/or technologies
are representative of more general processes and/or machines and/or
technologies taught elsewhere herein, such as in the claims filed
herewith and/or elsewhere in the present application.
[0118] Those having skill in the art will recognize that the state
of the art has progressed to the point where there is little
distinction left between hardware, software, and/or firmware
implementations of aspects of systems; the use of hardware,
software, and/or firmware is generally (but not always, in that in
certain contexts the choice between hardware and software can
become significant) a design choice representing cost vs.
efficiency tradeoffs. Those having skill in the art will appreciate
that there are various vehicles by which processes and/or systems
and/or other technologies described herein can be effected (e.g.,
hardware, software, and/or firmware), and that the preferred
vehicle will vary with the context in which the processes and/or
systems and/or other technologies are deployed. For example, if an
implementer determines that speed and accuracy are paramount, the
implementer may opt for a mainly hardware and/or firmware vehicle;
alternatively, if flexibility is paramount, the implementer may opt
for a mainly software implementation; or, yet again alternatively,
the implementer may opt for some combination of hardware, software,
and/or firmware. Hence, there are several possible vehicles by
which the processes and/or devices and/or other technologies
described herein may be effected, none of which is inherently
superior to the other in that any vehicle to be utilized is a
choice dependent upon the context in which the vehicle will be
deployed and the specific concerns (e.g., speed, flexibility, or
predictability) of the implementer, any of which may vary. Those
skilled in the art will recognize that optical aspects of
implementations will typically employ optically-oriented hardware,
software, and/or firmware.
[0119] In some implementations described herein, logic and similar
implementations may include software or other control structures.
Electronic circuitry, for example, may have one or more paths of
electrical current constructed and arranged to implement various
functions as described herein. In some implementations, one or more
media may be configured to bear a device-detectable implementation
when such media hold or transmit device-detectable instructions
operable to perform as described herein. In some variants, for
example, implementations may include an update or modification of
existing software or firmware, or of gate arrays or programmable
hardware, such as by performing a reception of or a transmission of
one or more instructions in relation to one or more operations
described herein. Alternatively or additionally, in some variants,
an implementation may include special-purpose hardware, software,
firmware components, and/or general-purpose components executing or
otherwise invoking special-purpose components. Specifications or
other implementations may be transmitted by one or more instances
of tangible transmission media as described herein, optionally by
packet transmission or otherwise by passing through distributed
media at various times.
[0120] Alternatively or additionally, implementations may include
executing a special-purpose instruction sequence or invoking
circuitry for enabling, triggering, coordinating, requesting, or
otherwise causing one or more occurrences of virtually any
functional operations described herein. In some variants,
operational or other logical descriptions herein may be expressed
as source code and compiled or otherwise invoked as an executable
instruction sequence. In some contexts, for example,
implementations may be provided, in whole or in part, by source
code, such as C++, or other code sequences. In other
implementations, source or other code implementation, using
commercially available techniques and/or techniques known in the art, may be compiled/implemented/translated/converted into a high-level
descriptor language (e.g., initially implementing described
technologies in C or C++ programming language and thereafter
converting the programming language implementation into a
logic-synthesizable language implementation, a hardware description
language implementation, a hardware design simulation
implementation, and/or other such similar mode(s) of expression).
For example, some or all of a logical expression (e.g., computer
programming language implementation) may be manifested as a
Verilog-type hardware description (e.g., via Hardware Description
Language (HDL) and/or Very High Speed Integrated Circuit Hardware
Description Language (VHDL)) or other circuitry model which may then
be used to create a physical implementation having hardware (e.g.,
an Application Specific Integrated Circuit). Those skilled in the
art will recognize how to obtain, configure, and optimize suitable
transmission or computational elements, material supplies,
actuators, or other structures in light of these teachings.
[0121] The foregoing detailed description has set forth various
embodiments of the devices and/or processes via the use of block
diagrams, flowcharts, and/or examples. Insofar as such block
diagrams, flowcharts, and/or examples contain one or more functions
and/or operations, it will be understood by those within the art
that each function and/or operation within such block diagrams,
flowcharts, or examples can be implemented, individually and/or
collectively, by a wide range of hardware, software, firmware, or
virtually any combination thereof. In one embodiment, several
portions of the subject matter described herein may be implemented
via Application Specific Integrated Circuits (ASICs), Field
Programmable Gate Arrays (FPGAs), digital signal processors (DSPs),
or other integrated formats. However, those skilled in the art will
recognize that some aspects of the embodiments disclosed herein, in
whole or in part, can be equivalently implemented in integrated
circuits, as one or more computer programs running on one or more
computers (e.g., as one or more programs running on one or more
computer systems), as one or more programs running on one or more
processors (e.g., as one or more programs running on one or more
microprocessors), as firmware, or as virtually any combination
thereof, and that designing the circuitry and/or writing the code
for the software and/or firmware would be well within the skill of one skilled in the art in light of this disclosure. In addition,
those skilled in the art will appreciate that the mechanisms of the
subject matter described herein are capable of being distributed as
a program product in a variety of forms, and that an illustrative
embodiment of the subject matter described herein applies
regardless of the particular type of signal bearing medium used to
actually carry out the distribution. Examples of a signal bearing
medium include, but are not limited to, the following: a recordable
type medium such as a floppy disk, a hard disk drive, a Compact
Disc (CD), a Digital Video Disk (DVD), a digital tape, a computer
memory, etc.; and a transmission type medium such as a digital
and/or an analog communication medium (e.g., a fiber optic cable, a
waveguide, a wired communications link, a wireless communication
link (e.g., transmitter, receiver, transmission logic, reception
logic, etc.), etc.).
[0122] In a general sense, those skilled in the art will recognize
that the various aspects described herein which can be implemented,
individually and/or collectively, by a wide range of hardware,
software, firmware, and/or any combination thereof can be viewed as
being composed of various types of "electrical circuitry."
Consequently, as used herein "electrical circuitry" includes, but
is not limited to, electrical circuitry having at least one
discrete electrical circuit, electrical circuitry having at least
one integrated circuit, electrical circuitry having at least one
application specific integrated circuit, electrical circuitry
forming a general purpose computing device configured by a computer
program (e.g., a general purpose computer configured by a computer
program which at least partially carries out processes and/or
devices described herein, or a microprocessor configured by a
computer program which at least partially carries out processes
and/or devices described herein), electrical circuitry forming a
memory device (e.g., forms of memory (e.g., random access, flash,
read only, etc.)), and/or electrical circuitry forming a
communications device (e.g., a modem, communications switch,
optical-electrical equipment, etc.). Those having skill in the art
will recognize that the subject matter described herein may be
implemented in an analog or digital fashion or some combination
thereof.
[0123] Modules, logic, circuitry, hardware and software
combinations, firmware, or so forth may be realized or implemented
as one or more general-purpose processors, one or more processing
cores, one or more special-purpose processors, one or more
microprocessors, at least one Application-Specific Integrated
Circuit (ASIC), at least one Field Programmable Gate Array (FPGA),
at least one digital signal processor (DSP), some combination
thereof, or so forth that is executing or is configured to execute
instructions, a special-purpose program, an application, software,
code, some combination thereof, or so forth as at least one
special-purpose computing apparatus or specific computing
component. One or more modules, logic, or circuitry, etc. may, by
way of example but not limitation, be implemented using one
processor or multiple processors that are configured to execute
instructions (e.g., sequentially, in parallel, at least partially
overlapping in a time-multiplexed fashion, at least partially
overlapping across multiple cores, or a combination thereof, etc.)
to perform a method or realize a particular computing machine. For
example, a first module may be embodied by a given processor
executing a first set of instructions at or during a first time,
and a second module may be embodied by the same given processor
executing a second set of instructions at or during a second time.
Moreover, the first and second times may be at least partially
interleaved or overlapping, such as in a multi-threading,
pipelined, or predictive processing environment. As an alternative
example, a first module may be embodied by a first processor
executing a first set of instructions, and a second module may be
embodied by a second processor executing a second set of
instructions. As another alternative example, a particular module
may be embodied partially by a first processor executing at least a
portion of a particular set of instructions and embodied partially
by a second processor executing at least a portion of the
particular set of instructions. Other combinations of instructions,
a program, an application, software, or code, etc. in conjunction
with at least one processor or other execution machinery may be
utilized to realize one or more modules, logic, or circuitry, etc.
to implement any of the processing algorithms described herein.
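By way of a purely illustrative and non-limiting sketch, the
following Python fragment shows the distinction drawn above: the
same execution machinery may embody a first module at or during a
first time and a second module at or during a second time, or
separate execution contexts may each embody one module. The module
names and their placeholder bodies are assumptions introduced here
for illustration only and do not appear elsewhere in this
disclosure.

    import threading

    def alignment_module(images):
        # First "module": stands in for registering visual features
        # common to the captured images (hypothetical placeholder logic).
        return sorted(images)

    def compositing_module(aligned):
        # Second "module": stands in for merging in-focus regions into
        # a composite image (hypothetical placeholder logic).
        return "+".join(aligned)

    images = ["near_focus", "mid_focus", "far_focus"]

    # Case 1: one processor (here, one thread of control) embodies both
    # modules, the first at a first time and the second at a second time.
    composite = compositing_module(alignment_module(images))

    # Case 2: separate execution contexts (threads standing in for
    # separate processors), each embodying one module.
    result = {}
    t1 = threading.Thread(target=lambda: result.update(a=alignment_module(images)))
    t1.start(); t1.join()
    t2 = threading.Thread(target=lambda: result.update(c=compositing_module(result["a"])))
    t2.start(); t2.join()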
[0124] Those skilled in the art will recognize that at least a
portion of the devices and/or processes described herein can be
integrated into a data processing system. Those having skill in the
art will recognize that a data processing system generally includes
one or more of a system unit housing, a video display device,
memory such as volatile or non-volatile memory, processors such as
microprocessors or digital signal processors, computational
entities such as operating systems, drivers, graphical user
interfaces, and applications programs, one or more interaction
devices (e.g., a touch pad, a touch screen, an antenna, etc.),
and/or control systems including feedback loops and control motors
(e.g., feedback for sensing position and/or velocity; control
motors for moving and/or adjusting components and/or quantities). A
data processing system may be implemented utilizing suitable
commercially available components, such as those typically found in
data computing/communication and/or network computing/communication
systems.
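As a purely illustrative sketch of the feedback loops and control
motors mentioned above, the following Python fragment drives a
hypothetical lens-position motor toward a target position using
proportional feedback; sense_position and drive_motor are assumed
stand-ins for hardware interfaces and are not interfaces defined in
this disclosure.

    def settle_lens(target_mm, sense_position, drive_motor,
                    gain=0.5, tolerance_mm=0.001, max_steps=1000):
        # Proportional feedback: sense the current position, compute the
        # error, and command the motor with a step proportional to it.
        for _ in range(max_steps):
            error = target_mm - sense_position()
            if abs(error) < tolerance_mm:
                return True  # settled within tolerance of the target
            drive_motor(gain * error)
        return False

    # Exercising the sketch with simulated hardware in place of a real
    # sensor and motor:
    state = {"pos": 0.0}
    settle_lens(2.5,
                sense_position=lambda: state["pos"],
                drive_motor=lambda step: state.__setitem__("pos", state["pos"] + step))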
[0125] For the purposes of this application, "cloud" computing may
be understood as described in the cloud computing literature. For
example, cloud computing may be methods and/or systems for the
delivery of computational capacity and/or storage capacity as a
service. The "cloud" may refer to one or more hardware and/or
software components that deliver or assist in the delivery of
computational and/or storage capacity, including, but not limited
to, one or more of a client, an application, a platform, an
infrastructure, and/or a server The cloud may refer to any of the
hardware and/or software associated with a client, an application,
a platform, an infrastructure, and/or a server. For example, cloud
and cloud computing may refer to one or more of a computer, a
processor, a storage medium, a router, a switch, a modem, a virtual
machine (e.g., a virtual server), a data center, an operating
system, a middleware, a firmware, a hardware back-end, a software
back-end, and/or a software application. A cloud may refer to a
private cloud, a public cloud, a hybrid cloud, and/or a community
cloud. A cloud may be a shared pool of configurable computing
resources, which may be public, private, semi-private,
distributable, scalable, flexible, temporary, virtual, and/or
physical. A cloud or cloud service may be delivered over one or
more types of networks, e.g., a mobile communication network and/or
the Internet.
[0126] As used in this application, a cloud or a cloud service may
include one or more of infrastructure-as-a-service ("IaaS"),
platform-as-a-service ("PaaS"), software-as-a-service ("SaaS"),
and/or desktop-as-a-service ("DaaS"). As a non-exclusive example,
IaaS may include, e.g., one or more virtual server instantiations
that may start, stop, access, and/or configure virtual servers
and/or storage centers (e.g., providing one or more processors,
storage space, and/or network resources on-demand, e.g., EMC and
Rackspace). PaaS may include, e.g., one or more software and/or
development tools hosted on an infrastructure (e.g., a computing
platform and/or a solution stack from which the client can create
software interfaces and applications, e.g., Microsoft Azure). SaaS
may include, e.g., software hosted by a service provider and
accessible over a network (e.g., the software for the application
and/or the data associated with that software application may be
kept on the network, e.g., Google Apps, SalesForce). DaaS may
include, e.g., providing desktop, applications, data, and/or
services for the user over a network (e.g., providing a
multi-application framework, the applications in the framework, the
data associated with the applications, and/or services related to
the applications and/or the data over the network, e.g., Citrix).
The foregoing is intended to be exemplary of the types of systems
and/or methods referred to in this application as "cloud" or "cloud
computing" and should not be considered complete or exhaustive.
[0127] Those skilled in the art will recognize that it is common
within the art to implement devices and/or processes and/or
systems, and thereafter use engineering and/or other practices to
integrate such implemented devices and/or processes and/or systems
into more comprehensive devices and/or processes and/or systems.
That is, at least a portion of the devices and/or processes and/or
systems described herein can be integrated into other devices
and/or processes and/or systems via a reasonable amount of
experimentation. Those having skill in the art will recognize that
examples of such other devices and/or processes and/or systems
might include--as appropriate to context and application--all or
part of devices and/or processes and/or systems of (a) an air
conveyance (e.g., an airplane, rocket, helicopter, etc.), (b) a
ground conveyance (e.g., a car, truck, locomotive, tank, armored
personnel carrier, etc.), (c) a building (e.g., a home, warehouse,
office, etc.), (d) an appliance (e.g., a refrigerator, a washing
machine, a dryer, etc.), (e) a communications system (e.g., a
networked system, a telephone system, a Voice over IP system,
etc.), (f) a business entity (e.g., an Internet Service Provider
(ISP) entity such as Comcast Cable, Qwest, Southwestern Bell,
etc.), or (g) a wired/wireless services entity (e.g., Sprint,
Cingular, Nextel, etc.), etc.
[0128] In certain cases, use of a system or method may occur in a
territory even if components are located outside the territory. For
example, in a distributed computing context, use of a distributed
computing system may occur in a territory even though parts of the
system may be located outside of the territory (e.g., relay,
server, processor, signal-bearing medium, transmitting computer,
receiving computer, etc. located outside the territory). A sale of
a system or method may likewise occur in a territory even if
components of the system or method are located and/or used outside
the territory. Further, implementation of at least part of a system
for performing a method in one territory does not preclude use of
the system in another territory.
[0129] One skilled in the art will recognize that the herein
described components (e.g., operations), devices, objects, and the
discussion accompanying them are used as examples for the sake of
conceptual clarity and that various configuration modifications are
contemplated. Consequently, as used herein, the specific exemplars
set forth and the accompanying discussion are intended to be
representative of their more general classes. In general, use of
any specific exemplar is intended to be representative of its
class, and the non-inclusion of specific components (e.g.,
operations), devices, and objects should not be taken as limiting.
[0130] With respect to the use of substantially any plural and/or
singular terms herein, those having skill in the art can translate
from the plural to the singular and/or from the singular to the
plural as is appropriate to the context and/or application. The
various singular/plural permutations are not expressly set forth
herein for the sake of clarity.
[0131] The herein described subject matter sometimes illustrates
different components contained within, or connected with, different
other components. It is to be understood that such depicted
architectures are merely exemplary, and that in fact many other
architectures may be implemented which achieve the same
functionality. In a conceptual sense, any arrangement of components
to achieve the same functionality is effectively "associated" such
that the desired functionality is achieved. Hence, any two
components herein combined to achieve a particular functionality
can be seen as "associated with" each other such that the desired
functionality is achieved, irrespective of architectures or
intermedial components. Likewise, any two components so associated
can also be viewed as being "operably connected" or "operably
coupled" to each other to achieve the desired functionality, and
any two components capable of being so associated can also be
viewed as being "operably couplable" to each other to achieve the
desired functionality. Specific examples of operably couplable
include but are not limited to physically mateable and/or
physically interacting components, and/or wirelessly interactable,
and/or wirelessly interacting components, and/or logically
interacting, and/or logically interactable components.
[0132] In some instances, one or more components may be referred to
herein as "configured to," "configured by," "configurable to,"
"operable/operative to," "adapted/adaptable," "able to,"
"conformable/conformed to," etc. Those skilled in the art will
recognize that such terms (e.g. "configured to") can generally
encompass active-state components and/or inactive-state components
and/or standby-state components, unless context requires
otherwise.
[0133] This application may make reference to one or more
trademarks, e.g., a word, letter, symbol, or device adopted by one
manufacturer or merchant and used to identify and distinguish his
or her product from those of others. Trademark names used herein
are set forth in language that makes clear their identity, that
distinguishes them from common descriptive nouns, that gives them
fixed and definite meanings, and that, in many if not all cases, is
accompanied by other specific identification using terms not
covered by trademark. In addition, trademark names used herein have
meanings that are well known and defined in the literature and do
not refer to products or compounds for which knowledge of one or
more trade secrets is required in order to divine their meaning.
All trademarks referenced in this
application are the property of their respective owners, and the
appearance of one or more trademarks in this application does not
diminish or otherwise adversely affect the validity of the one or
more trademarks. All trademarks, registered or unregistered, that
appear in this application are assumed to include a proper
trademark symbol, e.g., the circled R (®) or ™, even when such
trademark symbol does not explicitly appear next to the trademark.
To the extent a trademark is used in a descriptive manner to refer
to a product or process, that trademark should be interpreted to
represent the corresponding product or process as of the date of
the filing of this patent application.
[0134] While particular aspects of the present subject matter
described herein have been shown and described, it will be apparent
to those skilled in the art that, based upon the teachings herein,
changes and modifications may be made without departing from the
subject matter described herein and its broader aspects and,
therefore, the appended claims are to encompass within their scope
all such changes and modifications as are within the true spirit
and scope of the subject matter described herein. It will be
understood by those within the art that, in general, terms used
herein, and especially in the appended claims (e.g., bodies of the
appended claims) are generally intended as "open" terms (e.g., the
term "including" should be interpreted as "including but not
limited to," the term "having" should be interpreted as "having at
least," the term "includes" should be interpreted as "includes but
is not limited to," etc.). It will be further understood by those
within the art that if a specific number of an introduced claim
recitation is intended, such an intent will be explicitly recited
in the claim, and in the absence of such recitation no such intent
is present. For example, as an aid to understanding, the following
appended claims may contain usage of the introductory phrases "at
least one" and "one or more" to introduce claim recitations.
However, the use of such phrases should not be construed to imply
that the introduction of a claim recitation by the indefinite
articles "a" or "an" limits any particular claim containing such
introduced claim recitation to claims containing only one such
recitation, even when the same claim includes the introductory
phrases "one or more" or "at least one" and indefinite articles
such as "a" or "an" (e.g., "a" and/or "an" should typically be
interpreted to mean "at least one" or "one or more"); the same
holds true for the use of definite articles used to introduce claim
recitations. In addition, even if a specific number of an
introduced claim recitation is explicitly recited, those skilled in
the art will recognize that such recitation should typically be
interpreted to mean at least the recited number (e.g., the bare
recitation of "two recitations," without other modifiers, typically
means at least two recitations, or two or more recitations).
Furthermore, in those instances where a convention analogous to "at
least one of A, B, and C, etc." is used, in general such a
construction is intended in the sense one having skill in the art
would understand the convention (e.g., "a system having at least
one of A, B, and C" would include but not be limited to systems
that have A alone, B alone, C alone, A and B together, A and C
together, B and C together, and/or A, B, and C together, etc.). In
those instances where a convention analogous to "at least one of A,
B, or C, etc." is used, in general such a construction is intended
in the sense one having skill in the art would understand the
convention (e.g., "a system having at least one of A, B, or C"
would include but not be limited to systems that have A alone, B
alone, C alone, A and B together, A and C together, B and C
together, and/or A, B, and C together, etc.). It will be further
understood by those within the art that typically a disjunctive
word and/or phrase presenting two or more alternative terms,
whether in the description, claims, or drawings, should be
understood to contemplate the possibilities of including one of the
terms, either of the terms, or both terms unless context dictates
otherwise. For example, the phrase "A or B" will be typically
understood to include the possibilities of "A" or "B" or "A and
B."
[0135] With respect to the appended claims, those skilled in the
art will appreciate that recited operations therein may generally
be performed in any order. Also, although various operational flows
are presented in a sequence(s), it should be understood that the
various operations may be performed in other orders than those
which are illustrated, or may be performed concurrently. Examples
of such alternate orderings may include overlapping, interleaved,
interrupted, reordered, incremental, preparatory, supplemental,
simultaneous, reverse, or other variant orderings, unless context
dictates otherwise. Furthermore, terms like "responsive to,"
"related to," or other past-tense adjectives are generally not
intended to exclude such variants, unless context dictates
otherwise.
[0136] While various aspects and embodiments have been disclosed
herein, other aspects and embodiments will be apparent to those
skilled in the art. The various aspects and embodiments disclosed
herein are for purposes of illustration and are not intended to be
limiting, with the true scope and spirit being indicated by the
following claims.
* * * * *