U.S. patent application number 14/411285 was published by the patent office on 2015-10-15 under publication number 20150292875 for systems, methods, and media for performing shape measurement. This patent application is currently assigned to The Trustees of Columbia University in the City of New York. The applicant listed for this patent is The Trustees of Columbia University in the City of New York. Invention is credited to Mohit Gupta and Shree K. Nayar.

United States Patent Application 20150292875
Kind Code: A1
Gupta; Mohit; et al.
October 15, 2015
SYSTEMS, METHODS, AND MEDIA FOR PERFORMING SHAPE MEASUREMENT
Abstract
Systems, methods, and media for performing shape measurement are
provided. In some embodiments, systems for performing shape
measurement are provided, the systems comprising: a projector that
projects onto a scene a plurality of illumination patterns, wherein
each of the illumination patterns has a given frequency, each of
the illumination patterns is projected onto the scene during a
separate period of time, three different illumination patterns are
projected with a first given frequency, and only one or two
different illumination patterns are projected with a second given
frequency; a camera that detects an image of the scene during each
of the plurality of periods of time; and a hardware processor that
is configured to: determine the given frequencies of the plurality
of illumination patterns; and measure a shape of an object in the
scene.
Inventors: Gupta; Mohit (New York, NY); Nayar; Shree K. (New York, NY)
Applicant: The Trustees of Columbia University in the City of New York, New York, NY, US
Assignee: The Trustees of Columbia University in the City of New York, New York, NY
Family ID: 48470306
Appl. No.: 14/411285
Filed: November 21, 2012
PCT Filed: November 21, 2012
PCT No.: PCT/US2012/066307
371 Date: December 24, 2014
Related U.S. Patent Documents

Application Number: 61563470
Filing Date: Nov 23, 2011
Current U.S. Class: 356/610
Current CPC Class: G01B 11/2536 (20130101); G01B 11/254 (20130101); G06K 9/4661 (20130101)
International Class: G01B 11/25 (20060101) G01B011/25
Government Interests
STATEMENT REGARDING GOVERNMENT FUNDED RESEARCH
[0002] This invention was made with government support under
contract No. 0964429 awarded by the National Science
Foundation and under contract N00014-11-1-0285 awarded by the
Office of Naval Research. The government has certain rights in the
invention.
Claims
1. A system for performing shape measurement, comprising: a
projector that projects onto a scene a plurality of illumination
patterns, wherein each of the illumination patterns has a given
frequency, each of the illumination patterns is projected onto the
scene during a separate period of time, three different
illumination patterns are projected with a first given frequency,
and only one or two different illumination patterns are projected
with a second given frequency; a camera that detects an image of
the scene during each of the plurality of periods of time; and a
hardware processor that is configured to: determine the given
frequencies of the plurality of illumination patterns; and measure
a shape of an object in the scene.
2. The system of claim 1, wherein the hardware processor is further
configured to determine the given frequencies of the plurality of
illumination patterns by measuring the amplitudes of reflected
light at different frequencies of illumination patterns and
selecting frequencies corresponding to a small range of
amplitudes.
3. The system of claim 2, wherein the amplitudes in the small range are within 1% of each other.
4. The system of claim 2, wherein the amplitudes in the small range are within 5% of each other.
5. The system of claim 2, wherein the amplitudes in the small range are within 10% of each other.
6. The system of claim 1, wherein the hardware processor is further
configured to: determine a plurality of light transport
characteristics relating to the scene; and determine the spatial
frequency based on the plurality of light transport
characteristics.
7. The system of claim 1, wherein the hardware processor is further
configured to determine parameters for each sinusoidal pattern such
that global illumination and defocus effects for the plurality of
images are constant.
8. The system of claim 1, wherein the hardware processor is further
configured to perform phase unwrapping using the Gushov-Solodkin
(G-S) algorithm.
9. The system of claim 1, wherein each of the given frequencies of
the plurality of illumination patterns is higher than 10 Hz.
10. The system of claim 1, wherein each of the given frequencies of
the plurality of illumination patterns is higher than 30 Hz.
11. The system of claim 1, wherein each of the given frequencies of
the plurality of illumination patterns is higher than 60 Hz.
12. A method for performing shape measurement, comprising:
projecting onto a scene a plurality of illumination patterns using
a projector, wherein each of the illumination patterns has a given
frequency, each of the illumination patterns is projected onto the
scene during a separate period of time, three different
illumination patterns are projected with a first given frequency,
and only one or two different illumination patterns are projected
with a second given frequency; detecting an image of the scene
during each of the plurality of periods of time using a camera;
determining the given frequencies of the plurality of illumination
patterns using a hardware processor; and measuring a shape of an
object in the scene using the hardware processor.
13. The method of claim 12, wherein the determining the given
frequencies of the plurality of illumination patterns is performed
by measuring the amplitudes of reflected light at different
frequencies of illumination patterns and selecting frequencies
corresponding to a small range of amplitudes.
14. The method of claim 13, wherein the amplitudes in the small range are within 1% of each other.
15. The method of claim 13, wherein the amplitudes in the small range are within 5% of each other.
16. The method of claim 13, wherein the amplitudes in the small range are within 10% of each other.
17. The method of claim 12, further comprising: determining a
plurality of light transport characteristics relating to the scene;
and determining the spatial frequency based on the plurality of
light transport characteristics.
18. The method of claim 12, further comprising determining
parameters for each sinusoidal pattern such that global
illumination and defocus effects for the plurality of images are constant.
19. The method of claim 12, further comprising performing phase
unwrapping using the Gushov-Solodkin (G-S) algorithm.
20. The method of claim 12, wherein each of the given frequencies
of the plurality of illumination patterns is higher than 10 Hz.
21. The method of claim 12, wherein each of the given frequencies
of the plurality of illumination patterns is higher than 30 Hz.
22. The method of claim 12, wherein each of the given frequencies
of the plurality of illumination patterns is higher than 60 Hz.
23. A non-transitory computer-readable medium containing
computer-executable instructions that, when executed by a
processor, cause the processor to perform a method for performing
shape measurement, the method comprising: projecting onto a scene a
plurality of illumination patterns, wherein each of the
illumination patterns has a given frequency, each of the
illumination patterns is projected onto the scene during a separate
period of time, three different illumination patterns are projected
with a first given frequency, and only one or two different
illumination patterns are projected with a second given frequency;
detecting an image of the scene during each of the plurality of
periods of time; determining the given frequencies of the plurality
of illumination patterns; and measuring a shape of an object in the
scene.
24. The non-transitory computer-readable medium of claim 23,
wherein the determining the given frequencies of the plurality of
illumination patterns is performed by measuring the amplitudes of
reflected light at different frequencies of illumination patterns
and selecting frequencies corresponding to a small range of
amplitudes.
25. The non-transitory computer-readable medium of claim 24, wherein the amplitudes in the small range are within 1% of each other.
26. The non-transitory computer-readable medium of claim 24, wherein the amplitudes in the small range are within 5% of each other.
27. The non-transitory computer-readable medium of claim 24, wherein the amplitudes in the small range are within 10% of each other.
28. The non-transitory computer-readable medium of claim 23,
wherein the method further comprises: determining a plurality of
light transport characteristics relating to the scene; and
determining the spatial frequency based on the plurality of light
transport characteristics.
29. The non-transitory computer-readable medium of claim 23,
wherein the method further comprises determining parameters for
each sinusoidal pattern such that global illumination and defocus
effects for the plurality of images are constant.
30. The non-transitory computer-readable medium of claim 23,
wherein the method further comprises performing phase unwrapping
using the Gushov-Solodkin (G-S) algorithm.
31. The non-transitory computer-readable medium of claim 23,
wherein each of the given frequencies of the plurality of
illumination patterns is higher than 10 Hz.
32. The non-transitory computer-readable medium of claim 23,
wherein each of the given frequencies of the plurality of
illumination patterns is higher than 30 Hz.
33. The non-transitory computer-readable medium of claim 23,
wherein each of the given frequencies of the plurality of
illumination patterns is higher than 60 Hz.
Description
CROSS REFERENCE TO RELATED APPLICATION
[0001] This application claims the benefit of U.S. Provisional
Patent Application No. 61/563,470, filed Nov. 23, 2011, which is
hereby incorporated by reference herein in its entirety.
BACKGROUND
[0003] Phase shifting is a reliable and widely used shape
measurement technique. Because of its low cost, high speed, and
precision, it has found applications in surgery, factory
automation, performance capture, digitization of cultural heritage,
and other fields.
[0004] Phase shifting belongs to a class of active stereo
triangulation techniques. In these techniques, correspondence
between camera and projector pixels is established by projecting
coded intensity patterns on the scene. This correspondence can then
be used to triangulate points in the scene to establish a shape of
an object in the scene.
[0005] Like other active scene recovery techniques, phase shifting
assumes that scene points are only directly illuminated by a single
light source and thus that there is no global illumination. In
practice, however, global illumination is ubiquitous due to
inter-reflections and subsurface scattering. In fact, global
illumination arises in virtually every real world scene. As a
result, typical phase shifting produces erroneous results due to
such global illumination.
[0006] Furthermore, phase shifting algorithms also typically assume
that the light source is a perfect point with infinite depth of
field. However, all sources of light have limited depths of field,
which results in defocus. In order to account for defocus, existing
phase shifting techniques need to capture a large number of input
images.
[0007] Accordingly, new mechanisms for performing shape measurement
are desirable.
SUMMARY
[0008] Systems, methods, and media for performing shape measurement
are provided. In some embodiments, systems for performing shape
measurement are provided, the systems comprising: a projector that
projects onto a scene a plurality of illumination patterns, wherein
each of the illumination patterns has a given frequency, each of
the illumination patterns is projected onto the scene during a
separate period of time, three different illumination patterns are
projected with a first given frequency, and only one or two
different illumination patterns are projected with a second given
frequency; a camera that detects an image of the scene during each
of the plurality of periods of time; and a hardware processor that
is configured to: determine the given frequencies of the plurality
of illumination patterns; and measure a shape of an object in the
scene.
[0009] In some embodiments, methods for performing shape
measurement are provided, the methods comprising: projecting onto a
scene a plurality of illumination patterns using a projector,
wherein each of the illumination patterns has a given frequency,
each of the illumination patterns is projected onto the scene
during a separate period of time, three different illumination
patterns are projected with a first given frequency, and only one
or two different illumination patterns are projected with a second
given frequency; detecting an image of the scene during each of the
plurality of periods of time using a camera; determining the given
frequencies of the plurality of illumination patterns using a
hardware processor; and measuring a shape of an object in the scene
using the hardware processor.
[0010] In some embodiments, non-transitory computer-readable media
containing computer-executable instructions that, when executed by
a processor, cause the processor to perform a method for performing
shape measurement are provided, the method comprising: projecting
onto a scene a plurality of illumination patterns, wherein each of
the illumination patterns has a given frequency, each of the
illumination patterns is projected onto the scene during a separate
period of time, three different illumination patterns are projected
with a first given frequency, and only one or two different
illumination patterns are projected with a second given frequency;
detecting an image of the scene during each of the plurality of
periods of time; determining the given frequencies of the plurality
of illumination patterns; and measuring a shape of an object in the
scene.
BRIEF DESCRIPTION OF THE DRAWINGS
[0011] FIG. 1 is a block diagram of an example of hardware for
performing shape measurement that can be used in accordance with
some embodiments.
[0012] FIG. 2 is a block diagram of an example of computer hardware
that can be used in accordance with some embodiments.
[0013] FIG. 3 is a diagram of an example of an illumination pattern
that can be used in accordance with some embodiments.
[0014] FIG. 4 is a diagram of an example of a process that can be
used to perform shape measurement in accordance with some
embodiments.
DETAILED DESCRIPTION
[0015] Systems, methods, and media for performing shape measurement
are provided. In some embodiments, mechanisms for shape measurement
can project illumination patterns on a scene containing one or more
objects using any suitable projector, and reflections of those
illumination patterns from the scene can be detected and stored as
images by any suitable camera. The patterns that are projected can
be any suitable patterns such as sinusoidal patterns. The
frequencies that are used for these patterns can be chosen to be
high enough so that both global illumination and defocus effects
remain essentially constant for all of the detected and stored
images. A correspondence between projector pixels and camera pixels
can then be determined and used to triangulate points on the
surfaces of the objects in the scene and hence determine the shapes
of those objects.
[0016] Turning to FIG. 1, an example 100 of hardware that can be
used in accordance with some embodiments is illustrated. As shown,
hardware 100 can include a computer 102, a projector 104, a camera 106, one or more input devices 108, and one or more output devices 110 in some
embodiments.
[0017] During operation, computer 102 can cause projector 104 to
project any suitable number of structured light images onto a scene
112, which can include any suitable objects, such as objects 114
and 116, in some embodiments. At the same time, camera 106 can
detect light reflecting from the scene and provide detected images
to the computer in some embodiments. The computer can then perform
processing as described herein to determine the shape and any other
suitable data regarding the objects in the scene in some
embodiments.
[0018] Computer 102 can be any suitable processing device for
controlling the operation of projector 104 and camera 106, for
performing calculations as described herein, for generating any
suitable output, and/or for performing any other suitable functions
in some embodiments. Features of computer 102 in accordance with
some embodiments are described further in connection with FIG.
2.
[0019] Projector 104 can be any suitable device for projecting
structured light images as described herein. For example, projector
104 can be a projection system, a display, etc. More particularly,
for example, in some embodiments, projector 104 can be a SANYO
PLC-XP18N projection system available from SANYO NORTH AMERICA
CORPORATION of San Diego, Calif.
[0020] Camera 106 can be any suitable device for detecting images
as described herein. For example, camera 106 can be a still camera, a video camera, a light sensor, an image sensor, etc. More
particularly, for example, in some embodiments, camera 106 can be a
machine-vision camera available from POINT GREY RESEARCH, INC. of
Richmond, British Columbia, Canada, or from LUMENERA CORPORATION of
Ottawa, Ontario, Canada.
[0021] Input devices 108 can be any suitable one or more input
devices for controlling computer 102 in some embodiments. For
example, input devices 108 can include a touch screen, a computer
mouse, a pointing device, one or more buttons, a keypad, a
keyboard, a voice recognition circuit, a microphone, etc.
[0022] Output devices 110 can be any suitable one or more output
devices for providing output from computer 102 in some embodiments.
For example, output devices 110 can include a display, an audio
device, etc.
[0023] Any other suitable components can be included in hardware
100 in accordance with some embodiments. Any suitable components
illustrated in hardware 100 can be combined and/or omitted in some
embodiments.
[0024] Computer 102 can be implemented using any suitable hardware
in some embodiments. For example, in some embodiments, computer 102
can be implemented using any suitable general purpose computer or
special purpose computer. Any such general purpose computer or
special purpose computer can include any suitable hardware. For
example, as illustrated in example hardware 200 of FIG. 2, such
hardware can include a hardware processor 202, memory and/or
storage 204, communication interface(s) 206, an input controller
208, an output controller 210, a projector interface 212, a camera
interface 214, and a bus 216.
[0025] Hardware processor 202 can include any suitable hardware
processor, such as a microprocessor, a micro-controller, digital
signal processor, dedicated logic, and/or any other suitable
circuitry for controlling the functioning of a general purpose
computer or special purpose computer in some embodiments.
[0026] Memory and/or storage 204 can be any suitable memory and/or
storage for storing programs, data, images to be projected,
detected images, measurements, etc. in some embodiments. For
example, memory and/or storage 204 can include random access
memory, read only memory, flash memory, hard disk storage, optical
media, etc.
[0027] Communication interface(s) 206 can be any suitable circuitry
for interfacing with one or more communication networks in some
embodiments. For example, interface(s) 206 can include network
interface card circuitry, wireless communication circuitry,
etc.
[0028] Input controller 208 can be any suitable circuitry for
receiving input from one or more input devices 108 in some
embodiments. For example, input controller 208 can be circuitry for
receiving input from a touch screen, from a computer mouse, from a
pointing device, from one or more buttons, from a keypad, from a
keyboard, from a voice recognition circuit, from a microphone,
etc.
[0029] Output controller 210 can be any suitable circuitry for
controlling and driving one or more output devices 110 in some
embodiments. For example, output controller 210 can be circuitry
for driving output to a display, an audio device, etc.
[0030] Projector interface 212 can be any suitable interface for
interfacing hardware 200 to a projector, such as projector 104, in
some embodiments. Interface 212 can use any suitable protocol in
some embodiments.
[0031] Camera interface 214 can be any suitable interface for
interfacing hardware 200 to a camera, such as camera 106, in some
embodiments. Interface 214 can use any suitable protocol in some
embodiments.
[0032] Bus 216 can be any suitable mechanism for communicating
between two or more of components 202, 204, 206, 208, 210, 212, and
214 in some embodiments.
[0033] Any other suitable components can be included in hardware
200 in accordance with some embodiments. Any suitable components
illustrated in hardware 200 can be combined and/or omitted in some
embodiments.
[0034] FIG. 3 illustrates an example of a structured light pattern
302 that can be projected by projector 104 onto scene 112 in
accordance with some embodiments. Pattern 302 can have an intensity
that varies between white and black in accordance with a sine
function, as shown by sine wave 304, in some embodiments. Thus,
each of the columns of pixels in pattern 302 can have a given
intensity vector. The period of the pattern, shown by 306, can be
measured in pixels in some embodiments.
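By way of illustration, the following minimal Python sketch generates such a sinusoidal fringe pattern; the resolution, the 16-pixel period, and the 8-bit gray-level mapping are assumptions chosen for the example rather than values taken from this disclosure.

```python
import numpy as np

def make_sinusoid_pattern(width, height, period, phase=0.0):
    """Vertical fringe pattern: every pixel in a column shares one
    intensity, and intensity varies across columns as a sine wave
    whose period is measured in pixels."""
    columns = np.arange(width)
    intensity = 0.5 * (1.0 + np.sin(2.0 * np.pi * columns / period + phase))
    row = (255.0 * intensity).astype(np.uint8)      # map [-1, 1] to 8-bit gray
    return np.tile(row, (height, 1))                # repeat for every row

# Illustrative values: a 1024 x 768 pattern with a 16-pixel period.
pattern = make_sinusoid_pattern(1024, 768, period=16.0)
```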
[0035] Turning to FIG. 4, an example 400 of a process for
performing shape measurement in accordance with some embodiments is
illustrated. This process can be performed in computer 102 of FIG.
1 in some embodiments.
[0036] As shown, after process 400 has begun at 402, the process
can determine at 404 frequencies of structured light patterns to be
projected in some embodiments. Any suitable frequencies can be
used, and these frequencies can be determined in any suitable
manner.
[0037] For example, in some embodiments, a set $\Omega$ of pattern frequencies (i.e., $\Omega = \{\omega_1, \ldots, \omega_F\}$) can be selected so as to meet the following two conditions: (1) the mean frequency $\omega_m$ is sufficiently high (period $\lambda$ is small) so that global illumination does not introduce significant errors in the recovered phase; and (2) the width of the frequency band $\delta$ (i.e., $\delta$ such that all frequencies in $\Omega$ lie within the band $[\omega_m - \delta/2, \omega_m + \delta/2]$) is sufficiently small so that the camera-detected amplitudes for all the frequencies are approximately the same, i.e., $A_1 \approx A_2 \approx \cdots \approx A_F \approx A$.
[0038] Regarding the first of these conditions, any suitable value for $\omega_m$ can be used in some embodiments. For example, in some embodiments, a mean frequency $\omega_m$ corresponding to a period $\lambda_m$ smaller than 96 pixels (e.g., 16, 32, etc. pixels) (as described above in connection with FIG. 3) can be sufficiently high to prevent errors in the recovered phase due to global illumination for a large collection of scenes. As another example, in some embodiments, a mean frequency $\omega_m$ higher than 10 Hz (e.g., 30 Hz, 60 Hz, etc.) can be sufficiently high to prevent errors in the recovered phase due to global illumination for a large collection of scenes.
[0039] In some embodiments, the selection of a mean frequency may
take into account that, due to optical aberrations in a given
projector used in such embodiments, the projector may not be able
to project certain high frequencies reliably and therefore that a
mean frequency lower than such high frequencies should be
selected.
[0040] Regarding the second of these conditions, any suitable value for $\delta$ can be used in some embodiments. For example, in some embodiments, the width of the frequency band $\delta$ can be selected to be the largest value that will not cause the maximum variation in the amplitudes of reflected light between any pair of frequencies of projected patterns in the frequency band to exceed some percentage, such as 1%, 5%, 10%, etc., based on the noise level of the camera. For example, higher noise levels in the camera will allow higher variations in the amplitudes of reflected light between pairs of frequencies of projected patterns. Such a variation in the amplitudes can be confirmed by measuring and averaging the amplitudes of reflected light over a large number of scene points receiving different amounts of global illumination and defocus in some embodiments.
[0041] In some embodiments, for a mean frequency $\omega_m$ corresponding to a period $\lambda_m$ of 16 pixels, the width of the frequency band $\delta$ can correspond to a period of 3 pixels.
[0042] In some embodiments, the selection of the width of the frequency band $\delta$ may take into account that, due to finite spatial and intensity resolution in a given projector used in such embodiments, the projector may not be able to reliably distinguish two frequencies if the difference between them is less than a threshold $\epsilon$. Thus, in such embodiments, the width of the frequency band $\delta$ can be selected so as to be large enough to ensure that $F$ different frequencies can be distinguished, i.e., are at least $\epsilon$ apart.
[0043] Once the mean frequency $\omega_m$ and the width of the frequency band $\delta$ have been selected, a resulting frequency band can be determined. For example, based on a mean frequency $\omega_m$ corresponding to a period $\lambda_m$ of 16 pixels and a width of the frequency band $\delta$ corresponding to a period of 3 pixels, a resulting frequency band corresponding to periods of 14.5 through 17.5 pixels can be determined.
[0044] Next, the individual frequencies of the illumination patterns can be selected. These frequencies can be selected in any suitable manner in some embodiments. For example, in some embodiments, these frequencies can be selected so that depth errors due to phase errors are minimized. In some embodiments, because:

[0045] (a) such depth errors can be proportional to the phase error $\Delta\phi = |p - q|$, where, for a given camera pixel, $p$ is the correct projector column and $q$ is the computed projector column; and

[0046] (b) when $F$ frequencies are used, each projector column can be encoded with a unique $F+2$ dimensional intensity vector (i.e., one for each image to be projected),

in order to minimize the probability of a phase error, the set of frequencies can be chosen so that the set of frequencies maximize the distance $d_{pq}$ between vectors corresponding to distinct projector columns. For a given frequency set $\Omega$, an average error measure, which weights each pair of columns by its depth-error magnitude $|p - q|$ and penalizes pairs whose intensity vectors are close, can be calculated as:

$$E(\Omega) = \frac{1}{N^2} \sum_{p,q=1}^{N} \frac{|p-q|}{d_{pq}}$$

where $N$ is the total number of projector columns. For $d_{pq}$, the norm-2 Euclidean distance can be chosen in some embodiments. In some embodiments, the set of frequencies in the frequency band $[\omega_{min}, \omega_{max}]$ which minimizes $E(\Omega)$ can then be selected using the following constrained $F$-dimensional optimization problem:

$$\Omega^* = \arg\min_{\Omega} E(\Omega), \qquad \omega_f \in [\omega_m - \delta/2,\; \omega_m + \delta/2] \;\; \forall f$$

This optimization problem can be solved in any suitable manner. For example, in some embodiments, the simplex search method (e.g., as implemented in the MATLAB optimization toolbox) can be used to solve the optimization problem.
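By way of illustration, the following Python sketch sets up this constrained search using SciPy's Nelder-Mead simplex implementation in place of the MATLAB toolbox mentioned above. It is a sketch under stated assumptions, not the disclosed implementation: the column codes are built with unit amplitude and zero offset (three shifts of the first frequency, one of each other), the column count is kept small for speed, and passing bounds to Nelder-Mead assumes a recent SciPy release.

```python
import numpy as np
from scipy.optimize import minimize

def code_vectors(periods, n_columns):
    """(F+2)-dimensional intensity vector per projector column: three
    shifts of the first sinusoid plus one shift of each remaining one."""
    p = np.arange(1, n_columns + 1)[:, None]           # column indices
    phases = 2.0 * np.pi * p / np.asarray(periods)     # N x F phase matrix
    first = [np.cos(phases[:, 0] + s) for s in (0.0, 2*np.pi/3, 4*np.pi/3)]
    rest = [np.cos(phases[:, f]) for f in range(1, len(periods))]
    return np.stack(first + rest, axis=1)              # N x (F+2)

def energy(periods, n_columns=128):
    """E(Omega) as reconstructed above: |p - q| weighted by the inverse
    Euclidean distance between the columns' intensity vectors."""
    v = code_vectors(periods, n_columns)
    d = np.linalg.norm(v[:, None, :] - v[None, :, :], axis=2)
    idx = np.arange(n_columns)
    w = np.abs(idx[:, None] - idx[None, :]).astype(float)
    off_diag = ~np.eye(n_columns, dtype=bool)          # skip p == q terms
    return np.sum(w[off_diag] / np.maximum(d[off_diag], 1e-9)) / n_columns**2

# Band of periods [14.5, 17.5] pixels and F = 5, as in the text below.
F, lo, hi = 5, 14.5, 17.5
x0 = np.linspace(lo + 0.1, hi - 0.1, F)                # spread-out starting guess
result = minimize(energy, x0, method="Nelder-Mead", bounds=[(lo, hi)] * F)
print(np.sort(result.x))                               # optimized periods
```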
[0047] For the frequency band of [14.5, 17.5] pixels and F=5, the
above procedure can return a frequency set corresponding to periods
of 14.57, 16.09, 16.24, 16.47, and 16.60 pixels in some
embodiments.
[0048] Once process 400 has selected the illumination pattern
frequencies, process 400 can next select a first of these
frequencies at 406 in some embodiments. Any suitable frequency can
be selected and that frequency can be selected in any suitable
manner in some embodiments. For example, in some embodiments, the
lowest frequency, the highest frequency, or the frequency closest
to the mean frequency can be selected as the first of the
frequencies.
[0049] Next, at 408, process 400 can cause the projector to project an illumination pattern with the selected frequency in some
embodiments. For example, process 400 can cause projector 104 of
FIG. 1 to project illumination pattern 302 of FIG. 3 with a
frequency equal to the selected first frequency.
[0050] While the projector is projecting the illumination pattern
initiated at 408, process 400 can cause the camera to detect the
projected pattern as reflected off the scene as a detected image
and retrieve this image from the camera at 410 in some
embodiments.
[0051] Process 400 can next determine whether the projection just made
at 408 is the last projection at 412 and, if not, whether a
different frequency is to be used for the next projection at 414.
These determinations can be made on any suitable basis and in any
suitable manner. For example, process 400 can determine that
another projection is needed when all frequencies in the set of
frequencies determined at 404 have not yet been selected at either
406 or 416. As another example, process 400 can determine that
another projection is needed when two or more phase shifted
projections are specified for a given frequency, but each of those
projections has not yet been made at 408.
[0052] More particularly, for example, in some embodiments, for a given number of frequencies F, F+2 images can be projected and detected. Even more particularly, in some
embodiments, three images can be projected and detected for a first
frequency and one image can be projected and detected for each of
the remaining F-1 frequencies.
[0053] Thus, at 412, process 400 can determine if the last
projection was just made. If so, then process 400 can proceed to
418 to recover the phases of the projector pixels at each of the
projected frequencies as described further below. Otherwise,
process 400 can determine at 414 whether a different frequency is
to be used for the next projection. If so, process 400 can select
the next frequency at 416 and then loop back to 408. Otherwise,
process 400 can loop back to 408.
[0054] As mentioned above, at 418, process 400 can recover the phases of the projector pixels at each of the projected frequencies in some embodiments. Recovering the phases can be performed in any suitable manner in some embodiments. For example, based on the detected images retrieved at 410, process 400 can recover the phase values using the following equation:

$$\phi_f(p) = \begin{cases} \operatorname{acos}\left(\dfrac{U_{fact}(f+1)}{A(c)}\right) & \text{if } f = 1, \\[2ex] \operatorname{acos}\left(\dfrac{U_{fact}(f+2)}{A(c)}\right) & \text{if } 2 \le f \le F, \end{cases}$$
where:

[0055] $f$ is an identifier numbered 1 through $F$ of a frequency in set $\Omega$;

[0056] $p$ is the projector pixel which illuminates camera pixel $c$;

[0057] $A(c)$ is the amplitude at frequency $f$ of camera pixel $c$, encapsulates the scene bidirectional reflectance distribution function (BRDF), surface shading effects, and intensity fall-off from the projector, and can be represented as $A(c) = \sqrt{U_{fact}(2)^2 + U_{fact}(3)^2}$; and

$$U_{fact} = \begin{bmatrix} O(c) \\ A(c)\cos(\phi_1(p)) \\ A(c)\sin(\phi_1(p)) \\ A(c)\cos(\phi_2(p)) \\ \vdots \\ A(c)\cos(\phi_F(p)) \end{bmatrix}.$$
$U_{fact}$ can be computed by solving the linear system given in the following equation:

$$R_{micro} = M_{micro}\, U_{fact}$$

where:

[0058] $R_{micro}$ is the vector of recorded intensities;

[0059] $M_{micro}$ is a square matrix of size $F+2$, and is given as:

$$M_{micro} = \begin{bmatrix} 1 & 1 & 0 & 0 & \cdots & 0 \\ 1 & \cos(2\pi/3) & -\sin(2\pi/3) & 0 & \cdots & 0 \\ 1 & \cos(4\pi/3) & -\sin(4\pi/3) & 0 & \cdots & 0 \\ 1 & 0 & 0 & & & \\ \vdots & \vdots & \vdots & & \mathbf{I}_{F-1} & \\ 1 & 0 & 0 & & & \end{bmatrix},$$

[0060] in which $\mathbf{I}_{F-1}$ is an identity matrix of size $(F-1) \times (F-1)$; and

[0061] $O(c)$ is the offset term of camera pixel $c$, and encapsulates the contribution of ambient illumination.
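By way of illustration, the following Python sketch assembles $M_{micro}$ and recovers the amplitude, offset, and wrapped phases for a single camera pixel from its $F+2$ recorded intensities, following the equations as reconstructed above; the function names and the synthetic test values are hypothetical.

```python
import numpy as np

def m_micro(f):
    """Assemble the (F+2) x (F+2) matrix M_micro described above."""
    n = f + 2
    m = np.zeros((n, n))
    m[:, 0] = 1.0                                  # offset column for O(c)
    for i, shift in enumerate((0.0, 2*np.pi/3, 4*np.pi/3)):
        m[i, 1] = np.cos(shift)                    # coefficient of A*cos(phi_1)
        m[i, 2] = -np.sin(shift)                   # coefficient of A*sin(phi_1)
    m[3:, 3:] = np.eye(f - 1)                      # one image per extra frequency
    return m

def recover_phases(r_micro, f):
    """Solve M_micro * U_fact = R_micro for one camera pixel, then read
    off A(c), O(c), and the wrapped phases per the equations above."""
    u = np.linalg.solve(m_micro(f), np.asarray(r_micro, dtype=float))
    a = np.hypot(u[1], u[2])                       # A(c) = sqrt(U(2)^2 + U(3)^2)
    phi = np.empty(f)
    phi[0] = np.arccos(np.clip(u[1] / a, -1.0, 1.0))
    phi[1:] = np.arccos(np.clip(u[3:] / a, -1.0, 1.0))
    return phi, a, u[0]                            # phases, amplitude, offset O(c)

# Synthetic round trip: build R_micro from known unknowns and recover them.
F = 5
phi_true = np.array([0.6, 1.1, 2.0, 0.3, 2.8])     # in [0, pi], where acos is exact
u_true = np.concatenate(([0.2],                    # O(c)
                         [0.5 * np.cos(phi_true[0]), 0.5 * np.sin(phi_true[0])],
                         0.5 * np.cos(phi_true[1:])))
phi, a, o = recover_phases(m_micro(F) @ u_true, F)  # phi ~ phi_true, a ~ 0.5
```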
[0062] Once the phases have been recovered, phase unwrapping can be performed at 420 in some embodiments. Phase unwrapping can be performed using any suitable technique in some embodiments. For example, in some embodiments, the Gushov-Solodkin (G-S) algorithm (described in V. I. Gushov and Y. N. Solodkin, "Automatic processing of fringe patterns in integer interferometers," Optics and Lasers in Engineering, 14, 1991, which is hereby incorporated by reference herein in its entirety) can be used to combine several high frequency phases into a single low frequency phase. If the periods of the high-frequency sinusoids are pairwise co-prime (no common factors), then a low frequency sinusoid with a period equal to the product of the periods of all the high-frequency sinusoids can be simulated.
[0063] To implement the Gushov-Solodkin (G-S) algorithm, any suitable approach can be used in some embodiments. For example, in some embodiments, the phases at each frequency can first be converted into residual projector column numbers $p_f$ as follows:

$$p_f = \frac{\phi_f}{2\pi} \lambda_f \quad \forall f$$

For example, suppose $\lambda_f = 16$ pixels (period of the frequency $f$) and $\phi_f = \pi/4$ (phase of the frequency $f$). Then, the residual column number can be given as $p_f = 2$. Next, the final disambiguated column correspondence $p$ can be given as:

$$p = p_1 b_1 M_1 + \cdots + p_F b_F M_F,$$

where $M_f = (\lambda_1 \lambda_2 \cdots \lambda_F)/\lambda_f$, and the coefficients $b_f$ are computed by solving the congruence $b_f M_f \equiv 1 \pmod{\lambda_f}$. Such congruences can be solved with the Euclid algorithm. In some embodiments, the above procedure can also be used for non-integral residuals $p_f$ and periods $\lambda_f$ by making the following modification: first round the residuals for computing the unwrapped column number $p$, and then add back the fractional part to $p$.
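By way of illustration, the following Python sketch implements this unwrapping procedure; it uses Python's built-in modular inverse (`pow(m, -1, lam)`, available in Python 3.8+) rather than spelling out the Euclid algorithm, and the co-prime periods in the check are arbitrary.

```python
from functools import reduce

def unwrap_gs(residuals, periods):
    """Gushov-Solodkin unwrapping: combine residual column numbers p_f for
    pairwise co-prime periods into one column p, as described above.
    Non-integral residuals are rounded and the fractional part added back."""
    big_m = reduce(lambda a, b: a * b, periods)     # lambda_1 * ... * lambda_F
    rounded = [round(r) for r in residuals]
    frac = residuals[0] - rounded[0]                # fractional part to restore
    p = 0
    for p_f, lam in zip(rounded, periods):
        m_f = big_m // lam                          # M_f = (product of periods) / lambda_f
        b_f = pow(m_f, -1, lam)                     # solves b_f * M_f = 1 (mod lambda_f)
        p += p_f * b_f * m_f
    return p % big_m + frac

# Check with co-prime periods 15, 16, 17: column 100 leaves residuals
# (100 mod 15, 100 mod 16, 100 mod 17) = (10, 4, 15).
print(unwrap_gs([10.0, 4.0, 15.0], [15, 16, 17]))   # -> 100.0
```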
[0064] Once phase unwrapping has been completed, at 422, the shape(s) of objects in the scene can be calculated by determining correspondence between the phases of camera pixel columns $c$ and the phases of projector pixel columns $p$ and then by determining the three dimensional (3D) locations of points on the surface(s) $(S_x, S_y, S_z)$ of objects in the scene by triangulation. This shape data can then be stored in suitable memory and/or storage.
[0065] In accordance with some embodiments, triangulation can be
performed in any suitable manner. For example, in some embodiments,
triangulation can be performed as follows.
[0066] The 3D coordinates of the camera center and the projector center, $(C_{Cam1}, C_{Cam2}, C_{Cam3})$ and $(C_{Proj1}, C_{Proj2}, C_{Proj3})$, respectively, can be computed a priori by geometrically calibrating the projector and the camera as known in the art. Let the 3D coordinates of the camera pixel $c$ be $(V_{c1}, V_{c2}, V_{c3})$; these coordinates are known because the camera pixel coordinates are known.
[0067] The projector column $p$ and the projector center $(C_{Proj1}, C_{Proj2}, C_{Proj3})$ can be considered to define a unique plane in 3D space. Let that plane be called $P$, and its equation in 3D can be given by:

$$P_1 x + P_2 y + P_3 z + P_4 = 0,$$

where the coefficients $P_1$, $P_2$, $P_3$, and $P_4$ are known because the column coordinate $p$ is known.
[0068] Let the line passing through pixel $c$ and the camera center be called $L$. Note that the scene point $S$ lies at the intersection of the line $L$ and the plane $P$. Triangulation involves finding this intersection. The equation of the line $L$ in 3D can be written as:

$$L(t) = (C_{Cam1}, C_{Cam2}, C_{Cam3}) + t\left[(V_{c1}, V_{c2}, V_{c3}) - (C_{Cam1}, C_{Cam2}, C_{Cam3})\right]$$
[0069] The line $L$ is parameterized by a scalar parameter $t$. The goal is to find the value of $t$ so that the resulting point on the line also lies on the plane $P$. The value of $t$ is given as:

$$t = -\frac{P_1 C_{Cam1} + P_2 C_{Cam2} + P_3 C_{Cam3} + P_4}{P_1 (V_{c1} - C_{Cam1}) + P_2 (V_{c2} - C_{Cam2}) + P_3 (V_{c3} - C_{Cam3})}$$
[0070] Once $t$ is computed, the 3D coordinates of the point $S$ are given as:

$$S_x = C_{Cam1} + t (V_{c1} - C_{Cam1})$$
$$S_y = C_{Cam2} + t (V_{c2} - C_{Cam2})$$
$$S_z = C_{Cam3} + t (V_{c3} - C_{Cam3})$$
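By way of illustration, the following Python sketch carries out this ray-plane intersection for one camera pixel; the camera center, pixel coordinates, and plane coefficients in the example are arbitrary assumptions.

```python
import numpy as np

def triangulate(cam_center, pixel_point, plane):
    """Intersect the camera ray through pixel c with the projector-column
    plane P1*x + P2*y + P3*z + P4 = 0, per the equations above."""
    c = np.asarray(cam_center, dtype=float)        # (C_Cam1, C_Cam2, C_Cam3)
    v = np.asarray(pixel_point, dtype=float)       # (V_c1, V_c2, V_c3)
    p1, p2, p3, p4 = plane
    normal = np.array([p1, p2, p3])
    t = -(normal @ c + p4) / (normal @ (v - c))    # parameter t from above
    return c + t * (v - c)                         # scene point (S_x, S_y, S_z)

# Example: camera at the origin, ray through pixel (0.1, 0.2, 1.0),
# projector-column plane x = 0.5 (coefficients 1, 0, 0, -0.5).
print(triangulate((0.0, 0.0, 0.0), (0.1, 0.2, 1.0), (1.0, 0.0, 0.0, -0.5)))
# -> [0.5 1.  5. ]
```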
[0071] Finally, process 400 can end at 424.
[0072] It should be understood that at least some of the above
described steps of process 400 of FIG. 4 can be executed or
performed in any order or sequence not limited to the order and
sequence shown and described in the figure. Also, some of the above
steps of process 400 of FIG. 4 can be executed or performed
substantially simultaneously where appropriate or in parallel to
reduce latency and processing times.
[0073] In some embodiments, any suitable computer readable media
can be used for storing instructions for performing the functions
and/or processes described herein. For example, in some
embodiments, computer readable media can be transitory or
non-transitory. For example, non-transitory computer readable media
can include media such as magnetic media (such as hard disks,
floppy disks, etc.), optical media (such as compact discs, digital
video discs, Blu-ray discs, etc.), semiconductor media (such as
flash memory, electrically programmable read only memory (EPROM),
electrically erasable programmable read only memory (EEPROM),
etc.), any suitable media that is not fleeting or devoid of any
semblance of permanence during transmission, and/or any suitable
tangible media. As another example, transitory computer readable
media can include signals on networks, in wires, conductors,
optical fibers, circuits, any suitable media that is fleeting and
devoid of any semblance of permanence during transmission, and/or
any suitable intangible media.
[0074] Although the invention has been described and illustrated in
the foregoing illustrative embodiments, it is understood that the
present disclosure has been made only by way of example, and that
numerous changes in the details of implementation of the invention
can be made without departing from the spirit and scope of the
invention, which is limited only by the claims that follow.
Features of the disclosed embodiments can be combined and
rearranged in various ways.
* * * * *