U.S. patent application number 14/671313 was filed with the patent office on 2015-03-27 and published on 2015-10-01 for recording animation of rigid objects using a single 3D scanner.
This patent application is currently assigned to KNOCKOUT CONCEPTS, LLC. The applicants listed for this patent are Jacob Abraham Kuttothara, Stephen Brooks Myers, Steven Donald Paddock, Andrew Slatton, John Moore Wathen. The invention is credited to Jacob Abraham Kuttothara, Stephen Brooks Myers, Steven Donald Paddock, Andrew Slatton, John Moore Wathen.
Application Number | 14/671313
Publication Number | 20150279075
Family ID | 54189850
Publication Date | 2015-10-01
United States Patent Application 20150279075
Kind Code: A1
Myers; Stephen Brooks; et al.
October 1, 2015
RECORDING ANIMATION OF RIGID OBJECTS USING A SINGLE 3D SCANNER
Abstract
This application teaches a method or methods related to
recording animation. Such a method may include determining a
reference model of an object by separating a 3D image of the object
from a 3D image of its environment. The method may also include
analyzing the reference model using one or more feature detection
and localization algorithms. The object may then be recorded in
motion, and the recording may be analyzed using feature detection
and localization algorithms. Features of the recording may be
matched to features of the reference model, wherein a match between
the reference model and a frame of the recording comprises a pose
of the object. A video animation may be created by recording a time
series of poses of the object.
Inventors: Myers; Stephen Brooks (Shreve, OH); Kuttothara; Jacob Abraham (Loudonville, OH); Paddock; Steven Donald (Richfield, OH); Wathen; John Moore (Akron, OH); Slatton; Andrew (Columbus, OH)
Applicant:
Name | City | State | Country
Myers; Stephen Brooks | Shreve | OH | US
Kuttothara; Jacob Abraham | Loudonville | OH | US
Paddock; Steven Donald | Richfield | OH | US
Wathen; John Moore | Akron | OH | US
Slatton; Andrew | Columbus | OH | US
Assignee: KNOCKOUT CONCEPTS, LLC (Columbus, OH)
Family ID: 54189850
Appl. No.: 14/671313
Filed: March 27, 2015
Related U.S. Patent Documents
Application Number | Filing Date | Patent Number
61971036 | Mar 27, 2014 |
Current U.S. Class: 345/474
Current CPC Class: G01B 11/26 20130101; G06K 9/00201 20130101; G06K 9/4604 20130101; G06T 13/20 20130101; G06T 15/20 20130101; G06T 2207/10016 20130101; G06K 9/036 20130101; G06K 2209/40 20130101; G06T 19/20 20130101; G06T 2207/10028 20130101; G06T 2207/30168 20130101; G06F 17/15 20130101; G06T 7/0002 20130101; G06T 17/00 20130101; G06T 17/10 20130101
International Class: G06T 13/20 20060101 G06T013/20; G06T 7/00 20060101 G06T007/00; G06T 7/20 20060101 G06T007/20
Claims
1. A method for recording animation comprising the steps of:
determining a reference model of an object by separating a
three-dimensional model of the object from its environment in a 3D
reconstruction of a static scene; analyzing the reference model
using a feature detection and localization algorithm; recording
movement of the object; analyzing the recording using feature
detection and localization algorithms; matching features of the
recording to features of the reference model, wherein a match
between the reference model and a frame of the recording comprises
a pose of the object; and recording a time series of poses of the
object, the time series comprising an animation.
2. The method of claim 1 further comprising the step of saving the
reference model in association with the animation on a computer
readable medium.
3. The method of claim 1, wherein data for determining the
reference model is obtained with a three-dimensional scanning
device.
4. The method of claim 3, wherein the step of separating the
three-dimensional image of the object from the three-dimensional
image of the environment of the object is conducted by the
three-dimensional scanning device.
5. The method of claim 3, wherein the step of analyzing the
reference model is conducted by the three-dimensional scanning
device.
6. The method of claim 3, wherein the data for determining the
reference model of the object, and for recording movement of the
object, are obtained with the same three-dimensional scanning
device.
7. The method of claim 1, wherein the feature detection and
localization algorithm for analyzing the reference model is
selected from one or more of RANSAC, iterative closest point, a
least squares method, a Newtonian method, a quasi-Newtonian method,
an expectation-maximization method, detection of principal
curvatures, or detection of distance to a medial surface.
8. The method of claim 1, wherein the feature detection and
localization algorithm for analyzing the recording is selected from
one or more of RANSAC, iterative closest point, a least squares
method, a Newtonian method, a quasi-Newtonian method, an
expectation-maximization method, detection of principal curvatures,
or detection of distance to a medial surface.
9. The method of claim 1, wherein a quantity of digital
computations of a microprocessor is reduced by applying a Kalman
filter to the step of analyzing the recording using feature
detection and localization algorithms.
10. A method for recording animation comprising the steps of:
determining a reference model of an object by separating a
three-dimensional model of the object from its environment in a
3D reconstruction of a static scene; analyzing the reference model
using a feature detection and localization algorithm selected from
one or more of RANSAC, iterative closest point, a least squares
method, a Newtonian method, a quasi-Newtonian method, an
expectation-maximization method, detection of principal curvatures,
or detection of distance to a medial surface; recording movement of
the object; analyzing the recording using feature detection and
localization algorithms selected from one or more of RANSAC,
iterative closest point, a least squares method, a Newtonian
method, a quasi-Newtonian method, an expectation-maximization
method, detection of principal curvatures, or detection of distance
to a medial surface, wherein a quantity of digital computations of
a microprocessor is reduced by applying a Kalman filter; matching
features of the recording to features of the reference model,
wherein a match between the reference model and a frame of the
recording comprises a pose of the object; and recording a time
series of poses of the object, the time series comprising an
animation.
11. A method for recording animation comprising the steps of:
determining a reference model of an object by separating a
three-dimensional model of the object from its environment in a 3D
reconstruction of a static scene; analyzing the reference model
using a feature detection and localization algorithm selected from
one or more of RANSAC, iterative closest point, a least squares
method, a Newtonian method, a quasi-Newtonian method, or an
expectation-maximization method, detection of principal curvatures,
or detection of distance to a medial surface; recording movement of
the object; analyzing the recording using feature detection and
localization algorithms selected from one or more of RANSAC,
iterative closest point, a least squares method, a Newtonian
method, a quasi-Newtonian method, an expectation-maximization
method, detection of principal curvatures, or detection of distance
to a medial surface, wherein a quantity of digital computations of
a microprocessor is reduced by applying a Kalman filter; matching
features of the recording to features of the reference model,
wherein a match between the reference model and a frame of the
recording comprises a pose of the object; and recording a time
series of poses of the object, the time series comprising an
animation; wherein the step of separating the three-dimensional
image of the object from the three-dimensional image of the
environment of the object is conducted by the three-dimensional
scanning device, and wherein the data for determining the reference
model of the object, and for recording movement of the object, are
obtained with the same three-dimensional scanning device.
12. The method of claim 11, wherein the step of separating the
three-dimensional image of the object from the three-dimensional
image of the environment of the object is conducted by the
three-dimensional scanning device.
13. The method of claim 12, wherein the step of analyzing the
reference model is conducted by the three-dimensional scanning
device.
Description
I. BACKGROUND OF THE INVENTION
[0001] A. Field of Invention
[0002] Some embodiments may generally relate to the field of
extracting elements of 3D images in motion.
[0003] B. Description of the Related Art
[0004] Various video recording methodologies are known in the art
as well as various methods of computer analysis of video. However,
current recording analysis technologies tend to confine users to
merely recognizing features in image data. Furthermore, objects in
recorded digital video cannot be manipulated as in the manner of a
3D CAD drawing. What is missing is methodology for separating an
object from its background in a 3D reconstructed model of a static
scene, then using video of the same object in motion to obtain
further structural detail of the object, and creating a 3D model
object that can be reoriented, manipulated, and moved independent
of the image or video from which it was created.
[0005] Some embodiments of the present invention may provide one or
more benefits or advantages over the prior art.
II. SUMMARY OF THE INVENTION
[0006] Some embodiments may relate to a method for recording
animation comprising the steps of: determining a reference model of
an object by separating a three-dimensional model of the object
from its environment in a 3D reconstruction of a static scene;
analyzing the reference model using a feature detection and
localization algorithm; recording movement of the object; analyzing
the recording using feature detection and localization algorithms;
matching features of the recording to features of the reference
model, wherein a match between the reference model and a frame of
the recording comprises a pose of the object; and recording a time
series of poses of the object, the time series comprising an
animation.
[0007] Embodiments may further comprise the step of saving the
reference model in association with the animation on a computer
readable medium.
[0008] According to some embodiments data for determining the
reference model is obtained with a three-dimensional scanning
device.
[0009] According to some embodiments the step of separating the
three-dimensional model of the object from its environment is
conducted by the three-dimensional scanning device.
[0010] According to some embodiments the step of analyzing the
reference model is conducted by the three-dimensional scanning
device.
[0011] According to some embodiments the data for determining the
reference model of the object, and for recording movement of the
object, are obtained with the same three-dimensional scanning
device.
[0012] According to some embodiments the feature detection and
localization algorithm for analyzing the reference model is
selected from one or more of RANSAC, iterative closest point, a
least squares method, a Newtonian method, a quasi-Newtonian method,
an expectation-maximization method, detection of principal
curvatures, or detection of distance to a medial surface.
[0013] According to some embodiments the feature detection and
localization algorithm for analyzing the recording is selected from
one or more of RANSAC, iterative closest point, a least squares
method, a Newtonian method, a quasi-Newtonian method, an
expectation-maximization method, detection of principal curvatures,
or detection of distance to a medial surface.
[0014] According to some embodiments a quantity of digital
computations of a microprocessor is reduced by applying a Kalman
filter to the step of analyzing the recording using feature
detection and localization algorithms.
[0015] Embodiments may also relate to a method for recording
animation comprising the steps of: determining a reference model of
an object by separating a three-dimensional reconstruction of the
object from its environment in a 3D reconstruction of a static
scene; analyzing the reference model using a feature detection and
localization algorithm selected from one or more of RANSAC,
iterative closest point, a least squares method, a Newtonian
method, a quasi-Newtonian method, an expectation-maximization
method, detection of principal curvatures, or detection of distance
to a medial surface; recording movement of the object; analyzing
the recording using feature detection and localization algorithms
selected from one or more of RANSAC, iterative closest point, a
least squares method, a Newtonian method, a quasi-Newtonian method,
an expectation-maximization method, detection of principal
curvatures, or detection of distance to a medial surface, wherein a
quantity of digital computations of a microprocessor is reduced by
applying a Kalman filter; matching features of the recording to
features of the reference model, wherein a match between the
reference model and a frame of the recording comprises a pose of
the object; and recording a time series of poses of the object, the
time series comprising an animation.
[0016] Embodiments may also relate to a method for recording
animation comprising the steps of: determining a reference model of
an object by separating a three-dimensional reconstruction of the
object from its environment in a 3D reconstruction of a static
scene; analyzing the reference model using a feature detection and
localization algorithm selected from one or more of RANSAC,
iterative closest point, a least squares method, a Newtonian
method, a quasi-Newtonian method, an expectation-maximization
method, detection of principal curvatures, or detection of distance
to a medial surface; recording movement of the object; analyzing
the recording using feature detection and localization algorithms
selected from one or more of RANSAC, iterative closest point, a
least squares method, a Newtonian method, a quasi-Newtonian method,
an expectation-maximization method, detection of principal
curvatures, or detection of distance to a medial surface, wherein a
quantity of digital computations of a microprocessor is reduced by
applying a Kalman filter; matching features of the recording to
features of the reference model, wherein a match between the
reference model and a frame of the recording comprises a pose of
the object; and recording a time series of poses of the object, the
time series comprising an animation; wherein the step of separating
the three-dimensional image of the object from the
three-dimensional image of the environment of the object is
conducted by the three-dimensional scanning device, and wherein the
data for determining the reference model of the object, and for
recording movement of the object, are obtained with the same
three-dimensional scanning device.
[0017] Other benefits and advantages will become apparent to those
skilled in the art to which it pertains upon reading and
understanding of the following detailed specification.
III. BRIEF DESCRIPTION OF THE DRAWINGS
[0018] The invention may take physical form in certain parts and
arrangement of parts, embodiments of which will be described in
detail in this specification and illustrated in the accompanying
drawings which form a part hereof and wherein:
[0019] FIG. 1 is a process according to an embodiment of the
invention;
[0020] FIG. 2 illustrates capturing 3D reconstructed model of a
static object according to one embodiment;
[0021] FIG. 3 illustrates separating an element of a 3D model from
its background; and
[0022] FIG. 4 illustrates obtaining additional detail of a scanned
and separated object by recording it in motion.
IV. DETAILED DESCRIPTION OF THE INVENTION
[0023] A method for recording animation of a three-dimensional real
world object includes separating a 3D model of the object from a 3D
model of its surroundings. Many known 3D scanners and cameras are
capable of obtaining the data necessary for methods according to
embodiments of this invention. This model of the 3D object,
separated from the model of its environment, may be used as
a reference model. The reference model may be further analyzed
using a feature detection and localization algorithm to identify
various features of the reference model that may be used for
comparison with live feed from the 3D scanning device. Movement,
manually induced or otherwise, of the object may be recorded using
the 3D scanning device. Once again, the features of the recording of
the object in motion may be analyzed utilizing similar feature
detection and localization algorithms. The features of the
recording can be compared with the features of the reference model,
and when matches are found said matches may comprise poses of the
object for rendering an animation. Finally, the poses may be
recombined in any order to formulate an animation of the object.
The combination of a time series of poses arranged in any order and
an arbitrary background allows one to create animations of the
object that differ from the motion observed in the previously
recorded video. As used herein, the term pose carries its
generally accepted meaning in the 3D imaging arts.
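The statement above that a match between the reference model and a frame comprises a pose of the object can be made concrete with a least-squares rigid fit, one instance of the "least squares method" the specification names. The following sketch uses the classical Kabsch (SVD-based) solution over already-matched feature points; the point arrays and the function name are illustrative, not drawn from the patent.

```python
import numpy as np

def fit_rigid_pose(ref_pts, frame_pts):
    """Least-squares rigid transform (R, t) mapping ref_pts onto frame_pts.

    Classical Kabsch/Procrustes solution: both inputs are (N, 3) arrays of
    matched feature locations; the returned pose satisfies
    frame_pts ~= ref_pts @ R.T + t.
    """
    ref_c = ref_pts.mean(axis=0)
    frm_c = frame_pts.mean(axis=0)
    H = (ref_pts - ref_c).T @ (frame_pts - frm_c)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = frm_c - R @ ref_c
    return R, t

# Recover a known 90-degree rotation about z plus a translation from
# synthetic matched points.
ref = np.array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.], [1., 1., 1.]])
R_true = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
t_true = np.array([0.5, -0.2, 0.3])
frame = ref @ R_true.T + t_true
R, t = fit_rigid_pose(ref, frame)
```

Repeating this fit for each recorded frame yields the time series of poses that the method records as an animation.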
[0024] Referring now to the drawings wherein the showings are for
purposes of illustrating embodiments of the invention only and not
for purposes of limiting the same, FIG. 1 depicts a flow diagram
100 of an illustrative embodiment for recording animation of a real
world three dimensional object. In a first step (not shown) 3D
model data of an object may be captured by any arbitrary 3D digital
imaging device and/or may be retrieved from storage in a database.
A reference model of the object may be obtained by separating the
object from its environment 110 according to known mathematical
methods. In one embodiment, the act of separating the model of the
object from its environment may be achieved using a 3D scanning
device configured with such capabilities; however, it is
contemplated that any 3D digital scanning device may be used to
carry out methods taught herein.
[0025] The reference model may be analyzed using feature detection and
localization algorithms 112 in order to enable later comparison of
the features and related data with live feed from the scanning
device. The feature detection and localization algorithm used for
analyzing the reference model may be chosen from many processes and
algorithms now known or developed in the future. Some such feature
detection and localization algorithms include RANSAC (Random Sample
Consensus), iterative closest point, least squares methods,
Newtonian methods, quasi-Newtonian methods,
expectation-maximization methods, detection of principal
curvatures, or detection of distance to a medial surface. The
methodology and corresponding algorithms of all of these processes
are known in the art and incorporated by reference herein. In an
illustrative embodiment, during the step of analyzing the recording
using a feature detection and localization algorithm, the quantity
of digital computations of a microprocessor may be reduced by
applying a Kalman filter. In this context a Kalman filter allows
embodiments to accurately predict the next position and/or
orientation of the object which enables embodiments to apply
feature detection calculations to smaller regions of the 3D data.
Kalman filter methodology is known in the art and is incorporated
by reference herein.
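One way to read the Kalman-filter remark above is as a constant-velocity predictor for the object's centroid: the predicted position centers the search window, so feature detection can examine a smaller region of the next frame. This is a minimal sketch under that assumption; the state model, frame rate, and noise constants are illustrative, not taken from the patent.

```python
import numpy as np

# Constant-velocity Kalman filter over the object's 3D centroid.
# State x = [px, py, pz, vx, vy, vz]; measurements are centroid positions.
dt = 1.0 / 30.0                       # assumed frame interval (30 fps)
F = np.eye(6)
F[:3, 3:] = dt * np.eye(3)            # position += velocity * dt
H = np.hstack([np.eye(3), np.zeros((3, 3))])  # we observe position only

x = np.zeros(6)                       # state estimate
P = np.eye(6)                         # state covariance
Q = 1e-4 * np.eye(6)                  # process noise (tuning constant)
Rm = 1e-2 * np.eye(3)                 # measurement noise (tuning constant)

def step(x, P, z):
    # Predict, then correct with the measured centroid z; the predicted
    # position is where feature detection should look first.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    S = H @ P_pred @ H.T + Rm
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(6) - K @ H) @ P_pred
    return x_new, P_new, x_pred[:3]   # predicted search-window center

# Feed centroids moving at constant speed along +x; the filter's estimate
# converges toward the measurements.
for i in range(20):
    z = np.array([0.1 * i * dt, 0.0, 0.0])
    x, P, window_center = step(x, P, z)
```

Restricting the detection algorithms to a window around the predicted position is what reduces the quantity of digital computations performed by the microprocessor.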
[0026] Movement of the real world three-dimensional object may be
manually induced and recorded using a 3D scanning device 114.
Features of the object in the recording may be analyzed using
similar feature detection and localization algorithms 116. The
feature detection and localization algorithm used for analyzing the
recording may be chosen from many processes and algorithms now
known or developed in the future. Some such feature detection and
localization algorithms include RANSAC (Random Sample Consensus),
iterative closest point, least squares methods, Newtonian methods,
quasi-Newtonian methods, expectation-maximization methods,
detection of principal curvatures, or detection of distance to a
medial surface. The methodology and corresponding algorithms of all
of these processes are incorporated by reference herein. In an
illustrative embodiment, during the step of analyzing the recording
using a feature detection and localization algorithm, the quantity
of digital computations of a microprocessor may be reduced by
applying a Kalman filter.
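The "detection of principal curvatures" option named above can be approximated on raw point-cloud data by a curvature-like surface-variation measure: the smallest eigenvalue fraction of each point's local neighborhood covariance. This is a common point-cloud surrogate offered only as a sketch, not the patent's specific method; the neighborhood size and the brute-force distance computation are illustrative.

```python
import numpy as np

def surface_variation(points, k=8):
    """Curvature-like feature per point: lambda_min / sum(lambdas) of the
    covariance of each point's k nearest neighbors. Near 0 on flat regions,
    larger near edges and bumps, so thresholding it picks out features."""
    n = len(points)
    feats = np.empty(n)
    # Brute-force all-pairs squared distances (fine for a small sketch).
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    for i in range(n):
        nbrs = points[np.argsort(d2[i])[:k]]
        cov = np.cov(nbrs.T)
        w = np.linalg.eigvalsh(cov)        # ascending eigenvalues
        s = w.sum()
        feats[i] = w[0] / s if s > 0 else 0.0
    return feats

# A flat 5x5 grid with one point lifted off the plane: planar corners score
# near zero, while the lifted center point scores high.
g = np.stack(np.meshgrid(np.arange(5.), np.arange(5.)), -1).reshape(-1, 2)
pts = np.hstack([g, np.zeros((25, 1))])
pts[12, 2] = 1.0                           # bump in the middle of the grid
f = surface_variation(pts)
```

Features scored this way on the recording can then be matched against the same features computed on the reference model.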
[0027] Once the features of the recording are obtained, such
features may be compared with the features of the reference model
118. A match between the features of the recording and the features
of the reference model comprises a pose of the object. The feature
comparison may be continuously made until multiple matches result
in multiple poses 120 being obtained. In an alternate embodiment,
the matching of the features to obtain poses is done in real time
while the recording is being made. A time series of the various
poses may be recorded in any order comprising an animation of the
object 122. In an illustrative embodiment, the reference model
initially obtained may be saved in association with the animation.
This may be saved on any computer readable medium.
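The pose time series described above can be represented very simply: each pose is a rotation and translation obtained from a frame match, and the animation is the recorded sequence, which may be replayed in any order against any background. In this sketch the poses are synthesized directly; in the method they would come from feature matching against each recorded frame, and all names are illustrative.

```python
import numpy as np

# A toy reference model: three points of a separated rigid object.
reference = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])

def make_pose(angle_z, offset):
    # A pose is a rigid transform: rotation about z plus a translation.
    c, s = np.cos(angle_z), np.sin(angle_z)
    R = np.array([[c, -s, 0.], [s, c, 0.], [0., 0., 1.]])
    return R, np.asarray(offset, dtype=float)

# Record a time series of poses (synthesized here).
animation = [make_pose(0.1 * i, [0.02 * i, 0.0, 0.0]) for i in range(5)]

def render_frame(model, pose):
    R, t = pose
    return model @ R.T + t            # model posed into world coordinates

frames = [render_frame(reference, p) for p in animation]
# Replaying the pose series in reverse (or any order) animates the same
# separated object with a motion that differs from the recorded video.
backwards = [render_frame(reference, p) for p in reversed(animation)]
```

Because the reference model is saved in association with the pose series, only the compact list of transforms needs to be stored to reproduce the animation.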
[0028] FIG. 2 depicts an illustrative embodiment 200 wherein a 3D
scanner 210 is used to obtain an image 216 of a real world object
212. The scanner 210 may collect images of the static object 212
from all directions and orientations 214 to ensure a complete
model 216 of the object 212. A reconstruction of this image data
may be used to obtain a reference model of the real world object
212. In another embodiment, images of the static object 212 may be
collected from less than all vantage points, and missing data may
be filled in by correlating areas of missing data to areas of the
object in a later-collected video image showing the object in
motion.
[0029] FIG. 3 depicts an illustrative embodiment 300 wherein the
model 216 of the object is obtained on a 3D data processing device
314 for further processing. After the model is captured 216, a data
processing device may be used to separate the model of the object
312 from the model of its environment 310. This separation of the
object from its environment may then be used as a reference model
of the object, or may be used to produce a reference model of the
object through further data processing.
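The separation step illustrated in FIG. 3 can be sketched with RANSAC, one of the algorithms the specification names: fit the dominant plane in the scene cloud (e.g., a tabletop or floor) and treat the off-plane points as the object. Real separation pipelines are more involved; the synthetic scene, threshold, and function name below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def separate_from_ground(points, iters=200, thresh=0.02):
    """RANSAC-fit the dominant plane in a scene cloud and return the
    off-plane points as the separated object."""
    best_inliers = None
    for _ in range(iters):
        # Hypothesize a plane from three random points.
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                   # degenerate (collinear) sample
        n = n / norm
        dist = np.abs((points - p0) @ n)
        inliers = dist < thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return points[~best_inliers]       # object = everything off the plane

# Synthetic scene: a dense floor at z = 0 plus a small cube of object
# points sitting 0.3-0.5 above it.
floor = np.c_[rng.uniform(-1, 1, (400, 2)), np.zeros(400)]
cube = rng.uniform(0.3, 0.5, (60, 3))
scene = np.vstack([floor, cube])
obj = separate_from_ground(scene)
```

The retained off-plane points serve as the separated model 312, which may then be refined into the reference model by further processing.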
[0030] FIG. 4 depicts an illustrative embodiment 400 wherein the
movement of the real world object 410 is recorded 412 using a 3D
scanning device 210. The features of the recording 412 are analyzed
using feature detection and localization algorithms and the
features of the recording are compared with the features of the
reference model. A match between the features of the recording 412
and the features of the reference model comprises a pose of the
three-dimensional object. A continuous matching of the features
results in multiple poses and a time series of the various poses
may be recorded comprising an animation of the object. In one
embodiment, the reference model may be saved in association with
the animation on a computer readable medium, device storage or
server (including cloud server).
[0031] It will be apparent to those skilled in the art that the
above methods and apparatuses may be changed or modified without
departing from the general scope of the invention. The invention is
intended to include all such modifications and alterations insofar
as they come within the scope of the appended claims or the
equivalents thereof.
[0032] Having thus described the invention, it is now claimed:
* * * * *