Active Point Cloud Modeling

Myers; Stephen Brooks; et al.

Patent Application Summary

U.S. patent application number 14/671749 was filed with the patent office on 2015-03-27 for active point cloud modeling. This patent application is currently assigned to KNOCKOUT CONCEPTS, LLC. The applicants listed for this patent are Jacob Abraham Kuttothara, Stephen Brooks Myers, Steven Donald Paddock, Andrew Slatton, and John Moore Wathen. The invention is credited to Jacob Abraham Kuttothara, Stephen Brooks Myers, Steven Donald Paddock, Andrew Slatton, and John Moore Wathen.

Publication Number: 20150279121
Application Number: 14/671749
Document ID: /
Family ID: 54189850
Filed Date: 2015-03-27

United States Patent Application 20150279121
Kind Code A1
Myers; Stephen Brooks; et al. October 1, 2015

Active Point Cloud Modeling

Abstract

A three-dimensional scan editing method can include providing a set of three-dimensional model data defining a three-dimensional subject, and displaying the data as a reconstructed image. A user may select one or more voxels and change their state to over-writable. The state change may be reflected by a visual cue such as color or transparency. An image capture device may be provided and its field of view may be co-registered with the selected voxels. The user may then acquire new 3D model data with the device and overwrite the selected voxels with the new data.


Inventors: Myers; Stephen Brooks; (Shreve, OH) ; Kuttothara; Jacob Abraham; (Loudonville, OH) ; Paddock; Steven Donald; (Richfield, OH) ; Wathen; John Moore; (Akron, OH) ; Slatton; Andrew; (Columbus, OH)
Applicant:

Name                       City         State  Country  Type
Myers; Stephen Brooks      Shreve       OH     US
Kuttothara; Jacob Abraham  Loudonville  OH     US
Paddock; Steven Donald     Richfield    OH     US
Wathen; John Moore         Akron        OH     US
Slatton; Andrew            Columbus     OH     US

Assignee: KNOCKOUT CONCEPTS, LLC (Columbus, OH)

Family ID: 54189850
Appl. No.: 14/671749
Filed: March 27, 2015

Related U.S. Patent Documents

Application Number Filing Date Patent Number
61971036 Mar 27, 2014

Current U.S. Class: 345/424
Current CPC Class: G06T 17/00 20130101; G06T 13/20 20130101; G06T 2207/10016 20130101; G01B 11/26 20130101; G06K 9/00201 20130101; G06K 2209/40 20130101; G06T 17/10 20130101; G06K 9/4604 20130101; G06T 2207/30168 20130101; G06T 2207/10028 20130101; G06T 7/0002 20130101; G06K 9/036 20130101; G06T 19/20 20130101; G06F 17/15 20130101; G06T 15/20 20130101
International Class: G06T 19/20 20060101 G06T019/20; G06T 17/00 20060101 G06T017/00

Claims



1. A three-dimensional scan editing method comprising the steps of: providing a set of three-dimensional model data defining a three-dimensional subject, and displaying the data as a reconstructed 3D model; providing a scanning device adapted to acquire three-dimensional model data; selecting one or more voxels of the set of three-dimensional model data and changing the state of the selected voxels to over-writable; providing a visual cue indicating that the selected one or more voxels are over-writable; co-registering the scanning device's view of the three-dimensional subject with selected voxels of the three-dimensional model data; using the scanning device to acquire new three-dimensional model data of the three-dimensional subject; and overwriting the selected voxels with the new three-dimensional model data.

2. The method of claim 1, further comprising the step of specifying a data acquisition quality parameter of the scanning device, wherein the quality parameter modifies the quality of the new three-dimensional model data.

3. The method of claim 2, wherein the data acquisition quality parameter is selected from image resolution, optical filtering, background subtraction, color data, or noise reduction.

4. The method of claim 1, wherein the visual cue is selected from one or more of color, transparency, highlighting, or outlining.

5. The method of claim 1, wherein the step of co-registering further comprises adjusting the field of view of the scanning device to match the selected voxels.

6. The method of claim 5, wherein the step of co-registering further comprises a method selected from one or more of point-cloud registration, RGB image registration, intensity image registration, or iterative closest point.

7. The method of claim 5, wherein the step of co-registering further comprises assuming that the field of view of the scanning device matches the selected voxels.

8. The method of claim 1, wherein the step of co-registering further comprises reorienting a three-dimensional model of the subject to match the field of view of the three-dimensional scanning device.

9. The method of claim 8, wherein the step of co-registering further comprises a method selected from one or more of point-cloud registration, RGB image registration, intensity image registration, or iterative closest point.

10. The method of claim 8, wherein the step of co-registering further comprises assuming that the field of view of the scanning device matches the selected voxels.

11. The method of claim 1, wherein the three-dimensional model data comprises one or more of an isosurface, a signed distance function, a truncated signed distance function, a surfel, a mesh, a point cloud, or a continuous function.

12. The method of claim 11, wherein the set of three-dimensional model data defining a three-dimensional subject is displayed on a video display device in the form of a three-dimensional model.

13. The method of claim 12, wherein the three-dimensional model may be reoriented according to gesture input or touchscreen input.

14. A three-dimensional scan editing method comprising the steps of: providing a set of three-dimensional model data defining a three-dimensional subject, and displaying the data as a reconstructed 3D model, wherein the three-dimensional model data comprises one or more of an isosurface, a signed distance function, a truncated signed distance function, a surfel, a mesh, a point cloud, or a continuous function; providing a scanning device adapted to acquire three-dimensional model data; selecting one or more voxels of the set of three-dimensional model data and changing the state of the selected voxels to over-writable; providing a visual cue indicating that the selected one or more voxels are over-writable, wherein the visual cue is selected from one or more of color, transparency, highlighting, or outlining; co-registering the scanning device's view of the three-dimensional subject with selected voxels of the three-dimensional model data, wherein the step of co-registering further comprises adjusting the field of view of the scanning device to match the selected voxels, and wherein the step of co-registering further comprises a method selected from one or more of point-cloud registration, RGB image registration, intensity image registration, or iterative closest point; specifying a data acquisition quality parameter of the scanning device selected from image resolution, optical filtering, background subtraction, color data, or noise reduction; using the scanning device to acquire new three-dimensional model data of the three-dimensional subject, wherein the quality parameter modifies the quality of the new three-dimensional model data; and overwriting the selected voxels with the new three-dimensional model data.
Description



I. BACKGROUND OF THE INVENTION

[0001] A. Field of Invention

[0002] Embodiments may generally relate to the field of modifying selected portions of a three-dimensional scan.

[0003] B. Description of the Related Art

[0004] Three-dimensional model capture and editing methods and devices are known in the imaging arts. For example, it is known to capture visible-spectrum or infrared light, other forms of electromagnetic radiation, or even sound waves with an imaging device, and to convert the data to point clouds, voxels, and/or other convenient data formats. It is also known to adjust data acquisition parameters so as to capture an image of suitable resolution, or an image that otherwise has suitable characteristics. However, some three-dimensional models are generally suitable yet include areas where the image quality must be improved. Thus, there is a need in the art for systems and methods capable of editing portions of three-dimensional model data without overwriting the entire image.

[0005] Some embodiments of the present invention may provide one or more benefits or advantages over the prior art.

II. SUMMARY OF THE INVENTION

[0006] Some embodiments may relate to a three-dimensional scan editing method comprising the steps of: providing a set of three-dimensional model data defining a three-dimensional subject, and displaying the data as a reconstructed 3D model; providing a scanning device adapted to acquire three-dimensional model data; selecting one or more voxels of the set of three-dimensional model data and changing the state of the selected voxels to over-writable; providing a visual cue indicating that the selected one or more voxels are over-writable; co-registering the scanning device's view of the three-dimensional subject with selected voxels of the three-dimensional model data; using the scanning device to acquire new three-dimensional model data of the three-dimensional subject; and over-writing the selected voxels with the new three-dimensional model data.

[0007] Embodiments may further comprise the step of specifying a data acquisition quality parameter of the scanning device, wherein the quality parameter modifies the quality of the new three-dimensional model data.

[0008] In some embodiments the data acquisition quality parameter is selected from image resolution, optical filtering, background subtraction, color data, or noise reduction.

[0009] In some embodiments the visual cue is selected from one or more of color, transparency, highlighting, or outlining.

[0010] In some embodiments the step of co-registering further comprises adjusting the field of view of the scanning device to match the selected voxels.

[0011] In some embodiments the step of co-registering further comprises a method selected from one or more of point-cloud registration, RGB image registration, intensity image registration, or iterative closest point.

[0012] In some embodiments the step of co-registering further comprises assuming that the field of view of the scanning device matches the selected voxels.

[0013] In some embodiments the step of co-registering further comprises reorienting a three-dimensional model of the subject to match the field of view of the three-dimensional scanning device.

[0014] In some embodiments the three-dimensional model data comprises one or more of an isosurface, a signed distance function, a truncated signed distance function, a surfel, a mesh, a point cloud, or a continuous function.

[0015] In some embodiments the set of three-dimensional model data defining a three-dimensional subject is displayed on a video display device in the form of a three-dimensional model.

[0016] In some embodiments the three-dimensional model may be reoriented according to gesture input or touchscreen input.

[0017] Other benefits and advantages will become apparent to those skilled in the art to which it pertains upon reading and understanding of the following detailed specification.

III. BRIEF DESCRIPTION OF THE DRAWINGS

[0018] The invention may take physical form in certain parts and arrangement of parts, embodiments of which will be described in detail in this specification and illustrated in the accompanying drawings which form a part hereof and wherein:

[0019] FIG. 1 is an illustration of a device acquiring 3D scanning data of a subject in accordance with a method of the invention;

[0020] FIG. 2 is an illustration of a user selecting a portion of a 3D model for editing;

[0021] FIG. 3 is an illustration of voxels of 3D model data;

[0022] FIG. 4 is a flow diagram of an illustrative embodiment;

[0023] FIG. 5 is an illustration of a device capturing new 3D model data for supplementing an existing 3D model data set according to a method of the invention; and

[0024] FIG. 6 illustrates a networked embodiment including separate image capture and image processing devices.

IV. DETAILED DESCRIPTION OF THE INVENTION

[0025] A methodology for modifying three-dimensional (3D) scans includes obtaining an image of a three-dimensional subject with 3D cameras, scanners, or various other devices now known or developed in the future. The captured 3D model may be provided as a set of three-dimensional model data representative of the three-dimensional subject; alternatively, the model data may be obtained from previously recorded and stored data. This model data may be used to reconstruct the image of the three-dimensional subject on any user device, including but not limited to computing devices, imaging devices, mobile devices, and the like. The three-dimensional model data may be configured to permit selection and modification of specific voxels of the data for purposes of further detailing or modification. Herein, the term `voxel` is used in its ordinary industry sense, i.e., a unit of graphic information that defines a point of an object in three-dimensional space. The modification of the selected voxels may be achieved by obtaining new three-dimensional model data using 3D scanning devices and overwriting the existing voxels based on such new data.
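The voxel selection and over-writable state described above can be sketched as a simple data structure. The following is a minimal illustration assuming a truncated-signed-distance (TSDF) grid; the names `VoxelGrid` and `mark_overwritable` are hypothetical, not taken from the application.

```python
import numpy as np

# Hypothetical sketch of the voxel model described above: a dense grid where
# each voxel carries a truncated-signed-distance value, a color, and an
# "over-writable" flag.

class VoxelGrid:
    def __init__(self, shape, voxel_size=0.01):
        self.tsdf = np.ones(shape, dtype=np.float32)         # surface distances
        self.color = np.zeros(shape + (3,), dtype=np.uint8)  # per-voxel RGB
        self.overwritable = np.zeros(shape, dtype=bool)      # editable-state flag
        self.voxel_size = voxel_size

    def mark_overwritable(self, region):
        """Flag a region of voxels so that a later scan may overwrite it."""
        self.overwritable[region] = True

grid = VoxelGrid((64, 64, 64))
# select a 10 x 10 x 10 block, e.g. the voxels covering a wheel
grid.mark_overwritable((slice(10, 20), slice(10, 20), slice(10, 20)))
print(int(grid.overwritable.sum()))  # 1000
```

The flag array is all the later steps need: a renderer can read it to draw the visual cue, and the overwrite step writes only where it is set.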

[0026] Referring now to the drawings, wherein the showings are for purposes of illustrating embodiments of the invention only and not for purposes of limiting the same, FIG. 1 is an illustrative embodiment of a specific use case 100 wherein a 3D scanning device 110 is used to obtain three-dimensional model data of a real-world subject 112 (here, a vehicle). The scanning device may be any known 3D scanning device, including but not limited to mobile phones and tablets with three-dimensional scan capabilities. The scanning device 110 captures various features of the subject 112 from various angles and viewpoints 114. The model data so obtained is displayed as a reconstructed 3D model 116 of the subject 112 on the display screen of the scanning device 110. In a related embodiment, the three-dimensional model data may be obtained from a server or device memory where such data is already stored, and the corresponding reconstructed 3D model may be displayed on the image-processing device. The three-dimensional model data may be obtained in any format appropriate for image reconstruction, now known or developed in the future, including but not limited to an isosurface, a signed distance function, a truncated signed distance function, or a surface element (surfel) representation. Alternatively, other forms of model data, such as meshes or point clouds, or any representation capable of being converted to one of the forms mentioned herein, may also be used.

[0027] FIG. 2 represents an illustrative embodiment 200 wherein the three-dimensional model data, displayed as a reconstructed 3D model 116 on the video display of the image-processing device 210, is configured to permit selection of a specific part or viewpoint 212 (in this case the wheel) of the subject. The selection may be made by selecting one or more voxels of the set of three-dimensional model data and changing the state of the selected voxels to over-writable. The over-writable state informs the system of the user's intention to modify or further detail the selected voxels. FIG. 3 illustrates a voxel representation 300 of three-dimensional model data wherein specific voxels 312 are selected and marked as over-writable. Here, among the voxels 310, the selected voxels 312 are marked over-writable using the visual cue of a change in color. Other suitable visual cues, including but not limited to highlighting, changing transparency, or modifying/marking an outline of the voxels, may be used to show the voxels marked as over-writable. In the example of a vehicle as a subject, the voxels corresponding to the wheel may be selected and marked as over-writable, which informs the system that the user intends to carry out further image processing of the wheel of the vehicle.
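The color and transparency cues of FIG. 3 might be rendered as below. This is a hedged sketch in which the function name, the red cue color, and the RGBA layout are illustrative assumptions, not details from the application.

```python
import numpy as np

# Sketch of the FIG. 3 visual cue: selected voxels are tinted a cue color and
# made semi-transparent via an alpha channel appended to the color grid.

def apply_overwritable_cue(colors, selected, cue_rgb=(255, 0, 0), alpha=128):
    """Return an RGBA copy of `colors` with the cue applied to selected voxels."""
    alpha_plane = np.full(colors.shape[:-1] + (1,), 255, dtype=np.uint8)
    rgba = np.concatenate([colors, alpha_plane], axis=-1)
    rgba[selected, :3] = cue_rgb   # color cue
    rgba[selected, 3] = alpha      # transparency cue
    return rgba

colors = np.zeros((4, 4, 4, 3), dtype=np.uint8)
selected = np.zeros((4, 4, 4), dtype=bool)
selected[0, 0, 0] = True
rgba = apply_overwritable_cue(colors, selected)
print(rgba[0, 0, 0].tolist())  # [255, 0, 0, 128]
```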

[0028] Once the voxels are selected and marked as over-writable, a 3D scanning device is used to obtain new three-dimensional model data of the three-dimensional subject. In order to obtain this new data, a data acquisition quality parameter of the scanning device may be specified to modify the quality of the new three-dimensional model data. The quality parameter may be selected from image resolution, optical filtering, background subtraction, color data, or noise reduction. With specific regard to color data as a quality parameter, one may specify whether data is to be collected in color, black and white, grayscale, etc. With reference to FIG. 2, i.e., the illustration of the vehicle as a subject, once the voxels corresponding to the wheel are selected, image resolution may be set as a data acquisition quality parameter of the scanning device in order to obtain further detail of the wheel. As a result, the scanning device takes a higher-resolution image of the wheel, capturing in-depth details such as its ridge pattern and rim.
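The acquisition-quality parameters listed above could be grouped into a single capture configuration, as in this minimal sketch. Field names and defaults are assumptions made for illustration only.

```python
from dataclasses import dataclass

# Illustrative grouping of the quality parameters named in the text:
# resolution, optical filtering, background subtraction, color data,
# and noise reduction.

@dataclass
class CaptureQuality:
    resolution_scale: float = 1.0    # >1.0 requests a higher-resolution rescan
    color_mode: str = "rgb"          # "rgb", "grayscale", or "bw"
    optical_filtering: bool = False
    background_subtraction: bool = False
    noise_reduction: bool = True

# For the wheel example, raise the resolution before rescanning:
wheel_rescan = CaptureQuality(resolution_scale=2.0)
print(wheel_rescan.resolution_scale)  # 2.0
```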

[0029] FIG. 4 illustrates a flow diagram 400 of an illustrative embodiment wherein the new three-dimensional model data is obtained based on co-registration of the scanning device's view of the subject with the selected voxels of the three-dimensional model data. Specific voxels are selected 410 for further detailing, overwriting, or modification, and a corresponding visual cue indicates the over-writable state of the voxels 412. The 3D scanning device is set to capture a specific view of the subject. The view being captured by the scanning device is co-registered with the selected voxels 414 to ensure that the correct viewpoint is captured and that the correct voxels are overwritten. Co-registration may involve comparing data and viewpoints by transforming the two sets of data, i.e., the set obtained from the three-dimensional image and the set obtained from the view being captured, into one coordinate system. The device may be repositioned if the correct viewpoint or angle is not obtained. In an exemplary embodiment, the three-dimensional model of the subject may be reoriented to match the field of view of the three-dimensional scanning device in order to achieve co-registration easily and efficiently. Once the user is satisfied that the appropriate viewpoint has been achieved 416, the scanning device captures the new model data 418 from the co-registered viewpoint. In an alternate embodiment, the field of view of the scanning device may be adjusted to match the selected voxels to achieve accurate co-registration. The co-registration process may optionally comprise point-cloud registration, RGB image registration, intensity image registration, or iterative closest point registration to ensure easier, faster, and more consistent alignment of the view being captured with the selected voxels. In yet another embodiment, co-registration may be achieved by assuming that the field of view of the scanning device matches the selected voxels.
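The iterative-closest-point option mentioned above can be sketched with a textbook point-to-point ICP: the new scan's points are aligned to the points of the selected voxels by alternating nearest-neighbor matching with a best-fit rigid transform (Kabsch/SVD). This is a generic illustration under simplifying assumptions, not the patent's specific algorithm.

```python
import numpy as np

def best_fit_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:     # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, mu_d - R @ mu_s

def icp(src, dst, iters=10):
    """Iteratively co-register src points with dst points."""
    cur = src.copy()
    for _ in range(iters):
        # brute-force nearest-neighbor correspondences, for clarity
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        R, t = best_fit_transform(cur, dst[d.argmin(axis=1)])
        cur = cur @ R.T + t
    return cur

# dst: a small grid of model points; src: the same points offset by a small
# translation, standing in for the not-yet-registered scanner view
g = np.linspace(0.0, 1.0, 4)
dst = np.array(np.meshgrid(g, g, g)).reshape(3, -1).T
src = dst + np.array([0.05, -0.03, 0.02])
aligned = icp(src, dst)
print(np.allclose(aligned, dst))  # True
```

A production system would use a spatial index (k-d tree) instead of the brute-force distance matrix, but the alternation of matching and alignment is the same.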

[0030] FIG. 5 depicts an embodiment 500 illustrating the capture of new three-dimensional model data with the 3D scanning device 110. The co-registered view 510 of the subject 112 is captured by the device 110 by rescanning 512 the subject 112. In the vehicle illustration, once the view of the wheel being captured by the device is co-registered with the selected voxels of the wheel in the three-dimensional image, the device captures the new model data corresponding to the wheel. This new model data is used to modify or overwrite the existing voxels representing the wheel, providing modified 3D model data in real time.
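The overwrite step itself reduces to a masked update: newly acquired values replace the model only where voxels were flagged over-writable. The array-based layout below is an illustrative assumption.

```python
import numpy as np

# Sketch of the overwrite step: merge a rescan into the model at the flagged
# voxels only, leaving the rest of the model untouched.

def overwrite_selected(tsdf, new_tsdf, overwritable):
    """Return a copy of tsdf with flagged voxels replaced by new_tsdf values."""
    merged = tsdf.copy()
    merged[overwritable] = new_tsdf[overwritable]
    return merged

model = np.ones((4, 4, 4), dtype=np.float32)         # existing model data
rescan = np.full((4, 4, 4), -0.5, dtype=np.float32)  # new scan of the region
flags = np.zeros((4, 4, 4), dtype=bool)
flags[1, 1, 1] = True                                # one voxel selected

merged = overwrite_selected(model, rescan, flags)
print(float(merged[1, 1, 1]), float(merged[0, 0, 0]))  # -0.5 1.0
```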

[0031] The method of 3D model data modification provided in the exemplary embodiments herein may be used to modify 3D model data in real-time and near-real-time environments. FIG. 6 illustrates an embodiment 600 wherein a 3D model processing device 610 and a 3D model capturing device 612 are connected to each other and to a central server 616 via a Local Area Network or a Wide Area Network (including the Internet) 614. The image-capturing device 612 and the image-processing device 610 may be configured to use the 3D model processing and modification methodology provided herein to work simultaneously on the same subject in a real-time or near-real-time environment. For example, the image-processing device 610 may be used to select the voxels of a three-dimensional image, and the visual cue on the selected voxels may also be reflected on the image-scanning device 612. The image-scanning device may then capture the new three-dimensional model data and send it to the image-processing device 610, which may use the new model data to modify the selected voxels. In one embodiment, the image-scanning device 612 and the image-processing device 610 may both be user mobile devices, smart phones, tablets, or other similar devices with 3D model capturing and processing capability, or the image-processing device 610 may be a purpose-built device. In yet another embodiment, the scanning device 612 and the image-processing device 610 may employ different image rendering methodologies and yet may simultaneously use the 3D model modification methodology provided herein and interact with respect to the same 3D model data.
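The select/cue/rescan exchange between the two devices can be sketched as a message flow. This toy, single-process sketch uses in-memory queues in place of the LAN/WAN transport, and the message shapes are invented for illustration.

```python
import queue

# Toy sketch of the FIG. 6 workflow: the processing device posts a voxel
# selection, the scanning device acknowledges the cue and returns new data.

to_scanner, to_processor = queue.Queue(), queue.Queue()

# processing device (610): select voxels and request a rescan
to_scanner.put({"type": "select", "voxels": [(1, 1, 1)], "cue": "color"})

# scanning device (612): display the cue, rescan, and return new model data
request = to_scanner.get()
to_processor.put({"type": "new_data",
                  "voxels": request["voxels"],
                  "tsdf": [-0.5]})

# processing device (610): overwrite the selected voxels with the reply
reply = to_processor.get()
print(reply["type"], reply["voxels"])  # new_data [(1, 1, 1)]
```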

[0032] It will be apparent to those skilled in the art that the above methods and apparatuses may be changed or modified without departing from the general scope of the invention. The invention is intended to include all such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

[0033] Having thus described the invention, it is now claimed:

* * * * *

