3D Data to 2D and Isometric Views for Layout and Creation of Documents

Myers; Stephen Brooks; et al.

Patent Application Summary

U.S. patent application number 14/671420 was filed with the patent office on 2015-03-27 and published on 2015-10-01 as "3D Data to 2D and Isometric Views for Layout and Creation of Documents." This patent application is currently assigned to KNOCKOUT CONCEPTS, LLC. The applicants listed for this patent are Jacob Abraham Kuttothara, Stephen Brooks Myers, Steven Donald Paddock, Andrew Slatton, and John Moore Wathen. Invention is credited to Jacob Abraham Kuttothara, Stephen Brooks Myers, Steven Donald Paddock, Andrew Slatton, and John Moore Wathen.

Application Number: 20150279087 14/671420
Family ID: 54189850
Publication Date: 2015-10-01

United States Patent Application 20150279087
Kind Code A1
Myers; Stephen Brooks; et al. October 1, 2015

3D DATA TO 2D AND ISOMETRIC VIEWS FOR LAYOUT AND CREATION OF DOCUMENTS

Abstract

This application relates to methods for generating two-dimensional images from three-dimensional model data. A process according to the application may begin with providing a set of three-dimensional model data of a subject, and determining a set of boundaries between intersecting surfaces of the set of three-dimensional model data. A user or an algorithm may select a view of the three-dimensional model data to convert to a two-dimensional image. The process may further include determining an outline of the three-dimensional model corresponding to the selected view, and projecting the outline of the three-dimensional model and a visible portion of the set of boundaries onto a two-dimensional image plane.


Inventors: Myers; Stephen Brooks (Shreve, OH); Kuttothara; Jacob Abraham (Loudonville, OH); Paddock; Steven Donald (Richfield, OH); Wathen; John Moore (Akron, OH); Slatton; Andrew (Columbus, OH)
Applicant:

Name                        City         State  Country  Type
Myers; Stephen Brooks       Shreve       OH     US
Kuttothara; Jacob Abraham   Loudonville  OH     US
Paddock; Steven Donald      Richfield    OH     US
Wathen; John Moore          Akron        OH     US
Slatton; Andrew             Columbus     OH     US
Assignee: KNOCKOUT CONCEPTS, LLC (Columbus, OH)

Family ID: 54189850
Appl. No.: 14/671420
Filed: March 27, 2015

Related U.S. Patent Documents

Application Number Filing Date Patent Number
61971036 Mar 27, 2014

Current U.S. Class: 345/420
Current CPC Class: G06K 2209/40 20130101; G06T 7/0002 20130101; G06T 2207/30168 20130101; G06K 9/4604 20130101; G06T 13/20 20130101; G06F 17/15 20130101; G06T 19/20 20130101; G06T 2207/10028 20130101; G06T 17/00 20130101; G06T 15/20 20130101; G06T 17/10 20130101; G06K 9/00201 20130101; G01B 11/26 20130101; G06T 2207/10016 20130101; G06K 9/036 20130101
International Class: G06T 15/20 20060101 G06T015/20; G06K 9/46 20060101 G06K009/46; G06T 17/10 20060101 G06T017/10

Claims



1. A method for generating two-dimensional images, comprising the steps of: providing a set of three-dimensional model data of a subject; determining a set of boundaries between intersecting surfaces of the set of three-dimensional model data; selecting a view of the three-dimensional model data to convert to a two-dimensional image; determining an outline of the three-dimensional model data corresponding to the selected view of the three-dimensional model data; and projecting the outline of the three-dimensional model data and a visible portion of the set of boundaries onto a two-dimensional image plane.

2. The method of claim 1, further comprising the step of determining the portion of the set of boundaries that would be invisible in the selected view due to opacity of the subject.

3. The method of claim 2, further comprising the step of projecting the invisible boundaries on the two-dimensional image plane in a form visually distinguishable from the visible boundaries.

4. The method of claim 3, wherein the form visually distinguishable from the visible boundaries comprises dashed, dotted, or broken lines.

5. The method of claim 1, wherein the three-dimensional model data comprises a point cloud.

6. The method of claim 5, further comprising the step of converting the point cloud to a set of continuous simple surfaces using a fitting method selected from one or more of a random sample consensus (RANSAC) method, an iterative closest point method, a least squares method, a Newtonian method, a quasi-Newtonian method, or an expectation-maximization method.

7. The method of claim 6, wherein a simple surface comprises a planar surface, a cylindrical surface, a spherical surface, a sinusoidal surface, or a conic surface.

8. The method of claim 1, wherein the step of selecting a view comprises orienting a three-dimensional model defined by the three-dimensional model data so that the planar bounded region with the largest convex hull is visible.

9. The method of claim 5, wherein the step of determining a set of boundaries comprises a Kreveld method, a Dey Wang method, or an iterative simple surface intersection method.

10. The method of claim 1, wherein the three-dimensional model data comprises a mesh.

11. The method of claim 10, wherein the step of determining a set of boundaries comprises finding sharp angles between intersecting simple surfaces according to a dihedral angle calculation.

12. A method for generating two-dimensional images, comprising the steps of: providing a set of three-dimensional model data of a subject, wherein the three-dimensional model data comprises a point cloud; converting the point cloud to a set of continuous simple surfaces using a fitting method selected from one or more of a random sample consensus (RANSAC) method, an iterative closest point method, a least squares method, a Newtonian method, a quasi-Newtonian method, or an expectation-maximization method, wherein a simple surface comprises a planar surface, a cylindrical surface, a spherical surface, a sinusoidal surface, or a conic surface; determining a set of boundaries between the intersecting simple surfaces, wherein the step of determining a set of boundaries comprises a Kreveld method, a Dey Wang method, or an iterative simple surface intersection method; selecting a view of the three-dimensional model data to convert to a two-dimensional image, wherein the step of selecting a view comprises orienting a three-dimensional model defined by the three-dimensional model data so that the planar bounded region with the largest convex hull is visible; determining an outline of the three-dimensional model data corresponding to the selected view of the three-dimensional model data; determining the portion of the set of boundaries that would be invisible in the selected view due to opacity of the subject; and projecting the outline of the three-dimensional model data and the visible portion of the set of boundaries onto a two-dimensional image plane.

13. The method of claim 12, further comprising projecting the invisible boundaries on the two-dimensional image plane in a form visually distinguishable from the visible boundaries.

14. The method of claim 13, wherein the form visually distinguishable from the visible boundaries comprises dashed, dotted, or broken lines.

15. A method for generating two-dimensional images, comprising the steps of: providing a set of three-dimensional model data of a subject, wherein the three-dimensional model data comprises a mesh; determining a set of boundaries between intersecting surfaces of the set of three-dimensional model data, wherein the step of determining a set of boundaries comprises finding sharp angles between intersecting simple surfaces according to a dihedral angle calculation; selecting a view of the three-dimensional model data to convert to a two-dimensional image; determining an outline of the three-dimensional model data corresponding to the selected view of the three-dimensional model data; determining the portion of the set of boundaries that would be invisible in the selected view due to opacity of the subject; and projecting the outline of the three-dimensional model data and the visible portion of the set of boundaries onto a two-dimensional image plane.

16. The method of claim 15, further comprising projecting the invisible boundaries on the two-dimensional image plane in a form visually distinguishable from the visible boundaries.

17. The method of claim 16, wherein the form visually distinguishable from the visible boundaries comprises dashed, dotted, or broken lines.
Description



I. BACKGROUND OF THE INVENTION

[0001] A. Field of Invention

[0002] Embodiments generally relate to creating technical drawings from 3D model data.

[0003] B. Description of the Related Art

[0004] A variety of methods are known in the art for generating 2D images from 3D models. For instance, it is known to generate a collage of 2D renderings that represent a 3D model. It is further known to identify vertices and edges of objects in images. The prior art also includes methods for flattening 3D surfaces to 2D quadrilateral line drawings in a 2D image plane. However, the art is deficient in a number of regards. For instance, the prior art does not teach or suggest fitting a 3D point cloud to a set of simple surfaces, determining the boundaries and vertices of those surfaces, and projecting them onto a 2D image plane.

[0005] Some embodiments of the present invention may provide one or more benefits or advantages over the prior art.

II. SUMMARY OF THE INVENTION

[0006] Some embodiments may relate to a method for generating two-dimensional images, comprising the steps of: providing a set of three-dimensional model data of a subject; determining a set of boundaries between intersecting surfaces of the set of three-dimensional model data; selecting a view of the three-dimensional model data to convert to a two-dimensional image; determining an outline of the three-dimensional model data corresponding to the selected view of the three-dimensional model data; determining the portion of the set of boundaries that would be invisible in the selected view due to opacity of the subject; and projecting the outline of the three-dimensional model data and the visible portion of the set of boundaries onto a two-dimensional image plane.

[0007] Embodiments may further comprise projecting the invisible boundaries on the two-dimensional image plane in a form visually distinguishable from the visible boundaries.

[0008] According to some embodiments the form visually distinguishable from the visible boundaries comprises dashed, dotted, or broken lines.

[0009] According to some embodiments the three-dimensional model data comprises a point cloud.

[0010] Embodiments may further comprise the step of converting the point cloud to a set of continuous simple surfaces using a fitting method selected from one or more of a random sample consensus (RANSAC) method, an iterative closest point method, a least squares method, a Newtonian method, a quasi-Newtonian method, or an expectation-maximization method.

[0011] According to some embodiments a simple surface comprises a planar surface, a cylindrical surface, a spherical surface, a sinusoidal surface, or a conic surface.

[0012] According to some embodiments the step of selecting a view comprises orienting a three-dimensional model defined by the three-dimensional model data so that the planar bounded region with the largest convex hull is visible.

[0013] According to some embodiments the step of determining a set of boundaries comprises a Kreveld method, a Dey Wang method, or an iterative simple surface intersection method.

[0014] According to some embodiments the three-dimensional model data comprises a mesh.

[0015] According to some embodiments the step of determining a set of boundaries comprises finding sharp angles between intersecting simple surfaces according to a dihedral angle calculation.

[0016] Other benefits and advantages will become apparent to those skilled in the art to which it pertains upon reading and understanding of the following detailed specification.

III. BRIEF DESCRIPTION OF THE DRAWINGS

[0017] The invention may take physical form in certain parts and arrangement of parts, embodiments of which will be described in detail in this specification and illustrated in the accompanying drawings which form a part hereof and wherein:

[0018] FIG. 1 is a flowchart showing an image conversion process according to an embodiment of the invention;

[0019] FIG. 2 is a schematic view of a user capturing 3D model data with a 3D scanning device;

[0020] FIG. 3 is a drawing of a point cloud being converted into an isometric drawing;

[0021] FIG. 4 is a drawing showing the use of a set of simple surfaces for generating 2D drawings;

[0022] FIG. 5 is a drawing of a device according to an embodiment of the invention; and

[0023] FIG. 6 is an illustrative printout according to an embodiment of the invention.

IV. DETAILED DESCRIPTION OF THE INVENTION

[0024] A method for generating two-dimensional images includes determining a set of boundaries between intersecting surfaces of three-dimensional model data corresponding to an object. A specific view of the three-dimensional model data, for which the two-dimensional images are required, is selected. Upon selection of the specific view, the outline of the three-dimensional model data corresponding to the selected view is determined, and the portion of the boundaries that is invisible due to the opacity of the object is identified. The outline of the three-dimensional model data and the visible portion of the boundaries so determined are projected onto a two-dimensional image plane.

[0025] Referring now to the drawings, wherein the showings are for purposes of illustrating embodiments of the invention only and not for purposes of limiting the same, FIG. 1 depicts a flow diagram 100 of an illustrative embodiment wherein three-dimensional data 110 is provided for the purpose of generating corresponding two-dimensional images. The three-dimensional data may be in the form of a point cloud or a mesh representation of a three-dimensional subject. Furthermore, any other form of three-dimensional data representation, now known or developed in the future, that is capable of being converted to point cloud or mesh form may be used.

[0026] The point cloud or mesh may be further converted to a set or sets of continuous simple surfaces by using a fitting method including but not limited to a random sample consensus (RANSAC) method, an iterative closest point method, a least squares method, a Newtonian method, a quasi-Newtonian method, or an expectation-maximization method. All these methods are well understood in the art and their methodologies are incorporated by reference herein. Any simple geometric surface including but not limited to a planar surface, cylindrical surface, spherical surface, sinusoidal surface, or a conic surface may be used to represent the point cloud as the set of simple continuous surfaces.
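By way of illustration only (the application as filed contains no code), a minimal RANSAC plane fit over a point cloud might look like the following NumPy sketch; the function name, iteration count, and inlier tolerance are assumptions, not values from the application:

```python
import numpy as np

def ransac_plane(points, n_iters=500, tol=0.01, seed=None):
    """Fit a plane n.x + d = 0 to a point cloud, keeping the candidate
    with the most inliers; returns (unit normal, d, inlier mask)."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, float)
    best_inliers, best_plane = np.zeros(len(pts), bool), None
    for _ in range(n_iters):
        # sample three distinct points and form the plane through them
        p0, p1, p2 = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-12:         # degenerate (collinear) sample
            continue
        n = n / np.linalg.norm(n)
        d = -n @ p0
        inliers = np.abs(pts @ n + d) < tol   # point-to-plane distances
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (n, d)
    return best_plane[0], best_plane[1], best_inliers
```

Applied repeatedly, with each fit's inliers removed before the next pass, such a routine segments a scan into planar patches; cylindrical, spherical, sinusoidal, or conic surfaces require richer parameterizations but follow the same sample-score-refine pattern.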

[0027] A set of boundaries between intersecting surfaces of the three-dimensional model data is determined 112. In an illustrative embodiment this determination of a set of boundaries may be achieved by using a Kreveld method, a Dey Wang method, or an iterative simple surface intersection method. All these methods are well understood in the art and their methodologies are incorporated by reference herein. In an alternate embodiment wherein the three-dimensional model data is represented as a mesh, the set of boundaries may be determined by finding sharp angles between intersecting simple surfaces according to a dihedral angle calculation.
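For the mesh case, the dihedral-angle test mentioned above can be sketched as follows (illustrative only; the 30-degree sharpness threshold is an assumption, not a value taken from the application):

```python
import numpy as np

def sharp_edges(vertices, faces, angle_deg=30.0):
    """Edges whose two adjacent faces meet at a dihedral angle sharper
    than angle_deg, returned as sorted vertex-index pairs."""
    v = np.asarray(vertices, float)
    normals = []
    edge_faces = {}
    for fi, (a, b, c) in enumerate(faces):
        n = np.cross(v[b] - v[a], v[c] - v[a])
        normals.append(n / np.linalg.norm(n))
        for e in ((a, b), (b, c), (c, a)):
            edge_faces.setdefault(tuple(sorted(e)), []).append(fi)
    cos_thresh = np.cos(np.radians(angle_deg))
    # adjacent-face normals differing by more than angle_deg mark a crease
    return [e for e, fs in edge_faces.items()
            if len(fs) == 2 and normals[fs[0]] @ normals[fs[1]] < cos_thresh]
```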

[0028] Once the set of boundaries between intersecting surfaces of the three-dimensional model data is determined, a view of the image data for which two-dimensional images are required is selected 114. In one embodiment, the view may be selected by orienting a three-dimensional model defined by the three-dimensional model data so that the planar bounded region with the largest convex hull is visible. Based on the view selected, an outline of the image data corresponding to the view is determined 116. In one embodiment, the outline determination may be based upon selecting the portion of the image data from one visible edge to the other in the selected view. Also, the portion of the set of boundaries that would be invisible in the selected view due to the opacity of the subject is determined 118. In another embodiment, the portion of the set of boundaries visible from the selected viewpoint is determined, thereby excluding the invisible boundaries. The determined outline and the visible portion of the set of boundaries are projected onto a two-dimensional image plane 120.
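One possible reading of steps 116 and 120, sketched for a mesh under an orthographic view (the function and the image-plane basis choice are assumptions, not taken from the application): the outline is approximated by silhouette edges, i.e. edges shared by a front-facing and a back-facing triangle, which are then projected onto an image plane perpendicular to the view direction.

```python
import numpy as np

def outline_segments(vertices, faces, view_dir):
    """Silhouette edges under an orthographic view, projected to 2D."""
    v = np.asarray(vertices, float)
    f = np.asarray(faces, int)
    d = np.asarray(view_dir, float)
    d = d / np.linalg.norm(d)
    # face normals and whether each face points toward the viewer
    n = np.cross(v[f[:, 1]] - v[f[:, 0]], v[f[:, 2]] - v[f[:, 0]])
    front = (n @ d) < 0
    edge_faces = {}
    for fi, (a, b, c) in enumerate(f):
        for e in ((a, b), (b, c), (c, a)):
            edge_faces.setdefault(tuple(sorted(e)), []).append(fi)
    silhouette = [e for e, fs in edge_faces.items()
                  if len(fs) == 2 and front[fs[0]] != front[fs[1]]]
    # orthonormal basis (u, w) spanning the image plane perpendicular to d
    u = np.cross(d, (0.0, 0.0, 1.0))
    if np.linalg.norm(u) < 1e-9:              # view along z: pick another axis
        u = np.cross(d, (0.0, 1.0, 0.0))
    u = u / np.linalg.norm(u)
    w = np.cross(d, u)
    proj = v @ np.column_stack((u, w))        # each vertex -> (x, y) on the plane
    return [(proj[i], proj[j]) for i, j in silhouette]
```

Classifying the remaining boundary segments as visible or hidden (step 118) would additionally require an occlusion test, for example a depth comparison along d against the fitted surfaces.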

[0029] In another embodiment, the invisible portion of the boundaries may also be depicted on a 2D image plane in a manner that distinguishes the invisible boundaries from the visible boundaries. One illustrative mechanism of distinguishing invisible boundaries from visible ones may involve use of dashed, dotted, or broken lines.
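Rendering-wise, the distinction can be as small as a line-style switch; a matplotlib sketch with placeholder segment lists (the coordinates are illustrative, not from the application):

```python
import matplotlib.pyplot as plt

# outer rectangle visible, inner rectangle occluded by the subject
visible = [((0, 0), (4, 0)), ((4, 0), (4, 3)), ((4, 3), (0, 3)), ((0, 3), (0, 0))]
hidden = [((1, 1), (3, 1)), ((3, 1), (3, 2)), ((3, 2), (1, 2)), ((1, 2), (1, 1))]

fig, ax = plt.subplots()
for (x0, y0), (x1, y1) in visible:
    ax.plot([x0, x1], [y0, y1], "k-")      # solid: visible boundaries
for (x0, y0), (x1, y1) in hidden:
    ax.plot([x0, x1], [y0, y1], "k--")     # dashed: invisible boundaries
ax.set_aspect("equal")
plt.savefig("view.png")
```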

[0030] FIG. 2 depicts an illustrative embodiment 200 wherein a three-dimensional scanner 210 is used to scan and obtain three-dimensional model data 216 of a real-world subject 212. The three-dimensional model data 216 is obtained by scanning the subject 212 from various directions and orientations 214. The image scanner 210 may be any known or future-developed 3D scanner, including but not limited to mobile devices, smart phones, or tablets configured to scan and obtain three-dimensional model data.

[0031] FIG. 3 depicts an illustrative embodiment 300 wherein the three-dimensional model data of the real-world subject is represented in the form of a point cloud 310. This point cloud representation may be further converted to a set or sets of continuous simple surfaces 312. As discussed previously herein, this conversion may be achieved by using a fitting method including but not limited to a random sample consensus (RANSAC) method, an iterative closest point method, a least squares method, a Newtonian method, a quasi-Newtonian method, or an expectation-maximization method. The simple surfaces used to represent the point cloud may be any simple geometric surface (polygonal and cylindrical surfaces in this case), including but not limited to a planar surface, a cylindrical surface, a spherical surface, a sinusoidal surface, or a conic surface. In one embodiment, a set of boundaries between the intersecting simple surfaces is determined using various methods known in the art, including but not limited to a Kreveld method, a Dey Wang method, or an iterative simple surface intersection method. In another embodiment, where a mesh model is used instead of a point cloud, the boundaries may also be determined by finding sharp angles between intersecting simple surfaces according to a dihedral angle calculation.

[0032] FIG. 4 depicts an illustrative embodiment 400 wherein the three-dimensional model data, represented as a set of continuous simple surfaces 312, is used for 2D image generation. A view of the set of continuous simple surfaces 312 is chosen, and the determined outline and the visible portion of the set of boundaries corresponding to the chosen view are projected onto a two-dimensional image plane. For example, the top view may be chosen and projected 412, or the front view 416 or side view 414 may be chosen and projected. Optionally, the invisible boundaries 418 may be depicted using dashed, dotted, or broken lines. Furthermore, because of the nature of the image data collected and reconstructed, it is possible to produce drawings having precise dimensions, such as those shown in FIG. 4 elements 412 and 414.
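For the axis-aligned views of FIG. 4, orthographic projection reduces to dropping one coordinate; a small sketch (the axis conventions are an assumption, not specified in the application):

```python
import numpy as np

def standard_views(points):
    """Project a 3D point set into top, front, and side 2D views."""
    p = np.asarray(points, float)
    return {
        "top":   p[:, [0, 1]],   # drop z: looking down the z-axis
        "front": p[:, [0, 2]],   # drop y: looking along the y-axis
        "side":  p[:, [1, 2]],   # drop x: looking along the x-axis
    }
```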

[0033] It is also contemplated to include a dimensional standard in the collected 3D model data so that drawings can be made to scale, i.e., at a 1:1 scale with measurements identical to those of the real-world object being modeled. For instance, in some embodiments the scanning device may be equipped with features for measuring its distance from the object being scanned, and may therefore be capable of accurately determining dimensions. Embodiments may also include the ability to manipulate scale, so that a drawing of a very large object can be rendered in a more manageable scale such as 1:10. It may further be advantageous to include dimensions on the 3D or 2D drawings produced according to embodiments of the invention, in the form of annotations similar to those shown in FIG. 4 elements 412 and 414.

[0034] FIG. 5 depicts an embodiment 500 illustrating a user device 510 with a capacitive touch screen 512 and interface that may be configured either to carry out the method provided herein or to receive the 2D images and other related data generated using the method provided herein. The device 510 may be any device with computing and processing capabilities, including but not limited to mobile phones, tablets, smart phones, and the like. The device 510 may be adapted to display the point cloud 310 of the scanned subject and the corresponding set of continuous simple surfaces 312. The various views, such as the top view 412, side view 414, and front view 416, may also be displayed on the screen 512 of the device 510. The device 510 may connect to a printing device 520 to enable physical printing of the 2D images and other related information. It will be understood that images may be stored in the form of digital documents as well, and that the invention is not limited to printed documents. The device 510 may be connected to the printing device 520 through a wired connection 518 or wirelessly 516. The wireless connection 516 with the printing device 520 may include Wi-Fi, Bluetooth, or any other now known or future-developed method of wireless connectivity. Contextual touch screen buttons 514 on the screen 512 of the device 510 may be configured to carry out various actions, such as executing a print command, zooming in and out, or selecting different views of the set of continuous simple surfaces 312.

[0035] FIG. 6 depicts an illustrative embodiment 600 of a physical print or digital document 610 of the 2D images obtained using the methods described herein. A two-dimensional representation of the set of continuous simple surfaces 312 and various 2D images, such as the top view 412, side view 414, and front view 416, may be depicted in the document 610. The document 610 may also contain additional information in the form of notes 612 or annotations with respect to the 2D images, as well as a header 614 and footer 616 section. For instance, embodiments of the invention may include the ability to precisely measure the actual dimensions of an object being scanned; therefore, notes and annotations may include, without limitation, the volume of the object, the object's dimensions, its texture and color, its location as determined by an onboard GPS, the time and date that the scan was taken, the operator's name, or any other data that may be convenient to store with the scan data. If the average density of the object is known, even the weight of the object can be determined and displayed in the notes.
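As a sketch of the volume and weight annotations suggested here (illustrative only; the density value and the unit-tetrahedron test data are assumptions): the volume of a closed, consistently oriented triangle mesh follows from the divergence theorem as a sum of signed tetrahedra, and weight is density times volume.

```python
import numpy as np

def mesh_volume(vertices, faces):
    """Volume of a closed, outward-oriented triangle mesh via the
    divergence theorem (sum of signed tetrahedra to the origin)."""
    v = np.asarray(vertices, float)
    f = np.asarray(faces, int)
    a, b, c = v[f[:, 0]], v[f[:, 1]], v[f[:, 2]]
    return abs(np.einsum("ij,ij->i", a, np.cross(b, c)).sum()) / 6.0

verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]   # unit tetrahedron
tris = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]    # outward-oriented faces
volume = mesh_volume(verts, tris)                      # 1/6 cubic metre here
print(f"volume: {volume:.4f} m^3, weight: {2700.0 * volume:.1f} kg")  # aluminium at ~2700 kg/m^3
```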

[0036] It will be apparent to those skilled in the art that the above methods and apparatuses may be changed or modified without departing from the general scope of the invention. The invention is intended to include all such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

[0037] Having thus described the invention, it is now claimed:

* * * * *

