U.S. patent application number 14/376187 was published by the patent office on 2015-01-01 as publication number 20150002649 for a device for detecting the three-dimensional geometry of objects and a method for the operation thereof.
This patent application is currently assigned to A.TRON3D GMBH. The applicant listed for this patent is Jurgen Jesenko, Horst Koinig, Christoph Nowak. Invention is credited to Jurgen Jesenko, Horst Koinig, Christoph Nowak.
Application Number: 14/376187
Publication Number: 20150002649
Family ID: 47845657
Publication Date: 2015-01-01
United States Patent Application: 20150002649
Kind Code: A1
Nowak; Christoph; et al.
January 1, 2015
DEVICE FOR DETECTING THE THREE-DIMENSIONAL GEOMETRY OF OBJECTS AND
METHOD FOR THE OPERATION THEREOF
Abstract
A device for detecting the three-dimensional geometry of objects (9), in particular teeth, includes a handpiece (1) which is provided with at least one position sensor (12) for detecting the change in the spatial position of the handpiece (1), and an optical device (2) having at least one camera (5, 6) for capturing images and at least one light source (3) for at least one projector (4). The position sensor (12) in the handpiece (1) initially determines the magnitude of the change in the spatial position of the device. From this it is determined how many images the camera (5, 6) can take in a defined time unit.
Inventors: Nowak; Christoph (Wien, AT); Koinig; Horst (Klagenfurt, AT); Jesenko; Jurgen (Finkenstein, AT)

Applicant:

Name | City | Country
Nowak; Christoph | Wien | AT
Koinig; Horst | Klagenfurt | AT
Jesenko; Jurgen | Finkenstein | AT

Assignee: A.TRON3D GMBH, Klagenfurt am Worthersee, AT
Family ID: 47845657
Appl. No.: 14/376187
Filed: February 2, 2013
PCT Filed: February 2, 2013
PCT No.: PCT/AT2013/000017
371 Date: August 1, 2014
Current U.S. Class: 348/77
Current CPC Class: A61C 9/006 20130101; H04N 13/221 20180501; A61B 5/4547 20130101; G01B 21/042 20130101; G01B 11/2513 20130101; A61B 5/1079 20130101; H04N 7/18 20130101; H04N 2213/001 20130101; H04N 13/254 20180501; H04N 13/189 20180501; G06T 7/0012 20130101; G01B 11/022 20130101; H04N 13/194 20180501; A61B 2560/0209 20130101; A61B 5/1077 20130101; A61B 5/1076 20130101
Class at Publication: 348/77
International Class: G01B 11/02 20060101 G01B011/02; H04N 7/18 20060101 H04N007/18; G06T 7/00 20060101 G06T007/00

Foreign Application Data

Date | Code | Application Number
Feb 6, 2012 | DE | 10 2012 100 953.8
Claims
1-30. (canceled)
31. Method for operating a device for detecting the
three-dimensional geometry of objects, in particular teeth, with a
scanner with a handpiece (1) with at least one optical device (2)
with at least one camera (5, 6) for capturing images and at least
one light source (3), characterized in that, before the
three-dimensional geometry of objects is detected with the scanner,
calibration images of a preferably flat surface are taken at
different known distances.
32. Method according to claim 31, characterized in that the central
axes of the field angle of the cameras are oriented essentially
normal to the flat surface when taking the calibration images.
33. Method according to claim 31, characterized in that the central
axes of the field angle of the camera are tilted at known angles to
the surface when taking the calibration images.
34. Method according to claim 31, characterized in that brightness
profiles determined when taking the calibration images (and
dependent on the distances) are recorded in a table along with
empirical values of the edges of the pattern.
35. Method according to claim 34, characterized in that the table
is used when sharpening two-dimensional images of the camera (5, 6)
in the course of detecting the three-dimensional geometry.
36. Method according to claim 31, characterized in that a pattern
is projected onto the object with a projector (4).
37. Method according to claim 32, characterized in that brightness
profiles determined when taking the calibration images (and
dependent on the distances) are recorded in a table along with
empirical values of the edges of the pattern.
38. Method according to claim 33, characterized in that brightness
profiles determined when taking the calibration images (and
dependent on the distances) are recorded in a table along with
empirical values of the edges of the pattern.
39. Method according to claim 32, characterized in that a pattern
is projected onto the object with a projector (4).
40. Method according to claim 33, characterized in that a pattern
is projected onto the object with a projector (4).
41. Method according to claim 34, characterized in that a pattern
is projected onto the object with a projector (4).
42. Method according to claim 35, characterized in that a pattern
is projected onto the object with a projector (4).
Description
[0001] The invention relates to a device for detecting the
three-dimensional geometry of objects, in particular teeth,
comprising a handpiece that has an optical device with at least one
camera and at least one light source.
[0002] Furthermore, the invention relates to a method for the
operation of a device for detecting the three-dimensional geometry
of objects, in particular teeth, comprising a handpiece that has at
least one position sensor for detecting the change in the spatial
position of the handpiece and an optical device with at least one
camera for capturing images and at least one light source for a
projector.
[0003] A device of the type mentioned at the outset is known, for example, from AT 508 563 B. The scope of the invention in this case extends to recording digital tooth and jaw impressions, assistance during diagnosis, supervision of dental treatment, and reliable monitoring of deployed implants. Beyond that, in further applications in the fields of medical and industrial technology, such as endoscopy, objects that are difficult to access can also be measured stereometrically.
[0004] The use of a position sensor is known from, for example,
U.S. Pat. No. 5,661,519 A.
[0005] The object of the invention is to improve such devices so that they can be operated with the smallest possible power supply. A supply current of, for example, 500 mA or 900 mA is targeted in this case.
[0006] With a device of the type mentioned at the outset, this
object is accomplished in that the optical device has exclusively
rigidly fixed components and that a means for generating light from
the light source is provided in the handpiece.
[0007] With a method of the type mentioned at the outset, this object is accomplished in that the position sensor in the handpiece determines the magnitude of the change in the spatial position of the device and that it is determined therefrom how many images the camera can take in a defined time unit.
[0008] By arranging the means of generating light directly in the
handpiece, long optical paths (via fiber optic cable or multiple
deflection mirrors, for example) are avoided. A distinction is made
here between the light source, meaning anything that can emit light
(the end of a fiber optic cable, for example) and the means of
generating light (a laser or the semiconductor of an LED, for
example).
[0009] By eliminating long optical paths, a means of generating
light with lower power can be used in order to sufficiently
illuminate the object--which results in a noteworthy conservation
of energy.
[0010] The rigid assembly of all elements of the optical device means that it is impossible to refocus the optics of the camera. All calibrations of the optical device therefore take place beforehand. It is particularly important here to achieve an optimal adjustment of the aperture: a smaller aperture yields a greater depth of field, while a larger aperture requires less illumination for a sufficiently bright image.
[0011] In the case of blurred areas in the 2D images, the issue is dealt with in two different ways. On the one hand, areas where the 2D images are blurry provide information about distance: depth information can be gained from the degree of blurriness based on previously ascertained information about the surface curvature. The blurry areas can therefore be utilized as further sources of information. On the other hand, blurry points, surfaces, lines, or the like can be sharpened and thus become part of the regular (stereometric, for example) process of extracting three-dimensional data.
[0012] For this purpose the scanner, in the course of calibration, is arranged, for example, at various known distances above a flat plane. Distances that change in steps of 50 µm each have proven to be particularly suitable for this purpose. Other distances may also be used for calibration. In general, one skilled in the art can be guided in choosing the distances, or the steps between them, by the resolution of the means used to capture the two-dimensional images: the better small changes can be recognized in the captured two-dimensional image, the smaller the distance steps between the scanner and the plane that remain meaningful during calibration.
[0013] The actual detection of the three-dimensional geometry of objects is therefore preferably prefaced by taking calibration images of a preferably flat surface at various known distances from the scanner. The distances thereby vary in steps of preferably 50 µm. Furthermore, the central axes of the field angle of the camera are preferably aligned essentially normal to the flat surface while the calibration images are taken.
[0014] For each distance, a mean brightness profile from the lightest to the darkest areas of the points, surfaces, lines, and the like is saved. During later processing of blurry two-dimensional images, it is no longer necessary to start from a statistical brightness profile in order to sharpen the images; rather, probable edges can be read from an empirical table created during calibration. The edges selected during sharpening therefore have a much higher accuracy than edges chosen by conventional processes. For a brightness profile in a two-dimensional image, the most similar brightness profile is chosen from the table; thanks to this, it is possible, prior to the actual analysis of the two-dimensional images, to estimate how far the area in question is from the camera, since different distances were recorded in the table for different brightness profiles during calibration.
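The table lookup outlined in paragraph [0014] can be sketched roughly as follows. This is an illustrative sketch only, not part of the application: the table contents, the profile length, and the sum-of-squared-differences matching criterion are assumptions.

```python
# Hypothetical sketch of the calibration-table lookup: each calibration
# distance (recorded, e.g., in 50 µm steps) maps to the mean brightness
# profile measured across an edge at that distance.

def nearest_profile_distance(table, observed):
    """Return the calibration distance whose stored brightness profile
    most closely matches the observed profile (sum of squared differences)."""
    best_distance, best_error = None, float("inf")
    for distance_um, profile in table.items():
        error = sum((p - o) ** 2 for p, o in zip(profile, observed))
        if error < best_error:
            best_distance, best_error = distance_um, error
    return best_distance

# Toy calibration table: edge profiles flatten out as the distance from
# the fixed-focus optics (and therefore the blur) increases.
calibration_table = {
    50:  [0.0, 0.1, 0.9, 1.0, 0.9, 0.1, 0.0],   # sharp edge
    100: [0.1, 0.3, 0.7, 0.9, 0.7, 0.3, 0.1],
    150: [0.2, 0.4, 0.6, 0.7, 0.6, 0.4, 0.2],   # heavily blurred
}

observed = [0.1, 0.3, 0.6, 0.9, 0.7, 0.3, 0.1]
print(nearest_profile_distance(calibration_table, observed))  # 100
```

The matched table entry yields both a distance estimate for the blurry area and the empirically probable edge positions used during sharpening.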
[0015] Calibration images in which the central axes of the field
angle of the camera are tilted at certain angles to the surface are
also conceivable.
[0016] In an especially preferred embodiment, the device has a facility for synchronizing the power supply of the light source and the camera. In this way the camera and light source are operated synchronously, commensurate with a preferred implementation of the method. By using pulses of light, high peak outputs can be achieved at individual points in time with comparatively low energy input. In this embodiment of the invention, the energy supply on the imaging end is likewise interrupted between pulses. In this way, unlit images are avoided, and additional energy is saved.
[0017] In another preferred embodiment, the handpiece has at least one position sensor (especially an acceleration sensor), magnetic field sensor, and/or inclination sensor. Using this unit or these units, the magnitude of the change in the spatial position of the device is determined according to the method; from this, it is determined how many images the camera should take in a defined time unit. In this way it is possible to avoid taking more images of the same place, upon slight movement, than are necessary for optimal capture of the geometry.
[0018] In this sense, the frame rate of the captured images can be varied in a preferred implementation; preferably, the frame rate is between 1 and 30 images per second. Additionally or alternatively, the frame rate can, according to a preferred implementation of the method, also be adjusted depending on whether a larger or smaller power supply is available. In the case of a larger power supply, more light pulses can thus be emitted and received than in the case of a smaller power supply.
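A minimal sketch of the frame-rate adjustment described in paragraphs [0017] and [0018] might look as follows; the 1-30 images-per-second range and the 500 mA/900 mA supply levels come from the text, while the thresholds, the proportionality factor, and the per-mode caps are assumptions.

```python
# Illustrative sketch: scale the frame rate with the measured movement of
# the handpiece, capped by the available power supply (a larger supply
# allows more light pulses and therefore more images per time unit).

def frame_rate(motion_mm_per_s, supply_ma):
    """Return images per second for the given motion magnitude and supply."""
    max_fps = 30 if supply_ma >= 900 else 15   # assumed per-mode caps
    fps = int(motion_mm_per_s * 2)             # assumed proportionality factor
    # Clamp to the 1-30 images-per-second range named in the text.
    return max(1, min(max_fps, fps))

print(frame_rate(0.0, 500))    # 1  (handpiece at rest: minimal imaging)
print(frame_rate(10.0, 500))   # 15 (fast movement, capped by the 500 mA mode)
print(frame_rate(10.0, 900))   # 20 (same movement, larger power budget)
```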
[0019] In a potential embodiment of the invention, it can additionally be determined how many images of a defined area have been recorded. Using this value, a quality can be assigned to a captured area of the object. This quality can optionally be reproduced in the 3D representation of the geometry of the object, so that the user can react to it. Areas from which only a small amount of data was captured--and which thus have a greater potential for deviations from the geometry of the object--can, for example, be displayed in red. Areas in which the number of images is already sufficient for the desired quality can, for example, be displayed in green. Additional colors are likewise conceivable for intermediate stages, as well as for areas in which an optimal value has already been reached--meaning further images would not improve the recorded data in any substantial way. Naturally, it is also possible to color only the areas that have a lower quality.
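The quality feedback of paragraph [0019], combined with the image-count cap of paragraph [0020], could be sketched as follows. The colors red and green come from the text; the yellow intermediate stage, the numeric thresholds, and the stop flag are illustrative assumptions.

```python
# Illustrative sketch: map the number of images captured of an area to a
# display color for the 3D representation, plus a flag telling the scanner
# whether further images of this area are still worthwhile.

def area_feedback(image_count, sufficient=5, optimal=10):
    """Return (color, keep_imaging) for an area with image_count images."""
    if image_count >= optimal:
        return ("green", False)   # optimal reached: stop imaging this area
    if image_count >= sufficient:
        return ("yellow", True)   # assumed intermediate stage: good, improvable
    return ("red", True)          # little data: larger potential deviation

print(area_feedback(2))    # ('red', True)
print(area_feedback(7))    # ('yellow', True)
print(area_feedback(12))   # ('green', False)
```

Stopping further captures once the optimal count is reached saves both illumination energy and processing effort, as paragraph [0020] notes.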
[0020] In the interests of energy conservation, it may be
determined, according to an additional or alternative procedure
step for a defined area, how many images of this area have already
been made. Upon reaching a defined number of images, no further
images of this area are made. Furthermore, this measure is suitable
for optimizing the necessary processing steps in a processing unit
that processes the recorded data, and for conserving computing
power.
[0021] In a preferred embodiment, the optical device has at least
one projector for projecting patterns. Projecting patterns improves
the possibilities of detecting the three-dimensional geometry.
[0022] In a furthermore preferred embodiment, the field angle of the camera and the field angle of the projector overlap by at least 50%, preferably at least 80%, and especially preferably at least 90%. The field angle is the conical area in which the projection or recording takes place. With an overlap that is as large as possible, the largest possible share of the energy expended is utilized.
[0023] In a preferred embodiment the device optionally has a
rechargeable electrical energy storage system. This energy storage
system can, according to the invention, fulfill multiple
functions.
[0024] On the one hand, the storage system can, in a preferred
embodiment, serve as the sole energy source of the device. In this
case it is sensible for the device to additionally have a data
storage system or a way of providing for wireless data transfer.
The device can thus, without cables, be moved completely freely. In
an embodiment in which the data is saved, it is appropriate to
combine the subsequent transfer of data (via a USB connection, for
example) with the charging of the energy storage system.
[0025] Alternatively, the energy storage system can, according to
the invention, be an auxiliary power source of the device. This
auxiliary power source can be activated when necessary. For this
purpose, it is initially determined, according to a preferred
method, how much electricity is available to the device. In the
embodiment example it is particularly provided that it be
determined whether 500 mA or 900 mA is available to the
device--that is, whether the device is connected to a USB 2.0 port
or a USB 3.0 port. Should one desire to operate the device in a
mode that requires a 900 mA power supply but only have access to a
500 mA power supply, the energy storage system is, according to the
method, provided as an additional energy source. Analogously, a power supply of, for example, 500 mA or 900 mA can be realized in this way even when the device is connected to a low-power USB port, which typically provides only 100 mA.
[0026] Alternatively or additionally, it can, in a further
preferred embodiment of the invention, be determined from the
ascertained value of the power supply available whether the device
should optionally be operated with two or three or more cameras. In
this way, different modes of operation are created for different
outputs of the power supply. Preferably, for example, two cameras
are operated in a mode of operation for 500 mA and three or more
cameras in a mode of operation for 900 mA.
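The supply-dependent mode selection of paragraphs [0025] and [0026] can be sketched as follows. The current values (roughly 100 mA for a low-power USB port, 500 mA for USB 2.0, 900 mA for USB 3.0) and the two-versus-three camera modes come from the text; the function structure and decision logic are assumptions.

```python
# Illustrative sketch: pick an operating mode from the current the connected
# port can provide, topping up from the auxiliary energy storage system
# when the port alone cannot supply the targeted mode.

def select_mode(port_ma, storage_available):
    """Return camera count and whether the energy storage must supplement."""
    # Target the 900 mA mode whenever the port or the storage system allows it.
    target_ma = 900 if (port_ma >= 900 or storage_available) else 500
    use_storage = storage_available and port_ma < target_ma
    if not use_storage and port_ma < 500:
        raise ValueError("supply too low without the energy storage system")
    cameras = 3 if target_ma >= 900 else 2  # two cameras @ 500 mA, three @ 900 mA
    return {"cameras": cameras, "use_storage": use_storage}

print(select_mode(900, False))  # {'cameras': 3, 'use_storage': False}
print(select_mode(500, False))  # {'cameras': 2, 'use_storage': False}
print(select_mode(100, True))   # {'cameras': 3, 'use_storage': True}
```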
[0027] In an especially preferred implementation of the method, the
data gathered by the camera is forwarded without further processing
or conditioning to a processing unit or a storage medium. In this
way, it is possible to eliminate completely the energy input that
would otherwise be required for a processor or chip that normally
performs this processing or conditioning. Further processing in the
processing unit can take place at least partially in the CPU;
however, it has been found that it is useful (especially with
regard to data processing speed) to process a part of the data
gathered for detecting or calculating the three-dimensional
geometry in the GPU. In this way it is possible to convert the
data, especially two-dimensional images taken by the cameras,
directly into a three-dimensional representation on a display or
into a file (a 3D file in STL format, for example) available on a
storage medium without any appreciable delay.
[0028] The device can, according to a preferred embodiment, have a
thermovoltaic element. Using this element, electric energy can,
according to a preferred implementation of the method, be obtained
from the heat that is produced during operation. On the one hand,
this energy can then be directly used for operating the device; on
the other hand, an energy storage system can be supplied with the
energy obtained, especially during device cool-down.
[0029] Additional preferred embodiments and implementations of the
invention are the subject matter of the remaining dependent
claims.
[0030] The invention will be subsequently further explained with
reference to the drawings.
[0031] FIG. 1 shows a schematized representation of an embodiment
of the invention and
[0032] FIG. 2 shows a schematic view of the underside of an
embodiment of the invention.
[0033] FIG. 1 shows an example embodiment of the device comprising
a handpiece 1, in which there is an optical device 2, which
comprises a light source 3, a projector 4, a first camera 5, a
second camera 6, and a mirror 7. In front of the mirror, there is a
recess in the housing 15 of the handpiece 1. This recess is
provided with a transparent cover 13 for hygienic reasons and to
protect the sensitive components in the handpiece 1.
[0034] In this embodiment the light source 3 is an LED. A means for
generating the light (not shown in the drawing) is located, in this
embodiment example, right in the light source 3 in the form of a
semiconductor. The subsequent pathway of the light inside and
outside of the device is depicted by an example light beam 8.
[0035] This beam initially passes through the projector 4. The
projector 4 serves thereby to project patterns onto the object.
These may be, depending on the type of capture of the geometry of
the object, both regular patterns, such as stripes, and irregular
patterns, such as irregular dot patterns.
[0036] After the projector 4, the light beam 8 encounters the
mirror 7 and is deflected by it onto the object 9 whose geometry is
to be captured. In the embodiment example depicted, the object 9 is
a tooth. In an embodiment not shown in the drawing, in which the
light source 3 and the projector 4 are already aligned in the
direction of the object, the mirror 7 is unnecessary.
[0037] The cameras 5, 6 record the pattern that is projected onto
the tooth 9, from which pattern the geometry of the tooth 9 will
later be calculated. According to a preferred implementation all
corresponding calculations take place in a processing unit outside
of the handpiece 1, whereby the power consumption of internal
chipsets or processors is minimized. The device may be connected to
this processing unit both physically by a cable 14 and wirelessly.
In the embodiment example, a wireless connection (Bluetooth or
WLAN, for example) is provided. For this purpose there is a
wireless data transfer means 10 in the handpiece, in particular a
transmitter and optionally a receiver.
[0038] Furthermore, an energy storage system 11 (optionally
rechargeable) is provided in the handpiece 1. In the depicted
embodiment example, this serves as an auxiliary power supply of the
device. The cable connected to the handpiece 1 may, however, also
be completely eliminated; this offers optimal freedom of
movement.
[0039] Furthermore, the drawing shows a position sensor 12. Using this sensor, the magnitude of the spatial movement of the handpiece 1 can be determined. For this purpose, the position sensor 12 can, for example, be an acceleration sensor, a terrestrial magnetic field sensor, or an inclination sensor. Combinations of different sensor types increase the precision with which the change of the spatial position or the movement of the handpiece 1 is determined.
[0040] FIG. 2 shows a schematic view of the underside of an
embodiment of the invention. Two areas 16, 17 in which a
thermovoltaic element could be placed are shown.
[0041] In the first area 16, the thermovoltaic element is arranged
directly on the underside (meaning the side on which the cover 13
is located) in proximity to the optical device 2. This is
advantageous because the optical device 2, especially the projector
4, produces the most heat during operation. In this way, this heat
can be utilized with as little loss as possible.
[0042] Placing the thermovoltaic element in the second area 17 is
advantageous because the element can be sized larger; in this case,
however, a heat conductor that directs the heat from the optical
device 2 to the thermovoltaic element is necessary. Even when the
thermovoltaic element is positioned in the second area 17,
attachment to the underside of the handpiece 1 makes sense; by so
doing, a side of the thermovoltaic element that faces outward
(according to a preferred embodiment of the invention) and gives
off heat is not covered by the hand of the user.
* * * * *