U.S. patent application number 17/434064 was published by the patent office on 2022-05-05 as publication number 20220133447 for a scanner device with replaceable scanning-tips. This patent application is currently assigned to 3SHAPE A/S. The applicant listed for this patent is 3SHAPE A/S. The invention is credited to Morten Vendelbo FOGED, Esben Rosenlund HANSEN, Anders Robert JELLINGGAARD, Peter Dahl Ejby JENSEN, Soren Greve JENSEN, Dmytro Chupryna OLEGOVYCH, Michael PEDERSEN, and Christoph VANNAHME.
United States Patent Application 20220133447
Kind Code: A1
HANSEN, Esben Rosenlund; et al.
May 5, 2022
SCANNER DEVICE WITH REPLACEABLE SCANNING-TIPS
Abstract
The present disclosure provides a scanning system for scanning an object, including a scanner device including an image sensor for acquiring images and a mounting-interface for detachably mounting at least one of a plurality of types of scanning-tips, wherein each of the plurality of types of scanning-tips is configured for providing light to the object in an illumination-mode that differs for each of the plurality of types of scanning-tips.
Inventors: HANSEN, Esben Rosenlund (Bronshoj, DK); JELLINGGAARD, Anders Robert (Copenhagen K, DK); JENSEN, Peter Dahl Ejby (Valby, DK); FOGED, Morten Vendelbo (Copenhagen V, DK); VANNAHME, Christoph (Holte, DK); PEDERSEN, Michael (Allerod, DK); JENSEN, Soren Greve (Copenhagen S, DK); OLEGOVYCH, Dmytro Chupryna (Copenhagen K, DK)
Applicant: 3SHAPE A/S, Copenhagen K, DK
Assignee: 3SHAPE A/S, Copenhagen K, DK
Family ID: 1000006148578
Appl. No.: 17/434064
Filed: February 25, 2020
PCT Filed: February 25, 2020
PCT No.: PCT/EP2020/054936
371 Date: August 26, 2021
Current U.S. Class: 382/128
Current CPC Class: G06T 2207/30036 (20130101); A61C 9/0053 (20130101); G06T 7/40 (20130101); A61B 5/0088 (20130101); A61B 5/0062 (20130101); G06T 7/0012 (20130101)
International Class: A61C 9/00 (20060101); A61B 5/00 (20060101); G06T 7/00 (20060101); G06T 7/40 (20060101)
Foreign Application Data
Date            Code    Application Number
Feb 27, 2019    EP      19159766.5
Mar 11, 2019    EP      19161887.5
Claims
1. A scanning system for scanning an object, comprising: a scanner
device comprising: an image sensor for acquiring images; a
mounting-interface for detachably mounting at least one of a
plurality of types of scanning-tips, wherein each of the plurality
of types of scanning-tips is configured for providing light to the
object in an illumination-mode that differs for each of the
plurality of types of scanning-tips; and a recognition component
for recognizing the type of scanning-tip mounted to the
mounting-interface; a processor configured for processing the
images acquired by the image sensor into processed data; and a
controller configured for controlling the operation of the
processor according to the type of the scanning-tip recognized by
the recognition component, wherein the controller is further
configured for controlling the processor such that when a first
type of scanning-tip is mounted and recognized, the processor is
controlled to operate in a first processing-mode corresponding to
the first type of scanning-tip, and such that when a second type of
scanning-tip is mounted and recognized, the processor is controlled
to operate in a second processing-mode corresponding to the second
type of scanning-tip, wherein the second processing-mode is
different from the first processing-mode, and wherein: when in the
first processing mode, the processor processes a first plurality of
images acquired with a first illumination-mode to provide the
processed data in the form of first data for 3D geometry and first
data for texture of the object, wherein the first data for the 3D
geometry is based on: a first subset of the first plurality of
images being selected according to the first type of scanning tip,
thereby defining part of the first processing mode, and/or a first
subset of pixels within said first plurality of images being
selected according to the first type of scanning tip, thereby
defining part of the first processing mode, and wherein the first
data for the texture of the object is based on: a second subset of
the first plurality of images being selected according to the first
type of scanning tip, thereby defining part of the first processing
mode, and/or a second subset of pixels within said first plurality
of images being selected according to the first type of scanning
tip, thereby defining part of the first processing mode, and when
in the second processing mode, the processor processes a second
plurality of images acquired with a second illumination-mode to
provide the processed data in the form of second data for 3D
geometry and second data for texture of the object, wherein the
second data for the 3D geometry is based on: a first subset of the
second plurality of images being selected according to the second
type of scanning tip, thereby defining part of the second
processing mode, and/or a first subset of pixels within said second
plurality of images being selected according to the second type of
scanning tip, thereby defining part of the second processing mode,
wherein the second data for the texture of the object is based on:
a second subset of the second plurality of images being selected
according to the second type of scanning tip, thereby defining part
of the second processing mode, and/or a second subset of pixels
within said second plurality of images being selected according to
the second type of scanning tip, thereby defining part of the
second processing mode.
2. The scanning system according to claim 1, wherein the processor
is integrated in the scanner device.
3. The scanning system according to claim 1, wherein the controller
is external to the scanner device.
4. The scanning system according to claim 1, wherein the type of
scanning-tip, as recognized by the recognition component, is in the
form of recognition-data, and wherein the scanner device is
configured to transmit the recognition-data to the controller.
5. The scanning system according to claim 1, wherein the recognition component comprises a memory-reader configured to read recognition-data from an integrated memory on each of the plurality of types of scanning-tips.
6. The scanning system according to claim 1, wherein the
illumination-mode for one type of scanning-tip is defined by the
wavelength of the light and/or wherein the illumination-mode for
one type of scanning tip is defined by the intensity of the
light.
7. The scanning system according to claim 1, wherein the
illumination-mode for one type of scanning-tip is defined by
different wavelengths of the light, whereby one type of
scanning-tip switches between the different wavelengths of the
light.
8. The scanning system according to claim 1, wherein the
illumination-mode for one type of scanning-tip is defined by the
field-of-view of the light and/or wherein the illumination-mode for
one type of scanning-tip is defined by a pattern of the light.
9. The scanning system according to claim 1, wherein the first data
for the 3D geometry is based on the first subset of the first
plurality of images, and the first subset of pixels within said
first plurality of images, and wherein the first data for the
texture of the object is based on the second subset of the first
plurality of images and the second subset of pixels within said
first plurality of images, wherein the first subset of the first
plurality of images is identical to the second subset of the first
plurality of images, and wherein the first subset of pixels within
said first plurality of images is different from the second subset
of pixels within said first plurality of images.
10. The scanning system according to claim 1, wherein the first
data for the 3D geometry is based on the first subset of the first
plurality of images, and the first subset of pixels within said
first plurality of images, and wherein the first data for the
texture of the object is based on the second subset of the first
plurality of images and the second subset of pixels within said
first plurality of images, wherein the first subset of the first
plurality of images is different from the second subset of the
first plurality of images, and wherein the first subset of pixels
within said first plurality of images is different from the second
subset of pixels within said first plurality of images.
11. The scanning system according to claim 1, wherein the first
data for the 3D geometry is based on the first subset of the first
plurality of images, and the first subset of pixels within said
first plurality of images, and wherein the first data for the
texture of the object is based on the second subset of the first
plurality of images and the second subset of pixels within said
first plurality of images, wherein the first subset of the first
plurality of images is different from the second subset of the
first plurality of images, and wherein the first subset of pixels
within said first plurality of images is identical to the second
subset of pixels within said first plurality of images.
12. The scanning system according to claim 10, wherein the first
subset of the first plurality of images is every second image of
the plurality of images as recorded with non-chromatic light at a
plurality of wavelengths, and wherein the second subset of the
first plurality of images is the remaining images of the plurality
of images recorded with monochromatic light at a first
wavelength.
13. The scanning system according to claim 10, wherein the first
subset of the first plurality of images is every third image of the
first plurality of images as recorded with non-chromatic light
defined by a plurality of wavelengths, and wherein the second
subset of the first plurality of images is the remaining images of
the first plurality of images recorded with monochromatic light at
a first wavelength and at a second wavelength.
14. The scanning system according to claim 9, wherein the second
subset of the first plurality of images is a single image as
recorded with non-chromatic light defined by a plurality of
wavelengths.
15. The scanning system according to claim 1, wherein the second
data for the 3D geometry is based on the first subset of the second
plurality of images, and the first subset of pixels within said
second plurality of images, and wherein the second data for the
texture of the object is based on the second subset of the second
plurality of images and the second subset of pixels within said
second plurality of images, wherein the first subset of the second
plurality of images is identical to the second subset of the second
plurality of images, and wherein the first subset of pixels within
said second plurality of images is different from the second subset
of pixels within said second plurality of images.
16. The scanning system according to claim 1, wherein the second
data for the 3D geometry is based on the first subset of the second
plurality of images, and the first subset of pixels within said
second plurality of images, and wherein the second data for the
texture of the object is based on the second subset of the second
plurality of images and the second subset of pixels within said
second plurality of images, wherein the first subset of the second
plurality of images is different from the second subset of the
second plurality of images, and wherein the first subset of pixels
within said second plurality of images is different from the second
subset of pixels within said second plurality of images.
17. The scanning system according to claim 1, wherein the second
data for the 3D geometry is based on the first subset of the second
plurality of images, and the first subset of pixels within said
second plurality of images, and wherein the second data for the texture of the object is based on the second subset of the second
plurality of images and the second subset of pixels within said
second plurality of images, wherein the first subset of the second
plurality of images is different from the second subset of the
second plurality of images, and wherein the first subset of pixels
within said second plurality of images is identical to the second
subset of pixels within said second plurality of images.
18. The scanning system according to claim 1, wherein the scanner
device further comprises a lens configured to translate back and
forth while the first and/or second plurality of images is
acquired.
19. The scanning system according to claim 1, wherein the scanning
system further comprises a processor configured to generate a
3D-model of the object, and wherein the 3D-model is generated based
on the first data for the 3D geometry, but wherein the 3D-model is
not generated based on the second data for the 3D geometry, or
wherein the 3D-model is generated based on the second data for the
3D geometry, but wherein the 3D-model is not generated based on the
first data for the 3D geometry.
20. The scanning system according to claim 19, wherein when the
3D-model is not generated based on the second data for the 3D
geometry, then the second data for the 3D geometry is compared to
the first data for the 3D geometry, whereby the second data for
texture of the object is matched to the 3D-model.
21. The scanning system according to claim 19, wherein when the
3D-model is not generated based on the first data for the 3D
geometry, then the first data for the 3D geometry is compared to
the second data for the 3D geometry, whereby the first data for
texture of the object is matched to the 3D-model.
22. A computer-implemented method for generating a
3D-representation of an oral cavity displayed in a graphical
user-interface on a screen, comprising the steps of: displaying, in
the graphical user-interface, a plurality of options for scanning,
such that a user is instructed, in the user-interface, to select
one of said options for scanning; receiving, by the user, one or
more of said options for scanning; displaying, in the graphical
user-interface, and based on the one option for scanning as
received, first mounting-instructions for the user to mount a first
scanning-tip to a scanner device; receiving first information from
the scanner device related to the first scanning-tip when the first
scanning-tip is mounted to the scanner device; displaying, in the
graphical user-interface, and based on the first information from
the scanner device, a first scanning instruction and/or a first
scanning indication for the user to scan with the scanner device
having mounted the first scanning-tip; receiving first scan data by
the scanner device with the first scanning-tip, wherefrom a first
part of the 3D-representation is generated; displaying, in the
graphical user-interface, and based on the first scan data as
received, second mounting-instructions to replace the first
scanning-tip with a second scanning-tip; receiving second
information from the scanner device related to the second
scanning-tip when the second scanning tip is mounted to the scanner
device; displaying, in the graphical user-interface, and based on
the second information from the scanner device, a second scanning
instruction and/or second scanning indication for the user to scan
with the scanner device having the second scanning-tip; and
receiving second scan data by the scanner device with the second
scanning-tip, wherefrom a second part of the 3D-representation is
generated.
23. The computer-implemented method according to claim 22, wherein
the one of said options for scanning is related to edentulous
scanning.
24. The computer-implemented method according to claim 22, wherein the
step of receiving, by the user, one or more of said options for
scanning is provided by the user clicking on the one or more of
said options in the user-interface.
25. The computer-implemented method according to claim 22, wherein
the step of receiving the first information from the scanner device
related to the first scanning-tip and/or the step of receiving the
second information from the scanner device related to the second
scanning-tip is provided from a recognition component in the
scanner device that recognizes the type of scanning-tip when
mounted to the scanner device.
26. The computer-implemented method according to claim 22, wherein
the step of receiving the first information from the scanner device
related to the first scanning-tip is provided from visual
recognition of at least a part of the first scanning-tip in the
field of view of the first scanning-tip and/or the step of
receiving the second information from the scanner device related to
the second scanning-tip is provided from visual recognition of at
least a part of the second scanning-tip in the field-of-view of the
second scanning-tip.
27. The computer-implemented method according to claim 22, wherein
the first scanning-tip is configured for scanning with a larger
field-of-view in comparison to the second scanning-tip, whereby the
first part of the 3D-representation is used as a reference model
for the second part of the 3D-representation being matched to the
reference model.
28. The computer-implemented method according to claim 22, wherein
the step of displaying instructions to replace the first
scanning-tip with a second scanning-tip is based on
confirmation-input from a user, wherein the confirmation-input
comprises information confirming that the first part of the
3D-representation as generated is sufficient.
29. The computer-implemented method according to claim 28, wherein
the first part of the 3D-representation, as confirmed sufficient,
is collected over time from: the user, and/or a plurality of
different users, thereby forming historical 3D-representations as
confirmed sufficient, whereby the step of displaying the
instructions to replace the first scanning-tip with a second
scanning-tip is automatized and based on the historical
3D-representations as confirmed sufficient.
30. The computer-implemented method according to claim 29, wherein
the historical 3D-representations as confirmed sufficient are used
as input for an algorithm configured to determine when the
3D-representation as generated is sufficient, and wherein the
algorithm is based on averaging the historical 3D-representations,
and/or wherein the algorithm is based on machine learning and/or
artificial intelligence.
Description
FIELD OF THE INVENTION
[0001] The present disclosure relates generally to a scanning
system comprising a scanner device with different replaceable
scanning-tips. More specifically, the present disclosure relates to
how the scanner device operates with the different replaceable
scanning tips. Most specifically, the present disclosure relates to
scanner devices for intra-oral scanning and/or intra-ear
scanning.
BACKGROUND
[0002] Scanner devices with replaceable scanning-tips are known in
the field of scanning. For example, in the field of intra-oral
scanning, scanning tips with different optical configurations are
well-known.
[0003] One scanning-tip might for example be configured with one specific field-of-view and another scanning-tip might for example be configured with a different field-of-view. This allows not only for changing the field-of-view, but, because the field-of-view is related to the physical dimension of the scanning-tip, it also allows for the size of the scanning-tip to be changed. In this way, the one scanning-tip may be used for intra-oral scanning of adults, and the other scanning-tip may be used for intra-oral scanning of children.
[0004] Also, one scanning tip might for example be configured to
move an optical component, such as a mirror, at one frequency, and
another scanning tip might for example be configured to move an
identical optical component, such as an identical mirror, at
another frequency. This may for example allow for scanning at
different scanning rates dependent on the scanning-tip being
used.
[0005] All in all, it is well-known in the field of scanning that
scanning-tips can be replaced such that the scanning-operation is
adapted to a specific scanning-situation and/or such that the
scanning-operation is adapted to a specific object to be
scanned.
[0006] However, the flexibility of change in scanning-operation is
limited to the hardware or to the change in the operation of the
hardware responsible for the scanning.
[0007] Further, the scanning-operation might not be the only
operation that the operator of the scanner device would like to
change.
[0008] A more flexible scanner device is therefore desired in the
field of scanning.
SUMMARY
[0009] One objective of the present disclosure is to provide a more
flexible scanner device.
[0010] The present disclosure provides in a first aspect a scanning
system for scanning an object, comprising: a scanner device
comprising: an image sensor for acquiring images; a
mounting-interface for detachably mounting at least one of a
plurality of types of scanning-tips, wherein each of the plurality
of types of scanning-tips is configured for providing light to the
object in an illumination-mode that differs for each of the
plurality of types of scanning-tips; and a recognition component
for recognizing the type of scanning-tip mounted to the
mounting-interface. Further, the scanner device may comprise a
processor configured for processing the images acquired by the
image sensor into processed data. Even further, the scanner device
may comprise a controller configured for controlling the operation
of the processor according to the type of the scanning-tip
recognized by the recognition component.
[0011] The scanner device as here disclosed may advantageously
adapt the operation of the processor to a specific
scanning-situation or to a specific scanning-object. This allows
for example to operate the scanner device in the same manner for
two different scanning-situations or for two different
scanning-objects but processing the acquired images differently
dependent on the scanning-situation or the scanning-object. For
example, the scanning tip may be operated in the same manner for
two different scanning-situation of to the scanning-object. In this
manner, based on the type of scanning-tip being used, an operator
may get different results of the processed images.
[0012] Furthermore, the disclosed scanner device may also allow, for example, operating the scanner in different manners for two different scanning-situations or for two different scanning-objects while processing the acquired images differently dependent on the scanning-situation or the scanning-object. For example, the scanning-tip may be operated differently for two different scanning-situations or for two different scanning-objects. In this manner, based on the type of scanning-tip being used and on the scanning-operation of the scanning-tip, an operator may also get different results of the processed images.
[0013] Accordingly, the present disclosure provides a much more flexible scanner device than typical scanner devices. To understand this advantage better, examples of how typical scanners work are described below.
[0014] The processor in typical scanner devices, as configured to
process images acquired by the scanner device, typically performs a
well-defined task regardless of the scanning-situation and/or
regardless of the scanning-object being scanned. For example, a
fixed or well-defined processing-task may be in: [0015] a confocal
scanner, i.e. a scanner that at least comprises a processor
configured to derive a depth-coordinate based on isolated analysis
of single image-points on an image-sensor, for example by analyzing, in isolation, the intensity of single image-points on the image-sensor and determining when the intensity of a single image-point is at a maximum and
therefore in-focus; [0016] a triangulation scanner, i.e. a scanner
that at least comprises a processor configured to derive a depth
coordinate based on triangulation of one or more light stripes
on an image-sensor, for example by analyzing the position of a
light stripe in relation to a known position on the image sensor;
and/or [0017] a structured light projection focus-scanner, i.e. a
scanner that at least comprises a processor configured to derive a
depth-coordinate based on comparative analysis of a plurality of
image-points on an image-sensor, for example by analyzing the
correlation of the plurality of points with a reference on the
image-sensor and for example determining when the correlation of
the plurality of image-points is maximum and therefore
in-focus.
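By way of illustration, the per-pixel focus analysis described for the confocal scanner in [0015] can be sketched as follows. This is a minimal sketch in Python with NumPy, not the patent's implementation; the function name, array shapes and focus positions are illustrative assumptions.

import numpy as np

def depth_from_focus_stack(stack, focus_positions):
    """stack: (n_images, height, width) intensities acquired at known focus
    positions; focus_positions: (n_images,) array.

    Returns a (height, width) depth map: for each image-point, analyzed in
    isolation, the focus position at which its intensity is maximum and the
    point is therefore in-focus.
    """
    peak_index = np.argmax(stack, axis=0)  # per-pixel index of maximum intensity
    return focus_positions[peak_index]     # map indices to physical depths

# Usage: 50 focus steps over a 5 mm sweep on a 480x640 sensor.
stack = np.random.default_rng(0).random((50, 480, 640))
depth_map = depth_from_focus_stack(stack, np.linspace(0.0, 5.0, 50))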
[0018] One of the different processing-tasks, as described above,
may for example be performed by a field-programmable-gate-array
(FPGA) processor residing in the given scanner device. The
processor may then perform the given task because the processor is
instructed thereto by a pre-defined script that may run on the
scanner device. Typical scanners are therefore not very flexible
when it comes to the processing of the images.
[0019] The inventors of the scanner device as here disclosed have
realized that by having a controller configured for controlling the
operation of the processor according to the type of the
scanning-tip recognized by the recognition component, the scanner
device does not need to have a fixed processing-task, for example
as pre-defined on the scanner-device, and the scanner device does
not need to run different scripts as defined or re-defined by an
operator.
[0020] With the presently disclosed scanner, the processing task or mode that the processor applies to the acquired images is set in an adaptable manner: it is selected by the controller, based on the output of the recognition-element, once a specific scanning-tip is mounted to the scanner device.
[0021] A technical effect of this adaption is that the processing
task or mode of the processor, in addition to being adapted to the
given scanning-situation and/or scanning-object, is efficiently
controlled. For example, letting the controller control the processor is much faster than manually selecting or instructing the processor to process the acquired images.
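To make the controller's role concrete, the following minimal sketch shows a dispatch from recognized tip type to processing-mode, replacing a fixed pre-defined script. It is written in Python; the tip identifiers and registry are hypothetical assumptions, not the patent's API.

from typing import Callable

PROCESSING_MODES: dict[str, Callable] = {}

def register_mode(tip_type: str):
    # Associate a processing routine with a recognized scanning-tip type.
    def decorator(func: Callable) -> Callable:
        PROCESSING_MODES[tip_type] = func
        return func
    return decorator

@register_mode("white-light-tip")
def first_processing_mode(images):
    ...  # derive first data for 3D geometry and texture

@register_mode("infrared-tip")
def second_processing_mode(images):
    ...  # derive second data for 3D geometry and texture

def control_processor(recognized_tip: str, images):
    # The controller selects the processing-mode from the recognized tip,
    # with no manual selection or instruction by the operator.
    return PROCESSING_MODES[recognized_tip](images)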
[0022] All in all, the present disclosure provides a scanner device
which efficiently adapts both the scanner and the scanner-output,
i.e. the processed images, to a given scanning-situation and/or
scanning object.
[0023] In a second aspect, the present disclosure provides a
computer-implemented method for generating a 3D-representation of
an oral cavity displayed in a graphical user-interface on a screen,
comprising the steps of: [0024] displaying, in the graphical
user-interface, a plurality of options for scanning, such that a
user is instructed, in the user-interface, to select one of said
options for scanning; [0025] receiving, by the user, one or more of
said options for scanning; [0026] displaying, in the graphical
user-interface, and based on the one option for scanning as
received, first mounting-instructions for the user to mount a first
scanning-tip to a scanner device; [0027] receiving first
information from the scanner related to the first scanning-tip when
the first scanning-tip is mounted to the scanner device; [0028]
displaying, in the graphical user-interface, and based on the first
information from the scanner, a first scanning instruction and/or a
first scanning indication for the user to scan with the scanner
device having mounted the first scanning-tip, [0029] receiving
first scan data by the scanner device with the first scanning-tip,
wherefrom a first part of the 3D-representation is generated;
[0030] displaying, in the graphical user-interface, and based on
the first scan data as received, second mounting instructions to
replace the first scanning-tip with a second scanning-tip; [0031]
receiving second information from the scanner device related to the
second scanning-tip when the second scanning tip is mounted to the
scanner device; [0032] displaying, in the graphical user-interface,
and based on the second information from the scanner device, a
second scanning instruction and/or a second scanning indication for
the user to scan with the scanner device having the second
scanning-tip; and [0033] receiving second scan data by the scanner
device with the second scanning-tip, wherefrom a second part of the
3D-representation is generated.
[0034] The 3D-representation as generated using the above disclosed
method, i.e. a final 3D representation made of at least the first
part of the 3D-representation and the second part of the
3D-representation, depends on the interaction between the user and
the user-interface. Further, an advantage of the above disclosed
method, as may be performed by a processor on for example a
computer, is that the 3D-representation is generated only when the
user does what he or she is instructed to via the user-interface.
For example, the first scan data is only received by the processor
when the user mounts the first scanning-tip as displayed in the
user-interface, and the second scan data is only received by the
processor when the user mounts the second scanning-tip as displayed
in the user-interface. Accordingly, the herein disclosed method
provides at least two steps that change the way that the
3D-representation is made. Further, because the process is only
able to continue to the steps of receiving data when the proper
scanning-tip is mounted, the method can only be carried out when
the user correctly mounts the correct scanning-tip. Thus, if the
user by error does not correctly mount the scanning-tip as
instructed, and/or if the user does not mount the correct
scanning-tip as instructed, then the process is not carried out.
Thus, the user is also prevented from carrying out a process that
is not wanted. Accordingly, the interaction between the
user-interface and the physical world changes the process of
generating the 3D-representation.
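The gating described in [0034] can be sketched as a simple loop in which each data-receiving step is only reachable once the recognition component reports the requested tip. All object and method names below are hypothetical placeholders for the scanner and user-interface of the second aspect.

def guided_scan(scanner, ui, expected_tips=("first-tip", "second-tip")):
    parts = []
    for expected in expected_tips:
        ui.show_mounting_instructions(expected)
        # Gate: the process cannot continue until the correct tip is mounted.
        while scanner.read_mounted_tip() != expected:
            pass
        ui.show_scanning_instructions(expected)
        parts.append(scanner.acquire_scan_data())  # a part of the 3D-representation
    return parts  # first and second parts of the final 3D-representation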
[0035] In one embodiment of the second aspect of the invention, the
first and second scanning-tips may be two of a plurality of types
of scanning-tips, wherein each of the two types of scanning-tips is
configured for providing light to the object in an
illumination-mode that differs for each of the two types of
scanning-tips.
[0036] In some embodiments, the two aspects may be combined. For
example, the scanning system according to the first aspect may
include a processor to perform the computer-implemented method
according to the second aspect.
[0037] Accordingly, in another embodiment of the second aspect, the step of receiving the first information from the scanner device related to the first scanning-tip and/or the step of receiving the second information from the scanner device related to the second scanning-tip is provided from a recognition component in the scanner device that recognizes the type of scanning-tip when mounted to the scanner device.
BRIEF DESCRIPTION OF THE DRAWINGS
[0038] The above and/or additional objects, features and advantages
of the present disclosure, will be further described by the
following illustrative and non-limiting detailed description of
embodiments of the present disclosure, with reference to the
appended drawing(s), wherein:
[0039] FIG. 1 shows an example of a scanning system 1 according to
the invention.
[0040] FIG. 2 shows an example of a processing-mode related to intra-oral scanning mode.
[0041] FIG. 3 shows an example of a processing-mode related to inner-ear scanning mode.
[0042] FIG. 4 shows an example of a processing-mode related to intra-oral and infrared scanning mode.
[0043] FIG. 5 shows an example of a processing-mode related to intra-oral and fluorescent scanning mode.
[0044] FIG. 6 shows an example of a processing-mode related to intra-oral and reduced field-of-view scanning mode.
[0045] FIG. 7a shows an example of a processing-mode related to face scanning and enlarged field-of-view scanning mode, and FIG. 7b shows details of a scanning-tip used for face scanning.
[0046] FIG. 8 shows an example of a processing-mode related to intra-oral scanning mode.
[0047] FIG. 9 shows an example of a processing-mode related to intra-oral scanning mode.
[0048] FIG. 10(a-f) shows an example of a user-interface according
to the second aspect of the invention.
DETAILED DESCRIPTION
[0049] The Controller and the Processing-Mode(s)
[0050] In one embodiment of the scanning system, the controller is
further configured for controlling the processor such that when a
first type of scanning-tip is mounted and recognized, the processor
is controlled to operate in a first processing-mode corresponding
to the first type of scanning-tip, and such that when a second type
of scanning-tip is mounted and recognized, the processor is
controlled to operate in a second processing-mode corresponding to
the second type of scanning-tip, wherein the second processing-mode
is different from the first processing-mode.
[0051] In a first preferred embodiment, when in the first
processing mode, the processor processes a first plurality of
images acquired with a first illumination-mode to provide the
processed data in the form of first data for 3D geometry and first
data for texture of the object, wherein the first data for the 3D
geometry is based on: a first subset of the first plurality of
images being selected according to the first type of scanning tip,
thereby defining part of the first processing mode, and/or a first
subset of pixels within said first plurality of images being
selected according to the first type of scanning tip, thereby
defining part of the first processing mode, and wherein the first
data for the texture of the object is based on: a second subset of
the first plurality of images being selected according to the first
type of scanning tip, thereby defining part of the first processing
mode, and/or a second subset of pixels within said first plurality
of images being selected according to the first type of scanning
tip, thereby defining part of the first processing mode.
[0052] In a second preferred embodiment, when in the second
processing mode, the processor processes a second plurality of
images acquired with a second illumination-mode to provide the
processed data in the form of second data for 3D geometry and
second data for texture of the object, wherein the second data for
the 3D geometry is based on: a first subset of the second plurality
of images being selected according to the second type of scanning
tip, thereby defining part of the second processing mode, and/or a
first subset of pixels within said second plurality of images being
selected according to the second type of scanning tip, thereby
defining part of the second processing mode, wherein the second
data for the texture of the object is based on: a second subset of
the second plurality of images being selected according to the
second type of scanning tip, thereby defining part of the second
processing mode, and/or a second subset of pixels within said
second plurality of images being selected according to the second
type of scanning tip, thereby defining part of the second
processing mode.
[0053] For example, a first type of scanning-tip may be for
scanning using white light, and a second type of scanning-tip may
be for scanning using infra-red light. Thus, when in the first
processing mode, the processor may process a first plurality of
images acquired with a first illumination-mode, for example
corresponding to white-light-illumination, to provide the processed
data in the form of first data for 3D geometry and first data for
texture of the object.
[0054] According to the above described first preferred embodiment,
then when in the first processing-mode, the processor may process
all of the first plurality of images, and from these images, the processor may process a first subset of pixels
within said first plurality of images being selected according to
the first type of scanning tip, thereby defining part of the first
processing mode. The first subset of pixels may for example be all
the green pixels or a selected set of green pixels. Upon processing
said green pixels, the first data for the 3D geometry is
provided.
[0055] Further, according to the above described first preferred embodiment, then when in the first processing-mode, the processor may also process all of the first plurality of images, and from these images, the processor may process a second subset of pixels within said first plurality of images being selected according to the first type of scanning tip, thereby defining part of the first processing mode. The second subset of pixels may for example be a selected set of green, red and blue pixels. Upon processing said green, red and blue pixels, the first data for the texture of the object is provided.
[0056] Further, when in the second processing mode, the processor
may process a second plurality of images acquired with a second
illumination-mode, for example corresponding to infra-red
light-illumination, to provide the processed data in the form of
second data for 3D geometry and second data for texture of the
object.
[0057] According to the above described second preferred embodiment, then when in the second processing-mode, the processor may process every second image of the second plurality of images, and from these images, the processor may process a first subset of pixels within said second plurality of images being selected according to the second type of scanning tip, thereby defining part of the second processing mode. The first subset of pixels may for example be all the green pixels or a selected set of green pixels. Upon processing said green pixels, the second data for the 3D geometry is provided.
[0058] Further, according to the above described second preferred embodiment, then when in the second processing-mode, the processor may also process the images between every second image as processed for the 3D-geometry, and from these images, the processor may process a second subset of pixels within said second plurality of images being selected according to the second type of scanning tip, thereby defining part of the second processing mode. The second subset of pixels may for example be a selected set of red pixels. Upon processing said red pixels, the second data for the texture of the object is provided.
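The image- and pixel-subset selections of the two example processing-modes above can be sketched as follows, assuming the first and second pluralities of images are NumPy arrays of shape (n, height, width, 3) with red, green and blue channels. The channel choices mirror the examples in [0054]-[0058]; the actual selections depend on the scanning-tip.

import numpy as np

def first_processing_mode(images):
    geometry_source = images[:, :, :, 1]     # all images, green pixels only
    texture_source = images                  # same images, green, red and blue pixels
    return geometry_source, texture_source

def second_processing_mode(images):
    geometry_source = images[0::2, :, :, 1]  # every second image, green pixels
    texture_source = images[1::2, :, :, 0]   # the images in between, red pixels
    return geometry_source, texture_source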
[0059] As can be seen from the above example, the first processing
mode differs from the second processing mode, and vice-versa.
[0060] To further elaborate on this, the above example illustrates
an embodiment, where the first data for the 3D geometry is based on
the first subset of the first plurality of images, and the first
subset of pixels within said first plurality of images, and wherein
the first data for the texture of the object is based on the second
subset of the first plurality of images and the second subset of
pixels within said first plurality of images, wherein the first
subset of the first plurality of images is identical to the second
subset of the first plurality of images, and wherein the first
subset of pixels within said first plurality of images is different
from the second subset of pixels within said first plurality of
images.
[0061] Even further, the above example also illustrates an
embodiment, where the second data for the 3D geometry is based on
the first subset of the second plurality of images, and the first
subset of pixels within said second plurality of images, and
wherein the second data for the texture of the object is based on
the second subset of the second plurality of images and the second
subset of pixels within said second plurality of images, wherein
the first subset of the second plurality of images is different
from the second subset of the second plurality of images, and
wherein the first subset of pixels within said second plurality of
images is different from the second subset of pixels within said
second plurality of images.
[0062] One advantage of having the first processing mode differ from the second in the described manner, according to the herein disclosed scanning system, is that data processing can be reduced to limit the amount of data being sent to the processor for generating a 3D-model. By reducing the amount of data needed to generate a 3D-model, a wireless connection between the scanner device and an external processor generating the 3D-model becomes practical, since only a limited amount of data can be transmitted wirelessly. Further, by reducing the amount of data needed to generate a 3D-model, the 3D-model can be generated faster than if all data is processed in the same manner regardless of the scanning tip.
[0063] Other examples of where the first processing mode differs
from the second processing mode, and vice-versa, are described by
the following embodiments.
[0064] In a first embodiment, the first data for the 3D geometry is
based on the first subset of the first plurality of images, and the
first subset of pixels within said first plurality of images, and
wherein the first data for the texture of the object is based on
the second subset of the first plurality of images and the second
subset of pixels within said first plurality of images, wherein the
first subset of the first plurality of images is different from the
second subset of the first plurality of images, and wherein the
first subset of pixels within said first plurality of images is
different from the second subset of pixels within said first
plurality of images.
[0065] In a second embodiment, the first data for the 3D geometry
is based on the first subset of the first plurality of images, and
the first subset of pixels within said first plurality of images,
and wherein the first data for the texture of the object is based
on the second subset of the first plurality of images and the
second subset of pixels within said first plurality of images,
wherein the first subset of the first plurality of images is
different from the second subset of the first plurality of images,
and wherein the first subset of pixels within said first plurality
of images is identical to the second subset of pixels within said
first plurality of images.
[0066] In a third embodiment, the first subset of the first
plurality of images is every second image of the plurality of
images as recorded with non-chromatic light at a plurality of
wavelengths, and wherein the second subset of the first plurality
of images is the remaining images of the plurality of images
recorded with monochromatic light at a first wavelength.
[0067] In a fourth embodiment, the first subset of the first
plurality of images is every third image of the first plurality of
images as recorded with non-chromatic light defined by a plurality
of wavelengths, and wherein the second subset of the first
plurality of images is the remaining images of the first plurality
of images recorded with monochromatic light at a first wavelength
and at a second wavelength.
[0068] In a fifth embodiment, the second subset of the first
plurality of images is a single image as recorded with
non-chromatic light defined by a plurality of wavelengths.
[0069] In a sixth embodiment, the second data for the 3D geometry
is based on the first subset of the second plurality of images, and
the first subset of pixels within said second plurality of images,
and wherein the second data for the texture of the object is based
on the second subset of the second plurality of images and the
second subset of pixels within said second plurality of images,
wherein the first subset of the second plurality of images is
identical to the second subset of the second plurality of images,
and wherein the first subset of pixels within said second plurality
of images is different from the second subset of pixels within said
second plurality of images.
[0070] In a seventh embodiment, the second data for the 3D geometry
is based on the first subset of the second plurality of images, and
the first subset of pixels within said second plurality of images,
and wherein the second data for the texture of the object is based
on the second subset of the second plurality of images and the
second subset of pixels within said second plurality of images,
wherein the first subset of the second plurality of images is
different from the second subset of the second plurality of images,
and wherein the first subset of pixels within said second plurality
of images is identical to the second subset of pixels within said
second plurality of images.
[0071] The above described embodiments all benefit from providing a system where data processing can be reduced to limit the amount of data being sent to the processor for generating a 3D-model.
[0072] The data for the 3D geometry may be in the form of a point
cloud, or data adaptable to form a point cloud. A point cloud
typically relates to points in a 3D universe, such as Euclidean space.
[0073] The data for the texture may comprise color data, such as
RGB color data, and/or may be in the form of direct color, a
compressed format, or an indexed color.
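As a minimal sketch of these data forms, the processed data may be held as a point cloud with per-point color, for example as below; the layout is an illustrative assumption.

from dataclasses import dataclass
import numpy as np

@dataclass
class ProcessedData:
    points: np.ndarray  # (n, 3) float array: x, y, z in Euclidean space
    colors: np.ndarray  # (n, 3) uint8 array: RGB color data per point

data = ProcessedData(points=np.zeros((1000, 3)), colors=np.zeros((1000, 3), dtype=np.uint8))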
[0074] The processor as described herein may be responsible for
deriving the data for the 3D geometry and the texture in various
ways dependent on the scanning-tip. For example, when using a tip
with white-light-illumination, the processor may derive both a
point cloud, and for each point in the cloud, a corresponding RGB
color. The derivation of the data is clearly based on the images of
the scanner device. In some embodiments of the processing-modes,
the first processing mode or the second processing mode is such
that the data for the 3D geometry and the data for the texture is
derived from each single image in a stack of images. In other
embodiments, the first processing-mode or the second
processing-mode is such that the first data for the 3D geometry and
the data for the texture, or the second data for the 3D geometry
and the second data for the texture, is derived from a set of
images in a stack of images, for example such that at least one
image is used for deriving the first data for the 3D model or the
second data for the 3D model, and another separate at least one
image used for deriving the first data for the texture for the 3D
model or the second data for the texture for the 3D model.
[0075] In one embodiment, when in the first processing-mode, both
the first data for the 3D geometry and the first data for the
texture of the object is derived for each of the images among the
plurality of images. According to the herein disclosed scanning
system where the processing-modes are different, then related to
the just described embodiment, the second processing-mode may in
one embodiment be such that second data for the 3D geometry and the
second data for the texture of the object is derived for a set of
images among the plurality of images. However, as previously
described, it could also be the other way around. For example, in
another embodiment, when in the first processing-mode, both the
first data for the 3D geometry and the first data for the texture
of the object is derived for a set of images among the plurality of
images. According to the embodiment where the processing-modes are
different, then related to the just described embodiment, the
second processing-mode may in one embodiment be such that second
data for the 3D geometry and the second data for the texture of the
object is derived for each of the images among the plurality of
images.
[0076] In some embodiments, when in the first processing-mode, the
first data for the 3D geometry and the first data for the texture
of the object is derived for different images among the plurality
of images. Also, in some other embodiments, when in the second
processing-mode, the second data for the 3D geometry and the second
data for the texture of the object is derived for different images
among the plurality of images.
[0077] In one embodiment, the first data for the 3D geometry and/or
the first data for the texture is derived for every second image
among the plurality of images. In another embodiment, the second
data for the 3D geometry and/or the second data for the texture is
derived for every second image among the plurality of images. For
example, every second image among the plurality of images may be
acquired with white light-illumination, and the images in-between
may be acquired with infra-red light-illumination or fluorescence
light-illumination. If using a white light-illumination, the
processor may in some embodiments be configured to and/or
instructed by the controller to derive the data for the 3D geometry
and the data for the texture from each of the images acquired
with the white light-illumination. If using infrared
light-illumination, the processor may in some embodiments be
configured to and/or instructed by the controller to derive only
the data for texture from each of the images acquired with the
infrared light-illumination. If using fluorescence
light-illumination, the processor may in some embodiments be
configured to and/or instructed by the controller to derive only
the data for texture from each of the images acquired with the
fluorescence light-illumination.
[0078] In other embodiments, the scanner device further comprises a
lens configured to translate back and forth while the first and/or
second plurality of images is acquired. This may for example be the
case for a confocal scanner device or for a structured light
projection focus-scanner. A triangulation scanner may not need such
a lens.
[0079] In some embodiments, the second data for the texture is
derived for a single image among the plurality of images as
acquired while translating a lens element back and forth. For
example, when the single image is acquired with infrared
light-illumination. This may allow for a 2D infrared image to be
acquired such that it thereafter may be correlated to a 3D-model as
provided for by other 2D-images acquired during the translation of
the lens.
[0080] In a preferred embodiment, the controller is external to the
scanner device. Thus, according to one embodiment of the invention,
when the recognition component recognizes the type of scanning-tip
mounted to the mounting-interface, then the controller (as
configured for controlling the operation of the processor according
to the type of the scanning-tip recognized by the recognition
component) controls the operation of the processor. This means that
in this embodiment, the recognition component may transmit
information (for example information of the mounted scanning-tip in
the form of an identification number) to the controller as located
remote from the scanner device. The scanner device may accordingly
be configured to transmit such information to the remotely located
controller, for example on an external computer or a cloud service.
This transmission may for example be a wired transmission or a
wireless transmission. Once the controller receives the information
of the mounted scanning-tip, the controller may transmit
instructions (dependent on the information of the tip) to the
processor, for example located on the scanner device. Thus, the
controller and/or external computer may accordingly be configured
to transmit such information back to the scanner device. This
transmission may also for example be wired transmission or wireless
transmission. In most embodiments, the type of transmission (from
the scanner device to the controller, and from the controller to
the scanner device) is identical. Finally, when the processor
receives the instructions, the processor may process the images as
instructed and dependent on the information of the tip. One
advantage of having the controller external to the scanner device
is that the controller can be modified independently of the scanner
device, and for example be modified via the internet. Another
advantage is that the controller need not be present in the scanner device, and therefore the scanner device itself can be made more compact. Further, because a controller produces heat when instructing a processor, the scanner device will also produce less heat and consume less power. This may be advantageous when for example
the scanner device is configured to operate in a wireless mode
and/or powered by a battery.
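The round-trip of this embodiment can be sketched as follows: the scanner device transmits recognition-data to the remotely located controller, which returns instructions dependent on the tip. The identification numbers, table and function names below are hypothetical assumptions.

INSTRUCTION_TABLE = {
    101: {"mode": "first-processing-mode"},   # e.g. a white-light scanning-tip
    102: {"mode": "second-processing-mode"},  # e.g. an infra-red scanning-tip
}

def remote_controller(recognition_data):
    # Runs on an external computer or a cloud service.
    return INSTRUCTION_TABLE[recognition_data]

def on_scanner_device(tip_id, images, processor):
    # Wired or wireless transmission of recognition-data, then of instructions.
    instructions = remote_controller(tip_id)
    return processor(images, instructions)  # process as instructed, dependent on the tip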
[0081] In another preferred embodiment, the controller is
integrated in the scanner device. One advantage of having the
controller in the scanner device is that the communication link
from the controller to the processor is reduced (for example in
comparison to the embodiment just described), meaning that the
instructions to the processor can be transmitted efficiently
thereto.
[0082] The Processor(s)
[0083] In one embodiment, the processor is integrated in the
scanner device. In this embodiment, the processor is then
configured for processing the images acquired by the image sensor
into processed data in the scanner device. The processor may, based
on the images, derive data in the form of data for 3D geometry and
data for texture of the object.
[0084] The processed data or derived data might not need to be
distributed in the spatial domain. For example, the processed data
may be partly in the spatial domain, and partly in the temporal
domain. Further processing of the processed data may then be
applied to convert the processed data to purely spatial
domain-data. In one embodiment, the processed data is data for 3D
geometry, and as here explained, this may be processed data in the
spatial domain or the temporal domain, or a mix thereof.
[0085] An advantage of integrating the processor in the scanner
device is that less data needs to be transmitted by the scanner
device itself. Thus, to reduce the load of a wireless module
transferring data to an external processing device, it is
advantageous to process as much data as possible on the scanner
device.
[0086] Various processor(s) are known for processing images on
hand-held devices, but for rather simple processing, such as to
compare intensities or more generally to perform operations such as
multiplication and/or addition, a Field-Programmable Gate Array
(FPGA) processor is desired. Thus, in a preferred embodiment, the
processor comprises an FPGA-processor.
[0087] In a most preferred embodiment, the processor is further
configured for compressing the processed data. This may also enable
that a wireless module in the scanner device receives the processed
data in the form of compressed data from the processor and
wirelessly transmits the processed data in the form of compressed
data. Thus, in some embodiments, an FPGA processor both processes
and compresses data.
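A minimal sketch of this process-then-compress step is given below, with zlib standing in for whatever compression the device actually uses and a simple per-stack reduction standing in for the processing; both are illustrative assumptions.

import zlib
import numpy as np

def process_and_compress(raw_frames):
    # Placeholder processing: reduce the raw frames to processed data.
    processed = raw_frames.mean(axis=0).astype(np.float32)
    # Compress the processed data before handing it to the wireless module.
    return zlib.compress(processed.tobytes(), level=6)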
[0088] In one embodiment, the scanner device comprises a wireless
module which receives the processed data from the processor and
wirelessly transmits the processed data to an external processing
device. For the wireless module to receive the processed data from
the processor, the processor is configured to transmit the
processed data to the wireless module.
[0089] In another embodiment, the transmission of data to a
wireless module on the scanner device is performed by the
processor, preferably a central processing unit (CPU) comprising a
reduced instruction set computer (RISC) architecture. For example,
to transmit processed data to the wireless module, the processor
may be in the form of an Advanced RISC Machines (ARM)-processor
such as based on 32 bits or 64 bits instructions.
[0090] In other words, in another embodiment, the processor
comprises an ARM-processor. An ARM-processor is different from an
FPGA processor, and the two types of processors are designed for
different tasks. Thus, in most preferred embodiments, the processor
comprises both an FPGA-processor and an ARM-processor.
[0091] In some embodiments, the processor is located external to
the scanner device, for example on an external processing
device.
[0092] An advantage of having the processor external to the scanner
device is that a processor need not be in the scanner device
itself. Accordingly, this may reduce the weight and size of the
scanner device.
[0093] In other embodiments, the scanning system further comprises
a processor configured to generate a 3D model of the object,
wherein the 3D model is generated based on the processed data and
according to the type of the scanning-tip recognized by the
recognition component and wherein the 3D model is generated based
on the first data for the 3D geometry, but wherein the 3D model is
not generated based on the second data for the 3D geometry, or
wherein the 3D model is generated based on the second data for the
3D geometry, but wherein the 3D model is not generated based on the
first data for the 3D geometry.
[0094] Such a processor is preferably located on an external
processing device. However, in some embodiments, it may also be
located on the scanner device.
[0095] In one embodiment, when the 3D model is not generated based
on the second data for the 3D geometry, then the second data for
the 3D geometry is compared to the first data for the 3D geometry,
whereby the second data for texture of the object is matched to the
3D-model.
[0096] In another embodiment, when the 3D model is not generated
based on the first data for the 3D geometry, then the first data
for the 3D geometry is compared to the second data for the 3D
geometry, whereby the first data for texture of the object is
matched to the 3D-model.
[0097] By only comparing the first or second data for the 3D
geometry to the second or first data for the 3D geometry, instead
of generating the 3D-model from both, data processing is optimized
both by increasing processing speed and by reducing data transfer.
[0098] Scanning-Tips and Recognition Element
[0099] According to the invention, each of the plurality of types
of scanning-tips is configured for providing light to the object in
an illumination-mode that differs for each of the plurality of
types of scanning-tips.
[0100] Providing of light to the object may in one embodiment be
via an optical element, for example via a mirror, located in the
scanning-tip, such that the light may be generated in the scanner
device, and directed to the scanning-tip and re-directed via the
mirror to the object. The light generated in the scanner device may
be generated from a light source residing inside the scanner
device, and external to the scanning-tip.
[0101] Providing of light to the object may in another embodiment
be directly from the scanning-tip, such that the light may be
generated in the scanning-tip. The light generated in the
scanning-tip may be generated from a light source residing inside
the scanning-tip and/or on the scanning-tip. In some embodiments,
the light source inside the scanning-tip and/or on the scanning-tip
may be a plurality of light sources, such as a plurality of light
emitting diodes (LEDs).
[0102] Further, according to the invention, the scanner device
comprises a recognition component for recognizing the type of
scanning-tip mounted to the mounting-interface.
[0103] In one embodiment, the recognition component comprises a
memory-reader configured to read recognition-data from an
integrated memory on each of the plurality of types of
scanning-tips.
[0104] In another embodiment, the type of scanning-tip, as
recognized by the recognition component, is in the form of
recognition-data, and wherein the scanner device is configured to
transmit the recognition-data to the controller.
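To illustrate how recognition-data may drive the controller, the following minimal sketch (Python; the identifiers and mode names are assumptions, since the disclosure does not fix the recognition-data format) maps a tip's recognition-data to the corresponding processing-mode:

    # Hypothetical mapping from recognition-data to processing-modes.
    PROCESSING_MODES = {
        "TIP_5A_WHITE_LIGHT": "first-processing-mode",
        "TIP_5C_INFRARED": "second-processing-mode",
    }

    def select_processing_mode(recognition_data: str) -> str:
        """Return the processing-mode that the controller should instruct
        the processor to operate in, given the recognition-data read from
        the integrated memory of the mounted scanning-tip."""
        try:
            return PROCESSING_MODES[recognition_data]
        except KeyError:
            raise ValueError(f"Unrecognized scanning-tip: {recognition_data!r}")

    select_processing_mode("TIP_5C_INFRARED")  # -> 'second-processing-mode'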
[0105] Illumination-Mode for Scanning Tips
[0106] In one embodiment, the illumination-mode for one type of
scanning-tip is defined by the wavelength of the light. For
example, one illumination-mode may be defined as white
light-illumination, where white light refers to light in the
wavelength-domain between 400 nm and 700 nm. Another
illumination-mode may be defined as infrared light-illumination,
for example with a wavelength around 850 nm. A third
illumination-mode may be defined as fluorescent light-illumination,
where blue light around 405-415 nm or UV light may be used to
excite a fluorescence response from the illuminated teeth.
[0107] In another embodiment, the illumination-mode for one type of
scanning tip is defined by the intensity of the light.
[0108] In yet another embodiment, the illumination-mode for one
type of scanning-tip is defined by the field-of-view of the
light.
[0109] In some embodiments, the illumination-mode for one type of
scanning-tip is defined by a pattern of the light.
[0110] In some embodiments, the illumination-mode for one type of
scanning-tip is defined by different wavelengths of the light,
whereby one type of scanning-tip switches between the different
wavelengths of the light. For example, a first type of scanning tip
may be configured for providing both white light and infrared light
to the object, and a second type of scanning tip may be configured
for providing both white light and blue light/UV light to excite a
fluorescence response from the illuminated object. One advantage of
such scanning tips in combination with the herein disclosed
scanning system is that the 3D model need not be generated
based on the 3D geometry provided by such tips. The 3D model may
have already been generated based on data provided by a tip that
does not switch between different wavelengths of light.
[0111] User-Interface
[0112] In one embodiment of the second aspect, the step of
receiving, from the user, one or more of said options for scanning is
provided by the user clicking on the one or more of said options in
the user-interface.
[0113] In a second embodiment of the second aspect, one of said
options for scanning is related to edentulous scanning. By
selecting this option, the method according to the second aspect of
the invention may instruct the user to firstly mount a first
scanning-tip that is configured to scan with a large field-of-view,
whereby the scanning tip is adapted to cover a substantial part of
the entire jaw (e.g. 50% of the jaw). Using such a scanning-tip
provides that 3D-registration relies on the overall jaw structure.
The method according to the second aspect of the invention may
thereafter instruct the user to mount a second scanning-tip that is
configured to scan with a small field-of-view, such as a
conventional intraoral scanning-tip. Using such a scanning-tip
provides that 3D-registration relies only on a part of the overall
structure.
[0114] Typically, when scanning an edentulous patient with a
conventional scanning-tip, the 3D-registration (which relies only
on the part of the overall structure) may be compromised due to
unbound gingiva that shifts around during scanning. However, by
using the first-scanning-tip, and then changing to the second
scanning-tip, 3D-registration of data related to the second
scanning-tip may be improved because the first scanning-tip may
provide a reference for the 3D-registration of data related to the
second scanning-tip. Further, by using the first-scanning-tip, and
then changing to the second scanning-tip, as described above and
according to the second aspect of the invention, processing time is
also reduced because registration errors need not be
corrected.
[0115] In some embodiments, the first scanning-tip is configured
for scanning with a larger field-of-view in comparison to the
second scanning-tip, whereby the first part of the
3D-representation is used as a reference model for the second part
of the 3D-representation being matched to the reference model. As
just explained above, such embodiments improve 3D-registration.
[0116] As previously explained, the step of receiving the first
information from the scanner device related to the first
scanning-tip and/or the step of receiving the second information
from the scanner device related to the second scanning-tip is
provided from a recognition component in the scanner device that
recognizes the type of scanning-tip when mounted to the scanner
device.
[0117] Additionally, and/or alternatively, the step of receiving
the first information from the scanner device related to the first
scanning-tip is provided from visual recognition of at least a part
of the first scanning-tip in the field-of-view of the first
scanning-tip and/or the step of receiving the second information
from the scanner device related to the second scanning-tip is
provided from visual recognition of at least a part of the second
scanning-tip in the field-of-view of the second scanning-tip.
[0118] In a preferred embodiment of the second aspect of the
invention, the step of displaying instructions to replace the first
scanning-tip with a second scanning-tip is based on
confirmation-input from a user, wherein the confirmation-input
comprises information confirming that the first part of the
3D-representation as generated is sufficient. For example, the user
may click on a button in the user-interface. The button may
comprise a text that indicates that the user has determined that
the 3D-representation is sufficient. The button may for example
also indicate that the user is now ready to proceed to the next
procedure in the process, whereby the user presses a "next" button.
Once the input is provided, the user is guided to the next step of
replacing the first scanning-tip with the second scanning-tip. The
herein disclosed confirmation-input from the user changes the
process of providing the 3D-representation, at least in that the
user provides input to the computer-implemented method such that it
can determine which step the method is in, and such that the
computer-implemented method can continue to the next step.
[0119] In a more preferred embodiment of the second aspect of the
invention, the first part of the 3D-representation, as confirmed
sufficient, is collected over time from: the user, and/or a
plurality of different users, thereby forming historical
3D-representations as confirmed sufficient, whereby the step of
displaying the instructions to replace the first scanning-tip with
a second scanning-tip is automatized and based on the historical
3D-representations as confirmed sufficient. By automating the
process as described here, the process is optimized, especially
such that the time to generate the final 3D-representation is
reduced and the process is made more reliable.
[0120] In a most preferred embodiment of the second aspect of the
invention, the historical 3D-representations as confirmed
sufficient are used as input for an algorithm configured to
determine when the 3D-representation as generated is sufficient,
and wherein the algorithm is based on averaging the historical
3D-representations, and/or wherein the algorithm is based on
machine learning and/or artificial intelligence.
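A minimal sketch of the averaging variant of such an algorithm is given below (Python; the scanned-surface-area criterion is an assumption chosen for illustration, since the disclosure does not fix the sufficiency metric):

    def is_sufficient(current_area: float, historical_areas: list[float]) -> bool:
        """Decide whether the first part of the 3D-representation is
        sufficient by comparing the currently scanned surface area to the
        average area of historical 3D-representations confirmed sufficient."""
        if not historical_areas:
            return False  # no history yet; fall back to manual confirmation
        threshold = sum(historical_areas) / len(historical_areas)
        return current_area >= threshold

    # When this returns True, the instruction to replace the first
    # scanning-tip with the second scanning-tip can be displayed automatically.
    is_sufficient(2450.0, historical_areas=[2400.0, 2500.0, 2350.0])  # -> True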
Example 1--a Scanning System and Operation-Modes Thereof
[0121] FIG. 1 shows an example of a scanning system 1 according to
the invention. FIG. 1 shows particularly a scanning system 1 for
scanning an object. The object to be scanned is not shown. The
scanning system comprises firstly a scanner device 2. The scanner
device 2 comprises: an image sensor 3 for acquiring images; a
mounting-interface 4 for detachably mounting at least one of a
plurality of types of scanning-tips, 5a, 5b, 5c, wherein each (5a
or 5b or 5c) of the plurality of types of scanning-tips, 5a, 5b,
5c, is configured for providing light to the object in an
illumination-mode that differs for each of the plurality of types
of scanning-tips, 5a, 5b, 5c. The scanner device further comprises
a recognition-component 6 for recognizing the type of scanning-tip,
5a or 5b or 5c, mounted to the mounting-interface 4. The scanning
system 1 secondly comprises a processor 7 configured for processing
the images acquired by the image sensor into processed data. The
scanning system 1 thirdly comprises a controller 8 configured for
controlling the operation of the processor 7 according to the type
of the scanning-tip, 5a or 5b or 5c, recognized by the recognition
component 6.
[0122] The controller 8 may be located either in the scanner device
2, such as inside a scanner housing 10 of the scanner device 2, or
external to the scanner device 2, such as located on an external
processing device 11, here shown as a laptop 11.
[0123] Alternatively, the controller 8 may be located in both the
scanner device 2 and external to the scanner device 2. For example,
a first part of the controller 8 is located in the scanner housing
10 and a second part of the controller 8 is located in the laptop
11.
[0124] When the controller 8 is located in the scanner device 2,
the controller 8 communicates with the processor 7 in the scanner
device 2 via a printed circuit board (PCB) 9. The PCB 9 transmits
data and control-instructions back and forth between the controller
8 and the processor 7.
[0125] When the controller is located external to the scanner
device 2, the external processing device 11 communicates with the
processor 7 in the scanner device via at least the communication
module 12. The PCB 9 might also be involved in the communication
between the scanner device 2, i.e. the processor 7 and the external
processing device 11, i.e. the controller 8. The communication
module 12 may for example be a wired communication module 12, for
example comprising a USB cable or an Ethernet cable, configured to
transmit data and control-instructions back and forth between the
scanner device 2 and the external processing device 11.
Alternatively, the communication module 12 may for example be a
wireless communication module 12 configured to wirelessly transmit
data and control-instructions back and forth between the scanner
device 2 and the external processing device 11.
[0126] In this example, the processor 7 is integrated in the
scanner device 2. However, in this example, the controller 8
is located only external to the scanner device 2.
[0127] The controller 8 is further configured for controlling the
processor 7 such that when a first type of scanning-tip is mounted
and recognized, the processor 7 is controlled to operate in a first
processing-mode corresponding to the first type of scanning-tip,
for example 5(a).
[0128] This works in the following way. The first type of scanning
tip 5(a) is for intra-oral scanning of teeth using white light,
the white light being emitted by a white light source residing in
the scanner device 2. The first type of scanning tip 5(a) comprises
a mirror located at the distal end of the scanning tip 5(a) with a
reflective surface inside the scanning tip such that when the
mirror receives light from the white light source, the scanning tip
5(a) provides light to the object in a first illumination-mode.
When the first type of scanning tip 5(a) is mounted to the scanner
device 2, the recognition-component 6, which comprises a
memory-reader configured to read recognition-data from an
integrated memory on the first type of scanning-tip 5(a), at least
reads which type of scanning tip is mounted to the scanner device
2. The type of scanning tip, here 5(a), as recognized by the
recognition-component 6, is in the form of recognition-data. This
recognition-data is transmitted to the external processing device
11 via a wireless module 12. The controller 8 now receives the
recognition-data. Based on that input, i.e. the recognition-data,
the controller 8 transmits a first set of control-instructions to
the scanner device 2, more specifically to the processor 7 via the
wireless module 12. Thereby, the processor 7 is instructed to
operate in a first processing-mode corresponding to the first type
of scanning-tip 5(a). When in the first processing mode, the
processor 7 processes a first plurality of images acquired with the
first illumination-mode, i.e. with the white light, to provide the
processed data in the form of first data for 3D geometry and first
data for texture of the object. The data for the 3D geometry is
related to 3D positions, i.e. points in space, not necessarily in
the form of spatial coordinates, but at least transformable
thereto. The data for the texture of the object is related to the
color of the surface of the object. The processed data is then
transmitted to the external processing device 11 via the wireless
module 12. The processing device comprises a processor configured
to generate a 3D-model 13 of the object, wherein the 3D-model 13 is
generated based on the processed data regardless of the type of the
scanning-tip recognized by the recognition component 6. The
3D-model 13 is finally displayed on a display 14 of the external
processing device 11, here shown as the screen 14 of the laptop
11.
[0129] The controller 8 is even further configured for controlling
the processor 7 such that when a second type of scanning-tip is
mounted and recognized, the processor 7 is controlled to operate in
a second processing-mode corresponding to the second type of
scanning-tip, for example 5(b) or 5(c).
[0130] This works in the following way. First, the first type of
scanning-tip 5(a) is replaced by the second type of scanning-tip,
in this example chosen to be 5(c). This is performed by un-mounting
the first type of scanning-tip 5(a) and then mounting the second
type of scanning-tip 5(c).
[0131] The second type of scanning tip 5(c) is for intra-oral
scanning of teeth using infrared light, the infrared light being
emitted by a plurality of infrared light sources residing in the
distal end of the scanning tip 5(c) such that the scanning tip 5(c)
provides light to the object in a second illumination-mode, the
second illumination-mode being different from the first
illumination-mode. When the second type of scanning tip 5(c) is
mounted to the scanner device 2, the recognition-component 6, which
comprises a memory-reader configured to read recognition-data from
an integrated memory on the second type of scanning-tip 5(c), at
least reads which type of scanning tip is mounted to the scanner
device 2. The type of scanning tip, here 5(c), as recognized by the
recognition-component 6, is in the form
of recognition-data. This recognition-data is transmitted to the
external processing device 11 via a wireless module 12. The
controller 8 now receives the recognition-data. Based on that
input, i.e. the recognition-data, the controller 8 transmits a
second set of control-instructions to the scanner device 2, more
specifically to the processor 7 via the wireless module 12.
Thereby, the processor 7 is instructed to operate in a second
processing-mode corresponding to the second type of scanning-tip
5(c). When in the second processing mode, the processor 7 processes
a second plurality of images acquired with the second
illumination-mode, i.e. with the infrared light, to provide the
processed data in the form of second data for 3D geometry and
second data for texture of the object. The second processing-mode
is different from the first processing-mode. The data for the 3D
geometry is related to 3D positions, i.e. points in space, not
necessarily in the form of spatial coordinates, but at least
transformable thereto. The data for the texture of the object is
related to the color of the internal structure of the object. The
processed data is then transmitted to the external processing
device 11 via the wireless module 12. The processing device
comprises a processor configured to generate a 3D-model 13 of the
object, wherein the 3D-model 13 is generated based on the processed
data and here based on the type of the scanning-tip recognized by
the recognition component 6. This means that because the second
type of scanning-tip 5(c) is recognized and due to this tip
emitting infrared light configured to record internal structures,
the 3D-model-generation in the external processing device 11
updates the 3D-model 13, as generated using the white light, with
internal structures of the tooth.
[0132] The updated 3D-model 13 is finally displayed on a display 14
of the external processing device 11, here shown as the screen 14
of the laptop 11.
Example 2--Processing-Mode in Intra-Oral Scanning Mode
[0133] In this example, the scanning system 1 is configured for
performing intra-oral scanning of at least a portion of a
tooth.
[0134] Further, in this example, the scanning system 1, more
particularly the processor 7, is configured to operate in a first
processing-mode corresponding to scanning intra-orally using a
scanning-tip (5a) therefor.
[0135] This first processing-mode is initiated by mounting the
intra-oral tip 5(a) with a mirror in the distal end that covers the
entire optical field-of-view and directs light from the scanner
device 2 towards the object to be scanned. The intra-oral tip 5(a)
is shown mounted in FIG. 2. This tip is configured for being
inserted into the mouth of a patient.
[0136] In this example, the processor 7 processes images 15
acquired by the image sensor 3 into processed data 16 while a focus
lens is adjusted. The focus lens adjustment is confined to a
specific span length, where the focus lens is moved back and forth
while recording a plurality of 2D-images 15 of a projected pattern
on the object. The processed data 16 is extracted by processing the
plurality of 2D images 15.
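A compact depth-from-focus sketch along these lines is shown below (Python with numpy; the gradient-based focus measure is one common choice and is an assumption here, since the disclosure does not specify how the 2D-images 15 are reduced to processed data 16):

    import numpy as np

    def depth_from_focus(images: np.ndarray, lens_positions: np.ndarray) -> np.ndarray:
        """Estimate per-pixel depth from a focus sweep.

        images: (n, h, w) stack of 2D-images of the projected pattern,
                one per focus-lens position within the span length.
        lens_positions: (n,) lens position (depth proxy) for each image."""
        gy, gx = np.gradient(images.astype(np.float64), axis=(1, 2))
        focus_measure = gx ** 2 + gy ** 2  # local contrast per pixel
        # The lens position maximizing contrast at a pixel approximates
        # the depth at which that pixel is in focus.
        best = np.argmax(focus_measure, axis=0)
        return lens_positions[best]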
[0137] When the scanning tip 5(a) is mounted to the scanner device
2, the scanner device 2 reads recognition data 17 in the form of an
identification-number 17 of the tip which is stored on an internal
memory of the scanning tip 5(a). The identification-number 17 is
forwarded to the controller 8 located on the externally connected
computer 11. Based on the scanner-tip identification-number 17, the
controller 8 instructs the processor 7 on the scanner device 2 to
process a continuous sequence of 2D-images 15 recorded with a
white-light illumination pattern on the object. The white light
enables both data for 3D geometry and data for texture to be
derived from each 2D-image. In other words, the processed data 16
is in the form of data for 3D geometry and in the form of data for
texture.
[0138] Accordingly, the processor 7 on the scanner device 2
processes a subset of the plurality of 2D-images 15 to construct a
combined depth-frame and color-frame called a sub-scan. In this
example, the processed data 16 from the processor 7 is thus
dependent on the processing-mode.
[0139] The processed data 16 of a sub-scan is sent as a data
package to a scanning application on the externally connected
computer 11 responsible for generating the 3D-model 13.
[0140] The primary task of the scanning application is to process
individual patches of data packages and reconstruct them to a
complete or global scan. That task can be broken down into two
primary routines:
[0141] Registration: The location of the sub-scan is located in
relation to the global scan.
[0142] Stitching: The sub-scan is fused into the global scan as
registered above.
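The two routines above may be sketched as follows (Python with numpy; a rigid Kabsch alignment with already-paired point correspondences stands in for the actual registration algorithm, which the disclosure does not detail):

    import numpy as np

    def register(sub_scan: np.ndarray, global_scan: np.ndarray):
        """Locate the sub-scan relative to the global scan as a rigid
        transform (Kabsch algorithm), assuming corresponding (n, 3)
        point arrays are already paired up."""
        p, q = sub_scan.mean(axis=0), global_scan.mean(axis=0)
        h = (sub_scan - p).T @ (global_scan - q)
        u, _, vt = np.linalg.svd(h)
        d = np.sign(np.linalg.det(vt.T @ u.T))  # guard against reflections
        rotation = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
        translation = q - rotation @ p
        return rotation, translation

    def stitch(sub_scan, global_scan, rotation, translation):
        """Fuse the registered sub-scan into the global scan."""
        placed = sub_scan @ rotation.T + translation
        return np.vstack([global_scan, placed])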
[0143] When the scanning-tip identification-number 17 is
recognized, the processor 7 may perform a post-treatment of the
processed data before transmitting it. For example, the processed
data 16, or part of it, may be mirrored by the processor 7 prior to
being transmitted. Hereafter, the registration and the stitching
can be performed on the external processing device 11.
Alternatively, the processed data may be mirrored on the external
processing device 11.
Example 3--Processing-Mode in Ear Scanning Mode
[0144] In this example, the scanning system 1 is configured for
performing in-ear scanning of at least a portion of an ear.
Further, in this example, the scanning system 1, more particularly
the processor 7, is configured to operate in a second
processing-mode corresponding to scanning in-ear using a
scanning-tip 5(b) therefor.
[0145] This second processing-mode is initiated by mounting the
ear-tip 5(b). This scanning tip 5(b) is open and forward-looking,
with a small mirror placed on an extended arm in the distal end
that covers only part of the optical field-of-view and directs a
portion of the light from the scanner device 2
towards the object to be scanned. The in-ear tip 5(b) is shown
mounted in FIG. 3. This scanning tip 5(b) is configured for being
inserted into the ear of a patient.
[0146] In this example, the processor 7 processes images 15
acquired by the image sensor 3 into processed data 16 while a focus
lens is adjusted. The focus lens adjustment is confined to a
specific span length, where the focus lens is moved back and forth
while recording a plurality of 2D-images 15 of a projected pattern
on the object. The processed data 16 is extracted by processing the
plurality of 2D images 15.
[0147] When the scanning tip 5(b) is mounted to the scanner device
2, the scanner device 2 reads recognition data 17 in the form of an
identification-number 17 of the tip 5(b) which is stored on an
internal memory of the scanning tip 5(b). The identification-number
17 is forwarded to the controller 8 located on the externally
connected computer 11. Based on the scanner-tip
identification-number 17, the controller 8 instructs the processor
7 on the scanner device 2 to process a continuous sequence of
2D-images 15 recorded with a white-light illumination pattern on
the object. The white light enables both data for 3D geometry and
data for texture to be derived from each 2D-image. In other words,
the processed data 16 is in the form of data for 3D geometry and in
the form of data for texture.
[0148] Accordingly, the processor 7 on the scanner device 2
processes a subset of the plurality of 2D-images 15 to construct a
combined depth-frame and color-frame called a sub-scan. In this
example, the processed data 16 from the processor 7 is thus
dependent on the processing-mode.
[0149] The processed data 16 of a sub-scan may be sent as a data
package to a scanning application on the externally connected
computer 11 responsible for generating the 3D-model 13.
[0150] The primary task of the scanning application is to process
individual patches of data packages and reconstruct them to a
complete or global scan. That task can be broken down into two
primary routines:
[0151] Registration: The location of the sub-scan is located in
relation to the global scan.
[0152] Stitching: The sub-scan is fused into the global scan as
registered above.
[0153] When the scanning-tip identification-number 17 is
recognized, the processor 7 may perform a post-processing of the
processed data 16. In this manner, post-processed data 18 is
obtained. For example, the post-processed data 18, or part of it
18(a), may be mirrored by the processor 7 prior to being processed.
FIG. 3 shows how part of the post-processed data 18(a) is partly
mirrored, and that noise 18(b) present in the post-processed data
18 is removed in the processed data 16. In this case, the processor
7 operates differently from the example described in Example 2, and
the controller 8 is configured for controlling the operation of the
processor 7 according to the type of the scanning-tip (5a, 5b or
5c) recognized by the recognition component 6. Hereafter, the
registration and the stitching can be performed on the external
processing device 11. Alternatively, the processed data may be
mirrored or partly mirrored on the external processing device
11.
[0154] Regardless of where the processing and/or post-processing
takes place, the processed data 16 is processed dependent on the
specific scanning-tip. In other words, a tip-specific data-mask may
be applied in the post-processing for reflecting and correcting a
portion of the processed data 16. More specifically, the
scanning-tip identification-number 17 may be associated with a
specific reflection-matrix to be applied by the processor 7 and/or
the scanning application as a data post-processing mask when the
tip 5(b) is recognized on the scanner device 2.
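A sketch of applying such a tip-specific mask is given below (Python with numpy; the table contents and the tip identifier are illustrative assumptions, not the actual calibration data):

    import numpy as np

    # Hypothetical association between scanning-tip identification-numbers
    # and the reflection-matrix applied as a post-processing data-mask.
    REFLECTION_MATRICES = {
        "tip-5b": np.array([[-1.0, 0.0], [0.0, 1.0]]),  # mirror about the y-axis
    }

    def apply_data_mask(points: np.ndarray, tip_id: str) -> np.ndarray:
        """Reflect and correct a portion of the processed data according
        to the reflection-matrix associated with the recognized tip.
        points: (n, 2) image-plane coordinates."""
        matrix = REFLECTION_MATRICES.get(tip_id)
        if matrix is None:
            return points  # tips without a mask pass through unchanged
        return points @ matrix.T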
Example 4--Processing-Mode in Infrared Transillumination Scanning
Mode
[0155] In this example, the scanning system 1 is configured for
performing intra-oral scanning of at least a portion of a tooth
using at least infrared light.
[0156] Further, in this example, the scanning system 1, more
particularly the processor 7, is configured to operate in a second
processing-mode corresponding to scanning intra-orally with at
least infrared light using a scanning-tip 5(c) therefor.
[0157] This second processing-mode is initiated by mounting the
intra-oral tip 5(c) with a mirror in the distal end that covers the
entire optical field-of-view and directs light from the scanner
device 2 towards the object to be scanned. The intra-oral tip 5(c)
is shown mounted in FIG. 4. This tip is configured for being
inserted into the mouth of a patient. Further, in one configuration
of the scanning-device 2, the light is selected to trans-illuminate
the object to be scanned.
[0158] When the scanning tip 5(c) is mounted to the scanner device
2, the scanner device 2 reads recognition data 17 in the form of an
identification-number 17 of the tip 5(c) which is stored on an
internal memory of the scanning tip 5(c). The identification-number
is forwarded to the controller 8 located on the externally
connected computer 11. Based on the scanner-tip
identification-number 17, the controller 8 instructs the processor
7 on the scanner device 2 to process a continuous sequence of
2D-images 15 recorded with an infrared-light illumination on the
object. To do this, the scanner device 2 is configured to direct
infrared light into the object, for example into a tooth, and the
surrounding gingiva. The scanning tip 5(c) is configured such that
the infrared light propagates through the gum and tooth material to
illuminate the tooth from the inside. The
infrared light illumination is controlled by the controller 8 and
based on the scanner-tip identification-number 17. In other words,
when the controller 8 receives the scanner-tip
identification-number 17, the controller 8 additionally instructs
the scanner device 2 to emit the infrared light. Instructions 19
from the controller and to the tip 5(c) are shown in FIG. 4.
Further, the controller 8 additionally instructs the scanner device
2 to emit the white light.
[0159] In this manner, a regular sequence of images 15 is recorded
with the white-light illumination. However, at a specific point in
time, the white light recording is momentarily interrupted to
record a single image 20 with infrared illumination. The
interruption is based on scan data feedback 21 between the
controller 8 and the scanner device 2, the feedback 21 being also
based on data 22 from the processor 7. The data 22 from the
processor 7 may for example be a 2D-image index-number of the
infrared image 20. The index-number may be dynamically determined
for each image in the sequence of images 15.
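The momentary interruption can be sketched as a simple illumination schedule (Python; the index value below is illustrative, standing in for the dynamically determined index-number):

    def illumination_schedule(n_frames: int, infrared_index: int):
        """Yield the illumination for each frame: white light throughout,
        momentarily interrupted at the dynamically determined index to
        record a single infrared image."""
        for i in range(n_frames):
            yield "infrared" if i == infrared_index else "white"

    list(illumination_schedule(8, infrared_index=5))
    # -> ['white', 'white', 'white', 'white', 'white', 'infrared', 'white', 'white']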
[0160] Further, when in the second processing-mode, the processor 7
processes the white light images to derive both data for 3D
geometry and data for texture for the surface. Further, the
processor 7 processes the single infrared light image to derive
data for texture of the internal structure of the object. Finally,
the processor correlates data for the texture of the internal
structure of the object to the data for the 3D geometry.
[0161] In this example, the scanning application correlates the
infrared image 20 to a corresponding position on the 3D-model
13.
Example 5--Processing-Mode in Fluorescence Scanning Mode
[0162] In this example, the scanning system 1 is configured for
performing intra-oral scanning of at least a portion of a tooth
using at least fluorescent light.
[0163] Further, in this example, the scanning system 1, more
particularly the processor 7, is configured to operate in a second
processing-mode corresponding to scanning intra-orally with at
least fluorescent light using a scanning-tip 5(d) therefor.
[0164] This second processing-mode is initiated by mounting the
intra-oral tip 5(d) with a mirror in the distal end that covers the
entire optical field-of-view and directs light from the scanner
device 2 towards the object to be scanned. The intra-oral tip 5(d)
is shown mounted in FIG. 5. This tip is configured for being
inserted into the mouth of a patient. Further, in one configuration
of the scanning-device, the light is selected to excite a
fluorescence material in the object to be scanned.
[0165] When the scanning tip 5(d) is mounted to the scanner device
2, the scanner device 2 reads recognition data 17 in the form of an
identification-number 17 of the tip which is stored on an internal
memory of the scanning tip 5(d). The identification-number 17 is
forwarded to the controller 8 located on the externally connected
computer 11. Based on the scanner-tip identification-number 17, the
controller 8 instructs the processor 7 on the scanner device 2 to
process a continuous sequence of 2D-images 15 recorded with both a
white-light illumination pattern and a blue-light illumination
pattern on the object. To do this, the scanner device 2 is
configured to illuminate the object in an alternating manner, where
the light is switched between white light and blue light. The
switching of light is controlled by the controller 8 and based on
the scanner-tip identification-number 17.
[0166] In other words, when the controller 8 receives the
scanner-tip identification-number 17, the controller 8 additionally
instructs the scanner device 2 to emit the white light and the blue
light in the alternating manner. Instructions 19 from the
controller and to the tip 5(d) are shown in FIG. 5.
[0167] In this manner, every second image 23 contains depth
information and reflective color information, and every
intermediate image 24 between these images 23 contains the emitted
fluorescence texture response.
[0168] In the second processing-mode, the processor 7 is instructed
to bundle each pair of consecutive white light image 23 and blue
light image 24 together such that the depth information of the
white light image frame is attached to the fluorescence texture of
the subsequent blue light image. This results in processed data 16
with less data for 3D geometry in comparison to Example 2, but
including emitted fluorescent texture instead of reflected color
texture. The processed data 16 is sent
as a data package to a scanning application on the externally
connected computer 11 responsible for generating the 3D-model
13.
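The bundling step may be sketched as follows (Python; the two extraction helpers are hypothetical stand-ins for the depth and fluorescence-texture processing, which the disclosure does not specify at this level):

    def extract_depth(white_image):
        """Stand-in for deriving depth information from a white-light frame."""
        return white_image

    def extract_fluorescence(blue_image):
        """Stand-in for deriving fluorescence texture from a blue-light frame."""
        return blue_image

    def bundle_pairs(images):
        """Bundle each pair of consecutive white-light and blue-light
        images: depth from the white frame, fluorescence texture from the
        subsequent blue frame. images alternates white, blue, white, ..."""
        return [
            {"depth": extract_depth(white), "fluorescence": extract_fluorescence(blue)}
            for white, blue in zip(images[0::2], images[1::2])
        ]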
[0169] In this example, the scanning application only uses the data
for the 3D geometry to locate a specific location on the 3D-model
13 to overlay the fluorescent color texture correctly on the
3D-model 13.
Example 6--Processing-Mode in Reduced Field-of-View Scanning
Mode
[0170] In this example, the scanning system 1 is configured for
performing intra-oral scanning of at least a portion of a tooth
using a reduced field-of-view.
[0171] Further, in this example, the scanning system 1, more
particularly the processor 7, is configured to operate in a second
processing-mode corresponding to scanning intra-orally with at
least a reduced field-of-view using a scanning-tip 5(e) therefor.
[0172] This second processing-mode is initiated by mounting the
scanning-tip 5(e) with a mirror in the distal end that covers the
entire optical field-of-view and directs light from the scanner
device 2 towards the object to be scanned. The intra-oral tip 5(e)
is shown mounted in FIG. 6. This scanning-tip 5(e) is configured
for being inserted into the mouth of a patient. The field-of-view
in the scanning-tip may be reduced in comparison to the
scanning-tip 5(a) described in Example 2, or it may have the same
field-of-view as the scanning-tip 5(a) described in Example 2.
[0173] When the scanning tip is mounted to the scanner device 2,
the scanner device 2 reads recognition data 17 in the form of an
identification-number 17 of the tip which is stored on an internal
memory of the scanning tip 5(e). The identification-number 17 is
forwarded to the controller 8 located on the externally connected
computer 11. Based on the scanner-tip identification-number 17, the
controller 8 instructs the processor 7 on the scanner device 2 to
process a continuous sequence of 2D-images 15 recorded with, for
example, a reduced field-of-view.
[0174] As described above, the reduced field-of-view may be due to
the scanning-tip having a reduced field-of-view in comparison to
the scanning tip 5(a) described in Example 2. However, reduced
field-of-view may additionally or alternatively be defined by the
processor 7. For example, the processor 7 may be instructed by the
controller 8 to avoid processing an outer part 25 of the images
which is not exposed to reflected light due to the reduced
field-of-view of the scanning-tip. In other words, the processor 7
is instructed to only process a specific part 26 of each of the
plurality of images to construct a reduced depth and color
sub-scan.
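A sketch of restricting processing to the exposed part of each image is shown below (Python with numpy; the fixed margin is an illustrative assumption, as the actual part 26 would follow from the tip's optics):

    import numpy as np

    def crop_to_field_of_view(image: np.ndarray, margin: int) -> np.ndarray:
        """Process only the inner part 26 of the image; the outer part 25
        receives no reflected light with a reduced field-of-view tip."""
        return image[margin:-margin, margin:-margin]

    image = np.zeros((480, 640))
    reduced = crop_to_field_of_view(image, margin=80)  # shape (320, 480)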
[0175] The processed data 16 of a sub-scan is finally sent as a
data package to a scanning application on the externally connected
computer 11 responsible for generating the 3D-model 13.
Example 7--Processing-Mode in Enlarged Field-of-View Scanning
Mode
[0176] In this example, the scanning system 1 is configured for
performing face scanning of at least a portion of a face or larger
object using an enlarged field-of-view.
[0177] Further, in this example, the scanning system 1, more
particularly the processor 7, is configured to operate in a second
processing-mode corresponding to face scanning with at least an
enlarged field-of-view using a scanning-tip 5(f) therefor.
[0178] This second processing-mode is initiated by mounting the
scanning-tip 5(f) as shown mounted in FIG. 7a. The tip 5(f)
comprises an optical element for increasing the scan area to a size
of more than 50 mm and the scan volume by more than a factor of 10
compared to intra-oral scanning. The field-of-view in the
scanning-tip is
thus enlarged in comparison to the scanning-tip 5(a) described in
Example 2.
[0179] When the scanning tip is mounted to the scanner device 2,
the scanner device 2 reads recognition data 17 in the form of an
identification-number 17 of the tip which is stored on an internal
memory of the scanning tip 5(f). The identification-number 17 is
forwarded to the controller 8 located on the externally connected
computer 11. Based on the scanner-tip identification-number 17, the
controller 8 instructs the processor 7 on the scanner device 2 to
process a continuous sequence of 2D-images 15 recorded with the
enlarged field-of-view. Due to the enlarged field-of-view, the
processor receives distorted data. FIG. 7 shows how the distorted
data is first post-processed to post-processed data 18 and finally
processed to processed data 16. In this example, as in all of
Examples 2-7, the processor 7 operates differently dependent on the
type of scanning tip being mounted. In all examples, the controller
8 is configured for controlling the operation of the processor 7
according to the type of the scanning-tip (5a, 5b, 5c, 5d, 5e, 5f)
recognized by the recognition component 6.
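A minimal sketch of correcting such distortion is given below (Python with numpy; a first-order radial model is an assumption chosen for illustration, not the actual correction used by the disclosed system):

    import numpy as np

    def undistort(points: np.ndarray, k1: float) -> np.ndarray:
        """Approximately invert first-order radial distortion,
        x_d = x_u * (1 + k1 * r^2), for image-plane coordinates centered
        on the optical axis. points: (n, 2). Valid for small distortion."""
        r2 = np.sum(points ** 2, axis=1, keepdims=True)
        return points / (1.0 + k1 * r2)

    distorted = np.array([[0.10, 0.00], [0.30, 0.40]])
    corrected = undistort(distorted, k1=0.05)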
[0180] The processed data 16 is finally sent as a data package to a
scanning application on the externally connected computer 11
responsible for generating the 3D-model 13. Different schematic
versions of the scan tip 5(f) are shown in FIG. 7b, illustrated as
5(f)-1 to 5(f)-4 with increasing complexity.
[0181] Tip version 5(f)-1 shows an enlarged field-of-view tip with
a tilted lens to avoid lens reflection directly back to the image
sensor when mounted (not shown). This simple setup is easy to
produce but may, however, create distortions whereby the scan-signal
is reduced.
[0182] Tip version 5(f)-2 shows an enlarged field-of-view tip
similar to 5(f)-1 but with an added quarter-wave plate (QWP). In
this example, the QWP may be rotatable to minimize reflection.
[0183] Tip version 5(f)-3 shows an enlarged field-of-view tip
similar to 5(f)-2 but with an additional quarter-wave plate.
This configuration enables the tip to retain the polarization of
the light, thus enabling the tip to be used to scan translucent
objects like teeth and eyes.
[0184] Tip version 5(f)-4 shows an optimized enlarged field-of-view
tip comprising several optical elements for fine-tuning the
performance. This version has superior performance compared to
versions 5(f)-1 to 5(f)-3.
Example 8--Processing-Mode in Intra-Oral Scanning Mode
[0185] In this example, the scanning system 1 is configured for
performing intra-oral scanning of at least a portion of a
tooth.
[0186] Further, in this example, the scanning system 1, more
particularly the processor 7, is configured to operate in a first
processing-mode corresponding to scanning intra-orally using a
scanning-tip (5a) therefor.
[0187] This first processing-mode is initiated by mounting the
intra-oral tip 5(a) with a mirror in the distal end that covers the
entire optical field-of-view and directs light from the scanner
device 2 towards the object to be scanned. The intra-oral tip 5(a)
is shown mounted in FIG. 8. This tip is configured for being
inserted into the mouth of a patient.
[0188] In this example, the processor 7 processes images 15
acquired by the image sensor 3 into processed data 16 while a focus
lens is adjusted. The focus lens adjustment is confined to a
specific span length, where the focus lens is moved back and forth
while recording a plurality of 2D-images 15 of a projected pattern
on the object. The processed data 16 is extracted by processing the
plurality of 2D images 15.
[0189] When the scanning tip 5(a) is mounted to the scanner device
2, the scanner device 2 reads recognition data 17 in the form of an
identification-number 17 of the tip which is stored on an internal
memory of the scanning tip 5(a). The identification-number 17 is
forwarded to the controller 8 located on the externally connected
computer 11. Based on the scanner-tip identification-number 17, the
controller 8 instructs the processor 7 (in this example located
external to the scanner device 2) to process a continuous sequence
of 2D-images 15 recorded with a white-light illumination pattern on
the object. The white light enables both data for 3D geometry and
data for texture to be derived from each 2D-image. In other words,
the processed data 16 is in the form of data for 3D geometry and in
the form of data for texture.
[0190] Accordingly, the processor 7 on the external computer 11
processes the plurality of 2D-images 15 into processed data 16. In
this example, the processed data 16 from the processor 7 is
dependent on the processing-mode.
[0191] The processed data 16 is used by the external computer 11 to
generate a 3D-model 13.
Example 9--Overview of Examples of Scanning-Tips
[0192] Several scanning-tips (5a, 5b, 5c, 5d, 5e, and 5f), as
examples, are described in Examples 1-8. This example provides an
overview of the different scanning-tips.
[0193] Intra-Oral Scanning-Tip to Provide White Light
[0194] In one example, there is provided a replaceable scanning-tip
5(a) for a scanner device 2, the scanning-tip 5(a) being configured
for intra-oral scanning of teeth and for providing white light to
the teeth. The scanning-tip 5(a) may be for the scanner device 2
according to the invention, or for any type of scanner device. The
white light may be emitted by a white light source residing in the
scanner device 2. The replaceable scanning-tip 5(a) may comprise a
mirror located at the distal end of the scanning-tip 5(a) with a
reflective surface inside the scanning-tip such that when the
mirror receives light from the white light source, the scanning tip
provides light to the teeth. The mirror may also be configured for
receiving white light as back-reflected from the teeth, such that
when the mirror receives light from the teeth, the scanning tip 5(a)
provides light to the image sensor 3.
[0195] In-Ear Scanning-Tip to Provide White Light
[0196] In another example, there is provided a replaceable
scanning-tip 5(b) for a scanner device 2, the scanning-tip 5(b)
being configured for in-ear scanning of the inside of an ear and
for providing white light to the inside of the ear. The
scanning-tip 5(b) may be for the scanner device 2 according to the
invention, or for any type of scanner device. The white light may
be emitted by a white light source residing in the scanner device
2. The replaceable scanning-tip 5(b) may comprise a mirror located
at the distal end of the scanning tip 5(b) with a reflective
surface inside the scanning-tip 5(b) such that when the mirror
receives light from the white light source, the scanning-tip 5(b)
provides light to the inner ear. The mirror may also be configured
for receiving white light as back-reflected from the inner ear,
such that when the mirror receives white light from the inner ear,
the scanning tip 5(b) provides white light to the image sensor 3.
For the scanning-tip 5(b) to be inserted into the ear, the mirror
may be dimensioned according to dimensions of an inner ear.
[0197] Intra-Oral Scanning-Tip to Provide White Light and Infrared
Light
[0198] In a third example, there is provided a replaceable
scanning-tip 5(c) for a scanner device 2, the scanning-tip 5(c)
being configured for intra-oral scanning of teeth and for
providing white light and infrared light to the teeth. The
scanning-tip 5(c) may be for the scanner device 2 according to the
invention, or for any type of scanner device. The white light may
be emitted by a white light source residing in the scanner device
2. The infrared light may be emitted by an infrared light source or
a plurality of light sources located in or on the replaceable
scanning-tip 5(c). The replaceable scanning-tip 5(c) may comprise a
mirror located at the distal end of the scanning-tip 5(c) with a
reflective surface inside the scanning-tip 5(c) such that when the
mirror receives light from the white light source, the scanning tip
5(c) provides white light to the teeth. The mirror may also be
configured for receiving white light as back-reflected from the
teeth, such that when the mirror receives white light from teeth,
the scanning-tip 5(c) provides white light to the image sensor 3.
Further, the mirror may also be configured for receiving infrared
light as back-reflected from the teeth, such that when the mirror
receives infrared light from the teeth, the scanning tip 5(c)
provides infrared light to the image sensor 3.
[0199] Intra-Oral Scanning-Tip to Provide White Light and
Fluorescent Light
[0200] In a fourth example, there is provided a replaceable
scanning-tip 5(d) for a scanner device 2, the scanning-tip 5(d)
being configured for intra-oral scanning of teeth and for
providing white light and fluorescent light to the teeth. The
scanning-tip 5(d) may be for the scanner device 2 according to the
invention, or for any type of scanner device. Both the white light
and fluorescent light may be emitted by a white light source and a
fluorescent light source residing in the scanner device 2, for
example a single light source configured to emit both white light
and fluorescent light. The replaceable scanning-tip 5(d) may
comprise a mirror located at the distal end of the scanning-tip
5(d) with a reflective surface inside the scanning-tip 5(d) such
that when the mirror receives light from the white light source and
the fluorescent light source, the scanning tip 5(d) provides white
light and fluorescent light to the teeth. The mirror may also be
configured for receiving white light and fluorescent light as
back-reflected from the teeth, such that when the mirror receives
white light and fluorescent light from the teeth, the scanning tip
5(d) provides white light and fluorescent light to the image sensor
3.
[0201] Intra-Oral Scanning-Tip to Provide White Light and Reduced
Field-of-View
[0202] In a fifth example, there is provided a replaceable
scanning-tip 5(e) for a scanner device 2, the scanning-tip 5(e)
being configured for intra-oral scanning of teeth and for
providing white light to the teeth and with a reduced
field-of-view. The scanning-tip 5(e) may be for the scanner device
2 according to the invention, or for any type of scanner device.
The white light may be emitted by a white light source residing in
the scanner device 2. The replaceable scanning-tip 5(e) may
comprise a mirror located at the distal end of the scanning-tip
5(e) with a reflective surface inside the scanning-tip such that
when the mirror receives light from the white light source, the
scanning tip provides light to the teeth with a reduced field of
view. The mirror may also be configured for receiving white light
as back-reflected from the teeth, such that when the mirror
receives light from the teeth, the scanning tip 5(e) provides light to
the image sensor 3.
[0203] Face Scanning-Tip to Provide White Light and Enlarged
Field-of-View
[0204] In a sixth example, there is provided a replaceable
scanning-tip 5(f) for a scanner device 2, the scanning-tip 5(f)
being configured for surface scanning of a face and for providing
white light to the face and with an enlarged field-of-view. The
scanning-tip 5(f) may be for the scanner device 2 according to the
invention, or for any type of scanner device. The white light may
be emitted by a white light source residing in the scanner device
2. The replaceable scanning-tip 5(f) may be open-ended such that
when the optical element receives light from the white light
source, the scanning-tip 5(f) provides light to the face with an
enlarged field of view. The open-ended opening may be configured
for receiving white light as back-reflected from the face, such
that when the open-ended opening receives light from the face, the
scanning tip 5(f) provides light to the image sensor 3.
Example 10--Processing-Mode in Dual Angle Mirror Scanning Mode
[0205] In this example, the scanning system 1 is configured for
performing smooth scanning, ensuring that data can be obtained
from challenging areas. These areas may for example be the mesial
surface of the 2nd or 3rd molars and the so-called anterior
crossover. When performing a full jaw intraoral scan, the scanning
session is typically initiated on the occlusal surface of a molar.
The digital model of the jaw is continuously generated as the
scanner device is moved along the dental arch. At some point, the
scanner device is moved across the canines and incisal edge. This
area is particularly challenging for the scanner device as the top
view of the teeth is very small due to the nature of the tooth
morphology, which results in limited 3D-information. Typically,
this situation is handled by instructing the operator to perform a
wiggling scanning movement, to continuously record the facial and
lingual/palatal surfaces in order to ease an accurate model
reconstruction. The scanning probe shown in FIG. 9 solves this
issue by using a dual angle mirror scan tip 5(g) which is
configured for simultaneously recording 3D-data from an object from
multiple angles, hence creating larger patches of 3D-data.
Accordingly, this tip is configured to access areas in the oral
cavity which are otherwise hard to reach.
[0206] Further, in this example, the scanning system 1, more
particularly the processor 7, is configured to operate in a second
processing-mode corresponding to scanning intra-orally with at
least a split-view using a dual angle mirror scanning-tip 5(g)
therefor.
[0207] This second processing-mode is initiated by mounting the
dual-angle-mirror-scanning-tip 5(g) with an optical element in the
distal end separated into at least two individual reflecting
segments with different angles relative to the incident light
originating from the scanner device. The individual reflecting
segments cover the entire optical field-of-view and direct light
from the scanner device 2 (not shown) towards the object to be
scanned. The distal part of the tip 5(g) is shown in a cross
sectional view in FIG. 9. The tip 5(g) comprises an optical element
comprising a first segment 28 and a second segment 29. The two
reflecting segments are arranged such that light reflected from the
second segment 29 is directed towards the object to be scanned at a
different angle than the light reflected from the first segment 28.
The two
segments 28 and 29 are positioned such that the individual
field-of-view from the segments overlap by a substantial amount in
the entire scan volume 30. The combined field-of-view in the
dual-angle-mirror scanning-tip is thus different in comparison to
the scanning-tip 5(a) described in Example 2.
[0208] When the dual-angle-mirror scanning tip is mounted to the
scanner device 2 (not shown), the scanner device 2 reads
recognition data 17 in the form of an identification-number 17 of
the tip which is stored on an internal memory of the scanning tip
5(g). The identification-number 17 is forwarded to the controller 8
located on the externally connected computer 11. Based on the
dual-angle-mirror scanner-tip identification-number 17, the
controller 8 instructs the processor 7 on the scanner device 2 to
process a continuous sequence of 2D-images 15 recorded with the
mixed field-of-view. Due to the mixed field-of-view, the processor
receives distorted data containing 3D-information from the same
object view from different directions mixed together.
[0209] FIG. 9 shows how the mixed data (recorded with the two
segments 28 and 29) is first post-processed to post-processed data
18 and finally processed to processed data 16.
[0210] By comparison, Example 1 only had one mirror that was
located in the tip, whereby each point in the scan volume (i.e. the
volume in which the scanner device was able to collect data) maps
1-to-1 to a pixel in an image. In this example, when the dual angle
mirror tip 5(g) is attached, the geometry of the scan volume
becomes more complicated, and there may be points in the scan
volume 30 that can be simultaneously recorded from both mirror
segments 28 and 29, and hence map to more than one pixel.
[0211] The data processing of mapping between scan volume and depth
image and vice versa is used for registration and reconstruction of
the 3D-model. These transformations are modeled together with a
dedicated automatic calibration routine. Accordingly, in this
example, the processor 7 operates differently dependent on the type
of scanning-tip being mounted. In all examples, the controller 8 is
configured for controlling the operation of the processor 7
according to the type of the scanning-tip (5a, 5b, 5c, 5d, 5e, 5f,
5g) recognized by the recognition component 6.
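The one-to-many mapping can be sketched as follows (Python; the two projection functions are purely illustrative stand-ins for the calibrated transformations between scan volume and depth image):

    def pixels_for_point(point, segments):
        """With the dual angle mirror tip, a point in the scan volume 30
        may be visible via both reflecting segments 28 and 29 and hence
        map to more than one pixel. Each segment is modeled as a
        projection returning a pixel, or None when the point is outside
        that segment's view."""
        return [pixel for segment in segments
                if (pixel := segment(point)) is not None]

    # Illustrative projections (not a real calibration).
    segment_28 = lambda p: (int(p[0]), int(p[1])) if p[2] > 0.0 else None
    segment_29 = lambda p: (int(p[0]) + 5, int(p[1])) if p[2] > 0.5 else None
    pixels_for_point((10.0, 20.0, 1.0), [segment_28, segment_29])
    # -> [(10, 20), (15, 20)]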
[0212] The processed data 16 is finally sent as a data package to a
scanning application on the externally connected computer 11
responsible for generating the 3D-model 13.
Example 11--a User-Interface
[0213] A dedicated user-interface is shown in FIG. 10(a-e). A first
part of the purpose of the user-interface is to guide the operator
through a scanning session whereby the operator is guided to use a
first scanning-tip and then to use a second scanning-tip. Another
part of the purpose of the user-interface is to efficiently provide
a 3D-model, by at least changing the process dependent on the input
from the user. This example will show how a 3D-model is efficiently
provided.
[0214] Typically, a 3D-model from an edentulous patient may be
challenging to provide. Further, scanning may be difficult due to
the lack of clear landmarks in the toothless jaw. Unbound gingiva
may also shift around during scanning and create difficulty for
registration algorithms relying on rigid objects to be scanned. A
computer-implemented method with instructions to use the scan tip
from Example 7 as well as the regular scan tip from Example 2 is
demonstrated in the following.
[0215] The example illustrates a computer-implemented method for
generating a 3D-representation 13 of an oral cavity displayed in a
graphical user-interface 30 on a screen in the following steps:
[0216] First, shown in FIG. 10a, the user is presented with a
display having a plurality of options for scanning 31, such that a
user is instructed, in the user-interface, to select one of said
options for scanning 32. For example, in the present case, the
operator wants to scan an edentulous patient, and the operator
therefore selects the option to scan in relation to e.g. a full
denture 32, for example by using a screen cursor (via a pointing
device), and/or by clicking on the screen, and/or by using the
scanner device.
[0217] Upon selection of the scan option, 32, the display will
shift to the display as shown in FIG. 10b. Based on the one option
for scanning 32, as received by the computer-implemented method,
the computer-implemented method provides instructions 33 for the
user to mount a first scanning-tip on a scanner device. In this case,
the user-interface prompts the user to mount an enlarged
field-of-view tip 5(f) to the scanner device 2.
[0218] Due to a recognition component in the scanner device 2, both
the scanner device 2 and the computer-implemented method register
that the tip 5(f) is mounted. Hereafter, the computer-implemented
method proceeds to the next step in the process of generating a
3D-model. In some embodiments, the scanning system may additionally
prompt the user to perform an optical calibration of the mounted
tip 5(f). The calibration may be pre-recorded.
[0219] The computer-implemented method receives first information
from the scanner device 2 related to the mounted scanning-tip when
mounted properly on the scanner device 2. This enables the computer
implemented method to change display mode and direct the user into
a scanning display as shown in FIG. 10c. The change in display mode
into the scanning display is by itself a first scanning indication
34 for the user to scan with the scanner device 2 having mounted
the first scanning-tip 5(f).
[0220] As just explained, and according to the second aspect of the
invention, the computer-implemented method displays, in the
graphical user-interface, and based on the first information from
the scanner device 2, a first scanning instruction 34 and/or a first
scanning indication 34 for the user to scan with the scanner device
2 having mounted the first scanning-tip 5(f).
[0221] In this case, and when in the display mode of the scanning
display, the graphical user-interface further displays a
live-view-box as an additional part of the first scanning
indication 34 (right lower corner of FIG. 10c), and a 3D-view-box
(in the middle of FIG. 10c) where the 3D-model 13 is generated, or
to be generated, as another additional part of the first scanning
indication.
[0222] After changing to the scanning display, scanning with the
scanner device 2 may be initiated by pressing a scan button on the
scanner device or on the screen in the user-interface. In some
embodiments, the live-view box and/or the 3D-view box may appear on
the scanning display after initiating scanning.
[0223] Further, in the scanning display (FIG. 10c), which provides
the first scanning indication 34 for the user to scan with the
scanner device 2, the system receives scan data from the scanner
device, which is used to construct a first part of the
3D-representation 13(a).
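For illustration, accumulating the received scan data into the first
part of the 3D-representation 13(a) could be sketched as below; the
PartialModel class is a hypothetical stand-in, and a real pipeline
would rigidly register each frame before appending it.

    import numpy as np

    class PartialModel:
        # First part of the 3D-representation 13(a), grown frame by frame.
        def __init__(self) -> None:
            self.points = np.empty((0, 3))

        def add_frame(self, frame_points: np.ndarray) -> None:
            # frame_points: (N, 3) surface points from one acquired image.
            self.points = np.vstack([self.points, frame_points])

    model_13a = PartialModel()
    model_13a.add_frame(np.random.rand(100, 3))  # dummy frame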
[0224] Here, the jaw is scanned with the enlarged field-of-view tip
5(f). The enlarged field-of-view tip 5(f) is located outside the
mouth or only slightly inserted into the mouth. Since the enlarged
field-of-view tip 5(f) has a field-of-view that covers a
substantial part of the entire jaw (as shown in the live view box
34) (e.g. 50% of the jaw), the registration relies on the overall
jaw structure. The scan of the entire jaw obtained with the enlarged
field-of-view tip 5(f) is not, by itself, adequate for clinical
purposes. It may only cover the buccal side of the jaw, since the tip
5(f) cannot be inserted deep into the mouth and moved around to
obtain images from e.g. the lingual side. The enlarged field-of-view
tip 5(f) also has a lower resolution, since it expands the
field-of-view of the scanner device 2. Therefore, in this case, a
more detailed scan with a regular scan tip 5(a) may be used to
satisfy the clinical needs.
Thus, when a sufficient portion of the jaw has been scanned with
the enlarged field-of-view tip 5(f), the user may confirm that the
first part of the 3D-representation 13(a) as generated is
sufficient by clicking a "next button" 35 to continue to generate
the final 3D-model 13(c). Accordingly, the user has here provided
confirmation-input by clicking on the "next button" 35, wherein the
confirmation-input comprises information confirming that the first
part of the 3D-representation 13(a) as generated is sufficient. In
this case, the user has determined that the first part of the
3D-model 13(a) as shown in FIG. 10c is sufficient.
[0225] The user may generate several first parts of the
3D-representation 13(a), for example a lower jaw and an upper jaw.
Accordingly, the user may confirm several times that the first part
of the 3D-representation 13(a) as generated is sufficient, for
example by clicking the first "next button" 35 (indicating that the
lower jaw is sufficient) and then a second "next button" 36
(indicating that the upper jaw is sufficient) to continue to
generate the complete 3D-model 13(c).
[0226] In addition, the first part of the 3D-representation 13(a)
as confirmed sufficient may be collected over time from the user,
and/or a plurality of different users. These 3D-representations
13(a) may thereby form historical 3D-representations as confirmed
sufficient, whereby the step of displaying the instructions to
replace the first scanning-tip with a second scanning-tip is
automated and based on the historical 3D-representations as
confirmed sufficient. In other words, the step of confirming that
the first 3D-representation 13(a) is sufficient may be taken over
by an automated procedure based on the historical
3D-representations. For example, the historical 3D-representations
as confirmed sufficient may be used as input for an algorithm
configured to determine when the first part of the
3D-representation 13(a) as generated is sufficient, and wherein the
algorithm is based on averaging the historical 3D-representations,
and/or wherein the algorithm is based on machine learning and/or
artificial intelligence (AI). Accordingly, the input as provided by
the user changes the process of obtaining the final
3D-representation 13(c). For example, due to the input from the
user(s), the user-interface may change such that it is no longer
possible to press the "next button". Instead, a green tick may
automatically appear in the "next buttons" 35 and 36 once the first
3D-representation 13(a) is sufficient.
Thereby, the input from the user makes the process of generating
the final 3D-representation 13(c) much more efficient, and in fact
also more reliable.
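One very simple instance of such an automated check, assuming the
historical 3D-representations are summarized by their point counts
and averaged as described above, is sketched below; the feature
choice and the function names are assumptions, and a machine-learning
variant would replace the threshold with a trained model.

    import numpy as np

    def sufficiency_threshold(historical_counts: list) -> float:
        # Average over the historically confirmed-sufficient models.
        return float(np.mean(historical_counts))

    def is_sufficient(current_count: int, historical_counts: list) -> bool:
        # When True, the green tick in the "next buttons" 35 and 36
        # could be set automatically instead of awaiting a click.
        return current_count >= sufficiency_threshold(historical_counts)

    print(is_sufficient(12000, [10000, 11500, 9800]))  # True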
[0227] In addition to the judgement of the user(s), the human jaw
has a certain recognizable shape, and it may thus also be possible
to teach the algorithm to analyze that shape. Accordingly, the
algorithm may be optimized beyond the user input alone. AI-assisted
data analysis may be enabled or disabled in the user-interface.
[0228] Upon completion of the entire jaw scan(s) with the enlarged
field-of-view tip 5(f), the computer-implemented method prompts the
operator with a second mounting instruction 37 to mount the regular
scan tip 5(a), thus replacing the first scanning-tip 5(f) with the
second scanning-tip 5(a). This is shown in FIG. 10d.
[0229] If the regular tip 5(a) is a tip with electronics, the
computer-implemented method may register that the regular tip 5(a)
is mounted and the computer-implemented method may then proceed to
the next step in the computer-implemented method. If the tip 5(a)
is not a tip with electronic connectors, it may be possible for the
computer-implemented method to identify the tip 5(a) by analyzing
visual characteristics of the recorded images in the scanner device
2. This may also allow the computer-implemented method to continue
to the next step in the computer-implemented method.
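As an illustrative sketch of such an identification, the fraction of
the sensor that is illuminated is assumed here to differ between the
enlarged field-of-view tip 5(f) and the regular tip 5(a); both the
discriminating feature and the threshold are assumptions, and a
practical system could use any image statistic or a trained
classifier.

    import numpy as np

    def identify_tip(image: np.ndarray, threshold: float = 0.5) -> str:
        # image: 2D grayscale array scaled to [0, 1]. Tips are told
        # apart by how much of the sensor they illuminate.
        illuminated = float((image > 0.1).mean())
        return "5f" if illuminated > threshold else "5a"

    tip = identify_tip(np.random.rand(480, 640))  # dummy image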
[0230] As just described, and displayed in FIG. 10d, the
user-interface, based on the first scan data as received, now
displays instructions 37 to replace the first scanning-tip with a
second scanning-tip 5(a). The user is prompted to replace the
enlarged field-of-view tip 5(f) by a standard scan tip 5(a) to
complete the model. Again, the computer-implemented method
recognizes the replacement of the scan tip and directs the user
back to the scanning display (now shown in FIG. 10e, but similar to
the scanning display in FIG. 10c) in the user-interface to continue
scanning with the standard scan tip 5(a). The standard tip 5(a) is
configured to scan inside the oral cavity and is easily maneuvered
around to capture data from different angles.
[0231] During scanning, the computer-implemented method receives
second scan data from the scanner device 2 with the second
scanning-tip 5(a), wherefrom a second part of the 3D-representation
13(b) is generated. The second part of the 3D-representation 13(b),
and as shown in FIG. 10e, may be from areas, which were not reached
(or properly resolved) with the first (enlarged field-of-view)
scanning-tip 5(f). The second part of the 3D-representation 13(b),
which is in the form of patches (due to being recorded with the
smaller field-of-view scanning-tip 5(a), also seen in the live-view
box 34 in the lower left corner, in comparison to the enlarged
field-of-view scanning-tip 5(f)), is recorded and registered onto the
first part of the 3D-representation 13(a) to obtain a complete
3D-model 13(c). In other words, the patches as just described have
a high degree of details. These patches are thus registered onto
the first partial model (the first 3D-representation 13a) which is
used as the framework for the reconstruction of the full model
13(c). In this manner, small registration errors are avoided.
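The disclosure does not name the registration algorithm; a common
choice for registering such high-detail patches onto a framework
model is iterative closest point (ICP), one step of which is sketched
below under that assumption. The step would be iterated until the
alignment converges.

    import numpy as np
    from scipy.spatial import cKDTree

    def icp_step(patch: np.ndarray, framework: np.ndarray) -> np.ndarray:
        # patch, framework: (N, 3) point arrays. One rigid-alignment
        # step: nearest neighbours on the framework model 13(a), then
        # the Kabsch (SVD) rotation and translation.
        nn = framework[cKDTree(framework).query(patch)[1]]
        p0 = patch - patch.mean(axis=0)
        q0 = nn - nn.mean(axis=0)
        U, _, Vt = np.linalg.svd(p0.T @ q0)
        if np.linalg.det(U @ Vt) < 0:  # guard against reflections
            Vt[-1] *= -1
        R = (U @ Vt).T
        t = nn.mean(axis=0) - patch.mean(axis=0) @ R.T
        return patch @ R.T + t  # patch aligned onto the framework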
[0232] The end-result, here of a single jaw, is a final
3D-representation 13(c) displayed in FIG. 10f.
* * * * *