U.S. patent application number 15/521572 was published by the patent office on 2017-11-09 as publication number 20170323472, titled "Methods and Systems for Surface Informatics Based Detection with Machine-to-Machine Networks and Smartphones."
The applicant listed for this patent is GALILEO GROUP, INC. The invention is credited to Donald Michael Barnes and James Michael Grichnik.
Publication Number: 20170323472
Application Number: 15/521572
Family ID: 55858224
Publication Date: 2017-11-09
United States Patent Application 20170323472
Kind Code: A1
Barnes; Donald Michael; et al.
November 9, 2017

METHODS AND SYSTEMS FOR SURFACE INFORMATICS BASED DETECTION WITH MACHINE-TO-MACHINE NETWORKS AND SMARTPHONES
Abstract
At a computer-enabled imaging device, a workflow including a plurality of time-stamped images is obtained. Each time-stamped image has a respective time point at which the respective time-stamped image was obtained. A plurality of time-stamped accelerometer interval readings and a plurality of time-stamped interval gyroscope readings are acquired, each of the readings having a respective time point at which the respective reading was acquired. A real-time translational and rotational trajectory of the computer-enabled imaging device is thereby obtained. Time-stamped coordinates of feature points in the workflow are acquired at a plurality of time points during which the time-stamped images were acquired. A dataset including the workflow, the time-stamped coordinates, the real-time translational and rotational trajectory, and the translational movement of the coordinates of the feature points is communicated to a data processing and display system for image processing and analysis.
Inventors: Barnes; Donald Michael (Melbourne, FL); Grichnik; James Michael (Miami Beach, FL)

Applicant: GALILEO GROUP, INC. (Melbourne, FL, US)
Family ID: 55858224
Appl. No.: 15/521572
Filed: October 26, 2015
PCT Filed: October 26, 2015
PCT No.: PCT/US2015/057422
371 Date: April 24, 2017
Related U.S. Patent Documents

Application Number   Filing Date
62209787             Aug 25, 2015
62206754             Aug 18, 2015
62203310             Aug 10, 2015
62068738             Oct 26, 2014
Current U.S. Class: 1/1

Current CPC Class: G06K 9/6251 20130101; G06T 15/04 20130101; G06T 7/74 20170101; G06T 2207/30241 20130101; H04N 5/23219 20130101; G06K 9/3208 20130101; G06T 17/20 20130101; G06T 2207/30196 20130101; G06T 15/08 20130101; G06T 2200/08 20130101; G06T 2215/16 20130101; H04N 5/23206 20130101; H04W 4/70 20180201; G06K 9/46 20130101; G06T 2207/10028 20130101; G06F 16/58 20190101; G06T 2207/30244 20130101; H04N 5/23218 20180801; G06T 7/40 20130101; G06T 2200/04 20130101; G01C 19/5776 20130101

International Class: G06T 15/08 20110101 G06T015/08; G06T 7/73 20060101 G06T007/73; G06T 15/04 20110101 G06T015/04; G06T 17/20 20060101 G06T017/20; G01C 19/5776 20120101 G01C019/5776; G06T 7/40 20060101 G06T007/40
Claims
1. A computer-implemented method of employing surface informatics
based detection using a first computer-enabled imaging device and a
data processing and display system: at the first computer-enabled
imaging device having a first two-dimensional pixilated detector,
at least one first accelerometer, at least one first gyroscope, one
or more first processors, and memory for storing one or more
programs for execution by the one or more first processors, the one
or more programs including programs for real-time feature
detection, real-time generation of feature-based coordinate point
cloud systems, and active mapping and tracking of coordinate points
of a point cloud system to image features: obtaining a respective
time-stamped image with coordinate-mapped feature points for
features of a first subject in a plurality of subjects using the
first two-dimensional pixilated detector at a first frequency,
thereby obtaining a first workflow comprising a first plurality of
time-stamped images, each time-stamped image of the first workflow
having a respective time point of a first plurality of time points
at which the respective time-stamped image was obtained; acquiring
a respective time-stamped accelerometer interval reading and a
respective time-stamped interval gyroscope reading using the
respective at least one first accelerometer and the at least one
first gyroscope at a second frequency independent of the first
frequency, thereby acquiring a first plurality of time-stamped
accelerometer interval readings and a first plurality of
time-stamped interval gyroscope readings, each of the time-stamped
accelerometer interval readings and each of the time-stamped
interval gyroscope readings having a respective time point of a
second plurality of time points at which the respective reading was
acquired, thereby obtaining a first real-time translational and
rotational trajectory of the first computer-enabled imaging device
which indicates a relative position of the first computer-enabled
imaging device with respect to the first subject through the first
plurality of time-stamped images; and acquiring time-stamped
coordinates of the feature points in the first workflow at each of
the first plurality of time points, thereby obtaining real time
translational movement of the coordinates of the feature points;
and communicating, through a network to the data processing and
display system for image processing and analysis, a first dataset
comprising: the first workflow, the time-stamped coordinates of the
feature points in the first workflow, the first real-time
translational and rotational trajectory of the first
computer-enabled imaging device, and the translational movement of
the coordinates of the feature points in the first workflow,
wherein the data processing and display system comprises one or
more processors and memory for storing instructions for execution
by the one or more processors, including instructions for storing
the first dataset in a subject data store associated with the first
subject in a first memory location in the computer memory.
2. The computer-implemented method of claim 1, wherein the data
processing and display system further comprises instructions, for
execution by the one or more processors, for: A) constructing a two
or three-dimensional map from the first dataset; B) using the
time-stamped coordinates of the feature points, the translational
movement of the coordinates of the feature points, and
translational and rotational values from the first real-time
translational and rotational trajectory of the first
computer-enabled imaging device for each time-stamped image in the
first workflow, to refine the two or three-dimensional map
constructed from the first dataset; C) creating, from the two or
three-dimensional map, a dense point cloud representing the first
subject, the dense point cloud comprising a plurality of points;
and D) storing, in a second memory location of the memory of the
data processing and display system, the dense point cloud
representing the first subject.
3. The computer-implemented method of claim 2, wherein the
instructions for constructing (A) comprise: (i) matching a
two-dimensional feature in a first time-stamped image and a second
time-stamped image in the first workflow; (ii) estimating a
parallax between the first time-stamped image and the second
time-stamped image using the first real-time translational and
rotational trajectory and/or the translational movement of the
coordinates of the feature points in the first workflow; (iii)
adding, when the parallax between the first time-stamped image and
the second time-stamped image satisfies a parallax threshold and
the matched two-dimensional feature in the first time-stamped image
and the second time-stamped image satisfies a matching threshold, a
two or three-dimensional point to the two or three-dimensional map
at a distance obtained by triangulating the first time-stamped
image and the second time-stamped image using the first real-time
translational and rotational trajectory; and (iv) repeating the
matching (i), estimating (ii), and adding (iii) for a different
first time-stamped image or a different second time-stamped image
in the first workflow or a different two-dimensional feature,
thereby constructing the two or three-dimensional map.
4. The computer-implemented method of claim 2, wherein each point
in the plurality of points of the dense point cloud: (i) represents
an average value of a respective single pixel or a respective group
of pixels across at least a subset of the first workflow that were
identified by the two or three-dimensional map as corresponding to
each other, and (ii) includes a surface normal computed from the
translational and rotational values of the at least a subset of the
first workflow.
5. The computer-implemented method of claim 2, wherein the data
processing and display system further comprises instructions, for
execution by the one or more processors, for: processing the dense
point cloud using a surface reconstruction algorithm to generate a
mesh representing the first subject; and applying a texture mapping
algorithm to the mesh to generate a texture-mapped mesh
representing the first subject using one or more additional
time-stamped images of the first workflow.
6. The computer-implemented method of claim 2, further comprising
displaying computed spectral and temporal relationships of two or
three dimensional features of the subject on local or remotely
networked devices using the constructed map, dense point cloud, the
mesh, and/or the texture-mapped mesh.
7. The computer-implemented method of claim 6, wherein the
displaying includes displaying the computed spectral and temporal
relationships on virtual reality displays.
8. The computer-implemented method of claim 1, wherein the
machine-to-machine network further comprises a second
computer-enabled imaging device, the method further comprising: at
the second computer-enabled imaging device having a second
two-dimensional pixilated detector, at least one second
accelerometer, at least one second gyroscope, one or more second
processors, and memory for storing one or more programs for
execution by the one or more second processors, the one or more
programs including programs for real-time feature detection,
real-time generation of feature-based coordinate point cloud
systems, and active mapping and tracking of coordinate points of a
point cloud system to image features: obtaining a respective
time-stamped image with coordinate-mapped feature points for
features of the first subject using the second two-dimensional
pixilated detector at a third frequency, thereby obtaining a second
workflow comprising a second plurality of time-stamped images, each
time-stamped image of the second workflow having a respective time
point of a third plurality of time points at which the respective
time-stamped image was obtained; acquiring a respective
time-stamped accelerometer interval reading and a respective
time-stamped interval gyroscope reading using the respective at
least one second accelerometer and the at least one second
gyroscope at a fourth frequency independent of the third frequency,
thereby acquiring a second plurality of time-stamped accelerometer
interval readings and a second plurality of time-stamped interval
gyroscope readings, each of the time-stamped accelerometer interval
readings and each of the time-stamped interval gyroscope readings
having a respective time point of a fourth plurality of time points
at which the respective reading was acquired, thereby obtaining a
second real-time translational and rotational trajectory of the
second computer-enabled imaging device which indicates a relative
position of the second computer-enabled imaging device with respect
to the first subject through the second plurality of time-stamped
images; and acquiring time-stamped coordinates of the feature
points in the second workflow at each of the third plurality of
time points, thereby obtaining real time translational movement of
the coordinates of the feature points; and communicating, through
the network to the data processing and display system for image
processing and analysis, a second dataset comprising: the second
workflow, the time-stamped coordinates of the feature points in the
second workflow, the second real-time translational and rotational
trajectory of the second computer-enabled imaging device, and the
translational movement of the coordinates of the feature points in
the second workflow; wherein, at the data processing and display system, the instructions for constructing further comprise instructions for A)
constructing the two or three-dimensional map from the first
dataset and the second dataset, and the instructions further
comprise instructions for B) using the time-stamped coordinates of
the feature points in the first workflow and the second workflow,
the translational movement of the coordinates of the feature points
in the first workflow and the second workflow, translational and
rotational values from the first real-time translational and
rotational trajectory of the first computer-enabled imaging device
for each time-stamped image in the first workflow, and
translational and rotational values from the second real-time
translational and rotational trajectory of the second
computer-enabled imaging device for each time-stamped image in the
second workflow, to refine the two or three-dimensional map
constructed from the first dataset and the second dataset.
9. The computer-implemented method of claim 8, wherein the
instructions for (A) constructing the two or three-dimensional map
from the first dataset and the second dataset comprise: (i)
matching a two-dimensional feature in a third time-stamped image
and a fourth time-stamped image selected from the first workflow
and/or the second workflow; (ii) estimating a parallax between the
third time-stamped image and the fourth time-stamped image using
the first and/or second real-time translational and rotational
trajectory, and/or the translational movement of the coordinates of
the feature points in the first and/or second workflow; (iii)
adding, when the parallax between the third time-stamped image and
the fourth time-stamped image satisfies a parallax threshold and
the matched two-dimensional feature in the third time-stamped image
and the fourth time-stamped image satisfies a matching threshold, a
two or three-dimensional point to the two or three-dimensional map
at a distance obtained by triangulating the third time-stamped
image and the fourth time-stamped image using the first and/or
second real-time translational and rotational trajectory; and (iv)
repeating the matching (i), estimating (ii), and adding (iii) for a
different third time-stamped image or a different fourth
time-stamped image in the first and/or second workflow, or a
different two-dimensional feature, thereby constructing the two or
three-dimensional map comprising a plurality of two or
three-dimensional points.
10. The computer-implemented method of claim 8, wherein each point
in the plurality of points of the dense point cloud: (i) represents
an average value of a respective single pixel or a respective group
of pixels across at least a subset of the first and/or second
workflow that were identified by the two or three-dimensional map
as corresponding to each other, and (ii) includes a surface normal
computed from the translational and rotational values of the at
least a subset of the first and/or second workflow.
11. The computer-implemented method of claim 8, wherein: the
instructions for (A) constructing the two or three-dimensional map
from the first dataset and the second dataset further comprise
instructions for constructing the two or three-dimensional map from
the first dataset, the second dataset, and subsequent datasets of
the first subject, the subsequent datasets comprising subsequent
workflows; and the instructions to (B) refine the two or
three-dimensional map constructed from the first dataset and the
second dataset further comprise instructions for using the
time-stamped coordinates of the feature points in the first
workflow and the second workflow, the translational movement of the
coordinates of the feature points in the first workflow and the
second workflow, translational and rotational values from the first
real-time translational and rotational trajectory of the first
computer-enabled imaging device for each time-stamped image in the
first workflow, and translational and rotational values from the
second real-time translational and rotational trajectory of the
second computer-enabled imaging device for each time-stamped image
in the second workflow, to refine the two or three-dimensional map
constructed from the first dataset, the second dataset, and the
subsequent datasets.
12. The computer-implemented method of claim 8, wherein the first,
second, third, and fourth plurality of time points are within a
first timeframe corresponding to a first capture session.
13. The computer-implemented method of claim 8, wherein the first
and second plurality of time points are within a first timeframe
corresponding to a first capture session, and the third and fourth
plurality of time points are within a second timeframe
corresponding to a second capture session, wherein the first
timeframe predates the second timeframe.
14. The computer-implemented method of claim 8, wherein: the second
computer-enabled imaging device is the first computer-enabled
imaging device, the second two-dimensional pixilated detector is
the first two-dimensional pixilated detector, the at least one
second accelerometer is the at least one first accelerometer, the
at least one second gyroscope is the at least one first gyroscope,
the one or more second processors are the one or more first
processors, and the memory for storing one or more programs for
execution by the one or more second processors is the memory for
storing one or more programs for execution by the one or more first
processors.
15. The computer-implemented method of claim 1, wherein the
communicating occurs in real time concurrently with the obtaining
and the acquiring.
16. The computer-implemented method of claim 1, wherein the first
dataset is communicated wirelessly.
17. The computer-implemented method of claim 1, wherein the first
dataset is communicated over a wire.
18. The computer-implemented method of claim 1, wherein the first
dataset is retained and processed on the first computer-enabled
imaging device.
19. The computer-implemented method of claim 1, wherein the first
frequency and second frequency are each independently between 10 Hz
and 100 Hz.
20. The computer-implemented method of claim 1, wherein the first
frequency is 30 Hz and the second frequency is 100 Hz.
21-56. (canceled)
Description
TECHNICAL FIELD
[0001] This relates generally to image processing and informatics,
including but not limited to surface informatics based detection
using computer-enabled imaging devices.
BACKGROUND
[0002] The use of imaging technology for analyzing surface
structures has a number of broad biomedical and non-biological
applications, ranging from medical imaging and disease detection,
to verifying the integrity of building structures. Despite
significant advances in the processing and imaging capabilities of
consumer devices, the imaging technology and equipment enabling this surface imaging and analysis functionality have traditionally been prohibitively costly and impractical for adoption by the broad consumer demographic. Furthermore, mechanisms for aggregating subject data on a large scale for enhanced surface informatics based detection remain substantially undeveloped.
SUMMARY
[0003] Accordingly, there is a need for faster, more efficient
methods, systems, and interfaces for surface informatics based
detection using computer-enabled imaging devices, such as smart
phones. By utilizing the robust image capture capabilities of, and the multitude of sensor readings generated by, basic smart phones, a variety of spatial, spectral, and/or temporal representations of datasets that include observed features for a large pool of subjects may be generated. By processing and extracting data from generated visual representations, observable changes, potential conditions, and/or pre-confirmed health conditions of a particular subject may be detected. Such methods and interfaces optionally
complement or replace conventional methods for surface informatics
based detection.
[0004] In accordance with some embodiments, a method is performed
at a computer-enabled imaging device (e.g., a client device) having
a two-dimensional pixilated detector, at least one accelerometer,
at least one gyroscope, one or more processors, and memory for
storing one or more programs for execution by the one or more
processors. The one or more programs include programs for real-time
feature detection, real-time generation of feature-based coordinate
point cloud systems, and active mapping and tracking of coordinate
points of a point cloud system to image features by way of
implementation of the method.
[0005] The method includes obtaining a respective time-stamped
image with coordinate-mapped feature points for features of a
subject in a plurality of subjects using the two-dimensional
pixilated detector at a first frequency, thereby obtaining a
workflow comprising a plurality of time-stamped images. Each
time-stamped image of the workflow has a respective time point of a
first plurality of time points at which the respective time-stamped
image was obtained.
[0006] The method further includes acquiring a respective
time-stamped accelerometer interval reading and a respective
time-stamped interval gyroscope reading using the respective at
least one accelerometer and the at least one gyroscope at a second
frequency independent of the first frequency. In this way, a
plurality of time-stamped accelerometer interval readings and a
plurality of time-stamped interval gyroscope readings is acquired.
Each of the time-stamped accelerometer interval readings and each
of the time-stamped interval gyroscope readings have a respective
time point of a second plurality of time points at which the
respective reading was acquired, thereby obtaining a real-time
translational and rotational trajectory of the computer-enabled
imaging device which indicates a relative position of the
computer-enabled imaging device with respect to the subject through
the plurality of time-stamped images.
[0007] Time-stamped coordinates of the feature points in the
workflow are acquired at each of the first plurality of time
points, thereby obtaining real time translational movement of the
coordinates of the feature points. Furthermore, a dataset is
communicated through a network to the data processing and display
system for image processing and analysis. The dataset includes the
workflow, the time-stamped coordinates of the feature points in the
workflow, the real-time translational and rotational trajectory of
the computer-enabled imaging device, and the translational movement
of the coordinates of the feature points in the workflow. The data
processing and display system comprises one or more processors and
memory for storing instructions for execution by the one or more
processors, including instructions for storing the dataset in a
subject data store associated with the subject in a memory location
in the computer memory.
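To make the contents of the communicated dataset concrete, the following is a minimal sketch of how such a dataset might be represented in memory, written in Python; the class names, field names, and layouts are illustrative assumptions and are not prescribed by this application.

from dataclasses import dataclass
from typing import List, Tuple

# Illustrative layout only; the application does not define a schema.
@dataclass
class TimeStampedImage:
    time_point: float                            # seconds since session start
    pixels: bytes                                # raw or encoded image data
    feature_points: List[Tuple[float, float]]    # coordinate-mapped features (x, y)

@dataclass
class TrajectorySample:
    time_point: float
    translation: Tuple[float, float, float]      # (x, y, z) derived from accelerometer readings
    rotation: Tuple[float, float, float]         # (roll, pitch, yaw) derived from gyroscope readings

@dataclass
class Dataset:
    workflow: List[TimeStampedImage]                        # images captured at the first frequency
    feature_coordinates: List[Tuple[float, float, float]]   # (time_point, x, y) per feature point
    trajectory: List[TrajectorySample]                      # samples acquired at the second frequency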
[0008] In accordance with some embodiments, a computer-enabled
imaging device includes a two-dimensional pixilated detector, at
least one accelerometer, at least one gyroscope, one or more
processors, and memory for storing one or more programs for
execution by the one or more processors. The one or more programs
include programs for real-time feature detection, real-time
generation of feature-based coordinate point cloud systems, and
active mapping and tracking of coordinate points of a point cloud
system to image features. The one or more programs include
instructions for performing the operations of the client-side
method described above.
[0009] In accordance with some embodiments, a computer-readable
storage medium has stored therein instructions that, when executed
by the computer-enabled imaging device, cause the computer-enabled
imaging device to perform the operations described above.
[0010] Thus, computer-enabled imaging devices are provided with
faster, more efficient methods for surface informatics based
detection, thereby increasing the value, effectiveness, efficiency,
and user satisfaction with such devices.
BRIEF DESCRIPTION OF THE DRAWINGS
[0011] For a better understanding of the various described
embodiments, reference should be made to the Description of
Embodiments below, in conjunction with the following drawings. Like
reference numerals refer to corresponding parts throughout the
figures and description.
[0012] FIG. 1 is a block diagram illustrating an exemplary surface
informatics based detection system, in accordance with some
embodiments.
[0013] FIG. 2 is a block diagram illustrating an exemplary data
repository, in accordance with some embodiments.
[0014] FIG. 3 is a block diagram illustrating an exemplary client
device, in accordance with some embodiments.
[0015] FIGS. 4A-4B illustrate an environment in which data is
captured for a subject using one or more client devices, in
accordance with some embodiments.
[0016] FIGS. 5A-5B illustrate an exemplary data structure for
information obtained by client devices in a surface informatics
based detection system, in accordance with some embodiments.
[0017] FIG. 6 illustrates a flowchart for the processing of subject
datasets, in accordance with some embodiments.
[0018] FIGS. 7A-7J are flow diagrams illustrating a method of
surface informatics based detection, in accordance with some
embodiments.
DESCRIPTION OF EMBODIMENTS
[0019] Reference will now be made to embodiments, examples of which
are illustrated in the accompanying drawings. In the following
description, numerous specific details are set forth in order to
provide an understanding of the various described embodiments.
However, it will be apparent to one of ordinary skill in the art
that the various described embodiments may be practiced without
these specific details. In other instances, well-known methods,
procedures, components, circuits, and networks have not been
described in detail so as not to unnecessarily obscure aspects of
the embodiments.
[0020] It will also be understood that, although the terms first,
second, etc. are, in some instances, used herein to describe
various elements, these elements should not be limited by these
terms. These terms are used only to distinguish one element from
another. For example, a first smart phone could be termed a second
smart phone, and, similarly, a second smart phone could be termed a
first smart phone, without departing from the scope of the various
described embodiments. The first smart phone and the second smart
phone are both smart phones, but they are not the same smart
phone.
[0021] The terminology used in the description of the various
embodiments described herein is for the purpose of describing
particular embodiments only and is not intended to be limiting. As
used in the description of the various described embodiments and
the appended claims, the singular forms "a," "an," and "the" are
intended to include the plural forms as well, unless the context
clearly indicates otherwise. It will also be understood that the
term "and/or" as used herein refers to and encompasses any and all
possible combinations of one or more of the associated listed
items. It will be further understood that the terms "includes,"
"including," "comprises," and/or "comprising," when used in this
specification, specify the presence of stated features, integers,
steps, operations, elements, and/or components, but do not preclude
the presence or addition of one or more other features, integers,
steps, operations, elements, components, and/or groups thereof.
[0022] As used herein, the term "if" is, optionally, construed to
mean "when" or "upon" or "in response to determining" or "in
response to detecting" or "in accordance with a determination
that," depending on the context. Similarly, the phrase "if it is
determined" or "if [a stated condition or event] is detected" is,
optionally, construed to mean "upon determining" or "in response to
determining" or "upon detecting [the stated condition or event]" or
"in response to detecting [the stated condition or event]" or "in
accordance with a determination that [a stated condition or event]
is detected," depending on the context.
[0023] As used herein, the term "exemplary" is used in the sense of
"serving as an example, instance, or illustration" and not in the
sense of "representing the best of its kind."
[0024] FIG. 1 is a block diagram illustrating an exemplary surface
informatics based detection system 100, in accordance with some
embodiments. The detection system 100 includes a number of client
devices (also called "client systems," "client computers," or
"clients") 104-1, 104-2, . . . 104-n communicably connected to a
data repository 108 by one or more networks 106 (e.g., the
Internet, cellular telephone networks, mobile data networks, other
wide area networks, local area networks, metropolitan area
networks, and so on). In some embodiments, the one or more networks
106 include a public communication network (e.g., the Internet
and/or a cellular data network), a private communications network
(e.g., a private LAN or leased lines), or a combination of such
communication networks. In some embodiments, the one or more
networks 106 use the HyperText Transport Protocol (HTTP) and the
Transmission Control Protocol/Internet Protocol (TCP/IP) to
transmit information between devices or systems. HTTP permits
client devices to access various resources available via the one or
more networks 106. The various embodiments of the invention,
however, are not limited to the use of any particular protocol.
[0025] In some embodiments, the client devices 104-1, 104-2, . . .
104-n are computing devices such as cameras, video recording
devices, smart watches, personal digital assistants, portable media
players, smart phones, tablet computers, 2D devices, 3D (e.g.,
virtual reality) devices, laptop computers, desktop computers,
televisions with one or more processors embedded therein or coupled
thereto, in-vehicle information systems (e.g., an in-car computer
system that provides navigation, entertainment, and/or other
information), and/or other appropriate computing devices that can
be used to communicate with other client devices 104 and/or the
data repository 108. In some embodiments, the data repository 108
is a single computing device such as a computer server, while in
other embodiments, the data repository 108 is implemented by
multiple computing devices working together to perform the actions
of a server system (e.g., cloud computing).
[0026] Users 102-1, 102-2, . . . 102-n employ the client devices
104-1, 104-2, . . . 104-n to obtain or generate data for
transmission to the data repository 108 and/or other client
devices, or to receive, display, and/or manipulate data (e.g., data
generated, obtained, or produced on the device itself, data
received from the data repository 108 or other client devices,
etc.). In some embodiments, the client devices 104 capture
multimedia data (e.g., time-stamped images, video, audio, etc.) and
acquire associated meta data (e.g., environmental information
(time, geographic location, etc.), device readings, such as sensor
readings from accelerometers, gyroscopes, etc.) for communication
to the data repository 108 for further processing and analysis. The
same or other client devices 104 may subsequently receive data from
the data repository 108 and/or other client devices for display
(e.g., constructed two or three-dimensional maps, point clouds,
textured maps, etc.). In some embodiments, separate client devices
104 (e.g., client device 104-4, a dedicated display terminal used
by physicians) are configured for viewing received data and
capturing/acquiring multimedia data and meta data.
[0027] In some embodiments, data is sent to and viewed by the
client devices in a variety of output formats, and/or for further
processing or manipulation (e.g., CAD programs, 3D printing,
virtual reality displays, holography applications, etc.). In some
embodiments, data is sent for display to the same client device
that performs the image capture and acquires sensor readings (e.g.,
client devices 104), and/or other systems and devices (e.g., the data repository 108, a client device 104-4 that is a dedicated viewing
terminal, etc.). In some embodiments, client devices 104 access
data and/or services provided by the data repository 108 by
execution of various applications. For example, in some embodiments
client devices 104 execute web browser applications that can be
used to access services provided by the data repository 108. As
another example, one or more of the client devices 104-1, 104-2, .
. . 104-n execute software applications that are specific to
viewing and manipulating data (e.g., surface informatics "apps"
running on smart phones or tablets).
[0028] In some embodiments, client devices 104 are used as control
devices for synchronizing operational processes of one or more
client devices 104. For instance, in some embodiments, one or more
client devices 104 are used to dynamically generate control
commands for transmission to other client devices for synchronized
data capture (e.g., synchronous image/meta data capture with
respect to temporal, spatial, or spectral parameters). As an
example, one or more client devices 104 generate control commands
for time-synchronized image capture of a particular subject using
multiple client devices 104 across a predefined period of time
(e.g., multiple client devices 104 having different positions or
orientations with respect to a subject capturing a workflow of
images at the same frequency), or at specified periods of time
(e.g., each of multiple client devices 104 capturing a stream of
100 images of the same subject each day for a week). In some
embodiments, control commands are also synchronized by spatial
parameters of the client devices 104 with respect to a subject, an
environment, or one another (e.g., image capture synchronized such
that images are captured from known positions and orientations with
reference to a subject). Moreover, in some embodiments control
commands are synchronized with respect to spectral aspects of a
subject or environment (e.g., identifying a common feature among
images captured by different client devices 104, and synchronizing
image capture based on the identified feature). In some
embodiments, control commands are transmitted by a client device
104 to other client devices such that the client device from which
the control commands are originally transmitted initiates a process (e.g., an image capture sequence using multiple client devices) and terminates the process with respect to its own operations, while the
other client devices continue through to completion of the process.
In some embodiments, control commands are generated by the data
repository 108 and transmitted to the client devices 104 for
execution of a synchronized process.
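As one hedged illustration of such a synchronization command, the sketch below shows a control message that a client device 104 (or the data repository 108) might broadcast to other client devices to start a time-synchronized capture session; the field names, values, and the use of JSON are assumptions chosen for illustration, not a protocol defined by this application.

import json
import time

# Hypothetical control command for time-synchronized capture; all field
# names and values below are illustrative assumptions.
command = {
    "command": "start_capture",
    "session_id": "subject-001-session-01",
    "start_at": time.time() + 5.0,   # all devices begin capture 5 s from now
    "capture_hz": 30,                # image workflow frequency
    "sensor_hz": 100,                # accelerometer/gyroscope sampling frequency
    "duration_s": 60,                # length of the capture session
}
payload = json.dumps(command).encode("utf-8")
# payload would then be transmitted over the network 106 to each
# participating client device 104 for execution of the synchronized process.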
[0029] Users interacting with the client devices 104-1, 104-2, . .
. 104-n can participate in or contribute to services provided, or
processing performed, by the data repository 108 by submitting
datasets (or select portions thereof), where the submitted datasets
include captured multimedia data (e.g., time-stamped images, video,
audio, etc.) and/or acquired meta data (e.g., associated sensor
readings). In some embodiments, information is posted on a user's
behalf by systems and/or services external to the data repository
108 (e.g., by a user's physician). In some embodiments, user
submitted datasets are retrieved and processed by the data
repository 108, the results of which are compared with one another
or analyzed in order to detect temporal observable changes (e.g.,
biological or non-biological), potential conditions, and/or
pre-confirmed health conditions (as described in greater detail
below with respect to FIGS. 7F-7J).
[0030] The data repository 108 stores, processes, and/or analyzes
data received from one or more client devices 104 (e.g., datasets
which include multimedia data, associated meta data, localization
data, etc.). The resulting data of such processing and analysis are
in turn disseminated to the same and/or other client devices for
viewing, manipulation, and/or further processing and analysis. In
some embodiments, the data repository 108 consolidates data
received from one or more client devices 104 and performs one or
more geomatics based processes. For example, using associated meta
data and localization data, the data repository 108 constructs two
or three-dimensional maps (e.g., by matching two-dimensional
features identified across an image workflow, estimating parallax
between images, and adding points to a map when a parallax
threshold is satisfied), where the constructed maps are used to
create dense point clouds and/or generate textured meshes
representing a subject. In some embodiments, useful biological or
non-biological data is further derived and extracted from visual
representations generated by geomatics based processes (e.g.,
extracting data from the spatial, spectral, and/or temporal
representations of subject datasets, such as generated maps, point
clouds, and/or meshes). Extracted data can be further processed or
analyzed for detection purposes (e.g., correlating feature/temporal
data of a subject to feature/temporal data of other subjects to
detect a temporally observable change or pre-confirmed condition).
Furthermore, in some embodiments, data analyses or detection
outcomes are disseminated to one or more devices (e.g., client
devices 104), server systems (e.g., data repository 108), and/or
devices of associated individuals (e.g., devices of
physicians).
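The map-construction step described above (matching a two-dimensional feature across two time-stamped images, checking parallax, and triangulating a point from the device trajectory) can be sketched roughly as follows. This is a minimal illustration assuming pinhole-camera poses derived from the trajectory and standard linear (DLT) triangulation; the function, thresholds, and parameter names are assumptions, not the application's own implementation.

import numpy as np

def try_add_map_point(p1, p2, pose1, pose2, K,
                      match_score=0.9, match_thresh=0.8,
                      parallax_thresh_deg=1.0):
    """Sketch of the add-a-point step. p1, p2 are pixel coordinates of the
    same matched feature in two time-stamped images; pose1, pose2 are 3x4
    [R|t] camera poses derived from the device trajectory; K is the camera
    intrinsic matrix. Returns a 3D point, or None if the thresholds fail."""
    if match_score < match_thresh:
        return None                       # feature match is not good enough
    # Back-project the two pixels to viewing rays in world coordinates and
    # measure the parallax angle between them.
    r1 = pose1[:, :3].T @ (np.linalg.inv(K) @ np.array([p1[0], p1[1], 1.0]))
    r2 = pose2[:, :3].T @ (np.linalg.inv(K) @ np.array([p2[0], p2[1], 1.0]))
    cosang = np.dot(r1, r2) / (np.linalg.norm(r1) * np.linalg.norm(r2))
    if np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))) < parallax_thresh_deg:
        return None                       # baseline too small to triangulate reliably
    # Linear (DLT) triangulation of the point from the two views.
    P1, P2 = K @ pose1, K @ pose2
    A = np.vstack([p1[0] * P1[2] - P1[0], p1[1] * P1[2] - P1[1],
                   p2[0] * P2[2] - P2[0], p2[1] * P2[2] - P2[1]])
    X = np.linalg.svd(A)[2][-1]
    return X[:3] / X[3]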
[0031] FIG. 2 is a block diagram illustrating an exemplary data
repository 108, in accordance with some embodiments. In some
embodiments, the data repository 108 is a data repository
apparatus, server system, or any other electronic device for
receiving, collecting, storing, displaying, and/or processing data
received from a plurality of devices over a network (sometimes
referred to alternatively as a data processing and display
system).
[0032] The data repository 108 typically includes one or more
processing units (processors or cores) 202, one or more network or
other communications interfaces 204, memory 206, and one or more
communication buses 208 for interconnecting these components. The
communication buses 208 optionally include circuitry (sometimes
called a chipset) that interconnects and controls communications
between system components. The data repository 108 optionally
includes a user interface (not shown). The user interface, if
provided, may include a display device and optionally includes
inputs such as a keyboard, mouse, trackpad, and/or input buttons.
Alternatively or in addition, the display device includes a
touch-sensitive surface, in which case the display is a
touch-sensitive display.
[0033] Memory 206 includes high-speed random-access memory, such as
DRAM, SRAM, DDR RAM, or other random-access solid-state memory
devices; and may include non-volatile memory, such as one or more
magnetic disk storage devices, optical disk storage devices, flash
memory devices, and/or other non-volatile solid-state storage
devices. Memory 206 optionally includes one or more storage devices
remotely located from the processor(s) 202. Memory 206, or
alternately the non-volatile memory device(s) within memory 206,
includes a non-transitory computer-readable storage medium. In some
embodiments, memory 206 or the computer-readable storage medium of
memory 206 stores the following programs, modules and data
structures, or a subset or superset thereof: [0034] an operating
system 210 that includes procedures for handling various basic
system services and for performing hardware dependent tasks; [0035]
a network communication module 212 that is used for connecting the
data repository 108 to other computers, systems, and/or client
devices 104 via the one or more communication network interfaces
204 (wired or wireless) and one or more communication networks
(e.g., the one or more networks 106); [0036] a subject data store
214 for storing data associated with one or more subjects (e.g.,
received from one or more associated client devices 104, FIGS. 1
and 3), such as: [0037] subject information 2140 for storing
additional or supplemental information for one or more subjects
(e.g., user medical history, biological data, personal profiles,
and/or any additional info helpful for rendering diagnosis); [0038]
multimedia data 2141 for storing multimedia data (e.g.,
time-stamped images, video, audio, etc.) captured by one or more
sensors or devices (e.g., two-dimensional pixilated detector and/or
microphone of a client device 104, FIG. 3); [0039] localization
data 2142 for environmental device measurements, such as a focal
length, sensor frequencies (e.g., the respective frequency at which
sensors of the client device captured data, such as an
accelerometer frequency, a gyroscope frequency, a barometer
frequency, etc.), accelerometer readings (e.g., in meters/sec²), translational data (e.g., (x, y, z) coordinates of the client device with respect to pre-defined axes or a point of reference), rotational data (e.g., roll (φ), pitch (θ), yaw (ψ)), and/or any additional sensor or device measurements
or readings for determining spatial, spectral, and/or temporal
characteristics of a client device or subject; [0040] meta data
2143 for storing device data or data associated with captured
multimedia, such as a device identifier (e.g., identifying the
device of a group of devices that captured the multimedia item,
which may include an arbitrary identifier, a MAC address, a device
serial number, etc.), temporal meta data (e.g., date and time of a
corresponding capture), location data (e.g., GPS coordinates of the
location at which multimedia item was captured), a multimedia
capture frequency (e.g., the frequency at which a stream of images
is captured), device configuration settings (e.g., image resolution of captured multimedia items, frequency ranges that the pixilated
detector of a client device 104 is configured to detect), and/or
other camera data or environmental factors associated with captured
multimedia; [0041] feature data 2144 for storing quantitative
and/or qualitative data for observations of a class of features
(e.g., feature data for an observation corresponding to observed
lesions may include data related to a location of observed lesions,
such as diffuse or localized, lesion size and size distribution,
percent body surface area, etc.); and/or [0042] temporal data 2145
for storing data representing observed changes in values of feature
data over time (e.g., percentage increase in number of observed
lesions); [0043] geomatics module 216 for processing, manipulating,
and analyzing datasets (e.g., received from one or more client
devices 104) in order to generate and view spatial, spectral,
and/or temporal representations of subject datasets, which
includes: [0044] map generator 2160 for constructing two or
three-dimensional maps from one or more datasets (e.g.,
corresponding to one or more capture sessions, and received from
one or more client devices); [0045] point cloud generator 2161 for
creating dense point clouds (e.g., consisting of tens of thousands
of points) from constructed two or three-dimensional maps; and/or
[0046] mesh generator 2162 for generating meshes representing a
subject by processing the created dense point cloud (e.g., using a
surface reconstruction algorithm), and for adding texture to the
meshes to generate texture-mapped meshes (e.g., by applying texture
mapping algorithms); [0047] a processing module 218 for processing,
analyzing, and extracting data (e.g., biological/non-biological
feature data and/or temporal data) from generated spatial,
spectral, and/or temporal representations of subject datasets
(e.g., constructed maps, dense point clouds, meshes, texture-mapped
meshes, etc.), and for detecting temporal observable changes and/or
conditions (e.g., potential conditions, health conditions, etc.)
(e.g., by correlating data, calculating numerical scores,
determining satisfaction of score thresholds, etc.); and [0048]
dissemination module 220 for updating subject data stores (e.g.,
214), sending alerts (e.g., to remote devices associated with a
subject), and/or providing notifications (e.g., to caretakers
associated with human subjects).
[0049] The subject data store 214 (and any other data storage
modules) stores data associated with one or more subjects in one or
more types of databases, such as graph, dimensional, flat,
hierarchical, network, object-oriented, relational, and/or XML
databases, or other data storage constructs.
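The sketch below shows one way a single record in the subject data store 214 might be laid out, mirroring the modules listed above; the keys and example values are illustrative assumptions rather than a schema defined by this application.

# Hypothetical layout of one record in the subject data store 214;
# all keys and values are illustrative assumptions.
subject_record = {
    "subject_information": {"subject_id": "S-0001", "medical_history": []},
    "multimedia_data": [
        {"type": "image", "time_point": 0.000, "path": "img_0000.png"},
    ],
    "localization_data": [
        {"time_point": 0.000,
         "accelerometer_m_s2": [0.0, 0.0, 9.8],
         "translation_xyz": [0.0, 0.0, 0.0],
         "rotation_rpy": [0.0, 0.0, 0.0]},
    ],
    "meta_data": {"device_id": "client-104-1", "capture_hz": 30,
                  "resolution": [3264, 2448], "gps": [28.08, -80.61]},
    "feature_data": [
        {"feature_class": "skin_lesion", "location": "localized",
         "size_mm": 4.2, "percent_body_surface_area": 0.1},
    ],
    "temporal_data": [
        {"feature_class": "skin_lesion", "percent_change_in_count": 12.0},
    ],
}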
[0050] FIG. 3 is a block diagram illustrating an exemplary client
device 104, in accordance with some embodiments. The client device
104 typically includes one or more processing units (processors or
cores) 302, one or more network or other communications interfaces
304, memory 306, and one or more communication buses 308 for
interconnecting these components. The communication buses 308
optionally include circuitry (sometimes called a chipset) that
interconnects and controls communications between system
components. The client device 104 includes a user interface 310.
The user interface 310 typically includes a display device 312. In
some embodiments, the client device 104 includes inputs such as a
keyboard, mouse, and/or other input buttons 316. Alternatively or
in addition, in some embodiments, the display device 312 includes a
touch-sensitive surface 314, in which case the display device 312
is a touch-sensitive display. In client devices that have a
touch-sensitive display 312, a physical keyboard is optional (e.g.,
a soft keyboard may be displayed when keyboard entry is needed).
The user interface 310 also includes an audio output device 318,
such as speakers or an audio output connection connected to
speakers, earphones, or headphones. Furthermore, some client
devices 104 use a microphone and voice recognition to supplement or
replace the keyboard. Optionally, the client device 104 includes an
audio input device 320 (e.g., a microphone) to capture audio (e.g.,
speech from a user). Optionally, the client device 104 includes a
location detection device 322, such as a GPS (Global Positioning System) or other geo-location receiver, for determining the
location of the client device 104.
[0051] The client device 104 also optionally includes an
image/video capture device 324, such as a camera or webcam. In some
embodiments, the image/video capture device 324 includes a
two-dimensional pixilated detector/image sensor configured to
capture images at one or more predefined resolutions (e.g., a low
resolution, such as 480×360, and a high resolution, such as 3264×2448). In some embodiments, the image/video capture
device 324 captures a workflow of images (e.g., a stream of
multiple images) at a predefined frequency (e.g., 30 Hz). In some
embodiments, the client device 104 includes a plurality of
image/video capture devices 324 (e.g., a front facing camera and a
back facing camera), where in some implementations, each of the
multiple image/video capture devices 324 captures a distinct
workflow for subsequent processing (e.g., capturing images at
different resolutions, ranges of light, etc.). Optionally, the
client device 104 includes one or more illuminators (e.g., a light
emitting diode) configured to illuminate a subject or environment.
In some embodiments, the one or more illuminators are configured to
illuminate specific wavelengths of light.
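As a rough illustration of capturing a time-stamped image workflow at a predefined frequency as described above, the following sketch uses OpenCV as a stand-in for the image/video capture device 324; OpenCV, the 30 Hz rate, and the 480×360 resolution are assumptions chosen for illustration, not components named by this application.

import time
import cv2  # OpenCV stands in for the image/video capture device 324

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 480)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 360)

capture_hz = 30.0                 # predefined workflow frequency
period = 1.0 / capture_hz
workflow = []                     # list of (time_point, frame) pairs
next_shot = time.time()
while len(workflow) < 100:        # capture a short workflow of 100 images
    ok, frame = cap.read()
    if ok:
        workflow.append((time.time(), frame))
    next_shot += period
    time.sleep(max(0.0, next_shot - time.time()))
cap.release()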
[0052] In some embodiments, the client device 104 includes one or
more sensors 326 including, but not limited to, accelerometers,
gyroscopes, compasses, magnetometers, light sensors, near field
communication transceivers, barometers, humidity sensors,
temperature sensors, proximity sensors, and/or other
sensors/devices for sensing and measuring various environmental
conditions. In some embodiments, the one or more sensors operate
and obtain measurements at respective predefined frequencies.
[0053] Memory 306 includes high-speed random-access memory, such as
DRAM, SRAM, DDR RAM or other random-access solid-state memory
devices; and may include non-volatile memory, such as one or more
magnetic disk storage devices, optical disk storage devices, flash
memory devices, or other non-volatile solid-state storage devices.
Memory 306 may optionally include one or more storage devices
remotely located from the processor(s) 302. Memory 306, or
alternately the non-volatile memory device(s) within memory 306,
includes a non-transitory computer-readable storage medium. In some
embodiments, memory 306 or the computer-readable storage medium of
memory 306 stores the following programs, modules and data
structures, or a subset or superset thereof: [0054] an operating
system 328 that includes procedures for handling various basic
system services and for performing hardware dependent tasks; [0055]
a network communication module 330 that is used for connecting the
client device 104 to other computers, systems (e.g., data
repository 108), and/or client devices 104 via the one or more
communication network interfaces 304 (wired or wireless) and one or
more communication networks, such as the Internet, cellular
telephone networks, mobile data networks, other wide area networks,
local area networks, metropolitan area networks, and so on; [0056]
an image/video capture module 332 (e.g., a camera module) for
processing a respective image or video captured by the image/video
capture device 324, where the respective image or video may be sent
or streamed (e.g., by a client application module 340) to the data
repository 108; [0057] an audio input module 334 (e.g., a
microphone module) for processing audio captured by the audio input
device 320, where the respective audio may be sent or streamed
(e.g., by a client application module 340) to the data repository
108; [0058] a location detection module 336 (e.g., a GPS, Wi-Fi, or
hybrid positioning module) for determining the location of the
client device 104 (e.g., using the location detection device 322)
and providing this location information for use in various
applications (e.g., client application module 340); [0059] a sensor
module 338 for acquiring, processing, and transmitting
environmental device measurements, such as a focal length, sensor
frequencies (e.g., accelerometer frequency, a gyroscope frequency,
etc.), accelerometer readings (e.g., in meters/sec²), translational data (e.g., (x, y, z) coordinates of the client device with respect to pre-defined axes or a point of reference), rotational data (e.g., roll (φ), pitch (θ), yaw (ψ)),
and/or any additional sensor or device measurements or readings for
determining spatial, spectral, and/or temporal characteristics of
the client device or subject; and [0060] one or more client
application modules 340, including the following modules (or sets
of instructions), or a subset or superset thereof: [0061] a web
browser module (e.g., Internet Explorer by Microsoft, Firefox by
Mozilla, Safari by Apple, or Chrome by Google) for accessing,
viewing, and interacting with web sites (e.g., a social-networking
web site provided by the data repository 108), subject datasets
(e.g., including time-stamped images of a subject, real-time
translational and rotational trajectory of the client device,
etc.), and/or spatial, spectral, and/or temporal representations of
subject datasets (e.g., constructed maps, dense point clouds,
meshes, texture-mapped meshes, etc.); and/or [0062] other optional
client application modules for viewing and/or manipulating datasets
of generated representations, such as applications for photo
management, video management, a digital video player,
computer-aided design (CAD), 3D viewing (e.g., virtual reality), 3D
printing, holography, and/or other graphics-based applications.
[0063] Each of the above identified modules and applications
correspond to a set of executable instructions for performing one
or more functions as described above and/or in the methods
described in this application (e.g., the computer-implemented
methods and other information processing methods described herein).
These modules (i.e., sets of instructions) need not be implemented
as separate software programs, procedures or modules, and thus
various subsets of these modules are, optionally, combined or
otherwise re-arranged in various embodiments. In some embodiments,
memory 206 and/or 306 store a subset of the modules and data
structures identified above. Furthermore, memory 206 and/or 306
optionally store additional modules and data structures not
described above.
[0064] Furthermore, in some implementations, the functions of any
of the devices and systems described herein (e.g., client devices
104, data repository 108) are interchangeable with one another and
may be performed by any other devices or systems, where the
corresponding sub-modules of these functions may additionally
and/or alternatively be located within and executed by any of the
devices and systems. As one example, although the data repository
108 (FIG. 2) includes modules for processing subject datasets
(e.g., geomatics module 216) and extracting data from generated
spatial, spectral, and/or temporal representations of subject
datasets (e.g., processing module 218), in some embodiments the
client device 104 may include analogous modules and device
capabilities for performing the same operations (e.g., processing
is additionally and/or alternatively performed by the same client
device used for image capture and sensor acquisitions). The devices
and systems shown in and described with respect to FIGS. 1 through
3 are merely illustrative, and different configurations of the
modules for implementing the functions described herein are
possible in various implementations.
[0065] FIGS. 4A-4B illustrate an environment in which data is
captured for a subject using one or more client devices 104, in
accordance with some embodiments.
[0066] Specifically, the environment shown in FIG. 4A includes a
single client device 104 for capturing images of a subject (e.g.,
user 102-1) and for acquiring various sensor readings. Although the
client device 104 is a smart phone in the example illustrated, in
other implementations the client device 104 may be any electronic
device with image capture capabilities (e.g., a camera, a PDA,
etc.). Furthermore, while the subject is a live, biological subject
(e.g., a human), other non-biological applications exist (as
described in greater detail with respect to FIGS. 7A-7J).
[0067] In some implementations, the client device 104 is used to
capture one or more still-frame images, video sequences, and/or
audio recordings of the subject from one or more positions and
angles. Concurrently with image capture, the client device 104 also
acquires multiple time-stamped sensor readings of various
environmental conditions, such as a measured acceleration and
orientation of the client device 104 as the client device 104 is
re-positioned and oriented into new poses to capture additional
images. By using the captured images, acquired sensor readings, and
additional device/image meta data (e.g., timestamps, resolution,
image capture/sensor frequencies, focal length, etc.),
multi-dimensional maps and point clouds of the subject are
constructed in some embodiments. Values for various observed
features (e.g., characteristics of a visible skin lesion) are then extracted from the constructed maps, point clouds, and/or
meshes, which subsequently are analyzed spatially, spectrally,
and/or temporally to detect temporal observable changes or health
conditions of the subject. The detection-based method is described
in greater detail with respect to the method 7000 in FIGS.
7A-7J.
[0068] FIG. 4A illustrates a set of predefined axes which provide
relative localization information of the client device 104 and
subject. Localization information may include a respective
trajectory of the client device 104 or subject with respect to the
predefined axes, which includes translational data (e.g., (x, y, z)
coordinates) and/or rotational data (e.g., roll (.phi.), pitch
(.theta.), yaw (.psi.)). More specifically, in the example shown,
values for the rotational trajectory of the client device 104 are
defined as an angle of rotation within the x-y axis (i.e., yaw
(.psi.)), an angle of rotation within the y-z axis (i.e., pitch
(.theta.)), and an angle of rotation within the x-z axis (i.e.,
roll (.phi.)).
[0069] As the client device 104 captures multiple images, the
images form a workflow in which each image of the workflow has a
respective timestamp, and the measured trajectory of the client
device 104 continually changes. Changes in trajectory may, for
example, be derived from one or more time-stamped sensor readings
by the client device 104 (e.g., sensors 326, such as an
accelerometer and/or a gyroscope). In particular, in some
embodiments, as the position of the client device 104 changes as
new images are captured, the translational and/or rotational
trajectory of the client device 104 is derived for any
corresponding time-stamped image, and at any given time point, in
real-time using accelerometer and gyroscope readings.
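[0069.1] As an illustrative, non-limiting sketch (not drawn from the
disclosure itself), the following Python fragment shows the kind of
dead-reckoning integration that can turn time-stamped gyroscope and
accelerometer interval readings into an approximate rotational and
translational trajectory. In practice the trajectory is refined by
fusing these readings with the visual feature tracking described
below, since raw integration of inertial readings drifts over time.

```python
import numpy as np

def derive_trajectory(timestamps, accel, gyro, gravity=(0.0, 0.0, 9.81)):
    """Dead-reckoning sketch: integrate time-stamped gyroscope readings
    (rad/s) into a rotational trajectory and double-integrate
    accelerometer readings (m/s^2, gravity removed) into a translational
    trajectory.

    timestamps: (N,) seconds; accel, gyro: (N, 3) arrays.
    Returns (positions, orientations), each of shape (N, 3).
    """
    dt = np.diff(timestamps, prepend=timestamps[0])
    # Rotational trajectory: cumulative integration of angular rates
    # yields approximate (roll, pitch, yaw) relative to the initial pose.
    orientations = np.cumsum(gyro * dt[:, None], axis=0)
    # Translational trajectory: remove gravity, then integrate twice.
    linear_accel = accel - np.asarray(gravity)
    velocities = np.cumsum(linear_accel * dt[:, None], axis=0)
    positions = np.cumsum(velocities * dt[:, None], axis=0)
    return positions, orientations
```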
[0070] The derived translational and rotational trajectory of the
client device 104, in combination with the workflow of captured
images, forms a respective dataset that is stored in a subject data
store (e.g., subject data store 214, FIG. 2). Using the dataset,
multi-dimensional maps may be constructed, from which a dense point
cloud consisting of multiple points (e.g., on the order of tens of
thousands) representing the subject can be created. Optionally,
surface reconstruction algorithms may then be applied to generate
representative textured polygonal meshes of the subject.
[0071] From the generated visual representations, data may be
extracted for observation sets corresponding to various classes of
features. For example, for an observation set corresponding to a
class that includes skin features, an observation may include data
values corresponding to different characteristics of observed
moles, such as pigmentation. By comparing the extracted data
against stored data, or by analyzing the extracted data itself, a
temporal change in the data, potential conditions, or identifiable
health conditions may be detected.
[0072] The construction of multi-dimensional maps, the creation of
point clouds, the generation of meshes, and their use are described
in greater detail with respect to FIGS. 7A-7J.
[0073] FIG. 4B illustrates an environment in which a plurality of
distinct client devices 104 is used for capturing images of a
subject and for acquiring various sensor readings.
[0074] FIG. 4B illustrates an example in which four smart phones
are positioned at different angles and orientations with respect to
each other and the subject (in some embodiments, each of the client
devices 104-2 through 104-4 represents the client device 104-1 at
different positions and different times). As described in greater
detail below, the use of multiple client devices 104 is
advantageous for obtaining and acquiring comprehensive datasets,
and ultimately for enabling an enhanced analytical approach to
processing subject data.
[0075] As in many data processing applications, the availability of
additional data points allows for an enhanced and more detailed
generation of spatial, spectral, and/or temporal representations of
a subject and observed features. Moreover, additional client
devices 104 may be used to acquire data concurrently with the data
capture session of the first client device 104-1, so as to acquire
time-stamped images and sensor readings from a variety of
viewpoints and angles. Alternatively, additional client devices 104
may be used to acquire data at different times from the data
capture session of the first client device 104-1 in order to
assemble a temporal stack of spatial and spectral data.
[0076] Multiple client devices 104 may also be used to capture
images of the first subject at different resolutions (e.g., a first
dataset for capturing low-resolution images, a second dataset for
capturing high-resolution images, etc.), and/or to capture image
workflows representing distinct frequencies of light (or ranges of
frequencies) (e.g., a first client device 104 configured to detect
visible light frequencies, a second client device 104 configured to
detect IR light frequencies).
[0077] FIGS. 5A-5B illustrate an exemplary data structure for data
obtained by client devices in a surface informatics based detection
system 100, in accordance with some embodiments.
[0078] In particular, the exemplary data structures illustrated in
FIG. 5A correspond to data obtained and acquired by one or more
components or sensors of the client devices (e.g., time-stamped
images of a subject using a pixilated detector, associated meta
data, time-stamped accelerometer and gyroscope readings, etc.),
while the exemplary data structures illustrated in FIG. 5B
correspond to data extracted and derived from spatial, spectral,
and/or temporal representations and subject datasets (e.g., feature
data and temporal data extracted from constructed maps, point
clouds, meshes, etc.). In some embodiments, data structures include
additional or fewer types of data (e.g., data acquired from
optional sensors of the client device 104) or parameters associated
with data acquisition using the client devices. In some
embodiments, all or portions of the illustrated data structures are
stored in the memory of the client devices (e.g., memory 306, FIG.
3) and/or server systems (e.g., subject data store 214, geomatics
data store 230, FIG. 2). Furthermore, in some embodiments, all or
portions of the illustrated data structures are transmitted by one
or more client devices 104 to the data repository 108 for further
processing.
[0079] FIG. 5A illustrates an exemplary data structure within which
data acquired by client devices 104 is populated. For a process
involving the capture of multimedia data (e.g., using a pixilated
camera of the client device 104), the data structure may include a
workflow identifier (e.g., for identifying a particular workflow to
which one or more images/multimedia items correspond), an image
identifier (e.g., for identifying a particular image/multimedia
item of a workflow), and/or a multimedia capture resolution (e.g.,
the resolution of a captured image or video, the bit rate of
captured audio). The data structure may also include associated
meta data of the captured multimedia, such as a device identifier
(e.g., identifying the device of a group of devices that captured
the multimedia item, which may include an arbitrary identifier, a
MAC address, a device serial number, an RFID, etc.), temporal meta
data (e.g., date and time of a corresponding capture), a location
(e.g., GPS coordinates of the location at which the multimedia item was
captured), a multimedia capture frequency (e.g., the frequency at
which a stream of images is captured), device configuration
settings (e.g., frequency ranges that the pixilated detector of the
client device is configured to detect), and/or other camera data or
environmental factors associated with the captured multimedia.
[0080] FIG. 5A additionally illustrates an exemplary data structure
populated with acquired localization data. As shown, for a
corresponding device and a time of data capture (and/or other data
or meta data that may be utilized for synchronization),
localization data may include a focal length of the device, sensor
frequencies (e.g., the respective frequency at which sensors of the
client device captured data, such as an accelerometer frequency, a
gyroscope frequency, a barometer frequency, etc.), an accelerometer
reading (e.g., in meters/sec.sup.2), translational data (e.g., (x,
y, z) coordinates of the client device with respect to a pre-defined
set of axes or point of reference), rotational data (e.g.,
roll (.phi.), pitch (.theta.), yaw (.psi.)), and/or any additional
sensor or device measurements or readings for determining spatial,
spectral, and/or temporal characteristics of the client device or
captured subject.
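[0080.1] A minimal sketch of how the records of FIG. 5A might be
organized in memory is given below; the field names, types, and
default values are assumptions chosen for illustration only and are
not required by the data structures described above.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class MultimediaRecord:
    """One captured image/multimedia item and its associated meta data
    (cf. FIG. 5A). Field names here are illustrative, not normative."""
    workflow_id: str
    image_id: str
    device_id: str                  # e.g., MAC address or serial number
    timestamp: float                # date/time of capture, seconds since epoch
    resolution: Tuple[int, int]     # e.g., (480, 360) or (3264, 2448)
    capture_frequency_hz: float     # e.g., 30.0 for a stream of images
    gps_location: Optional[Tuple[float, float]] = None
    detector_band: str = "visible"  # e.g., "visible", "IR", "UV"

@dataclass
class LocalizationRecord:
    """One time-stamped localization sample (cf. FIG. 5A)."""
    device_id: str
    timestamp: float
    focal_length_mm: float
    accelerometer: Tuple[float, float, float]  # m/s^2
    translation: Tuple[float, float, float]    # (x, y, z)
    rotation: Tuple[float, float, float]       # (roll, pitch, yaw)
    sensor_frequency_hz: float = 100.0
```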
[0081] Referring now to FIG. 5B, the exemplary data structure
includes data derived and extracted from processed user submitted
data. For example, user submitted datasets that include multimedia
data, associated meta data, and localization data, may be processed
to construct multidimensional maps and to create dense point clouds
representing a captured subject. Point clouds may be further
processed using surface reconstruction algorithms and texture
mapped. From the generated maps, point clouds, and/or meshes,
useful biological/non-biological data may be extracted for further
detection applications or analysis. As shown, such data may include
a corresponding class of biological/non-biological features for
different surface informatics based detection and analysis
scenarios (e.g., genetic features, aging features, disease specific
features), an observation of a corresponding class of features
(e.g., for genetic features, observations of the eye, nose, and
mouth), values for different aspects of the corresponding
biological/non-biological features (e.g., for observations of the
eye, a value for observed eye color, a value for observed eye
shape, a value for an observed eye size), temporal data (e.g.,
indicating a change in corresponding values of an observation over
time, such as a change in the size of a subject's eye since a
previous measurement), and/or additional spatial, spectral, and/or
temporal data that may be derived.
[0082] FIG. 6 illustrates a flowchart for the processing of subject
datasets, in accordance with some embodiments.
[0083] As shown, in some embodiments, a plurality of time-stamped
images (e.g., low resolution two-dimensional images) is obtained at
a respective frequency (602), and a plurality of sensor readings of
the device (e.g., time-stamped accelerometer readings and
time-stamped gyroscope readings) is acquired at a respective
frequency (604).
[0084] In some embodiments, a process for concurrent localization
and mapping (606) is performed such that a real-time translational
and rotational trajectory of a respective acquiring device (e.g.,
(x, y, z) translational data, roll (.phi.), pitch (.theta.), yaw
(.psi.) rotational data) is obtained (608) from the acquired
plurality of sensor readings. Furthermore, in some embodiments, a
multidimensional map is constructed (610) from the obtained and
acquired data (e.g., a dataset), which includes the time-stamped
images and real-time translational and rotational trajectory of the
acquiring device, in addition to optional data such as the
time-stamped coordinates of features identified across the
time-stamped images, and the translational movement of those
coordinates across the time-stamped images. In some embodiments,
the constructed map includes a relatively sparse number of points
(e.g., hundreds of points).
[0085] The multidimensional map is further used in conjunction with
additional images (e.g., high-resolution images) captured (612) by
the one or more client devices to create (614) (e.g., multi-view
stereo) a dense point cloud. In comparison to the constructed map,
the resulting dense point cloud (616) includes many points (e.g.,
tens of thousands). In some embodiments, each of the two or
three-dimensional points of the dense point cloud represents an
average value of a respective single pixel (or group of pixels)
across the time-stamped images, and includes a surface normal
computed from translational and rotational values of the
time-stamped images.
[0086] The processing of subject datasets (e.g., datasets which
include workflows and translational and rotational trajectory
information) shown in FIG. 6 is described in greater detail below
with respect to FIGS. 7A-7J.
[0087] FIGS. 7A-7J are flow diagrams illustrating a method 7000 of
surface informatics based detection, in accordance with some
embodiments. In some implementations, the method 7000 is performed
by one or more electronic devices of one or more systems (e.g.,
client devices 104, FIGS. 1 and 3), a server system (e.g., data
repository 108, FIGS. 1 and 2), and/or additional
devices/systems/entities. Thus, in some implementations, the
operations of the method 7000 described herein are entirely
interchangeable, and respective operations of the method 7000 are
performed by any one of the aforementioned devices and systems, or
combination of devices and systems (e.g., the operations described
in FIGS. 7C-7J may be performed by the client device). For ease of
reference, the methods herein will be described as being performed
by a first smart phone (e.g., client device 104) and a data
repository apparatus (e.g., data repository 108, alternatively
referred to as a data processing and display system), both of which
compose the machine-to-machine network (e.g., surface informatics
based detection system 100, FIG. 1). While parts of the method 7000
are described with respect to a first smart phone, any operations
or combination of operations of the method 7000 may be performed by
any client device 104 having image capture capabilities (e.g., a
camera device, a computer-enabled imaging device, a PDA, etc.).
[0088] In the example provided, FIGS. 7A-7B correspond to
instructions/programs stored in a memory (e.g., memory 306, FIG. 3)
or other computer-readable storage medium of the first smart phone
(e.g., memory 306 of client device 104, FIG. 3). The first smart
phone has one or more first processors (e.g., 302) for executing
the stored instructions/programs, at least one first accelerometer
(e.g., sensor 326), at least one first gyroscope (e.g., sensor
326), and a first two-dimensional pixilated detector (e.g.,
image/video capture module 332). Furthermore, FIGS. 7C-7J
correspond to instructions/programs stored in a memory or other
computer-readable storage medium of the data repository apparatus
(e.g., memory 206 of data repository 108, FIG. 2), which has one or
more processors for executing the stored instructions/programs.
Instructions/programs stored in the memory of the client device 104
and/or the data repository apparatus 108 may include programs for
real-time feature detection, real-time generation of feature-based
coordinate point cloud systems, and/or active mapping and tracking
of coordinate points of a point cloud system to image features
(e.g., geomatics module 224, processing module 232, FIG. 2).
Optionally, the first smart phone includes one or more additional
sensors (e.g., barometer, compass, light sensors, etc.) for
acquiring additional sensor readings that may be used as additional
mathematical variables in spatial, spectral, and/or temporal
processing operations (as described in greater detail below).
[0089] The first smart phone obtains (7002) a respective
time-stamped image of a first subject in a plurality of subjects
using a first two-dimensional pixilated detector of a first smart
phone at a first frequency, thereby obtaining a first workflow. The
first workflow includes a first plurality of time-stamped images.
In some embodiments, the time-stamped images have coordinate mapped
feature points for features of the first subject. That is, in some
embodiments, observable features of the first subject (e.g., a
human subject's eyes) comprise a plurality of feature points that
can be mapped to a predefined coordinate system (e.g., two or
three-dimensional). Furthermore, in some embodiments, each
time-stamped image of the first workflow has a respective time
point of a first plurality of time points at which the respective
time-stamped image was obtained.
[0090] In some embodiments, at least some of the time-stamped
images of the first workflow obtained at the first frequency are
images having a first resolution (e.g., a low resolution, such as
480.times.360) that enables real-time image capture at the first
frequency. In some embodiments, the first resolution is greater
than 1000.times.1000, while in other embodiments, the first
resolution is less than 1000.times.1000. In some embodiments, at
least some of the time-stamped images of the first workflow are
images having a second resolution distinct from the first
resolution (e.g., a high resolution, such as 3264.times.2448). In
some embodiments, a resolution of each image in a first subset of
the first workflow is (7004) less than 1000.times.1000 (e.g.,
low-resolution images so as to enable real-time vision processing).
In some embodiments, a resolution of each image in a second subset
of the first workflow is (7006) greater than 1000.times.1000.
[0091] In some embodiments, the first subject is illuminated by
natural light while obtaining the first workflow. In some
embodiments, the first subject is illuminated by artificial light
while obtaining the first workflow. In some embodiments, the first
smart phone illuminates the first subject with a light emitting
diode of the first smart phone while obtaining the first workflow.
In some embodiments, the first smart phone illuminates the first
subject with polarized light while obtaining the first workflow. In
some embodiments, the first smart phone illuminates the first
subject with specific wavelengths of light while obtaining the
first workflow (e.g., IR, UV). In some embodiments, the first smart
phone illuminates the first subject while obtaining the first
workflow, and reflected light returning from the first subject is
filtered through a polarizer. In some embodiments, the first smart
phone illuminates the first subject while obtaining the first
workflow, and reflected light returning from the first subject is
filtered such that the first two-dimensional pixilated detector is
exposed to a specific wavelength range of light that is more than
the wavelength range of the illuminating light.
[0092] In some embodiments, the first smart phone is configured to
image subjects with a predefined surface type, color, brightness,
size, and shape characteristics (e.g., by adjusting one or more
device or software settings for the first smart phone).
[0093] The first smart phone acquires (7008) a respective
time-stamped accelerometer interval reading and a respective
time-stamped gyroscope interval reading using the respective at
least one first accelerometer and the at least one first gyroscope
of the first smart phone at a second frequency independent of the
first frequency. A first plurality of time-stamped accelerometer
interval readings and a first plurality of time-stamped interval
gyroscope readings is thereby obtained. Furthermore, in some
embodiments, each of the time-stamped accelerometer interval
readings and each of the time-stamped interval gyroscope readings
have a respective time point of a second plurality of time points
at which the respective reading was acquired. As a result, a first
real-time translational and rotational trajectory of the first
smart phone is thereby obtained which indicates a relative position
of the first smart phone with respect to the first subject through the
first plurality of time-stamped images. The accelerometer and
gyroscope readings change, for example, as the position of the
first smart phone changes during image capture. Thus, based on a
predefined or known initial trajectory of the first smart phone,
and using the acquired accelerometer readings, gyroscope readings,
and the time at which the readings were acquired, the first
real-time translational and rotational trajectory of the first
smart phone may be obtained, where the obtained trajectory
represents a change in the position and orientation of the first
smart phone (e.g., new poses of the first smart phone relative to
the subject). An exemplary data structure including translational
and rotational trajectory data of the first smart phone is shown in
FIGS. 5A-5B.
[0094] In some embodiments, translational values for the
translational trajectory include (7010) (x, y, z) translational
values, and rotational values for the rotational trajectory include
(yaw, pitch, roll) rotational values (the combination of (x, y, z)
translational values, (yaw, pitch, roll) rotational values, and the
focal length at a given point in time for a respective smart phone
is referred to as "camera pose."). An exemplary predefined set of
axes for defining the translational and rotational trajectory is
illustrated in FIG. 4A. In some embodiments, the first frequency
and second frequency are (7012) each independently between 10
Hz and 100 Hz. In some embodiments, the first frequency is (7014)
30 Hz and the second frequency is 100 Hz. In some embodiments, the
first frequency and/or second frequency are less than 10 Hz, or
more than 100 Hz.
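[0094.1] Because the first and second frequencies are independent
(e.g., images at 30 Hz and inertial readings at 100 Hz), a camera
pose must be associated with each time-stamped image by resampling
the sensor-derived trajectory at the image timestamps. The following
sketch assumes simple linear interpolation, which is an
approximation (particularly for rotation angles) and is not mandated
by the disclosure.

```python
import numpy as np

def pose_at_image_times(image_times, imu_times, translations, rotations):
    """Resample the trajectory (sampled at the second frequency) at the
    image timestamps (sampled at the first frequency).

    image_times: (M,), imu_times: (N,), translations/rotations: (N, 3).
    Returns per-image translations and rotations, each of shape (M, 3).
    """
    def interp(column):
        return np.interp(image_times, imu_times, column)

    # Linear interpolation per component; Euler-angle interpolation is a
    # simplification adequate only for small inter-sample rotations.
    t = np.stack([interp(translations[:, i]) for i in range(3)], axis=1)
    r = np.stack([interp(rotations[:, i]) for i in range(3)], axis=1)
    return t, r
```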
[0095] In some embodiments, the first plurality of time points
(while obtaining the time-stamped images, 7002) and second
plurality of time points (while acquiring the accelerometer and
gyroscope interval readings, 7008) are within a first timeframe
corresponding to a first capture session. That is, in some
embodiments, the time-stamped images are obtained and the interval
readings are acquired concurrently during the same capture session
(e.g., during the same period of time on the same day).
[0096] In some embodiments, the first smart phone acquires
time-stamped coordinates of the feature points in the first
workflow at each of the first plurality of time points, thereby
obtaining real time translational movement of the coordinates of
the feature points. In each time-stamped image of the first
workflow, statistics of pixel intensities in a local neighborhood
around each pixel are computed, where the computing is iterated
over every pixel location (or alternatively a subset of pixel
locations). In doing so, groups of neighboring pixels with
significant variation along two spatial dimensions are detected and
identified. Subsequently, thresholds (e.g., of pixel intensity
variation) are defined, and every pixel location that exceeds one
or more predefined thresholds corresponds to a positive feature
detection. Furthermore, in some embodiments, for every feature
detected, statistics are computed for the differences between
neighboring pixels surrounding the pixel location. These computed
statistics act as a fingerprint (i.e., a descriptor that uniquely
identifies that feature). As described in greater detail below, for
any two successive images, such as a first and second time-stamped
image, features are detected in each image, and descriptors for
every detected feature are computed. For each descriptor in the
second time-stamped image, a search is performed for the most
similar descriptor in the first time-stamped image until a set of
potentially matched features is obtained whose locations are known
in both the first and second time-stamped images. For geometric
consistency, in some embodiments, one or more algorithms (e.g.,
Random Sample Consensus (RANSAC)) are applied to filter out feature
matches that are not geometrically consistent with other feature
matches, resulting in a set of reliably tracked two-dimensional
feature points between the first and second time-stamped image.
Repeating the above process in real time over every overlapping
pair of frames leads to a set of two-dimensional features whose
translations are known.
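[0096.1] The following sketch illustrates the describe-and-match
pipeline above using the open-source OpenCV library; ORB features
and a Hamming-distance matcher stand in for the generic
pixel-statistics descriptor, and a fundamental-matrix RANSAC step
stands in for the geometric-consistency filter. None of these
specific choices is required by the method.

```python
import cv2
import numpy as np

def track_features(img1, img2):
    """Detect features in two successive time-stamped images, match
    descriptors, and reject matches that are not geometrically
    consistent with the dominant two-view geometry (RANSAC)."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # RANSAC filters feature matches inconsistent with the epipolar
    # geometry between the two frames, leaving reliably tracked points.
    _, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 3.0, 0.99)
    inliers = mask.ravel().astype(bool)
    return pts1[inliers], pts2[inliers]
```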
[0097] The first smart phone communicates (7016) through a network
to a data repository for image processing and analysis of a first
dataset including the first workflow and the first real-time
translational and rotational trajectory (e.g., client devices 104-1
and 104-2 transmitting respective workflows and obtained
translational and rotational trajectories to the data repository
108 through the network 106, FIG. 1). Furthermore, in some
embodiments, the first dataset further includes the time-stamped
coordinates of the feature points in the first workflow and the
translational movement of the coordinates of the feature points in
the first workflow. In some embodiments, the communicating occurs
(7018) in real time concurrently with the obtaining and the
acquiring. Additionally and/or alternatively, the first dataset is
stored and communicated to the data repository at a time subsequent
to the obtaining/acquiring of the dataset. In some embodiments, the
first dataset is (7020) communicated wirelessly (e.g., using a
wireless-protocol based communications interface 304, the client
device 104 transmits the first dataset to the data repository 108
through the network 106, FIGS. 1 and 3). Alternatively, the first
dataset is communicated over a wire (e.g., USB data transfer
interface).
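[0097.1] A minimal sketch of the communicating step is shown below.
The endpoint URL, payload layout, and use of HTTP/JSON are
illustrative assumptions only; the disclosure requires nothing more
specific than wired or wireless transfer of the dataset to the data
repository.

```python
import json
import requests  # third-party HTTP client; any transport would serve

def upload_dataset(dataset, repository_url="https://example.com/api/datasets"):
    """Communicate a dataset (workflow identifier, per-image trajectory,
    and time-stamped feature-point coordinates) to the data repository."""
    payload = {
        "workflow_id": dataset["workflow_id"],
        "trajectory": dataset["trajectory"],          # (x, y, z, roll, pitch, yaw) per image
        "feature_points": dataset["feature_points"],  # time-stamped coordinates
    }
    response = requests.post(
        repository_url,
        data=json.dumps(payload),
        headers={"Content-Type": "application/json"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()
```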
[0098] In some embodiments, additional datasets are communicated to
the data repository for image processing and analysis. Additional
datasets may include sets of data acquired by the same first smart
phone that obtained and acquired data for the first dataset. This
may, for example, correspond to scenarios in which the additional
datasets representing the same subject are acquired by the same
device, but at subsequent times (e.g., once every week) in order to
assemble a temporal stack of spatial and spectral data for
analysis, as in the case of detecting temporal observable changes.
Alternatively, additional datasets may correspond to sets of data
acquired by one or more additional smart phones distinct from the
first smart phone, either concurrently with or at different times
from the data capture session of the first smart phone. Whether the
additional datasets are acquired by the same or different smart
phones, different data capture sessions may be utilized, for
example, to capture images of the first subject from different
angles or ranges of angles (e.g., a first smart phone for capturing
a first workflow of images from the front of the subject, and a
second smart phone for capturing a second workflow of images from
behind the subject). Different data capture sessions may also be
utilized to capture images at different resolutions (e.g., a first
dataset for capturing low-resolution images and a second dataset
for capturing high-resolution images), and/or to capture image
workflows representing distinct frequencies of light (or ranges of
frequencies) (e.g., a first smart phone with an image
sensor/pixilated detector configured to detect visible light
frequencies, and a second smart phone with an image
sensor/pixilated detector configured to detect IR light
frequencies).
[0099] For example, referring now to FIG. 7B, the
machine-to-machine network (e.g., surface informatics based
detection system 100, FIG. 1) further includes a second smart phone
(e.g., client device 104-1) in some embodiments. As described
above, in other implementations, the second smart phone is the
first smart phone, as opposed to a distinct smart phone. The second
smart phone has a second two-dimensional pixilated detector (e.g.,
image/video capture module 332), at least one second accelerometer
(e.g., sensor 326), at least one second gyroscope (e.g., sensor
326), one or more second processors (e.g., 302), and memory (e.g.,
memory 306, FIG. 3) for storing one or more programs for execution
by the one or more second processors. The one or more programs
include programs for real-time feature detection, real-time
generation of feature-based coordinate point cloud systems, and/or
active mapping and tracking of coordinate points of a point cloud
system to image features. Optionally, the second smart phone
includes one or more additional sensors (e.g., barometer, compass,
light sensors, etc.) for acquiring additional sensor readings that
may be used as additional mathematical variables in spatial,
spectral, and/or temporal processing operations.
[0100] In some embodiments, the second smart phone obtains (7022) a
respective time-stamped image of the first subject using the second
two-dimensional pixilated detector at a third frequency, thereby
obtaining a second workflow including a second plurality of
time-stamped images. In some embodiments, the time-stamped images
of the second workflow have coordinate mapped feature points for
features of the first subject. In some embodiments, each
time-stamped image of the second workflow has a respective time
point of a third plurality of time points at which the respective
time-stamped image was obtained.
[0101] In some embodiments, the second smart phone acquires (7024)
a respective time-stamped accelerometer interval reading and a
respective time-stamped gyroscope interval reading using the
respective at least one second accelerometer and the at least one
second gyroscope of the second smart phone at a fourth frequency
independent of the third frequency. A second plurality of
time-stamped accelerometer interval readings and a second plurality
of time-stamped interval gyroscope readings is thereby obtained. In
some embodiments, each of the time-stamped accelerometer interval
readings and each of the time-stamped interval gyroscope readings
have a respective time point of a fourth plurality of time points
at which the respective reading was acquired. As a result, a second
real-time translational and rotational trajectory of the second
smart phone is thereby obtained which indicates a relative position
of the second smart phone with respect to the first subject through the
second plurality of time-stamped images.
[0102] In some embodiments, the second smart phone acquires
time-stamped coordinates of the feature points in the second
workflow at each of the third plurality of time points, thereby
obtaining real time translational movement of the coordinates of
the feature points.
[0103] The second smart phone communicates (7026) through the
network to the data repository for image processing and analysis of
a second dataset including the second workflow and the second
real-time translational and rotational trajectory. Furthermore, in
some embodiments, the second dataset further includes the
time-stamped coordinates of the feature points in the second
workflow and the translational movement of the coordinates of the
feature points in the second workflow.
[0104] In some embodiments, the first, second, third, and fourth
plurality of time points are within a first timeframe corresponding
to a first capture session (e.g., datasets from different smart
phones, but from same capture session).
[0105] In some embodiments, the first and second plurality of time
points are within a first timeframe corresponding to a first
capture session, and the third and fourth plurality of time points
are within a second timeframe corresponding to a second capture
session, wherein the first timeframe predates the second timeframe.
This corresponds to the scenario in which datasets are received
from different smart phones, and from different capture sessions
(e.g., in order to assemble a temporal stack of spatial and
spectral data for analysis). In some further embodiments, the
second computer-enabled imaging device is the first
computer-enabled imaging device, the second two-dimensional
pixilated detector is the first two-dimensional pixilated detector,
the at least one second accelerometer is the at least one first
accelerometer, the at least one second gyroscope is the at least
one first gyroscope, the one or more second processors are the one
or more first processors, and the memory for storing one or more
programs for execution by the one or more second processors is the
memory for storing one or more programs for execution by the one or
more first processors. Here, datasets are received from the same
smart phone, but from different capture sessions.
[0106] Operations 7022 through 7026 performed by the second smart
phone may be performed in accordance with any of the embodiments
described with respect to the first smart phone (e.g., operations
7002 through 7020). Furthermore, any of the operations 7002 through
7026 may be performed for any additional smart phones in order to
produce additional, subsequent datasets for processing. As
described above, the subsequent datasets may correspond to data
obtained and acquired by the same smart phone at different times,
or data obtained and acquired by additional and distinct smart
phones at the same or different times. An example in which multiple
smart phones (e.g., client devices 104-1 through 104-4) are used
for concurrent and varied data capture is illustrated in FIG.
4B.
[0107] Referring now to FIG. 7C, the data repository apparatus
(e.g., data repository 108, FIGS. 1 and 2) receives the first
dataset and/or second dataset (and/or additional subsequent
datasets) from the first smart phone and/or the second smart phone
(which, in some embodiments, is the first smart phone), and stores
(7028) the first dataset and/or second dataset in a subject data
store associated with the first subject in a first memory location
in the computer memory (e.g., subject data store 214, FIG. 2). The
various processing performed by the data repository apparatus using
received datasets is described in greater detail below.
Alternatively, in some embodiments, the first and/or second dataset
are retained and processed on the first smart phone, where the
various operations described with respect to the data repository
apparatus below are performed by the first smart phone (e.g.,
constructing two or three-dimensional maps from datasets, creating
dense point clouds for viewing, etc.).
[0108] In some embodiments, the data repository apparatus
constructs (7030) a two or three-dimensional map from the first
dataset and/or second dataset (and/or additional subsequent
datasets). In some embodiments, constructing includes matching
(7032) a two-dimensional feature in a first time-stamped image and
a second time-stamped image in the first workflow and/or the second
workflow. Two-dimensional features may include tight groups of
high-contrast pixels identified in both the first and second
time-stamped images. Next, a parallax is estimated (7034) between
the first time-stamped image and the second time-stamped image
using the first and/or the second real-time translational and
rotational trajectory, and/or the translational movement of the
coordinates of the feature points in the first and/or second
workflow. When the estimated parallax satisfies a parallax
threshold and the matched two-dimensional feature satisfies a
matching threshold, a two or three-dimensional point is added
(7036) to the two or three-dimensional map at a distance obtained
by triangulating the first time-stamped image and the second
time-stamped image using the first and/or second real-time
translational and rotational trajectory. Referring now to FIG. 7D,
the matching (7032), the estimating (7034), and the adding (7036)
are repeated (7038) for a different first time-stamped image or a
different second time-stamped image in the first workflow and/or
the second workflow, or for a different two-dimensional feature.
The two or three-dimensional map including a plurality of two or
three-dimensional points is thereby constructed.
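[0108.1] The matching, parallax-estimation, and point-adding
operations (7032-7036) can be sketched as follows, again using
OpenCV for triangulation. The intrinsic matrix K, the 3.times.4 pose
matrices, and the one-degree parallax threshold are illustrative
assumptions, not limitations of the method.

```python
import cv2
import numpy as np

def add_point_if_parallax(K, pose1, pose2, pt1, pt2,
                          parallax_threshold_deg=1.0):
    """Given a matched 2D feature (pt1, pt2) in two time-stamped images
    with known camera poses (3x4 [R|t] matrices derived from the
    translational/rotational trajectory), estimate parallax and, if it
    satisfies the threshold, triangulate and return a 3D point."""
    P1, P2 = K @ pose1, K @ pose2
    X_h = cv2.triangulatePoints(P1, P2,
                                np.float32(pt1).reshape(2, 1),
                                np.float32(pt2).reshape(2, 1))
    X = (X_h[:3] / X_h[3]).ravel()

    # Parallax: angle between the rays from the two camera centers to X.
    c1 = -pose1[:, :3].T @ pose1[:, 3]
    c2 = -pose2[:, :3].T @ pose2[:, 3]
    r1, r2 = X - c1, X - c2
    cos_a = np.dot(r1, r2) / (np.linalg.norm(r1) * np.linalg.norm(r2))
    parallax_deg = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

    return X if parallax_deg >= parallax_threshold_deg else None
```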
[0109] In some embodiments, each of the two or three-dimensional
points represents (7040) an average value of a respective single
pixel or a respective group of pixels across at least a subset of
the first workflow and/or second workflow that were identified by
the two or three-dimensional map as corresponding to each other.
The average value may correspond to an average value of a color
associated with each of the two or three-dimensional points (e.g.,
RGB color system, where red, green, and blue each have integer
values ranging from 0 to 255). In some embodiments, each of the two
or three-dimensional points includes (7042) a surface normal
computed from translational and rotational values of the subset of
the first workflow and/or second workflow.
[0110] In some embodiments, translational and rotational values
from the first and/or second real-time translational and rotational
trajectory of the first and/or second smart phone, for each
time-stamped image in the first and/or second workflow (and/or
additional subsequent workflows of subsequent datasets from the
same or different smart phones), are used (7044) to refine the two
or three-dimensional map constructed from the first dataset and/or
second dataset (and/or subsequent datasets) (a process sometimes
referred to as "bundle adjustment"). In some embodiments, "bundle
adjustment" is an optimization that takes as inputs a set of two or
three-dimensional points, a set of two or three-dimensional device
poses (e.g., translational and/or rotational data for the smart
phone), and/or any sensor measurements that relate points in the
images to the devices (e.g., two-dimensional features observed in
multiple images in the workflows) or that relate devices to devices
(e.g., gyroscope and accelerometer readings). One or more
algorithms (e.g., Levenberg-Marquardt algorithm) are used to find
refined values for all the two or three-dimensional points and
three-dimensional device poses which agree as well as possible with
the sensor measurements. If additional sensor measurements ever
become available, this bundle adjustment process can be performed
again, taking the new measurements into account, in order to
produce a refined map of the scene (i.e., updated three-dimensional
points and three-dimensional device poses). For example, a second
dataset may observe the same three-dimensional points as a first
dataset, but from a new set of perspectives. Two-dimensional
feature correspondences are first calculated between the two sets of
images of the first and second datasets (see discussion of feature
descriptors and matching, described in greater detail above),
creating new measurements linking the new set of devices to the
original set of three-dimensional points (and therefore also to the
original devices).
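[0110.1] A compact bundle-adjustment sketch using SciPy's
least-squares solver is shown below. The six-parameter pose
representation, the small-angle rotation model, and the dense solver
are simplifications made for illustration; production implementations
exploit the sparsity of the problem and a full rotation
parameterization.

```python
import numpy as np
from scipy.optimize import least_squares

def bundle_adjust(points3d, poses, observations, K):
    """Jointly refine 3D points and camera poses so that reprojections
    agree with observed 2D features. `observations` is a list of
    (camera_index, point_index, (u, v)) tuples; K is the 3x3 intrinsic
    matrix; each pose is (rx, ry, rz, tx, ty, tz)."""
    n_cams, n_pts = len(poses), len(points3d)

    def project(point, pose):
        rx, ry, rz, tx, ty, tz = pose
        # Small-angle rotation approximation, for brevity only.
        R = np.array([[1, -rz, ry], [rz, 1, -rx], [-ry, rx, 1]])
        p = K @ (R @ point + np.array([tx, ty, tz]))
        return p[:2] / p[2]

    def residuals(params):
        cams = params[:n_cams * 6].reshape(n_cams, 6)
        pts = params[n_cams * 6:].reshape(n_pts, 3)
        res = []
        for cam_i, pt_i, uv in observations:
            res.extend(project(pts[pt_i], cams[cam_i]) - np.asarray(uv))
        return np.array(res)

    x0 = np.hstack([np.asarray(poses).ravel(), np.asarray(points3d).ravel()])
    result = least_squares(residuals, x0, method="lm")  # Levenberg-Marquardt
    refined_poses = result.x[:n_cams * 6].reshape(n_cams, 6)
    refined_points = result.x[n_cams * 6:].reshape(n_pts, 3)
    return refined_poses, refined_points
```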
[0111] In some embodiments, a first two or three-dimensional map is
constructed from a first dataset, and a second two or
three-dimensional map is constructed from a second dataset, where
the first dataset corresponds to a time frame during which data was
captured that predates the time frame of the second dataset. In
other words, in some embodiments, a corresponding two or
three-dimensional map is constructed for each received dataset.
These embodiments enable the concurrent viewing of constructed maps
corresponding to distinct data captures (e.g., datasets acquired at
different times, datasets acquired with devices using distinct
settings (e.g., one device configured for UV, another configured
for IR), etc.). In other embodiments, a two or three-dimensional
map is constructed from a first dataset, and a second dataset
received at a later time frame is used in conjunction with the
already constructed two or three-dimensional map (without
constructing an additional two or three-dimensional map). Thus, the
saved two or three-dimensional map (and correspondingly saved
two-dimensional features) from a previous data collection is
re-used (e.g., time-stamped images from second dataset used only to
texture map the generated mesh).
[0112] Furthermore, in some embodiments, a dense point cloud
representing the first subject is created (7046) from the two or
three-dimensional map, the dense point cloud including a plurality
of points. In an example in which a three-dimensional map is
created, groups of pixels in a first time-stamped image may have
corresponding pixels in a second time-stamped image, where the
corresponding pixels represent the pixels in the first time-stamped
image shifted over by a number of pixel positions, referred to as a
pixel disparity. Using the obtained translational and rotational
trajectory of the first smart phone, the pixel disparity of any
group of pixels identified in multiple images can therefore be
converted directly into a physical distance from the first smart
phone, thus creating a dense three-dimensional point cloud that may
include tens of thousands of points. In other embodiments, the
dense point cloud is a two-dimensional map that includes a
plurality of points (e.g., a mosaic that is a two-dimensional
representation of a region of interest of the first subject). The
active point cloud surface reference system thus expands at the
edges as new areas of the subject are imaged, thereby extending the
map and dense point cloud to include previously non-imaged
structures.
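[0112.1] The disparity-to-distance conversion mentioned above follows
the standard stereo relation depth = focal length .times. baseline /
disparity, where the baseline is the translational displacement of
the device between the two captures, taken from the derived
trajectory. A trivial sketch, with illustrative numbers only:

```python
def disparity_to_depth(disparity_px, focal_length_px, baseline_m):
    """Convert a pixel disparity between two time-stamped images into a
    physical distance from the device (standard stereo relation)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# Example (assumed values): a 280-pixel disparity, a 2800-pixel focal
# length, and 5 cm of device motion place the surface about 0.5 m away.
depth_m = disparity_to_depth(280, 2800, 0.05)
```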
[0113] Furthermore, not only do these embodiments allow for
building and growing the extent of a two or three-dimensional
surface, but they also allow for the addition of more detail as
more resolution of an area is obtained. That is, the active point
system adds new points (e.g., step 7036) between existing points
as more detail emerges (i.e., as a camera gets closer to the
subject), thereby allowing for building multi-dimensional maps with
increased detail as more pixels are obtained from imaging a
structure or subject.
[0114] Consequently, the dense point cloud (or multiple dense point
clouds) may be created in such a way that a resolution of a target
area (e.g., observed features, feature points, region of the
subject, etc.) is maintained or increased as a function of
increasing proximity to the target area. In other words, the dense
point cloud may be created such that "zooming" into a target area
(e.g., increasing proximity to the target area while viewing a
texture-mapped mesh generated in step 7050) does not decrease a
viewing resolution of the target area, but rather maintains the
same or increases the viewing resolution. By doing so, a subject's
features, such as a human eye, may be displayed and viewed at
closer distances without degrading a viewing resolution.
[0115] For example, in some embodiments, an additional data
collection event is performed, where an additional dataset that
includes time-stamped images of the first subject and corresponding
sensor readings is obtained and communicated to the data repository
(e.g., repeating steps 7002, 7008, and 7016 of the method 7000 for
an additional dataset, FIG. 7A). Here, the time-stamped images and
corresponding sensor readings are obtained at different positions
along a pre-defined axis, where the pre-defined axis defines an
axis along which the distance of the first smart phone from the
subject varies. An example is shown in FIG. 4A, where positions
along the y-axis correspond to distinct distances of the client
device 104-1 from the user 102-1. Thus, referring to this example,
obtaining the additional dataset includes capturing additional
time-stamped images of the subject and acquiring corresponding
sensor readings at distances incrementally closer to and/or farther
from the subject. The two or three-dimensional map is then further
constructed (step 7030) from the additional dataset, where the
dense point cloud is created (step 7046) from the two or
three-dimensional map. Thereafter, the dense point cloud (or any
visualization subsequently generated from the point cloud, such as
a texture-mapped mesh, step 7050) may be viewed and manipulated at
any plurality of selected distances or scales (e.g., "zoom" levels)
without decreasing a resolution of a target area being viewed.
[0116] In some embodiments, to induce additional parallax (e.g.,
for matching features in step 7032 and adding points to the
constructed map in step 7036), the time-stamped images and
corresponding sensor readings are obtained at different positions
along the pre-defined axis in accordance with a non-uniform capture
pattern. For example, referring to FIG. 4A, the client device 104-1
may advance along the y-axis in a "zig-zag" fashion (i.e., from
side to side) in order to induce extra parallax, thereby
constructing a two or three-dimensional map/creating a dense point
cloud having a higher resolution (i.e., greater number of
points).
[0117] In some embodiments, the additional dataset is obtained
after constructing the two or three-dimensional map (step 7030) or
after creating the dense point cloud (step 7046). That is, in some
embodiments, the additional dataset that includes time-stamped
images of the subject and corresponding sensor readings is used to
augment an existing map or dense point cloud such that a target
area (e.g., observable features, selected region, etc.) of the
subject may be viewed at fixed and/or increased resolutions. In
some embodiments, the existing map and/or dense point cloud include
at least some of the same feature points across at least some of the
plurality of distances (e.g., "zoom" levels).
[0118] Alternatively, in some embodiments, rather than creating a
single dense point cloud that is based on the initial (i.e., the
first and/or second) and additional datasets (i.e., which include
images captured at various distances along a pre-defined axis
defining a distance from the subject), multiple dense point clouds
are created. More specifically, in some embodiments, one or more
additional dense point clouds are created (e.g., by repeating step
7046 for each of one or more additional datasets), where each
additional dense point cloud corresponds to a different distance
(e.g., "zoom" level) of the first smart phone from the subject. In
some embodiments, the initial map is constructed (step 7030) and/or
the dense point cloud is created (step 7046) before creating the
one or more additional dense point clouds. In creating each
respective additional dense point cloud, respective time-stamped
images of the subject and corresponding sensor readings are
obtained (e.g., by orbiting the first subject) at a first
position/distance along a pre-defined axis, where the pre-defined
axis defines an axis along which the distance from the first smart
phone and the subject varies (e.g., y-axis, FIG. 4A). The
respective time-stamped images and sensor readings at the first
distance are then used to build upon the initial map and/or initial
dense point cloud to create the respective additional dense point
cloud corresponding to the first position/distance along the
pre-defined axis. This process is repeated for each additional
distance (or "zoom" level) of a plurality of distances, such that
the one or more additional dense point clouds representing the
subject at different distances are created. In some embodiments,
the one or more additional dense point clouds are spatially aligned
as a result of using the same initial map/dense point cloud as a
base coordinate system upon which to create the additional dense
point clouds.
[0119] Referring now to FIG. 7E, in some embodiments, the dense
point cloud is then processed (7048) using a surface reconstruction
algorithm to generate a mesh representing the first subject. Thus,
by converting the dense point cloud and its constituent points
(e.g., tens of thousands of sample points), a surface mesh of the
first subject may be recreated. In some embodiments, a Poisson
surface reconstruction algorithm is used to generate the mesh. In
some embodiments, the mesh is a solid polygonal mesh (e.g., a set
of connected triangular faces). The three-dimensional polygonal
mesh is created for a number of reasons. One reason is that it
allows a human user to visualize the high resolution image data
from multiple angles by using a three-dimensional viewer (e.g.,
client application module 340, FIG. 3) that interactively projects
any of the collected high resolution imagery directly onto the
triangles of the polygonal mesh. Another reason is that, from a
data processing and analysis perspective, the polygonal mesh
enables the alignment of high resolution imagery and/or spectral
data initially captured from different viewpoints. Because the
spatial position of all cameras has been determined (e.g., the
first real-time translational and rotational trajectory of the
first smart phone), the otherwise non-aligned images can be
projected onto the polygonal mesh such that each triangle of the
mesh now contains aligned pixel data.
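[0119.1] By way of illustration, the Poisson reconstruction step
(7048) might be carried out with the open-source Open3D library as
sketched below; the library choice and the octree depth parameter
are assumptions, not part of the method.

```python
import numpy as np
import open3d as o3d  # open-source geometry library; disclosure is library-agnostic

def dense_cloud_to_mesh(points, normals):
    """Build a point cloud from the dense points and their surface
    normals, then apply Poisson surface reconstruction to obtain a
    solid triangular mesh representing the subject."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(np.asarray(points))
    pcd.normals = o3d.utility.Vector3dVector(np.asarray(normals))
    mesh, _densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=9)
    return mesh
```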
[0120] In some embodiments, after the mesh is created, a texture
mapping algorithm is applied (7050) to the mesh to generate a
texture-mapped mesh representing the first subject using one or
more third time-stamped images of the first workflow and/or the
second workflow. In some embodiments, the one or more third
time-stamped images of the first workflow are high-resolution
images (e.g., 3264.times.2448), whereas the first time-stamped
image and the second time-stamped image in the first workflow (used
for constructing the two or three-dimensional map) are low
resolution images (e.g., 480.times.360). In some embodiments,
projecting textures, imagery, or data onto a polygonal mesh
includes the following operations for each triangle in the mesh:
(1) the camera pose (e.g., (x,y,z), (yaw, pitch, roll), and focal
length) associated with a high resolution image is used to project
each of the three vertices of the given three-dimensional triangle
into the image (each vertex lands on a specific two-dimensional
pixel position in the image, thus specifying a two-dimensional
triangular area of pixels in the original image), (2) for any given
virtual three-dimensional camera, such as a user-chosen viewpoint
in an interactive three-dimensional viewer application, the
triangular image region from the original image (from step 1) is
warped to occupy the on-screen projection of the same
three-dimensional triangle.
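[0120.1] Operation (1) above reduces to a pinhole projection of each
triangle vertex using the stored camera pose; a minimal sketch, with
assumed intrinsic matrix K and rotation/translation R, t, follows.

```python
import numpy as np

def project_triangle(vertices, K, R, t):
    """Project the three 3D vertices of a mesh triangle into a
    high-resolution image using the camera pose (rotation R,
    translation t) and the intrinsics K containing the focal length.
    Returns the (3, 2) triangular pixel region that textures the face."""
    uv = []
    for v in np.asarray(vertices, dtype=float):   # vertices: shape (3, 3)
        p = K @ (R @ v + t)                       # pinhole projection
        uv.append(p[:2] / p[2])                   # pixel position of the vertex
    return np.array(uv)
```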
[0121] In some embodiments, the dense point cloud, the mesh, and/or
the texture-mapped mesh representing the first subject are stored
(7052) in a second memory location of the data repository
memory.
[0122] In some embodiments, two, three, or four-dimensional sets of
data are extracted for processing in other systems, integration
into other three dimensional virtual environments, and exportation
to three dimensional printing or other three dimensional rendering
processes. The sets of data may be extracted, for example, from the
constructed map, dense point cloud, mesh, and/or texture-mapped
mesh. In some embodiments, computed spectral and temporal
relationships of two or three dimensional features of the subject
are displayed (7054) on local (e.g., the first smart phone) or
remotely networked devices (e.g., a client device 104-4, a
dedicated display terminal), using the constructed map, dense point
cloud, mesh, and/or texture-mapped mesh. Displaying spatial,
spectral, and/or temporal relationships of two or three dimensional
features of the subject data may include applying a variety of
image processing techniques to the dense point cloud, the mesh,
and/or the texture-mapped mesh. In some embodiments, a contour map
is generated from the texture-mapped mesh. In some embodiments, a
contour map includes data indicating the degree to which an
observed lesion is raised above the skin. In some embodiments, the
boundaries of surface observables are delineated (e.g., by
identifying high contrast pixels) using data from a texture-mapped
mesh. In one example, the boundary and shape of an observed lesion
is traced and identified. In some embodiments, pigmentation maps
are generated from a texture-mapped mesh by identifying varying
degrees of color contrast. As an example, a skin blood map (e.g.,
showing blood pigmentation of a skin region) is generated from a
texture-mapped mesh to indicate the progress of a healing wound. In
some embodiments, displaying the spatial, spectral, and/or temporal
relationships includes displaying the constructed map, dense point
cloud, mesh, and/or texture-mapped mesh on a display device for
manipulation. In some embodiments, the computed spectral and
temporal relationships are displayed on virtual reality
displays.
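[0122.1] Two of the display products described above, boundary
delineation from high-contrast pixels and a simple pigmentation map,
might be derived from a texture image as sketched below; the
edge-detection thresholds and the redness ratio are illustrative
choices only, not the method's required processing.

```python
import cv2
import numpy as np

def lesion_boundary_and_pigmentation(texture_rgb):
    """Delineate the boundary of a surface observable by locating
    high-contrast pixels, and derive a crude pigmentation (redness)
    map from a texture-mapped image (uint8 RGB)."""
    gray = cv2.cvtColor(texture_rgb, cv2.COLOR_RGB2GRAY)
    edges = cv2.Canny(gray, 50, 150)              # high-contrast boundary pixels
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Redness relative to overall intensity as a blood-pigmentation proxy.
    rgb = texture_rgb.astype(np.float32) + 1e-6
    pigmentation = rgb[:, :, 0] / rgb.sum(axis=2)
    return contours, pigmentation
```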
[0123] After creating the dense point cloud (and optionally
generating the texture-mapped mesh), useful
biological/non-biological information is extracted, processed, and
analyzed in order to detect temporal observable changes, potential
conditions, or pre-confirmed conditions for a subject in some
embodiments. Analysis and processing are performed with respect to
the spatial, spectral, and/or temporal aspects of extracted
data.
[0124] The systems and methods described herein can be used in a
variety of biological and non-biological applications.
[0125] In some embodiments, the described systems and methods are
used to determine whether the subject has a wide variety of medical
conditions, examples of which include, but are not limited to:
abrasion, alopecia, atrophy, av malformation, battle sign, bullae,
burrow, basal cell carcinoma, burn, candidal diaper dermatitis,
cat-scratch disease, contact dermatitis, cutaneous larva migrans,
cutis marmorata, dermatoma, ecchymosis, ephelides, erythema
infectiosum, erythema multiforme, eschar, excoriation, fifth
disease, folliculitis, graft vs. host disease, guttate, guttate
psoriasis, hand, foot and mouth disease, Henoch-Schonlein purpura,
herpes simplex, hives, id reaction, impetigo, insect bite, juvenile
rheumatoid arthritis, Kawasaki disease, keloids, keratosis pilaris,
Koebner phenomenon, Langerhans cell histiocytosis, leukemia, lichen
striatus, lichenification, livedo reticularis, lymphangitis,
measles, meningococcemia, molluscum contagiosum, neurofibromatosis,
nevus, poison ivy dermatitis, psoriasis, scabies, scarlet fever,
scar, seborrheic dermatitis, serum sickness, Shagreen plaque,
Stevens-Johnson syndrome, strawberry tongue, swimmers' itch,
telangiectasia, tinea capitis, tinea corporis, tuberous sclerosis,
urticaria, varicella, varicella zoster, wheal, xanthoma,
zosteriform, basal cell carcinoma, squamous cell carcinoma,
malignant melanoma, dermatofibrosarcoma protuberans, Merkel cell
carcinoma, and Kaposi's sarcoma. Additional examples are provided
below.
[0126] Other examples include, but are not limited to, tissue
viability (e.g., whether tissue is dead or living, and/or whether
it is predicted to remain living); tissue ischemia; malignant cells
or tissues (e.g., delineating malignant from benign tumors,
dysplasias, precancerous tissue, metastasis); tissue infection
and/or inflammation; and/or the presence of pathogens (e.g.,
bacterial or viral counts). Some embodiments include
differentiating different types of tissue from each other, for
example, differentiating bone from flesh, skin, and/or vasculature.
Some embodiments exclude the characterization of vasculature.
[0127] The levels of certain chemicals in the body, which may or
may not be naturally occurring in the body, can also be
characterized. Examples include chemicals reflective of blood flow,
such as oxyhemoglobin and deoxyhemoglobin, myoglobin and
deoxymyoglobin, and cytochrome, as well as pH, glucose, calcium, and
any compounds that the subject may have ingested, such as illegal
drugs, pharmaceutical compounds, or alcohol.
[0128] In some embodiments, the described systems and methods are
used in a number of agricultural contexts and applications.
Examples include general plant assessment, such as assessing plant
types, plant height, green leaf material, number of leaves and
general health status by assessing shape, greenness, and/or height
of the plant. Other examples include plant status monitoring, which
may include the use of multi-temporal images of the same plant
material to assess plant growth rate and/or leaf area duration, and
to accordingly adjust model-based yield estimations (e.g.,
collection of important morphological and physiological information
for crops of plants over time to assess temporal
features/parameters, such as growth rate, leaf area duration,
etc.). Additional examples include species identification, whereby
plant data of an unknown plant may be collected and compared
against existing databases to identify a species of the plant
(e.g., collecting morphological and physiological information of an
unknown weed plant and comparing against a database of plant
species to identify the plant species, provide treatment
recommendations, determine if the species is endemic to a certain
habitat, etc.). Other examples also include disease or pest
identification (e.g., capturing images of damaged surface features
of crops and comparing against a database to determine presence of
disease or infestation). Such systems and methods may be used in
any other agricultural contexts or applications in which observable
plant or crop features can be captured and analyzed.
[0129] Referring to FIG. 7F, in some embodiments, the data
repository apparatus extracts (7056), from the dense point cloud
(and/or the constructed map, mesh, or texture-mapped mesh)
representing the first subject, values for observed features of the
first subject for one or more observations for each of one or more
first-subject observation sets. Each of the one or more
first-subject observation sets corresponds to a respective class of
features.
[0130] Classes of features may correspond to features of a
biological or non-biological subject. Features of a class may be
related physiologically, structurally, biologically, genetically,
and/or in any other classifiable manner. Biological subjects
include humans, plants, fungi, or other living organisms that are
not plants, animals, or fungi. Human disease detection, for
example, may include analyzing observation sets corresponding to
classes of features such as skin features or eye features. In
contrast, non-biological subjects may include physical structures
or objects that are subject to change or defects (e.g., rusting of
metal structures, automobile accidents, a defective product on an
assembly line, etc.).
[0131] Different classes of features are pertinent to different
surface informatics based detection scenarios. In some embodiments,
surface informatics based detection or analysis in the context of
vegetative target characteristics may involve classes of features
for: plant identification (e.g., plant types, weed/invasive species
detection, crop genotyping (seed registry)), plant phenotyping
(e.g., height, leaf number, leaf morphology, plant count, plant
morphology, root biomass, fruit count and size), plant biomass
analysis (e.g., leaf area, leaf area index, plant biomass, crop
yield), temporal plant analysis (e.g., plant growth rate, leaf area
duration, crop yield prediction), plant physiology analysis (e.g.,
leaf pigments (chlorophyll, carotenoid, anthocyanin), plant water
content, sugar and starch content in crops), and plant health
detection (e.g., insect identification, disease identification and
quantification, plant stress detection, water deficiency, treatment
recommendation). In some embodiments, surface informatics based
detection or analysis in the context of human target
characteristics involves classes of features for: genetic features
(e.g., eye shape and color, nose shape and size, ear shape and
size, lip size and shape, relative positioning of eyes, nose,
mouth, ears, and hairline, head shape, skin color, hair color (dyed
or natural)), aging features (e.g., solar damage (lentigines),
wrinkles, graying, nose/ear size, head shape), and/or disease
specific features (e.g., trauma wounds, crust, inflammation,
induration, papules, nodules, ulceration, plaques, scale, blisters,
bulla, vessel pattern, pigmentation, eye features, lesions). Other
classes of features may also be used for animal identification,
fungus identification, insect identification, aquatic plants and
animals, non-biologic applications, terrain characteristics (e.g.,
soil type, rock type), construction (e.g., paint color, concrete,
rust, corrosion, fatigue, fracture), urban landscape feature
delineation, and/or rural landscape feature delineation.
[0132] Observations in an observation set are related groups of
qualitative/quantitative data that correspond to a particular
feature or characteristic of the corresponding class of features.
As an example, an observation set corresponding to skin features
may include a first observation corresponding to extracted data
related to pigmentation (e.g., observed skin pigmentation of
different surface regions of the subject) and a second observation
corresponding to extracted data related to lesions (e.g., observed
lesions of a subject). Feature data for an observation therefore
includes quantitative and/or qualitative information for the
particular observation, and may comprise various types of data
related to the particular observation. Continuing the example
above, feature data for the second observation corresponding to
lesions may include data related to: location (e.g., diffuse,
localized), lesion size and size distribution, percent body surface
area, and lesion structures (e.g., scale, blood (deoxidation,
oxidation, degradation), melanin (eumelanin, pheomelanin),
collagen, other pigments (carotenoids, tattoo ink), structure
uniformity (degree/extent of non-uniform features), and/or
surface/subsurface features (milium cysts, comedonal openings,
location of pigment)). As another example, for agricultural
applications (as described above), feature data may include
morphological and physiological information for a variety of
related plant characteristics (e.g., plant size, number of leaves,
pigmentation, etc.).
[0133] Extracting data may include reading data from subject
datasets, and/or from the spatial, spectral, and/or temporal
representations of subject datasets (e.g., constructed maps, dense
point clouds, meshes, texture-mapped meshes, and/or any
visualizations resulting from image processing, where the read data
may be spatial, spectral, or temporal in nature). For example, data
indicating the degree to which an observed lesion is raised above
the skin may be read from a generated contour map. As another
example, the size and shape of an observed lesion may be estimated
using the boundary delineated by image processing performed on the
texture-mapped mesh. In another example, the progress of a healing
wound may be determined based on the changing pigmentation shown by
a blood map generated from a texture-mapped mesh.
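By way of non-limiting illustration only, the following sketch shows one possible way of reading a lesion's size from a boundary delineated by image processing. It is written in Python; the binary mask, the millimeters-per-pixel scale, and the function name are hypothetical stand-ins for outputs of the image-processing stage and are not part of the systems described herein.

    import numpy as np

    def lesion_size_from_mask(mask, mm_per_pixel):
        """Estimate lesion area and extent from a binary boundary mask."""
        ys, xs = np.nonzero(mask)                        # pixels inside the delineated boundary
        area_mm2 = float(mask.sum()) * mm_per_pixel ** 2
        width_mm = (xs.max() - xs.min() + 1) * mm_per_pixel if xs.size else 0.0
        height_mm = (ys.max() - ys.min() + 1) * mm_per_pixel if ys.size else 0.0
        return {"area_mm2": area_mm2, "width_mm": width_mm, "height_mm": height_mm}

    # Hypothetical example: a 5x5-pixel lesion region imaged at 0.2 mm per pixel.
    demo_mask = np.zeros((20, 20), dtype=np.uint8)
    demo_mask[8:13, 8:13] = 1
    print(lesion_size_from_mask(demo_mask, mm_per_pixel=0.2))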
[0134] In some embodiments, each respective observation set in the
one or more first-subject observation sets includes (7058) one or
more observations. An individual observation of the one or more
observations of a respective observation set includes: feature data
(7060) of the individual observation of the respective observation
set, and temporal data (7062) of the individual observation of the
respective observation set. The temporal data describes a change in
values for feature data for the individual observation over time.
As described above, feature data may include quantitative and/or
qualitative information for a particular observation (e.g., number,
size, color of lesions). In contrast, temporal data represents
observed changes in values of the feature data over a predefined
period of time. For example, temporal data may indicate that since
a last workflow capture, the number and size of previously observed
lesions has increased by a measurable amount (e.g., expressed as a
quantifiable amount, such as a quantity or percentage of change).
Temporal data may represent changes in feature data measured with
respect to any specified point or range of time (e.g., difference
between a current value of feature data and a most-recently
measured value, an initial value, a value measured on a certain
date at a certain time, etc.).
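By way of non-limiting illustration, one possible in-memory representation of the observation sets, observations, feature data, and temporal data described above is sketched below in Python. The class and field names are hypothetical and are not prescribed by the embodiments.

    from dataclasses import dataclass, field
    from typing import Any, Dict, List

    @dataclass
    class Observation:
        name: str                      # e.g., "lesions" or "pigmentation"
        feature_data: Dict[str, Any]   # quantitative/qualitative values for the observation
        temporal_data: Dict[str, Any]  # change in feature-data values over time

    @dataclass
    class ObservationSet:
        feature_class: str             # e.g., "skin features"
        observations: List[Observation] = field(default_factory=list)

    # Hypothetical example: a skin-feature observation set with one lesion observation.
    skin_set = ObservationSet(
        feature_class="skin features",
        observations=[
            Observation(
                name="lesions",
                feature_data={"count": 3, "max_width_mm": 6.0, "shape": "irregular"},
                temporal_data={"count_delta": 1, "max_width_growth_pct": 50.0},
            )
        ],
    )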
[0135] As described below, various detection implementations may be
used in the analysis of extracted data, where data extracted for a
subject may be compared against stored data for other
subjects/patients (FIGS. 7G-7H), or may itself be analyzed and
compared against predefined thresholds (FIGS. 7I-7J).
[0136] Referring to FIG. 7G, in some embodiments, the data
repository apparatus retrieves (7064) one or more stored
observation sets, from the subject data store stored in the first
memory location of the data repository memory, based on a
correspondence between the class of features of the one or more
first-subject observation sets and the class of features of the one
or more stored observation sets. For example, if the feature data
for an observation set corresponds to skin features as the
associated class of features, the data repository apparatus
accordingly retrieves stored observation sets corresponding to skin
features. Stored observation sets may include feature data of and
submitted by patients other than the first subject, where stored
observation sets may be retrieved from a large-scale (e.g.,
worldwide) database configured to aggregate and manage a
significant volume of records and patient data (e.g., data
repository 108, FIGS. 1 and 2). As described above with respect to
the first-subject observation sets, in some embodiments, each
respective observation set in the one or more stored observation
sets includes (7066) one or more observations. An individual
observation of the one or more observations of a respective
observation set includes: feature data (7068) of the individual
observation of the respective observation set, and temporal data
(7070) of the individual observation of the respective observation
set. The temporal data describes a change in values for feature
data for the individual observation over time.
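As a non-limiting illustrative sketch, retrieval of stored observation sets by class-of-features correspondence might resemble the following Python snippet, in which the layout of the subject data store (a mapping from subject identifiers to stored observation sets) is a hypothetical assumption rather than the format of data repository 108.

    from typing import Dict, List

    def retrieve_matching_sets(subject_data_store, first_subject_classes):
        """Return stored observation sets whose class of features matches one of
        the first subject's classes of features."""
        matches: List[dict] = []
        for subject_id, stored_sets in subject_data_store.items():
            for stored_set in stored_sets:
                if stored_set.get("feature_class") in first_subject_classes:
                    matches.append({"subject_id": subject_id, **stored_set})
        return matches

    # Hypothetical example: two stored subjects, only one with skin-feature data.
    store: Dict[str, List[dict]] = {
        "patient-002": [{"feature_class": "skin features", "lesion_growth_pct": 65.0}],
        "patient-003": [{"feature_class": "eye features", "redness_index": 0.4}],
    }
    print(retrieve_matching_sets(store, ["skin features"]))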
[0137] After retrieving the one or more stored observation sets, a
temporal observable change is detected (7072) for the first subject
based on a numerical correlation with a corresponding pattern in at
least one of the one or more stored observation sets. Additionally
and/or alternatively, a potential condition (e.g.,
biological/non-biological, such as suspicious tissue growth, the
spread of fractures in a building structure, etc.) or a
pre-confirmed health condition is detected for the first subject
based on a correlation with a corresponding pattern in at least one
of the one or more stored observation sets.
[0138] Detection may occur in a variety of ways. In one example,
data extracted for a subject includes data related to observed
lesions (e.g., a first observation) on a subject's skin (e.g., a
first observation set corresponding to skin features). Feature data
may describe a number of characteristics of a particular lesion,
including an observed size, shape, and color. Furthermore, the data
for the subject collected over a predefined period of time (e.g.,
over three months) may indicate that the particular lesion has
grown in size by over 50% (e.g., from 4 mm in width to 6 mm),
has become irregular in shape (e.g., exhibiting jagged edges and
deviating from an initial circular shape), and has become darker in
color (e.g., by a predefined number of shades). To determine if any
pattern exhibited by the extracted data has been linked to
confirmed health conditions, the extracted data (e.g., feature
and/or temporal data) is correlated to, and subsequently compared
against, the retrieved data of other patients and subjects for
corresponding features. The resulting comparison may indicate, for
example, that subjects who exhibit similar characteristics (e.g.,
similar size of an observed lesion, similar degree of change in the
pigmentation of an observed lesion) have previously been diagnosed
with certain conditions (e.g., pigmented basal cell carcinoma).
[0139] More specifically, in some embodiments, detecting (7072) a
temporal observable change includes, for a respective first-subject
observation set, correlating (7074) feature data and/or temporal
data of at least a subset of the observations of the respective
first-subject observation set with feature data and/or temporal
data of at least a corresponding subset of the observations of a
corresponding stored observation set. In other words, extracted
data (e.g., feature and/or temporal data) for the first subject is
compared against stored data for other subjects, where the compared
data relates to the same observation within the same observation
set. In some embodiments, correlating feature data may include a
direct comparison of feature data (e.g., measured and noted
characteristics for a particular observation, irrespective of
temporal data) in the first-subject observation set and the
corresponding stored observation set. In one example, the size of a
lesion observed on the first subject is compared against the size
of an observed lesion for another patient, as indicated in the
stored observation set. In contrast, in some embodiments,
correlating temporal data includes a direct comparison of temporal
data (e.g., measured changes over a predefined period of time in
feature data for a particular observation) in the first-subject
observation set and the corresponding stored observation set. In
one such example, the growth rate of a lesion for the first subject
is compared against a lesion growth rate for another patient, as
indicated in the stored observation set.
[0140] In some embodiments, correlating (7074) includes comparing
the feature data for the first subject against an average value for
the corresponding stored feature data. In one such example, the
average value is an average percentage of growth based on all or a
subset of the stored observation sets. In some embodiments,
correlating (7074) includes comparing the feature data for the
first subject against corresponding data for each of the plurality
of stored observation sets, each stored observation set
corresponding to feature and/or temporal data for a different
subject (e.g., a different patient). In one such example, the
stored observation sets include respective data for a second,
third, and fourth subject (e.g., different patients). Feature data
(e.g., data for observed lesions) for the first subject is then
compared to data for each of the second, third, and fourth
subjects. The results of the comparison may indicate a disparity in
data. For example, the comparison may indicate a +15% difference
between data for the first and second subjects, where data for the
first subject indicates a 50% increase in size, and data for the
second subject (e.g., who has a confirmed diagnosis of basal cell
carcinoma) indicates a 65% increase. Any known statistical
techniques may then be applied for analyzing such results (e.g.,
calculating an average based on the three separate
comparisons).
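The comparison against each stored observation set and against an average value could, purely for illustration, be sketched as follows in Python; the growth-rate figures repeat the hypothetical example above, and the function name is not part of the described systems.

    from statistics import mean

    def correlate_growth(first_growth_pct, stored):
        """Compare the first subject's lesion growth against each stored subject
        and against the stored average (all values in percent)."""
        pairwise = {s["subject_id"]: s["growth_pct"] - first_growth_pct for s in stored}
        average = mean(s["growth_pct"] for s in stored)
        return {"pairwise_difference": pairwise,
                "difference_from_average": first_growth_pct - average}

    # Hypothetical figures from the example above: first subject +50%,
    # second subject +65% (confirmed basal cell carcinoma), plus two others.
    print(correlate_growth(50.0, [
        {"subject_id": "second", "growth_pct": 65.0},   # yields the +15% pairwise difference
        {"subject_id": "third", "growth_pct": 40.0},
        {"subject_id": "fourth", "growth_pct": 55.0},
    ]))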
[0141] Based on the correlating, a respective numerical score is
computed (7076) for each observation of the subset of observations
of the respective first-subject observation set (e.g., a numerical
score for skin pigmentation and a separate numerical score for
lesions). In some embodiments, the respective numerical score is a
function of, and is based on, the numerical values calculated during
the comparison. Continuing the example above, a numerical score
based on feature data indicating a lesion growth rate that is 15%
higher than the average lesion growth rate will be higher than a
numerical score based on a lesion growth rate that is only 5%
higher than the average. In accordance with a respective numerical
score of the one or more numerical scores satisfying a
corresponding feature score threshold, the temporal observable
change (and/or pre-confirmed health condition) is detected (7078).
In some embodiments, numerical scores for the subset of
observations for the respective first-subject observation set are
aggregated, and the aggregate numerical score for the respective
first-subject observation set is compared against a corresponding
feature score threshold.
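By way of non-limiting illustration, computing per-observation numerical scores and testing them against a feature score threshold might be sketched as below; the particular scoring rule (distance above the stored average) and the threshold values are hypothetical assumptions, not the scoring defined by the embodiments.

    def score_observation(subject_value, stored_average):
        """Score an observation by how far it sits above the stored average."""
        return max(0.0, subject_value - stored_average)

    def detect(scores, thresholds):
        """Report detection if any observation's score meets its feature score threshold."""
        return any(scores[name] >= thresholds.get(name, float("inf")) for name in scores)

    # Hypothetical example: a lesion growth rate 15 points above the stored average
    # scores higher than one only 5 points above, and only the former satisfies a
    # threshold of 10.
    scores = {"lesion_growth": score_observation(65.0, 50.0),       # 15.0
              "pigmentation_change": score_observation(55.0, 50.0)}  # 5.0
    print(detect(scores, {"lesion_growth": 10.0, "pigmentation_change": 10.0}))  # True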
[0142] In some embodiments, the subject data store is updated
(7080) at the first memory location (e.g., of memory 206 of the
data repository 108, FIG. 2) with one or more computed numerical
scores. In some embodiments, the computed numerical scores that are
stored have associated timestamps indicating the date/time at which
the scores were computed, and are stored for later access on behalf
of the first subject, or by other users with access to the data
repository apparatus. In some embodiments, the subject data store
is updated at the first memory location with an indication of the
temporal observable change.
[0143] In some embodiments, in accordance with the respective
numerical score of the one or more numerical scores satisfying the
corresponding numerical score threshold, an alert is sent (7082) to
a remote device associated with the first subject or a remote
device associated with a caretaker of the first subject. The alert
may include one or more forms of electronic communication (e.g.,
automated e-mail, text message, notification in proprietary
application linked to the data repository apparatus, etc.).
[0144] In some embodiments, in accordance with the respective
numerical score of the one or more numerical scores satisfying the
corresponding numerical score threshold, one or more images are
sent (7084) to a remote device associated with the first subject or
a remote device associated with a caretaker of the first subject.
The one or more images include at least a subset of the image data
used by the correlating. For example, a photograph of an observed
lesion (e.g., where lesions as an observation of an observation set
have a numerical score exceeding a predefined threshold) is sent to
a mobile device of the first subject.
[0145] In some embodiments, remote devices that receive alerts or
images (in accordance with a numerical score satisfying a
corresponding numerical score threshold) for the first subject are
pre-authorized devices permitted to receive the alerts or images
for the first subject (e.g., the first subject provides
authorization for any remote devices and associated users to
receive alerts/images).
[0146] In some embodiments, the data repository apparatus displays
(or causes the display of) at least one time-stamped image of the
first workflow and an indication of the temporal observable change
on the at least one time-stamped image. In some embodiments, the
data repository apparatus displays (or causes the display of) at
least one time-stamped image of the first workflow and a false
color display indication of a pixel or a pixel group that is
associated with the temporal observable change.
[0147] Separate and apart from detection based on comparisons
between extracted data for the first subject and the stored data of
other subjects, observed changes in feature data over time that
satisfy a predefined threshold may in and of themselves warrant
attention.
[0148] Referring now to FIG. 7I, in some embodiments, a temporal
observable change (additionally and/or alternatively, a potential
biological/non-biological condition, or a pre-confirmed health
condition) is detected (7086) for the first subject based on a
variation in the one or more first-subject observation sets over
time satisfying a temporal variation threshold. For example,
detection systems (e.g., data repository 108) may be configured so
that observed lesion growth rates in excess of 10% over the course
of a month--although not linked to a pre-confirmed case of basal
cell carcinoma--indicate detection of a temporal observable change.
Variations in the one or more first-subject observation sets may be
determined directly from the temporal data of observations in an
observation set. As described above with respect to temporal data,
variations may correspond to quantifiable or qualitative changes in
any type of feature data (e.g., an increase in the number or size
of previously observed lesions), and may be measured with respect
to any specified point or range of time (e.g., difference between a
current value of feature data and a most-recently measured
value).
[0149] In some embodiments, each constituent type of feature data
for a respective observation of an observation set has a
corresponding temporal variation threshold. As an example, the
change in an observed number of lesions may have a first temporal
variation threshold (e.g., +/-1 lesion), the change in size of
observed lesions may have a second temporal variation threshold
(e.g., +/-20% surface area), and the change in pigmentation of
observed lesions may have a third temporal variation threshold
(e.g., +/-2 shades). In some embodiments, a respective observation
has a corresponding temporal variation threshold that applies to
all constituent types of data (e.g., any variation of +/-20%,
whether a change in the number, size, or pigmentation of observed
lesions). In some embodiments, the temporal observable change is
detected if a variation in any constituent type of feature data
satisfies a temporal variation threshold (e.g., a change in the
observed number of lesions satisfies its threshold), while in other
embodiments, the temporal observable change is detected if
variations for a combination of constituent types of feature data
satisfy their respective thresholds (e.g., both a change in the
number and a change in the pigmentation of observed lesions satisfy
their respective thresholds).
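The per-feature temporal variation thresholds, and the "any feature" versus "combination of features" detection policies described above, could be sketched (purely for illustration, with hypothetical threshold values mirroring the examples) as follows in Python.

    THRESHOLDS = {"lesion_count": 1, "lesion_area_pct": 20.0, "pigmentation_shades": 2}

    def exceeded(variations, features):
        """Per-feature check of absolute variation against its threshold."""
        return {f: abs(variations.get(f, 0.0)) >= THRESHOLDS[f] for f in features}

    def detect_any(variations):
        return any(exceeded(variations, THRESHOLDS).values())

    def detect_combination(variations, required):
        return all(exceeded(variations, required).values())

    # Hypothetical example: lesion count changed by +1, pigmentation by 1 shade.
    obs = {"lesion_count": 1, "lesion_area_pct": 5.0, "pigmentation_shades": 1}
    print(detect_any(obs))                                                   # True
    print(detect_combination(obs, ["lesion_count", "pigmentation_shades"]))  # False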
[0150] In some embodiments, the indication of the temporal
observable change is reported (7088). In some embodiments,
reporting the indication includes (7090) updating the subject data
store at the first memory location (e.g., of memory 206 of the data
repository 108, FIG. 2) with the indication of the temporal
observable change. In some embodiments, the indication includes an
associated timestamp for the date/time at which the temporal
observable change for the first subject was detected, the
indication being stored for later access on behalf of the first
subject, or by other users with access to the data repository
apparatus. In some embodiments, reporting the indication includes
(7092) displaying at least one time-stamped image of the workflow
and an indication of the temporal observable change on the at least
one time-stamped image. For example, a photograph of an observed
lesion whose growth has exceeded a predefined temporal variation
threshold, and a boundary delineation of the observed lesion, is
displayed on a mobile device of the first subject. In some
embodiments, multiple time-stamped images are displayed as a
chronological series of images (e.g., or as a side-by-side
comparison) such that the variation satisfying the temporal
variation threshold is visually presented. In some embodiments, at
least one time-stamped image of the workflow and a false color
display indication of the pixel or the pixel group that is
associated with the temporal observable change is displayed (7094)
(e.g., an image is modified such that a lesion whose growth
satisfies the temporal variation threshold is shown in a higher
contrast color to surrounding regions).
[0151] Referring now to FIG. 7J, a temporal observable change, a
potential biological/non-biological condition, or a known health
condition may be detected based on pixel intensity
across images of a workflow. That is, in some embodiments, the
time-stamped images of the first workflow are aligned (7096) using
the first real-time translational and rotational trajectory thereby
creating an aligned workflow. Corresponding pixel intensities, or
corresponding pixel group intensities, are then compared (7098)
across the first aligned workflow, for satisfaction of an intensity
variation threshold. When a pixel intensity or a pixel group
intensity, across the first workflow, satisfies the intensity
variation threshold, the pixel or the pixel group is reported.
Pixel or pixel group intensities may correspond to features
detected across the time-stamped images of a workflow, as indicated
by high-contrast pixels or groups of pixels in comparison to
surrounding pixels. As an example, a brown-colored lesion observed
on skin having a pale complexion will have a higher color contrast.
When the intensity (e.g., contrast, color intensity, etc.) of the
pixels corresponding to the brown-colored lesion satisfies a
threshold, the pixels are reported (e.g., to the first subject).
Pixel or pixel group intensities may be expressed as a measure of
brightness, contrast, or hue, as a specific color, and/or as a
specific shape (or lack thereof). These pixel groups may be further
combined and segregated into composite structures to identify and
analyze underlying structural patterns (e.g., blood vessel shapes,
extent and distribution of color variation in a pigmented lesion,
location of scale at leading or trailing edge of pink lesion,
extent of rust on a structure, damage on a product, etc.).
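By way of non-limiting illustration, comparing corresponding pixel intensities across an already-aligned workflow against an intensity variation threshold might be sketched as below in Python. Alignment using the translational and rotational trajectory is assumed to have been performed upstream, and the array shapes and threshold value are hypothetical.

    import numpy as np

    def report_varying_pixels(aligned_frames, threshold):
        """aligned_frames: array of shape (num_images, height, width), grayscale.
        Returns a boolean mask of pixels whose intensity range meets the threshold."""
        intensity_range = aligned_frames.max(axis=0) - aligned_frames.min(axis=0)
        return intensity_range >= threshold

    # Hypothetical example: three aligned 4x4 frames in which one pixel darkens over time.
    frames = np.full((3, 4, 4), 200.0)
    frames[1, 2, 2] = 160.0
    frames[2, 2, 2] = 120.0
    mask = report_varying_pixels(frames, threshold=50.0)
    print(np.argwhere(mask))   # [[2 2]]: the pixel (or pixel group) to be reported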
[0152] In some embodiments, reporting includes (7100) displaying at
least one time-stamped image of the workflow and an indication of
the pixel or the pixel group that satisfied the intensity variation
threshold (e.g., displaying an image of the observed lesion whose
corresponding pixels satisfied the intensity variation threshold).
In some embodiments, reporting includes (7102) displaying at least
one time-stamped image of the workflow and a false color display
indication of the pixel or the pixel group that satisfied the
intensity variation threshold (e.g., an image is modified such that
a lesion whose pigmentation satisfies the intensity variation
threshold is shown in a higher contrast color to surrounding
regions). In some embodiments, reporting includes (7104) storing an
indication of the pixel or the pixel group that, across the first
workflow, satisfied the intensity variation threshold in the
subject data store associated with the first subject in the first
memory location in the computer memory (e.g., of memory 206 of the
data repository 108, FIG. 2).
[0153] In some embodiments, the data repository apparatus retains
an indexable file of observed features to allow for cross
comparisons between spectral, spatial, and temporal characteristics
of observed features on a plurality of subjects. In some
embodiments, the indexable file is used to interrogate and annotate
collated spectral, spatial, or temporal characteristics of observed
features on a plurality of subjects. Furthermore, in some
embodiments, the indexable file is used to correlate unique
spectral, spatial, or temporal characteristics with specific
biologic or non-biologic processes. In some embodiments, the file
also includes a collection of composite structures, their
structural patterns, temporal changes, and/or locations and size of
observed features on subjects (biologic or non-biologic).
[0154] While some parts of the method 7000 in FIGS. 7A-7J are
described with respect to the first smart phone and/or the second
smart phone (e.g., first and/or second time-stamped images of a
first and/or workflow), any of the embodiments described above may
be analogously applied to each additional device of the
machine-to-machine network.
[0155] Stages of method 7000 described with respect to FIGS. 7A-7J
may be performed additionally and/or alternatively to one another.
For example, temporal observable changes or pre-confirmed health
conditions may be detected by way of comparison to stored
observation sets, concurrently with determining whether variations
in data for a first subject satisfy a temporal variation
threshold (irrespective of any comparison to stored observation
sets).
[0156] For situations in which the systems discussed above collect
information about users, the users may be provided with an
opportunity to opt in/out of programs or features that may collect
personal information (e.g., information about a user's preferences
or a user's contributions to social content providers). In
addition, in some embodiments, certain data may be anonymized in
one or more ways before it is stored or used, so that personally
identifiable information is removed. For example, a user's identity
may be anonymized so that the personally identifiable information
cannot be determined for or associated with the user, and so that
user preferences or user interactions are generalized (for example,
generalized based on user demographics) rather than associated with
a particular user.
[0157] Although some of the various drawings illustrate a number of
logical stages in a particular order, stages which are not order
dependent may be reordered and other stages may be combined or
broken out. While some reordering or other groupings are
specifically mentioned, others will be apparent to those of
ordinary skill in the art, so the ordering and groupings presented
herein are not an exhaustive list of alternatives. Moreover, it
should be recognized that the stages could be implemented in
hardware, firmware, software or any combination thereof.
[0158] The foregoing description, for purpose of explanation, has
been described with reference to specific embodiments. However, the
illustrative discussions above are not intended to be exhaustive or
to limit the scope of the claims to the precise forms disclosed.
Many modifications and variations are possible in view of the above
teachings. The embodiments were chosen in order to best explain the
principles underlying the claims and their practical applications,
to thereby enable others skilled in the art to best use the
embodiments with various modifications as are suited to the
particular uses contemplated.
[0159] Furthermore, in addition to the list of conditions described
with respect to FIGS. 7F-7J, other medical conditions which the
methods and systems described herein may be used to detect include,
but are not limited to (categories are general and listings under
one heading do not exclude a role in another heading): Acne
(Vulgaris, Rosacea, Fulminans, neonatal, Pomade, Corticosteroid,
Scars), Bites (Chiggers, Fleas, Bedbugs, Brown Recluse) and Stings
(Bee, Wasp, Fire Ant, Scorpion, Jellyfish, Stingray), Dermatitis
(Atopic, Contact, Contact Irritant, Blister Beetle, Caterpillar,
Carpet Beetle, Coral, Sea Urchin, Sponges, Seabather's Eruption,
contact allergic (poison ivy, nickel, rubber)), Infections
(impetigo, cellulitis, abscess, candidal intertrigo, Tinea (Barbae,
Capitis, Corporis, Cruris, Faciei, Manuum, Pedis, Nigra,
Versicolor), Lyme disease, Rocky mountain spotted fever, Herpes
simplex, Syphilis, molluscum contagiosum, human papilloma virus,
Hand Foot mouth disease, measles, pseudomonas, Bacillary
Angiomatosis, Anthrax, Condyloma Acuminatum, Cutaneous Larva
Migrans, Erysipelas, Leishmaniasis, Leprosy, Meningococcemia,
Myiasis, Tungiasis), Malignant Skin tumors (Melanoma
(Acrolentiginous, Mucosal, Superficial Spreading, Lentigo Maligna,
Desmoplastic, Nodular), Basal Cell Carcinoma (Morpheiform,
Pigmented, Superficial), Squamous Cell Carcinoma, Bowen's Disease,
Merkel Cell Carcinoma, Kaposi Sarcoma, Dermatofibroma,
Dermatofibrosarcoma Protuberans, Angiosarcoma, Adnexal tumors,
Cutaneous leukemia/lymphoma, Keratoacanthoma), Benign Skin tumors
(Melanocytic Nevi (Atypical/Dysplastic, Becker's, Blue, Congenital,
Halo, Intradermal, Spilus, Spitz, Reed, Ito/Ota, Speckled Lentiginous),
Lentigines (Actinic, Ink spot), Ephelides, Cafe au lait macule,
Hemangiomas (Capillary, Cavernous), Neurofibroma, Angiokeratoma,
Seborrheic Keratosis, Clear Cell Acanthoma, Nevus Sebaceous,
Sebaceous Hyperplasia, Fibrous Papule, Keloid, Acrochordon),
Systemic disease (Addison's Disease, Acromegaly, AIDS-Associated
KS, Amyloidosis, jaundice, vitiligo, Porphyria, Porphyria Cutanea
Tarda, Anemia, Antiphospholipid Syndrome, Neurofibromatosis,
Behcet's Syndrome, Cryoglobulinaemia, Darier's Disease, Dermatitis
Herpetiformis, Disseminated Intravascular Coagulation,
Henoch-Schonlein Purpura, Hidradenitis Suppurativa,
Hyperthyroidism, Xanthelasma, Xanthoma, Xanthogranuloma, Rheumatoid
Arthritis, Gout, Psoriasis (guttate, vulgaris, Palmoplantar,
Inverse, pustular)), Genetic disease (Albinism (tyrosinase
negative/positive), Alkaptonuria (ochronosis), Accessory Tragus,
Accessory Nipple, Anhidrotic Ectodermal Dysplasia, Cowden Disease,
Ehlers-Danlos Syndrome, Marfan's Syndrome, Geographic Tongue,
LEOPARD Syndrome, Dysplastic Nevus Syndrome, Neurofibromatosis,
Nevoid Basal Cell Carcinoma Syndrome, Ichthyosis (Vulgaris,
Lamellar, X-Linked, Linearis Circumflexa), Osteogenesis Imperfecta,
Peutz-Jeghers Syndrome, Steatocystoma multiplex, Waardenburg
Syndrome, Piebaldism, Xeroderma Pigmentosum), Eye Disease
(Conjunctivitis, Cataracts, corneal abrasion, Blepharitis, stye,
subconjunctival hemorrhage, Pterygium, exophthalmos, Oculodermal
Melanocytosis, hyphema, filariasis), Hair disorders (Alopecia
(androgenic, areata, traumatic), Bubble Hair Deformity, Hirsutism,
Hot comb alopecia, Pili Torti, Telogen Effluvium, Uncombable Hair
Syndrome, Monilethrix, Menkes's Kinky Hair Syndrome), Nail disorders
(Beau's Lines, Mees's Lines, Candidal Paronychia, Half and Half
Nails, Leukonychia, Ochronosis, Onychodystrophy, Onychogryphosis,
Onycholysis, Onychomadesis, Onychomycosis, Onychorrhexis (Brittle
Nails), Onychoschizia, Onychotillomania, Splinter hemorrhage,
Terry's Nails, Yellow Nail Syndrome, Melanonychia, Subungual
melanoma, Median Nail Dystrophy, Koilonychia (Spoon Nails)), Aging
(wrinkles, Angular Cheilitis, Asteatotic Dermatitis, Cellulite,
Diabetic Dermopathy, Perleche, Rhinophyma, Senile/Actinic Purpura,
Varicose Veins, Spider Veins, Telangiectases, Stasis Dermatitis,
stasis ulcer, Pressure Ulcer), Light induced (Actinic Keratosis,
Actinic Cheilitis, Actinic Reticuloid, Poikiloderma of Civatte,
Photoallergic Contact Dermatitis, Phytophotodermatitis), Pediatric
disorders (Acropustulosis of Infancy, Aplasia Cutis Congenita,
Congenital dermal melanosis (Mongolian spot), Diaper Dermatitis,
Diffuse Neonatal Hemangiomatosis, Dyskeratosis Congenita,
Epidermolysis Bullosa (EB, EBDD, EBDR, EBDD, EBS, Acquisita),
Incontinentia Pigmenti, Netherton's Syndrome, Transient Bullous
Dermolysis of the Newborn, Pachyonychia Congenita, Transient
Neonatal Pustular Melanosis), Drug reactions (Atrophie Blanche,
Cushing's Syndrome, Erythema Multiforme, Toxic Epidermal
Necrolysis, Stevens-Johnson Syndrome, Fixed Drug, urticaria,
vasculitis, angioedema), Oral Candidiasis (Thrush), Nutritional
Deficiency (Vitamin C Deficiency--Scurvy, Zinc Deficiency,
Kwashiorkor, or excess (obesity)), Toxins (Argyria, Arsenical
Keratosis, Carotenemia, Chloracne), Pigmentary Disorders (Leukoderma,
Vitiligo, Post inflammatory hyperpigmentation or Hypopigmentation,
Melasma), Immune/inflammatory disorders, Lupus (Erythematosus
Bullous, Discoid, Subacute, Systemic), Linear IgA Bullous
Dermatosis, Bullous Pemphigoid, Pemphigus (Foliaceus, Vegetans,
Vulgaris, IgA, Paraneoplastic), Lichen (Aureus, Amyloidosis,
Nitidus, Planus, Sclerosus et Atrophicus, Simplex Chronicus),
Morphea, Scleroderma, Granuloma Annulare, Id Reaction, Livedo
Reticularis, Pityriasis (Alba, Rosea, Lichenoides, Rubra Pilaris),
Pyoderma Gangrenosum, Pyogenic Granuloma, Sarcoidosis,
Telangiectasis Macularis Eruptiva Perstans, Urticaria (Acute,
Chronic, Dermographism, Solar, Vasculitis),
Vasculitis-Leukocytoclastic, Median Rhomboid Glossitis),
Necrobiosis Lipoidica (Necrobiosis Lipoidica Diabeticorum),
Miliaria (Crystallina, Profunda, Rubra), Fox-Fordyce Disease,
Keratosis Pilaris, Seborrheic Dermatitis, Burns (Chemical,
Frostbite, Heat (First, Second, Third Degree), Radiation Dermatitis,
Erythema Ab Igne, Sunburn), and Trauma (Traumatic Purpura,
Lymphedema, Friction blister, abrasion, laceration, Immersion Foot
Syndromes, Tattoo).
* * * * *