U.S. patent application number 16/776932 was published by the patent office on 2020-05-28 for touchless wound measurement, wound volume measurement, and other wound measurement. The applicant listed for this patent is DermaGenesis LLC. The invention is credited to Ran Cohen, Alexander Steinberg, and Tianning Xu.

Publication Number: 20200167945
Application Number: 16/776932
Family ID: 62624966
Filed: 2020-01-30
Published: 2020-05-28
[Drawing sheets D00000-D00005 (FIGS. 1-5) and equation images M00001-M00004 accompany the publication; the equations are reproduced in the text below.]
United States Patent Application 20200167945
Kind Code: A1
Xu; Tianning; et al.
May 28, 2020
TOUCHLESS WOUND MEASUREMENT, WOUND VOLUME MEASUREMENT, AND OTHER
WOUND MEASUREMENT
Abstract
With the inventive technology, wound measurement is performed without any object needing to be physically placed on the patient's wound or on the skin near the wound. The patient's wound and the nearby skin are spared contact with a ruler, marker, or grid, and are spared being directly physically worked on.
Inventors: Xu; Tianning; (Atlanta, GA); Cohen; Ran; (Petah Tikva, IL); STEINBERG; Alexander; (Ra'anana, IL)

Applicant:
Name | City | State | Country | Type
DermaGenesis LLC | Pompano Beach | FL | US |

Family ID: 62624966
Appl. No.: 16/776932
Filed: January 30, 2020
Related U.S. Patent Documents

Application Number | Filing Date | Patent Number
15850558 | Dec 21, 2017 | 10593057
16776932 | |
62438115 | Dec 22, 2016 |
Current U.S. Class: 1/1
Current CPC Class: G06T 2207/30088 20130101; G06T 7/62 20170101; G06T 2207/30096 20130101; G06T 7/13 20170101; G06T 2207/10012 20130101; G06T 2207/10028 20130101
International Class: G06T 7/62 20060101 G06T007/62; G06T 7/13 20060101 G06T007/13
Claims
1-15. (canceled)
16. A method of modeling a wound of a patient, comprising scanning
the wound with a 3D camera; manipulating the 3D camera around a
center of the wound during the scanning step; and producing a 3D
model of the wound from imaging data of the 3D camera, the 3D model
being displayed or displayable on a screen and manipulatable on the
screen to show a back or underside of the wound.
17. The method of claim 16, wherein the wound is of a human
patient.
18. The method of claim 16, wherein the wound is of a veterinary
patient.
19. The method of claim 16, wherein the scanning step is performed
without the 3D camera coming into physical contact with the
patient.
20. The method of claim 16, wherein the wound is a dermal
wound.
21. The method of claim 16, wherein the producing step comprises
substeps of producing a depth image from the imaging data of the 3D
camera; detecting a wound from the depth image, including producing
a preliminary wound boundary formed of pixels, producing a final
wound boundary from the preliminary wound boundary, including, for
each pixel in the preliminary wound boundary, searching for a
maximum value of a directional second derivative of the depth image
along a direction orthogonal to the preliminary wound boundary, and
setting a pixel of the final wound boundary to coordinates
corresponding with the maximum value, subject to a size control
function to avoid breaking continuity of the final wound boundary.
Description
FIELD OF THE INVENTION
[0001] The invention relates to medical technology in support of
wound care, and more particularly, wound measurement
technology.
BACKGROUND OF THE INVENTION
[0002] Medical treatment of a patient with a wound typically calls
for assessment of wound sizes, repeated over time to provide an
indication of the patient's progress.
[0003] An example of a commercially available wound measuring device is sold by McKesson: a 5×7 inch disposable clear plastic sheet with a circular bull's-eye grid, marked in centimeters and inches, that is placed atop a patient's wound.
[0004] A recent advance in wound measurement technology is
reflected in Xu, "Wound Measurement on Smart Phones," US
20140088402 published March 27, 2014.
[0005] Generally, wound measurement technologies thus far have incorporated an object, or objects, physically placed onto a patient near the wound. However, needing to physically place something onto a patient has the inherent disadvantage of raising sterility concerns for any object placed onto or near the patient's wound. Further, when a plastic sheet, marker object,
etc. is placed atop or near the wound, the used object must undergo
proper disposal. Also, placing measuring devices or marker objects
atop, or near, a patient wound can be associated with patient
discomfort or pain. Consequently, there remain unmet needs for
improvements in wound measurement technology.
SUMMARY OF THE INVENTION
[0006] The invention aims to carry out wound measurement without putting a ruler, grid, marker, or other object onto a patient atop, or in the vicinity of, the wound. An object of the invention is wound measurement in which no ruler, grid, marker, or other object physically contacts the patient wound or the patient's skin near the wound.
[0007] In a preferred embodiment, the invention provides a method
of measuring a wound on a patient, comprising the following steps:
scanning the wound by a 3D camera, whereby a wound image is acquired by the 3D camera; and processing the wound image acquired by the 3D camera, thereby computing a Wound Volume Measurement, without any ruler, grid, or marker having been placed on or near the patient.
[0008] The invention in another preferred embodiment provides a
touchless method of wound measurement, consisting of steps that are
touchless relative to a patient having a wound to be measured,
wherein a Wound Volume Measurement is computed of the wound without
any RGB data processing having been performed and without any other
color-information data processing having been performed.
BRIEF DESCRIPTION OF THE DRAWINGS
[0009] FIG. 1 is a flow chart of method steps in an embodiment of
an inventive wound measurement method.
[0010] FIG. 2 is a flow chart of method steps in an inventive
embodiment of wound measurement technology using computerized
records-keeping.
[0011] FIG. 3 is a flow chart of method steps in an inventive
embodiment of wound scan and measurement.
[0012] FIG. 4 is a flow chart of method steps in an inventive
embodiment of wound detection.
[0013] FIG. 5 is a flow chart of method steps in an inventive
embodiment of wound measurements.
DETAILED DESCRIPTION OF A PREFERRED EMBODIMENT OF THE INVENTION
[0014] The inventive technology is used, in connection with a
patient who suffers from a wound, to compute a Wound Volume
Measurement, advantageously without any ruler, grid, marker (or
such physical object) needing to have been placed on, or near, the
patient (particularly, onto the patient wound or onto skin near the
patient wound). We sometimes refer herein to "touchless", by which
we mean that the patient's wound and the wound's environs are untouched by any ruler, grid, marker, 3D camera, frame enclosure holding a 3D camera, or the like.
[0015] The invention is useable for computing a Wound Volume
Measurement for a patient wound that is susceptible of imaging by a
3D camera, such as a dermal wound. The invention mainly
contemplates a human patient, but also can be used in connection
with a veterinary patient.
[0016] Examples of a 3D camera for use in practicing the invention are, e.g., the RealSense 3D camera (manufactured by Intel); the Orbbec Astra 3D camera; and the ZED stereo 3D camera by Stereolabs.
[0017] A requisite step is a step of scanning the wound by a 3D camera, by which a wound image is acquired by the 3D camera. Most
preferably, the scanning step is performed without the 3D camera
coming into physical contact with the patient.
[0018] After the wound image has been acquired by the 3D camera, a
step is performed (by a computer or other machine) of processing
the wound image acquired by the 3D camera, and thereby computing a
Wound Volume Measurement, without any ruler, grid, or marker having
been placed on or near the patient.
EXAMPLE 1
[0019] In this inventive example, method steps are performed as follows (a computational sketch follows the list):
[0020] prompting a first user to perform wound-scanning by operating a 3D camera to scan the wound and obtain a wound image, wherein the wound image has a wound edge;
[0021] saving the wound image to an OBJ mesh file, with texture; reading the OBJ mesh file;
[0022] prompting a second user to point to the wound edge with the mouse or pointer at a first point P1;
[0023] getting the 3D location of the first point P1 where the mouse was pointed; prompting the second user to point to a next point Pn on the wound edge;
[0024] getting the 3D location of the point Pn where the mouse was pointed, and, if the total of clicked points is greater than 2, adding the triangle formed by the most recent 3 clicked points to a Wound Surface value;
[0025] displaying a triangle T1;
[0026] pressing a predetermined key, or clicking a preset button, to calculate the wound volume;
[0027] for each triangle T1 . . . Tn, calculating the surface area of the triangle toward the total Wound Surface;
[0028] calculating each edge point's 3D distance, thereby getting the average Surface Distance; calculating the area surrounded by the edge points;
[0029] dividing the area surrounded by the edge points into a grid, and getting the 3D location of each grid joint;
[0030] calculating the average distance of each grid joint, thereby getting the average Wound Distance; subtracting the average Wound Distance from the average Surface Distance to obtain the average Wound Depth;
[0031] obtaining the Wound Volume by multiplying the average Wound Depth by the total Wound Surface.
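To make the arithmetic of Example 1 concrete, below is a minimal sketch in Python of the volume computation from clicked edge points. The point-picking, mesh lookups (FindNearestMeshPoint), and display steps are replaced by plain 3D coordinates supplied by the caller; the function names and the camera-at-origin distance convention are illustrative assumptions, not taken from the application.

```python
import numpy as np

def triangle_area(p1, p2, p3):
    # Half the magnitude of the cross product of two edge vectors.
    return 0.5 * np.linalg.norm(np.cross(p2 - p1, p3 - p1))

def wound_volume_example1(edge_points, grid_points):
    """Example 1 arithmetic. edge_points: clicked 3D wound-edge points
    (N x 3, N >= 3); grid_points: 3D locations of the grid joints inside
    the area surrounded by the edge points (M x 3)."""
    pts = np.asarray(edge_points, dtype=float)
    # [0024]/[0027]: each triple of most recent clicked points forms a
    # triangle; their areas sum to the total Wound Surface.
    wound_surface = sum(
        triangle_area(pts[k - 2], pts[k - 1], pts[k])
        for k in range(2, len(pts))
    )
    # [0028]: average Surface Distance over the edge points (distances
    # here are taken from a camera assumed at the origin).
    surface_distance = np.linalg.norm(pts, axis=1).mean()
    # [0029]/[0030]: average Wound Distance over the grid joints.
    grid = np.asarray(grid_points, dtype=float)
    wound_distance = np.linalg.norm(grid, axis=1).mean()
    # [0030]: average Wound Depth (sign convention follows the text).
    wound_depth = surface_distance - wound_distance
    # [0031]: Wound Volume = average Wound Depth x total Wound Surface.
    return wound_depth * wound_surface
```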
EXAMPLE 1A
[0032] In this example, Example 1 is performed using MeshLab. The step of reading the OBJ file is performed using the MeshLab app. The step of getting the 3D location of the first point P1 comprises calling FindNearestMeshPoint to get the 3D location of point P1. The step of getting the 3D location of the point Pn comprises calling FindNearestMeshPoint to get the 3D location of point Pn. The step of displaying the triangle T1 comprises calling an OpenGL function in MeshLab to display the triangle T1. The step of displaying the triangle Tn comprises calling an OpenGL function in MeshLab to display the triangle Tn. The step of dividing the area surrounded by edge points into a grid and getting a 3D location for each grid joint comprises calling FindNearestMeshPoint in MeshLab to get the 3D location of each grid joint.
EXAMPLE 1B
[0033] In this example, Example 1 is carried out, with the second user who operates the mouse also being the first user who operated the 3D camera.
EXAMPLE 1C
[0034] In this example, Example 1 is carried out, and the second user who operates the mouse is someone other than the first user who operated the 3D camera.
EXAMPLE 1D
[0035] In this example, Example 1 is performed such that exactly 3
mouse-clicked points are collected by prompting the user's clicking
on the wound edge.
EXAMPLE 1E
[0036] In this example, Example 1 is performed wherein more than 3
mouse-clicked points are collected by prompting the user's clicking
on the wound edge.
EXAMPLE 1F
[0037] In this example, Example 1 is performed, wherein a number of
mouse-clicked points collected by prompting the user's clicking on
the wound edge is in a range of 3-10 mouse-clicked points.
EXAMPLE 1G
[0038] In this example, Example 1 is performed wherein the step of pressing to calculate wound volume comprises pressing "G" in MeshLab.
EXAMPLE 2
[0039] An imaging device according to this inventive Example is
useable to acquire 3D images that can be subjected to computer
processing steps.
EXAMPLE 2.1
An imaging device was constructed as follows, according to a novel
algorithm that consists of two main parts: wound detection and
wound measurement.
[0040] The algorithm applies to a 3D model of a human body part containing a wound. The 3D model is obtained from a scan performed by an inventive application. The algorithm is not applied directly to the 3D model. Instead, the generated 3D model is rendered with camera parameters providing a good view of the wound (typically perpendicular to the wound or to the body part where the wound is), and the algorithm takes as input the Z-buffer (depth map) Z calculated by the rendering process and the corresponding 4-by-4 projection matrix P. The rendering process is based on the OpenGL API (the industry standard for high-performance graphics), and hence we use OpenGL terminology here.
[0041] In addition, the algorithm gets a user-defined outer-wound contour C as a hint for the wound location.
[0042] The algorithm does NOT use any color information.
Wound Detection
[0043] The following steps are performed.
[0044] 1. Convert the Z-buffer Z to the depth image D. The conversion is given by

$$D(i,j) = \frac{P(3,4)}{2Z(i,j) - 1 + P(3,3)}, \qquad (i,j) \in R,$$

where R = {1, . . . , m} × {1, . . . , n}, m is the number of rows and n is the number of columns in Z and D.
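A minimal sketch of this conversion in Python, assuming a NumPy Z-buffer with values in [0, 1] and a 4×4 OpenGL projection matrix (the 1-based matrix indices of the text become 0-based array indices):

```python
import numpy as np

def zbuffer_to_depth(Z, P):
    """Step 1: convert an OpenGL Z-buffer Z (m x n, values in [0, 1])
    to a depth image D via D(i,j) = P(3,4) / (2*Z(i,j) - 1 + P(3,3)).
    Text entries P(3,3), P(3,4) are P[2, 2], P[2, 3] in 0-based NumPy."""
    return P[2, 3] / (2.0 * Z - 1.0 + P[2, 2])
```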
[0045] 2. Define a region of interest U for wound detection. We include in U every (i,j) ∈ R lying inside C, except border pixels (i = 1 or i = m or j = 1 or j = n) and except pixels whose depth is too close to the far parameter of P, i.e., pixels with

$$D(i,j) > (1-a)\,\frac{P(3,4)}{P(3,3)+1},$$

where a is a small positive constant.
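A sketch of the region-of-interest mask in Python; delegating the inside-contour test to Matplotlib's Path.contains_points is an illustrative choice, not dictated by the text:

```python
import numpy as np
from matplotlib.path import Path

def region_of_interest(D, P, contour, a=0.01):
    """Step 2: boolean mask U of pixels inside the user contour C,
    excluding image-border pixels and pixels too close to the far
    plane. contour: sequence of (col, row) vertices of C. Indices are
    0-based here, so the text's border test i=1..m becomes 0..m-1."""
    m, n = D.shape
    jj, ii = np.meshgrid(np.arange(n), np.arange(m))
    inside = Path(contour).contains_points(
        np.column_stack([jj.ravel(), ii.ravel()])).reshape(m, n)
    border = (ii == 0) | (ii == m - 1) | (jj == 0) | (jj == n - 1)
    too_far = D > (1.0 - a) * P[2, 3] / (P[2, 2] + 1.0)
    return inside & ~border & ~too_far
```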
[0047] 3. Wound Capping. We reconstruct the skin surface S over the wound in order to enhance wound appearance by subtracting S from D.
(a) Calculate the First Approximation.
[0048] Since the wound boundary is not yet known, we start from the region U. Namely, we solve the following discrete Laplace equation with respect to S:

$$4S(i,j) - S(i-1,j) - S(i+1,j) - S(i,j-1) - S(i,j+1) = 0 \quad \text{if } (i,j) \in U,$$

and

$$S(i,j) = D(i,j) \quad \text{if } (i,j) \in R \setminus U.$$
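A sketch of this capping step, solving the discrete Laplace equation by simple Jacobi iteration; the iteration scheme, sweep count, and stopping tolerance are illustrative choices, since the text specifies only the equation and its boundary condition:

```python
import numpy as np

def cap_surface(D, U, sweeps=2000, tol=1e-6):
    """Step 3(a): harmonic fill of the depth image over the mask U.
    S equals D outside U; inside U each value is repeatedly replaced
    by the average of its 4 neighbors, which converges to the solution
    of 4S(i,j) = S(i-1,j) + S(i+1,j) + S(i,j-1) + S(i,j+1)."""
    S = D.copy()
    interior = U.copy()
    # Border pixels are excluded from U by construction (step 2).
    interior[0, :] = interior[-1, :] = interior[:, 0] = interior[:, -1] = False
    for _ in range(sweeps):
        avg = 0.25 * (np.roll(S, 1, 0) + np.roll(S, -1, 0)
                      + np.roll(S, 1, 1) + np.roll(S, -1, 1))
        change = np.max(np.abs(avg[interior] - S[interior]))
        S[interior] = avg[interior]
        if change < tol:
            break
    return S
```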
(b) Iteratively Raise the Capping if Required.
[0049] There is a possibility that the surface S is situated below the wound boundary. In this case S has to be raised. Let h be the maximum value of S − D. If, for some small tolerance threshold δ > 0, we have h > δ, then we find all pixels (i,j) ∈ U such that

$$S(i,j) - D(i,j) \ge h - \delta.$$

[0050] Assuming that these pixels are mostly (up to the threshold δ) outside the wound, we redefine the region U by excluding these pixels from it. We return to steps (3a) and (3b) with the updated region U. We proceed in this way until h ≤ δ or the maximal allowed number of iterations is reached.
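The raise-the-cap loop of steps 3(a)-(b) might be sketched as follows; the defaults for delta and max_iters are assumptions, as the text says only "small tolerance threshold" and "maximal allowed number of iterations":

```python
def cap_with_raising(D, U, delta=1e-3, max_iters=10):
    """Steps 3(a)+(b): recompute the cap, excluding from U the pixels
    where the cap overshoots the observed surface the most, until the
    overshoot h = max(S - D) drops to the tolerance delta."""
    U = U.copy()
    S = cap_surface(D, U)                  # step 3(a)
    for _ in range(max_iters):
        h = (S - D)[U].max() if U.any() else 0.0
        if h <= delta:
            break
        U &= ~((S - D) >= h - delta)       # step 3(b): shrink region
        S = cap_surface(D, U)              # redo step 3(a)
    return S, U
```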
[0051] 4. Detect a wound. To detect a wound we apply the Chan–Vese algorithm (see T. Chan and L. Vese, "Active contours without edges," IEEE Trans. Image Processing, 10(2):266-277, February 2001) to the difference F = D − S. The Chan–Vese approach is to find, among all 2-valued functions of the form

$$\phi(i,j) = \begin{cases} c_1 & \text{if } (i,j) \in W, \\ c_2 & \text{if } (i,j) \in R \setminus W, \end{cases}$$

the one that minimizes the following energy functional:

$$\mu\,\mathrm{Length}(\partial W) + \nu\,\mathrm{Area}(W) + \lambda_1 \sum_{(i,j)\in W} \left(F(i,j) - c_1\right)^2 + \lambda_2 \sum_{(i,j)\in R\setminus W} \left(F(i,j) - c_2\right)^2,$$

where ∂W denotes the boundary of W, and μ > 0, ν ≥ 0, λ_1 > 0, λ_2 > 0 are fixed parameters.
[0052] Let W, c_1 and c_2 minimize the energy functional. We interpret W as the set of pixels belonging to the wound.
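In practice this minimization could be delegated to an off-the-shelf implementation; below is a sketch using scikit-image's chan_vese, an assumed substitute rather than the implementation the application itself describes:

```python
from skimage.segmentation import chan_vese

def detect_wound(D, S):
    """Step 4: segment the wound as the Chan-Vese foreground of the
    capped-minus-observed difference F = D - S."""
    F = (D - S).astype(float)
    # mu penalizes boundary length; lambda1/lambda2 weigh the inside
    # and outside fit terms of the energy functional.
    W = chan_vese(F, mu=0.25, lambda1=1.0, lambda2=1.0)
    return W  # boolean mask of wound pixels
```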
[0053] 5. Correct wound boundary. The wound boundary ∂W obtained in (4) is not accurate enough. It is located somewhere on the wound walls, but not necessarily on the top of them. We move it to the top as described below.
[0054] Starting from each pixel (i,j) ∈ ∂W, we go in the direction orthogonal to ∂W and select a pixel (p(i,j), q(i,j)) located on the top of the wound wall by searching for the maximum value of the directional second derivative of the depth image D. Our intention is to move the pixels (i,j) to the pixels
[0055] (p(i,j), q(i,j)), but this operation can break the continuity of the wound boundary.
[0056] Denote by dist(i,j,A) the Euclidean distance from the pixel (i,j) to the set of pixels A. Let

$$\Delta(i,j) = \mathrm{dist}(i,j,W) - \mathrm{dist}(i,j,R \setminus W).$$

For any t > 0, the set W_t = {(i,j) ∈ R : Δ(i,j) < t} is a uniform expansion of W with size controlled by t, and W_0 = W. In order to make this kind of expansion more flexible, we replace t with a function T(i,j) which on the one hand has to be close to a constant, and on the other hand has to take values close to dist(p(i,j), q(i,j), W) at the pixels (p(i,j), q(i,j)).
[0057] We find T as the solution of the following optimization problem:

$$\sum_{i=2}^{m}\sum_{j=1}^{n}\left[T(i,j) - T(i-1,j)\right]^2 + \sum_{i=1}^{m}\sum_{j=2}^{n}\left[T(i,j) - T(i,j-1)\right]^2 + \rho \sum_{(i,j)\in\partial W}\left[T(p(i,j),q(i,j)) - \mathrm{dist}(p(i,j),q(i,j),W)\right]^2 \to \min,$$

where ρ > 0 is a constant parameter. Finally, we declare

$$W^* = \{(i,j) \in R : \Delta(i,j) \le T(i,j)\}$$

as the set of the wound pixels.
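The signed-distance expansion can be sketched with SciPy's Euclidean distance transform. The least-squares solve for T is not shown (it would require a sparse linear system); here T is taken as a given field, so this is an assumption-laden illustration of the final thresholding only:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(W):
    """Delta(i,j) = dist(i,j, W) - dist(i,j, R \\ W), built from two
    Euclidean distance transforms. distance_transform_edt gives each
    nonzero pixel its distance to the nearest zero pixel, so feeding
    it ~W yields the distance to the nearest pixel of W."""
    dist_to_W = distance_transform_edt(~W)            # 0 inside W
    dist_to_complement = distance_transform_edt(W)    # 0 outside W
    return dist_to_W - dist_to_complement

def expand_boundary(W, T):
    """Step 5: final wound mask W* = {Delta(i,j) <= T(i,j)}, where T
    is the smooth expansion-size field fitted to the top-of-wall
    correction targets of paragraph [0057]."""
    return signed_distance(W) <= T
```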
Wound Measurements

[0058] Formulas for calculating wound volume, maximal depth, area, perimeter, length and width are set forth below. Note that the last 4 measurements are calculated for the wound projection onto a plane parallel to the camera image plane.
[0059] In order to calculate wound volume we perform capping again as described in (3a), using W* instead of U. Let S* be the result. We clamp it as follows:

$$S^* = \min(S^*, D).$$

Then

$$\mathrm{WoundVolume} = \frac{4}{3\,m\,n\,P(1,1)\,P(2,2)} \sum_{(i,j)\in W^*} \left( D(i,j)^3 - S^*(i,j)^3 \right),$$

$$\mathrm{WoundMaximalDepth} = \max\left\{ D(i,j) - S^*(i,j) : (i,j) \in W^* \right\}.$$

Tracing the wound boundary ∂W*, we write down all pixels belonging to ∂W* as a sequence (i_1, j_1), (i_2, j_2), . . . , (i_N, j_N). Let Q be the inverse matrix of P, and let, for each k = 1, . . . , N,

$$X_k = \frac{Q(1,1)\,x_k + Q(1,4)}{Q(4,3)\,z_k + Q(4,4)}, \qquad Y_k = \frac{Q(2,2)\,y_k + Q(2,4)}{Q(4,3)\,z_k + Q(4,4)},$$

where

$$x_k = \frac{2}{n}\left(j_k - 0.5\right) - 1, \qquad y_k = -\frac{2}{m}\left(i_k - 0.5\right) + 1, \qquad z_k = -P(3,3) + \frac{P(3,4)}{D(i_k, j_k)}.$$

Put, in addition, X_0 = X_N, Y_0 = Y_N and Y_{N+1} = Y_1.
[0060] Then

$$\mathrm{WoundArea} = \left| \sum_{k=1}^{N} X_k \left( Y_{k+1} - Y_{k-1} \right) \right|,$$

$$\mathrm{WoundPerimeter} = \sum_{k=1}^{N} \sqrt{ \left( X_k - X_{k-1} \right)^2 + \left( Y_k - Y_{k-1} \right)^2 }.$$

[0061] Assuming that the human body orientation is defined by an angle θ, wound length and width are given by

$$\mathrm{WoundLength} = \max\{ X_k \cos\theta + Y_k \sin\theta,\ 1 \le k \le N \} - \min\{ X_k \cos\theta + Y_k \sin\theta,\ 1 \le k \le N \},$$

$$\mathrm{WoundWidth} = \max\{ -X_k \sin\theta + Y_k \cos\theta,\ 1 \le k \le N \} - \min\{ -X_k \sin\theta + Y_k \cos\theta,\ 1 \le k \le N \}.$$
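A sketch of these measurement formulas in Python, with 0-based matrix indices; the boundary pixels are assumed to have been traced in order along the contour, which this sketch does not do itself:

```python
import numpy as np

def wound_measurements(D, S_star, W_star, boundary, P, theta=0.0):
    """Formulas of paragraphs [0059]-[0061]. boundary: ordered
    sequence of (i_k, j_k) pixels tracing the wound boundary."""
    m, n = D.shape
    S_star = np.minimum(S_star, D)  # clamp S* = min(S*, D)
    volume = (4.0 / (3.0 * m * n * P[0, 0] * P[1, 1])
              * np.sum(D[W_star] ** 3 - S_star[W_star] ** 3))
    max_depth = np.max(D[W_star] - S_star[W_star])

    Q = np.linalg.inv(P)
    ii, jj = np.asarray(boundary).T
    x = (2.0 / n) * (jj - 0.5) - 1.0
    y = -(2.0 / m) * (ii - 0.5) + 1.0
    z = -P[2, 2] + P[2, 3] / D[ii.astype(int), jj.astype(int)]
    w = Q[3, 2] * z + Q[3, 3]
    X = (Q[0, 0] * x + Q[0, 3]) / w
    Y = (Q[1, 1] * y + Q[1, 3]) / w

    # np.roll implements the cyclic conventions X_0 = X_N, Y_{N+1} = Y_1.
    area = abs(np.sum(X * (np.roll(Y, -1) - np.roll(Y, 1))))
    perimeter = np.sum(np.hypot(X - np.roll(X, 1), Y - np.roll(Y, 1)))
    u = X * np.cos(theta) + Y * np.sin(theta)    # along-body axis
    v = -X * np.sin(theta) + Y * np.cos(theta)   # across-body axis
    return dict(volume=volume, max_depth=max_depth, area=area,
                perimeter=perimeter, length=u.max() - u.min(),
                width=v.max() - v.min())
```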
EXAMPLE 2.2
[0062] Optimal values for the algorithm parameters in Example 2.1 are determined by testing the system on phantom wounds and other forms made from plasticine. For a (the small positive constant of step 2), 0.01 was chosen.
EXAMPLE 2.3
[0063] In this example, when an inventive device was used according to any of Examples 2, 2.1, or 2.2, an image was ready to view within 10 seconds of camera operation.
EXAMPLE 2.4
[0064] In this example, when an inventive device was used according to any of Examples 2, 2.1, 2.2, or 2.3, after a scan was completed, a 3D image was displayed to a user, and the displayed 3D image was subject to being manipulated by a finger of the user.
EXAMPLE 2.5
[0065] In this example according to Example 2.4, a user manipulated a wound image on screen with the user's finger, including looking behind and under the wound image on screen.
EXAMPLE 2.6
[0066] Referring to FIG. 2, in this Example, method steps are
performed of: creating 200 a New Patient record or selecting 201 an
Existing Patient record; presenting 202 a gallery of the patient's
wounds; creating 203 a new Wound record or selecting 204 an
existing Wound record; performing Wound Scan & Measurement 205;
adding 206 the scan to Wound Scans History; presenting 207 Wound
Volume trend line, Wound Measurement Per Scan, and Total Volume
Reduction from first scan.
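The records-keeping workflow of FIG. 2 implies a simple data model; below is a hypothetical sketch of it, whose type and field names are illustrative assumptions, not taken from the application:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class WoundScan:
    """One scan (step 205), appended to the Wound Scans History (206)."""
    taken_at: datetime
    volume: float        # Wound Volume Measurement
    max_depth: float
    area: float

@dataclass
class WoundRecord:
    """A wound (steps 203/204), optionally tagged with location and
    type (step 203A)."""
    location: str = ""
    wound_type: str = ""
    scans: list[WoundScan] = field(default_factory=list)

    def total_volume_reduction(self) -> float:
        """Step 207: volume reduction relative to the first scan."""
        if len(self.scans) < 2:
            return 0.0
        return self.scans[0].volume - self.scans[-1].volume

@dataclass
class PatientRecord:
    """A patient (steps 200/201) with a gallery of wounds (step 202)."""
    name: str
    wounds: list[WoundRecord] = field(default_factory=list)
```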
EXAMPLE 2.6A
[0067] Referring to FIG. 2, optionally, steps of adding 203A wound location and type to the Wound Record, and/or adding/editing 200A patient details in the Patient Record, are performed.
EXAMPLE 2.7
[0068] (Wound Scan & Measurement)
[0069] Referring to FIG. 3, in this Example, method steps are
performed of: Image Acquisition 300 using a 3D depth and 2D camera
module; previewing 301 video images and selecting a wound to
measure; aiming 302 at the center of the wound, from a proper
distance; starting 303 scanning; manipulating 304 the camera around
the wound center; stopping 305 scanning; a step 307, performed by
an operator, of marking a wound contour as a first estimation and
defining wound-body orientation; automatic detection 308 of wound
borders in the 3D depth image; wound capping 309 (comprising
estimating the optimal upper closure (i.e., cap) for the wound);
calculating 310 wound measurement (comprising measuring the volume
beneath the cap, wound circumference, width, length, maximum depth,
and area).
[0070] Steps 303, 304, 305 are referred to as Wound Scan 306
steps.
[0071] Steps 308, 309, 310 are referred to as Wound Detection 311
steps.
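Tying the earlier sketches together, the Wound Detection steps 308-310 might be orchestrated as below. The constant expansion field T stands in for the optimization problem of paragraph [0057], and boundary_pixels is an assumed helper; the scan steps 300-305 are hardware-dependent and omitted:

```python
import numpy as np
from scipy.ndimage import binary_erosion

def boundary_pixels(W):
    """Boundary pixels of mask W. NOTE: np.argwhere returns them
    unordered; a real implementation must order them along the contour
    (e.g., Moore boundary tracing) before the area/perimeter formulas."""
    return np.argwhere(W & ~binary_erosion(W))

def detect_and_measure(Z, P, contour, theta=0.0):
    """Chains the earlier sketches into steps 308-310 of FIG. 3."""
    D = zbuffer_to_depth(Z, P)                # step 1: depth image
    U = region_of_interest(D, P, contour)     # step 2: ROI from contour
    S, U = cap_with_raising(D, U)             # step 3: wound capping 309
    W = detect_wound(D, S) & U                # step 4: detection 308
    T = np.full(D.shape, 2.0)                 # assumed constant T(i,j)
    W_star = expand_boundary(W, T)            # step 5: boundary fix
    S_star, _ = cap_with_raising(D, W_star)   # re-capping over W*
    boundary = boundary_pixels(W_star)
    return wound_measurements(D, S_star, W_star, boundary, P, theta)  # 310
```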
EXAMPLE 2.7A
[0072] Referring to FIG. 3, optionally the operator is allowed to manually correct 308A wound borders.
EXAMPLE 2.7B
[0073] Referring to FIG. 3, optionally real-time wound tracking and
data collection are output in an outputting step 306.
EXAMPLE 2.8B
[0074] Referring to FIG. 3, optionally a 3D model of the wound is
generated in a generating step 306B.
EXAMPLE 2.8C
[0076] Referring to FIG. 3, optionally the 3D model of Example 2.8B
is presented to the operator in a displaying step 311A.
EXAMPLE 2.8
[0077] (Wound Detection)
[0078] Referring to FIG. 4, in this Example, steps are performed
of: a step 401 of rendering the 3D wound model from a perpendicular
camera and generating Z-buffer (using OpenGL); converting 402 the
Z-buffer to depth image; defining 403 a region of interest U for
wound detection; wound capping 404 (comprising reconstructing skin
surface over the wound); rough wound boundaries detection 405; and
refined wound boundaries detection 406.
EXAMPLE 2.9
[0079] (Wound Measurements)
[0080] Referring to FIG. 5, in this Example, steps are performed of: measuring 501 distances from the capping to the wound floor; calculating 502 volume by summing the distances over all pixels inside the wound; calculating 503 maximum depth (the maximum of the distances); summing 504 the perimeter, equaling the total length of the detected wound boundaries; calculating 505 wound area from the detected wound boundaries; calculating 506 max wound length & width by aligning the wound contour to the body angle; and calculating 507 the presented area as Max length × Max width.
[0081] The above-described embodiments are set forth by way of example and are not limiting. It will be readily apparent that obvious modifications, derivations and variations can be made to the embodiments. The claims appended hereto should be read in their full scope, including any such modifications, derivations and variations.
* * * * *