U.S. patent application number 16/986360 was filed with the patent office on 2020-08-06 and published on 2020-11-19 for self-correcting method for annotation of data pool using feedback mechanism.
The applicant listed for this patent is KONINKLIJKE PHILIPS N.V. The invention is credited to Thasneem MOORKAN, Sarif Kumar Naik, Meru Adagouda Patil, Ravindra Balasaheb Patil, Vidya Ravi, and Rithesh SREENIVASAN.
Publication Number | 20200365262 |
Application Number | 16/986360 |
Document ID | / |
Family ID | 1000005063275 |
Publication Date | 2020-11-19 |
United States Patent Application | 20200365262 |
Kind Code | A1 |
SREENIVASAN; Rithesh; et al. | November 19, 2020 |
SELF-CORRECTING METHOD FOR ANNOTATION OF DATA POOL USING FEEDBACK
MECHANISM
Abstract
A method (200) of converting a flowchart (110) to a structured
electronic representation (116) of the flowchart includes:
identifying, in an image (108) of the flowchart, a plurality of
shapes (112) corresponding to flowchart blocks of the flowchart;
identifying arrows (114) defining flow paths between the flowchart
blocks in the image including identifying flowchart blocks
connected by the arrows and their directionality; identifying text
labels and their locations in the image and determining text
content of the text labels; associating the text labels with
flowchart blocks or defined flow paths based on locations of the
text labels, flowchart blocks, and arrows; and generating the
structured electronic representation of the flowchart based on the
flowchart blocks, the flow paths, and the text labels. A structure
of the structured electronic representation is determined based at
least on the flow paths between the flowchart blocks of
the image.
Inventors: | SREENIVASAN; Rithesh; (Bangalore, IN) ; MOORKAN; Thasneem; (Bangalore, IN) ; Patil; Ravindra Balasaheb; (Bangalore, IN) ; Patil; Meru Adagouda; (Bangalore, IN) ; Naik; Sarif Kumar; (Bangalore, IN) ; Ravi; Vidya; (Bangalore, IN) |
Applicant: |
Name | City | State | Country | Type |
KONINKLIJKE PHILIPS N.V. | Eindhoven | | NL | |
Family ID: | 1000005063275 |
Appl. No.: | 16/986360 |
Filed: | August 6, 2020 |
Related U.S. Patent Documents
Application Number | Filing Date | Patent Number |
PCT/EP2019/055526 | Mar 6, 2019 | |
16986360 | | |
62646993 | Mar 23, 2018 | |
Current U.S. Class: | 1/1 |
Current CPC Class: | G16H 40/40 20180101; G06F 40/211 20200101; G16H 30/00 20180101 |
International Class: | G16H 40/40 20060101; G16H 30/00 20060101; G06F 40/211 20060101 |
Claims
1. A monitoring device for generating maintenance alerts, the
monitoring device comprising: a display; an electronic processor
operatively connected with the display; and a non-transitory
storage medium storing instructions readable and executable by the
electronic processor to perform a monitoring method for generating
maintenance alerts, the monitoring method comprising: generating
decision rules for classifying medical imaging device log data
based on whether parameter values of operating parameters of
medical imaging device components identified by component
identifications (component IDs) in the log data are within
operating parameter ranges for the respective operating parameters,
wherein the operating parameter ranges for the respective operating
parameters are extracted from electronic medical imaging device
manuals; applying the decision rules to log data generated by a
monitored medical imaging device to detect out-of-range log data
generated by the monitored medical imaging device; and controlling
the display to present maintenance alerts in response to the
detected out-of-range log data, wherein the maintenance alerts are
associated with component IDs contained in the detected
out-of-range log data.
2. The monitoring device of claim 1, wherein the monitoring method
further comprises: determining accuracy of the presented
maintenance alerts based on feedback extracted from a service log
of the monitored medical imaging device; and updating the decision
rules based on the determined accuracy of the presented maintenance
alerts.
3. The monitoring device of claim 2, wherein accuracy of a
presented maintenance alert is determined based on feedback
extracted from the service log of the monitored medical imaging
device over a time interval extending from a time of presentation
of the maintenance alert to an end time dependent upon a mean time
to failure (MTTF) of the medical imaging device component
identified by the component ID associated with the maintenance
alert.
4. The monitoring device of claim 2, wherein the generating of the
decision rules includes identifying component IDs in the medical
imaging device manuals using a named entity recognition (NER)
algorithm.
5. The monitoring device of claim 4, wherein the generating of the
decision rules further includes: parsing text and tables of the
electronic medical imaging device manuals to delineate semantic
units including at least sentences, list items, and table rows;
identifying operating parameters by identifying numeric values and
parameter terms that are connected by linking terms or symbols
indicative of equality or inequality and associating operating
parameters identified by the parameter terms with medical imaging
device components whose component IDs occur in the same semantic
units as the parameter terms; and determining operating parameter
ranges associated with the identified operating parameters based on
the numeric values and the equalities or inequalities indicated by
the linking terms or symbols.
6. The monitoring device of claim 1, wherein the presented
maintenance alerts are presented as maintenance
recommendations.
7. A non-transitory computer readable medium storing instructions
executable by at least one electronic processor to perform a method
of converting a flowchart to a structured electronic representation
of the flowchart, the method comprising: identifying, in an image
of the flowchart, a plurality of shapes corresponding to flowchart
blocks of the flowchart in the image; identifying arrows defining
flow paths between the flowchart blocks in the image including
identifying flowchart blocks connected by the arrows and
identifying directionality of the defined flow paths based on
arrowheads of the arrows; identifying text labels and their
locations in the image and performing optical character recognition
(OCR) on the image of the flowchart to determine text content of
the identified text labels; associating the text labels with
flowchart blocks or defined flow paths based on locations of the
text labels, flowchart blocks, and arrows in the image; and
generating the structured electronic representation of the
flowchart based on the flowchart blocks, the flow paths, and the
text labels, wherein a structure of the structured electronic
representation is determined based at least on the flow
paths between the flowchart blocks of the image.
8. The non-transitory computer readable medium of claim 7, wherein
the generating of the structured electronic representation
includes: determining shape-indicated functions of the flowchart
blocks by comparing the identified shapes with standard flow chart
shapes representing corresponding functions; wherein the structure
of the structured electronic representation is further determined
by assigning functions to the flowchart blocks based at least on
the determined shape-indicated functions.
9. The non-transitory computer readable medium of claim 7, wherein
the structure of the structured electronic representation is
further determined by: assigning functions to the flowchart blocks
based at least on the text labels associated to the flowchart
blocks and/or associated to the defined flow paths.
10. The non-transitory computer readable medium of claim 7, wherein
the method further includes: displaying the structured electronic
representation on a display device; receiving one or more inputs
from a user, via a user input device, indicative of editing a
portion of the structured electronic representation; and updating
the structured electronic representation based on the received one
or more inputs.
11. The non-transitory computer readable medium of claim 7, wherein
the image of the flowchart includes a plurality of images each
depicting a corresponding portion of the flowchart, and the method
further includes: repeating the identifying of the shapes, the
identifying of the arrows, and generating the representation for
each image of the plurality of images of the flowchart.
12. The non-transitory computer readable medium of claim 7, wherein
the generating of the structured electronic representation further
includes generating a directed graph representing the
structure.
13. The non-transitory computer readable medium of claim 12,
wherein the generating of the structured electronic representation
further includes: converting the directed graph to a structured
dialog flow representation.
14. The non-transitory computer readable medium of claim 13,
wherein the structured dialog flow representation comprises a
JavaScript Object Notation (JSON) representation.
15. The non-transitory computer readable medium of claim 7, further
comprising providing a user interface that guides a user through
the structured dialog flow representation by: presenting a current
flowchart block of the structured electronic representation via a
user interfacing device; receiving an input from the user via the
user interfacing device; updating the current flowchart block based
on the structure of the structured electronic representation and
the received input from the user; and repeating the presenting,
receiving, and updating to guide the user through the
flowchart.
16. The non-transitory computer readable medium of claim 15,
wherein the user interfacing device is a chatbot.
17. A non-transitory computer readable medium storing instructions
executable by at least one electronic processor to perform a method
of converting flowcharts to structured electronic representations,
the method comprising: identifying, in an image of a flowchart, a
plurality of shapes corresponding to flowchart blocks of the
flowcharts in the image; identifying a location and directionality
of a plurality of arrows defining flow paths between the flowchart
blocks in the image; generating a directed graph of the flowchart
blocks; and converting the directed graph to a structured dialog
flow representation.
18. The non-transitory computer readable medium of claim 17,
wherein the structured dialog flow representation comprises a
JavaScript Object Notation (JSON) representation.
19. The non-transitory computer readable medium of claim 17,
further comprising providing a user interface that guides a user
through the structured dialog flow representation by: presenting a
current flowchart block of the structured electronic representation
via a chatbot; receiving an input from the user via the chatbot;
updating the current flowchart block based on the structure of the
structured electronic representation and the received input from
the user; and repeating the presenting, receiving, and updating to
guide the user through the flowchart.
20. The non-transitory computer readable medium of claim 17,
wherein the generating of the structured electronic representation
includes at least one of: determining shape-indicated functions of the
flowchart blocks by comparing the identified shapes with standard
flow chart shapes representing corresponding functions, wherein the
structure of the structured electronic representation is further
determined by assigning functions to the flowchart blocks based at
least on the determined shape-indicated functions; and assigning
functions to the flowchart blocks based at least on the text labels
associated to the flowchart blocks and/or associated to the defined
flow paths.
Description
PRIORITY
[0001] This application is a continuation-in-part of International
Application No. PCT/EP2019/055526, filed Mar. 6, 2019, which claims
priority to U.S. Application Ser. No. 62/646,993, filed Mar. 23,
2018, both of which are incorporated herein by reference in their
entireties.
FIELD
[0002] The following relates generally to the medical imaging
device maintenance arts, device monitoring arts, predictive
maintenance arts, process flow automation arts, and related
arts.
BACKGROUND
[0003] Medical devices used in the healthcare industry, such as
magnetic resonance imaging (MRI) scanners, computed tomography (CT)
scanners, positron emission tomography (PET) scanners, gamma
cameras used in single photon emission computed tomography (SPECT),
image-guided therapy (iGT) systems, and other medical imaging
devices, or electrocardiograph (ECG) or patient monitor devices,
and so forth, should be in good working condition to ensure doctors
and patients receive correct information for medical diagnoses and
patient monitoring and so forth. Medical imaging devices are
expensive to replace and play a crucial role in diagnosis. Any
downtime of these devices results in a loss of revenue to the
medical institution, loss of quality treatment for patients, and
introduces delays into patient treatment. Emphasis is thus placed
on minimizing the downtime of medical imaging devices and ensuring
uninterrupted operational status while maintaining quality of
performance.
[0004] Predictive maintenance is an important part of minimizing
downtime of medical imaging devices. This approach entails
predicting and proactively repairing or otherwise remediating
possible failures of medical imaging device components in advance
based on the machine logs and usage history. In this way,
maintenance can be proactively performed to minimize or eliminate
downtime and impact on patient care.
[0005] However, predictive maintenance is difficult to implement in
practice. Modern medical imaging devices produce huge volumes of
log data, on the order of gigabytes, terabytes, or more.
Furthermore, proactively identifying log data that statistically
indicates a likely component failure is difficult due to the time
lag between the log data and the subsequent component failure.
Still further, determining which log data is useful in predicting a
component failure is difficult. Conventionally, prior information
has been used to identify these types of bad log data indicative of
a likely component failure. The prior information is typically
provided by subject matter experts with specialized technical
knowledge of the medical imaging device components, their
performance envelopes, and the possible problems. Generation of the
knowledge engine for performing predictive maintenance is typically
a laborious process requiring input of subject matter experts
knowledgeable in the medical imaging devices and their components
to develop decision rules that can be applied to log data of a
monitored medical imaging device.
[0006] A further problem is that the resulting predictive
maintenance system is static. This is a problem because medical
imaging device manufacturers are continually adding new products
and improving existing product lines. These changes will not be
reflected in the knowledge engine used for predictive maintenance.
Consequently, manual updating of the knowledge engine must be
performed on a frequent basis, which again requires extensive input
from subject matter experts.
[0007] Moreover, field service engineers (FSEs) frequently have to
provide service to their customers by repairing their imaging
systems. During the process of inspecting faulty imaging systems,
FSEs can refer to printed manuals or e-books, which have fault
isolation process flows. FSEs need to carry these manuals, then
search for the appropriate content to follow the process flows in
the form of a flow chart. FSEs follow the steps of the flow chart
to perform fault isolation of the imaging system. Some of these
flow charts can be complex and long, running across multiple pages
with many steps. It can be easy to lose track of the steps in the
flow chart.
[0008] The imaging system printed manuals can be in text or PDF
form. These process flow diagrams are present as images within
PDF documents. While the image format may allow for the FSE to zoom
in on specific portions of the flow chart, difficulties can still
arise, for example if the portion of the flow chart being traversed
crosses onto another image/page.
[0009] The following discloses certain improvements.
SUMMARY
[0010] In some embodiments disclosed herein, a non-transitory
storage medium stores instructions readable and executable by an
electronic processor to perform a monitoring method for generating
maintenance alerts. The monitoring method includes: extracting
component identifications (component IDs) identifying medical
imaging device components and operating parameters of the
identified medical imaging device components and associated
operating parameter ranges from electronic medical imaging device
manuals; generating a knowledge engine by operations including
formulating the operating parameter ranges into a set of decision
rules for classifying medical imaging device log data; and applying
the knowledge engine to log data generated by a monitored medical
imaging device to detect out-of-range log data generated by the
monitored medical imaging device and to generate maintenance alerts
in response to the detected out-of-range log data wherein the
maintenance alerts are associated with component IDs contained in
the detected out-of-range log data.
[0011] In some embodiments disclosed herein, a monitoring device is
disclosed for generating maintenance alerts. The monitoring device
comprises a display, a non-transitory storage medium as set forth
in the immediately preceding paragraph, and an electronic processor
operatively connected with the display and with the non-transitory
storage medium to perform the monitoring method further including
displaying the generated maintenance alerts on the display.
[0012] In some embodiments disclosed herein, a monitoring method
performed by an electronic processor is disclosed for generating
maintenance alerts. Component identifications (component IDs) are
extracted which identify medical imaging device components in
electronic medical imaging device manuals. Operating parameters of
the identified medical imaging device components and associated
operating parameter ranges are extracted from the electronic
medical imaging device manuals based on numeric values, parameter
terms identifying operating parameters, and linking terms or
symbols indicative of equality or inequality that connect the
numeric values and parameter terms. The operating parameter ranges
are formulated into decision rules for classifying medical imaging
device log data based on whether a value of the associated
operating parameter is outside of the operating parameter range.
The decision rules are applied to log data generated by a monitored
medical imaging device to detect out-of-range log data generated by
the monitored medical imaging device, and maintenance alerts are
displayed on a display in response to the detected out-of-range log
data. The maintenance alerts are generated from out-of-range log
data and are associated with component IDs contained in the
out-of-range log data.
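The range-extraction step described above can be sketched as a simple pattern match over manual text. This is only a minimal illustration: the regular expression, the parameter phrase, and the sample sentence are invented here and are not the extraction method specified by the disclosure.

```python
import re

# Hedged sketch: find (parameter term, linking term or symbol, numeric value)
# triples in a sentence. Pattern and examples are illustrative assumptions.
PATTERN = re.compile(
    r"(?P<param>[A-Za-z ]+?)\s*"
    r"(?P<rel><=|>=|<|>|=|below|above|at most|at least)\s*"
    r"(?P<value>\d+(?:\.\d+)?)"
)

def extract_ranges(sentence):
    """Return (parameter, relation, numeric value) triples found in a sentence."""
    return [(m.group("param").strip(), m.group("rel"), float(m.group("value")))
            for m in PATTERN.finditer(sentence)]
```

In practice the linking-term vocabulary and parameter-term recognition would be far richer (e.g., backed by named entity recognition, as claim 4 suggests); the regex merely shows the shape of the mapping from text to machine-usable range triples.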
[0013] In some embodiments disclosed herein, a monitoring device is
disclosed for generating maintenance alerts. The monitoring device
includes a display, an electronic processor operatively connected
with the display, and a non-transitory storage medium storing
instructions readable and executable by the electronic processor to
perform a monitoring method for generating maintenance alerts. In
the monitoring method, decision rules are generated for classifying
medical imaging device log data based on whether parameter values
of operating parameters of medical imaging device components
identified by component identifications (component IDs) in the log
data are within operating parameter ranges for the respective
operating parameters. The operating parameter ranges for the
respective operating parameters are extracted from electronic
medical imaging device manuals. The decision rules are applied to
log data generated by a monitored medical imaging device to detect
out-of-range log data generated by the monitored medical imaging
device. The display is controlled to present maintenance alerts in
response to the detected out-of-range log data wherein the
maintenance alerts are associated with component IDs contained in
the detected out-of-range log data.
[0014] In some embodiments disclosed herein, a non-transitory
computer readable medium stores instructions executable by at least
one electronic processor to perform a method of converting a
flowchart to a structured electronic representation of the
flowchart. The method includes: identifying, in an image of the
flowchart, a plurality of shapes corresponding to flowchart blocks
of the flowchart in the image; identifying arrows defining flow
paths between the flowchart blocks in the image including
identifying flowchart blocks connected by the arrows and
identifying directionality of the defined flow paths based on
arrowheads of the arrows; identifying text labels and their
locations in the image and performing optical character recognition
(OCR) on the image of the flowchart to determine text content of
the identified text labels; associating the text labels with
flowchart blocks or defined flow paths based on locations of the
text labels, flowchart blocks, and arrows in the image; and
generating the structured electronic representation of the
flowchart based on the flowchart blocks, the flow paths, and the
text labels. A structure of the structured electronic
representation is determined based at least on the flow
paths between the flowchart blocks of the image.
[0015] In some embodiments disclosed herein, a non-transitory
computer readable medium stores instructions executable by at least
one electronic processor to perform a method of converting
flowcharts to structured electronic representations. The method
includes: identifying, in an image of a flowchart, a plurality of
shapes corresponding to flowchart blocks of the flowcharts in the
image; identifying a location and directionality of a plurality of
arrows defining flow paths between the flowchart blocks in the
image; generating a directed graph of the flowchart blocks; and
converting the directed graph to a structured dialog flow
representation.
[0016] One advantage resides in providing a monitoring device for
generating maintenance alerts for one or more monitored medical
imaging devices, in which the knowledge engine of the monitoring
device is generated automatically.
[0017] Another advantage resides in providing such a monitoring
device in which the knowledge base is continuously and efficiently
updated based on service logs of the monitored medical imaging
devices.
[0018] Another advantage resides in providing such a monitoring
device with computationally efficient generation and updating of
the knowledge base.
[0019] Another advantage resides in providing such a monitoring
device in which the knowledge base is developed without input from
subject matter experts.
[0020] Another advantage resides in digitizing paper manuals for
imaging systems.
[0021] Another advantage resides in reducing errors or loss of
process steps when following a process flow chart for servicing an
imaging system.
[0022] Another advantage resides in providing an interactive
electronic flow chart in which a chatbot guides the user through
the steps of the flow chart based on feedback provided to the
chatbot by the user.
[0023] Another advantage resides in providing digitized
instructions for an FSE to provide service to an imaging
system.
[0024] Other advantages include reduced total time to build
predictive models, elimination of subjectivity in labelling hence
reducing manual error, and auto feedback for re-learning and
self-correcting decision rules used for generating maintenance
alerts.
[0025] A given embodiment may provide none, one, two, more, or all
of the foregoing advantages, and/or may provide other advantages as
will become apparent to one of ordinary skill in the art upon
reading and understanding the present disclosure.
BRIEF DESCRIPTION OF THE DRAWINGS
[0026] The invention may take form in various components and
arrangements of components, and in various steps and arrangements
of steps. The drawings are only for purposes of illustrating the
preferred embodiments and are not to be construed as limiting the
invention.
[0027] FIG. 1 diagrammatically illustrates a monitoring device for
performing predictive maintenance including generating maintenance
alerts for a monitored medical imaging device based on log data
generated by the monitored medical imaging device, in which
decision rules for generating the maintenance alerts are
automatically generated.
[0028] FIG. 2 diagrammatically illustrates a portion of a medical
imaging device manual.
[0029] FIG. 3 diagrammatically illustrates a table of components
and key metrics derived from the manual portion shown in FIG.
2.
[0030] FIGS. 4, 5, and 6 diagrammatically illustrate aspects of
generating the decision rules from medical imaging device manual
content as described herein.
[0031] FIGS. 7 and 8 diagrammatically illustrate aspects of mapping
of medical imaging device log data to decision rules of the
knowledge engine as described herein.
[0032] FIG. 9 diagrammatically illustrates a method suitably
performed by the monitoring device of FIG. 1 to construct decision
rules of the knowledge engine.
[0033] FIGS. 10 and 11 diagrammatically illustrate aspects of the
reader module component of the method of FIG. 9.
[0034] FIG. 12 illustrates a system for converting a digital image
of a flowchart to a structured electronic representation of the
flowchart.
[0035] FIG. 13 illustrates an exemplary digital image of a
flowchart for conversion by the system of FIG. 12.
[0036] FIG. 14 illustrates a directed graph produced from the
flowchart image of FIG. 13.
[0037] FIG. 15 illustrates an exemplary flow chart implemented by
the system of FIG. 12.
DETAILED DESCRIPTION
[0038] In embodiments disclosed herein, electronic medical imaging
device manuals are leveraged to extract component identifications
(component IDs) identifying medical imaging device components and
operating parameters of the identified medical imaging device
components and associated operating parameter ranges. The
electronic medical imaging device manuals can take substantially
any machine-readable form, and may be online manuals, printed
manuals that are scanned to PDF (or another electronic format) and
processed by optical character recognition (OCR) to generate
machine-readable text, and/or so forth. The electronic medical
imaging device manuals may, for example, include one or more of a
service manual, manufacturer specification(s), a user manual, an
operating reference manual, and/or so forth. A knowledge engine is
then generated by operations including formulating the operating
parameter ranges into a set of decision rules for classifying
medical imaging device log data. For example, a decision rule may
classify input log data as bad (i.e. out-of-range) log data if the
input log data indicates a value for an operating parameter that is
outside of the operating parameter range associated with that
operating parameter. The knowledge engine is then applied to log
data generated by a monitored medical imaging device to detect
out-of-range log data generated by the monitored medical imaging
device and to generate maintenance alerts in response to the
detected out-of-range log data. The maintenance alerts are suitably
associated with component IDs contained in the detected
out-of-range log data, and may in some embodiments be formulated as
maintenance recommendations. For example, if the temperature of a
component is indicated as being outside of the operating
temperature range for that component, then the alert may be
formulated as "<Component ID> operating temperature may be
above its recommended operating temperature. Recommend to check
temperature of <Component ID>."
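As a concrete illustration of the decision-rule classification just described, the following sketch checks a log entry against a table of operating ranges. The component ID, parameter names, and range values are invented for the example; they do not come from the disclosure.

```python
# Hypothetical rule table: (component ID, parameter) -> (min, max) range.
RULES = {
    ("XT-100", "temperature"): (10.0, 45.0),
    ("XT-100", "voltage"): (11.5, 12.5),
}

def check_log_entry(entry):
    """Return a maintenance alert string if the entry is out of range, else None."""
    key = (entry["component_id"], entry["parameter"])
    if key not in RULES:
        return None  # no rule covers this component/parameter pair
    lo, hi = RULES[key]
    if not (lo <= entry["value"] <= hi):
        return (f"{entry['component_id']} {entry['parameter']} "
                f"{entry['value']} outside range [{lo}, {hi}]; recommend inspection.")
    return None
```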
[0039] A further aspect is that accuracy of the generated
maintenance alerts may be determined based on feedback extracted
from a service log of the monitored medical imaging device, and the
knowledge engine may be updated by adjusting the decision rules
based on the determined accuracy of the generated maintenance
alerts. For example, accuracy of a generated maintenance alert may
be determined based on feedback extracted from the service log over
a time interval extending from a time of generation of the
maintenance alert to an end time which is dependent upon (and may
be equal to) the mean time to failure (MTTF) of the medical imaging
device component identified by the component ID associated with the
maintenance alert.
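The feedback criterion above can be sketched as follows: an alert counts as accurate if the service log records work on the same component within the MTTF-derived window after the alert. The function name, log schema, and data are illustrative assumptions.

```python
from datetime import datetime, timedelta

def alert_was_accurate(alert_time, component_id, service_log, mttf_days):
    """True if the service log shows a record for the same component within
    the window [alert_time, alert_time + mttf_days] (illustrative criterion)."""
    window_end = alert_time + timedelta(days=mttf_days)
    return any(rec["component_id"] == component_id
               and alert_time <= rec["time"] <= window_end
               for rec in service_log)

# Invented service-log sample for the usage example below.
log = [{"component_id": "XT-100", "time": datetime(2020, 3, 10)}]
```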
[0040] By the disclosed approaches, the need for manual development
and curation of a set of decision rules for performing predictive
maintenance is reduced or eliminated.
[0041] In addition, improvements are disclosed for the diagnostic
phase, in which a field service engineer (FSE) works through a
fault isolation flowchart to identify a root cause and/or solution
to the problem underlying the maintenance alert. The FSE
conventionally consults a service manual when servicing diagnostic
equipment. Of particular value are fault isolation flowcharts which
provide step-by-step guidance for performing tests and inspections
in order to determine a root cause or solution of a problem. Such
manuals can cover existing and legacy diagnostic equipment. These
are presented in paper form, or in non-machine readable image (e.g.
JPEG) or PDF format, e.g., as images embedded in a PDF, ePub, or
other electronic book (ebook) or digital document format.
[0042] In some embodiments disclosed herein, an approach to convert
such paper flowcharts to a structured electronic form is provided.
To this end, an image (e.g., JPEG, BMP, PNG, or so forth) of the
flowchart serves as input. Shapes with corresponding location
anchors in the image are identified, which correspond to flowchart
blocks. Optical character recognition (OCR) is applied to extract
text with corresponding location anchors in the image. Connecting
arrows are identified and the directionality of the arrows (e.g.,
based on the direction the arrowhead is pointing) is used to define
flow paths between the blocks, and a directed graph representation
of the blocks is generated.
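The final graph-building step of this pipeline can be sketched as below, assuming the upstream shape and arrow detection has already produced block IDs and directed (tail, head) arrow pairs; those detection steps themselves are not shown.

```python
# Minimal sketch: assemble a directed graph (adjacency lists) from detected
# flowchart blocks and directed arrows. Block names are invented.
def build_directed_graph(blocks, arrows):
    """blocks: iterable of block IDs; arrows: (from_block, to_block) pairs."""
    graph = {b: [] for b in blocks}
    for tail, head in arrows:
        graph[tail].append(head)
    return graph

g = build_directed_graph(["start", "test", "end"],
                         [("start", "test"), ("test", "end")])
```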
[0043] Based on comparison with standard flowchart symbols (e.g.,
box for a general operation, oval for "begin" or "end", diamond for
decisions, et cetera), a function of each block may be determined.
Additionally or alternatively, OCR'd text anchored inside the block
or near a flow arrow can be processed by keyword detection, natural
language processing (NLP), intent detection, or the like in order
to extract the function information. This enables semantic labeling
of the blocks and transitions of the directed graph. Optionally,
the directed graph may be displayed at this point for review and
editing (if appropriate) by a domain expert.
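A minimal keyword-detection pass of the kind mentioned above might look like this. The keyword lists and labels are invented for illustration; a real implementation could instead use NLP or intent detection as the paragraph notes.

```python
# Hypothetical cue lists mapping block text to a flowchart function label.
KEYWORDS = {
    "decision": ("?", "is ", "check whether"),
    "terminal": ("start", "begin", "end", "stop"),
}

def label_block(text):
    """Assign a function label to a block based on simple keyword cues."""
    t = text.lower()
    for function, cues in KEYWORDS.items():
        if any(cue in t for cue in cues):
            return function
    return "process"  # default: a general operation block
```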
[0044] Next, the directed graph is converted to a structured dialog
flow representation. In some examples, JavaScript Object Notation
(JSON) can be employed for the dialog flow representation, although
other structured formats such as extensible markup language (XML)
might be suitable.
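The conversion to a JSON dialog flow can be sketched as follows. The schema used here (a list of nodes with "id", "text", and "next" fields) is an assumed example format, not one specified by the disclosure.

```python
import json

def graph_to_dialog_flow(graph, texts):
    """Serialize a labeled directed graph as a JSON dialog-flow string.
    graph: {block_id: [successor ids]}; texts: {block_id: block text}."""
    nodes = [{"id": node, "text": texts.get(node, ""), "next": successors}
             for node, successors in graph.items()]
    return json.dumps({"nodes": nodes}, indent=2)

flow_json = graph_to_dialog_flow({"start": ["test"], "test": []},
                                 {"start": "Begin", "test": "Is the lamp lit?"})
```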
[0045] The foregoing operations are typically performed offline via
a server computer or the like to create the directed graph and JSON
representations. These (or at least the JSON representation) are
then made accessible to FSEs for use during service calls. This can
be done by downloading an application program ("app") to a tablet
computer, cellphone, notebook computer, or other mobile device used
by the FSE, in which the app includes a database of converted fault
isolation flowcharts. Alternatively, the FSE can access the JSON
(and optionally the directed graph) representation over the
Internet via a website hosted at a remote server using a web
browser or dedicated app. To reduce storage requirements at the
mobile device, the app may provide the FSE with the option to
download only selected flowcharts.
[0046] In a preferred embodiment, the JSON representation is used
to drive a chatbot that guides the FSE through the diagnostic
process. Hence, the chatbot presents the text of each box on-screen
(similar to an instant messaging chatbot), and/or using
text-to-speech, and/or using a graphical user interface (GUI)
chatbot avatar. After each step, the FSE inputs any information
needed to proceed (e.g., answer as to whether a performed test was
positive or negative), and the chatbot proceeds accordingly through
the JSON representation of the directed graph representing the
fault isolation flowchart.
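The chatbot traversal described above can be sketched as a simple loop over such a graph. This is a minimal illustration, assuming the hypothetical node format from a converted flowchart; FSE answers are supplied as a list here rather than interactively.

```python
# Minimal sketch of a chatbot driving an FSE through a dialog flow
# graph: present each block's text, branch on the FSE's answer at
# decision blocks, and stop at an end block. Node fields are assumed.
def run_chatbot(flow, answers):
    transcript = []
    node_id = flow["start"]
    answers = iter(answers)
    while True:
        node = flow["nodes"][node_id]
        transcript.append(node["text"])          # present the text of each box
        if node["type"] == "end":
            return transcript
        if node["type"] == "decision":
            node_id = node["next"][next(answers)]  # branch on FSE input
        else:
            node_id = node["next"]["default"]      # linear step

flow = {
    "start": "test",
    "nodes": {
        "test": {"type": "decision", "text": "Test positive?",
                 "next": {"yes": "fix", "no": "end"}},
        "fix": {"type": "operation", "text": "Swap board.",
                "next": {"default": "end"}},
        "end": {"type": "end", "text": "Done."},
    },
}
```

In a deployed app the transcript entries would be rendered on-screen or via text-to-speech, and the answers would come from the FSE's inputs.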
[0047] With reference to FIG. 1, a monitoring device is disclosed
for generating maintenance alerts for a medical imaging device 2.
The illustrative medical imaging device 2 is a PET/CT scanner
including a positron emission tomography (PET) gantry 4 and a
transmission computed tomography (CT) gantry 6 positioned in-line
such that a patient can be loaded into either gantry 4, 6 via a
common robotic patient support or couch 8. Such a PET/CT scanner 2
is commonly used in numerous diagnostic and clinical tasks (e.g.
oncology, cardiology, neurology, and so forth) as the CT imaging
performed by the CT gantry 6 provides anatomical information that
is complementary to functional information provided by PET imaging
performed by the PET gantry 4. As just one illustration of this
synergy, a CT image can be used to generate an attenuation map that
is used to perform attenuation correction during PET image
reconstruction. The PET/CT scanner 2 is a highly complex medical
imaging device with thousands of components, e.g. as a very few
non-limiting examples including dozens or hundreds of PET detector
modules, an X-ray tube and dozens or hundreds of X-ray detector
modules (components of the CT gantry 6), mechanical components for
revolving the X-ray tube and X-ray detector array around the
patient, numerous robotic mechanisms of the patient support or
couch 8, electrical power components such as a distribution
transformer for delivering electrical power to the PET/CT 2, and so
forth (note, detailed medical imaging device components are not
illustrated in FIG. 1). Each component typically has one, two,
three, or more operating parameters that are monitored by
appropriate sensors. For example, each PET detector module may
include a temperature sensor that outputs module operating
temperature, electrical sensors that monitor dark current and/or
other electrical operating parameters, and/or so forth. A robotic
component of the couch 8 may include position encoders that monitor
positioning of a robotic actuator. The CT gantry may include
sensors monitoring rotation speed and encoding rotational angular
position. The distribution transformer may include sensors
monitoring operating power factor, input and output voltage, and so
forth. Again, these are merely a few non-limiting illustrative
examples. While the illustrative monitored medical imaging device 2
is a PET/CT scanner, it will be appreciated that the monitored
medical imaging device may more generally be any type of medical
imaging device with sufficient complexity to justify continuous
monitoring and generation of timely maintenance alerts as disclosed
herein. For example, the monitored medical imaging device may be a
PET/CT scanner, a SPECT/CT scanner, a standalone gamma camera,
standalone PET scanner, standalone CT scanner, a magnetic
resonance (MR) imaging scanner, a PET/MR scanner, an image-guided
therapy (iGT) device, or so forth. Moreover, it will be appreciated
that the disclosed predictive maintenance monitoring devices and
methods may be applied to monitor many such medical imaging
devices, possibly of different types, in parallel.
[0048] With continuing reference to FIG. 1, the illustrative
monitoring device includes a server computer 10 or other electronic
processor (e.g. desktop computer with sufficient computing
capacity) operatively connected with a non-transitory storage
medium 12 that stores instructions that are readable and executable
by the electronic processor 10 to perform a monitoring method as
disclosed herein, and a monitor interface device 14 which, in the
illustrative embodiment, is embodied by a computer 16 that serves
as a medical imaging device controller 14 for controlling the
monitored medical imaging device 2 (e.g., to operate the robotic
support 8 to load a patient into the appropriate gantry, to select
and execute an imaging sequence or protocol, to perform
reconstruction of the acquired imaging data to generate an image
and to display the image on a display 18 of the computer, and/or so
forth). In other embodiments, the monitor interface device may be a
separate computer, or may be integral with the computer 10 that
performs the monitoring method. The monitor interface device
includes the display 18 for presenting maintenance alerts or other
information, and may optionally include one or more user input
devices (e.g. illustrative keyboard 19) for receiving user
inputs.
[0049] The non-transitory storage medium 12 may be variously
embodied, e.g. as a hard disk drive, RAID array, or other magnetic
storage medium, a solid state drive (SSD) or other electronic
storage medium, an optical disk or other optical storage medium,
various combinations thereof, and/or so forth. Further, it will be
appreciated that the illustrative electronic processors 10, 16 may
be otherwise variously embodied and/or combined, and/or the various
non-transitory storage media may be variously embodied, e.g. by way
of linkages via electronic data networks or the like. For example,
the electronic processor 10 may be implemented as a cloud computing
resource comprising an ad hoc combination of a number of server
computers.
[0050] The electronic processor 10 reads and executes instructions
stored on the non-transitory storage medium 12 in order to perform
a monitoring method including implementing a knowledge engine
builder 20 that builds, and optionally subsequently adaptively
maintains or updates, a knowledge engine 22 comprising a set of
decision rules for generating maintenance alerts 24. To this end,
the knowledge engine builder 20 leverages inputs from available
electronic medical imaging device operating data. A commonly
available form of electronic medical imaging device servicing and
operating data is electronic manuals, such as an illustrative
service manual 25, an illustrative set of manufacturer's
specifications 26, an illustrative user manual 27, and an
illustrative operating reference manual 28. These are merely
illustrative titles, and it will be appreciated that various
medical imaging device manufacturers and users (e.g. hospitals) may
employ different titling for the electronic medical imaging device
manuals. Likewise, the manual content may be variously distributed
amongst one or more such manuals, e.g. in some implementations a
single manual may cover the combined content of the illustrative
user manual 27 and operating reference manual 28, and/or the
service manual may include the manufacturer's specifications as one
or more appendices of the service manual, rather than as a separate
manufacturer's specifications document, and/or so forth. The
electronic medical imaging device servicing and operating data may
also include service notes compiled into electronic form (e.g.
electronic service logs maintained by service engineers), updates
to the manuals (e.g. base system parameter updates, or upgraded
operating parameters due to system/software upgrades, and/or so
forth), external data such as online servicing and operating data
available at the medical imaging device vendor's website, and/or so
forth. The various electronic medical imaging device servicing and
operating data 25, 26, 27, 28 are preferably specific to the make
and model of the medical imaging device 2 to be monitored, although
such a requirement may be relaxed in instances in which different
makes and/or models of a particular type of medical imaging device
share certain systems or sub-systems.
[0051] The knowledge engine builder 20 processes the content of the
electronic medical imaging device servicing and operating data 25,
26, 27, 28 to extract component identifications (component IDs)
identifying medical imaging device components, and to extract
operating parameters of the identified components, and to extract
operating parameter ranges associated with the respective operating
parameters. To this end, the manuals are assumed to be in
electronic form with machine readable text. (If this is not
initially the case, then optical scanning, photocopying,
photography, or the like can be employed to generate digital images
of the manual pages, followed by OCR, to convert a paper manual into
electronic format with machine-readable text). Natural language
processing (NLP) 30 is performed on text of the electronic medical
imaging device servicing and operating data 25, 26, 27, 28 to
tokenize the text into individual tokens (e.g. words), remove
uninformative common words (e.g. "the", "a", et cetera), perform
word stemming and lemmatization, and/or so forth. The NLP 30 may
include parsing of text and tables of the electronic medical
imaging device manuals to delineate semantic units including
sentences, paragraphs, list items, table rows, or so forth.
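The tokenization and stop-word removal of the NLP 30 can be sketched as follows. This is a toy stand-in under stated assumptions: the stop-word list is a small illustrative set (a real pipeline would use a full list plus stemming/lemmatization, which are omitted here), and the tokenizer is a simple regular expression.

```python
import re

# Illustrative stop-word list; a real NLP pipeline would use a much
# larger list and add stemming/lemmatization as the text describes.
STOPWORDS = {"the", "a", "an", "be", "at", "should", "kept"}

def preprocess(sentence):
    """Tokenize into words and numbers, lowercase, and drop
    uninformative common words (a sketch of the NLP 30 step)."""
    tokens = re.findall(r"[A-Za-z]+|\d+(?:\.\d+)?", sentence.lower())
    return [t for t in tokens if t not in STOPWORDS]
```

The surviving tokens are what downstream steps such as component entity recognition would operate on.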
[0052] Component entity recognition (CER) 32 is applied to identify
component IDs in the NLP-processed text that identify medical
imaging device components. The CER 32 may employ any type of named
entity recognition (NER) algorithm or combination of NER
algorithms, e.g. leveraging a domain-specific vocabulary list to
identify component IDs, identifying component IDs based on factors
such as part-of-speech (if the NLP 30 includes grammatical
parsing), and/or so forth. Although the CER 32 is preferably a
fully automated process, in alternative embodiments it is
contemplated to be semi-supervised, e.g. with uncertain component
IDs presented to a user for confirmation or rejection. The CER 32
also performs operating parameters extraction to extract operating
parameters of the medical imaging device components identified by
component IDs, along with associated operating parameter ranges. In
one approach, operating parameters are identified by identifying
numeric values and parameter terms which are connected by linking
terms or symbols indicative of equality or inequality, and
associating parameters identified by the parameter terms with
medical imaging device components whose component IDs occur in the
same semantic units as the parameter terms. For example, the
sentence: "The PET detector modules should be kept at a temperature
below 100° C." can be processed by identifying the numeric
value (100° C.) and a parameter term (temperature) which are
connected by a linking term or symbol indicative of equality or
inequality (below), and the operating parameter "temperature" is
associated with the component ID (PET detector) occurring in the
same semantic unit (same sentence) as the parameter term
(temperature). Parameter terms can be identified using NER to
identify "named entities" representing operating parameters, e.g.
using a domain-specific vocabulary list, part-of-speech (if
grammatical parsing is available), and/or so forth. Again, this is
preferably a fully automated approach but in some embodiments
semi-supervised parameter term extraction is contemplated, e.g. by
presenting uncertain parameter names to a user for confirmation or
rejection. Operating parameter ranges associated with operating
parameters are determined based on the numeric values and the
equalities or inequalities indicated by the linking terms or
symbols. In the last example, the extracted operating range for the
operating parameter "temperature" of the component ID "PET
detector" can be expressed as "temperature < 100° C". The
knowledge engine 22 is then generated by operations including
formulating the operating parameter ranges into a set of decision
rules 36 for classifying medical imaging device log data as good
data (e.g. within operating range) or bad log data (e.g. outside
operating range). Advantageously, the extraction of component IDs
and operating parameters and associated operating parameter ranges,
and the generating of the knowledge engine 22, does not require
receiving input from a subject matter expert.
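The extraction approach described in this paragraph (a parameter term and a numeric value connected by a linking term, associated with a component ID in the same semantic unit) can be sketched as below. The component and parameter vocabularies and the linking-term table are hypothetical stand-ins for the CER output, and the regular expression is a deliberately simple illustration.

```python
import re

# Hypothetical vocabularies standing in for CER/NER output.
COMPONENTS = ["PET detector", "distribution transformer"]
PARAMETERS = ["temperature", "voltage"]
LINKS = {"below": "<", "above": ">", "under": "<", "over": ">"}

def extract_rule(sentence):
    """Sketch: bind (component ID, parameter term, linking term,
    numeric value) found in the same sentence into a range rule."""
    lowered = sentence.lower()
    component = next((c for c in COMPONENTS if c.lower() in lowered), None)
    parameter = next((p for p in PARAMETERS if p in lowered), None)
    m = re.search(r"(below|above|under|over)\s+(\d+(?:\.\d+)?)", lowered)
    if not (component and parameter and m):
        return None
    return {"component": component, "parameter": parameter,
            "rule": f"{parameter} {LINKS[m.group(1)]} {m.group(2)}"}

rule = extract_rule(
    "The PET detector modules should be kept at a temperature below 100 C.")
```

Running this on the example sentence from the text yields the operating parameter range for the "PET detector" component, which would then be formulated into a decision rule.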
[0053] Monitoring of the monitored medical imaging device 2
leverages machine logs 40 which are commonly generated by medical
imaging devices. In illustrative FIG. 1, the machine logs are
generated by the medical imaging device controller 14 based on
readings of the various sensors of the components, sub-systems, and
systems of the monitored medical imaging device 2. The machine logs
40 may be stored on the non-transitory storage medium 12 using a
logging format that typically employs a standardized syntax and
formatting for the machine type and manufacturer of the medical
imaging device 2. Consequently, extract-transform-load (ETL) 42 of
the machine log data 40 is straightforward based on the known
standard log notation. Component IDs and operating parameters and
their logged values are identified in the ETL-processed log data,
and the knowledge engine 22 applies the decision rules 36 to this
information to identify any operating parameters whose values are
out of the corresponding operating parameter range. Identified
out-of-range operating parameters trigger the generation of
maintenance alerts 24. Said another way, the knowledge engine 22 is
applied to the log data generated by the monitored medical imaging
device 2 to detect bad log data generated by the monitored medical
imaging device 2 and to generate maintenance alerts 24 in response
to the detected bad log data. The maintenance alerts are associated
with component IDs contained in the detected bad log data, e.g. the
log data will report a temperature reading being associated with
the component (identified by its component ID) for which the
reading was taken. Optionally, the generation of a maintenance
alert may occur only after the out-of-range parameter value is
detected for a certain time interval (possibly sensor-dependent) to
avoid overaggressive triggering of maintenance alerts by occasional
inaccurate sensor readings. It is also contemplated for maintenance
alerts 24 to be coupled with out-of-range log data (or more
specifically, out-of-range parameter values of the log data) on
other than a 1:1 basis. A maintenance alert 24 may be generated in
response to a detected combination of two or more parameters that
are outside their respective operating parameter ranges in the
out-of-range log data. That is, a decision rule may be applied to
detect a combination of two or more parameters that are outside
their respective operating parameter ranges in the out-of-range log
data. For example, a maintenance alert could be generated in
response to a (possibly weighted) sum of different out-of-range
parameters: two parameters may each be out-of-range by amounts
that individually would not trigger a maintenance alert, yet the
combination of out-of-range parameters may trigger an alert. The
alert may be triggered at lower tolerances in combination, e.g.
considering parameters identified generically as parameters A and B,
a maintenance alert may be issued if: A>5%; B>10%; or (A>3%
AND B>8%). In this example, the tolerance of parameter A is
tightened more than that of parameter B in the combination rule (a
40% versus a 20% tolerance reduction).
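The combination rule described above can be written directly as a predicate. This sketch uses the illustrative thresholds from the text, with A and B taken as generic percentage deviations.

```python
# Combination decision rule from the text: A and B trigger an alert
# individually at higher tolerances (A>5%, B>10%), or jointly at
# lower tolerances (A>3% AND B>8%). Inputs are percentage deviations.
def combination_alert(a, b):
    return a > 5 or b > 10 or (a > 3 and b > 8)
```

For instance, A at 4% with B at 9% triggers an alert through the combination term even though neither individual threshold is exceeded.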
[0054] The maintenance alerts 24 are presented to the user, e.g. by
being displayed in an "alerts" window shown on the display 18 of
the monitor interface device 14. For example, a maintenance alert
may be presented as a maintenance recommendation, e.g. "Recommend
to check PET detector temperature, recent readings have exceeded
the recommended upper limit of 100° C." In some embodiments,
the maintenance alerts may be graded as to severity or urgency,
e.g. based on how far out-of-range the parameter value is (e.g., a
PET detector reading of 102° C. may trigger a lowest-level
alert while a reading of 115° C. may trigger a higher-level
alert), and/or based on the criticality of the component (e.g. an
out-of-range parameter reading for the PET detectors may generate a
higher-level alert than an out-of-parameter reading for a bearing
of the patient support 8). In some embodiments, the processing 30,
32 of the manuals may include detection of urgency notations
contained in the text of the manuals themselves. For example, it is
not uncommon for a manual to highlight critical parameter ranges by
keywords such as "IMPORTANT"--when such a keyword is detected in
conjunction with an extracted decision rule then this decision rule
may be tagged as being of high importance, and any maintenance
alert triggered by such a rule will be assigned a high alert level.
The maintenance alerts 24 may optionally also indicate severity or
urgency in other terms, such as by the potential impact (e.g. slow
performance, short-term breakdown, long-term breakdown, potential
clinical impact, and/or so forth). In addition to being presented
to the user, the maintenance alerts 24 may optionally also be
forwarded to the medical imaging device vendor or other maintenance
service provider to potentially align service calls and/or parts
distribution.
[0055] In some embodiments, feedback from the logs is used to
adaptively adjust or tune the decision rules 36 applied by the
knowledge engine 22. This leverages service logs 44 which are
typically maintained for the medical imaging device 2, e.g. based
on service records generated manually by service personnel and/or
automatically by system software that, for example, detects and
timestamps replacement of various components. The service logs 44
may be stored on the non-transitory storage medium 12 using a
logging format that typically employs a standardized syntax and
formatting for the machine type and manufacturer, and/or a
standardized syntax and formatting used by the servicing
organization. Consequently, ETL 46 of the service log data 44 is
straightforward based on the known standard log notation. In
general, accuracy of generated maintenance alerts 24 is determined
in a feedback analysis 48 based on feedback extracted from the
service log 44, and the knowledge engine 22 is updated by adjusting
the decision rules 36 based on the determined accuracy of the
generated maintenance alerts. For example, accuracy of a generated
maintenance alert may be determined based on feedback extracted
from the service log 44 over a time interval extending from a time
of generation of the maintenance alert to an end time that is
dependent upon a mean time to failure (MTTF) of the medical imaging
device component identified by the component ID associated with the
maintenance alert. To account for statistical variability, the end
time may be a scaled value, e.g. 1.5×MTTF. If servicing of
the subject component is recorded in the service log 44 during this
time interval, or if a failure of the subject component is recorded
in the service log 44, then the maintenance alert may be deemed to
be accurate. On the other hand, if no servicing is detected in this
time interval and the subject component does not fail, then it may
be assumed that the maintenance alert was not accurate, in that
service personnel did not believe it appropriate to act upon the
maintenance alert and indeed the maintenance alert did not
accurately predict an impending failure of the subject
component.
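The feedback analysis 48 just described can be sketched as a lookup over the service log within the MTTF-derived window. The service log record shape ({"component", "event", "time"}) and the use of plain numeric timestamps are assumptions for illustration.

```python
# Sketch of the feedback analysis 48: an alert is deemed accurate if
# the service log records servicing or failure of the alerted
# component within scale*MTTF after the alert (scale=1.5 per the
# text's example). Record format and time units are assumptions.
def alert_accurate(alert_time, component_id, service_log, mttf, scale=1.5):
    window_end = alert_time + scale * mttf
    return any(
        entry["component"] == component_id
        and entry["event"] in ("serviced", "failed")
        and alert_time <= entry["time"] <= window_end
        for entry in service_log
    )

# Hypothetical service log: the PET detector was serviced at time 40.
service_log = [{"component": "PET detector", "event": "serviced", "time": 40}]
```

An alert raised at time 10 for a component with MTTF 30 would be judged accurate against this log, while the same alert for MTTF 10 would not, since the servicing falls outside the window.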
[0056] If the feedback analysis 48 determines that the maintenance
alert was accurate, then no adaptation or remediation is performed.
On the other hand, if the feedback analysis 48 determines that the
maintenance alert was not accurate, then this information is used
to adjust or remediate the decision rule that triggered the
maintenance alert. In preferred fully automated adaptation
embodiments, the adjustment or remediation of the inaccurate
decision rule may entail removal of the inaccurate decision rule,
or adjustment of the operating parameter range of the inaccurate
decision rule. Such adjustment may employ a percentage change, for
example if the inaccurate decision rule triggers a maintenance
alert if a maximum parameter value threshold is exceeded, then this
threshold may be increased by 5% (or by 10%, or by some other
chosen adjustment increment). Likewise, if the inaccurate decision
rule triggers a maintenance alert when the parameter value falls
below a minimum threshold, then this threshold may be decreased by
5% (or by 10%, or by some other chosen adjustment increment). In a
variant semi-supervised embodiment, the inaccurate decision rule
may be presented to a user for review, perhaps along with instances
in which it has triggered alerts in the past, and the user may
elect to make a manual adjustment to the inaccurate decision rule,
or may elect to remove the rule altogether, or may elect to
maintain the rule without adjustment.
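The automated remediation step can be sketched as widening the operating range of the inaccurate rule by a chosen increment. The dict-based rule representation here is a hypothetical format, not one defined by the disclosure; the 5% increment matches the text's example.

```python
# Sketch of automated rule remediation: raise the maximum threshold
# (or lower the minimum threshold) of an inaccurate decision rule by
# a chosen increment, 5% by default. The rule format is assumed.
def adjust_rule(rule, increment=0.05):
    adjusted = dict(rule)
    if "max" in rule:
        adjusted["max"] = rule["max"] * (1 + increment)  # widen upper bound
    if "min" in rule:
        adjusted["min"] = rule["min"] * (1 - increment)  # widen lower bound
    return adjusted
```

Removal of the rule, or presenting it to a user in the semi-supervised variant, would replace this call.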
[0057] Having provided an overview of a monitoring device for
monitoring a medical imaging device 2 with reference to FIG. 1,
some additional embodiments and examples are presented in the
following.
[0058] Regarding the processing operations 30, 32, the different
types of manuals associated with the medical imaging device 2
provide extensive information on the device. The user manual 27 can
describe features of the medical imaging device, how to use it, and
its use case bounding conditions, which helps to determine correct
practice and use by the user. The service manual 25 can give further
detailed information at the component level: life span based on use,
absolute life, symptoms and behaviors, actions to be taken for each
problem, and other critical parameters and their bounds. Similarly,
manufacturer specifications 26 are an important element of cost and
quality control for testing, calibration, and other measurement
processes.
Integrating the information from the above manuals provides
comprehensive information for providing maintenance alerts in
furtherance of predictive maintenance for the medical imaging
device and its components. Data ingested from the user manual 27,
service manual 25, and the manufacturer specifications 26 are
parsed to extract the context relevant information from the
documents. The Component Entity Recognition (CER) 32 is performed
to identify names of components (i.e. component IDs) and names of
operating parameters so as to identify relevant operating
parameters for these components. Co-reference resolution is carried
out to identify the referenced component in sentences where the
association is not clear. Dependency parsing is performed to
identify the patterns associated with the specific components that
need to be mapped. Further, each of the relevant sentences
associated with the components is extracted and tabulated.
[0059] With reference to FIGS. 2 and 3, an example is shown. FIG. 2
shows an example of manual content for a dedicated distribution
transformer from a CT installation guide. FIG. 3 shows a component
ID extracted from the content of FIG. 2, namely the component ID
"Dedicated Distribution Transformer", and key metrics extracted for
that component. All other extracted component IDs are similarly
processed to arrive at a structured table (FIG. 3 being a portion
of this table) containing all the components (or, more
specifically, the component IDs) and the associated key metrics
that need to be monitored for the smooth running of the medical
imaging device.
[0060] With reference now to FIG. 4, once the data has been
extracted and the components table has been created, a composite is
made that allows for the next step of data classification. The
creation of the decision rules 36 of the knowledge engine 22
entails combining data extracted from the different manuals in a
cogent way for every monitored component of the medical imaging
device. For each component, from the vast inputs that were cleaned
in the operations 30, 32, data from service manuals, manufacturer
specifications, and user manuals are consolidated in the form of
tables. This process is performed by: (1) taking the system details
from the user manuals, such as system make, model, etc., so that a
unique ID can be associated with each system; (2) for the system
details from step (1), extracting the corresponding data from the
manufacturer specifications and tagging it with a unique key
associated with the user manual; and (3) once the system details
are in place, using the service manual to consolidate the service
actions pertaining to the specific system configurations and tagging
them with the same unique ID. Once the above three steps are
done, the data is tagged per system model. A mapping procedure is
then performed, in which each of the faults that the system is
known to suffer is extracted and mapped to the errors that the
system logs, and correlations are drawn from these to predict the
nature of the data. This process is diagrammatically shown in FIG.
4.
[0061] With reference to FIG. 5, the components and the operating
parameters extracted for each component using the CER 32 with their
possible values can be represented using an unstructured data
table, as shown in FIG. 5. Let P_1, P_2, P_3, . . . , P_N be the N
operating parameters extracted from the fixed number of components
and let Vk_1, Vk_2, Vk_3, . . . , Vk_N represent the possible
parameter values for the respective components and parameters.
These are the sets of parameters extracted from all the manuals
using the NLP 30 and CER 32. The objective is to find meaningful
combinations (Ps_1, Ps_2, Ps_3, . . . , Ps_M, M≤N) of these
parameters and their respective optimal limits, aiming to produce a
model M_c that classifies data into the good pool or bad pool for
each component. This can be expressed notationally as follows:
[0062] M_1=Y(P_i) is the set of rules based on only one parameter;
[0063] M_2=Y(P_i1, P_i2) is the set of rules based on each
combination of two parameters;
[0064] M_k=Y(P_i1, P_i2, . . . , P_ik) is the set of rules based on
k parameters at a time;
[0065] Then let
M_c = ψ_{l=1}^{N-1}(M_l)
be the model considering rules with the optimal combination of all
the M_k.
[0066] With reference to FIG. 6, the output of the resulting
knowledge engine 22 will contain the set of decision rules 36 for
each of the components as depicted in the sample table of FIG. 6,
which aids in classifying the data into the good and bad pools.
Comparing FIG. 6 with FIG. 3 illustrates how the decision rules may
be formulated as maintenance recommendations as shown in the
rightmost column of FIG. 6.
[0067] In the following, some illustrative implementations of the
ETL 42 and application of the knowledge engine 22 shown in FIG. 1
are described.
[0068] The process of mapping the knowledge engine 22 onto the
machine log data 40 involves parsing the machine log data into a
system-readable format and storing it in a database system; this
process is the ETL 42 indicated in FIG. 1. The log data stored in
the database will typically be distributed across multiple tables of
the database. The first step toward mapping the knowledge engine 22
onto the machine log 40 is to aggregate related log data from the
database. The log data can be related in two ways: (1) temporal
dependency of log information; and (2) sharing of a common
component or sub-system.
[0069] With reference to FIG. 7, in the case of temporal dependency
of log information, log data from databases is extracted and
aligned in the temporal domain (increasing/decreasing) and temporal
pattern-matching is used to extract dependent log information, e.g.
as illustrated in FIG. 7.
[0070] The detection and aggregation of log information belonging
to the same type of component/sub-system can be performed using
various approaches to detect correlated log data. One suitable
approach is lexical analysis of the content of the log information,
correlating the lexical analysis output of two or more log
instances. A threshold on the correlation value is used to
categorize whether a given set of log information is related or
not. Another illustrative approach is to count the appearance or
non-appearance of two or more error log entries in a given
timeframe (typically data generated over a single day) and
categorize them as belonging to the same or different groups. Yet
another illustrative approach is to compute the mutual exclusivity
of a given set of log information entries with one another; two or
more log entries having the lowest mutual exclusivity value will be
considered as belonging to the same group of errors.
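The co-occurrence counting approach can be sketched as follows. The log record shape ({"day", "code"}) and the grouping threshold are assumptions for illustration: error codes that appear together in the same day's logs at least a threshold number of times are grouped.

```python
from collections import defaultdict

# Sketch of the co-occurrence grouping approach: count how often two
# error codes appear in the same day's logs; pairs co-occurring at or
# above a threshold are grouped. Record format is an assumption.
def cooccurrence_groups(log_entries, threshold=2):
    by_day = defaultdict(set)
    for entry in log_entries:
        by_day[entry["day"]].add(entry["code"])
    counts = defaultdict(int)
    for codes in by_day.values():
        for a in codes:
            for b in codes:
                if a < b:
                    counts[(a, b)] += 1   # pair seen together this day
    return {pair for pair, n in counts.items() if n >= threshold}
```

The lexical-correlation and mutual-exclusivity approaches mentioned in the text would replace the counting step with their respective scores.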
[0071] With reference to FIG. 8, once the log data is segregated
into groups, the matching of system log data 40 with the knowledge
engine 22 is performed. To perform mapping 50 of the log data, each
group 52 of segregated log information is sent to a splitter and
lexical analyzer (SLA) unit 54. The SLA unit 54 first splits log
information into meaningful tokens (typically words) and then each
token is cross-correlated with a set of vocabulary words 56 derived
from the sources that generated the knowledge engine (e.g. electronic
medical imaging device servicing and operating data such as the
illustrative user manual 27, specification data 26, service manual
25). Each element in the vocabulary 56 is assigned a weight based on
its importance. The weighting is decided by using domain expertise
and/or statistical techniques such as counting the occurrences of
words appearing in the knowledge engine, the number of connections
of a given word with different unique words, etc. The vocabulary 56
is suitably derived from the CER 32. The mapper 50 uses the weighted
and tokenized data from the SLA unit 54 to map log information to
entries (e.g. decision rules) of the knowledge engine 22; at this
stage both types of data (log data and knowledge engine rules) are
of the same nature and employ a common language, owing to the fact
that the log data is processed using the stored vocabulary 56. The
mapping 50 can be achieved by techniques such as word similarity
matching or correlating the words.
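The tokenize-weight-match behavior of the SLA unit 54 and mapper 50 can be sketched as below. The vocabulary, its weights, and the rule texts are illustrative assumptions; each rule is scored by the total weight of tokens it shares with the log line, and the best-scoring rule is selected.

```python
# Hypothetical vocabulary 56 with importance weights (assumed values).
VOCABULARY = {"temperature": 3.0, "detector": 2.0, "error": 0.5, "high": 1.0}

def map_log_to_rule(log_line, rules):
    """Sketch of SLA 54 + mapper 50: tokenize the log line, then pick
    the rule whose shared tokens carry the most vocabulary weight."""
    tokens = set(log_line.lower().split())
    def score(rule_text):
        shared = tokens & set(rule_text.lower().split())
        return sum(VOCABULARY.get(t, 0.0) for t in shared)
    return max(rules, key=score)

rules = ["detector temperature above limit", "couch position error"]
best = map_log_to_rule("ERROR high detector temperature", rules)
```

A production mapper would use word-similarity matching or correlation rather than exact token overlap, as the text notes, but the weighting principle is the same.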
[0072] With reference to FIG. 9, a method suitably performed by the
monitoring device of FIG. 1 to construct the decision rules 36 of
the knowledge engine 22 is illustrated. In an operation 60, the
various electronic medical imaging device servicing and operating
data 25, 26, 27, 28 are input to the knowledge engine builder 20.
In an operation 62, a reader module (e.g. illustrative NLP 30 and
CER 32) is applied to extract component IDs and operating
parameters from the manuals, as well as associated operating
parameter ranges. In an operation 64, the knowledge engine 22 is
built by operations including formulating the operating parameter
ranges into the set of decision rules 36 for classifying medical
imaging device log data as good or bad log data. As shown on the
right-side of FIG. 9, during deployment of the knowledge engine 22,
the log files 40 from the medical imaging device 2 are
pre-processed by the ETL 42. The CER 32 is applied to the
ETL-processed log data to extract component IDs and operating
parameters and values thereof. This is suitably done by applying
the same CER 32 used in processing the manuals to the ETL-processed
log data. In an operation 66, the decision rules 36 of the
knowledge engine 22 are mapped onto the log file output by the
ETL/CER processing chain. The operation 66 may be performed as
already described with reference to FIG. 8. In an operation 68, the
output of the mapping 66 is used to segregate log data into good
data (in which the operating parameters are in range as defined by
the decision rules 36) or bad data (in which at least one operating
parameter is out of range as defined by the decision rules 36). In
an operation 70, maintenance alerts 24 are issued based on detected
bad data. In an optional adaptive updating operation 72, the bad
data may be used to update the decision rules 36 based on whether
the maintenance alerts are determined to be accurate (e.g., based
on information from the service log 44 on whether the subject
component was serviced or failed during the MTTF time interval
after issuance of the maintenance alert).
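Operations 64-70 can be illustrated with a short sketch in which the decision rules 36 are formulated as per-component parameter ranges and applied to extracted log records. The component names, parameter names, and ranges below are purely hypothetical:

```python
# Decision rules as (component_id, parameter) -> (min, max) ranges.
# All names and ranges below are hypothetical illustrations.
rules = {
    ("tube", "anode_temp_C"): (0.0, 85.0),
    ("generator", "supply_V"): (11.5, 12.5),
}

def segregate(records):
    """Operation 68: split log records into good/bad per the decision rules."""
    good, bad = [], []
    for component, param, value in records:
        lo, hi = rules.get((component, param), (float("-inf"), float("inf")))
        (good if lo <= value <= hi else bad).append((component, param, value))
    return good, bad

def maintenance_alerts(bad):
    """Operation 70: issue one alert per out-of-range record."""
    return [f"ALERT: {c}.{p}={v} out of range" for c, p, v in bad]

good, bad = segregate([("tube", "anode_temp_C", 91.0),
                       ("generator", "supply_V", 12.0)])
```

The adaptive updating operation 72 would then adjust entries of `rules` when the service log 44 shows an alert was spurious.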
[0073] With reference to FIGS. 10 and 11, an illustrative more
detailed implementation of the reading operation 62 (e.g.
illustrative NLP 30 and CER 32) is described. Referencing FIG. 10,
the textual content of the raw manual data 80 typically includes
tables and text (e.g. paragraphs, sentences, bullet points, ordered
lists, or so forth). A table parser 82 parses table content into
semantic units such as rows and/or cells, while a text parser 84
parses textual content into semantic units such as sentences or
paragraphs. The NLP 30 and CER 32 are then applied as previously
described to extract component IDs and operating parameters and
associated operating parameter ranges or values. In the example of
FIG. 10, this latter processing is shown separately as a value
engine component 86, where parameter value association is performed
on a sentence-wise, cell-wise, or other semantic unit basis, to
generate component-wise output 88. Turning to FIG. 11, the
component-wise outputs 88 from the various electronic medical
imaging device servicing and operating data 25, 26, 27, 28 are
combined by an assimilator 90 to generate the combined unstructured
data table 92, as already described with reference to FIG. 5. This
data table 92 then serves as input to the knowledge engine building
operation 64 of FIG. 9.
[0074] With returning reference to FIG. 1, during a field servicing
task the user may want to perform fault isolation to determine the
root cause and/or solution to a maintenance alert using a fault
isolation flowchart. To this end, FIG. 1 shows a flowchart
converter 100 implemented on the server computer 10. To support the
flowchart converter, the knowledge engine builder 20 further
implements flowchart image extraction. For example, an image
element of a document 25, 26, 27, or 28 may be identified as
depicting a flowchart by the knowledge engine builder 20 by
detecting a caption of the image as a text block directly above or
below the image in the document, and further detecting text content
of that caption indicating a flow chart. This detection could
entail detecting the keyword "flow chart" or "flowchart", and/or
performing NLP to detect the indication of a flowchart.
Additionally or alternatively, the image may be identified as a
flowchart by analyzing the image content, e.g. detection of
rectangles, diamonds, or other geometric figures containing text
and connected by connecting lines is a strong indication that the
image depicts a flowchart. The flowchart converter 100 then
processes the image of the flowchart to generate a structured
electronic representation of the flowchart. This is done for each
flowchart identified in the documents 25, 26, 27, 28 to generate a
database 101 of structured electronic representations of the
respective flowcharts contained in the documents. A chatbot 102 can
be implemented on a computer 150 that includes at least one user
input device 152 (e.g., a keyboard and a mouse) and a display
device 154 for displaying a graphical user interface (GUI) 156. The
chatbot 102 can be displayed on the GUI 156. The chatbot 102
provides a convenient mechanism for user navigation of the
structured electronic representations of the flowcharts generated
by the flowchart converter 100 and stored in the database 101.
[0075] FIG. 12 shows an example of the flowchart converter 100. The
flowchart converter 100 includes a process flow digitization module
104, and a digitized process flow to chatbot conversion module 106
to generate the structured electronic representation of the
flowchart for implementation by the chatbot 102. The process flow
digitization module 104 is configured to receive a set of scanned
process flow images 108 as an input, e.g. received from the
knowledge engine builder 20 in the illustrative embodiment. The
images 108 depict the process flow or flowcharts, and may for
example be JPEG images, BMP images, PNG images, or the like. The
process flow digitization module 104 is configured to convert the
images 108 into a process tree or graph, such as a directed graph
(DG).
[0076] FIG. 13 shows an example of an image 108 showing a flowchart
110. The flowchart 110 includes shapes or blocks 112 typically used
in flowcharts. For example, the flowchart 110 includes start/stop
blocks (e.g., ovals), process or action blocks (e.g., rectangles),
decision blocks (e.g., diamonds), and arrows 114 connecting the
blocks 112. The flowchart may also include text in the blocks 112
and/or adjacent the arrows 114. The illustrative flowchart 110 is
fully captured in a single image, but this may not always be the
case. Typically, if a flowchart is on multiple pages of a service
manual, this is indicated in the caption and is also indicated by
connector blocks in the flowchart. For example, in a conventional
approach, a circle with a text label such as "A" or "To FIG. 7b" or
"From FIG. 7a" or so forth graphically depicts the flow connections
between the pages.
[0077] FIG. 14 depicts a DG 116 corresponding to the flowchart 110
of FIG. 13. To generate the DG 116, the process flow digitization
module 104 performs the following process on the image(s) 108. A
pre-processing operation is performed to remove noise from the
images. A skew correction operation is performed to align the
orientation of the images 108. A shape detection operation is
performed to detect one or more of the blocks or shapes 112 (e.g.,
oval, rectangle, diamond, arrows 114, and so forth) and connecting
arrows in the flow process. One approach for shape detection is to
apply an edge detector (e.g. Sobel, difference of Gaussians, Canny
algorithm, et cetera) to highlight/isolate edges, then apply
shape-specific detection algorithms (e.g., a circle detector, a
rectangle detector, and so forth), each operating on the basis of
pixel connectivity and a priori knowledge of the geometry of the
shape. Detection of arrow connectors is also performed after the
edge detector is applied, and is based on connectivity of lines
between the identified shapes. A text detection process (e.g., OCR)
is performed to detect text in the detected shapes 112, adjacent
the detected shapes, or on top of the detected shapes (e.g. text on
top of decision arrows 114). Each of the detected shapes and each
block of detected text (which could be a word, one-line phrase, or
contiguous multi-line block of text) are assigned a location anchor
in the image 108. These location anchors are used to associate text
and shapes or text and arrow connectors. Once the shapes 112 and
text are detected along with their associated location anchors and
connecting arrows, this information is analyzed across the image
108 to generate the DG 116.
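The shape detection step can be sketched as follows. This sketch assumes the edge-detection and contour-approximation stages have already produced polygonal vertex lists (as an OpenCV Canny plus approxPolyDP pipeline would); the classification heuristics below are illustrative, not the specific detectors of the disclosed system:

```python
import math

def classify_polygon(points):
    """Classify an approximated contour by vertex count and edge orientation.

    `points` is a list of (x, y) vertices. Heuristics: many vertices
    approximate an oval; four vertices are a rectangle when the edges are
    near axis-aligned, a diamond when they run near 45 degrees.
    """
    n = len(points)
    if n > 6:
        return "oval"  # many vertices: an approximated ellipse
    if n == 4:
        (x0, y0), (x1, y1) = points[0], points[1]
        angle = abs(math.degrees(math.atan2(y1 - y0, x1 - x0))) % 90
        return "rectangle" if angle < 20 or angle > 70 else "diamond"
    return "unknown"

rect = classify_polygon([(0, 0), (10, 0), (10, 4), (0, 4)])
diamond = classify_polygon([(5, 0), (10, 5), (5, 10), (0, 5)])
```

Arrow connectors would then be detected separately, as lines whose endpoints touch two classified shapes.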
[0078] To create the DG 116, the detected shapes 112 and text are
parsed from top to bottom, and a node 118 is created for each
shape. Each node 118 includes a parent link pointing to a parent
node 120, child link(s) 122 connected to child nodes 124, a node
type (e.g., a process or decision step in the flowchart), and node
text. In a second pass, the shapes 112 are again parsed from top to
bottom and detected directed arrows 114 are used to connect the
nodes 118 based on their proximity and direction to shapes 112 as
present in the image 108.
[0079] Once the DG 116 is created, the digitized process flow to
chatbot conversion module 106 is configured to convert the DG 116
to a structured dialog flow representation suitable for
presentation by the chatbot 102. The digitized process flow to
chatbot conversion module 106 converts the content of the DG 116
into a structured dialog flow representation that can be presented
by the chat bot as a natural language conversation flow. The
chatbot conversation flow is represented in a structured dialog
flow representation, such as JavaScript Object Notation (JSON)
representation 130.
[0080] An example of a JSON representation 130 is provided
below:
{
  "publishTime": 0,
  "sessions": [{
    "publishTime": 0,
    "topics": [{
      "dialogues": [{
        "actionType": "", "sequence": "", "previous": "", "actionValue": "",
        "deliveryMode": "", "Break": false, "alternatives": [],
        "id": "5ee1febe149b3f0001dcc105",
        "text": "Error Displayed on screen 208 with 351 (Note this Error is observed with FA system only)",
        "disableOpenAnswer": false, "type": "text", "characterId": "bot"
      }, {
        "previous": "5ee1febe149b3f0001dcc105", "actionValue": "", "Break": false,
        "label": "", "type": "text", "actionType": "", "sequence": "",
        "deliveryMode": "", "alternatives": [],
        "id": "5ee1fed4149b3f0001dcc106",
        "text": "Do tube conditioning procedure",
        "disableOpenAnswer": false, "characterId": "bot"
      }, {
        "previous": "5ee1fed4149b3f0001dcc106", "actionValue": "", "Break": false,
        "label": "", "type": "text", "actionType": "", "sequence": "",
        "deliveryMode": "", "alternatives": [],
        "id": "5ee1feed149b3f0001dcc107",
        "text": "Does tube conditioning pass without error?",
        "disableOpenAnswer": false, "characterId": "bot"
      }, {
        "actionType": "", "sequence": "", "previous": "5ee1feed149b3f0001dcc107",
        "actionValue": "", "deliveryMode": "", "Break": false, "alternatives": [],
        "id": "5ee1fefc149b3f0001dcc108", "text": "Yes", "label": "",
        "disableOpenAnswer": false, "type": "text"
      }, {
        "actionType": "", "sequence": "", "previous": "5ee1feed149b3f0001dcc107",
        "actionValue": "", "deliveryMode": "", "Break": false, "alternatives": [],
        "id": "5ee1ff05149b3f0001dcc109", "text": "No", "label": "",
        "disableOpenAnswer": false, "type": "text"
      }, {
        "previous": "5ee1fefc149b3f0001dcc108", "actionValue": "", "Break": false,
        "label": "", "type": "action", "actionType": "Exit", "sequence": "",
        "deliveryMode": "", "alternatives": [],
        "id": "5ee1ff12149b3f0001dcc10a", "text": "",
        "disableOpenAnswer": false, "characterId": "bot"
      }, {
        "previous": "5ee1ff05149b3f0001dcc109", "actionValue": "", "Break": false,
        "label": "", "type": "text", "actionType": "", "sequence": "",
        "deliveryMode": "", "alternatives": [],
        "id": "5ee1ff24149b3f0001dcc10b", "text": "Replace monobloc",
        "disableOpenAnswer": false, "characterId": "bot"
      }, {
        "previous": "5ee1ff24149b3f0001dcc10b", "actionValue": "", "Break": false,
        "label": "", "type": "text", "actionType": "", "sequence": "",
        "deliveryMode": "", "alternatives": [],
        "id": "5ee1ff32149b3f0001dcc10c", "text": "Is error resolved?",
        "disableOpenAnswer": false, "characterId": "bot"
      }, {
        "actionType": "", "sequence": "", "previous": "5ee1ff32149b3f0001dcc10c",
        "actionValue": "", "deliveryMode": "", "Break": false, "alternatives": [],
        "id": "5ee1ff3f149b3f0001dcc10d", "text": "Yes", "label": "",
        "disableOpenAnswer": false, "type": "text"
      }, {
        "actionType": "", "sequence": "", "previous": "5ee1ff32149b3f0001dcc10c",
        "actionValue": "", "deliveryMode": "", "Break": false, "alternatives": [],
        "id": "5ee1ff49149b3f0001dcc10e", "text": "No", "label": "",
        "disableOpenAnswer": false, "type": "text"
      }, {
        "previous": "5ee1ff3f149b3f0001dcc10d", "actionValue": "", "Break": false,
        "label": "", "type": "action", "actionType": "Exit", "sequence": "",
        "deliveryMode": "", "alternatives": [],
        "id": "5ee1ff56149b3f0001dcc10f", "text": "",
        "disableOpenAnswer": false, "characterId": "bot"
      }, {
        "previous": "5ee1ff49149b3f0001dcc10e", "actionValue": "", "Break": false,
        "label": "", "type": "text", "actionType": "", "sequence": "",
        "deliveryMode": "", "alternatives": [],
        "id": "5ee1ff67149b3f0001dcc110", "text": "Replace SCPU",
        "disableOpenAnswer": false, "characterId": "bot"
      }, {
        "previous": "5ee1ff67149b3f0001dcc110", "actionValue": "", "Break": false,
        "label": "", "type": "action", "actionType": "Exit", "sequence": "",
        "deliveryMode": "", "alternatives": [],
        "id": "5ee1ff75149b3f0001dcc111", "text": "",
        "disableOpenAnswer": false, "characterId": "bot"
      }],
      "publishTime": 0, "sequence": "", "firstTopicInSession": true,
      "name": "Topic 1", "description": "First topic",
      "id": "5ee1fe4f149b3f0001dcc104"
    }],
    "name": "Session 1", "description": "First session",
    "id": "5ee1fe4f149b3f0001dcc103"
  }],
  "defaultLanguage": "English",
  "name": "IGT Fault Isolation Scripts",
  "description": "Program for IGT fault isolation scripts",
  "id": "5ee1fe4f149b3f0001dcc102",
  "contentType": "Dialogue flow"
}
[0081] The nodes 118 in the process flow (excepting the first, e.g.
"start" node) each contain a link to a previous node or the parent
node 120. A node 118 is also differentiated as a bot node or a user
node, to identify bot and user dialogues in a conversation flow. A
user dialog node may, for example, receive the result of a test
performed by the user in response to a preceding bot dialog node
instructing the user to perform that test. Based on the received
test result, flow may pass to one of several possible child nodes,
each of which is a bot dialog node presenting the next step of the
flowchart. The parent node 120 in the process flow is delivered as
the first bot dialogue when the chatbot conversation is initiated.
Thereafter, the user response is matched against the user nodes 118
in the flow which are linked to the previously delivered bot
dialogue. Various string similarity-matching techniques, such as
cosine similarity or shortest distance, can be used to find the
best match. The matched node is then used to find the bot node
linked to it, which is delivered as the next bot dialogue in the
conversation. The conversation continues using the same approach.
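The cosine-similarity matching of a user response against the candidate user nodes can be sketched as follows, using word-count vectors. The response and node texts are hypothetical:

```python
from collections import Counter
import math

def cosine(a: str, b: str) -> float:
    """Cosine similarity between word-count vectors of two strings."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = math.sqrt(sum(c * c for c in va.values())) * \
           math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

def best_match(response: str, user_nodes: list[str]) -> str:
    """Pick the user node whose text best matches the typed response."""
    return max(user_nodes, key=lambda text: cosine(response, text))

# Hypothetical free-text response matched against "Yes"/"No" user nodes:
nxt = best_match("yes it passed", ["Yes", "No"])
```

The bot node linked to the matched user node would then be delivered as the next bot dialogue.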
[0082] With continuing reference to FIGS. 1 and 12-15, the server
computer 10 is configured as described above to perform method or
process 200 of converting the flowchart 110 to a structured
electronic representation, such as the DG 116, and to convert the
DG 116 to a structured dialog flow representation. The
non-transitory storage medium 12 stores instructions which are
readable and executable by the electronic processors 10, 16 to
perform disclosed operations including performing the method or
process 200. In some examples, the method 200 may be performed at
least in part by cloud processing.
[0083] With reference to FIG. 16, an illustrative embodiment of an
instance of the method 200 is diagrammatically shown as a
flowchart. At an operation 202, a plurality of shapes 112
corresponding to flowchart blocks of the flowchart 110 are
identified in one or more images 108 of the flow chart. At an
operation 204, one or more arrows 114 defining flow paths between
the flowchart blocks in the image 108 are identified. In some
examples, identifying the arrows 114 can include identifying the
flowchart blocks connected by the arrows and identifying the
directionality of the defined flow paths based on the arrowheads of
the arrows.
[0084] At an operation 206, text labels, and their locations in the
image 108, are identified. In addition, an OCR operation can be
performed on the image 108 to determine text content of the
identified text labels. As used herein, the term "text labels" (and
variants thereof) refers to a portion of the image 108 containing
text, while the term "text content" refers to content that has
undergone an OCR process. That is, the text content comprises a
string of ASCII characters (or other suitable characters, such as
Chinese or Indian characters). Moreover, the OCR process is not
limited to images 108 of the flowchart 110 generated by optical
scanning. In some embodiments, the operation 206 includes
extracting text corresponding to location anchors in the image 108.
The location anchors serve as a location marker for the identified
text. For example, the location anchor might be a single point
(e.g., center of a block) or the location anchor might be an entire
footprint of the shape 112 (or arrow 114 or text label) in the
image 108. The "footprint" of a block or shape 112 can be its
outline or perimeter, while the "footprint" of an arrow 114 can be a
line following the arrow. The "footprint" of a text label is a
polygon enclosing and closely fitted to the area of the text in the
image 108.
[0085] At an operation 208, the text labels (from the operation
206) are associated with flowchart blocks, or defined flow paths in
the flowchart, based on the locations of the text labels, the
flowchart blocks or shapes 112, and arrows 114 in the image 108. In
some examples, the associating operation 208 includes determining
the distances from the footprint of the text label to the
footprints of the various blocks or shapes 112 and arrows 114, and
associating the text label to the block or arrow of shortest
distance. In some examples, a label "Yes" on an arrow 114 coming
from a decision diamond shape 112 might be as close to the diamond
as to the arrow, so a secondary consideration might be the text
content of the text label. For example, text content of "Yes" or
"No" is likely to be labeling an arrow 114. Furthermore, text
content inside a block or shape 112 is almost certainly properly
associated to that block (so the distance is zero).
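The shortest-footprint-distance association of the operation 208 can be sketched with bounding boxes standing in for the footprints. The element names are hypothetical; a label lying inside a shape has distance zero and so is associated to that shape:

```python
def rect_distance(r1, r2):
    """Shortest distance between two axis-aligned boxes (x0, y0, x1, y1);
    zero when they overlap, i.e. when the label lies inside the footprint."""
    dx = max(r2[0] - r1[2], r1[0] - r2[2], 0)
    dy = max(r2[1] - r1[3], r1[1] - r2[3], 0)
    return (dx * dx + dy * dy) ** 0.5

def associate(label_box, element_boxes):
    """Attach a text label to the block or arrow of shortest distance.
    `element_boxes` maps an element name to its bounding box."""
    return min(element_boxes,
               key=lambda name: rect_distance(label_box, element_boxes[name]))

# Hypothetical footprints: a decision diamond and its outgoing "Yes" arrow.
elements = {"decision_diamond": (40, 40, 80, 80), "yes_arrow": (80, 55, 120, 60)}
owner = associate((90, 50, 100, 56), elements)
```

The secondary text-content consideration (e.g. "Yes"/"No" favoring an arrow) would be applied as a tie-breaker when distances are close.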
[0086] At an operation 210, a structured electronic representation
of the flowchart, such as the DG 116, is generated based on the
flowchart blocks or shapes 112, the flow paths, and the text labels.
A structure of the structured electronic representation 116 is
determined at least based on the flow paths between the flowchart
blocks or shapes 112 of the image 108. To do so, in some
embodiments, shape-indicated functions of the flowchart blocks 112
can be determined by comparing the identified shapes with standard
flow chart shapes representing corresponding functions. For
example, a flowchart typically uses oval shapes for start and stop
operations, rectangles to represent process blocks, and diamonds to
represent decision blocks. The "standard" blocks are compared with
the shapes 112 identified in the operation 202.
[0087] In other embodiments, functions are assigned to the
identified flowchart blocks or shapes 112 based on the text labels
associated to the flowchart blocks. Additionally or alternatively,
functions are assigned to the identified flowchart blocks or shapes
112 based on the text labels associated to the defined flow paths.
If the image 108 incorrectly shows a square representing a decision
block (e.g., a decision block should be a diamond according to the
standard flow chart shapes), it may still be possible to recognize
that the function is a decision block because it will have two (or
more) outgoing arrows (e.g., one labeled "Yes" followed if the
decision is Yes, the other labeled "No" followed if the decision is
No). In fact, any time a block has two or more outgoing arrows 114
(regardless of the labels) it must be a decision block, since the
block must "decide" which outgoing arrow to follow. The text labels
associated to the flow paths can be useful in determining which
path is which.
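The function-assignment logic of the operations above can be sketched as a lookup against the standard flowchart shape vocabulary, overridden by the out-degree rule just described. The mapping is illustrative:

```python
# Standard flowchart shape vocabulary, per the description above.
STANDARD_FUNCTIONS = {"oval": "start/stop", "rectangle": "process",
                      "diamond": "decision"}

def assign_function(shape: str, out_degree: int) -> str:
    """Assign a function to a flowchart block from its shape and structure."""
    if out_degree >= 2:
        return "decision"  # two or more outgoing arrows: must be a decision
    return STANDARD_FUNCTIONS.get(shape, "process")

# A square wrongly drawn for a decision block is still recognized:
fn = assign_function("rectangle", out_degree=2)
```

Text labels on the outgoing flow paths ("Yes"/"No") would then identify which branch is which.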
[0088] In some embodiments, the identifying operation 202 of the
shapes 112, the identifying operation 204 of the arrows 114, and
the generating of the structured representation 116 can be repeated
for each image in a set of images 108 for the flow chart. For
example, a first page of a flowchart 110 can end with an arrow to a
circle block labeled "A", and then the next page begins with the
same circle block labeled "A" and moves on to the next block. In
this example, a connecting block or shape 112 in different pages
can be identified, and the structured representation 116 can be
generated based on a connection indicated by the connecting
blocks.
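Joining per-page flowcharts at matching connector blocks can be sketched as an edge-splicing step over the pages' edge lists. The block names and connector convention below are hypothetical:

```python
def merge_pages(pages):
    """Join per-page edge lists at connector nodes sharing a label (e.g. "A").
    Each page is a list of (src, dst) edges; connector nodes are named
    "connector:<label>" and appear on both pages."""
    edges = [e for page in pages for e in page]
    connectors = {}
    for src, dst in edges:
        for node in (src, dst):
            if node.startswith("connector:"):
                connectors.setdefault(node, []).append((src, dst))
    # Keep non-connector edges, then splice each connector's in/out edges.
    merged = [e for e in edges
              if not (e[0].startswith("connector:")
                      or e[1].startswith("connector:"))]
    for label, conn_edges in connectors.items():
        into = [s for s, d in conn_edges if d == label]
        outof = [d for s, d in conn_edges if s == label]
        merged += [(s, d) for s in into for d in outof]
    return merged

page1 = [("check_error", "connector:A")]   # page 1 ends at circle "A"
page2 = [("connector:A", "replace_monobloc")]  # page 2 resumes from "A"
merged = merge_pages([page1, page2])
```

The structured representation 116 is then generated from the merged edge list as if the flowchart occupied a single image.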
[0089] In other embodiments, the structured representation 116 can
be transferred to the computer 150 (see FIG. 1) operable by a user.
The structured representation 116 can be displayed on the display
device 154 of the computer 150. The user can then enter, via the at
least one user input device 152, one or more inputs indicative of
editing a portion of the structured electronic representation 116.
The structured representation 116 can then be updated based on the
user inputs.
[0090] In some embodiments, as noted, the structured electronic
representation 116 can comprise the DG 116. In these embodiments,
the method 200 can include an operation 212 that includes
converting the directed graph 116 to a structured dialog flow
representation (which can be considered another form of structured
electronic representation of the flowchart), such as the JSON
representation 130. In these embodiments, the JSON representation
130 can be presented on the GUI 156 on the computer 150. The GUI
156 can comprise, in some examples, the chatbot 102. The chatbot
102 is configured to guide the user through the JSON representation
130. To do so, a current flowchart block of the JSON representation
130 is presented on the chatbot 102. An input is received from the
user by the chatbot 102 (e.g., a mouse click, or text input as a
question). The current flowchart block can be updated based on the
structure of the JSON representation 130 and the received input.
These processes are repeated to guide the user through the
flowchart 110.
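The guidance loop of the chatbot 102 can be sketched as a walk over a dialog tree, with the current flowchart block updated by each user input. Here the user inputs are scripted rather than interactive, the tree uses simplified keys rather than the full JSON representation 130, and the dialog content is hypothetical:

```python
def run_chatbot(dialog, answers):
    """Walk a dialog tree: each node is {"text": ..., "children": {answer: node}}.
    `answers` is a scripted sequence of user inputs; a real chatbot would
    read them interactively and fuzzy-match them against the child labels."""
    transcript = []
    node = dialog
    for answer in answers:
        transcript.append(node["text"])        # present current block
        if answer not in node.get("children", {}):
            break
        node = node["children"][answer]        # update current block
    transcript.append(node["text"])
    return transcript

dialog = {
    "text": "Do tube conditioning procedure. Did it pass?",
    "children": {
        "Yes": {"text": "Done. Exit."},
        "No": {"text": "Replace monobloc. Is error resolved?", "children": {}},
    },
}
log = run_chatbot(dialog, ["No"])
```

Each decision block of the flowchart 110 thus becomes a branch point in the conversation, with the child chosen by the matched user response.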
[0091] In the illustrative embodiments, the flowchart that is
converted to a structured electronic representation (e.g. DG or
structured dialog flow representation) is a fault isolation
flowchart used in conjunction with diagnosing a problem with a
medical imaging device. More generally, the disclosed approach can
be employed to generate a structured electronic representation, and
optionally a structured dialog flow representation presented by a
chatbot, for fault isolation flowcharts employed in conjunction
with other types of servicing tasks (e.g. other medical devices
such as infusion pumps, mechanical ventilators, or so forth), or
even more generally servicing tasks for other complex
systems/devices such as aircraft, locomotives, HVAC systems, and/or
so forth.
[0092] The invention has been described with reference to the
preferred embodiments. Modifications and alterations may occur to
others upon reading and understanding the preceding detailed
description. It is intended that the exemplary embodiment be
construed as including all such modifications and alterations
insofar as they come within the scope of the appended claims or the
equivalents thereof.
* * * * *