U.S. patent application number 12/753664 was filed with the patent office on 2010-04-02 and published on 2010-10-07 as publication number 20100257477 for methods, apparatus, and systems for documenting and reporting events via geo-referenced electronic drawings.
This patent application is currently assigned to CertusView Technologies, LLC. The invention is credited to Curtis Chambers, Jeffrey Farr, and Steven Nielsen.
Application Number | 12/753664
Publication Number | 20100257477
Family ID | 42826905
Filed Date | 2010-04-02
Publication Date | 2010-10-07
United States Patent Application | 20100257477
Kind Code | A1
Inventors | Nielsen; Steven; et al.
Publication Date | October 7, 2010
METHODS, APPARATUS, AND SYSTEMS FOR DOCUMENTING AND REPORTING
EVENTS VIA GEO-REFERENCED ELECTRONIC DRAWINGS
Abstract
One or more electronic drawings may be generated to document
and/or report an event, in which various elements of the drawing(s)
include geographic reference information. A symbols library, a
collection of images (e.g., geo-referenced images), geo-location
data, and time and location data may be stored in memory for use in
connection with such drawings, and a drawing tool graphical user
interface (GUI) may be provided for electronically marking-up
images on which one or more drawings are based. The marked-up
images may be event-specific images, and may be integrated into
various types of electronic reports for accurately depicting events
of interest, such as personal injury events, vehicle accidents,
and/or property damage events.
Inventors: | Nielsen; Steven; (North Palm Beach, FL); Chambers; Curtis; (Palm Beach Gardens, FL); Farr; Jeffrey; (Jupiter, FL)
Correspondence Address: | WOLF GREENFIELD & SACKS, P.C., 600 ATLANTIC AVENUE, BOSTON, MA 02210-2206, US
Assignee: | CertusView Technologies, LLC, Palm Beach Gardens, FL
Family ID: | 42826905
Appl. No.: | 12/753664
Filed: | April 2, 2010
Related U.S. Patent Documents

Application Number | Filing Date | Patent Number
61166385 | Apr 3, 2009 |
61166392 | Apr 3, 2009 |
Current U.S. Class: | 715/771
Current CPC Class: | B60R 2300/302 20130101; G06K 9/00791 20130101; B60R 1/00 20130101; B60R 2300/304 20130101; B60R 2300/50 20130101; G06T 11/60 20130101; G07C 5/085 20130101; G07C 5/008 20130101
Class at Publication: | 715/771
International Class: | G06F 3/048 20060101 G06F003/048
Claims
1. An apparatus for documenting an incident at an incident site,
the apparatus comprising: a communication interface; a display
device; at least one user input device; a memory to store
processor-executable instructions; and a processing unit
communicatively coupled to the communication interface, the display
device, the at least one user input device, and the memory, wherein
upon execution of the processor-executable instructions by the
processing unit, the processing unit: controls the communication
interface to electronically receive source data representing at
least one input image of a geographic area including the incident
site; controls the display device to display at least a portion of
the at least one input image so as to provide a displayed image;
acquires user input from the at least one user input device to
provide a representation of at least a portion of the incident on
the displayed image; automatically acquires time and/or date
information indicating a time and/or date that the user input was
acquired; generates a marked-up digital image including the
representation of at least a portion of the incident based on the
user input; further controls the communication interface and/or the
memory to electronically transmit and/or electronically store
information relating to the marked-up digital image so as to
document the incident with respect to the geographic area; and
further controls the communication interface and/or the memory to
electronically transmit and/or electronically store the time and/or
date information in association with the information relating to
the marked-up digital image so as to document when the
representation of the at least a portion of the incident was
created.
2. The apparatus of claim 1, wherein the processing unit: modifies
the marked-up digital image to include a timestamp including the
time and/or date information.
3. The apparatus of claim 1, wherein the processing unit:
automatically acquires location information indicating a location
where the user input was acquired; and further controls the
communication interface and/or the memory to electronically
transmit and/or electronically store the location information in
association with the information relating to the marked-up digital
image so as to document where the representation of the at least a
portion of the incident was created.
4. The apparatus of claim 3, wherein the processing unit: modifies
the marked-up digital image to include a location stamp including
the location information.
5. The apparatus of claim 1, wherein the processing unit: further
controls the memory to electronically store a media file at a first
time and a first date; and further controls the communication
interface and/or the memory to electronically transmit and/or
electronically store information indicating the first time and/or
the first date in association with the media file so as to document
when the media file was stored.
6. The apparatus of claim 5, wherein the processing unit: modifies
the media file to include the first time and/or the first date.
7. The apparatus of claim 5, wherein the processing unit: further
controls the communication interface to receive the media file.
8. The apparatus of claim 1, wherein the processing unit: further
controls the memory to electronically store a media file;
automatically acquires location information indicating a location
where the media file was stored; and further controls the
communication interface and/or the memory to electronically
transmit and/or electronically store the location information in
association with the media file so as to document where the media
file was stored.
9. The apparatus of claim 8, wherein the processing unit: modifies
the media file to include the location information.
10. The apparatus of claim 1, wherein the processing unit: further
controls a camera device to acquire image data; automatically
acquires directional orientation information indicating a
directional orientation of the camera device when the image data
was acquired; and further controls the communication interface
and/or the memory to electronically transmit and/or electronically
store the directional orientation information in association with
the image data so as to document a directional orientation of the
camera device when the image data was acquired.
11. The apparatus of claim 10, wherein the processing unit: further
controls the communication interface and/or the memory to
electronically transmit and/or electronically store the image data
and the directional orientation information in association with the
information relating to the marked-up digital image.
12. The apparatus of claim 1, wherein the processing unit: further
controls the communication interface to receive a media file
comprising information relating to at least one condition at or
near the incident site; and further controls the communication
interface and/or the memory to electronically transmit and/or
electronically store the media file in association with the
information relating to the marked-up digital image so as to
document the at least one condition at or near the incident
site.
13. The apparatus of claim 12, wherein the at least one condition
comprises at least one weather condition.
14. The apparatus of claim 12, wherein the at least one condition
comprises at least one traffic condition.
15. The apparatus of claim 1, wherein the at least one input image
is geo-referenced.
16. The apparatus of claim 15, wherein the processing unit: scales
at least a portion of the representation based on a scale of the at
least one geo-referenced input image.
17. The apparatus of claim 16, wherein the at least a portion of
the representation comprises a symbol selected from a symbol
library.
18. The apparatus of claim 15, wherein the processing unit:
acquires geographic location information corresponding to the
incident site from a global positioning system; and acquires, based
on the geographic location information, the source data
representing the at least one geo-referenced input image of the
geographic area including the incident site.
19. The apparatus of claim 15, wherein the at least one
geo-referenced input image comprises a first geo-referenced input
image, and wherein the processing unit: generates, using
geo-reference data associated with the first geo-referenced input
image, a second geo-referenced input image having a different
perspective than the first geo-referenced input image.
20. The apparatus of claim 1, wherein: the incident involves a
vehicle; and the representation of at least a portion of the
incident comprises a representation of the vehicle.
21. The apparatus of claim 20, wherein the processing unit: scales
the representation of the vehicle based on a scale of the at least
one input image.
22. The apparatus of claim 20, wherein the processing unit: selects
a vehicle symbol corresponding to the vehicle from a plurality of
vehicle symbols in a symbol library; and wherein the representation
comprises the selected vehicle symbol.
23. The apparatus of claim 22, wherein the processing unit: selects
the vehicle symbol based on a vehicle identification number of the
vehicle.
24. The apparatus of claim 1, wherein the incident involves a
vehicular incident, and wherein the processing unit: controls the
communication interface and/or the memory to electronically
transmit and/or electronically store a vehicular incident report
including the marked-up digital image.
25. The apparatus of claim 1, wherein the incident involves a
personal injury, and wherein the processing unit: controls the
communication interface and/or the memory to electronically
transmit and/or electronically store a personal injury report
including the marked-up digital image.
26. The apparatus of claim 1, wherein the incident involves
property damage, and wherein the processing unit: controls the
communication interface and/or the memory to electronically
transmit and/or electronically store a property damage report
including the marked-up digital image.
27. The apparatus of claim 1, wherein the incident involves
police-investigated activity, and wherein the processing unit:
controls the communication interface and/or the memory to
electronically transmit and/or electronically store a police report
including the marked-up digital image.
28. The apparatus of claim 1, wherein the processing unit: controls
the communication interface and/or the memory to electronically
transmit and/or electronically store an incident report including
the marked-up digital image.
29. The apparatus of claim 28, wherein the processing unit:
controls the communication interface and/or the memory to
electronically transmit and/or electronically store a descriptor
file comprising: information identifying the incident report; and
information identifying the marked-up digital image.
30. The apparatus of claim 1, wherein the processing unit: controls
the display device to display a symbol palette, the symbol palette
comprising a selection of symbols for depicting objects and/or
events.
31. The apparatus of claim 30, wherein the selection of symbols
comprises at least one landmark symbol.
32. The apparatus of claim 30, wherein the selection of symbols
comprises at least one vehicle symbol.
33. The apparatus of claim 30, wherein the selection of symbols
comprises at least one person symbol.
34. The apparatus of claim 1, wherein the processing unit: controls
the display device to display a sketching palette, the sketching
palette comprising a selection of renderable shapes.
35. The apparatus of claim 1, wherein the processing unit further:
categorizes the source data representing the at least one input
image, and/or the representation of at least a portion of the
incident, into a plurality of display layers of the marked-up
digital image; controls the display device and/or the at least one
user input device so as to provide for independent enabling or
disabling for display of at least some display layers of the
plurality of display layers; and controls the display device so as
to display only enabled display layers of the plurality of display
layers.
36. The apparatus of claim 35, wherein the processing unit:
categorizes the source data representing the at least one input
image as a reference layer; and categorizes the representation of
at least a portion of the incident as a symbols layer.
37. The apparatus of claim 35, wherein the processing unit further
controls the display device and/or the at least one user input
device to provide for alternate enabling and disabling for display
of at least one display layer of the at least some display layers
so as to facilitate a comparative viewing of the at least some
display layers.
38. The apparatus of claim 35, wherein the processing unit further:
controls the display device so as to display a layer directory or
layer legend pane respectively indicating all of the plurality of
display layers; and controls the display device and/or the at least
one user input device to allow for selection of at least one
display layer of the plurality of display layers indicated in the
layer directory or layer legend pane so as to enable or disable for
display the selected at least one display layer.
39. The apparatus of claim 35, wherein: at least one display layer
of the plurality of display layers includes a plurality of
sub-layers; the processing unit categorizes at least some of the
source data representing the at least one input image, and/or at
least some of the representation of at least a portion of the
incident, into the plurality of sub-layers; the processing unit
controls the display device and/or the at least one user input
device so as to provide for independent enabling or disabling for
display of each sub-layer of the plurality of sub-layers of the at
least one display layer; and the processing unit controls the
display device so as to display only enabled sub-layers of the
plurality of sub-layers.
40. A method for documenting an incident at an incident site, the
method comprising: A) electronically receiving source data
representing at least one input image of a geographic area
including the incident site; B) processing the source data so as to
display at least a portion of the at least one input image on a
display device; C) adding to the at least a portion of the at least
one input image, based on user input received via at least one user
input device associated with the display device, a representation
of at least a portion of the incident to thereby generate a
marked-up digital image; D) automatically acquiring time and/or
date information indicating a time and/or date that the user input
was acquired; E) electronically transmitting and/or electronically
storing information relating to the marked-up digital image so as
to document the incident with respect to the geographic area; and
F) electronically transmitting and/or electronically storing the
time and/or date information in association with the information
relating to the marked-up digital image so as to document when the
representation of the at least a portion of the incident was
created.
41. At least one computer-readable medium encoded with instructions
that, when executed on at least one processing unit, perform a
method for documenting an incident at an incident site, the method
comprising: A) electronically receiving source data representing at
least one input image of a geographic area including the incident
site; B) processing the source data so as to display at least a
portion of the at least one input image on a display device; C)
receiving user input via at least one user input device associated
with the display device; D) automatically acquiring time and/or
date information indicating a time and/or date that the user input
was acquired; E) adding, based on the user input, a representation
of at least a portion of the incident to the displayed at least one
input image to thereby generate a marked-up digital image; F)
electronically transmitting and/or electronically storing
information relating to the marked-up digital image so as to
document the incident with respect to the geographic area; and G)
electronically transmitting and/or electronically storing the time
and/or date information in association with the information
relating to the marked-up digital image so as to document at least
generally when the representation of the at least a portion of the
incident was created.
Description
CROSS-REFERENCES TO RELATED APPLICATIONS
[0001] This application claims a priority benefit, under 35 U.S.C. § 119(e), to U.S. provisional patent application Ser. No. 61/166,385, entitled "Geo-Referenced Electronic Drawing Application for Documenting and Reporting Events," filed on Apr. 3, 2009 under attorney docket no. D0687.70030US00.
[0002] This application also claims a priority benefit, under 35 U.S.C. § 119(e), to U.S. provisional patent application Ser. No. 61/166,392, entitled "Data Acquisition System for and Method of Analyzing Vehicle Data for Generating an Electronic Representation of Vehicle Operations," filed on Apr. 3, 2009 under attorney docket no. D0687.70032US00.
[0003] Each of the above-identified applications is incorporated
herein by reference.
BACKGROUND
[0004] In any business setting, incidents that are not part of the
standard business practice may take place and cause interruption to
the business operation. Such incidents can potentially reduce the
quality of the services or products of the business, and sometimes
may impose civil or even criminal liabilities on the business. For
any given business, the particular types of incidents that are
disruptive may depend on the nature of the business. For example,
in field service applications, incidents to be reported may include
personal injury events, vehicle accidents, and/or any type of
property damage event that may occur in the field.
[0005] Systems currently exist for reporting and managing certain
incidents. Using the example of vehicle accidents, upon arrival at
the scene, a police officer or other investigator usually fills out
a paper accident report describing the accident scene in detail. As
part of this report,
the police officer or other investigator may attempt to draw a
sketch of the accident scene on a diagram of the road, which is to
be submitted with the paper accident report. However, a drawback of
these paper-based reports, which may be handwritten and may include
hand sketches, is that the content thereof may be inconsistent,
sloppy, illegible, inaccurate, and/or incomplete. As a result,
incidents, such as vehicle accidents, may be poorly documented.
Once created, the accident reports are distributed to responsible
entities for review, such as to accident investigation companies,
law enforcement agencies, insurance companies, and any supervisory
and/or management personnel. Similar processes may exist with
respect to handling personal injury reports and property damage
reports.
SUMMARY
[0006] Applicants have recognized and appreciated that in
conventional reporting systems, a major issue is the distribution
of reports and tracking of the progress of the reviews to ensure
timely resolution of the events. Depending on the types of events
and other factors, different reports may have to be reviewed by
different entities. The existence of multiple review routing paths
can be rather confusing, making it difficult to ensure that the
paper report is routed to the right entities in the right order.
Moreover, paper reports may be misplaced or lost during transit to
the different entities and the exact status of reports may be hard
to determine. Further, a drawback of conventional reporting systems
is that reports, and in particular paper reports, may not be in a
form that is easy to retrieve, for example, for historical
reference.
[0007] Another concern regarding conventional reporting systems is
the lack of effective control over the access to the reports.
Reports may contain sensitive or confidential information that
should be viewed only by authorized entities. The necessary access
control, however, can be difficult to implement or enforce due to
the lack of effective measures to prevent unauthorized access to
the documents or other factors such as distribution errors.
[0008] Therefore, Applicants have recognized that a need exists for
improved ways of creating, distributing, and/or retrieving reports,
such as, but not limited to, personal injury reports, vehicle
accident reports, any types of damage reports, and the like.
[0009] In view of the foregoing, various embodiments of the present
invention are directed to methods, apparatus, and systems for
documenting events via geo-referenced electronic drawings. With
respect to incidents, such as property damage and personal injury,
that may be reported in field service applications, in exemplary
embodiments one or more drawings may be provided that are
referenced to a geographic location and/or that in some way
indicate (to scale) the actual environment in which incidents have
occurred. In various aspects, drawings may be provided to scale,
include accurate directional and positional information, and/or
include representations of various environmental landmarks (e.g.,
trees, buildings, poles, fire hydrants, barriers, other structures,
etc.). Examples of reports that may include one or more
geo-referenced electronic drawings according to various inventive
embodiments disclosed herein include, but are not limited to,
personal injury reports, vehicle accident reports, and any types of
damage reports.
[0010] In sum, one embodiment described herein is directed to an
apparatus for documenting an incident at an incident site. The
apparatus comprises a communication interface; a display device; at
least one user input device; a memory to store processor-executable
instructions; and a processing unit coupled to the communication
interface, the display device, the at least one user input device,
and the memory. Upon execution of the processor-executable
instructions by the processing unit, the processing unit: controls
the communication interface to electronically receive source data
representing at least one input image of a geographic area
including the incident site; controls the display device to display
at least a portion of the at least one input image; acquires user
input from the at least one user input device to provide a
representation of at least a portion of the incident on the
displayed image; automatically acquires time and/or date
information indicating a time and/or date that the user input was
acquired; generates a marked-up digital image including the
representation of at least a portion of the incident based on the
user input; further controls the communication interface and/or the
memory to electronically transmit and/or electronically store
information relating to the marked-up digital image so as to
document the incident with respect to the geographic area; and
further controls the communication interface and/or the memory to
electronically transmit and/or electronically store the time and/or
date information in association with the information relating to
the marked-up digital image so as to document when the
representation of the at least a portion of the incident was
created.
[0011] Another embodiment is directed to a method for documenting
an incident at an incident site. The method comprises: A)
electronically receiving source data representing at least one
input image of a geographic area including the incident site; B)
processing the source data so as to display at least a portion of
the at least one input image on a display device; C) adding to the
at least a portion of the at least one input image, based on user
input received via at least one user input device associated with
the display device, a representation of at least a portion of the
incident to thereby generate a marked-up digital image; D)
automatically acquiring time and/or date information indicating a
time and/or date that the user input was acquired; E)
electronically transmitting and/or electronically storing
information relating to the marked-up digital image so as to
document the incident with respect to the geographic area; and F)
electronically transmitting and/or electronically storing the time
and/or date information in association with the information
relating to the marked-up digital image so as to document when the
representation of the at least a portion of the incident was
created.
[0012] A further embodiment is directed to at least one
computer-readable medium encoded with instructions that, when
executed on at least one processing unit, perform a method for
documenting an incident at an incident site. The method comprises:
A) electronically receiving source data representing at least one
input image of a geographic area including the incident site; B)
processing the source data so as to display at least a portion of
the at least one input image on a display device; C) receiving user
input via at least one user input device associated with the
display device; D) automatically acquiring time and/or date
information indicating a time and/or date that the user input was
acquired; E) adding, based on the user input, a representation of
at least a portion of the incident to the displayed at least one
input image to thereby generate a marked-up digital image; F)
electronically transmitting and/or electronically storing
information relating to the marked-up digital image so as to
document the incident with respect to the geographic area; and G)
electronically transmitting and/or electronically storing the time
and/or date information in association with the information
relating to the marked-up digital image so as to document at least
generally when the representation of the at least a portion of the
incident was created.
[0013] Another embodiment is directed to an apparatus for
documenting an incident at an incident site. The apparatus
comprises a communication interface; a display device; at least one
user input device; a memory to store processor-executable
instructions; and a processing unit coupled to the communication
interface, the display device, the at least one user input device,
and the memory. Upon execution of the processor-executable
instructions by the processing unit, the processing unit: controls
the communication interface to electronically receive source data
representing at least one input image of a geographic area
including the incident site; controls the display device to display
at least a portion of the at least one input image; acquires first
user input from the at least one user input device to provide a
first representation of at least a portion of the incident at a
first time on the at least one input image; generates a first
marked-up digital image including the first representation based on
the first user input; acquires second user input from the at least
one user input device to provide a second representation of at
least a portion of the incident at a second time on the at least
one input image; generates a second marked-up digital image
including the second representation based on the second user input;
and further controls the communication interface and/or the memory
to electronically transmit and/or electronically store information
relating to the first and second marked-up digital images so as to
document the incident at different times with respect to the
geographic area.
[0014] A further embodiment is directed to a method for documenting
an incident at an incident site. The method comprises: A) receiving
source data representing at least one input image of a geographic
area including the incident site; B) processing the source data so
as to display at least a portion of the at least one input image on
a display device; C) receiving first user input via at least one
user input device associated with the display device; D) processing
the first user input so as to display, on the display device, a
first marked-up digital image including a first representation of
at least a portion of the incident at a first time on the at least
one input image; E) receiving second user input via the at least
one user input device; F) processing the second user input so as to
display, on the display device, a second marked-up digital image
including a second representation of at least a portion of the
incident at a second time on the at least one input image; and G)
electronically transmitting and/or electronically storing
information relating to the first and second marked-up digital
images so as to document the incident at different times with
respect to the geographic area.
[0015] Another embodiment is directed to at least one
computer-readable medium encoded with instructions that, when
executed on at least one processing unit, perform a method for
documenting an incident at an incident site. The method comprises:
A) receiving source data representing at least one input image of a
geographic area including the incident site; B) processing the
source data so as to display at least a portion of the at least one
input image on a display device; C) receiving first user input via
at least one user input device associated with the display device;
D) processing the first user input so as to display, on the display
device, a first marked-up digital image including a first
representation of at least a portion of the incident at a first
time on the at least one input image; E) receiving second user
input via the at least one user input device; F) processing the
second user input so as to display, on the display device, a second
marked-up digital image including a second representation of at
least a portion of the incident at a second time on the at least
one input image; and G) electronically transmitting and/or
electronically storing information relating to the first and second
marked-up digital images so as to document the incident at
different times with respect to the geographic area.
[0016] The following U.S. published applications are hereby
incorporated herein by reference:
[0017] U.S. publication no. 2008-0228294-A1, published Sep. 18,
2008, filed Mar. 13, 2007, and entitled "Marking System and Method
With Location and/or Time Tracking;"
[0018] U.S. publication no. 2008-0245299-A1, published Oct. 9,
2008, filed Apr. 4, 2007, and entitled "Marking System and
Method;"
[0019] U.S. publication no. 2009-0013928-A1, published Jan. 15,
2009, filed Sep. 24, 2008, and entitled "Marking System and
Method;"
[0020] U.S. publication no. 2009-0202101-A1, published Aug. 13,
2009, filed Feb. 12, 2008, and entitled "Electronic Manifest of
Underground Facility Locate Marks;"
[0021] U.S. publication no. 2009-0202110-A1, published Aug. 13,
2009, filed Sep. 11, 2008, and entitled "Electronic Manifest of
Underground Facility Locate Marks;"
[0022] U.S. publication no. 2009-0201311-A1, published Aug. 13,
2009, filed Jan. 30, 2009, and entitled "Electronic Manifest of
Underground Facility Locate Marks;"
[0023] U.S. publication no. 2009-0202111-A1, published Aug. 13,
2009, filed Jan. 30, 2009, and entitled "Electronic Manifest of
Underground Facility Locate Marks;"
[0024] U.S. publication no. 2009-0204625-A1, published Aug. 13,
2009, filed Feb. 5, 2009, and entitled "Electronic Manifest of
Underground Facility Locate Operation;"
[0025] U.S. publication no. 2009-0204466-A1, published Aug. 13,
2009, filed Sep. 4, 2008, and entitled "Ticket Approval System For
and Method of Performing Quality Control In Field Service
Applications;"
[0026] U.S. publication no. 2009-0207019-A1, published Aug. 20,
2009, filed Apr. 30, 2009, and entitled "Ticket Approval System For
and Method of Performing Quality Control In Field Service
Applications;"
[0027] U.S. publication no. 2009-0210284-A1, published Aug. 20,
2009, filed Apr. 30, 2009, and entitled "Ticket Approval System For
and Method of Performing Quality Control In Field Service
Applications;"
[0028] U.S. publication no. 2009-0210297-A1, published Aug. 20,
2009, filed Apr. 30, 2009, and entitled "Ticket Approval System For
and Method of Performing Quality Control In Field Service
Applications;"
[0029] U.S. publication no. 2009-0210298-A1, published Aug. 20,
2009, filed Apr. 30, 2009, and entitled "Ticket Approval System For
and Method of Performing Quality Control In Field Service
Applications;"
[0030] U.S. publication no. 2009-0210285-A1, published Aug. 20,
2009, filed Apr. 30, 2009, and entitled "Ticket Approval System For
and Method of Performing Quality Control In Field Service
Applications;"
[0031] U.S. publication no. 2009-0204238-A1, published Aug. 13,
2009, filed Feb. 2, 2009, and entitled "Electronically Controlled
Marking Apparatus and Methods;"
[0032] U.S. publication no. 2009-0208642-A1, published Aug. 20,
2009, filed Feb. 2, 2009, and entitled "Marking Apparatus and
Methods For Creating an Electronic Record of Marking
Operations;"
[0033] U.S. publication no. 2009-0210098-A1, published Aug. 20,
2009, filed Feb. 2, 2009, and entitled "Marking Apparatus and
Methods For Creating an Electronic Record of Marking Apparatus
Operations;"
[0034] U.S. publication no. 2009-0201178-A1, published Aug. 13,
2009, filed Feb. 2, 2009, and entitled "Methods For Evaluating
Operation of Marking Apparatus;"
[0035] U.S. publication no. 2009-0202112-A1, published Aug. 13,
2009, filed Feb. 11, 2009, and entitled "Searchable Electronic
Records of Underground Facility Locate Marking Operations;"
[0036] U.S. publication no. 2009-0204614-A1, published Aug. 13,
2009, filed Feb. 11, 2009, and entitled "Searchable Electronic
Records of Underground Facility Locate Marking Operations;"
[0037] U.S. publication no. 2009-0238414-A1, published Sep. 24,
2009, filed Mar. 18, 2008, and entitled "Virtual White Lines for
Delimiting Planned Excavation Sites;"
[0038] U.S. publication no. 2009-0241045-A1, published Sep. 24,
2009, filed Sep. 26, 2008, and entitled "Virtual White Lines for
Delimiting Planned Excavation Sites;"
[0039] U.S. publication no. 2009-0238415-A1, published Sep. 24,
2009, filed Sep. 26, 2008, and entitled "Virtual White Lines for
Delimiting Planned Excavation Sites;"
[0040] U.S. publication no. 2009-0241046-A1, published Sep. 24,
2009, filed Jan. 16, 2009, and entitled "Virtual White Lines for
Delimiting Planned Excavation Sites;"
[0041] U.S. publication no. 2009-0238416-A1, published Sep. 24,
2009, filed Jan. 16, 2009, and entitled "Virtual White Lines for
Delimiting Planned Excavation Sites;"
[0042] U.S. publication no. 2009-0237408-A1, published Sep. 24,
2009, filed Jan. 16, 2009, and entitled "Virtual White Lines for
Delimiting Planned Excavation Sites;"
[0043] U.S. publication no. 2009-0238417-A1, published Sep. 24,
2009, filed Feb. 6, 2009, and entitled "Virtual White Lines for
Indicating Planned Excavation Sites on Electronic Images;"
[0044] U.S. publication no. 2009-0327024-A1, published Dec. 31,
2009, filed Jun. 26, 2009, and entitled "Methods and Apparatus for
Quality Assessment of a Field Service Operation;"
[0045] U.S. publication no. 2010-0010862-A1, published Jan. 14,
2010, filed Aug. 7, 2009, and entitled "Methods and Apparatus for
Quality Assessment of a Field Service Operation Based on Geographic
Location;"
[0046] U.S. publication no. 2010-0010863-A1, published Jan. 14,
2010, filed Aug. 7, 2009, and entitled "Methods and Apparatus for
Quality Assessment of a Field Service Operation Based on Multiple
Scoring Categories;"
[0047] U.S. publication no. 2010-0010882-A1, published Jan. 14,
2010, filed Aug. 7, 2009, and entitled "Methods and Apparatus for
Quality Assessment of a Field Service Operation Based on Dynamic
Assessment Parameters;" and
[0048] U.S. publication no. 2010-0010883-A1, published Jan. 14,
2010, filed Aug. 7, 2009, and entitled "Methods and Apparatus for
Facilitating a Quality Assessment of a Field Service Operation
Based on Multiple Quality Assessment Criteria."
[0049] It should be appreciated that all combinations of the
foregoing concepts and additional concepts discussed in greater
detail below (provided such concepts are not mutually inconsistent)
are contemplated as being part of the inventive subject matter
disclosed herein. In particular, all combinations of claimed
subject matter appearing at the end of this disclosure are
contemplated as being part of the inventive subject matter
disclosed herein. It should also be appreciated that terminology
explicitly employed herein that also may appear in any disclosure
incorporated by reference should be accorded a meaning most
consistent with the particular concepts disclosed herein.
BRIEF DESCRIPTION OF THE DRAWINGS
[0050] The drawings are not necessarily to scale, emphasis instead
generally being placed upon illustrating the principles of the
invention.
[0051] FIG. 1 illustrates a functional block diagram of a
geo-referenced electronic drawing application for documenting and
reporting events, according to the present disclosure;
[0052] FIG. 2 illustrates an example of a drawing tool GUI of the
geo-referenced electronic drawing application, according to the
present disclosure;
[0053] FIG. 3 illustrates an example of a series of geo-referenced
drawings that are generated using the geo-referenced electronic
drawing application, according to the present disclosure;
[0054] FIG. 4 illustrates an example of a report that is generated
using the geo-referenced electronic drawing application and that
includes a geo-referenced drawing, according to the present
disclosure;
[0055] FIG. 5 illustrates a flow diagram of an example of a method
of operation of the geo-referenced electronic drawing application,
according to the present disclosure;
[0056] FIG. 6 illustrates a functional block diagram of a networked
system that includes the geo-referenced electronic drawing
application for documenting and reporting events, according to the
present disclosure;
[0057] FIG. 7 shows a map, representing an exemplary input
image;
[0058] FIG. 8 shows a construction/engineering drawing,
representing an exemplary input image;
[0059] FIG. 9 shows a land survey map, representing an exemplary
input image;
[0060] FIG. 10 shows a grid, overlaid on the
construction/engineering drawing of FIG. 8, representing an
exemplary input image;
[0061] FIG. 11 shows a street level image, representing an
exemplary input image;
[0062] FIG. 12 shows the drawing tool GUI of FIG. 2 displaying a
layer directory pane that facilitates the manipulation of
layers;
[0063] FIG. 13 shows the drawing tool GUI of FIG. 2 displaying an
animation controls window that facilitates generation of an
animated sequence;
[0064] FIG. 14 shows an illustrative computer that may be used at
least in part to implement the geo-referenced electronic drawing
application in accordance with some embodiments; and
[0065] FIG. 15 shows an example of an input image constructed from
bare data.
DETAILED DESCRIPTION
[0066] Following below are more detailed descriptions of various
concepts related to, and embodiments of, inventive methods,
apparatus and systems according to the present disclosure for
facilitating documentation of events (e.g., an incident, such as a
motor vehicle accident) via one or more geo-referenced electronic
drawings. It should be appreciated that various concepts introduced
above and discussed in greater detail below may be implemented in
any of numerous ways, as the disclosed concepts are not limited to
any particular manner of implementation. Examples of specific
implementations and applications are provided primarily for
illustrative purposes.
[0067] A geo-referenced electronic drawing application for
documenting and reporting events is described herein. The
geo-referenced electronic drawing application may provide a
mechanism for importing a geo-referenced image that may be marked
up with symbols and/or any other markings for indicating the
details of an event, such as a vehicle accident. The geo-referenced
image may include data associated therewith (e.g., embedded
metadata) that allows identification of locational information
(e.g., locational coordinates) for any point or region on the
image. Further, the geo-referenced electronic drawing application
may provide a mechanism for generating a report of the event that
includes the marked-up geo-referenced image. A networked system
that includes the geo-referenced electronic drawing application is
also described.
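As an illustration of how locational coordinates might be recovered for any point on a geo-referenced image, consider the following minimal sketch in Python. It assumes world-file-style affine metadata accompanies the image; the class and field names are illustrative assumptions, not the patent's own data model.

from dataclasses import dataclass

@dataclass
class GeoReference:
    a: float  # x pixel size (e.g., degrees of longitude per pixel)
    b: float  # row rotation term (0 for a north-up image)
    c: float  # x coordinate of the center of the upper-left pixel
    d: float  # column rotation term (0 for a north-up image)
    e: float  # y pixel size (negative, since rows grow downward)
    f: float  # y coordinate of the center of the upper-left pixel

def pixel_to_geo(ref: GeoReference, col: int, row: int) -> tuple[float, float]:
    """Map an image pixel (col, row) to geographic (x, y) coordinates."""
    x = ref.a * col + ref.b * row + ref.c
    y = ref.d * col + ref.e * row + ref.f
    return (x, y)

# Example: a north-up aerial image at roughly 1e-5 degrees per pixel.
ref = GeoReference(a=1e-5, b=0.0, c=-80.0894, d=0.0, e=-1e-5, f=26.8467)
lon, lat = pixel_to_geo(ref, col=512, row=384)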
[0068] It should be appreciated that while the imported or
otherwise acquired image is described herein as "geo-referenced,"
and the drawing application is likewise described as
geo-referenced, the image need not be geo-referenced unless
required for a particular implementation, and the drawing
application may be used with non-geo-referenced images. In many
instances, an image that is not geo-referenced may be suitably
used. Examples of non-geo-referenced images that may be suitable in
various scenarios include: a stock or generic image of an
intersection, a stock or generic image of a room, a stock or
generic image of a street, and a photograph taken during
investigation of an incident or generation of a report on the
incident. Of course, these are merely exemplary, as many other
types of non-geo-referenced images are possible.
[0069] Further, while certain embodiments may be described in the
context of generating a vehicle accident report, this is exemplary
only. The geo-referenced electronic drawing application described
herein is suitable for generating any type of report in which a
geo-referenced image (or other image) may be useful, such as, but
not limited to, personal injury reports, vehicle accident reports,
any types of property damage reports, and the like. For example,
the methods and apparatus described herein may be useful for
providing reports that include images in various field service
applications, such as, but not limited to, those of underground
facilities locate companies, excavation companies, landscaping
companies, tree care and removal companies, utility installation
and repair companies, and the like.
[0070] The geo-referenced electronic drawing application described
herein may provide the ability to electronically mark up real world
geo-referenced images with symbols, shapes, and/or lines in order
to provide improved and consistent accuracy with respect to
drawings that support incident reports.
[0071] In addition, the geo-referenced electronic drawing
application described herein may provide the ability to
electronically mark up real world geo-referenced images with
symbols, shapes, and/or lines to scale, again providing improved
and consistent accuracy with respect to drawings that support
incident reports.
[0072] Further, the geo-referenced electronic drawing application
may provide a standard symbols library, thereby providing
standardization with respect to drawings that support incident
reports.
[0073] Networked systems that include the geo-referenced electronic
drawing application described herein may provide improved
distribution, tracking, and auditing of reports among entities, and
may provide improved control over access to reports.
[0074] Referring to FIG. 1, a functional block diagram of a
geo-referenced electronic drawing application 100 for documenting
and reporting events is presented. Geo-referenced electronic
drawing application 100 may be a standalone and/or a web-based
software application that allows a user to import a geo-referenced
image and then mark up the image with symbols and/or any other
markings for indicating the details of an event, such as a vehicle
accident.
[0075] Geo-referenced electronic drawing application 100 may be
executed by a processing unit 110 and stored in memory 112.
Processing unit 110 may be any standard microcontroller or
microprocessor device that is capable of executing program
instructions of geo-referenced electronic drawing application 100.
Memory 112 may be any standard data storage medium. In one example,
a symbols library 114, a collection of input images 116, certain
geo-location data 118, and timestamp data 120, may be stored in
memory 112.
[0076] Timestamp data 120 may include calendar date and/or time
information. Timestamp data 120 may originate from, for example,
the computing device on which geo-referenced electronic drawing
application 100 is installed, any other computing device, and/or
manual entry by the user.
[0077] Location stamp data 150 may include location information
such as a city and state, zip code, or geographic coordinates.
Location stamp data 150 may originate from, for example, the
computing device on which geo-referenced electronic drawing
application 100 is installed, any other computing device, and/or
manual entry by the user.
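A minimal sketch of how timestamp data 120 and location stamp data 150 might be acquired automatically is shown below; the read_gps stub is a hypothetical stand-in for a query to the device's positioning hardware.

from datetime import datetime, timezone

def read_gps() -> tuple[float, float]:
    # Stub standing in for a query to the device's GPS receiver;
    # a real implementation could also fall back to manual entry.
    return (26.8467, -80.0894)

def acquire_timestamp() -> str:
    # Calendar date and time originating from the computing device's own clock.
    return datetime.now(timezone.utc).isoformat()

def acquire_location_stamp() -> dict:
    lat, lon = read_gps()
    return {"latitude": lat, "longitude": lon}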
[0078] Symbols library 114, input images 116, geo-location data
118, timestamp data 120 and location stamp data 150 support the
functions of a drawing tool graphical user interface (GUI) 122 of
geo-referenced electronic drawing application 100. Drawing tool GUI
122 is suitable for presenting on the display of any computing
device, such as a computing device 140. By reading geographic
location information from geo-location data 118 and/or by
processing geographic location information that may be manually
entered, processing unit 110 retrieves a certain input image 116
that corresponds to the geographic location information and
displays the input image 116 in a window of drawing tool GUI 122.
Geographic location information may be, for example, a physical
address, latitude and longitude coordinates, and/or any global
positioning system (GPS) data.
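One way the retrieval described above might work is sketched below: each stored input image 116 carries a geographic extent, and the image whose extent contains the specified coordinates is selected. The record fields are assumptions for illustration, not the patent's storage format.

from dataclasses import dataclass

@dataclass
class StoredImage:
    path: str
    west: float
    south: float
    east: float
    north: float  # bounding box of the image's geographic extent

def select_input_image(images: list[StoredImage], lat: float, lon: float):
    """Return the first stored image whose extent contains (lat, lon)."""
    for img in images:
        if img.west <= lon <= img.east and img.south <= lat <= img.north:
            return img
    return None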
[0079] For purposes of the present disclosure, an input image 116
is any image represented by source data that is electronically
processed (e.g., the source data is in a computer-readable format)
to display the image on a display device. An input image 116 may
include any of a variety of paper/tangible image sources that are
scanned (e.g., via an electronic scanner) or otherwise converted so
as to create source data (e.g., in various formats such as XML,
PDF, JPG, BMP, etc.) that can be processed to display the input
image 116. An input image 116 also may include an image that
originates as source data or an electronic file without necessarily
having a corresponding paper/tangible copy of the image (e.g., an
image of a "real-world" scene acquired by a digital still frame or
video camera or other image acquisition device, in which the source
data, at least in part, represents pixel information from the image
acquisition device).
[0080] In some exemplary implementations, input images 116
according to the present disclosure may be created, provided,
and/or processed by a geographic information system (GIS) that
captures, stores, analyzes, manages and presents data referring to
(or linked to) location, such that the source data representing the
input image 116 includes pixel information from an image
acquisition device (corresponding to an acquired "real world" scene
or representation thereof), and/or spatial/geographic information
("geo-encoded information").
[0081] In some exemplary implementations, one or more input images
116 may be stored in local memory 112 of the computing device 140
and/or retrieved from the optional remote computer (e.g., via the
communication interface 124) and then stored in local memory.
Various information may be derived from the one or more input
images for display (e.g., all or a portion of the input image,
metadata associated with the input image, etc.).
[0082] In view of the foregoing, various examples of input images
and source data representing input images 116 according to the
present disclosure, to which the inventive concepts disclosed
herein may be applied, include but are not limited to: [0083]
Various maps, such as street/road maps (e.g., map 700 of FIG. 7),
topographical maps, military maps, parcel maps, tax maps, town and
county planning maps, virtual maps, etc. (such maps may or may not
include geo-encoded information). Such maps may be scaled to a
level appropriate for the application; [0084] Architectural,
construction and/or engineering drawings and virtual renditions of
a space/geographic area (including "as built" or post-construction
drawings). Such drawings/renditions may be useful, e.g., in
property damage report applications or for documenting
construction, landscaping or maintenance. An exemplary
construction/engineering drawing 800 is shown in FIG. 8; [0085]
Land surveys, i.e., plots produced at ground level using references
to known points such as the center line of a street to plot the
metes and bounds and related location data regarding a building,
parcel, utility, roadway, or other object or installation. Land
survey images may be useful, e.g., in vehicular incident report
applications or police report applications. FIG. 9 shows an
exemplary land survey map 900; [0086] A grid (a pattern of
horizontal and vertical lines used as a reference) to provide
representational geographic information (which may be used "as is"
for an input image or as an overlay for an acquired "real world"
scene, drawing, map, etc.). An exemplary grid 1000, overlaid on
construction/engineering drawing 800, is shown in FIG. 10. It
should be appreciated that the grid 1000 may itself serve as the
input image (i.e., a "bare" grid), or be used together with another
underlying input image; [0087] "Bare" data representing geo-encoded
information (geographical data points) and not necessarily derived
from an acquired/captured real-world scene (e.g., not pixel
information from a digital camera or other digital image
acquisition device). Such "bare" data may nonetheless be used to
construct a displayed input image, and may be in any of a variety
of computer-readable formats, including XML. [0088] One example of
bare data is geo-referenced data relating to municipal assets.
Databases exist that include geo-location information (e.g.,
latitude and longitude coordinates) and attribute information
(e.g., sign type) for municipal assets such as signs, crash
attenuators, parking meters, barricades, and guardrails. Such a
database may be used in connection with an asset management system,
such as the Infor EAM (Enterprise Asset Management) system by Infor
Global Solutions of Alpharetta, Ga., to manage municipal assets.
Using bare data relating to municipal assets, a geo-encoded image
may be constructed that includes representations of municipal
assets at their relative locations. In particular, the attribute
information may be used to select a symbol representing the asset
in the image, and the geo-location information may be used to
determine the placement of the symbol in the image (a code sketch following this list illustrates this mapping). [0089] Other
examples of bare data are geo-referenced data relating to weather
and geo-referenced data relating to traffic. Both weather and
traffic data are available from various sources in Geographic
Information System (GIS) format. For example, a set of points,
lines, and/or regions in a spatial database may represent locations
or areas having a particular traffic attribute (e.g., heavy
traffic, construction, moderate congestion, minor stall, normal
speeds) or a particular weather attribute (e.g., heavy snow, rain,
hail, fog, lightning, clear skies). The data in the database may be
dynamic, such that the points, lines, and/or regions and
corresponding attributes change as the traffic and weather
conditions change. Using bare data relating to traffic and/or
weather, a geo-encoded image may be constructed that includes
representations of traffic and/or weather conditions at their
relative locations. In particular, the attribute information may be
used to select a symbol, pattern, and/or color representing the
traffic or weather condition in the image, and the geo-location
information may be used to determine the placement of the symbol,
pattern and/or color in the image. An example of a source for GIS
traffic data is NAVIGATOR, the Georgia Department of
Transportation's Intelligent Transportation System (ITS). GIS
weather data is available from the National Weather Service (NWS).
Such weather data may be provided as shapefiles, a format for
storing geographic information and associated attribute
information. Shapefiles may include information relating to weather
warnings (e.g., tornado, severe thunderstorm, and flash flood
warnings) and the like. [0090] FIG. 15 shows an example of an input
image 1500 constructed from bare data. In particular, input image
1500 includes a representation of a street sign 1510,
representations of traffic conditions 1512 and 1514, and a
representation of a weather condition 1516. The location of the
street sign representation 1510 and traffic condition
representations 1512 and 1514 may correspond to the actual
locations of the street signs and traffic conditions in the region
shown in the input image 1500. The location of the representation
of the weather condition 1516 may be arbitrarily selected, or
selected to be in a corner of the input image 1500, as the
representation may indicate that the weather condition corresponds
generally to the entire region shown in the input image 1500. Each
of the representations shown in FIG. 15 is based on geo-location
information (e.g., latitude and longitude coordinates) and
attribute information (e.g., a sign type, traffic conditions, and a
weather condition). In the example shown, the type of street sign
1510 is a stop sign, the traffic conditions 1512 and 1514 are
"construction" and "light traffic," and the weather condition 1516
is lightning; and [0091] Photographic renderings/images, including
street level (see e.g., street level image 1100 of FIG. 11),
topographical, satellite, and aerial photographic
renderings/images, any of which may be updated periodically to
capture changes in a given geographic area over time (e.g.,
seasonal changes such as foliage density, which may variably impact
the ability to see some aspects of the image). Such photographic
renderings/images may be useful, e.g., in connection with preparing
property damage reports, vehicular incident reports, police
reports, etc.
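The following minimal sketch makes the bare-data example of paragraph [0088] concrete: attribute information selects a symbol, and geo-location information determines its placement in the image. The record schema, symbol table, and geo_to_pixel mapping are hypothetical assumptions for illustration.

SYMBOLS = {"stop_sign": "STOP", "parking_meter": "P", "guardrail": "="}

def place_assets(assets, geo_to_pixel):
    """Turn bare geo-encoded records into (symbol, col, row) placements.

    assets: iterable of dicts with 'type', 'lat', 'lon' keys (assumed schema).
    geo_to_pixel: callable mapping (lat, lon) to image (col, row).
    """
    placements = []
    for asset in assets:
        symbol = SYMBOLS.get(asset["type"], "?")             # attribute -> symbol
        col, row = geo_to_pixel(asset["lat"], asset["lon"])  # location -> placement
        placements.append((symbol, col, row))
    return placements

# Usage with a simple linear mapping for a small region:
placements = place_assets(
    [{"type": "stop_sign", "lat": 26.8467, "lon": -80.0894}],
    geo_to_pixel=lambda lat, lon: (int((lon + 80.10) * 1e5), int((26.85 - lat) * 1e5)),
)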
[0092] It should also be appreciated that source data representing
an input image 116 may be compiled from multiple data/information
sources; for example, any two or more of the examples provided
above for input images and source data representing input images
116, or any two or more other data sources, can provide information
that can be combined or integrated to form source data that is
electronically processed to display an image on a display
device.
[0093] Referring to FIG. 2, an example of a drawing tool GUI 122 of
geo-referenced electronic drawing application 100 is presented. In
the case of a web-based application, drawing tool GUI 122 may be
implemented, for example, by a web browser presented via any
networked computing device, such as computing device 140 of
FIG. 1. In the case of a standalone application, drawing tool GUI
122 may be implemented, for example, by a GUI window presented via
any computing device.
[0094] Drawing tool GUI 122 may present a certain input image 116
that corresponds to specified geographic location information. For
example, location information from geo-location data 118 may be
automatically read into an address field 210 and/or a geo-location
data field 212. Alternatively, location information may be manually
entered in address field 210 and/or geo-location data field 212. In
one example, input image 116 may be an aerial image that
corresponds to the geographic location information. Overlaying
input image 116 may be an image scale 214. Input image 116 is read
into drawing tool GUI 122 and may be oriented in the proper manner
with respect to directional heading (i.e., north, south, east, and
west).
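A minimal sketch of the north-up orientation step follows, using the Pillow imaging library and assuming the image metadata carries a bearing (clockwise degrees from north of the image's "up" direction); the bearing field is an assumption for illustration.

from PIL import Image

def orient_north_up(path: str, bearing_degrees: float) -> Image.Image:
    """Rotate an input image so that north points up before display."""
    img = Image.open(path)
    # Pillow's rotate() is counterclockwise for positive angles, so a
    # clockwise bearing is undone with a negative rotation.
    return img.rotate(-bearing_degrees, expand=True)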
[0095] Drawing tool GUI 122 may also include various palettes,
toolbars, or other interfaces that enable the user to manipulate
(e.g., zoom in, zoom out) and/or mark up input image 116. For
example, drawing tool GUI 122 may include a drawing toolbar 216
that may include a sketching palette as well as a symbols palette.
The sketching palette portion of drawing toolbar 216 may provide
standard drawing tools that allow a user to draw certain shapes
(e.g., a polygon, a rectangle, a circle, a line) atop input image
116. The symbols palette portion of drawing toolbar 216 provides a
collection of any symbols that may be useful for depicting the
event of interest, such as a vehicle accident. The source of these
symbols may be symbols library 114. For example, symbols library
114 may include, but is not limited to, a collection of car
symbols, truck symbols, other vehicle symbols (e.g., emergency
vehicles, buses, farm equipment, 2-wheel vehicles, etc.), landmark
symbols (e.g., fire hydrants, trees, fences, poles, crosswalks,
various barriers, etc.), symbols of signs (e.g., standard road
signs, any other signs, etc.), symbols of people (e.g.,
pedestrians), symbols of animals, and the like. By use of the
elements of drawing toolbar 216, a user may mark up input image 116
in a manner that depicts, for example, the vehicle accident scene.
In one example and referring to FIG. 2, a vehicle collision is
depicted by a vehicle #1 and a vehicle #2 overlaid on input image
116. The symbols for vehicle #1 and vehicle #2 are selected from
the symbols palette portion of drawing toolbar 216.
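A symbols library of this kind might be organized, purely for
illustration, as categories mapping to symbol names; the category
and symbol names below are hypothetical, and the actual contents of
symbols library 114 are not limited to them.

    # Illustrative categories and symbol names only.
    SYMBOLS_LIBRARY = {
        "cars": ["sedan", "coupe", "station wagon"],
        "trucks": ["pickup", "box truck", "tractor trailer"],
        "other vehicles": ["bus", "emergency vehicle", "farm equipment",
                           "2-wheel vehicle"],
        "landmarks": ["fire hydrant", "tree", "fence", "pole", "crosswalk",
                      "barrier"],
        "signs": ["stop", "yield", "speed limit"],
        "people": ["pedestrian"],
        "animals": ["deer", "dog"],
    }

    def palette_for(category: str) -> list:
        """Return the symbol names shown in one palette of drawing toolbar 216."""
        return SYMBOLS_LIBRARY.get(category, [])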
[0096] Optionally, the drawing tool GUI 122 may allow a user to
specify a confidence level for a selected symbol. For example, if a
user selects a symbol corresponding to a bus to be overlaid on
input image 116, the user may specify an associated confidence
level to indicate a degree of confidence that the observed vehicle
was a bus. The confidence level may be numeric, e.g., "25%," or
descriptive, e.g., "low." An indication of the confidence level or
a degree of uncertainty may be displayed adjacent the corresponding
symbol or may be integrated with the symbol itself. For example, a
question mark or the confidence level may be displayed on or near
the symbol. Additionally or alternatively, an indication of the
confidence level may be included in the text of a vehicle accident
report including the marked up input image.
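A minimal sketch of attaching a confidence level to a placed symbol
and deriving the on-image label described above follows; the
PlacedSymbol class, its fields, and the 50% threshold for showing a
question mark are illustrative assumptions only.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class PlacedSymbol:
        name: str                           # e.g., "bus"
        lat: float
        lon: float
        confidence: Optional[float] = None  # 0.0-1.0; None if unspecified

        def label(self) -> str:
            """Text rendered on or near the symbol to convey uncertainty."""
            if self.confidence is None:
                return self.name
            mark = "?" if self.confidence < 0.5 else ""
            return f"{self.name}{mark} ({self.confidence:.0%})"

    assert PlacedSymbol("bus", 39.53, -119.81, 0.25).label() == "bus? (25%)"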
[0097] The aforementioned palettes, toolbars, and/or symbols
library are described in the context of preparing a vehicle
accident report. However, this is exemplary only. The palettes,
toolbars, and/or symbols library of the geo-referenced electronic
drawing application of the present disclosure may be
industry-specific and/or incident type-specific. As a result, the
palettes, toolbars, and/or symbols library may be selectable by the
user depending on the application in which the geo-referenced
electronic drawing application is being used. In one example, with
respect to an incident involving tree damage and/or a tree damaging
a structure, the user may select palettes, toolbars, and/or symbols
that include trees and building rooflines that may be used for
marking up the geo-referenced image.
[0098] Additionally, geo-referenced electronic drawing application
100 may be designed to automatically render symbols to scale upon
the geo-referenced drawing according to the settings of scale 214.
This is one example of how geo-referenced electronic drawing
application 100 may provide consistent accuracy to drawings that
support incident reports. Further, the presence of a standard
symbols library, such as symbols library 114, is one example of how
geo-referenced electronic drawing application 100 provides
standardization to drawings that support incident reports.
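To make the scale-driven rendering concrete, the sketch below
converts a symbol's real-world footprint into on-screen pixels
using a meters-per-pixel value of the kind scale 214 might supply;
the function name and the numeric example are illustrative
assumptions.

    def symbol_size_px(length_m: float, width_m: float,
                       meters_per_pixel: float) -> tuple:
        """Convert a symbol's real-world footprint to on-screen pixels.

        meters_per_pixel stands in for the value behind scale 214: for
        example, a 4.5 m car on a 0.15 m/px aerial image renders 30 px long.
        """
        return (round(length_m / meters_per_pixel),
                round(width_m / meters_per_pixel))

    assert symbol_size_px(4.5, 1.8, 0.15) == (30, 12)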
[0099] The geo-referenced electronic drawing application 100 may be
configured to allow the viewing angle or perspective of the input
image 116 and/or representations thereon to be changed. For
example, a user may switch between an overhead view, a perspective
view, and a side view. This may be accomplished by correlating
corresponding points in two or more geo-referenced images, for
example. A symbol, such as a representation of a vehicle, or other
content-related marking added to an image may have
three-dimensional data associated therewith to enable the symbol to
be viewed from different angles. Thus, while a viewing angle or
perspective of an image may change, its content (e.g., a
representation of a vehicle accident and its surrounding) may
remain the same.
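As a simplified illustration of correlating corresponding points
between two geo-referenced images, the Python sketch below fits a
least-squares two-dimensional affine map to matched point pairs. A
full change of viewing angle would require a homography or a
three-dimensional model, so this is only a sketch under simplifying
assumptions, with hypothetical inputs.

    import numpy as np

    def fit_affine(src_pts, dst_pts):
        """Least-squares 2-D affine map from three or more matched points.

        src_pts and dst_pts are (N, 2) arrays of corresponding pixel
        coordinates in two geo-referenced views of the same scene.
        """
        src = np.asarray(src_pts, dtype=float)
        dst = np.asarray(dst_pts, dtype=float)
        ones = np.ones((src.shape[0], 1))
        A = np.hstack([src, ones])                   # rows are [x, y, 1]
        M, *_ = np.linalg.lstsq(A, dst, rcond=None)  # solve A @ M = dst
        return M                                     # shape (3, 2)

    def apply_affine(M, pts):
        """Map points from the source view into the destination view."""
        pts = np.asarray(pts, dtype=float)
        ones = np.ones((pts.shape[0], 1))
        return np.hstack([pts, ones]) @ M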
[0100] Further, the geo-referenced electronic drawing application
100 may be configured to allow the input image 116 to be manually
or automatically modified. For example, it may be desirable to
remove extraneous features, such as cars, from the input image 116.
The geo-referenced electronic drawing application 100 may include
shape or object recognition software that allows such features to
be identified and/or removed. One example of software capable of
recognizing features in an image, such as an aerial image, is
ENVI.RTM. image processing and analysis software by ITT Corporation
of White Plains, N.Y. Exemplary features that may be recognized
include vehicles, buildings, roads, bridges, rivers, lakes, and
fields. The geo-referenced electronic drawing application 100 may
be configured such that a value indicating a level of confidence
that an identified object corresponds to a particular feature may
optionally be displayed. Automatically identified features may be
automatically modified in the image in some manner. For example,
the features may be blurred or colored (e.g., white, black or to
resemble a color of one or more pixels adjacent the feature).
Additionally, or alternatively, the geo-referenced electronic
drawing application 100 may include drawing tools (e.g., an eraser
tool or copy and paste tool), that allow such features to be
removed, concealed, or otherwise modified after being visually
recognized by a user or automatically recognized by the
geo-referenced electronic drawing application 100 or associated
software.
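One minimal way to modify an automatically identified feature,
assuming the image is held as an H x W x C NumPy array and the
feature's bounding box is known from the recognition step, is to
recolor the box to resemble adjacent pixels, as described above.
The function below is an illustrative sketch only and is not the
API of the ENVI software mentioned above.

    import numpy as np

    def conceal_region(image, x0, y0, x1, y1):
        """Recolor a recognized feature's bounding box to resemble the
        pixels just above and below it, blending it into the surround."""
        out = image.copy()
        channels = image.shape[2]
        border = np.concatenate([
            image[max(y0 - 1, 0), x0:x1].reshape(-1, channels),
            image[min(y1, image.shape[0] - 1), x0:x1].reshape(-1, channels),
        ])
        out[y0:y1, x0:x1] = border.mean(axis=0).astype(image.dtype)
        return out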
[0101] Drawing toolbar 216 may also allow the user to add text
boxes that can be used to add textual content to input image 116.
In one example, a callout 218 may be one mechanism for entering and
displaying textual information about, in this example, the vehicle
collision.
[0102] Further, drawing tool GUI 122 may include a navigation
toolbar 220 by which the user may zoom or pan input image 116
(e.g., zoom in, zoom out, zoom to, pan, pan left, pan right, pan
up, pan down, etc.). Navigation toolbar 220 may additionally
include one or more buttons that enable user-drawn shapes to be
accentuated (e.g., grayscale, transparency, etc.). Additionally, a
set of scroll controls 222 may be provided in the image display
window that allows the user to scroll input image 116 north, south,
east, west, and so on with respect to real world directional
heading. In addition, the drawing application may be configured to
reposition the displayed image so that it is directionally aligned
with a direction of the display screen, based on an input from a
compass or other device indicative of an orientation of the display
screen in the environment.
[0103] Overlaying input image 116 may also be a timestamp 224
and/or a location stamp 250. Timestamp 224 may indicate the
creation date and/or time or a save date and/or time of a marked up
input image 116 or information used to generate the marked up input
image. Timestamp data 120 in memory 112 of FIG. 1 may be the source
of information of timestamp 224. Such data may be based on an
output of a local or remote timer, for example. Location stamp 250
may indicate the location where the marked up input image 116 or
information used to generate the marked up input image was saved.
Location data stored in memory 112 of FIG. 1 may be the source of
information of location stamp 250. Such data may be based on an
output of a GPS device, for example.
[0104] The timestamp 224 and location stamp 250 may be
automatically generated based, for example, on the output of a
timer device and GPS device as discussed above. Further, the
timestamp and location stamp may be difficult or impossible for a
user to modify. Thus, the timestamp and location stamp may be used
to verify that the marked-up input image with which they are
associated was created at an expected time and place, such as the
general or specific time and place where the vehicular accident or
other incident was investigated. If desired, time and/or location
data may be automatically acquired several times during the
creation of one or more marked-up digital images, and may be stored
in association with the images, to enable verification that the
user was present at the time and/or place of the investigation for
some duration of time.
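A minimal sketch of the automatic stamping described above follows,
assuming hypothetical gps_read and clock callables standing in for
the GPS device and timer of the host computing device.

    import time

    def acquire_stamp(gps_read, clock=time.time):
        """Pair a UTC time with a GPS fix; gps_read is a placeholder for
        whatever GPS device the host computing device provides."""
        lat, lon = gps_read()
        return {"utc": clock(), "lat": lat, "lon": lon}

    def stamp_session(gps_read, interval_s, count):
        """Collect several stamps over a session so that a user's presence
        at the site for some duration can later be verified."""
        stamps = []
        for _ in range(count):
            stamps.append(acquire_stamp(gps_read))
            time.sleep(interval_s)
        return stamps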
[0105] The ability to read in and electronically mark up real world
geo-referenced images, such as input images 116, with symbols,
shapes, and/or lines is one example of how geo-referenced
electronic drawing application 100 may provide improved and
consistent accuracy to drawings that support incident reports.
[0106] In some embodiments, the input image data and the mark up
data (e.g., the electronic representations of the vehicles,
landmarks and/or signs), may be displayed as separate "layers" of
the visual rendering, such that a viewer of the visual rendering
may turn on and turn off displayed data based on a categorization
of the displayed data. Respective layers may be enabled or disabled
for display in any of a variety of manners. According to one
exemplary implementation shown in FIG. 12, a "layer directory" or
"layer legend" pane 1200 may be rendered in the viewing window of
drawing tool GUI 122 described in connection with FIG. 2. The layer
directory pane 1200 may show all available layers, and allow a
viewer to select each available layer to be either displayed or
hidden, thus facilitating comparative viewing of layers. The layer
directory pane 1200 may be displayed by selecting a "display layer
directory pane" action item in the layers menu 1202.
[0107] In the example of FIG. 12, image information is categorized
generally under layer designation 1202 ("reference layer") and may
be independently enabled or disabled for display (e.g., hidden) by
selecting the corresponding check box. Similarly, information
available to be overlaid on the input image is categorized
generally under layer designation 1206 ("symbols layer") and may be
independently enabled or disabled for display by selecting the
corresponding check box.
[0108] The reference and symbols layers may have sub-categories
for sub-layers, such that each sub-layer may also be
selectively enabled or disabled for viewing by a viewer. For
example, under the general layer designation 1202 of "reference
layer," a "base image" sub-layer may be selected for display. The
base image sub-layer is merely one example of a sub-layer that may
be included under the "reference layer," as other sub-layers (e.g.,
"grid") are possible. Under the general layer designation 1206 of
"symbols layer," different symbol types that may be overlaid on the
input image may be categorized under different sub-layer
designations (e.g., designation 1208 for "cars layer;" designation
1212 for "trucks layer;" designation 1216 for "other vehicles
layer;" designation 1218 for "landmarks layer;" and designation
1220 for "signs layer"). In this manner, a viewer may be able to
display certain symbols information (e.g., concerning cars and
trucks), while hiding other symbols information (e.g., concerning
other vehicles, landmarks and signs).
[0109] Further, the various sub-layers may have further
sub-categories for sub-layers, such that particular features within
a sub-layer may also be selectively enabled or disabled for viewing
by a viewer. For example, the cars layer may include a designation
1210 for "car 1," and the truck layer may include a designation
1214 for "truck 1." Thus, information concerning the car 1222 ("car
1") and truck 1224 ("truck 1") involved in the accident can be
selected for display.
[0110] As shown in the example of FIG. 12, both the reference and
symbols layers are enabled for display. Under the reference layer,
the base image layer is enabled for display. Amongst the symbols
layer sub-layers, only the cars layer and the trucks layer are
enabled for display. Amongst these sub-layers, the further
sub-layers "car 1" and "truck 1" are enabled for display.
Accordingly, a base image is rendered in the viewing window of
drawing tool GUI 122, and only car 1222 and truck 1224 are rendered
thereon.
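The nested layer hierarchy described above lends itself to a simple
tree structure. The sketch below, using hypothetical names, mirrors
the FIG. 12 example: an item renders only when every layer on its
path is enabled.

    from dataclasses import dataclass, field

    @dataclass
    class Layer:
        name: str
        enabled: bool = True
        children: list = field(default_factory=list)
        items: list = field(default_factory=list)  # drawing elements

        def visible_items(self, parent_enabled: bool = True) -> list:
            """An item renders only if every layer on its path is enabled."""
            on = parent_enabled and self.enabled
            out = self.items[:] if on else []
            for child in self.children:
                out += child.visible_items(on)
            return out

    # Mirroring FIG. 12: only "car 1" and "truck 1" end up rendered.
    root = Layer("symbols layer", children=[
        Layer("cars layer", children=[Layer("car 1", items=["car 1222"])]),
        Layer("trucks layer", children=[Layer("truck 1", items=["truck 1224"])]),
        Layer("landmarks layer", enabled=False, items=["tree", "hydrant"]),
    ])
    assert root.visible_items() == ["car 1222", "truck 1224"]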
[0111] Virtually any characteristic of the information available
for display may serve to categorize the information for purposes of
display layers or sub-layers. In particular, any of the various
exemplary elements that may be rendered using the drawing tool GUI
122 discussed herein (e.g., timestamps; scales; callouts; estimated
time information; input image content; symbols relating to
vehicles, landmarks, signs, people, animals or the like, etc.) may
be categorized as a sub-layer, and one or more sub-layers may
further be categorized into constituent elements for selective
display (e.g., as sub-sub-layers).
[0112] Further, layers may be based on user-defined attributes of
symbols or other rendered features. For example, a layer may be
based on the speed of vehicles, whether vehicles were involved in
the accident, whether the vehicles are public service vehicles, the
location of vehicles at a particular time, and so on. For example,
a user may define particular vehicle symbols as having
corresponding speeds, and a "moving vehicles layer" may be selected
to enable the display of vehicles having non-zero speeds.
Additionally or alternatively, selecting the moving vehicles layer
may cause information concerning the speed of the moving vehicles
to be displayed. For example, text indicating a speed of 15 mph may
be displayed adjacent a corresponding vehicle. Similarly, a user
may define particular vehicle symbols as being involved in the
accident, and an "accident vehicles layer" may be selected to
enable the display of vehicles involved in the accident.
Additionally or alternatively, selecting the accident vehicles
layer may cause information identifying accident vehicles to be
displayed. For example, an icon indicative of an accident vehicle
may be displayed adjacent a corresponding vehicle. The "moving
vehicles layer" and the "accident vehicles layer" may be sub-layers
under the symbols layer, or may be sub-layers under a "vehicle
layer" (not shown), which itself is a sub-layer under the symbols
layer. Further, the "moving vehicles layer" and the "accident
vehicles layer" may in turn include sub-layers. For example, the
"moving vehicles layer" may include a sub-layer to enable the
display of all vehicles traveling east. From the foregoing, it may
be appreciated that a wide variety of information may be
categorized in a nested hierarchy of layers, and information
included in the layers may be visually rendered, when
selected/enabled for display, in a variety of manners.
[0113] Other attributes of symbols or other rendered features may
also be used as the basis for defining layers. For example, the
user-determined and/or automatically determined confidence levels
of respective symbols, as discussed herein, may be used as the
basis for defining layers. According to one illustrative example, a
layer may be defined to include only those symbols that have an
associated user-determined and/or automatically determined
confidence level of at least some percentage, e.g., 50%. The
information concerning the confidence levels associated with the
symbols may be drawn from a report in which such levels are
included.
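Both the attribute-defined layers of paragraph [0112] and the
confidence-filtered layers of paragraph [0113] reduce to filtering
symbols with a predicate; the sketch below uses hypothetical
dictionary-based symbols for illustration.

    def layer_members(symbols, predicate):
        """Select the symbols belonging to an attribute-defined layer."""
        return [s for s in symbols if predicate(s)]

    vehicles = [
        {"name": "car 1", "speed_mph": 15, "in_accident": True},
        {"name": "truck 1", "speed_mph": 0, "in_accident": True},
        {"name": "car 2", "speed_mph": 30, "in_accident": False},
    ]
    moving = layer_members(vehicles, lambda s: s["speed_mph"] > 0)
    accident = layer_members(vehicles, lambda s: s["in_accident"])
    # A confidence-based layer is the same pattern, e.g.:
    # layer_members(symbols, lambda s: s.get("confidence", 0) >= 0.5)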
[0114] It should further be appreciated that, according to various
embodiments, the attributes and/or type of visual information
displayed as a result of selecting one or more layers or sub-layers
is not limited. In particular, visual information corresponding to
a selected layer or sub-layer may be electronically rendered in the
form of one or more lines or shapes (of various colors, shadings
and/or line types), text, graphics (e.g., symbols or icons), and/or
images, for example. Likewise, the visual information corresponding
to a selected layer or sub-layer may include multiple forms of
visual information (one or more of lines, shapes, text, graphics
and/or images).
[0115] In yet other embodiments, all of the symbols and/or other
overlaid information of a particular marked up input image may be
categorized as a display layer, such that the overlaid information
may be selectively enabled or disabled for display as a display
layer. In this manner, a user may conveniently toggle between the
display of various related marked up input images (e.g., marked up
input images relating to the same accident or other event) for
comparative display. In particular, a user may toggle between
scenes depicting the events of an accident at different times.
[0116] It should be appreciated that a layer need not include a
singular category of symbols or overlaid information, and may be
customized according to a user's preferences. For example, a user
may select particular features in one or more marked up input
images that the user would like to enable to be displayed
collectively as a layer. Additionally or alternatively, the user
may select a plurality of categories of features that the user
would like to enable to be displayed collectively as a layer.
[0117] In some embodiments, processing unit 110 (FIG. 1) may
automatically select which layers are displayed or hidden. As an
example, if a user depicts a truck in the accident scene using a
truck symbol, processing unit 110 may automatically select the
"truck layer" sub-layer and the "truck 1" sub-sub layer for display
in the display field. As another example, if a user specifies or
selects landmarks to be displayed, processing unit 110 may
automatically select the base image to be hidden to provide an
uncluttered depiction of an accident scene. The foregoing are
merely illustrative examples of automatic selection/enabling of
layers, and the inventive concepts discussed herein are not limited
in these respects.
[0118] Referring to FIGS. 1 and 2, when the user has completed
marking up (e.g., with lines, shapes, symbols, text, etc.) the
certain input image 116, the marked up input image 116 may be saved
as an event-specific image 126. For example, during the save
operation of geo-referenced electronic drawing application 100, any
event-specific images 126 created therein may be converted to any
standard digital image file format, such as PDF, JPG, or BMP, and
saved, for example, in memory 112 or to an associated
file system (not shown). In some cases, it may be beneficial for
the user to generate multiple event-specific images 126 in order to
depict, for example, more details of how a vehicle accident
occurred. The multiple event-specific images 126 may be associated
to one another via, for example, respective descriptor files 128
and saved as an image series 130. An example of an image series 130
is shown with reference to FIG. 3.
[0119] Each descriptor file 128 includes information about each
event-specific image 126 of an image series 130. Using the example
of a vehicle accident report, each descriptor file 128 may include
the accident report number, the name of the event-specific image
126 with respect to the image series 130, the creation date, and
the like. Descriptor files 128 provide a mechanism of
geo-referenced electronic drawing application 100 that allow
event-specific images 126 and/or any image series 130 to be queried
by other applications, such as any incident management
applications. In one example, descriptor files 128 may be
extensible markup language (XML) files that are created during the
save process of event-specific images 126 and/or image series
130.
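Since descriptor files 128 may be XML files, a minimal sketch of
writing one follows; the element names and arguments below are
illustrative placeholders, not a disclosed schema.

    import xml.etree.ElementTree as ET

    def write_descriptor(path, report_number, image_name, series_name, created):
        """Emit a descriptor file so other applications can query an
        event-specific image; all field names here are illustrative."""
        root = ET.Element("descriptor")
        ET.SubElement(root, "accident_report_number").text = report_number
        ET.SubElement(root, "image_name").text = image_name
        ET.SubElement(root, "image_series").text = series_name
        ET.SubElement(root, "creation_date").text = created
        ET.ElementTree(root).write(path, encoding="utf-8", xml_declaration=True)

    write_descriptor("frame1.xml", "AR-2010-0042", "126A", "series-130", "2010-04-02")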
[0120] Referring to FIG. 3, an example of a series of
geo-referenced drawings that are generated using geo-referenced
electronic drawing application 100 is presented. FIG. 3 shows an
example of an image series 130 that depicts time-lapsed sequential
images of a vehicle collision (i.e., essentially representing
time-lapsed frames 1, 2, and 3 in sequence). In this example, frame
1 is represented by an event-specific image 126A that depicts
vehicle #1 heading westbound and vehicle #2 heading eastbound, just
prior to the collision. Frame 2 is represented by an event-specific
image 126B that depicts vehicle #1 and vehicle #2 at the moment of
impact during the collision. Frame 3 is represented by an
event-specific image 126C that depicts the final resting place of
vehicle #1 and vehicle #2 after the collision.
[0121] Each of the event-specific images 126A-C may include a
corresponding estimated relative time 225A-C represented thereon.
The estimated relative time may reflect an estimated time of the
event (e.g., a vehicle accident) depicted in the event-specific
image. In the example of FIG. 3, an estimated relative time is
rendered visually (e.g., overlaid) on the input image 116 of each
of the event-specific images 126A-C. In event-specific image 126A,
the estimated relative time 225A is represented by a variable "t,"
which corresponds to an unknown time. In event-specific images 126B
and 126C, the estimated relative times 225B and 225C are
represented by times relative to the variable "t" (i.e., "t+0.5
sec" and "t+1 sec," respectively). Additionally or alternatively,
the estimated relative time may reflect an estimated date of the
vehicle accident. As also shown in FIG. 3, one or more of
event-specific images 126A-C may include a corresponding estimated
actual time 227 represented thereon. The estimated actual time may
reflect an estimated non-relative time of the vehicle accident. The
estimated relative time 225A-C and the estimated actual time 227
may be estimated by the user of the drawing application or a
related party.
[0122] In some embodiments, it may be desirable to generate an
animated sequence based on a plurality of event-specific images
126. According to one exemplary implementation shown in FIG. 13, an
animation controls window 1302 may be rendered in the viewing
window of drawing tool GUI 122 described in connection with FIG. 2
to facilitate generation of an animated sequence. The animation
controls window 1302 may be displayed by selecting a "display
animation controls" action item in the animation menu 1300.
[0123] The animation controls window 1302 comprises an interface
1304 for specifying frame order, an interface 1306 for specifying
animation speed, and an interface 1308 for specifying a transition
between frames. In the example of FIG. 13, interface 1304 lists
each of the frames representing event-specific images. A user may
specify a sequential order for the listed frames by selecting up or
down arrows associated with the listed frames. For example, a user
may select the down arrow associated with "Frame 1" to move this
frame to a later sequential order.
[0124] Interface 1306 lists options for specifying the animation
speed of the frames. A first option, which is selected in the
example of FIG. 13, provides that the animation speed of the frames
will be based on an estimated time for each frame. In particular,
by selecting this option, the animation speed may be based on an
estimated relative time or an estimated actual time that may be
specified for each frame as discussed in connection with FIG. 3.
For example, if Frame 2 has an estimated relative time that is two
seconds after that of Frame 1, and Frame 3 has an estimated
relative time that is five seconds after that of Frame 2, the
frames may be displayed at zero seconds, two seconds, and seven
seconds, respectively, or at some multiplier thereof. For example,
the frames may be displayed at half of the estimated actual speed
by displaying the frames at zero seconds, four seconds, and
fourteen seconds, respectively. A second option, which is
unselected in the example of FIG. 13, provides that the animation
speed of the frames will be based on a regular interval, the length
of which may be adjusted by sliding the arrow associated with the
interval length control feature to the left or the right.
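The estimated-time-based playback, including the half-speed example
above, can be expressed as a small mapping from estimated times to
display times; the function below is an illustrative sketch.

    def display_times(estimated_times, speed_multiplier=1.0):
        """Map each frame's estimated time to a playback time.

        With estimated times of 0, 2, and 7 seconds and a multiplier of
        0.5 (half speed), frames display at 0, 4, and 14 seconds.
        """
        t0 = estimated_times[0]
        return [(t - t0) / speed_multiplier for t in estimated_times]

    assert display_times([0, 2, 7], 0.5) == [0.0, 4.0, 14.0]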
[0125] It should be appreciated that the animation speed need not
be consistent for all frames, and that the animation speed for
particular sequences of frames may be adjusted as desired by the
user. For example, the time associated with one or more frames may
be increased or decreased relative to an estimated time so that the
user can observe how such an increase or decrease impacts the
animation and/or simulate different scenarios.
[0126] Interface 1308 lists options for specifying a transition
between overlaid features in the frames (e.g., vehicle symbols). A
first option, which is selected in the example of FIG. 13, provides
that there is no transition between the overlaid features. In this
case, the frames will simply be displayed sequentially as a
time-lapse animation. A second option, which is unselected in the
example of FIG. 13, provides that the overlaid features (e.g.,
vehicle symbols) will trace a path between their position in
consecutive frames to transition between the event-specific images
of consecutive frames. In the case of a vehicle, for example, the
path may be a linear path between a first point representing a
center of the vehicle symbol in one image and a second point
representing a center of the vehicle symbol in a consecutive image.
According to one exemplary implementation, all overlaid features or
all overlaid features of a particular type (e.g., vehicle symbols)
will, by default, be animated in this manner. According to another
exemplary implementation, if the second option is selected, an
additional interface may be displayed that allows a user to select
which overlaid features are to be animated and which are to remain
stationary. For example, a user may specify that only features
belonging to a certain custom or non-custom layer be animated while
all other features remain stationary. Conversely, a user may
specify that only features belonging to a certain custom or
non-custom layer shall remain stationary while all other features
are animated. This additional interface may or may not be used in
connection with default settings.
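The linear transition between consecutive frames amounts to
interpolating a symbol's center point; a minimal sketch, with
hypothetical pixel coordinates, follows.

    def lerp_center(c0, c1, fraction):
        """Point a given fraction of the way along the straight path
        between a symbol's centers in two consecutive frames."""
        (x0, y0), (x1, y1) = c0, c1
        return (x0 + (x1 - x0) * fraction, y0 + (y1 - y0) * fraction)

    # Halfway through the transition between frames 1 and 2:
    assert lerp_center((100, 200), (140, 220), 0.5) == (120.0, 210.0)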
[0127] Referring to FIGS. 1, 2, and 3, geo-referenced electronic
drawing application 100 provides a mechanism by which
event-specific images 126 and/or any image series 130 or animation
based thereon may be integrated into electronic reports, such as
reports 132 of FIG. 1. Reports 132 may be any electronic reports in
which geo-referenced electronic drawings may be useful, such as
electronic personal injury reports, electronic vehicle accident
reports, any types of electronic property damage reports, and the
like. An example of a report 132 is shown with reference to FIG.
4.
[0128] Referring to FIG. 4, a traffic collision report 400 that is
generated using geo-referenced electronic drawing application 100
and that includes a geo-referenced drawing is presented. Traffic
collision report 400 is an example of a report 132. Traffic
collision report 400 may be, for example, a report used by accident
investigation companies, law enforcement agencies, and/or insurance
companies.
[0129] In this example, a certain event-specific image 126 is read
into a drawing field of traffic collision report 400. In this way,
the certain event-specific image 126 is integrated into traffic
collision report 400. The textual information of traffic collision
report 400 may be manually entered and/or automatically imported
from information associated with event-specific image 126, which
was captured using drawing tool GUI 122. For example, a
"Description of Accident" field may be populated with textural
information of callout 218 (see FIG. 2). Additionally, an entry
screen (not shown) of geo-referenced electronic drawing application
100 may be provided that allows the user to manually enter and/or
modify information in the text fields of a report 132, such as the
text fields of traffic collision report 400. The entry screen may
be incorporated in and/or operate in combination with drawing tool
GUI 122.
[0130] A report 132, such as traffic collision report 400, is not
limited to incorporating a single event-specific image 126.
For example, subsequent pages of traffic collision report 400 may
include all event-specific images 126 of a certain image series
130, such as those shown in FIG. 3.
[0131] Referring to FIG. 5, a flow diagram of an example of a
method 500 of operation of geo-referenced electronic drawing
application 100 is presented. Method 500 may include, but is not
limited to, the following steps, which are not limited to any
order.
[0132] At step 510, by use of drawing tool GUI 122, processing unit
110 of geo-referenced electronic drawing application 100 acquires
location information with respect to the event of interest. For
example, geographic location information from geo-location data 118
may be automatically read into address field 210 and/or
geo-location data field 212 of drawing tool GUI 122. Alternatively,
location information may be manually entered in address field 210
and/or geo-location data field 212.
[0133] At step 512, the collection of geo-referenced images is
queried, the matching geo-referenced image is read into drawing
tool GUI 122, and the geo-referenced image is rendered in the
viewing window of drawing tool GUI 122. For example, processing
unit 110 of geo-referenced electronic drawing application 100
queries input images 116, which are the geo-referenced images, in
order to find the input image 116 that matches the location
information of step 510. Once the matching input image 116 is
found, the input image 116 is read into drawing tool GUI 122 and
rendered in the viewing window thereof. In this way, a
geo-referenced image is provided to the user, upon which markings
that indicate the event of interest may be made. In one example and
referring to FIG. 2, an input image 116 that matches "263 Main St,
Reno, Nev." in address field 210 is located in the store of input
images 116 in memory 112 and then read into drawing tool GUI
122.
[0134] At step 514, processing unit 110 of geo-referenced
electronic drawing application 100 may process any symbols that are
selected from symbols library 114 along with any other markings
that are overlaid upon the geo-referenced image to depict the event
of interest. For example, any symbols that are selected using
drawing toolbar 216 of drawing tool GUI 122 may be overlaid upon
the certain input image 116 in order to depict the event of
interest, such as a vehicle accident. In one example and referring
to FIG. 2, a symbol for a car (vehicle #1) and a symbol for a light
truck (vehicle #2) are positioned and rendered upon the input image
116 that matches "263 Main St, Reno, Nev." in address field 210.
Additionally, geo-referenced electronic drawing application 100 is
designed to automatically render symbols to scale upon the certain
input image 116 according to the settings of scale 214.
[0135] Further, other markings (e.g., a polygon, a rectangle, a
circle, a line) may be overlaid upon input image 116. In one
example, using the sketching palette portion of drawing toolbar
216, lines to indicate skid marks may be drawn upon input image
116.
[0136] At step 516, processing unit 110 of geo-referenced
electronic drawing application 100 may process any textual
information related to the geo-referenced image. In one example and
referring to FIG. 2, callout 218 may be used for entering and
displaying textual information about the vehicle collision. Callout
218 is shown overlaid upon input image 116.
[0137] At step 518, processing unit 110 of geo-referenced
electronic drawing application 100 may render and save the
event-specific image along with its associated descriptor file. In
one example, when the user has completed marking up (e.g., with
lines, shapes, symbols, text, etc.) the certain input image 116,
the marked up input image 116 may be saved as an event-specific
image 126. For example, during the save operation of geo-referenced
electronic drawing application 100, any event-specific images 126
created therein may be converted to any standard digital image file
format, such as PDF, JPG, or BMP, and saved. Further,
its associated descriptor file 128 is created and saved.
[0138] At decision step 520, the user of geo-referenced electronic
drawing application 100 determines whether an image series, such as
the example image series 130 of FIG. 3, is required in order to
adequately depict the event of interest. If yes, method 500
proceeds to step 522. If no, method 500 proceeds to step 526.
[0139] At decision step 522, the user of geo-referenced electronic
drawing application 100 determines whether the image series is
complete. If yes, method 500 proceeds to step 524. If no, method
500 returns to step 510 to begin creating the next event-specific
image.
[0140] At step 524, the descriptor files 128 of the event-specific
images 126 that are included in the image series 130 are associated
and the image series 130 is saved.
[0141] At step 526, the event-specific image 126 and/or all
event-specific images 126 of the image series 130 and any other
information are integrated into the electronic report of interest.
In one example, a certain event-specific image 126 is integrated
into a certain type of report 132, such as traffic collision report
400 of FIG. 4. Further, textual information associated with
event-specific image 126 may be automatically imported into traffic
collision report 400.
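The control flow of method 500, including decision steps 520 and
522, can be summarized in the following sketch, in which each
function argument is a placeholder for the corresponding step
described above rather than a disclosed interface.

    def run_method_500(get_location, find_image, mark_up, save_image,
                       needs_series, series_complete, save_series, integrate):
        """Illustrative control flow of method 500 (FIG. 5)."""
        images = []
        while True:
            location = get_location()            # step 510
            image = find_image(location)         # step 512
            marked = mark_up(image)              # steps 514 and 516
            images.append(save_image(marked))    # step 518
            if not needs_series():               # decision step 520
                break
            if series_complete(images):          # decision step 522
                save_series(images)              # step 524
                break
            # otherwise loop back to step 510 for the next image
        integrate(images)                        # step 526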
[0142] Referring to FIG. 6, a functional block diagram of a
networked system 600 that includes geo-referenced electronic
drawing application 100 for documenting and reporting events is
presented. In this embodiment, geo-referenced electronic drawing
application 100 may be a web-based application. Therefore,
networked system 600 may include an application server 610 upon
which geo-referenced electronic drawing application 100 is
installed.
[0143] Application server 610 may be any application server, such
as a web application server and/or web portal, by which one or more
user 612 may access geo-referenced electronic drawing application
100 with respect to documenting and reporting events. Application
server 610 may be accessed by users 612 via any networked computing
device, such as their local computing devices 140. In one example,
users 612 may be any personnel associated with accident
investigation companies, law enforcement agencies, and/or insurance
companies.
[0144] Networked system 600 of the present disclosure may further
include an image server 614, which is one example of an entity
supplying input images 116 of FIG. 1. Image server 614 may be any
computer device for storing and providing input images 116, such as
aerial images of geographic locations.
[0145] Networked system 600 of the present disclosure may further
include a central server 616. In one example, central server 616
may be associated with accident investigation companies, law
enforcement agencies, and/or insurance companies. Certain business
applications, such as management applications 618, may reside on
central server 616. Management applications 618 may be, for
example, any incident management applications.
[0146] A network 620 provides the communication link between any
and/or all entities of networked system 600. For example, network
620 provides the communication network by which information may be
exchanged between application server 610, image server 614, central
server 616, and computing devices 140. Network 620 may be, for
example, any local area network (LAN) and/or wide area network
(WAN) for connecting to the Internet.
[0147] In order to connect to network 620, each entity of networked
system 600 includes a communication interface (not shown). For
example, the respective communication interfaces of application
server 610, image server 614, central server 616, and computing
devices 140 may be any wired and/or wireless communication
interface by which information may be exchanged between any
entities of networked system 600. Examples of wired communication
interfaces may include, but are not limited to, USB ports, RS232
connectors, RJ45 connectors, Ethernet, and any combinations
thereof. Examples of wireless communication interfaces may include,
but are not limited to, an Intranet connection, Internet,
Bluetooth.RTM. technology, Wi-Fi, Wi-Max, IEEE 802.11 technology,
radio frequency (RF), Infrared Data Association (IrDA) compatible
protocols, Local Area Networks (LAN), Wide Area Networks (WAN),
Shared Wireless Access Protocol (SWAP), any combinations thereof,
and other types of wireless networking protocols.
[0148] In certain embodiments, geo-referenced electronic drawing
application 100 may include a feature for attaching media files to
reports 132. For example, networked system 600 may include certain
media capture devices 622 for capturing media files 624. Media
capture devices 622 may be any media capture devices, such as
digital cameras, digital audio recorders, digital video recorders,
and the like. Therefore, media files 624 may be, for example,
digital image files, digital audio files, digital video files, and
the like. The media files 624 may likewise have descriptor files
(not shown) associated therewith for, for example, associating them
with certain reports 132. In one example, the media files 624 may be
provided as attachments to reports 132. According to other
embodiments, computing device 140 may include one or more media
capture devices as described above.
[0149] The attached media files 624 may be stamped with time,
location and/or direction information. For example, a media file
624 may include a timestamp identifying a calendar date and/or time
that the media file was created and/or a calendar date and/or time
that the media file was stored in memory by the computing device
140. Similarly, the media file may include a location stamp
identifying a location (e.g., a city and state or geographic
coordinates) where the media file was created and/or a location
where the media file was stored in memory by the computing device
140. A media file may also include a direction stamp specifying
directional information associated therewith. For example, if the
media file is a photographic image or video that was taken with a
camera device associated with a compass, the photographic image or
video may be stamped with directional information based on an
output of the compass to indicate that the image or video was taken
while the camera lens was facing northwest. In certain embodiments,
the media files 624 may be automatically stamped with time,
location and/or direction information. The timestamp and location
stamp, particularly when automatically generated, may be used as
verification that the media file was stored at a particular time
and place, such as the time and place where the report associated
with the media file was created. The direction stamp may be used as
verification that the media file was created while a media capture
device was facing in a particular direction or otherwise had a
particular orientation. The location, time and/or direction data
used for the location stamp, timestamp and/or direction stamp may
originate from the computing device on which the geo-referenced
electronic drawing application is installed, or from any other
computing device. For example, the computing device may be GPS-enabled and
may include a timer and a compass. Alternatively, the location,
time and/or direction data may be based on manual data entry by the
user. It should be appreciated that the media file need not be
modified to include the location, time and/or direction data
described above, as the data may alternatively be stored in
association with the media file as distinct data.
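Consistent with the note above that stamp data may be stored in
association with a media file rather than inside it, the sketch
below writes a hypothetical JSON "sidecar" stamp; the gps_read and
compass_read callables are placeholders for the GPS device and
compass.

    import json
    import time

    def stamp_media_file(media_path, gps_read, compass_read):
        """Store time/location/direction as distinct sidecar data rather
        than modifying the media file itself."""
        lat, lon = gps_read()
        stamp = {
            "media": media_path,
            "stored_utc": time.time(),
            "lat": lat,
            "lon": lon,
            "heading_deg": compass_read(),  # e.g., 315.0 for northwest
        }
        with open(media_path + ".stamp.json", "w") as f:
            json.dump(stamp, f)
        return stamp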
[0150] As discussed herein, the computing device 140 shown in FIG.
6 may have a communication interface that may receive information
from network 620, which may be a LAN and/or WAN for connecting to
the Internet. According to one embodiment, information about an
environmental condition may be received as a media file via the
communication interface. For example, weather information (e.g.,
temperature, visibility and precipitation information), traffic
information and/or construction information, may be received from
the Internet via the communication interface. Such information may
be received from a weather service, traffic service, traffic
records, construction service or the like. Received information may
be attached as files to reports 132. Alternatively, or in addition,
received information may be incorporated within the reports 132
themselves. For example, if the received information indicates that
the weather at the time of an accident was sunny, such information
may be automatically input to the traffic collision report 400
discussed in connection with FIG. 4. In particular, the report
could include this information as text in a data field, or an
event-specific image 126 in the report could include an image of a
sun or another icon indicating sunny weather. As another example,
if the received information indicates that the visibility at the
time of the accident was 20 feet, the report could include this
information as text in a data field and/or represent this
information in an event-specific image 126. For example, to
represent the area that could not be viewed by a particular driver,
the area beyond a 20 foot radius of the driver in the
event-specific image 126 could be colored gray, blacked out, or
designated with hash marks. Alternatively, the traffic collision
report 400 could be manually updated to include weather
information, traffic information, construction information, or the
like. Condition information received via the communication
interface may be stored with and/or stamped with location, time
and/or direction data indicating when the condition information was
stored by the computing device 140.
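The 20-foot-visibility example above, in which the area beyond a
radius is colored gray, might be implemented with a simple distance
mask; the NumPy sketch below assumes the image is held as an array
and that the driver's position and the visibility radius are known
in pixel units.

    import numpy as np

    def gray_beyond_radius(image, center_xy, radius_px, gray=128):
        """Gray out everything farther than a visibility radius from the
        driver's position in an event-specific image."""
        h, w = image.shape[:2]
        yy, xx = np.mgrid[0:h, 0:w]
        cx, cy = center_xy
        mask = (xx - cx) ** 2 + (yy - cy) ** 2 > radius_px ** 2
        out = image.copy()
        out[mask] = gray
        return out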
[0151] In certain embodiments, central server 616 of networked
system 600 may include a collection of historical reports 626,
which are records of reports 132 that have been processed in the
past. In one example, in the context of vehicle accident reports,
historical reports 626 may be useful to inform current reports 132,
such as current accident reports that are being processed. For
example, being able to review historical information pertaining to
a certain intersection may be useful to add to an accident report
for fault analysis purposes, as certain trends may become apparent.
For example, historical reports 626 may indicate for a certain
highway or street intersection that a steep hill is present, the
traffic light malfunctions, the line of sight to the stop sign is
obstructed, there is a poor angle of visibility at the
intersection, the intersection is an accident prone area in poor
weather conditions (e.g., a bridge approaching the intersection
freezes over), and the like. Referring again to step 526 of method
500 of FIG. 5, information from historical reports 626 may be other
information that may be integrated into reports 132.
[0152] In operation, each user of networked system 600 may access
geo-referenced electronic drawing application 100 via his/her local
computing device 140. Networked system 600 may provide a secure
login function, which allows users 612 to access the functions of
geo-referenced electronic drawing application 100. Once authorized,
users 612 may open drawing tool GUI 122 using, for example, the web
browsers of their computing devices 140. Geographic location
information is read into or manually entered into drawing tool GUI
122 and event-specific images 126, image series 130, and/or reports
132 may be generated as described with reference to FIGS. 1 through
5. In this process, input images 116 of image server 614 may be the
source of the geo-referenced images that are read into
geo-referenced electronic drawing application 100. Subsequently,
reports 132 that include geo-referenced images, such as
event-specific images 126, and, optionally, one or more media files
624 attached thereto may be transmitted in electronic form from the
computing devices 140 of users 612 to any entities connected to
network 620 of networked system 600. In one example, reports 132
that include geo-referenced images may be transmitted in electronic
form from the computing devices 140 of users 612 to central server
616 for further review and processing only by authorized users of
networked system 600. This is an example of how geo-referenced
electronic drawing application 100 is used in networked system 600
to provide improved distribution, tracking, and auditing of reports
among entities and to provide improved control over access to
reports.
[0153] Referring again to FIG. 6, networked system 600 is not
limited to the types and numbers of entities that are shown in FIG.
6. Any types and numbers of entities that may be useful in event
documenting and reporting systems may be included in networked
system 600. Further, in another embodiment, geo-referenced
electronic drawing application 100 may be a standalone application
that resides on each networked computing device 140. Therefore, in
this embodiment, networked system 600 of FIG. 6 need not include
application server 610.
[0154] In summary and referring to FIGS. 1 through 6,
geo-referenced electronic drawing application 100 of the present
disclosure provides the ability to electronically mark up real
world geo-referenced images, such as input images 116, with
symbols, shapes, and/or lines in order to provide improved and
consistent accuracy with respect to drawings that support
incident reports.
[0155] Further, geo-referenced electronic drawing application 100
of the present disclosure provides the ability to electronically
mark up real world geo-referenced images with symbols, shapes,
and/or lines to scale, again providing improved accuracy and
consistent accuracy with respect to drawings that support incident
reports.
[0156] Further, geo-referenced electronic drawing application 100
of the present disclosure provides a standard symbols library, such
as symbols library 114, thereby providing standardization with
respect to drawings that support incident reports.
[0157] Further, networked systems that include geo-referenced
electronic drawing application 100 of the present disclosure, such
as networked system 600, provide improved distribution, tracking,
and auditing of reports among entities and provide improved control
over access to reports.
CONCLUSION
[0158] While various inventive embodiments have been described and
illustrated herein, those of ordinary skill in the art will readily
envision a variety of other means and/or structures for performing
the function and/or obtaining the results and/or one or more of the
advantages described herein, and each of such variations and/or
modifications is deemed to be within the scope of the inventive
embodiments described herein. More generally, those skilled in the
art will readily appreciate that all parameters, dimensions,
materials, and configurations described herein are meant to be
exemplary and that the actual parameters, dimensions, materials,
and/or configurations will depend upon the specific application or
applications for which the inventive teachings is/are used. Those
skilled in the art will recognize, or be able to ascertain using no
more than routine experimentation, many equivalents to the specific
inventive embodiments described herein. It is, therefore, to be
understood that the foregoing embodiments are presented by way of
example only and that, within the scope of the appended claims and
equivalents thereto, inventive embodiments may be practiced
otherwise than as specifically described and claimed. Inventive
embodiments of the present disclosure are directed to each
individual feature, system, article, material, kit, and/or method
described herein. In addition, any combination of two or more such
features, systems, articles, materials, kits, and/or methods, if
such features, systems, articles, materials, kits, and/or methods
are not mutually inconsistent, is included within the inventive
scope of the present disclosure.
[0159] The above-described embodiments can be implemented in any of
numerous ways. For example, the embodiments may be implemented
using hardware, software or a combination thereof. When implemented
in software, the software code can be executed on any suitable
processor or collection of processors, whether provided in a single
computer or distributed among multiple computers.
[0160] Further, it should be appreciated that a computer may be
embodied in any of a number of forms, such as a rack-mounted
computer, a desktop computer, a laptop computer, or a tablet
computer. Additionally, a computer may be embedded in a device not
generally regarded as a computer but with suitable processing
capabilities, including a Personal Digital Assistant (PDA), a smart
phone or any other suitable portable or fixed electronic
device.
[0161] Also, a computer may have one or more input and output
devices. These devices can be used, among other things, to present
a user interface. Examples of output devices that can be used to
provide a user interface include printers or display screens for
visual presentation of output and speakers or other sound
generating devices for audible presentation of output. Examples of
input devices that can be used for a user interface include
keyboards, and pointing devices, such as mice, touch pads, and
digitizing tablets. As another example, a computer may receive
input information through speech recognition or in other audible
format.
[0162] Such computers may be interconnected by one or more networks
in any suitable form, including a local area network or a wide area
network, such as an enterprise network, an intelligent network
(IN), or the Internet. Such networks may be based on any suitable
technology and may operate according to any suitable protocol and
may include wireless networks, wired networks or fiber optic
networks.
[0163] FIG. 14 shows an illustrative computer 1400 that may be used
at least in part to implement the geo-referenced electronic drawing
application 100 described herein in accordance with some
embodiments. For example, the computer 1400 comprises a memory
1410, one or more processing units 1412 (also referred to herein
simply as "processors"), one or more communication interfaces 1414,
one or more display units 1416, and one or more user input devices
1418. The memory 1410 may comprise any computer-readable media, and
may store computer instructions (also referred to herein as
"processor-executable instructions") for implementing the various
functionalities described herein. The processing unit(s) 1412 may
be used to execute the instructions. The communication interface(s)
1414 may be coupled to a wired or wireless network, bus, or other
communication means and may therefore allow the computer 1400 to
transmit communications to and/or receive communications from other
devices. The display unit(s) 1416 may be provided, for example, to
allow a user to view various information in connection with
execution of the instructions. The user input device(s) 1418 may be
provided, for example, to allow the user to make manual
adjustments, make selections, enter data or various other
information, and/or interact in any of a variety of manners with
the processor during execution of the instructions.
[0164] The various methods or processes outlined herein may be
coded as software that is executable on one or more processors that
employ any one of a variety of operating systems or platforms.
Additionally, such software may be written using any of a number of
suitable programming languages and/or programming or scripting
tools, and also may be compiled as executable machine language code
or intermediate code that is executed on a framework or virtual
machine.
[0165] In this respect, various inventive concepts may be embodied
as a computer readable storage medium (or multiple computer
readable storage media) (e.g., a computer memory, one or more
floppy discs, compact discs, optical discs, magnetic tapes, flash
memories, circuit configurations in Field Programmable Gate Arrays
or other semiconductor devices, or other non-transitory medium or
tangible computer storage medium) encoded with one or more programs
that, when executed on one or more computers or other processors,
perform methods that implement the various embodiments of the
invention discussed above. The computer readable medium or media
can be transportable, such that the program or programs stored
thereon can be loaded onto one or more different computers or other
processors to implement various aspects of the present invention as
discussed above.
[0166] The terms "program" or "software" are used herein in a
generic sense to refer to any type of computer code or set of
computer-executable instructions that can be employed to program a
computer or other processor to implement various aspects of
embodiments as discussed above. Additionally, it should be
appreciated that according to one aspect, one or more computer
programs that when executed perform methods of the present
invention need not reside on a single computer or processor, but
may be distributed in a modular fashion amongst a number of
different computers or processors to implement various aspects of
the present invention.
[0167] Computer-executable instructions may be in many forms, such
as program modules, executed by one or more computers or other
devices. Generally, program modules include routines, programs,
objects, components, data structures, etc. that perform particular
tasks or implement particular abstract data types. Typically the
functionality of the program modules may be combined or distributed
as desired in various embodiments.
[0168] Also, data structures may be stored in computer-readable
media in any suitable form. For simplicity of illustration, data
structures may be shown to have fields that are related through
location in the data structure. Such relationships may likewise be
achieved by assigning storage for the fields with locations in a
computer-readable medium that convey relationship between the
fields. However, any suitable mechanism may be used to establish a
relationship between information in fields of a data structure,
including through the use of pointers, tags or other mechanisms
that establish relationship between data elements.
[0169] Also, various inventive concepts may be embodied as one or
more methods, of which an example has been provided. The acts
performed as part of the method may be ordered in any suitable way.
Accordingly, embodiments may be constructed in which acts are
performed in an order different than illustrated, which may include
performing some acts simultaneously, even though shown as
sequential acts in illustrative embodiments.
[0170] All definitions, as defined and used herein, should be
understood to control over dictionary definitions, definitions in
documents incorporated by reference, and/or ordinary meanings of
the defined terms.
[0171] The indefinite articles "a" and "an," as used herein in the
specification and in the claims, unless clearly indicated to the
contrary, should be understood to mean "at least one."
[0172] The phrase "and/or," as used herein in the specification and
in the claims, should be understood to mean "either or both" of the
elements so conjoined, i.e., elements that are conjunctively
present in some cases and disjunctively present in other cases.
Multiple elements listed with "and/or" should be construed in the
same fashion, i.e., "one or more" of the elements so conjoined.
Other elements may optionally be present other than the elements
specifically identified by the "and/or" clause, whether related or
unrelated to those elements specifically identified. Thus, as a
non-limiting example, a reference to "A and/or B", when used in
conjunction with open-ended language such as "comprising" can
refer, in one embodiment, to A only (optionally including elements
other than B); in another embodiment, to B only (optionally
including elements other than A); in yet another embodiment, to
both A and B (optionally including other elements); etc.
[0173] As used herein in the specification and in the claims, "or"
should be understood to have the same meaning as "and/or" as
defined above. For example, when separating items in a list, "or"
or "and/or" shall be interpreted as being inclusive, i.e., the
inclusion of at least one, but also including more than one, of a
number or list of elements, and, optionally, additional unlisted
items. Only terms clearly indicated to the contrary, such as "only
one of" or "exactly one of," or, when used in the claims,
"consisting of," will refer to the inclusion of exactly one element
of a number or list of elements. In general, the term "or" as used
herein shall only be interpreted as indicating exclusive
alternatives (i.e. "one or the other but not both") when preceded
by terms of exclusivity, such as "either," "one of," "only one of,"
or "exactly one of." "Consisting essentially of," when used in the
claims, shall have its ordinary meaning as used in the field of
patent law.
[0174] As used herein in the specification and in the claims, the
phrase "at least one," in reference to a list of one or more
elements, should be understood to mean at least one element
selected from any one or more of the elements in the list of
elements, but not necessarily including at least one of each and
every element specifically listed within the list of elements and
not excluding any combinations of elements in the list of elements.
This definition also allows that elements may optionally be present
other than the elements specifically identified within the list of
elements to which the phrase "at least one" refers, whether related
or unrelated to those elements specifically identified. Thus, as a
non-limiting example, "at least one of A and B" (or, equivalently,
"at least one of A or B," or, equivalently "at least one of A
and/or B") can refer, in one embodiment, to at least one,
optionally including more than one, A, with no B present (and
optionally including elements other than B); in another embodiment,
to at least one, optionally including more than one, B, with no A
present (and optionally including elements other than A); in yet
another embodiment, to at least one, optionally including more than
one, A, and at least one, optionally including more than one, B
(and optionally including other elements); etc.
[0175] In the claims, as well as in the specification above, all
transitional phrases such as "comprising," "including," "carrying,"
"having," "containing," "involving," "holding," "composed of," and
the like are to be understood to be open-ended, i.e., to mean
including but not limited to. Only the transitional phrases
"consisting of" and "consisting essentially of" shall be closed or
semi-closed transitional phrases, respectively, as set forth in the
United States Patent Office Manual of Patent Examining Procedures,
Section 2111.03.
* * * * *