U.S. patent application number 15/835284, for a method and system for risk modeling in autonomous vehicles, was published by the patent office on 2018-06-14.
The applicant listed for this patent is Zendrive, Inc. Invention is credited to Jonathan Matus and Pankaj Risbood.
Application Number | 15/835284 |
Publication Number | 20180164825 |
Document ID | / |
Family ID | 62489276 |
Publication Date | 2018-06-14 |
United States Patent Application |
20180164825 |
Kind Code |
A1 |
Matus; Jonathan; et al. |
June 14, 2018 |
METHOD AND SYSTEM FOR RISK MODELING IN AUTONOMOUS VEHICLES
Abstract
A method for adaptive risk modeling for an autonomous vehicle,
the method comprising: retrieving parameters of an identified
driving mission of the autonomous vehicle; in response to the
parameters of the identified driving mission, generating values of:
a comparative autonomous parameter, a mix model parameter, a
surrounding risk parameter, a geographic operation parameter, and a
security risk parameter upon evaluating situational inputs
associated with the identified driving mission with a comparative
autonomous model, a mix model, a sensor-surrounding model, a
geography-dependent model, and a security risk model generated
using sensor and supplementary data extraction systems associated
with the autonomous vehicle; upon generating values, generating a
risk analysis with a rule-based algorithm; and contemporaneously
with execution of the identified driving mission, implementing a
response action associated with control of the autonomous vehicle,
based upon the risk analysis.
Inventors: |
Matus; Jonathan; (San
Francisco, CA) ; Risbood; Pankaj; (San Francisco,
CA) |
Applicant: |
Name | City | State | Country | Type |
Zendrive, Inc. | San Francisco | CA | US | |
Family ID: |
62489276 |
Appl. No.: |
15/835284 |
Filed: |
December 7, 2017 |
Related U.S. Patent Documents
Application Number | Filing Date | Patent Number |
62431949 | Dec 9, 2016 | |
Current U.S. Class: |
1/1 |
Current CPC Class: |
B60W 2050/0075 20130101;
B60W 30/00 20130101; G08G 5/00 20130101; B64C 39/02 20130101; B60W
50/0098 20130101; G05D 1/00 20130101; G05D 1/0214 20130101; G06K
9/6218 20130101; B60W 40/09 20130101; G06K 9/00335 20130101; B60W
2050/0089 20130101; B60W 2556/50 20200201; G08G 1/096725 20130101;
G05D 2201/0213 20130101; G06K 9/00791 20130101; B64D 11/0624
20141201; G06Q 40/08 20130101; G05D 1/0088 20130101; G08G 5/003
20130101; B64C 39/024 20130101; G08G 1/0112 20130101; G06K 9/62
20130101; G06K 9/00 20130101 |
International Class: |
G05D 1/02 20060101
G05D001/02; G05D 1/00 20060101 G05D001/00; G08G 1/01 20060101
G08G001/01; B60W 40/09 20060101 B60W040/09; G08G 1/0967 20060101
G08G001/0967 |
Claims
1. A method for adaptive risk modeling for an autonomous vehicle,
the method comprising: with sensor systems associated with the
autonomous vehicle, generating: a comparative autonomous model that
compares autonomous vehicle operation to human driving operation in
a set of driving scenarios, a mix model characterizing operation of
the autonomous vehicle in mixed-traffic driving scenarios, a
sensor-surrounding model characterizing surroundings of the
autonomous vehicle; with a supplementary data extraction system,
generating: a geography-dependent model characterizing geographic
location-specific acceptable driving behaviors; with a security
diagnostic system associated with the autonomous vehicle,
generating a security risk model characterizing security risks of
the autonomous vehicle; in response to an identified driving
mission of the autonomous vehicle, generating values of: a
comparative autonomous parameter, a mix model parameter, a
surrounding risk parameter, a geographic operation parameter, and a
security risk parameter upon evaluating situational inputs
associated with the identified driving mission with the comparative
autonomous model, the mix model, the sensor-surrounding model, the
geography-dependent model, and the security risk model; upon
generating values, generating a risk analysis upon processing
values of the comparative autonomous parameter, the mix model
parameter, the surrounding risk parameter, the geographic operation
parameter, and the security risk parameter with a rule-based
algorithm; and contemporaneously with execution of the identified
driving mission, implementing a response action associated with
control of the autonomous vehicle, based upon the risk
analysis.
2. The method of claim 1, wherein generating the comparative
autonomous model comprises extracting at least one of a position
characteristic, a velocity characteristic, and an acceleration
characteristic from sensor systems associated with the autonomous
vehicle during operation by a human entity and a non-human
entity.
3. The method of claim 1, wherein generating the sensor-surrounding
model comprises extracting an environmental dataset from an
environment-sensing sensor system associated with the autonomous
vehicle, and evaluating a set of specifications of the autonomous
vehicle against features of the environmental dataset to determine
appropriateness of the autonomous vehicle for the identified
driving mission.
4. The method of claim 1, wherein generating the mix model
comprises extracting a mixed traffic dataset characterizing
interactions of the vehicle with a set of traffic types from a set
of sensors, and tagging the mixed traffic dataset with mixed
traffic event objects.
5. The method of claim 1, wherein generating the
geography-dependent model comprises establishing a first interface
between the autonomous vehicle and a control system in
communication with a computing system and a second interface
between the computing system and the supplementary data extraction
system.
6. The method of claim 1, wherein processing at least one of the
comparative autonomous parameter, the mix model parameter, the
surrounding risk parameter, the geographic operation parameter, and
the security risk parameter is based on combination of model
outputs for improving accuracy of the risk analysis.
7. The method of claim 6, wherein combination of model outputs
comprises determining a first weight associated with at least one
of the comparative autonomous parameter, the mix model parameter,
the surrounding risk parameter, the geographic operation parameter,
and the security risk parameter; determining a second weight
associated with at least one of the comparative autonomous
parameter, the mix model parameter, the surrounding risk parameter,
the geographic operation parameter, and the security risk
parameter; and combining parameter values based on the first and
the second weights for improving the accuracy of the risk
analysis.
8. The method of claim 1, wherein implementing the response action
associated with control of the autonomous vehicle comprises
generating and executing control instructions for modified
operation of the autonomous vehicle using control interfaces with
subsystems of the autonomous vehicle.
9. A method for adaptive risk modeling for an autonomous vehicle,
the method comprising: retrieving parameters of an identified
driving mission of the autonomous vehicle; in response to the
parameters of the identified driving mission, generating values of:
a comparative autonomous parameter, a mix model parameter, a
surrounding risk parameter, a geographic operation parameter, and a
security risk parameter upon evaluating situational inputs
associated with the identified driving mission with a comparative
autonomous model, a mix model, a sensor-surrounding model, a
geography-dependent model, and a security risk model generated
using sensor and supplementary data extraction systems associated
with the autonomous vehicle; upon generating values, generating a
risk analysis upon processing values of the comparative autonomous
parameter, the mix model parameter, the surrounding risk parameter,
the geographic operation parameter, and the security risk parameter
with a rule-based algorithm; and contemporaneously with execution
of the identified driving mission, implementing a response action
associated with control of the autonomous vehicle, based upon the
risk analysis.
10. The method of claim 9, further comprising: with sensor systems
associated with the autonomous vehicle, generating: the comparative
autonomous model that compares autonomous vehicle operation to
human driving operation in a set of driving scenarios, the mix
model characterizing operation of the autonomous vehicle in
mixed-traffic driving scenarios, and the sensor-surrounding model
characterizing surroundings of the autonomous vehicle.
11. The method of claim 10, wherein generating the comparative
autonomous model comprises extracting at least one of a position
characteristic, a velocity characteristic, and an acceleration
characteristic from sensor systems associated with the autonomous
vehicle during operation by a human entity and a non-human
entity.
12. The method of claim 10, wherein generating the mix model
comprises extracting a mixed traffic dataset characterizing
interactions of the vehicle with a set of traffic types from a set
of sensors, and tagging the mixed traffic dataset with mixed
traffic event objects.
13. The method of claim 9, further comprising: with a supplementary
data extraction system, generating the geography-dependent model
characterizing geographic location-specific acceptable driving
behaviors.
14. The method of claim 9, further comprising with a security
diagnostic system coupled to the autonomous vehicle, generating the
security risk model characterizing security risks of the autonomous
vehicle.
15. The method of claim 9, wherein retrieving parameters of the
identified driving mission of the autonomous vehicle comprises
receiving an input sourced from at least one of a passenger and an
operator of the autonomous vehicle, the input indicative of a
destination.
16. The method of claim 9, wherein processing at least one of the
comparative autonomous parameter, the mix model parameter, the
surrounding risk parameter, the geographic operation parameter, and
the security risk parameter is based on weighted combination of
model outputs for improving accuracy of the risk analysis.
17. The method of claim 1, further comprising: obtaining a set of
computer-implemented rules defining the at least one of the
comparative autonomous parameter, the mix model parameter, the
surrounding risk parameter, the geographic operation parameter, and
the security risk parameter as a function of risk feature types
associated with outputs of at least one of the comparative
autonomous model, the mix model, the sensor-surrounding model, the
geography-dependent model, and the security risk model; and wherein
generating the risk analysis comprises generating the risk analysis
based on the set of computer-implemented rules.
19. The method of claim 9, wherein implementing the response action
associated with control of the autonomous vehicle comprises
generating and executing control instructions for modified
operation of the autonomous vehicle using control interfaces with
subsystems of the autonomous vehicle.
20. The method of claim 9, wherein implementing the response action
associated with control of the autonomous vehicle comprises
transmitting the risk analysis to a vehicle control platform, and
modifying, from the vehicle control platform, operational rules of
units of the autonomous vehicle.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims the benefit of Provisional Patent
Application Ser. No. 62/431,949, filed 9 Dec. 2016, which is
incorporated in its entirety by this reference.
TECHNICAL FIELD
[0002] This invention relates generally to the vehicle telematics
field, and more specifically to a new and useful method and system
for risk modeling in the vehicle telematics field.
BACKGROUND
[0003] Autonomous and semi-autonomous vehicles require robust
systems for handling and responding to various conditions and
hazards in their environments. Media responses to recent incidents
involving current autonomous and/or semi-autonomous vehicles have
focused on system-attributable failures (e.g., energy storage
issues, semi-autonomous mode related incidents, etc.); however,
without technology developments related to adaptive risk
mitigation, future incidents will draw into question the ability of
such autonomous systems to handle real-world driving conditions and
hazards in a safe manner. In particular, improved risk modeling
that can be used to provide adaptive and appropriate responses to
transit-related factors in a timely manner will be crucial.
[0004] Current algorithms and systems, while improving, lack the
ability to adequately account for or adapt to risks attributed to
human factors, non-human factors, environmental factors, and other
factors in an adaptive, comprehensive, and suitably responsive
manner. Thus, there is a
need in the vehicle telematics field to create a new and useful
method and system for adaptive risk modeling. This invention
provides such a new and useful system and method.
BRIEF DESCRIPTION OF THE FIGURES
[0005] FIGS. 1A-1C depict flowcharts of embodiments of methods for
autonomous vehicle risk modeling; and
[0006] FIGS. 2A and 2B are schematics of a method and system for
autonomous vehicle risk modeling.
DESCRIPTION OF THE PREFERRED EMBODIMENTS
[0007] The following description of the preferred embodiments of
the invention is not intended to limit the invention to these
preferred embodiments, but rather to enable any person skilled in
the art to make and use this invention.
1. Overview
[0008] As shown in FIGS. 1A, 1B, and 2A, a method 100 for adaptive
risk modeling for an autonomous vehicle includes: evaluating
behavioral risk features of the autonomous vehicle according to a
comparative autonomous model S110, S110'; evaluating mixed traffic
features of the autonomous vehicle from a mix model S120, S120';
determining geography-dependent behavioral features associated with
a geography-dependent model S130, S130'; determining a surrounding
risk parameter upon evaluating sensor systems of the autonomous
vehicle and environmental conditions surrounding the autonomous
vehicle with a sensor-surrounding model S140, S140'; determining a
security threat parameter for the autonomous vehicle with a
security risk model S150, S150'; transforming outputs of the
comparative autonomous model, the mix model, the
geography-dependent model, the sensor-surrounding model, and the
security risk model, with an exposure parameter, into a risk
analysis S160, S160'; and implementing a response action associated
with the autonomous vehicle based upon the risk analysis S170,
S170'.
[0009] In one embodiment, as shown in FIG. 1C, the method 100 can
include one or more of: generating models including: a comparative
autonomous model S10 that compares autonomous vehicle operation to
human driving operation in a set of driving scenarios, a mix model
S20 characterizing operation of the autonomous vehicle in
mixed-traffic driving scenarios, a geography-dependent model S30
characterizing geographic location-specific acceptable driving
behaviors, a sensor-surrounding model S40 characterizing
surroundings of the autonomous vehicle, and a security risk model
S50 characterizing security risks of the autonomous vehicle; in
response to an identified driving mission of the autonomous
vehicle, generating values of: a comparative autonomous parameter,
a mix model parameter, a surrounding risk parameter, a geographic
operation parameter, and a security risk parameter upon evaluating
situational inputs associated with the identified driving mission
with the comparative autonomous model, the mix model, the
sensor-surrounding model, the geography-dependent model, and the
security risk model S60; upon generating values, generating a risk
analysis upon processing values of the comparative autonomous
parameter, the mix model parameter, the surrounding risk parameter,
the geographic operation parameter, and the security risk parameter
with a rule-based algorithm S65; and contemporaneously with
execution of the identified driving mission, implementing a
response action associated with control of the autonomous vehicle,
based upon the risk analysis S70.
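As a non-limiting illustration of this flow (Blocks S60 through S70), the sketch below combines the five parameter values with a rule-based algorithm and maps the resulting risk analysis to a response action. All function names, weights, thresholds, and action labels here are invented for illustration and are not specified by the disclosure:

```python
# Hypothetical sketch of Blocks S60-S70: combine model-output parameter
# values via a rule-based algorithm, then select a response action.
from dataclasses import dataclass

@dataclass
class RiskParameters:
    comparative_autonomous: float  # output of the comparative autonomous model
    mix_model: float               # output of the mix model
    surrounding_risk: float        # output of the sensor-surrounding model
    geographic_operation: float    # output of the geography-dependent model
    security_risk: float           # output of the security risk model

def generate_risk_analysis(p: RiskParameters) -> dict:
    """Rule-based combination: a weighted sum plus hard override rules."""
    weights = {  # hypothetical weights; a weighted combination of this kind
        "comparative_autonomous": 0.25,  # is contemplated in claim 7
        "mix_model": 0.20,
        "surrounding_risk": 0.25,
        "geographic_operation": 0.15,
        "security_risk": 0.15,
    }
    score = sum(weights[k] * getattr(p, k) for k in weights)
    # Hard rule: a severe security risk dominates the aggregate score.
    if p.security_risk > 0.9:
        score = max(score, 0.95)
    return {"risk_score": score, "parameters": p}

def select_response_action(analysis: dict) -> str:
    """Map the risk analysis to a response action (hypothetical thresholds)."""
    score = analysis["risk_score"]
    if score > 0.9:
        return "execute_safe_stop"
    if score > 0.6:
        return "reduce_speed_and_increase_following_distance"
    return "continue_mission"
```

In this sketch, the rule-based character of the algorithm appears both in the fixed weighting and in the override rule that lets one parameter (here, security risk) dominate the aggregate; the disclosure leaves the particular rules and response actions open.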
[0010] The method 100 and/or associated systems function to provide
a methodology for assessing the safety of autonomous vehicle
functions (e.g., auto-pilot functions) based upon real life data in
specific locations and/or generalized locations, in order to
improve autonomous vehicle function (e.g., auto-pilot function,
etc.). Development of assessments and implementation/testing of
response can additionally or alternatively be used to increase
adaptability of autonomous vehicles to risks in real-time (or
near-real time) in order to improve autonomous vehicle safety in
real world situations. In specific applications, the method(s) 100
described herein can implement sensor systems (e.g., optical
sensors, radar modules, microphones, etc.) to collect data
describing real-world driving situations associated with different
environments and/or human behavioral factors, and test autonomous
vehicle function in such situations (simulated or actual).
Additionally or alternatively, in specific applications, the
method(s) 100 described herein can implement supplementary data
extraction systems associated with the autonomous vehicle(s) to
retrieve and transform information (e.g., using APIs) associated
with geographically idiosyncratic driving/traffic/pedestrian
behaviors or other driving formalities, in a manner that results in
improved control of the autonomous vehicle(s). Additionally or
alternatively, in specific applications, the method(s) 100
described herein can implement security diagnostic systems
associated with the autonomous vehicle(s) to generate, retrieve,
and transform information associated with security assessments of
the systems of the autonomous vehicle(s), in a manner that results
in improved control of the autonomous vehicle(s) relative to
security threats.
[0011] The method 100 further functions to leverage the
availability of position, velocity, and/or acceleration (PVA) data
from multiple data sources (and potentially the distinct set of
abilities available to each of those sources) to evaluate and/or
improve operation of autonomous vehicles in real-world situations.
Note that throughout this disclosure, the term PVA data refers to
any data related to the position, velocity, and/or acceleration of
the system operating the method 100 (or of an associated vehicle).
For example, in some cases PVA data may include only acceleration
data (and not position/velocity data). The method 100 may
additionally or alternatively leverage the availability of
additional data captured by the data sources (e.g., audio data,
vehicle sensor data, etc.).
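A minimal sketch of such a PVA data container, assuming hypothetical names and allowing any channel to be absent (e.g., an acceleration-only source such as a standalone accelerometer), might look like:

```python
# Hypothetical sketch of the PVA data notion in paragraph [0011]: a sample
# may carry only a subset of position/velocity/acceleration channels.
from typing import Optional, Tuple

class PVASample:
    def __init__(self,
                 timestamp: float,
                 position: Optional[Tuple[float, float]] = None,    # lat, lon
                 velocity: Optional[float] = None,                  # m/s
                 acceleration: Optional[Tuple[float, float, float]] = None):  # m/s^2
        self.timestamp = timestamp
        self.position = position
        self.velocity = velocity
        self.acceleration = acceleration

    def channels(self) -> set:
        """Report which PVA channels this sample actually carries."""
        present = set()
        if self.position is not None:
            present.add("position")
        if self.velocity is not None:
            present.add("velocity")
        if self.acceleration is not None:
            present.add("acceleration")
        return present

# An accelerometer-only source still yields valid PVA data:
imu_sample = PVASample(timestamp=0.0, acceleration=(0.1, -0.2, 9.8))
```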
[0012] One or more instances of the method 100 and/or processes
described herein can be performed asynchronously (e.g.,
sequentially), concurrently (e.g., in parallel; concurrently on
different threads for parallel computing to improve system
processing ability for determining device association conditions,
and/or other suitable data; etc.), in temporal relation to a
trigger event, and/or in any other suitable order at any suitable
time and frequency by and/or using one or more instances of the
system 200, elements, and/or entities described herein.
Additionally or alternatively, data described herein (e.g.,
movement data, vehicle movement characteristics, environmental
surrounding characteristics, security risk data,
geographically-dependent characteristics, etc.) can be associated
with any suitable temporal indicators (e.g., seconds, minutes,
hours, days, weeks, etc.; temporal indicators indicating when the
data was collected, determined and/or otherwise processed; temporal
indicators providing context to content described by the data, such
as temporal indicators indicating the time at which a vehicle
exhibited the vehicle movement characteristics associated with the
temporal indicators; etc.) and/or change in temporal indicators
(e.g., data over time; change in data; data patterns; data trends;
data extrapolation and/or other prediction; etc.). However, the
method 100 and/or systems can be configured in any suitable
manner.
[0013] The method 100 is preferably implemented using autonomous
vehicle systems, computing systems, and elements that are in
communication with and/or integrated with one or more autonomous
vehicles undergoing risk-modeling and/or response modification
according to the method 100. In variations, the autonomous vehicle
system(s) and/or computing system(s) can include or communicate
with one or more of: onboard computing systems integrated with the
vehicle(s), onboard sensor modules integrated with the vehicle(s),
onboard output devices (e.g., speakers, displays, haptic systems,
vehicle internal environment control systems, etc.) integrated with
the vehicle(s), computing systems of devices (e.g., wearable
computing devices, mobile computing devices, etc.) within or near
the vehicle(s), sensor modules of devices (e.g., wearable computing
devices, mobile computing devices, etc.) within or near the
vehicle(s), output devices of
devices (e.g., wearable computing devices, mobile computing
devices, etc.) within or near the vehicle(s), remote servers,
remote computing systems, cloud-based computing systems, and/or any
other suitable computing/sensor elements.
[0014] In one embodiment, as shown in FIGS. 2A and 2B, a system 200
associated with the method 100 can include one or more of: an
autonomous vehicle 210; a sensor system associated with the
autonomous vehicle and comprising a first set of vehicle system
sensors 221 and a second set of vehicle environment sensors 222; a
supplementary data extraction system 230 for retrieving
geographically-dependent driving information; and a security
diagnostic system 240 of the autonomous vehicle; wherein the system
is operable between: a risk analysis mode wherein the system
generates a set of models from the sensor system, the supplementary
data extraction system, and the security diagnostic system,
processes parameters of an identified driving mission with the set
of models, and outputs a risk analysis according to a rule-based
algorithm, and a response action mode that transforms the risk
analysis into a vehicle control schematic and contemporaneously
with execution of the identified driving mission, implements the
vehicle control schematic with control interfaces of the autonomous
vehicle.
[0015] The system 200 can additionally or alternatively include
elements of the system(s) described in one or more of: U.S.
application Ser. No. 14/566 titled "System and Method for Assessing
Risk Through a Social Network" and filed on 10 Dec. 2014, U.S.
application Ser. No. 14/206,721 titled "System and Method for
Determining a Driver in a Telematic Application" and filed on 12
Mar. 2014, U.S. application Ser. No. 15/243,513 titled "Method for
Accelerometer-Assisted Navigation" and filed on 22 Aug. 2016, U.S.
application Ser. No. 15/243,565 titled "Method for Smartphone-Based
Accident Detection" and filed on 22 Aug. 2016, and U.S. application
Ser. No. 15/702,601 titled "Method for Mobile Device-Based
Cooperative Data Capture" and filed on 12 Sep. 2017, which are
herein incorporated in their entireties by this reference. However,
the method 100 can additionally or alternatively be implemented
using any other suitable system(s).
2. Benefits.
[0016] In specific examples, the method 100 and/or system 200 can
confer several benefits over conventional methodologies. In
specific examples, the method 100 and/or system 200 can perform one
or more of the following:
[0017] First, the technology can confer improvements in
computer-related technology (e.g., vehicle telematics,
computational modeling associated with vehicle movement
characteristics, etc.) by leveraging non-generic location data
(e.g., GPS data), motion data (e.g., accelerometer data, gyroscope
data), and/or other suitable data from a plurality of mobile
devices (e.g., non-generalized mobile devices), sensor systems
associated with the vehicle and/or surroundings of the vehicle,
security diagnostic systems, and any other suitable systems to
improve accuracy of risk assessments related to driving missions of
the autonomous vehicle and/or vehicle movement characteristics
(e.g., which can thereby enable appropriately generated and/or
timed user-related actions). In examples, the technology can confer
improvements in the application of such technology by enabling
convenient and unobtrusive autonomous vehicle control, as well as
improved autonomous vehicle control over time, through passively
collected data. As such, the technology can be applied, in
examples, without requiring a user to purchase additional
hardware.
[0018] Second, the technology can automatically initiate autonomous
vehicle operation modifications and/or user-related actions based
on the improved accuracy in risk assessments (e.g., which can be
determined in substantially real-time). For example, correction of
deviations in proper vehicle operation can be initiated, using
interfaces to control systems of the autonomous vehicle, in
response to detection of one or more risks surpassing defined
thresholds.
[0019] Third, the technology can provide technical solutions
necessarily rooted in computer technology (e.g., utilizing
different computational models to determine vehicle operation risk
analyses and movement characteristics based on data streams from
sensor systems, etc.) to overcome issues specifically arising with
computer technology (e.g., issues surrounding how to leverage a
plurality of autonomous vehicle-associated datastreams collected by
a plurality of computing devices; issues surrounding accurately and
appropriately initiating response actions for different vehicular
situations; etc.). In another example, the technology can apply
computer-implemented rules (e.g., feature engineering rules for
processing movement data and/or supplementary data into an operable
form for generating features; operator preference rules for
determining the types of data that are permitted to be collected;
data collection and/or processing rules for data from associated
computing devices, mobile devices, sensor systems; etc.).
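As a minimal, hypothetical sketch of one such computer-implemented rule type (all names and preference values here are invented for illustration), operator preference rules could gate which data streams are permitted to be collected:

```python
# Hypothetical sketch of the operator preference rules in paragraph [0019]:
# only data types the operator has permitted are retained for collection.
def apply_collection_rules(available_streams: dict, preferences: dict) -> dict:
    """Keep only the data streams that the operator preferences permit."""
    return {name: stream for name, stream in available_streams.items()
            if preferences.get(name, False)}  # default: not permitted

# Example streams and preferences (illustrative only):
streams = {"gps": [], "microphone": [], "accelerometer": []}
prefs = {"gps": True, "accelerometer": True, "microphone": False}
permitted = apply_collection_rules(streams, prefs)
```

Feature engineering rules and data collection/processing rules could be expressed analogously, as predicates or transforms applied to each incoming stream before it reaches the risk models.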
[0020] Fourth, the technology can confer improvements in the
functioning of computational systems themselves. For example, the
technology can improve upon the processing of collected non-generic
data (e.g., stored in association with personalized user accounts
as a cohesive database system) by improving storage and retrieval
of different models tailored for device association condition
determination and/or vehicle operation risk characteristic
determination (e.g., selecting and retrieving risk analysis model
components indicated above and described below, etc.).
[0021] Fifth, the technology can amount to an inventive
distribution of functionality across a network including a
plurality of devices, vehicle computing systems, mobile devices
associated with a moving vehicle, and a vehicle movement
determination system (e.g., a remote computing system, etc.),
where, for example, movement data collection functionality can be
performed by the plurality of mobile devices, and determination of
device association conditions, vehicle movement characteristics,
and/or other data can be determined by the vehicle movement
determination system. In examples, functionality associated with
portions of the method 100 can be distributed in any suitable
manner (e.g., where one or more devices or computing subsystems of
the system can determine risk analysis parameter values, vehicle
movement characteristics, response actions, etc.).
[0022] Sixth, the technology can leverage specialized computing
devices (e.g., an autonomous vehicle computing platform including a
network of mobile devices with GPS location capabilities; sensor
devices with varied functionalities, etc.) to collect specialized
datasets for subsequent processing.
[0023] Seventh, adaptations of the technology can benefit
non-autonomous or semi-autonomous vehicle operation applications.
In an example, driving guidance (e.g., to improve safety, fuel
efficiency, etc.) can be provided in real-time to a driver at a
driver mobile device (e.g., classified as associated with the
driver based on data collected from a plurality of mobile devices,
the vehicle system platform, etc.) based on the risk assessments.
In another example, improved insurance processing can be
facilitated by the improved accuracy of vehicle movement
characteristics describing the movement of the vehicle and/or other
vehicles.
[0024] The technology can, however, provide any other suitable
benefit(s) in the context of using non-generalized computer systems
for determining, using, and/or otherwise processing vehicle
movement characteristics.
3. Method
3.1 Method--Behavioral Risk Analysis Using Comparative Autonomous
Model
[0025] Block S110 recites: evaluating behavioral risk features
according to a comparative autonomous model, which functions to
incorporate features associated with more intense driving maneuvers
into risk evaluations of the autonomous vehicle. As such, Block
S110 can include collecting data and processing data with the
comparative autonomous model to produce one or more outputs (e.g.,
parameter values) that can be further transformed into the risk
analysis in downstream portions of the method 100. Block S110
preferably includes implementing a comparative autonomous model
that compares autonomous vehicle behavior to human driver behavior
(e.g., an "average human" behavior determined upon analyzing
responses of a population of humans) across different driving
maneuvers. Preferably, the driving maneuvers of Block S110 include
driving maneuvers having a higher degree of severity or other
kinematic characteristics surpassing a certain threshold (e.g.,
driving above a specific velocity threshold, abrupt changes in
velocity, abrupt changes in direction, etc.) in relation to
magnitude and/or changes in position, velocity, and/or acceleration
of the autonomous vehicle. However, the driving maneuvers of Block
S110 can additionally or alternatively include driving maneuvers
having a lower degree of severity (e.g., driving below a
specific velocity threshold, non-abrupt changes in velocity,
non-abrupt changes in direction, etc.).
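The severity-threshold notion above can be sketched as a simple predicate over kinematic characteristics; the threshold values and function name below are hypothetical, not values from the disclosure:

```python
# Hypothetical sketch of Block S110's maneuver-severity classification:
# a maneuver is treated as higher-severity when any kinematic
# characteristic surpasses a defined threshold.
SPEED_THRESHOLD = 33.0     # m/s (~120 km/h): driving above a velocity threshold
ACCEL_THRESHOLD = 4.0      # m/s^2: abrupt change in velocity
YAW_RATE_THRESHOLD = 0.5   # rad/s: abrupt change in direction

def is_high_severity(speed: float, accel_magnitude: float,
                     yaw_rate: float) -> bool:
    """True if any kinematic threshold is surpassed."""
    return (speed > SPEED_THRESHOLD
            or accel_magnitude > ACCEL_THRESHOLD
            or abs(yaw_rate) > YAW_RATE_THRESHOLD)
```

A comparative autonomous model could then evaluate autonomous-vehicle and human-driver PVA traces separately within the maneuvers this predicate selects.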
[0026] In relation to Block S110, the method 100 can include or
otherwise be associated with Block S10, which recites: generating a
comparative autonomous model that compares autonomous vehicle
operation to human driving in a set of driving scenarios. In
relation to model generation, Blocks S10 and S110 can include
extracting PVA data associated with different higher intensity
driving maneuvers (or other driving maneuvers, as indicated above).
As such, Block S110 can include collecting one or more of: vehicle
position data, vehicle velocity data, vehicle acceleration data,
from one or more position/velocity/acceleration (PVA) data
collecting modules (e.g., GPS modules, beacon systems, Bluetooth
units, accelerometers, gyroscopes, optical sensors, radar units,
LiDAR units, microphones, etc.) of systems integrated with or
otherwise associated with the vehicle as described above.
Extraction can be implemented by accessing data derived from mobile
devices on-board or otherwise associated with the vehicle, as
described in one or more of the incorporated applications.
Additionally or alternatively, extraction can be implemented
directly using computing interfaces to one or more sensor
subsystems indicated above, or indirectly through software
programming interfaces (e.g., application programming interfaces,
etc.).
[0027] Generation of the comparative autonomous model can include
implementing approaches including or derived from one or more of
the following: probabilistic properties, heuristic properties,
deterministic properties, and/or other portions of the method 100,
and/or any other suitable properties. Models described herein can
employ one or more of: supervised learning (e.g., using logistic
regression, using back propagation neural networks, using random
forests, decision trees, etc.), unsupervised learning (e.g., using
an Apriori algorithm, using K-means clustering), semi-supervised
learning, reinforcement learning (e.g., using a Q-learning
algorithm, using temporal difference learning), a regression
algorithm (e.g., ordinary least squares, logistic regression,
stepwise regression, multivariate adaptive regression splines,
locally estimated scatterplot smoothing, etc.), an instance-based
method (e.g., k-nearest neighbor, learning vector quantization,
self-organizing map, etc.), a regularization method (e.g., ridge
regression, least absolute shrinkage and selection operator,
elastic net, etc.), a decision tree learning method (e.g.,
classification and regression tree, iterative dichotomiser 3, C4.5,
chi-squared automatic interaction detection, decision stump, random
forest, multivariate adaptive regression splines, gradient boosting
machines, etc.), a Bayesian method (e.g., naive Bayes, averaged
one-dependence estimators, Bayesian belief network, etc.), a kernel
method (e.g., a support vector machine, a radial basis function, a
linear discriminant analysis, etc.), a clustering method (e.g.,
k-means clustering, expectation maximization, etc.), an association
rule learning algorithm (e.g., an Apriori algorithm, an Eclat
algorithm, etc.), an artificial neural network model (e.g., a
Perceptron method, a back-propagation method, a Hopfield network
method, a self-organizing map method, a learning vector
quantization method, etc.), a deep learning algorithm (e.g., a
restricted Boltzmann machine, a deep belief network method, a
convolution network method, a stacked auto-encoder method, etc.), a
dimensionality reduction method (e.g., principal component
analysis, partial least squares regression, Sammon mapping,
multidimensional scaling, projection pursuit, etc.), an ensemble
method (e.g., boosting, bootstrapped aggregation, AdaBoost, stacked
generalization, gradient boosting machine method, random forest
method, etc.), and/or any suitable form of machine learning
algorithm. As such, in variations, model generation can include
extracting data, developing an architectural structure of the
model, and training the model with training data to improve the
model.
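By way of illustration only (and not as a limitation of the method 100), the three model-generation steps named above — extracting data, developing an architectural structure, and training with training data — could be sketched as follows. The feature names, scaling, labels, and maneuver values are hypothetical, and the single-neuron perceptron stands in for any of the supervised learning methods listed:

```python
# Illustrative sketch of model generation: (1) extract PVA-derived
# features, (2) define a model architecture, (3) train on labeled data.
# All feature names and thresholds are hypothetical.

def extract_features(maneuver):
    """Step 1: extract PVA features; yaw rate is scaled so both
    features share a comparable range (a preprocessing assumption)."""
    return [maneuver["peak_accel_g"], maneuver["peak_yaw_deg_s"] / 100.0]

def train_perceptron(samples, labels, epochs=50, lr=0.1):
    """Steps 2-3: a single-neuron architecture trained by the
    perceptron rule to separate high-intensity (1) from
    lower-intensity (0) maneuvers."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    """Classify a maneuver from its extracted features."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Hypothetical training maneuvers (peak acceleration in g, peak yaw
# rate in deg/s) with high-intensity labels.
maneuvers = [
    {"peak_accel_g": 0.9, "peak_yaw_deg_s": 40.0},  # hard swerve
    {"peak_accel_g": 0.8, "peak_yaw_deg_s": 5.0},   # hard braking
    {"peak_accel_g": 0.1, "peak_yaw_deg_s": 3.0},   # gentle cruising
    {"peak_accel_g": 0.2, "peak_yaw_deg_s": 8.0},   # routine lane change
]
labels = [1, 1, 0, 0]
X = [extract_features(m) for m in maneuvers]
w, b = train_perceptron(X, labels)
```

In practice, any of the algorithm families enumerated in paragraph [0027] could occupy the "architecture" and "training" steps; the pipeline shape is what the sketch is meant to show.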
[0028] Block S110 preferably includes collecting data informative
of magnitude and changes (relative to time, relative to location,
relative to other factors) in PVA components; however, Block S110
can include collecting any other suitable data that can be used to
assess driving maneuvers in order to compare autonomous vehicles to
an "average human" benchmark. Block S110 can additionally or
alternatively implement methods for PVA data collection described
in U.S. application Ser. No. 14/206,721 filed on 12 Mar. 2014 and
titled "System and Method for Determining a Driver in a Telematic
Application", U.S. application Ser. No. 15/243,513 filed on 22 Aug.
2016 and titled "Method for Accelerometer-Assisted Navigation",
and/or U.S. application Ser. No. 15/243,565 filed on 22 Aug. 2016
and titled "Method for Smartphone-based Accident Detection", which
are each incorporated in their entireties by this reference.
[0029] In relation to the driving maneuvers assessed, Blocks S10
and S110 can additionally or alternatively include collecting data
associated with factors (e.g., environmental factors, human
factors, vehicle system factors, etc.) that triggered performance
of such maneuvers. In variations, collecting data associated with
environmental factors can include implementing one or more of:
optical sensors (e.g., camera modules, infrared sensors, etc.),
radar modules, LiDAR modules, pressure sensors, moisture sensors,
temperature sensors, any other suitable sensors, vehicle subsystem
states (e.g., windshield wiper states, AC/heater states, lighting
system states, cruise control system states, gear shifting states,
overdrive states, etc.), and any other suitable component that can
be used to detect weather and/or road conditions conducive to
triggering different driving behaviors. In specific examples, Block
S110 can include processing data components from multiple sources
to extract and determine a weather condition factor (e.g., rain,
fog, high temperatures, etc.) and/or road condition factor (e.g.,
winding roads, rough roads, etc.) and associate such determined
conditions with the driving maneuver response.
[0030] Additionally or alternatively, in variations, collecting
data associated with human factors can include implementing one or
more of: optical sensors (e.g., camera modules, infrared sensors,
etc.), radar modules, LiDAR modules, motion sensors, vehicle
subsystem states, states of devices (e.g., mobile computing
devices, wearable computing devices, biometric monitoring devices,
etc.) and/or digital applications executing on such devices within
the vehicle, and any other suitable component that can be used to
detect human factors (e.g., pedestrian factors, vehicle occupant
factors) conducive to triggering different driving behaviors. In
specific examples, Block S110 can include processing data
components from multiple sources to extract and determine a human
factor (e.g., pedestrian/animal obstacle, vehicle occupant
distraction state, etc.) and associate such determined conditions
with the driving maneuver response.
[0031] Additionally or alternatively, in variations, collecting
data associated with vehicle system factors can include
implementing an on-board diagnostic (OBD) module including hardware
components from which vehicle subsystem states can be assessed to
identify vehicle system factors conducive to triggering different
driving behaviors. In variations, the OBD module can have any of
the following interfaces with the vehicle: an assembly line
diagnostic link (ALDL) interface, an OBD-1 interface, an OBD-1.5
interface, an OBD-II interface, a European OBD interface, a
Japanese OBD interface, an ADR79 interface, and any other suitable
interface that allows access and detection of vehicle subsystem
statuses. In specific examples, the OBD modules can include one of:
an Automatic OBD adaptor (i.e., from Automatic Labs) and a
Metromile OBD adaptor (i.e., from Metromile, Inc.) that plugs into
an OBD port of the vehicle and communicates with a mobile computing
device of the computing system (e.g., by way of a Bluetooth
connection, by way of any other wireless connection, by way of a
wired connection, etc.) in transmitting an output associated with a
vehicle subsystem state.
[0032] In relation to rating autonomous vehicle behavior against
human behavior in different high intensity driving maneuvers, the
comparative autonomous model implemented in Block S110 can generate
comparisons across different metrics related to one or more of:
safety (e.g., outcome of maneuver), vehicle system load, proper
selection of an implemented maneuver from various alternative
maneuvers, and any other suitable metric for evaluating performance
of a high intensity maneuver. For instance, in relation to safety,
Block S110 can include evaluating vehicle occupant harm (e.g., in
terms of injury presence, in terms of an injury severity metric,
etc.) and/or damage to non-vehicle occupants (e.g., other traffic,
pedestrians, property, etc.) in terms of metrics (e.g., damage
valuation metric, injury treatment cost metric, etc.) due to the
driving maneuver for both the autonomous vehicle and for the
"average human" benchmark. Then Block S110 can compare values of
the metric between the autonomous vehicle and the "average human"
benchmark.
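By way of illustration only, the safety-metric comparison described above could be sketched as follows; the function name, the use of a mean, and the damage-valuation figures are all assumptions:

```python
def compare_safety_metric(av_values, human_values):
    """Compare a per-maneuver safety metric (e.g., a damage valuation
    or injury treatment cost) between the autonomous vehicle and the
    "average human" benchmark. Returns the ratio of the mean AV value
    to the mean benchmark value; ratios below 1.0 favor the
    autonomous vehicle."""
    av_mean = sum(av_values) / len(av_values)
    human_mean = sum(human_values) / len(human_values)
    return av_mean / human_mean

# Hypothetical damage-valuation outcomes (USD) for the same class of
# driving maneuver, AV versus human-driven.
ratio = compare_safety_metric([120.0, 0.0, 80.0], [400.0, 150.0, 50.0])
```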
[0033] Additionally or alternatively, in relation to vehicle system
load, Block S110 can include evaluating stress on vehicle
subsystems (e.g., forces incurred by different vehicle mechanical
subsystems, electrical load incurred by different vehicle
electronic subsystems, processing load incurred by different
vehicle computing subsystems) due to the driving maneuver for both
the autonomous vehicle and for the "average human" benchmark. Then
Block S110 can compare values of the metric between the autonomous
vehicle and the "average human" benchmark.
[0034] Additionally or alternatively, in relation to maneuver
selection, Block S110 can include evaluating appropriateness of
different maneuver options that could have been used to respond to
the triggering event, for both the autonomous vehicle and for the
"average human" benchmark. In an example of a road obstruction,
Block S110 can include ranking maneuvers (e.g., swerving, running
through the obstacle, braking abruptly, etc.), and determining a
value of a metric for maneuver appropriateness based upon the
ranking of the selected maneuver. Then Block S110 can compare
values of the metric between the autonomous vehicle and the
"average human" benchmark.
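By way of illustration only, the rank-based appropriateness metric described above could be sketched as follows; the linear rank-to-score mapping and the maneuver ranking for the road-obstruction example are assumptions:

```python
def maneuver_appropriateness(ranked_maneuvers, selected):
    """Score a selected maneuver by its position in a ranked list of
    alternatives (best first): 1.0 for the top-ranked option,
    decreasing linearly to 1/n for the worst-ranked option."""
    n = len(ranked_maneuvers)
    rank = ranked_maneuvers.index(selected)  # 0 = best-ranked option
    return (n - rank) / n

# Hypothetical ranking of responses to a road obstruction, best first.
options = ["brake abruptly", "swerve", "run through obstacle"]
av_score = maneuver_appropriateness(options, "brake abruptly")
human_score = maneuver_appropriateness(options, "swerve")
```

The two scores could then be compared between the autonomous vehicle and the benchmark in the same manner as the other metrics.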
[0035] The output(s) of the comparative autonomous model provided
by Block S110 can additionally or alternatively be derived from
methods described in U.S. application Ser. No. 14/206,721 filed on
12 Mar. 2014 and titled "System and Method for Determining a Driver
in a Telematic Application" and/or U.S. application Ser. No.
15/243,513 filed on 22 Aug. 2016 and titled "Method for
Accelerometer-Assisted Navigation", which are herein incorporated
in their entireties by this reference, or in any other suitable
manner.
3.2 Method--Mixed Traffic Analysis Using Mix Model
[0036] Block S120 recites: evaluating mixed traffic features from a
mix model, which functions to incorporate features associated with
mixed traffic factors into risk evaluations of the autonomous
vehicle, and to ultimately evaluate the ability of the autonomous
vehicle to handle mixed-traffic situations. As such, Block S120 can
include collecting data and processing data with the mix model to
produce one or more outputs (e.g., parameter values) that can be
further transformed into the risk analysis in downstream portions
of the method 100. Block S120 preferably evaluates the ability of
the autonomous vehicle to appropriately function within mixed
traffic involving two or more of: human drivers of vehicles,
computer drivers of vehicles, pedestrians, cyclists, and
motorcyclists. Block S120 can additionally or alternatively
evaluate the ability of the autonomous vehicle to appropriately
function within mixed traffic involving: different vehicle types
(e.g., compact cars, mid-size, large, SUV,
convertible, sedan, truck, commercial car, carts, etc.), non-human
powered vehicles, human-powered transportation modes (e.g.,
bicycles, skateboards, skates, etc.), and/or any other suitable
type of mixed traffic.
[0037] In relation to Block S120, the method 100 can include or
otherwise be associated with Block S20, which recites: generating a
mix model characterizing operation of the autonomous vehicle in
mixed-traffic driving scenarios. In relation to model generation,
Blocks S20 and S120 can include extracting data associated with
mixed traffic vehicle operations. In Blocks S20 and S120, ability
to handle mixed traffic is preferably evaluated by the mix model
with collection of PVA data, OBD data, sensor data, and/or any
other suitable data (e.g., provided by a human occupant) that is
indicative of vehicle operation with surrounding traffic, collision
occurrence, collision avoidance, near-collision occurrence,
near-collision avoidance, transitions in state from a
computer-driven state to a human-driven state (e.g., from operator
state-recording units of the autonomous vehicle), and any other
suitable mixed traffic-associated vehicle operation. Similar to
Block S110, generation of the mix model can include implementing
machine learning and/or other described approaches for model
generation. In variations, model generation can thus include
extracting data, developing an architectural structure of the
model, and training the model with training data to improve the
model.
[0038] In relation to data collection, Blocks S20 and/or S120 can
include recording and/or determining human factors or environmental
factors associated with a mixed traffic incident. Furthermore, in
relation to the mixed traffic situations encountered, Block S120
can include determining qualities of the mixed traffic situation,
in order to provide more context for appropriateness of autonomous
vehicle responses in the mixed traffic conditions. For instance,
Block S120 can include determining distribution of vehicle types
(e.g., computer-driven vehicles vs. human-driven vehicles, vehicles
vs. pedestrians, vehicles vs. mopeds, etc.) from image/video data
of the location (e.g., satellite image data, drone data),
determining density of vehicles in a given area, determining
quality of infrastructure (e.g., existence of bike lanes, road
narrowness, road physical condition, etc.) that would affect mixed
traffic interactions, and/or any other suitable factor that
provides context for the mixed traffic driving situation.
[0039] Block S120 preferably includes evaluating positive and
negative events associated with the autonomous vehicle in mixed
traffic situations (e.g., based on tagging of data events
associated with mixed traffic situations), in order to determine
the ability and/or ultimately improve the ability of the autonomous
vehicle to handle such mixed traffic situations. In examples,
positive incidents associated with a mixed traffic situation can
include one
or more of: successful avoidance of a collision with another
vehicle/pedestrian/animal/human-powered transportation (e.g.,
determined by a lack of PVA or other sensor data indicative of a
collision), successful avoidance of a near-collision with another
vehicle/pedestrian/animal/human-powered transportation (e.g.,
determined by a lack of PVA or other sensor data indicative of a
near-collision in combination with video/audio data indicative of a
near-collision situation), performance of a driving maneuver
appropriate for the mixed traffic situation successfully without
need or desire for human-override, and any other positive incident.
In examples, negative incidents associated with a mixed traffic
situation can include one or more of: unsuccessful avoidance of a
collision with another vehicle/pedestrian/animal/human-powered
transportation (e.g., determined by PVA or other sensor data
indicative of a collision), unsuccessful avoidance of a
near-collision with another vehicle/pedestrian/animal/human-powered
transportation (e.g., determined by PVA or other sensor data
indicative of a near-collision in combination with video/audio data
indicative of a near-collision situation), performance of a driving
maneuver inappropriate for the mixed traffic situation,
transitioning to a human-overridden state due to delayed response
to a situation requiring a response, and any other negative
incident.
[0040] In evaluating autonomous vehicle ability in Block S120,
Block S120 can include determining values of metrics (e.g., number
of positive incidents, number of negative incidents, severity of
negative incidents, appropriateness of positive incidents, etc.) in
relation to a time variable and/or in relation to a distance
variable. For instance, Block S120 can include collecting data,
processing data to determine metrics associated with mixed traffic
events, and then determining values of metrics in relation to time
(e.g., per hour of driving, per day of driving, per month of
driving, per year of driving), in relation to frequency (e.g.,
average amount of time between positive incidents, average amount
of time between negative incidents, etc.), in relation to distance
(e.g., per mile driven, per 10 miles driven, per 100 miles driven,
etc.), and/or any other suitable variable.
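By way of illustration only, the time- and distance-normalized metric values described above could be computed as follows; the tallies and the 100-mile normalization window are assumptions:

```python
def incident_rates(positive, negative, hours, miles):
    """Express mixed-traffic incident counts relative to time and
    distance, plus the average spacing between negative incidents."""
    return {
        "positive_per_hour": positive / hours,
        "negative_per_hour": negative / hours,
        "negative_per_100_miles": 100.0 * negative / miles,
        "hours_between_negatives": hours / negative if negative else float("inf"),
    }

# Hypothetical tallies over one month of autonomous operation.
rates = incident_rates(positive=240, negative=3, hours=120.0, miles=2400.0)
```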
[0041] However, Block S120 can include evaluation of mixed traffic
handling by the autonomous vehicle in any other suitable manner,
and variations of mixed traffic situations and collision detection
are further described in U.S. application Ser. No. 15/243,565 filed
on 22 Aug. 2016 and titled "Method for Smartphone-based Accident
Detection".
3.3 Method--Geography-Dependent Analysis Using Geography-Dependent
Model
[0042] Block S130 recites: determining geography-dependent
behavioral features with a geography-dependent model associated
with traffic interactions involving vehicles, which functions to
incorporate human behavioral components into risk evaluations of
the autonomous vehicle. As such, Block S130 can include collecting
data and processing data with the geography-dependent model to
produce one or more outputs (e.g., parameter values) that can be
further transformed into the risk analysis in downstream portions
of the method 100. Block S130 preferably implements a supplementary
data extraction system (e.g., implementing an application
programming interface to access such geography-dependent driving
behaviors, etc.), but can additionally or alternatively include
detecting such behaviors and habits within a geographic location
using one or more sensors (e.g., optical sensors that provide image
and video data of behaviors, etc.). Block S130 can, however, be
implemented in any other suitable manner.
[0043] In relation to Block S130, the method 100 can include or
otherwise be associated with Block S30, which recites: generating a
geography-dependent model S30 characterizing geographic
location-specific acceptable driving behaviors. In relation to
model generation, Blocks S30 and S130 can include generating and
implementing a model accounting for local driving, traffic, and/or
pedestrian habits including one or more of: geographically-driven
human driving behaviors, geographically-driven pedestrian
behaviors, and/or any other suitable geographically or
culturally-dependent factors associated with vehicle operation, in
relation to evaluation of risk and vehicle performance. In
variations, Blocks S30 and S130 can include extracting data
identifying the location of the autonomous vehicle as described
above (e.g., by implementing system components interfacing with or
otherwise including GPS components, components useful for
triangulation, etc.); based upon the location of the autonomous
vehicle, extracting and aggregating characteristics of
geographically-driven driving-related behaviors; and evaluating
performance of the autonomous vehicle against those
characteristics. Similar to other portions of the method 100,
generation of the geography-dependent model can include
implementing machine learning and/or other described approaches for
model generation. In variations, model generation can thus include
extracting data, developing an architectural structure of the
model, and training the model with training data to improve the
model.
[0044] Blocks S30 and S130 can thus include extracting and
processing characteristics of geographically-driven driving
"norms", and evaluating the ability of the autonomous vehicle to
drive appropriately according to the characteristics across
different metrics. In specific examples, characteristics of
geographically-driven driving "norms" can include one or more of:
right-of-way determination (e.g., right-of-way priority for
vehicles, right-of-way priority for marked vehicles, right-of-way
priority for pedestrians, right-of-way priority for scooters,
right-of-way in turns, etc.); cutting-off-vehicle behavior (e.g.,
cutting off drivers is more appropriate in some Asian countries
than in the U.S., etc.); tailgating vehicle behavior; traffic
sign/signal observation; lane observation; round-about driving
behavior; cultural aspects of saving face applied to driving
behavior; public transportation infrastructure integrated with
roads (e.g., tracks that cross or are adjacent to driving roads);
level of aggression in driving behavior; driving lane configuration
(e.g., driving on the right vs. left side of the road, etc.); and
any other suitable characteristics of geography-dependent driving
behavior.
[0045] Block S130 can additionally or alternatively include
aggregating characteristics of geographically-driven pedestrian
behavior "norms", and evaluating the ability of the autonomous
vehicle to drive appropriately according to the characteristics
across different metrics. In specific examples, characteristics of
geographically-driven pedestrian behavior "norms" can include one
or more of:
eye contact behavior between drivers and pedestrians (e.g., eye
contact establishes cordiality, etc.); right-of-way determination
(e.g., right-of-way priority for vehicles vs. right-of-way priority
for pedestrians or right-of-way priority for scooters, etc.);
pedestrian/bicycle accommodating infrastructure factors; pedestrian
density factors; pedestrian number; traffic sign/signal observation
in relation to pedestrians; level of aggression in driving behavior
with pedestrian interactions; and any other suitable
characteristics of geography-dependent pedestrian behavior.
[0046] In evaluating performance of the autonomous vehicle against
those characteristics in Block S130, according to the
geography-dependent model, data for evaluation can be acquired from
one or more of: proximity sensors (e.g., to determine proximity of
the autonomous vehicle to other objects), optical sensors with
machine vision systems to assess pedestrian behaviors (e.g.,
perceived eye contact with the "driver" of the vehicle) or
behaviors of other vehicles/modes of transportation; optical
sensors with machine vision systems to assess nearby object/entity
types (e.g., adjacent scooters, adjacent bicycles, adjacent trucks,
adjacent pedestrians, etc.); sensors to detect maneuvers of nearby
vehicles/objects/entities; force sensors (e.g., to detect any
near-collision/collision events); audio sensors (e.g., to detect
sounds indicative of interactions between the autonomous vehicle
and other entities/objects, to detect vehicle horn usage, etc.);
motion/position sensors (e.g., to detect characteristics of vehicle
maneuvers and/or collision-related events); wearable computing
devices; biometric monitoring devices (e.g., to detect
physiological states of individuals involved in a traffic
situation, to detect cognitive states of individuals involved in a
traffic situation, etc.); and/or any other suitable sensors/sensor
systems.
[0047] In variations, metrics for evaluation can be derived from or
associated with one or more of: proximity metrics (e.g., proximity
of the autonomous vehicle to other objects/entities during a
maneuver, proximity of the autonomous vehicle to other
objects/entities while not maneuvering, etc.) and evaluation of
values of proximity metrics within ranges acceptable for the
geographic location's driving behaviors; PVA-derived metrics
associated with driving aggression (e.g., abruptness in changes in
position, velocity, or acceleration of the vehicle); lane
position-derived metrics (e.g., number of times lanes are crossed
while driving straight ahead, distance to lane boundaries, etc.);
number/frequency of collision incidents per unit distance or time;
number/frequency of near-collision incidents per unit distance or
time; metrics indicative of interruption of "normal" traffic flow
for the geographic region (e.g., number of times other vehicles or
entities behaved abnormally for the geographic region due to a
behavior of the autonomous vehicle); and any other suitable
metrics.
[0048] In one specific example, optical sensor systems can be used
to identify that a pedestrian has attempted to make eye contact
with the vehicle's driver (e.g., using eye tracking modules), and
depending upon the location of the vehicle, the appropriateness of
the maneuver performed by the vehicle after the eye contact attempt
can be assessed (e.g., PVA metric values associated with vehicle
movement and indicative of vehicle right-of-way, PVA metric values
associated with vehicle stopping and indicative of pedestrian
right-of-way, etc.). In another specific example, Block S130 can
include determining an average proximity between the autonomous
vehicle and surrounding vehicles during lane changes, in relation
to geographic regions where cutting off other vehicles is
acceptable/unacceptable. However, other examples of evaluation
against metrics can be conducted in any other suitable manner.
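By way of illustration only, the lane-change proximity example above could be evaluated against region-specific acceptable ranges as follows; the region names and range values are hypothetical stand-ins for outputs of the geography-dependent model:

```python
# Hypothetical region-specific acceptable gaps (meters) between the
# autonomous vehicle and a surrounding vehicle during a lane change.
# Values are illustrative only, reflecting regions where closely
# cutting off other vehicles is more or less acceptable.
REGION_PROXIMITY_RANGES = {
    "us": (2.0, float("inf")),
    "asia_dense_urban": (0.8, float("inf")),
}

def proximity_within_norm(region, observed_gap_m):
    """Check an observed lane-change gap against the geographic
    region's acceptable range."""
    lo, hi = REGION_PROXIMITY_RANGES[region]
    return lo <= observed_gap_m <= hi
```

A maneuver that falls outside the range for its region would contribute to the metrics indicative of interruption of "normal" traffic flow described in paragraph [0047].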
[0049] Block S130 can additionally or alternatively include
generating a traffic behavior-vehicle fit parameter for the
autonomous vehicle according to the geography-dependent model,
which provides a method for quantifying how appropriate vehicle
physical features, vehicle aesthetic features, and/or driving
behaviors are for cultural norms associated with a particular
geographical region. Such fit parameters can account for one or
more of: color of the vehicle (e.g., in relation to color symbolism
in some cultures); interior styling of the vehicle (e.g., in
relation to perceptions of quality); exterior styling of the
vehicle (e.g., in relation to perceptions of luxury, in relation to
perceptions of modern-ness, etc.); size of the vehicle;
maximum/minimum seating capacity of the vehicle; safety features of
the vehicle; interior environmental control aspects of the vehicle
(e.g., AC in hotter climates, heating in colder climates, etc.),
and any other suitable features affecting adoption in different
geographical locations.
[0050] Evaluations of geography-dependent driving behavior can,
however, include any other suitable parameters/metrics and be
conducted in any other suitable manner.
3.4 Method--Sensor-Surrounding Analysis Using Sensor-Surrounding
Model
[0051] Block S140 recites: determining a sensor-surrounding risk
parameter upon evaluating sensor systems of the autonomous vehicle
and environmental conditions surrounding the autonomous vehicle
with a sensor-surrounding model, which functions to enable risk
evaluations of the autonomous vehicle to incorporate sensor
input/output deviations associated with different environmental
conditions. As such, Block S140 can include collecting data and
processing data with the sensor-surrounding model to produce
one or more outputs (e.g., parameter values) that can be further
transformed into the risk analysis in downstream portions of the
method 100. Block S140 can include identifying characteristics of
the road/environment associated with the autonomous vehicle, and
generating values of parameters indicative of proper sensor
function in relation to the road/environment characteristics.
[0052] In relation to Block S140, the method 100 can include or
otherwise be associated with Block S40, which recites: generating a
sensor-surrounding model S40 characterizing surroundings of the
autonomous vehicle. In variations, generating and/or implementing
the sensor-surrounding model can include collecting data from one
or more of: optical sensors (e.g., camera modules, infrared
sensors, etc.), radar modules, LiDAR modules, PVA sensing systems
(e.g., to detect road-induced motions of the vehicle), pressure
sensors, moisture sensors, light sensors, temperature sensors, any
other suitable sensors, location identifying systems (e.g., GPS) in
combination with GPS-based weather services, vehicle subsystem
states (e.g., windshield wiper states, AC/heater states, lighting
system states, cruise control system states, gear shifting states,
overdrive states, etc.), and any other suitable component that can
be used to detect road and/or weather conditions that would affect
or require sensor performance. Similar to other blocks of the
method 100, generation of the sensor-surrounding model can include
implementing machine learning and/or other described approaches for
model generation. In variations, model generation can thus include
extracting data, developing an architectural structure of the
model, and training the model with training data to improve the
model. In specific examples, Blocks S40 and S140 can include
processing data components from multiple sources to extract and
determine a road condition characterization (e.g., winding roads,
rough roads, newly-placed asphalt, etc.) and/or weather conditions
(e.g., rain, fog, high temperatures, etc.) and evaluate sensor
system presence or function in relation to such conditions.
[0053] In relation to evaluation of acquired data using the
sensor-surrounding model, Block S140 can include evaluating
exposure to road/weather conditions or likelihood of exposure to
road/weather conditions, and determining if the appropriate systems
(e.g., sensor systems, systems that promote
visualization/observation of the vehicle by other parties, etc.)
for handling such conditions are present. Additionally, Block S140
can include determining if available systems for handling such
conditions would be adversely affected in performance under such
conditions. In variations, Block S140 can thus produce a sensor
system-appropriateness metric for external conditions that
characterizes exposure risk (e.g., in relation to time or distance)
quantitatively or qualitatively (e.g., low risk to high risk), upon
extracting sensor data, using the sensor data as inputs for the
sensor-surrounding model, and generating outputs. The sensor
system-appropriateness metric can account for the road/weather
conditions, the appropriate subsystems available, and the level to
which each subsystem is compromised by those conditions. In
a specific example involving foggy driving conditions, the
parameter can be determined by evaluating the systems available
with the vehicle (e.g., fog sensors, moisture sensors, temperature
sensors, GPS systems, beacons, lighting systems, infrared cameras,
etc.) and evaluating system function in relation to severity of the
foggy conditions. Alternatively, the sensor system-appropriateness
metric can be defined in any other suitable manner.
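By way of illustration only, the foggy-conditions example above could be folded into a sensor system-appropriateness metric as follows; the subsystem names, the linear degradation model, and the per-subsystem degradation rates are all assumptions:

```python
def sensor_appropriateness(available, degradation, condition_severity):
    """Combine subsystem availability with condition-dependent
    degradation into a 0-1 appropriateness score for the current
    conditions (1.0 = fully equipped and unimpaired)."""
    if not available:
        return 0.0
    score = 0.0
    for subsystem in available:
        # Assumed model: effectiveness falls off linearly with
        # condition severity, at a per-subsystem rate.
        loss = degradation.get(subsystem, 0.0) * condition_severity
        score += max(0.0, 1.0 - loss)
    return score / len(available)

# Hypothetical fog scenario, severity 0.5 on a 0-1 scale: radar is
# barely affected, a visible-light camera heavily, infrared somewhat.
available = ["radar", "visible_camera", "infrared_camera"]
degradation = {"radar": 0.1, "visible_camera": 1.0, "infrared_camera": 0.4}
score = sensor_appropriateness(available, degradation, 0.5)
```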
3.5 Method--Security Analysis Using Security Risk Model
[0054] Block S150 recites: determining a security risk parameter
for the autonomous vehicle with a security risk model, which
functions to enable risk evaluations of the autonomous vehicle to
incorporate features associated with external and/or internal
security threats. Block S150 can include collecting data and
processing data with the security model to produce one or more
outputs (e.g., parameter values) that can be further transformed
into the risk analysis in downstream portions of the method 100.
Block S150 is preferably implemented using a security diagnostic
system associated with the vehicle, but can additionally or
alternatively be implemented using any other system components
operable to diagnose and/or detect security threats.
[0055] In relation to Block S150, the method 100 can include or
otherwise be associated with Block S50, which recites: generating a
security risk model characterizing security risks of the autonomous
vehicle. Blocks S50 and S150 can thus include generating and
implementing models that determine if the systems associated with
the autonomous vehicle are unconnected or connected to a network
that can be compromised (e.g., fleet-wide system modification
networks of Tesla Motors.TM., Apple.TM. networks associated with
vehicles, OBD module networks, etc.). Blocks S50 and S150 can
additionally include determining the threat level associated with
the network connections, and/or the threat-defense measures of the
autonomous vehicle connected to the network(s). Similar to other
blocks of the method 100, generation of the security risk model can
include implementing machine learning and/or other described
approaches for model generation. In variations, model generation
can thus include extracting data, developing an architectural
structure of the model, and training the model with training data
to improve the model.
[0056] In variations of systems connected to networks, Blocks S50
and/or S150 can include generating and implementing models for
evaluating network weakness (e.g., level of ability to be hacked
using Qadium™ technologies or other network evaluation
technologies), severity of vehicle compromise if the network is
compromised (e.g., loss of vehicle subsystem control, loss of
personal information, etc.), and threat defense measures of the
vehicle and parameters associated with updates (e.g., frequency of
updates, update status, test of updates by reliable parties, etc.).
In variations of systems unconnected to networks, Block S150 can
include evaluating the level of fortification of computing
elements/hardware system elements of the vehicle. Block S150 can
thus implement models of S50 to identify if the appropriate
fortification systems are present and the robustness of available
fortification systems to attack. In specific examples, Block S150
can evaluate the level of fortification of the vehicle's computing
systems against undesired connection by electromagnetic induction,
undesired connection by wire exposure, undesired connection by
vehicle access by a threatening party, and any other suitable
threat to the vehicle's computing system(s). However, Block S150
can be conducted in any other suitable manner.
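For illustration only, the fortification evaluation described above can be sketched as a weighted pass/fail scoring of hardening checks. The check names and weights below are invented for this sketch and do not appear in the application; they merely instance the idea of combining fortification assessments into a security risk value.

```python
def security_risk_score(checks):
    """Return a 0-1 security risk value from weighted pass/fail
    fortification checks, where `checks` maps a check name to a
    (passed, weight) tuple. Failed checks contribute their weight
    to the risk; all-pass yields 0.0."""
    total_weight = sum(weight for _, weight in checks.values())
    failed_weight = sum(weight for passed, weight in checks.values()
                        if not passed)
    return failed_weight / total_weight if total_weight else 0.0


# Hypothetical checks mirroring the threats named above (induction,
# wire exposure, physical access); weights are illustrative.
checks = {
    "induction_shielding": (True, 0.3),
    "wire_exposure_hardening": (False, 0.4),
    "physical_access_control": (True, 0.3),
}
```

Here the failed wire-exposure check dominates, yielding a risk value of 0.4 under the assumed weights.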
[0057] While several categories of risk model data components are
described above, Blocks S110-S150 can additionally or alternatively
include receiving and/or extracting other data types/components for
evaluating or modeling autonomous vehicle risk.
3.6 Method--Risk Model Data Combination/Processing
[0058] Block S160 recites transforming outputs of the comparative
autonomous model, the mix model, the geography-dependent model, the
sensor-surrounding model, and the security risk model, with an
exposure parameter, into a risk analysis. Block S160 functions to
process data retrieved in one or more of Blocks S110-S150 of the
method 100 (e.g., in a series of assessments, in parallel, etc.) in
order to produce a risk analysis based on outcomes (e.g., actual
autonomous vehicle responses to different risk factors, simulated
autonomous vehicle responses to different risk factors, etc.) of
operation of the autonomous vehicle in relation to different
factors. In particular, the exposure parameter can take into
account driving load of the autonomous vehicle, and in a specific
example, characterizes the expected distance (e.g., miles,
kilometers, etc.) driven per unit time (e.g., per day, per week,
per month, per year) of the autonomous vehicle.
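As a minimal sketch of the exposure parameter just described, the following computes expected distance driven per unit time and scales a per-mile risk estimate by that driving load. The proportional scaling model is an assumption made for illustration, not a formula from the application.

```python
def exposure_parameter(distance_driven, time_units):
    """Expected distance driven per unit time, e.g., miles per day
    (the exposure parameter characterizing driving load)."""
    return distance_driven / time_units


def exposure_weighted_risk(per_mile_risk, daily_miles):
    """Scale a per-mile risk estimate by daily driving load; a simple
    proportional model is assumed here for illustration."""
    return per_mile_risk * daily_miles
```

For example, a vehicle driving 280 miles over 7 days has an exposure of 40 miles per day, and a per-mile risk estimate is multiplied accordingly.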
[0059] Block S160 can include testing the ability of the autonomous
vehicle to handle situations associated with each of Blocks
S110-S150 (and other factors/situations) serially or in parallel, in
order to generate the risk analysis. The risk analysis can include
a risk score which, in some variations, aggregates or combines
scores/ranking/evaluations of the autonomous vehicle in multiple
risk categories. The risk score can include a weighted combination
of scores in separate categories in order to holistically/globally
describe the overall risk of the autonomous vehicle. Additionally
or alternatively, the risk analysis can distinguish between
different risk categories within which the autonomous vehicle was
evaluated, and report level of ability of the autonomous vehicle to
handle risks in each category.
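The weighted combination of category scores described above can be sketched as follows; the category names, scores, and weights are illustrative assumptions, and the normalization by total weight is one reasonable design choice among several.

```python
def risk_score(category_scores, weights):
    """Weighted combination of per-category risk scores (each 0-100)
    into a single holistic score, normalized by total weight so that
    the result stays on the same 0-100 scale."""
    total = sum(weights[c] for c in category_scores)
    return sum(score * weights[c]
               for c, score in category_scores.items()) / total


# Hypothetical per-category evaluations and weights for one vehicle.
scores = {"comparative": 80, "mix": 60, "surrounding": 70,
          "geographic": 90, "security": 50}
weights = {"comparative": 0.3, "mix": 0.2, "surrounding": 0.2,
           "geographic": 0.2, "security": 0.1}
```

With these assumed values the holistic score is 73.0; reporting the per-category scores alongside it preserves the category-level distinctions noted above.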
[0060] In aggregating data in different risk categories/conditions,
the risk model and any outputs thereof (some of which are described
in Block S170 below) can be improved with collection of training
data (e.g., from the autonomous vehicle in different geographical
environments, from a fleet of autonomous vehicles in different
geographical environments, etc.) and implementation of appropriate
machine learning techniques in order to improve data collection
associated with identifying conditions experienced by the vehicle,
identifying which data features are most relevant in evaluating the
vehicles, performing the vehicle evaluation in different risk
categories, and/or improving vehicle response in different
conditions, such that the vehicle response approaches or exceeds
that of the best human drivers.
[0061] As such, in some variations, the method 100 can optionally
include Blocks S60 and S65, which recite: in response to an
identified driving mission of the autonomous vehicle, generating
values of: a comparative autonomous parameter, a mix model
parameter, a surrounding risk parameter, a geographic operation
parameter, and a security risk parameter upon evaluating
situational inputs associated with the identified driving mission
with the comparative autonomous model, the mix model, the
sensor-surrounding model, the geography-dependent model, and the
security risk model S60; and upon generating values, generating a
risk analysis upon processing values of the comparative autonomous
parameter, the mix model parameter, the surrounding risk parameter,
the geographic operation parameter, and the security risk parameter
with a rule-based algorithm S65. Identification of the driving
mission can be based upon passenger input, remote operator input,
on-board operator input, GPS-enabled detection, and/or any other
suitable manner.
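A rule-based algorithm of the kind recited in Block S65 can be sketched as simple threshold rules over the five parameter values. The parameter names follow the text above; the normalization to [0, 1] and the 0.7/0.4 thresholds are assumptions made for illustration only.

```python
def rule_based_risk_analysis(params):
    """Apply threshold rules to the five parameter values (each
    assumed normalized to 0-1, higher = riskier) and return a risk
    analysis with per-category flags and an overall level."""
    flags = {name: value > 0.7 for name, value in params.items()}
    if any(flags.values()):
        level = "high"
    elif max(params.values()) > 0.4:
        level = "moderate"
    else:
        level = "low"
    return {"flags": flags, "level": level}


params = {"comparative_autonomous": 0.2, "mix_model": 0.5,
          "surrounding_risk": 0.3, "geographic_operation": 0.1,
          "security_risk": 0.2}
```

Under these assumed values no single category is flagged, but the mix model parameter exceeds the moderate threshold, so the overall level is "moderate".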
[0062] In variations, the comparative autonomous parameter can be
associated with one or more of: safety (e.g., a
safety-characterizing metric that describes the outcome of a maneuver
for a human operator vs. a non-human operator), vehicle system load
(e.g., a metric characterizing the amount of load on vehicle
systems and/or occupants of the vehicle), proper selection of an
implemented maneuver from various alternative maneuvers, and any
other suitable metric for evaluating performance of a high
intensity maneuver. Additionally or alternatively, in relation to
vehicle system load, the comparative autonomous parameter can be
associated with one or more of: stress on vehicle subsystems (e.g.,
forces incurred by different vehicle mechanical subsystems,
electrical load incurred by different vehicle electronic
subsystems, processing load incurred by different vehicle computing
subsystems) due to the driving maneuver for both the autonomous
vehicle and for the "average human" benchmark. Additionally or
alternatively, in relation to maneuver selection, the comparative
autonomous parameter can be associated with one or more of:
appropriateness of different maneuver options that could have been
used to respond to the triggering event, for both the autonomous
vehicle and for the "average human" benchmark. In an example of a
road obstruction, the parameter can be derived from an evaluation
of maneuvers (e.g., swerving, running through the obstacle, braking
abruptly, etc.), and thus include a rank or value of maneuver
appropriateness for the selected maneuver. However, the comparative
autonomous metric can be derived from or otherwise characterize any
other suitable aspect of the comparative autonomous model in any
other suitable manner.
[0063] In variations, the mix model parameter can characterize one
or more of: mixed-traffic incident occurrence (e.g., number of
positive incidents, number of negative incidents, severity of
negative incidents, appropriateness of positive incidents, etc.) in
relation to a time variable (e.g., per hour of driving, per day of
driving, per month of driving, per year of driving), in relation to
a distance variable (e.g., per mile driven, per 10 miles driven,
per 100 miles driven, etc.), in relation to a frequency variable
(e.g., average amount of time between positive incidents, average
amount of time between negative incidents, etc.), and/or in
relation to any other suitable variable.
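The time, distance, and frequency variables for the mix model parameter described above can be sketched as follows; the specific rate units chosen here are illustrative.

```python
def mix_model_rates(negative_incidents, hours_driven, miles_driven):
    """Express mixed-traffic incident occurrence in relation to time,
    distance, and frequency variables, per the mix model parameter
    description."""
    return {
        "per_hour": negative_incidents / hours_driven,
        "per_100_miles": 100.0 * negative_incidents / miles_driven,
        "mean_hours_between": (hours_driven / negative_incidents
                               if negative_incidents else float("inf")),
    }
```

For example, 2 negative incidents over 10 hours and 400 miles yields 0.2 incidents per hour, 0.5 per 100 miles, and a mean of 5 hours between incidents.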
[0064] In variations, the surrounding risk parameter can be
associated with a sensor system-appropriateness metric for external
conditions that characterizes exposure risk (e.g., in relation to
time or distance) quantitatively or qualitatively (e.g., low risk
to high risk). The sensor system-appropriateness metric can
describe the appropriateness of vehicle subsystems available,
and/or the level to which each subsystem is compromised by
conditions. The surrounding risk parameter can characterize one or
more of: road conditions (e.g., with a value describing road
suitability for driving in terms of quality, in terms of icing, in
terms of slickness, in terms of obstructions, etc.) and weather
conditions (e.g., with a value describing weather suitability for
driving in terms of visibility, in terms of moisture, in terms of
wind, in terms of dust, etc.). In a specific example, the
surrounding-risk parameter can provide a value of how appropriate
the vehicle systems are for the environmental conditions of the
driving mission (e.g., "the specifications of the vehicle are 72%
appropriate in the current stormy conditions along the intended
route"). However, the surrounding risk parameter can be derived
from or otherwise characterize any other suitable aspect of the
sensor-surrounding model in any other suitable manner.
[0065] In variations, the geographic operation parameter can
characterize or be derived from proximity metrics (e.g., proximity
of the autonomous vehicle to other objects/entities during a
maneuver, proximity of the autonomous vehicle to other
objects/entities while not maneuvering, etc.) and evaluation of
values of proximity metrics within ranges acceptable for the
geographic location of the driving behavior; PVA-derived metrics
associated with driving aggression (e.g., abruptness in changes in
position, velocity, or acceleration of the vehicle); lane
position-derived metrics (e.g., number of times lanes are crossed
while driving straight ahead, distance to lane boundaries, etc.);
number/frequency of collision incidents per unit distance or time;
number/frequency of near-collision incidents per unit distance or
time; metrics indicative of interruption of "normal" traffic flow
for the geographic region (e.g., number of times other vehicles or
entities behaved abnormally for the geographic region due to a
behavior of the autonomous vehicle); and any other suitable
metrics. However, the geographic operation parameter can be derived
from or otherwise characterize any other suitable aspect of the
geography-dependent model in any other suitable manner.
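One of the proximity metrics described above can be sketched as the fraction of observed gaps that fall within the range acceptable for the region. The threshold value is a region-specific assumption made for this sketch.

```python
def proximity_within_norms(gaps_m, min_acceptable_m):
    """Fraction of observed proximity samples (gaps in meters between
    the autonomous vehicle and other objects/entities) that meet the
    minimum gap acceptable for the geographic region."""
    return sum(gap >= min_acceptable_m for gap in gaps_m) / len(gaps_m)
```

For example, with observed gaps of [1.5, 0.8, 2.0, 1.2] meters and an assumed regional minimum of 1.0 meter, 75% of samples fall within the acceptable range.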
[0066] In variations, the security risk parameter can describe the
level of fortification of the vehicle's computing systems against
undesired connection by electromagnetic induction, undesired
connection by wire exposure, undesired connection by vehicle access
by a threatening party, and any other suitable threat to the
vehicle's computing system(s). In a specific example, the
security risk parameter can provide a value of how easily
compromised the vehicle is in relation to security threats (e.g.,
"the systems of the vehicle are robust against 99% of potential
security threats"). However, the security risk parameter can be
derived from or otherwise characterize any other suitable aspect of
the security threat model in any other suitable manner.
[0067] In variations or examples, similar to that described in
relation to Blocks S110-S150 above, algorithms for generating the
risk analysis with rule-based algorithms can employ approaches
including any one or more of: supervised learning (e.g., using
logistic regression, using back propagation neural networks, using
random forests, decision trees, etc.), unsupervised learning (e.g.,
using an Apriori algorithm, using K-means clustering),
semi-supervised learning, reinforcement learning (e.g., using a
Q-learning algorithm, using temporal difference learning), and any
other suitable learning style. Each module of the plurality can
implement any one or more of: a regression algorithm (e.g.,
ordinary least squares, logistic regression, stepwise regression,
multivariate adaptive regression splines, locally estimated
scatterplot smoothing, etc.), an instance-based method (e.g.,
k-nearest neighbor, learning vector quantization, self-organizing
map, etc.), a regularization method (e.g., ridge regression, least
absolute shrinkage and selection operator, elastic net, etc.), a
decision tree learning method (e.g., classification and regression
tree, iterative dichotomiser 3, C4.5, chi-squared automatic
interaction detection, decision stump, random forest, multivariate
adaptive regression splines, gradient boosting machines, etc.), a
Bayesian method (e.g., naive Bayes, averaged one-dependence
estimators, Bayesian belief network, etc.), a kernel method (e.g.,
a support vector machine, a radial basis function, a linear
discriminant analysis, etc.), a clustering method (e.g., k-means
clustering, expectation maximization, etc.), an association rule
learning algorithm (e.g., an Apriori algorithm, an Eclat algorithm,
etc.), an artificial neural network model (e.g., a Perceptron
method, a back-propagation method, a Hopfield network method, a
self-organizing map method, a learning vector quantization method,
etc.), a deep learning algorithm (e.g., a restricted Boltzmann
machine, a deep belief network method, a convolution network
method, a stacked auto-encoder method, etc.), a dimensionality
reduction method (e.g., principal component analysis, partial least
squares regression, Sammon mapping, multidimensional scaling,
projection pursuit, etc.), an ensemble method (e.g., boosting,
bootstrapped aggregation, AdaBoost, stacked generalization, gradient
boosting machine method, random forest method, etc.), and any
suitable form of machine learning algorithm. Each processing
portion of the method 100 can additionally or alternatively
leverage: a probabilistic module, heuristic module, deterministic
module, or any other suitable module leveraging any other suitable
computation method, machine learning method or combination
thereof.
3.7 Method--Risk Model-Derived Outputs/Responses
[0068] Block S170 recites: implementing a response action
associated with the autonomous vehicle based upon the risk
analysis, which functions to improve autonomous vehicle responses
(e.g., immediately, at a later time point) upon analyzing different
risk factors and/or responses of the autonomous vehicle to
different risk factors. In relation to identified vehicle missions,
the method 100 can additionally or alternatively analogously
include Block S70, which recites: contemporaneously with execution
of the identified driving mission, implementing a response action
associated with control of the autonomous vehicle, based upon the
risk analysis. In particular, the response action can improve
vehicle evaluation at a subsequent time point or for another
vehicle. Additionally or alternatively, the response action can be
configured to improve vehicle performance (e.g., immediately, at a
later time point) for the specific autonomous vehicle being
evaluated and/or for other autonomous vehicles.
[0069] In some variations, the response action can include
providing the risk analysis and risk score to an entity that can
modify system configuration or operation of the autonomous vehicle,
in order to improve performance of the autonomous vehicle at a
future time point. As such, Block S170 and/or Block S70 can include
generating an alert that is provided to an entity, wherein the
alert notifies the entity regarding a status of the autonomous
vehicle.
[0070] Additionally or alternatively, in some variations, the risk
analysis can be used to identify gaps in evaluation of the
autonomous vehicle, in order to guide or implement additional
evaluations that should be conducted in the future.
[0071] Additionally or alternatively, the outputs of the risk
analysis can be used to perform vehicle-specific and/or fleet-wide
modifications of hardware/software/firmware, in order to improve
performance of the autonomous vehicle(s) in different risk
environments. In one variation, Block S170 and/or Block S70 can
include automatically updating or customizing software/firmware of
computing elements or vehicle subsystems in order to improve the
ability of the autonomous vehicles to respond to a specific
situation encountered by the autonomous vehicle. In a specific
example of this variation, the autonomous vehicle's system
configurations can be updated automatically to appropriately slow
down and drive around pedestrians in the middle of the road who
have made eye contact, in the United States.
[0072] Additionally or alternatively, in some variations, the
response action can include a transition into a state that allows a
human driver to override the autonomous vehicle's operation. The
transition into the overridden state can be triggered based upon an
analysis that the autonomous vehicle is in unknown territory,
experiencing conditions with an above-threshold level of
unknowns, in a compromised state due to a security threat, and/or
by any other suitable rationale. The transition into the overridden
state can be manually performed (e.g., a human driver receives an
alert and then provides an input that indicates that he/she is
taking control of driving). Alternatively, the transition into the
overridden state can be automatic, such that the autonomous vehicle
stops driving itself (e.g., pulls over when safe and slows to a
stop) and the human operator must intervene. However, transitioning
into an overridden state can alternatively be conducted in any
other suitable manner. Furthermore, data that characterizes human
handling of the vehicle and collected in the overridden state can
be used as training data to improve future autonomous vehicle
operation within the conditions associated with the override.
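The manual and automatic override transitions described above can be sketched as a small state machine. The state and trigger names below are invented for illustration and are not terms from the application.

```python
class OverrideController:
    """Sketch of the override transitions: automatic triggers move the
    vehicle directly to the overridden state (pull over when safe and
    slow to a stop); manual triggers alert the human driver, who must
    confirm before taking control."""

    AUTOMATIC_TRIGGERS = {"security_compromise"}
    MANUAL_TRIGGERS = {"unknown_territory", "excess_unknowns"}

    def __init__(self):
        self.state = "autonomous"

    def on_risk_event(self, trigger):
        if trigger in self.AUTOMATIC_TRIGGERS:
            # Automatic transition: the vehicle stops driving itself.
            self.state = "overridden"
        elif trigger in self.MANUAL_TRIGGERS:
            # Manual transition: alert the driver and await takeover.
            self.state = "alerting"

    def on_driver_confirmation(self):
        # Driver input indicating he/she is taking control of driving.
        if self.state == "alerting":
            self.state = "overridden"
```

Data collected while in the overridden state could then be logged as the training data mentioned above.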
[0073] Additionally or alternatively, outputs of the risk analysis
can be used to generate, in real time or near-real time, control
instructions for the vehicle, in order to allow the vehicle to
automatically respond to present risks, or to decrease vehicle
operation risk by modifying vehicle operation aspects. Additionally
or alternatively, in another variation, outputs of the risk
analysis of Block S160 can be used to guide vehicle
sensor/subsystem design, in order to improve future performance of
such autonomous vehicles. However, any other suitable response
action can be implemented in Block S170.
[0074] The system and methods of the preferred embodiment and
variations thereof can be embodied and/or implemented at least in
part as a machine configured to receive a computer-readable medium
storing computer-readable instructions. The instructions are
preferably executed by computer-executable components preferably
integrated with multiple mobile electronic devices. The
computer-readable instructions can be stored on any suitable
computer-readable media such as RAMs, ROMs, flash memory, EEPROMs,
optical devices (CD or DVD), hard drives, floppy drives, or any
suitable device. The computer-executable component is preferably a
general or application specific processor, but any suitable
dedicated hardware or hardware/firmware combination device can
alternatively or additionally execute the instructions.
[0075] The FIGURES illustrate the architecture, functionality and
operation of possible implementations of systems, methods and
computer program products according to preferred embodiments,
example configurations, and variations thereof. In this regard,
each block in the flowchart or block diagrams can represent a
module, segment, step, or portion of code, which includes one or
more executable instructions for implementing the specified logical
function(s). It should also be noted that, in some alternative
implementations, the functions noted in the block can occur out of
the order noted in the FIGURES. For example, two blocks shown in
succession may, in fact, be executed substantially concurrently, or
the blocks can sometimes be executed in the reverse order,
depending upon the functionality involved. It will also be noted
that each block of the block diagrams and/or flowchart
illustration, and combinations of blocks in the block diagrams
and/or flowchart illustration, can be implemented by special
purpose hardware-based systems that perform the specified functions
or acts, or combinations of special purpose hardware and computer
instructions.
[0076] The embodiments include every combination and permutation of
the various system components and the various method processes,
including any variations, examples, and specific examples.
[0077] As a person skilled in the art will recognize from the
previous detailed description and from the figures and claims,
modifications and changes can be made to the preferred embodiments
of the invention without departing from the scope of this invention
defined in the following claims.
* * * * *