U.S. patent application number 15/906,910 was filed with the patent office on 2018-02-27 and published on 2018-07-05 as publication number 2018/0186369 for collision avoidance using auditory data augmented with map data. The applicant listed for this patent is Ford Global Technologies, LLC. The invention is credited to Brielle Reiff, Madeline Jane Schrier, and Nithika Sivashankar.
United States Patent Application 20180186369
Kind Code: A1
Reiff; Brielle; et al.
Publication Date: July 5, 2018
Collision Avoidance Using Auditory Data Augmented With Map Data
Abstract
A controller for an autonomous vehicle receives audio signals from one or more microphones and identifies sounds. The controller further identifies an estimated location of the sound origin and the type of sound, i.e., whether the source is a vehicle and/or the type of vehicle. The controller analyzes map data and attempts to identify a landmark within a tolerance of the estimated location. If a landmark is found corresponding to the estimated location and type of the sound origin, then the certainty is increased that the source of the sound is at that location and is that type of sound source. Collision avoidance is then performed with respect to the location of the sound origin and its type, with the certainty augmented using the map data. Collision avoidance may include automatically actuating brake, steering, and accelerator actuators in order to avoid the location of the sound origin.
Inventors: Reiff; Brielle; (Cincinnati, OH); Schrier; Madeline Jane; (Palo Alto, CA); Sivashankar; Nithika; (Canton, MI)

Applicant:
Name | City | State | Country | Type
Ford Global Technologies, LLC | Dearborn | MI | US |

Family ID: 57571233
Appl. No.: 15/906,910
Filed: February 27, 2018
Related U.S. Patent Documents

Application Number | Filing Date | Patent Number
14/876,269 | Oct 6, 2015 | 9,937,922
15/906,910 | February 27, 2018 |
Current U.S. Class: 1/1

Current CPC Class: G05D 1/0088 20130101; B60W 10/18 20130101; B60W 30/0956 20130101; B60W 2552/00 20200201; G05D 2201/0213 20130101; G01S 2013/9316 20200101; G01S 5/28 20130101; B60W 2720/10 20130101; G01S 2013/9322 20200101; B60W 2710/20 20130101; B60W 2710/18 20130101; G01S 5/16 20130101; G05D 1/0255 20130101; H04R 2499/13 20130101; B60W 30/09 20130101; G05D 1/0274 20130101; B60W 2420/10 20130101; H04R 1/326 20130101; B60W 10/04 20130101; B60W 10/20 20130101; B60W 2420/54 20130101

International Class: B60W 30/09 20120101 B60W030/09; B60W 10/18 20120101 B60W010/18; B60W 10/20 20060101 B60W010/20; G05D 1/02 20060101 G05D001/02; B60W 10/04 20060101 B60W010/04; H04R 1/32 20060101 H04R001/32; G05D 1/00 20060101 G05D001/00
Claims
1. A controller for an autonomous vehicle comprising: one or more
processing devices programmed to: receive one or more audio streams
from one or more microphones; retrieve map data in a region about
the autonomous vehicle; detect, in the one or more audio streams, a
probable vehicle-originated sound; identify a first predicted
location of the probable vehicle-originated sound; retrieve map
data in a region proximate the first predicted location; and when
the map data indicates a vehicle-specific landmark within a
threshold distance from the first predicted location, perform
obstacle avoidance with respect to the first predicted
location.
2. The controller of claim 1, wherein the one or more processing
devices are further programmed to invoke obstacle avoidance with
respect to the first predicted location by actuating at least one
of a steering actuator, accelerator actuator, and brake actuator of
the autonomous vehicle effective to avoid the first predicted
location.
3. The controller of claim 1, wherein: the one or more microphones
are an array of microphones and the one or more audio streams are a
plurality of audio streams from the array of microphones; the one
or more processing devices are further programmed to identify the
first predicted location of the probable vehicle-originated sound
by comparing at least one of time of arrival and intensity of the
probable vehicle-originated sound in the plurality of audio
streams.
4. The controller of claim 1, wherein the one or more microphones
are directional microphones.
5. The controller of claim 1, wherein the one or more processing
devices are further programmed to: identify landmarks in the region
proximate the first predicted location in the map data; identify,
from the map data, the vehicle-specific landmark among the
landmarks in the region proximate the first predicted location; and
select, as a probable location of another vehicle, the
vehicle-specific landmark.
6. The controller of claim 5, wherein the vehicle-specific landmark
is at least one of a parking lot and emergency vehicle station.
7. The controller of claim 6, wherein the one or more processing devices are further programmed to: identify a predicted vehicle type from the probable vehicle-originated sound; if the predicted vehicle type corresponds to the vehicle-specific landmark, confirm the predicted vehicle type.
8. The controller of claim 7, wherein the one or more processing devices are further programmed to: if the predicted vehicle type corresponds to the vehicle-specific landmark and the vehicle-specific landmark is an emergency vehicle station, invoke pulling over and stopping of the autonomous vehicle.
9. The controller of claim 1, wherein the first predicted location
is not within a line of sight of an imaging system of the
autonomous vehicle.
10. The controller of claim 1, wherein the one or more processing
devices are further programmed to: receive outputs from one or more
sensors, the one or more sensors including at least one of a Light
Detection and Ranging (LIDAR) sensor and a Radio Detection and
Ranging (RADAR) sensor; detect an obstacle set in the outputs from
the one or more sensors; add the first predicted location to the
obstacle set; and perform obstacle avoidance with respect to the
obstacle set.
11. An autonomous vehicle comprising: a vehicle including an engine
and wheels selectively coupled to the engine; at least one of a
steering actuator, accelerator actuator, and brake actuator; one or
more microphones; a controller operably coupled to the one or more
microphones and the at least one of the steering actuator, the
accelerator actuator, and the brake actuator, the controller
including one or more processing devices programmed to: receive
one or more audio streams from the one or more microphones; detect,
in the one or more audio streams, a probable vehicle-originated
sound; identify a first predicted location of the probable
vehicle-originated sound; retrieve map data in a region proximate
the first predicted location; when the map data indicates a
vehicle-specific landmark within a threshold distance from the
first predicted location, perform obstacle avoidance with respect
to the first predicted location.
12. The autonomous vehicle of claim 11, wherein the one or more
processing devices are further programmed to invoke obstacle
avoidance with respect to the first predicted location by actuating
at least one of a steering actuator, accelerator actuator, and
brake actuator of the autonomous vehicle effective to avoid the
first predicted location.
13. The autonomous vehicle of claim 11, wherein: the one or more
microphones are an array of microphones and the one or more audio
streams are a plurality of audio streams from the array of
microphones; the one or more processing devices are further
programmed to identify the first predicted location of the probable
vehicle-originated sound by comparing at least one of time of
arrival and intensity of the probable vehicle-originated sound in
the plurality of audio streams.
14. The autonomous vehicle of claim 11, wherein the one or more
microphones are directional microphones.
15. The autonomous vehicle of claim 11, wherein the one or more
processing devices are further programmed to: identify landmarks in
the region proximate the first predicted location; identify, from
the map data, the vehicle-specific landmark among the landmarks in
the region proximate the first predicted location; and select, as a
probable location of another vehicle, the vehicle-specific
landmark.
16. The autonomous vehicle of claim 15, wherein the vehicle-specific landmark is at least one of a parking lot and emergency vehicle station.
17. The autonomous vehicle of claim 16, wherein the one or more processing devices are further programmed to: identify a predicted vehicle type from the probable vehicle-originated sound; if the predicted vehicle type corresponds to the vehicle-specific landmark, confirm the predicted vehicle type.
18. The autonomous vehicle of claim 17, wherein the one or more processing devices are further programmed to: if the predicted vehicle type corresponds to the vehicle-specific landmark and the vehicle-specific landmark is an emergency vehicle station, invoke pulling over and stopping of the autonomous vehicle.
19. The autonomous vehicle of claim 11, wherein the first predicted
location is not within a line of sight of an imaging system of the
autonomous vehicle.
20. The autonomous vehicle of claim 11, wherein the one or more
processing devices are further programmed to: receive outputs from
one or more sensors, the one or more sensors including at least one
of a Light Detection and Ranging (LIDAR) sensor and a Radio
Detection and Ranging (RADAR) sensor; detect an obstacle set in the
outputs from the one or more sensors; add the first predicted
location to the obstacle set; and perform obstacle avoidance with
respect to the obstacle set.
Description
CROSS REFERENCE TO RELATED PATENT APPLICATION
[0001] The present application is a continuation of U.S. patent
application Ser. No. 14/876,269, filed on Oct. 6, 2015, which is
incorporated by reference in its entirety.
BACKGROUND
Field of the Invention
[0002] This invention relates to performing obstacle avoidance in
autonomous vehicles.
Background of the Invention
[0003] Autonomous vehicles are equipped with sensors that detect
their environment. An algorithm evaluates the output of the sensors
and identifies obstacles. A navigation system may then steer the
vehicle, brake, and/or accelerate to both avoid the identified
obstacles and reach a desired destination. Sensors may include both imaging systems, e.g. video cameras, and RADAR or LIDAR sensors.
[0004] The systems and methods disclosed herein provide an improved
approach for detecting obstacles.
BRIEF DESCRIPTION OF THE DRAWINGS
[0005] In order that the advantages of the invention will be
readily understood, a more particular description of the invention
briefly described above will be rendered by reference to specific
embodiments illustrated in the appended drawings. Understanding
that these drawings depict only typical embodiments of the
invention and are not therefore to be considered limiting of its
scope, the invention will be described and explained with
additional specificity and detail through use of the accompanying
drawings, in which:
[0006] FIG. 1 is a schematic block diagram of a system for
implementing embodiments of the invention;
[0007] FIG. 2 is a schematic block diagram of an example computing
device suitable for implementing methods in accordance with
embodiments of the invention;
[0008] FIGS. 3A and 3B are diagrams illustrating obstacle detection
using auditory and map data; and
[0009] FIG. 4 is a process flow diagram of a method for performing
collision avoidance based on both auditory and map data in
accordance with an embodiment of the present invention.
DETAILED DESCRIPTION
[0010] It will be readily understood that the components of the
present invention, as generally described and illustrated in the
Figures herein, could be arranged and designed in a wide variety of
different configurations. Thus, the following more detailed
description of the embodiments of the invention, as represented in
the Figures, is not intended to limit the scope of the invention,
as claimed, but is merely representative of certain examples of
presently contemplated embodiments in accordance with the
invention. The presently described embodiments will be best
understood by reference to the drawings, wherein like parts are
designated by like numerals throughout.
[0011] Embodiments in accordance with the present invention may be
embodied as an apparatus, method, or computer program product.
Accordingly, the present invention may take the form of an entirely
hardware embodiment, an entirely software embodiment (including
firmware, resident software, micro-code, etc.), or an embodiment
combining software and hardware aspects that may all generally be
referred to herein as a "module" or "system." Furthermore, the
present invention may take the form of a computer program product
embodied in any tangible medium of expression having
computer-usable program code embodied in the medium.
[0012] Any combination of one or more computer-usable or
computer-readable media may be utilized. For example, a
computer-readable medium may include one or more of a portable
computer diskette, a hard disk, a random access memory (RAM)
device, a read-only memory (ROM) device, an erasable programmable
read-only memory (EPROM or Flash memory) device, a portable compact
disc read-only memory (CDROM), an optical storage device, and a
magnetic storage device. In selected embodiments, a
computer-readable medium may comprise any non-transitory medium
that can contain, store, communicate, propagate, or transport the
program for use by or in connection with the instruction execution
system, apparatus, or device.
[0013] Computer program code for carrying out operations of the
present invention may be written in any combination of one or more
programming languages, including an object-oriented programming
language such as Java, Smalltalk, C++, or the like and conventional
procedural programming languages, such as the "C" programming
language or similar programming languages. The program code may
execute entirely on a computer system as a stand-alone software
package, on a stand-alone hardware unit, partly on a remote
computer spaced some distance from the computer, or entirely on a
remote computer or server. In the latter scenario, the remote
computer may be connected to the computer through any type of
network, including a local area network (LAN) or a wide area
network (WAN), or the connection may be made to an external
computer (for example, through the Internet using an Internet
Service Provider).
[0014] The present invention is described below with reference to
flowchart illustrations and/or block diagrams of methods, apparatus
(systems) and computer program products according to embodiments of
the invention. It will be understood that each block of the
flowchart illustrations and/or block diagrams, and combinations of
blocks in the flowchart illustrations and/or block diagrams, can be
implemented by computer program instructions or code. These
computer program instructions may be provided to a processor of a
general purpose computer, special purpose computer, or other
programmable data processing apparatus to produce a machine, such
that the instructions, which execute via the processor of the
computer or other programmable data processing apparatus, create
means for implementing the functions/acts specified in the
flowchart and/or block diagram block or blocks.
[0015] These computer program instructions may also be stored in a
non-transitory computer-readable medium that can direct a computer
or other programmable data processing apparatus to function in a
particular manner, such that the instructions stored in the
computer-readable medium produce an article of manufacture
including instruction means which implement the function/act
specified in the flowchart and/or block diagram block or
blocks.
[0016] The computer program instructions may also be loaded onto a
computer or other programmable data processing apparatus to cause a
series of operational steps to be performed on the computer or
other programmable apparatus to produce a computer implemented
process such that the instructions which execute on the computer or
other programmable apparatus provide processes for implementing the
functions/acts specified in the flowchart and/or block diagram
block or blocks.
[0017] Referring to FIG. 1, a controller 102 may be housed within a
vehicle. The vehicle may include any vehicle known in the art. The
vehicle may have all of the structures and features of any vehicle
known in the art, including wheels, a drive train coupled to the
wheels, an engine coupled to the drive train, a steering system, a
braking system, and other systems known in the art to be included
in a vehicle.
[0018] As discussed in greater detail herein, the controller 102
may perform autonomous navigation and collision avoidance. In
particular, auditory and map data may be analyzed to identify
potential obstacles.
[0019] The controller 102 may include or access a database 104
housed in the vehicle or otherwise accessible by the controller
102. The database 104 may include data sufficient to enable
identification of an obstacle using map data. For example, sound data 106 may describe sounds generated by one or more types of vehicles or other potential obstacles, such as samples of the sounds made by one or more types of vehicles, animals (e.g. a dog barking), people conversing, and the like. Alternatively, sound data 106 may contain data characterizing such sounds, such as a spectrum of such sounds, or other data derived from a recording of such sounds.
[0020] The database 104 may further include map data 108. The map
data 108 may include maps in the region of the vehicle, such as the
city, state, or country in which the vehicle is located. The maps
may include data describing roads, landmarks, businesses, public
buildings, etc. In particular, the map data 108 may include the
locations of emergency vehicle stations (fire stations, hospitals
with ambulance service, police stations, etc.).
[0021] In some embodiments, the controller 102 may periodically
connect to a network 110, such as the Internet or other network.
The controller 102 may retrieve some or all of the data stored in
the database 104 from one or more servers 112 hosting or accessing
a database 114 storing such information. For example, sound
signatures or samples of sounds of one or more vehicles or other
potential obstacles may be retrieved from the database 114.
Likewise, current map data 108 may be periodically retrieved from a
database 114.
[0022] The controller 102 may receive one or more image streams
from one or more imaging devices 116. For example, one or more
cameras may be mounted to the vehicle and output image streams
received by the controller 102.
[0023] The controller 102 may further receive audio signals from
one or more microphones 118. The one or more microphones 118 may be
an array of microphones offset from one another such that differences in amplitude and time of arrival of a sound may be used to determine one or both of the direction and the distance to the source of the sound. The one or more microphones may be directional microphones that are more sensitive to sounds originating from a particular direction. The microphones 118 and the circuits or algorithms used to derive one or both of the distance and direction to a source of a sound may be according to any method known in the art of SONAR or any other approach for locating a sound source.
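For illustration only, here is a minimal sketch of one conventional way such an array might estimate direction: find the arrival-time difference between two offset microphone streams by cross-correlation, then convert it to a bearing. The function names, the two-microphone setup, and the far-field (distant source) assumption are ours, not the patent's.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, approximate in air at 20 degrees C


def tdoa_by_cross_correlation(x: np.ndarray, y: np.ndarray, fs: float) -> float:
    """Estimate the arrival-time difference (seconds) between two audio
    streams by locating the peak of their cross-correlation."""
    corr = np.correlate(x, y, mode="full")
    lag_samples = int(np.argmax(corr)) - (len(y) - 1)
    return lag_samples / fs


def bearing_from_tdoa(delta_t: float, mic_spacing_m: float) -> float:
    """Convert an arrival-time difference into a bearing (radians) relative
    to the broadside of a two-microphone pair, assuming a distant source."""
    # A distant source adds mic_spacing_m * sin(theta) of extra path length.
    ratio = np.clip(SPEED_OF_SOUND * delta_t / mic_spacing_m, -1.0, 1.0)
    return float(np.arcsin(ratio))
```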
[0024] The controller may execute a collision avoidance module 120
that receives the image streams and audio signals and identifies
possible obstacles and takes measures to avoid them. In the
embodiments disclosed herein, only image and auditory data is used
to perform collision avoidance. However, other sensors to detect
obstacles may also be used such as RADAR, LIDAR, SONAR, and the
like.
[0025] The collision avoidance module 120 may include an obstacle
identification module 122a that analyzes the one or more image
streams and identifies potential obstacles, including people,
animals, vehicles, buildings, curbs, and other objects and
structures. In particular, the obstacle identification module 122a
may identify vehicle images in the one or more image streams. The
obstacle identification module 122a may include a sound processing
module 124 that identifies potential obstacles using the audio
signals in combination with map data 108 and possibly the sound
data 106. The method by which auditory and map data are used to
identify potential obstacles is described in greater detail
below.
[0026] The collision avoidance module 120 may further include a
collision prediction module 122b that predicts which detected obstacles are likely to collide with the vehicle based on its current
trajectory or current intended path. A decision module 122c may
make a decision to stop, accelerate, turn, etc. in order to avoid
obstacles. The manner in which the collision prediction module 122b
predicts potential collisions and the manner in which the decision
module 122c takes action to avoid potential collisions may be
according to any method or system known in the art of autonomous
vehicles.
[0027] The decision module 122c may control the trajectory of the
vehicle by actuating one or more actuators 126 controlling the
direction and speed of the vehicle. For example, the actuators 126
may include a steering actuator 128a, an accelerator actuator 128b,
and a brake actuator 128c. The configuration of the actuators
128a-128c may be according to any implementation of such actuators
known in the art of autonomous vehicles.
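Since the patent leaves the actuator interface to known implementations, the following is only a toy sketch of how a decision module might express an avoidance maneuver as normalized steering, throttle, and brake commands; every name and number here is hypothetical.

```python
import math
from dataclasses import dataclass


@dataclass
class ActuatorCommand:
    steering: float  # normalized: -1 (full left) to +1 (full right)
    throttle: float  # normalized: 0 to 1
    brake: float     # normalized: 0 to 1


def avoidance_command(bearing_rad: float, distance_m: float) -> ActuatorCommand:
    """Toy policy: brake harder as the obstacle gets closer and steer away
    from its bearing; a real decision module would plan a trajectory."""
    urgency = max(0.0, min(1.0, 1.0 - distance_m / 50.0))  # 0 beyond 50 m
    steering = -math.copysign(0.5, bearing_rad) * urgency  # steer away
    return ActuatorCommand(steering=steering, throttle=0.0, brake=urgency)
```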
[0028] FIG. 2 is a block diagram illustrating an example computing
device 200. Computing device 200 may be used to perform various
procedures, such as those discussed herein. The controller 102 may
have some or all of the attributes of the computing device 200.
[0029] Computing device 200 includes one or more processor(s) 202,
one or more memory device(s) 204, one or more interface(s) 206, one
or more mass storage device(s) 208, one or more Input/Output (I/O)
device(s) 210, and a display device 230 all of which are coupled to
a bus 212. Processor(s) 202 include one or more processors or
controllers that execute instructions stored in memory device(s)
204 and/or mass storage device(s) 208. Processor(s) 202 may also
include various types of computer-readable media, such as cache
memory.
[0030] Memory device(s) 204 include various computer-readable
media, such as volatile memory (e.g., random access memory (RAM)
214) and/or nonvolatile memory (e.g., read-only memory (ROM) 216).
Memory device(s) 204 may also include rewritable ROM, such as Flash
memory.
[0031] Mass storage device(s) 208 include various computer readable
media, such as magnetic tapes, magnetic disks, optical disks,
solid-state memory (e.g., Flash memory), and so forth. As shown in
FIG. 2, a particular mass storage device is a hard disk drive 224.
Various drives may also be included in mass storage device(s) 208
to enable reading from and/or writing to the various computer
readable media. Mass storage device(s) 208 include removable media
226 and/or non-removable media.
[0032] I/O device(s) 210 include various devices that allow data
and/or other information to be input to or retrieved from computing
device 200. Example I/O device(s) 210 include cursor control
devices, keyboards, keypads, microphones, monitors or other display
devices, speakers, printers, network interface cards, modems,
lenses, CCDs or other image capture devices, and the like.
[0033] Display device 230 includes any type of device capable of
displaying information to one or more users of computing device
200. Examples of display device 230 include a monitor, display
terminal, video projection device, and the like.
[0034] Interface(s) 206 include various interfaces that allow
computing device 200 to interact with other systems, devices, or
computing environments. Example interface(s) 206 include any number
of different network interfaces 220, such as interfaces to local
area networks (LANs), wide area networks (WANs), wireless networks,
and the Internet. Other interface(s) include user interface 218 and
peripheral device interface 222. The interface(s) 206 may also
include one or more peripheral interfaces such as interfaces for
printers, pointing devices (mice, track pad, etc.), keyboards, and
the like.
[0035] Bus 212 allows processor(s) 202, memory device(s) 204,
interface(s) 206, mass storage device(s) 208, I/O device(s) 210,
and display device 230 to communicate with one another, as well as
other devices or components coupled to bus 212. Bus 212 represents
one or more of several types of bus structures, such as a system
bus, PCI bus, IEEE 1394 bus, USB bus, and so forth.
[0036] For purposes of illustration, programs and other executable
program components are shown herein as discrete blocks, although it
is understood that such programs and components may reside at
various times in different storage components of computing device
200, and are executed by processor(s) 202. Alternatively, the
systems and procedures described herein can be implemented in
hardware, or a combination of hardware, software, and/or firmware.
For example, one or more application specific integrated circuits
(ASICs) can be programmed to carry out one or more of the systems
and procedures described herein.
[0037] Turning now to FIGS. 3A and 3B, in many instances a vehicle
housing the controller 102 (hereinafter the vehicle 300) may be
prevented from visually detecting a potential obstacle, such as another vehicle 302, by an occluding object 304 such as a building, tree, or sign. Accordingly, imaging devices 116 may not be
effective at detecting such obstacles. However, the vehicle 300 may
be close enough to detect sound generated by the other vehicle 302
or other obstacle. Although the methods disclosed herein are
particularly useful where there is an occluding object 304, the
identification of obstacles as described herein may be performed
where image data is available and may, for example, confirm the
location of an obstacle that is also visible to imaging devices
116.
[0038] Audible signals detected from the other vehicle 302 or other
obstacle, as shown in FIG. 3A, may be compared to map data as shown in FIG. 3B. For example, the position 306 of the vehicle 300 may be
identified in the map using a GPS (global positioning system)
receiver mounted to the vehicle 300 and landmarks in the region of
the position 306 may be identified from map data. The identity and
location of the occluding object 304 may also be identified. A
landmark 308 corresponding to the vehicle 302 or other obstacle may
be selected from the map data as corresponding to one or both of
the direction and distance to a source of sound as detected using
the one or more microphones 118. For example, a direction and/or location of a sound source as detected using the one or more
microphones 118 may have an uncertainty or tolerance associated
therewith. The landmark 308 corresponding to the sound source may
be selected due to the landmark 308 being positioned within that
tolerance from the direction and/or location of the sound source as
determined from the audio signals from the microphones 118.
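A minimal sketch of this selection step, assuming landmarks carry positions in a local map frame and the tolerance is expressed as a distance in meters; the Landmark type and its fields are hypothetical.

```python
import math
from dataclasses import dataclass


@dataclass
class Landmark:
    name: str
    kind: str  # e.g. "parking_garage", "emergency_station", "intersecting_road"
    x: float   # position in a local map frame, meters
    y: float


def matching_landmark(landmarks, est_x, est_y, tolerance_m):
    """Return the landmark closest to the estimated sound origin if it lies
    within the positional tolerance; otherwise None."""
    best, best_dist = None, float("inf")
    for lm in landmarks:
        dist = math.hypot(lm.x - est_x, lm.y - est_y)
        if dist < best_dist:
            best, best_dist = lm, dist
    return best if best_dist <= tolerance_m else None
```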
[0039] For example, where the landmark 308 corresponding to a sound
source is determined to be a parking garage, it may be inferred
that a vehicle is exiting the parking garage and measures may be
taken to avoid it. Likewise, where the landmark 308 is an emergency
vehicle station and the sound detected is a siren, it may be
inferred that an emergency vehicle is leaving the station, and the vehicle 300 may pull over or take other measures to avoid it. If the vehicle 300 is driving on a first road and the
landmark 308 is a second road that intersects with the first road,
it may be inferred that a vehicle on the second road could be about
to turn onto the first road.
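The three inferences above amount to a small rule table. A hedged sketch, with made-up category strings, might read:

```python
def infer_behavior(landmark_kind: str, candidate_source: str) -> str:
    """Map a (landmark, sound type) pairing to the avoidance behavior the
    passage describes; categories and return values are illustrative only."""
    if landmark_kind == "parking_garage" and candidate_source == "vehicle":
        return "expect_exiting_vehicle"   # slow down / yield near the exit
    if landmark_kind == "emergency_station" and candidate_source == "siren":
        return "pull_over_and_stop"       # emergency vehicle likely departing
    if landmark_kind == "intersecting_road" and candidate_source == "vehicle":
        return "expect_turning_vehicle"   # vehicle may turn onto our road
    return "default_avoidance"
```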
[0040] FIG. 4 illustrates a method 400 that may be executed by the
controller by processing audio signals from the one or more
microphones 118.
[0041] The method 400 may include detecting 402 a sound and
determining 404 one or more likely sources of the sound. For
example, a waveform or spectrum of the sound may be compared to those of one or more sources in the sound data 106. Candidate sound sources may be identified at step 404 as those having a similarity to the detected sound exceeding a threshold condition. Candidate sound sources may be estimated to be a vehicle, person, animal, or other sound-producing entity for which sound data 106 is stored.
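Continuing the hypothetical SoundSignature sketch from earlier, step 404 could be approximated by comparing the detected sound's spectrum against each stored signature, e.g. by cosine similarity, and keeping the best match that clears a threshold; the 0.8 figure is an arbitrary illustration.

```python
import numpy as np


def best_candidate(detected_spectrum: np.ndarray, signatures, threshold: float = 0.8):
    """Return the stored signature most similar to the detected spectrum,
    or None if nothing exceeds the similarity threshold."""
    best, best_sim = None, threshold
    for sig in signatures:
        denom = np.linalg.norm(detected_spectrum) * np.linalg.norm(sig.spectrum)
        sim = float(np.dot(detected_spectrum, sig.spectrum) / (denom + 1e-12))
        if sim > best_sim:
            best, best_sim = sig, sim
    return best
```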
[0042] Some or all of the remaining steps of the method 400 may be
executed for all sounds detected 402 or only for sounds
corresponding to vehicles or other potential obstacles.
Accordingly, if, at step 404, the sound is found not to match a
vehicle or other potential obstacle, then the remaining steps of
the method 400 may be omitted.
[0043] The method 400 may include one or both of estimating 406 a
distance to the origin of the sound and estimating 408 a direction
to the origin of the sound. In some instances, by determining
differences in the time of arrival of the sound at offset microphones 118, both the distance to the origin and its direction may be determined simultaneously, i.e., a location estimate is derived. In
other embodiments, separate microphones 118 or processing steps are
used to estimate 406, 408 the distance and direction to the origin
of the sound.
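The patent does not prescribe a localization method, but as one simple illustration, a coarse grid search over candidate 2D positions can pick the point whose predicted pairwise arrival-time differences best match the measured ones; the grid extent, step size, and constant speed of sound are all assumptions of this sketch.

```python
import itertools

import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, approximate in air at 20 degrees C


def locate_by_grid_search(mic_xy, arrival_times, extent_m=50.0, step_m=0.5):
    """Return the grid point (x, y) minimizing the squared mismatch between
    measured and predicted pairwise arrival-time differences (TDOA)."""
    mic_xy = np.asarray(mic_xy, dtype=float)
    pairs = list(itertools.combinations(range(len(mic_xy)), 2))
    measured = {(i, j): arrival_times[i] - arrival_times[j] for i, j in pairs}
    best_xy, best_err = None, float("inf")
    for x in np.arange(-extent_m, extent_m, step_m):
        for y in np.arange(-extent_m, extent_m, step_m):
            dist = np.hypot(mic_xy[:, 0] - x, mic_xy[:, 1] - y)
            err = sum(((dist[i] - dist[j]) / SPEED_OF_SOUND - measured[i, j]) ** 2
                      for i, j in pairs)
            if err < best_err:
                best_xy, best_err = (x, y), err
    return best_xy
```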
[0044] The method 400 may include retrieving 410 map data in a region including the estimated location of the sound origin as determined at steps 406 and 408, and evaluating 412 whether the map data includes a landmark corresponding to the location and candidate source of the sound origin. For example, a landmark closest to the location of the sound origin may be identified 412. Where the candidate sound source is determined at step 404 to be a vehicle and a parking garage is within a specified tolerance of the location determined at steps 406 and 408, it may be determined that the parking garage is the landmark corresponding to the sound detected at step 402. As noted above, the tolerance may be a region or range of angles and distances corresponding to the uncertainty in determining the location, direction, and distance, respectively, of the sound origin. In another scenario, where an emergency vehicle station is within the tolerance of the sound origin and the candidate sound source is an emergency vehicle, it may be determined at step 412 that the landmark corresponding to the sound detected at step 402 is the emergency vehicle station.
[0045] If a corresponding landmark is identified at step 412, then
the method 400 may include increasing 414 a certainty or confidence
value indicating that a vehicle is located at the location
determined at steps 406, 408. For example, a collision avoidance
algorithm may identify potential obstacles. An obstacle may have a
confidence value associated therewith that indicates the likelihood
that an artifact in an image or detected in audio signals actually
corresponds to a vehicle. Only those obstacles having a confidence
value higher than a threshold may be considered for collision
avoidance.
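A sketch of the confidence bookkeeping this paragraph implies, using arbitrary illustrative numbers for the boost and the threshold:

```python
def augment_confidence(obstacle: dict, landmark_found: bool,
                       boost: float = 0.2, threshold: float = 0.6) -> bool:
    """Raise an obstacle's confidence when map data corroborates it (step 414),
    then report whether it clears the bar for collision avoidance."""
    if landmark_found:
        obstacle["confidence"] = min(1.0, obstacle["confidence"] + boost)
    return obstacle["confidence"] >= threshold
```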
[0046] Alternatively, increasing 414 the certainty may increase
options available to avoid the vehicle at the sound origin. For
example, if a vehicle is detected but there is low certainty as to
its location, a collision avoidance module 120 may slow down the
vehicle in order to avoid a potential collision at a wide range of
possible locations. However, if the location of the vehicle at the
sound origin is known with high certainty (i.e. as increased at
step 414), then the collision avoidance module 120 need only adjust
speed and direction to avoid that known location along with any
other identified obstacles.
[0047] The method 400 may further include increasing 416 the certainty as to the candidate source of the sound based on the landmark
identified at step 412. For example, if the candidate source of the
sound is an emergency vehicle and the landmark determined at step
412 is determined to be an emergency vehicle station, then the
confidence that the source of the sound was in fact an emergency
vehicle may be increased 416. The collision avoidance module 120
may therefore take steps to pull over or otherwise avoid the
emergency vehicle.
[0048] In either outcome of step 412, collision avoidance is performed 418 with respect to detected obstacles. As noted above, increasing 414, 416 the certainty as to the location and source of a sound may be used by the collision avoidance module 120 to avoid collisions. If no corresponding landmark is found, the source of the sound detected at step 402 is not necessarily ignored during collision avoidance 418; rather, the range of its possible locations is simply greater. This is particularly true where the candidate sound source determined at step 404 is a vehicle or person.
[0049] The present invention may be embodied in other specific
forms without departing from its spirit or essential
characteristics. The described embodiments are to be considered in
all respects only as illustrative, and not restrictive. The scope
of the invention is, therefore, indicated by the appended claims,
rather than by the foregoing description. All changes which come
within the meaning and range of equivalency of the claims are to be
embraced within their scope.
* * * * *