Mobility assist device

Donath, Max; et al.

Patent Application Summary

U.S. patent application number 10/626953 was filed with the patent office on 2003-07-25 for mobility assist device. Invention is credited to Alexander, Lee, Cheng, Pi-Ming, Donath, Max, Gorjestani, Alec, Lim, Heon Min, Newstrom, Bryan, Pardhy, Sameer, Shankwitz, Craig R.

Publication Number: 20040066376
Application Number: 10/626953
Family ID: 32043605
Filed Date: 2003-07-25

United States Patent Application 20040066376
Kind Code A1
Donath, Max; et al. April 8, 2004

Mobility assist device

Abstract

The present invention is directed to a visual mobility assist device which provides a conformal, augmented display to assist a moving body. When the moving body is a motor vehicle, for instance (although it can be substantially any other body), the present invention assists the driver in either lane keeping or collision avoidance, or both. The system can display objects such as lane boundaries, targets, other navigational and guidance elements or objects, or a variety of other indicators, in proper perspective, to assist the driver.


Inventors: Donath, Max; (St. Louis Park, MN) ; Shankwitz, Craig R.; (Minneapolis, MN) ; Lim, Heon Min; (St. Paul, MN) ; Newstrom, Bryan; (Blaine, MN) ; Gorjestani, Alec; (Minneapolis, MN) ; Pardhy, Sameer; (Eden Prairie, MN) ; Alexander, Lee; (Woodbury, MN) ; Cheng, Pi-Ming; (Roseville, MN)
Correspondence Address:
    Brian D. Kaul
    Westman, Champlin & Kelly
    Suite 1600
    900 Second Avenue South
    Minneapolis
    MN
    55402-3319
    US
Family ID: 32043605
Appl. No.: 10/626953
Filed: July 25, 2003

Related U.S. Patent Documents

Application Number Filing Date Patent Number
10626953 Jul 25, 2003
09618613 Jul 18, 2000

Current U.S. Class: 345/169
Current CPC Class: B60R 2300/8093 20130101; B60T 2201/08 20130101; B60T 2201/086 20130101; B60R 2300/205 20130101; B60R 2300/307 20130101; B60R 2300/302 20130101; B60R 2300/804 20130101; B60R 2300/305 20130101; G01C 21/365 20130101; B60R 1/00 20130101; B60R 2300/301 20130101; B60R 2300/60 20130101
Class at Publication: 345/169
International Class: G09G 005/00

Claims



What is claimed is:

1. A display on a mobile body, comprising: a conformal, augmented display.

2. The display of claim 1 wherein the conformal, augmented display comprises: displayed objects, displayed at a perspective approximately equal to a perspective that would be perceived from an operator position at a location of the mobile body by an operator who has visual contact with actual objects corresponding to the displayed objects.

3. The display of claim 2 wherein the displayed objects include blocking templates displayed in a position to reduce glare.

4. The display of claim 2 wherein the displayed objects include enhanced text of signage located proximate to the mobile body.

5. The display of claim 1 wherein the conformal, augmented display comprises: a guidance indicator guiding the mobile body in a desired direction.

6. The display of claim 2 wherein the displayed objects are positioned within a field of view of the operator in the operator position, at a location which approximately overlies the actual objects in the field of view.

7. The display of claim 6 wherein the displayed objects are see-through.

8. The display of claim 6 wherein the displayed objects are displayed in a forward-looking field of view of the operator.

9. The display of claim 6 wherein the displayed objects are displayed in a rear or side view of the operator.

10. The display of claim 9 wherein the mobile body is a vehicle and wherein the displayed objects are displayed in a location simulating a perspective from the operator through a rearview mirror.

11. The display of claim 6 wherein the displayed objects are displayed in a side view of the operator.

12. The display of claim 11 wherein the mobile body is a vehicle and wherein the displayed objects are displayed in a location simulating a perspective from the operator through a side view mirror.

13. The display of claim 6 wherein the displayed objects comprise: at least one of traffic lane markings or virtual path boundaries.

14. The display of claim 13 wherein the displayed objects comprise: at least one of traffic lights, traffic signals and traffic signs.

15. The display of claim 13 wherein the displayed objects comprise: landmarks.

16. The display of claim 1 wherein the conformal, augmented display comprises: displayed target objects, displayed at a perspective approximately equal to a perspective that would be perceived from an operator position at a location of the mobile body by an operator who has visual contact with actual targets corresponding to the displayed target objects.

17. The display of claim 16 wherein the displayed target objects are positioned within a field of view of the operator in the operator position, at a location which approximately overlies the actual target objects in the field of view.

18. The display of claim 17 wherein the displayed target elements are displayed in a forward-looking view of the operator.

19. The display of claim 18 wherein the mobile body comprises a vehicle and wherein the vehicle travels over a roadway and wherein the displayed target elements correspond to transitory targets, not fixed in place during normal operating circumstances of the roadway.

20. The display of claim 19 wherein the transitory targets comprise: other vehicles proximate to the roadway.

21. The display of claim 19 wherein the transitory targets comprise: pedestrians or animals proximate to the roadway.

22. The display of claim 6 and further comprising: an object display indicative of objects outside the field of view of the operator.

23. The display of claim 22 wherein the object display is indicative of service or goods available in a vicinity of the mobile body.

24. The display of claim 1 and further comprising a warning display, warning of an object which the mobile body is approaching.

25. A mobility assist device, comprising: a location system providing a location signal indicative of a location of a mobile body; a data storage system storing object information indicative of objects located in a plurality of locations; a display system; and a controller coupled to the location system, the data storage system and the display system, and configured to receive the location signal and retrieve object information based on the location signal and provide a display signal to the display system such that the display system displays objects in substantially a correct perspective of an observer located at the location of the mobile body.

26. The mobility assist device of claim 25 wherein the display system is configured to provide a conformal augmented display of the objects based on the display signal.

27. The mobility assist device of claim 25 wherein the controller provides the display signal such that the objects are displayed at a position in a field of view of the observer at a location which substantially overlies the actual objects in the field of view.

28. The mobility assist device of claim 26 wherein the display system comprises: a projection system providing a projection of an image of the objects; and a partially reflective, partially transmissive screen, positioned in the field of view of the observer and positioned to receive the projection to allow the observer to see through the screen and to see the image of the objects projected thereon.

29. The mobility assist device of claim 25 and further comprising: a ranging system, coupled to the controller and configured to detect transitory objects and provide a detection signal to the controller indicative of the location of the transitory object relative to the mobile body.

30. The mobility assist device of claim 29 wherein the controller is further configured to provide the display signal, based at least in part on the detection signal, such that the display system displays the transitory objects in substantially a correct perspective of an observer located at the location of the mobile body.

31. The mobility assist device of claim 25 wherein the controller is configured to filter the display signal such that the display system displays only transitory objects based on operator-selected criteria.

32. The mobility assist device of claim 25 wherein the controller is configured to filter the display signal such that the display system displays only transitory objects and selected objects indicated by the object information that have been selected for display.

33. The mobility assist device of claim 25 and further comprising: a mobile body orientation detection system, coupled to the controller and the mobile body, detecting an orientation of the mobile body and providing an orientation signal to the controller.

34. The mobility assist device of claim 25 wherein the observer comprises a human with a head and further comprising: a head orientation tracking system, coupled to the controller, detecting an orientation of the observer's head and providing a head orientation signal to the controller.

35. The mobility assist device of claim 25 wherein the object information is intermittently updated.

36. The mobility assist device of claim 25 wherein the display system comprises a helmet-mounted display system.

37. The mobility assist device of claim 25 wherein the display system comprises a visor-mounted display system.

38. The mobility assist device of claim 25 wherein the display system comprises an eyeglass-mounted display system.

39. A method of monitoring operation of a mobility assist device having a location system providing a location signal indicative of a location of a mobile body, a data storage system storing object information indicative of objects located in a plurality of locations, a display system, a ranging system detecting a location of objects and transitory objects relative to the mobile body and providing an object detection signal based thereon, and a controller coupled to the location system, the data storage system, the ranging system and the display system, and configured to receive the location signal and the object detection signal and retrieve object information based on the location signal and provide a display signal to the display system such that the display system displays objects and transitory objects in substantially a correct perspective of an observer located at the location of the mobile body, the method comprising: receiving the object detection signal; determining whether the object detection signal correlates to the object information in the data storage system; and providing an output at least indicative of a system problem when the object detection signal and the object information are determined not to correlate.

40. The method of claim 39 wherein determining whether the object detection signal correlates to the object information in the data storage system comprises: accessing the data storage system based on the location signal; and determining whether the object detection signal indicates the presence of objects indicated by the object information for the location of the mobile body.

41. The method of claim 39 wherein providing an output comprises: when the object detection signal does not indicate the presence of objects indicated by the object information for the location of the mobile body, providing a user observable indication of a possible malfunction.

42. The method of claim 40 wherein providing an output comprises: when the object detection signal indicates the presence of objects indicated by the object information for the location of the mobile body, providing a user observable indication of proper operation.

43. The method of claim 39 wherein providing an output comprises: providing a visual display.

44. A method of controlling a mobility assist device having a location system providing a location signal indicative of a location of a mobile body, a data storage system storing object information indicative of objects located in a plurality of locations, a display system, a ranging system detecting a location of objects and transitory objects relative to the mobile body and providing an object detection signal based thereon, and a controller coupled to the location system, the data storage system, the ranging system and the display system, and comprising: receiving the location signal and the object detection signal; retrieving object information based on the location signal; and providing a filtered display signal to the display system, the display signal being filtered such that the display system displays objects and transitory objects, based on operator selected filtering criteria, in substantially a correct perspective of an observer located at the location of the mobile body.

45. A mobility assist device, comprising: a location system providing a location signal indicative of a location of a mobile body; a data storage system storing object information indicative of objects located in a plurality of locations; a neurostimulation system; and a controller coupled to the location system, the data storage system and the neurostimulation system, and configured to receive the location signal and retrieve object information based on the location signal and provide a stimulation signal to the neurostimulation system.

46. The mobility assist device of claim 45 and further comprising: a ranging system, coupled to the controller and configured to detect transitory objects and provide a detection signal to the controller indicative of the location of the transitory object relative to the mobile body.

47. The mobility assist device of claim 46 wherein the controller is further configured to provide the stimulation signal, based at least in part on the detection signal.
Description



BACKGROUND OF THE INVENTION

[0001] The present invention deals with mobility assistance. More particularly, the present invention deals with a vision assist device in the form of a head up display (HUD) for assisting mobility of a mobile body, such as a person, non-motorized vehicle or motor vehicle.

[0002] Driving a motor vehicle on the road, with a modicum of safety, can be accomplished if two different aspects of driving are maintained. The first is referred to as "collision avoidance" which means maintaining motion of a vehicle without colliding with other obstacles. The second aspect in maintaining safe driving conditions is referred to as "lane keeping" which means maintaining forward motion of a vehicle without erroneously departing from a given driving lane.

[0003] Drivers accomplish collision avoidance and lane keeping by continuously controlling vehicle speed, lateral position and heading direction by adjusting the acceleration and brake pedals, as well as the steering wheel. The ability to adequately maintain both collision avoidance and lane keeping is greatly compromised when the forward-looking visual field of a driver is obstructed. In fact, many researchers have concluded that the driver's ability to perceive the forward-looking visual field is the most essential input for the task of driving.

[0004] There are many different conditions which can obstruct (to varying degrees) the forward-looking visual field of a driver. For example, heavy snowfall, heavy rain, fog, smoke, darkness, blowing dust or sand, or any other substance or mechanism which obstructs (either partially or fully) the forward-looking visual field of a driver makes it difficult to identify obstacles and road boundaries which, in turn, compromises collision avoidance and lane keeping. Similarly, even on sunny, or otherwise clear days, blowing snow or complete coverage of the road by snow, may result in a loss of visual perception of the road. Such "white out" conditions are often encountered by snowplows working on highways, due to the nature of their task. The driver's forward-looking vision simply does not provide enough information to facilitate safe control of the vehicle. This can be exacerbated, particularly on snow removal equipment, because even on a relatively calm, clear day, snow can be blown up from the front or sides of snowplow blades, substantially obstructing the visual field of the driver.

[0005] Similarly, driving at night in heavy snowfall causes the headlight beams of the vehicle to be reflected into the driver's forward-looking view. Snowflakes glare brightly when they are illuminated at night and make the average brightness level perceived by the driver's eye higher than normal. This higher brightness level causes the iris to adapt to the increased brightness and, as a result, the eye becomes insensitive to the darker objects behind the glaring snowflakes, objects which are often vital to driving. Such objects can include road boundaries, obstacles, other vehicles, signs, etc.

[0006] Research has also been done which indicates that prolonged deprivation of visual stimulation can lead to confusion. For example, scientists believe that one third of human brain neurons are devoted to visual processing. Pilots, who are exposed to an empty visual field for longer than a certain amount of time, such as during high-altitude flight, or flight in thick fog, have a massive number of unstimulated visual neurons. This can lead to control confusion which makes it difficult for the pilot to control the vehicle. A similar condition can occur when attempting to navigate or plow a snowy road during daytime heavy snowfall in a featureless rural environment.

[0007] Many other environments are also plagued by poor visibility conditions. For instance, in military or other environments one may be moving through terrain at night, either in a vehicle or on foot, without the assistance of lights. Further, in mining environments or simply when driving on a dirt, sand or gravel surface, particulate matter can obstruct vision. In water-going vehicles, it can be difficult to navigate through canals, around rocks, into a port, or through locks and dams because obstacles may be obscured by fog, below the water, or by other weather conditions. Similarly, surveyors may find it difficult to survey land with dense vegetation or rock formations which obstruct vision. People in non-motorized vehicles (such as in wheelchairs, on bicycles, on skis, etc.) can find themselves in these environments as well. All such environments, and many others, have visual conditions which act as a hindrance to persons working in, or moving through, those environments.

SUMMARY OF THE INVENTION

[0008] The present invention is directed to a visual assist device which provides a conformal, augmented display to assist in movement of a mobile body. In one example, the mobile body is a vehicle (motorized or non-motorized) and the present invention assists the driver in either lane keeping or collision avoidance, or both. The system can display lane boundaries, other navigational or guidance elements or a variety of other objects in proper perspective, to assist the driver. In another example, the mobile body is a person (or group of people) and the present invention assists the person in either staying on a prescribed path or collision avoidance or both. The system can display path boundaries, other navigational or guidance elements or a variety of other objects in proper perspective, to assist the walking person.

BRIEF DESCRIPTION OF THE DRAWINGS

[0009] FIG. 1 is a block diagram of a mobility assist device in accordance with one embodiment of the present invention.

[0010] FIG. 2 is a more detailed block diagram of another embodiment of the mobility assist device.

[0011] FIG. 3A is a partial-pictorial and partial block diagram illustrating operation of a mobility assist device in accordance with one embodiment of the present invention.

[0012] FIG. 3B illustrates the concept of a combiner and virtual screen.

[0013] FIGS. 3C, 3D and 3E are pictorial illustrations of a conformal, augmented projection and display in accordance with one embodiment of the present invention.

[0014] FIGS. 3F, 3G, 3H and 3I are pictorial illustrations of an actual conformal, augmented display in accordance with an embodiment of the present invention.

[0015] FIGS. 4A-4C are flow diagrams illustrating general operation of the mobility assist device.

[0016] FIG. 5A illustrates coordinate frames used in accordance with one embodiment of the present invention.

[0017] FIGS. 5B-1 to 5K-3 illustrate the development of a coordinate transformation matrix in accordance with one embodiment of the present invention.

[0018] FIG. 6 is a side view of a vehicle employing the ranging system in accordance with one embodiment of the present invention.

[0019] FIG. 7 is a flow diagram illustrating a use of the present invention in performing system diagnostics and improved radar processing.

[0020] FIG. 8 is a pictorial view of a head up virtual mirror, in accordance with one embodiment of the present invention.

[0021] FIG. 9 is a top view of one embodiment of a system used to obtain position information corresponding to a vehicle.

[0022] FIG. 10 is a block diagram of another embodiment of the present invention.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

[0023] The present invention can be used with substantially any mobile body, such as a human being, a motor vehicle or a non-motorized vehicle. However, the present description proceeds with respect to an illustrative embodiment in which the invention is implemented on a motor vehicle as a driver assist device. FIG. 1 is a simplified block diagram of one embodiment of driver assist device 10 in accordance with the present invention. Driver assist device 10 includes controller 12, vehicle location system 14, geospatial database 16, ranging system 18, operator interface 20 and display 22.

[0024] In one embodiment, controller 12 is a microprocessor, microcontroller, digital computer, or other similar control device having associated memory and timing circuitry. It should be understood that the memory can be integrated with controller 12, or be located separately therefrom. The memory, of course, may include random access memory, read only memory, magnetic or optical disc drives, tape memory, or any other suitable computer readable medium.

[0025] Operator interface 20 is illustratively a keyboard, a touch-sensitive screen, a point-and-click user input device (e.g. a mouse), a keypad, a voice activated interface, a joystick, or any other type of user interface suitable for receiving user commands, and providing those commands to controller 12, as well as providing a user viewable indication of operating conditions from controller 12 to the user. The operator interface may also include, for example, the steering wheel and the throttle and brake pedals, suitably instrumented to detect the operator's desired control inputs of heading angle and speed. Operator interface 20 may also include, for example, an LCD screen, LEDs, a plasma display, a CRT, audible noise generators, or any other suitable operator interface display or speaker unit.

[0026] As is described in greater detail later in the specification, vehicle location system 14 determines and provides a vehicle location signal, indicative of the location of the vehicle in which driver assist device 10 is mounted, to controller 12. Thus, vehicle location system 14 can include a global positioning system receiver (GPS receiver) such as a differential GPS receiver, an earth reference position measuring system, a dead reckoning system (such as odometry and an electronic compass), an inertial measurement unit (such as accelerometers, inclinometers, or rate gyroscopes), etc. In any case, vehicle location system 14 periodically provides a location signal to controller 12 which indicates the location of the vehicle on the surface of the earth.

[0027] Geospatial database 16 contains a digital map which digitally locates road boundaries, lane boundaries, possibly some landmarks (such as road signs, water towers, or other landmarks) and any other desired items (such as road barriers, bridges, etc.) and describes a precise location and attributes of those items on the surface of the earth.

[0028] It should be noted that there are many possible coordinate systems that can be used to express a location on the surface of the earth, but the most common coordinate frames include longitude and latitude angles, state coordinate frames, and county coordinate frames.

[0029] Because the earth is approximately spherical in shape, it is convenient to determine a location on the surface of the earth if the location values are expressed in terms of an angle from a reference point. Longitude and latitude are the most commonly used angles to express a location on the earth's surface or in orbits around the earth. Latitude is a measurement on a globe of location north or south of the equator, and longitude is a measurement of the location east or west of the prime meridian at Greenwich, the specifically designated imaginary north-south line that passes through both geographic poles of the earth and Greenwich, England. The combination of meridians of longitude and parallels of latitude establishes a framework or grid by means of which exact positions can be determined in reference to the prime meridian and the equator. Many of the currently available GPS systems provide latitude and longitude values as location data.

[0030] Even though the actual landscape of the earth is a curved surface, land is typically utilized as if it were a flat surface. A Cartesian coordinate system whose axes are defined as three perpendicular vectors is usually used. Each state has its own standard coordinate system to locate points within its state boundaries. All construction and measurements are done using distance dimensions (such as meters or feet). Therefore, a curved surface on the earth needs to be converted into a flat surface, and this conversion is referred to as a projection. There are many projection methods used as standards for various local areas on the earth's surface. Every projection involves some degree of distortion due to the fact that the surface of a sphere is constrained to be mapped onto a plane.

[0031] One standard projection method is the Lambert Conformal Conic Projection Method. This projection method is extensively used in an ellipsoidal form for large scale mapping of regions of predominantly east-west extent, including topographic quadrangles for many of the U.S. state plane coordinate system zones, maps in the International Map of the World series and the U.S. State Base maps. The method uses well known, and publicly available, conversion equations to calculate state coordinate values from GPS receiver longitude and latitude angle data.
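
The following is a brief illustrative sketch of the kind of latitude/longitude to state plane conversion described above. It assumes the publicly available pyproj library; the EPSG code shown (26993, NAD83 / Minnesota South) is used only as an example zone and is not prescribed by the specification.

```python
# Sketch: converting GPS latitude/longitude to a state plane coordinate
# system defined with a Lambert Conformal Conic projection. Assumes the
# pyproj library; EPSG:26993 (NAD83 / Minnesota South) is only an example.
from pyproj import Transformer

# WGS84 geographic coordinates -> Minnesota South state plane (meters)
to_state_plane = Transformer.from_crs("EPSG:4326", "EPSG:26993", always_xy=True)

lon, lat = -93.265, 44.978          # example fix near Minneapolis
easting, northing = to_state_plane.transform(lon, lat)
print(f"E = {easting:.1f} m, N = {northing:.1f} m")
```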

[0032] The digital map stored in the geospatial database 16 contains a series of numeric location data of, for example, the center line and lane boundaries of a road on which system 10 is to be used, as well as construction data which is given by a number of shape parameters including starting and ending points of straight paths, the centers of circular sections, and starting and ending angles of circular sections. While the present system is described herein in terms of starting and ending points of circular sections, it could be described in terms of starting and ending points and any curvature between those points. For example, a straight path can be characterized as a section of zero curvature. Each of these items is indicated by a parameter marker, which indicates the type of parameter it is, and has associated location data giving the precise geographic location of that point on the map.

[0033] In one embodiment, each road point of the digital map in database 16 was generated at uniform 10 meter intervals. In one embodiment, the road points represent only the centerline of the road, and the lane boundaries are calculated from that centerline point. In another embodiment, both the center line and lane boundaries are mapped. Of course, geospatial database 16 also illustratively contains the exact location data indicative of the exact geographical location of street signs and other desirable landmarks. Database 16 can be obtained by manual mapping operations or by a number of automated methods such as, for example, placing a GPS receiver on the lane stripe paint spraying nozzle or tape laying mandrel to continuously obtain locations of lane boundaries.
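
As an illustration only (the specification does not prescribe a storage format), a road point record carrying a parameter marker and a geographic location, together with a simple proximity query of the kind the controller might perform, could look as follows; all field and function names are assumptions.

```python
# Sketch of a road-point record and a nearest-neighborhood query for the
# geospatial database described above. Names are illustrative only.
from dataclasses import dataclass
from math import hypot

@dataclass
class RoadPoint:
    marker: str      # e.g. "centerline", "lane_boundary", "sign"
    x: float         # state plane easting (m)
    y: float         # state plane northing (m)
    z: float = 0.0   # elevation (m)

def points_near(db, x, y, radius=200.0):
    """Return stored points within `radius` meters of the vehicle position."""
    return [p for p in db if hypot(p.x - x, p.y - y) <= radius]

# Example: centerline points spaced at uniform 10 m intervals
db = [RoadPoint("centerline", 1000.0 + 10.0 * i, 5000.0) for i in range(100)]
nearby = points_near(db, 1185.0, 5002.0)
```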

[0034] Ranging system 18 is configured to detect targets in the vicinity of the vehicle in which system 10 is implemented, and also to detect a location (such as range, range rate and azimuth angle) of the detected targets, relative to the vehicle. Targets are illustratively objects which must be monitored because they may collide with the mobile body either due to motion of the body or of the object. In one illustrative embodiment, ranging system 18 is a radar system commercially available from Eaton Vorad. However, ranging system 18 can also include a passive or active infrared system (which could also provide the amount of heat emitted from the target) or laser based ranging system, or a directional ultrasonic system, or other similar systems. Another embodiment of system 18 is an infrared sensor calibrated to obtain a scaling factor for range, range rate and azimuth which is used for transformation to an eye coordinate system.
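
The following sketch shows, for illustration only, how a detected target's range and azimuth might be expressed as a position in the vehicle-attached coordinate frame used later for display. The field names, sign convention and sensor interface are assumptions, not taken from the specification.

```python
# Sketch: converting a radar return (range, azimuth) into a position in the
# vehicle-attached coordinate frame (x to the right, y forward).
from dataclasses import dataclass
from math import sin, cos, radians

@dataclass
class RadarTarget:
    range_m: float         # distance to target (m)
    range_rate_mps: float  # closing speed (m/s), negative when closing
    azimuth_deg: float     # angle from the forward axis, positive to the right

def target_in_vehicle_frame(t: RadarTarget):
    """Project the target onto the vehicle's x (right) / y (forward) plane."""
    a = radians(t.azimuth_deg)
    return (t.range_m * sin(a), t.range_m * cos(a))

x_v, y_v = target_in_vehicle_frame(RadarTarget(45.0, -3.2, 4.0))
```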

[0035] Display 22 includes a projection unit and one or more combiners which are described in greater detail later in the specification. Briefly, the projection unit receives a video signal from controller 12 and projects video images onto one or more combiners. The projection unit illustratively includes a liquid crystal display (LCD) matrix and a high-intensity light source similar to a conventional video projector, except that it is small so that it fits near the driver's seat space. The combiner is a partially-reflective, partially transmissive beam splitter formed of optical glass or polymer for reflecting the projected light from the projection unit back to the driver. In one embodiment, the combiner is positioned such that the driver looks through the combiner, when looking through the forward-looking visual field, so that the driver can see both the actual outside road scene, as well as the computer generated images projected onto the combiner. In one illustrative embodiment, the computer-generated images substantially overlay the actual images.

[0036] It should also be noted, however, that combiners or other similar devices can be placed about the driver to cover substantially all fields of view or be implemented in the glass of the windshield and windows. This can illustratively be implemented using a plurality of projectors or a single projector with appropriate optics to scan the projected image across the appropriate fields of view.

[0037] Before discussing the operation of system 10 in greater detail, it is worth pointing out that system 10 can also, in one illustrative embodiment, be varied, as desired. For example, FIG. 2 illustrates that controller 12 may actually be formed of first controller 24 and second controller 26 (or any number of controllers with processing distributed among them, as desired). In that embodiment, first controller 24 performs the primary data processing functions with respect to sensory data acquisition, and also performs database queries in the geospatial database 16. This entails obtaining velocity and heading information from GPS receiver and correction system 28. First controller 24 also performs processing of the target signal from radar ranging system 18.

[0038] FIG. 2 also illustrates that vehicle location system 14 may illustratively include a differential GPS receiver and correction system 28 as well as an auxiliary inertial measurement unit (IMU) 30 (although other approaches would also work). Second controller 26 processes signals from auxiliary IMU 30, where necessary, and handles graphics computations for providing the appropriate video signal to display 22.

[0039] In a specific illustrative embodiment, differential GPS receiver and correction system 28 is illustratively a Novatel RT-20 differential GPS (DGPS) system with 20-centimeter accuracy operating at a 5 Hz sampling rate, or a Trimble MS 750 with 2-centimeter accuracy operating at a 10 Hz sampling rate.

[0040] FIG. 2 also illustrates that system 10 can include optional vehicle orientation detection system 31 and head tracking system 32. Vehicle orientation detection system 31 detects the orientation (such as roll and pitch) of the vehicle in which system 10 is implemented. The roll angle refers to the rotational orientation of the vehicle about its longitudinal axis (which is parallel to its direction of travel). The roll angle can change, for example, if the vehicle is driving over a banked road, or on uneven terrain. The pitch angle is the angle that the vehicle makes in a vertical plane along the longitudinal direction. The pitch angle becomes significant if the vehicle is climbing or descending a hill. Taking into account the pitch and roll angles can make the projected image more accurate, and more closely conform to the actual image seen by the driver.

[0041] Optional head tracking system 32 can be provided to accommodate movements in the driver's head or eye position relative to the vehicle. Of course, in one illustrative embodiment, the actual head and eye position of the driver is not monitored. Instead, the dimensions of the cab or operator compartment of the vehicle in which system 10 is implemented are measured and used, along with ergonomic data such as the height and eye position of an average operator in a compartment of those dimensions, and the image is projected on display 22 such that the displayed images will substantially overlie the actual images for that average operator. Specific measurements can also be taken for any given operator, so that the system more closely conforms to that operator.

[0042] Alternatively, optional head tracking system 32 is provided. Head tracking system 32 tracks the position of the operator's head, and eyes, in real time.

[0043] FIGS. 3A-3E better illustrate the display of information on display 22. FIG. 3A illustrates that display 22 includes projector 40, and combiner 42. FIG. 3A also illustrates an operator 44 sitting in an operator compartment which includes seat 46 and which is partially defined by windshield 48.

[0044] Projector 40 receives the video display signal from controller 12 and projects road data onto combiner 42. Combiner 42 is partially reflective and partially transmissive. Therefore, the operator looks forward through combiner 42 and windshield 48 to a virtual focal plane 50. The road data (such as lane boundaries) are projected from projector 40 in proper perspective onto combiner 42 such that the lane boundaries appear to substantially overlie those which the operator actually sees, in the correct perspective. In this way, when the operator's view of the actual lane boundaries becomes obstructed, the operator can safely maintain lane keeping because the operator can navigate by the projected lane boundaries.

[0045] FIG. 3A also illustrates that combiner 42, in one illustrative embodiment, is hinged to an upper surface or side surface or other structural part 52, of the operator compartment. Therefore, combiner 42 can be pivoted along an arc generally indicated by arrow 54, up and out of the view of the operator, on days when no driver assistance is needed, and down to the position shown in FIG. 3A, when the operator desires to look through combiner 42.

[0046] FIG. 3B better illustrates combiner 42, window 48 and virtual screen or focal plane 50. Combiner 42, while being partially reflective, is essentially a transparent, optically correct, coated glass or polymer lens. Light reaching the eyes of operator 44 is a combination of light passing through the lens and light reflected off of the lens from the projector. With an unobstructed forward-looking visual field, the driver actually sees two images accurately superimposed together. The image passing through the combiner 42 comes from the actual forward-looking field of view, while the reflected image is generated by the graphics processor portion of controller 12. The optical characteristics of combiner 42 allow the combination of elements to generate the virtual screen, or virtual focal plane 50, which is illustratively projected to appear approximately 30-80 feet ahead of combiner 42. This feature results in a virtual focus in front of the vehicle, and ensures that the driver's eyes are not required to focus back and forth between the real image and the virtual image, thus reducing eyestrain and fatigue.

[0047] In one illustrative embodiment, combiner 42 is formed such that the visual image size spans approximately 30° along a horizontal axis and 15° along a vertical axis with the projector located approximately 18 inches from the combiner.
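
As an illustrative calculation not stated in the specification, the apparent size of such a 30° by 15° image can be estimated for a virtual focal distance of 50 feet, a value chosen within the 30-80 foot range noted above.

```python
# Worked example: apparent size of the 30 x 15 degree projected image at a
# virtual focal plane 50 ft ahead (within the 30-80 ft range noted above).
from math import tan, radians

d = 50.0                                  # virtual focal distance (ft)
width = 2 * d * tan(radians(30 / 2))      # ~26.8 ft horizontally
height = 2 * d * tan(radians(15 / 2))     # ~13.2 ft vertically
```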

[0048] Another embodiment is a helmet supported visor (or eyeglass device) on which images are projected, through which the driver can still see. Such displays might include technologies such as those available from Kaiser Electro-Optics, Inc. of Carlsbad, Calif., The MicroOptical Corporation of Westwood, Mass., Universal Display Corporation of Ewing, N.J., Microvision, Inc. of Bothell, Wash. and IODisplay System LLC of Menlo Park, Calif.

[0049] FIGS. 3C and 3D are illustrative displays from projector 40 which are projected onto combiner 42. In FIGS. 3C and 3D, the leftmost line is the left side road boundary. The dotted line corresponds to the centerline of a two-way road, while the rightmost curved line, with vertical poles, corresponds to the right-hand side road boundary. The gray circle near the center of the image shown in FIG. 3C corresponds to a target detected and located by ranging system 18, described in greater detail later in the application. Of course, the gray shape need not be a circle but could be any icon or shape and could be transparent, opaque or translucent.

[0050] The screens illustrated in FIGS. 3C and 3D can illustratively be projected in the forward-looking visual field of the driver by projecting them onto combiner 42 with the correct scale so that objects (including the painted line stripes and road boundaries) in the screen are superimposed on the actual objects in the outer scene observed by the driver. The black area on the screens illustrated in FIGS. 3C and 3D appears transparent on combiner 42 under typical operating conditions. Only the brightly colored lines appear on the virtual image that is projected onto combiner 42. While the thickness and colors of the road boundaries illustrated in FIGS. 3C and 3D can be varied, as desired, they are illustratively white lines approximately 1-5 pixels thick, and the center line is also white and approximately 1-5 pixels thick.

[0051] FIG. 3E illustrates a virtual image projected onto an actual image as seen through combiner 42 by the driver. The outline of combiner 42 can be seen in the illustration of FIG. 3E and the area 60 which includes the projected image has been outlined in FIG. 3E for the sake of clarity, although no such outline actually appears on the display. It can be seen that the display generated is a conformal, augmented display which is highly useful in low-visibility situations. Geographic landmarks are projected onto combiner 42 and are aligned with the view out of the windshield. Fixed roadside signs (i.e., traditional speed limit signs, exit information signs, etc.) can be projected onto the display, and if desired virtually aligned with actual road signs found in the geospatial landscape. Data supporting fixed signage and other fixed items projected onto the display are retrieved from geospatial database 16.

[0052] FIGS. 3F-3H are pictorial illustrations of actual displays. FIG. 3F illustrates two vehicles in close proximity to the vehicle on which system 10 is deployed. It can be seen that the two vehicles have been detected by ranging system 18 (discussed in greater detail below) and have icons projected thereover. FIG. 3G illustrates a vehicle more distant than those in FIG. 3F. FIG. 3G also shows lane boundaries which are projected over the actual boundaries. FIG. 3H shows even more distant vehicles and also illustrates objects around an intersection. For example, right turn lane markers are shown displayed over the actual lane boundaries.

[0053] The presence and condition of variable road signs (such as stoplights, caution lights, railroad crossing warnings, etc.) can also be incorporated into the display. In that instance, processor 12 determines, based on access to the geospatial database, that a variable sign is within the normal viewing distance of the vehicle. At the same time, a radio frequency (RF) receiver (for instance) which is mounted on the vehicle decodes the signal being broadcast from the variable sign, and provides that information to processor 12. Processor 12 then proceeds to project the variable sign information to the driver on the projector. Of course, this can take any desirable form. For instance, a stop light with a currently red light can be projected, such that it overlies the actual stoplight and such that the red light is highly visible to the driver. Other suitable information and display items can be implemented as well.

[0054] For instance, text of signs or road markers can be enlarged to assist drivers with poor night vision. Items outside the driver's field of view can be displayed (e.g., at the top or sides of the display) to give the driver information about objects out of view. Such items can be fixed or transitory objects, or in the nature of advertising, such as goods or services available in the vicinity of the vehicle. Such information can be included in the geospatial database and selectively retrieved based on vehicle position.

[0055] Directional signs can also be incorporated into the display to guide the driver to a destination (such as a rest area or hotel), as shown in FIG. 3I. It can be seen that the directional arrows are superimposed directly over the lane.

[0056] It should be noted that database 16 can be stored locally on the vehicle or queried remotely. Also, database 16 can be periodically updated (either remotely or directly) with a wide variety of information such as detour or road construction information or any other desired information.

[0057] The presence and location of transitory obstacles (also referred to herein as unexpected targets) such as stalled cars, moving cars, pedestrians, etc. are also illustratively projected onto combiner 42 with proper perspective such that they substantially overlie the actual obstacles. Transitory obstacle information indicative of such transitory targets or obstacles is derived from ranging system 18. Transitory obstacles are distinguished from conventional roadside obstacles (such as road signs, etc.) by processor 12. Processor 12 senses an obstacle from the signal provided by ranging system 18. Processor 12 then, during its query of geospatial database 16, determines whether the target indicated by ranging system 18 actually corresponds to a conventional, expected roadside obstacle which has been mapped into database 16. If not, it is construed as a transitory obstacle and projected, as a predetermined geometric shape, bit map, or other icon, in its proper perspective, on combiner 42. The transitory targets basically represent items which are not in a fixed location during normal operating conditions on the roadway.

[0058] Of course, other objects can be displayed as well. Such objects can include water towers, trees, bridges, road dividers, other landmarks, etc. Such indicators can also be warnings or alarms, such as a warning not to turn the wrong way onto a one-way road or an off ramp, or that the vehicle is approaching an intersection or work zone at too high a rate of speed. Further, where the combiner is equipped with an LCD film or embedded layer, it can perform other tasks as well. Such tasks can include the display of blocking templates which block out or reduce glare from the sun or headlights from other cars. The location of the sun can be computed from the time, and its position relative to the driver can also be computed (the same is true for other cars). Therefore, an icon can simply be displayed to block the undesired glare. Similarly, the displays can be integrated with other operator perceptible features, such as haptic feedback, sound, seat or steering wheel vibration, etc.
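
A minimal sketch of the sun-position computation mentioned above, assuming the publicly available pysolar package; the subsequent comparison against the driver's line of sight, needed to actually place a blocking template, is omitted, and the position and time shown are illustrative only.

```python
# Sketch: computing the sun's position from time and vehicle location, as a
# first step toward placing a glare-blocking template. Assumes pysolar.
from datetime import datetime, timezone
from pysolar.solar import get_altitude, get_azimuth

lat, lon = 44.978, -93.265                 # example vehicle position
when = datetime(2003, 7, 25, 17, 30, tzinfo=timezone.utc)

sun_elevation_deg = get_altitude(lat, lon, when)   # angle above the horizon
sun_azimuth_deg = get_azimuth(lat, lon, when)      # compass bearing of the sun
```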

[0059] FIGS. 4A-4C illustrate the operation of system 10 in greater detail. FIG. 4A is a functional block diagram of a portion of system 10 illustrating software components and internal data flow throughout system 10. FIG. 4B is a simplified flow diagram illustrating operation of system 10, and FIG. 4C is a simplified flow diagram illustrating target filtering in accordance with one embodiment of the present invention.

[0060] It is first determined whether system 10 is receiving vehicle location information from its primary vehicle location system. This is indicated by block 62 in FIG. 4B. In other words, where the primary vehicle location system constitutes a GPS receiver, this signal may be temporarily lost. The signal may be lost, for instance, when the vehicle goes under a bridge, or simply goes through a pocket or area where GPS or correction signals cannot be received or are distorted. If the primary vehicle location signal is available, that signal is received as indicated by block 64. If not, system 10 accesses information from auxiliary inertial measurement unit 30.

[0061] Auxiliary IMU 30 may, illustratively, be complemented by a dead reckoning system which utilizes the last known position provided by the GPS receiver, as well as speed and angle information, in order to determine a new position. Receiving the location signal from auxiliary IMU 30 is illustrated by block 66.
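
A minimal sketch of the dead reckoning fallback described above, under the assumption that the last GPS fix, vehicle speed and heading are available; the variable names and interface are illustrative and not taken from the specification.

```python
# Sketch: dead-reckoning fallback when the primary GPS fix is unavailable,
# propagating the last known position with speed and heading.
from math import sin, cos, radians

def propagate(x, y, heading_deg, speed_mps, dt):
    """Advance an easting/northing position given heading (degrees from true
    north, clockwise positive) and speed over an interval dt (seconds)."""
    h = radians(heading_deg)
    return x + speed_mps * dt * sin(h), y + speed_mps * dt * cos(h)

def current_position(gps_fix, last_fix, heading_deg, speed_mps, dt):
    if gps_fix is not None:          # primary location signal available
        return gps_fix
    return propagate(*last_fix, heading_deg, speed_mps, dt)
```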

[0062] In any case, once system 10 has received the vehicle location data, system 10 also optionally receives head or eye location information, as well as optional vehicle orientation data. As briefly discussed above, the vehicle orientation information can be obtained from a roll rate gyroscope 68 to obtain the roll angle, a tilt sensor 70 (such as an accelerometer) to obtain the pitch angle, and a yaw rate sensor 69 to obtain yaw angle 83. Obtaining the head or eye location data and the vehicle orientation data is illustrated by optional blocks 72 and 74 in FIG. 4B. Also, the optional driver's eye data is illustrated by block 76 in FIG. 4A, the vehicle location data is indicated by block 78, and the pitch and roll angles are indicated by blocks 80 and 82, respectively.

[0063] A coordinate transformation matrix is constructed, as described in greater detail below, from the location and heading angle of the moving vehicle, and from the optional driver's head or eye data and vehicle orientation data, where that data is sensed. The location data is converted into a local coordinate measurement using the transformation matrix, and is then fed into the perspective projection routines to calculate and draw the road shape and target icons in the computer's graphic memory. The road shape and target icons are then projected as a virtual view in the driver's visual field, as illustrated in FIG. 3B above.

[0064] The coordinate transformation block transforms the coordinate frame of the digital map from the global coordinate frame to the local coordinate frame. The local coordinate frame is a moving coordinate frame that is illustratively attached to the driver's head. The coordinate transformation is illustratively performed by multiplying the road data points by a four-by-four homogeneous transformation matrix, although any other coordinate system transformation can be used, such as a quaternion-based or other approach. Because the vehicle is moving, the matrix must be updated in real time. Movement of the driver's eye that is included in the matrix is also measured and fed into the matrix calculation in real time. Where no head tracking system 32 is provided, the head angle and position of the driver's eyes are assumed to be constant and the driver is assumed to be looking forward from a nominal position.
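
The following sketch illustrates, under simplifying assumptions (yaw-only vehicle rotation, a fixed eye offset, and no pitch, roll or head tracking), how such a four-by-four homogeneous transformation might be assembled. It uses numpy and illustrative names rather than anything prescribed by the specification.

```python
# Sketch: a 4x4 homogeneous transform that re-expresses a road point from the
# global frame {G} (X east, Y north, Z up) in the eye frame {L}, using a
# yaw-only vehicle rotation and a fixed eye offset measured in the vehicle
# frame (x right, y forward, z up). Pitch, roll and head tracking omitted.
import numpy as np

def global_to_eye(vehicle_xy, heading_deg, eye_offset_v):
    """heading_deg is measured from true north, positive clockwise."""
    th = np.radians(heading_deg)
    # Rotation taking global (east, north, up) components into vehicle
    # (right, forward, up) components for the given heading.
    R = np.array([[ np.cos(th), -np.sin(th), 0.0],
                  [ np.sin(th),  np.cos(th), 0.0],
                  [ 0.0,         0.0,        1.0]])
    T_gv = np.eye(4)
    T_gv[:3, :3] = R
    T_gv[:3, 3] = -R @ np.array([vehicle_xy[0], vehicle_xy[1], 0.0])
    T_ve = np.eye(4)
    T_ve[:3, 3] = -np.asarray(eye_offset_v)   # eye location in vehicle frame
    return T_ve @ T_gv

# Example: a road point 20 m ahead of a vehicle at (1000, 5000) facing east.
T = global_to_eye((1000.0, 5000.0), 90.0, (0.4, -1.2, 1.1))
p_eye = T @ np.array([1020.0, 5000.0, 0.0, 1.0])
```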

[0065] The heading angle of the vehicle is estimated from the past history of the GPS location data. Alternatively, a rate gyroscope can be used to determine vehicle heading as well. An absolute heading angle is used in computing the correct coordinate transformation matrix. As noted initially, though heading angle estimation by successive differentiation of GPS data can be used, any other suitable method to measure an absolute heading angle can be used as well, such as a magnetometer (electronic compass) or an inertial measurement unit. Further, where pitch and roll sensors are not used, these angles can be assumed to be 0.
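
A minimal sketch of heading estimation from successive GPS fixes, as mentioned above; a fielded system would smooth over several fixes or fuse the estimate with a rate gyroscope, and the function name is illustrative.

```python
# Sketch: estimating the absolute heading angle (degrees from true north,
# clockwise positive) from two successive GPS positions.
from math import atan2, degrees

def heading_from_fixes(prev_xy, curr_xy):
    dx = curr_xy[0] - prev_xy[0]   # easting change
    dy = curr_xy[1] - prev_xy[1]   # northing change
    return degrees(atan2(dx, dy)) % 360.0

heading = heading_from_fixes((1000.0, 5000.0), (1002.0, 5010.0))  # ~11.3 deg
```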

[0066] In any case, after the vehicle position data 78 is received, the ranging information from ranging system 18 is also received by controller 12 (shown in FIG. 2). This is indicated by blocks 83 in FIG. 4A and by block 86 in FIG. 4B. The ranging data illustratively indicates the presence and location of targets around the vehicle. For example, the radar ranging system 18 developed and available from Eaton Vorad, or Delphi, Celsius Tech, or other vendors provides a signal indicative of the presence of a radar target, its range, its range rate and the azimuth angle of that target with respect to the radar apparatus.

[0067] Based on the position signal, controller 12 queries the digital road map in geospatial database 16 and extracts local road data 88. The local road data provides information with respect to road boundaries as seen by the operator in the position of the vehicle, and also other potential radar targets, such as road signs, road barriers, etc. Accessing geospatial database 16 (which can be stored on the vehicle and receive periodic updates or can be stored remotely and accessed wirelessly) is indicated by block 90 in FIG. 4B.

[0068] Controller 12 determines whether the targets indicated by target data 83 are expected targets. Controller 12 does this by examining the information in geospatial database 16. In other words, if the targets correspond to road signs, road barriers, bridges, or other items which would provide a radar return to ranging system 18, but which are expected because they are mapped into database 16 and do not need to be brought to the attention of the driver, that information can be filtered out such that the driver is not alerted to every single possible item on the road which would provide a radar return. Certain objects may a priori be programmed to be brought to the attention of the driver. Such items may be guard rails, bridge abutments, etc., and the filtering can be selective, as desired. If, for example, the driver were to exit the roadway, all filtering can be turned off so that all objects are brought to the driver's attention. The driver can change filtering based on substantially any predetermined filtering criteria, such as distance from the road or driver, location relative to the road or the driver, whether the objects are moving or stationary, or substantially any other criteria. Such criteria can be invoked by the user through the user interface, or they can be pre-programmed into controller 12.

[0069] However, where the geospatial database does not indicate an expected target in the present location, then the target information is determined to correspond to an unexpected target, such as a moving vehicle ahead of the vehicle on which system 10 is implemented, a stalled car or a pedestrian on the side of the road, or some other transitory target which has not been mapped into the geospatial database as a permanent, or expected, target. It has been found that if all expected targets are brought to the operator's attention, this substantially amounts to noise such that when real targets are brought to the operator's attention, they are not as readily perceived by the operator. Therefore, filtering of targets not posing a threat to the driver is performed, as is illustrated by block 92 in FIG. 4B.

[0070] Once such targets have been filtered, the frame transformation is performed using the transformation matrix. The result of the coordinate frame transformation provides the road boundary data, as well as the target data, seen from the driver's eye perspective. The road boundary and target data is output, as illustrated by block 94 in FIG. 4B, and as indicated by block 96 in FIG. 4A. Based on the output road and target data, the road and target shapes are generated by processor 12 for projection in the proper perspective.

[0071] Generation of road and target shapes is illustrated by block 98 in FIG. 4A, and the perspective projection is illustrated by blocks 100 in FIG. 4A and 102 in FIG. 4B.

[0072] It should also be noted that the actual image projected is clipped such that it only includes that part of the road which would be visible by the operator with an unobstructed forward-looking visual field. Clipping is described in greater detail below, and is illustrated by block 104 in FIG. 4A. The result of the entire process is the projected road and target data as illustrated by block 106 in FIG. 4A.

[0073] FIG. 4C is a more detailed flow diagram illustrating how targets are projected or filtered from the display. First, it is determined whether ranging system 18 is providing a target signal indicating the presence of a target. This is indicated by block 108. If so, then when controller 12 accesses geospatial database 16, controller 12 determines whether sensed targets correlate to any expected targets. This is indicated by block 110. If so, the expected targets are filtered from the sensed targets. It should be noted that ranging system 18 may provide an indication of a plurality of targets at any given time. In that case, only the expected targets are filtered from the target signal. This is indicated by block 112. If any targets remain, other than the expected targets, the display signal is generated in which the unexpected, or transitory, targets are placed conformally on the display. This is indicated by block 114.
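
A simplified sketch of the expected-target filtering of FIG. 4C, assuming sensed and mapped targets have already been expressed in a common coordinate frame; the 3 meter tolerance and all names are illustrative, not from the specification.

```python
# Sketch of the target filter in FIG. 4C: a sensed radar target that lies
# within a tolerance of a mapped (expected) object is filtered out; anything
# left is treated as a transitory target and displayed.
from math import hypot

def filter_targets(sensed_xy, expected_xy, tol_m=3.0):
    transitory = []
    for sx, sy in sensed_xy:
        expected = any(hypot(sx - ex, sy - ey) <= tol_m for ex, ey in expected_xy)
        if not expected:
            transitory.append((sx, sy))
    return transitory

# A mapped guard rail at (12, 40) is expected, so only the second return survives.
remaining = filter_targets([(12.0, 40.0), (1.5, 55.0)], [(12.0, 40.0)])
```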

[0074] Of course, the display signal is also configured such that guidance markers (such as lane boundaries, lane striping or road edges) are also placed conformally on the display. This is indicated by block 116. The display signal is then output to the projector such that the conformal, augmented display is provided to the user. This is indicated by block 118.

[0075] It can thus be seen that the term "conformal" is used herein to indicate that the "virtual image" generated by the present system projects images represented by the display in a fashion such that they are substantially aligned with, and in proper perspective relative to, the actual images which would be seen by the driver with an unobstructed field of view. The term "augmented", as used herein, means that the actual image perceived by the operator is supplemented by the virtual image projected onto the head up display. Therefore, even if the driver's forward-looking visual field is obstructed, the augmentation allows the operator to receive and process information, in the proper perspective, as to the actual objects which would be seen with an unobstructed view.

[0076] A discussion of coordinate frames, in greater detail, is now provided for the sake of clarity. There are essentially four coordinate frames used to construct the graphics projected in display 22. Those coordinate frames include the global coordinate frame, the vehicle-attached coordinate frame, the local or eye coordinate frame, and the graphics screen coordinate frame. The position sensor may be attached to a backpack or helmet worn by a walking person in which case this would be the vehicle-attached coordinate frame. The global coordinate frame is the coordinate frame used for road map data construction as illustrated by FIG. 5A. The global coordinate frame is illustrated by the axes 120. All distances and angles are measured about these axes. FIG. 5A also shows vehicle 124, with the vehicle coordinate frame represented by axes 126 and the user's eye coordinate frame (also referred to as the graphic screen coordinate frame) illustrated by axes 128. FIG. 5A also shows road point data 130, which illustrates data corresponding to the center of road 132.

[0077] The capital letters "X", "Y" and "Z" in this description are used as names of each axis. The positive Y-axis is the direction to true north, and the positive X-axis is the direction to true east in global coordinate frame 120. Compass 122 is drawn to illustrate that the Y-axis of global coordinate frame 120 points due north. The elevation is defined by the Z-axis and is used to express elevation of the road shape and objects adjacent to, or on, the road.

[0078] All of the road points 130 stored in the road map file in geospatial database 16 are illustratively expressed in terms of the global coordinate frame 120. The vehicle coordinate frame 126, {V}, is defined and used to express the vehicle configuration data, including the location and orientation of the driver's eye within the operator compartment, relative to the origin of the vehicle. The vehicle coordinate frame 126 is attached to the vehicle and moves as the vehicle moves. The origin is defined as the point on the ground under the location of the GPS receiver antenna, and everything in the vehicle is measured from that ground point. Other points, such as a point located on a vertical axis through the GPS receiver antenna or at any other location on the vehicle, can also be selected as the origin.

[0079] The forward moving direction is defined as the positive y-axis. The direction to the right when the vehicle is moving forward is defined as the positive x-axis, and the vertical upward direction is defined as the positive z-axis which is parallel to the global coordinate frame Z-axis. The yaw angle, i.e. heading angle, of the vehicle, is measured from true north, and has a positive value in the clockwise direction (since the positive z-axis points upward). The pitch angle is measured about the x-axis in coordinate frame 126 and the roll angle is measured as a rotation about the y-axis in coordinate frame 126.

[0080] The local L-coordinate frame 128 is defined and used to express the road data relative to the viewer's location and direction. The coordinate system 128 is also referred to herein as the local coordinate frame. Even though the driver's eye location and orientation may be assumed to be constant (where no head tracking system 30 is used) the global information still needs to be converted into the eye-coordinate frame 128 for calculating the perspective projection. The location of the eye, i.e. the viewing point, is the origin of the local coordinate frame. The local coordinate frame 128 is defined with respect to the vehicle coordinate frame. The relative location of the driver's eye from the origin of the vehicle coordinate frame is measured and used in the coordinate transformation matrix described in greater detail below. The directional angle information from the driver's line of sight is used in constructing the projection screen. This angle information is also integrated into the coordinate transformation matrix.

[0081] Ultimately, the objects in the outer world are drawn on a flat two-dimensional video projection screen which corresponds to the virtual focal plane, or virtual screen 50 perceived by human drivers. The virtual screen coordinate frame has only two axes. The positive x-axis of the screen is defined to be the same as the positive x-axis of the vehicle coordinate frame 126 for ease in coordinate conversion. The upward direction in the screen coordinate frame is the same as the positive z-axis and the forward-looking direction (or distance to the objects located on the visual screen) is the positive y-axis. The positive x-axis and the y-axis in the virtual projection screen 50 are mapped to the positive x-axis and the negative y-axis in computer memory space, because the upper left corner is deemed to be the beginning of the video memory.
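
The mapping from the virtual screen into video memory can be illustrated with a short sketch. The following fragment assumes a screen origin at the center of the virtual screen and arbitrary example dimensions and resolution; these choices, and the function name, are illustrative assumptions rather than details taken from the specification.

```python
# Sketch of the screen-to-video-memory mapping: the screen's horizontal axis
# maps to the pixel x axis, while the upward screen direction maps to
# decreasing pixel rows, because video memory begins at the upper left corner.
# Screen dimensions, resolution, and the centered origin are assumptions.

def screen_to_pixel(s_x, s_z, screen_width, screen_height, res_x, res_y):
    """Convert virtual-screen coordinates (s_x horizontal, s_z vertical,
    origin at the screen center) to pixel coordinates (origin upper left)."""
    px = (s_x / screen_width + 0.5) * res_x
    py = (0.5 - s_z / screen_height) * res_y   # flipped: up means a smaller row index
    return px, py

print(screen_to_pixel(0.0, 0.0, 0.4, 0.3, 640, 480))   # screen center -> (320.0, 240.0)
```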

[0082] Road data points, including the left and right edges, which are expressed with respect to the global coordinate frame {G} as P_k, shown in FIG. 5B-1, are converted into the local coordinate frame {L}, which is attached to the moving vehicle 124 coordinate frame {V}. The vehicle frame's origin (O_V) and direction (θ_V) change continually as the vehicle 124 moves. The origin (O_L) of the local coordinate frame {L}, i.e. the driver's eye location, and its orientation (θ_E) change as the driver moves his or her head and eyes. Even though the driver's orientation (θ_E) can be assumed constant for a simplified embodiment of system 10, all of the potential effects are considered in the coordinate transformation equations below for a more detailed illustrative embodiment of system 10. All road data that are given in terms of the global coordinate frame {G} ultimately need to be converted into the eye coordinate frame {L}. They are then projected onto the video screen 22 using a perspective transformation.

[0083] A homogeneous transformation matrix [T] was defined and used to convert the global coordinate data into local coordinate data. The matrix [T] is developed illustratively, as follows. The parameters in FIGS. 5B-1 and 5B-2 are as follows:

[0084] P_k is the k-th road point;

[0085] O_G is the origin of the global coordinate frame;

[0086] O_V is the origin of the vehicle coordinate frame with respect to the global coordinate frame; and

[0087] O_E is the origin of the local eye-attached coordinate frame.

[0088] Any point in 3-dimensional space can be expressed in terms of either a global coordinate frame or a local coordinate frame. Because everything seen by the driver is defined with respect to his or her location and viewing direction (i.e. the relative geometrical configuration between the viewer and the environment) all of the viewable environment should be expressed in terms of a local coordinate frame. Then, any objects or line segments can be projected onto a flat surface or video screen by means of the perspective projection. Thus, the mathematical calculation of the coordinate transformation is performed by constructing the homogenous transformation matrix and applying the matrix to the position vectors. The coordinate transformation matrix [T] is defined as a result of the multiplication of a number of matrices described in the following paragraphs.

[0089] To change the global coordinate data to the local coordinate data, the translation and rotation of the frame should be considered together. The translation of the coordinate frame transforms point data using the following matrix equation (with reference to FIG. 5C):

$$x = X - O_{LX},\qquad y = Y - O_{LY},\qquad z = Z - O_{LZ} \tag{Eq. 1}$$

[0090] or, in matrix form,

$$\begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix} =
\begin{bmatrix} 1 & 0 & 0 & -O_{LX} \\ 0 & 1 & 0 & -O_{LY} \\ 0 & 0 & 1 & -O_{LZ} \\ 0 & 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}
\qquad\text{or}\qquad {}^{L}p = {}^{L}_{G}[T_{tran}]\,{}^{G}P \tag{Eq. 2}$$

[0091] where,

$${}^{L}p = \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix},\qquad
{}^{G}P = \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix},\qquad
{}^{L}_{G}[T_{tran}] = \begin{bmatrix} 1 & 0 & 0 & -O_{LX} \\ 0 & 1 & 0 & -O_{LY} \\ 0 & 0 & 1 & -O_{LZ} \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{Eq. 3}$$

[0092] The symbol ^G P is a point in terms of coordinates X, Y, Z as referenced from the global coordinate system. The symbol ^L p represents the same point in terms of x, y, z in the local coordinate system. The transformation matrix ^L_G[T_tran] allows for a translational transformation from the global {G} coordinate system to the local {L} coordinate system.

[0093] The rotation of the coordinate frame about the Z-axis can be expressed by the following matrix equation (with respect to FIG. 5D):

$$x = X\cos\theta + Y\sin\theta,\qquad y = -X\sin\theta + Y\cos\theta,\qquad z = Z \tag{Eq. 4}$$

[0094] or, in matrix form,

$$\begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix} =
\begin{bmatrix} \cos\theta & \sin\theta & 0 & 0 \\ -\sin\theta & \cos\theta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} \tag{Eq. 5}$$

[0095] This equation can be written using the following matrix equation,

$${}^{L}p = {}^{L}_{G}[T_{rot}]\,{}^{G}P \tag{Eq. 6}$$

[0096] where the rotational transformation from the {G} to the {L} coordinate system is

$${}^{L}_{G}[T_{rot}] = \begin{bmatrix} \cos\theta & \sin\theta & 0 & 0 \\ -\sin\theta & \cos\theta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{Eq. 7}$$

[0097] For rotation and translation at the same time, these two matrices can be combined by the following equations,

$${}^{L}p = {}^{L}_{G}[T]\,{}^{G}P \tag{Eq. 8}$$

[0098] where

$${}^{L}_{G}[T] = {}^{L}_{G}[T_{rot}]\,{}^{L}_{G}[T_{tran}] =
\begin{bmatrix} \cos\theta & \sin\theta & 0 & 0 \\ -\sin\theta & \cos\theta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} 1 & 0 & 0 & -O_{LX} \\ 0 & 1 & 0 & -O_{LY} \\ 0 & 0 & 1 & -O_{LZ} \\ 0 & 0 & 0 & 1 \end{bmatrix}
= \begin{bmatrix} \cos\theta & \sin\theta & 0 & -O_{LX}\cos\theta - O_{LY}\sin\theta \\ -\sin\theta & \cos\theta & 0 & O_{LX}\sin\theta - O_{LY}\cos\theta \\ 0 & 0 & 1 & -O_{LZ} \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{Eq. 9}$$

[0099] This relationship can be expanded through the {G}, {V}, and {L} coordinate frames. The coordinate transform matrix [T] is defined as follows, assuming that only the heading angles θ_E and θ_V are considered as rotational angle data:

$${}^{L}p = {}^{L}_{V}[T]\,{}^{V}_{G}[T]\,{}^{G}P = [T]\,{}^{G}P \tag{Eq. 10}$$

[0100] where,

$$[T] = \begin{bmatrix} c_E & s_E & 0 & -O_{LX}c_E - O_{LY}s_E \\ -s_E & c_E & 0 & O_{LX}s_E - O_{LY}c_E \\ 0 & 0 & 1 & -O_{LZ} \\ 0 & 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} c_V & s_V & 0 & -O_{VX}c_V - O_{VY}s_V \\ -s_V & c_V & 0 & O_{VX}s_V - O_{VY}c_V \\ 0 & 0 & 1 & -O_{VZ} \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{Eq. 11}$$

[0101] and,

$$c_E = \cos\theta_E,\quad s_E = \sin\theta_E,\quad c_V = \cos\theta_V,\quad s_V = \sin\theta_V,$$

$$c_{E+V} = \cos(\theta_E + \theta_V),\quad s_{E+V} = \sin(\theta_E + \theta_V) \tag{Eq. 12}$$

[0102] The resultant matrix [T] is then as follows:

$$[T] = \begin{bmatrix} T_{11} & T_{12} & T_{13} & T_{14} \\ T_{21} & T_{22} & T_{23} & T_{24} \\ T_{31} & T_{32} & T_{33} & T_{34} \\ T_{41} & T_{42} & T_{43} & T_{44} \end{bmatrix} \tag{Eq. 13}$$

[0103] where,

$$T_{11} = c_E c_V - s_E s_V = \cos(\theta_E + \theta_V) \tag{Eq. 14}$$

$$T_{12} = c_E s_V + s_E c_V = \sin(\theta_E + \theta_V) \tag{Eq. 15}$$

$$T_{13} = 0 \tag{Eq. 16}$$

$$T_{14} = c_E(-O_{VX}c_V - O_{VY}s_V) + s_E(O_{VX}s_V - O_{VY}c_V) + (-O_{LX}c_E - O_{LY}s_E) = -c_{E+V}(O_{VX} + O_{LX}c_V - O_{LY}s_V) - s_{E+V}(O_{VY} + O_{LX}s_V + O_{LY}c_V) \tag{Eq. 17}$$

$$T_{21} = -s_E c_V - c_E s_V = -\sin(\theta_E + \theta_V) \tag{Eq. 18}$$

$$T_{22} = -s_E s_V + c_E c_V = \cos(\theta_E + \theta_V) \tag{Eq. 19}$$

[0104] $$T_{23} = 0 \tag{Eq. 20}$$

$$T_{24} = s_E(O_{VX}c_V + O_{VY}s_V) + c_E(O_{VX}s_V - O_{VY}c_V) + (O_{LX}s_E - O_{LY}c_E) = s_{E+V}(O_{VX} + O_{LX}c_V - O_{LY}s_V) - c_{E+V}(O_{VY} + O_{LX}s_V + O_{LY}c_V) \tag{Eq. 21}$$

$$T_{31} = 0 \tag{Eq. 22}$$

$$T_{32} = 0 \tag{Eq. 23}$$

$$T_{33} = 1 \tag{Eq. 24}$$

$$T_{34} = -O_{VZ} - O_{LZ} \tag{Eq. 25}$$

$$T_{41} = 0 \tag{Eq. 26}$$

$$T_{42} = 0 \tag{Eq. 27}$$

$$T_{43} = 0 \tag{Eq. 28}$$

$$T_{44} = 1 \tag{Eq. 29}$$

[0105] By multiplying the road points P by the [T] matrix, we will have local coordinate data p. The resultant local coordinate value p is then fed into the perspective projection routine to calculate the projected points on the heads up display screen 22. The calculations for the perspective projection are now discussed.
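
For illustration, a short sketch of the transformation chain of Eqs. 1 through 29 follows, assuming (as in the text) that only the heading angles θ_V and θ_E are used as rotational data. The function names and the numeric values are hypothetical examples, not values taken from the specification.

```python
import math

# Sketch of the global-to-eye transformation (Eqs. 1-29): rotation about z by a
# heading angle combined with a frame translation, composed for the vehicle and
# eye frames.  Angles are in radians; all names and values are illustrative.

def frame_transform(theta, origin):
    """4x4 homogeneous transform of Eq. 9: translate by -origin, then rotate
    about the z axis by theta."""
    c, s = math.cos(theta), math.sin(theta)
    ox, oy, oz = origin
    return [
        [  c,   s, 0.0, -ox * c - oy * s],
        [ -s,   c, 0.0,  ox * s - oy * c],
        [0.0, 0.0, 1.0, -oz],
        [0.0, 0.0, 0.0, 1.0],
    ]

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def apply(t, point):
    x, y, z = point
    vec = (x, y, z, 1.0)
    return tuple(sum(t[i][k] * vec[k] for k in range(4)) for i in range(3))

# [T] = L_V[T] * V_G[T]  (Eq. 10): global -> vehicle, then vehicle -> eye.
T_vg = frame_transform(theta=math.radians(30.0), origin=(100.0, 200.0, 0.0))  # vehicle heading and GPS ground point
T_lv = frame_transform(theta=0.0, origin=(0.4, -1.2, 1.1))                    # eye location in the vehicle frame
T = mat_mul(T_lv, T_vg)

road_point_global = (110.0, 250.0, 0.0)
print(apply(T, road_point_global))   # the same road point expressed in the eye frame
```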

[0106] After the coordinate transformation, all the road data are expressed with respect to the driver's viewing location and orientation. These local coordinate data are illustratively projected onto a flat screen (i.e., the virtual screen 50 of heads up display 22), as shown in FIGS. 5E-1 to 5F-3.

[0107] Projecting the scene onto the display screen can be done using simple and well-known geometrical mathematics and computer graphics theory. Physically, the display screen is the virtual focal plane. Thus, the display screen is the plane located at the s_y position, parallel to the z-x plane, where s_x and s_z are the horizontal and vertical dimensions of the display screen. Where an object is projected onto the screen, it should be projected with the correct perspective so that the projected images match the outer scene. It is desirable that the head up display system match the drawn road shapes (exactly or at least closely) to the actual road in front of the driver. The perspective projection makes closer objects appear larger and farther objects appear smaller.

[0108] The perspective projection can be calculated from triangle similarity as shown in FIGS. 5G to 5H-2. From the figures, one can find the location of the point s(x,z) for the known data p(x,y,z).

[0109] The values of s_x and s_z can be found by similarity of triangles:

$$p_y : s_y = p_x : s_x \tag{Eq. 30}$$

[0110] so,

$$s_x = \frac{p_x\, s_y}{p_y} \tag{Eq. 31}$$

$$s_z = \frac{p_z\, s_y}{p_y} \tag{Eq. 32}$$

[0111] As expected, s.sub.x and s.sub.z are small when the value p.sub.y is big (i.e. when the object is located far away). This is the nature of perspective projection.
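
A brief sketch of the projection of Eqs. 30 through 32 follows; the screen distance used in the example calls is an arbitrary value, and the function name is hypothetical.

```python
# Sketch of the perspective projection (Eqs. 31-32): a point p = (p_x, p_y, p_z)
# in the eye frame is projected onto the virtual screen located a distance s_y
# in front of the viewer.  The screen distance used here is an example value.

def project(p, s_y):
    p_x, p_y, p_z = p
    if p_y <= 0.0:
        raise ValueError("point is behind the viewer; clip before projecting")
    return p_x * s_y / p_y, p_z * s_y / p_y   # (s_x, s_z)

# The same lateral offset appears smaller when the point is farther away.
print(project((2.0, 100.0, 0.0), 1.0))   # -> (0.02, 0.0)
print(project((2.0, 10.0, 0.0), 1.0))    # -> (0.2, 0.0)
```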

[0112] After calculating the projected road points on the display screen by the perspective projection, the points are connected using straight lines to build up the road shapes. The line-connected road shape provides a better visual cue of the road geometry than plotting just a series of dots.

[0113] The road points that have passed behind the driver's moving position do not need to be drawn. Furthermore, because the projection screen has limited size, only road points and objects that fall within the visible field of view need to be drawn on the projection screen. Finding and then not attempting to draw these points outside the field of view can be important in order to reduce the computation load of controller 12 and to enhance the display refresh speed.

[0114] The visible limit is illustrated by FIGS. 5I to 5J-3. The visible three-dimensional volume is defined as a rectangular cone cut at the display screen. Every object in this visible region needs to be displayed on the projection screen. Objects in the small rectangular cone defined by O_L and the display screen, a three-dimensional volume between the viewer's eye and the display screen, are displayed at an enlarged size. If an object in this region is too close to the viewer, it results in an out-of-limit error or a divide-by-zero error during the calculation. However, usually there are no objects located in this "enlarging space." FIGS. 5J-1 to 5J-3 and the following equations were used for checking whether an object is in the visible space or not. Using these clipping techniques, if the position of a point in the local coordinate frame is defined as p(x, y, z), then this point is visible to the viewer only if:

[0115] the point p is in front of the y = +c_1 x plane (which is marked as dark in the top view diagram of FIG. 5J-1);

[0116] the point p is in front of the y = -c_1 x plane;

[0117] the point p is in front of the y = +c_2 z plane (the dark region in the right hand side view diagram of FIG. 5J-3);

[0118] the point p is in front of the y = -c_2 z plane; and

[0119] the point p is in front of the display screen.

[0120] The equations in the diagrams of FIGS. 5J-1 to 5J-3 (e.g. y = +c_1 x) are not line equations but equations of planes in 3-dimensional space. The above conditions can be expressed mathematically by the following equations, which describe what is meant by "in front of":

$$p_y > +c_1 p_x \tag{Eq. 33}$$

$$p_y > -c_1 p_x \tag{Eq. 34}$$

$$p_y > +c_2 p_z \tag{Eq. 35}$$

$$p_y > -c_2 p_z \tag{Eq. 36}$$

and

$$p_y > s_y \tag{Eq. 37}$$

[0121] Only those points that satisfy all of the five conditions are in the visible region and are then drawn on the projection screen.
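
The five visibility conditions can be checked directly, as in the following sketch; the constants c_1, c_2 and s_y used in the example calls are arbitrary illustrative values.

```python
# Sketch of the visibility test of Eqs. 33-37: a point p = (p_x, p_y, p_z) in the
# eye frame is drawn only if it lies inside the viewing cone defined by c1
# (horizontal) and c2 (vertical) and in front of the screen at distance s_y.

def is_visible(p, c1, c2, s_y):
    p_x, p_y, p_z = p
    return (p_y > +c1 * p_x and
            p_y > -c1 * p_x and
            p_y > +c2 * p_z and
            p_y > -c2 * p_z and
            p_y > s_y)

print(is_visible((1.0, 20.0, 0.5), c1=2.0, c2=3.0, s_y=1.0))   # True: well inside the cone
print(is_visible((1.0, 0.5, 0.5), c1=2.0, c2=3.0, s_y=1.0))    # False: behind the screen plane
```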

[0122] In some cases, a line segment of the road may have one end in the visible region and the other end outside of it. In this case, the visible portion of the line segment should be calculated and drawn on the screen. FIGS. 5K-1 to 5K-3 show one of many possible situations. FIG. 5K-1 is a top view, which is a projection onto the xy plane. How point p is located, so that only the visible portion of the segment is drawn, will now be described.

[0123] The ratio value k, which marks the position of point p along the segment from p_1 to p_2, ranges from 0.0 to 1.0. The position of point p can be written as,

$$p = p_1 + k(p_2 - p_1) = p_1 + k\,\Delta p \tag{Eq. 38}$$

[0124] where,

[0125] k is an arbitrary real number (0 ≤ k ≤ 1), and

[0126] p_1 = (p_1x, p_1y, p_1z), p_2 = (p_2x, p_2y, p_2z), and

[0127] Δp = p_2 - p_1 = (p_2x - p_1x, p_2y - p_1y, p_2z - p_1z)

[0128] The x and y components of the above equation can be written as follows:

$$p_x = p_{1x} + k\,\Delta p_x \tag{Eq. 39}$$

$$p_y = p_{1y} + k\,\Delta p_y \tag{Eq. 40}$$

[0129] The x and y components of point p also should satisfy the line equation y = +c_1 x in order to intersect with the line. Therefore,

$$p_y = p_{1y} + k\,\Delta p_y = c_1(p_{1x} + k\,\Delta p_x) = c_1 p_{1x} + k\,c_1\,\Delta p_x \tag{Eq. 41}$$

$$k(\Delta p_y - c_1\,\Delta p_x) = c_1 p_{1x} - p_{1y} \tag{Eq. 42}$$

[0130] then,

$$k = \frac{c_1 p_{1x} - p_{1y}}{\Delta p_y - c_1\,\Delta p_x} \tag{Eq. 43}$$

[0131] Applying the value k to the above equations, p_x, p_y and p_z can be determined as follows,

$$p_x = p_{1x} + \Delta p_x\,\frac{c_1 p_{1x} - p_{1y}}{\Delta p_y - c_1\,\Delta p_x} \tag{Eq. 44}$$

$$p_y = p_{1y} + \Delta p_y\,\frac{c_1 p_{1x} - p_{1y}}{\Delta p_y - c_1\,\Delta p_x} \tag{Eq. 45}$$

$$p_z = p_{1z} + \Delta p_z\,\frac{c_1 p_{1x} - p_{1y}}{\Delta p_y - c_1\,\Delta p_x} \tag{Eq. 46}$$

[0132] Using these values of p_x, p_y and p_z, the projected values s_x and s_z can be calculated by a perspective projection in the same manner as for the other points.
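
A compact sketch of this segment clipping, for the single plane y = c_1 x, is shown below; a full implementation would repeat the same computation for each of the clipping planes. The names and values are illustrative.

```python
# Sketch of Eqs. 38-46: find where the segment from p1 to p2 crosses the
# clipping plane y = c1*x, so that only the visible portion is drawn.

def clip_point(p1, p2, c1):
    dx, dy, dz = (p2[i] - p1[i] for i in range(3))
    denom = dy - c1 * dx
    if denom == 0.0:
        return None                              # segment parallel to the plane
    k = (c1 * p1[0] - p1[1]) / denom             # Eq. 43
    if not 0.0 <= k <= 1.0:
        return None                              # no crossing within the segment
    return (p1[0] + k * dx, p1[1] + k * dy, p1[2] + k * dz)   # Eqs. 44-46

# One endpoint inside the visible region, the other outside:
print(clip_point((1.0, 5.0, 0.0), (4.0, 2.0, 0.0), c1=1.0))   # -> (3.0, 3.0, 0.0)
```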

[0133] FIG. 6 illustrates a vehicle 200 with placement of ranging system 18 thereon. Vehicle 200 is, illustratively, a snow plow which includes an operator compartment 202 and a snow plow blade 204. Ranging system 18, in the embodiment illustrated in FIG. 6, includes a first radar subsystem 206 and a second radar subsystem 208. It can be desirable to be able to locate targets closely proximate to blade 204. However, since radar subsystems 206 and 208 are directional, it is difficult, with one subsystem, to obtain target coverage close to blade 204 and yet still several hundred meters ahead of vehicle 200, because of the placement of blade 204. Therefore, in one embodiment, the two subsystems 206 and 208 are employed to form ranging system 18. Radar subsystem 208 is located just above blade 204 and is directed approximately straight ahead, in a horizontal plane. Radar subsystem 206 is located above blade 204 and is directed downwardly, such that targets can be detected closely proximate the front of blade 204. The radar subsystems 206 and 208 are each illustratively an array of aligned radar detectors which is continuously scanned by a processor such that radar targets can be detected, and their range, range rate and azimuth angle from the radar subsystem 206 or 208 can be estimated as well. In this way, information regarding the location of radar targets can be provided to controller 12 such that controller 12 can display an icon or other visual element representative of the target on the head up display 22. Of course, the icon can be opaque or transparent.

[0134] It should also be noted that, while the target illustrated in FIG. 3C is round, and could represent a pedestrian, a vehicle, or any other radar target, the icon representative of the target can be shaped in any desirable shape. In addition, bit maps can be placed on the head up display 22 which represent targets. Further, targets can be sized, colored or otherwise coded to indicate distance. In other words, if the targets are very close to vehicle 200, they can be large, begin to flash, or turn red. Similarly, if the targets are a long distance from vehicle 200, they can maintain a constant glow or halo.
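
One way such distance coding might be expressed is sketched below; the range thresholds, sizes and colors are arbitrary example choices, not values specified here.

```python
# Illustrative distance coding for target icons: closer targets are drawn
# larger and more urgently.  All thresholds and styles are example choices.

def icon_style(range_m):
    if range_m < 20.0:
        return {"size": 32, "color": "red", "flash": True}     # very close: large, flashing, red
    if range_m < 80.0:
        return {"size": 20, "color": "yellow", "flash": False}
    return {"size": 10, "color": "white", "flash": False}      # distant: constant glow or halo

print(icon_style(12.0))
print(icon_style(150.0))
```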

[0135] FIG. 7 is a flow diagram illustrating how ranging system 18 can be used, in combination with the remainder of the system, to verify operation of the subsystems. First, controller 12 receives a position signal. This is indicated by block 210. This is, illustratively, the signal from the vehicle location system 14. Controller 12 then receives a ranging signal, as indicated by block 212 in FIG. 7. This is the signal from ranging system 18 which is indicative of targets located within the ranging field of vehicle 200. Next, controller 12 queries geospatial database 16. This is indicated by block 214. In querying geospatial database 16, controller 12 verifies that targets, such as street signs, road barriers, etc., are in the proper places, as detected by the signal received from ranging system 18 in block 212. If the targets identified by the target signal correlate to expected targets in geospatial database 16, given the current position of the vehicle, then controller 12 determines that system 10 is operating properly. This is indicated by blocks 216 and 218. In view of this determination, controller 12 can provide an output to user interface 20 indicating that the system is healthy.

[0136] If, however, the detected targets do not correlate to expected targets in the geospatial database for the current vehicle position, then controller 12 determines that something is not operating correctly: the ranging system 18 may be malfunctioning, the vehicle positioning system may be malfunctioning, information retrieval from the geospatial database 16 may be failing, or the geospatial database 16 may have been corrupted, for example. In any case, controller 12 illustratively provides an output to user interface (UI) 20 indicating that a system problem exists. This is indicated by block 220. Therefore, while controller 12 may not be able to detect the exact type of error which is occurring, controller 12 can detect that an error is occurring and provide an indication to the operator to have the system checked or to have further diagnostics run.
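
A minimal sketch of this self-check is given below, assuming the same kind of distance-based correlation used earlier; the matching tolerance and the required fraction of matched targets are illustrative assumptions.

```python
# Sketch of the self-check of FIG. 7 (blocks 210-220): compare detected targets
# against targets expected from the geospatial database at the current position.
# The tolerance and the required match fraction are illustrative assumptions.

def system_healthy(detected, expected, tolerance=3.0, min_fraction=0.8):
    """Return True if enough of the expected targets (signs, barriers, ...)
    are matched by detected targets."""
    if not expected:
        return True   # nothing to verify against at this position
    matched = sum(
        1 for ex, ey in expected
        if any((dx - ex) ** 2 + (dy - ey) ** 2 <= tolerance ** 2
               for dx, dy in detected)
    )
    return matched / len(expected) >= min_fraction

if system_healthy(detected=[(10.1, 50.3)], expected=[(10.0, 50.0)]):
    print("system healthy")
else:
    print("system problem: run further diagnostics")
```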

[0137] It should also be noted that the present invention need not be provided only for the forward-looking field of view of the operator. Instead, the present system 10 can be implemented as a side-looking or rear-looking virtual mirror. In that instance, ranging system 18 includes radar detectors (or other similar devices) located on the sides or to the rear of vehicle 200. The transformation matrix would be adjusted to transform the view of the operator to the side-looking or rear-looking field of view, as appropriate.

[0138] Vehicles or objects which are sensed, but which are not part of the fixed geospatial landscape, are presented iconically based on the radar or other range sensing devices in ranging system 18. The fixed lane boundaries, of course, are also presented conformally to the driver. Fixed geospatial landmarks which may be relevant to the driver (such as the backs of road signs, special pavement markings, bridges being passed under, water towers, trees, etc.) can also be presented to the user, in the proper perspective. This gives the driver a sense of motion as well as cues to proper velocity.

[0139] One illustration of the present invention as both a forward looking driver assist device and one which assists in a rear view is illustrated in FIG. 8. A forward-looking field of view is illustrated by block 250 while the virtual rear view mirror is illustrated by block 252. It can be seen that the view is provided, just as the operator would see when looking in a traditional mirror. It should also be noted that the mirror may illustratively be virtually gimbaled along any axis (i.e., the image is rotated from side-to-side or top-to-bottom) in software such that the driver can change the angle of the mirror, just as the driver currently can mechanically, to accommodate different driver sizes, or to obtain a different view than is currently being represented by the mirror.

[0140] FIG. 9 gives another illustrative embodiment of a vehicle positioning system which provides vehicle position along the roadway. The system illustrated in FIG. 9 can, illustratively, be used as the auxiliary vehicle positioning system 30 illustrated in FIG. 2A. This can provide vehicle positioning information when, for example, the DGPS signal is lost, momentarily, for whatever reason. In the embodiment illustrated in FIG. 9, vehicle 200 includes an array of magnetic sensors 260. The road lane 262 is bounded by magnetic strips 264 which, illustratively, are formed of tape having magnetized portions 266 therein. Although a wide variety of such magnetic strips could be used, one illustrative embodiment is illustrated in U.S. Pat. No. 5,853,846 to the 3M Company of St. Paul, Minn. The magnetometers in array 260 are monitored such that the field strength sensed by each magnetometer is identified. Therefore, as the vehicle approaches strip 264 and begins to cross lane boundary 268, magnetometers 270 and 272 begin to provide a signal indicating a larger field strength.

[0141] Scanning the array of magnetometers is illustratively accomplished using a microprocessor which scans them quickly enough to detect even fairly high frequency changes in vehicle position toward or away from the magnetic elements in the marked lane boundaries. In this way, a measure of the vehicle's position in the lane can be obtained, even if the primary vehicle system is temporarily not working. Further, while FIG. 9 shows magnetometers mounted to the front of the vehicle, they can be mounted to the rear as well. This would allow an optional calculation of the vehicle's yaw angle relative to the magnetic strips.
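
A simple way to turn the scanned field strengths into a lateral position estimate is sketched below; the weighted-centroid rule, the sensor spacing and the example readings are illustrative assumptions rather than the method actually specified here.

```python
# Sketch of estimating lateral position from the magnetometer array: the
# sensors reading the strongest field are closest to the magnetic strip at the
# lane boundary.  Spacing, weighting and readings are illustrative assumptions.

def lateral_offset(field_strengths, sensor_spacing_m=0.25):
    """Estimate where the magnetic strip lies across the sensor array, as a
    signed offset (meters) from the array center, using a weighted centroid
    of the measured field strengths."""
    total = sum(field_strengths)
    if total == 0.0:
        return None   # strip not detected (vehicle well inside the lane)
    center = (len(field_strengths) - 1) / 2.0
    centroid = sum(i * f for i, f in enumerate(field_strengths)) / total
    return (centroid - center) * sensor_spacing_m

# Strong readings on the left-most sensors suggest drift toward the left boundary.
print(lateral_offset([9.0, 7.0, 2.0, 0.5, 0.2, 0.1]))   # negative: offset toward the left
```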

[0142] FIG. 10 is a block diagram of another embodiment of the present invention. All items are the same as those illustrated in FIG. 1 and are similarly numbered, and operate substantially the same way. However, rather than providing an output to display 22, controller 12 provides an output to neurostimulator 300. Neurostimulator 300 is a stimulating device which operates in a known manner to provide stimulation signals to the cortex to elicit image formation in the brain. The signal provided by controller 12 includes information as to eye perspective and image size and shape, thus enhancing the ability of neurostimulator 300 to properly stimulate the cortex in a meaningful way. Of course, as the person using the system moves and turns the head, the image stimulation will change accordingly.

[0143] It can thus be seen that the present invention provides a significant advancement in the art of mobility assist devices, particularly with respect to moving in conditions where the outward-looking field of view of the observer is partially or fully obstructed. In an earth-based motor vehicle environment, the present invention provides assistance not only in lane keeping, but also in collision avoidance, since the driver can use the system to steer around displayed obstacles. Of course, the present invention can also be used in many environments such as snow removal, mining or any other environment where airborne matter obscures vision. The invention can also be used in walking or driving in low light areas or at night, or through wooded or rocky areas where vision is obscured by the terrain. Further, the present invention can be used on ships or boats to, for example, guide the water-going vessel into port, through a canal, through locks and dams, or around rocks or other obstacles.

[0144] Of course, the present invention can also be used on non-motorized, earth-based vehicles such as bicycles, wheelchairs, by skiers or substantially any other vehicle. The present invention can also be used to aid blind or vision impaired persons.

[0145] Although the present invention has been described with reference to preferred embodiments, workers skilled in the art will recognize that changes may be made in form and detail without departing from the spirit and scope of the invention.

* * * * *

