Method Of Perspective Transformation In Scanned Raster Visual Display

Woycechowsky April 3, 1973

Patent Grant 3725563

U.S. patent number 3,725,563 [Application Number 05/211,372] was granted by the patent office on 1973-04-03 for method of perspective transformation in scanned raster visual display. This patent grant is currently assigned to The Singer Company. Invention is credited to Brian J. Woycechowsky.


United States Patent 3,725,563
Woycechowsky April 3, 1973
A Certificate of Correction was issued for this patent (see patent images).

METHOD OF PERSPECTIVE TRANSFORMATION IN SCANNED RASTER VISUAL DISPLAY

Abstract

A general method of providing perspective transformations in a visual display system having an image generated by a scanned raster device, such as a CRT or television projector, is shown. The television display is a window out of which an observer views a simulated picture of terrain. The line of sight from the observer passing through the instantaneous spot position on the window is used to find a ground intersection point; the location of this point on an image source, such as film, is found, and a video signal representing that image is generated by positioning the scan of an image pickup device to that point on the image source.


Inventors: Woycechowsky; Brian J. (Binghamton, NY)
Assignee: The Singer Company (Binghamton, NY)
Family ID: 22786663
Appl. No.: 05/211,372
Filed: December 23, 1971

Related U.S. Patent Documents

Application Number 134,238, filed Apr 15, 1971 (now abandoned)

Current U.S. Class: 434/43; 348/123; 708/2; 708/442
Current CPC Class: G09B 9/302 (20130101)
Current International Class: G09B 9/30 (20060101); G09B 9/02 (20060101); G09b 009/08; H04n 003/30; G01s 007/20
Field of Search: 178/DIG.20, DIG.35; 35/10.2, 12N; 235/186; 343/7.9

References Cited [Referenced By]

U.S. Patent Documents
3098929 July 1963 Kirchner
3261912 July 1966 Hemstreet
3060596 October 1962 Tucker et al.
Primary Examiner: Morrison; Malcolm A.
Assistant Examiner: Dildine, Jr.; R. Stephen

Parent Case Text



This invention relates to visual systems in general, and more particularly to a method and apparatus for raster shaping to alter a perspective point in a visual system, and is a continuation-in-part of application Ser. No. 134,238, filed Apr. 15, 1971 and now abandoned.
Claims



What is claimed is:

1. In a display system for presenting to an observer a desired simulated scene of the earth's surface as viewed from the observer's viewpoint, comprising an image source depicting a portion of the earth's surface as viewed from an image viewing point, at least part of which scene contains the same information as that contained in the desired scene; a device with a controllable first spot for scanning the image source to develop a video signal, a display located within the observer's field of view to form a simulated window through which the observer may view said simulated scene, said display being of the type formed by scanning a second spot across the display to form a raster and modulating said second spot with the video signal developed by said device, a method of driving said first spot to obtain an image of the desired scene in proper perspective on said display comprising:

a. determining the simulated point of intersection with the earth's surface of a line from the observer passing through the instantaneous position of the second scanning spot on the display window;

b. determining the location on the image source of the depiction thereon of said earth intersection point; and

c. positioning the first spot so that the video signal developed corresponds to said location on said image source.

2. The invention according to claim 1 wherein said image source is optically rotated to simulate rotation of said display window and further including the step of compensating for said rotation whereby the scanning first spot will generally follow a path more nearly approximating a normal raster.

3. The invention according to claim 1 wherein the steps of determining said earth intersection point and said location on said image source comprise:

a. determining a first set of direction cosines of the trainer body axes to a horizontal reference axis system;

b. using said direction cosines to compute the components of said observer's eye position with respect to the simulated center of gravity in said horizontal reference axis system;

c. determining a second set of direction cosines of the display window axes to said body axes;

d. determining from the scan waveforms of said second spot the direction of a line from the observer to the instantaneous position of said second spot in the window axes frame;

e. using said second set of direction cosines to determine the direction of a line from the origin of said window axes passing through said second spot with respect to said body axes;

f. using said first set of direction cosines to reference the direction of said line to the horizontal reference axes;

g. determining the location of said observer's eyepoint with respect to said image viewing point;

h. determining from said line referenced to said horizontal axes and the altitude and location of said observer's eyepoint with respect to said image viewing point the intersection point with the earth's surface referenced to said image viewing point with respect to the horizontal axes system;

i. determining the direction of a line from said image viewing point to said intersection point with respect to an axes system referenced to the image source;

j. using the direction of said line referenced to said image source axes to determine the location on said image source of the depiction of said earth intersection point.

4. The invention according to claim 3 wherein said display is a wide angle spherical display having a fixed frame of reference, only a relatively small portion of which is modulated by said video signal, the center of said portion is movable and may be defined by a latitude and longitude, said second set of direction cosines are the fixed display frame to body axes direction cosines; and the direction of said line in said display frame is obtained by adding the scan waveforms of said second spot to said latitude and longitude.

5. The invention according to claim 3 wherein the steps of determining said first set of direction cosines, using said first set of direction cosines to compute said observer's eye position, determining said second set of direction cosines, and determining the location of said observer's eyepoint with respect to said image viewing point are performed in a digital computer and the remaining steps performed by analog computing means.

6. The invention according to claim 5 wherein the results of the digital computation are combined into a third set of direction cosines so that the steps of determining the direction of a line through said second spot, referencing said spot to the horizontal reference axes, and determining the intersection point on the earth's surface are combined in the digital computer and said third set of direction cosines is then used in an analog computer to determine from said third set and the direction lines of said second spot with respect to the window axes, the direction of a line from said image viewing point.

7. The invention according to claim 5 wherein computations are done using instantaneous position information.

8. The invention according to claim 5 wherein computations in the analog computer are done by the integration of rate information developed in the digital computer and further including a step to periodically initialize the analog computer.

9. The invention according to claim 8 wherein said initialization step is done for each horizontal scan of said second spot.

10. The invention according to claim 3 wherein said first set of direction cosines and said second set of direction cosines are computed and combined in a digital computer to form a fourth set of direction cosines and said fourth set is then used to reference said direction lines of said second spot to the horizontal axes.

11. The invention according to claim 10 wherein the image is rolled optically and further including the step of determining by computation in the digital computer an image source axes rolled by the amount of image roll and using said axes in determining the direction of said line from said image viewing point.

12. In a display system for presenting to an observer a desired simulated scene of the earth's surface as viewed from the observer's viewpoint, comprising an image source depicting a portion of the earth's surface as viewed from an image viewing point, at least part of which scene contains the same information as that contained in the desired scene; a device with a controllable first spot for scanning the image source to develop a video signal, a display located within the observer's field of view to form a simulated window through which the observer may view said simulated scene, said display being of the type formed by scanning a second spot across the display to form a raster and modulating said second spot with the video signal developed by said device, apparatus for driving said first spot to obtain an image of the desired scene in proper perspective on said display comprising:

a. means for determining the simulated point of intersection with the earth's surface of a line from the observer passing through the instantaneous position of the second scanning spot on the display window;

b. means for determining the location on the image source of the depiction thereon of said earth intersection point; and

c. means for positioning the first spot so that the video signal developed corresponds to said location on said image source.

13. The invention according to claim 12 wherein said display is used in combination with a fixed-base vehicle trainer and said observer is the trainee.

14. The invention according to claim 13 wherein said trainer is an aircraft simulator.

15. The invention according to claim 12 wherein said image source is a frame of a motion picture and said image source viewing point is the point from which said frame was taken.

16. The invention according to claim 15 wherein said device is a TV camera on which said frame is imaged.

17. The invention according to claim 15 wherein said device is a flying spot scanner, pickup photomultiplier tube and associated optics and wherein said device is arranged to scan said frame.

18. The invention according to claim 12 wherein said image source is the image obtained from an optical probe viewing a model, said model is a portion of the earth's surface and said device is a TV camera on which said image is focused.

19. The invention according to claim 12 wherein said image source is an orthophoto and said device is a flying spot scanner with associated pickup and optics arranged to scan said orthophoto.
Description



Visual systems for use in aircraft simulators and other types of trainers have gained widespread use due to the increased cost of training in an actual aircraft or other vehicle or device. In general, four basic types of visual systems have been used, one of which is a camera model system in which a probe containing a TV camera is moved over a scale terrain model in accordance with computed attitude and position of the simulator. The resulting image is displayed to the trainee with a TV projector or CRT.

A second type of system is the film-based system in which a predetermined path is flown by an aircraft and a motion picture recorded. The motion picture is then shown to the trainee as he "flies" the same path. Deviations may be simulated by optical distortion as is shown in patents granted to H. S. Hemstreet such as U.S. Pat. No. 3,233,508, granted on Feb. 8, 1966, and U.S. Pat. No. 3,261,912, granted on July 19, 1966. Also disclosed therein is a variation of the system of the present invention in which the film image is viewed by a TV camera and the resulting image projected via TV projector or CRT. Distortion in that case is accomplished by raster shaping.

A third type of system is a scan-transparency system wherein an image is generated by scanning a transparency containing orthophotographic information. The information generated is displayed via TV as in the previous example. Such a system is shown in U.S. Pat. No. 3,439,105, granted to W. C. Ebeling et al. on Apr. 15, 1969.

A fourth type of system is a digital image generation system. In such a system, image information is stored in a computer which selects the desired information for presentation on a TV-type display.

A variation of the film-based system viewed by a TV camera is a film-based system scanned by a flying spot scanner to generate an image. The film-based system and the scan-transparency system have an important aspect in common: the recorded information on them is from a specific viewpoint. To produce a scene as it would appear if viewed from another viewpoint requires raster shaping. Although the camera model and digital systems do not have this restriction, there may be cases where it is desired to cause a change in viewpoint by raster shaping rather than by moving the camera probe or reconstructing the digital image. For example, in the former case problems arise as the probe gets close to the model; in the latter, construction of images uses considerable computer time.

The present invention provides a system which may be used for raster shaping in any visual system where it is desired to transform an image containing information as viewed from one viewpoint to an image which appears as if viewed from another viewpoint.

It is the object of this invention to provide a general system which controls the shape of a raster in a visual simulation system such that a desired perspective change is achieved.

Other objects of the invention will in part be obvious and will in part appear hereinafter.

The invention accordingly comprises the several steps and the relation of one or more of such steps with respect to each of the others, and the apparatus embodying features of construction, combinations of elements and arrangement of parts which are adapted to effect such steps, all as exemplified in the following detailed disclosure, and the scope of the invention will be indicated in the claims.

For a fuller understanding of the nature and objects of the invention reference should be had to the following detailed descriptions taken in connection with the accompanying drawings, in which:

FIG. 1 is a block diagram of a preferred embodiment of the invention in combination with an aircraft simulator;

FIG. 2 is a flow diagram of a preferred set of equations for use with the invention;

FIG. 3 is a perspective view of the relationship between an observer's view through the display and the view on the image source;

FIG. 4 is a schematic view of a first type of scanned raster device;

FIG. 5 is a schematic view of a second type of scanned raster device;

FIG. 6 is a block diagram of a preferred embodiment of a raster computer for implementing the equations of FIG. 2;

FIG. 7 is a block diagram of a modification to the embodiment of FIG. 6 for compensating for image roll in those systems where it is desirable to roll the image before raster shaping is introduced;

FIG. 8 is a block diagram showing a second form of the equations of FIG. 2;

FIG. 9 is a block diagram of the implementation of the equations of FIG. 8;

FIG. 10 is a block diagram of the equations of FIG. 8 in rate rather than position form; and

FIG. 11 is a block diagram of a third form which the equations of FIG. 2 may take.

FIG. 1 is a basic representation of the systems with which the present invention may be used. Block 11 is an image source. It may be an image recorded on a frame of film, the image picked up by a probe in a camera model system, a digitally generated image, an orthophoto or other image. Block 13 is a scanned raster device. It may be a TV camera viewing image source 11 or a flying spot scanner device scanning image source 11 to produce a video signal.

Display 15 may be one or more TV projectors, CRTs, laser projectors or other similar devices capable of projecting a video signal. Raster computer 17 is the system of the present invention which shapes the raster of device 13 to obtain the desired perspective. Sync generator 19 provides sync commands to synchronize the scans of raster device 13 and display 15.

In each case the image presented by block 11 will represent a scene as it would appear from a predetermined viewpoint. If it is film, it will be as viewed from the location of the taking camera; if a probe image, it will depend on the probe position and attitude. Likewise, if an orthophoto, it will appear as a map view from a certain altitude, and if digitally generated, it will represent a view based on computer inputs. In each case, however, the viewpoint of the image is known.

Examining the balance of FIG. 1 will further show the problem to be solved by the present invention. Display 15 is in a position to be viewed by a trainee in simulator cockpit 21. This cockpit will contain controls and instruments duplicating those of the actual aircraft being simulated. Control movements will be supplied as inputs to computer 23 which will use these inputs in equations of motion to compute the aircraft state vector (position, attitude, velocity). From this computed data, outputs from computer 23 drive the instruments in cockpit 21 such as altimeter, airspeed, etc.

The state vector information of the aircraft is available in computer 23 and may be used by raster computer 17, along with the information concerning the viewpoint from which the image was made, in determining the proper raster shape. This viewpoint information is contained in block 11 and is provided to raster computer 17 and/or simulator computer 23. These two computers work together, as will be explained later.

The information may, for example, be recorded on the film and picked up by a device in block 11 in a film based system. In a camera model system the position and attitude of the probe will be available. In a scan transparency system, the scale of the orthophoto will be known; and in a computer generated image, the inputs used in constructing the image will be known. Thus, computers 17 and 23 have available the state vector of the simulated aircraft and the state vector of the image present in image source 11. This information will of course be constantly updated as the simulator "flies" and as the image changes due to film advancement, probe movement, etc.

A third type of information is used in the present invention. This is the instantaneous position of the scanning spot on the display as referenced to the eyepoint of the trainee. This information is known indirectly through sync generator 19, which controls the scanning of the spot on display 15. For an explanation of how the display raster may be made quite accurate, see U.S. application Ser. No. 130217 filed by R. F. H. McCoy et al. on Apr. 1, 1971 and assigned to the same assignee as the present invention.

In general terms, it is known from sync generator 19 when the sweep is started, and the characteristics of the sweep are known. From this information it is possible to compute the instantaneous spot position, as will be shown in more detail later.
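As a rough illustration of this timing computation, here is a Python sketch; the linear-sweep model and all names are assumptions for illustration, not taken from the patent:

```python
def spot_angles(t_in_line, line_index, az_start, az_rate, el_start, el_step):
    """Illustrative linear-sweep model of the display raster: the horizontal
    angle advances at a fixed rate within each line, and the vertical angle
    steps once per line.  Real sweep waveforms are fixed by the display
    hardware and synchronized by sync generator 19."""
    psi_w = az_start + az_rate * t_in_line      # horizontal sweep angle
    theta_w = el_start + el_step * line_index   # vertical (line) angle
    return psi_w, theta_w
```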

Using these items of information, i.e., the aircraft state vector, the image source state vector, and the instantaneous spot position, along with the relationship between the aircraft body axes and the display axes, it is possible to compute the intersection with the ground of a line from the pilot's nominal eyepoint passing through the instantaneous spot position, to then determine where (or whether) that point is depicted on the image source, and then to position the scan of device 13 to that spot.

FIG. 2 shows a flow diagram of the computations. From the state vector of the simulated aircraft, the rotation of the aircraft with respect to a horizontal frame of reference is known. These rotations are θ_s, the simulated pitch angle; φ_s, the simulated roll angle; and ψ_s, the simulated heading angle. From these angles, computer 23 of FIG. 1 may compute the sines and cosines of the angles; and from the sines and cosines, the direction cosines of the simulated aircraft body to ground reference axes. This computation is shown in block 25 of FIG. 2 and results in the α_ij matrix.
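For readers who wish to experiment with the transformation chain, the sketches that follow use Python with numpy. The patent does not write out the individual α_ij terms, so a standard aerospace yaw-pitch-roll (Z-Y-X) rotation sequence is assumed here:

```python
import numpy as np

def euler_to_dcm(psi, theta, phi):
    """Direction cosine matrix (the alpha_ij of block 25) taking body-axis
    vectors into the horizontal reference frame, assuming a conventional
    Z-Y-X (heading-pitch-roll) rotation sequence; angles in radians."""
    cps, sps = np.cos(psi), np.sin(psi)
    cth, sth = np.cos(theta), np.sin(theta)
    cph, sph = np.cos(phi), np.sin(phi)
    Rz = np.array([[cps, -sps, 0.0], [sps, cps, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cth, 0.0, sth], [0.0, 1.0, 0.0], [-sth, 0.0, cth]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cph, -sph], [0.0, sph, cph]])
    return Rz @ Ry @ Rx  # body axes -> horizontal reference axes
```

The ω_ij matrix described next can be built with the same function from the window angles.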

Computer 23 may also compute the direction cosines of the window axes referenced to the body axes, ω_ij, from ψ_W/B, the window heading with respect to the body axes; θ_W/B, the window pitch with respect to the body axes; and φ_W/B, the window roll with respect to the body axes. The computation required to evaluate the ω_ij corresponds to the α_ij computation shown in block 25. If the window axes are fixed with respect to the body axes, the ω_ij are constant and therefore need not be continuously computed. The evaluation of the ω_ij is indicated in block 27.

In general, the simulated eyepoint is located some distance away from the simulated center of gravity. In situations where the eyepoint displacement is significant (e.g., takeoff and landing situations for transport aircraft), the eyepoint displacement with respect to the center of gravity must be taken into account. The components of eyepoint displacement with respect to the center of gravity are referenced to the horizontal frame of reference by multiplying the body axes coordinates of the eyepoint (x_BEP, y_BEP, z_BEP) by the α_ij matrix. The evaluation of the horizontal frame components of the eyepoint with respect to the center of gravity (x_EP, y_EP, z_EP) is shown in block 29. Eyepoint altitude with respect to the horizontal plane of reference, h_EP, is also computed in block 29 by subtracting z_EP from the altitude of the simulated aircraft, h_s.

Horizontal frame of reference components of eyepoint position relative to image position (Δx and Δy) are found by respectively adding x_EP and y_EP to the horizontal frame components of the simulated aircraft's center of gravity (x_s and y_s) and then subtracting the image position coordinates (x_F and y_F). This computation is shown in block 31 of FIG. 2.
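A minimal sketch of blocks 29 and 31, reusing euler_to_dcm from above (argument names are illustrative, not the patent's):

```python
import numpy as np

def eyepoint_geometry(alpha, r_bep, h_s, x_s, y_s, x_f, y_f):
    """Blocks 29 and 31 of FIG. 2: rotate the body-frame eyepoint offset
    (x_BEP, y_BEP, z_BEP) into the horizontal frame, derive the eyepoint
    altitude, and form the eyepoint displacement from the image position."""
    x_ep, y_ep, z_ep = alpha @ np.asarray(r_bep)  # block 29
    h_ep = h_s - z_ep                             # eyepoint altitude
    dx = x_s + x_ep - x_f                         # block 31: delta-x
    dy = y_s + y_ep - y_f                         # block 31: delta-y
    return h_ep, dx, dy
```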

The altitude associated with a frame of film (h_F), or with a probe in a camera model system, etc., and the image attitude, represented by ψ_F, θ_F, and φ_F, are provided directly to the raster computer 17 of FIG. 1 by the image source 11.

The remainder of the computations must be done in raster computer 17 of FIG. 1, which is an analog computer, due to the fact that computations are being done for an instantaneous spot position. The angles ψ_W and θ_W, representing the coordinates of the instantaneous spot position as viewed from the pilot's eyepoint, are generated in a manner to be described later. For a rectangular window located a unit distance from a nominal pilot's eyepoint, the window axes coordinates of the instantaneous spot position are 1, tan ψ_W, tan θ_W. These quantities are transformed to the body axes in block 33 through the use of the ω_ij matrix to obtain the direction lines of a line passing through the instantaneous spot position referenced to the body frame (e_1, e_2, e_3). If the screen is spherical rather than planar, this line is defined by the direction cosines cos ψ_W cos θ_W, sin ψ_W cos θ_W, and sin θ_W, where ψ_W and θ_W are the respective window-referenced longitude and latitude of the instantaneous spot position. These direction lines must be transformed to the horizontal reference system. This is done by multiplying them by the α_ij matrix in block 35. The ω_ij matrix and α_ij matrix may be combined to form the window axes to horizontal axes matrix prior to entering the analog computer, in which case blocks 33 and 35 would be combined.
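A sketch of blocks 33 and 35 under the same assumptions, where omega and alpha are the direction cosine matrices above:

```python
import numpy as np

def sight_direction(alpha, omega, psi_w, theta_w, spherical=False):
    """Blocks 33 and 35 of FIG. 2: direction components (d1, d2, d3) of the
    line from the eyepoint through the instantaneous spot, expressed in the
    horizontal reference frame."""
    if spherical:
        g = np.array([np.cos(psi_w) * np.cos(theta_w),
                      np.sin(psi_w) * np.cos(theta_w),
                      np.sin(theta_w)])
    else:
        # planar window at unit distance from the nominal eyepoint
        g = np.array([1.0, np.tan(psi_w), np.tan(theta_w)])
    e = omega @ g        # window axes -> body axes (block 33)
    return alpha @ e     # body axes -> horizontal frame (block 35)
```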

Referring now to FIG. 3, point 37 is a fixed point on the ground which is the reference for x_F, y_F, and x_s, y_s. The X and Y position of the simulated eyepoint 39 with respect to the image axes 41, Δx and Δy, are also shown. Line 43 is the line passing through the instantaneous spot 45 on display face 47. Since the direction lines of line 43 and the eyepoint altitude have been obtained, it is now possible to find the horizontal components (h_EP·d_1/d_3 and h_EP·d_2/d_3) of line 43. This computation is done in block 51 of FIG. 2, where they are added to the horizontal components of the eyepoint with respect to the image position. The results of the computation done in block 51 of FIG. 2 are the horizontal components of the ground intersection point 49 of line 43 with respect to the image position. If the ground is assumed to be horizontal, the vertical component of the ground intersection point 49 with respect to the image position is the altitude of the image position, h_F. However, these components are multiplied by d_3 in block 51 in order to avoid divisions by d_3. It can be seen from FIG. 2 that when the image coordinates are obtained in block 59, the multiplications of these components by d_3 are cancelled.
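The d_3 scaling trick is easy to express in the same sketch form; note that no division occurs here:

```python
import numpy as np

def ground_point_times_d3(d, h_ep, dx, dy, h_f):
    """Block 51 of FIG. 2: components of the ground intersection point
    relative to the image position, each carried through pre-multiplied by
    d3; the common factor cancels later in block 59."""
    d1, d2, d3 = d
    return np.array([dx * d3 + h_ep * d1,   # x_HF * d3
                     dy * d3 + h_ep * d2,   # y_HF * d3
                     h_f * d3])             # z_HF * d3 (flat ground assumed)
```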

Now that the ground intersection point 49 of FIG. 3 is known, referenced to the image axes 41, it is only necessary to determine the image plane coordinates of the intersection of line 53 (the line from the image axis origin 41 to ground intersection point 49) and the image plane 55. This is done in blocks 57 and 59. Block 57 transforms x_HF·d_3, y_HF·d_3, and z_HF·d_3 into a frame having two of its axes in the image plane 55, using horizontal frame to image frame direction cosines. These direction cosines are defined in terms of the trigonometric functions of ψ_F, θ_F, and φ_F, just as the α_ij are made up of terms containing trigonometric functions of ψ_s, θ_s, and φ_s. The final step is shown in block 59. By similar triangles the Y and Z coordinates in the image plane, y_f and z_f, are found from x_F·d_3, y_F·d_3, and z_F·d_3. In a film system, f is the focal length of the taking camera, and hence block 59 shows the value of f multiplying y_F·d_3/x_F·d_3 and z_F·d_3/x_F·d_3. If the image is obtained from a camera viewing a model, f is the appropriate focal length of the probe optics. In a digitally generated image, this value will be stored, since it is used in the generation of the image.
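Blocks 57 and 59 then reduce to a rotation and two divisions. In this sketch F is the horizontal-to-image direction cosine matrix built from ψ_F, θ_F, and φ_F (e.g., the transpose of euler_to_dcm applied to those angles, under the convention assumed earlier):

```python
import numpy as np

def image_plane_coords(ground_times_d3, F, f):
    """Blocks 57 and 59 of FIG. 2: rotate the d3-scaled ground point into the
    image frame and project onto the image plane by similar triangles; the
    common d3 factor cancels in the divisions."""
    x_d3, y_d3, z_d3 = F @ np.asarray(ground_times_d3)
    y_f = f * y_d3 / x_d3   # block 59
    z_f = f * z_d3 / x_d3
    return y_f, z_f
```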

Knowing where the ground intersection point is located on the image source 11, it is only necessary to position the spot of scanning device 13 so that it intersects that point on the image. For example, if device 13 of FIG. 1 is a flying spot scanner and the image is on film, a system such as that of FIG. 4 would be used. Flying spot scanner 61 will have an electron gun 63 and horizontal and vertical deflection plates 65 (only the vertical plates are shown). Electrons emitted by gun 63 will be deflected by plates 65 and impinge on the face of the flying spot scanner, which is coated with phosphor. The light emitted by the phosphor surface will pass through film 67 and be collected by lens 69 to be imaged on photomultiplier tube 71, which provides a video signal to display 15 of FIG. 1. The relationship between the voltage on plates 65 and the resulting spot position is well known. Thus, it is only necessary to scale the values of y_f and z_f obtained in FIG. 2 so that the proper voltages are input to the plates.

Thus, as a spot moves across display 15 of FIG. 1, its associated instantaneous ground intersection point will be computed and used to find where that spot is on the film. This information will then be used to drive the flying spot scanner spot to that point, resulting in the proper ground point being displayed on display 15 for all points in time. The same system would be used if 67 were an orthophoto rather than a normal frame of movie film. The raster shape would differ, but the computation and driving of the spot would be the same.

In a camera model system, y_f and z_f are the positions on the camera tube of the instantaneous ground intersection point. Thus, it is only necessary to drive the scan on the camera tube to that point, which corresponds to the instantaneous line of sight associated with the display CRT's electron beam.

If the system is one where a TV camera is viewing an image, as shown in FIG. 5, one additional step is necessary. The image 73, which could be a projected film image, a computer generated image, or other image on a screen (or CRT), is imaged on camera tube 75 through lens 77. Since the position on image 73 is known, but not the position on tube 75, it is necessary to multiply y_f and z_f by the ratio of image to object distance in the system to obtain the values used in scanning tube 75.

FIG. 6 shows a typical embodiment of raster computer 17 of FIG. 1. Sweep generator 81 will have an input on line 83 from the sync generator 19 of FIG. 1 to synchronize it with the display 15. If the display is planar, as assumed for block 33 of FIG. 2, the sweeps generated represent tan θ_W and tan ψ_W. This may be done by generating a normal TV-type linear sweep since, with the distance to the center of the display fixed, the tangents of θ_W and ψ_W will correspond directly to the X and Y positions of the spot on the display. If a spherical display is involved, sines and cosines of θ_W and ψ_W and the direction cosines cos ψ_W cos θ_W, sin ψ_W cos θ_W, and sin θ_W must be generated. Apparatus to generate such scans is disclosed in U.S. application Ser. No. 108446 filed by T. Cwynar et al. on Jan. 21, 1971.

The outputs of sweep generator 81 are inputs to block 85, a transformation apparatus. This apparatus comprises three servos, each driving sine-cosine potentiometers. The three servos correspond to ψ_W/B, θ_W/B, and φ_W/B and are driven by inputs corresponding to these values from computer 23. The computation done in this block is equivalent to that of blocks 27 and 33 of FIG. 2 combined. The servo driven potentiometers are connected together to perform the required multiplications. A system which describes how such multiplications are performed is shown in U.S. Pat. No. 3,003,252 granted to E. G. Schwarm on Oct. 10, 1961.

The outputs from block 85 are inputs to a similar transformation block 87 which has servo inputs ψ_s, θ_s, and φ_s. This block will do the computations of the combined blocks 25 and 35 of FIG. 2. Two of the outputs of block 87, d_1 and d_2, are multiplied by h_EP, obtained from computer 23, in multipliers 89 and 91 respectively. Values of Δx, Δy and h_F, also obtained from computer 23, are respectively multiplied by the third output of block 87 (d_3) in multipliers 93, 95 and 97. (All multipliers may be Analog Devices Model 422J or their equivalent.)

In summing amplifier 99 the h_EP·d_1 output from multiplier 89 is added to the Δx·d_3 output of multiplier 93, and in summing amplifier 101 the h_EP·d_2 output of multiplier 91 is added to the Δy·d_3 output of multiplier 95. The h_F·d_3 output of multiplier 97 and the outputs of amplifiers 99 and 101 (h_EP·d_1 + Δx·d_3 and h_EP·d_2 + Δy·d_3) form the x_HF·d_3, y_HF·d_3 and z_HF·d_3 of block 51 of FIG. 2.

These three signals are inputs to block 103, another transformation block similar to blocks 85 and 87, wherein the computations of block 57 of FIG. 2 are performed, resulting in x_F·d_3, y_F·d_3 and z_F·d_3. The servo inputs from computer 23 in this case are ψ_F, θ_F, and φ_F. The y_F·d_3 and x_F·d_3 are provided as inputs to divider 105, and z_F·d_3 and x_F·d_3 to divider 107. By scaling using normal analog techniques, the constant f of block 59 of FIG. 2 may be included in this computation, thus causing dividers 105 and 107 to have respective outputs representing the y_f and z_f of block 59 of FIG. 2. These outputs are then used as inputs to scanned raster device 13 of FIG. 1. The dividers used may be constructed using the instructions given on the data sheet for Analog Devices Multiplier Model 422J published by Analog Devices of Norwood, Mass.

As shown in FIG. 6, the matrix multiplications are done using servo multipliers. It should be noted that the ω_ij matrix of block 27 of FIG. 2 and the α_ij matrix of block 25 may be multiplied in the simulator computer, in which case only one set of angles, and thus only one of blocks 85 or 87, would be required in the embodiment of FIG. 6. It is also possible to compute the required sines and cosines in computer 23 and perform the matrix multiplications using additional multipliers similar to blocks 89, 91, etc.

Various modifications may be made without departing from the principles of the invention. One such modification is shown in FIG. 7. Because of screen shape it is often desirable to roll the image optically. However, since the equations implicitly take roll into account, if optical roll is used, derotation in the raster computer is required.

Basically, the circuits of FIG. 7 perform the function of a resolver, transforming the coordinates y_f and z_f in one axis system to the coordinates y_C and z_C in an axis system rotated an angle φ_D from the original system. Values of sin φ_D and cos φ_D are obtained from computer 23, and the values −y_f sin φ_D, −y_f cos φ_D, +z_f sin φ_D and −z_f cos φ_D obtained in multipliers 111, 113, 115, and 117. In summing amplifier 119, y_C is found by adding z_f sin φ_D and y_f cos φ_D, and in amplifier 121, z_C is found by adding z_f cos φ_D and −y_f sin φ_D. (Signs are inverted through amplifiers 119 and 121.) In this manner optical roll, for example, is compensated for in the camera raster computer output.
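In sketch form the FIG. 7 derotation is an ordinary 2-D rotation (the amplifier sign inversions net out):

```python
import numpy as np

def derotate(y_f, z_f, phi_d):
    """FIG. 7 resolver function: rotate the film-plane coordinates through
    the optical roll angle phi_D to compensate for optical image roll."""
    c, s = np.cos(phi_d), np.sin(phi_d)
    y_c = y_f * c + z_f * s    # summing amplifier 119
    z_c = z_f * c - y_f * s    # summing amplifier 121
    return y_c, z_c
```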

An examination of FIG. 6 shows that a relatively large number of multiplications and transformations must be done in the raster computer. Each function performed contributes to the noise in the system with the analog multipliers causing the greatest problems because of internal noise generation. Thus, it is desirable to have as few functions performed in the raster computer as possible.

The only variables changing at a rate which requires the use of analog computations are ψ_W and θ_W. It was previously noted that blocks 33 and 35 may be combined by doing further computation in the digital computer. It is possible to go even further and combine not only blocks 33 and 35 but also 51 and 57, to end up with one matrix multiplication. Such an arrangement is shown in FIG. 8.

Only three blocks of computation are shown being done at fast computation rates in the analog raster computer. Sweep generator 81 provides the ψ_W and θ_W to block 123 where g_1, g_2 and g_3 are computed. The equations of block 33 of FIG. 2 are for a flat display, and tangent functions are used. In block 123 the equations for a spherical display are used. If block 123 were computing for a flat display, the equations would be g_1 = 1, g_2 = tan ψ_W and g_3 = tan θ_W. These quantities go to block 125 where A_C, B_C, and C_C are computed from the g_i's and π_ij's. These two computations replace all those shown in blocks 33, 35, 51, and 57 of FIG. 2. The π_ij's are found in the digital computer 23 using the quantities in the above mentioned blocks of FIG. 2. The final block 127 corresponds to block 59 of FIG. 2. The precise way of combining all the various transformations is not shown, as it will be well within the capability of those skilled in the art to derive the equations for the π_ij's.
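Notice that blocks 123, 125, and 127 together have the shape of a projective (homography-like) mapping: a fixed matrix applied to a direction vector, followed by a divide. A sketch, assuming the spherical-display g-terms and a 3×3 Π assembled digitally (the exact assembly of the π_ij's, and sign conventions such as FIG. 9's inversion of sin θ_W for g_3, are part of the derivation the patent leaves to the reader):

```python
import numpy as np

def raster_coords(Pi, psi_w, theta_w):
    """FIG. 8 pipeline: block 123 forms the g-vector, block 125 is a single
    matrix multiply with the precomputed pi_ij's, and block 127 performs the
    final divisions to give y_C and z_C."""
    g = np.array([np.cos(psi_w) * np.cos(theta_w),   # g1
                  np.sin(psi_w) * np.cos(theta_w),   # g2
                  np.sin(theta_w)])                  # g3
    A_c, B_c, C_c = Pi @ g
    return B_c / A_c, C_c / A_c   # y_C, z_C
```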

The implementation of these equations is shown in FIG. 9. Sweep generator 81 is the type previously described in connection with FIG. 6. In block 129 the g_i's are obtained, using the types of multipliers previously mentioned in describing FIG. 6 to obtain g_1 and g_2 and an operational amplifier to invert sin θ_W for g_3. Blocks 131 are multiplying digital-to-analog converters such as Model 2254 available from Data Device Corporation of Hicksville, N.Y. In the implementation of FIG. 6 the quantities developed by the computer 23 were required to be converted to analog quantities before being used. This resulted in any noise on the analog lines being further amplified by the analog multipliers. By using the digital signals directly as multiplying D/A inputs, significant noise reduction is possible. The multiplied π_ij·g_i quantities are summed in amplifiers 133 to obtain A_C, B_C, and C_C. The final outputs y_C and z_C are obtained by dividing B_C and C_C by A_C in block 135. (Basically the same computation as was done in blocks 105 and 107 of FIG. 6.)

It may be that the noise reduction of the systems of FIGS. 8 and 9 is not sufficient for some purposes. The equations for a system which uses the integration of rates are shown in FIG. 10. Since positions will be obtained using analog integrators, a filtering effect will result which should further reduce noise. The equations shown are essentially the rate equivalents of the position equations of FIG. 8.

Block 137, where the g_i's are computed, block 139, where A_C, B_C, and C_C are computed, and block 141, where y_C and z_C are computed, are the equivalents of blocks 123, 125 and 127 of FIG. 8. In addition, a block 142, wherein the rates ġ_i are computed for use in blocks 137 and 139, is required. And, as indicated, digital computer 23 computes both the π_ij's and their rates, the π̇_ij's. The final step, integration, which provides the filtering to reduce noise, is shown in block 143.

As with any integration, initial values are required. The method of obtaining these values is shown in blocks 145, 147 and 149 in the lower part of FIG. 10. The system is initialized for each horizontal line. Thus, in block 145, (g_i)'s are computed for a line beginning at a value of ψ_W = −30° (in a particular embodiment; in other embodiments another constant defining the azimuth of the starting position would be used). Thus g_i's for each line, based on the constant −30° and the θ_W associated with a given line, are computed. In block 147, the (A_C), (B_C), and (C_C) for these starting points are computed, and in block 149 the (y_C) and (z_C) are computed. These three blocks are the same as blocks 123, 125 and 127 of FIG. 8 except that, instead of computing continuous values, they compute only the initial starting point of each horizontal line.

Initialization might also be done only each field or frame if the integrators used are accurate enough. A line-by-line initialization, however, assures that each line will start at the same azimuth independent of integrator accuracy. It should also be noted that the initial values need not be computed in real time and may thus be precomputed and stored. A particular implementation of these equations is not shown, as the techniques of FIGS. 6 and 9, along with other well known analog methods, may be used in implementation, as will be recognized by those skilled in the art.
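A toy model of the FIG. 10 scheme, using forward-Euler integration (the analog integrators are continuous; the step model, names, and rates here are assumptions for illustration):

```python
import numpy as np

def scan_line_by_rates(Pi, Pi_dot, theta_w, psi0, psi_rate, n_steps, dt):
    """FIG. 10 scheme: A_C, B_C, C_C are initialized at the start of each
    horizontal line (blocks 145-149) and then advanced by integrating their
    rates, d(Pi g)/dt = Pi_dot g + Pi g_dot (blocks 137-143); the integration
    low-pass filters multiplier noise."""
    def g(psi):
        return np.array([np.cos(psi) * np.cos(theta_w),
                         np.sin(psi) * np.cos(theta_w),
                         np.sin(theta_w)])
    def g_dot(psi):  # time derivative of g for constant theta_w
        return psi_rate * np.array([-np.sin(psi) * np.cos(theta_w),
                                    np.cos(psi) * np.cos(theta_w),
                                    0.0])
    psi = psi0
    abc = Pi @ g(psi)                # per-line initialization (blocks 145-149)
    line = []
    for _ in range(n_steps):
        line.append((abc[1] / abc[0], abc[2] / abc[0]))  # y_C, z_C (block 141)
        abc = abc + (Pi_dot @ g(psi) + Pi @ g_dot(psi)) * dt
        psi += psi_rate * dt
    return line
```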

These last two sets of equations, although offering many advantages, have certain disadvantages in cost due to the large number of technically sophisticated components. Another set of equations, which provides a raster computer that is simpler and less noisy than that of FIG. 6, is shown in FIG. 11. This set of equations allows the type of servo multipliers described in connection with FIG. 6 to be used in the matrix multiplications. It will be recognized that the π_ij's used in the equations of FIGS. 8 and 10 do not lend themselves to use with servos, and thus multipliers were required.

In block 123 the g_i's are computed as before (in FIGS. 8 and 10). In block 151 the d_i's are computed in a manner similar to that done in block 33 of FIG. 2 (block 87 of FIG. 6). Here, in effect, the ω_ij's of block 33 and the α_ij's of block 35 of FIG. 2 have been combined into a matrix composed of ψ_AOI, θ_AOI, and φ_AOI terms. Block 153 is essentially the same as block 51 of FIG. 2. Additional digital computer computations have been used to provide X_R, Y_R, and H_R to eliminate some of the analog multiplications associated with block 51 of FIG. 2. Block 155 is the same as block 57 of FIG. 2 except that, instead of finding film image plane coordinates, the scanned raster coordinates are found directly. (This is also true in the equations of FIGS. 8 and 10.) The final step in block 157 corresponds directly to that of block 59 of FIG. 2, again with the exception that y_C and z_C rather than y_f and z_f are obtained. (The f subscript denotes film image plane coordinates and the C subscript scanned raster device coordinates.)

Implementation is essentially the same as that shown in FIG. 6. One of blocks 85 or 87 will be eliminated, since ψ_W/B, θ_W/B, φ_W/B, ψ_s, θ_s, and φ_s have been combined into ψ_AOI, θ_AOI, and φ_AOI. Multipliers 89 and 91 are eliminated, since d_1 and d_2 are added directly to the products of multipliers 93 and 95 (93, 95 and 97 will now have as inputs X_R, Y_R, and H_R respectively), and the final circuit output will be y_C and z_C rather than y_f and z_f, since the inputs to block 103 will be ψ_C, θ_C, and φ_C rather than ψ_F, θ_F, and φ_F.

The equations above assume that the relationship between the center of the window and the angles ψ_W and θ_W remains fixed. Such would be the case in a single fixed display window and in some cases where the center of the display (meaning here the imagery displayed) is allowed to move.

However, in certain types of systems the equations described above will have to be varied to achieve the result of always defining the line of sight from the observer's eye through the instantaneous spot position. For example, in the type of system described in application Ser. No. 66729, filed by R. F. H. McCoy on Aug. 25, 1970, wherein a total wide angle spherical display is made up of tiers of narrow angle displays, the display raster will generally be made to trace circles of latitude. The center of a high resolution image to be displayed is capable of being positioned anywhere on the display, and ψ_W and θ_W, which are associated with the high resolution image, define at each point in time latitude and longitude increments referenced to the fixed display frame. The ψ_W and θ_W will then define the spot position with respect to the center of the moving window. To reference ψ_W and θ_W to this fixed frame, it is then only necessary to add the longitude and latitude (of the center of the moving window) respectively to ψ_W and θ_W, and then take the sines and cosines of the resulting angular sums in order to find the direction cosines of the instantaneous line of sight.

At this point a more detailed explanation seems in order, particularly in view of the changes required in the equations of FIG. 8 and those following. In FIG. 8 et seq., where the g_i terms are computed, the ψ_W and θ_W would have to be changed to (ψ_W + ψ_O) and (θ_W + θ_O), where ψ_O and θ_O represent the respective longitude and latitude of the center of the moving window. In practice it has been found difficult to combine these angles and then take their sines and cosines. This difficulty may be overcome by using the well known trigonometric relationships for finding the sines and cosines of the sum of two angles. Doing this, however, requires that three additional g_i terms be computed.

The additional terms to be computed are:

g_4 = cos ψ_W sin θ_W

g_5 = sin ψ_W sin θ_W

g_6 = cos θ_W

These are then multiplied by the π_ij's (which must be appropriately altered so as to take the sines and cosines of ψ_O and θ_O properly into account) in block 125 of FIG. 8, to result in the following equations:

A_C = π_11 g_1 + π_12 g_2 + π_13 g_3 + π_14 g_4 + π_15 g_5 + π_16 g_6

B_C = π_21 g_1 + π_22 g_2 + π_23 g_3 + π_24 g_4 + π_25 g_5 + π_26 g_6

C_C = π_31 g_1 + π_32 g_2 + π_33 g_3 + π_34 g_4 + π_35 g_5 + π_36 g_6

These additional terms will of course require additional hardware computing elements which may be constructed in the same manner as shown in FIG. 9.
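In sketch form, the six-term version replaces the 3×3 Π of the earlier sketch with a 3×6 matrix (again, exactly how ψ_O and θ_O are folded into the extended π_ij's is left to the derivation):

```python
import numpy as np

def g_terms_six(psi_w, theta_w):
    """The six g-terms needed when the moving-window offsets psi_O and
    theta_O are absorbed into an extended pi-matrix via the angle-sum
    identities for sines and cosines."""
    cps, sps = np.cos(psi_w), np.sin(psi_w)
    cth, sth = np.cos(theta_w), np.sin(theta_w)
    return np.array([cps * cth,   # g1
                     sps * cth,   # g2
                     sth,         # g3
                     cps * sth,   # g4
                     sps * sth,   # g5
                     cth])        # g6

# A_C, B_C and C_C then follow from a single 3x6 multiply:
#   A_c, B_c, C_c = Pi6 @ g_terms_six(psi_w, theta_w)
```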

Thus, a general method, and a number of specific implementations of that method, for changing the apparent perspective of an image have been disclosed; the method is of general application in visual systems utilizing scanned raster devices. A general set of equations and a straightforward implementation were first shown, and then various improvements which result in increased efficiency and noise reduction were disclosed.

Although specific systems which are useful in flight simulators have been disclosed herein, the invention may be used in similar applications such as space simulators, ship simulators, driver trainers, etc.

* * * * *

