United States Patent 3,925,776
Swallow December 9, 1975

Display terminal system

Abstract

A system of display terminals each receiving from an image generator edges which commonly represent a selected two-dimensional projection of three-dimensional objects. Each edge is defined by its intended position on a two-dimensional screen and by attributes for a portion of the screen adjacent the edge. These attributes may include color, brightness, shading and the like. Each display terminal has an edge memory which stores a set of edges ordered by the y-coordinate of their tops and are partially x-ordered. The stored edges are decoded into scanline segments, each segment being the scanline interval between two adjacent edges intersecting that scanline. Each segment is defined by its intended position on the display screen and by its other attributes such as color brightness. The segments are subsequently decoded into video raster points, each point being defined by its position on the display screen and by attributes such as brightness and color. Since only edges are transmitted from the image generator to the terminals, as opposed to a video raster, a relatively low capacity communication link can be used. Since edges are buffered at each display terminal, rather than video raster data, the edge memories inside the terminals need not be excessively large. The conversion of edges to segments and of segments to video raster is fast and efficient, and allows flickerless display of complex color images.


Inventors: Swallow; Ronald J. (Upper Marlboro, MD)
Assignee: Research Corporation (New York, NY)
Family ID: 23835421
Appl. No.: 05/462,171
Filed: April 18, 1974

Current U.S. Class: 345/1.1; 375/240.25
Current CPC Class: G06F 3/153 (20130101); G09G 5/42 (20130101); G09G 5/02 (20130101)
Current International Class: G06F 3/153 (20060101); G09G 5/02 (20060101); G09G 5/42 (20060101); G06F 003/14 ()
Field of Search: 340/324A, 324AD; 178/DIG.6

References Cited [Referenced By]

U.S. Patent Documents
3214631 October 1965 Anderson
3480943 November 1969 Manber
3624633 November 1971 Hofstein
3778811 December 1973 Gicca et al.
Primary Examiner: Curtis; Marshall M.
Attorney, Agent or Firm: Cooper, Dunham, Clark, Griffin & Moran

Claims



I claim:

1. A system of display terminals, each for displaying a raster pattern along regular scanlines on a two-dimensional screen and each receiving edge definitions, each edge definition comprising signals defining a continuous, visible line on the screen and a plurality of selected visible attributes for a two-dimensional portion of the screen adjacent said continuous line, each display terminal comprising:

edge memory means for storing edge definitions and means for storing in the edge memory means a selected set of edge definitions ordered along at least one coordinate of the screen;

edge-to-segment decoder means for converting at least a selected subset of the edge definitions stored in the edge memory means to definitions of one-dimensional segments along each of a plurality of defined scanlines across the screen, each segment definition comprising signals defining the position and said attributes of a scanline portion adjacent the intersection of the scanline by an edge line, and

segment-to-video decoder means for converting the segment definition signals into video raster points on the screen, each video raster point being produced by an electrical signal defined by the attributes of the coinciding segment, to provide on the screen a video raster representation of at least a selected subset of the set of edges.

2. A system as in claim 1 wherein said attributes defined by an edge definition include brightness and brightness gradient for the screen portion adjacent the edge line, and said decoder means include means for converting said brightness and brightness gradient attributes of an edge into signals defining corresponding characteristics of corresponding segments and raster points on the screen.

3. A system as in claim 2 wherein the attributes defined by an edge definition include color for the screen portion adjacent the edge line, and said decoder means include means for converting the signals defining said color attributes into signals defining corresponding characteristics of corresponding segments and raster points on the screen.

4. A system as in claim 3 wherein the attributes defined by an edge definition include a linear brightness variation flag specifying linear brightness variation between the two ends of a segment disposed between two intersections of its scanline by an edge line, and the converting means include means for converting the signals defining said linear brightness variation signals into corresponding signals defining corresponding raster points to provide smooth shading of pictorial information displayed on the screen.

5. A system as in claim 4 wherein the set of edge definitions in the edge memory means comprises definitions of a plurality of blocks of edges, each block including all edges which have at least a portion within a selected span of one of the coordinates of the screen and wherein the edge-to-segment decoder means comprise means for converting into segment signals the signals defining said blocks of edges.

6. A system as in claim 5 wherein the edge definitions stored in the edge memory means are ordered along each of two coordinates of the screen and the edge-to-segment decoding means comprise means for converting the edge signals into segment signals in accordance with said order of the edges.

7. A system as in claim 1 wherein the attributes defined by an edge definition include color for the screen portion adjacent the edge line, and said decoder means include means for converting the signals defining said color attributes into signals defining corresponding characteristics of corresponding segments and raster points on the screen.

8. A system as in claim 1 wherein the attributes defined by an edge definition include linear brightness variation between the two ends of a segment disposed between two intersections of its scanline by an edge line, and the converting means include means for converting the signals defining said linear brightness variation signals into corresponding signals defining corresponding raster points to provide smooth shading of pictorial information displayed on the screen.

9. A system as in claim 1 wherein the set of edge definitions in the edge memory means comprises definitions of a plurality of blocks of edges, each block including all edges which have at least a portion within a selected span of one of the coordinates of the screen and wherein the edge-to-segment decoder means comprise means for converting into segment signals the signals defining said blocks of edges.

10. A system as in claim 1 wherein the edge definitions in the edge memory means are ordered along each of two different coordinates of the screen and the edge-to-segment decoding means comprise means for converting the edge signals into segment signals in accordance with said order of the edges.

11. A system as in claim 1 wherein: the edge memory means include a small memory section storing a subset of the set of edge definitions and a large memory section storing the remainder of the set of edge definitions; the edge-to-segment decoder means comprises means for converting to segments only the edge definitions stored in the small memory section; and the storing means comprises means for replacing the subset of edge definitions stored in the small memory section with a subset of edge definitions obtained from the large memory section after all edge definitions in the small memory section have been converted to segments.

12. A system as in claim 1 wherein the set of edge definitions in the edge memory means comprises definitions of a plurality of blocks of edges, each block including all edges which have at least a portion within a selected span of one of the coordinates of the screen, and wherein the storing means comprises means for storing in the small memory section the definitions of a complete block of edges.

13. A system as in claim 1 wherein the edge memory means comprises N banks and the storing means comprises means for storing the definitions of edges 1, N + 1, 2N + 1 . . . in the first bank, for storing the definitions of edges 2, N + 2, 2N + 2 . . . in the second bank, . . . and for storing the definitions of edges N, 2N, 3N, . . . of said set of definitions of edges in the N-th bank.

14. A system as in claim 1 wherein each edge definition comprises signals defining the x, y screen coordinates of one end of the edge line, the height and slope of the edge line, the brightness at one end of the edge line and the color for said adjacent portion of the screen, and said decoder means include means for converting the last recited signals into corresponding characteristics of corresponding segments and raster points on the screen.

15. A system as in claim 14 wherein said adjacent portion of the screen defined by an edge definition is the screen portion on one side of the edge line extending across the screen up to any other edge lines and said decoder means include means for converting the last-recited signals into corresponding characteristics of corresponding segments and raster points on the last-recited portions of the screen.

16. A system as in claim 4 wherein the edge definition includes signals defining a gradient of brightness along the edge line and the decoding means includes means for converting the last-recited signals into corresponding characteristics of corresponding segments and raster points on the screen.

17. A system as in claim 16 wherein the edge definition includes signals defining a linear brightness variation for said screen portion adjacent the edge line and the decoder means include means for converting the last-recited signals into corresponding characteristics of corresponding segments and raster points on the screen.

18. A system as in claim 1 wherein: the edge memory means comprise a plurality of banks and the storing means comprise means for storing in each bank a subset of said set of definitions of edges; each bank comprises a small section and a large section and the storing means comprises means for storing in the small section a sub-subset of the subset of definitions of edges stored in the bank and for storing the remaining definitions of edges of the subset in the large section; and the edge-to-segment decoder means comprises means for converting to segments only the definition of edges stored in the small sections of the banks.

19. A system as in claim 18 wherein: the set of edges is divided into blocks, each block including all edges which have at least a portion within a selected span of one of the screen coordinates; and the storing means comprises means for storing the definitions of a single block of edges distributed among the small sections of the memory banks, said small sections containing only the definitions of said single block of edges, and for storing the definitions of any other blocks of edges in the large memory sections.

20. A display terminal, for displaying a raster pattern along regular scanlines on a screen, comprising:

edge memory means;

means for storing in the memory means a set of edge definitions, each edge definition comprising signals defining a continuous edge position on the screen and a set of attributes for an associated continuous, two-dimensional portion of the screen; and

means for converting at least a selected subset of the set of the edge definitions into signals defining the points of a video raster displayed on the screen, each raster point reflecting said attributes for the screen portion with which it coincides.

21. A display terminal as in claim 20 wherein the converting means comprise:

edge-to-segment decoding means for converting the edge definitions of said subset into signals defining continuous segments of scanlines across said screen, each segment defined by a position on the screen and by a set of the attributes for the portion of the screen with which it coincides; and

segment-to-video decoder means for converting each of said segments into signals defining the points of said video raster.

22. A display terminal as in claim 21 wherein the means for storing the set of definitions of edges in the memory means comprises means for ordering the stored definitions of edges along a selected first coordinate of the screen.

23. A display terminal as in claim 22 wherein the storing means comprises means for storing the set of definitions of edges in the memory means divided into a plurality of blocks, each block including the definitions of all edges which have a portion within a selected span of said first screen coordinate.

24. A display terminal as in claim 23 wherein the storing means comprises means for storing the definitions of edges in the memory means ordered along a selected second coordinate of the screen.

25. A display terminal as in claim 24 wherein the position of an edge and the associated continuous portion of the screen are adjacent.

26. A display terminal as in claim 25 wherein the attributes for the continuous portion of the screen associated with an edge include the color of said screen portion.

27. A display terminal as in claim 26 wherein said set of attributes include brightness and brightness gradient for the edge position.

28. A display terminal as in claim 20 wherein the set of edges stored in the memory means are ordered along a selected first coordinate of the screen.

29. A display terminal as in claim 20 wherein the set of edges stored in the memory means are divided into a plurality of blocks, each block including all edges which have a portion within a selected span of a first screen coordinate.

30. A display terminal as in claim 20 wherein the storing means comprises means for ordering the definitions of edges stored in the memory means along each of two coordinates of the screen.

31. A display terminal as in claim 20 wherein the position of an edge and of the associated continuous portion of the screen are adjacent.

32. A display terminal as in claim 20 wherein the attributes for the continuous portion of the screen associated with an edge include signals defining the color of said screen portion and the decoder means include means converting the last-recited signals into corresponding characteristics of corresponding segments and raster points on the screen.

33. A display terminal as in claim 20 wherein said set of attributes include signals defining the brightness and brightness gradient for the edge position and the decoder means include means converting the last-recited signals into corresponding characteristics of corresponding segments and raster points on the screen.

34. A display terminal system receiving definitions of edges from a single image generator and comprising a plurality of display terminals, each comprising:

a display screen;

edge memory means for storing definitions of edges;

means for storing in the memory means a set of definitions of edges, each edge definition comprising signals defining a continuous edge line position on the screen and a set of attributes for an associated continuous portion of the screen;

means for converting at least a selected subset of the set of definitions of edges into a video raster, each raster point defined by signals reflecting said attributes for the screen portion with which it coincides; and

means for displaying said video raster on the display screen.

35. A method of operating a display terminal for displaying a raster pattern along regular scanlines on a screen comprising the steps of:

storing in the display terminal a selected set of definitions of edges ordered along at least one coordinate of the screen, each stored edge definition comprising signals defining a visible continuous line on the screen and selected visible attributes of a two-dimensional portion of the screen adjacent said continuous line;

combining the signals comprising said selected set of definitions of edges to generate definitions of segments along each of a plurality of the scanlines on the screen, each segment definition comprising signals positioning a portion of a scanline adjacent an edge and causing selected brightness and other characteristics of said scanline portion; and

combining the signals defining said segments to derive a sequence of video raster points on the screen to display thereby on the screen a video raster pattern representing at least a selected subset of the edges and of the adjacent two-dimensional portions of the screen.

36. A method of operating a display terminal for displaying a raster pattern along regular scanlines on a screen comprising the steps of:

storing in a memory means a set of definitions of edges, each edge definition comprising signals defining a continuous visible edge position on the screen and a set of attributes for an associated continuous, two-dimensional portion of the screen; and

combining said edges to generate a sequence of video raster signals for causing selected video raster points on the screen to form a visible pattern corresponding to said visible edges and adjacent screen portions.
Description



BACKGROUND OF THE INVENTION

The invention is in the field of computer graphics and relates specifically to a system of display terminals for displaying graphical information received from a central image generator.

In computer graphics, a representation of a generally three-dimensional world is stored in a memory, and selected two-dimensional projections of selected portions of it are displayed on a two-dimensional display surface such as a television screen. The stored information and the display surface may take a variety of forms. A simple example of a system of this type is a video tape serving as a memory and storing a selected set of two-dimensional views of a three-dimensional world, combined with a television set. A more complex example is a specially programmed digital computer system which stores a three-dimensional object such as a cube by identifying the coordinates of its edges, generates selected two-dimensional projections of the three-dimensional object and the coordinates of the lines making up the projection, and either transmits the coordinates of the lines representing the projection to a stroke type display device or converts the line coordinates of the projection to a set of raster points and transmits that set to a raster display device such as a television receiver.

A survey of computer graphic techniques may be found in Sutherland, I. E., A Characterization of Ten Hidden Surface Algorithms, ACM Computing Surveys, Volume 6, No. 1, March 1974, pages 1-55; in the references listed at page 45 of that article; and particularly in Newman, W. M. et al., Principles of Interactive Computer Graphics, McGraw-Hill, 1973.

A major factor which has prevented widespread use of computer graphics has been the cost of storing great amounts of information and of transmitting information to the display devices at a high rate. For example, if the purpose of a system is to show on a display surface any selected view of a three-dimensional object such as a cube, it is theoretically possible to store a nearly infinite number of views on a video tape and to find and display a selected one on a television screen, but this would be prohibitively expensive. A great reduction in the amount of stored information results when a three-dimensional object is stored in computer memory not as it looks in a specific two-dimensional view but as it actually is in three-dimensions, e.g., by storing the three-dimensional coordinates of the apices or the edges of a cube. This three-dimensional information can be computer-processed to generate almost any perspective view of the three-dimensional object, to thereby reduce storage costs as compared to a video tape storage. The remaining question is then how to transmit a representation of the two dimensional view of the object to a display device and how to display it. The answer to this question must take into account and reconcile a number of conflicting factors, such as: the desirability of a low transmission rate so as to avoid expensive communication links between the central computer and the display device, the desirability of fast operation of the display device so as to be able to show a complex image without flicker and the desirability of having minimal storage at the display device so as to minimize cost.

In various approaches to reconciling these conflicting factors, some prior art systems use stroke-type display devices (IBM 2250 and Tektronix) which form lines by random positioning and stroking of the CRT beam, others use video gating over a TV raster (Anagraph) and still others use random point plotting (Plasma). All these types of display devices refresh the display either by repeated image generation from encoded form such as lines (IBM 2250), by reading a video storage device such as a video disc (Anagraph) or shift register memory (TICCIT), by reading a storage tube (Tektronix) or by use of a special memory display panel (Plasma).

While the stroke-type display devices afford a relatively low transmission rate between the image generator and the display device, since line identification data is transmitted and not video raster, these devices cannot display complex images because of limited stroking rates. While the other types of display devices discussed immediately above can theoretically display complex images, they are severely limited in resolution because of the high cost of locally storing each point of an image and because of the required high transmission rate.

There are only two systems known to applicant which use point display devices and are capable of displaying area graphics where arbitrary shapes or surfaces can be simulated: the system developed by the Evans and Sutherland Computer Corporation and the system developed by G.E. for pilot simulator-like application. Both are systems in which the terminal display device includes a special purpose image generator computer. Both systems generate colored perspective views from three-dimensional descriptions of objects, but both are limited to one display device since the output of the image generator is a color video raster to a color monitor. Modification of these two systems by the use of a blackboard memory in the display monitor, in order to share the image generator for several users, is uneconomical because of the high cost of buffering colored video raster data. Additionally, both of these two systems are limited in speed, only one picture per 1/30th second being possible.

A need remained therefore, prior to this invention, for a graphics display system using a relatively low transmission rate between the image generator and the display terminals, capable of supporting a number of display terminals by the same image generator, needing minimal storage of data at the display terminal and capable of producing flickerless display of complex images, all this at a relatively low price per terminal.

SUMMARY OF THE INVENTION

The invention is in the field of computer graphics and relates specifically to a system of display terminals associated with an image generator providing information such as definitions of selected two-dimensional projections of a three-dimensional world.

The invented system meets each of the identified design goals of relatively low transmission rate between the image generator and the display terminals, of relatively small storage capacity at the display terminals and of flickerless display of complex images by (1) transmitting to the display terminals pictorial information coded as visible two-dimensional edges, each identified by its intended position on a two-dimensional display screen and by desired attributes of a portion of the display screen associated with the edge and extending adjacent the edge, such as color, brightness, shading and the like, thus affording a relatively low transmission rate between the image generator and the display terminals, (2) buffering the edges at the display terminal, rather than buffering video raster information, so as to need relatively low storage capacity at the terminals, and (3) providing fast but relatively inexpensive translation of edges to video raster, so that complex images can be displayed with no flicker and at low cost.

Each display terminal includes an edge memory for storing a selected set of edges which are y-ordered and x-ordered in a specific manner to facilitate their conversion to video raster. The edges may be divided into blocks, each block including all edges that have portions within a selected span of the screen y-coordinate. Then, the memory may be divided into a small section, of sufficient size to store a complete block of edges, and a large section storing the remaining blocks of edges. In edge decoding, only the small section of the memory has to be accessed for any given scanline for as long as that scanline belongs to the y span of the block currently residing in the small section of the memory.

Each edge defines a continuous line on the screen and selected attributes for an associated portion of the screen. The line is continuous, as opposed to a line defined by a series of raster points, and it is a straight line in the disclosed embodiment of the invention, although the invented principles would apply to a curved line as well. The screen portion associated with an edge is the screen area adjacent the edge and extending to the right of it up to any other edges, but the invented principles apply to other relationships between an edge line and an associated screen area, such as a screen portion to the left of an edge line. The screen portion associated with an edge is a continuous area, rather than an area defined by a collection of raster points.

Additionally, each edge defines other attributes of the edge line and of the associated screen portion, such as brightness, color, brightness gradient and brightness variation. These attributes provide the invented system with the capability of displaying realistic color and realistic shading so that three-dimensional objects can be simulated on the two-dimensional display screen. For example, the edge defines the desired brightness at the top of the edge line, the color to the right of the edge line, the gradient of brightness downward along the edge line, a brightness variation flag which if set means that the color of a screen portion between two edges will vary linearly, and a flag which if set means that this is the first edge of a set.

The conversion of edges to video raster takes place in two steps: first the edges are converted to scanline segments, each segment being the portion of a scanline disposed between the intersections of that scanline by edge lines or between an edge line and the end of the screen, each segment being identified by its position on the screen as well as by the attributes of the screen portion with which it coincides; and second, the scanline segments are converted to a video raster. The conversion of edges to segments takes place in a pipeline decoder whose input is the word defining an edge and whose output is a word defining the position of the segment and the attributes of that segment. Several of the segment defining words output from the edge-to-segment decoder are buffered in a ping-pong buffer, and the buffer output is converted into a video raster which is in color, at a resolution several times that of a conventional TV receiver, and may include smooth shading for more realistic depiction of three-dimensional information on a two-dimensional screen. The output of the segment-to-video decoder is applied to a commercial color TV receiver modified in accordance with the invention to operate at a higher resolution and to accept the specifically formatted output of the segment-to-video decoder.

A major aspect of the invented display terminals is that they receive and store edges, rather than video raster, but display a video raster rather than strokes. By this feature, the invented display terminal combines the desirable goals of relatively low need for storage capacity at the display terminal and the capability of displaying flickerless complex images. An additional major aspect of the invented display terminals is that the edges stored at the display terminal are y-ordered and x-ordered, so that no sorting needs to be done as the edges are converted to video raster. Still another major aspect of the invention is the blocking of edges, which reduces the time necessary for cycling the edge memory. Still another major aspect of the invention is that it allows for flexibility in defining edges (for example, it is possible under the principles of the invention to define curved edge lines and to define associated screen areas which bear a different relationship to the defined edge line). Still another major aspect of the invention is the provision for smooth shading of the displayed information, to provide a more realistic appearance of simulated three-dimensional objects. Other aspects of the invention are discussed in the detailed description of the invention and are brought out in the appended claims.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a block diagram of the invented system of display terminals, as connected to a central image generator through a communication controller.

FIG. 2 illustrates an edge definition word.

FIG. 3 illustrates the manner of defining an edge by its position on a display screen and by the color and other attributes to the right of the edge until a next edge is encountered, as used in this invention.

FIG. 4 is a block diagram of an edge memory forming a part of each display terminal.

FIG. 5 is a more detailed block diagram of one bank of the memory shown in FIG. 4.

FIG. 6 illustrates the sorting of a set of edges into blocks of edges in accordance with the invention.

FIGS. 7, 8, 9, and 10 illustrate the contents of the memory shown in FIG. 4 during different stages of the operation thereof.

FIG. 11 is a generalized block diagram of an edge-to-segment decoder.

FIGS. 12, 13, 14 and 15 are more detailed block diagrams of portions of the decoder shown in FIG. 11.

FIG. 16 is a block diagram of a segment buffer receiving the output of the edge-to-segment decoder of FIG. 11.

FIG. 17 is a partly block and partly circuit diagram of a segment-to-video decoder.

FIG. 18 is a block diagram of a TV receiver modified in accordance with the invention.

FIG. 19 is a circuit diagram of a portion of an antilog amplifier as used in the embodiment of FIG. 18.

FIGS. 20 and 21 are timing diagrams illustrating the operation of the invention.

DETAILED DESCRIPTION

Overview -- FIGS. 1, 2, and 3

Referring to FIG. 1, an image generator 10 generates edges each defining the intended location on a two-dimensional display screen of a line that typically represents a portion of the two-dimensional projection of a three-dimensional object, as well as desired attributes of a portion of the display screen associated with the edge and extending adjacent the edge.

Referring to FIGS. 2 and 3, each edge may be defined by a 76-bit word <X,Y,H,S,B,C,G,F,E> (a data-structure sketch of this word follows the list below) where:

a. X = x.sup.t, an 11 bit x-coordinate of the edge's top;

b. Y = y.sup.t, an 11 bit y-coordinate of the edge's top;

c. H = y.sup.b - y.sup.t, an 11 bit height, where y.sup.b is an 11 bit y-coordinate of the edge's bottom;

d. S = (x.sup.b - x.sup.t) . 2.sup.-.sup.M.sup.+11 /(y.sup.b - y.sup.t), a 13 bit signed integer (12 bit integer plus 1 bit sign) representing the slope where x.sup.b is an 11 bit x-coordinate of the edge's bottom and where M is the number of consecutive leading zeros from the left in the 11 bit H up to 10 zeros;

e. B = b.sup.t, where b.sup.t is the 7 bit brightness (log) at the top of the edge;

f. C = c, color to the right made up of a 4 bit red, a 4 bit blue, and a 4 bit green component (each a log function);

g. G = (b.sup.b - b.sup.t) . 2.sup.-.sup.M.sup.+11 /(y.sup.b - y.sup.t), a 9 bit signed integer representing the gradient of brightness defined downward along the edge where b.sup.b is the 7 bit brightness (log) at the bottom of the edge;

h. F, a 1 bit flag which, if zero, implies that the brightness to the right is to vary linearly (in the log domain) toward the value of the next edge;

i. E, a 1 bit flag marking the beginning of a set of edges in memory.
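
Purely as an illustration, the 76-bit word just listed can be modeled in software as follows. This is a sketch only: the field widths follow items (a) through (i) above, but the Python representation, the packing order and the helper name pack are assumptions and are not taken from the patent.

    from dataclasses import dataclass

    # Field widths of the 76-bit edge word <X,Y,H,S,B,C,G,F,E>, per items (a)-(i).
    FIELD_WIDTHS = [("X", 11), ("Y", 11), ("H", 11), ("S", 13),
                    ("B", 7), ("C", 12), ("G", 9), ("F", 1), ("E", 1)]  # 76 bits total

    @dataclass
    class Edge:
        X: int  # 11 bit x-coordinate of the edge's top
        Y: int  # 11 bit y-coordinate of the edge's top
        H: int  # 11 bit height, y-bottom minus y-top
        S: int  # 13 bit signed, scaled slope
        B: int  # 7 bit log brightness at the top of the edge
        C: int  # 12 bit color to the right (4 bits each of red, blue, green)
        G: int  # 9 bit signed, scaled brightness gradient down the edge
        F: int  # 1 bit flag: 0 means brightness to the right varies linearly
        E: int  # 1 bit flag marking the first edge of a set

    def pack(edge):
        """Pack an Edge into a single 76-bit integer, most significant field first."""
        word = 0
        for name, width in FIELD_WIDTHS:
            word = (word << width) | (getattr(edge, name) & ((1 << width) - 1))
        return word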

Edges may be generated by a system similar to that utilized in the computer graphics system of the Evans and Sutherland Computer Corporation, but since the image generator does not form a part of this invention, it may be assumed for simplicity that the image generator 10 is a storage device, such as a drum memory or a disc memory, storing a plurality of edges each defined by a 76-bit word of the type shown in FIG. 2, the collection of stored edges representing a desired two-dimensional image, such as a selected two-dimensional projection of a three-dimensional world. The edges stored in the image generator 10 are only those which should be visible on the display screen, and they are x-ordered and y-ordered according to rules discussed in detail below. Additionally the generator 10 stores control information of the type discussed in detail below.

A plurality of edges of the type shown in FIG. 2 together with eight bits of control information per edge, making up a total of 84 bits per edge, are transmitted via a communication link 11, which may be a coaxial cable of sufficient capacity, to a communication controller 12 comprising a receiver 14, a timing generator 16 and a terminal load control 18. The communication controller 12 serves a number of display terminals 1, 2 . . . R. The receiver 14 is a serial-to-parallel converter for the edges and control information transmitted from the image generator 10, the timing generator 16 provides the necessary timing and control signals for transmission to each of the display terminals via a timing bus 20, and the terminal load control 18 decodes the edge data and control data from the coax receiver 14 and applies the decoded information to the display terminals via a memory input bus 22.

Each of the display terminals 1 through R comprises three major portions: a memory 24, which comprises a plurality of memory banks and a control for the memory banks, and temporarily stores a selected plurality of edges that are x-ordered and y-ordered in a selected manner; a decoder 26, which comprises an edge-to-segment decoder 28, a segment buffer 30 and a segment-to-video decoder 32; and a display monitor 34, which may be in color. The purpose of the decoder 26 is to convert the edges from the memory 24 into color video raster for driving the display monitor 34, and it carries out the conversion by decoding the edges to segments of scanlines, buffering the segments and converting the buffered segments to a color video raster.

Because the edges are a highly compressed representation of pictorial information, the communication link 11 can be of a relatively low capacity, and the same image generator 10 can serve up to hundreds of display terminals, unlike the prior art known to applicant where each terminal must have its own image generator. Because the memory 24 of each terminal stores edges rather than video raster, and because the memory 24 is organized in a novel and efficient manner, it can provide sufficient information to the display monitor 34 by storing only a few thousand edges. Because the memory 24 outputs for each scanline of the display monitor 34 only the edges that intersect it, and because of the novel and efficient organization of the decoder 26, the decoder can operate with sufficient speed to ensure flickerless display at the monitor 34 of complex images, and it can operate with a segment buffer 30 comprising only a few shift registers.

It is noted that a polygon of arbitrary shape and color can be represented by edges in accordance with this invention. Because the brightness may vary linearly along the edges bounding the polygon and linearly in x between edges, a polygon can represent a portion of a curved surface. By means of such polygons, both flat and curved surfaces can be represented in any illumination environment, limited only by the capacity of the edge memory 24 and by the number of bits used to define an edge. For a planar surface and a point light source at infinity, the edges bounding the left side of a polygon require F to be 1 and G to be 0. The result is a polygon of arbitrary but constant brightness and color. The F, B and G of the edges bounding the right side of this type of a polygon do not influence the brightness within the polygon. Planar surfaces which are illuminated by neighboring light sources require a varying shading and thus fall into the category of curved shaded surfaces. The brightness within the polygon is a piecewise linear function of that at the corners of the polygon because of the way brightness is defined along an edge at scanline y by:

b(y) = b.sup.t + G . (y - y.sup.t) . 2.sup.M.sup.-11

which reduces to

b(y) = b.sup.t + (y - y.sup.t) . (b.sup.b - b.sup.t)/(y.sup.b - y.sup.t)

which is the linear interpolated brightness along the edge.

If on the same scanline the brightnesses of two adjacent edges are b.sub.1 (y) and b.sub.2 (y), then the brightness at x between the two adjacent edges bounding a curved shaded surface is defined by

b(x) = b.sub.1 (y) + (x - x.sub.1) . (b.sub.2 (y) - b.sub.1 (y))/(x.sub.2 - x.sub.1)

where x.sub.1 and x.sub.2 are the x intercepts of the two edges and where the form of the equation is that of a linear interpolation.
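
A minimal numeric sketch of the two interpolations just described, written directly from the brightness definitions rather than from the hardware; the function and argument names are illustrative.

    def brightness_along_edge(b_top, b_bottom, y_top, y_bottom, y):
        """Linearly interpolated (log-domain) brightness along an edge at scanline y."""
        return b_top + (y - y_top) * (b_bottom - b_top) / (y_bottom - y_top)

    def brightness_between_edges(b1, b2, x1, x2, x):
        """Linearly interpolated brightness at x between two adjacent edge crossings."""
        return b1 + (x - x1) * (b2 - b1) / (x2 - x1)

    # Example: an edge running from y = 100 (brightness 40) to y = 200 (brightness 80),
    # sampled at scanline y = 150, then shaded halfway toward a right edge of brightness 20.
    b_left = brightness_along_edge(40, 80, 100, 200, 150)        # 60.0
    print(brightness_between_edges(b_left, 20, 300, 500, 400))   # 40.0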

The edges on the left of curved shaded polygons have their F set to zero. The edges on the right of these polygons must, of course, have the appropriate G's and B's. However, their F's depend upon the nature of the surface to their right. If a discontinuity in brightness is required to the right of a curved shaded surface, the edges on the right of the shaded polygon must be followed by another set of parallel edges which redefine the brightness and color further to the right.

Aside from the obvious cost advantage afforded by the data compression due to edge encoding, there is a second advantage related to image generation. The generation of 2-D perspectives out of 3-D objects requires planar approximations to curved surfaces so as to allow low cost, high speed visible surface calculations. The perspectives of these planar surfaces are polygons which are conveniently represented by edges. Also, the brightness of the 3-D objects is generated only for the corners of the facets representing curved 3-D surfaces where linear interpolations in the perspective domain are assumed for all other portions of the surfaces. This interpolation is identical to that used at the display terminal according to this invention, permitting a perfect match between image generation and display terminal decoding.

Overview of Operation (FIGS. 1-3)

Prior to operating the disclosed system of display devices, the image generator 10 must be able to provide one or more sets of edges and control data. Although a specially programmed computer system can be used to provide the necessary data, since the image generator does not form a part of this invention, it can be assumed for the purposes of this disclosure that the image generator 10 is a memory (a drum or disc memory) that is loaded through a keyboard (through an intermediate tape storage, if necessary) with the required data. The required data includes one or more sets of edges each defined by a word of the type illustrated in FIG. 2, and each accompanied by several bits of control information, e.g. 8 bits. Each set of edges in the image generator 10 would normally represent a two-dimensional projection of a three-dimensional object, but this need not be the case, and the set of edges may represent any suitable pictorial information. The edges of a set are y-ordered and x-ordered as described in detail later in this specification. Further, the edges are blocked, again as described in detail later in this specification, and include marker edges defining the blocks of edges and include null edges, as necessary. The several bits of control information accompanying each edge definition word indicate to the terminal load control 18 which of the several display terminals is to receive a specific set of edges, which of the edge defining words already stored in a specific memory 24 of a specific display terminal should be replaced, updated or deleted, what timing the timing generator 16 should provide, and the like. Since these control bits are used to carry out functions which are well known in the prior art, such as loading and updating selected ones of a plurality of memories, they are not described in detail in this specification.

In actual operation of the invented system of display terminals, the image generator 10 delivers over the communication link 11 a set of edges, e.g. an integer multiple of 1282 edges. As the edges are being received by the communications controller 12, the receiver 14 converts the serially received data to parallel data, the timing generator 16, which includes internal clocks, generates the necessary timing information for loading the selected memory 24, and the terminal load control 18 determines which memory 24 receives the edges and directs the edges to that memory 24 in the format suitable for loading it.

The set of edges is loaded in the selected memory 24 in the manner described in detail later in this specification. Then, the edge-to-segment decoder 28 looks at a subset of the edges stored in the memory 24 (looks at a block of edges) to see which, if any, cross the scanline of the monitor 34 which is currently being decoded. Based on detecting the edge crossings of that scanline, the edge-to-segment decoder 28 divides the scanline into segments, and defines each segment by its position on the scanline and by the attributes (color, brightness and the like) corresponding to the edge crossings bounding the segment. The segment buffer 30 buffers a few of the segments provided by the decoder 28, and the segment-to-video decoder 32 takes the buffered segments and decodes them into a video raster, each point of which is defined by its position on the screen of the monitor 34 and by its attributes such as brightness and color. The video raster from the decoder 32 is applied to the monitor 34 for display at a resolution several times that of the conventional color TV receiver (e.g. 1600 points in x and 1200 points in y).
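
The scanline decoding just outlined can be modeled functionally as below. This is a software sketch under stated assumptions, not a description of the pipeline hardware: edges are plain dictionaries, a float slope field stands in for the scaled S and M values, an edge is taken to cross scanlines Y through Y + H inclusive, and the x-intercept arithmetic of the real decoder is reduced to one line.

    def x_intercept(edge, y):
        """x where the edge crosses scanline y; 'slope' stands in for the scaled S field."""
        return edge["X"] + edge["slope"] * (y - edge["Y"])

    def decode_scanline(block, y, screen_width=1600):
        """Model of one scanline decode: edges -> segments -> per-point raster attributes."""
        crossings = [e for e in block if e["Y"] <= y <= e["Y"] + e["H"]]
        crossings.sort(key=lambda e: x_intercept(e, y))  # the hardware relies on x-ordering instead
        segments = []
        for left, right in zip(crossings, crossings[1:] + [None]):
            x0 = int(x_intercept(left, y))
            x1 = int(x_intercept(right, y)) if right else screen_width
            segments.append((x0, x1, left["C"], left["B"]))  # position plus attributes
        raster = [None] * screen_width  # one (color, brightness) tuple per raster point
        for x0, x1, color, brightness in segments:
            for x in range(max(x0, 0), min(x1, screen_width)):
                raster[x] = (color, brightness)
        return segments, raster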

It is noted that while a specific hardware embodiment of the invention is described in detail in this specification, many variations are possible within the scope of the invention. For example, edges may be defined by words differing in format from the words shown in FIG. 2, the edge may define the attributes of a screen area to its left, as well as to its right, or some other associated area of the screen, and different components may be used for specific functions. Further, parts or all of the invented display terminal system may be simulated on specially programmed general purpose digital computers.

The invention is described in detail below under the general headings of: memory 24; decoder 26, including edge-to-segment decoder 28, segment buffer 30 and segment-to-video decoder 32; and display monitor 34.

Memory 24 -- FIGS. 4 and 5

The memory 24 (FIG. 1) is interposed between the memory input bus 22 and the decoder 26, and its purpose is to temporarily store (buffer) a set of edges and to provide to the decoder 26 those edges which cross the scanline that is currently being decoded. In order not to have to read out all of the edges in the memory 24 during each scanline period (e.g. each 64 microseconds) the memory is organized into two sections, a small section which is read out once for each scanline period and a large section which is read out only once for each vertical trace period (e.g. every 1/60th of a second) of the display monitor 34.

Referring to FIG. 4, the memory 24 of each of the display terminals comprises N banks 1, 2 . . . N (where N is four, for example), each bank comprising a large section 36 capable of storing 1,024 edges, and a small section 38 capable of storing 258 edges. The set of the small sections 38 of the N banks is referred to below as the "small section of the memory 24" and the set of the N large sections 36 is referred to as the "large section of the memory 24". A memory and terminal control 40 is connected to each of the N banks. Each of the small sections 38 is connected to the memory input bus 22 to receive new edges, and each small section 38 can either recirculate its own contents in a small loop, or circulate its contents and the contents of the large section 36 of its own bank in a larger loop. Each large section 36 may comprise 76 MOS dynamic shift registers each with 1024 bits capacity, while each small section 38 may comprise 76 MOS dynamic shift registers each having 256 bits capacity and two 76-bit capacity TTL registers.
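
As a software analogy only, the two-loop organization of one bank can be modeled with two circular buffers. The 258-word and 1024-word sizes follow the text; the class, its methods and the use of deques are assumptions for the sketch.

    from collections import deque

    class MemoryBank:
        """Toy model of one bank: a 258-word small loop and a 1024-word large loop."""
        def __init__(self, edge_words):
            self.small = deque(edge_words[:258])       # read once per scanline period
            self.large = deque(edge_words[258:1282])   # read once per vertical trace period

        def cycle_small(self):
            """One scanline period: recirculate the small section, yielding each word."""
            for _ in range(len(self.small)):
                word = self.small[0]
                self.small.rotate(-1)                  # the word returns to the end of the small loop
                yield word

        def cycle_combined(self, steps):
            """Clock the combined 1282-word loop, exchanging words between the sections."""
            for _ in range(steps):
                self.small.append(self.large.popleft())
                self.large.append(self.small.popleft())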

The output of the small sections 38 is applied to a memory output bus 42. Each of the small sections 38 is read out (by recirculation) once each scanline period (each 64 microseconds) while the combination of the contents of the small section 38 and the large section 36 is read out (by circulation) once every vertical trace period (every 1/60th of a second) of the display monitor 34.

Referring to FIG. 5, which shows a detailed block diagram of one of the N memory banks shown in FIG. 4, the small section 38 actually comprises a portion 38a storing 256 edges, an output register 38b interposed in the large loop from the large section 36 to the portion 38a and an input register 38c interposed in the large loop between the portion 38a and the large section 36.

In addition to the two sections 36 and 38, each bank of the memory 24 has (1) a scanline cross-detector 44 with a bank control 46 and a delayed output register 48, and (2) a bus interface 50. Because the entire contents of the large section of memory 36 can be functionally inserted into the loop of the small section 38, blocks of edges can be moved in and out of the small section 38 during certain scanline periods without altering the order of the edges. The 258 edge memory section 38 is cycled each scanline. This 258 edge memory loop is always clocked (cycled) completely around between the insertions of the large section of memory to keep the data in the same order. Because the N banks of memory are clocked together during image decoding, blocks of up to N . 256 edges can be moved into the small sections of memory during a scanline period.

A portion of the edge data passing through the output register 38b of a bank is fed into the scanline cross detector 44 which compares the y of the current scanline with the Y and Y + H of the edge. Results of this comparison are loaded into the delayed register 48 exactly one memory clock time later, when a new edge enters the output register 38b. The contents of the delayed output register 48 at this time are .DELTA.y(= y - Y), M (the number of consecutive leftmost zeros in H up to 10 zeros), F, K and I. I is a one bit flag which if set means that the edge is crossing the decoded scanline, and K is a one bit flag which if set means this is the last scanline that this edge will cross.
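
A sketch of the comparison performed by the scanline cross detector, using plain integers. The returned names follow the text; the inclusive crossing test and the dictionary representation are assumptions.

    def cross_detect(edge, y):
        """Return the quantities the text loads into the delayed register:
        delta_y, M, F, plus the I (crosses this scanline) and K (last crossing) flags."""
        Y, H = edge["Y"], edge["H"]
        I = 1 if Y <= y <= Y + H else 0     # edge crosses the currently decoded scanline
        K = 1 if I and y == Y + H else 0    # last scanline this edge will cross
        delta_y = y - Y
        M = min(10, 11 - H.bit_length())    # leading zeros of the 11-bit H, capped at 10
        return delta_y, M, edge["F"], K, I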

In order that the .DELTA.y, M, F, K and I data for an edge be available in the delayed output register 48 at the same time the S, X, B, G and C data for that same edge are in the output register 38b, a word in the memory 24 contains portions of the data for two edges, the E.sub.i.sub.+1 , F.sub.i.sub.+1 , Y.sub.i.sub.+1 and H.sub.i.sub.+1 data for one edge i + 1 and the S.sub.i, X.sub.i, B.sub.i, G.sub.i and C.sub.i data for the preceding edge i. This eliminates the need for a delayed output register which must hold the data for an entire edge. This staggering of edge data is described in more detail below. E.sub.i.sub.+1 , rather than being buffered in the delayed output register 48, goes directly to a memory control 52 which is a part of the memory and terminal control 40, and it is buffered there. The control 52 is shared by all N banks of a memory 24. F.sub.i is buffered along with .DELTA.y.sub.i and M.sub.i in the delayed output register 48 where one spare register bit is available with an interface to the bus 42. (11 bit .DELTA.y and 4 bit M leaves 1 bit when 4 bit register chips are used.)

The delayed output register 48 has tri-state outputs for directly interfacing F, .DELTA.y and M to the memory output bus 42. The remaining portion of the edge data coming from the output register 38b is interfaced to this same bus 42 through tri-state logic in the bus interface 50.

The edge data are enabled onto the output bus 42 under the control of the I flags of the N banks. Because more than one edge may be valid among the N edges of N banks (a valid edge is an edge which intersects the current scanline), the banks are each given a different priority corresponding to the x-ordering of the edges in the memory 24, as described in detail later. Thus, if more than one edge is valid, one by one they are enabled onto the memory output bus 42 in the proper x-ordered sequence.

The memory control 52 receives the I flags from all of the banks 1 through N. When a memory bank's I is 1 and those of the banks to its left are zero, the bank places its data on the bus 42 and resets its I flag, permitting the other banks to gain control of the memory output bus 42 in turn, under the control of the memory control 52.
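
The leftmost-first arbitration of valid edges onto the output bus can be sketched as a simple loop. The list-of-dicts representation and the drain function are assumptions; the I-flag handling follows the description.

    def drain_valid_edges(banks):
        """Enable the valid (I = 1) edges of the N banks onto the bus one at a time,
        in left-to-right bank priority, resetting each I flag as its edge is taken."""
        bus = []
        while any(bank["I"] for bank in banks):   # another valid edge remains (J-like condition)
            for bank in banks:                    # the leftmost bank with I = 1 wins this round
                if bank["I"]:
                    bus.append(bank["edge"])
                    bank["I"] = 0
                    break
        return bus                                # an empty list models "no valid edge" flagging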

In addition to the bank control 46 in each bank, the memory banks are under control of the memory control 40 (FIG. 4) which clocks and controls the N banks. This memory control 40 receives I and J flags from each bank, and if none of the N edges which are clocked into the output registers 38b and 48 are valid (all I's = 0), flags the data on the output bus 42 as invalid. A J flag equal to 0 means that I = 1 and that there is an I to the left which is 1. Although this information can be rederived from the I's by the memory control 40, it happens to be already available in the bank control 46. A J = 1 means that there is yet another valid edge to be enabled on the output bus 42 and thus that the memory 24 must not yet be clocked.

The E and K flags from the left bank are used by memory control 40 to locate the first and thus the last edges within subsets of edges. Before describing further the operation of the memory, the organization of the data within memory must be described.

Data Organization Within the Memory 24 (FIGS. 6-10)

In edge blocking in accordance with the invention, the data in the two sections of the memory 24 are blocked in y and ordered in x so that up to 256 valid edges may be detected and fed in x order to the decoder 26 every scanline period. Specifically, prior to being sent to a memory 24, a set of edges is partitioned into groups, and each group is used alone later, in the display terminal, to decode a range of y of the image. All portions of the image are represented by the groups. This process of blocking the edges transforms the original set of edges into a larger set containing duplicates of some of the original edges, where the edges of the larger set can be partitioned into disjoint groups. This partitioning or blocking of edges (1) puts the edge data in a form where only one portion (block) needs to be in the decoding process at a time and (2) orders the blocks so that blocks of edges can be exchanged between the large and small sections 36 and 38 of the memory 24 in such a way that the small sections 38 always hold the block of edges which was used to decode the previous scanline and which may be needed to decode the next scanline. See FIG. 6.

The edges within a block may extend above or below the range of y over which the block is used for decoding. Each block of edges includes at the beginning a marker edge and at its end a small number of null edges. The marker edge is defined to cross those scanlines over which a block is defined as valid. When this edge is detected during a scanline decode period as crossing a decoded scanline, it signals that the following edges are valid for decoding. If the marker edge is detected as valid for the last time (K.sub.i = E.sub.i = 1), the memory control 40 sets a memory loop flipflop 56 (FIG. 5) to cause a new block of edges to be used during the next scanline decode period.
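
The block-switch decision described here amounts to a one-line test; a sketch under the assumption that the marker edge's flags are available as a dictionary.

    def block_switch_needed(marker_flags):
        """Set the memory loop flipflop when the marker edge is detected as valid
        for the last time, i.e. K = E = 1, so the next scanline uses a new block."""
        return marker_flags["K"] == 1 and marker_flags["E"] == 1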

Null edges are used to occupy unused memory locations. A null edge is defined as one whose Y is impossible (i.e., 1313 < Y < 2.sup.11) and E = 0.

Referring to FIG. 6, blocks are formed out of the original set of edges as follows. First the edges are y-ordered from top to bottom of the screen of the display monitor 34 using their Y's. Blocks are formed from the top of this list and from those edges used to fill the preceding block which extend below the range in y over which that block is valid for decoding.

Starting with block 1 whose range begins at y = Y.sub.1 = 1, edges are moved into this block from a y-ordered list. All edges starting on the same scanline must be moved together as a group until the number moved into the block exceeds the function f where

where Y.sub.2 is the Y of the last group of edges moved. This last group of edges, which overflowed the block, is put back in the y-ordered list and a marker edge is added with Y = Y.sub.1 and H = Y.sub.2 - Y.sub.1 - 1. Then null edges are added to fill the block to an integer multiple of N. All edges in this block which extend to or beyond Y.sub.2 are copied into block 2 keeping their y-orders the same.

Block 2, now containing those edges copied in from block 1, is then filled from the y-ordered list until a group of edges causes it to overflow. Just as in block 1, the last group is put back in the y-ordered list, a marker edge and nulls are added, edges extending too far are copied out, etc., until the memory 24 is filled. FIG. 6 illustrates an image broken into 2 blocks where 5 edges are common to the blocks.
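
A rough software rendering of this blocking procedure, under several simplifying assumptions: edges are dictionaries carrying only Y and H, the overflow function f of the text (Equation 4, not reproduced here) is replaced by a fixed block capacity, and marker and null edges are represented only schematically.

    def block_edges(edges, capacity, n_banks):
        """Partition a y-ordered edge list into blocks; edges reaching past a block's
        y-range are copied into the next block, as described in the text."""
        blocks, carried, y_start, i = [], [], 1, 0
        while i < len(edges) or carried:
            block, moved_a_group = list(carried), False
            while i < len(edges):                       # move whole same-Y groups at a time
                j = i
                while j < len(edges) and edges[j]["Y"] == edges[i]["Y"]:
                    j += 1
                if moved_a_group and len(block) + (j - i) > capacity:
                    break                               # this group would overflow; leave it for the next block
                block.extend(edges[i:j])
                moved_a_group, i = True, j
            y_end = (edges[i]["Y"] if i < len(edges)
                     else max(e["Y"] + e["H"] for e in block) + 1)
            block.insert(0, {"Y": y_start, "H": y_end - y_start - 1, "marker": True})
            while len(block) % n_banks:                 # pad with null edges to a multiple of N
                block.append({"Y": 2047, "H": 0, "null": True})
            blocks.append(block)
            carried = [e for e in block                 # copy edges reaching to or beyond y_end
                       if not e.get("marker") and not e.get("null") and e["Y"] + e["H"] >= y_end]
            y_start = y_end
        return blocks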

Out of the original set of edges sufficient to define the image, up to 4 times 256 edges may be duplicated in blocks 2 through 5, where 4 is the number of small sections 38 of the memory 24 that fit into the large section 36 of the same memory and where 256 is the maximum number of edges allowed to cross a given scanline.

As noted above, blocking and the insertion of marker and null edges takes place before a set of edges is loaded into a memory 24. Thus, blocking edge data forces each of the invented display terminals to have a slightly larger memory 24 than the theoretical minimum. Were the memory not organized into two sections but rather into only the smaller section, and were N made greater, for example 5 times greater, the memory would consist of only one block, eliminating the need to duplicate some of the edges. The trade-off between the greater overhead costs per bit of a large N uniblock memory and the greater cost of more bits of a small N multiblocked memory favors the multiblocked memory at the present level of technology; however, a uniblock memory can be used in the future, should it become less expensive to do so.

After the memory 24 is loaded, it appears as illustrated in FIG. 7. The bottom layer of memory is the set of N input registers 38c (in the illustrated case, N is 3), the next 256 layers are in the N small sections 38a of shift register memory, the next layer is the set of N output registers 38b, and the remaining 1024 layers are in the N large sections 36 of shift register memory.

Data paths are indicated by the arrows on the left side and the arrows down the middle of the memory. When the output registers 38b select the data from the input registers 38c, N 258 layer memory loops are formed, holding N . 258 edge words. This is the mode for reading the small sections 38a of the memory 24. When the output registers 38b select the data from the 1024 layer section of memory, N 1282 layer memory loops are formed, holding N . 1282 edge words. This is the mode where the large sections 36 of the memory 24 are being read. Data outputting from the memory 24 come from the output registers 38b and data inputting to the memory 24 enter the input registers 38c replacing the data that would have come from the 257 layer section of memory 38a.

During loading, data for each bank enter at the bottom and follow the data path of the 1282 memory loop, i.e., directly to the top, then shifting downward.

Notice the staggering of the edge data within each bank: data for an edge reside in two words of memory. During loading, an edge is not staggered. Upon completion of loading, the left portions (as shown) of all N banks are clocked once more, thereby producing the desired staggering. The memory is cyclic so that there need not be any unused memory locations. The image generator 10 can unload its blocks of data, each as it is formed, without the need for buffers which hold the entire image.

Edges in each block are x-ordered from left to right and upward as indicated in block 1, where there are L.sub.1 edges and where L.sub.1 /N is an integer. The first word of a block must be in the left bank of memory and the last word of a block must be in the right bank of memory; the first word is flagged as such by a 76th bit, E, as shown in blocks 1, 2 and K.

The definition of the "partial" x-ordering is that edge j must follow edge i if edge j crosses the same scanline as edge i and edge j's xintercept is greater than that of edge i. Refer to FIG. 3 where edge 3 must follow edges 1, 2 and 4, where edge 2 must follow edge 1, but where edge 4 need not follow edges 1 and 2. As indicated previously, the x-ordering takes place prior to loading the memory 24.

As indicated by the data flow, there are two sources of input to the output register 38b and two sources of input to the input register 38c. Normally, the output register 38b in each bank selects input from the input register 38c, forming a 258 . N word loop in which the small sections 38 of memory are read whenever memory and the two TTL registers 38b and 38c are clocked. On approximately 16 scanlines, during a portion of the scanline decode period, each output register 38b selects input from the large sections 36 of memory, forming 1282 . N word loops represented by the dashed arrows in FIGS. 7-10. These occasions (when N loops, each 1282 words long, are selected and the larger sections 36 are clocked along with the smaller sections 38) arise either when an entire block of edges is being exchanged between the two sections of memory or when N edges (one layer of words of a block) must be exchanged in order to "revitalize" the dynamic memory of the larger sections 36. These single layer exchanges move data from the next block located in the larger sections 36 of the memory 24 into the smaller sections 38 ahead of the time it is needed for decoding. Thus, the block of edges cycling in the smaller sections 38 must be small enough not to be shifted out by these early entries; accordingly, the function f in Equation 4 includes a term to guarantee the integrity of a block of data in the smaller section.

After memory is loaded, the memory loop flipflop 56 has been set, forcing the memory into a 1282 layer loop so that data passing through the output registers 38b during the next scanline period is not from block 1, and likely not from all of the first block encountered (block 2 in FIG. 10) because part of it will already have been moved into the smaller sections 38. This is the only scanline decoding period where an entire block of memory does not pass through the output registers 38b and should really be considered a part of the loading cycle.

At the end of the first scanline period following loading, and at the end of all other scanline decode periods, memory always appears as in FIG. 8 or FIG. 10, where FIG. 10 differs from FIG. 8 because of the "early reading" of portions of the next block required to rejuvenate the dynamic memory. Notice that the righthand portion of the last edge of the block of edges just decoded is in the output registers 38b at this time. The E flag of the first edge of the next block is used to end the cycling of the memory 24 during a scanline decode period. If invalid data happen to be in the memory such that there is no flag set to mark the beginning of blocks, then the memory cycle is terminated after 260 memory clocks, as detailed below. This protects the small sections 38 from excessive average clocking frequencies.

Notice that block i + 1's data are always put behind block i's data in the small sections 38 of the memory 24; that block i's data entered behind those of block i - 1; and that most of the data for block i - 1 have been pushed out and around into the large sections 36.

Because the end of the data for block i - 1, which has entered prematurely, is not marked by a flag, a 258 state alignment counter 54 (FIG. 5) in the memory control 40 is held to a value of minus one whenever the larger loop is clocked. Otherwise, this counter 54 always increments along with the clocking of the small sections 38 of memory (cycling through values -258, -257, . . . , -1), so that a count of minus one means that the data in the two sections 36 and 38 are "realigned", i.e., the end of block i + 1's data in the small section 38 of memory has entered the output register 38b. Only when this counter 54 is at minus one is the larger section of memory allowed to be selected and clocked.
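
The counter's behavior can be summarized by the following small Python model; it is a hedged restatement of the rule just described (the hardware is, of course, a counter in the memory control 40, and the clocking interface used here is assumed).

    # Model of the 258-state alignment counter 54: it steps through -258 .. -1
    # with each clock of the small sections and is held at -1 whenever the
    # large (1282-layer) loop is clocked; only at -1 may the large sections
    # be selected and clocked.

    class AlignmentCounter:
        def __init__(self):
            self.value = -1

        def clock(self, large_loop_selected):
            if large_loop_selected:
                self.value = -1        # held at minus one while the large loop shifts
            elif self.value == -1:
                self.value = -258      # wrap and begin a new pass of the small loop
            else:
                self.value += 1

        def realigned(self):
            return self.value == -1    # condition for reading the large sections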

If, during a scanline decode period where block i is being decoded, the first edge (marker edge) is detected as valid for the last time (K = 1), the memory loop flipflop 56 in the memory control 40 is set; otherwise it is reset. Thus, if at the beginning of a scan decode period the memory loop flipflop 56 has been set, or if 16 scanline decode periods have passed without reading the larger section of memory, the larger loop will be read when the alignment counter 54 has reached minus one, either at the beginning of the scanline decode period as in FIG. 8 or further along as in FIG. 9, where at the beginning of the scanline period memory appeared as in FIG. 10. That is, during a decode cycle requiring the decoding of block i + 1, most of which still resides in the larger section of the memory 24, the small section of the memory 24 is read and decoded until the counter 54 is at minus one, after which all further reading is from the larger section of the memory 24. In other words, at the beginning of such a cycle the image in the memory 24 appears as in FIG. 10, and when the image appears as in FIG. 9, where the counter 54 is at minus one, the larger memory loop is read until the image appears as in FIG. 8 with i + 1 substituted for i. Similarly, during a decode cycle requiring a single revitalizing read of the large section of the memory 24, the larger loop will be read for one memory cycle when the counter 54 reaches minus one.

After the first memory cycle clocking, the output registers 38b contain the X, G, S, B, and C data for the first set of N edges to be outputted, while the Y, H, F of these edges have passed through the cross-detectors 44, producing .DELTA.y, M, F and I data in the delayed output registers 48. Normally these first N edges belong to either block i + 1 or block i - 1, block i always residing near the top of the small section of the memory 24 at this time. Thus, if this is a cycle where block i is to be decoded again, sets of N edges are scanned past at peak rate, the memory control 52 flagging them as invalid to the memory output bus 42, until a "first valid marker edge" in block i is detected. Only block i's marker edge will be detected as valid. When the flag is detected, normal crossing-detection control of memory clocking is activated so that each of the valid edges (those crossing the scanline) is enabled onto the output bus 42 before the memory 24 is clocked again (using J's). When a marker edge is detected (valid or invalid) following a valid marker edge, or when 260 memory clocks have occurred, the cycle ends. During scanline decode periods when a new block of edges is sought, the cycle ends on the first marker edge found, whether or not a valid marker edge has yet been encountered. An extra "overflow" counter 58 in the memory control 40 (FIG. 5) signals the end if no marker edge is detected, thus guaranteeing that the 256-layer shift register system does not clock in excess of 4 megacycles.

There should always be a valid scanline crossing of a marker edge during a decoding cycle if the data have been properly formatted. If not, the memory loop flipflop 56 is set, forcing a new block to move down during the next scanline decode period. This also permits the memory 24 to quickly find the correct block after a loading cycle, up to about K scanline periods (where K is the number of blocks) being required to hit the right block. However, the large sections 36 of memory are not allowed to be cycled more than 64 times during a vertical trace period. This protects the shift registers of the sections 36 from clocking at higher rates than their clock drivers can stand, and it also protects memory in cases where invalid or incorrect data reside in the larger sections 36 such that each block moved down has no valid edges.

Clocking of the memory occurs at multiples of a 134 nanosecond period (this 134 nanoseconds is determined by the maximum frequency of the shift registers used in a specific embodiment of the invention). During each period of 134 nanoseconds, one of the N edges read into the output registers 38b is put on the memory output bus 42 and flagged by the one bit V as valid or invalid. If, among the N edges, n are valid (n .ltoreq. N), then n .times. 134 nanoseconds (134 nanoseconds if n = 0) will pass before the memory 24 is clocked again. If among the N edges none are valid, the edge from the right bank is placed on the output bus 42 and flagged as invalid.

During each scanline decode period, the smaller section of the memory 24 is clocked up to 258 times and the larger section of memory is clocked up to 257 times -- once if only a refresh is required and up to 257 times if a new block of edges must be read.

After 258 memory clocks, up to 256 valid edges and up to 258 - 256/N invalid edges may have been placed upon the memory output bus 42. Thus, if all 256 valid edges happen to be together in only 256/N layers of the memory 24, a minimum of 514 - 256/N clock periods of 134 nanoseconds each are necessary to decode this worst-case number and distribution of valid edges within memory.
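
As an arithmetic check of this worst case, the count can be evaluated for the values of N used in the illustrated embodiments (256/N is rounded up here when N does not divide 256, an assumption made for the N = 3 case):

    # Worst-case 134 ns periods per scanline decode: one period per valid edge
    # plus one period per layer holding no valid edge, i.e. 514 - 256/N when
    # the 256 valid edges are packed into 256/N layers.
    import math

    def worst_case_periods(n_banks, max_valid=256, layers=258):
        packed_layers = math.ceil(max_valid / n_banks)
        return max_valid + (layers - packed_layers)

    print(worst_case_periods(4))   # 450 periods, the figure used in the timing budget below
    print(worst_case_periods(3))   # 428 periods for the N = 3 arrangement of FIG. 7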

After each scanline decode period, the small sections 38 of the memory 24 contain the last block of edges decoded. If a marker edge was detected as valid but not for the last time during this last decode period, only the small sections 38 are read during the next scanline decode period, the large sections 36 being read possibly once if they have not been read for 16 scanline decode periods.

If after a scanline decode period a marker edge was detected as valid for the last time, the small sections 38 of memory are read until the data in the small sections 38 are aligned with those in the large sections 36, after which the large sections of memory are read, the large section clocking along with the small section until the small section has been clocked up to 258 or even 260 times, depending upon the condition that ends the cycle. There are only two possible reasons why no edge crossings will be detected during a scanline decode period. Either the data in memory are invalid (because of an unknown memory state just after turn-on, a transmission error, or a data format error), or the data are valid but the edges are not valid (no crossings detected) because memory has not caught up to the beam y-value.

Decoder 26 (FIGS. 11-15)

The decoder 26, whose job is to convert the edge information stored in the memory 24 to a color video raster for driving the color monitor 34, comprises three main components: the edge-to-segment decoder 28, the segment buffer 30, and the segment-to-video decoder 32. A segment is defined as the scanline interval between two adjacent edges. The parameters used to define a segment are its width .DELTA.x, its color C, its normalized gradient of brightness g (10 bits), and the smooth shading bit F.

The edge-to-segment decoder 28 is a pipeline device with 8 levels, the first level receiving edges from the memory output data bus 42, and the 7th and 8th levels outputting one or two segments to the segment buffer 30 for storage in a single word of memory.

The inputs to the decoder 26 are the data on the memory output bus 42, <.DELTA.Y, M, X, S, B, G, C, F, V>, and Pipe Clock Enable (PCE), as well as a 134 nanosecond clock and a cycle reset pulse (CRS) from the communications controller 12, where CRS marks the end of a load cycle for the segment buffer 30. The pipe is clocked at multiples of 134 nanoseconds, enabled by PCE from the memory controller.

Levels 1 through 6 convert edge data to segment data. Levels 7 and 8 buffer one or two segments for storage in one word of the segment buffer 30. Two consecutive segments must be stored together whenever the one on the left has a .DELTA.x.ltoreq.3. A flag, e, is stored in each word of the segment buffer 30. If e is zero, it means that two segments are present, the left one of which has a .DELTA.x.ltoreq.3. Because this left segment's .DELTA.x is less than or equal to 3, only .DELTA.x.sub.O (the low order bit of its .DELTA.x), an 8 bit .DELTA.b, and a 12 bit C are required to define that segment. The following right segment is defined by an 11 bit .DELTA.x', a 3 bit N', a 10 bit g', a 1 bit F', a 12 bit C', and a 1 bit "first word" marker, m'. An m = 1 flags the word (when read by the segment-to-video decoder 32) as containing the first segment(s) of a scanline. A write enable, buffer write clock enable (BWCE), is generated by level 6 whenever data are present for storage in the segment buffer 30.
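
The packing rule can be sketched in Python as follows; the field names and the dictionary layout are assumptions made for illustration, while the field widths and the behavior of the e and m flags follow the description above.

    # Hedged sketch of the level 7/8 packing rule: a segment with dx <= 3 is
    # held back and stored together with the following segment in one buffer
    # word (e = 0); a segment with dx >= 4 occupies a word by itself (e = 1).

    def pack_segments(segments):
        """segments: dicts with dx, db, g, N, F, C, m fields (assumed layout)."""
        words, pending_narrow = [], None
        for seg in segments:
            if seg["dx"] <= 3 and pending_narrow is None:
                pending_narrow = seg                  # wait for the segment to its right
                continue
            word = {"e": 1 if pending_narrow is None else 0,
                    "dx'": seg["dx"], "N'": seg["N"], "g'": seg["g"],
                    "F'": seg["F"], "C'": seg["C"], "m'": seg["m"]}
            if pending_narrow is not None:            # left (narrow) segment fields
                word.update({"dx0": pending_narrow["dx"] & 1,
                             "db": pending_narrow["db"], "C": pending_narrow["C"]})
                word["m'"] = word["m'"] or pending_narrow["m"]
                pending_narrow = None
            words.append(word)
        # (a trailing narrow segment is not handled here; the hardware forces
        #  the last segment of a scanline to be wide)
        return words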

Levels 1 and 2 of the edge-to-segment decoder 28 (FIG. 12)

Levels 1 and 2 generate the x and b intercepts for each edge passing through the scanline which is being decoded. Two levels are required because the multiplications involved cannot be done in one clock period of 134 nanoseconds in the illustrative embodiment of the invention discussed here. Level 1 receives as data input .DELTA.y (= y - Y), M (exponent for S), S, X, B, G, C, F, and V (an edge validity bit) from the memory output bus 42. Level 1 buffers these data, where .DELTA.y . 2.sup.M is remembered rather than .DELTA.y and M individually, and produces as output intermediate products toward the solution for the intercepts.

Level 2 buffers the intermediate results and produces as output V (1 bit), C (12 bits), F (1 bit), x (11 bits), and b (7 bits), where x and b are the edge's position and brightness intercepts at the scanline being decoded.

The 134 nanosecond clock for these and the following levels is enabled by PCE. The pipe's clock is inhibited, i.e., PCE = 0, if the edge on the memory output bus 42 is invalid and is from the block of edges required for decoding the scanline being worked on. That is, an invalid edge will be placed upon the memory output bus 42 when all N edges entering the output registers 38b from a valid block of edges happen not to cross the decoded scanline. By inhibiting the clocking of the pipe decoder for invalid edges from a valid block, neighboring edges in the pipe will always be valid ones, will be in x order, and thus will be organized so that the lower levels of the pipe can form segment data out of neighboring edge data.

Until the first valid edge is placed upon the memory output bus 42 and after the last valid edge is placed upon the bus 42, the pipe is clocked at 134 nanosecond intervals independent of the validity of the data, guaranteeing that the block of valid edges that entered the pipe will pass through the pipe. The last valid edge that enters the pipe during a decode period is followed by invalid data as it moves down the pipe. The segment to be generated to represent the scanline portion to the right of this last valid edge will be forced in level 6 to have a .DELTA.x (width) sufficient to guarantee that the scanline video to the right of this last valid edge's x intercept will extend all the way to the right on the screen of the display monitor 34. Note that if this last edge is a smooth shaded edge (with F = 0), the video will be incorrect. In such a case there should have been one more valid edge whose x intercept corresponds with the right edge of the picture (at x = 1600).

For notation convenience, primed variables shall refer to those of the next edge. Thus, for a scanline, if B refers to the brightness of an edge i, then B', B", etc., refers to the brightness of the next and following edges just to the right of edge i as they would appear on the screen of the display monitor 34.

Levels 3 and 4 of the edge-to-segment decoder 28 (FIG. 13)

Level 3 buffers the data from level 2. Level 4 buffers the data from level 3 and produces as output V, C, F, .DELTA.x (11 bits), and .DELTA.b (8 bits, 7 bit magnitude plus a sign bit), where .DELTA.x and .DELTA.b are the differences in x and b between adjacent edge intercepts, i.e., across the segment bounded by the two edges.

A carry sign, CRY, which reduces .DELTA.x by one in this level 4, is true if the previous .DELTA.x, now in level 5, is less than 2 (a .DELTA.x in level 5 is forced up to a value of 2 if it was 1 or 0 in level 4).

Level 5 of the edge-to-segment decoder 28 (FIG. 14) (see FIG. 19)

Level 5 buffers its input from level 4 and produces as output .DELTA.x, .DELTA.b, r (10 bits), F, C, V, less-than-four (LTF), and valid-not-last (VNL). This level relays .DELTA.b, F, C and V. LTF is true if .DELTA.x is less than four. If .DELTA.x is less than 2, .DELTA.x is set equal to 2 by a .DELTA.x correction register 60 and gates 64, and CRY is generated by gates 62, forcing the .DELTA.x' of the next edge to be reduced by one unit.

Thus, a .DELTA.x not greater than 1 is forced equal to 2. The .DELTA.x for the following edge will then be reduced by one and any remainder remembered in the correction register 60. If this next edge's .DELTA.x is then also too small, it too is set to 2, forcing in turn the reduction of the next edge's .DELTA.x, until some .DELTA.x can absorb the correction. Because edges will be generated so as to be 2 units apart after x is truncated to 11 bits but before S is truncated to 13 bits, the corrections to .DELTA.x in the pipe will never be required to move an x intercept more than 1 unit in x, and only when two neighboring edge x intercepts happen to round off to a 1 unit separation. .DELTA.x's are forced to a minimum of 2 units in order that the segment-to-video decoder 32 need not do register-to-register transfers at clock intervals less than 67 nanoseconds (equivalent to 2 units in x along a scanline) and in order that the segment buffer 30 does not clock at intervals less than 134 nanoseconds.
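
Procedurally, the width correction amounts to the following hedged Python restatement of the register-and-gates mechanism (function and variable names are assumed):

    # A dx of 0 or 1 is forced to 2; the deficit is remembered and subtracted
    # from the following edge's dx until some dx can absorb it (the role played
    # in hardware by correction register 60 and gates 62 and 64).

    def enforce_min_width(widths, minimum=2):
        out, borrow = [], 0
        for dx in widths:
            dx -= borrow
            borrow = 0
            if dx < minimum:
                borrow = minimum - dx   # push the deficit onto the next segment
                dx = minimum
            out.append(dx)
        return out

    print(enforce_min_width([1, 2, 7]))   # -> [2, 2, 6]; the total span is preserved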

.DELTA.b is in a magnitude-plus-sign form because it is used by a multiplier in a following pipeline level and because the segment-to-video decoder 32 requires that form.

The 10 bit positive integer, r, (no sign bit) is defined by the equation for F = 0 segments ##EQU1## where N is the number of consecutive leftmost zeros in .DELTA.x up to 9 and where [.DELTA.x/2] is an integer formed by the upper 10 bits of the 11 bit integer .DELTA.x (i.e., [ ] denotes truncation to the 10 bit number .DELTA.x/2).

For F = 1, r is set equal to 2.sup.9, the value assumed by the above equation where .DELTA.x/2 = 1. Since r will be multiplied by .DELTA.b in the next levels (6 and 7) to form a brightness gradient g, an r of 2.sup.9 guarantees that .DELTA.b can be recovered from g later in the segment-to-video decoder 32 via a shift network rather than via an expensive divider network (actually g = 2.sup.2 .DELTA.b when F = 1). Thus, for F = 1 cases, where g must not be used to alter the brightness (b) along a scanline over the segment's range in x, the brightness must jump by .DELTA.b at the end of the segment. This is discussed in further detail in connection with the segment-to-video decoder 32.

The variable r is obtained from a ROM (read-only memory) look-up table 66. The function [.DELTA.x/2] behaves in such a way for each increment in .DELTA.x that a table of 256 11-bit words can be employed, each word having an 8 bit 1/x mantissa and 3 correction bits.

To use the ROM table 66, up to 2 leading zeros are first shifted out of [.DELTA.x/2] if possible. Out of the resultant number A (a.sub.9 . . . a.sub.0), the upper eight bits (a.sub.9 . . . a.sub.2) are used to look up the value of ##EQU2## This denominator differs from ##EQU3## only in that two 1's reside in the lower 2 bits. If a.sub.1 = a.sub.0 = 1, the correction bits are ignored and a 1 is added (cancelling the -1 in the above equation). If a.sub.1 = 1 and a.sub.0 = 0, one correction bit plus one is added. If a.sub.1 = 0 and a.sub.0 = 1, two correction bits plus one are added. If a.sub.1 = 0 and a.sub.0 = 0, all the correction bits plus one are added to T. Equation 10 employs a -1 in order to make the 9th bit of T a 1 and the 10th bit zero in all cases (except when a.sub.9 = a.sub.8 = . . . = a.sub.2 = 1). Thus, the 9th and 10th bits of r need not be stored in the table. After corrections, a 10 bit integer results whose maximum value is 512.
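
The equation for r is not reproduced in this text. From the stated properties (r = 2.sup.9 when [.DELTA.x/2] = 1, a maximum value of 512, and recoverability of .DELTA.b from g by shifting), one consistent reading is r = 2.sup.(18-N) /[.DELTA.x/2], truncated. The Python sketch below computes r directly under that assumption rather than through the 256-word ROM and its correction bits, which are not reconstructed here.

    # ASSUMED form of the (unreproduced) equation: r = trunc(2**(18 - N) / [dx/2]),
    # with N the number of leading zeros of the 11 bit dx, limited to 9.  This
    # reproduces the stated properties: r = 2**9 for [dx/2] = 1 and max r = 512.

    def leading_zeros_11bit(dx, limit=9):
        n = 0
        for bit in range(10, -1, -1):          # examine bits 10 .. 0
            if dx & (1 << bit):
                break
            n += 1
        return min(n, limit)

    def gradient_reciprocal(dx):
        half = dx >> 1                          # [dx/2], the upper 10 bits of dx
        n = leading_zeros_11bit(dx)
        return (1 << (18 - n)) // half

    assert gradient_reciprocal(2) == 512        # the [dx/2] = 1 case cited above
    assert max(gradient_reciprocal(dx) for dx in range(2, 2048)) == 512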

Levels 6, 7 and 8 of the edge-to-segment decoder 28 (See FIG. 15)

These last three levels obtain the brightness gradient for each segment, buffering up to two segments for storage in a single word of the segment buffer 30.

Level 6 receives as input V, .DELTA.x, r, .DELTA.b, C, F, VNL, LTF and CRS, and buffers all but VNL and CRS in a TTL register 68. If VNL is true, the segment is valid and not last, in which case .DELTA.x is not replaced by .DELTA.x.sub.max when buffered in the TTL register 68 under the control of a select network 70.

A multiplier network distributed over level 6 and level 7 and comprising multipliers 72 and 74 forms the product g, a 10 bit signed (magnitude plus sign) integer which is rounded off to the nearest integer. The number N of consecutive leftmost zeros (up to 9) in .DELTA.x is formed by gates 76. Both g and N, along with V, .DELTA.x, C and F, are buffered in level 7, where a ripple carry can be completed in the formation of g and where a segment can reside during a write cycle of the segment buffer 30.

Segments with .DELTA.x.gtoreq.4 are stored in the segment buffer 30 while in level 7. If a segment's .DELTA.x.ltoreq.3 and that narrow segment follows a segment stored in the buffer 30 while in level 7, the storage clock enabling signal e is held false so that when that narrow segment enters level 7's TTL register 78, it will not be stored until it reaches level 8.

Level 8 buffers .DELTA.x.sub.O, .DELTA.b (equal to g . 2.sup.-2), C and e for a segment. The variables e and .DELTA.x.sub.O are sufficient to (1) define .DELTA.x and (2) flag the segment as valid. That is, if e = 1 in level 8, it means the segment had already been stored while in level 7; therefore, it is not valid in level 8. If the following segment in level 7 is being stored, the data in level 8 thus will be flagged as invalid (i.e., already stored). If e = 0 in level 8, it means .DELTA.x was less than 4, i.e., .DELTA.x = 2 or 3, in which case .DELTA.x.sub.O (the low order bit of .DELTA.x) is sufficient to define .DELTA.x. Because a segment with a .DELTA.x.ltoreq.3 has [.DELTA.x/2] = 1, no smooth shading through intermediate brightnesses is necessary; thus F is not needed.

The buffer write enable e is "anded" with PCE and BNF (buffer not full) at gate 80 to form BWCE, which causes the segment buffer 30 to store the word. PCE is required so that each segment is stored only once. BNF is required to prevent overflow in the segment buffer 30 in cases where more than 256 edges are decoded (an error in the image definition). Each time a word is stored in the segment buffer 30, a counter 82 is incremented (from -1 it wraps to -256, then counts -255, . . . , up to -1). Initially this counter is at -1, either forced to -1 by CRS at the beginning of the buffer 30 load cycle or having been left at -1 during a previous load cycle in which 256 "writes" were made. Often there is not enough time and/or there are not enough edge crossings during a load cycle to write 256 words into the buffer 30, thus leaving the counter 82 below -1.

BNF goes false if the counter 82 is at -1 and the segment is not the first valid one encountered during a load cycle. The first valid segment, for which m = 1 from level 6, forces BNF true in spite of the -1 value, resetting the counter to -256, from which it will be incremented by following valid data. If the first valid segment has a .DELTA.x.ltoreq.3, even though BNF goes true, e does not, preventing the storage of this segment while in level 7. In such cases m is set to 1 for the next segment also. Thus, when the pair of segments is stored (an e of 0 will guarantee that e is 1 during the next clock period), m will be 1 in the first word to enter the segment buffer 30.

A combinatorial logic block 71 receives as inputs LTF", V", e', m', V', EM1 (indicating the counter 82 is at -1), and m", and outputs e", m" and BNF based on the following logic relationships:

A segment is detected as "first" if it is valid (V = 1) and the previous segment was invalid (V = 0) (V" . V' in the diagram). Normally the memory control 52 sets V equal to zero for all but valid edges. However, if it happens that no valid edges (including the marker edge) are detected during a scanline decode period, the last invalid segment is flagged as valid, in order that the segment buffer 30 will store a word with m = 1. This is necessary in order that the segment-to-video decoder 32 can find the first segment in the segment buffer 30 when less than 256 words are loaded into the buffer.

When V = 0 and PCE is 1 following the block of valid segments that have passed down the decoder 28 through level 7, BWCE is held high until (1) the buffer 30 fills with "nulls" or (2) the load cycle times out. A "null" follows the segments which define the visible scanline. In cases where two valid segments enter each word of the segment buffer 30, only 128 words will have been written in, yet another 128 shifts are required to move the segment data to the output of the shift register buffer 30. Thus, it is necessary to use up any remaining cycles of the load cycle (40 of them) to move these data forward as far as possible, minimizing the number of shifts that must be done by the segment-to-video decoder 32 during the horizontal retrace period before an m = 1 is detected and the visible trace begins.

When one-half of the segment buffer 30 (which is actually a pair of pingpong buffers, as discussed in detail below) is switched over to the video decoder 32 at the end of a load cycle, m to that half of the "pingpong" buffer 30 is forced low so that when an m = 1 is loaded into the other half of the buffer 30, it does not enter the first half also. Such an "extra" m = 1 might not be shifted out during the following load cycle for this half of the buffer, and it would lock the video decoder 32 onto the wrong first word.

Remember that for segments entering level 8 as valid, 2.sup.-2 . g = .DELTA.b (since [.DELTA.x/2] = 1). Also remember that segments where F = 1 have .DELTA.b = g . 2.sup.-2 as well (that is, .DELTA.b is the upper 8 bits of the 10 bit g).

Segment Buffer 30 (see FIG. 16)

The segment buffer 30 comprises two pingpong shift register buffers 84 and 86 feeding a single output register 88. The 60 bit data word from levels 7 and 8 is fed to both buffers 84 and 86. During a scanline decode period, one of the buffers 84 and 86 is loaded while the other is unloaded through the output register 88, the roles of the two buffers alternating every scan period. The buffer being loaded is clocked (shifted) at the end of the 134 nanosecond period when BWCE = 1. The buffer being unloaded is under the control of the segment-to-video decoder 32. However, m to the buffer being unloaded is held to 0 to guarantee that no false "first" segments enter.

At the beginning of a load cycle, which begins 6 clock periods (6 . 134 nanoseconds) after a scanline decode period because of the delay for a new set of segments to pass down the pipe, the counter 82 in level 6 has already been reset (by CRS) and a buffer load/unload select flipflop (BA) 90 has just been switched to the opposite state. Each word written into the segment buffer 30 causes the counter 82 in level 6 to be incremented (enabled by BWCE). After the last valid segment is dropped in level 7 or 8 (i.e., following V's = 0), BWCE remains high (so that the buffer 30 will be clocked steadily at peak rate, at 134 nanosecond intervals) until either it fills (the counter 82 has cycled) or it is time for a new load cycle. At the end of a load/unload cycle, the roles of the two buffers 84 and 86 are interchanged.

The segment-to-video decoder 32, via BCRE, clocks the buffer 84 or 86 (whichever is to be unloaded) once and, if m = 0 in the buffer output register 88, continues to clock the buffer 30 until the marker bit, m, is detected as equal to 1, indicating that the data in the buffer 30 have been shifted forward and the first word now resides in the output register 88 of the buffer 30. During this "search" clocking, m entering the buffer 30 is forced to zero by gates 92 (FIG. 15). If, as a worst case, two valid segments were loaded into each word for a total of 128 words of the buffer 30, and if there was not sufficient time for clocking the buffer 30 another 128 times before a new load cycle, the segment data in the buffer 84 or 86 to be unloaded will not have been shifted forward to the output register 88. Thus, this partially loaded buffer 84 or 86 must be clocked by the video decoder 32 until the first valid segment has been loaded into the output register 88. The video decoder 32 takes over the control of a buffer 84 or 86 at the beginning of a horizontal retrace period and thus has time to shift the data forward.
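
The alternating operation of the two buffers can be summarized by the hedged Python sketch below; the class interface is invented for illustration and does not model the shift-register timing or the m = 1 search clocking.

    # Ping-pong segment buffer: one half is written by the edge-to-segment
    # decoder while the other is drained by the segment-to-video decoder; the
    # roles swap at the end of every load/unload cycle.

    class PingPongBuffer:
        def __init__(self, depth=256):
            self.halves = [[], []]
            self.loading = 0                     # index of the half being loaded
            self.depth = depth

        def write(self, word):
            if len(self.halves[self.loading]) < self.depth:
                self.halves[self.loading].append(word)

        def swap(self):
            drained = self.halves[self.loading]  # freshly loaded half feeds the video decoder
            self.loading ^= 1                    # the other half now receives new words
            self.halves[self.loading] = []
            return drained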

The memory 24 requires 450 clock cycles (for N = 4, at intervals of 134 nanoseconds), so the segment buffer 30 will require 450 clock cycles to elapse during a load cycle before it can begin to shift forward its contents. An additional 40 clock cycles of a scanline decode period guarantee that at least 128 + 40, or 168, words have entered the buffer 30 by the time the unload cycle ends. Thus, during retrace time, the buffer 30 will have to be shifted by the video decoder 32 at most 256 - 168, or 88, times, plus two more in order to get data into the buffer output register 88. At a 134 nanosecond clock rate, 90 .times. 134 nanoseconds = 12.1 microseconds are thus required. Because the visible trace time is 400 clock periods and the scan period has to be a multiple of 5 (for a 5:1 interlace), a retrace period of 90 is used to bring the scan period to 490, i.e., 98 .times. 5. Thus the retrace time is 12.1 microseconds (90 .times. 134 nanoseconds). The visible scan period is that required to output 1600 points, or 400 times 134 nanoseconds, which comes to 53.6 microseconds, for a total scanline period of 53.6 + 12.1 or 65.7 microseconds, i.e., 400 + 90 = 490 clock cycles, all of which are needed to decode the data.
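
The budget can be verified by direct arithmetic using only the figures quoted above:

    # Arithmetic check of the scanline timing budget.
    clock = 134e-9                      # seconds per memory/pipe clock period
    retrace_clocks = 90                 # 88 catch-up shifts plus 2 to reach register 88
    visible_clocks = 400                # 1600 points at 4 points per 134 ns clock
    retrace = retrace_clocks * clock    # about 12.1 microseconds
    visible = visible_clocks * clock    # about 53.6 microseconds
    period = retrace + visible          # about 65.7 microseconds; 490 clocks = 98 * 5,
                                        # a multiple of 5 as the 5:1 interlace requires
    print(round(retrace * 1e6, 1), round(visible * 1e6, 1), round(period * 1e6, 1))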

A marker segment (generated out of the marker edge) should be the first segment to enter the segment buffer 30. Its .DELTA.b assumes B = 0 on its left; thus .DELTA.b will be equal to the B of the next edge down the pipe.

A scanline period of 65.7 microseconds is for all practical purposes the same as that for commercial TV which is around 64 microseconds. Thus, the display monitor 34 is compatible with commercial TV input.

Segment-to-Video Decoder 32 (FIG. 17)

The segment-to-video decoder 32 receives the segment definition words from the register 88 of the segment buffer 30 (FIG. 16) and generates a video raster (in color) for driving the display monitor 34.

Referring to FIG. 17, there are 4 major registers in the video decoder 32. They are the color output register (CR) 94, the brightness register (BR) 96, the brightness increment register (BIR) 98, and a .DELTA.x counting register, DXR 100.

Most of the data for a segment reside in the BIR, CR and DXR registers 98, 94 and 100 for exactly [.DELTA.x/2] clock periods, all of which are 67 nanoseconds long except the last period, which must be 100.5 nanoseconds long if .DELTA.x is odd. The extended clock period occurs in the last period during which a segment's data are in the registers of the DA converters 102, discussed below. The data defining m', F', e, .DELTA.x.sub.0 and .DELTA.x.sub.0 ' are left in the segment buffer output register 88. The BIR register 98 is loaded either (1) with g' appropriately shifted by N' and the high order bits of .DELTA.x (N' is held to 3 bits, with the high bits of .DELTA.x' used to derive the 4th bit later), or (2) with .DELTA.b . 2.sup.2 of a left (narrow) segment. The contents of the BIR register 98 are used to increment the brightness register BR 96, whose upper 7 bits define the brightness of the video to be sent to the monitor 34. Both the brightness and brightness increment registers 96 and 98 are 18 bits wide.

For right segments, if F is 0, the contents of the BIR register 98 are added to that of the BR register 96 at a sum unit 102, the updated contents of the BR register 96 being clocked back into the BR register 96 through gates 104 at the end of each of the [.DELTA.x'/2] clock periods.

If F is 1 for the right segment, the contents of the BIR register 98 (where g' = 2.sup.2. .DELTA.b') are added to the contents of the BR register 96, the updated BR contents being clocked back into the BR register 96 only at the end of the last clock period of the segment's decoding.

For left segments the contents of the BIR register 98 (containing .DELTA.b) are added to those of the BR register 96, the updated BR contents being clocked back into the BR register 96 at the end of the one and only clock period over which that segment is valid (remember, only segments with .DELTA.x = 2 or 3 will be left segments, where [.DELTA.x/2] = 1; see, in FIG. 18, the interval between X.sub.2 and X.sub.3). When new segment data are loaded into the BIR, DXR and CR registers 98, 100 and 94, the BR register 96 is loaded with its input truncated to an 8 bit number, the lowest of the 8 bits being set to 1 and the remaining 10 bits being set to zero. This prevents the accumulation of round-off errors stemming from the generation and truncation of g to a 10 bit number, and effectively rounds off the 7 bit output B to the nearest integer.
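
The brightness update can be modelled in fixed point as in the hedged Python sketch below; the 18 bit width and 11 fractional bits follow the text, while the function interface and the per-step increment computed directly from .DELTA.b are illustrative assumptions (the hardware derives the increment from g and the shift N instead).

    # Fixed-point model of the BR/BIR update: BR has 11 fractional bits; at
    # segment load BR is truncated to B + 1/2.  For a smooth (F = 0) segment
    # the increment is added once per [dx/2] clock periods and sums to db; for
    # F = 1 and for narrow left segments the whole db jump is applied at the end.

    FRAC = 11                                        # fractional bits of the 18 bit BR

    def decode_segment_brightness(b_start, db, dx, F):
        half = dx >> 1                               # [dx/2] clock periods
        br = (b_start << FRAC) | (1 << (FRAC - 1))   # truncate to B + 1/2 at load
        if F == 0 and half > 1:
            bir = round((db << FRAC) / half)         # per-step increment summing to db
            levels = []
            for _ in range(half):
                br += bir
                levels.append(br >> FRAC)            # upper 7 bits drive the monitor
            return levels
        return [b_start] * max(half - 1, 0) + [b_start + db]

    # Example: a smooth segment 16 units wide rising by 5 brightness steps.
    print(decode_segment_brightness(b_start=10, db=5, dx=16, F=0))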

The DXR register 100 is loaded with [.DELTA.x'/2] data only from the right segment, the only segment which can have a .DELTA.x>3. After having been loaded from the right segment, this register 100 works as a counter counting down and, as it approaches zero, enables various clocks. For example, the segment buffer memory (registers 84, 86 and 88) is read (clocked) always at the beginning of the last 67 or 100.5 nanosecond period over which the right segment is valid.

For an odd .DELTA.x, clock intervals of 67 nanoseconds are extended to 100.5 nanoseconds by a video decoder controller 106 during the last clock period where a segment's data resided in the internal registers of the DA converter unit 102.

The controller 106 of the video decoder 32 contains a register (not shown) used for load selecting and clock enabling of the other registers.

The outputs of the BR and CR registers 96 and 94 pass through the unit 102, which comprises 4 D-A converters with hold registers, and the analog outputs of the unit 102 are added together by a simple resistor network 108. The 3 color video signals (red, blue and green) are fed to the color monitor 34 through suitable coax drivers 110. Sync is obtained directly from the communication controller 12, which supplies the basic timing and control signals for the run and load modes.

Brightness Decode Error Analysis (FIGS. 17 and 18)

Because segment data use a gradient approach which could accumulate truncation errors if insufficient bits are employed, an error analysis of the brightness decoding in the segment-to-video decoder 32 is carried out here.

All data discussed so far has been treated as integers. Thus, various powers of 2 have been injected into various equations discussed above. This was done to simplify this error analysis.

Substituting Equation 9 in Equation 11, it follows that ##EQU4## But the BIR register 98 stores g2.sup.N.

Thus, when the contents of the BR register 96 are truncated to B.sub.l + 1/2 (assuming the radix point to lie to the right of the 7 most significant bits in the BR register 96), the following change in brightness, db, results after [.DELTA.x/2] brightness updates: ##EQU5## where ##EQU6## Since the quantity so defined is a 10 bit integer and .DELTA.b < 2.sup.7 - 1 (a 7 bit integer), it follows that

The 18 bit BR register 96 can resolve to 2.sup.-11. Thus the BR register 96 just before truncation contains

It follows that the maximum possible contents at truncation will be B.sub.l + .DELTA.b + (1 - 1/2.sup.9), which truncates to B.sub.l + .DELTA.b, and the minimum possible contents will be B.sub.l + .DELTA.b + 1/2.sup.9, which also truncates to B.sub.l + .DELTA.b. Thus, via the truncation technique employed in accordance with the invention, each edge's brightness intercept is restored at the beginning of the associated segment's arrival in the BR register 96.

Color Monitor 34 (FIGS. 18 and 19)

The color monitor 34 comprises a conventional color TV receiver modified as described below to accommodate the output of the segment-to-video decoder 32. Specifically, the modifications comprise disabling the internal sync of a conventional receiver and applying an external sync for both horizontal and vertical, the external sync being supplied from the time generator 16 to cause a 5-to-1 interlace, and passing each of the red, blue and green outputs of the segment-to-video decoder 32 through an antilog amplifier before applying them to the corresponding video amplifier sections of the conventional receiver. Needless to say, the conventional signal inputs to the video amplifiers are disabled. A simple way to modify the internal circuitry of a conventional TV receiver is to reduce the value of its video output resistors by 5-to-1, to lower the supply voltage by a factor of 2, and/or to use higher power transistors.

Referring to FIG. 18, the red, blue and green signals from the coax interface 110 of FIG. 17 are applied to a bank 116 of antilog amplifiers, and the output of each of the three antilog amplifiers is applied to the corresponding video amplifier of a conventional color TV receiver 118. The timing generator 16 in the communication controller 12 supplies the horizontal and vertical sync signals necessary for the scanline resolution utilized in this invention, and these are used by the conventional TV receiver instead of its internal sync.

Each of the antilog amplifiers of the bank 116 should have a response time of about 60 nanoseconds, and this can be achieved by using the log characteristics of a transistor. However, a biasing feedback with a somewhat slower response is desirable to stabilize the antilog amplifier over the temperature variations resulting from signal level variations of the video signal. An example of such a feedback network is shown in FIG. 19, where the feedback network is labelled 120; it is the only modification of an otherwise conventional antilog amplifier. Any other sufficiently fast antilog amplifiers may be used in the bank 116.

Overview of Timing (FIGS. 20 and 21)

Referring to FIG. 20, the edge decoding for a scanline is distributed over portions of two successive scanline intervals of the monitor 34. As the memory 24 is cycled, the first valid edge of a block of edges is placed on the memory output bus whenever it occurs during a scanline interval, for example during the clock period labelled 130 in FIG. 20. This first valid edge goes from the memory output bus 42 into the pipeline of the edge-to-segment decoder 28, and the same is repeated for any subsequent valid edge. This may continue into the next scanline interval, and in the worst case the last valid edge of a block will be placed on the memory output bus at about the 353rd clock period of the following scanline interval, at the point labelled 132 in FIG. 20. As valid edges are delivered to the edge-to-segment decoder 28 over the memory output bus 42, they proceed down the levels of the decoder 28 with each clock and are placed in the segment buffer 30 as clocked by the same 134 nanosecond clock that cycles the memory 24 and the decoder 28. The segment buffer 30 is clocked by the same 134 nanosecond clock into the segment-to-video decoder 32, for conversion to a video raster as described in connection with FIG. 17.

Referring to FIG. 21, where the vertical coordinate is brightness and the horizontal coordinate is time, the registers 96 and 94 in FIG. 17 are loaded at each of the times indicated by the reference numerals 134, 136, 138 and 140 for different edge line intersections of the scanline which is being decoded. The input to the video monitor 34, in terms of brightness, is indicated by the staircase in FIG. 21, where the broken line indicates the 18 bit output of the register 96 of FIG. 17 and the solid line indicates the truncation of that output to 7 bits.

* * * * *

