GB2051525A - C.G.I.-Surface textures - Google Patents


Info

Publication number
GB2051525A
Authority
GB (United Kingdom)
Prior art keywords
display
image
plane
raster
perspective
Legal status
Withdrawn
Application number
GB7920882A
Current Assignee
Thales Training and Simulation Ltd
Original Assignee
Thales Training and Simulation Ltd
Application filed by Thales Training and Simulation Ltd filed Critical Thales Training and Simulation Ltd
Priority to GB7920882A
Priority to GB8018838A (GB2061074B)
Priority to FR8013166A
Priority to US04/159,442 (US4343037A)
Priority to CA000353946A
Priority to DE19803022454
Publication of GB2051525A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 - 3D [Three Dimensional] image rendering
    • G06T 15/50 - Lighting effects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 - 3D [Three Dimensional] image rendering
    • G06T 15/10 - Geometric effects
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B - EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 9/00 - Simulators for teaching or training purposes
    • G09B 9/02 - Simulators for teaching or training purposes for teaching control of vehicles or other craft
    • G09B 9/08 - Simulators for teaching or training purposes for teaching control of vehicles or other craft for teaching control of aircraft, e.g. Link trainer
    • G09B 9/30 - Simulation of view from aircraft
    • G09B 9/301 - Simulation of view from aircraft by computer-processed or -generated image
    • G09B 9/302 - Simulation of view from aircraft by computer-processed or -generated image the image being transformed by computer processing, e.g. updating the image to correspond to the changing point of view


Abstract

A visual display system of the computer generated image type, for a ground based flight-simulator having a rectangular raster-scanned display 101 and having a surface detail information store for providing a visual display of textured surfaces in perspective during simulated flight. The system includes a surface detail generator comprising a perspective transformation computer 108 and a surface detail store 111. The perspective transformation computer is organised as a "pipeline" to compute in real time the perspective transformation from the textured surface (ground) plane to the display plane. Scanning of the surface detail store is controlled in a manner comparable with a trapezium scan of a photographic image, for the purpose of the perspective transformation, i.e. surface sampling points (xp, yp) are calculated using equations which take into account aircraft and observer displacement and attitude, relative to a surface origin (X0, Y0, H), and the position in the raster of the display scanning spot. The surface-detail memory consists of a hierarchy of texture stores.

Description

SPECIFICATION
C.G.I.-Surface textures

Introduction to the Description

This invention relates to real-time computer-generated image displays of three-dimensional scenes for ground-based flight simulation and is particularly concerned with the provision of textured surfaces.
Ground-based flight simulators are increasingly used for flight training. The simulator provides a dummy cockpit with dummy controls and a flight computer which computes in real time the characteristics of real flight during an exercise. Forward of the dummy cockpit there is frequently provided a visual display of terrain overflown during the exercise. The present invention relates to such visual displays.
Electronically produced visual displays conventionally employ one or more cathode ray tubes or a television type projector which projects the display upon a forward projection screen which is viewed by the trainee pilot through the cockpit forward windows.
The method of image production may then be either calligraphic or raster scan.
Although an earlier method of image generation used a closed-circuit television camera moving over a scale model of terrain, the majority of ground-based flight simulators now manufactured use digital computer generated images (C.G.I.).
As stated, the display method may be either calligraphic or raster-scan. The calligraphic method lends itself particularly to the display of night scenes, which consist almost solely of a display of light points on the simulated ground plane.
A typical night-only C.G.I. generator comprises a database holding digital data models of selected airports, stored on one or more floppy discs for example, a minicomputer for preliminary model processing and in the memory of which the currently-used airport model is held, a special-purpose transformation processor and a beam-penetration type display tube.
Such a typical system is able to display a ground scene containing 6,000 light points and, in addition, construct surfaces involving up to 64 edges. Such a scene is, of course, computed and displayed in true perspective in real time during a flight exercise.
A limitation of calligraphic image display is that the time required to paint one frame of the image is a function of the complexity of the scene. Human sensitivity to image flicker demands a frame repetition rate of about 30 Hz. and this sets a practical limitation upon the scene complexity at present possible for a real-time display.
A high-complexity daylight scene, involving for example the display not only of runway features, but also solid surfaces of surrounding terrain features and airport and neighbouring city buildings, demands a raster scan type display.
The first-known real-time C.G.I. image system depicting solid surfaces was able to provide a scene based on up to 240 edges. An edge, that is a line dividing two distinguishable surfaces, defined the visual environment for such a display. The edges were transformed into the display plane and incrementally generated by hardware edge generators. In the presentation of a three-dimensional scene in true perspective, hidden surfaces were eliminated by programming priorities among the edge generators.
Subsequent C.G.I. systems provided higher complexity scenes, including the display of curved surfaces. These systems use the polygon to define the visual environment and can generate scenes involving a few hundred polygons. Such systems at present in development are likely to provide scenes of one order greater complexity, that is involving some few thousand polygons.
However, scenes defined solely by edges or polygons may never be adequately realistic for flight training purposes. Although the scenes provided by available C.G.I. systems have proved valuable for training airline pilots, particularly in take-off or landing manoeuvres, they are inadequate for many military operations.
The known techniques are unable to provide, economically and in real time, realistically-textured surfaces in correct and changing perspective.
The object of the present invention is to provide a C.G.I. system capable of displaying textured plane surfaces.
Accordingly, the invention provides a visual display system of the computer generated image type, for a ground-based flight simulator, and for providing a raster scanned, perspective visual display of a simulated texture surface, including a surface detail generator comprising a perspective transformation computer and a surface detail store, the perspective transformation computer being a pipeline calculator for computing in real time the perspective transformation from the simulated ground plane to the display system display plane during simulated flight, thereby determining the required scanning of the surface detail store corresponding to a rectangular raster scanned display.
Short Description of the Drawings

In order that the invention may readily be carried into practice, relevant prior art will be referred to and one embodiment of the invention will be described in detail by way of example, both with reference to the accompanying drawings, of which Figs. 1-8 relate to prior art and Figs. 9-41 relate to the present invention, and in which:

Figure 1 is a diagram showing the effect of differently shaped camera and display rasters;
Figure 2 is a diagram illustrating the mapping of a display raster onto a different plane;
Figure 3 is a perspective and block schematic diagram showing apparatus for raster shaping with a photographic image;
Figure 4 and Figure 5 are diagrams illustrating undistorted scanning of a photographic image;
Figure 6 is a diagrammatic display showing defects in a flying-spot scanned image;
Figure 7 is a block schematic diagram referred to in a description of raster shaping with an electronically-stored image;
Figure 8 is a diagram defining levels of surface detail;
Figure 9 is a block schematic diagram of the visual display system of the present invention;
Figure 10 is a perspective diagram illustrating the principle of linear perspective;
Figure 11 is an isometric diagram showing the geometry of perspective transformation;
Figure 12 is a diagram defining the three sets of three-dimensional co-ordinates used;
Figure 13 is a diagram defining the location of image points on the display plane raster;
Figure 14 is a diagram showing the orientation of a defined plane with respect to rectangular axes;
Figure 15 is a logic diagram showing the sequence of calculation steps required for a surface memory "pipeline";
Figure 16 is a diagram explaining the progress of sequential calculations through a pipeline;
Figure 17 is a logic diagram showing a parallel pipeline configuration;
Figure 18 is a logic diagram showing the direct method for solving Equations 3 and 4 herein;
Figure 19 and Figure 20 show alternative methods to that of Fig. 18 for solving the same equations;
Figure 21 is a block diagram of a two-dimensional linear function generator;
Figure 22 is a block diagram of a pipelined multiplier;
Figure 23 is a diagram illustrating the process of two's complement non-restoring division;
Figure 24 is a diagram showing one stage of a divider pipeline using the process of Fig. 23;
Figure 25 is a diagram explaining the arithmetic pipeline using the configuration of Fig. 17;
Figure 26 is a flowchart of the general purpose computer scanner control program;
Figure 27 is a perspective diagram illustrating the production of a television camera image of a plane surface;
Figure 28 is a diagram illustrating the corresponding process using an imaginary mapped aperture;
Figure 29 explains the filter effect of a mapped aperture in an accurate simulation;
Figure 30 explains the modified filter effect of an approximated aperture;
Figure 31 represents the sampling grid of a discretised display showing scene edges and quantised display;
Figure 32 illustrates the process of sampling with matching grids, showing the stored image with the display sampling grid superimposed;
Figure 33 corresponds to the diagram of Fig. 32 but shows a displaced sampling grid;
Figure 34 shows the use of a fine image memory grid;
Figure 35 is a diagram of surface resolution elements showing a sample spacing of five elements;
Figure 36 is the corresponding diagram wherein two samples define the entire pattern in one of its dimensions;
Figure 37 shows the addressing of level zero, which contains 4096 samples;
Figure 38 is a block diagram showing the complete memory system;
Figure 39 shows mapped raster lines for three angles of aircraft roll;
Figure 40 shows approximated level code mapping;
Figure 41 is a diagram showing the form of the mapped raster with constant sample spacing along raster lines.
Description of the Prior Art

For the correct representation of texture in a surface of a computed perspective image, that texture must be defined as a part of the geometrical database which defines the visual environment. It is then possible to subject the texture to the same transformations as are applied to the points, lines and polygons of the scene. This process ensures that textures are firmly attached to their respective surfaces and exhibit the same perspective variations, both static and dynamic.
In contrast, methods have been proposed for applying texture to the transformed image, that is display plane texturing.
Display plane texturing can be used for certain effects. Thus, scintillation of sunlight from water surfaces can be simulated simply by injecting random pulses in areas of the scene representing water. Another application is for the simulation of water droplets on sights or windshields fixed relatively to the observer, and therefore not subject to perspective change.
An approximation to correct texture perspective has been proposed, in which the proximity of texture elements is increased in the direction towards the displayed horizon. While some realism is added to the overall scene, this expedient is inadequate for manoeuvres where depth and slant perception are important.
An always-present limitation of display plane texturing is that the textures are not attached to their respective surfaces so that, with a changing scene, the effect is analogous to viewing the world through a patterned net curtain.
Texturing may alternatively be added to a C.G.I. display by defining the texture in terms of edges, similarly to other features of the scene. For a real-time system, however, this method is impracticable because of the large number of edges needed.
Display plane texturing is not effective or not economical except for limited effects and some alternative method must be used.
One such is the raster shaping principle of the present invention, as described herein.
This principle may be described as follows, in terms of a television system: If the camera and display tube scanning rasters differ, the displayed image is distorted.
If the camera raster is shaped and the display tube raster has the normal format, the displayed image undergoes a transformation which is the inverse of that applied to the camera raster.
Fig. 1 shows the effect of such raster shaping. The left diagram represents the viewed scene comprising a square object on a rectangular field. The centre diagram shows trapezium shaping of the camera raster and the right diagram shows the inversely transformed displayed object.
The effect extends further to two-dimensional distortion and even to curvi-linear distortion.
A particular case is inverse perspective distortion. If an inverse perspective transformation is applied to shape the camera raster, then the object is displayed in perspective.
Fig. 2 explains the principle of such inverse perspective distortion. An observer at O views the plane S. In front of the observer is erected a display plane D, defined by a scanning raster R. In a simulation, an image in the plane D is required to be identical to the observer's view of the real-world plane S.
Considering the projection of the display raster R upon the plane S, as shown at T, the raster T distorted by projection represents the required inverse perspective distortion. Surface details on the plane S, explored by the distorted raster T, would be displayed in the plane D in correct perspective.
This same principle is applied by the invention to synthetic image generation from stored image data representing the plane S, by using scanned sampling which is subject to inverse perspective distortion.
The implementation of this method involves the real-time calculation of the inverse perspective transformation continuously as the position and attitude of the observer and his display plane change with respect to the simulated plane viewed.
Fig. 3 is a diagram, part perspective and part block schematic, showing a known application of the raster shaping method in a closed circuit television system.
In Fig. 3, an object plane 1 carries a rectangle 2, which is scanned by a flying spot scanner 3 by way of an associated lens system 4. The video signal is provided by a phototube 5, amplified by a video amplifier 6 and fed to a cathode ray display tube 7 to provide an image 8. A single synchronising pulse generator 9 serves both the flying spot scanner sweep generator 10 and the CRT sweep generator 11. The raster scan of the CRT 7 is a normal rectilinear scan.
The raster scan of the flying spot scanner is a trapezium narrower at the top than at the bottom, as shown in Fig. 1. The shape of the raster scan of the plane 1 is determined by a transformation generator 12 under control from distortion controls 13.
Because the flying spot scan is a trapezium narrower at the top than at the bottom and because the raster scan of CR tube 7 is rectilinear, the rectangle 2 is transformed into a trapezium 8, which is the inverse of the flying spot scan and thus is wider at the top than at the bottom.
If the object plane 1 is a terrain view photograph or transparency, the CRT image 8 shows a perspective transformation which, if the distortion controls 13 are set as required, can effect the perspective transformation of Fig. 2, where the plane 1 corresponds to the plane S and the image 8 is provided in the plane D.
Photographic image scanning in the manner of Fig. 3 was subject to a serious problem, that of improper sampling.
Figs. 4 and 5 illustrate the requirements of correct sampling.
In Fig. 4, parts of successive scan lines 15 are shown on the plane 1. The line 16 is shown perpendicular to the direction of line scan in the plane 1.
Fig. 5 represents a section on the line 16 of Fig. 4 showing the flying spot profiles 15.1, 15.2 and 15.3 of three successive lines of the raster scan 15. The scanning spot configuration in plane 1, profiled in Fig. 5, is chosen so that the maximum information from the image in plane 1 is extracted by the scan 15. Too narrow a spot profile gives rise to aliasing in the frequency spectrum of the sampled image. Too wide a spot profile results in unnecessary video frequency reduction, that is image blurring.
Fig. 6 represents a display resulting from perspective transformation by a system similar to that of Fig. 3, where the display comprises the perspective transformed image of a runway 20 in the plane 1, extending from the observer towards an horizon 22 separating the plane 1 from a sky area 21.
To provide the image of Fig. 6, the photographic image scan raster is distorted to be wide at the horizon 22 and narrow in the foreground 24. In consequence, the reproduction in the displayed image is satisfactory only in the middle ground 23. The foreground 24 becomes blurred and in the background 25, the image breaks up.
Development of such a system with a variable image scanning spot size would present considerable electronic and optical problems. No such system has in fact been developed.
The principle of perspective transformation applies equally to an electronically-stored image as to a photographically-stored image.
Fig. 7 is a diagram showing the basic elements of a known wholly electronic system. In Fig. 7, a synchronising pulse generator 9 controls both a perspective transformation computer 26 and the display means 28. A surface detail store 27 is sampled under control of the transformation computer 26. Position and attitude of eyepoint data are supplied to the transformation computer at 30, so that the co-ordinates of sampling points are defined at 31.
The inverse perspective transformation is computed at 26 to provide an output of sampling point position defined either as two analogue voltages or as two digital numbers. The store 27 is addressed accordingly and a corresponding output signal defining the brightness and colour of the surface image at the defined sampling point is fed to modulate the display scan at 28.
The first such system known was the NASA surface generator developed in 1964 for the space program by the General Electric Company. The output of the transformation computer 26 was two digital numbers representing the sample point computed at a rate of 5 MHz.
In this system, the surface detail store or "map table" was implemented as a purely synthetic logically-defined pattern. By using only the lower order bits of the sampling vectors, it was possible to create a repetitive pattern covering an entire surface, the so-called "contact analog".
The map table was structured as a four-level hierarchy, with each level defined as an 8 X 8 matrix providing a one-bit output designating one of two possible colours.
Complex networks of patterns were obtained by logically combining the outputs of the map levels, each of which contributed to the textural design over repetitive regions of defined size.
These regions are structured so that patterns corresponding to the several levels are nested within one another.
Fig. 8 shows the principle applied. In Fig. 8, the axes 32, 33 define the X axis and Y axis of the map plane 34. Areas 35, 36, 37 and 38 respectively show map areas corresponding to level 1, level 2, level 3 and level 4, each level defining an area one-quarter the area of the previous level.
Such a hierarchy of pattern levels made possible transitions from one level of detail to another by deleting the contribution from the map whose cells approached the raster cell structure. The display results of this system, while showing numerous sampling defects, were nevertheless a very satisfactory state of the art at the time.
More recently, in patent application No. 7913058 there is described a system for providing simple surface detail for cloud systems overflown. That specification describes visual display apparatus for a ground-based craft flight simulator including a flight computer, comprising: raster scan type display means for viewing by a trainee pilot observer; synthetic image generating means for supplying to the display means a signal representing an image of sky, horizon and a simulated patterned surface extending to the horizon, said patterned surface being displayed in true perspective in accordance with the simulated altitude and position in space of the craft simulated; and a general purpose programmable computer connected to interface the said flight computer and the said synthetic image generating means; said synthetic image generating means comprising a digital store for holding a single pattern cycle of a repetitive pattern for patterning the said patterned surface in one dimension thereof, a perspective computer for computing the track of a ray from the observer's eye, through the scanning spot of the display means in its instantaneous position, to a point of intersection on said simulated patterned surface, a computing element for providing a signal output for the display means representative of a variable brightness portion of sky, and switch means for selectively supplying to the display means either the signal representative of the patterned surface or the signal representative of the variable brightness portion of sky, continuously during the raster scan of the display means.
No proposals have been published for the application of the shaped image-scanning raster technique to digitally-stored half-tone images, such as would be required for realistically textured terrain or target displays.
Description of the Example A specific embodiment of the invention will now be described firstly with reference to Fig. 9, which is a block schematic diagram showing a complete surface texture generator for ground plane display. The block schematic elements of Fig. 9 are then described and explained in greater detail with reference to Figs. 10-41.
In Fig. 9, a trainee pilot 100, seated in the dummy cockpit of a ground-based flight simulator, has a visual display provided before him on a back-projection screen 101 and has dummy flight controls represented by a control column 102. Control setting data is fed by line 103 to a flight simulation computer 104 in the usual manner. Aircraft position and attitude data is supplied twenty times per second from the flight simulation computer 104, by way of line 105 to a general purpose computer 106.
The visual display image on the screen 101 is produced by a television type projector 119 fed with a video signal on line 118. The raster scan of the projected image is controlled by a synchronising pulse generator 120 which provides line and frame synchronising pulses to the projector 119 on line 122.
Set-up data is sent from the general purpose computer 106 to the surface scanner 108 by way of line 107 once for each display field.
The surface scanner 108 is a pipelined scanner more particularly described with reference to Fig. 19 and Fig. 20.
The values of xp and yp, defined according to equations (1) and (2) later herein are supplied for each display field from the surface scanner 108 to the texture memory 111 by way of lines 109 and 110, respectively, and synchronising pulses are supplied to the surface scanner 108 from the pulse generator 120 by way of line 121.
The texture memory 111 is arranged in the manner described with reference to Fig. 38, later herein, and the texture detail accessed by the input co-ordinates is supplied as a digital number by way of line 112 to a switch 113.
It will be appreciated that the ground plane surface texture information, with which the present invention is particularly concerned, relates to that part of the displayed image which lies below the simulated horizon. In the display exemplified upon the screen 101 in Fig. 9, there is shown a runway on the ground plane and sky above the horizon. For completeness of the visual display system of Fig. 9, there is included a sky generator 114 which provides an alternative digital input to switch 113 by way of line 115. The switch 113 selects either ground plane or sky information, from line 112 or line 115 respectively, during each line scan at the ground/sky transition defined by the horizon. This transition is effected in known manner as is described in Patent Application No. 7913058, referred to earlier.
The selected output from switch 113 is supplied by line 116 to a digital-analogue converter 117 and the analogue video signal is supplied by line 118 to the projector 119, as already stated.
In order to provide a surface detail generator according to Fig. 9 in hardware, the two main sub-systems need to be designed. The first is the perspective transformation computer, or surface detail scanner, 108 and the second is the surface detail store, or texture memory, 111.
An explanation and description of the surface scanner 108 will be given first.
Before doing so, however, it is necessary to define the objectives of perspective transformation and the co-ordinates and the mathematical terms used in the description.
Fig. 10 is a perspective diagram showing a solid real-world object 40 viewed directly by an observer O. An image 41 is displayed in the intermediate display plane 28 which is the visual equivalent of the object 40, that is the eye of the observer O cannot in any way distinguish the image 41 from the object 40. As shown, the image 41 is subject to a perspective transformation with respect to the object 40.
Fig. 11 is a three-dimensional diagram showing the geometry of such perspective transformation from an object plane 42 onto the display plane 28.
Fig. 12 is a three-dimensional diagram showing the relationship of three sets of three-dimensional co-ordinates relating respectively to a fixed real-world origin, the simulated aircraft origin and the observer's eye origin. A point P on a display plane is defined in position relatively to the three sets of co-ordinates.
Fig. 13 defines the location of that point P on the raster in the display plane 28.
The function of the surface scanner is to provide continuous real-time solutions to the two equations:

xp = X0 - H·Ax(tanλ, tanγ)/B(tanλ, tanγ)    (1)

yp = Y0 - H·Ay(tanλ, tanγ)/B(tanλ, tanγ)    (2)

in which Ax, Ay and B are linear functions of tanλ and tanγ whose coefficients are formed from direction cosines of the attitude angles. These equations relate the position of the surface sampling point (xp, yp) to the viewer's attitude, defined by ψ, θ and φ, his displacement from the surface origin (X0, Y0, H) and the position of the display scanning spot, defined by tanλ and tanγ. As can be seen from Fig. 12, tanλ and tanγ are the actual rectangular co-ordinates of the scanning spot on the display plane 28, and thus vary linearly when a rectangular scan is used. The scanner must compute a value for xp and yp for every value of tanλ and tanγ within the limits of the display in the display plane 28, in synchronism with the scan of the display device.
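By way of illustration only, the following sketch evaluates a sampling point in the form of equations (1) and (2); the direction-cosine matrix, axis conventions and field of view used here are assumptions for the sketch, not the patent's stated derivation.

```python
import math

def attitude_matrix(psi, theta, phi):
    """Ground-from-eye rotation built from heading (psi), pitch (theta)
    and roll (phi).  Axis order and signs are illustrative assumptions."""
    cp, sp = math.cos(psi), math.sin(psi)
    ct, st = math.cos(theta), math.sin(theta)
    cr, sr = math.cos(phi), math.sin(phi)
    # Rz(psi) * Ry(theta) * Rx(phi); x forward, y right, z down.
    return [[cp*ct, cp*st*sr - sp*cr, cp*st*cr + sp*sr],
            [sp*ct, sp*st*sr + cp*cr, sp*st*cr - cp*sr],
            [-st,   ct*sr,            ct*cr]]

def sample_point(M, X0, Y0, H, tan_lam, tan_gam):
    """Ground-plane sampling point for one display spot (tan_lam, tan_gam).
    Ax, Ay and B are the three linear functions of the two tangents."""
    Ax = M[0][0] + M[0][1] * tan_lam + M[0][2] * tan_gam
    Ay = M[1][0] + M[1][1] * tan_lam + M[1][2] * tan_gam
    B  = M[2][0] + M[2][1] * tan_lam + M[2][2] * tan_gam
    if B <= 0:                       # spot at or above the horizon (B = 0)
        return None
    return (X0 - H * Ax / B, Y0 - H * Ay / B)   # form of Eqns. (1) and (2)

M = attitude_matrix(0.0, math.radians(-5), math.radians(10))
print(sample_point(M, 0.0, 0.0, 300.0, 0.1, 0.2))
```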
The structure and properties of the perspective transformation itself, as represented by Equations (1) and (2), will be examined.
The two transformation equations (1) and (2) must be computed for every value of tanλ and tanγ, that is for every picture element of every display frame.
For a 625 line 50 Hz system with square picture elements, tanλ changes at a rate of 15 MHz and tanγ at a rate of 15.625 kHz.
The television type display standard selected determines the required perspective transformation computation rate. The form of the computation required is given by the two equations following:

xp = X0 - H·(ax + Axλ·tanλ)/(c + Bλ·tanλ)    (3)

yp = Y0 - H·(ay + Ayλ·tanλ)/(c + Bλ·tanλ)    (4)

Considering first those parts which change at picture element rate, that is those parts which are functions of tanλ, it is seen that tanλ is proportional to the horizontal distance of an element from the centre of a raster line. It is in fact equal to this distance on the unit display, as shown in Fig. 12. Thus, both the numerators and denominators of equations (3) and (4) are linear functions of this distance, which itself is known at equal steps along a line. The values of ax, ay and c are likewise linear functions of tanγ, which is proportional to the vertical distance of a line from the screen centre. A two-dimensional representation of one of these functions, F, is given in Fig. 13, in which the slope of the plane is given in the tanλ direction by Fλ and in the tanγ direction by Fγ. This is consistent with the geometrical interpretation of Fig. 11, where the three linear functions of equations (3) and (4) represent the distance of the scanning spot from the eye point in the three ground axes X, Y and Z. Referring to Fig. 13, the (tanλ, tanγ) plane can be identified with the display plane 28 and the values of F with one of the three distances.
In general, the values of F0, Fλ and Fγ change for each display frame, thus defining a new two-dimensional linear function. These functions may be computed in an incremental manner, thus:

Ax(m + 1, n) = Ax(m, n) + Axλ·Δtanλ    (5)

Ay(m + 1, n) = Ay(m, n) + Ayλ·Δtanλ    (6)

B(m + 1, n) = B(m, n) + Bλ·Δtanλ    (7)

where:

Ax = numerator of Eqn. (3) (or Eqn. (1))
Ay = numerator of Eqn. (4) (or Eqn. (2))
B = denominator of these equations
m = picture element number (m = 1, 2, 3, ...)
n = line number (n = 1, 2, 3, ...)
Δtanλ = elemental increment of tanλ
Δtanγ = line increment of tanγ
Axλ, Ayλ, Bλ = slopes in the λ direction
Axγ, Ayγ, Bγ = slopes in the γ direction

In the most direct implementation the desired results xp and yp may be computed by performing the divisions Ax/B and Ay/B, the multiplications by H and the subtractions from X0 and Y0 all at picture element rate.
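A software model of this incremental scheme, using the names of the legend above, is sketched below; the set-up dictionary stands in for the hypothetical per-field data loaded from the general purpose computer.

```python
def incremental_field(setup, elements, lines, X0, Y0, H):
    """Forward-difference evaluation of Ax, Ay and B per Eqns. (5)-(7):
    one addition per function per picture element, with the divisions,
    multiplications by H and subtractions done at element rate."""
    Ax0, Ay0, B0 = setup["start"]            # values at the first element
    Axl, Ayl, Bl = setup["lambda_slopes"]    # element (tan lambda) increments
    Axg, Ayg, Bg = setup["gamma_slopes"]     # line (tan gamma) increments
    for n in range(lines):
        Ax = Ax0 + n * Axg                   # start-of-line values
        Ay = Ay0 + n * Ayg
        B = B0 + n * Bg
        for m in range(elements):
            if B > 0:                        # below the displayed horizon
                yield m, n, X0 - H * Ax / B, Y0 - H * Ay / B
            Ax += Axl; Ay += Ayl; B += Bl    # element-rate increments
```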
In order to achieve the required computation rate, a form of parallel computing is used. In an efficient parallel processor, that is, one in which the arithmetic elements are never idle, the time needed to compute Eqn. (3) or Eqn. (4) is easily found. The input parameters of this calculation are the performance characteristics of the circuit technology used and of the arithmetic algorithms chosen.
The types of arithmetic algorithms which may be considered are limited by the form of system architecture, the pipeline, which is chosen.
The pipeline form of parallel processing has been chosen mainly for its ease of implementation and flexibility. No complex control logic is necessary and the structure of the computation is "built in". The surface memory sub-system is also conveniently organised as a pipeline, so that a completely homogeneous system has been constructed.
Pipelining is effective in a system where the minimum system computation rate is a constraint, as it is in the present surface scanner. A calculation is pipelined by splitting it into steps, each of which is performed by a logic network in one time period (66 2/3 ns for a 15 MHz clock frequency). At the end of each calculation step, the result is resynchronised with the clock in a synchronisation register. Fig. 14 shows the form of a pipelined system, while Fig. 15 illustrates the progress of an n-step calculation through such a pipeline. It is to be noted that the time for one complete calculation is n clock cycles, and that new results emerge every cycle. It is of interest that pipelines may be operated in parallel and may be split and merged, as shown in Fig. 16, to fit the particular calculation.
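The timing behaviour described, n cycles of latency with one new result per clock, can be imitated with a toy model; the Python callables below stand in for what are, in the hardware, combinational logic networks separated by synchronisation registers.

```python
class Pipeline:
    """Toy model of an n-step pipeline: each clock, every stage processes
    the value latched by the preceding synchronisation register, so one
    result emerges per clock after an n-cycle latency."""
    def __init__(self, stages):
        self.stages = stages
        self.regs = [None] * len(stages)

    def clock(self, new_input):
        out = self.regs[-1]                  # result completed last clock
        for i in reversed(range(1, len(self.stages))):
            prev = self.regs[i - 1]
            self.regs[i] = self.stages[i](prev) if prev is not None else None
        self.regs[0] = self.stages[0](new_input)
        return out

# A three-step calculation: ((x + 1) * 2) - 3.
p = Pipeline([lambda x: x + 1, lambda x: x * 2, lambda x: x - 3])
print([p.clock(x) for x in range(6)])   # [None, None, None, -1, 1, 3]
```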
The solution of Equations (3) and (4) can be organised in different ways. Fig. 17 shows the most obvious way, while Figs. 18 and 19 show more economical methods. The choice between the method of Fig. 18 or Fig. 19 depends on the relative costs of the pipelined multipliers and dividers, which again depends on the number of bits required. Scaling and accuracy considerations define this. The function generator resolutions can be determined by examining their effect on the displayed horizon, whose location is given as the locus of points where B = 0. The smallest resolvable roll angle is approximately tan⁻¹(1/800), given by the size of the picture element grid. With pitch set to zero:

B(λ, γ) = cosφ·tanγ - sinφ·tanλ    (8)

For the incremental generation of this function (according to Equation (7)) the magnitudes of Bλ and Bγ are approximately tan⁻¹(1/800)·Δtanλ (or tan⁻¹(1/800)·Δtanγ, since Δtanλ = Δtanγ), which is between 2⁻¹⁹ and 2⁻²⁰. As the maximum absolute value of B is 1.16, 22 bits are required for the B function generation. The same type of argument is applied for the pitch and heading angles by use of the B and the Ax and Ay functions respectively. In practice, 22 bits are found to be sufficient in all cases. However, 24 bits are used for the function generators, since arithmetic circuit blocks generally come in four-bit units. With the 20 bits used for the representation of xp and yp, all arithmetic units were designed for 24 bit wide inputs, four bits being unused at the outputs to allow for rounding errors.
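The resolution argument can be checked numerically; the 800-element line and the 25 degree display half-angle assumed below are illustrative choices, the patent stating only the resulting bounds.

```python
import math

# Element increment of tan(lambda) on the unit display (assumed geometry).
delta_tan = 2 * math.tan(math.radians(25)) / 800
# Smallest line-to-line change in B for the smallest resolvable roll angle.
b_inc = math.atan(1 / 800) * delta_tan
assert 2**-20 < b_inc < 2**-19          # the bound stated in the text

frac_bits = math.ceil(-math.log2(b_inc))   # 20 fractional bits needed
int_bits = math.ceil(math.log2(1.16))      # 1 integer bit for max |B| = 1.16
total = 1 + int_bits + frac_bits           # plus a sign bit
assert total == 22                         # the 22 bits quoted in the text
```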
The choice between the arrangements of Figs. 18 and 19 can now be made. The arrangement of Fig. 18 has been chosen, as two fewer circuit cards are needed.
The arithmetic units include: (a) a 24 bit two-dimensional linear function generator, (b) a 24 bit pipelined multiplier, and (c) a 24 bit pipelined divider.
The purpose of a function generator is to produce values of the function F according to the formula:

F(tanλ, tanγ) = F0 + Fλ·tanλ + Fγ·tanγ
Fig. 20 shows in block diagram form how such a function can be computed. This process is further described in Patent Application No. 7913058 referred to earlier herein.
The pipelined multiplier, shown in Fig. 21, is based on the Texas Instruments 74LS261. This device, which operates on a modified form of Booth's algorithm in which multiplier bits are examined three at a time, is used in the recommended configuration.
No special logic is needed to handle negative operands and the whole multiplier produces a correct two's complement result. The partial product bits generated are combined in an adder tree using 74H183 carry-save adders. Two sets of pipeline registers are required, one after the multiplication and one in the adder tree.
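The recoding idea can be modelled in a few lines; this behavioural sketch shows only the three-bits-at-a-time examination of the multiplier, not the 74LS261's internal partitioning or the carry-save adder tree.

```python
def booth_multiply(x, y, n=24):
    """Modified (radix-4) Booth multiplication: the two's complement
    multiplier y is examined three bits at a time with one bit of overlap,
    giving partial products in {-2x, -x, 0, +x, +2x}."""
    y &= (1 << n) - 1
    total, prev = 0, 0
    for i in range(0, n, 2):
        b0 = prev                        # overlap bit from previous group
        b1 = (y >> i) & 1
        b2 = (y >> (i + 1)) & 1
        digit = -2 * b2 + b1 + b0        # Booth digit in -2..+2
        total += (digit * x) << i        # weight 4**(i/2) = 2**i
        prev = b2
    return total

assert booth_multiply(1234, -567) == 1234 * -567   # negatives need no fix-up
```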
Division is performed by the non-restoring algorithm. Figure 22 shows a graph for the example (-3/8 ÷ 1/2) calculated by this method. The non-restoring method adds or subtracts the divisor so as to reduce the absolute magnitude of the partial dividend; for a proper division the remainder has a magnitude less than that of the divisor, and may have the opposite sign. In this case, the quotient should be further corrected, but as this adds an error only in the least significant place it is ignored.
The algorithm is easily implemented as a pipeline; one stage is shown in Fig. 23. Each calculation is performed by an adder/subtractor controlled by the exclusive-OR of the most significant bits of the partial dividend and divisor. After each addition/subtraction the shift takes place through the discarding of the partial dividend most significant bit and the substitution of a new partial dividend bit in the least significant position. After each stage in the division the quotient/unused dividend register contains one more quotient bit and one less unused dividend bit until at the end it contains only quotient bits. The final stage of the division is the quotient correction which converts the +1, -1 coded number into two's complement.
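A behavioural model of the division, one quotient digit per stage with a final correction, is given below; operands are fractions with |dividend| < |divisor|, and the word-level pipelining of the hardware is not modelled.

```python
def nonrestoring_divide(dividend, divisor, n=24):
    """Non-restoring division: at each stage the shifted partial dividend
    has the divisor added or subtracted according to the sign comparison,
    producing one +1/-1 quotient digit per stage."""
    assert abs(dividend) < abs(divisor)
    r = dividend
    digits = []
    for _ in range(n):
        r *= 2                                   # shift partial dividend
        if (r < 0) == (divisor < 0):             # signs equal: subtract
            r -= divisor
            digits.append(1)
        else:                                    # signs differ: add
            r += divisor
            digits.append(-1)
    # Quotient correction: convert the +1/-1 digit string to a value.
    return sum(d * 2.0**-(k + 1) for k, d in enumerate(digits))

print(nonrestoring_divide(-3/8, 1/2))   # close to -0.75
```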
All adder/subtractors in the divider are Texas Instruments 74S181 arithmetic/logic units with Intel 3003 look-ahead carry units.
The pipeline diagram for the whole transformation calculation can now be drawn following the scheme of Fig. 16; this is given in Fig. 24. It is to be noted that the Ax and Ay function generators are fed modified initial conditions, Ax(-α0, β0) and Ay(-α0, β0), because of their position in the pipeline. Fig. 24 also shows the scaling of all operands.
The three function generators, each with three inputs, are interfaced to a general purpose computer by its I/O bus. Transfers of data between the computer and the function generators occur whenever an output instruction containing one of the nine function generator addresses is executed. These transfers occur during display blanking periods. The computer program must therefore have some means of determining when these periods occur. This is achieved by use of the computer input/output flag system. A flag consists of a flip-flop which can be set by an external signal. The state of this flip-flop can be sensed by the computer (using input and conditional branch instructions) and it can be reset (using an output instruction).
Two flags are used for synchronisation of the program with the display raster, the odd-field flag and the even-field flag. The odd-field flag is set at the start of the odd-field blanking period and the even-field flag is set at the start of the even-field blanking period. The computer must be idle at the end of an odd or even field and thus able to continuously test the two flags. When one of these flags has been set, the program updates the function generators with data computed during the previous field, and clears the flag. Computations of the set-up data for the next field may now proceed using the latest available input data. This computation must end before the end of the current field.
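The flag protocol might be sketched as follows; the Flags class and the load and read_eye interfaces are hypothetical stand-ins for the computer's input/output instructions.

```python
import itertools

class Flags:
    """Hypothetical stand-in for the odd/even field flag flip-flops."""
    def __init__(self):
        self._fields = itertools.cycle(["odd", "even"])
        self._set = None
    def test(self, name):
        if self._set is None:
            self._set = next(self._fields)   # pretend blanking just began
        return self._set == name
    def clear(self, name):
        self._set = None

def compute_setup(eye, field):
    """Placeholder for the per-field function generator coefficients."""
    return {"field": field, "eye": eye}

def control_loop(flags, load, read_eye, fields=6):
    next_setup = {"odd": None, "even": None}
    for _ in range(fields):
        field = "odd" if flags.test("odd") else "even"
        if next_setup[field] is not None:
            load(next_setup[field])          # update during blanking
        flags.clear(field)
        other = "even" if field == "odd" else "odd"
        next_setup[other] = compute_setup(read_eye(), other)
        # this computation must end before the end of the current field

control_loop(Flags(), load=print, read_eye=lambda: (0.0, 0.0, 100.0))
```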
The flow chart of Fig. 25 shows how the program is organised. It is to be noted that, during the even field, the odd field set up conditions are calculated and vice-versa. New data defining the eye point attitude and position is read every field.
The second sub-system of the surface detail generator of Fig. 9, that is the surface detail store or texture memory 111, will now be described.
The surface memory accepts as input the vector (xp, yp) at a rate of 15 MHz representing the position of the surface intersection, and delivers as output a video signal which may be sent directly to the display device, after digital/analogue conversion. The design problem is totally that of aliasing control.
Fig. 26 and Fig. 27 are two perspective diagrams representing the process of viewing a textured plane surface in perspective. Fig. 26 represents viewing by a television camera. Fig. 26 shows a textured plane 46 viewed by a camera along the axis 47. The camera image plane is shown at 48. Fig. 27 shows the textured plane 46 viewed along the line 47 and the plane 48 represents the display plane.
Considering the television camera image production process of Fig. 26, at any position of the television raster scan, a ray may be drawn from the centre of projection (the imaged eyepoint) to the surface 46, where it intersects at point A. The action of the camera optical system and scanning beam may be combined by imagining an "aperture" with a distribution similar to that of Fig. 26. The effect of the aperture is to integrate information from surrounding parts of the brightness distribution on the image plane into a single value which is the video signal for that point of the raster scan. In a well-adjusted camera, the extent of this aperture will be such that the maximum possible amount of information will be extracted from the scene. In other words, the pre-sampling filter is correct. Too small an aperture produces aliasing, whereas too large an aperture causes unnecessary blurring. The image, therefore, corresponding to point A is made up from the brightnesses of a region of the surface in the neighbourhood of A. The further A is from the image plane the greater the size of this neighbourhood will be and the less detailed the image.
Fig. 27 shows an alternative way of viewing this process, one less related to the physical process. Here the aperture is mapped onto the surface 46 itself. The video signal corresponding to point B of the scan is made up of those elements of the surface falling under the mapped aperture. In signal processing terms, a two-dimensional filter is applied to the brightness distribution of the surface. The shape and extent of this filter is a function of the distance and tilt of the surface and is in general asymmetrical.
A direct implementation of this process is impossible, due to the huge memory access and computing problems that would arise. The parallel access of texture values at a 15 MHz rate and their combination in a variable two-dimensional filter is not considered practical.
The first simplifying assumption is in the use of a two-dimensionally symmetrical mapped aperture. This is subjectively justifiable, as the greatest distortion of the aperture occurs when the displayed detail is the least. With this assumption, off-line and predefined, rather than on-line, filtering is possible. The additional memory required for holding pre-filtered textures requires less board space than a filter required to operate at 15 MHz. Memory architecture can also be considerably simplified, as only one sample has to be read every 66 2/3 ns.
The second problem is that of filter selection; the determination in real time of what degree of texture filter is needed. One possibility is to use the distance from the eyepoint to the surface.
This distance is given by:

d = √((X0 - xp)² + (Y0 - yp)² + H²)

The value of d may be approximated to:

d ≈ H/B

with a maximum error of about 14% at the corners of the display. This quantity is available "for free" in the arithmetic pipeline of Figs. 17-19. The use of this value must be rejected, however, as no simple relation exists between the mapped aperture dimension and the range.
Since the approach to the filter design is based on aliasing control, the distance between intersections is a useful and easily obtainable measure.
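In code, the measure is simply the surface distance between consecutive sampling points, so only the previous point need be kept:

```python
import math

def along_line_spacing(prev_pt, cur_pt):
    """Surface distance between samples adjacent in time along a raster
    line: the easily obtainable measure used for filter selection."""
    return math.hypot(cur_pt[0] - prev_pt[0], cur_pt[1] - prev_pt[1])
```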
The memory system is based on the hierarchy principle, as previously described, but now the hierarchy levels are real images computed by a variable two-dimensional filter. The system provides a variable number of levels so that the choice of the optimum number can be made subjectively.
The texture memory system design is based on the two following assumptions: (a) The filter applied to the texture is symmetrical and undistorted, and (b) The selection of which filtered version of the texture to map is a picture element-by-picture element decision based on the distance between successive samples.
Both of these assumptions introduce approximations, but bring about local geometrical errors only. The main sampling grid is computed exactly and aliasing can be completely controlled.
The effect of the first assumption is to apply a greater degree of filtering than would apply in an exact simulation. In this, as has been mentioned previously, the mapped aperture is elongated in the direction of the line of sight for all cases except that of viewing in a direction normal to the surface. The filtering effect is thus greater in the line of sight direction than at right angles.
However, since it is assumed that the filter has the same filtering effect in both directions, a greater degree of filtering must be applied across the line of sight in order to avoid aliasing along the line of sight. Figs. 28 and 29 illustrate this effect. The elongation of the real mapped aperture increases as the line of sight touches the plane at smaller angles, as occurs at points near to those corresponding to the displayed horizon. However, as the total degree of filtering (detail reduction) becomes greater at these points, the effect of this overfiltering becomes less noticeable. The question of aperture elongation and its effects is considered in more detail later herein.
The second assumption above is also related to aperture elongation, for the definition of the "distance between samples" must be decided. As can be seen in Figs. 28 and 29, the distance between successive samples along a mapped raster line, a, is not the same as the distance between mapped lines, b. A definition of the "distance between samples" is also considered later herein.
Much work has been done recently on the aliasing problem in digital television image synthesis. The known studies are summarised below, as they have an important influence on the design of the texture memory system.
Television image formation can be considered as a three-dimensional sampling process, the three dimensions being the two linear dimensions of the image and time. The sampling aspects of television are well understood and the theory can be profitably applied to television-based computer generated imagery. By studying the operation of a television camera system in terms of sampling theory and then comparing synthetic image generation, the reasons for aliasing in the latter systems can be understood. It is also possible to devise methods for the reduction of aliasing in computed images through this knowledge of television systems.
The classic theory of television image generation derives the spectrum of the television camera output signal by an analysis of the scanning process and the form of sampling used in a television system. The effect of the scanning aperture on aliasing can be considered therefrom.
Considering image synthesis in the light of television theory, it is possible to conclude that computed images are analogous to non-band-limited scenes sampled without the use of a presampling filter. Aliasing may be reduced by sampling at a higher frequency and synthesising a pre-sampling filter. Experience has shown that a sampling frequency four times higher in both the vertical and horizontal dimensions produces satisfactory results when using a 625-line projected display in a simulator.
Fig. 30 represents the sampling grid of a discretised display together with some scene details in their exact positions as would be computed by a polygon-based image generator of infinite resolution. Since only one value of brightness and colour can exist over the area of a picture element, some method of mapping the computed image onto the element grid must be employed.
In early image generators the brightness and colour of a picture element was taken to be that of the scene surface visible at the centre of the element, so that, for example, in Fig. 30, element (3,3) would be assigned the value B1. This method resulted in edges portrayed with stepped boundaries, and a slight positional change in a computed edge might result in a displayed change in position of one picture element. Distracting effects such as these were accepted in the first image generators, but solutions to these problems have since been found. The method used is to synthesise each displayed picture element from a number of "sub-elements" which, in effect, sample the scene on a finer grid. Element (4, 10) in Fig. 30 shows the sample points when 16 sub-elements are used. The brightness and colour values computed for each sub-element are combined in a two-dimensional filter to yield the element to be displayed. The cut-off frequency of this filter must be equal to the maximum spatial frequency which can be represented on the display grid. This is given by sampling theory and is shown by the two sine wave cycles portrayed in Fig. 30.
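A minimal sketch of the sub-element method follows; scene_brightness is a hypothetical scene function of display coordinates, and the equal-weight combination corresponds to the simple filter mentioned below.

```python
def displayed_element(scene_brightness, ex, ey, sub=4):
    """Sample the scene on a grid 'sub' times finer in each dimension and
    combine the 16 (for sub = 4) sub-elements with equal weights to give
    the displayed element; a real system may use a wider, shaped filter."""
    total = 0.0
    for j in range(sub):
        for i in range(sub):
            total += scene_brightness(ex + (i + 0.5) / sub,
                                      ey + (j + 0.5) / sub)
    return total / (sub * sub)

# An edge at x = 3.4 covers 40% of element (3, y); the 16-sample estimate
# quantises this to 0.5.
edge = lambda x, y: 1.0 if x < 3.4 else 0.0
print(displayed_element(edge, 3, 0))
```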
In simple systems, sub-elements lying within one element are added with equal weights (the "area times colour" rule), while in more complex systems, sub-elements lying in adjacent elements may be used as well. All currently manufactured raster-scan computer image generators for simulation now incorporate some form of anti-aliasing.
It is noted that some so-called solutions to the aliasing problem are not in fact cures, but merely treatment of the symptoms. "Edge smoothing" and post-filtering of an image with aliasing errors can only produce acceptable results at the cost of a reduction in the resolution of the displayed image. In other words, elements are treated as sub-elements in a gross filter.
Consider, now, the sampling of an image stored in a memory. For simplicity it will be assumed that the image is defined on a grid identical to that of the display picture element grid.
Fig. 31 shows the stored image with the display sampling grid superimposed. There is thus an exact correspondence between stored and displayed images. For a correct representation of the image, before storage, its spatial frequency content must be such that no higher spatial frequencies than those shown in the figure exist. This is achieved by application of the usual pre-sampling filter.
Now assume that the stored image is displaced with respect to the sampling grid, as shown in Fig. 32, as would occur with scene or observer movement. The displacement represented is one-quarter of an element in both vertical and horizontal dimensions. The sampling points all lie within the same boundaries on the memory grid and thus the displayed image is identical to that existing before the displacement. No image change will occur until a displacement of one-half of an element has occurred in either direction. A further displacement of one element spacing is then required before the next change occurs. A smooth movement of the computed image thus produces abrupt changes in the displayed image.
To match the performance of such a sampled memory system to image generators with anti-aliasing which do not exhibit this element-to-element jumping of scene components, the number of stored image samples must be increased by a factor of 16, the number of sub-elements in each element of the polygon-based system. A smooth movement of the computed image then produces displayed image changes which occur four times more often in any linear dimension, thus producing a closer approximation to smooth movement. The stored image must still be pre-filtered to the same degree, except that there are now eight rather than two samples per spatial cycle at maximum spatial frequency. Fig. 33 shows the two new grids and the maximum spatial frequency which can be stored. This arrangement is now exactly analogous to the polygon system with anti-aliasing. As far as rotation of the computed image is concerned, the same argument applies. The same approximation to smooth rotation is used as in polygon systems with sub-element sampling.
The design of a texture memory system can now be considered using this data and the two assumptions previously referred to.
At the closest possible eye-surface distance one sub-element, as defined above, maps into a surface element of about 2 inches square. For smooth motion, the eye must not be allowed to approach the simulated surface any closer. If the surface is viewed normally at this minimum eye-surface distance, then the situation depicted in Fig. 33 applies.
A memory containing a pattern stored in this manner and filtered to the degree shown could thus constitute the top level of a memory hierarchy, designed to be sampled every four surface elements. This level is called level 0 and is produced from the basic texture pattern by applying a symmetrical two-dimensional filter with cut-off spatial frequencies π/4 with respect to the fine grid.
Consider the case now where sample points are five surface resolution elements apart. With normal viewing, this corresponds to an eye-surface distance of 5/4 of that producing the four surface element spacing. Fig. 34 shows the dimensions of the memory required for this spacing, where each element has 5/4 the linear dimension of a surface resolution element. The new memory grid, the top right-hand one in Fig. 34, maps onto the display exactly as the level 0 grid shown in Fig. 33. The degree of pre-filtering required for this level to prevent aliasing requires a low-pass filter with cut-off frequencies π/4 with respect to the larger mapped sub-element grid, or π/5 with respect to the resolution element grid. Ideally, the total amount of storage required for this level of detail would be 4/5 × 4/5 = 0.64 of that required for level 0.
However, this amount is not achievable in practice.
Level 2, pre-filtered for a sample spacing of 6 resolution elements, would require 4/6 × 4/6 = 0.44 of the storage, level 3 would require 4/7 × 4/7 = 0.33, and so on. The final level in the hierarchy will contain only one sample, representing the average brightness of the whole textured area. The next-to-last level, shown in Fig. 35, needs to contain 64 samples and corresponds to the situation where 4 mapped display elements cover the entire pattern. These last levels will exist whatever the initial texture pattern size.
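The construction of such a hierarchy can be sketched as below; a separable box filter stands in here for the "suitable two-dimensional low-pass filter", which the text does not specify further.

```python
def box_filter(img, width):
    """Separable moving average (a crude low-pass stand-in); indices wrap
    because the texture pattern tiles the ground plane."""
    n, h = len(img), width // 2
    tmp = [[sum(row[(x + k) % n] for k in range(-h, h + 1)) / (2*h + 1)
            for x in range(n)] for row in img]
    return [[sum(tmp[(y + k) % n][x] for k in range(-h, h + 1)) / (2*h + 1)
             for x in range(n)] for y in range(n)]

def build_hierarchy(pattern):
    """Levels 0..28 for a 64 x 64 pattern: level k is pre-filtered for a
    sample spacing of k + 4 resolution elements.  The reduced storage of
    the Table below is handled separately, at addressing time."""
    return [box_filter(pattern, spacing) for spacing in range(4, 33)]

checker = [[float((x // 8 + y // 8) & 1) for x in range(64)] for y in range(64)]
levels = build_hierarchy(checker)   # 29 progressively smoother images
```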
In the implementation a texture pattern size of 64 × 64 = 4096 surface resolution elements has been chosen as the largest practical size in a practical system, considering the programmable read-only memories with suitable access times available at the present time.
With this pattern size, the following Table sets out the theoretical and practical memory size for the number of levels needed, 29 levels in this case.
STORAGE QUANTITIES

Level   Sample spacing   Theoretical memory size   Practical memory size   Address bits used
0       4                4096                      64 × 64 = 4096          X5 X4 X3 X2 X1 X0, Y5 Y4 Y3 Y2 Y1 Y0
1       5                2621                      4096
2       6                1820                      4096
3       7                1337                      4096
4       8                1024                      32 × 32 = 1024          X4 X3 X2 X1 X0, Y4 Y3 Y2 Y1 Y0
5       9                809                       1024
6       10               655                       1024
7       11               542                       1024
8       12               455                       1024
9       13               388                       1024
10      14               334                       1024
11      15               291                       1024
12      16               256                       16 × 16 = 256           X3 X2 X1 X0, Y3 Y2 Y1 Y0
13      17               227                       256
14      18               202                       256
15      19               182                       256
16      20               164                       256
17      21               149                       256
18      22               135                       256
19      23               124                       256
20      24               114                       256
21      25               105                       256
22      26               97                        256
23      27               90                        256
24      28               84                        256
25      29               78                        256
26      30               73                        256
27      31               68                        256
28      32               64                        1                       none

Total                    16,584                    28,673
Expansion ratio          4.05                      7

Memory addressing difficulties prevent the optimum use of memory capacity. The texture memory is addressed by two vectors, xp and yp, changing at picture element rate, which represent the mapped sub-element positions. Fig. 36 shows how level 0, which contains 4096 samples, would be addressed (X0 and Y0 are the least significant bits of the X and Y vectors and represent one surface resolution element). Level 1 ideally requires 2621 samples which have to be mapped onto the same address bits used by level 0.
This mapping is feasible using either look-up tables or arithmetic circuitry, and a memory could be constructed to hold the required number of samples.
The practical solution is to simplify the addressing at the expense of storage economy. For level 1, for example, 4096 samples are used and addressed in the same manner as level 0. The above Table shows how the complete hierarchy is stored and the addressing used. The memory expansion factor is 7 for the case of a 4096 sample pattern, as opposed to 4 for the optimum memory use scheme. The simplification in addressing hardware far outweighs the increased memory requirement. Pre-filtered images can be computed by application of a suitable two-dimensional low-pass filter.
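The bit selection of the Table can be expressed directly; packing the X and Y bits into a single address word, as below, is an assumption of this sketch.

```python
def address_bits(level):
    """X (and Y) address bits used per level, following the Table:
    levels 0-3 use 6 bits, 4-11 use 5, 12-27 use 4, level 28 uses none."""
    if level >= 28:
        return 0
    if level >= 12:
        return 4
    if level >= 4:
        return 5
    return 6

def texture_address(xp_int, yp_int, level):
    """Form one level's address from the low-order X and Y bits (X0 and Y0
    represent one surface resolution element); coarser levels simply use
    fewer bits, the simplified-addressing trade-off described above."""
    n = address_bits(level)
    mask = (1 << n) - 1
    return ((xp_int & mask) << n) | (yp_int & mask)

print(texture_address(0b101101, 0b011010, 0))   # level 0: a 12-bit address
```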
The final design decision is how the memory hierarchy level selection is to operate. The earlier discussion is based on the assumption that the sample spacing is known. In general, as can be seen in Fig. 28, the sample spacing is different in the x and y directions and the square sampling grid is a special case. To prevent aliasing, the largest distance, b in Fig. 28, is used.
In the example given, this distance is that between samples on adjacent lines, but it can equally well be that between adjacent elements on the same line.
Computation of the distance between samples on the same line is simple: only the previous value of (xp, yp) needs to be stored. Computation of the distance between samples on adjacent lines, on the other hand, requires the storage of a whole line of (xp, yp) pairs, approximately 800 × 24 × 2 bits of high-speed memory. While this is practical, it would require an extra circuit card. The simpler solution of computing the distance between samples adjacent in time and applying a correction for aperture elongation is considered adequate.
This correction factor is applied by means of a level code mapping memory, which converts the computed sample spacing into a texture memory level code, a number in the range 0 to 28, according to a table loaded from the general purpose computer once per field.
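A sketch of this selection logic, with the aperture-elongation correction folded in as a per-field constant (the names, the constant factor and the particular spacing-to-code mapping are illustrative, not the patent's exact values):

    import math

    # Loaded once per field by the general purpose computer: entry i gives
    # the level code for a corrected sample spacing of i resolution
    # elements. Spacing 4 or less selects level 0, 32 or more level 28.
    LEVEL_MAP = [max(0, min(28, s - 4)) for s in range(64)]

    def select_level(xp, yp, prev_xp, prev_yp, elongation=1.0):
        # Distance between samples adjacent in time (on the same line)...
        spacing = math.hypot(xp - prev_xp, yp - prev_yp)
        # ...corrected for aperture elongation across lines, then mapped
        # to a texture memory level code.
        return LEVEL_MAP[min(63, round(spacing * elongation))]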
The whole set of texture memories is addressed in parallel using the xp and yp bits shown in the foregoing Table. Monolithic Memories 6353-1 integrated-circuit programmable read-only memories were used.
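For illustration, the bit-subset addressing in the Table can be sketched as follows (a hypothetical helper; bits is 6 for levels 0 to 3, 5 for levels 4 to 11, 4 for levels 12 to 27, and the single-sample final level needs no address):

    def texture_address(xp, yp, bits):
        # Pack the low-order x and y bits from the Table into one word.
        # Masking to the low bits also wraps the pattern, consistent with
        # a texture that tiles the ground plane.
        mask = (1 << bits) - 1
        return ((xp & mask) << bits) | (yp & mask)

    # Example: level 0 uses x5..x0 and y5..y0 (bits = 6),
    # giving a 12-bit address into its 4096 samples.
    addr = texture_address(37, 90, 6)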
Fig. 37 is the block diagram of the complete texture memory system described above, which generates the image of a complete textured ground plane.
The final, eight-bit-wide sequence of digital texture values is fed to a horizon switch identical to that built for the cloud/sky generator. Here a sky signal is combined with it to form the final image, which is sent to the video digital-to-analogue converter and fed to the display monitor.
A sample spacing correction must be computed and loaded into the level code mapping memory each field to ensure correct operation in all attitudes. It is not practical to compute this exactly, but an acceptable approximation can be made. Fig. 38 shows the mapped raster lines for three angles of roll; it is immediately apparent that the computed sample spacing is only correct when φ = 90°. The error may be computed simply by considering the mapped raster as a continuous function and comparing the rates of change of x and y in the tanλ (along lines) and tanγ (across lines) directions.
First, note that the mapped raster shape is independent of X0, Y0, H and ψ. If ψ is made zero, as in Fig. 38, the surface x and y axes are aligned as shown and a suitable correction factor can be defined as the ratio of the across-line sample spacing to the along-line sample spacing.
Since the correction factor has to apply to a complete field, the tanγ factor is approximated by a constant value. Fig. 39 shows the effect of this constant correction factor, which replaces the exact, curved level code mapping functions with straight lines. The value of tanγ is determined empirically by setting θ = φ = 0 and adjusting the level code mapping until no aliasing is visible. Use of this value in the general correction factor produces acceptable results for all attitudes.
The Surface Detail Generator described above is suitable for integration into a polygon-based raster scan image generator. Selected surfaces, defined as "textured", have their single luminance value modulated by the Surface Detail Generator output.
The system described is capable of transforming texture for a single surface in any one frame.
A modification, however, would allow the transformation processor to be re-loaded during a line blanking interval so that a different plane could be defined. A change in the transformation along a line would also be possible if delaying stages were added to the pipeline so that all inputs referred to the same time instant. Because of the computation delay, the pipeline would of course have to be loaded the correct number of elements before the surface change was required (see Fig. 24).
A problem arises at the boundary of a textured polygon because of the way aliasing is handled at edges in most image generators. To represent the transition correctly, values of the texture luminance would have to be known to a resolution finer than one picture element, which is not possible, as the transformation computer can only produce one sample per element. However, since the texture only modulates an overall luminance value, the filtered edge retains its correct appearance.
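A one-line sketch of that interaction (names are illustrative): the single texture sample for a display element scales a luminance that the image generator has already edge-filtered to sub-element precision.

    def shade(edge_filtered_luminance, texture_sample):
        # The texture sample modulates the polygon's single luminance
        # value, so the anti-aliased edge profile is preserved even
        # though only one texture sample per display element exists.
        return edge_filtered_luminance * texture_sample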
Recent advances in high-speed semiconductor memories make possible the construction of a texture generator with a smaller high-speed computing requirement. This may be achieved using the known "rolled raster" principle. Fig. 41 shows such a system in block diagram form.
The texture memory is accessed by vectors xp and yp computed according to Equations (3) and (4), that is, with the roll angle φ = 0. The high-speed arithmetic required for real-time solution of these equations amounts to two adders; the multiplications and divisions required may be performed on a line-by-line basis in the general purpose computer. The texture memory output is read into one half of a frame store constructed from high-speed semiconductor memories. The distance between samples is constant along any line and between lines, as shown in Fig. 41. Both of these distances are available for any line in the general purpose computer, which may then determine which is the greater and select the texture memory level accordingly; no problem of sample spacing correction exists.

While one half of the frame store is being loaded, the other half is sending its contents to the display. Roll is introduced at this point by a simple linear address mapping: the partially transformed texture is read out in lines at an angle of φ to those on which it was written. The address mapping again requires two high-speed adders for implementation.

The whole system thus requires a frame store, a general purpose computer and a small amount of high-speed arithmetic circuitry. This represents a considerable saving over the original design if 16K or larger memories are used for the frame store. The disadvantage of this alternative is that only surfaces with the same roll angle with respect to the observer can be transformed in the same frame.
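A sketch of the roll-introducing read-out pass under this scheme; the array names, wrap-around behaviour and centre of rotation are assumptions, but the per-element work reduces to the two additions mentioned above:

    import math
    import numpy as np

    def read_rolled(frame, phi):
        # Read the half of the frame store written with roll = 0 out in
        # lines at an angle phi to those on which it was written.
        h, w = frame.shape
        out = np.empty_like(frame)
        cos_p, sin_p = math.cos(phi), math.sin(phi)
        for line in range(h):
            # Start-of-line address, rotated about the frame centre; the
            # general purpose computer could supply these once per line.
            u = line - h / 2
            x = w / 2 - (w / 2) * cos_p - u * sin_p
            y = h / 2 - (w / 2) * sin_p + u * cos_p
            for elem in range(w):
                out[line, elem] = frame[int(math.floor(y)) % h,
                                        int(math.floor(x)) % w]
                x += cos_p   # adder 1: step along the rotated line
                y += sin_p   # adder 2
        return out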

Claims (1)

1. A visual display system of the computer generated image type, for a ground-based flight simulator, and for providing a raster scanned, perspective visual display of a simulated textured surface, including a surface detail generator comprising a perspective transformation computer and a surface detail store, the perspective transformation computer being a pipeline calculator for computing in real time the perspective transformation from the simulated ground plane to the display system display plane during simulated flight, thereby determining the required scanning of the surface detail store corresponding to a rectangular raster scanned display.
GB7920882A 1979-06-15 1979-06-15 C.G.I.-Surface textures Withdrawn GB2051525A (en)

Priority Applications (6)

Application Number Priority Date Filing Date Title
GB7920882A GB2051525A (en) 1979-06-15 1979-06-15 C.G.I.-Surface textures
GB8018838A GB2061074B (en) 1979-06-15 1980-06-09 Visual display systems
FR8013166A FR2466061A1 (en) 1980-06-13 IMPROVEMENTS TO DISPLAY SYSTEMS OF THE COMPUTER GENERATED IMAGE TYPE
US04/159,442 US4343037A (en) 1979-06-15 1980-06-13 Visual display systems of the computer generated image type
CA000353946A CA1141468A (en) 1979-06-15 1980-06-13 Visual display apparatus
DE19803022454 DE3022454A1 (en) 1980-06-14 OPTICAL IMAGE SYSTEM WITH COMPUTER GENERATED IMAGE FOR A GROUND-BASED FLIGHT SIMULATOR

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB7920882A GB2051525A (en) 1979-06-15 1979-06-15 C.G.I.-Surface textures

Publications (1)

Publication Number Publication Date
GB2051525A true GB2051525A (en) 1981-01-14

Family

ID=10505868

Family Applications (2)

Application Number Title Priority Date Filing Date
GB7920882A Withdrawn GB2051525A (en) 1979-06-15 1979-06-15 C.G.I.-Surface textures
GB8018838A Expired GB2061074B (en) 1979-06-15 1980-06-09 Visual display systems

Family Applications After (1)

Application Number Title Priority Date Filing Date
GB8018838A Expired GB2061074B (en) 1979-06-15 1980-06-09 Visual display systems

Country Status (1)

Country Link
GB (2) GB2051525A (en)


Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE3204135A1 (en) * 1982-02-06 1983-08-18 Honeywell Gmbh, 6050 Offenbach DEVICE FOR SIMULATING THE VIEW OUT BY MEANS OF AN OPTICAL DEVICE
GB2120506B (en) * 1982-04-16 1986-03-26 Jpm Improvements relating to video apparatus
GB2138252B (en) * 1983-04-12 1986-10-22 Marconi Co Ltd Image generator
US4682160A (en) * 1983-07-25 1987-07-21 Harris Corporation Real time perspective display employing digital map generator
EP0281615B1 (en) * 1986-09-11 1991-09-11 Hughes Aircraft Company Scenographic process and processor for fast pictorial image generation
JPH0812705B2 (en) * 1986-09-29 1996-02-07 株式会社東芝 Image processing device
US4912659A (en) * 1987-10-30 1990-03-27 International Business Machines Corporation Parallel surface processing system for graphics display

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0064078A1 (en) * 1980-11-07 1982-11-10 The Singer Company Landing light pattern generator for simulator with selective addressing of memory
EP0064078A4 (en) * 1980-11-07 1984-06-13 Singer Co Landing light pattern generator for simulator with selective addressing of memory.
EP0137109A1 (en) * 1981-05-22 1985-04-17 The Marconi Company Limited Image generating apparatus for producing from the co-ordinates of the end points of a line, a two-dimensional image depicting the line as viewed by the observer
US4616217A (en) * 1981-05-22 1986-10-07 The Marconi Company Limited Visual simulators, computer generated imagery, and display systems
US4615013A (en) * 1983-08-02 1986-09-30 The Singer Company Method and apparatus for texture generation
GB2171579A (en) * 1985-02-20 1986-08-28 Singer Link Miles Ltd Apparatus for generating a visual display
GB2181929A (en) * 1985-10-21 1987-04-29 Sony Corp Generation of video signals representing a movable 3-d object
GB2181929B (en) * 1985-10-21 1989-09-20 Sony Corp Methods of and apparatus for video signal processing
US4953107A (en) * 1985-10-21 1990-08-28 Sony Corporation Video signal processing
EP0507548A2 (en) * 1991-04-05 1992-10-07 General Electric Company Texture for real time image generation
EP0507548A3 (en) * 1991-04-05 1994-05-11 Gen Electric Texture for real time image generation

Also Published As

Publication number Publication date
GB2061074B (en) 1983-02-09
GB2061074A (en) 1981-05-07

Similar Documents

Publication Publication Date Title
US4343037A (en) Visual display systems of the computer generated image type
US4667190A (en) Two axis fast access memory
US4835532A (en) Nonaliasing real-time spatial transform image processing system
CA1254655A (en) Method of comprehensive distortion correction for a computer image generation system
US5841441A (en) High-speed three-dimensional texture mapping systems and methods
US4645459A (en) Computer generated synthesized imagery
US4586038A (en) True-perspective texture/shading processor
EP0137108A1 (en) A raster display system
EP0300703B1 (en) Depth buffer priority processing for real time computer image generating systems
US4811245A (en) Method of edge smoothing for a computer image generation system
US4677576A (en) Non-edge computer image generation system
EP0284619A1 (en) Method and apparatus for sampling images to simulate movement within a multidimensional space
US5823780A (en) Apparatus and method for simulation
EP0100097B1 (en) Computer controlled imaging system
GB2051525A (en) C.G.I.-Surface textures
GB2256568A (en) Image generation system for 3-d simulations
EP0313101A2 (en) Fractional pixel mapping in a computer-controlled imaging system
US5550959A (en) Technique and system for the real-time generation of perspective images
EP0315051A2 (en) Perspective mapping in a computer-controlled imaging system
EP0250588B1 (en) Comprehensive distortion correction in a real time imaging system
CA1206260A (en) Image processing system
JPH0241785B2 (en)
Applegate The use of interactive raster graphics in the display and manipulation of multidimensional data
Evans Computer generated images for aircraft use
WO1988001414A1 (en) Data decompression using a polynomial computation engine

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)