CN113593027B - Three-dimensional avionics display control interface device

Info

Publication number: CN113593027B (application published as CN113593027A)
Application number: CN202110880604.XA
Authority: CN (China)
Inventor: Sun Liang (孙亮)
Assignee: Sichuan Hanke Computer Information Technology Co., Ltd. (original assignee and applicant)
Legal status: Active (granted)

Classifications

    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL (G PHYSICS; G06 COMPUTING; CALCULATING OR COUNTING)
    • G06T17/05 Geographic models (under G06T17/00 Three dimensional [3D] modelling)
    • G06T15/005 General purpose rendering architectures (under G06T15/00 3D image rendering)
    • G06T15/04 Texture mapping
    • G06T15/10 Geometric effects
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts (under G06T19/00 Manipulating 3D models or images for computer graphics)
    • G06T2219/2012 Colour editing, changing, or manipulating; use of colour codes (indexing scheme G06T2219/20 for editing of 3D models)

Abstract

The invention relates to a three-dimensional avionics display control interface device, which comprises a three-dimensional digital earth that generates three-dimensional terrain through a modern OpenGL rendering pipeline and supports rotating the digital earth under three different observation view angles; a three-dimensional graphic component for providing multiple three-dimensional graphic types rendered and displayed on the three-dimensional digital earth; a three-dimensional container component that inherits the container base class of VAPS XT, giving containers the capability of mounting child nodes; and a three-dimensional model component that manages 3ds files through lib3ds, places the parts to be drawn into a display list (glGenLists), and precompiles the list. Three functions, three-dimensional global camera control, ambient light control, and graphic component editing control, are made independent in the architecture design, forming three unified logic control modules with low coupling to and high cohesion with the three-dimensional graphic components.

Description

Three-dimensional avionics display control interface device
Technical Field
The invention relates to the technical field of avionics display control, in particular to a three-dimensional avionics display control interface device.
Background
The display control interface is a medium for interaction and information exchange between the system and the user, realizing conversion between the internal form of information and a form acceptable to humans. The avionics display control system is one of the important parts of the avionics system; over the course of its development, it has passed through five stages: the first generation of aircraft instruments, the electromechanical servo instrument period, integrated guidance instruments, CRT (cathode ray tube) electro-optical display instruments, and the modern display control system.
However, the currently mainstream avionics display control interface design platforms (such as VAPS XT) only provide visual development of two-dimensional display control interfaces and have no three-dimensional view capability; a two-dimensional display control interface device cannot meet the requirements of the current battlefield situation, so a three-dimensional avionics display control interface device is needed.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a three-dimensional avionics display control interface device that remedies the shortcomings of existing avionics display control interface design platforms.
The aim of the invention is achieved by the following technical scheme: a three-dimensional avionics display control interface device comprises a three-dimensional digital earth, a three-dimensional graphic assembly, a three-dimensional container assembly and a three-dimensional model assembly;
The three-dimensional digital earth: used for generating three-dimensional terrain through a modern OpenGL rendering pipeline and for realizing rotation of the digital earth under three different observation view angles;
the three-dimensional graphic assembly: for providing multiple types of three-dimensional graphics types and rendering a display in a three-dimensional digital earth;
the three-dimensional container assembly: inherits the container base class of VAPS XT to provide three-dimensional containers with the capability of mounting child nodes;
the three-dimensional model component: manages 3ds files through lib3ds, places the parts to be drawn into a display list (glGenLists) in display-list mode, and then precompiles the list.
The three-dimensional digital earth comprises a modern OpenGL rendering module, a three-dimensional terrain generating module and a three-dimensional global camera controller;
the OpenGL rendering module is used for receiving a group of 3D coordinates and converting the 3D coordinates into 2D pixel display output on a screen;
the three-dimensional terrain generation module is used for generating terrain data and rendering according to the OpenGL rendering module, textures and the terrain file loading library GDAL;
the three-dimensional global camera controller is used for rotating the digital earth by supporting three different viewing angles of a free viewing angle, a tracking viewing angle and an earth viewing angle and simultaneously by a mouse dragging operation by bypassing vectors of an origin.
The modern OpenGL rendering module includes:
vertex shader unit: receives the 3D coordinates input in the form of an array, converts them into other 3D coordinates, and passes them to the primitive assembly unit;
primitive assembling unit: for assembling all points into a specified primitive shape and inputting the same to a geometry shader unit;
geometry shader unit: takes the set of vertices forming a primitive shape as input, and can generate new vertices to construct new primitives, producing other primitive shapes that are passed to the rasterization unit;
and a rasterizing unit: for mapping the primitives to corresponding pixels on the final screen, generating fragments for use by the fragment shader unit;
fragment shader unit: for calculating the final color of a pixel from the contained 3D scene data;
test and blending unit: for checking the depth value and stencil value of the fragment to judge the position of the pixel, and for checking the alpha value and blending objects.
The three-dimensional terrain rendering interaction flow comprises the following steps:
reading a topography file: the CPU loads the terrain file from the disk, analyzes the terrain file into terrain data, and generates original terrain data to the memory;
calculating vertex data: calculating vertex attributes comprising three-dimensional world coordinates, normals and texture coordinates and generating vertex attribute data;
Transmitting vertex data: calculate the required space size, create a VBO object with glGenBuffers, set the VBO type to GL_ARRAY_BUFFER, and store the vertex data in the VBO for use by the vertex shader; the glBufferData function allocates a block of data storage for the currently bound VBO and writes the vertex data from memory into that space; the last parameter of glBufferData is GL_STATIC_DRAW, meaning the data store is initialized only once, which helps the GPU allocate the space;
creating a shader: generating source codes of the vertex shader and the fragment shader using the GLSL language, and creating corresponding executable logic units in the GPU;
rendering preparation and rendering: before rendering, the corresponding parameters are passed to the shaders and the manner of parsing the VBO data is passed to the vertex shader; the drawing instruction glDrawElements is called for rendering, and the OpenGL rendering module executes the vertex shader and fragment shader respectively and outputs the rendering result to the window.
Computing vertex data includes:
a1, knowing the longitude and latitude of a starting point, calculating the longitude and latitude of a vertex according to the two-dimensional coordinates of the vertex, wherein the height value of the vertex is the value of the two-dimensional coordinates of the vertex in grid metadata, and then referring to a world coordinate formula in a camera to calculate world coordinates;
A2, selecting two vertexes adjacent to the current vertex according to the current vertex, calculating vectors between the two vertexes and the current vertex, and performing cross multiplication on the two vectors to obtain a normal vector;
a3, calculating texture coordinates of the vertex comprises the following steps:
generating a one-dimensional color table file through a third party tool;
creating a one-dimensional texture from the one-dimensional color table file, where the width of the texture represents the total number of colors; the texture parameters in memory are written into the video memory through the glTexImage1D function; since the texture coordinate range in OpenGL is 0 to 1, the color table is mapped to the range 0 to 1, with texture coordinate 0 corresponding to the first color in the color table and 1 to the last color;
and inputting the height value, the maximum height value, and the minimum height value of the current vertex to calculate the texture coordinate of the vertex, mapping the height of the vertex into the range 0 to 1, consistent with the texture coordinates in the video memory.
The texture ID and the texture coordinate of the vertex are input, and a sampling function is called in the fragment shader to acquire the color of the vertex from the texture.
The creating a shader includes:
the glCreateShader and glCreateProgram functions create the shaders and the shader program (compile status can be checked with glGetShaderiv), the source code of the vertex shader and fragment shader is set through the glShaderSource function, and the glCompileShader function compiles the source code;
the glAttachShader function attaches the vertex shader and fragment shader to the shader program, which is then linked through the glLinkProgram function.
If the VBO fails to allocate the storage space, a large block of data is segmented into a plurality of small blocks of data with the same capacity by adopting a data segmentation and multiplexing mode, then the small blocks of data are transmitted to a video memory, and finally rendering is carried out.
The step of data segmentation and multiplexing comprises the following steps:
setting the capacity of a single data block to be 6MB, and calculating the number of rows of the vertexes which can be accommodated according to the capacity of the data block;
calculating the number of required data blocks according to the total number of the vertexes, and creating VBOs with the same number;
memory space is allocated according to the capacity of the data block, and the vertex array is traversed;
sequentially reading the vertexes containing the row numbers, calculating the vertexes and transmitting the vertexes.
The workflow of the three-dimensional global camera controller comprises:
reading geographic position and attitude parameters of a camera, and judging the current camera view angle;
if the camera is in the free view angle, calculate the position of the camera in the world coordinate system according to the geographic position of the camera, calculate the sight direction and Up direction of the camera according to the attitude parameters of the camera, and finally correct the camera coordinate system according to the quaternion of the current scene to obtain the current view matrix;
If the view angle is the tracking view angle, calculate the position of the camera in the world coordinate system according to the geographic position of the camera, calculate the sight direction and Up direction of the camera according to the attitude parameters of the camera, correct the camera coordinate system according to the quaternion of the current scene, and finally translate the camera coordinate system in the opposite direction along the Center vector according to the tracking distance parameter D to obtain the current view matrix;
and if the view angle is the earth view angle, restore the geographic position and attitude of the camera to the initial state, restore the state of the three-dimensional digital earth to the initial state, then construct a unit quaternion and update it according to the geographic position and attitude information of the camera, assign it as the current scene quaternion, and finally correct the camera coordinate system according to the north-south pole locking algorithm to obtain the current view matrix.
The three-dimensional container assembly includes a geographic location container, a 3D coordinate transformation container, a 3D clipping container, and a 3D instance construction container; a child node can access the geographic position attribute of the geographic location container and apply it; the 3D coordinate transformation container is used for providing its child nodes with unified offset in the north, west, and up directions, as well as attitude and size scaling control; the 3D clipping container renders only the parts of its child nodes located within a rectangular region; the 3D instance construction container is used for constructing multiple child node instances.
The invention has the following advantages:
1. multiplexing the basic components of the VAPS XT platform, including data types, math libraries, and other tool libraries; multiplexing graphic node structure management of the VAPS XT framework, maintaining a tree structure among nodes, and facilitating communication among the nodes;
2. based on the plug-in interface specification of the VAPS XT platform, the plug-in interface specification keeps the uniformity on the framework, and ensures seamless butt joint with a logic design module, a data communication module and the like which are built in the VAPS XT;
3. three functions, three-dimensional global camera control, ambient light control, and graphic component editing control, are made independent in the architecture design, forming three unified logic control modules with low coupling to and high cohesion with the three-dimensional graphic components;
4. each three-dimensional graphic component is an independent plug-in, which facilitates parallel expansion and maintenance; similarly, the loading and parsing of each model file is implemented as an independent logic component, which also facilitates parallel expansion and maintenance;
5. the extended two-dimensional graphic component comprises a plurality of envelope graphic components (envelope circles, envelope sectors, envelope polygons), which are realized based on a unified envelope algorithm strategy.
Drawings
FIG. 1 is a schematic diagram of a frame structure of the present invention;
FIG. 2 is a schematic diagram of a data transmission flow;
FIG. 3 is a schematic diagram of a data slicing and multiplexing flow;
FIG. 4 is a schematic workflow diagram of the three-dimensional global camera controller;
FIG. 5 is a schematic diagram of an OpenGL matrix transformation sequence;
FIG. 6 is a flow chart of a mouse drag operation;
FIG. 7 is a flow chart of a rotation method based on quaternion;
FIG. 8 is a schematic diagram of the initial state of a camera and three-dimensional digital earth from the earth's perspective;
FIG. 9 is a schematic diagram of the rotated state of the camera and three-dimensional digital earth from the earth's perspective;
FIG. 10 is a flowchart of a north-south pole locking algorithm;
FIG. 11 is a schematic diagram of the corrected state of the camera and three-dimensional digital earth at the earth's perspective;
FIG. 12 is a flow diagram of an implementation of a geographic location container;
FIG. 13 is a flow chart of an implementation of a 3D coordinate transformation container;
FIG. 14 is a flow chart of a two-dimensional display fusion implementation.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application, which are generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Accordingly, the following detailed description of the embodiments of the present application, provided in connection with the accompanying drawings, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present application without making any inventive effort, are intended to be within the scope of the present application. The invention is further described below with reference to the accompanying drawings.
As shown in fig. 1, the present invention relates to a three-dimensional avionics display control interface device, which provides a three-dimensional graphics development platform including a core three-dimensional digital earth, a three-dimensional model, a three-dimensional container, a three-dimensional graphics component, and the like, based on a plug-in interface specification of a VAPS XT platform; a series of two-dimensional graphic assemblies is also provided.
Further, the three-dimensional digital earth includes a modern OpenGL rendering module, a three-dimensional terrain generation module, and a three-dimensional global camera controller.
Modern OpenGL rendering modules:
its graphics rendering pipeline receives a set of 3D coordinates and then converts them into 2D pixel display output on the screen. The graphics rendering pipeline may be divided into several stages, each of which takes as input the output of the previous stage. All of these phases are highly specialized (they all have a specific function) and are easily performed in parallel. Due to their nature of parallel execution, most graphics cards today have thousands of small processing cores running separate applets on the GPU for each rendering stage, thereby rapidly processing data in the graphics rendering pipeline. These applets are called shaders.
The graphics rendering pipeline contains many parts, each handling a specific stage in the conversion of vertex data to final pixels; the parts of the pipeline are explained in general terms below.
First, 3D coordinates are passed as input to the graphics rendering pipeline in the form of an array, called vertex data (Vertex Data), to represent, for example, a triangle; vertex data is a series of vertices, and a vertex (Vertex) is a collection of data for a 3D coordinate. Vertex data is represented using vertex attributes (Vertex Attribute), which may contain any data we want to use, such as 3D coordinates, color, and texture.
The first part of the graphics rendering pipeline is the vertex shader (Vertex Shader), which takes a single vertex as input. The main purpose of the vertex shader is to convert 3D coordinates into other 3D coordinates, and the vertex shader also allows us to do some basic processing of the vertex attributes.
The primitive assembly (Primitive Assembly) stage takes as input all vertices output by the vertex shader (a single vertex in the case of GL_POINTS) and assembles them into the specified primitive shape, such as a triangle.
The output of the primitive assembly stage is passed to the geometry shader (Geometry Shader). The geometry shader takes as input the set of vertices forming a primitive and can generate other primitives by emitting new vertices to construct new (or other) shapes. In the example, it generates another triangle.
The output of the geometry Shader is passed to a rasterization stage (Rasterization Stage), where it maps the primitives to corresponding pixels on the final screen, generating fragments (fragments) for use by the Fragment Shader (Fragment Shader). Clipping (Clipping) is performed before the fragment shader is run. Clipping discards all pixels beyond the view to improve execution efficiency.
A fragment in OpenGL is all the data OpenGL needs to render one pixel, such as vertex coordinates and color. The main purpose of the fragment shader is to calculate the final color of a pixel, and this is also where the advanced OpenGL effects are produced. Typically, a fragment shader contains data about the 3D scene (such as lighting, shadows, and texture), which can be used to calculate the color of the final pixel.
After all the corresponding color values are determined, the final object is passed to the final stage, called the alpha test and blending (Blending) stage. This stage checks the depth value and stencil value of the fragment to determine whether the pixel is behind or in front of other objects, and decides whether it should be discarded. This stage also checks the alpha value (which defines the transparency of an object) and blends (blend) objects accordingly. Therefore, even if the output color of a pixel was calculated in the fragment shader, the final pixel color may be completely different when rendering multiple triangles.
Vertex shaders and fragment shaders in OpenGL rendering modules need to be self-defined.
Before drawing graphics, vertex data must be input to OpenGL. This data is managed by vertex buffer objects (Vertex Buffer Objects, VBO), which store large numbers of vertices in the GPU's video memory. The benefit of these buffer objects is that a large amount of data can be sent to the video memory at once, instead of sending one vertex at a time from memory to video memory, which is relatively slow; once the data is in video memory, the vertex shader can access it, and local access is very fast. Allocating a large number of VBOs (each with a capacity of only kilobytes) may cause graphics card driver problems: some drivers can only allocate a certain number of VBOs from video memory regardless of their capacity, so smaller objects need to be placed in a large-capacity VBO.
Three-dimensional terrain module:
three-dimensional terrain uses OpenGL shaders, texture techniques, the terrain file loading library GDAL, and so on, to generate terrain data and render it. The three-dimensional terrain rendering interaction flow comprises the following steps:
s1, reading a topography file: the CPU loads the terrain file from the disk, analyzes the terrain file into terrain data, and generates original terrain data to the memory;
Specifically, the GDAL library is adopted to read the terrain file, whose format is tiff raster data; GDAL, the Geospatial Data Abstraction Library, is a software library for reading raster and vector geospatial data formats, released by the Open Source Geospatial Foundation under the X/MIT license. As a library, it provides a single abstract data model for an application to parse all the formats it supports.
The projection in the raster data is the WGS-84 geocentric coordinate system, a standard longitude and latitude coordinate system for the earth; metadata in the raster file represents terrain elevation values in meters; the sampling precision of the data represents the extent of each data point in the longitude and latitude directions; the start point coordinates represent the longitude and latitude of the upper-left corner of the grid. The longitude and latitude of any vertex can be calculated from the start point coordinates.
S2, calculating vertex data: calculating vertex attributes comprising three-dimensional world coordinates, normals and texture coordinates and generating vertex attribute data;
the raster data includes information of the longitude and latitude of the vertex, and world coordinates, normal lines and texture coordinates of the vertex are calculated based on the longitude and latitude of the vertex, respectively, and form vertex data. World coordinates are used for the calculation of spatial locations in the vertex shader, vertex normals are used for the calculation of illumination colors in the fragment shader, and texture coordinates of vertices are used to obtain the colors of the vertices. The final color of the vertex combines the color of the vertex itself with the illumination color.
The vertex data comprises three attributes of world coordinates, normals and texture coordinates of the vertex;
the world coordinates of the vertex are calculated as follows: firstly, calculating the longitude and latitude of each vertex, and then converting the longitude and latitude into world coordinates; the method comprises the following steps: the two-dimensional coordinates of a vertex represent the offset of the vertex from the starting point. Knowing the longitude and latitude of the starting point, calculating the longitude and latitude of the vertex according to the two-dimensional coordinates of the vertex. The height value of the vertex is the value of the two-dimensional coordinate of the vertex in the grid metadata;
input variables: longitude of the starting point StartLongitude, latitude of the starting point StartLatitude, number of vertex rows Rows, number of vertex columns Cols, grid metadata Data, sampling precision CellSize, and the two-dimensional vertex coordinates OffsetX, OffsetY; then Longitude = StartLongitude + OffsetX × CellSize, Latitude = StartLatitude - OffsetY × CellSize, and Altitude = Data[OffsetY][OffsetX]; the outputs are the longitude Longitude, latitude Latitude, and height Altitude of the vertex.
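This vertex geolocation step can be sketched in a few lines of C++ (a minimal sketch; the container type for the grid metadata and the function name are illustrative, and the conversion from longitude/latitude/altitude to world coordinates is assumed to live in the camera helper described elsewhere):

```cpp
#include <vector>

struct LLA { double longitude, latitude, altitude; };

// Sketch: longitude/latitude/altitude of a grid vertex from its
// two-dimensional offset (OffsetX, OffsetY) relative to the start point.
LLA vertexLLA(double startLongitude, double startLatitude, double cellSize,
              const std::vector<std::vector<float>>& data,
              int offsetX, int offsetY)
{
    LLA lla;
    lla.longitude = startLongitude + offsetX * cellSize; // east of the start point
    lla.latitude  = startLatitude  - offsetY * cellSize; // rows advance southward
    lla.altitude  = data[offsetY][offsetX];              // grid metadata, in metres
    return lla;
}
```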
The normal of the vertex is calculated as: selecting two vertexes adjacent to the current vertex according to the current vertex, calculating vectors between the two vertexes and the current vertex, and cross multiplying the two vectors to obtain a normal vector;
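A sketch of this normal computation, assuming the glm math library (the patent does not name one) and taking the two adjacent vertices as parameters:

```cpp
#include <glm/glm.hpp> // assumed math library

// Sketch: vertex normal = normalized cross product of the two edge vectors
// from the current vertex to two adjacent grid vertices.
glm::vec3 vertexNormal(const glm::vec3& current,
                       const glm::vec3& neighbourA,
                       const glm::vec3& neighbourB)
{
    glm::vec3 e1 = neighbourA - current; // vector to the first adjacent vertex
    glm::vec3 e2 = neighbourB - current; // vector to the second adjacent vertex
    return glm::normalize(glm::cross(e1, e2));
}
```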
The texture coordinates of the vertices are calculated as: the vertex color is obtained by sampling from the texture according to the texture coordinates of the vertex. Firstly, generating textures, calculating the texture coordinates of the vertexes, and finally, sampling the textures by a fragment shader to obtain the colors of the vertexes during rendering. The method comprises the following steps:
generating a one-dimensional color table file through a third-party tool in order to obtain a smoother elevation color effect; the total number of colors in the color table can be set, such as 256 or 64; the value of each color is in RGB format; colors can be set at key positions, and the colors between two key positions are obtained through linear interpolation, that is, middleColor(t) = beginColor × (1 - t) + endColor × t, 0 <= t <= 1;
to acquire vertex colors from the texture, a one-dimensional texture is created from the one-dimensional color table file, where the width of the texture represents the total number of colors; the texture parameters in memory are written into the video memory through the glTexImage1D function; since the texture coordinate range in OpenGL is 0 to 1, the color table is mapped to the range 0 to 1, with texture coordinate 0 mapped to the first color in the color table and 1 to the last color;
inputting the height value CurrentHeight, the maximum height value MaxHeight, and the minimum height value MinHeight of the current vertex, the texture coordinate of the vertex is obtained from the formula textCoord = (CurrentHeight - MinHeight) / (MaxHeight - MinHeight), mapping the height of the vertex into the range 0 to 1, consistent with the texture coordinates in the video memory.
The texture ID TextureId and the texture coordinate textCoord of the vertex are input, and a sampling function is called in the fragment shader to acquire the vertex color from the texture according to the formula color = texture1D(TextureId, textCoord).
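A sketch of the one-dimensional color-table texture and the height-to-texture-coordinate mapping described above (the function names and the RGB byte layout are assumptions; the fragment shader then samples with texture1D as in the formula):

```cpp
#include <GL/gl.h>
#include <vector>

// Sketch: upload the one-dimensional color table as a 1D texture whose
// width equals the total number of colors in the table.
GLuint createColorTableTexture(const std::vector<unsigned char>& rgbBytes,
                               int colorCount)
{
    GLuint textureId = 0;
    glGenTextures(1, &textureId);
    glBindTexture(GL_TEXTURE_1D, textureId);
    glTexImage1D(GL_TEXTURE_1D, 0, GL_RGB, colorCount, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, rgbBytes.data());
    glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    return textureId;
}

// textCoord = (CurrentHeight - MinHeight) / (MaxHeight - MinHeight),
// mapping the vertex height into the [0, 1] texture coordinate range.
float heightToTexCoord(float currentHeight, float minHeight, float maxHeight)
{
    return (currentHeight - minHeight) / (maxHeight - minHeight);
}
```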
As shown in fig. 2, S3, transmitting vertex data: calculate the required space size, create a VBO object with glGenBuffers, set the VBO type to GL_ARRAY_BUFFER, and store the vertex data in the VBO for use by the vertex shader; the glBufferData function allocates a block of data storage for the currently bound VBO and writes the vertex data from memory into that space; the last parameter of glBufferData is GL_STATIC_DRAW, meaning the data store is initialized only once, which helps the GPU allocate the space;
the space is calculated as follows. Input variables: the size SizeOfVertex occupied by a single vertex (equal to the sum of the sizes of all attributes of the vertex), the number of vertex rows Rows, and the number of vertex columns Cols. The space occupied by all vertices is obtained from NumVertex = Rows × Cols and SizeAllVertex = NumVertex × SizeOfVertex.
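A minimal sketch of this transmission step (GLEW is assumed only as a way to expose the buffer-object entry points; any loader works):

```cpp
#include <GL/glew.h>

// Sketch: create a VBO sized SizeAllVertex and upload the vertex array once.
GLuint uploadVertices(const void* vertexData, GLsizeiptr sizeAllVertex)
{
    GLuint vbo = 0;
    glGenBuffers(1, &vbo);              // create the VBO object
    glBindBuffer(GL_ARRAY_BUFFER, vbo); // its type is GL_ARRAY_BUFFER
    // Allocate a data store for the bound VBO and copy the vertex data into
    // it; GL_STATIC_DRAW tells the driver the store is initialized only once.
    glBufferData(GL_ARRAY_BUFFER, sizeAllVertex, vertexData, GL_STATIC_DRAW);
    return vbo;
}
```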
Because of the large number of terrain vertices and the current usage of video memory, allocating a large contiguous storage space for a VBO may fail; therefore, data slicing and reuse are adopted: a large block of data is divided into several small blocks of equal capacity, which are then transmitted to the video memory and finally rendered. A small data space is allocated in memory and recycled, realizing memory reuse. Before writing data into the video memory, several VBOs are allocated in advance, and the small data blocks are written into the corresponding VBOs; this both ensures that all vertex data is written into VBOs and avoids the failure of large video memory allocations.
As shown in fig. 3, specifically: setting the capacity of a single data block to be 6MB, and calculating the number of rows of the vertexes which can be accommodated according to the capacity of the data block;
calculating the number of required data blocks according to the total number of the vertexes, and creating VBOs with the same number;
memory space is allocated according to the capacity of the data block, and the vertex array is traversed;
sequentially reading the vertexes containing the row numbers, calculating the vertexes and transmitting the vertexes.
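A sketch of this slicing-and-reuse scheme under stated assumptions (the Vertex layout and the fillVertices callback, which stands in for the per-row vertex computation of step S2, are illustrative; the 6 MB block size follows the text):

```cpp
#include <GL/glew.h>
#include <algorithm>
#include <functional>
#include <vector>

struct Vertex { float pos[3], normal[3], texCoord; }; // illustrative layout

// Sketch: split the vertex grid into fixed-capacity blocks, one VBO per
// block, reusing a single staging buffer in system memory for each pass.
void uploadSliced(size_t rows, size_t cols,
                  const std::function<void(std::vector<Vertex>&,
                                           size_t, size_t)>& fillVertices,
                  std::vector<GLuint>& vbos)
{
    const size_t blockBytes   = 6 * 1024 * 1024;          // single block: 6 MB
    const size_t bytesPerRow  = cols * sizeof(Vertex);
    const size_t rowsPerBlock = blockBytes / bytesPerRow; // rows a block can hold
    const size_t blockCount   = (rows + rowsPerBlock - 1) / rowsPerBlock;

    vbos.resize(blockCount);
    glGenBuffers(static_cast<GLsizei>(blockCount), vbos.data());
    std::vector<Vertex> staging(rowsPerBlock * cols);     // reused each pass
    for (size_t b = 0; b < blockCount; ++b) {
        const size_t firstRow = b * rowsPerBlock;
        const size_t rowCount = std::min(rowsPerBlock, rows - firstRow);
        fillVertices(staging, firstRow, rowCount);        // compute this slice
        glBindBuffer(GL_ARRAY_BUFFER, vbos[b]);
        glBufferData(GL_ARRAY_BUFFER,
                     static_cast<GLsizeiptr>(rowCount * bytesPerRow),
                     staging.data(), GL_STATIC_DRAW);
    }
}
```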
S4, creating a shader: generate the source code of the vertex shader and fragment shader using the GLSL language, and create the corresponding executable logic units in the GPU;
S5, rendering preparation and rendering: before rendering, the corresponding parameters are passed to the shaders and the manner of parsing the VBO data is passed to the vertex shader; the drawing instruction glDrawElements is called for rendering, and the OpenGL rendering module executes the vertex shader and fragment shader respectively and outputs the rendering result to the window.
Further, S4, creating a shader includes:
the glCreateShader and glCreateProgram functions create the shaders and the shader program (compile status can be checked with glGetShaderiv), the source code of the vertex shader and fragment shader is set through the glShaderSource function, and the glCompileShader function compiles the source code;
the glAttachShader function attaches the vertex shader and fragment shader to the shader program, which is then linked through the glLinkProgram function.
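The step can be sketched with the standard OpenGL calls the text refers to (error checking via glGetShaderiv/glGetProgramiv omitted for brevity):

```cpp
#include <GL/glew.h>

// Sketch of step S4: compile both shaders and link them into one program.
GLuint buildProgram(const char* vertexSrc, const char* fragmentSrc)
{
    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vs, 1, &vertexSrc, nullptr);   // set vertex shader source
    glCompileShader(vs);                          // compile it

    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 1, &fragmentSrc, nullptr); // set fragment shader source
    glCompileShader(fs);

    GLuint program = glCreateProgram();           // create the shader program
    glAttachShader(program, vs);                  // attach both shaders
    glAttachShader(program, fs);
    glLinkProgram(program);                       // link the program
    return program;                               // executable logic unit on the GPU
}
```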
S5, rendering preparation and rendering: the view angle, illumination position, and so on change within the scene, so the corresponding parameters are passed to the shaders before rendering. The vertex shader does not know the structure of the data in the VBO, so it must be told how to parse the VBO data. The drawing instruction glDrawElements is then called for rendering; OpenGL executes the vertex shader and fragment shader respectively and outputs the rendering result to the window.
Further, OpenGL uses the Phong lighting model (Phong Lighting Model), whose main structure consists of three elements: ambient (Ambient), diffuse (Diffuse), and specular (Specular) lighting. The illumination color is white light, and the current application scene only adopts the ambient and diffuse lighting effects, without specular lighting. The intensities of the ambient light and diffuse reflected light can be set.
The illumination calculation method is as follows. Input variables: the light color lightColor and position lightPos, the world position worldPos and normal Normal of the fragment, the ambient intensity AmbientIntensity, and the diffuse intensity DiffuseIntensity. The ambient light is calculated as ambientColor = lightColor × AmbientIntensity; the light direction as lightDir = normalize(lightPos - worldPos); the diffuse factor as diffuseFactor = dot(Normal, lightDir) (vector dot product); the diffuse light as diffuseColor = lightColor × diffuseFactor × DiffuseIntensity; and the output color as fragColor = colorObj × (ambientColor + diffuseColor).
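A fragment-shader sketch of the ambient-plus-diffuse computation (GLSL embedded as a C++ string literal; the uniform and varying names mirror the formula variables above and are assumptions, as is sourcing colorObj from the one-dimensional color table):

```cpp
// Sketch: GLSL fragment shader implementing the ambient + diffuse formulas.
const char* kTerrainFragmentShader = R"(
#version 120
varying vec3  Normal;            // interpolated vertex normal
varying vec3  WorldPos;          // interpolated world position
varying float TexCoord;          // vertex height mapped into [0, 1]
uniform vec3  lightPos;          // light position
uniform vec3  lightColor;        // white light
uniform float ambientIntensity;  // settable ambient intensity
uniform float diffuseIntensity;  // settable diffuse intensity
uniform sampler1D colorTable;    // one-dimensional elevation color table

void main()
{
    vec3  ambientColor  = lightColor * ambientIntensity;
    vec3  lightDir      = normalize(lightPos - WorldPos);
    float diffuseFactor = max(dot(normalize(Normal), lightDir), 0.0);
    vec3  diffuseColor  = lightColor * diffuseFactor * diffuseIntensity;
    vec3  colorObj      = texture1D(colorTable, TexCoord).rgb;
    gl_FragColor = vec4(colorObj * (ambientColor + diffuseColor), 1.0);
}
)";
```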
A three-dimensional global camera controller:
it supports three different viewing angles: the free viewing angle (first-person viewing angle), the tracking viewing angle, and the earth viewing angle; at the same time, the digital earth can be rotated about an arbitrary vector through the origin by a mouse drag operation.
The free view angle (namely, a first person view angle) refers to an observation view angle which realizes any degree of freedom in a scene based on three-dimensional digital earth, and the observation position of a camera and the sight line direction of the camera can be freely set. Wherein the line of sight direction of the camera is determined by two pose attributes: the camera is horizontally yaw-wise and pitch-wise.
Tracking viewing angle: taking the appointed observation point as an observation center of the camera (the observation center point is appointed through an attribute interface of the camera: geographic position); the position of the camera rotates around the observation center in the horizontal direction and the pitching direction; and the distance from the camera position to the observation center can be adjusted, so that the observation center can be tracked and observed through different visual angles and distances (typical application is that models such as fighter plane and the like are tracked and observed).
Earth viewing angle: the earth viewing angle is a special case of the tracking viewing angle, namely the tracking viewing angle with the earth's sphere center as the observation point; in earth view mode, the relationship between the geographic position and the attitude of the camera is: the longitude (Long) of the camera stays consistent with the horizontal deflection angle (Heading) of the camera, and the latitude (Lat) of the camera stays consistent with the pitch angle (Pitch) of the camera.
As shown in fig. 4, the workflow of the three-dimensional global camera controller includes:
reading geographic position and attitude parameters of a camera, and judging the current camera view angle;
if the camera is in the free view angle, calculate the position of the camera in the world coordinate system according to the geographic position of the camera, calculate the sight direction and Up direction of the camera according to the attitude parameters of the camera, and finally correct the camera coordinate system according to the quaternion of the current scene to obtain the current view matrix;
if the view angle is the tracking view angle, calculate the position of the camera in the world coordinate system according to the geographic position of the camera, calculate the sight direction and Up direction of the camera according to the attitude parameters of the camera, correct the camera coordinate system according to the quaternion of the current scene, and finally translate the camera coordinate system in the opposite direction along the Center vector according to the tracking distance parameter D to obtain the current view matrix;
and if the view angle is the earth view angle, restore the geographic position and attitude of the camera to the initial state, restore the state of the three-dimensional digital earth to the initial state, then construct a unit quaternion and update it according to the geographic position and attitude information of the camera, assign it as the current scene quaternion, and finally correct the camera coordinate system according to the north-south pole locking algorithm to obtain the current view matrix.
Based on an optimized coordinate system conversion algorithm, the mutual conversion of a spherical coordinate system and a world coordinate system is realized so as to support: setting the observation position of the camera according to the longitude and latitude heights; setting any component in the scene according to the longitude and latitude height comprises the following steps: a three-dimensional graphic component, a three-dimensional model component, and a three-dimensional extension of a two-dimensional graphic component.
As shown in fig. 5, in the rendering process of OpenGL, all points of the three-dimensional graphic assembly to be rendered in the 3D space are sequentially subjected to a series of matrix transformations, converted into a 2D image, and rendered onto a screen. The matrix transformation order of OpenGL is:
1. the object coordinates (Object Coordinates) form the local coordinate system of the object, with an initial position and orientation before any transformation. In the OpenGL fixed pipeline, objects are typically transformed by glRotatef(), glTranslatef(), and glScalef() to rotate, translate, and scale them; an object can also be rotated about an arbitrary vector through the origin by converting a quaternion into a matrix and applying it with glMultMatrixf().
2. the eye coordinates (Eye Coordinates) are obtained by multiplying the object coordinates by the model-view matrix (GL_MODELVIEW matrix), i.e., object space is transformed into eye space through the model-view matrix. In OpenGL, the model-view matrix is the combination of the model matrix and the view matrix (MView · MModel), where the model matrix transforms objects from object space into world space and the view matrix transforms world space into eye space.
3. clipping coordinates (Clip Coordinates): the eye coordinates are transformed by the projection matrix (GL_PROJECTION matrix) to obtain the clipping coordinates. The projection matrix defines the view volume and the way vertex data is projected (perspective projection is used here). The coordinate system is called the clipping coordinate system because the transformed vertex data (x, y, z) may be clipped after comparison with ±w.
4. The normalized device coordinates (Normalized Device Coordinates) are obtained by dividing the clipping coordinates by w (normalization coefficient), a process called perspective division (perspective division). The coordinates are similar to window coordinates or screen coordinates, except that they have not been translated and scaled into screen pixels. The data range on all 3 coordinate axes is scaled to between-1 and 1.
5. window coordinates (Window Coordinates) are obtained from NDC via the viewport transformation; the NDC is simply translated and scaled onto the rendering screen. In the OpenGL pipeline, the window coordinates are then passed to the rasterization stage as fragments. The glViewport() function sets the extent of the rendering region, and glDepthRange() specifies the z-value range of the window coordinates; the window coordinates are calculated from the data provided by these two functions, e.g., glViewport(x, y, w, h) and glDepthRange(n, f).
In the matrix transformation process of OpenGL, the core is 4 matrix transformations, which are called MVPW matrix together, namely model matrix, view matrix, projection matrix and window matrix.
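The full chain can be condensed into a short sketch (glm assumed as the math library; the window y-axis convention follows glViewport):

```cpp
#include <glm/glm.hpp> // assumed math library

// Sketch of the MVPW chain: object -> eye -> clip -> NDC -> window.
glm::vec2 objectToWindow(const glm::vec4& objectPos,
                         const glm::mat4& model, const glm::mat4& view,
                         const glm::mat4& projection,
                         float vpX, float vpY, float vpW, float vpH)
{
    glm::vec4 eye  = view * model * objectPos; // eye coordinates (MView * MModel)
    glm::vec4 clip = projection * eye;         // clip coordinates, compared with +/- w
    glm::vec3 ndc  = glm::vec3(clip) / clip.w; // perspective division -> [-1, 1]
    // Viewport transform, the glViewport(x, y, w, h) equivalent:
    return glm::vec2(vpX + (ndc.x + 1.0f) * 0.5f * vpW,
                     vpY + (ndc.y + 1.0f) * 0.5f * vpH);
}
```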
As shown in fig. 6, the 3D rotation of the digital earth by mouse drag, i.e., rotating the digital earth about an arbitrary vector through the origin, is implemented based on quaternions; the operation flow is as follows:
1. the dragging process of the right button of the mouse starts from the press of the right button, and responds to a MousePress event of the mouse, and at the moment, the current viewport coordinate position of the cursor is read and recorded: viewport_Pos0;
2. after the press, the mouse moves until the drag ends; during this process the MouseMove event is handled several times, denoted N times. The MousePress event handled when the drag starts can be regarded as the 0th MouseMove event, followed in order by the 1st MouseMove event, ..., up to the Nth MouseMove event. The entire process thus contains (N+1) MouseMove events;
3. when the i-th MouseMove event is handled (0 <= i <= N), record the current viewport coordinate position of the cursor: ViewPort_Posi;
4. the core of the quaternion-based 3D rotation algorithm is executed between two adjacent MouseMove events, i.e., the i-th and j-th events (j = i + 1, i >= 0, j <= N).
As shown in fig. 7, the quaternion-based rotation method includes:
1. initializing a scene quaternion as a unit quaternion;
2. when the MouseMove event of the mouse is responded for the ith time, the position coordinate of the viewport of the current mouse cursor is obtained and recorded as follows: viewPort_Posi;
3. when the j-th response mouse MouseMove event (j=i+1), the current position coordinates of the viewport of the mouse cursor are obtained and recorded as follows: viewPort_Posj;
4. and according to ViewPort_Posi and ViewPort_Posj, updating the scene quaternion by calculation, and re-rendering the whole scene.
During platform operation there exists a globally unique quaternion (the scene quaternion), initialized at startup as the unit quaternion; every time the scene is rendered, the scene quaternion applies the corresponding matrix transformation to the positions of all three-dimensional graphic components in the scene (including the digital earth), so that the whole scene can rotate by any angle about any vector through the origin (the earth's sphere center) following the right-button mouse drag operation.
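Between two adjacent MouseMove events, the scene quaternion update can be sketched as a classic trackball step (glm assumed; mapping the two viewport positions onto a virtual sphere around the origin is left to an unspecified helper, and its unit-vector results are passed in here):

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp> // assumed quaternion support
#include <cmath>

// Sketch: rotate about the axis given by the cross product of the two
// sphere points for ViewPort_Posi and ViewPort_Posj, then accumulate the
// delta rotation into the globally unique scene quaternion.
// p0 and p1 are assumed to be unit vectors on the virtual sphere.
glm::quat updateSceneQuaternion(const glm::quat& sceneQuat,
                                const glm::vec3& p0,  // sphere point for Pos_i
                                const glm::vec3& p1)  // sphere point for Pos_j
{
    glm::vec3 axis = glm::cross(p0, p1);      // rotation axis through the origin
    if (glm::dot(axis, axis) < 1e-12f)
        return sceneQuat;                     // no movement, nothing to do
    float angle = std::acos(glm::clamp(glm::dot(p0, p1), -1.0f, 1.0f));
    glm::quat delta = glm::angleAxis(angle, glm::normalize(axis));
    return delta * sceneQuat;                 // new scene quaternion
}
```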
Further, S is the south pole of the three-dimensional digital earth, and the coordinate position of the south pole in the OpenGL world coordinate system is marked as S; n is the north pole of the three-dimensional digital earth, and the coordinate position of the north pole in the OpenGL world coordinate system is marked as N; the SN vector, which is a vector taking the south pole S of the three-dimensional digital earth as a starting point and the north pole N as an end point, is recorded as: SN vector.
The north-south pole locking is as follows: under the earth viewing angle, when an included angle exists between the Up vector of the camera and the SN vector of the three-dimensional digital earth, it must be corrected to 0 so that the directions of the two vectors stay consistent; this process is called north-south pole locking.
As shown in fig. 8 and 9, the south pole S and north pole N of the three-dimensional digital earth are located in the negative and positive directions of the Y axis of the OpenGL world coordinate system, respectively, so the SN vector is parallel to the Y axis and points in its positive direction; the origin of the camera coordinate system is located in the positive direction of the Z axis, and the Up vector is parallel to and co-directional with the positive Y axis. Thus, in the initial state of the earth viewing angle, the Up vector of the camera is parallel to and co-directional with the SN vector of the three-dimensional digital earth, with an included angle of 0, so no north-south pole locking correction is needed. During platform operation, the digital earth is rotated about an arbitrary vector through the origin by mouse drag (3D rotation); after rotation, an included angle may arise between the SN vector and the Y axis of the OpenGL world coordinate system. In the figure above, after dragging the digital earth to rotate about an arbitrary vector through the origin (such as the vector represented by the white dotted line), a certain included angle arises between the SN vector of the digital earth and the Y axis. Since at the earth viewing angle the initial state of the camera coordinate system is as shown (Up vector parallel to and co-directional with the Y axis), any rotation of the digital earth by mouse drag may create an included angle between the camera's Up vector and the SN vector, which must be corrected to keep the north and south poles locked.
As shown in fig. 10, the algorithm for north-south locking includes:
according to the scene quaternion, respectively calculating the world coordinate system positions of the south pole and the north pole of the earth;
according to the world coordinate system positions of the south pole and the north pole of the earth, calculate the SN vector and its projection vector onto the XoY plane of the OpenGL world coordinate system, denoted vProj_XoY;
calculate the included angle between the projection vector vProj_XoY and the Y axis of the OpenGL world coordinate system, denoted gamma;
rotate the camera's pUp and pArm points around the Center vector by the rotation angle gamma, calculate and update pUp and pArm to obtain the view matrix, and render the current frame.
As shown in fig. 11, the projection of the SN vector on the XoY plane of the OpenGL world coordinate system is vProj_XoY, whose included angle with the Y axis is gamma; rotating the camera coordinate system by gamma degrees around the Center vector yields the new camera coordinate system after north-south pole locking correction, in which the Up vector and Arm vector are, respectively, Up2 and Arm2.
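A sketch of computing the correction angle gamma (glm assumed; the sign convention for which way to rotate around the Center vector is an assumption):

```cpp
#include <glm/glm.hpp>
#include <cmath>

// Sketch: project the SN vector onto the XoY plane (z = 0) and measure its
// angle gamma to the world Y axis; the camera coordinate system is then
// rotated by gamma about its Center (view-direction) vector.
float northSouthLockAngle(const glm::vec3& southPos, const glm::vec3& northPos)
{
    glm::vec3 sn = northPos - southPos;        // SN vector
    glm::vec3 proj(sn.x, sn.y, 0.0f);          // vProj_XoY
    if (glm::dot(proj, proj) < 1e-12f)
        return 0.0f;                           // SN parallel to Z: no correction
    float cosGamma = glm::dot(glm::normalize(proj),
                              glm::vec3(0.0f, 1.0f, 0.0f));
    float gamma = std::acos(glm::clamp(cosGamma, -1.0f, 1.0f));
    return (proj.x > 0.0f) ? -gamma : gamma;   // rotation direction: assumed convention
}
```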
Further, the platform provides a series of three-dimensional container components that inherit the container base class of VAPS XT, giving them the ability to mount child nodes. They include: the geographic location container, 3D coordinate transformation container, 3D clipping container, and 3D instance construction container.
Wherein, a geographic location container (Simple3dGroup) provides a unified geographic location attribute for its child nodes. A child node may access and apply the geographic location attribute of the geographic location container.
As shown in fig. 12, the steps implemented by the geographic location container include:
the three-dimensional graphic component (control) acquires a father node pointer, marks the father node pointer as pParent, and judges whether the pParent is Simple3dGroup;
if yes, the geographic position data (longitude, latitude and altitude) are read from the father node and are used as geographic position attributes of the current three-dimensional graphic assembly and recorded as LLA;
if not, reading geographic position data from an attribute interface of the three-dimensional graphic assembly, and recording the geographic position data as the geographic position attribute of the current three-dimensional graphic assembly as LLA;
and calculating the position of the three-dimensional graphic component in the OpenGL world coordinate system according to the geographic position attribute of the three-dimensional graphic component.
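The parent lookup can be sketched with stand-in types (the real VAPS XT node API differs; everything here is illustrative):

```cpp
// Illustrative stand-ins for VAPS XT node types.
struct LLA { double longitude = 0, latitude = 0, altitude = 0; };

struct Node {
    virtual ~Node() = default;
    Node* parent = nullptr;
    LLA   ownGeoAttribute;  // the component's own geographic attribute interface
};

struct Simple3dGroup : Node {
    LLA geoPosition;        // unified geographic location attribute for children
};

// Sketch of the flow in FIG. 12: prefer the parent container's LLA,
// otherwise fall back to the component's own attribute interface.
LLA resolveGeoPosition(const Node& component)
{
    if (auto* group = dynamic_cast<const Simple3dGroup*>(component.parent))
        return group->geoPosition;  // read longitude/latitude/altitude from pParent
    return component.ownGeoAttribute;
}
```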
Wherein, a 3D coordinate transformation container (Transform3dGroup) provides its child nodes with unified offset in the north, west, and up directions, as well as attitude and size scaling control (collectively: Transform data).
As shown in fig. 13, the implementation flow of acquiring and applying transform data by child nodes under the 3D coordinate transformation container includes:
the three-dimensional graphic component (control) acquires a father node pointer, marks the father node pointer as pParent, acquires the father node pointer of pParent as pParent2, and judges whether the pParent2 is Simple3dGroup;
If yes, reading geographical position data (longitude, latitude and altitude) from the pParent2 node, and recording the geographical position data as geographical position attributes of the current three-dimensional graphic assembly as LLA;
if not, reading geographic position data from an attribute interface of the three-dimensional graphic assembly, and recording the geographic position data as the geographic position attribute of the current three-dimensional graphic assembly as LLA;
according to the geographic position attribute of the three-dimensional graphic component, calculate its position in the OpenGL world coordinate system, denoted curPos, and determine whether pParent is a Transform3dGroup;
if yes, acquire the Transform data from the pParent node and use it as the Transform data of the current three-dimensional graphic component;
if not, use the Transform data read from the component's own Transform attribute interface as the Transform data of the current three-dimensional graphic component;
correct curPos according to the offset values in the north, west, and up directions in the component's Transform data, and rotate the three-dimensional graphic component according to the attitude values in the Transform data;
and correspondingly scaling the three-dimensional graphic assembly according to the size scaling value in the three-dimensional graphic assembly transformation data.
The 3D clipping container (ClippingRegion 3D) defines a rectangular region, and only the part located in the rectangular region is rendered for its child nodes, and the rest of the child nodes are not displayed.
The realization steps of the clipping container comprise: call the OpenGL interface glEnable(GL_SCISSOR_TEST) to enter clip rendering mode; then call the OpenGL interface glScissor(x, y, w, h) to determine the clipping range; render all components mounted under the clipping container in depth-traversal order; and finally call the OpenGL interface glDisable(GL_SCISSOR_TEST) to exit clip rendering mode.
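A sketch of this render pass (the child-rendering callback is illustrative):

```cpp
#include <GL/gl.h>

// Sketch of the ClippingRegion3D pass: scissor on, clip rect, children, off.
void renderClipped(GLint x, GLint y, GLsizei w, GLsizei h,
                   void (*renderChildrenDepthFirst)())
{
    glEnable(GL_SCISSOR_TEST);     // enter clip rendering mode
    glScissor(x, y, w, h);         // set the rectangular clipping region
    renderChildrenDepthFirst();    // render mounted children in depth-traversal order
    glDisable(GL_SCISSOR_TEST);    // exit clip rendering mode
}
```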
A 3D instance construction container (Instance3dGroup) provides a multiple child node instance construction function. The implementation steps comprise: specify the instance type through the attribute interface of the 3D instance construction container, specify the number of instances through the same attribute interface, and render each instance in turn according to the specified type and number.
Three-dimensional model component:
the lib3ds library is a complete software library for managing 3DS files and can replace Autodesk's 3DS file toolkit; to improve rendering efficiency, a display-list mode is adopted: the parts to be drawn are placed in a display list (glGenLists), and the list is then precompiled. When display is needed, the display list is called directly; since it has been precompiled, it can be displayed immediately, which improves efficiency.
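A sketch of the display-list precompilation (the drawModel callback, standing in for the lib3ds-driven draw calls, is illustrative):

```cpp
#include <GL/gl.h>

// Sketch: record the model's draw calls once into a display list
// (GL_COMPILE precompiles without executing), then replay it per frame.
GLuint compileModelList(void (*drawModel)())
{
    GLuint list = glGenLists(1);  // allocate one display list
    glNewList(list, GL_COMPILE);  // begin precompiling
    drawModel();                  // issue the draw commands for the 3ds parts
    glEndList();
    return list;
}

// At display time the precompiled list is called directly:
//   glCallList(list);
```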
Three-dimensional graphic assembly:
the present invention can provide various three-dimensional graphic component types to create specific three-dimensional graphic component instances, including a spherical cover, multi-section wall, view cone, scanning surface, ground-attached ring, and so on, as well as three-dimensional extensions of the built-in two-dimensional graphic components of the VAPS XT platform. All three-dimensional graphic components provide geographic position and attitude attributes, and use the same color and brush attributes as VAPS XT.
A spherical cover (SphereCover) is used for characterizing a coverage range in a scene. It has the attributes of radius, horizontal span, vertical span, color, transparency, mesh visibility, and so on. The rendering flow is as follows:
before rendering the current frame, checking the corresponding flag to determine whether the vertex data set needs updating; if so, updating the vertex data set and then calculating the spherical cover's model matrix from its current geographic position (taken from a geographic position container parent node or its own attribute interface); if not, calculating the model matrix directly from the current geographic position;
calculating and updating the spherical cover's model matrix according to its current Transform data, and determining whether the grid currently needs to be rendered;
if so, rendering the spherical cover's grid from its vertex data set and then rendering its surface from the same vertex data set; if not, rendering the surface directly from the vertex data set.
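An illustrative sketch of this flow; SphereCover, Mat4, and the helper methods are hypothetical names with stubbed bodies, standing in for the patent's actual types.

    struct Mat4 {};  // stand-in 4x4 matrix type

    struct SphereCover {
        bool vertexDataDirty = true;  // "vertex set needs update" flag
        bool showGrid = true;         // grid-visibility attribute

        void rebuildVertexSet() {}                       // regenerate sphere patch
        Mat4 modelMatrixFromGeoPosition() { return {}; } // geo container or own attrs
        Mat4 applyTransform(Mat4 m) { return m; }        // offset/attitude/scale
        void renderGrid(const Mat4&) {}                  // wireframe draw pass
        void renderSurface(const Mat4&) {}               // surface draw pass

        void renderFrame() {
            if (vertexDataDirty) {        // checked before rendering each frame
                rebuildVertexSet();
                vertexDataDirty = false;
            }
            Mat4 model = applyTransform(modelMatrixFromGeoPosition());
            if (showGrid)                 // grid pass only when required
                renderGrid(model);
            renderSurface(model);         // surface is always rendered
        }
    };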
Multi-section wall (MultiSectionWall): used to represent military elements such as the electronic fence of a battlefield situation. It has attributes for the set of wall-segment start positions, wall height, color, transparency, and the like. The rendering flow is as follows:
before rendering the current frame, checking the corresponding flag to determine whether the vertex data set needs updating; if so, calculating the position of each wall segment in the OpenGL world coordinate system from the longitude and latitude of its start and end points, denoted PosSet; reading the wall height data, denoted Height; updating the vertex data set from PosSet and Height; then determining whether the wireframe needs to be rendered: if so, rendering the wireframe from the multi-section wall's vertex data set and then rendering the surface from the same vertex data set; if not, rendering the surface directly from the vertex data set;
if the vertex data set does not need updating, proceeding directly to the wireframe check and the corresponding steps above.
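A sketch of building the wall vertex set: each base point is extruded upward by Height, so consecutive bottom/top pairs form a triangle strip per wall run. geoToWorld() is a hypothetical longitude/latitude/altitude to OpenGL world-space conversion.

    #include <utility>
    #include <vector>

    struct Vec3 { double x, y, z; };

    // assumed conversion from geographic to OpenGL world coordinates
    Vec3 geoToWorld(double lonDeg, double latDeg, double altMeters);

    std::vector<Vec3> buildWallVertices(
            const std::vector<std::pair<double, double>>& basePts, double height) {
        std::vector<Vec3> verts;
        for (const auto& [lon, lat] : basePts) {
            verts.push_back(geoToWorld(lon, lat, 0.0));    // bottom edge vertex
            verts.push_back(geoToWorld(lon, lat, height)); // top edge vertex
        }
        return verts;
    }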
View cone (Frustum): used to represent military elements such as the laser scan of a battlefield situation. It has position, attitude, opening angle, aspect ratio, far clipping plane, near clipping plane, and similar attributes. The rendering flow is similar to that of the spherical cover and is not repeated here.
Scan surface (ScanFace): used to represent military elements such as the scan range of a battlefield situation. It has position, scan angle, scan range angle, and similar attributes. The rendering flow is similar to that of the spherical cover and is not repeated here.
Ground-attached ring (GradientCircle): used to represent military elements such as the ground shock wave of a battlefield situation. It has geographic position, outer ring opening angle, inner ring opening angle, and similar attributes. The rendering flow comprises the following steps:
before rendering the current frame, checking the corresponding flag to determine whether the vertex data set needs updating; if so, updating the vertex data set and calculating the model matrix from the ground-attached ring's current geographic position; if not, calculating the model matrix directly from the current geographic position;
and then rendering from the ground-attached ring's vertex data set.
The invention also comprises two-dimensional graphic components and algorithm components. Based on the existing plug-in framework for two-dimensional graphic components of the VAPS XT platform, several two-dimensional graphic components are added to better meet the development requirements of a modern avionics two-dimensional display-control interface. The extended two-dimensional components comprise an envelope circle, an envelope fan, an envelope polar polygon, a gradient multi-section line, and an equidistant polygon, where the three envelope components are based on a unified envelope algorithm strategy.
As shown in FIG. 14, the device also supports 2D/3D display fusion: two-dimensional graphic components and three-dimensional graphic components are displayed simultaneously, and display of VAPS XT graphic components remains compatible. The two-dimensional graphic components are rendered with orthographic projection and the three-dimensional graphic components with perspective projection, which achieves the compatible display effect.
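A sketch of that fusion idea: draw the 3D components under a perspective projection, then overlay the 2D components under an orthographic projection. GLM supplies the matrices; render3DComponents/render2DComponents are hypothetical stand-ins for the component traversals, and the field of view and clip distances are illustrative assumptions.

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    void render3DComponents(const glm::mat4& proj); // assumed 3D traversal
    void render2DComponents(const glm::mat4& proj); // assumed 2D traversal

    void renderFused(int width, int height) {
        float aspect = float(width) / float(height);
        // 3D graphic components: perspective projection
        glm::mat4 persp = glm::perspective(glm::radians(45.0f), aspect, 0.1f, 1.0e7f);
        render3DComponents(persp);
        // 2D graphic components: orthographic overlay in pixel space
        glm::mat4 ortho = glm::ortho(0.0f, float(width), 0.0f, float(height));
        render2DComponents(ortho);
    }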
The foregoing is merely a preferred embodiment of the invention. It should be understood that the invention is not limited to the forms disclosed herein, and these are not to be construed as excluding other embodiments; the invention is capable of use in various other combinations, modifications, and environments, and of changes within the scope of the inventive concept as taught herein or through the skill or knowledge of the relevant art. Modifications and variations that do not depart from the spirit and scope of the invention are intended to fall within the scope of the appended claims.

Claims (8)

1. A three-dimensional avionics display and control interface device, characterized in that: it comprises a three-dimensional digital earth, a three-dimensional graphic assembly, a three-dimensional container assembly, and a three-dimensional model component;
the three-dimensional digital earth: used to generate three-dimensional terrain in a modern OpenGL rendering mode and to realize rotation of the digital earth through three different observation view angles;
the three-dimensional graphic assembly: used to provide multiple types of three-dimensional graphics and render them for display in the three-dimensional digital earth;
the three-dimensional container assembly: container base classes inherited from VAPS XT, providing three-dimensional containers with the capability of mounting child nodes;
the three-dimensional model component: used to manage 3ds files through lib3ds, place the parts to be drawn in a display list via glGenLists, and precompile the list;
the three-dimensional digital earth comprises a modern OpenGL rendering module, a three-dimensional terrain generating module and a three-dimensional global camera controller;
the modern OpenGL rendering module is used to receive a group of 3D coordinates and convert them into 2D pixels displayed on the screen;
the three-dimensional terrain generation module is used to generate terrain data and render it based on the OpenGL rendering module, textures, and the terrain-file loading library GDAL;
the three-dimensional global camera controller is used to rotate the digital earth by supporting three different observation view angles, namely a free view angle, a tracking view angle, and an earth view angle, and, via mouse dragging, about an arbitrary vector passing through the origin;
the workflow of the three-dimensional global camera controller comprises:
reading the camera's geographic position and attitude parameters and determining the current camera view angle;
if the view angle is the free view angle, calculating the camera's position in the world coordinate system from its geographic position, calculating the camera's line-of-sight direction and Up direction from its attitude parameters, and finally correcting the camera coordinate system according to the current scene quaternion to obtain the current view matrix;
if the view angle is the tracking view angle, calculating the camera's position in the world coordinate system from its geographic position, calculating the camera's line-of-sight direction and Up direction from its attitude parameters, correcting the camera coordinate system according to the current scene quaternion to obtain an intermediate view matrix, and finally translating the camera coordinate system in the opposite direction along the Center vector by the tracking distance parameter D to obtain the current view matrix;
if the view angle is the earth view angle, restoring the camera's geographic position and attitude to the initial state and restoring the three-dimensional digital earth to its initial state, then constructing a unit quaternion, updating it from the camera's geographic position and attitude information, assigning it as the current scene quaternion, and finally correcting the camera coordinate system according to the north-south pole locking algorithm to obtain the current view matrix;
The algorithm for north-south pole locking comprises the following steps:
according to the scene quaternion, respectively calculating the world coordinate system positions of the south pole and the north pole of the earth;
calculating the SN vector from the world coordinate system positions of the south and north poles, and calculating the projection of the SN vector onto the XoY plane of the OpenGL world coordinate system, denoted vProj_XoY;
calculating the positive included angle between the projection vector vProj_XoY and the Y axis of the OpenGL world coordinate system, denoted gamma;
rotating the camera's pUp point and pAm point around the Center vector by the rotation angle gamma, then calculating and updating pUp and pAm to obtain the final view matrix and render the current frame.
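An illustrative sketch of the pole-locking angle computation described above: project the south-to-north pole vector onto the world XoY plane and measure its signed angle to the Y axis; the camera is then rolled by that angle. Vec3 is a hypothetical math type, not a name from the claim.

    #include <cmath>

    struct Vec3 { double x, y, z; };

    double poleLockAngle(const Vec3& southPole, const Vec3& northPole) {
        // SN vector from the south pole to the north pole
        Vec3 sn { northPole.x - southPole.x,
                  northPole.y - southPole.y,
                  northPole.z - southPole.z };
        // projection onto the OpenGL XoY plane: drop the z component
        double px = sn.x, py = sn.y;
        // signed angle gamma between vProj_XoY and the +Y axis
        return std::atan2(px, py);
    }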
2. A three-dimensional avionics display and control interface device according to claim 1, characterized in that: the modern OpenGL rendering module includes:
vertex shader unit: converts the 3D coordinates, passed in as an array, into other 3D coordinates and passes them to the primitive assembly unit;
primitive assembly unit: assembles all points into the specified primitive shape and passes it to the geometry shader unit;
geometry shader unit: takes the series of vertices that form a primitive shape as input, can generate new vertices to construct new primitives of other shapes, and passes them to the rasterization unit;
rasterization unit: maps the primitives to the corresponding pixels on the final screen, generating fragments for use by the fragment shader unit;
fragment shader unit: calculates the final color of a pixel from the 3D scene data it contains;
test and blending unit: checks the fragment's depth and stencil values to determine the pixel's position, checks the alpha value, and blends the objects.
3. A three-dimensional avionics display and control interface device in accordance with claim 2, wherein: the three-dimensional terrain rendering interaction flow comprises the following steps:
reading the terrain file: the CPU loads the terrain file from disk, parses it into terrain data, and generates the original terrain data in memory;
calculating vertex data: calculating the vertex attributes, comprising three-dimensional world coordinates, normals, and texture coordinates, and generating vertex attribute data;
transmitting vertex data: calculating the required space, creating a VBO object with glGenBuffers, setting the VBO type to GL_ARRAY_BUFFER, and storing the vertex data in the VBO for use by the vertex shader; the glBufferData function allocates a block of data storage for the currently bound VBO and writes the vertex data from memory into that space; the last parameter of glBufferData is GL_STATIC_DRAW, indicating that the stored contents are initialized only once, which facilitates the GPU's space allocation;
creating shaders: writing the source code of the vertex shader and the fragment shader in the GLSL language and creating the corresponding executable logic units in the GPU;
rendering preparation and rendering: before rendering, the corresponding parameters are passed to the shaders and the parsed VBO data is fed to the vertex shader; the drawing instruction glDrawElements is then called, and the OpenGL rendering module executes the vertex shader and the fragment shader and outputs the rendering result to the window.
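An illustrative sketch of the upload-and-draw path described in claim 3, using only the calls named there (glGenBuffers, glBufferData with GL_STATIC_DRAW, glDrawElements); the GLEW loader is an assumption.

    #include <GL/glew.h>
    #include <vector>

    GLuint uploadTerrainVertices(const std::vector<float>& verts) {
        GLuint vbo = 0;
        glGenBuffers(1, &vbo);              // create the VBO object
        glBindBuffer(GL_ARRAY_BUFFER, vbo); // set its type to GL_ARRAY_BUFFER
        glBufferData(GL_ARRAY_BUFFER,       // allocate storage and copy once
                     verts.size() * sizeof(float),
                     verts.data(),
                     GL_STATIC_DRAW);       // contents initialized only once
        return vbo;
    }

    // later, with the shaders bound and an index buffer set up:
    //   glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, nullptr);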
4. A three-dimensional avionics display and control interface device in accordance with claim 3, wherein: computing vertex data includes:
a1, given the longitude and latitude of the start point, calculating the vertex's longitude and latitude from its two-dimensional coordinates, the vertex's height value being the value at those two-dimensional coordinates in the grid metadata, and then calculating world coordinates by the world-coordinate formula in the camera;
a2, selecting two vertices adjacent to the current vertex, calculating the vectors between each of them and the current vertex, and taking the cross product of the two vectors to obtain the normal vector;
a3, calculating texture coordinates of the vertex comprises the following steps:
generating a one-dimensional color table file with a third-party tool;
creating a one-dimensional texture from the one-dimensional color table file, the width of the texture representing the total number of colors; the texture parameters in memory are written to video memory through the glTexImage1D function; texture coordinates in OpenGL range from 0 to 1, so the color table is mapped onto the range 0 to 1, texture coordinate 0 corresponding to the first color in the color table and 1 to the last;
inputting the current vertex's height value together with the maximum and minimum height values and calculating the vertex's texture coordinate, so that the vertex height is mapped into the range 0 to 1, consistent with the texture coordinates in video memory;
inputting the texture's number and the vertex's texture coordinate, and calling a sampling function in the fragment shader to obtain the vertex's color from the texture.
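A small sketch of the height-to-texture-coordinate mapping in step a3: the vertex height is simply normalized into [0, 1] so it indexes the one-dimensional color table.

    float heightToTexCoord(float h, float hMin, float hMax) {
        if (hMax <= hMin) return 0.0f;      // guard for flat terrain
        return (h - hMin) / (hMax - hMin);  // 0 -> first color, 1 -> last color
    }

    // In the fragment shader the color is then sampled from the 1D texture,
    // e.g. in GLSL: color = texture(colorTable, vTexCoord); with sampler1D colorTable.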
5. A three-dimensional avionics display and control interface device in accordance with claim 3, wherein creating the shaders comprises:
the glCreateShader function creates the shader objects, the glShaderSource function sets the source code of the vertex shader and the fragment shader, and the glCompileShader function compiles the source code;
the glAttachShader function binds the vertex shader and the fragment shader to the shader program, and the glLinkProgram function links the shader program.
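A minimal sketch of the shader-creation sequence in claim 5; the status query via glGetShaderiv is added as ordinary good practice rather than taken from the claim.

    #include <GL/glew.h>

    GLuint buildProgram(const char* vsSrc, const char* fsSrc) {
        auto compile = [](GLenum type, const char* src) -> GLuint {
            GLuint sh = glCreateShader(type);     // create the shader object
            glShaderSource(sh, 1, &src, nullptr); // set the GLSL source code
            glCompileShader(sh);                  // compile the source
            GLint ok = GL_FALSE;
            glGetShaderiv(sh, GL_COMPILE_STATUS, &ok);
            return ok == GL_TRUE ? sh : 0;
        };
        GLuint vs = compile(GL_VERTEX_SHADER, vsSrc);
        GLuint fs = compile(GL_FRAGMENT_SHADER, fsSrc);
        GLuint prog = glCreateProgram();
        glAttachShader(prog, vs); // bind both shaders to the shader program
        glAttachShader(prog, fs);
        glLinkProgram(prog);      // link into an executable logic unit
        return prog;
    }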
6. A three-dimensional avionics display and control interface device in accordance with claim 3, wherein: if the VBO fails to allocate the storage space, a large block of data is segmented into a plurality of small blocks of data with the same capacity by adopting a data segmentation and multiplexing mode, then the small blocks of data are transmitted to a video memory, and finally rendering is carried out.
7. The three-dimensional avionics display and control interface device of claim 6, wherein: the step of data segmentation and multiplexing comprises the following steps:
setting the capacity of a single data block to 6 MB and calculating the number of vertex rows the block can accommodate;
calculating the number of data blocks required from the total number of vertices and creating the same number of VBOs;
allocating memory space according to the data block capacity and traversing the vertex array;
sequentially reading the vertices of the rows each block contains, calculating them, and transmitting them.
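An illustrative sketch of the segmentation scheme in claim 7: fixed 6 MB blocks, with each row of terrain vertices kept whole inside a block; floatsPerRow and the GLEW loader are assumptions.

    #include <GL/glew.h>
    #include <algorithm>
    #include <cstddef>
    #include <vector>

    constexpr std::size_t kBlockBytes = 6 * 1024 * 1024; // single data block: 6 MB

    std::vector<GLuint> uploadInBlocks(const std::vector<float>& verts,
                                       std::size_t floatsPerRow) {
        std::size_t rowBytes     = floatsPerRow * sizeof(float);
        std::size_t rowsPerBlock = kBlockBytes / rowBytes; // rows one block holds
        std::size_t totalRows    = verts.size() / floatsPerRow;
        std::vector<GLuint> vbos;                          // one VBO per block
        for (std::size_t row = 0; row < totalRows; row += rowsPerBlock) {
            std::size_t rows = std::min(rowsPerBlock, totalRows - row);
            GLuint vbo = 0;
            glGenBuffers(1, &vbo);
            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            glBufferData(GL_ARRAY_BUFFER, rows * rowBytes,
                         verts.data() + row * floatsPerRow, GL_STATIC_DRAW);
            vbos.push_back(vbo);
        }
        return vbos;
    }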
8. A three-dimensional avionics display and control interface device according to claim 1, characterized in that: the three-dimensional container assembly comprises a geographic position container, a 3D coordinate transformation container, a 3D clipping container, and a 3D instance construction container; a child node accesses and applies the geographic position attribute of the geographic position container; the 3D coordinate transformation container is used to provide unified offset, attitude, and size scaling control for child nodes in the north, west, and up directions; the 3D clipping container renders, for its child nodes, only the parts located inside the rectangular region; the 3D instance construction container is used to construct multiple child node instances.
CN202110880604.XA 2021-08-02 2021-08-02 Three-dimensional avionics display control interface device Active CN113593027B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110880604.XA CN113593027B (en) 2021-08-02 2021-08-02 Three-dimensional avionics display control interface device

Publications (2)

Publication Number Publication Date
CN113593027A CN113593027A (en) 2021-11-02
CN113593027B (en) 2024-01-02

Family

ID=78253692

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant