CN117911596A - Three-dimensional geographic image boundary rendering method, device, equipment and medium

Three-dimensional geographic image boundary rendering method, device, equipment and medium

Info

Publication number
CN117911596A
Authority
CN
China
Prior art keywords
image
rendering
rendered
area
fuzzy
Prior art date
Legal status
Pending
Application number
CN202410064605.0A
Other languages
Chinese (zh)
Inventor
叶梦轩
周文凯
周淼
林骁
伍学千
杜淑峰
Current Assignee
ZHEJIANG DAHUA SYSTEM ENGINEERING CO LTD
Original Assignee
ZHEJIANG DAHUA SYSTEM ENGINEERING CO LTD
Priority date
Filing date
Publication date
Application filed by ZHEJIANG DAHUA SYSTEM ENGINEERING CO LTD
Priority to CN202410064605.0A
Publication of CN117911596A
Legal status: Pending


Abstract

The application discloses a three-dimensional geographic image boundary rendering method, device, equipment and medium. An electronic device acquires a first image to be rendered, first performs gray-scale rendering on the first image to obtain a second image, then performs blur processing on the second image to obtain a third image, and finally removes the internal solid light-emitting portion and the external edge light-emitting portion from the third image to obtain the rendered target image. Compared with prior schemes, in which building the linear mesh takes long when geographic element boundaries are complex and all processing happens in the shader, rendering efficiency is improved. Moreover, through gray-scale rendering and the removal of the internal solid light-emitting portion and the external edge light-emitting portion, the rendering covers every position of the original image, including the interior and exterior regions, so the rendering effect is better.

Description

Three-dimensional geographic image boundary rendering method, device, equipment and medium
Technical Field
The present application relates to the field of three-dimensional geographic scene rendering technologies, and in particular, to a three-dimensional geographic image boundary rendering method, apparatus, device, and medium.
Background
With the continuous development of computer software and hardware, the requirements of smart-city and digital-twin construction on map visualization keep rising. The geographic scene serves as the carrier, providing a more intuitive and vivid display for all kinds of information. Meanwhile, geographic scenes have expanded from two dimensions to three; three-dimensional geographic scenes are practical for visualizing all kinds of geography-related elements and place higher demands on visual appeal.
In the prior art, a GPU-based method for drawing color-gradient linear map symbols has been proposed; its key design point is to replace lines with a rectangular mesh and to apply the gradient in the shader as a function of each vertex's UV value. The problems with this method are that, when the boundary of a geographic element is complex, constructing the linear mesh takes long and all processing happens in the shader, so efficiency is low; and that its gradient effect depends on the rectangular mesh formed by line-width buffering, so the glow-gradient effect for a polygonal element depends on the line width and cannot cover the interior of the boundary.
Disclosure of Invention
The application provides a three-dimensional geographic image boundary rendering method, device, equipment and medium, to solve the problems of low image rendering efficiency and poor rendering effect in the prior art.
In a first aspect, the present application provides a three-dimensional geographic image boundary rendering method, the method comprising:
acquiring a first image to be rendered, and performing gray-scale rendering on the first image to obtain a second image;
performing blur processing on the second image to obtain a third image;
and removing the internal solid light-emitting portion and the external edge light-emitting portion from the third image to obtain a rendered target image.
In a second aspect, the present application provides a three-dimensional geographic image boundary rendering apparatus, the apparatus comprising:
a gray-scale rendering module, configured to acquire a first image to be rendered and perform gray-scale rendering on the first image to obtain a second image;
an image processing module, configured to perform blur processing on the second image to obtain a third image;
and an image rendering module, configured to remove the internal solid light-emitting portion and the external edge light-emitting portion from the third image to obtain a rendered target image.
In a third aspect, the present application provides an electronic device, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory complete communication with each other through the communication bus;
A memory for storing a computer program;
and the processor is used for realizing the steps of the method when executing the program stored in the memory.
In a fourth aspect, the present application provides a computer readable storage medium having a computer program stored therein, which when executed by a processor, implements the method steps.
The application provides a three-dimensional geographic image boundary rendering method, device, equipment and medium, where the method includes: acquiring a first image to be rendered, and performing gray-scale rendering on the first image to obtain a second image; performing blur processing on the second image to obtain a third image; and removing the internal solid light-emitting portion and the external edge light-emitting portion from the third image to obtain a rendered target image.
The technical scheme has the following advantages or beneficial effects:
The electronic device acquires a first image to be rendered, first performs gray-scale rendering on the first image to obtain a second image, and then performs blur processing on the second image to obtain a third image; finally, the internal solid light-emitting portion and the external edge light-emitting portion are removed from the third image to obtain a rendered target image. Compared with prior schemes, in which building the linear mesh takes long when geographic element boundaries are complex and all processing happens in the shader, rendering efficiency is improved. Moreover, through gray-scale rendering and the removal of the internal solid light-emitting portion and the external edge light-emitting portion, the rendering covers every position of the original image, including the interior and exterior regions, so the rendering effect is better.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of a three-dimensional geographic image boundary rendering process provided by the application;
FIG. 2 is a schematic diagram of a three-dimensional geographic image boundary rendering process provided by the application;
FIG. 3 is a schematic diagram of a three-dimensional geographic image boundary rendering process provided by the application;
FIG. 4 is a flowchart for importing geometric information of geographic elements according to the present application;
FIG. 5 is a preparation flow before rendering after adding a layer provided by the application;
FIG. 6 is a three-dimensional rendering flow chart provided by the present application;
FIG. 7 is a general flow chart of image rendering provided by the present application;
FIG. 8 is a schematic diagram of a gray scale rendered image according to the present application;
FIG. 9 is an effect diagram of the Gaussian blur image processing in the x direction;
FIG. 10 is an effect diagram of the Gaussian blur image processing in the y direction;
FIG. 11 is a graph showing the effect of the present application after the internal solid segments are removed;
FIG. 12 is an effect diagram of the present application after Gaussian blur image processing in the x-direction;
FIG. 13 is an effect diagram of the Gaussian blur image processing in the y direction provided by the application;
FIG. 14 is a diagram showing the effect of eliminating the external light-emitting blur fragments provided by the application;
FIG. 15 is a schematic structural diagram of a three-dimensional geographic image boundary rendering device provided by the application;
Fig. 16 is a schematic structural diagram of an electronic device according to the present application.
Detailed Description
For the purposes of making the objects and embodiments of the present application more apparent, an exemplary embodiment of the present application will be described in detail below with reference to the accompanying drawings in which exemplary embodiments of the present application are illustrated, it being apparent that the exemplary embodiments described are only some, but not all, of the embodiments of the present application.
It should be noted that the brief description of the terminology in the present application is for the purpose of facilitating understanding of the embodiments described below only and is not intended to limit the embodiments of the present application. Unless otherwise indicated, these terms should be construed in their ordinary and customary meaning.
The terms first, second, third and the like in the description, in the claims, and in the above-described figures are used to distinguish between similar or identical objects or entities and do not necessarily describe a particular order or sequence, unless otherwise indicated. It is to be understood that the terms so used are interchangeable under appropriate circumstances.
The terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a product or apparatus that comprises a list of elements is not necessarily limited to all elements explicitly listed, but may include other elements not expressly listed or inherent to such product or apparatus.
The term "module" refers to any known or later developed hardware, software, firmware, artificial intelligence, fuzzy logic, or combination of hardware or/and software code that is capable of performing the function associated with that element.
FIG. 1 is a schematic diagram of a three-dimensional geographic image boundary rendering process provided by the application, which comprises the following steps:
S101: acquire a first image to be rendered, and perform gray-scale rendering on the first image to obtain a second image.
S102: perform blur processing on the second image to obtain a third image.
S103: remove the internal solid light-emitting portion and the external edge light-emitting portion from the third image to obtain a rendered target image.
The image rendering method provided by the application is applied to electronic equipment, and the electronic equipment can be equipment such as a PC (personal computer), a tablet personal computer, a server and the like, and can also be image acquisition equipment. The first image to be rendered includes, but is not limited to, a three-dimensional geographic image. If the electronic equipment is the image acquisition equipment, the image acquisition equipment acquires the first image to be rendered, and then the rendering process of the first image is directly carried out. If the electronic equipment is a PC, a tablet personal computer, a server and the like, the image acquisition equipment acquires a first image to be rendered, the first image is sent to the electronic equipment, and the electronic equipment performs the rendering process of the first image.
When an image is rendered, a layer is first initialized to serve as the carrier of the rendering. The first image to be rendered is added to the layer, and gray-scale rendering is first performed on the first image to obtain a second image. Performing gray-scale rendering on the first image to obtain the second image includes: acquiring coordinate information of the first image, and, based on the graphics programming interface Draw Call, performing gray-scale rendering and redrawing on the first image according to its coordinate information to obtain the second image. A Draw Call refers to the following: the engine first determines which objects the camera can see through simple visibility tests, then prepares the objects' vertices (local positions, normals, UVs, and so on), indices (how the vertices form triangles), transformations (object position, rotation, scale, camera position, and so on), data related to light sources, textures, and the rendering mode (determined by the material/shader), and then notifies the graphics API, or simply the GPU, to start drawing; the GPU draws thousands of triangles (the drawing unit of a Draw Call) on the screen based on this data, finally forming the image. In Unity, each time the engine prepares data and notifies the GPU counts as one Draw Call. This process is performed object by object; for each object, not only does the GPU render, but the engine also resets the material/shader, which is a very time-consuming operation. The Draw Call count per frame is therefore an important performance indicator and should be kept to roughly 20 or fewer on iOS; the value can be seen in the editor's Statistics window. Typically, rendering an object that has one mesh and carries one material uses one Draw Call.
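To make the notion concrete in the WebGL setting used by the embodiment below, the following is a minimal sketch of issuing a single draw call for an indexed triangle mesh; the helper and attribute names are illustrative assumptions, not part of the patent.

function drawOnce(gl, program, vertexBuffer, indexBuffer, indexCount) {
  gl.useProgram(program);                        // bind the shader program (the "material")
  gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer);  // bind the vertex data
  const loc = gl.getAttribLocation(program, 'a_pos');  // 'a_pos' is an assumed attribute name
  gl.enableVertexAttribArray(loc);
  gl.vertexAttribPointer(loc, 2, gl.FLOAT, false, 0, 0);
  gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indexBuffer); // bind the triangle indices
  // One drawElements call is one Draw Call: the GPU rasterizes all indexed triangles at once.
  gl.drawElements(gl.TRIANGLES, indexCount, gl.UNSIGNED_SHORT, 0);
}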
Specifically, performing gray-scale rendering and redrawing on the first image according to its coordinate information, based on the graphics programming interface Draw Call, includes:
determining a contour area of the first image according to the coordinate information of the first image, and adding the contour area of the first image to a layer;
dividing the contour area into sub-areas according to the drawing unit of the Draw Call, performing gray-scale rendering on each sub-area in the layer, and redrawing the contour area.
The drawing unit of a Draw Call is generally a triangle. According to the coordinate information of the first image, the contour area of the first image is determined and added to the layer. The contour area is divided into thousands of triangular regions, gray-scale rendering is performed on these triangular regions in the layer, and the contour area is redrawn according to the coordinate information of the triangular regions. The first image and the contour area are on different layers, and the second image is obtained after the contour area is redrawn.
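As a hedged illustration of this subdivision (the embodiment below uses the earcut.js library, see S2.3; the ring coordinates here are hypothetical), a polygon contour can be triangulated as follows:

import earcut from 'earcut';

// Subdivide a polygon contour (a ring of lng/lat vertices) into triangles.
// The ring below is a made-up example; real coordinates come from the geojson data.
const ring = [[120.1, 30.2], [120.3, 30.2], [120.3, 30.4], [120.1, 30.4]];
const flatCoords = ring.flat();      // earcut expects a flat [x0, y0, x1, y1, ...] array
const indices = earcut(flatCoords);  // index array, three entries per triangle
// indices now holds six entries describing the two triangles that cover the quad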
FIG. 2 is a schematic diagram of a three-dimensional geographic image boundary rendering process provided by the application, which includes the following steps:
S201: acquire coordinate information of a first image to be rendered, determine a contour area of the first image according to the coordinate information, and add the contour area of the first image to a layer.
S202: divide the contour area into sub-areas according to the drawing unit of the graphics programming interface Draw Call, perform gray-scale rendering on each sub-area in the layer, and redraw the contour area to obtain a second image.
S203: perform blur processing on the second image to obtain a third image.
S204: remove the internal solid light-emitting portion and the external edge light-emitting portion from the third image to obtain a rendered target image.
In the present application, removing the internal solid light-emitting portion and the external edge light-emitting portion from the third image to obtain a rendered target image includes:
removing the internal solid light-emitting portion from the third image to obtain a fourth image, performing blur processing on the fourth image to obtain a fifth image, and removing the external edge light-emitting portion from the fifth image to obtain the rendered target image.
The application performs blur processing twice. The second image undergoes the first blur pass to obtain the third image. After the third image is determined, the internal solid light-emitting portion is removed from it, and the outer gradient blur area is retained, giving the fourth image. The fourth image then undergoes the second blur pass; the two blur passes improve the gradient effect of the contour. Finally, the external edge light-emitting portion is removed from the fifth image to obtain the rendered target image. The outer boundary of the target image has a clear contour, achieving the effect of an inward gradient with a glowing edge.
In the present application, the blur processing applied to an image includes: performing Gaussian blur on the image in a first direction and in a second direction, where the first and second directions are two mutually perpendicular directions in the image.
The first direction may be the X direction of the image and the second direction the Y direction. Blurring the second image means first applying a Gaussian blur in the X direction to obtain a first candidate image, then applying a Gaussian blur in the Y direction to the first candidate image to obtain the third image. Blurring the fourth image means first applying a Gaussian blur in the X direction to obtain a second candidate image, then applying a Gaussian blur in the Y direction to the second candidate image to obtain the fifth image.
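A minimal sketch of one such separable Gaussian blur pass follows; the kernel size and weights are an illustrative assumption (the patent does not specify a kernel), and the same fragment shader is reused for both directions by switching a direction uniform:

const blurFragmentSource = `
precision mediump float;
uniform sampler2D u_texture;  // texture produced by the previous pass
uniform vec2 u_texelSize;     // 1.0 / texture resolution
uniform vec2 u_dir;           // (1,0) for the x pass, (0,1) for the y pass
varying vec2 v_uv;
void main() {
  float w[3];
  w[0] = 0.227027; w[1] = 0.316216; w[2] = 0.070270;  // illustrative Gaussian weights
  vec4 color = texture2D(u_texture, v_uv) * w[0];
  for (int i = 1; i < 3; i++) {
    vec2 off = u_dir * u_texelSize * float(i);
    color += texture2D(u_texture, v_uv + off) * w[i];
    color += texture2D(u_texture, v_uv - off) * w[i];
  }
  gl_FragColor = color;
}`;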
In the present application, removing the internal solid light-emitting portion from the third image to obtain a fourth image includes: acquiring the transparency component of each pixel in the third image, and hiding the pixels whose transparency component equals 1 to obtain the fourth image.
After the third image is determined, the transparency component of each pixel in it is obtained, the pixels whose transparency component equals 1 are determined, and those pixels are hidden, yielding the fourth image. The fourth image thus has the internal solid light-emitting portion removed, leaving the outer gradient blur area.
Removing the external edge light-emitting portion from the fifth image to obtain the rendered target image includes: acquiring the transparency component of each pixel in the fifth image, and hiding the pixels whose transparency component is smaller than 1 to obtain the rendered target image.
After the fifth image is determined, the transparency component of each pixel in it is obtained, the pixels whose transparency component is smaller than 1 are determined, and those pixels are hidden, yielding the rendered target image. The target image thus has the external blur fragments removed, giving the outer boundary a clear contour and achieving the effect of an inward gradient with a glowing edge.
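The two hiding steps can be sketched as fragment shaders as follows; this is one assumed realization (discard drops a fragment, and in practice an epsilon comparison may be safer than an exact alpha test):

// Pass 1 (after the first blur): hide fully opaque pixels, i.e. the solid interior,
// keeping only the gradient blur that extends outward from the contour.
const cullInteriorSource = `
precision mediump float;
uniform sampler2D u_texture;
varying vec2 v_uv;
void main() {
  vec4 c = texture2D(u_texture, v_uv);
  if (c.a >= 1.0) discard;  // transparency component equal to 1 is hidden
  gl_FragColor = c;
}`;

// Pass 2 (after the second blur): hide semi-transparent pixels, i.e. the outer glow,
// so the outer boundary stays sharp while the inward gradient is kept.
const cullOuterGlowSource = `
precision mediump float;
uniform sampler2D u_texture;
varying vec2 v_uv;
void main() {
  vec4 c = texture2D(u_texture, v_uv);
  if (c.a < 1.0) discard;   // transparency component smaller than 1 is hidden
  gl_FragColor = c;
}`;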
FIG. 3 is a schematic diagram of a three-dimensional geographic image boundary rendering process provided by the application, which includes the following steps:
S301: acquire coordinate information of a first image to be rendered, determine the contour area of the first image, and add the contour area to a layer.
S302: divide the contour area into sub-areas according to the drawing unit of the graphics programming interface Draw Call, perform gray-scale rendering on each sub-area in the layer, and redraw the contour area to obtain a second image.
S303: perform Gaussian blur on the second image in the first direction and then in the second direction to obtain a third image.
S304: acquire the transparency component of each pixel in the third image, and hide the pixels whose transparency component equals 1 to obtain a fourth image.
S305: perform Gaussian blur on the fourth image in the first direction and then in the second direction to obtain a fifth image.
S306: acquire the transparency component of each pixel in the fifth image, and hide the pixels whose transparency component is smaller than 1 to obtain the rendered target image.
The image rendering process provided by the application is described in detail below with reference to the accompanying drawings.
The application provides an off-screen-rendering-based method for rendering a floodlight effect in a three-dimensional geographic scene, whose flow comprises the following steps:
S1: prepare the coordinate data of the image to be rendered, storing the polygon coordinate data in geojson format.
S2: the preparation stage before geometric-data loading and rendering.
S2.1: the callback stage after the custom layer has been added proceeds as follows: construct several frame buffer objects and shader programs, read the coordinate data, subdivide the polygon into triangles, create and bind a buffer, bind the subdivision indices to the buffer, and have the map trigger a redraw;
S2.2: construct several frame buffer objects and shader programs with the help of the WebGL context exposed by the mapbox custom layer. The frame buffer objects are bound to different texture objects for the shaders to process images; a texture object is a term from three-dimensional rendering whose function is to store the image generated at each stage, acting as a container. The shader programs, combined with the frame buffers, are used respectively to generate the original image and to process it (the floodlight operations);
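A minimal sketch of constructing one such frame buffer object bound to a texture "container" is given below; the helper name is hypothetical:

function createFramebufferWithTexture(gl, width, height) {
  const texture = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, texture);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0,
                gl.RGBA, gl.UNSIGNED_BYTE, null);  // empty texture that will store a pass result
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);

  const fbo = gl.createFramebuffer();
  gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
  gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                          gl.TEXTURE_2D, texture, 0);  // render target is the texture
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);
  return { fbo, texture };
}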
S2.3: read the coordinate data and subdivide the polygon into triangles by means of the earcut.js algorithm library, obtaining the subdivided index array for subsequent rendering and drawing (the earcut sketch above illustrates this step);
S2.4: create and bind a buffer, and bind the subdivided indices to the buffer;
S2.5: map-triggered redraw: the redraw operation is triggered through the map object exposed by mapbox, as shown in the custom-layer skeleton below.
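The following skeleton is a hedged sketch of how S2.1 through S2.5 fit into a mapbox custom layer; the layer id, the helpers, and the indices array from the earcut sketch above are assumptions rather than details given in the patent (it also assumes `map` is an existing mapboxgl.Map instance):

const floodlightLayer = {
  id: 'floodlight-boundary',  // assumed id
  type: 'custom',
  onAdd(map, gl) {
    this.map = map;
    // S2.2: frame buffer objects and shader programs, built on the exposed WebGL context
    this.fboA = createFramebufferWithTexture(gl, gl.canvas.width, gl.canvas.height);
    // ... fboB, fboC and the shader programs would be built the same way
    // S2.3: read coordinates and triangulate (see the earcut sketch above)
    // S2.4: create a buffer and bind the subdivided indices to it
    this.indexBuffer = gl.createBuffer();
    gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, this.indexBuffer);
    gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, new Uint16Array(indices), gl.STATIC_DRAW);
    map.triggerRepaint();  // S2.5: ask the map to redraw
  },
  render(gl, matrix) {
    // S3: the multi-pass rendering described below runs here
  }
};
map.addLayer(floodlightLayer);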
S3: perform three-dimensional rendering in the browser using the shaders, buffers, subdivided triangle vertices, and frame buffer objects constructed in the previous step. The rendering-stage flow is: original-image rendering, blur rendering in the x and y directions, removal of the internal glow blur fragments, blur rendering in the x and y directions again, and removal of the external glow blur fragments. In three-dimensional rendering, a Framebuffer allows rendering results to be stored in a buffer rather than drawn directly to the screen, which is advantageous for advanced rendering techniques such as post-processing, shadow effects, and multiple render targets.
S3.1: render the original image. The subdivided EPSG:4326 longitude/latitude triangle vertex coordinates from S2.3 are passed to the vertex shader and converted to EPSG:3857 coordinates (via the fromLngLat method provided by MercatorCoordinate in mapbox); the converted coordinates are then transformed by the map camera matrix (provided by the render function of the customLayer lifecycle in mapboxgl), the original image is drawn, and it is bound to the texture object associated with the frame buffer from S2.2.
The role of the vertex shader is to describe the shape of the boundary, or glow area; the vertex coordinates determine the shape of the area. The vertex shader mainly does the following: convert the EPSG:4326 longitude/latitude triangle vertex coordinates into EPSG:3857 coordinates, which at that point form the shape of the desired region (the glow boundary); and, for three-dimensional drawing, apply a matrix transformation, multiplying by the camera matrix to obtain coordinates in the observer's coordinate space.
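A hedged sketch of this step follows; MercatorCoordinate.fromLngLat is the mapbox-gl API named above, while the function and attribute names are illustrative:

import mapboxgl from 'mapbox-gl';

// JS side: convert the subdivided EPSG:4326 lng/lat vertices into mapbox mercator
// units (based on EPSG:3857) before uploading them to the vertex buffer.
function toMercator(lngLatPairs) {
  return lngLatPairs.flatMap(([lng, lat]) => {
    const m = mapboxgl.MercatorCoordinate.fromLngLat({ lng, lat });
    return [m.x, m.y];
  });
}

// Vertex shader: the camera matrix passed to render(gl, matrix) completes the transform.
const vertexSource = `
uniform mat4 u_matrix;  // map camera matrix from the custom layer render function
attribute vec2 a_pos;   // mercator-space vertex position
void main() {
  gl_Position = u_matrix * vec4(a_pos, 0.0, 1.0);
}`;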
S3.2: blur rendering in the x and y directions: the fragment shader samples the texture object associated with the frame buffer from S2.2, applies Gaussian blur in the x and y directions, and binds the processed texture back into the frame buffer;
S3.3: remove the internal glow blur fragments: remove the internal solid light-emitting portion, that is, zero out the fragments whose color transparency component equals 1;
S3.4: remove the external glow blur fragments: remove the edge light-emitting portion outside the area.
Fig. 4 is a flowchart for importing geometric information of a geographic element, that is, longitude and latitude information of coordinates, provided by the application. As shown in fig. 4, the method comprises map initialization, layer initialization, geographic element geometric information, layer addition to the map and map loading.
FIG. 5 is a schematic diagram of a pre-rendering preparation process after layer addition, including custom layer addition callback, constructing multiple frame buffers, constructing multiple shader programs, triangulating polygon geometry data, receiving subdivision vertices by a base shader, and triggering a map rendering mechanism.
FIG. 6 is a flow chart of the three-dimensional rendering provided by the present application: after the custom layer render stage, a shader program and frame buffer objects are created respectively; after the frame buffer objects are created, texture objects are bound and rendered to textures; after the shader program is created, processing passes through the vertex shader and the fragment shader respectively; the vertex shader converts longitude/latitude to Mercator coordinates and defines the vertex positions; the fragment shader defines the fragment colors, applies Gaussian blur in the X and Y directions, and removes the internal glow blur area and the external glow blur area; finally the shaders render and draw.
FIG. 7 is the overall flow chart of the image rendering provided by the application: map initialization, layer initialization, geographic-element geometric information, adding the layer to the map, subdividing the polygon geometric data into triangles, binding texture objects after the frame buffer objects are created, and rendering to textures; after the shader program is created, processing passes through the vertex shader and the fragment shader respectively; the vertex shader converts longitude/latitude to Mercator coordinates and defines the vertex positions; the fragment shader defines the fragment colors, applies Gaussian blur in the X and Y directions, and removes the internal glow blur area and the external glow blur area; finally the shaders render and the layer rendering is complete.
The rendering method provided by the application has high rendering efficiency: it is GPU-based and suits scenes with complex image textures that require heavy computation. The rendering effect is attractive: the method operates on the picture generated from the scene, can cover every position of the area, including the interior and exterior regions, and can realize floodlight both inside and outside, not being limited to one-sided floodlight.
FIG. 8 is a schematic diagram of the image after gray-scale rendering provided by the present application; to clearly show the processing effect on the image to be rendered, an actual scene image and a black-background image are illustrated in FIG. 8.
Blur rendering in the x and y directions is performed on the image shown in FIG. 8: the texture object associated with the frame buffer is sampled in the fragment shader, Gaussian blur is applied in the x and y directions, and the processed texture is bound back into the frame buffer. FIG. 9 shows the effect after Gaussian blur in the x direction, and FIG. 10 the effect after Gaussian blur in the y direction.
The internal solid fragments are then culled from the image shown in FIG. 10: the solid interior of the original image area, that is, the pixels whose transparency component equals 1, is hidden, and the outer gradient blur area is retained. FIG. 11 shows the effect after the internal solid fragments are culled. Blur rendering in the x and y directions is then performed on the image shown in FIG. 11 in the same way. FIG. 12 shows the effect after Gaussian blur in the x direction, and FIG. 13 the effect after Gaussian blur in the y direction.
Finally, the external glow blur fragments are removed from the image shown in FIG. 13: the pixels whose transparency component is smaller than 1 are hidden, so the outer boundary contour becomes clear, achieving the effect of an inward gradient with a glowing edge. FIG. 14 shows the effect after the external glow blur fragments are removed.
In the application, the custom layer is the customLayer class introduced from the mapbox.gl library, with onAdd and render stages in which the GPU rendering flow can be defined. Three frame buffer objects, FBO_A, FBO_B, and FBO_C, are introduced to store, respectively, the original image texture, the X-direction Gaussian-blur texture, and the Y-direction Gaussian-blur texture. Five shader programs, programA, programB, programC, programD, and programE, respectively perform original-image rendering, X-direction Gaussian blur, Y-direction Gaussian blur, removal of the internal glow blur fragments, and removal of the external glow blur fragments.
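The multi-pass wiring implied by this paragraph can be sketched as follows; the ping-pong order between the three buffers and the drawPass helper are assumptions, since the patent names the buffers and programs but not the exact sequencing:

// drawPass is a hypothetical helper: bind `target` (or the screen when null),
// use `program`, feed it `sourceTexture`, and issue the draw call.
function renderFloodlight(gl, fbos, programs) {
  const { FBO_A, FBO_B, FBO_C } = fbos;
  const { programA, programB, programC, programD, programE } = programs;

  drawPass(gl, programA, null,          FBO_A);  // original gray-scale image -> FBO_A
  drawPass(gl, programB, FBO_A.texture, FBO_B);  // x-direction Gaussian blur -> FBO_B
  drawPass(gl, programC, FBO_B.texture, FBO_C);  // y-direction Gaussian blur -> FBO_C
  drawPass(gl, programD, FBO_C.texture, FBO_A);  // cull internal solid fragments -> FBO_A
  drawPass(gl, programB, FBO_A.texture, FBO_B);  // second x-direction blur -> FBO_B
  drawPass(gl, programC, FBO_B.texture, FBO_C);  // second y-direction blur -> FBO_C
  drawPass(gl, programE, FBO_C.texture, null);   // cull the outer glow, draw to the screen
}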
Fig. 15 is a schematic structural diagram of a three-dimensional geographic image boundary rendering device according to the present application, including:
a gray-scale rendering module 151, configured to acquire a first image to be rendered and perform gray-scale rendering on the first image to obtain a second image;
an image processing module 152, configured to perform blur processing on the second image to obtain a third image;
and an image rendering module 153, configured to remove the internal solid light-emitting portion and the external edge light-emitting portion from the third image to obtain a rendered target image.
The gray-scale rendering module 151 is configured to acquire coordinate information of the first image and, based on the graphics programming interface Draw Call, perform gray-scale rendering and redrawing on the first image according to its coordinate information to obtain the second image.
The gray-scale rendering module 151 is configured to determine the contour area of the first image according to the coordinate information of the first image and add the contour area to a layer; divide the contour area into sub-areas according to the drawing unit of the Draw Call, perform gray-scale rendering on each sub-area in the layer, and redraw the contour area.
The image rendering module 153 is configured to remove the internal solid light-emitting portion from the third image to obtain a fourth image, perform blur processing on the fourth image to obtain a fifth image, and remove the external edge light-emitting portion from the fifth image to obtain the rendered target image.
The image processing module 152 is configured to perform Gaussian blur on an image in a first direction and in a second direction, the first and second directions being two mutually perpendicular directions in the image.
The image rendering module 153 is configured to acquire the transparency component of each pixel in the third image and hide the pixels whose transparency component equals 1 to obtain the fourth image.
The image rendering module 153 is configured to acquire the transparency component of each pixel in the fifth image and hide the pixels whose transparency component is smaller than 1 to obtain the rendered target image.
The present application also provides an electronic device, as shown in fig. 16, including: the processor 161, the communication interface 162, the memory 163 and the communication bus 164, wherein the processor 161, the communication interface 162 and the memory 163 complete communication with each other through the communication bus 164;
the memory 163 has stored therein a computer program which, when executed by the processor 161, causes the processor 161 to perform any of the above method steps.
The communication bus mentioned for the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one bold line is shown in the figure, but this does not mean there is only one bus or only one type of bus.
The communication interface 162 is used for communication between the electronic device and other devices described above.
The memory may include random access memory (RAM) or non-volatile memory (NVM), such as at least one disk storage. Optionally, the memory may also be at least one storage device located remotely from the processor.
The processor may be a general-purpose processor, including a central processing unit, a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit, a field-programmable gate array or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
The application also provides a computer-readable storage medium having stored thereon a computer program executable by an electronic device, which when run on the electronic device causes the electronic device to perform any of the above method steps.
While preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present application without departing from the spirit or scope of the application. Thus, it is intended that the present application also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (10)

1. A method for rendering boundaries of a three-dimensional geographic image, the method comprising:
acquiring a first image to be rendered, and performing gray-scale rendering on the first image to obtain a second image;
performing blur processing on the second image to obtain a third image;
and removing the internal solid light-emitting portion and the external edge light-emitting portion from the third image to obtain a rendered target image.
2. The method of claim 1, wherein performing gray-scale rendering on the first image to obtain a second image comprises:
acquiring coordinate information of the first image, and, based on the graphics programming interface Draw Call, performing gray-scale rendering and redrawing on the first image according to the coordinate information of the first image to obtain the second image.
3. The method of claim 2, wherein performing gray-scale rendering and redrawing on the first image according to the coordinate information of the first image, based on the graphics programming interface Draw Call, comprises:
determining a contour area of the first image according to the coordinate information of the first image, and adding the contour area of the first image to a layer;
dividing the contour area into sub-areas according to the drawing unit of the graphics programming interface Draw Call, performing gray-scale rendering on each sub-area in the layer, and redrawing the contour area.
4. The method of claim 1, wherein removing the internal solid light-emitting portion and the external edge light-emitting portion from the third image to obtain a rendered target image comprises:
removing the internal solid light-emitting portion from the third image to obtain a fourth image, performing blur processing on the fourth image to obtain a fifth image, and removing the external edge light-emitting portion from the fifth image to obtain the rendered target image.
5. The method of claim 1 or 4, wherein the blur processing applied to an image comprises:
performing Gaussian blur on the image in a first direction and in a second direction, respectively, wherein the first direction and the second direction are two mutually perpendicular directions in the image.
6. The method of claim 4, wherein removing the internal solid light-emitting portion from the third image to obtain a fourth image comprises:
acquiring the transparency component of each pixel in the third image, and hiding the pixels whose transparency component equals 1 in the third image to obtain the fourth image.
7. The method of claim 4, wherein removing the external edge light-emitting portion from the fifth image to obtain the rendered target image comprises:
acquiring the transparency component of each pixel in the fifth image, and hiding the pixels whose transparency component is smaller than 1 in the fifth image to obtain the rendered target image.
8. A three-dimensional geographic image boundary rendering apparatus, the apparatus comprising:
a gray-scale rendering module, configured to acquire a first image to be rendered and perform gray-scale rendering on the first image to obtain a second image;
an image processing module, configured to perform blur processing on the second image to obtain a third image;
and an image rendering module, configured to remove the internal solid light-emitting portion and the external edge light-emitting portion from the third image to obtain a rendered target image.
9. An electronic device, comprising a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with one another through the communication bus;
the memory is configured to store a computer program;
the processor is configured to implement the method steps of any one of claims 1-7 when executing the program stored in the memory.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored therein a computer program which, when executed by a processor, implements the method steps of any of claims 1-7.
Priority application: CN202410064605.0A, filed 2024-01-16 (priority date 2024-01-16).
Publication: CN117911596A, published 2024-04-19. Family ID: 90697302.


Legal Events
PB01: Publication
SE01: Entry into force of request for substantive examination