CN114387378A - Image generation method and device based on digital twin rendering engine and electronic equipment - Google Patents
- Publication number
- CN114387378A (application number CN202111654187.3A)
- Authority
- CN
- China
- Prior art keywords
- displayed
- image
- distance
- distance interval
- model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/04—Texture mapping
Landscapes
- Engineering & Computer Science (AREA)
- Computer Graphics (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Generation (AREA)
- Processing Or Creating Images (AREA)
Abstract
The embodiments of the present application provide an image generation method and apparatus based on a digital twin rendering engine, and an electronic device. If an object to be displayed lies in a far distance interval, the target object model of that object has a lower resolution; if it lies in a near distance interval, the target object model has a higher resolution. In other words, objects far from the viewer are rendered with simpler, lower-resolution textures, while objects near the viewer are rendered with richer, higher-resolution textures. This reduces the cases in which multiple different colors must be rendered into a single pixel of the generated image, and therefore reduces moire in the image. Moreover, the moire-free image generated in the embodiments of the present application is lossless and is not blurred by the processing.
Description
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image generation method and apparatus based on a digital twin rendering engine, and an electronic device.
Background
A digital twin uses data such as physical models, sensor data, and operation history to integrate multidisciplinary, multi-physical-quantity, multi-scale, multi-probability simulation processes, completing a mapping in virtual space that reflects the full life cycle of the corresponding physical equipment. The real-time rendering part of a digital twin three-dimensional visualization engine must be able to acquire and render image data; under combined display of multi-source data, moire flicker frequently occurs, degrading the sensory experience of image-data acquisition, analysis, and visual presentation.
Moire arises mainly when adjacent pixels differ greatly in color, or when a single pixel must render multiple different colors. In a digital twin three-dimensional visualization engine, when patterns of repeating lines, circles, or dots overlap with imperfect alignment, a new, irregular dynamic pattern appears; this is known as moire. Moire can occur in the 3D models and texture maps of a three-dimensional scene, and the shape and frequency of its elements change as the two original patterns move relative to each other.
In the prior art, moire solutions in real-time rendering engines mainly adjust space and frequency, for example through filtering control or upsampling interpolation. Upsampling (image interpolation) primarily enlarges the original image so that it can be shown on a higher-resolution display. However, enlarging an image cannot add detail that the original image does not contain, so image quality inevitably suffers and the moire flicker problem persists.
Disclosure of Invention
An embodiment of the application aims to provide an image generation method and device based on a digital twin rendering engine and an electronic device, so as to reduce a moire phenomenon in an image. The specific technical scheme is as follows:
in a first aspect, an embodiment of the present application provides an image generation method based on a digital twin rendering engine, where the method includes:
acquiring a target distance between each object to be displayed in the three-dimensional scene and the rendering camera;
for each object to be displayed, determining a distance interval to which a target distance of the object to be displayed belongs within a plurality of preset distance intervals as a target distance interval of the object to be displayed;
determining an object model at the detail level corresponding to the target distance interval of each object to be displayed as the target object model of the object to be displayed; different distance intervals correspond to different detail levels; for any distance interval, the longer the distance represented by the distance interval, the lower the detail level corresponding to that interval; and the lower the detail level, the lower the resolution of the object model at that detail level;
and generating an image to be displayed based on the respective target object model of each object to be displayed.
In a possible implementation manner, the generating an image to be displayed based on a target object model of each of the objects to be displayed includes:
acquiring the pose of each object to be displayed in the image to be displayed;
aiming at each object to be displayed, rendering a target object model of the object to be displayed according to the pose of the object to be displayed in the image to be displayed and the target distance of the object to be displayed, and generating a pixel area of the object to be displayed in the image to be displayed;
and generating an image to be displayed based on the pixel area of each object to be displayed in the image to be displayed.
In one possible embodiment, the method further comprises:
acquiring image texture maps of each object in the three-dimensional scene captured at different distances, the acquisition distance of each image texture map, and a plurality of preset distance intervals;
for each image texture mapping, dividing the image texture mapping into an image set corresponding to a distance interval to which the acquisition distance of the image texture mapping belongs;
and aiming at each distance interval, establishing an object model of each object under the detail level corresponding to the distance interval according to the image texture mapping of each object in the image set corresponding to the distance interval.
In a possible implementation manner, the establishing, for each distance interval, an object model of each object at a level of detail corresponding to the distance interval according to the image texture map of each object in the image set corresponding to the distance interval includes:
for each distance interval, selecting a plurality of batches of sample data from the image texture maps of the objects in the distance interval, and respectively determining a plurality of primary object models of each object under the detail level corresponding to the distance interval by using the batches of sample data;
and for each object, respectively performing model blending on each primary object model of the object under the detail level corresponding to the same distance interval to obtain an object model of the object under the detail level corresponding to each distance interval.
In one possible embodiment, the method further comprises:
for a moire area in an image texture map in which moire occurs, calculating color difference values between adjacent pixels in the moire area;
and for adjacent pixels whose color difference value is greater than a preset difference threshold, calculating a difference color from the colors of the two adjacent pixels, and inserting a pixel with the difference color between the adjacent pixels.
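As an illustrative sketch only (the claims do not prescribe a color space or a formula for the "difference color"), the interpolation step above might look as follows in Python, assuming single-channel pixel values and simple averaging as the difference color:

```python
def insert_interpolated_pixels(row, threshold=64):
    """Where adjacent pixels in a moire area differ by more than the
    threshold, insert an averaged "difference color" pixel between them.

    row: a list of single-channel pixel intensities (illustrative assumption);
    threshold: the preset color-difference threshold from the claim.
    """
    out = [row[0]]
    for a, b in zip(row, row[1:]):
        if abs(a - b) > threshold:
            out.append((a + b) // 2)  # difference color between the two pixels
        out.append(b)
    return out
```

For RGB textures the same idea would be applied per channel; the averaging rule here is one possible choice, not the one fixed by the application.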
In a second aspect, an embodiment of the present application provides an image generation apparatus based on a digital twin rendering engine, the apparatus including:
the distance acquisition module is used for acquiring the target distance between each object to be displayed in the three-dimensional scene and the rendering camera;
the interval determining module is used for determining a distance interval to which a target distance of each object to be displayed belongs within a plurality of preset distance intervals as a target distance interval of the object to be displayed;
the model determining module is used for determining an object model at the detail level corresponding to the target distance interval of each object to be displayed as the target object model of the object to be displayed; different distance intervals correspond to different detail levels; for any distance interval, the longer the distance represented by the distance interval, the lower the detail level corresponding to that interval; and the lower the detail level, the lower the resolution of the object model at that detail level;
and the image generation module is used for generating the image to be displayed based on the target object model of each object to be displayed.
In a possible implementation manner, the image generation module is specifically configured to:
acquiring the pose of each object to be displayed in the image to be displayed;
for each object to be displayed, rendering a target object model of the object to be displayed according to the pose of the object to be displayed in the image to be displayed and the target distance of the object to be displayed, and generating a pixel area of the object to be displayed in the image to be displayed;
and generating an image to be displayed based on the pixel area of each object to be displayed in the image to be displayed.
In a possible embodiment, the apparatus further comprises:
the map acquiring module is used for acquiring image texture maps of each object in the three-dimensional scene captured at different distances, the acquisition distance of each image texture map, and a plurality of preset distance intervals;
the mapping dividing module is used for dividing each image texture mapping into an image set corresponding to a distance interval to which the acquisition distance of the image texture mapping belongs;
and the model establishing module is used for establishing an object model of each object under the detail level corresponding to each distance interval according to the image texture mapping of each object in the image set corresponding to the distance interval.
In a possible implementation manner, the model building module is specifically configured to:
for each distance interval, selecting a plurality of batches of sample data from the image texture maps of the objects in the distance interval, and respectively determining a plurality of primary object models of each object under the detail level corresponding to the distance interval by using the batches of sample data;
and for each object, respectively performing model blending on each primary object model of the object under the detail level corresponding to the same distance interval to obtain an object model of the object under the detail level corresponding to each distance interval.
In a possible implementation, the apparatus further comprises a color compensation module configured to:
calculating, for a moire area in an image texture map in which moire occurs, color difference values between adjacent pixels in the moire area;
and for adjacent pixels whose color difference value is greater than a preset difference threshold, calculating a difference color from the colors of the two adjacent pixels, and inserting a pixel with the difference color between the adjacent pixels.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor and a memory;
the memory is used for storing a computer program;
the processor is configured to implement the method of any of the present applications when executing the program stored in the memory.
In a fourth aspect, the present application provides a computer-readable storage medium, in which a computer program is stored, and the computer program, when executed by a processor, implements the method described in any of the present application.
In a fifth aspect, embodiments of the present application provide a computer program product including instructions, which when run on a computer, cause the computer to perform the method described in any of the present applications.
The embodiment of the application has the following beneficial effects:
the image generation method, the image generation device and the electronic equipment based on the digital twin rendering engine, provided by the embodiment of the application, are used for obtaining the target distance between each object to be displayed in a three-dimensional scene and a rendering camera; for each object to be displayed, determining a distance interval to which a target distance of the object to be displayed belongs within a plurality of preset distance intervals as a target distance interval of the object to be displayed; determining an object model of a detail level corresponding to a target distance interval of each object to be displayed as a target object model of the object to be displayed; the different distance intervals correspond to different detail levels, and for any distance interval, the longer the distance represented by the distance interval is, the lower the detail level corresponding to the distance interval is; for any level of detail, the lower the resolution of the object model at that level of detail; and generating an image to be displayed based on the respective target object model of each object to be displayed. 
If an object to be displayed lies in a far distance interval, its target object model has a lower resolution; if it lies in a near distance interval, its target object model has a higher resolution. In other words, objects far from the observer use simpler, lower-resolution textures, while objects near the observer use richer, higher-resolution textures. This reduces the cases in which multiple different colors are rendered into one pixel of the generated image, and so reduces moire in the image; moreover, the moire-free image generated in the embodiments of the present application is lossless and is not blurred by the processing.
Of course, not all advantages described above need to be achieved at the same time in the practice of any one product or method of the present application.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and other drawings can be obtained by those skilled in the art from these drawings without creative effort.
FIG. 1 is a schematic diagram of an image generation method based on a digital twin rendering engine according to an embodiment of the present application;
FIG. 2 is a schematic diagram of one possible implementation of step S104 in the embodiment of the present application;
FIG. 3 is a schematic diagram of an object model building method according to an embodiment of the present application;
FIG. 4 is another schematic diagram of an object model building method according to an embodiment of the present application;
FIG. 5 is a schematic diagram of an image generation apparatus based on a digital twin rendering engine according to an embodiment of the present application;
fig. 6 is a schematic diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments that can be derived by one of ordinary skill in the art from the description herein are intended to be within the scope of the present disclosure.
In order to reduce the moire phenomenon in the image, an embodiment of the present application provides an image generation method based on a digital twin rendering engine, and referring to fig. 1, the method includes:
s101, obtaining the target distance between each object to be displayed in the three-dimensional scene and the rendering camera.
The image generation method based on the digital twin rendering engine can be realized through electronic equipment, and can be applied to scenes such as the digital twin engine, three-dimensional scene modeling and image data rendering. In one example, the electronic device may be a personal computer, a server, a smart phone or a VR (Virtual Reality) device.
Here, the three-dimensional scene is the scene that needs to be displayed, and the rendering camera is a virtual camera: it can be understood as the viewpoint from which the scene is viewed, not a real camera device. For ease of understanding, a simple example: if the three-dimensional scene is a classroom and the user wishes to see the entire classroom from the center of the podium, the rendering camera is considered to be at the center of the podium, i.e., the rendering camera is the user's viewpoint. The image captured by the rendering camera is the image to be displayed, i.e., the image the user sees.
In one example, the target distance between an object to be displayed and the rendering camera may be obtained by a front-end perception algorithm, i.e., an algorithm that determines the distance from an object in the three-dimensional scene of the digital twin engine to the rendering camera, relative to the display resolution of the device.
In a possible implementation manner, the acquiring a target distance between each object to be displayed in the three-dimensional scene and the rendering camera includes: acquiring the pose of a rendering camera in a three-dimensional scene; and determining each object to be displayed in the three-dimensional scene and the target distance between the rendering camera and each object to be displayed according to the pose of the rendering camera.
The pose of the rendering camera represents its shooting direction. Combined with the camera's position, it determines which objects in the three-dimensional scene the rendering camera can capture; these objects are the objects to be displayed. The target distance between the rendering camera and each object to be displayed can then be obtained from the pose of each object in the three-dimensional scene.
With reference to the prior art, taking a VR device as an example, the pose of the VR device in the world coordinate system can be obtained from the device's gyroscope, positioning instrument, and the like; combining the display resolution with the transformation between the world coordinate system and the scene coordinate system yields the pose of the rendering camera in the three-dimensional scene.
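As a hedged sketch (the embodiment does not fix a distance formula), the target distances of step S101 could be computed as Euclidean distances in scene coordinates; the camera position and object layout below are illustrative assumptions:

```python
import math

def target_distances(camera_pos, objects):
    """Distance from the rendering camera to each object to be displayed.

    camera_pos: (x, y, z) of the virtual rendering camera in scene coordinates.
    objects: mapping of object id -> (x, y, z) object position (illustrative).
    """
    return {
        oid: math.dist(camera_pos, pos)  # Euclidean distance in the 3D scene
        for oid, pos in objects.items()
    }
```

For example, `target_distances((0, 0, 0), {"obj1": (3, 4, 0)})` yields a target distance of 5 for `obj1`.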
S102, for each object to be displayed, determining a distance interval to which a target distance of the object to be displayed belongs within a plurality of preset distance intervals as a target distance interval of the object to be displayed.
The distance intervals are preset, the number of the distance intervals and the distance span of each distance interval can be set in a user-defined mode according to actual conditions, and the distance spans of different distance intervals can be the same or different. In one example, taking three distance intervals as an example, the distance interval 1 is 0 meter to 10 meters, the distance interval 2 is 10 meters to 50 meters, and the distance interval 3 is more than 50 meters; in this case, if the target distance of the object 1 to be displayed is 5 meters, and 5 meters belong to a distance interval 1 from 0 meter to 10 meters, the distance interval 1 is the target distance interval of the object 1 to be displayed; if the target distance of the object 2 to be displayed is 15 meters, and the 15 meters belong to the distance interval 2 of 10 meters to 50 meters, the distance interval 2 is the target distance interval of the object 2 to be displayed.
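The interval lookup in the example above can be sketched as follows; the 10-meter and 50-meter boundaries are the example values from this paragraph, not values prescribed by the application:

```python
import bisect

# Example boundaries: [0, 10) m -> interval 1, [10, 50) m -> interval 2,
# >= 50 m -> interval 3 (distance spans are user-defined in general).
BOUNDARIES = [10.0, 50.0]

def target_interval(distance):
    """1-based index of the distance interval the target distance belongs to."""
    return bisect.bisect_right(BOUNDARIES, distance) + 1
```

With these boundaries, a 5-meter target distance falls in interval 1 and a 15-meter target distance in interval 2, matching the examples of objects 1 and 2 above.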
S103, determining an object model of a detail level corresponding to a target distance interval of each object to be displayed as a target object model of the object to be displayed; the different distance intervals correspond to different detail levels, and for any distance interval, the longer the distance represented by the distance interval is, the lower the detail level corresponding to the distance interval is; for any level of detail, the lower the resolution of the object model at that level of detail.
Different distance intervals correspond to different detail levels, and the object models of the same object at different detail levels have different resolutions: the longer the distance represented by a distance interval, the lower its corresponding detail level, and the lower the detail level, the lower the resolution of the object model at that level. The resolutions of the object models at the different detail levels may be set according to the resolution of the display.
For convenience of understanding, a simple example is given below, taking three distance intervals as an example, where the distance interval 1 is 0 m to 10 m, corresponding to the level of detail 3; the distance interval 2 is 10 meters to 50 meters, corresponding to the detail level 2; the distance interval 3 is more than 50 meters and corresponds to the detail level 1; the object model of the object 1 at the level of detail 3 may be represented by 1000 pixels, the object model of the object 1 at the level of detail 2 may be represented by 100 pixels, and the object model of the object 1 at the level of detail 1 may be represented by 10 pixels. It is understood that the numerical values herein are merely examples, and can be customized in the actual scene according to the principles of the present application.
In an example, the detail level in the embodiments of the present application may specifically be LOD (Level of Detail). LOD allocates rendering resources within the image to be displayed according to the positions and importance of the key nodes of each object in the display environment, reducing the precision and detail of unimportant objects to obtain an efficient rendering result. In the embodiments of the present application, each object is displayed according to its viewing distance (i.e., the target distance of the object to be displayed).
The target distance between an object to be displayed and the rendering camera is expressed as a z-buffer value: the larger the z-buffer value, the lower the LOD level of the object to be displayed, and the lower the LOD level, the lower the resolution of the object model of the same object. Different LOD levels are calculated and switched automatically as the z-buffer value changes, yielding the object model at the LOD corresponding to each object to be displayed.
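A minimal sketch of the z-buffer-driven LOD switching described above, reusing the example distance intervals and the illustrative 1000/100/10-pixel models from this section (the boundary values and model names are assumptions, not prescribed by the application):

```python
import bisect

BOUNDARIES = [10.0, 50.0]  # example interval boundaries in meters
# Near interval -> LOD 3 (highest resolution), far interval -> LOD 1.
MODELS_BY_LOD = {3: "model_1000px", 2: "model_100px", 1: "model_10px"}

def lod_for_distance(z):
    """Larger z-buffer value (farther object) -> lower LOD level."""
    return 3 - bisect.bisect_right(BOUNDARIES, z)

def target_object_model(z):
    """Object model at the LOD corresponding to the target distance z."""
    return MODELS_BY_LOD[lod_for_distance(z)]
```

As the z-buffer value of an object crosses an interval boundary, the lookup switches the object to the model at the adjacent LOD level.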
And S104, generating an image to be displayed based on the target object model of each object to be displayed.
The target object model of each object to be displayed is rendered to obtain the image to be displayed. In one example, based on the target object models of the objects to be displayed, the image with moire eliminated can be restored by fusing image texture data and three-dimensional-model feature maps through a multi-scale algorithm.
The LOD level of an object to be displayed is judged from the target distance (z-buffer value) between the rendering camera of the digital twin rendering engine and the object, which finally determines the display resolution of the object: the smaller the z-buffer value of the object to be displayed, the higher the LOD level at which the engine renders it; the larger the z-buffer value, the lower the LOD level, thereby reducing the moire flicker phenomenon.
In the embodiment of the application, if the object to be displayed is in a far distance interval, the resolution of the target object model of the object to be displayed is lower, and if the object to be displayed is in a near distance interval, the resolution of the target object model of the object to be displayed is higher, that is, the lower resolution texture of the object to be displayed which is far away from an observer is more single, and the higher resolution texture of the object to be displayed which is near to the observer is more abundant, so that the situation that a plurality of different colors are rendered in one pixel in the generated image can be reduced, and the moire phenomenon in the image can be reduced; and the image without moire fringes generated in the embodiment of the application is a lossless image and cannot be blurred after processing.
In a possible implementation manner, referring to fig. 2, the generating an image to be displayed based on a target object model of each of the objects to be displayed includes:
and S1041, acquiring the pose of each object to be displayed in the image to be displayed.
In one example, the pose of each object to be displayed in the image to be displayed may be determined according to the pose of the rendering camera and the pose of each object to be displayed in the three-dimensional scene.
And S1042, aiming at each object to be displayed, rendering a target object model of the object to be displayed according to the pose of the object to be displayed in the image to be displayed and the target distance of the object to be displayed, and generating a pixel area of the object to be displayed in the image to be displayed.
For each object to be displayed, the size of its target object model in the image to be displayed can be obtained from its target distance; rendering the target object model together with the pose of the object in the image to be displayed then generates the pixel area of the object in the image to be displayed.
In one example, object models at the same detail level can be further divided into different degrees of fineness. With the resolution of the display fixed, and assuming the fineness of an object to be displayed is 1 when its target distance D is 1, the fineness can be expressed as:
L = |1/D|
L represents the fineness of the model. When D is less than 1, L approaches positive infinity as D decreases, and the highest-fineness object model at the corresponding LOD is applied; when D is greater than 1, L approaches 0 as D increases, and the lowest-fineness object model at the corresponding LOD is applied.
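The fineness formula L = |1/D| can be evaluated directly; this one-line illustration shows the behavior described above (D below 1 gives fineness above 1, D above 1 gives fineness below 1):

```python
def model_fineness(d: float) -> float:
    """Fineness L = |1/D|: grows without bound as D -> 0, tends to 0 as D grows."""
    return abs(1.0 / d)
```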
And S1043, generating an image to be displayed based on the pixel area of each object to be displayed in the image to be displayed.
In the embodiment of the application, the target object model of each object to be displayed is rendered to obtain the pixel area of each object to be displayed in the image to be displayed, and further obtain the image to be displayed.
The following describes the process of creating an object model, and in one possible embodiment, referring to fig. 3, the method further includes:
S301: acquire image texture maps of each object in the three-dimensional scene captured at different distances, the acquisition distance of each image texture map, and the preset distance intervals.
In one example, the captured image texture maps may be pre-processed, and image texture maps at different levels of detail may be processed by the digital twin rendering engine. The size of each image texture map needs to be a power of 2, i.e. an image whose dimension is 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, or 2048 pixels.
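The power-of-2 constraint in the pre-processing step can be checked (and, if needed, enforced) with a small helper. A minimal sketch; the patent only states the constraint, so rounding a non-conforming size down to the nearest power of 2 is one plausible handling, shown here as an assumption.

```python
def is_power_of_two(n: int) -> bool:
    """True when n is a positive power of 2 (2, 4, 8, ..., 2048, ...)."""
    return n > 0 and (n & (n - 1)) == 0

def floor_power_of_two(n: int) -> int:
    """Largest power of 2 that is <= n (requires n >= 1)."""
    return 1 << (n.bit_length() - 1)

print(is_power_of_two(512))      # True: 512 is a valid texture dimension
print(is_power_of_two(1000))     # False: 1000 is not a power of 2
print(floor_power_of_two(1000))  # 512: nearest conforming size below 1000
```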
S302: for each image texture map, assign the map to the image set corresponding to the distance interval to which its acquisition distance belongs.
For each image texture map, the level of detail corresponding to the map is determined from the distance between the rendering camera and the point at which the map was acquired; different levels of detail correspond to different distance intervals, and the map is assigned to the image set of the interval containing its acquisition distance. The smaller the acquisition distance of a map, the higher its corresponding level of detail; the larger the acquisition distance, the lower the level of detail. Organizing the maps this way reduces moire flicker caused by the texture maps themselves.
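Step S302 amounts to bucketing maps by acquisition distance. The sketch below illustrates this under assumed interval boundaries and hypothetical file names; the patent does not prescribe concrete values.

```python
from collections import defaultdict
import bisect

# Upper bounds of the preset distance intervals; interval i covers
# [bounds[i-1], bounds[i]). Smaller acquisition distance -> higher detail level.
BOUNDS = [10.0, 50.0, 200.0, float("inf")]

def interval_index(acq_distance: float) -> int:
    """Index of the distance interval containing the acquisition distance."""
    return bisect.bisect_right(BOUNDS, acq_distance)

# Hypothetical (map name, acquisition distance) records for one object.
maps = [("podium_near.png", 4.0), ("podium_mid.png", 30.0), ("podium_far.png", 300.0)]

image_sets: dict[int, list[str]] = defaultdict(list)
for name, d in maps:
    image_sets[interval_index(d)].append(name)

print(dict(image_sets))
# {0: ['podium_near.png'], 1: ['podium_mid.png'], 3: ['podium_far.png']}
```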
S303: for each distance interval, build an object model of each object at the level of detail corresponding to that interval, using the image texture maps of the objects in the image set corresponding to the interval.
In one example, the object can be modeled separately at each level of detail by combining image texture map data, three-dimensional model data of the object, and multi-resolution parameter data.
In a possible implementation, building, for each distance interval, an object model of each object at the level of detail corresponding to that interval from the image texture maps of the objects in the interval's image set includes:
Step one: for each distance interval, select several batches of sample data from the image texture maps of the objects in that interval, and use the batches to determine, respectively, a plurality of primary object models of each object at the level of detail corresponding to the interval.
Step two: for each object, blend the primary object models of that object at the level of detail corresponding to the same distance interval, obtaining an object model of the object at the level of detail corresponding to each distance interval.
In one example, a multi-model blending algorithm can be used to realize the blending. The algorithm performs blending and fusion calculations over data-source models of multiple dimensions, giving accurate analysis and prediction capability: several weak models of different types are combined into one strong model. Several batches of sample data are automatically sampled from the image texture maps of the objects in each distance interval, each batch containing multiple samples, N batches in total. The N batches are used to train N primary object models respectively, and the N primary object models of the same object are then fused to obtain the object model of that object at the level of detail corresponding to the interval.
The blending algorithm can use either voting or averaging. Voting applies when the primary models are classification models: each model gives a class prediction, and the predictions are fused by majority vote into a new prediction. Averaging applies when the primary models are regression models: each model gives a numeric prediction, and the mean of all sub-model predictions is taken as the final fused result. Since the primary object models are independent of one another, this is a parallel fusion method: the N primary object models can be processed simultaneously, which greatly improves execution efficiency. The process can be as shown in fig. 4, where the LOD data may include, in addition to the image texture maps, three-dimensional model data of the objects and the like.
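The voting and averaging fusion rules above can be sketched directly. In this illustrative sketch the "models" are reduced to their prediction outputs; the training of the N primary models is out of scope, and plain callables stand in for them.

```python
from collections import Counter
from statistics import mean

def vote(predictions):
    """Majority vote over class labels from classification-type primary models
    (ties broken by first-seen order)."""
    return Counter(predictions).most_common(1)[0][0]

def average(predictions):
    """Mean of numeric predictions from regression-type primary models."""
    return mean(predictions)

# Because the N primary object models are independent, their predictions
# could be computed in parallel before this fusion step.
print(vote(["wall", "wall", "roof"]))  # majority class: 'wall'
print(average([0.9, 1.1, 1.0]))        # mean of sub-model outputs
```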
In the embodiment of the application, the data sources used to build the object model are simple and easy to obtain, so the implementation cost is low; the object model is obtained by model blending, and the algorithm is simple, requiring no extensive deep-learning training. The method can be applied to digital twin projects in which data cannot be collected by unmanned aerial vehicle, greatly reducing implementation cost.
In order to further reduce the phenomenon of moire, in one possible embodiment, the method further comprises:
Step A: for a moire area in an image texture map exhibiting moire, calculate the color difference between adjacent pixels in the area.
Step B: for adjacent pixels whose color difference exceeds a preset difference threshold, compute an interpolated color from the colors of the two adjacent pixels, and insert a pixel of the interpolated color between them.
The preset difference threshold can be set as needed for the actual situation and may be an experimental or empirical value. In one example, the colors of the two adjacent pixels may be averaged with weights to obtain the interpolated color.
When moire occurs in an image texture map at a certain LOD level, the flicker can be reduced by this auxiliary compensating-interpolation method. Compensating interpolation means that when the color difference between adjacent pixels is too large, for example one pixel black and the other white, an interpolated color is added between them as an intermediate value, further reducing the moire. Besides adding an interpolated color, the model structure of the two adjacent pixels with the large color difference may be removed, or the model structure may be adjusted so that the pixels take similar colors.
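The compensating-interpolation step can be sketched on a single row of RGB pixels. This is an illustrative sketch: the threshold value and the equal weighting are assumptions, since the patent leaves both to be set empirically.

```python
def color_diff(a, b):
    """Sum of absolute per-channel differences between two RGB colors."""
    return sum(abs(x - y) for x, y in zip(a, b))

def interpolate_row(row, threshold=200, w=0.5):
    """Insert a weighted-average pixel between any adjacent pair whose
    color difference exceeds the threshold."""
    out = [row[0]]
    for prev, cur in zip(row, row[1:]):
        if color_diff(prev, cur) > threshold:
            mid = tuple(round(w * p + (1 - w) * c) for p, c in zip(prev, cur))
            out.append(mid)  # inserted intermediate-value pixel
        out.append(cur)
    return out

black, white = (0, 0, 0), (255, 255, 255)
print(interpolate_row([black, white]))
# [(0, 0, 0), (128, 128, 128), (255, 255, 255)]
```

For the black/white example from the text, the inserted mid-gray pixel is exactly the intermediate value described above; pairs below the threshold are left untouched.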
In the embodiment of the application, compensating the color difference reduces the difference in color between adjacent pixels, further reducing moire flicker.
An embodiment of the present application provides an image generation apparatus based on a digital twin rendering engine, and referring to fig. 5, the apparatus includes:
a distance obtaining module 501, configured to obtain the target distance between each object to be displayed in the three-dimensional scene and the rendering camera;
an interval determining module 502, configured to determine, for each object to be displayed, the distance interval among a plurality of preset distance intervals to which the object's target distance belongs, as the target distance interval of the object;
a model determining module 503, configured to determine, for each object to be displayed, the object model at the level of detail corresponding to the object's target distance interval as the target object model of the object; wherein different distance intervals correspond to different levels of detail; for any distance interval, the longer the distance represented by the interval, the lower the corresponding level of detail; and the lower the level of detail, the lower the resolution of the object model at that level;
an image generating module 504, configured to generate the image to be displayed based on the target object model of each object to be displayed.
In a possible implementation manner, the image generation module is specifically configured to:
acquiring the pose of each object to be displayed in the image to be displayed;
for each object to be displayed, rendering the target object model of the object according to the object's pose in the image to be displayed and the object's target distance, and generating the pixel area of the object in the image to be displayed;
and generating an image to be displayed based on the pixel area of each object to be displayed in the image to be displayed.
In a possible embodiment, the apparatus further comprises:
a map acquiring module, configured to acquire image texture maps of each object in the three-dimensional scene captured at different distances, the acquisition distance of each image texture map, and the preset distance intervals;
a map dividing module, configured to assign each image texture map to the image set corresponding to the distance interval to which its acquisition distance belongs;
a model establishing module, configured to build, for each distance interval, an object model of each object at the level of detail corresponding to that interval, from the image texture maps of the objects in the interval's image set.
In a possible implementation manner, the model building module is specifically configured to:
for each distance interval, select several batches of sample data from the image texture maps of the objects in that interval, and use the batches to determine, respectively, a plurality of primary object models of each object at the level of detail corresponding to the interval;
for each object, blend the primary object models of the object at the level of detail corresponding to the same distance interval, obtaining an object model of the object at the level of detail corresponding to each distance interval.
In a possible implementation, the apparatus further comprises a color compensation module configured to:
calculating, for a moire area in an image texture map exhibiting moire, the color difference between adjacent pixels in the area;
for adjacent pixels whose color difference exceeds a preset difference threshold, computing an interpolated color from the colors of the two adjacent pixels and inserting a pixel of the interpolated color between them.
An embodiment of the present application further provides an electronic device, including: a processor and a memory;
the memory is used for storing computer programs;
the processor is configured to implement the image generation method based on the digital twin rendering engine according to any one of the present application when executing the computer program stored in the memory.
Optionally, referring to fig. 6, the electronic device according to the embodiment of the present application further includes a communication interface 602 and a communication bus 604, where the processor 601, the communication interface 602, and the memory 603 complete communication with each other through the communication bus 604.
The communication bus mentioned in the electronic device may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the electronic equipment and other equipment.
The memory may include a RAM (Random Access Memory) or an NVM (Non-Volatile Memory), such as at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the processor.
The processor may be a general-purpose processor, such as a CPU (Central Processing Unit) or an NP (Network Processor); it may also be a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field Programmable Gate Array) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
An embodiment of the present application further provides a computer-readable storage medium, in which a computer program is stored, and when the computer program is executed by a processor, the method for generating an image based on a digital twin rendering engine according to any of the present application is implemented.
In yet another embodiment provided herein, there is also provided a computer program product containing instructions that, when run on a computer, cause the computer to perform any of the digital twin rendering engine based image generation methods described herein.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, cause the processes or functions described in accordance with the embodiments of the application to occur, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, fiber optic, digital subscriber line) or wirelessly (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that incorporates one or more of the available media. The usable medium may be a magnetic medium (e.g., floppy Disk, hard Disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
It should be noted that, in this document, the technical features in the various alternatives can be combined to form the scheme as long as the technical features are not contradictory, and the scheme is within the scope of the disclosure of the present application. Relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the embodiments of the apparatus, the electronic device, the computer program product and the storage medium, since they are substantially similar to the method embodiments, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiments.
The above description is only for the preferred embodiment of the present application and is not intended to limit the scope of the present application. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application are included in the protection scope of the present application.
Claims (12)
1. An image generation method based on a digital twin rendering engine, the method comprising:
acquiring a target distance between each object to be displayed in the three-dimensional scene and the rendering camera;
for each object to be displayed, determining, among a plurality of preset distance intervals, the distance interval to which the target distance of the object belongs, as the target distance interval of the object;
for each object to be displayed, determining the object model at the level of detail corresponding to the target distance interval of the object as the target object model of the object; wherein different distance intervals correspond to different levels of detail; for any distance interval, the longer the distance represented by the interval, the lower the corresponding level of detail; and the lower the level of detail, the lower the resolution of the object model at that level;
and generating an image to be displayed based on the target object model of each object to be displayed.
2. The method according to claim 1, wherein the generating an image to be displayed based on the target object model of each of the objects to be displayed comprises:
acquiring the pose of each object to be displayed in the image to be displayed;
for each object to be displayed, rendering the target object model of the object according to the object's pose in the image to be displayed and the object's target distance, and generating the pixel area of the object in the image to be displayed;
and generating an image to be displayed based on the pixel area of each object to be displayed in the image to be displayed.
3. The method of claim 1, further comprising:
acquiring image texture maps of each object in the three-dimensional scene captured at different distances, the acquisition distance of each image texture map, and the preset distance intervals;
for each image texture map, assigning the map to the image set corresponding to the distance interval to which its acquisition distance belongs;
and for each distance interval, building an object model of each object at the level of detail corresponding to that interval from the image texture maps of the objects in the interval's image set.
4. The method according to claim 3, wherein building, for each distance interval, an object model of each object at the level of detail corresponding to that interval from the image texture maps of the objects in the interval's image set comprises:
for each distance interval, selecting several batches of sample data from the image texture maps of the objects in that interval, and using the batches to determine, respectively, a plurality of primary object models of each object at the level of detail corresponding to the interval;
and for each object, blending the primary object models of the object at the level of detail corresponding to the same distance interval to obtain an object model of the object at the level of detail corresponding to each distance interval.
5. The method of claim 3, further comprising:
calculating, for a moire area in an image texture map exhibiting moire, the color difference between adjacent pixels in the area;
and for adjacent pixels whose color difference exceeds a preset difference threshold, computing an interpolated color from the colors of the two adjacent pixels and inserting a pixel of the interpolated color between them.
6. An image generation apparatus based on a digital twin rendering engine, the apparatus comprising:
the distance acquisition module is used for acquiring the target distance between each object to be displayed in the three-dimensional scene and the rendering camera;
an interval determining module, configured to determine, for each object to be displayed, the distance interval among a plurality of preset distance intervals to which the object's target distance belongs, as the target distance interval of the object;
a model determining module, configured to determine, for each object to be displayed, the object model at the level of detail corresponding to the object's target distance interval as the target object model of the object; wherein different distance intervals correspond to different levels of detail; for any distance interval, the longer the distance represented by the interval, the lower the corresponding level of detail; and the lower the level of detail, the lower the resolution of the object model at that level;
and an image generation module, configured to generate the image to be displayed based on the target object model of each object to be displayed.
7. The apparatus of claim 6, wherein the image generation module is specifically configured to:
acquiring the pose of each object to be displayed in the image to be displayed;
for each object to be displayed, rendering the target object model of the object according to the object's pose in the image to be displayed and the object's target distance, and generating the pixel area of the object in the image to be displayed;
and generating an image to be displayed based on the pixel area of each object to be displayed in the image to be displayed.
8. The apparatus of claim 6, further comprising:
a map acquiring module, configured to acquire image texture maps of each object in the three-dimensional scene captured at different distances, the acquisition distance of each image texture map, and the preset distance intervals;
a map dividing module, configured to assign each image texture map to the image set corresponding to the distance interval to which its acquisition distance belongs;
and a model establishing module, configured to build, for each distance interval, an object model of each object at the level of detail corresponding to that interval, from the image texture maps of the objects in the interval's image set.
9. The apparatus of claim 8, wherein the model building module is specifically configured to:
for each distance interval, selecting several batches of sample data from the image texture maps of the objects in that interval, and using the batches to determine, respectively, a plurality of primary object models of each object at the level of detail corresponding to the interval;
and for each object, blending the primary object models of the object at the level of detail corresponding to the same distance interval to obtain an object model of the object at the level of detail corresponding to each distance interval.
10. The apparatus of claim 8, further comprising a color compensation module to:
calculating, for a moire area in an image texture map exhibiting moire, the color difference between adjacent pixels in the area;
and for adjacent pixels whose color difference exceeds a preset difference threshold, computing an interpolated color from the colors of the two adjacent pixels and inserting a pixel of the interpolated color between them.
11. An electronic device comprising a processor and a memory;
the memory is used for storing a computer program;
the processor is configured to implement the method of any one of claims 1-5 when executing the program stored in the memory.
12. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method of any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111654187.3A CN114387378A (en) | 2021-12-30 | 2021-12-30 | Image generation method and device based on digital twin rendering engine and electronic equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111654187.3A CN114387378A (en) | 2021-12-30 | 2021-12-30 | Image generation method and device based on digital twin rendering engine and electronic equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114387378A true CN114387378A (en) | 2022-04-22 |
Family
ID=81200688
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111654187.3A Pending CN114387378A (en) | 2021-12-30 | 2021-12-30 | Image generation method and device based on digital twin rendering engine and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114387378A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115408552A (en) * | 2022-07-28 | 2022-11-29 | 深圳市磐鼎科技有限公司 | Display adjustment method, device, equipment and storage medium |
CN117475053A (en) * | 2023-09-12 | 2024-01-30 | 广州益聚未来网络科技有限公司 | Grassland rendering method and device |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108615227A (en) * | 2018-05-08 | 2018-10-02 | 浙江大华技术股份有限公司 | A kind of suppressing method and equipment of image moire fringes |
CN110730336A (en) * | 2019-07-02 | 2020-01-24 | 珠海全志科技股份有限公司 | Demosaicing method and device |
CN113269858A (en) * | 2021-07-19 | 2021-08-17 | 腾讯科技(深圳)有限公司 | Virtual scene rendering method and device, computer equipment and storage medium |
- 2021-12-30: CN application CN202111654187.3A filed (publication CN114387378A), status: Pending
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108615227A (en) * | 2018-05-08 | 2018-10-02 | 浙江大华技术股份有限公司 | A kind of suppressing method and equipment of image moire fringes |
CN110730336A (en) * | 2019-07-02 | 2020-01-24 | 珠海全志科技股份有限公司 | Demosaicing method and device |
CN113269858A (en) * | 2021-07-19 | 2021-08-17 | 腾讯科技(深圳)有限公司 | Virtual scene rendering method and device, computer equipment and storage medium |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115408552A (en) * | 2022-07-28 | 2022-11-29 | 深圳市磐鼎科技有限公司 | Display adjustment method, device, equipment and storage medium |
CN117475053A (en) * | 2023-09-12 | 2024-01-30 | 广州益聚未来网络科技有限公司 | Grassland rendering method and device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114387378A (en) | Image generation method and device based on digital twin rendering engine and electronic equipment | |
CN113643414B (en) | Three-dimensional image generation method and device, electronic equipment and storage medium | |
CN112102489B (en) | Navigation interface display method and device, computing equipment and storage medium | |
CN108230442A (en) | A kind of shield tunnel three-dimensional emulation method | |
CN116109765A (en) | Three-dimensional rendering method and device for labeling objects, computer equipment and storage medium | |
CN113110731B (en) | Method and device for generating media content | |
CN114928718A (en) | Video monitoring method and device, electronic equipment and storage medium | |
CN114663324A (en) | Fusion display method of BIM (building information modeling) model and GIS (geographic information system) information and related components | |
CN112529006B (en) | Panoramic picture detection method, device, terminal and storage medium | |
KR100860673B1 (en) | Apparatus and method for generating image to generate 3d image | |
CN116385622B (en) | Cloud image processing method, cloud image processing device, computer and readable storage medium | |
CN110378948B (en) | 3D model reconstruction method and device and electronic equipment | |
CN116758206A (en) | Vector data fusion rendering method and device, computer equipment and storage medium | |
CN112634439B (en) | 3D information display method and device | |
CN112465692A (en) | Image processing method, device, equipment and storage medium | |
CN112652056B (en) | 3D information display method and device | |
JP2003233836A (en) | Image processor for conducting rendering shading processing by using distance component in modeling and its method | |
CN111626919B (en) | Image synthesis method and device, electronic equipment and computer readable storage medium | |
CN111667563B (en) | Image processing method, device, equipment and storage medium | |
CN114419286A (en) | Panoramic roaming method and device, electronic equipment and storage medium | |
CN114996374A (en) | Online data visualization implementation method, system, device and medium | |
CN113744361A (en) | Three-dimensional high-precision map construction method and device based on trinocular vision | |
CN110196638B (en) | Mobile terminal augmented reality method and system based on target detection and space projection | |
CN110390717B (en) | 3D model reconstruction method and device and electronic equipment | |
CN106991643B (en) | Real-time line checking method and real-time line checking system with low resource consumption |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||