CN116109765A - Three-dimensional rendering method and device for labeling objects, computer equipment and storage medium


Info

Publication number
CN116109765A
Authority
CN
China
Prior art keywords
target
dimensional
rendering
labeling
model
Prior art date
Legal status
Pending
Application number
CN202211668155.3A
Other languages
Chinese (zh)
Inventor
马柳青
贾庆雷
周淮浦
Current Assignee
Zhongketuxin Suzhou Technology Co ltd
Original Assignee
Zhongketuxin Suzhou Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Zhongketuxin Suzhou Technology Co ltd
Priority to CN202211668155.3A
Publication of CN116109765A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005 General purpose rendering architectures
    • G06T15/04 Texture mapping
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/529 Depth or shape recovery from texture
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Generation (AREA)

Abstract

The application relates to a three-dimensional rendering method and apparatus for labeling objects, as well as a computer device, a storage medium and a computer program product. The method comprises the following steps: in response to a labeling request for a three-dimensional model, acquiring two-dimensional coordinates of a labeling object and rendering parameters of the labeling object, wherein the three-dimensional model is generated from image data acquired by an image acquisition device for a target model; determining target depth data of the labeling object in a depth texture according to the two-dimensional coordinates, wherein the depth texture is generated from the depth data of the target model relative to the image acquisition device; determining a height value of the labeling object from the target depth data, and generating a target three-dimensional coordinate of the labeling object in the three-dimensional model from the height value; and rendering the pixel points at the target three-dimensional coordinates with the rendering parameters to obtain a three-dimensional rendering result of the labeling object. With this method, the height value can be acquired quickly from the target depth data stored in the depth texture, which improves the rendering efficiency of the labeling object.

Description

Three-dimensional rendering method and device for labeling objects, computer equipment and storage medium
Technical Field
The present invention relates to the field of three-dimensional visualization technology, and in particular, to a method, an apparatus, a computer device, a storage medium, and a computer program product for three-dimensional rendering of a labeling object.
Background
With progress in the field of three-dimensional visualization, oblique photography technology has emerged: multiple sensors mounted on the same flight platform collect images from five different angles, one vertical and four oblique, bringing the user into a realistic visual world that conforms to human vision. Under different application scenarios, the user can add corresponding labels to the oblique photography data; for example, points of interest (Point of Interest, POI for short) or dynamic monitoring points on a two-dimensional map can be superimposed as labels on the three-dimensional oblique photography data for scene display. However, since oblique photography data is generally three-dimensional model data while labeling points generally carry only text, icons and two-dimensional coordinate information, attaching the labeling points to the surface of the three-dimensional model for display is quite difficult.
In the conventional technology, a ray passing vertically downward through the position of the labeling point is generally constructed, and the ray is intersected with the oblique photography data or with the triangular patches of the oblique photography model in the three-dimensional scene; the intersection results are sorted from high to low, the height value of the highest intersection point is taken as the height of the labeling point, and the labeling point is rendered at the corresponding position in the oblique photography data according to this height. However, this rendering method can hardly meet the demand for efficiently drawing large numbers of labeling points attached to oblique photography, and suffers from low rendering efficiency.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a three-dimensional rendering method, apparatus, computer device, computer-readable storage medium, and computer program product for labeling objects with high rendering efficiency.
In a first aspect, the present application provides a method for three-dimensional rendering of a labeling object. The method comprises the following steps:
responding to a labeling request for a three-dimensional model, and acquiring two-dimensional coordinates of a labeling object and rendering parameters of the labeling object, wherein the three-dimensional model is generated according to image data acquired by an image acquisition device for a target model;
determining target depth data of the labeling object in a depth texture according to the two-dimensional coordinates, wherein the depth texture is generated according to the depth data of the target model relative to the image acquisition device;
determining the height value of the labeling object according to the target depth data, and generating a target three-dimensional coordinate of the labeling object in the three-dimensional model according to the height value;
and rendering the pixel points at the target three-dimensional coordinates by adopting the rendering parameters to obtain a three-dimensional rendering result of the labeling object.
In one embodiment, the determining, according to the two-dimensional coordinates, the target depth data of the labeling object in the depth texture includes:
Converting the two-dimensional coordinates by adopting a space conversion matrix to obtain texture coordinates corresponding to the labeling object, wherein the space conversion matrix is generated according to a model conversion matrix of the target model, a view conversion matrix of the image acquisition device and a projection conversion matrix of the image acquisition device;
and inquiring the texture coordinates in the depth texture to obtain target depth data corresponding to the marked object.
In one embodiment, the determining the height value of the labeling object according to the target depth data, and generating the target three-dimensional coordinate of the labeling object in the three-dimensional model according to the height value, includes:
converting the target depth data and the texture coordinates by adopting the space conversion matrix to obtain space three-dimensional coordinates of the labeling object;
and determining the height value from the space three-dimensional coordinates, and performing conversion processing on the height value according to the space conversion matrix to generate the target three-dimensional coordinates of the labeling object in the three-dimensional model.
In one embodiment, the method further comprises:
acquiring the current corresponding viewpoint position and the current corresponding view range of the image acquisition device;
According to the viewpoint position and the view range, determining an adjustment parameter of the image acquisition device;
and adjusting the pose of the image acquisition device by adopting the adjustment parameters to obtain the image acquisition device for acquiring the image aiming at the preset angle.
In one embodiment, the method further comprises:
acquiring a target viewpoint position and a target view range of an image acquisition device for image acquisition aiming at a preset angle;
generating a view conversion matrix corresponding to the image acquisition device according to the target viewpoint position and the target view range;
generating a projection conversion matrix of the image acquisition device according to imaging parameters of the image acquisition device;
and generating a space conversion matrix corresponding to the target model and the image acquisition device according to the model conversion matrix of the target model, the view conversion matrix and the projection conversion matrix.
In one embodiment, the rendering the pixel point at the target three-dimensional coordinate by using the rendering parameter to obtain a three-dimensional rendering result of the labeling object includes:
and in the non-front-end display component, rendering the pixel points at the target three-dimensional coordinates by adopting the rendering parameters through a graphic processor to obtain a three-dimensional rendering result of the labeling object.
In a second aspect, the present application further provides a three-dimensional rendering device for labeling objects. The device comprises:
the request response module is used for responding to a labeling request for a three-dimensional model, acquiring two-dimensional coordinates of a labeling object and rendering parameters of the labeling object, wherein the three-dimensional model is generated according to image data acquired by an image acquisition device for a target model;
the depth acquisition module is used for determining target depth data of the labeling object in a depth texture according to the two-dimensional coordinates, and the depth texture is generated according to the depth data of the target model relative to the image acquisition device;
the height acquisition module is used for determining the height value of the labeling object according to the target depth data and generating a target three-dimensional coordinate of the labeling object in the three-dimensional model according to the height value;
and the pixel rendering module is used for rendering the pixel points at the target three-dimensional coordinates by adopting the rendering parameters to obtain a three-dimensional rendering result of the labeling object.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory and a processor, wherein the memory stores a computer program, and the processor executes the computer program to implement the three-dimensional rendering method of the labeling object according to any embodiment of the first aspect.
In a fourth aspect, the present application also provides a computer-readable storage medium. The computer readable storage medium has stored thereon a computer program, which when executed by a processor, implements the method for three-dimensional rendering of a labeling object according to any of the embodiments of the first aspect.
In a fifth aspect, the present application also provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, implements the method for three-dimensional rendering of an annotation object according to any of the embodiments of the first aspect.
The three-dimensional rendering method, the device, the computer equipment, the storage medium and the computer program product for the labeling object acquire the two-dimensional coordinates of the labeling object and the rendering parameters of the labeling object by responding to the labeling request of the three-dimensional model, and the three-dimensional model is generated according to the image data acquired by the image acquisition device for the target model; determining target depth data of the labeling object in the depth texture according to the two-dimensional coordinates, wherein the depth texture is generated according to the depth data of the target model relative to the image acquisition device; determining the height value of the labeling object according to the target depth data, and generating a target three-dimensional coordinate of the labeling object in the three-dimensional model according to the height value; and rendering the pixel points at the target three-dimensional coordinates by using rendering parameters to obtain a three-dimensional rendering result of the marked object, and rapidly acquiring the height value of the marked object based on the target depth data stored in the depth texture, so that the algorithm flow of the target three-dimensional coordinates is simplified, and the rendering efficiency of the marked object is improved.
Drawings
FIG. 1A is a flow diagram of a method of three-dimensional rendering of a marked object in one embodiment;
FIG. 1B is a schematic diagram of an image data acquisition step in one embodiment;
FIG. 1C is a schematic diagram of image data in one embodiment;
FIG. 1D is a schematic diagram of depth texture in one embodiment;
FIG. 1E is a schematic diagram of a three-dimensional model in one embodiment;
FIG. 2 is a flowchart illustrating an adjustment procedure of the image capturing device according to an embodiment;
FIG. 3 is a flow chart of a method of three-dimensional rendering of a marked object in another embodiment;
FIG. 4 is a block diagram of a three-dimensional rendering device marking objects in one embodiment;
fig. 5 is an internal structural diagram of a computer device in one embodiment.
Detailed Description
Two approaches are commonly used at present to acquire the height of a labeling point. The first approach uses ray intersection to obtain, in advance, the height of each labeling point position from the oblique photography data, and stores the obtained height in the corresponding labeling point information so that the labeling point carries three-dimensional coordinate information. Although this approach does not affect the speed of scene rendering when the labeling points are rendered, the preprocessing is complex and tedious, so processing efficiency remains low. Moreover, since the height of the labeling point is pre-computed, this approach cannot meet the requirements of rendering and editing dynamic labeling points.
In addition, oblique photography data has the rendering characteristic of multiple levels of detail (Level of Detail, LOD for short): rendering resources are allocated to an object according to the position and importance of its model nodes in the display environment, and the number of faces and the amount of detail of unimportant objects are reduced to achieve efficient rendering. Because the model built from the oblique photography data has different roughness at different resolutions, part of the labeling points easily sink into the model.
The second approach performs ray intersection in real time during rendering. Compared with the first approach, it does not need to traverse all data files, only needs to process the limited data in the scene, and can handle dynamic labeling points. However, because it runs synchronously within the rendering process and ray intersection performs relatively poorly, it strongly affects the rendering speed and makes scene browsing noticeably sluggish.
In view of the foregoing, the present application provides a three-dimensional rendering method, apparatus, computer device, computer readable storage medium and computer program product for labeling objects with high rendering efficiency, which address the problems in the prior art. In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
The data (including, but not limited to, data for analysis, data stored, data displayed, etc.) referred to in this application are information and data that are fully authorized by each party.
In one embodiment, as shown in fig. 1A, a three-dimensional rendering method for labeling objects is provided. The method is described here as applied to a terminal by way of illustration; it can also be applied to a server, or to a system comprising a terminal and a server and implemented through interaction between the two. The terminal may be, but is not limited to, a personal computer, a notebook computer, a smartphone, a tablet computer, an Internet of Things device or a portable wearable device; the Internet of Things device may be a smart television, smart vehicle-mounted equipment or the like, and the portable wearable device may be a smart watch, a smart bracelet, a headset or the like. The server may be implemented as a stand-alone server or as a server cluster composed of multiple servers.
In this embodiment, the method includes the steps of:
step S102, in response to the labeling request for the three-dimensional model, the two-dimensional coordinates of the labeling object and the rendering parameters of the labeling object are obtained.
The three-dimensional model may be an oblique photography model generated from image data acquired by the image acquisition device for the target model. In one example, as shown in fig. 1B, the image capturing device 200 may capture image data of the target model 100 from a top-down view, and the resulting image data may be as shown in fig. 1C. The terminal may perform modeling on the image data of fig. 1C to generate a three-dimensional model 300 as shown in fig. 1E.
The annotation object may be an annotation point or an annotation region. The two-dimensional coordinates of the labeling object may be the abscissa and the ordinate of the labeling object in the world coordinate system, or may be the longitude and latitude coordinates.
The rendering parameters may include, but are not limited to, any one or more of a variety of parameters of rendering shape, rendering color, light shadow effect, and the like.
Specifically, the terminal may present a three-dimensional model (e.g., the three-dimensional model 300 in fig. 1E) to the user in a front-end display component. In response to the user's labeling request for the three-dimensional model, the two-dimensional coordinates of the labeling object input by the user (e.g., the two-dimensional coordinates (x1, y1) of point O in fig. 1B) and the rendering parameters of the labeling object are obtained. Optionally, in some embodiments, when there are many labeling objects, the two-dimensional coordinates and rendering parameters corresponding to each labeling object may be imported into the terminal in batch in the form of table data, so that the terminal can obtain the information of multiple labeling objects at once, improving the efficiency of obtaining the labeling information.
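As a minimal illustration of such a batch import (not part of the disclosed implementation), the following TypeScript sketch parses comma-separated table data into labeling objects; the column layout and the LabelObject field names are assumptions made purely for illustration.

```typescript
// Illustrative sketch: the column layout and field names are assumed, not specified by this application.
interface LabelObject {
  name: string;   // label text
  x: number;      // two-dimensional coordinate (e.g. easting or longitude)
  y: number;      // two-dimensional coordinate (e.g. northing or latitude)
  color: string;  // rendering parameter, e.g. "#ff0000"
  icon: string;   // rendering parameter, e.g. icon identifier
}

// Parse table data of the form "name,x,y,color,icon" into labeling objects for batch import.
function parseLabelTable(csv: string): LabelObject[] {
  return csv
    .trim()
    .split('\n')
    .map((row) => {
      const [name, x, y, color, icon] = row.split(',').map((s) => s.trim());
      return { name, x: Number(x), y: Number(y), color, icon };
    });
}

// Example: two labeling objects imported at once.
const labels = parseLabelTable('Camera-01,120.61,31.30,#ff0000,camera\nGate-02,120.62,31.31,#00ff00,gate');
console.log(labels.length);
```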
And step S104, determining target depth data of the labeling object in the depth texture according to the two-dimensional coordinates.
The depth texture may be generated from the depth data of the target model relative to the image acquisition device. In one example, as shown in fig. 1B, taking the horizontal plane of the image capturing device 200 as the reference, the vertical distance D between the image capturing device 200 and a point O in the target model 100 is taken as the depth data of point O in the target model 100 relative to the image capturing device 200. In another example, fig. 1D provides a schematic representation of a depth texture.
Specifically, in the depth texture generation stage, the terminal may process the image data acquired by the image acquisition device for the target model to determine the depth data of each position in the target model relative to the image acquisition device, and construct a blank texture corresponding to the target model. According to the two-dimensional coordinates corresponding to each position in the target model, the texture coordinates of that position in the blank texture (i.e., its pixel coordinates in the coordinate system of the depth texture) are determined, and the depth data of each position are stored at the corresponding texture coordinates in the blank texture. After every position in the target model has been traversed, a depth texture storing the depth data of the target model is obtained.
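The following WebGL2/TypeScript sketch illustrates one possible form of this generation stage, in which the target model is drawn once into an off-screen framebuffer whose depth attachment then serves as the depth texture; the resolution, texture format and drawModel callback are assumptions made for illustration rather than details specified by this application.

```typescript
// Minimal WebGL2 sketch of the depth texture generation stage; formats and the drawModel()
// callback are illustrative assumptions.
function createDepthTexture(
  gl: WebGL2RenderingContext,
  width: number,
  height: number,
  drawModel: () => void,          // issues the draw calls for the target model
): WebGLTexture {
  // Texture that will store, per texel, the depth of the target model relative to the camera.
  const depthTex = gl.createTexture()!;
  gl.bindTexture(gl.TEXTURE_2D, depthTex);
  gl.texStorage2D(gl.TEXTURE_2D, 1, gl.DEPTH_COMPONENT32F, width, height);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);

  // Off-screen framebuffer with the depth texture as its depth attachment.
  const fbo = gl.createFramebuffer()!;
  gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
  gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, gl.TEXTURE_2D, depthTex, 0);

  // Render the target model once; only depth values are needed, so color writes are disabled.
  gl.viewport(0, 0, width, height);
  gl.colorMask(false, false, false, false);
  gl.enable(gl.DEPTH_TEST);
  gl.clear(gl.DEPTH_BUFFER_BIT);
  drawModel();

  // Restore default state and hand back the filled depth texture.
  gl.colorMask(true, true, true, true);
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);
  return depthTex;
}
```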
In the application stage of the depth texture, the terminal may convert the two-dimensional coordinates of the labeling object to obtain the texture coordinates of the labeling object in the depth texture. For example, point O in fig. 1B is the labeling object, and its two-dimensional coordinates (x1, y1) are converted into the texture coordinates (Px, Py) in fig. 1D. The depth data stored at those texture coordinates in the depth texture are then read, yielding the target depth data corresponding to the labeling object.
And S106, determining the height value of the labeling object according to the target depth data, and generating the target three-dimensional coordinate of the labeling object in the three-dimensional model according to the height value.
Specifically, the terminal may convert the texture coordinates of the labeling object together with the target depth data to generate the spatial three-dimensional coordinates of the labeling object in the world coordinate system (for example, the three-dimensional coordinates (x1, y1, H) of point O in fig. 1B), thereby obtaining the height value of the labeling object in the world coordinate system. The relative height of the labeling object with respect to the target model is then calculated from the height value of the labeling object and the height value corresponding to the target model, and the target three-dimensional coordinates of the labeling object in the three-dimensional model (for example, the three-dimensional coordinates (x3, y3, h) of point O in fig. 1E) are generated.
And step S108, rendering the pixel points at the three-dimensional coordinates of the target by adopting rendering parameters to obtain a three-dimensional rendering result of the labeling object.
Specifically, the terminal may use the rendering parameters corresponding to the labeling object to render the pixel points at the target three-dimensional coordinates in the three-dimensional model, obtaining the three-dimensional rendering result of the labeling object in the three-dimensional model, and then display the three-dimensional rendering result in a front-end display component of the terminal, so that the labeling object is superimposed and displayed in the three-dimensional model. The three-dimensional rendering method for labeling objects provided in this embodiment can further be applied to fields such as three-dimensional scene reporting, big-data dashboards, digital twins and three-dimensional geographic information systems.
In the above three-dimensional rendering method for labeling objects, the two-dimensional coordinates of the labeling object and the rendering parameters of the labeling object are acquired in response to a labeling request for the three-dimensional model, where the three-dimensional model is generated from image data acquired by the image acquisition device for the target model; the target depth data of the labeling object in the depth texture are determined according to the two-dimensional coordinates, where the depth texture is generated from the depth data of the target model relative to the image acquisition device; the height value of the labeling object is determined from the target depth data, and the target three-dimensional coordinates of the labeling object in the three-dimensional model are generated from the height value; and the pixel points at the target three-dimensional coordinates are rendered with the rendering parameters to obtain the three-dimensional rendering result of the labeling object. The height value of the labeling object can be acquired quickly from the target depth data stored in the depth texture, which simplifies the algorithm flow for the target three-dimensional coordinates and thus improves the rendering efficiency of the labeling object. The rendering method provided in this embodiment is therefore suitable both for real-time editing and rendering of dynamic points and for large-scale labeling-point rendering scenarios.
In one embodiment, step S104, determining, according to the two-dimensional coordinates, target depth data of the labeling object in the depth texture includes: and converting the two-dimensional coordinates by adopting a space conversion matrix to obtain texture coordinates corresponding to the marked object, and inquiring in the depth texture by utilizing the texture coordinates to obtain target depth data corresponding to the marked object.
The spatial conversion matrix (MVP matrix) is generated from the Model conversion matrix (Model matrix) of the target model, i.e. the conversion matrix between the model coordinate system established for the target model and the world coordinate system (such as the coordinate system in fig. 1E); the View conversion matrix (View matrix) of the image acquisition device, i.e. the conversion matrix between the view coordinate system established from the image acquisition view angle of the image acquisition device and the world coordinate system; and the Projection conversion matrix (Projection matrix) of the image acquisition device, i.e. the conversion matrix between the projection coordinate system established for the image generated by the image acquisition device and the world coordinate system (such as the coordinate system in fig. 1C). Optionally, in this embodiment, the product of the model conversion matrix, the view conversion matrix and the projection conversion matrix may be used as the spatial conversion matrix.
Specifically, the terminal may have a spatial conversion matrix stored therein in advance. The two-dimensional coordinates of the labeling object are converted by adopting the space conversion matrix, so that the two-dimensional coordinates of the labeling object in the world coordinate system can be converted into the two-dimensional coordinates in the view coordinate system, and then the two-dimensional coordinates in the view coordinate system are converted into the two-dimensional coordinates in the projection coordinate system, namely the texture coordinates corresponding to the labeling object. In the depth texture corresponding to the target model, texture coordinates may be used to query for target depth data corresponding to the annotation object.
In one example, as shown in fig. 1B and 1D, the two-dimensional coordinates (x1, y1) of the labeling object O in the world coordinate system are converted with the spatial conversion matrix to obtain the texture coordinates (Px, Py) of the labeling object O in the depth texture, and the value D at (Px, Py) can then be read from the depth texture.
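A minimal sketch of this forward conversion, using the gl-matrix library, is given below; the depth texture is represented as a plain Float32Array on the CPU side purely for illustration (in practice the lookup would typically happen in a shader), and projecting the query point with a height of zero is an assumption that holds for a top-down orthographic camera.

```typescript
// Illustrative sketch using gl-matrix; the Float32Array stands in for the depth texture contents.
import { mat4, vec4 } from 'gl-matrix';

// Convert the labeling object's two-dimensional coordinates (x1, y1) into texture coordinates
// (Px, Py) with the spatial conversion matrix, then read the stored depth value D.
function lookUpTargetDepth(
  x1: number,
  y1: number,
  mvp: mat4,                 // spatial conversion matrix (product of projection, view and model matrices)
  depthData: Float32Array,   // depth texture contents, row-major
  width: number,
  height: number,
): number {
  // World coordinates -> clip space; the query point's height is taken as 0 (an assumption).
  const clip = vec4.transformMat4(vec4.create(), vec4.fromValues(x1, y1, 0, 1), mvp);
  // Clip space -> normalized device coordinates -> [0, 1] texture coordinates.
  const px = (clip[0] / clip[3]) * 0.5 + 0.5;
  const py = (clip[1] / clip[3]) * 0.5 + 0.5;
  // Texture coordinates -> texel index; the stored value is the target depth data D.
  const ix = Math.min(width - 1, Math.max(0, Math.floor(px * width)));
  const iy = Math.min(height - 1, Math.max(0, Math.floor(py * height)));
  return depthData[iy * width + ix];
}
```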
In this embodiment, the two-dimensional coordinates of the labeling object are converted by using the space conversion matrix, so as to determine the texture coordinates corresponding to the labeling object, so that the accuracy and the determination efficiency of the texture coordinates can be improved, the target depth data of the labeling object can be queried by using the texture coordinates, the acquisition efficiency of the target depth data can be greatly improved, and meanwhile, the query efficiency of the target depth data can be further improved because the loaded depth texture occupies less memory compared with the loaded complete depth data.
In one embodiment, step S106, determining a height value of the labeling object according to the target depth data, and generating a target three-dimensional coordinate of the labeling object in the three-dimensional model according to the height value, includes: and converting the target depth data and the texture coordinates by adopting a space conversion matrix to obtain the space three-dimensional coordinates of the labeling object, determining a height value from the space three-dimensional coordinates, and converting the height value according to the space conversion matrix to generate the target three-dimensional coordinates of the labeling object in the three-dimensional model.
Specifically, the terminal may also perform an inverse operation on the texture coordinates of the labeling object and the target depth data using the spatial conversion matrix provided in the above embodiment, thereby directly determining the spatial three-dimensional coordinates of the labeling object in the world coordinate system, and take the height value of the labeling object in the world coordinate system from these spatial three-dimensional coordinates. The lowest height value of the target model at the two-dimensional coordinates of the labeling object and the height value of the labeling object are then processed to determine the relative height of the labeling object with respect to the target model, and the two-dimensional coordinates and the relative height are converted with the spatial conversion matrix to generate the target three-dimensional coordinates of the labeling object in the three-dimensional model.
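The inverse transformation can be sketched as follows with the gl-matrix library; the function names and the way the model base height is supplied are illustrative assumptions, not details specified by this application.

```typescript
// Illustrative sketch: unproject the texture coordinates and target depth data back to a
// world-space position, then derive the height value; conventions here are assumptions.
import { mat4, vec4 } from 'gl-matrix';

// Recover the labeling object's spatial three-dimensional coordinates from (px, py, depth),
// where px, py are texture coordinates in [0, 1] and depth is the value read from the depth texture.
function unprojectLabel(px: number, py: number, depth: number, mvp: mat4): [number, number, number] {
  const invMvp = mat4.invert(mat4.create(), mvp);
  if (!invMvp) throw new Error('spatial conversion matrix is not invertible');
  // Texture coordinates and depth -> normalized device coordinates in [-1, 1].
  const ndc = vec4.fromValues(px * 2 - 1, py * 2 - 1, depth * 2 - 1, 1);
  // Normalized device coordinates -> world coordinates via the inverse spatial conversion matrix.
  const world = vec4.transformMat4(vec4.create(), ndc, invMvp);
  return [world[0] / world[3], world[1] / world[3], world[2] / world[3]];
}

// The height value H is the z component; the relative height h subtracts the model's base height,
// which is assumed here to be known (e.g. the lowest height of the target model at this position).
function relativeHeight(worldPos: [number, number, number], modelBaseHeight: number): number {
  return worldPos[2] - modelBaseHeight;
}
```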
In this embodiment, the target three-dimensional coordinates of the labeling object in the three-dimensional model are obtained by performing inverse transformation processing on the target depth data and the texture coordinates by using the spatial transformation matrix, so that the efficiency of obtaining the target three-dimensional coordinates can be improved.
In one embodiment, as shown in fig. 2, there is further provided a flowchart of an adjustment step of the image capturing device, including:
step S202, obtaining the current corresponding viewpoint position and the current corresponding view range of an image acquisition device;
step S204, according to the viewpoint position and the scope, the adjusting parameters of the image acquisition device are determined.
Step S206, adjusting the pose of the image acquisition device by adopting the adjustment parameters to obtain the image acquisition device for acquiring the image aiming at the preset angle.
Wherein the viewpoint position may be used to characterize the position where the image acquisition device is located.
The view volume range may be used to characterize an image acquisition range of the image acquisition device.
The preset angle can be set according to the user's requirements. Preferably, since the size of the target model does not change under orthographic projection, the angle photographed from directly above looking straight down can be adopted as the preset angle.
Specifically, the terminal may further obtain the viewpoint position and the view volume range currently corresponding to the image acquisition device, compute from them the adjustment parameters needed to move the current pose of the image acquisition device to the target pose at the preset angle, and adjust the pose of the image acquisition device with these adjustment parameters to obtain an image acquisition device that acquires images at the preset angle, for example one that photographs from directly above in orthographic projection, or one that photographs in oblique axonometric projection from a forty-five degree angle above the target model.
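A minimal sketch of such a pose adjustment is given below, assuming that an axis-aligned bounding box of the target model is available and using the gl-matrix library; placing the viewpoint directly above the centre of the box and enclosing the box in an orthographic view volume yields an image acquisition device that photographs from directly above. The bounding-box input and the field names are assumptions for illustration.

```typescript
// Illustrative sketch: derive an adjusted camera pose (eye position and orthographic view volume)
// that looks straight down at the target model.
import { vec3 } from 'gl-matrix';

interface ViewVolume {
  left: number; right: number; bottom: number; top: number; near: number; far: number;
}

interface CameraPose {
  eye: vec3;          // adjusted viewpoint position, directly above the model
  center: vec3;       // point the camera looks at
  up: vec3;           // up direction in the horizontal plane
  volume: ViewVolume; // orthographic view volume enclosing the model
}

function topDownPose(min: vec3, max: vec3, margin = 1.0): CameraPose {
  const cx = (min[0] + max[0]) / 2;
  const cy = (min[1] + max[1]) / 2;
  const eyeHeight = max[2] + margin;           // hover just above the highest point of the model
  return {
    eye: vec3.fromValues(cx, cy, eyeHeight),
    center: vec3.fromValues(cx, cy, min[2]),
    up: vec3.fromValues(0, 1, 0),
    volume: {
      left: min[0] - cx, right: max[0] - cx,
      bottom: min[1] - cy, top: max[1] - cy,
      near: 0, far: eyeHeight - min[2] + margin,
    },
  };
}
```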
In this embodiment, the adjustment parameters are calculated according to the viewpoint position and the view range of the image acquisition device, and the image acquisition device is converted into the device for image acquisition under the preset angle by adopting the adjustment parameters, so that the subsequent coordinate system conversion difficulty of the image acquisition device can be simplified, the subsequent data processing efficiency is improved, and meanwhile, certain flexibility can be improved.
In one embodiment, the terminal may further acquire a target viewpoint position and a target view range of the image capturing device that captures an image for a preset angle. And generating a view conversion matrix corresponding to the image acquisition device according to the target viewpoint position and the target view range. And generating a projection conversion matrix of the image acquisition device according to the imaging parameters of the image acquisition device. And generating a space conversion matrix corresponding to the target model and the image acquisition device according to the model conversion matrix, the view conversion matrix and the projection conversion matrix of the target model.
Specifically, after the terminal obtains the adjusted image acquisition device for image acquisition aiming at the preset angle, a view coordinate system corresponding to the current image acquisition device can be established according to the target viewpoint position and the target view range of the current image acquisition device, namely, the target viewpoint position and the target view range corresponding to the preset angle, so that a view conversion matrix corresponding to the current image acquisition device is calculated. And acquiring one or more of various imaging parameters such as focal length, depth of field, magnification and the like of the image acquisition device, and performing operation processing on the imaging parameters of the image acquisition device and the target view range to generate a projection conversion matrix corresponding to the current image acquisition device. And establishing a model coordinate system corresponding to the target model, and further generating a model conversion matrix corresponding to the target model. The model conversion matrix of the target model, the view conversion matrix of the image acquisition device and the projection conversion matrix are used as space conversion matrices corresponding to the target model and the image acquisition device, and the product of the three matrices is used as an output result of the space conversion matrices.
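One possible way of assembling these matrices with the gl-matrix library is sketched below; an orthographic projection is assumed to match the preset top-down angle, and the model conversion matrix defaults to the identity when the target model is already expressed in world coordinates. These choices, and the function names, are assumptions made for illustration.

```typescript
// Illustrative sketch: build View and Projection matrices for the adjusted camera and combine them
// with the Model matrix into the spatial conversion matrix; an orthographic projection is assumed.
import { mat4, vec3 } from 'gl-matrix';

function buildSpatialConversionMatrix(
  eye: vec3, center: vec3, up: vec3,                 // target viewpoint position and orientation
  volume: { left: number; right: number; bottom: number; top: number; near: number; far: number },
  model: mat4 = mat4.create(),                        // model conversion matrix (identity by default)
): mat4 {
  // View conversion matrix from the target viewpoint position and view direction.
  const view = mat4.lookAt(mat4.create(), eye, center, up);
  // Projection conversion matrix from the view volume range (imaging parameters).
  const proj = mat4.ortho(mat4.create(), volume.left, volume.right, volume.bottom, volume.top,
                          volume.near, volume.far);
  // Spatial conversion matrix as the product projection * view * model.
  const mvp = mat4.create();
  mat4.multiply(mvp, proj, view);
  mat4.multiply(mvp, mvp, model);
  return mvp;
}
```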
In this embodiment, by calculating the corresponding spatial transformation matrix according to the viewpoint position and the view range of the current image acquisition device, the accuracy of the spatial transformation matrix can be improved, so that the accuracy of data obtained by processing based on the spatial transformation matrix can be improved.
In one embodiment, step S108, rendering the pixel point at the three-dimensional coordinate of the target by using the rendering parameter, to obtain a three-dimensional rendering result of the labeling object, includes: in the non-front-end display component, rendering the pixel points at the three-dimensional coordinates of the target by adopting rendering parameters through the graphic processor to obtain a three-dimensional rendering result of the marked object.
Specifically, the terminal may use off-screen rendering technology to turn off effects such as illumination and texture in the non-front-end display component, and render the pixel points at the target three-dimensional coordinates in the three-dimensional model with the rendering parameters in the fragment shader through the graphics processor (Graphics Processing Unit, GPU for short) to obtain the three-dimensional rendering result of the labeling object.
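As an illustration of this off-screen label pass, the following sketch shows a minimal WebGL2 fragment shader (embedded as a TypeScript string) that simply writes the rendering-parameter colour with lighting and texturing turned off; the uniform name is an assumption made for illustration.

```typescript
// Illustrative sketch: a minimal fragment shader for the off-screen label pass.
const labelFragmentShaderSource = `#version 300 es
precision highp float;
uniform vec4 uLabelColor;   // rendering parameter of the labeling object
out vec4 outColor;
void main() {
  outColor = uLabelColor;   // no lighting, no texture: the label pixel is written directly
}`;

function compileLabelFragmentShader(gl: WebGL2RenderingContext): WebGLShader {
  const shader = gl.createShader(gl.FRAGMENT_SHADER)!;
  gl.shaderSource(shader, labelFragmentShaderSource);
  gl.compileShader(shader);
  if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
    throw new Error(gl.getShaderInfoLog(shader) ?? 'fragment shader compilation failed');
  }
  return shader;
}
```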
In this embodiment, the off-screen rendering technology and the graphics processor are adopted to perform three-dimensional rendering processing, so that the rendering performance and the rendering efficiency can be improved.
In one embodiment, as shown in fig. 3, there is provided a three-dimensional rendering method for labeling objects, including:
step S302, adjusting the pose of the image acquisition device according to the viewpoint position and the scope of the view body, and establishing a space conversion matrix corresponding to the target model.
Specifically, the terminal may acquire the current viewpoint position and view volume range of the image acquisition device and compute adjustment parameters for the image acquisition device. The pose of the image acquisition device is adjusted with these parameters, and the adjusted image acquisition device is used to acquire images of the target model. A view conversion matrix and a projection conversion matrix corresponding to the image acquisition device are established from the viewpoint position, the view volume range and the imaging parameters of the image acquisition device, and the spatial conversion matrix corresponding to the target model is established from the view conversion matrix, the projection conversion matrix and the model conversion matrix of the target model.
Step S304, a three-dimensional model corresponding to the target model is built in the non-front-end display component through a graphic processor, and a depth texture corresponding to the target model is generated.
Specifically, the terminal can turn off effects such as illumination, texture and the like in the non-front-end display component by using an off-screen rendering mode, process image data acquired by the image acquisition device, and draw oblique photography model data corresponding to the target model, namely a three-dimensional model corresponding to the target model through the graphic processor. And simultaneously recording depth data between the target model and the image acquisition device to form a depth texture corresponding to the target model.
Step S306, in response to the labeling request for the three-dimensional model, the two-dimensional coordinates of the labeling object and the rendering parameters of the labeling object are obtained.
Specifically, the terminal may obtain, in response to a request for labeling the three-dimensional model, two-dimensional coordinates of a labeled object input by a user and rendering parameters of the labeled object. And inputting the depth texture of the target model, the space transformation matrix and the two-dimensional coordinates of the labeling object into a point labeling shader for rendering.
And step 308, converting the two-dimensional coordinates by using a space conversion matrix to obtain texture coordinates corresponding to the labeling object, and inquiring by using the texture coordinates to obtain target depth data.
And step S310, converting the target depth data and the texture coordinates by adopting a space conversion matrix, determining a height value corresponding to the labeling object, and determining the relative height of the labeling object according to the height value.
Step S312, the two-dimensional coordinates and the relative height are converted by using the space conversion matrix, and the target three-dimensional coordinates of the labeling object in the three-dimensional model are generated.
Specifically, the terminal may convert the two-dimensional coordinates in the vertex shader by using a spatial conversion matrix to obtain texture coordinates corresponding to the labeling object, and query the depth texture by using the texture coordinates to obtain target depth data corresponding to the labeling object. And converting the target depth data and the texture coordinates by adopting a space conversion matrix, and determining a height value corresponding to the marked object. And determining the relative height of the labeling object relative to the target model according to the height value and the height data of the target model. And converting the two-dimensional coordinates and the relative height by adopting a space conversion matrix to generate target three-dimensional coordinates of the labeling object in the three-dimensional model.
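A sketch of what such a point-labeling vertex shader might look like is given below, written as a WebGL2 (GLSL ES 3.0) shader string; the attribute and uniform names, and the use of the same spatial conversion matrix for both the depth lookup and the final position, are assumptions made purely for illustration rather than the disclosed implementation.

```typescript
// Illustrative GLSL sketch of the point-labeling vertex shader: it converts the two-dimensional
// coordinate to texture coordinates, samples the depth texture, unprojects to recover the height,
// and outputs the final position of the label point.
const labelVertexShaderSource = `#version 300 es
precision highp float;

in vec2 aLabelXY;              // two-dimensional coordinate of the labeling object
uniform mat4 uMvp;             // spatial conversion matrix
uniform mat4 uInvMvp;          // inverse of the spatial conversion matrix
uniform sampler2D uDepthTex;   // depth texture of the target model

void main() {
  // Two-dimensional coordinate -> texture coordinates (Px, Py).
  vec4 clip = uMvp * vec4(aLabelXY, 0.0, 1.0);
  vec2 uv = clip.xy / clip.w * 0.5 + 0.5;

  // Texture coordinates -> target depth data D.
  float d = texture(uDepthTex, uv).r;

  // (Px, Py, D) -> spatial three-dimensional coordinate, from which the height value is taken.
  vec4 world = uInvMvp * vec4(uv * 2.0 - 1.0, d * 2.0 - 1.0, 1.0);
  world /= world.w;

  // Place the label point at the recovered height on the model surface.
  gl_Position = uMvp * vec4(aLabelXY, world.z, 1.0);
  gl_PointSize = 8.0;
}`;
```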
And step S314, rendering the pixel points at the three-dimensional coordinates of the target by adopting rendering parameters through a graphic processor in the non-front-end display component to obtain a three-dimensional rendering result of the marked object.
Specifically, the terminal can render the pixel points at the target three-dimensional coordinates in the three-dimensional model by using the rendering parameters of the labeling objects in the fragment shader through the graphics processor in the non-front-end display component in an off-screen rendering mode, so as to obtain the three-dimensional rendering result of the labeling objects.
In the embodiment, two-dimensional coordinates of an annotation object and rendering parameters of the annotation object are acquired by responding to an annotation request of the three-dimensional model, and the three-dimensional model is generated according to image data acquired by an image acquisition device for a target model; determining target depth data of the labeling object in the depth texture according to the two-dimensional coordinates, wherein the depth texture is generated according to the depth data of the target model relative to the image acquisition device; determining the height value of the labeling object according to the target depth data, and generating a target three-dimensional coordinate of the labeling object in the three-dimensional model according to the height value; and rendering the pixel points at the target three-dimensional coordinates by using rendering parameters to obtain a three-dimensional rendering result of the marked object, and rapidly acquiring the height value of the marked object based on the target depth data stored in the depth texture, so that the algorithm flow of the target three-dimensional coordinates is simplified, and the rendering efficiency of the marked object is improved.
It should be understood that, although the steps in the flowcharts related to the embodiments described above are sequentially shown as indicated by arrows, these steps are not necessarily sequentially performed in the order indicated by the arrows. The steps are not strictly limited to the order of execution unless explicitly recited herein, and the steps may be executed in other orders. Moreover, at least some of the steps in the flowcharts described in the above embodiments may include a plurality of steps or a plurality of stages, which are not necessarily performed at the same time, but may be performed at different times, and the order of the steps or stages is not necessarily performed sequentially, but may be performed alternately or alternately with at least some of the other steps or stages.
Based on the same inventive concept, the embodiment of the application also provides a three-dimensional rendering device for the labeling object for realizing the three-dimensional rendering method of the labeling object. The implementation scheme of the solution provided by the device is similar to the implementation scheme described in the above method, so the specific limitation in the embodiments of the three-dimensional rendering device for one or more labeling objects provided below may refer to the limitation of the three-dimensional rendering method for the labeling objects described above, and will not be repeated herein.
In one embodiment, as shown in fig. 4, there is provided a three-dimensional rendering apparatus 400 for labeling objects, including: a request response module 402, a depth acquisition module 404, a height acquisition module 406, and a pixel rendering module 408, wherein:
the request response module 402 is configured to obtain, in response to a labeling request for a three-dimensional model, two-dimensional coordinates of a labeling object and rendering parameters of the labeling object, where the three-dimensional model is generated according to image data acquired by the image acquisition device for the target model.
The depth acquisition module 404 is configured to determine, according to the two-dimensional coordinates, target depth data of the labeling object in a depth texture, where the depth texture is generated according to the depth data of the target model relative to the image acquisition device.
The height obtaining module 406 is configured to determine a height value of the labeling object according to the target depth data, and generate a target three-dimensional coordinate of the labeling object in the three-dimensional model according to the height value.
The pixel rendering module 408 is configured to render the pixel point at the target three-dimensional coordinate by using the rendering parameter, so as to obtain a three-dimensional rendering result of the labeling object.
In one embodiment, the depth acquisition module 404 includes: the coordinate acquisition unit is used for converting the two-dimensional coordinates by adopting a space conversion matrix to obtain texture coordinates corresponding to the labeling object, wherein the space conversion matrix is generated according to a model conversion matrix of the target model, a view conversion matrix of the image acquisition device and a projection conversion matrix of the image acquisition device; and the data query unit is used for querying the depth texture by using the texture coordinates to obtain target depth data corresponding to the marked object.
In one embodiment, the height acquisition module 406 includes: the coordinate conversion unit is used for converting the target depth data and the texture coordinates by adopting a space conversion matrix to obtain space three-dimensional coordinates of the marked object; the coordinate generation module is used for determining the height value from the space three-dimensional coordinates, converting the height value according to the space conversion matrix and generating the target three-dimensional coordinates of the labeling object in the three-dimensional model.
In one embodiment, the three-dimensional rendering apparatus 400 for labeling objects further includes: the pose acquisition module is used for acquiring the current corresponding viewpoint position and the scope of the view body of the image acquisition device; the parameter determining module is used for determining the adjustment parameters of the image acquisition device according to the viewpoint position and the view range; the device adjusting unit is used for adjusting the pose of the image acquisition device by adopting the adjusting parameters to obtain the image acquisition device for acquiring the image aiming at the preset angle.
In one embodiment, the three-dimensional rendering apparatus 400 for labeling objects further includes: the matrix generation module is used for acquiring a target viewpoint position and a target view range of an image acquisition device for image acquisition aiming at a preset angle; generating a view conversion matrix corresponding to the image acquisition device according to the target viewpoint position and the target view range; generating a projection conversion matrix of the image acquisition device according to imaging parameters of the image acquisition device; and generating a space conversion matrix corresponding to the target model and the image acquisition device according to the model conversion matrix, the view conversion matrix and the projection conversion matrix of the target model.
In one embodiment, the pixel rendering module 408 is further configured to render, in the non-front-end display component, the pixel points at the target three-dimensional coordinates with rendering parameters by the graphics processor, to obtain a three-dimensional rendering result of the labeled object.
The modules in the three-dimensional rendering device for labeling objects can be implemented in whole or in part by software, hardware, and a combination thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a terminal, and the internal structure of which may be as shown in fig. 5. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless mode can be realized through WIFI, a mobile cellular network, NFC (near field communication) or other technologies. The computer program, when executed by a processor, implements a method of three-dimensional rendering of a labeling object. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, can also be keys, a track ball or a touch pad arranged on the shell of the computer equipment, and can also be an external keyboard, a touch pad or a mouse and the like.
It will be appreciated by those skilled in the art that the structure shown in fig. 5 is merely a block diagram of some of the structures associated with the present application and is not limiting of the computer device to which the present application may be applied, and that a particular computer device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In an embodiment, there is also provided a computer device comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the method embodiments described above when the computer program is executed.
In one embodiment, there is also provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
In an embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which when executed may comprise the steps of the embodiments of the methods described above. Any reference to memory, database, or other medium used in embodiments provided herein may include at least one of non-volatile and volatile memory. The nonvolatile Memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash Memory, optical Memory, high density embedded nonvolatile Memory, resistive random access Memory (ReRAM), magnetic random access Memory (Magnetoresistive Random Access Memory, MRAM), ferroelectric Memory (Ferroelectric Random Access Memory, FRAM), phase change Memory (Phase Change Memory, PCM), graphene Memory, and the like. Volatile memory can include random access memory (Random Access Memory, RAM) or external cache memory, and the like. By way of illustration, and not limitation, RAM can be in the form of a variety of forms, such as static random access memory (Static Random Access Memory, SRAM) or dynamic random access memory (Dynamic Random Access Memory, DRAM), and the like. The databases referred to in the various embodiments provided herein may include at least one of relational databases and non-relational databases. The non-relational database may include, but is not limited to, a blockchain-based distributed database, and the like. The processors referred to in the embodiments provided herein may be general purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, quantum computing-based data processing logic units, etc., without being limited thereto.
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The above examples only represent a few embodiments of the present application, which are described in more detail and are not to be construed as limiting the scope of the present application. It should be noted that variations and modifications can be made by those skilled in the art without departing from the spirit of the present application, which falls within the scope of the present application. Accordingly, the scope of protection of the present application shall be subject to the appended claims.

Claims (10)

1. A method of three-dimensional rendering of a labeled object, the method comprising:
responding to a labeling request for a three-dimensional model, and acquiring two-dimensional coordinates of a labeling object and rendering parameters of the labeling object, wherein the three-dimensional model is generated according to image data acquired by an image acquisition device for a target model;
determining target depth data of the labeling object in a depth texture according to the two-dimensional coordinates, wherein the depth texture is generated according to the depth data of the target model relative to the image acquisition device;
determining a height value of the labeling object according to the target depth data, and generating a target three-dimensional coordinate of the labeling object in the three-dimensional model according to the height value;
and rendering the pixel points at the target three-dimensional coordinates by adopting the rendering parameters to obtain a three-dimensional rendering result of the labeling object.
2. The method of claim 1, wherein determining the target depth data of the labeling object in the depth texture according to the two-dimensional coordinates comprises:
converting the two-dimensional coordinates by adopting a space conversion matrix to obtain texture coordinates corresponding to the labeling object, wherein the space conversion matrix is generated according to a model conversion matrix of the target model, a view conversion matrix of the image acquisition device and a projection conversion matrix of the image acquisition device;
and querying the depth texture according to the texture coordinates to obtain the target depth data corresponding to the labeling object.
3. The method of claim 2, wherein determining the height value of the labeling object according to the target depth data and generating the target three-dimensional coordinates of the labeling object in the three-dimensional model according to the height value comprises:
converting the target depth data and the texture coordinates by adopting the space conversion matrix to obtain space three-dimensional coordinates of the labeling object;
and determining the height value from the space three-dimensional coordinates and performing conversion processing on the height value according to the space conversion matrix to generate the target three-dimensional coordinates of the labeling object in the three-dimensional model.
4. The method according to claim 1, wherein the method further comprises:
acquiring a current viewpoint position and a current view range of the image acquisition device;
determining an adjustment parameter of the image acquisition device according to the viewpoint position and the view range;
and adjusting the pose of the image acquisition device by adopting the adjustment parameter to obtain an image acquisition device that performs image acquisition at a preset angle.
5. The method according to claim 4, wherein the method further comprises:
acquiring a target viewpoint position and a target view range of the image acquisition device that performs image acquisition at the preset angle;
generating a view conversion matrix corresponding to the image acquisition device according to the target viewpoint position and the target view range;
generating a projection conversion matrix of the image acquisition device according to imaging parameters of the image acquisition device;
and generating a space conversion matrix corresponding to the target model and the image acquisition device according to the model conversion matrix of the target model, the view conversion matrix and the projection conversion matrix.
6. The method according to claim 1, wherein rendering the pixel points at the target three-dimensional coordinates by adopting the rendering parameters to obtain the three-dimensional rendering result of the labeling object comprises:
in a non-front-end display component, rendering the pixel points at the target three-dimensional coordinates by adopting the rendering parameters through a graphics processor to obtain the three-dimensional rendering result of the labeling object.
7. A three-dimensional rendering device for a labeling object, the device comprising:
the request response module is used for responding to a labeling request for a three-dimensional model, acquiring two-dimensional coordinates of a labeling object and rendering parameters of the labeling object, wherein the three-dimensional model is generated according to image data acquired by an image acquisition device for a target model;
the depth acquisition module is used for determining target depth data of the labeling object in a depth texture according to the two-dimensional coordinates, and the depth texture is generated according to the depth data of the target model relative to the image acquisition device;
the height acquisition module is used for determining the height value of the labeling object according to the target depth data and generating a target three-dimensional coordinate of the labeling object in the three-dimensional model according to the height value;
and the pixel rendering module is used for rendering the pixel points at the target three-dimensional coordinates by adopting the rendering parameters to obtain a three-dimensional rendering result of the labeling object.
8. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any of claims 1 to 6 when the computer program is executed.
9. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 6.
10. A computer program product comprising a computer program, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 6.
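To make the claimed flow easier to follow outside of claim language, the following Python sketch walks through one possible reading of claims 1 to 6: building the space conversion matrix from the model, view and projection matrices, mapping the two-dimensional coordinates of the labeling object to texture coordinates, querying the depth texture, recovering the height value, and producing the target three-dimensional coordinate. All function names, the matrix conventions, and the nearest-texel depth lookup are illustrative assumptions, not the implementation claimed above.

    import numpy as np

    def make_space_matrix(model: np.ndarray, view: np.ndarray, proj: np.ndarray) -> np.ndarray:
        # Claim 5 (assumed convention): combine the model, view and projection
        # conversion matrices (4x4, column-vector convention) into one space matrix.
        return proj @ view @ model

    def to_texture_coords(xy, viewport):
        # Claim 2 (assumed mapping): screen-space pixel coordinates -> [0, 1] texture coordinates.
        w, h = viewport
        return np.array([xy[0] / w, 1.0 - xy[1] / h])  # flip Y: screen origin is top-left

    def sample_depth(depth_texture: np.ndarray, uv: np.ndarray) -> float:
        # Claim 2: query the depth texture at the texture coordinates (nearest-texel lookup here).
        rows, cols = depth_texture.shape
        col = min(int(uv[0] * cols), cols - 1)
        row = min(int(uv[1] * rows), rows - 1)
        return float(depth_texture[row, col])

    def unproject(uv, depth, space_matrix):
        # Claim 3: convert texture coordinates plus depth back to spatial three-dimensional
        # coordinates by inverting the space conversion matrix (OpenGL-style NDC assumed).
        ndc = np.array([uv[0] * 2.0 - 1.0, uv[1] * 2.0 - 1.0, depth * 2.0 - 1.0, 1.0])
        world = np.linalg.inv(space_matrix) @ ndc
        return world[:3] / world[3]  # perspective divide

    def render_label(xy, viewport, depth_texture, model, view, proj, render_params):
        # Claims 1 to 3: end-to-end flow from two-dimensional coordinates to the
        # target three-dimensional coordinate of the labeling object.
        space_matrix = make_space_matrix(model, view, proj)
        uv = to_texture_coords(xy, viewport)
        depth = sample_depth(depth_texture, uv)
        spatial = unproject(uv, depth, space_matrix)
        height = spatial[2]  # take the height value from the spatial three-dimensional coordinates
        target = np.array([spatial[0], spatial[1], height])
        # Claims 1 and 6: the pixel at `target` would then be rendered with `render_params`,
        # typically by a graphics processor; here we simply return the inputs of that pass.
        return target, render_params

In a GPU implementation as described in claim 6, the depth lookup and the unprojection would normally run in a shader of a non-front-end rendering pass rather than on the CPU; the NumPy version above only illustrates the arithmetic.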
CN202211668155.3A 2022-12-24 2022-12-24 Three-dimensional rendering method and device for labeling objects, computer equipment and storage medium Pending CN116109765A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211668155.3A CN116109765A (en) 2022-12-24 2022-12-24 Three-dimensional rendering method and device for labeling objects, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211668155.3A CN116109765A (en) 2022-12-24 2022-12-24 Three-dimensional rendering method and device for labeling objects, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116109765A true CN116109765A (en) 2023-05-12

Family

ID=86263025

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211668155.3A Pending CN116109765A (en) 2022-12-24 2022-12-24 Three-dimensional rendering method and device for labeling objects, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116109765A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116883563A (en) * 2023-05-18 2023-10-13 苏州高新区测绘事务所有限公司 Method, device, computer equipment and storage medium for rendering annotation points
CN116883563B (en) * 2023-05-18 2024-01-16 苏州高新区测绘事务所有限公司 Method, device, computer equipment and storage medium for rendering annotation points
CN117456550A (en) * 2023-12-21 2024-01-26 绘见科技(深圳)有限公司 MR-based CAD file viewing method, device, medium and equipment
CN117456550B (en) * 2023-12-21 2024-03-15 绘见科技(深圳)有限公司 MR-based CAD file viewing method, device, medium and equipment

Similar Documents

Publication Publication Date Title
US20230053462A1 (en) Image rendering method and apparatus, device, medium, and computer program product
CN116109765A (en) Three-dimensional rendering method and device for labeling objects, computer equipment and storage medium
TWI637355B (en) Methods of compressing a texture image and image data processing system and methods of generating a 360-degree panoramic video thereof
CN113643414B (en) Three-dimensional image generation method and device, electronic equipment and storage medium
CN110503718B (en) Three-dimensional engineering model lightweight display method
CN103914876A (en) Method and apparatus for displaying video on 3D map
CN113628331B (en) Data organization and scheduling method for photogrammetry model in illusion engine
CN114782648A (en) Image processing method, image processing device, electronic equipment and storage medium
CN112288878B (en) Augmented reality preview method and preview device, electronic equipment and storage medium
CN114298982A (en) Image annotation method and device, computer equipment and storage medium
CN116758206A (en) Vector data fusion rendering method and device, computer equipment and storage medium
CN112070854A (en) Image generation method, device, equipment and storage medium
CN114332356A (en) Virtual and real picture combining method and device
CN112652056B (en) 3D information display method and device
CN112634439B (en) 3D information display method and device
CN116128744A (en) Method for eliminating image distortion, electronic device, storage medium and vehicle
CN110196638B (en) Mobile terminal augmented reality method and system based on target detection and space projection
CN108986183B (en) Method for manufacturing panoramic map
CN106991643B (en) Real-time line checking method and real-time line checking system with low resource consumption
CN117557711B (en) Method, device, computer equipment and storage medium for determining visual field
CN116883563B (en) Method, device, computer equipment and storage medium for rendering annotation points
CN116883575B (en) Building group rendering method, device, computer equipment and storage medium
CN117456550B (en) MR-based CAD file viewing method, device, medium and equipment
CN115601512B (en) Interactive three-dimensional reconstruction method and device, computer equipment and storage medium
CN117611781B (en) Flattening method and device for live-action three-dimensional model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination