CN114820980A - Three-dimensional reconstruction method and device, electronic equipment and readable storage medium - Google Patents


Info

Publication number
CN114820980A
Authority
CN
China
Prior art keywords
texture information
patch
dimensional
virtual point
panorama
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210439570.5A
Other languages
Chinese (zh)
Inventor
焦少慧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Youzhuju Network Technology Co Ltd
Original Assignee
Beijing Youzhuju Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Youzhuju Network Technology Co Ltd filed Critical Beijing Youzhuju Network Technology Co Ltd
Priority to CN202210439570.5A priority Critical patent/CN114820980A/en
Publication of CN114820980A publication Critical patent/CN114820980A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20Finite element generation, e.g. wire-frame surface description, tesselation
    • G06T17/205Re-meshing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/04Texture mapping

Abstract

The application discloses a three-dimensional reconstruction method, a three-dimensional reconstruction device, an electronic device and a readable storage medium, belonging to the technical field of image data processing. The three-dimensional reconstruction method provided by the application comprises the following steps: matching texture information for a plurality of patches in a three-dimensional mesh model, and determining a first virtual point location in the case that a first patch in the three-dimensional mesh model is not matched with corresponding texture information; rendering a first panorama of the three-dimensional mesh model based on the first virtual point location, and determining a first area in the first panorama, wherein the first area corresponds to the first patch; and determining target texture information according to the first panorama and the first area, and assigning the target texture information to the first patch.

Description

Three-dimensional reconstruction method and device, electronic equipment and readable storage medium
Technical Field
The application belongs to the technical field of image data processing, and particularly relates to a three-dimensional reconstruction method, a three-dimensional reconstruction device, electronic equipment and a readable storage medium.
Background
In the prior art, the three-dimensional mesh model is the most intuitive way to display a scene. When acquiring data of a scene, limitations of the image acquisition equipment (a discrete, limited set of fixed acquisition points), together with occlusion and self-occlusion, mean that the acquired image data is incomplete and a complete three-dimensional mesh model cannot be constructed.
Disclosure of Invention
An object of the embodiments of the present application is to provide a three-dimensional reconstruction method, a three-dimensional reconstruction apparatus, an electronic device and a readable storage medium in which, when the acquired image data (texture information) is incomplete, a virtual point location is constructed, the missing texture information is estimated, and the estimated texture information is filled into the three-dimensional mesh model, so that the texture information is effectively supplemented and the completeness of the constructed model is ensured.
In a first aspect, an embodiment of the present application provides a three-dimensional reconstruction method, where the three-dimensional reconstruction method includes: matching texture information for a plurality of patches in the three-dimensional mesh model, and determining a first virtual point location in the case that a first patch in the three-dimensional mesh model is not matched with corresponding texture information; rendering a first panorama of the three-dimensional mesh model based on the first virtual point location, and determining a first area in the first panorama, wherein the first area corresponds to the first patch; and determining target texture information according to the first panorama and the first area, and assigning the target texture information to the first patch.
In a second aspect, an embodiment of the present application provides a three-dimensional reconstruction apparatus, including a processing unit configured to: match texture information for a plurality of patches in the three-dimensional mesh model, and determine a first virtual point location in the case that a first patch in the three-dimensional mesh model is not matched with corresponding texture information; render a first panorama of the three-dimensional mesh model based on the first virtual point location, and determine a first area in the first panorama, wherein the first area corresponds to the first patch; and determine target texture information according to the first panorama and the first area, and assign the target texture information to the first patch.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a memory, a processor, and a program or instructions stored on the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the three-dimensional reconstruction method according to the first aspect.
In a fourth aspect, the present application provides a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the three-dimensional reconstruction method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the steps of the three-dimensional reconstruction method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product, stored in a storage medium, for execution by at least one processor to implement the three-dimensional reconstruction method according to the first aspect.
In the embodiment of the application, in the process of reconstructing the three-dimensional mesh model of the target scene, a first virtual point location is constructed for the first patch lacking texture information in the model. The target texture information of the first patch is estimated from the first panorama acquired at the first virtual point location and from the first area of the first panorama corresponding to the patch lacking texture information; the three-dimensional mesh model is then filled with the estimated target texture information. Supplementing the missing texture information in this way effectively guarantees the completeness of the three-dimensional mesh model and improves the display effect of the three-dimensional mesh model of the target scene.
Drawings
Fig. 1 shows one of the flow diagrams of a three-dimensional reconstruction method provided by an embodiment of the present application;
fig. 2 illustrates a second flowchart of a three-dimensional reconstruction method provided in an embodiment of the present application;
fig. 3 shows one of schematic diagrams of a three-dimensional reconstruction method provided by an embodiment of the present application;
fig. 4 shows a second schematic diagram of a three-dimensional reconstruction method provided by the embodiment of the present application;
fig. 5 shows a third schematic diagram of a three-dimensional reconstruction method provided by an embodiment of the present application;
fig. 6 shows a fourth schematic diagram of a three-dimensional reconstruction method provided by an embodiment of the present application;
fig. 7 shows a fifth schematic diagram of a three-dimensional reconstruction method provided by an embodiment of the present application;
fig. 8 shows a third flowchart of a three-dimensional reconstruction method provided in an embodiment of the present application;
fig. 9 is a block diagram illustrating a three-dimensional reconstruction apparatus provided in an embodiment of the present application;
fig. 10 shows a block diagram of an electronic device provided in an embodiment of the present application;
fig. 11 shows a hardware structure diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present disclosure.
The terms "first", "second" and the like in the description and claims of the present application are used to distinguish between similar elements and not necessarily to describe a particular sequential or chronological order. It will be appreciated that data so used may be interchanged under appropriate circumstances, so that embodiments of the application may be practised in sequences other than those illustrated or described herein; moreover, the terms "first", "second" and the like are generally used in a generic sense and do not limit the number of objects, e.g. a first object may be one or more than one. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates that the related objects before and after it are in an "or" relationship.
For ease of understanding, some of the technical terms referred to in this application are briefly described below:
three-dimensional grid model: three-dimensional mesh models are polygonal representations of objects, typically displayed by a computer or other video device. The displayed object may be a real-world entity or a fictional object. Anything that exists in physical nature can be represented by a three-dimensional model. In the embodiment of the application, the three-dimensional model of the object is used for indicating the three-dimensional structure and size information of the target scene (such as a house). The data storage form of the three-dimensional model is various, for example, the data storage form is represented in the form of a three-dimensional point cloud, a grid or a voxel, and the like, and the data storage form is not limited herein.
Patch: a patch is the smallest planar building block of a three-dimensional mesh model. In rendering, a model in space generally needs to be divided into a large number of small planes. These planes, called patches, can be any polygon, most commonly triangles and quadrilaterals. The shared edges of these patches meet at the patches' vertices. Patches may be divided according to information such as the material or color of the model.
Panorama: a panorama in the broad sense is a wide-angle view, i.e. an image with a large angle of view. A panorama can be produced by different projection modes, commonly including equiangular projection, equirectangular projection, orthographic projection and equal-area projection, which are not limited herein.
The three-dimensional reconstruction method, the three-dimensional reconstruction apparatus, the electronic device, and the readable storage medium provided in the embodiments of the present application are described in detail below with reference to fig. 1 to 11 through specific embodiments and application scenarios thereof.
A three-dimensional reconstruction method is provided in an embodiment of the present application, and fig. 1 shows one of the flow diagrams of the three-dimensional reconstruction method provided in the embodiment of the present application, and as shown in fig. 1, the three-dimensional reconstruction method includes:
step 102: matching texture information for a plurality of patches in the three-dimensional mesh model, and determining a first virtual point location under the condition that a first patch in the three-dimensional mesh model is not matched with corresponding texture information.
Specifically, the three-dimensional mesh model has a plurality of patches, and after the three-dimensional mesh model is constructed, texture mapping needs to be performed on the three-dimensional mesh model, and in the process of texture mapping, texture information needs to be matched for each patch in the three-dimensional mesh model.
Specifically, texture information may be matched for a plurality of patches in the three-dimensional mesh model through a texture estimation algorithm.
Further, a first patch of the plurality of patches may be a patch of the three-dimensional mesh model lacking texture information. In general, there are multiple first patches, and their missing texture information may leave the texture-mapped three-dimensional mesh model incompletely expressed. Therefore, the first patches lacking texture information need to be filled and assigned values to make the three-dimensional mesh model more complete.
Further, the first patch may be represented as a three-dimensional point set P(x′, y′, z′), where the texture information of none of the points in the set can be correctly estimated.
Furthermore, because the first patches lack texture information, a virtual point location can be determined for each first patch from its spatial parameter information. The first virtual point location is a viewpoint in the three-dimensional mesh model; from the coordinate position of the first virtual point location, most patches in the three-dimensional mesh model can be well observed, where the first patch lacking texture information corresponds to the first virtual point location.
It can be understood that, if one of the candidate virtual point locations can observe all first patches lacking texture information, that virtual point location may be determined as the first virtual point location, i.e. this single virtual point location can observe all patches with missing texture. If no single virtual point location can observe all first patches with missing texture, a plurality of first virtual point locations need to be determined to ensure that every first patch with missing texture can be observed.
It should be noted that whether the first patches lacking texture information correspond to one or to several first virtual point locations only affects how many times the first panorama is subsequently obtained and the missing texture information is estimated. In this embodiment of the present application, only the case in which the plurality of first patches with missing texture correspond to one first virtual point location is described; the manner of estimating the missing texture information from a plurality of first virtual point locations is omitted.
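The multi-point-location case described above amounts to a set-cover problem: choose viewpoints until every texture-missing patch is observed. The greedy sketch below is an illustration, not the patent's algorithm; the visibility test `visible(point, patch)` is an assumed, application-specific predicate (e.g. a ray cast against the mesh).

```python
def select_virtual_points(missing_patches, candidate_points, visible):
    """Greedily pick candidate viewpoints until every texture-missing
    patch is observed by at least one chosen viewpoint.
    `visible(point, patch)` is an assumed application-specific test."""
    remaining = set(missing_patches)
    chosen = []
    while remaining:
        # pick the candidate observing the most still-uncovered patches
        best = max(candidate_points,
                   key=lambda p: sum(1 for f in remaining if visible(p, f)))
        covered = {f for f in remaining if visible(best, f)}
        if not covered:  # no candidate can see the remaining patches
            break
        chosen.append(best)
        remaining -= covered
    return chosen
```

When one candidate covers every missing patch, the loop terminates after a single iteration, matching the single-point-location case the embodiment describes.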
Step 104: rendering a first panorama of the three-dimensional mesh model based on the first virtual point location, and determining a first area in the first panorama;
wherein the first area corresponds to the first patch;
it should be noted that the first panorama includes a 360-degree panorama corresponding to the three-dimensional mesh model, the panorama may be divided into a plurality of regions, and the plurality of regions in the first panorama and the plurality of patches in the three-dimensional mesh model have a corresponding relationship. Namely, the first area in the first panoramic image has a corresponding relation with the first patch in the three-dimensional mesh model.
Further, the size of the first panorama is constrained as follows: the width of the first panorama corresponds to longitude, with range [0, 2π], and the height corresponds to latitude, with range [−π/2, π/2], where π ≈ 3.1415926.
Further, the coordinates of the first area in the first panorama are calculated as follows:

P_y = arcsin(P_z′ / R);

P_x = arcsin(P_x′ / (R × cos(P_y)));

where R is the radius of the first panorama when drawn, (P_x′, P_y′, P_z′) are the coordinate values of each point of the first patch, and (P_x, P_y) are the coordinate values of the corresponding point in the first area.
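The two projection formulas above can be transcribed directly; this minimal sketch (the function name is hypothetical) maps a patch point's coordinates, taken relative to the virtual viewpoint, to panorama longitude/latitude:

```python
import math

def project_to_panorama(px, pz, R):
    """Map a patch point onto panorama coordinates using the
    formulas above: P_y = arcsin(P_z'/R),
    P_x = arcsin(P_x'/(R*cos(P_y))). R is the drawing radius."""
    lat = math.asin(pz / R)                     # P_y: latitude from height
    lon = math.asin(px / (R * math.cos(lat)))   # P_x: longitude
    return lon, lat
```

Note that arcsin requires its argument to lie in [−1, 1], so the point coordinates must not exceed the drawing radius R.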
It can be understood that obtaining the first panorama of the three-dimensional mesh model based on the first virtual point location allows the position information of the first patch to be reflected more accurately, and also reveals the holes that the first patches lacking texture information leave in the image of the target scene.
Step 106: determining target texture information according to the first panorama and the first area, and assigning the target texture information to the first patch.
Specifically, the target texture information includes depth data and color data.
It can be understood that the target texture information can be determined by analysing the first panorama rendered from the first virtual point location together with the first area determined among the plurality of areas of the first panorama. The target texture information is used to assign a value to the first patch in the three-dimensional mesh model so as to fill the hole caused by the missing texture information, making the displayed three-dimensional mesh model more complete and attractive. Illustratively, as shown in fig. 6, in the three-dimensional mesh model of the target scene a partial region (the black region) of the "bed" in the image lacks texture information, and the patch corresponding to that region is taken as the first patch (target patch). A first virtual point location is constructed based on the first patch, and a 360-degree first panorama of the three-dimensional mesh model of the target scene is re-rendered at the coordinate position of the first virtual point location; the first panorama includes the "bed" of fig. 6, and the black area in the first panorama is the first area corresponding to the first patch.
Further exemplarily, the first area (black area) corresponding to the first patch is determined in the re-rendered first panorama, the target texture information of the first patch is estimated from the first panorama, the first area and other information, and the estimated target texture information is filled back into the first patch of the three-dimensional mesh model of the target scene. The target texture information fills the black hole area of fig. 6, with the filling effect as shown in fig. 6; filling the region where texture information was missing makes the three-dimensional mesh model more complete and attractive.
According to the three-dimensional reconstruction method provided by the embodiment of the application, the first virtual point location is constructed for the first patch lacking texture information, the first panorama is re-rendered, and the corresponding target texture information is estimated and filled into the first patch of the three-dimensional mesh model. This method effectively supplements the texture information missing from the three-dimensional mesh model and ensures the completeness of the texture information in the three-dimensional mesh model of the target scene.
In some embodiments of the present application, the step 102 of determining the first virtual point location specifically includes:
Step 102a: determining a first virtual point location corresponding to the first patch according to the spatial parameters of the first patch.
In the embodiment of the application, the space parameters of the first patch in the three-dimensional grid model are determined, and the space parameters comprise three-dimensional coordinate parameters of the first patch in the three-dimensional grid model.
Specifically, according to the spatial parameters of the first patch in the three-dimensional mesh model, the spatial coordinate parameters of the first virtual point in the three-dimensional mesh model are calculated.
Further, the three-dimensional coordinate value of the center point of the first patch in the three-dimensional mesh model may be determined, and the spatial coordinate parameter of the first virtual point location corresponding to the first patch may be calculated from that coordinate value. Determining the virtual point location from the spatial parameters of the first patch guarantees the accuracy of its positioning, and hence the display effect of the first panorama rendered from the first virtual point location.
In some embodiments of the present application, the step 102a may further include:
step 1021, determining a first virtual point location according to the normal vector of the first patch;
and an included angle between a vector from the center of the first patch to the first virtual point and a normal vector of the first patch is smaller than 90 degrees.
In the above embodiment, the normal vector of the first patch may be determined according to the three-dimensional parameter of the first patch in the three-dimensional mesh model, and then the spatial coordinate parameter of the first virtual point location is calculated according to the normal vector of the first patch.
Furthermore, an included angle between a vector determined by the coordinate of the center point of the first patch and the coordinate of the first virtual point position and a normal vector of the first patch is less than 90 degrees.
It can be understood that constraining the included angle between the vector determined by the center point coordinate of the first patch and the first virtual point location coordinate and the normal vector of the first patch enables the first virtual point location to render the first panorama from a more accurate viewing angle.
Specifically, the position point of the first virtual point location is not unique, and the first virtual point location may be located on an extension line of a normal vector of a center point of the first patch, or on extension lines of normal vectors of other points on the first patch, and as long as an included angle between a vector from the center of the first patch to the first virtual point location and the normal vector of the first patch is smaller than 90 °, the first virtual point location may be used.
Further, the coordinates of the first virtual point location are calculated as follows:

V_x = D × avg(N_x′) + P_x′;

V_y = D × avg(N_y′) + P_y′;

V_z = D × avg(N_z′) + P_z′;

where (V_x, V_y, V_z) are the coordinate values of the first virtual point location, (P_x′, P_y′, P_z′) are the coordinate values of the center point of the first patch, (N_x′, N_y′, N_z′) are the coordinate values of a point on the normal vector of the first patch, and D is the spatial distance between the first virtual point location and the center point of the first patch.
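These formulas place the viewpoint at distance D along the averaged patch normal; a short sketch follows. The helper name and the representation of the normal as a list of per-point normal vectors are illustrative assumptions:

```python
def virtual_point(center, normals, D):
    """Compute the first virtual point location per
    V = D * avg(N) + P, where `center` is the patch center
    (P_x', P_y', P_z') and `normals` is a list of per-point
    normal vectors whose components are averaged."""
    n = len(normals)
    avg = tuple(sum(v[i] for v in normals) / n for i in range(3))
    return tuple(D * avg[i] + center[i] for i in range(3))
```

With unit normals, D directly controls how far in front of the patch the panorama is rendered, which keeps the viewing-angle constraint (angle below 90°) satisfied by construction.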
Further, the first virtual point location may be a virtual point location corresponding to the largest number of first patches in the above range.
According to the three-dimensional reconstruction method provided by the embodiment of the application, the first patch with missing texture is found in the three-dimensional mesh model of the target scene, its normal vector is determined, and the coordinate value of the corresponding first virtual point location is calculated from that normal vector; the virtual point location corresponding to the first patch is thereby determined, ensuring the accuracy of the virtual point location's spatial parameters.
In some embodiments of the present application, fig. 2 shows a second flowchart of a three-dimensional reconstruction method provided in the embodiments of the present application, and as shown in fig. 2, the three-dimensional reconstruction method specifically includes:
step 202: matching texture information for a plurality of patches in the three-dimensional mesh model, and determining a first virtual point location under the condition that a first patch in the three-dimensional mesh model is not matched with corresponding texture information;
step 204: rendering a first panorama of the three-dimensional grid model based on the first virtual point location, and determining a first area in the first panorama, wherein the first area corresponds to the first panel;
Step 206: rendering the first area to a target color, and generating a mask image in the area corresponding to the target color;

Step 208: determining target texture information according to the rendering result of the first panorama and the mask image;

Step 210: assigning the target texture information to the first patch.
Specifically, a first area is determined in the first panorama, where the first area corresponds to the first patch; the first area may correspond to a plurality of first patches.
Further, the color of the first area is rendered as a target color, such as pure black, pure green, or other pure colors.
Further, the mask image may define contour shape information of the target texture information.
For example, as shown in fig. 3, the three-dimensional reconstruction method provided by the present application may render the first area lacking texture information in the 360-degree first panorama as black, and generate the mask image corresponding to fig. 3 from the black area corresponding to the "gate" in fig. 3; the mask image is shown in fig. 4 and determines the edge contour of the target texture information. After the target texture information is estimated, it is filled onto the first patch corresponding to the black area in fig. 3, so that the "gate" with missing texture is completed; the effect is shown in fig. 5, making the content of the three-dimensional mesh model more complete.
According to the three-dimensional reconstruction method provided by the embodiment of the application, the color of the first area is rendered into the target color, and the mask image is generated in the area rendered into the target color, so that the texture information of the texture missing area can be conveniently estimated, and the accuracy of the target texture information is ensured.
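The mask-generation step can be sketched as follows, assuming the panorama is an H×W×3 array and pure black is used as the target color (the helper is a hypothetical illustration, not the patent's implementation):

```python
import numpy as np

def make_mask(panorama, target_color=(0, 0, 0)):
    """Build a binary mask from a panorama in which the
    texture-missing first area has been rendered in a known
    target color (pure black by default). The mask is 1 where
    texture must be estimated and 0 elsewhere."""
    mask = np.all(panorama == np.array(target_color), axis=-1)
    return mask.astype(np.uint8)
```

In practice a distinctive pure color (e.g. pure green) avoids false positives when the scene itself contains black pixels, which is why the text mentions several target-color choices.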
In some embodiments of the present application, step 204 in the three-dimensional reconstruction method provided in the embodiments of the present application may specifically include:
step 204 a: the rendering result of the first panorama and the parameter information of the mask image are input to a first prediction model to output target texture information through the first prediction model.
In the above embodiment, the rendering result of the first panorama and the parameter information of the mask image are input to the first prediction model, and the target texture information is obtained after the first prediction model's operation.
Further, the mask image includes contour shape information, and an edge contour of the target texture information can be determined.
Specifically, the first prediction model may be a deep learning model, for example a model with an encoder-decoder architecture. The rendering result of the panorama and the mask image are input into the first prediction model, which after its operation outputs texture information; this texture information may be the target texture information suitable for assigning a value to the first patch.
Illustratively, according to the data of fig. 6, a corresponding mask image is generated; the mask image determines the edge contour of the target texture information. The first panorama and the mask image are input to the first prediction model, which outputs the target texture information through its operation. The contour of the target texture information conforms to the black area in fig. 6, and the target texture information is filled onto the first patch corresponding to the black area, so that the missing texture at the bed edge is completed; the effect is shown in fig. 7, making the content of the three-dimensional mesh model more complete.
According to the three-dimensional reconstruction method, the target texture information is determined by using the prediction model, the accuracy of the estimated target texture information is guaranteed, and therefore the display integrity of the three-dimensional grid model is guaranteed, and the information parameters of the three-dimensional grid model are enriched.
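The interface of the first prediction model can be illustrated with a runnable stand-in. The patent specifies only a deep encoder-decoder network, so a trivial mean-color fill is substituted here purely to show the inputs and outputs; a real system would use a trained inpainting network:

```python
import numpy as np

def predict_texture(panorama, mask):
    """Stand-in for the first prediction model: takes the rendered
    panorama and the binary mask (1 = texture missing) and returns
    the panorama with masked pixels filled. Here the fill is the
    mean color of the unmasked pixels, an assumption made solely
    to keep the sketch runnable."""
    out = panorama.astype(np.float64).copy()
    known = mask == 0
    mean_color = out[known].mean(axis=0)   # average over valid pixels
    out[mask == 1] = mean_color
    return out.astype(panorama.dtype)
```

The mask image thus plays the same role as in learned inpainting: it tells the model which pixels carry trustworthy texture and which must be synthesised.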
In some embodiments of the present application, before step 102 in the three-dimensional reconstruction method provided herein, the three-dimensional reconstruction method further includes:
step 101: acquiring a plurality of depth maps of a shooting scene and a plurality of color maps corresponding to the plurality of depth maps, and constructing a three-dimensional grid model according to the plurality of depth maps and the plurality of color maps.
In the embodiment of the application, a plurality of depth maps of a shooting scene and a plurality of color maps corresponding to the plurality of depth maps are obtained, and the three-dimensional mesh model is constructed from them. In one possible implementation, constructing a three-dimensional mesh model of the target scene requires a plurality of color two-dimensional images and depth images of the target scene. After the depth maps and color maps shot in the target scene are obtained, point location estimation of the RGBD multi-point data is performed on them, a three-dimensional point cloud of the target scene is generated according to the obtained point locations, a mesh of the target scene is then generated from the three-dimensional point cloud data, texture information is estimated, and finally the three-dimensional mesh model of the target scene is constructed.
A three-dimensional mesh model of the target scene is constructed from the depth map and the color two-dimensional map.
The three-dimensional reconstruction method provided by the embodiment of the application constructs the three-dimensional grid model according to the multiple depth maps and the multiple color maps, and the accuracy and the integrity of the three-dimensional grid model are guaranteed.
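The point location estimation step above begins by back-projecting each depth pixel into 3-D camera space. A minimal numpy sketch; the pinhole intrinsics (fx, fy, cx, cy) and the function name are illustrative assumptions, not part of the patent:

```python
import numpy as np

def depth_to_points(depth: np.ndarray, fx: float, fy: float,
                    cx: float, cy: float) -> np.ndarray:
    """Back-project a depth map (metres) into camera-space 3-D points
    with the pinhole model: x = (u - cx) * z / fx, y = (v - cy) * z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]          # drop pixels with no depth reading

depth = np.full((4, 4), 2.0)           # flat wall 2 m away
depth[0, 0] = 0.0                      # one invalid pixel
pts = depth_to_points(depth, fx=500.0, fy=500.0, cx=2.0, cy=2.0)
```

Merging the per-view point clouds (using the camera poses) and meshing them would complete the construction of the three-dimensional mesh model.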
In some embodiments of the present application, fig. 8 shows a third flowchart of a three-dimensional reconstruction method provided in the embodiments of the present application, and as shown in fig. 8, the three-dimensional reconstruction method specifically includes:
step 802: obtaining a plurality of patches in a three-dimensional grid model;
step 804: matching, through a texture estimation algorithm, the texture information corresponding to each of the plurality of patches;
step 806: rendering a first panorama of the three-dimensional mesh model based on the first virtual point location, and determining a first area in the first panorama, wherein the first area corresponds to the first patch;
step 808: determining target texture information according to the first panorama and the first area, and assigning the target texture information to the first patch.
In the embodiment of the application, the three-dimensional mesh model internally comprises a plurality of patches, and the texture information of each patch can be calculated through a texture estimation algorithm.
In one possible implementation, the texture estimation algorithm may be a Markov random field algorithm. The Markov random field algorithm requires setting a data item and a smoothing item, and the calculation formula of the data item is as follows:
Data(F,v)=S/|dt-d|;
wherein v is the first patch, S is the area of the first patch projected onto the two-dimensional view, dt is the depth parameter of the first patch projected onto the two-dimensional view, and d is the depth parameter of the first patch in the corresponding depth map.
The formula for the smoothing term is:
Smooth(Ik, Ip) = 1, if Ik = Ip; 0, otherwise;
wherein Ik is the binary semantic label of the first patch and Ip is the binary semantic label of a patch adjacent to the first patch; the smoothing item is set to 1 if the semantics of the two adjacent patches are the same, and to 0 otherwise.
The formula for the markov random field is calculated as:
E(v) = Σ Data(F, v) + Σ Smooth(Ik, Ip);
from the data items and the smoothing items, texture information e (v) of the first patch is calculated.
According to the three-dimensional reconstruction method, the texture information of each patch in the three-dimensional grid model is calculated through the texture estimation algorithm, so that the first patch lacking the texture information can be confirmed, the texture information is filled in the first patch, and the display integrity of the three-dimensional grid model is guaranteed.
In some embodiments of the present application, a three-dimensional reconstruction apparatus is provided, and fig. 9 shows a block diagram of the three-dimensional reconstruction apparatus provided in the embodiments of the present application, and as shown in fig. 9, a three-dimensional reconstruction apparatus 900 includes:
a processing unit 902, configured to match texture information for multiple patches in a three-dimensional mesh model, and determine a first virtual point location when a first patch in the three-dimensional mesh model is not matched with corresponding texture information;
the processing unit 902 is further configured to render a first panorama of the three-dimensional mesh model based on the first virtual point location, and determine a first area in the first panorama, where the first area corresponds to the first patch;
the processing unit 902 is further configured to determine target texture information according to the first panorama and the first area, and assign the target texture information to the first patch.
Specifically, in the process of constructing the three-dimensional mesh model of the target scene, the three-dimensional mesh model includes a plurality of patches. After confirming all patches inside the three-dimensional mesh model, the processing unit 902 confirms texture information for each patch, sets a patch lacking texture information as the first patch, and constructs a first virtual point location based on the coordinate parameters of the first patch; the processing unit 902 re-renders the three-dimensional mesh model at the position of the first virtual point location to obtain a first panorama, and determines a first area corresponding to the first patch in the first panorama; the processing unit 902 then estimates the target texture information of the first patch according to the first panorama and the first area, and assigns the target texture information to the first patch.
The three-dimensional reconstruction device provided by the embodiment of the application constructs virtual point positions for the patches lacking texture information through the processing unit, renders the panoramic image again, and estimates the corresponding texture information to fill the texture information into the three-dimensional mesh model. The texture information lacking in the three-dimensional grid model is effectively supplemented, and the integrity of the texture information in the three-dimensional grid model of the target scene is ensured.
In some embodiments of the present application, the processing unit 902 is further configured to determine a first virtual point location corresponding to the first patch according to the spatial parameter of the first patch.
The three-dimensional reconstruction device provided by the embodiment of the application determines the first virtual point position corresponding to the first patch through the processing unit according to the space parameters of the first patch, and ensures the accuracy of the virtual point position.
In some embodiments of the present application, the processing unit 902 is further configured to determine a first virtual point location according to a normal vector of the first patch; and an included angle between a vector from the center of the first patch to the first virtual point and a normal vector of the first patch is smaller than 90 degrees.
The three-dimensional reconstruction device provided by the embodiment of the application determines the virtual point position corresponding to the first patch through the processing unit according to the normal vector of the first patch, guarantees the accuracy of the spatial parameters of the virtual point position, and further guarantees the display effect of the first panorama obtained by rendering the first virtual point position.
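The constraint that the angle between the patch-centre-to-viewpoint vector and the patch normal is below 90° reduces to a positive dot product. A minimal sketch; the function names and the step distance `dist` are illustrative assumptions:

```python
import numpy as np

def make_virtual_point(center: np.ndarray, normal: np.ndarray,
                       dist: float = 1.0) -> np.ndarray:
    """Place the first virtual point location in front of the patch by
    stepping along its unit normal, so the patch faces the viewpoint."""
    n = normal / np.linalg.norm(normal)
    return center + dist * n

def faces_patch(center: np.ndarray, normal: np.ndarray,
                point: np.ndarray) -> bool:
    """The angle between (point - center) and the normal is < 90 degrees
    exactly when their dot product is positive."""
    return float(np.dot(point - center, normal)) > 0.0

c = np.array([0.0, 0.0, 0.0])          # patch centre
n = np.array([0.0, 0.0, 1.0])          # patch normal
vp = make_virtual_point(c, n, dist=2.0)
```

Stepping along the normal trivially satisfies the angle condition; `faces_patch` can also validate an arbitrary candidate viewpoint.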
In some embodiments of the present application, the processing unit 902 is further configured to render the first region as a target color, and generate a mask image for a region corresponding to the target color; the processing unit 902 is further configured to determine target texture information according to the rendering result of the first panorama and the mask image.
The three-dimensional reconstruction device provided by the embodiment of the application renders the color of the first area into the target color through the processing unit, generates the mask image, estimates the texture information of the texture missing area, and ensures the accuracy of the target texture information.
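Generating the mask from the target-colour rendering can be sketched as an exact colour match; the magenta target colour and the uint8 layout are illustrative assumptions:

```python
import numpy as np

def mask_from_target_color(render: np.ndarray, target: tuple) -> np.ndarray:
    """Binary mask that is 1 where the rendered panorama equals the target
    colour used to paint the first (texture-missing) region."""
    return np.all(render == np.asarray(target, dtype=render.dtype),
                  axis=-1).astype(np.uint8)

img = np.zeros((4, 4, 3), dtype=np.uint8)
img[1:3, 1:3] = (255, 0, 255)          # first region painted magenta
mask = mask_from_target_color(img, (255, 0, 255))
```

Choosing a colour that cannot appear in real scene texture keeps the exact match reliable; the resulting mask and panorama are then fed to the prediction model.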
In some embodiments of the present application, the processing unit 902 is further configured to input the rendering result of the first panorama and parameter information of the mask image to the first prediction model to output the target texture information through the first prediction model.
The three-dimensional reconstruction device provided by the embodiment of the application determines the texture information by using the prediction model through the processing unit, ensures the accuracy of the estimated target texture information, further ensures the display integrity of the three-dimensional grid model, and enriches the information parameters of the three-dimensional grid model.
In some embodiments of the present application, the processing unit 902 is further configured to obtain a plurality of depth maps of the shooting scene and a plurality of color maps corresponding to the plurality of depth maps, and construct a three-dimensional mesh model according to the plurality of depth maps and the plurality of color maps.
According to the three-dimensional reconstruction device, the processing unit is used for constructing the three-dimensional grid model according to the multiple depth maps and the multiple color maps, and the accuracy and the integrity of the three-dimensional grid model are guaranteed.
In some embodiments of the present application, the three-dimensional reconstruction apparatus 900 includes an obtaining unit, configured to obtain a plurality of patches in a three-dimensional mesh model;
the processing unit 902 is further configured to match, by using a texture estimation algorithm, the corresponding texture information for each of the plurality of patches.
The three-dimensional reconstruction device provided by the embodiment of the application calculates the texture information of each patch in the three-dimensional mesh model by using a texture estimation algorithm through the processing unit, and confirms the target patch lacking the texture information.
The three-dimensional reconstruction apparatus in the embodiment of the present application may be an electronic device, or may be a component in an electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal, or may be a device other than a terminal. The electronic device may be, for example, a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a Mobile Internet Device (MID), an Augmented Reality (AR)/Virtual Reality (VR) device, a robot, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a Personal Digital Assistant (PDA), and may also be a server, a Network Attached Storage (NAS), a personal computer (PC), a television (TV), a teller machine, a self-service machine, and the like; the embodiments of the present application are not particularly limited.
The three-dimensional reconstruction apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or other possible operating systems, which is not specifically limited in the embodiment of the present application.
The three-dimensional reconstruction device provided by the embodiment of the application can implement each process implemented by the method embodiment, and is not repeated here to avoid repetition.
Optionally, an embodiment of the present application further provides an electronic device. Fig. 10 shows a structural block diagram of the electronic device according to the embodiment of the present application. As shown in fig. 10, an electronic device 1000 includes a processor 1002 and a memory 1004; the memory 1004 stores a program or instructions executable on the processor 1002, and when the program or instructions are executed by the processor 1002, the steps of the above method embodiments are implemented with the same technical effects, which are not repeated here to avoid repetition.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic device and the non-mobile electronic device described above.
Fig. 11 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 1100 includes, but is not limited to: radio frequency unit 1101, network module 1102, audio output unit 1103, input unit 1104, sensor 1105, display unit 1106, user input unit 1107, interface unit 1108, memory 1109, processor 1110, and the like.
Those skilled in the art will appreciate that the electronic device 1100 may further include a power source (e.g., a battery) for supplying power to the various components; the power source may be logically connected to the processor 1110 via a power management system, so that charging, discharging, and power consumption management are handled by the power management system. The electronic device structure shown in fig. 11 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than those shown, combine certain components, or arrange the components differently, which is not repeated here.
The processor 1110 is configured to match texture information for a plurality of patches in the three-dimensional mesh model, and determine a first virtual point location when a first patch in the three-dimensional mesh model is not matched with corresponding texture information;
the processor 1110 is further configured to render a first panorama of the three-dimensional mesh model based on the first virtual point location, and determine a first area in the first panorama, where the first area corresponds to the first patch;
the processor 1110 is further configured to determine target texture information according to the first panorama and the first area, and assign the target texture information to the first patch.
The electronic equipment provided by the embodiment of the application constructs virtual point positions for the patches lacking texture information, re-renders the panoramic image, estimates the corresponding texture information and fills the texture information into the three-dimensional grid model. The texture information lacking in the three-dimensional grid model is effectively supplemented, and the integrity of the texture information in the three-dimensional grid model of the target scene is ensured.
Further, the processor 1110 is configured to determine a first virtual point location corresponding to the first patch according to the spatial parameters of the first patch.
The electronic equipment provided by the embodiment of the application determines the virtual point location corresponding to the first patch according to the spatial parameters of the first patch, which ensures the accuracy of the virtual point location.
Further, the processor 1110 is configured to determine a first virtual point location according to a normal vector of the first patch;
and an included angle between a vector from the center of the first patch to the first virtual point and a normal vector of the first patch is smaller than 90 degrees.
The electronic device provided by the embodiment of the application determines the virtual point position corresponding to the first patch according to the normal vector of the first patch. The accuracy of the virtual point location space parameters is ensured.
Further, the processor 1110 is configured to render the first region as a target color, and generate a mask image for a region corresponding to the target color; and determining target texture information according to the rendering result of the first panoramic image and the mask image.
The electronic device provided by the embodiment of the application renders the color of the first area into the target color, generates the mask image, and estimates the texture information of the texture-missing area.
Further, the processor 1110 is configured to input the rendering result of the first panorama and the parameter information of the mask image to the first prediction model to output the target texture information through the first prediction model.
The electronic device provided by the embodiment of the application determines the texture information by using the prediction model. The accuracy of estimating the texture image is guaranteed.
Further, the processor 1110 is configured to obtain multiple depth maps of the shooting scene and multiple color maps corresponding to the multiple depth maps, and construct a three-dimensional mesh model according to the multiple depth maps and the multiple color maps.
The electronic equipment provided by the embodiment of the application builds the three-dimensional grid model according to the multiple depth maps and the multiple color maps, and the accuracy and the integrity of the three-dimensional grid model are guaranteed.
Further, the processor 1110 is configured to determine a third patch corresponding to a boundary of an invisible area in the three-dimensional mesh model, and fill the texture color of a fourth patch into the third patch, where the fourth patch is a patch adjacent to the third patch and located outside the boundary of the invisible area; the processor 1110 is further configured to obtain a plurality of patches in the three-dimensional mesh model, and to match, through a texture estimation algorithm, the texture information corresponding to each of the plurality of patches.
The electronic device provided by the embodiment of the application uses a texture estimation algorithm to calculate the texture information of each patch in the three-dimensional mesh model and confirms the target patch lacking the texture information.
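The third/fourth-patch fill described above can be sketched as neighbour-colour propagation over a patch adjacency graph; the dictionary-based representation of patch textures and adjacency is an illustrative assumption:

```python
def fill_boundary_patches(textures: dict, boundary: set, adjacency: dict) -> dict:
    """For each third patch (on the invisible-area boundary), copy the
    texture colour of an adjacent fourth patch that lies outside the
    boundary and already has a colour."""
    filled = dict(textures)
    for p in boundary:
        for q in adjacency.get(p, []):
            if q not in boundary and q in filled:
                filled[p] = filled[q]   # propagate the neighbour colour inward
                break
    return filled

tex = {0: (200, 180, 160)}              # visible patch with a known colour
adj = {1: [0, 2], 2: [1]}               # patch 1 borders visible patch 0
out = fill_boundary_patches(tex, boundary={1, 2}, adjacency=adj)
```

Patches whose neighbours are all inside the boundary (patch 2 here) stay unfilled after one pass; iterating the propagation would push colours deeper into the invisible area.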
It should be understood that in the embodiment of the present application, the input Unit 1104 may include a Graphics Processing Unit (GPU) 11041 and a microphone 11042, and the Graphics processor 11041 processes image data of still pictures or video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 1106 may include a display panel 11061, and the display panel 11061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 1107 includes at least one of a touch panel 11071 and other input devices 11072. A touch panel 11071, also called a touch screen. The touch panel 11071 may include two portions of a touch detection device and a touch controller. Other input devices 11072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
The memory 1109 may be used to store software programs as well as various data. The memory 1109 may mainly include a first storage area storing programs or instructions and a second storage area storing data, wherein the first storage area may store an operating system, an application program or instructions required for at least one function (such as a sound playing function or an image playing function), and the like. Further, the memory 1109 may include volatile memory or non-volatile memory, or both. The non-volatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be a Random Access Memory (RAM), a Static RAM (SRAM), a Dynamic RAM (DRAM), a Synchronous DRAM (SDRAM), a Double Data Rate SDRAM (DDR SDRAM), an Enhanced Synchronous SDRAM (ESDRAM), a Synchronous Link DRAM (SLDRAM), or a Direct Rambus RAM (DRRAM). The memory 1109 in the embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
Processor 1110 may include one or more processing units; optionally, the processor 1110 integrates an application processor, which primarily handles operations related to the operating system, user interface, and applications, and a modem processor, which primarily handles wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into processor 1110.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the above three-dimensional reconstruction method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device in the above embodiment. Readable storage media, including computer readable storage media such as computer read only memory ROM, random access memory RAM, magnetic or optical disks, and the like.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the above three-dimensional reconstruction method embodiment, and the same technical effect can be achieved.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
Embodiments of the present application provide a computer program product, where the program product is stored in a storage medium, and the program product is executed by at least one processor to implement the processes of the foregoing three-dimensional reconstruction method embodiments, and can achieve the same technical effects, and in order to avoid repetition, details are not repeated here.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
Through the description of the foregoing embodiments, it is clear to those skilled in the art that the method of the foregoing embodiments may be implemented by software plus a necessary general hardware platform, and certainly may also be implemented by hardware, but in many cases, the former is a better implementation. Based on such understanding, the technical solutions of the present application may be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the method of the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A three-dimensional reconstruction method, characterized in that it comprises:
matching texture information for a plurality of patches in a three-dimensional mesh model, and determining a first virtual point location under the condition that a first patch in the three-dimensional mesh model is not matched with corresponding texture information;
rendering a first panorama of the three-dimensional mesh model based on the first virtual point location, and determining a first area in the first panorama, the first area corresponding to the first patch;
and determining target texture information according to the first panorama and the first area, and assigning the target texture information to the first patch.
2. The three-dimensional reconstruction method according to claim 1, wherein the step of determining the first virtual point location specifically includes:
and determining the first virtual point position corresponding to the first patch according to the spatial parameters of the first patch.
3. The three-dimensional reconstruction method according to claim 2, wherein the spatial parameters include normal vectors, and the step of determining the first virtual point location corresponding to the first patch according to the spatial parameters of the first patch specifically includes:
determining the first virtual point location according to the normal vector of the first patch;
wherein an included angle between a vector from the center of the first patch to the first virtual point location and a normal vector of the first patch is less than 90 °.
4. The three-dimensional reconstruction method according to claim 1, wherein the step of determining the target texture information from the first panorama and the first region specifically comprises:
rendering the first area to be a target color, and generating a mask image in an area corresponding to the target color;
and determining the target texture information according to the rendering result of the first panoramic image and the mask image.
5. The three-dimensional reconstruction method according to claim 4, wherein the step of determining the target texture information according to the rendering result of the first panorama and the mask image specifically comprises:
inputting a rendering result of the first panorama and parameter information of the mask image to a first prediction model to output the target texture information through the first prediction model.
6. The three-dimensional reconstruction method of any one of claims 1 to 5, wherein prior to said matching texture information for a plurality of patches in a three-dimensional mesh model, the three-dimensional reconstruction method further comprises:
acquiring a plurality of depth maps of a shooting scene and a plurality of color maps corresponding to the plurality of depth maps, and constructing the three-dimensional grid model according to the plurality of depth maps and the plurality of color maps.
7. The three-dimensional reconstruction method according to any one of claims 1 to 5, wherein the step of matching texture information to a plurality of patches in the three-dimensional mesh model specifically comprises:
obtaining a plurality of patches in the three-dimensional grid model;
and matching, through a texture estimation algorithm, the texture information corresponding to each of the plurality of patches.
8. A three-dimensional reconstruction apparatus, characterized in that it comprises:
the processing unit is used for matching texture information for a plurality of patches in a three-dimensional mesh model, and determining a first virtual point location under the condition that a first patch in the three-dimensional mesh model is not matched with corresponding texture information;
the processing unit is further configured to render a first panorama of the three-dimensional mesh model based on the first virtual point location, and determine a first area in the first panorama, where the first area corresponds to the first patch;
the processing unit is further configured to determine target texture information according to the first panorama and the first region, and assign the target texture information to the first patch.
9. An electronic device, comprising:
a memory having a program or instructions stored thereon;
a processor for implementing the steps of the three-dimensional reconstruction method according to any one of claims 1 to 7 when executing the program or instructions.
10. A readable storage medium on which a program or instructions are stored, characterized in that said program or instructions, when executed by a processor, implement the steps of the three-dimensional reconstruction method according to any one of claims 1 to 7.
CN202210439570.5A 2022-04-25 2022-04-25 Three-dimensional reconstruction method and device, electronic equipment and readable storage medium Pending CN114820980A (en)

Publications (1)

Publication Number Publication Date
CN114820980A true CN114820980A (en) 2022-07-29

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116934984A (en) * 2023-09-19 2023-10-24 成都中轨轨道设备有限公司 Intelligent terminal and method for constructing virtual panoramic scene space
CN116934984B (en) * 2023-09-19 2023-12-08 成都中轨轨道设备有限公司 Intelligent terminal and method for constructing virtual panoramic scene space


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination