CN115439331B - Corner correction method, and method and device for generating a three-dimensional model in the metaverse


Info

Publication number
CN115439331B
CN115439331B (application CN202211075976.6A)
Authority
CN
China
Prior art keywords
corner
target object
points
position information
point
Prior art date
Legal status
Active
Application number
CN202211075976.6A
Other languages
Chinese (zh)
Other versions
CN115439331A (en)
Inventor
王海君 (Wang Haijun)
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202211075976.6A
Publication of CN115439331A
Application granted granted Critical
Publication of CN115439331B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformation in the plane of the image
    • G06T3/60 Rotation of a whole image or part thereof
    • G06T3/608 Skewing or deskewing, e.g. by two-pass or three-pass rotation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • G06V10/763 Non-hierarchical techniques, e.g. based on statistics of modelling distributions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20112 Image segmentation details
    • G06T2207/20164 Salient point detection; Corner detection

Abstract

The disclosure provides a corner correction method, and a method and device for generating a three-dimensional model in the metaverse. It relates to the field of artificial intelligence, and in particular to the technical fields of virtual reality, augmented reality, the metaverse, computer vision, and deep learning. The corner correction method is implemented as follows: determining, according to the position information of the target object corner included in each of at least two images, the position information of the spatial point corresponding to that corner, thereby obtaining at least two pieces of spatial position information; clustering the target object corners in the at least two images according to the at least two pieces of spatial position information to obtain at least one corner group; determining the reference position information of the target object corners in each corner group according to the position information of the spatial points corresponding to the corners in that group; and correcting the positions of the target object corners in the images according to the reference position information of the corners.

Description

Corner correction method, and method and device for generating a three-dimensional model in the metaverse
Technical Field
The disclosure relates to the field of artificial intelligence, in particular to the technical fields of virtual reality, augmented reality, the metaverse, computer vision, and deep learning, and especially to a corner correction method, a three-dimensional model generation method, a device, equipment, and a medium.
Background
When a scene is too large or contains too many occlusions, a single image cannot represent the complete scene information. A complete three-dimensional model of a scene can typically be obtained by stitching the point cloud data of multiple images. For a single target object in the scene, because its three-dimensional model is stitched together from the point cloud data of multiple images and the different images are acquired from different poses, different parts of the model may be misaligned.
Disclosure of Invention
The disclosure aims to provide a corner correction method that solves this misalignment problem, together with a three-dimensional model generation method, device, equipment, and medium.
According to one aspect of the present disclosure, there is provided a corner correction method, including: determining, according to the position information of the target object corner included in each of at least two images, the position information of the spatial point corresponding to that corner, thereby obtaining at least two pieces of spatial position information; clustering the target object corners in the at least two images according to the at least two pieces of spatial position information to obtain at least one corner group; determining the reference position information of the target object corners in each corner group according to the position information of the spatial points corresponding to the corners in that group; and correcting the positions of the target object corners in the images according to the reference position information of the corners.
According to another aspect of the present disclosure, there is provided a three-dimensional model generation method, including: determining the point cloud data corresponding to each of at least two panoramic images including a target object; and aggregating the point cloud data corresponding to the at least two panoramic images to obtain a three-dimensional scene model including the target object, where the target object corners in each panoramic image are corners corrected by the corner correction method provided by the present disclosure.
According to another aspect of the present disclosure, there is provided a corner correction device, including: a position information determining module configured to determine, according to the position information of the target object corner included in each of at least two images, the position information of the spatial point corresponding to that corner, thereby obtaining at least two pieces of spatial position information; a corner clustering module configured to cluster the target object corners in the at least two images according to the at least two pieces of spatial position information to obtain at least one corner group; a reference position determining module configured to determine the reference position information of the target object corners in each corner group according to the position information of the spatial points corresponding to the corners in that group; and a position correction module configured to correct the positions of the target object corners in the images according to the reference position information of the corners.
According to another aspect of the present disclosure, there is provided a three-dimensional model generation device, including: a point cloud data determining module configured to determine the point cloud data corresponding to each of at least two panoramic images including a target object; and a model obtaining module configured to aggregate the point cloud data corresponding to the at least two panoramic images to obtain a three-dimensional scene model including the target object, where the target object corners in each panoramic image are corners corrected by the corner correction device provided by the present disclosure.
According to another aspect of the present disclosure, there is provided an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor. The memory stores instructions executable by the at least one processor, and the instructions, when executed by the at least one processor, enable the at least one processor to execute the corner correction method and/or the three-dimensional model generation method provided by the present disclosure.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to execute the corner correction method and/or the three-dimensional model generation method provided by the present disclosure.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program/instructions which, when executed by a processor, implement the corner correction method and/or the three-dimensional model generation method provided by the present disclosure.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are provided for a better understanding of the present solution and do not constitute a limitation of the present disclosure. Wherein:
FIG. 1 is a schematic diagram of an application scenario of the corner correction method and/or the three-dimensional model generation method and device according to an embodiment of the present disclosure;
FIG. 2 is a flowchart of a corner correction method according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of clustering a plurality of target object corners according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of determining direction features according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of clustering a plurality of target object corners according to position features according to an embodiment of the present disclosure;
FIG. 6 is a flowchart of a three-dimensional model generation method according to an embodiment of the present disclosure;
FIG. 7 is a structural block diagram of a corner correction device according to an embodiment of the present disclosure;
FIG. 8 is a structural block diagram of a three-dimensional model generation device according to an embodiment of the present disclosure; and
FIG. 9 is a block diagram of an electronic device used to implement the corner correction method and/or the three-dimensional model generation method of embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings. Various details of the embodiments are included to facilitate understanding and should be considered merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the present disclosure. Likewise, descriptions of well-known functions and constructions are omitted below for clarity and conciseness.
In three-dimensional reconstruction techniques, three-dimensional layout information of a scene may be generated from a single image. However, due to the limits of perspective, a single image cannot represent complete scene information when the scene is too large or too heavily occluded. To generate a complete three-dimensional model, a three-dimensional model of a local region may be generated for each image; the spatial relationships between the local-region models are then determined from positioning data, and the models are stitched according to these relationships. When stitching three-dimensional models defined in two different coordinate systems, the two models may first be transformed into the same coordinate system according to the transformation relationship between the coordinate systems, and then stitched.
For a single object of larger size in a scene, such as a wall, its three-dimensional model may be obtained by stitching the three-dimensional models of multiple local regions. Because the camera poses corresponding to the images differ, the three-dimensional object obtained by stitching the local-region models may be misaligned. This degrades the fidelity of the three-dimensional scene model and harms the sense of realism experienced by users of virtual reality, augmented reality, and/or metaverse applications.
To resolve such misalignment, the position of the misaligned model is usually adjusted manually, or the geometry that overlaps because of the misalignment is deleted manually. However, this approach depends heavily on human effort, is inefficient, and leaves the final result strongly affected by the operator.
Based on the above, the present disclosure provides a method for correcting the corners of a target object: by correcting the corner positions, misalignment in the stitched three-dimensional model is avoided. Based on this corner correction method, the present disclosure also provides a three-dimensional model generation method.
An application scenario of the method and apparatus provided by the present disclosure will be described below with reference to fig. 1.
Fig. 1 is a schematic diagram of an application scenario of the corner correction method and the three-dimensional model generation method and device according to an embodiment of the disclosure.
As shown in fig. 1, the application scenario 100 of this embodiment may include an electronic device 110, and the electronic device 110 may be various electronic devices with processing functions, including but not limited to a smart phone, a tablet computer, a laptop computer, a desktop computer, a server, and the like.
The electronic device 110 may process an input image 120; specifically, it may correct the corners of a target object such as a wall in the image 120, generate point cloud data from the corner-corrected image, and use the generated point cloud data to represent a three-dimensional model corresponding to the image 120.
In an embodiment, the electronic device 110 may process multiple images acquired for a scene to obtain multiple sets of point cloud data generated from those images. The electronic device 110 may also obtain a three-dimensional model of the scene (i.e., the three-dimensional scene model 130) by aggregating the multiple sets of point cloud data. The multiple images may be images captured by an image acquisition device at different angles and different positions.
In an embodiment, the application scenario 100 may further comprise a server 150. Electronic device 110 may be communicatively coupled to server 150 via a network. The network may include wired or wireless communication links.
For example, the electronic device 110 may only correct the corners of the target object such as the wall in the image 120, and send the corner-corrected image to the server 150 as the corrected image 140. Accordingly, after receiving the corrected image 140, the server 150 may generate point cloud data from it and obtain the three-dimensional scene model 130 by stitching the point cloud data of multiple corrected images.
Illustratively, the electronic device 110 may also send the image 120 directly to the server 150, which processes the image 120 and generates the three-dimensional scene model 130. After obtaining the three-dimensional scene model 130, the server 150 may also send it to the electronic device 110 for presentation.
It is to be appreciated that the server 150 can be a background management server that supports the operation of client applications in the electronic device 110, a virtual server, etc., which is not limited in this disclosure.
It should be noted that the corner correction method and/or the three-dimensional model generation method provided in the present disclosure may be performed by the electronic device 110 or by the server 150. Accordingly, the corner correction device and/or the three-dimensional model generation device provided in the present disclosure may be disposed in the electronic device 110 or in the server 150.
It should be understood that the number and type of electronic devices 110 and servers 150 in fig. 1 are merely illustrative. There may be any number and type of electronic devices 110 and servers 150 as desired for implementation.
The method of correcting the corner points provided by the present disclosure will be described in detail below with reference to fig. 2 to 5.
Fig. 2 is a flow chart of a method of correcting corner points according to an embodiment of the present disclosure.
As shown in fig. 2, the corner correction method 200 of this embodiment may include operations S210 to S240.
In operation S210, the position information of the spatial point corresponding to the target object corner is determined according to the position information of the target object corner included in each of the at least two images, thereby obtaining at least two pieces of spatial position information.
According to an embodiment of the present disclosure, an image acquisition device captures a scene including the target object from different poses, thereby obtaining the at least two images. Alternatively, at least two images may be selected at random from a plurality of images of the target object acquired by the image acquisition device in different poses.
According to an embodiment of the present disclosure, a deep neural network model may be used to process each image to obtain the position information of the target object corners in that image. The target object corners are the points at the corners of the target object. The deep neural network model may be a semantic segmentation model composed of an encoder and a decoder, for example a U-Net or Mask R-CNN model. Specifically, each image may be input into the deep neural network model, which outputs a semantic segmentation result; the pixel positions of the target object corners are then located from the semantic segmentation result and used as their position information.
In an embodiment, the deep neural network model may be a HoHoNet (360 Indoor Holistic Understanding with Latent Horizontal Features) model. This model takes an image as input and outputs the boundaries between adjacent surfaces of the target object. The pixel positions of the target object corners can then be derived from these boundary lines based on solid geometry.
It can be understood that the position information of the target object corners in an image may be detected in real time or detected in advance; the disclosure is not limited in this respect. A target object corner may be, for example, an intersection of three mutually adjacent surfaces of the target object, or an intersection of three mutually adjacent surfaces formed by the target object and other objects. For example, if the target object is a wall, the target object corner is a wall corner. The disclosure is not limited in this respect.
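As a minimal sketch of locating corner pixels, the following Python snippet assumes the segmentation output is available as a binary mask and uses OpenCV's Shi-Tomasi detector as a stand-in for deriving corners from predicted boundary lines; the function name and parameter values are illustrative assumptions, not the pipeline claimed in the patent.

```python
# A hedged sketch: extract candidate corner pixels of the target object from a
# binary segmentation mask. Shi-Tomasi detection stands in for the boundary-line
# based corner localization described above; parameters are illustrative.
import cv2
import numpy as np

def locate_corner_pixels(mask: np.ndarray, max_corners: int = 16):
    """mask: [h, w] uint8 segmentation of the target object (foreground = 255)."""
    corners = cv2.goodFeaturesToTrack(mask, maxCorners=max_corners,
                                      qualityLevel=0.1, minDistance=10)
    if corners is None:
        return []
    # Each detected corner is a candidate target object corner pixel (x, y).
    return [(float(x), float(y)) for x, y in corners.reshape(-1, 2)]
```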
The position information of each target object corner can be converted according to the conversion relationship between the pixel coordinate system and the three-dimensional spatial coordinate system, yielding the position information of the spatial point corresponding to each corner. The three-dimensional spatial coordinate system may be the camera coordinate system, or a rectangular spatial coordinate system taking any spatial point as the coordinate origin; the position information of a spatial point may be represented by its coordinate values in the three-dimensional spatial coordinate system. The disclosure is not limited in this respect.
In an embodiment, the at least two images may be panoramic images, and the position information of each target object corner may be converted according to the conversion relationship between the spherical coordinate system and the three-dimensional spatial coordinate system.
For example, let the pixel position of a target object corner be p_i(x, y) and let the coordinates of the corresponding spatial point be P_i(X, Y, Z). The pixel coordinates can first be transformed into the spherical coordinate system by formulas (1) and (2):
[Formula (1), rendered as an image in the original: conversion of the pixel coordinates (x, y) into spherical angles (θ_x, θ_y).]
r = c_h |cot θ_y|   formula (2)
where c_h is the height of the capture viewpoint of the image acquisition device above the ground, and r is the horizontal distance, in the camera coordinate system, from the spatial point corresponding to the target object corner to the capture viewpoint. r can be understood as the depth of the corner, which can be obtained, for example, by processing each image with a monocular depth estimation model. The image resolution is w × h, with w = 2h.
If the target object corner is a ground pixel, for example, the coordinates of its corresponding spatial point can be computed from trigonometric functions by formula (3):
[Formula (3), rendered as an image in the original: trigonometric computation of the spatial coordinates (X, Y, Z) from r and the spherical angles.]
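To make the conversion concrete, the following Python sketch implements formulas (1) to (3) under assumed equirectangular conventions; because formulas (1) and (3) are rendered as images in the original, the exact angle and axis conventions below are assumptions rather than the patent's published equations.

```python
# A hedged sketch of formulas (1)-(3): converting a panorama corner pixel to a
# 3D spatial point. Angle and axis conventions are assumptions.
import math

def corner_pixel_to_space(x: float, y: float, w: int, h: int, c_h: float):
    """Map pixel (x, y) of a w*h panorama (w = 2h) to spatial point (X, Y, Z).

    c_h is the height of the capture viewpoint above the ground; the corner is
    assumed to be a ground pixel (theta_y > 0), as in formula (3).
    """
    # Formula (1) (assumed convention): pixel -> spherical angles.
    theta_x = (x / w) * 2.0 * math.pi - math.pi      # longitude in [-pi, pi)
    theta_y = (y / h) * math.pi - math.pi / 2.0      # latitude, positive below horizon

    # Formula (2): horizontal distance from the viewpoint to the spatial point.
    r = c_h * abs(1.0 / math.tan(theta_y))

    # Formula (3) (assumed axis convention): trigonometric back-projection of a
    # ground point, with Z pointing up and the viewpoint at height c_h.
    X = r * math.cos(theta_x)
    Y = r * math.sin(theta_x)
    Z = -c_h
    return (X, Y, Z)
```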
in operation S220, the target object corner points in the at least two images are clustered according to the at least two spatial position information, so as to obtain at least one corner point group.
According to an embodiment of the present disclosure, a clustering algorithm may be applied to the at least two pieces of spatial position information corresponding to the target object corners in the at least two images, yielding at least one information group. The target spatial point represented by the information in each group can then be determined, and all target object corners corresponding to that target spatial point form a corner group. The clustering algorithm may be, for example, the K-means algorithm or a density-based clustering algorithm; the disclosure is not limited in this respect.
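As a minimal sketch of this clustering step, the following uses scikit-learn's density-based DBSCAN on the spatial positions; the eps and min_samples values are illustrative assumptions.

```python
# A sketch of operation S220, assuming scikit-learn is available; parameter
# values are illustrative, not prescribed by the patent.
import numpy as np
from sklearn.cluster import DBSCAN

def group_corners(spatial_positions: np.ndarray, corner_ids: list):
    """spatial_positions: [n, 3] spatial points of corners from all images.

    Returns a dict mapping cluster label -> list of corner identifiers; corners
    whose spatial points coincide (up to misalignment) form one corner group.
    """
    labels = DBSCAN(eps=0.2, min_samples=1).fit_predict(spatial_positions)
    groups = {}
    for label, cid in zip(labels, corner_ids):
        groups.setdefault(int(label), []).append(cid)
    return groups
```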
In operation S230, the reference position information of the target object corners in each corner group is determined according to the position information of the spatial points corresponding to the corners in that group.
According to an embodiment of the present disclosure, given the misalignment, the spatial points corresponding to all target object corners in each clustered corner group can be regarded as essentially the same spatial point. Based on this, the cluster center of the information group corresponding to each corner group may be taken as the reference spatial information of that common spatial point, and this position may be converted, according to the conversion relationship between the pixel coordinate system and the three-dimensional spatial coordinate system, into the pixel coordinate system of the image containing each target object corner in the group, yielding the reference position information of each corner.
In an embodiment, the mean of all information in the information group corresponding to each corner group may be used as the reference spatial information; that is, the mean of the position information of all spatial points corresponding to the target object corners in that group. The reference position information of each target object corner obtained by the conversion is then the position, in the image containing that corner, corresponding to the reference spatial information.
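A sketch of this step follows, under the same assumed conventions as the projection sketch above, and further assuming for simplicity that all spatial points share one coordinate frame (a real pipeline would reproject per image using each image's pose): the cluster mean serves as the reference spatial information and is projected back to pixel coordinates.

```python
# A sketch of operation S230: cluster mean -> reference spatial information ->
# reference pixel position, inverting the assumed formulas (1)-(3) above.
import math
import numpy as np

def reference_pixel(group_points: np.ndarray, w: int, h: int, c_h: float):
    """group_points: [k, 3] spatial points of one corner group (ground corners).

    Returns the reference pixel position shared by every corner in the group.
    """
    X, Y, _ = group_points.mean(axis=0)          # reference spatial information
    theta_x = math.atan2(Y, X)                   # inverse of assumed formula (1)
    r = math.hypot(X, Y)
    theta_y = math.atan2(c_h, r)                 # inverse of formula (2)
    x = (theta_x + math.pi) * w / (2.0 * math.pi)
    y = (theta_y + math.pi / 2.0) * h / math.pi
    return (x, y)
```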
In operation S240, the position of the target object corner in the image is corrected according to the reference position information of the target object corner.
By taking the reference position information of each target object corner as the corner's position information in the image containing it, this embodiment corrects the position of each corner in its image.
According to the embodiments of the present disclosure, by projecting the target object corners in the at least two images into three-dimensional space and clustering the corners according to the positions of the projected spatial points, identical target object corners from the at least two images can be grouped into one corner group. The reference position of the corners is then determined from the projected three-dimensional positions of all corners in the same group. This unifies the positions of identical target object corners across the at least two images, corrects the images, and resolves the misalignment that arises when the point cloud data corresponding to multiple images are stitched. Compared with manually correcting the corners in the model, both correction accuracy and correction efficiency are improved.
Assuming each image includes at least two target object corners, the operation of clustering the corners according to spatial position information is further detailed below in conjunction with fig. 3.
Fig. 3 is a schematic diagram of clustering a plurality of target object corner points according to an embodiment of the present disclosure.
According to an embodiment of the present disclosure, when clustering a plurality of target object corners, in addition to the position information of the spatial point corresponding to each corner, the relative positional relationship between the at least two spatial points corresponding to at least two corners in the same image may also be considered. This further ensures that the spatial points corresponding to all target object corners in a clustered corner group are indeed the same spatial point, which improves the accuracy of the determined reference position information and hence the correction effect.
Illustratively, as shown in fig. 3, in embodiment 300, when clustering a plurality of target object corners, at least two target object corners 320 of the target object in a first image 311 of the at least two images may first be determined. For each target object corner 321 of the at least two corners 320, its neighboring corner 322 is determined. The neighboring corner 322 is a corner adjacent in position to corner 321 among the at least two corners of the target object in the first image 311. For example, the neighboring corner 322 may be a corner located within a predetermined range of corner 321 in the first image, or the corner closest to corner 321 in a predetermined orientation of corner 321.
After the neighboring corners are determined, the position feature 340 of the spatial point corresponding to each target object corner 321 may be determined from the position information 331 of the spatial point corresponding to corner 321 and the position information 332 of the spatial point corresponding to the neighboring corner 322. For example, the position feature 340 may be formed from the position information of the spatial point corresponding to corner 321 and the distance between that spatial point and the spatial point corresponding to the neighboring corner 322. In this way, the position features of the at least two spatial points corresponding to the at least two target object corners in the first image are obtained.
For each image in the image set 312 other than the first image 311, the operations performed for the first image 311 are repeated. A plurality of target object corners 350, including the at least two corners 320, are thus obtained for the at least two images in total, together with the position features of the spatial points corresponding to each of them; that is, a plurality of position features of a plurality of spatial points are obtained.
The plurality of target object corners 350 may then be clustered according to the position features of the plurality of spatial points, thereby obtaining at least one corner group 360. It will be appreciated that, in general, the number of corner groups 360 may equal the actual number of corners of the target object; the disclosure is not limited in this respect.
In an embodiment, it is observed that the relative orientation between two adjacent corners of the target object is not affected by pose changes of the image acquisition device; that is, the relative orientation between the two spatial points corresponding to two adjacent corners is the same in different images. In this embodiment, when determining the position feature of the spatial point corresponding to each target object corner, a direction feature may first be determined from the position information of that spatial point and of the spatial point corresponding to the neighboring corner; the direction feature represents the relative orientation, in three-dimensional space, of the corner and its neighboring corner. The direction feature and the position information of the spatial point corresponding to the corner may then be combined into a tuple that serves as the position feature. For example, if the number of neighboring corners is N, the tuple is an (N+1)-tuple comprising N direction features, in one-to-one correspondence with the N neighboring corners, plus the position information of the spatial point corresponding to the corner.
By determining the position feature of the spatial point corresponding to each target object corner from both the direction feature and the position information, the position feature reflects the position of the spatial point more comprehensively. This improves the clustering accuracy and further ensures that all target object corners in each clustered corner group correspond to the same spatial point in three-dimensional space.
Fig. 4 is a schematic diagram of determining directional characteristics according to an embodiment of the present disclosure.
As shown in fig. 4, in embodiment 400, the target object corner points in the set image 410 include at least corner points 401 to 406.
For example, for corner 403, the determined neighboring corners may include only one of corners 404, 401, and 405, or any several of them; the disclosure is not limited in this respect.
In an embodiment, the determined neighboring corners may comprise two corners located in two orientations on either side of each target object corner. For example, for corner 403, the determined neighboring corners may include corner 401, adjacent to corner 403 in a first orientation, and corner 405, adjacent to corner 403 in a second orientation, where the first and second orientations lie on the two sides of corner 403. Using neighboring corners in orientations on both sides of a corner allows the position feature of the spatial point corresponding to corner 403 to be expressed more accurately.
In embodiment 400, for example, the direction vector between the spatial point corresponding to each target object corner and the spatial point corresponding to a neighboring corner may be used as the direction feature of that spatial point. For corner 403, let the neighboring corners be corners 401 and 405. From the position information of the spatial points corresponding to corners 403 and 401, the vector from the spatial point of corner 403 to the spatial point of corner 401 is determined as one direction vector; from the position information of the spatial points corresponding to corners 403 and 405, the vector from the spatial point of corner 403 to the spatial point of corner 405 is determined as another direction vector. The two direction vectors form the direction feature of the spatial point corresponding to corner 403.
Illustratively, assuming the first orientation is to the left and the second orientation is to the right, the position feature of the spatial point corresponding to corner 403 may be represented by the triplet [left direction vector, spatial point position, right direction vector]. The left direction vector is the vector from the spatial point corresponding to corner 403 to the spatial point corresponding to corner 401, the spatial point position is represented by the position information of the spatial point corresponding to corner 403, and the right direction vector is the vector from the spatial point corresponding to corner 403 to the spatial point corresponding to corner 405.
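The triplet can be sketched as follows, assuming spatial points are NumPy vectors; whether the direction vectors are normalized is an implementation choice not fixed by the text and is labeled as such below.

```python
# A sketch of the triplet position feature [left direction vector, spatial
# point position, right direction vector] described above; names are assumed.
import numpy as np

def position_feature(p: np.ndarray, p_left: np.ndarray, p_right: np.ndarray):
    """Build the position feature of spatial point p corresponding to a corner,
    given the spatial points of its left and right neighboring corners."""
    left_vec = p_left - p                        # direction vector toward left neighbor
    right_vec = p_right - p                      # direction vector toward right neighbor
    # Normalization is an assumption: it makes the direction features pure
    # orientations; the text itself only speaks of direction vectors.
    left_dir = left_vec / np.linalg.norm(left_vec)
    right_dir = right_vec / np.linalg.norm(right_vec)
    return left_dir, p, right_dir
```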
Fig. 5 is a schematic diagram of clustering a plurality of target object corner points according to a location feature according to an embodiment of the present disclosure.
According to embodiments of the present disclosure, the plurality of target object corners may be clustered according to the feature distances between position features. Specifically, the feature distances between the position features are computed first: for every two corners among the plurality of target object corners, the distance between the position features of the two corresponding spatial points is determined, yielding the feature distances between the position features of the plurality of spatial points.
For example, the feature distance may be a Euclidean distance, a cosine distance, or the like; the disclosure is not limited in this respect. Two corners whose position features have a mutual feature distance smaller than a distance threshold may be classified into one corner group.
As shown in fig. 5, in embodiment 500, the position feature comprises the direction feature of the spatial point corresponding to each target object corner and the position information of that spatial point. When determining the feature distance, a feature sub-distance may be computed separately for the direction feature and for the position information, and the feature distance is then determined from the two feature sub-distances. Determining feature sub-distances separately for the different information in the position feature makes it convenient to compute distances for different information in different ways, which helps improve the accuracy of the determined feature distance.
For example, for a first corner 510, the determined first position feature 520 of the corresponding spatial point may include a first direction feature 521 and first position information 522. For a second corner 530, the determined second position feature 540 of the corresponding spatial point may include a second direction feature 541 and second position information 542. The cosine distance between the first direction feature 521 and the second direction feature 541 may be computed as the first feature sub-distance 551, and the Euclidean distance between the first position information 522 and the second position information 542 as the second feature sub-distance 552. Finally, the weighted sum of the first feature sub-distance 551 and the second feature sub-distance 552 is taken as the feature distance 560 between the first position feature 520 and the second position feature 540. The weighting coefficients can be set according to actual requirements; for example, the two feature sub-distances may simply be added to obtain the feature distance. The disclosure is not limited in this respect.
In this embodiment, if the feature distance 560 is smaller than a preset distance threshold, it may be determined that the first corner 510 and the second corner 530 belong to the same corner group. In the resulting corner group, the feature distances between the position features of the two spatial points corresponding to any two target object corners may all be smaller than the distance threshold; alternatively, the feature distance between the position feature of the spatial point corresponding to each corner and the position feature serving as the cluster center may be smaller than the distance threshold. The disclosure is not limited in this respect.
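A sketch of this feature distance follows, assuming the triplet layout built above; the weighting coefficients are illustrative assumptions.

```python
# A sketch of the feature distance in embodiment 500: a cosine sub-distance on
# the direction features plus a Euclidean sub-distance on the positions.
import numpy as np

def feature_distance(feat_a, feat_b, w_dir: float = 1.0, w_pos: float = 1.0):
    """feat_* = (left_dir, position, right_dir), as built above."""
    def cosine_distance(u, v):
        return 1.0 - float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    # First feature sub-distance: difference between the direction features.
    d_dir = cosine_distance(feat_a[0], feat_b[0]) + cosine_distance(feat_a[2], feat_b[2])
    # Second feature sub-distance: Euclidean distance between the positions.
    d_pos = float(np.linalg.norm(feat_a[1] - feat_b[1]))
    # Weighted sum of the two feature sub-distances (weights are assumptions).
    return w_dir * d_dir + w_pos * d_pos

# Two corners belong to the same corner group when feature_distance(...) is
# smaller than a preset distance threshold.
```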
Based on the method for correcting the corner points provided by the present disclosure, the present disclosure further provides a method for generating a three-dimensional model, which will be described in detail below with reference to fig. 6.
Fig. 6 is a flow diagram of a method of generating a three-dimensional model according to an embodiment of the present disclosure.
As shown in fig. 6, the three-dimensional model generation method 600 of this embodiment may include operations S610 to S620.
In operation S610, point cloud data corresponding to each of at least two panoramic images including a target object is determined.
In operation S620, point cloud data corresponding to at least two panoramic images are aggregated to obtain a three-dimensional scene model including a target object.
According to an embodiment of the present disclosure, each of the at least two panoramic images may be an image obtained by correcting the target object corners in an initial image using the corner correction method described above.
A point cloud generation network may be used to generate the point cloud data corresponding to each panoramic image. For example, a sparse point cloud network may generate sparse point cloud data from each image, and the sparse point cloud data may then be input into a dense module (Dense Module) to generate a dense point cloud. The sparse point cloud network may comprise an encoder and a decoder, where the encoder is formed by a convolutional network and the decoder by a deconvolution network and a convolutional network. The dense module may process the sparse point cloud data through feature extraction and feature expansion operations.
According to an embodiment of the present disclosure, when determining the point cloud data corresponding to each panoramic image, a monocular depth estimation model may first be used to process each panoramic image to obtain its depth map. The point cloud data corresponding to each panoramic image is then determined from the panoramic image and the depth map.
The monocular depth estimation model is a generative model whose input is an image and whose output is an image containing depth information. It may be a model constructed based on deep learning techniques, and may be a supervised or a self-supervised monocular depth estimation model. A supervised model needs real depth maps as supervision and relies on a high-precision depth sensor to capture real depth information. A self-supervised model may use constraints between successive frames to predict depth information; self-supervised models include, for example, the MLDA-Net framework (Multi-Level Dual Attention-Based Network for Self-Supervised Monocular Depth Estimation). MLDA-Net takes a low-resolution color image as input and estimates the corresponding depth information in a self-supervised manner. The framework uses a multi-level feature extraction (MLFE) strategy to extract rich hierarchical representations from different receptive fields for high-quality depth prediction. It obtains effective features with a dual attention strategy that enhances global and local structural information by combining global and local attention modules. It computes the loss function with a re-weighting strategy that re-weights the depth information output at different levels, thereby effectively supervising the final depth output.
In the overall structure of this framework, input data at several different scales (the scales being selectable parameters) are processed by a two-branch convolutional network to extract features, which are then integrated with the attention network GA and further extracted; the convolutional network and the attention network form the encoding network structure. The extracted features are fed into a second network structure that performs feature extraction and upsampling based mainly on two attention modules, and depth maps at the different scales corresponding to the input maps of different scales are finally output.
It will be appreciated that the above model framework for monocular depth estimation is merely an example to facilitate understanding of the present disclosure, which is not limited thereto. For example, the monocular depth estimation model may also be constructed based on a Markov random field, the Monodepth algorithm, the SVS (Single View Stereo Matching) algorithm, or the like.
After the depth map is obtained, the depth map and each panoramic image may be mapped to three-dimensional point cloud data based on three-dimensional geometric principles. The three-dimensional point cloud data may include three-dimensional coordinates, color information, reflection intensity information, and the like. The color information may be represented by the RGB values of the corresponding pixels in each panoramic image, and the reflection intensity information in the point cloud data corresponding to each pixel may be computed from the RGB values of that pixel according to a ray tracing algorithm.
Generating the depth map corresponding to each image with a monocular depth estimation model and then determining the point cloud data from the depth map improves both the accuracy and the efficiency of determining the point cloud data.
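As an illustration of the mapping step, the following sketch back-projects every panorama pixel along its viewing ray under the same assumed equirectangular conventions as earlier; the ray-traced reflection intensity is omitted for brevity.

```python
# A hedged sketch: map a panorama and its depth map to a colored point cloud.
# Here depth is the per-pixel distance along the viewing ray (an assumption).
import numpy as np

def panorama_to_point_cloud(rgb: np.ndarray, depth: np.ndarray):
    """rgb: [h, w, 3] panorama; depth: [h, w] distance to the viewpoint.

    Returns points [h*w, 3] and colors [h*w, 3].
    """
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    theta_x = (xs / w) * 2.0 * np.pi - np.pi          # longitude
    theta_y = (ys / h) * np.pi - np.pi / 2.0          # latitude
    # Spherical back-projection of every pixel along its viewing ray.
    X = depth * np.cos(theta_y) * np.cos(theta_x)
    Y = depth * np.cos(theta_y) * np.sin(theta_x)
    Z = -depth * np.sin(theta_y)                      # positive latitude looks down
    points = np.stack([X, Y, Z], axis=-1).reshape(-1, 3)
    colors = rgb.reshape(-1, 3)
    return points, colors
```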
According to an embodiment of the present disclosure, when aggregating the point cloud data corresponding to the at least two panoramic images, one of the panoramic images may be used as a pose reference image, and each other image forms an image pair with the pose reference image. The relative pose of each image pair is then determined, and the point cloud data corresponding to each other image is converted, according to the relative pose, into the three-dimensional coordinate system of the point cloud data corresponding to the pose reference image. The coordinate-converted point cloud data and the point cloud data corresponding to the pose reference image are aggregated, yielding a three-dimensional scene model including the target object.
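A sketch of this aggregation step follows, assuming each relative pose is given as a rotation matrix R and translation vector t obtained from the positioning data.

```python
# A sketch of aggregating point clouds via relative poses (R, t) with respect
# to the pose reference image; R and t are assumed to be given.
import numpy as np

def aggregate_point_clouds(reference_cloud: np.ndarray, other_clouds, poses):
    """other_clouds: list of [n_i, 3] arrays; poses: list of (R, t) pairs that
    transform each cloud into the reference image's coordinate system."""
    aligned = [reference_cloud]
    for cloud, (R, t) in zip(other_clouds, poses):
        # Rigid transform into the reference coordinate system.
        aligned.append(cloud @ R.T + t)
    return np.concatenate(aligned, axis=0)
```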
In the embodiments of the present disclosure, because the panoramic images used to generate the three-dimensional model have been corrected by the corner correction method, the generated three-dimensional scene model is free of misalignment; the model is therefore more realistic and vivid, and the user experience is improved.
Based on the method for correcting the corner points provided by the present disclosure, the present disclosure further provides a device for correcting the corner points, which will be described in detail below with reference to fig. 7.
Fig. 7 is a block diagram of a structure of a correction device of a corner point according to an embodiment of the present disclosure.
As shown in fig. 7, the corner correction apparatus 700 of this embodiment may include a location information determination module 710, a corner clustering module 720, a reference location determination module 730, and a location correction module 740.
The location information determining module 710 is configured to determine, according to the position information of the target object corner included in each of at least two images, the position information of the spatial point corresponding to that corner, thereby obtaining at least two pieces of spatial position information. In an embodiment, the location information determining module 710 may be configured to perform operation S210 described above; details are not repeated here.
The corner clustering module 720 is configured to cluster the target object corners in the at least two images according to the at least two pieces of spatial position information to obtain at least one corner group. In an embodiment, the corner clustering module 720 may be configured to perform operation S220 described above; details are not repeated here.
The reference position determining module 730 is configured to determine the reference position information of the target object corners in each corner group according to the position information of the spatial points corresponding to the corners in that group. In an embodiment, the reference position determining module 730 may be configured to perform operation S230 described above; details are not repeated here.
The position correction module 740 is configured to correct the positions of the target object corners in the images according to the reference position information of the corners. In an embodiment, the position correction module 740 may be configured to perform operation S240 described above; details are not repeated here.
According to an embodiment of the present disclosure, each image comprises at least two target object corner points. The corner clustering module 720 may include a location feature determination sub-module and a clustering sub-module. The position feature determining sub-module is used for determining the position feature of the space point corresponding to each target object corner according to the position information of the space point corresponding to each target object corner and the position information of the space point corresponding to the adjacent corner in each image. The adjacent corner points are corner points adjacent to the position of each target object corner point in at least two target object corner points included in each image. The clustering sub-module is used for clustering the plurality of target object corner points according to the position characteristics of the plurality of space points corresponding to the plurality of target object corner points in the at least two images to obtain at least one corner point group.
According to an embodiment of the present disclosure, the position feature determination submodule may include a direction feature determination unit and a position feature determination unit. The direction feature determining unit is used for determining the direction feature of the space point corresponding to each target object corner according to the position information of the space point corresponding to each target object corner and the position information of the space point corresponding to the adjacent corner. The position feature determining unit is used for determining the position feature according to the direction feature and the position information of the space point corresponding to each target object angular point.
According to an embodiment of the present disclosure, the above-mentioned clustering sub-module includes a feature distance determining unit and a clustering unit. The feature distance determining unit is used for determining feature distances between the position features of two space points corresponding to each two corner points of the plurality of target object corner points. The clustering unit is used for clustering the corner points of the plurality of target objects according to the feature distances between the position features of the plurality of space points.
According to an embodiment of the present disclosure, the position features include position information of spatial points corresponding to each corner point and a direction feature. The characteristic distance determining unit comprises a first distance determining subunit, a second distance determining subunit and a third distance determining subunit. The first distance determining subunit is configured to determine a distance between positions indicated by two position information in two position features of two spatial points, so as to obtain a first feature sub-distance. The second distance determining subunit is configured to determine a difference between two direction features in the two position features, and obtain a second feature sub-distance. The third distance determining subunit is configured to determine a feature distance between the position features of the two spatial points according to the first feature sub-distance and the second feature sub-distance. The direction characteristic indicates a direction vector between a space point corresponding to each target object corner point and a space point corresponding to an adjacent corner point.
According to an embodiment of the present disclosure, the neighboring corner points include a corner point neighboring each target object corner point in a first orientation and a corner point neighboring each target object corner point in a second orientation. The first azimuth and the second azimuth are positioned at two sides of each target object angular point.
The reference position determination module 730 may include a mean determination sub-module and a reference position determination sub-module according to embodiments of the present disclosure. The mean value determining submodule is used for determining the mean value of the position information of all the space points corresponding to all the target object corner points in each corner point group and taking the mean value as reference space information. The reference position determining submodule is used for determining position information corresponding to the reference space information in the image where each target object corner point in each corner point group is located, and the position information is used as the reference position information of each target object corner point.
Based on the method for generating the three-dimensional model provided by the present disclosure, the present disclosure further provides a device for generating the three-dimensional model, which will be described in detail below with reference to fig. 8.
Fig. 8 is a block diagram of a structure of a three-dimensional model generating apparatus according to an embodiment of the present disclosure.
As shown in fig. 8, the three-dimensional model generating apparatus 800 of this embodiment may include a point cloud data determining module 810 and a model obtaining module 820.
The point cloud data determining module 810 is configured to determine, for at least two panoramic images including a target object, the point cloud data corresponding to each panoramic image, where the target object corners in each panoramic image are corners corrected by the corner correction device described above. In an embodiment, the point cloud data determining module 810 may be configured to perform operation S610 described above; details are not repeated here.
The model obtaining module 820 is configured to aggregate the point cloud data corresponding to the at least two panoramic images to obtain a three-dimensional scene model including the target object. In an embodiment, the model obtaining module 820 may be configured to perform operation S620 described above; details are not repeated here.
In the technical solution of the present disclosure, the collection, storage, use, processing, transmission, provision, disclosure, and application of the personal information involved all comply with the relevant laws and regulations, necessary security measures are taken, and public order and good customs are not violated. In the technical solution of the present disclosure, the user's authorization or consent is obtained before the user's personal information is obtained or collected.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
Fig. 9 shows a schematic block diagram of an example electronic device 900 that may be used to implement the corner point correction method and/or the three-dimensional model generation method of embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 9, the device 900 includes a computing unit 901 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 902 or a computer program loaded from a storage unit 908 into a Random Access Memory (RAM) 903. In the RAM 903, various programs and data required for the operation of the device 900 can also be stored. The computing unit 901, the ROM 902, and the RAM 903 are connected to one another via a bus 904. An input/output (I/O) interface 905 is also connected to the bus 904.
Various components in device 900 are connected to I/O interface 905, including: an input unit 906 such as a keyboard, a mouse, or the like; an output unit 907 such as various types of displays, speakers, and the like; a storage unit 908 such as a magnetic disk, an optical disk, or the like; and a communication unit 909 such as a network card, modem, wireless communication transceiver, or the like. The communication unit 909 allows the device 900 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunications networks.
The computing unit 901 may be any of various general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of the computing unit 901 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 901 performs the various methods and processes described above, for example the corner point correction method and/or the three-dimensional model generation method. For example, in some embodiments, the corner point correction method and/or the three-dimensional model generation method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 908. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 900 via the ROM 902 and/or the communication unit 909. When the computer program is loaded into the RAM 903 and executed by the computing unit 901, one or more steps of the corner point correction method and/or the three-dimensional model generation method described above may be performed. Alternatively, in other embodiments, the computing unit 901 may be configured to perform the corner point correction method and/or the three-dimensional model generation method in any other suitable way (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include being implemented in one or more computer programs, where the one or more computer programs may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out the methods of the present disclosure may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general-purpose computer, a special-purpose computer, or other programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or a cloud host; it is a host product in a cloud computing service system that overcomes the defects of difficult management and weak service scalability found in traditional physical hosts and Virtual Private Server (VPS) services. The server may also be a server of a distributed system or a server combined with a blockchain.
It should be appreciated that steps may be reordered, added, or deleted using the various forms of flow shown above. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, as long as the desired results of the technical solutions of the present disclosure can be achieved; no limitation is imposed herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (18)

1. A method of correcting corner points, comprising:
determining position information of a spatial point corresponding to a target object corner point according to position information of the target object corner point included in each of at least two images, to obtain at least two pieces of spatial position information;
clustering the target object corner points in the at least two images according to the at least two pieces of spatial position information, to obtain at least one corner point group;
determining reference position information of the target object corner point in each corner point group according to the position information of the spatial point corresponding to the target object corner point in each corner point group; and
correcting a position of the target object corner point in the image according to the reference position information of the target object corner point.
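For orientation only, the clustering step of claim 1 can be pictured as grouping spatial points that nearly coincide. The Python sketch below uses a greedy distance threshold as a stand-in; the claim does not prescribe a particular clustering algorithm, and `eps` is an assumed hyperparameter, not a value given by the disclosure.

```python
import numpy as np

def group_corners(spatial_points, eps=0.05):
    """Greedy distance-threshold grouping of spatial points, standing in
    for the clustering step of claim 1. Corners of different images whose
    spatial points lie within eps of each other are treated as
    observations of the same physical corner (eps is an assumption)."""
    pts = [np.asarray(p, dtype=float) for p in spatial_points]
    groups, assigned = [], [False] * len(pts)
    for i, p in enumerate(pts):
        if assigned[i]:
            continue
        group = [j for j, q in enumerate(pts)
                 if not assigned[j] and np.linalg.norm(p - q) < eps]
        for j in group:
            assigned[j] = True
        groups.append(group)  # each group holds indices into spatial_points
    return groups
```

Each resulting group then yields one reference spatial point (the mean shown in the apparatus description above), which is reprojected into every member image to correct that image's corner position.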
2. The method of claim 1, wherein each image includes at least two target object corner points; the clustering of the target object corner points in the at least two images according to the at least two pieces of spatial position information to obtain at least one corner point group comprises:
determining a position feature of the spatial point corresponding to each target object corner point according to position information of the spatial point corresponding to each target object corner point and position information of spatial points corresponding to adjacent corner points in each image, wherein the adjacent corner points are corner points, among the at least two target object corner points included in each image, that are adjacent in position to each target object corner point; and
clustering a plurality of target object corner points according to position features of a plurality of spatial points corresponding to the plurality of target object corner points in the at least two images, to obtain the at least one corner point group.
3. The method of claim 2, wherein the determining a position feature of the spatial point corresponding to each target object corner point according to the position information of the spatial point corresponding to each target object corner point and the position information of the spatial points corresponding to the adjacent corner points in each image comprises:
determining a direction feature of the spatial point corresponding to each target object corner point according to the position information of the spatial point corresponding to each target object corner point and the position information of the spatial point corresponding to the adjacent corner point; and
determining the position feature according to the direction feature and the position information of the spatial point corresponding to each target object corner point.
4. The method of claim 2, wherein the clustering the plurality of target object corner points according to the position features of the plurality of spatial points corresponding to the plurality of target object corner points in the at least two images comprises:
determining a feature distance between position features of two spatial points corresponding to every two corner points in the plurality of target object corner points; and
clustering the plurality of target object corner points according to the feature distances between the position features of the plurality of spatial points.
5. The method according to claim 4, wherein the position features include position information of the spatial point corresponding to each target object corner point and a direction feature; the determining a feature distance between the position features of the two spatial points corresponding to every two corner points of the plurality of target object corner points comprises:
determining a distance between positions indicated by two pieces of position information in the two position features of the two spatial points, to obtain a first feature sub-distance;
determining a difference between two direction features in the two position features, to obtain a second feature sub-distance; and
determining the feature distance between the position features of the two spatial points according to the first feature sub-distance and the second feature sub-distance,
wherein the direction feature indicates a direction vector between the spatial point corresponding to each target object corner point and the spatial point corresponding to the adjacent corner point.
6. The method of any one of claims 2-5, wherein:
the adjacent corner points comprise a corner point adjacent to each target object corner point in a first orientation and a corner point adjacent to each target object corner point in a second orientation,
wherein the first orientation and the second orientation are located on two sides of each target object corner point.
7. The method of claim 1, wherein the determining reference position information of the target object corner point in each corner point group according to the position information of the spatial point corresponding to the target object corner point in each corner point group comprises:
determining a mean value of the position information of all spatial points corresponding to all target object corner points in each corner point group as reference spatial information; and
determining position information corresponding to the reference spatial information in the image in which each target object corner point in each corner point group is located, as the reference position information of each target object corner point.
8. A method of generating a three-dimensional model, comprising:
determining point cloud data corresponding to each of at least two panoramic images comprising a target object; and
aggregating the point cloud data corresponding to the at least two panoramic images to obtain a three-dimensional scene model comprising the target object,
wherein the target object corner points in each panoramic image are corner points corrected by the method of any one of claims 1 to 7.
9. A corner correction device comprising:
the position information determining module is used for determining position information of a spatial point corresponding to a target object corner point according to position information of the target object corner point included in each of at least two images, to obtain at least two pieces of spatial position information;
the corner clustering module is used for clustering the target object corner points in the at least two images according to the at least two pieces of spatial position information, to obtain at least one corner point group;
the reference position determining module is used for determining reference position information of the target object corner point in each corner point group according to the position information of the spatial point corresponding to the target object corner point in each corner point group; and
the position correction module is used for correcting a position of the target object corner point in the image according to the reference position information of the target object corner point.
10. The apparatus of claim 9, wherein each image includes at least two target object corner points; the corner clustering module comprises:
the position feature determining sub-module is used for determining the position feature of the space point corresponding to each target object corner according to the position information of the space point corresponding to each target object corner in each image and the position information of the space point corresponding to the adjacent corner; the adjacent corner points are corner points adjacent to the position of each target object corner point in at least two target object corner points included in each image; and
And the clustering sub-module is used for clustering the plurality of target object corner points according to the position characteristics of a plurality of space points corresponding to the plurality of target object corner points in the at least two images to obtain the at least one corner point group.
11. The apparatus of claim 10, wherein the location feature determination submodule comprises:
the direction feature determining unit is used for determining the direction feature of the space point corresponding to each target object corner according to the position information of the space point corresponding to each target object corner and the position information of the space point corresponding to the adjacent corner; and
and the position feature determining unit is used for determining the position feature according to the direction feature and the position information of the space point corresponding to each target object angular point.
12. The apparatus of claim 10, wherein the clustering sub-module comprises:
a feature distance determining unit, configured to determine a feature distance between position features of two spatial points corresponding to every two corner points of the plurality of target object corner points; and
a clustering unit, configured to cluster the plurality of target object corner points according to the feature distances between the position features of the plurality of spatial points.
13. The apparatus of claim 12, wherein the position features include position information of a spatial point corresponding to each target object corner point and a direction feature; the feature distance determining unit includes:
a first distance determining subunit, configured to determine a distance between positions indicated by two pieces of position information in two position features of the two spatial points, to obtain a first feature sub-distance;
a second distance determining subunit, configured to determine a difference between two direction features in the two position features, to obtain a second feature sub-distance; and
a third distance determining subunit configured to determine a feature distance between the location features of the two spatial points according to the first feature sub-distance and the second feature sub-distance,
wherein the direction feature indicates a direction vector between the spatial point corresponding to each target object corner point and the spatial point corresponding to the adjacent corner point.
14. The apparatus of any one of claims 10-13, wherein:
the adjacent corner points comprise a corner point adjacent to each target object corner point in a first orientation and a corner point adjacent to each target object corner point in a second orientation,
wherein the first orientation and the second orientation are located on two sides of each target object corner point.
15. The apparatus of claim 9, wherein the reference position determining module comprises:
the mean determining sub-module is used for determining a mean value of the position information of all spatial points corresponding to all target object corner points in each corner point group as reference spatial information; and
the reference position determining sub-module is used for determining position information corresponding to the reference spatial information in the image in which each target object corner point in each corner point group is located, as the reference position information of each target object corner point.
16. A three-dimensional model generation device, comprising:
the point cloud data determining module is used for determining point cloud data corresponding to each of at least two panoramic images comprising a target object; and
a model obtaining module for aggregating the point cloud data corresponding to the at least two panoramic images to obtain a three-dimensional scene model comprising the target object,
wherein the target object corner points in each panoramic image are corner points corrected using the apparatus of any one of claims 9 to 15.
17. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1 to 8.
18. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-8.
CN202211075976.6A 2022-09-02 2022-09-02 Corner correction method and generation method and device of three-dimensional model in meta universe Active CN115439331B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211075976.6A CN115439331B (en) 2022-09-02 2022-09-02 Corner correction method and generation method and device of three-dimensional model in meta universe

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211075976.6A CN115439331B (en) 2022-09-02 2022-09-02 Corner correction method and generation method and device of three-dimensional model in meta universe

Publications (2)

Publication Number Publication Date
CN115439331A CN115439331A (en) 2022-12-06
CN115439331B (en) 2023-07-07

Family

ID=84246778

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211075976.6A Active CN115439331B (en) 2022-09-02 2022-09-02 Corner correction method and generation method and device of three-dimensional model in meta universe

Country Status (1)

Country Link
CN (1) CN115439331B (en)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9912934B2 (en) * 2014-07-21 2018-03-06 Moaath Alrajab Determining three dimensional information using a single camera
CN108765487B (en) * 2018-06-04 2022-07-22 百度在线网络技术(北京)有限公司 Method, device, equipment and computer readable storage medium for reconstructing three-dimensional scene
CN113191174B (en) * 2020-01-14 2024-04-09 北京京东乾石科技有限公司 Article positioning method and device, robot and computer readable storage medium
CN112097732A (en) * 2020-08-04 2020-12-18 北京中科慧眼科技有限公司 Binocular camera-based three-dimensional distance measurement method, system, equipment and readable storage medium
CN112288853B (en) * 2020-10-29 2023-06-20 字节跳动有限公司 Three-dimensional reconstruction method, three-dimensional reconstruction device, and storage medium

Also Published As

Publication number Publication date
CN115439331A (en) 2022-12-06

Similar Documents

Publication Publication Date Title
CN109683699B (en) Method and device for realizing augmented reality based on deep learning and mobile terminal
EP4027299A2 (en) Method and apparatus for generating depth map, and storage medium
CN113077548B (en) Collision detection method, device, equipment and storage medium for object
CN115439543B (en) Method for determining hole position and method for generating three-dimensional model in meta universe
CN112529097B (en) Sample image generation method and device and electronic equipment
US20220198743A1 (en) Method for generating location information, related apparatus and computer program product
CN115375823A (en) Three-dimensional virtual clothing generation method, device, equipment and storage medium
CN114792355A (en) Virtual image generation method and device, electronic equipment and storage medium
CN112509135B (en) Element labeling method, element labeling device, element labeling equipment, element labeling storage medium and element labeling computer program product
CN112085842B (en) Depth value determining method and device, electronic equipment and storage medium
CN114723894B (en) Three-dimensional coordinate acquisition method and device and electronic equipment
CN115619986B (en) Scene roaming method, device, equipment and medium
CN115439331B (en) Corner correction method and generation method and device of three-dimensional model in meta universe
CN115375847B (en) Material recovery method, three-dimensional model generation method and model training method
CN115439536B (en) Visual map updating method and device and electronic equipment
US20220189027A1 (en) Panorama Rendering Method, Electronic Device and Storage Medium
CN114998433A (en) Pose calculation method and device, storage medium and electronic equipment
CN113763468A (en) Positioning method, device, system and storage medium
CN115761123B (en) Three-dimensional model processing method, three-dimensional model processing device, electronic equipment and storage medium
CN115880555B (en) Target detection method, model training method, device, equipment and medium
CN114820908B (en) Virtual image generation method and device, electronic equipment and storage medium
CN114463409B (en) Image depth information determining method and device, electronic equipment and medium
CN113470131B (en) Sea surface simulation image generation method and device, electronic equipment and storage medium
CN115375740A (en) Pose determination method, three-dimensional model generation method, device, equipment and medium
CN115908723B (en) Polar line guided multi-view three-dimensional reconstruction method based on interval perception

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant