CN114241029B - Image three-dimensional reconstruction method and device - Google Patents


Info

Publication number
CN114241029B
CN114241029B
Authority
CN
China
Prior art keywords
matrix
images
reconstructed
camera coordinate
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111564281.XA
Other languages
Chinese (zh)
Other versions
CN114241029A (en)
Inventor
周杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beike Technology Co Ltd
Original Assignee
Beike Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beike Technology Co Ltd filed Critical Beike Technology Co Ltd
Priority to CN202111564281.XA priority Critical patent/CN114241029B/en
Publication of CN114241029A publication Critical patent/CN114241029A/en
Application granted granted Critical
Publication of CN114241029B publication Critical patent/CN114241029B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

Embodiments of the disclosure provide a method and an apparatus for three-dimensional reconstruction of an image, wherein the method comprises the following steps: performing feature extraction and feature matching on at least two images to be reconstructed to obtain a matching point set of the at least two images to be reconstructed; determining a rotation matrix between the at least two images to be reconstructed according to the coordinates of at least one pair of matching points in the matching point set in the corresponding camera coordinate systems and the infinity homography in the two camera coordinate systems; determining a translation matrix between the at least two images to be reconstructed through an epipolar constraint algorithm based on the rotation matrix and the coordinates of at least one pair of matching points in the matching point set in the corresponding camera coordinate system; and performing three-dimensional reconstruction on the at least two images to be reconstructed based on the rotation matrix and the translation matrix. According to the method and the apparatus, the rotation matrix can be calculated by solving the near-infinity homography of the images to be reconstructed, and the translation matrix can then be calculated from the rotation matrix, so that the final pose is obtained and three-dimensional reconstruction is realized.

Description

Image three-dimensional reconstruction method and device
Technical Field
The present disclosure relates to the field of computer vision and image processing technologies, and in particular, to a method and an apparatus for three-dimensional image reconstruction.
Background
Structure from Motion (SfM) is a traditional three-dimensional reconstruction technique, and its workflow is as follows: feature points in two images are first extracted and matched, and the fundamental matrix F and the essential matrix E are solved from the epipolar geometric constraint so as to calculate the pose [R|T] between the two images; the projection matrix mapping two-dimensional pixel points into three-dimensional space is then solved by triangulation using the fundamental matrix and the pose [R|T], and the projection matrix is further iteratively optimized with the feature matching points through bundle adjustment to obtain the three-dimensional space coordinates of the feature points, yielding a sparse point cloud model and realizing three-dimensional reconstruction.
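As a hedged sketch of this classical two-view pipeline (not the patent's implementation; all names and numeric values below are illustrative), the following numpy snippet synthesizes two views with a known pose [R|T], forms the essential matrix E = T^ R, and checks that the matched points satisfy the epipolar constraint in normalized camera coordinates:

```python
import numpy as np

def skew(t):
    """Skew-symmetric matrix [t]x, so that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

rng = np.random.default_rng(0)
R = rot_z(0.3)                                    # ground-truth relative rotation
T = np.array([0.5, 0.1, 0.2])                     # ground-truth translation
X1 = rng.uniform(-1.0, 1.0, (20, 3)) + np.array([0.0, 0.0, 5.0])  # points, camera-1 frame
X2 = X1 @ R.T + T                                 # same points in the camera-2 frame
p1 = X1 / X1[:, 2:3]                              # normalized camera coordinates
p2 = X2 / X2[:, 2:3]
E = skew(T) @ R                                   # essential matrix E = [T]x R
# epipolar residual p2^T E p1 should vanish for every match
residual = np.abs(np.einsum('ni,ij,nj->n', p2, E, p1)).max()
```

With noise-free synthetic data the residual is zero to machine precision; real pipelines estimate E from noisy matches instead, e.g. with the eight-point algorithm inside a RANSAC loop.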
When the existing SfM is used for three-dimensional reconstruction, if the matched feature points in the two images are coplanar, the fundamental matrix F and the essential matrix E cannot be solved, and the homography matrix H needs to be solved instead. In a scene in which a large number of feature points in the images are distant-view features (for example, an outdoor panorama captured from an indoor balcony), the number of distant-view feature points is far greater than the number of close-view feature points, so the solutions of the fundamental matrix F and the essential matrix E easily degenerate and fail. The distant-view feature points approximately satisfy the infinity homography H∞, but the close-view feature points do not; therefore, based on H∞ alone, only the rotation matrix R can be determined, the translation matrix T cannot be calculated, the final pose cannot be obtained, and the three-dimensional reconstruction effect is poor. In a practical application scene, when the images contain much distant-view information and that information contains many matched feature points, stitching the panoramic images is difficult and stitching misalignment often occurs, which greatly troubles the display end.
Disclosure of Invention
One technical problem to be solved by the embodiments of the present disclosure is to provide a method and a device for three-dimensional reconstruction of an image.
According to an aspect of the embodiments of the present disclosure, there is provided an image three-dimensional reconstruction method applied in an image three-dimensional reconstruction scene in which at least two images to be reconstructed satisfy a long-range image condition, the method including:
performing feature extraction and feature matching on the at least two images to be reconstructed to obtain a matching point set of the at least two images to be reconstructed;
determining a rotation matrix between the at least two images to be reconstructed according to the coordinates of at least one pair of matching points in the matching point set in the corresponding camera coordinate systems and the infinity homography matrix in the two camera coordinate systems;
determining a translation matrix between the at least two images to be reconstructed through an epipolar constraint algorithm based on the rotation matrix and the coordinates of at least one pair of matching points in the matching point set in the corresponding camera coordinate system;
and performing three-dimensional reconstruction on the at least two images to be reconstructed based on the rotation matrix and the translation matrix.
In an embodiment of the present disclosure, before determining the rotation matrix between the at least two images to be reconstructed, the method further includes:
calculating the infinity homography in the two camera coordinate systems based on a homography solving algorithm.
In another embodiment of the present disclosure, calculating the infinity homography in the two camera coordinate systems based on the homography solving algorithm includes:
determining a homography matrix based on equation (1):
H = K'(R + T n^T / d) K^{-1}    equation (1)
wherein H denotes the homography matrix; K' and K respectively denote the intrinsic matrices of the cameras corresponding to the at least two images to be reconstructed; R denotes the rotation matrix; T denotes the translation matrix; n denotes the unit normal vector, in the first camera coordinate system, of the target plane whose image information is collected by the two cameras; n^T denotes the transpose of n; and d denotes the distance from the target plane to the origin of the first camera coordinate system;
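This relation can be verified numerically. Assuming the plane-induced homography H = K'(R + T n^T / d) K^{-1} (the sign convention for T is an assumption) and purely synthetic intrinsics and pose, the sketch below checks that H maps the pixel of a plane point in the first view onto its pixel in the second view:

```python
import numpy as np

# Hedged numerical check of the plane-induced homography; all values are synthetic.
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])   # assumed intrinsics
Kp = np.array([[480.0, 0.0, 310.0], [0.0, 480.0, 230.0], [0.0, 0.0, 1.0]])
a = 0.2
R = np.array([[np.cos(a), 0.0, np.sin(a)], [0.0, 1.0, 0.0], [-np.sin(a), 0.0, np.cos(a)]])
T = np.array([0.3, -0.1, 0.05])
n, d = np.array([0.0, 0.0, 1.0]), 4.0       # target plane n . x = d in the camera-1 frame

H = Kp @ (R + np.outer(T, n) / d) @ np.linalg.inv(K)   # H = K'(R + T n^T / d) K^{-1}

# A point lying on the plane, projected into both views:
X1 = np.array([0.7, -0.4, 4.0])             # satisfies n . X1 = d
X2 = R @ X1 + T
u1 = K @ X1; u1 /= u1[2]                    # homogeneous pixel in view 1
u2 = Kp @ X2; u2 /= u2[2]                   # homogeneous pixel in view 2
u2_from_H = H @ u1; u2_from_H /= u2_from_H[2]
err = np.abs(u2_from_H - u2).max()
```

The check only holds for points on the plane n^T x = d; points off the plane are exactly what breaks the single-homography model.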
determining the infinity homography in the image coordinate system based on equation (2) and the homography matrix:
H∞1 = lim_{d→∞} H = K' R K^{-1}    equation (2)
wherein H denotes the homography matrix, K' and K respectively denote the intrinsic matrices of the cameras corresponding to the at least two images to be reconstructed, R denotes the rotation matrix, and H∞1 denotes the infinity homography in the image coordinate system;
determining the infinity homography H∞2 in the two camera coordinate systems based on the infinity homography H∞1 in the image coordinate system.
In another embodiment of the present disclosure, the determining a rotation matrix between the at least two images to be reconstructed includes:
determining the rotation matrix based on equation (3):
p2 = H∞2 p1 = R p1    equation (3)
wherein p2 and p1 respectively denote the coordinates of the matching points in the matching point set in the corresponding camera coordinate systems, H∞2 denotes the infinity homography in the two camera coordinate systems, and R denotes the rotation matrix.
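The relation p2 = R p1 can be solved for R in the least-squares sense with the orthogonal Procrustes (Kabsch) method; the sketch below is illustrative (the function name and synthetic values are not from the patent) and recovers a known rotation from unit bearing vectors of matched distant-view points:

```python
import numpy as np

def rotation_from_bearings(P1, P2):
    """Least-squares rotation R with P2 ~ R @ P1 (orthogonal Procrustes / Kabsch).
    P1, P2: (n, 3) unit bearing vectors of matched distant-view points."""
    U, _, Vt = np.linalg.svd(P2.T @ P1)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # guard against reflections
    return U @ D @ Vt

# Synthetic check with a known rotation (illustrative values).
rng = np.random.default_rng(2)
a = 0.4
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a), np.cos(a), 0.0],
                   [0.0, 0.0, 1.0]])
P1 = rng.normal(size=(30, 3))
P1 /= np.linalg.norm(P1, axis=1, keepdims=True)   # unit bearings in camera coordinates
P2 = P1 @ R_true.T                                # p2 = R p1 for points at infinity
R_est = rotation_from_bearings(P1, P2)
err = np.abs(R_est - R_true).max()
```

For points effectively at infinity the translation is negligible, which is why a pure rotation fits the bearings exactly.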
In another embodiment of the present disclosure, the determining, through an epipolar constraint algorithm, a translation matrix between the at least two images to be reconstructed based on the rotation matrix and the coordinates of at least one pair of matching points in the matching point set in the corresponding camera coordinate system includes:
determining the essential matrix based on equation (4):
E = T^ R    equation (4)
wherein R denotes the rotation matrix, T denotes the translation matrix, and T^ denotes the skew-symmetric (antisymmetric) matrix of the translation matrix T;
determining, by equation (5), a constraint relation between the translation matrix and the coordinates of at least one pair of matching points in the matching point set in the corresponding camera coordinate system, based on equation (4) and the epipolar constraint algorithm:
determining the translation matrix based on the constraint relationship.
In another embodiment of the present disclosure, determining the constraint relation between the translation matrix and the coordinates of at least one pair of matching points in the matching point set in the corresponding camera coordinate system based on equation (4) and the epipolar constraint algorithm includes:
obtaining equation (5) according to the epipolar constraint algorithm:
p2^T E p1 = 0    equation (5)
wherein E denotes the essential matrix, p2 and p1 respectively denote the coordinates of the matching points in the matching point set in the corresponding camera coordinate systems, and p2^T denotes the transpose of p2;
obtaining the constraint relation according to equations (4) and (5):
p2^T T^ R p1 = 0
wherein p2 and p1 respectively denote the coordinates of the matching points in the matching point set in the corresponding camera coordinate systems, p2^T denotes the transpose of p2, R denotes the rotation matrix, T denotes the translation matrix, and T^ denotes the skew-symmetric matrix of the translation matrix T.
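Because the constraint p2^T T^ R p1 = 0 is linear in T, T can be recovered up to scale (hence 2 degrees of freedom) as the null vector of a stacked linear system: p2^T (T × (R p1)) equals the scalar triple product T · ((R p1) × p2), so every match contributes the row (R p1) × p2. A hedged numpy sketch on synthetic data (not the patent's solver):

```python
import numpy as np

def translation_from_epipolar(R, P1, P2):
    """Solve p2^T [T]x R p1 = 0 for T up to scale.
    Each match yields one linear row (R p1) x p2; T is the null vector of the stack."""
    A = np.cross(P1 @ R.T, P2)          # (n, 3); row i = (R p1_i) x p2_i
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1]                       # unit-norm T: only 2 degrees of freedom

# Synthetic check (illustrative values): recover the direction of a known T.
rng = np.random.default_rng(3)
a = 0.25
R = np.array([[np.cos(a), 0.0, np.sin(a)], [0.0, 1.0, 0.0], [-np.sin(a), 0.0, np.cos(a)]])
T_true = np.array([0.6, -0.2, 0.1])
X1 = rng.uniform(-1.0, 1.0, (25, 3)) + np.array([0.0, 0.0, 6.0])
X2 = X1 @ R.T + T_true
P1 = X1 / X1[:, 2:3]                    # normalized camera coordinates, view 1
P2 = X2 / X2[:, 2:3]                    # normalized camera coordinates, view 2
T_est = translation_from_epipolar(R, P1, P2)
T_dir = T_true / np.linalg.norm(T_true)
err = min(np.abs(T_est - T_dir).max(), np.abs(T_est + T_dir).max())  # sign ambiguity
```

The sign and scale of T remain undetermined by the epipolar constraint alone; they are fixed later, e.g. by a cheirality check during triangulation.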
In yet another embodiment of the present disclosure, the translation matrix T has 2 degrees of freedom, i.e. it is determined only up to scale.
According to another aspect of the embodiments of the present disclosure, there is provided an apparatus for three-dimensional reconstruction of an image, which is applied to a three-dimensional reconstruction scene of an image in which at least two images to be reconstructed satisfy a long-range image condition, the apparatus including:
the matching point determining module is used for performing feature extraction and feature matching on the at least two images to be reconstructed to obtain a matching point set of the at least two images to be reconstructed;
the rotation matrix determining module is used for determining a rotation matrix between the at least two images to be reconstructed according to the coordinates of at least one pair of matching points in the matching point set in the corresponding camera coordinate systems and the infinity homography matrices in the two camera coordinate systems;
a translation matrix determination module, configured to determine, based on the rotation matrix and the coordinates of at least one pair of matching points in the matching point set in the corresponding camera coordinate system, a translation matrix between the at least two images to be reconstructed through an epipolar constraint algorithm;
and the three-dimensional reconstruction module is used for performing three-dimensional reconstruction on the at least two images to be reconstructed based on the rotation matrix and the translation matrix.
In yet another embodiment of the present disclosure, the apparatus further comprises:
and the infinite homography matrix determining module is used for calculating infinite homography matrixes in the two camera coordinate systems based on a homography matrix solving algorithm.
In yet another embodiment of the present disclosure, the infinity homography determination module includes:
a first determination submodule for determining a homography matrix based on equation (1):
H = K'(R + T n^T / d) K^{-1}    equation (1)
wherein H denotes the homography matrix; K' and K respectively denote the intrinsic matrices of the cameras corresponding to the at least two images to be reconstructed; R denotes the rotation matrix; T denotes the translation matrix; n denotes the unit normal vector, in the first camera coordinate system, of the target plane whose image information is collected by the two cameras; n^T denotes the transpose of n; and d denotes the distance from the target plane to the origin of the first camera coordinate system;
a second determination submodule for determining the infinity homography in the image coordinate system based on equation (2) and the homography matrix:
H∞1 = lim_{d→∞} H = K' R K^{-1}    equation (2)
wherein H denotes the homography matrix, K' and K respectively denote the intrinsic matrices of the cameras corresponding to the at least two images to be reconstructed, R denotes the rotation matrix, and H∞1 denotes the infinity homography in the image coordinate system;
a third determination submodule for determining the infinity homography H∞2 in the two camera coordinate systems based on the infinity homography H∞1 in the image coordinate system.
In yet another embodiment of the present disclosure, the rotation matrix determining module is configured to determine the rotation matrix based on equation (3):
p2 = H∞2 p1 = R p1    equation (3)
wherein p2 and p1 respectively denote the coordinates of the matching points in the matching point set in the corresponding camera coordinate systems, H∞2 denotes the infinity homography in the two camera coordinate systems, and R denotes the rotation matrix.
In yet another embodiment of the present disclosure, the translation matrix determination module includes:
a fourth determination submodule for determining the essential matrix based on equation (4):
E = T^ R    equation (4)
wherein R denotes the rotation matrix, T denotes the translation matrix, and T^ denotes the skew-symmetric (antisymmetric) matrix of the translation matrix T;
a fifth determination submodule, configured to determine, based on equation (4) and the epipolar constraint algorithm, a constraint relation between the translation matrix and the coordinates of at least one pair of matching points in the matching point set in the corresponding camera coordinate system;
a sixth determining submodule, configured to determine the translation matrix based on the constraint relationship.
In an embodiment of the present disclosure, the fifth determination submodule is specifically configured to determine, based on equation (4) and the epipolar constraint algorithm, the constraint relation between the translation matrix and the coordinates of at least one pair of matching points in the matching point set in the corresponding camera coordinate system, including:
obtaining equation (5) according to the epipolar constraint algorithm:
p2^T E p1 = 0    equation (5)
wherein E denotes the essential matrix, p2 and p1 respectively denote the coordinates of the matching points in the matching point set in the corresponding camera coordinate systems, and p2^T denotes the transpose of p2;
obtaining the constraint relation according to equations (4) and (5):
p2^T T^ R p1 = 0
wherein p2 and p1 respectively denote the coordinates of the matching points in the matching point set in the corresponding camera coordinate systems, p2^T denotes the transpose of p2, R denotes the rotation matrix, T denotes the translation matrix, and T^ denotes the skew-symmetric matrix of the translation matrix T.
In an embodiment of the present disclosure, the degree of freedom of the translation matrix T is 2.
According to another aspect of the embodiments of the present disclosure, an electronic device is provided, which is applied to an image three-dimensional reconstruction scene in which at least two images to be reconstructed satisfy a long-range image condition, and the electronic device includes:
a memory for storing a computer program;
and a processor for executing the computer program stored in the memory, wherein when the computer program is executed, the three-dimensional image reconstruction method is realized.
According to still another aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium, which is applied to an image three-dimensional reconstruction scene in which at least two images to be reconstructed satisfy a long-range image condition, and on which a computer program is stored, and when the computer program is executed by a processor, the computer program implements the above image three-dimensional reconstruction method.
According to still another aspect of the embodiments of the present disclosure, there is provided a computer program product, which is applied to an image three-dimensional reconstruction scene in which at least two images to be reconstructed satisfy a long-range image condition, and includes a computer program/instruction, and the computer program/instruction when executed by a processor implements the above image three-dimensional reconstruction method.
Based on the image three-dimensional reconstruction method and apparatus provided by the embodiments of the disclosure, when at least two images to be reconstructed satisfy the long-range view image condition, a matching point set can be obtained from the at least two images to be reconstructed by feature extraction and matching; a rotation matrix between the at least two images to be reconstructed can be determined according to the coordinates of at least one pair of matching points in the matching point set in the corresponding camera coordinate systems and the infinity homography in the two camera coordinate systems, and a translation matrix between the at least two images to be reconstructed can then be determined from the rotation matrix through an epipolar constraint algorithm; based on the determined rotation matrix and translation matrix, three-dimensional reconstruction can be performed on the at least two images to be reconstructed, which solves the problems that, in an image three-dimensional reconstruction scene in which at least two images to be reconstructed satisfy the long-range view image condition, the translation matrix T cannot be calculated and the final pose cannot be obtained.
The technical solution of the present disclosure is further described in detail by the accompanying drawings and examples.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description, serve to explain the principles of the disclosure.
The present disclosure may be more clearly understood from the following detailed description taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a flow chart of one embodiment of a method of three-dimensional reconstruction of images of the present disclosure;
FIG. 2A is a flow chart of yet another embodiment of a method of three-dimensional reconstruction of images of the present disclosure;
fig. 2B is a schematic diagram of two images to be reconstructed of the image three-dimensional reconstruction method of the present disclosure;
FIG. 2C is a graph illustrating the registration effect of the two images shown in FIG. 2B obtained by the prior art;
FIG. 2D is a graph of the registration effect of the two images shown in FIG. 2B obtained by the image three-dimensional reconstruction method of the present disclosure;
FIG. 2E is a diagram illustrating the effect of matching the features of the two images shown in FIG. 2B by using the image three-dimensional reconstruction method of the present disclosure;
FIG. 3 is a flowchart of a method for determining a translation matrix in the image three-dimensional reconstruction method according to the present disclosure;
FIG. 4 is a schematic structural diagram of an embodiment of an apparatus for three-dimensional reconstruction of images according to the present disclosure;
FIG. 5 is a schematic structural diagram of a three-dimensional image reconstruction apparatus according to another embodiment of the present disclosure;
fig. 6 is a block diagram of an electronic device according to an exemplary embodiment of the present disclosure.
Detailed Description
Various exemplary embodiments of the present disclosure will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of parts and steps, numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present disclosure unless specifically stated otherwise.
Meanwhile, it should be understood that the sizes of the respective portions shown in the drawings are not drawn in an actual proportional relationship for the convenience of description.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
Embodiments of the disclosure may be implemented in electronic devices such as computer systems/servers, which are operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well-known computing systems, environments, and/or configurations that may be suitable for use with electronic devices such as computer systems/servers include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, hand-held or laptop devices, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputer systems, mainframe computer systems, distributed cloud computing environments that include any of the above systems, and the like.
The electronic device, such as computer system/server, may be described in the general context of computer system-executable instructions, such as program modules, being executed by a computer system. Generally, program modules may include routines, programs, objects, components, logic, data structures, etc. that perform particular tasks or implement particular abstract data types. The computer system/server may be practiced in distributed cloud computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules may be located in both local and remote computer system storage media including memory storage devices.
Summary of the disclosure
The technical solution provided by the embodiments of the disclosure is applied to an image three-dimensional reconstruction scene in which at least two images to be reconstructed satisfy the long-range view image condition (the pose between two images can normally be determined by solving the fundamental matrix and the essential matrix; if the pose cannot be successfully calculated in this way, it can be determined that the images to be reconstructed satisfy the long-range view image condition). In the prior art, in such a scene there are, in the at least two images to be reconstructed, a large number of matched distant-view feature points and a small number of matched close-view feature points (for example, a panorama captured at an indoor balcony). Since the number of distant-view feature points is far greater than the number of close-view feature points, the solutions of the fundamental matrix F and the essential matrix E easily degenerate and fail; the distant-view feature points approximately satisfy the infinity homography H∞, but the close-view feature points do not, so based on H∞ only the rotation matrix R can be determined, the translation matrix cannot be calculated, the final pose cannot be obtained, and the three-dimensional reconstruction effect is poor. The embodiments of the disclosure first calculate the rotation matrix R by solving the near-infinity homography H∞ of the images to be reconstructed, and then calculate the translation matrix T from the rotation matrix R.
Exemplary embodiments
FIG. 1 is a flow chart of one embodiment of a method of three-dimensional reconstruction of images of the present disclosure; the image three-dimensional reconstruction method can be applied to electronic devices (such as computer systems and servers), as shown in fig. 1, the image three-dimensional reconstruction method includes the following steps:
in step 101, feature extraction and feature matching are performed on at least two images to be reconstructed to obtain a matching point set of the at least two images to be reconstructed.
In an embodiment, for input images to be reconstructed, feature extraction and matching-point determination may be performed with an existing feature extraction algorithm, such as the Scale Invariant Feature Transform (SIFT) algorithm. For example, an image A and an image B are input, and feature extraction and matching yield the matching point sets {p1} and {p2}, where {p1} and {p2} are sets of coordinates in the corresponding camera coordinate systems.
In an embodiment, the coordinates of the feature points extracted from an image are coordinates in the image coordinate system, i.e. two-dimensional coordinates; these two-dimensional coordinates can be converted into coordinates in the camera coordinate system according to the spherical-coordinate computation of the image.
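The patent does not spell out the spherical-coordinate computation; assuming equirectangular panoramas, one common convention (purely an assumption here, including the axis layout) maps a pixel to a unit bearing vector in the camera coordinate system:

```python
import numpy as np

def pixel_to_bearing(u, v, width, height):
    """Map an equirectangular panorama pixel (u, v) to a unit direction in the
    camera coordinate system (assumed convention: longitude from u, latitude from v)."""
    lon = (u / width) * 2.0 * np.pi - np.pi       # longitude in [-pi, pi)
    lat = np.pi / 2.0 - (v / height) * np.pi      # latitude in [pi/2, -pi/2]
    return np.array([np.cos(lat) * np.sin(lon),
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)])

d = pixel_to_bearing(2048, 1024, 4096, 2048)      # image center maps to the forward axis
```

The resulting unit vectors can serve directly as camera-coordinate matching points.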
In step 102, a rotation matrix between at least two images to be reconstructed is determined according to coordinates of at least one pair of matching points in the matching point set in the corresponding camera coordinate system and an infinite homography matrix in the two camera coordinate systems.
In one embodiment, the matching point pairs located in the distant view should satisfy the relation of equation (3):
p2 = H∞2 p1 = R p1    equation (3)
wherein p2 and p1 respectively denote the coordinates of the matching points in the matching point set in the corresponding camera coordinate systems, H∞2 denotes the infinity homography in the two camera coordinate systems, and R denotes the rotation matrix.
The rotation matrix R can be solved from the matching points and equation (3) within a Random Sample Consensus (RANSAC) framework; in a specific implementation, R can be solved by direct linear transformation or by the rotation estimation step of the Iterative Closest Point (ICP) algorithm.
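A minimal RANSAC loop around the relation p2 = R p1 could look as follows; the sample size, threshold, iteration count and the Procrustes rotation fit are illustrative choices, not taken from the patent:

```python
import numpy as np

def rotation_procrustes(P1, P2):
    """Least-squares rotation with P2 ~ R @ P1 (orthogonal Procrustes)."""
    U, _, Vt = np.linalg.svd(P2.T @ P1)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    return U @ D @ Vt

def ransac_rotation(P1, P2, iters=200, thresh=0.01, rng=None):
    """Sample minimal sets, fit R, keep the model with most inliers, refit on them."""
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(P1), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(P1), size=3, replace=False)
        R = rotation_procrustes(P1[idx], P2[idx])
        inliers = np.linalg.norm(P2 - P1 @ R.T, axis=1) < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return rotation_procrustes(P1[best_inliers], P2[best_inliers]), best_inliers

# Synthetic data: 40 distant-view inliers plus 10 gross outliers.
rng = np.random.default_rng(4)
a = 0.3
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a), np.cos(a), 0.0],
                   [0.0, 0.0, 1.0]])
P1 = rng.normal(size=(50, 3)); P1 /= np.linalg.norm(P1, axis=1, keepdims=True)
P2 = P1 @ R_true.T
P2[40:] = rng.normal(size=(10, 3))
P2[40:] /= np.linalg.norm(P2[40:], axis=1, keepdims=True)
R_est, inliers = ransac_rotation(P1, P2, rng=5)
err = np.abs(R_est - R_true).max()
```

The outliers here stand in for close-view matches that do not satisfy the near-infinity homography.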
In one embodiment, the specific way to solve the infinity homography matrix in the two camera coordinate systems can be seen in the embodiment shown in fig. 2, which is not detailed herein.
In step 103, a translation matrix between at least two images to be reconstructed is determined by a epipolar constraint algorithm based on the rotation matrix and the coordinates of at least one pair of matching points in the set of matching points in the corresponding camera coordinate system.
In an embodiment, the rotation matrix R calculated in step 102 may be used as prior information to solve the essential matrix again, and then a constraint relation of the translation matrix T is determined according to epipolar constraints, so as to determine the translation matrix T.
In one embodiment, the specific manner of calculating the translation matrix T can be seen in the embodiment shown in fig. 3, which is not described in detail here.
In step 104, three-dimensional reconstruction is performed on at least two images to be reconstructed based on the rotation matrix and the translation matrix.
In an embodiment, based on the rotation matrix and the translation matrix, the method for performing three-dimensional reconstruction on at least two images to be reconstructed may refer to the prior art, which is not described herein again.
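As one prior-art option for this step, linear (DLT) triangulation recovers a three-dimensional point from a match once the pose [R|T] is known, taking camera 1 as [I|0] and camera 2 as [R|T]; a hedged numpy sketch with synthetic values:

```python
import numpy as np

def triangulate(R, T, p1, p2):
    """Linear (DLT) triangulation of one match in normalized camera coordinates,
    with camera 1 at [I|0] and camera 2 at [R|T]."""
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([R, T.reshape(3, 1)])
    A = np.vstack([p1[0] * P1[2] - P1[0],
                   p1[1] * P1[2] - P1[1],
                   p2[0] * P2[2] - P2[0],
                   p2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                          # homogeneous solution of A X = 0
    return X[:3] / X[3]

# Synthetic check: triangulate a known 3-D point (illustrative values).
a = 0.2
R = np.array([[np.cos(a), 0.0, np.sin(a)], [0.0, 1.0, 0.0], [-np.sin(a), 0.0, np.cos(a)]])
T = np.array([0.4, 0.0, 0.1])
X_true = np.array([0.5, -0.3, 5.0])
x2 = R @ X_true + T
p1 = X_true / X_true[2]                 # normalized projection in view 1
p2 = x2 / x2[2]                         # normalized projection in view 2
X_est = triangulate(R, T, p1, p2)
err = np.abs(X_est - X_true).max()
```

The reconstruction is expressed in the arbitrary scale fixed by T; bundle adjustment would jointly refine these points and the pose.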
Through the above steps 101 to 104, when the at least two images to be reconstructed satisfy the long-range view image condition, a matching point set is obtained from the at least two images to be reconstructed by extraction and matching; a rotation matrix between the at least two images to be reconstructed can be determined according to the coordinates of at least one pair of matching points in the matching point set in the corresponding camera coordinate systems and the infinity homography in the two camera coordinate systems, and a translation matrix between the at least two images to be reconstructed can then be determined from the rotation matrix through an epipolar constraint algorithm; based on the determined rotation matrix and translation matrix, three-dimensional reconstruction can be performed on the at least two images to be reconstructed, which solves the problems that the translation matrix T cannot be calculated and the final pose cannot be obtained in such a scene, and ensures the effect of three-dimensional reconstruction.
To better illustrate the scheme of three-dimensional reconstruction of images of the present disclosure, another embodiment is described below.
Fig. 2A is a flowchart of another embodiment of the three-dimensional image reconstruction method of the present disclosure, fig. 2B is a schematic diagram of two images to be reconstructed of the three-dimensional image reconstruction method of the present disclosure, fig. 2C is a registration effect diagram of the two images shown in fig. 2B obtained by using the prior art, fig. 2D is a registration effect diagram of the two images shown in fig. 2B obtained by using the three-dimensional image reconstruction method of the present disclosure, and fig. 2E is an effect diagram of performing feature matching on the two images shown in fig. 2B by using the three-dimensional image reconstruction method of the present disclosure; the embodiment is exemplarily illustrated by taking a solution of an infinite homography matrix as an example, and as shown in fig. 2A, the method includes the following steps:
in step 201, feature extraction and feature matching are performed on at least two images to be reconstructed to obtain a matching point set of the at least two images to be reconstructed.
In an embodiment, when the two images to be reconstructed include mostly distant-view information and most of the matching feature points lie in that distant-view information, as in the two images shown in fig. 2B, it is difficult to perform panoramic image stitching, and the stitching usually suffers from misalignment; fig. 2C shows the registration effect diagram of the two images shown in fig. 2B obtained by using the prior art.
In step 202, an infinity homography in the two camera coordinate systems is calculated based on a homography solving algorithm.
In one embodiment, the homography matrix may be calculated according to a calculation formula of a classical pinhole camera model for the homography matrix, that is, equation (1):
H = K'(R + Tn^τ/d)K^(-1)    Equation (1)
wherein H represents the homography matrix, K' and K respectively represent the internal reference matrices of the image pickup devices corresponding to the at least two images to be reconstructed, R represents the rotation matrix, T represents the translation matrix, n represents the unit normal vector of the target plane in the first camera coordinate system when the two image pickup devices collect image information of the target plane, n^τ represents the transpose of n, and d represents the distance from the target plane to the coordinate origin of the first camera coordinate system.
Then, an infinity homography in the image coordinate system is determined based on equation (2) and the homography:
H_∞1 = lim(d→∞) K'(R + Tn^τ/d)K^(-1) = K'RK^(-1)    Equation (2)
wherein H_∞1 represents the infinity homography matrix in the image coordinate system, K' and K respectively represent the internal reference matrices of the image pickup devices corresponding to the at least two images to be reconstructed, and R represents the rotation matrix.
Next, based on the infinity homography in the image coordinate system, infinity homographies in the two camera coordinate systems may be determined.
Specifically, since K' and K are respectively the internal reference matrices of the cameras corresponding to the two images, and describe the conversion relationship between the image coordinates and the camera coordinates of the feature points, when converting to the camera coordinate systems, K' and K in equation (2) both become identity matrices, so that the infinity homography matrix in the two camera coordinate systems can be derived according to equation (2): H_∞2 = R.
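The relations in equation (1), equation (2), and the reduction H_∞2 = R can be checked numerically. Below is a minimal NumPy sketch; all intrinsics, pose, and plane values are synthetic and chosen only for illustration. It builds the homography from a known R, T, n and d, forms the infinity homography in image coordinates, and confirms that removing the intrinsics leaves exactly the rotation:

```python
import numpy as np

def rotation_z(theta):
    """Rotation matrix about the z-axis by angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Synthetic intrinsics and relative pose (illustrative values only).
K  = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
Kp = np.array([[820.0, 0.0, 310.0], [0.0, 820.0, 250.0], [0.0, 0.0, 1.0]])  # K'
R  = rotation_z(0.1)
T  = np.array([[0.2], [0.05], [0.01]])   # translation, as a column vector
n  = np.array([[0.0], [0.0], [1.0]])     # unit normal of the target plane
d  = 5.0                                 # plane-to-origin distance, camera 1

# Equation (1): H = K'(R + T n^t / d) K^-1.
H = Kp @ (R + T @ n.T / d) @ np.linalg.inv(K)

# Equation (2): letting d tend to infinity removes the plane term and leaves
# the infinity homography in image coordinates, H_inf1 = K' R K^-1.
H_inf1 = Kp @ R @ np.linalg.inv(K)

# In camera coordinates the intrinsics become identity matrices, so the
# infinity homography in the two camera coordinate systems is H_inf2 = R.
H_inf2 = np.linalg.inv(Kp) @ H_inf1 @ K
assert np.allclose(H_inf2, R)
```

The sign convention here follows X2 = R·X1 + T with the plane n^τ·X1 = d in the first camera frame; the limit form makes explicit why distant (near-infinity) scene content is governed by the rotation alone.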
In step 203, a rotation matrix between at least two images to be reconstructed is determined according to the coordinates of at least one pair of matching points in the matching point set in the corresponding camera coordinate system and the infinity homography matrix in the two camera coordinate systems.
In step 204, a translation matrix between at least two images to be reconstructed is determined through an epipolar constraint algorithm based on the rotation matrix and the coordinates of at least one pair of matching points in the matching point set in the corresponding camera coordinate systems.
In step 205, at least two images to be reconstructed are reconstructed in three dimensions based on the rotation matrix and the translation matrix.
In one embodiment, the effect of feature matching the two images shown in fig. 2B according to the rotation matrix and the translation matrix calculated in steps 203 and 204 can be seen in fig. 2E, which shows a large number of distant-view feature matches and a small number of close-range matches; the registration effect can be seen in fig. 2D, and the translation and rotation in fig. 2D are better and more accurate than those in fig. 2C.
In an embodiment, step 203 to step 205 can refer to step 102 to step 104 in the embodiment shown in fig. 1, which is not described herein again.
Through the steps 201 to 205, the rotation matrix R can be calculated according to the solving method of the homography matrix at infinity in the two camera coordinate systems of the image to be reconstructed, and then the translation matrix T can be solved based on the constraint of the rotation matrix R, so that the final pose is obtained, and the effect of three-dimensional reconstruction is ensured.
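As an illustration of how the rotation matrix R can be recovered from the relation p2 = Rp1 between matched points expressed in the two camera coordinate systems, the sketch below uses a least-squares orthogonal Procrustes (Kabsch) construction via SVD. This particular solver, and all names and synthetic values in it, are illustrative choices and are not prescribed by the present disclosure:

```python
import numpy as np

def estimate_rotation(p1, p2):
    """Least-squares rotation R with p2 ~ R p1 (Kabsch/Procrustes via SVD).

    p1, p2: (N, 3) arrays of matched points in their camera coordinate systems.
    """
    B = p2.T @ p1                      # 3x3 correlation matrix, sum of p2_i p1_i^t
    U, _, Vt = np.linalg.svd(B)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])  # guard against reflections
    return U @ D @ Vt

# Synthetic check: generate points, rotate them, recover the rotation.
rng = np.random.default_rng(0)
angle = 0.3
c, s = np.cos(angle), np.sin(angle)
R_true = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
p1 = rng.normal(size=(20, 3))
p2 = p1 @ R_true.T                     # p2_i = R_true p1_i for every match
R_est = estimate_rotation(p1, p2)
assert np.allclose(R_est, R_true)
```

With exact correspondences a single pair of directions would not pin down R; the least-squares form also averages over noisy matches, which is why using several pairs from the matching point set is natural here.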
FIG. 3 is a flowchart of a method for determining a translation matrix in the image three-dimensional reconstruction method according to the present disclosure; the present embodiment takes calculation of a translation matrix as an example for illustration, and as shown in fig. 3, the method includes the following steps:
in step 301, an essential matrix is determined.
In an embodiment, the essential matrix may be determined based on equation (4):
E = T^R    Equation (4)
wherein R represents the rotation matrix, T represents the translation matrix, and T^ represents the antisymmetric operation on the translation matrix T.
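The antisymmetric operation T^ and equation (4) can be written out directly. The following sketch (with synthetic pose values chosen only for illustration) builds E = T^R and verifies that a true correspondence satisfies p2^τ E p1 = 0:

```python
import numpy as np

def skew(t):
    """Anti-symmetric matrix T^ such that (T^)v = t x v for any vector v."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

# Synthetic relative pose (illustrative values).
theta = 0.2
c, s = np.cos(theta), np.sin(theta)
R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
T = np.array([0.3, -0.1, 0.05])

E = skew(T) @ R                        # equation (4): E = T^ R

# A 3D point seen by both cameras, expressed in each camera coordinate system.
X1 = np.array([0.5, -0.2, 4.0])
X2 = R @ X1 + T
p1, p2 = X1 / X1[2], X2 / X2[2]        # normalized camera coordinates

# The epipolar constraint p2^t E p1 = 0 holds for a true match.
assert abs(p2 @ E @ p1) < 1e-12
```

The constraint vanishes because (RX1 + T)^τ·(T^)·(RX1) splits into two terms, each zero by the antisymmetry of T^, and the projective scaling of p1 and p2 does not affect a homogeneous equation.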
In step 302, based on the determined essential matrix and the epipolar constraint algorithm, a constraint relation between the translation matrix and the coordinates of at least one pair of matching points in the matching point set in the corresponding camera coordinate systems is determined.
In an embodiment, in a specific implementation, equation (5) may first be determined according to the epipolar constraint algorithm, and the constraint relation of equation (6) may then be obtained from equation (4) and equation (5).
Firstly, determining an equation (5) according to an epipolar constraint algorithm:
p2^τ E p1 = 0    Equation (5)
wherein E represents the essential matrix, p2 and p1 respectively represent the coordinates of the matching points in the matching point set in the corresponding camera coordinate systems, and p2^τ represents the transposed matrix of p2;
obtaining the constraint relation according to the formula (4) and the formula (5):
p2^τ T^R p1 = 0    Equation (6)
wherein p2 and p1 respectively represent the coordinates of the matching points in the matching point set in the corresponding camera coordinate systems, p2^τ represents the transposed matrix of p2, R represents the rotation matrix, T represents the translation matrix, and T^ represents the antisymmetric operation on the translation matrix T.
In step 303, based on the constraint relationship, a translation matrix is determined.
In one embodiment, the constraint relationship of equation (6) may be further rewritten. Letting y = Rp1, the constraint relationship of equation (6) may be rewritten into the form of equation (7):
p2^τ T^y = 0    Equation (7)
which is equivalent to (y × p2)^τ T = ((Rp1) × p2)^τ T = 0, a relation that is linear in the translation matrix T.
In an embodiment, since the rotation matrix R is the prior information, based on the constraint relationship of equation (7), the translation matrix may be solved through the coordinates of a certain number of matching points in the corresponding camera coordinate system.
In one embodiment, the translation matrix T has a non-fixed scale, so that T has 2 degrees of freedom; therefore, given the coordinates of two pairs of matching points, the translation matrix T can be solved by a Singular Value Decomposition (SVD) method. During the specific solving, the whole solution can be put into a RANSAC framework to obtain the final translation matrix T.
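A minimal sketch of the SVD solution described above, without the RANSAC wrapper and with synthetic data standing in for real matching points: each match contributes one linear constraint ((Rp1) × p2)^τ T = 0, the constraints are stacked into a matrix, and T is taken as the right singular vector associated with the smallest singular value, which recovers the translation up to scale and sign. The function and variable names are illustrative:

```python
import numpy as np

def solve_translation(R, p1, p2):
    """Solve T (up to scale and sign) from p2_i^t T^ R p1_i = 0 via SVD.

    Each constraint is linear in T: it equals ((R p1_i) x p2_i) . T = 0,
    so T spans the null space of the stacked coefficient matrix.
    """
    M = np.cross(p1 @ R.T, p2)         # row i = (R p1_i) x p2_i
    _, _, Vt = np.linalg.svd(M)
    return Vt[-1]                      # unit-norm null vector

# Synthetic check with a known pose (illustrative values only).
theta = 0.15
c, s = np.cos(theta), np.sin(theta)
R = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
T_true = np.array([0.4, -0.2, 0.1])

rng = np.random.default_rng(1)
X1 = rng.uniform(1.0, 5.0, size=(10, 3))          # 3D points in camera 1
X2 = X1 @ R.T + T_true                            # same points in camera 2
p1, p2 = X1 / X1[:, 2:3], X2 / X2[:, 2:3]         # normalized coordinates

T_est = solve_translation(R, p1, p2)
# Recovered up to scale and sign: compare the direction only.
cos_angle = abs(T_est @ T_true) / np.linalg.norm(T_true)
assert cos_angle > 0.999999
```

In practice the rows would come from the inlier set selected by RANSAC, and the sign ambiguity would be resolved by a cheirality (positive-depth) check, as is standard in relative-pose recovery.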
Through the steps 301 to 303, the translation matrix T can be solved based on the constraint of the rotation matrix R, the final pose is obtained, and the effect of three-dimensional reconstruction is ensured.
Corresponding to the embodiment of the image three-dimensional reconstruction method, the disclosure also provides a corresponding embodiment of the image three-dimensional reconstruction device.
Fig. 4 is a schematic structural diagram of an embodiment of an apparatus for three-dimensional image reconstruction according to the present disclosure, which is applied to an electronic device (e.g., a computer system, a server, a VR device) and to an image three-dimensional reconstruction scene in which at least two images to be reconstructed satisfy the long-range view image condition; as shown in fig. 4, the apparatus includes:
the matching point determining module 41 is configured to perform feature extraction and feature matching on at least two images to be reconstructed to obtain a matching point set of the at least two images to be reconstructed;
a rotation matrix determining module 42, configured to determine a rotation matrix between at least two images to be reconstructed according to coordinates of at least one pair of matching points in the matching point set in corresponding camera coordinate systems and an infinity homography in the two camera coordinate systems;
a translation matrix determining module 43, configured to determine, based on the rotation matrix and the coordinates of at least one pair of matching points in the matching point set in the corresponding camera coordinate systems, a translation matrix between the at least two images to be reconstructed through an epipolar constraint algorithm;
and the three-dimensional reconstruction module 44 is configured to perform three-dimensional reconstruction on at least two images to be reconstructed based on the rotation matrix and the translation matrix.
Fig. 5 is a schematic structural diagram of a further embodiment of the three-dimensional image reconstruction apparatus according to the present disclosure, as shown in fig. 5, on the basis of the embodiment shown in fig. 4, in an embodiment, the apparatus further includes:
and an infinity homography matrix determining module 45, configured to calculate infinity homography matrices in the two camera coordinate systems based on a homography matrix solving algorithm.
In one embodiment, the infinity homography determination module 45 includes:
a first determining submodule 451 for determining a homography matrix based on equation (1):
H = K'(R + Tn^τ/d)K^(-1)    Equation (1)
wherein H represents the homography matrix, K' and K respectively represent the internal reference matrices of the image pickup devices corresponding to the at least two images to be reconstructed, R represents the rotation matrix, T represents the translation matrix, n represents the unit normal vector of the target plane in the first camera coordinate system when the two image pickup devices collect image information of the target plane, n^τ represents the transpose of n, and d represents the distance from the target plane to the coordinate origin of the first camera coordinate system;
a second determining submodule 452 for determining an infinity homography in the image coordinate system based on equation (2) and the homography:
H_∞1 = K'RK^(-1)    Equation (2)
wherein H_∞1 represents the infinity homography matrix in the image coordinate system, K' and K respectively represent the internal reference matrices of the image pickup devices corresponding to the at least two images to be reconstructed, and R represents the rotation matrix;
a third determination submodule 453 for determining the infinity homography matrix H_∞2 in the two camera coordinate systems based on the infinity homography matrix H_∞1 in the image coordinate system.
In one embodiment, rotation matrix determination module 42 is configured to determine a rotation matrix based on equation (3):
p2 = H_∞2 p1 = Rp1    Equation (3)
wherein p2 and p1 respectively represent the coordinates of the matching points in the matching point set in the corresponding camera coordinate systems, H_∞2 denotes the infinity homography matrix in the two camera coordinate systems, and R denotes the rotation matrix.
In one embodiment, the translation matrix determination module 43 includes:
a fourth determination submodule 431 for determining the essential matrix based on equation (4):
E = T^R    Equation (4)
wherein R represents the rotation matrix, T represents the translation matrix, and T^ represents the antisymmetric operation on the translation matrix T;
a fifth determining submodule 432, configured to determine, based on equation (4) and the epipolar constraint algorithm, a constraint relation between the translation matrix and the coordinates of at least one pair of matching points in the matching point set in the corresponding camera coordinate systems; a sixth determining submodule 433 is configured to determine the translation matrix based on the constraint relation.
In an embodiment, the fifth determining sub-module 432 is specifically configured to determine, based on equation (4) and the epipolar constraint algorithm, the constraint relation between the translation matrix and the coordinates of at least one pair of matching points in the matching point set in the corresponding camera coordinate systems, including:
according to the epipolar constraint algorithm, equation (5) is obtained:
p2^τ E p1 = 0    Equation (5)
wherein E represents the essential matrix, p2 and p1 respectively represent the coordinates of the matching points in the matching point set in the corresponding camera coordinate systems, and p2^τ represents the transposed matrix of p2;
obtaining the constraint relation according to the formula (4) and the formula (5):
p2^τ T^R p1 = 0    Equation (6)
wherein p2 and p1 respectively represent the coordinates of the matching points in the matching point set in the corresponding camera coordinate systems, p2^τ represents the transposed matrix of p2, R represents the rotation matrix, T represents the translation matrix, and T^ represents the antisymmetric operation on the translation matrix T.
In one embodiment, the degree of freedom of the translation matrix T is 2.
The implementation process of the functions and actions of each unit in the above device is specifically described in the implementation process of the corresponding step in the above method, and is not described herein again.
For the device embodiment, since it basically corresponds to the method embodiment, reference may be made to the partial description of the method embodiment for relevant points. The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the scheme of the disclosure. One of ordinary skill in the art can understand and implement it without inventive effort.
In the following, an electronic device according to an embodiment of the present disclosure is described with reference to fig. 6, in which an apparatus implementing a method according to an embodiment of the present disclosure may be integrated. Fig. 6 is a block diagram of an electronic device according to an exemplary embodiment of the disclosure; as shown in fig. 6, the electronic device 6 includes one or more processors 61, one or more memories 62 serving as computer-readable storage media, and a computer program stored on the memories and executable on the processors. The above-described image three-dimensional reconstruction method can be implemented when the program in the memory 62 is executed.
In particular, in practical applications, the electronic device may further include an input device 63, an output device 64, and the like, which are interconnected via a bus system and/or other types of connection mechanisms (not shown). Those skilled in the art will appreciate that the configuration of the electronic device shown in fig. 6 is not intended to be limiting; the electronic device may include more or fewer components than shown, combine certain components, or use a different arrangement of components. Wherein:
the processor 61 may be a Central Processing Unit (CPU) or other form of processing unit having data processing and/or instruction execution capabilities that performs various functions and processes data by running or executing software programs and/or modules stored in the memory 62 and invoking data stored in the memory 62 to thereby monitor the electronic device as a whole.
Memory 62 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. Volatile memory can include, for example, Random Access Memory (RAM) and/or cache memory. The non-volatile memory may include, for example, Read Only Memory (ROM), hard disk, flash memory, etc. One or more computer program instructions may be stored on a computer-readable storage medium and executed by the processor 61 to implement the image three-dimensional reconstruction methods of the various embodiments of the present disclosure above and/or other desired functions. Various contents such as input signals, signal components, noise components, etc. may also be stored in the computer-readable storage medium.
The input device 63 may be used to receive input numeric or character information and generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control.
The output device 64 may output various information including the determined distance information, direction information, and the like to the outside. The output devices 64 may include, for example, a display, speakers, a printer, and a communication network and its connected remote output devices, among others.
The electronic device may further include a power supply for supplying power to the various components, and may be logically connected to the processor 61 via a power management system, so as to implement functions of managing charging, discharging, and power consumption via the power management system. The power supply may also include any component of one or more dc or ac power sources, recharging systems, power failure detection circuitry, power converters or inverters, power status indicators, and the like.
Of course, for simplicity, only some of the components of the electronic device 6 relevant to the present disclosure are shown in fig. 6, and components such as buses, input/output interfaces, and the like are omitted. In addition, the electronic device 6 may include any other suitable components, depending on the particular application.
In addition to the above-described methods and apparatus, embodiments of the present disclosure may also be a computer program product comprising computer program instructions which, when executed by a processor, cause the processor to perform the steps in the image three-dimensional reconstruction method according to various embodiments of the present disclosure described in the "exemplary methods" section of this specification above.
The computer program product may write program code for carrying out operations for embodiments of the present disclosure in any combination of one or more programming languages, including object oriented programming languages such as Java and C++, as well as conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present disclosure may also be a computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, cause the processor to perform the steps in the image three-dimensional reconstruction method according to various embodiments of the present disclosure described in the "exemplary methods" section above in this specification.
The computer-readable storage medium may take any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The basic principles of the present disclosure have been described above in connection with specific embodiments, but it should be noted that advantages, effects, and the like, mentioned in the present disclosure are only examples and not limitations, and should not be considered essential to the various embodiments of the present disclosure. Furthermore, the foregoing disclosure of specific details is for the purpose of illustration and description and is not intended to be limiting, since the disclosure is not intended to be limited to the specific details so described.
In the present specification, the embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same or similar parts in the embodiments are referred to each other. For the system embodiment, since it basically corresponds to the method embodiment, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
Those of ordinary skill in the art will understand that: all or part of the steps for implementing the method embodiments may be implemented by hardware related to program instructions, and the program may be stored in a computer readable storage medium, and when executed, the program performs the steps including the method embodiments; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.
The method and apparatus of the present disclosure may be implemented in a number of ways. For example, the methods and apparatus of the present disclosure may be implemented by software, hardware, firmware, or any combination of software, hardware, and firmware. The above-described order for the steps of the method is for illustration only, and the steps of the method of the present disclosure are not limited to the order specifically described above unless specifically stated otherwise. Further, in some embodiments, the present disclosure may also be embodied as programs recorded in a recording medium, the programs including machine-readable instructions for implementing the methods according to the present disclosure. Thus, the present disclosure also covers a recording medium storing a program for executing the method according to the present disclosure.
The description of the present disclosure has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the disclosure in the form disclosed. Many modifications and variations will be apparent to practitioners skilled in this art. The embodiment was chosen and described in order to best explain the principles of the disclosure and the practical application, and to enable others of ordinary skill in the art to understand the disclosure for various embodiments with various modifications as are suited to the particular use contemplated.

Claims (10)

1. An image three-dimensional reconstruction method is applied to an image three-dimensional reconstruction scene with at least two images to be reconstructed satisfying a long-range image condition, and comprises the following steps:
performing feature extraction and feature matching on the at least two images to be reconstructed to obtain a matching point set of the at least two images to be reconstructed;
determining a rotation matrix between the at least two images to be reconstructed according to the coordinates of at least one pair of matching points in the matching point set in the corresponding camera coordinate system and an infinite homography matrix in the two camera coordinate systems;
determining a translation matrix between the at least two images to be reconstructed through an epipolar constraint algorithm based on the rotation matrix and coordinates of at least one pair of matching points in the matching point set in a corresponding camera coordinate system;
performing three-dimensional reconstruction on the at least two images to be reconstructed based on the rotation matrix and the translation matrix;
and the at least two images to be reconstructed satisfy the image three-dimensional reconstruction scene of the long-range image condition and are used for indicating the scene in which the pose between the two images to be reconstructed cannot be determined by determining the basic matrix and the essential matrix.
2. The method of claim 1, wherein prior to determining the rotation matrix between the at least two images to be reconstructed, further comprising:
and calculating the homography matrix at infinity in the two camera coordinate systems based on a homography matrix solving algorithm.
3. The method of claim 2, wherein the calculating the infinity homography matrix in the two camera coordinate systems based on the homography matrix solving algorithm comprises:
determining a homography matrix H based on equation (1):
H = K'(R + Tn^τ/d)K^(-1)    Equation (1)
wherein H represents the homography matrix, K' and K respectively represent internal reference matrices of the image pickup devices corresponding to the at least two images to be reconstructed, R represents a rotation matrix, T represents a translation matrix, n represents a unit normal vector of a target plane in a first camera coordinate system when the two image pickup devices collect image information of the target plane, n^τ represents the transpose of n, and d represents the distance from the target plane to the coordinate origin of the first camera coordinate system;
determining an infinity homography matrix H_∞1 in an image coordinate system based on equation (2) and the homography matrix:
H_∞1 = K'RK^(-1)    Equation (2)
wherein H_∞1 represents the infinity homography matrix in the image coordinate system, K' and K respectively represent the internal reference matrices of the image pickup devices corresponding to the at least two images to be reconstructed, and R represents a rotation matrix;
based on the infinity homography matrix H_∞1 in the image coordinate system, determining the infinity homography matrix H_∞2 in the two camera coordinate systems.
4. The method of claim 1, wherein determining a rotation matrix between the at least two images to be reconstructed comprises:
determining the rotation matrix R based on equation (3):
p2 = H_∞2 p1 = Rp1    Equation (3)
wherein p2 and p1 respectively represent the coordinates of the matching points in the matching point set in the corresponding camera coordinate systems, H_∞2 represents the infinity homography matrix in the two camera coordinate systems, and R represents the rotation matrix.
5. The method according to claim 4, wherein the determining a translation matrix between the at least two images to be reconstructed through an epipolar constraint algorithm based on the rotation matrix and coordinates of at least one pair of matching points in the set of matching points in a corresponding camera coordinate system comprises:
determining an essential matrix based on equation (4):
E = T^R    Equation (4)
wherein R represents a rotation matrix, T represents a translation matrix, and T^ represents the antisymmetric operation on the translation matrix T;
determining a constraint relation between the translation matrix and the coordinates of at least one pair of matching points in the matching point set in the corresponding camera coordinate system based on equation (4) and the epipolar constraint algorithm;
determining the translation matrix based on the constraint relationship.
6. The method of claim 5, wherein the determining the constraint relation between the translation matrix and the coordinates of at least one pair of matching points in the matching point set in the corresponding camera coordinate system based on equation (4) and the epipolar constraint algorithm comprises:
according to the epipolar constraint algorithm, equation (5) is obtained:
p2^τ Ep1 = 0    Equation (5)
wherein E represents the essential matrix, p2 and p1 respectively represent the coordinates of the matching points in the matching point set in the corresponding camera coordinate systems, and p2^τ represents the transposed matrix of p2;
obtaining the constraint relation according to the formula (4) and the formula (5):
p2^τ T^Rp1 = 0    Equation (6)
wherein p2 and p1 respectively represent the coordinates of the matching points in the matching point set in the corresponding camera coordinate systems, p2^τ represents the transposed matrix of p2, R represents the rotation matrix, T represents the translation matrix, and T^ represents the antisymmetric operation on the translation matrix T.
7. The method of claim 6, wherein the translation matrix T has a degree of freedom of 2.
8. An apparatus for three-dimensional reconstruction of images, applied to a three-dimensional reconstruction scene of images in which at least two images to be reconstructed satisfy a long-range image condition, the apparatus comprising:
the matching point determining module is used for performing feature extraction and feature matching on the at least two images to be reconstructed to obtain a matching point set of the at least two images to be reconstructed;
a rotation matrix determining module, configured to determine a rotation matrix between the at least two images to be reconstructed according to coordinates of at least one pair of matching points in the matching point set in corresponding camera coordinate systems and an infinity homography matrix in the two camera coordinate systems;
a translation matrix determination module, configured to determine, based on the rotation matrix and coordinates of at least one pair of matching points in the matching point set in a corresponding camera coordinate system, a translation matrix between the at least two images to be reconstructed through an epipolar constraint algorithm;
the three-dimensional reconstruction module is used for performing three-dimensional reconstruction on the at least two images to be reconstructed based on the rotation matrix and the translation matrix;
and the at least two images to be reconstructed satisfy the image three-dimensional reconstruction scene of the long-range image condition and are used for indicating the scene in which the pose between the two images to be reconstructed cannot be determined by determining the basic matrix and the essential matrix.
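The translation matrix determining module above — recovering T from the epipolar constraint once R is known — can be posed as a linear nullspace problem, since p₂ᵀT^Rp₁ = 0 rewrites as ((Rp₁)×p₂)ᵀt = 0 for each match. This is a hypothetical minimal sketch of that step, not the patent's actual implementation; all names and synthetic values are illustrative:

```python
import numpy as np

def translation_from_rotation(R, pts1, pts2):
    """Solve for the translation direction t (up to scale, i.e. 2 DoF)
    from p2^T [t]x R p1 = 0 with a known rotation R.
    Each match gives one linear equation ((R p1) x p2)^T t = 0;
    the nullspace of the stacked system yields t."""
    A = np.array([np.cross(R @ p1, p2) for p1, p2 in zip(pts1, pts2)])
    _, _, Vt = np.linalg.svd(A)
    t = Vt[-1]                       # right singular vector of the smallest singular value
    return t / np.linalg.norm(t)     # only the direction is recoverable

# synthetic check against a known pose
theta = np.deg2rad(5)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t_true = np.array([0.6, 0.8, 0.0])   # already unit norm
pts3d = np.random.default_rng(0).uniform(-1, 1, (8, 3)) + np.array([0.0, 0.0, 5.0])
pts1 = [X / X[2] for X in pts3d]
pts2 = [(R @ X + t_true) / (R @ X + t_true)[2] for X in pts3d]

t_est = translation_from_rotation(R, pts1, pts2)
print(abs(float(t_est @ t_true)))    # close to 1: same direction up to sign
```

The sign of t cannot be fixed by the constraint alone; resolving it requires a cheirality check (triangulated points must lie in front of both cameras).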
9. An electronic device, applied to an image three-dimensional reconstruction scene in which at least two images to be reconstructed satisfy a long-range image condition, comprising:
a memory for storing a computer program;
a processor, configured to execute the computer program stored in the memory, wherein the computer program, when executed, implements the method of any one of claims 1-7;
wherein the image three-dimensional reconstruction scene in which the at least two images to be reconstructed satisfy the long-range image condition indicates a scene in which the pose between the two images to be reconstructed cannot be determined by way of the fundamental matrix and the essential matrix.
10. A computer-readable storage medium having a computer program stored thereon, applied to an image three-dimensional reconstruction scene in which at least two images to be reconstructed satisfy a long-range image condition, wherein the computer program, when executed by a processor, implements the method of any one of claims 1 to 7;
wherein the image three-dimensional reconstruction scene in which the at least two images to be reconstructed satisfy the long-range image condition indicates a scene in which the pose between the two images to be reconstructed cannot be determined by way of the fundamental matrix and the essential matrix.
CN202111564281.XA 2021-12-20 2021-12-20 Image three-dimensional reconstruction method and device Active CN114241029B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111564281.XA CN114241029B (en) 2021-12-20 2021-12-20 Image three-dimensional reconstruction method and device

Publications (2)

Publication Number Publication Date
CN114241029A CN114241029A (en) 2022-03-25
CN114241029B true CN114241029B (en) 2023-04-07

Family

ID=80759414

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111564281.XA Active CN114241029B (en) 2021-12-20 2021-12-20 Image three-dimensional reconstruction method and device

Country Status (1)

Country Link
CN (1) CN114241029B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115439634B (en) * 2022-09-30 2024-02-23 如你所视(北京)科技有限公司 Interactive presentation method of point cloud data and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109598674A (en) * 2017-09-30 2019-04-09 Hangzhou Hikvision Digital Technology Co., Ltd. An image stitching method and device
RU2716896C1 (en) * 2019-04-01 2020-03-17 Акционерное общество Научно-производственный центр "Электронные вычислительно-информационные системы" Method for automatic adjustment of spaced-apart camera system for forming panoramic image

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6614429B1 (en) * 1999-05-05 2003-09-02 Microsoft Corporation System and method for determining structure and motion from two-dimensional images for multi-resolution object modeling
CN101998136B (en) * 2009-08-18 2013-01-16 华为技术有限公司 Homography matrix acquisition method as well as image pickup equipment calibrating method and device
CN107358645B (en) * 2017-06-08 2020-08-11 上海交通大学 Product three-dimensional model reconstruction method and system
CN108564617B (en) * 2018-03-22 2021-01-29 影石创新科技股份有限公司 Three-dimensional reconstruction method and device for multi-view camera, VR camera and panoramic camera
CN112444242B (en) * 2019-08-31 2023-11-10 北京地平线机器人技术研发有限公司 Pose optimization method and device

Similar Documents

Publication Publication Date Title
JP6902122B2 Dual-view-angle image calibration and image processing method, apparatus, storage medium and electronic device
US20200184726A1 (en) Implementing three-dimensional augmented reality in smart glasses based on two-dimensional data
US20240046557A1 (en) Method, device, and non-transitory computer-readable storage medium for reconstructing a three-dimensional model
CN112489114B (en) Image conversion method, image conversion device, computer readable storage medium and electronic equipment
CN112927353A (en) Three-dimensional scene reconstruction method based on two-dimensional target detection and model alignment, storage medium and terminal
CN111868738B (en) Cross-device monitoring computer vision system
CN114241029B (en) Image three-dimensional reconstruction method and device
CN113572978A (en) Panoramic video generation method and device
CN113592706B (en) Method and device for adjusting homography matrix parameters
GB2567245A (en) Methods and apparatuses for depth rectification processing
CN112950759B (en) Three-dimensional house model construction method and device based on house panoramic image
CN113689508A (en) Point cloud marking method and device, storage medium and electronic equipment
CN113989376B (en) Method and device for acquiring indoor depth information and readable storage medium
CN115619989A (en) Fusion effect graph generation method and device, electronic equipment and storage medium
CN117237532A (en) Panorama display method and device for points outside model, equipment and medium
CN112465716A (en) Image conversion method and device, computer readable storage medium and electronic equipment
CN116385612B (en) Global illumination representation method and device under indoor scene and storage medium
CN113709388B (en) Multi-source video splicing method and device
CN116228949B (en) Three-dimensional model processing method, device and storage medium
CN115471403A (en) Image processing method, device and storage medium
CN114022619B (en) Image pose optimization method and apparatus, device, storage medium, and program product
CN117078748A (en) Method, device and storage medium for positioning virtual object in mixed reality scene
CN117115332A (en) Method, device, equipment and medium for acquiring illumination information of three-dimensional reconstruction object
CN116612228A (en) Method, apparatus and storage medium for smoothing object edges
CN114332603A (en) Appearance processing method and device for dialogue module and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant