CN115063485A - Three-dimensional reconstruction method, device and computer-readable storage medium - Google Patents

Three-dimensional reconstruction method, device and computer-readable storage medium Download PDF

Info

Publication number
CN115063485A
Authority
CN
China
Prior art keywords
point
projection error
line
feature
dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210999929.4A
Other languages
Chinese (zh)
Other versions
CN115063485B (en)
Inventor
赵开勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Qiyu Innovation Technology Co ltd
Original Assignee
Shenzhen Qiyu Innovation Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Qiyu Innovation Technology Co ltd filed Critical Shenzhen Qiyu Innovation Technology Co ltd
Priority to CN202210999929.4A priority Critical patent/CN115063485B/en
Publication of CN115063485A publication Critical patent/CN115063485A/en
Application granted granted Critical
Publication of CN115063485B publication Critical patent/CN115063485B/en
Priority to PCT/CN2023/113315 priority patent/WO2024037562A1/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/75Determining position or orientation of objects or cameras using feature-based methods involving models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30244Camera pose

Abstract

The embodiment of the application relates to the technical field of three-dimensional reconstruction, and discloses a three-dimensional reconstruction method, a three-dimensional reconstruction device and a computer-readable storage medium. The three-dimensional reconstruction method comprises the following steps: extracting point features, line features and face features of each of a plurality of images, the plurality of images representing multiple views of a target object; calculating a point projection error of the point features, a line projection error of the line features and a surface projection error of the face features; and generating a three-dimensional model of the target object from the point projection error, the line projection error and the surface projection error. Redundant points, lines or planes are merged through the joint constraint of points, lines and planes, so that the data of the three-dimensional model is more concise and accurate, the operation efficiency is improved, and the fineness of the three-dimensional model is improved.

Description

Three-dimensional reconstruction method, device and computer-readable storage medium
Technical Field
The embodiment of the invention relates to the technical field of three-dimensional reconstruction, in particular to a three-dimensional reconstruction method, a three-dimensional reconstruction device and a computer-readable storage medium.
Background
At present, with the advancement of science and technology, three-dimensional reconstruction is used to solve problems in more and more fields. Three-dimensional reconstruction includes traditional modeling and live-action three-dimensional reconstruction. In traditional modeling, modeling personnel build a model of a target object or scene from drawings or their own experience and knowledge of the object or scene; the fineness of a model obtained in this way is difficult to guarantee, and such models are often difficult to apply where fine comparison with the real object or scene is required. Live-action three-dimensional reconstruction recovers a three-dimensional model from a single image or an image sequence; the model fineness is higher and the generation speed is fast, which overcomes some defects of traditional modeling.
Existing live-action three-dimensional reconstruction is usually built from a single type of image feature; the error is large, the model fineness is low, and the efficiency and effect are poor when the three-dimensional model subsequently needs texture processing and other operations.
Disclosure of Invention
In view of the foregoing problems, embodiments of the present invention provide a three-dimensional reconstruction method, which is used to solve the problems in the prior art.
According to an aspect of an embodiment of the present invention, there is provided a three-dimensional reconstruction method, including: extracting point features, line features and face features of each of a plurality of images, the plurality of images representing multiple views of a target object; calculating a point projection error of the point feature, a line projection error of the line feature, and a plane projection error of the plane feature; generating a three-dimensional model of the target object from the point projection errors, the line projection errors, and the surface projection errors.
The point projection error, the line projection error and the surface projection error are calculated by combining the point features, line features and face features of the target object, and the three-dimensional model of the target object is obtained by combining these three errors. By jointly applying the point, line and surface constraints, redundant points, lines or surfaces are merged, so that the data of the three-dimensional model is simpler and more accurate, the operation efficiency is improved, and the three-dimensional model is more precise and contains more image data; this provides more expansion modes in subsequent applications of the three-dimensional model, improves the precision of the three-dimensional model, and optimizes the subsequent processing efficiency.
In an optional manner, the calculating a point projection error of the point feature, a line projection error of the line feature, and a plane projection error of the plane feature further includes:
let the point projection error be
Figure 973741DEST_PATH_IMAGE001
The calculation formula is as follows:
Figure 483482DEST_PATH_IMAGE002
wherein j represents the number of the point features,
Figure 550664DEST_PATH_IMAGE003
for the purpose of the point feature,
Figure 83539DEST_PATH_IMAGE004
for the point features on the image
Figure 425527DEST_PATH_IMAGE003
The corresponding three-dimensional point is displayed on the screen,
Figure 981318DEST_PATH_IMAGE005
is an internal parameter of the image and is,
Figure 484981DEST_PATH_IMAGE006
and
Figure 36310DEST_PATH_IMAGE007
as an external parameter of the image
Figure 791777DEST_PATH_IMAGE008
(ii) a Let the line projection error be
Figure 509066DEST_PATH_IMAGE009
The calculation formula is as follows:
Figure 216253DEST_PATH_IMAGE010
wherein k represents the number of the line features,
Figure 956676DEST_PATH_IMAGE011
and
Figure 17298DEST_PATH_IMAGE012
respectively being characteristic of said line
Figure 323514DEST_PATH_IMAGE013
The start point and the end point of (c),
Figure 139286DEST_PATH_IMAGE014
is a characteristic of the line
Figure 694901DEST_PATH_IMAGE013
Corresponding three-dimensional line segment
Figure 526591DEST_PATH_IMAGE015
Projection on the image, the calculation formula of which is:
Figure 454358DEST_PATH_IMAGE016
wherein, in the step (A),
Figure 736303DEST_PATH_IMAGE017
to represent
Figure 483942DEST_PATH_IMAGE018
The co-factor matrix of (a) is,
Figure 181639DEST_PATH_IMAGE018
is the intrinsic parameter of the image, and the expression form is as follows:
Figure 698334DEST_PATH_IMAGE019
wherein, in the step (A),
Figure 26547DEST_PATH_IMAGE020
is the number of the image in question,
Figure 556754DEST_PATH_IMAGE021
and
Figure 243430DEST_PATH_IMAGE022
is the focal length of a pixel of the image,
Figure 378745DEST_PATH_IMAGE023
and
Figure 503958DEST_PATH_IMAGE024
the pixel point coordinates of the image are obtained; let the surface projection error be
Figure 131249DEST_PATH_IMAGE025
The calculation formula is as follows:
Figure 905170DEST_PATH_IMAGE026
wherein j represents the number of the point features, k represents the number of the line features, m represents the number of the face features,
Figure 193194DEST_PATH_IMAGE003
for the purpose of the point feature,
Figure 863210DEST_PATH_IMAGE004
is composed of
Figure 305692DEST_PATH_IMAGE027
The corresponding three-dimensional point is displayed on the screen,
Figure 650348DEST_PATH_IMAGE028
in order to be a feature of the face,
Figure 760256DEST_PATH_IMAGE013
in order to be a feature of the line,
Figure 601173DEST_PATH_IMAGE029
is composed of
Figure 235678DEST_PATH_IMAGE013
A corresponding three-dimensional line segment, wherein,
Figure 944877DEST_PATH_IMAGE030
is composed of
Figure 112554DEST_PATH_IMAGE031
The normal vector of (a) is calculated,
Figure 360258DEST_PATH_IMAGE032
and
Figure 246174DEST_PATH_IMAGE033
are respectively two-dimensional plane
Figure 900009DEST_PATH_IMAGE034
Corresponding three-dimensional plane
Figure 485974DEST_PATH_IMAGE035
And the distance of the origin of coordinates to the straight line.
The point projection error, the line projection error and the surface projection error of the image are calculated from the known parameters and formulas, so that accurate three-dimensional points, three-dimensional line segments and three-dimensional planes can be recovered from the point features, line features and surface features on the image according to these errors.
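As an illustration only (not the patent's implementation), the point and line projection errors defined above can be evaluated with a few lines of NumPy; the array shapes and function names below are assumptions made for this sketch.

import numpy as np

def point_projection_error(p_2d, P_3d, K, R, t):
    # p_2d: (N, 2) observed point features; P_3d: (N, 3) corresponding 3D points;
    # K: (3, 3) intrinsics; R: (3, 3) rotation; t: (3,) translation.
    cam = (R @ P_3d.T).T + t              # transform the 3D points into the camera frame
    proj = (K @ cam.T).T                  # apply the intrinsic matrix
    proj = proj[:, :2] / proj[:, 2:3]     # normalize homogeneous coordinates to pixels
    return np.sum(np.linalg.norm(p_2d - proj, axis=1) ** 2)

def line_projection_error(endpoints_2d, lines_2d):
    # endpoints_2d: (M, 2, 2) start/end points of each 2D line feature;
    # lines_2d: (M, 3) projected 3D segments as homogeneous image lines (a, b, c).
    err = 0.0
    for (s, e), l in zip(endpoints_2d, lines_2d):
        norm = np.hypot(l[0], l[1])
        err += (l @ np.append(s, 1.0) / norm) ** 2   # distance of the start point to the line
        err += (l @ np.append(e, 1.0) / norm) ** 2   # distance of the end point to the line
    return err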
In an alternative manner, the generating a three-dimensional model from the point projection error, the line projection error, and the plane projection error further includes:
obtaining a camera pose and a three-dimensional point cloud according to the point projection error, the line projection error and the surface projection error; generating a patch according to the camera pose and the three-dimensional point cloud; and generating the three-dimensional model according to the patch.
The surface patch is generated by combining the point characteristic, the line characteristic, the surface characteristic, the point projection error, the line projection error and the surface projection error, so that the surface patch is restrained by the point, the line and the surface, the edge of the surface patch is clearer, and the three-dimensional model generated according to the surface patch is more precise and accurate.
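A minimal sketch of this step, assuming the optimized camera poses and the three-dimensional point cloud are already available as arrays; Poisson surface reconstruction from Open3D is used here only as one possible way of turning the point cloud into patches and a mesh, and is not the method prescribed by the patent.

import numpy as np
import open3d as o3d

def mesh_from_point_cloud(points_xyz: np.ndarray) -> o3d.geometry.TriangleMesh:
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points_xyz)
    pcd.estimate_normals()                 # normals are required for Poisson meshing
    mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=8)
    return mesh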
In an optional manner, the calculating a point projection error of the point feature, a line projection error of the line feature, and a plane projection error of the plane feature further includes:
calculating a total projection error e from the point projection error, the line projection error and the surface projection error:

e = w_p · e_p + w_l · e_l + w_π · e_π

wherein w_p is the weight of the point projection error, w_l is the weight of the line projection error, w_π is the weight of the surface projection error, e_p denotes the point projection error, e_l denotes the line projection error, and e_π denotes the surface projection error; generating a three-dimensional point cloud and/or a patch under the constraint of the total projection error; and generating the three-dimensional model according to the three-dimensional point cloud and/or the patch.
Weights are introduced into the total projection error calculated from the point projection error, the line projection error and the surface projection error, so that the contributions of points, lines and surfaces can be adjusted separately and their error terms distinguished. When the three-dimensional point cloud and the patches are generated from the total projection error, the weights can be adjusted according to the actual situation, making the three-dimensional model generated from the three-dimensional point cloud and/or the patches more precise and accurate.
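For illustration, the weighted total projection error described above reduces to a simple weighted sum; the default weight values below are placeholders and are not taken from the patent.

def total_projection_error(e_p, e_l, e_pi, w_p=1.0, w_l=1.0, w_pi=1.0):
    # e = w_p*e_p + w_l*e_l + w_pi*e_pi, with per-scene tunable weights
    return w_p * e_p + w_l * e_l + w_pi * e_pi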
In an alternative manner, the generating a three-dimensional model from the point projection error, the line projection error, and the plane projection error further includes:
performing disparity fusion of the left and right views by combining a block dense reconstruction algorithm with weights set according to the point features, the line features or the plane features, wherein in the block dense reconstruction formula (reproduced in the original only as an embedded image) w is an adaptive weight set according to whether a pixel belongs to a point feature, a line feature or a face feature, λ is a custom-defined parameter, and w_f is the weight set according to the point feature, the line feature or the face feature.
By introducing the point features, line features and face features as constraints into the block dense reconstruction algorithm and setting the weight w_f according to them, the three-dimensional point cloud can be generated under the constraint of the point features, line features and surface features.
In an alternative manner, the generating a three-dimensional model from the point projection error, the line projection error, and the plane projection error further includes:
extracting semantic features of each image; generating a semantic texture patch according to the point projection error, the line projection error, the surface projection error and the semantic features; and generating the three-dimensional model according to the semantic texture surface patch.
Through extracting the semantic features of each image, generating a semantic texture surface patch by combining point projection errors, line projection errors and plane projection errors, and generating a three-dimensional model according to the semantic texture surface patch, different parts in the three-dimensional model can be colored, textured and the like more conveniently and rapidly, and the service efficiency of the three-dimensional model is improved.
In an optional manner, after the generating the three-dimensional model according to the semantic texture patch, the method further includes:
and performing multi-detail level optimization on the three-dimensional model.
When multi-detail-level optimization is performed on the three-dimensional model generated from the semantic texture patches, it can be combined with the constraints of the points, lines and surfaces in the semantic texture patches, so that the multi-detail-level optimization is more efficient.
According to another aspect of the embodiments of the present invention, there is provided a three-dimensional reconstruction apparatus including:
the extraction module is used for extracting the point feature, the line feature and the surface feature of each image in a plurality of images, and the images are shot from different angles;
a calculation module for calculating a point projection error of the point feature, a line projection error of the line feature, and a plane projection error of the plane feature;
and the generating module is used for generating a three-dimensional model according to the point projection error, the line projection error and the surface projection error.
According to another aspect of the embodiments of the present invention, there is provided a three-dimensional reconstruction apparatus including:
the system comprises a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface complete mutual communication through the communication bus;
the memory is used for storing at least one executable instruction, and the executable instruction causes the processor to execute the operation of the three-dimensional reconstruction method.
According to another aspect of the embodiments of the present invention, there is provided a computer-readable storage medium, in which at least one executable instruction is stored, and the executable instruction causes a three-dimensional reconstruction device to perform operations corresponding to the method.
According to the three-dimensional reconstruction method, the three-dimensional reconstruction device and the computer readable storage medium, the three-dimensional reconstruction is carried out through comprehensively extracting the characteristics of the points, the lines and the surfaces, so that the generated three-dimensional model is provided with the constraints of the points, the lines and the surfaces, redundant points on the same line segment or redundant line segments on the same plane are convenient to merge or remove, the generated three-dimensional model has fewer miscellaneous lines, and the fineness of the generated three-dimensional model can be improved.
The foregoing description is only an overview of the technical solutions of the embodiments of the present invention, and the embodiments of the present invention can be implemented according to the content of the description in order to make the technical means of the embodiments of the present invention more clearly understood, and the detailed description of the present invention is provided below in order to make the foregoing and other objects, features, and advantages of the embodiments of the present invention more clearly understandable.
Drawings
The drawings are only for purposes of illustrating embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
fig. 1 is a schematic flow chart of a three-dimensional reconstruction method according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a three-dimensional reconstruction apparatus provided in an embodiment of the present invention;
fig. 3 shows a schematic structural diagram of a three-dimensional reconstruction apparatus provided in an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the invention are shown in the drawings, it should be understood that the invention can be embodied in various forms and should not be limited to the embodiments set forth herein.
Aiming at the problem of low model fineness in existing three-dimensional reconstruction technology, the inventor noticed that the existing real-scene three-dimensional reconstruction used in fields such as building, cultural heritage digitization and mapping is often based on extracting only one kind of image feature (points, lines or planes), so the fineness of the generated three-dimensional model is limited. When the three-dimensional model is generated under the constraint of only one of the point features, line features or plane features, the single constraint cannot be combined with other constraints to merge or eliminate redundant points, lines or planes; the generated three-dimensional model therefore tends to contain many stray lines, redundant line segments in the same plane are difficult to remove or merge with the plane, and the model is relatively cluttered, which is not conducive to its further processing; for example, subsequent operations such as adding textures to the model occupy many resources and take a long time. Therefore, it is important to develop a three-dimensional reconstruction method capable of improving the fineness of the three-dimensional model.
In order to solve the above problems, the inventors of the present application have studied and designed a three-dimensional reconstruction method, which performs three-dimensional reconstruction by comprehensively extracting features of points, lines, and planes, so that the generated three-dimensional model is constrained by the points, the lines, and the planes, and is convenient for merging or removing redundant points on the same line segment or redundant line segments on the same plane, so that the generated three-dimensional model has fewer miscellaneous lines, and the fineness of the generated three-dimensional model can be improved.
Fig. 1 shows a flowchart of a three-dimensional reconstruction method provided by an embodiment of the present invention, and as shown in fig. 1, the method includes the following steps:
step 110: point features, line features and face features of each of a plurality of images are extracted, the plurality of images representing multiple views of a target object.
In this step, the target object refers to an object targeted by three-dimensional reconstruction, such as an article, a person, a scene, or the like, and a plurality of the above objects may be used together to form the target object, such as a person and an article in one scene are used together as the target object.
In this step, the image is a photograph including the target object, or may be a video frame extracted from a video including the target object, and the source and the form of the image may be various, and only the three-dimensional reconstruction method provided by the present application needs to be able to obtain a certain amount of initial two-dimensional information of the target object from the image, which is not particularly limited by the present application.
The method for extracting the point features, line features and plane features of each of the plurality of images may be the LBP (Local Binary Patterns) algorithm, the HOG (Histogram of Oriented Gradients) feature extraction algorithm, the SIFT (Scale-Invariant Feature Transform) operator, or the like. The aim is to accurately extract the point features, line features and plane features in each image; a suitable feature extraction mode only needs to be selected according to the actual application scene and actual needs, which is not particularly limited in the embodiment of the present application.
In order to obtain sufficient two-dimensional information of the target object to generate the three-dimensional model, multiple views of the target object, i.e., multiple different angles of the target object, should be included in the multiple images.
By extracting point features, line features and surface features in each image from a plurality of images containing a plurality of angles of the target object, enough target object information can be acquired as basic data, and the subsequent calculation for generating the three-dimensional model is facilitated.
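As a concrete, non-limiting example of this step, point features could be extracted with the SIFT operator mentioned above and line features with a Hough transform in OpenCV; the parameter values below are assumptions, and plane/face features would typically come from a separate segmentation or plane-fitting step that is not shown.

import cv2
import numpy as np

def extract_point_and_line_features(image_path: str):
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(gray, None)   # point features
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=30, maxLineGap=5)      # line features
    return keypoints, descriptors, lines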
Step 120: the point projection error of the point feature, the line projection error of the line feature, and the plane projection error of the plane feature are calculated.
In this step, point projection errors of the point features, line projection errors of the line features, and surface projection errors of the surface features are calculated from the point features, the line features, and the surface features extracted in step 110, respectively.
For example, when the plurality of images in step 110 are captured by a camera, image parameters required for calculating the point projection error, the line projection error and the surface projection error may be given by the camera, or may be directly obtained according to image attributes, and it is understood that the required parameters of the same image do not change due to the change of the obtaining means, so that different parameter obtaining modes may be adopted according to actual situations in order to obtain known parameters of the image necessary for calculating the projection error, which is not particularly limited in the embodiment of the present application.
The method for calculating the point projection error, the line projection error and the surface projection error can be carried out by a BA (bundle adjustment) algorithm, and aims to obtain the errors of the theoretical projection positions and the actual positions of the point characteristics, the line characteristics and the surface characteristics on the three-dimensional plane, so that the accurate positions of the three-dimensional point, the three-dimensional line segment and the three-dimensional plane of the target object can be obtained, and an accurate three-dimensional model can be conveniently generated according to the accurate positions.
By calculating the point projection error of the point characteristic, the line projection error of the line characteristic and the plane projection error of the plane characteristic, the error of the two-dimensional characteristic point in three-dimensional projection is known, so that the error correction is conveniently carried out when the two-dimensional characteristic point is converted into three-dimensional in the follow-up process, the three-dimensional reconstruction of the two-dimensional characteristic point in the image can be ensured to be correctly carried out, and the three-dimensional model of the target object is obtained.
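For context, once matching features and camera parameters are known, two-view triangulation is a standard way to obtain the three-dimensional points whose reprojection is compared against the image features; the sketch below assumes the 3x4 projection matrices of the two views are given and uses OpenCV for illustration only.

import cv2
import numpy as np

def triangulate(P1, P2, pts1, pts2):
    # pts1, pts2: (N, 2) matched pixel coordinates observed in the two views
    X_h = cv2.triangulatePoints(P1, P2,
                                pts1.T.astype(np.float64),
                                pts2.T.astype(np.float64))   # 4 x N homogeneous points
    return (X_h[:3] / X_h[3]).T                              # N x 3 Euclidean points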
Step 130: and generating a three-dimensional model of the target object according to the point projection error, the line projection error and the surface projection error.
In this step, a three-dimensional point corresponding to a point feature of the target object is calculated by the point projection error, a three-dimensional line segment corresponding to a line feature of the target object is calculated by the line projection error, and a three-dimensional plane corresponding to a surface feature of the target object is calculated by the surface projection error, thereby generating a three-dimensional model of the target object.
The three-dimensional model may be a mesh grid structure combined with attributes of the target object, for example, the three-dimensional model is a mesh grid structure combined with semantic generation, and may also be combined with other attributes according to actual needs, so as to facilitate further application and expansion of the three-dimensional model for different application scenarios, which is not particularly limited in the embodiments of the present application.
When the three-dimensional model is constrained by any two or more of the point feature, the line feature and the surface feature, the redundant useless points, lines or surfaces can be found by combining other features, so that the miscellaneous lines in the generated three-dimensional model are reduced, and the fineness is improved. For example, in the embodiment of the present application, a point projection error, a line projection error, and a plane projection error are obtained through a point feature, a line feature, and a plane feature, and then, an accurate three-dimensional point, a three-dimensional line segment, and a three-dimensional plane are calculated.
Wherein, the combination of the point features, line features and face features can be performed by a BA (bundle adjustment) algorithm.
By generating the three-dimensional model of the target object according to the point projection error, the line projection error and the surface projection error, the three-dimensional reconstruction method provided by the embodiment of the application can generate a more refined three-dimensional model by combining the characteristics of points, lines and surfaces at the same time.
As can be seen from the combination of the above steps 110, 120 and 130, according to the three-dimensional reconstruction method provided by the present application, the point features, line features and plane features of each of the plurality of images are extracted, the plurality of images covering multiple angles of the target object; the point projection error of the point features, the line projection error of the line features and the surface projection error of the surface features are calculated; and the three-dimensional model of the target object is generated according to the point projection error, the line projection error and the surface projection error. With the scheme of this embodiment, the point projection error, the line projection error and the plane projection error are calculated by combining the point features, line features and plane features of the target object, and the three-dimensional model of the target object is obtained by combining these errors. Redundant points, lines or planes in the three-dimensional model can be merged by jointly applying the constraints, so that the data of the three-dimensional model is simpler and more accurate, the operation efficiency is improved, and the three-dimensional model is more precise and contains more image data, which provides more expansion modes in subsequent applications of the three-dimensional model and improves its subsequent processing efficiency.
In one embodiment of the present invention, calculating a point projection error of a point feature, a line projection error of a line feature, and a plane projection error of a plane feature further comprises:
Step a01: let the point projection error be e_p; the calculation formula is as follows:

e_p = Σ_j ‖ p_j − K_i ( R_i · P_j + t_i ) ‖²

wherein the projected point is normalized to pixel coordinates before the difference is taken, j is the index of the point feature, p_j is the point feature, P_j is the three-dimensional point corresponding to the point feature p_j on the image, K_i is an internal parameter of the image, and R_i and t_i are the external parameters T_i of the image;

let the line projection error be e_l; the calculation formula is as follows:

e_l = Σ_k ( d(s_k, l'_k)² + d(e_k, l'_k)² )

wherein k is the index of the line feature, s_k and e_k are respectively the start point and the end point of the line feature l_k, d(·,·) denotes the point-to-line distance in the image, and l'_k is the projection on the image of the three-dimensional line segment L_k corresponding to the line feature l_k, obtained from the camera pose together with K*_i, the cofactor matrix of K_i; K_i is the internal parameter of the image, expressed in the form:

K_i = | f_x   0    c_x |
      |  0   f_y   c_y |
      |  0    0     1  |

wherein i is the number of the image, f_x and f_y are the pixel focal lengths of the image, and c_x and c_y are the pixel coordinates of the principal point of the image;

let the surface projection error be e_π; it measures, for each face feature, how far the corresponding three-dimensional points and three-dimensional line segments deviate from the corresponding three-dimensional plane, wherein j is the index of the point feature, k is the index of the line feature, m is the index of the face feature, p_j is the point feature, P_j is the three-dimensional point corresponding to p_j, π_m is the face feature, l_k is the line feature, L_k is the three-dimensional line segment corresponding to l_k and n_k is its normal vector, and n_m and d_m are respectively the normal vector of the three-dimensional plane Π_m corresponding to the two-dimensional plane π_m and the distance from the coordinate origin to that plane, so that the points and line segments belonging to Π_m are constrained by its plane equation n_m · X + d_m = 0.
For example, knowing the index j of a point feature in an image, the feature point p_j, the three-dimensional point P_j corresponding to p_j, the internal parameter K_i of the image expressed in the matrix form above (wherein i is the number of the image, f_x and f_y are the pixel focal lengths of image i, and c_x and c_y are the pixel coordinates of its principal point), and the external parameters T_i of the image, the point projection error e_p of the feature point can be calculated according to the formula above.
For example, knowing the index k of a line feature in the image, with s_k and e_k being respectively the coordinates of the start point and the end point of the line feature l_k, and l'_k being the projection on the image of the three-dimensional line segment L_k corresponding to l_k, obtained from the cofactor matrix K*_i of the internal parameter K_i of the image (wherein i is the number of the image, f_x and f_y are the pixel focal lengths of the image, and c_x and c_y are the pixel coordinates of the image), the line projection error e_l of the line feature can be calculated according to the formula above.
For example, knowing that the index of the point features in the image is j, the index of the line features is k and the index of the face features is m, with p_j being a point feature, P_j the three-dimensional point corresponding to p_j, π_m a face feature, l_k a line feature, L_k the three-dimensional line segment corresponding to l_k and n_k its normal vector, and n_m and d_m being respectively the normal vector of the three-dimensional plane Π_m corresponding to the two-dimensional plane π_m and the distance from the coordinate origin to that plane, the surface projection error of the face feature can be calculated according to the formula above.
The point projection error, the line projection error and the plane projection error of the image are calculated from the known parameters and formulas, so that accurate three-dimensional points, three-dimensional line segments and three-dimensional planes can be recovered from the point features, line features and plane features on the image according to these errors.
In an embodiment of the present invention, generating the three-dimensional model according to the point projection error, the line projection error, and the surface projection error further includes:
step b 01: obtaining a camera pose and a three-dimensional point cloud according to the point projection error, the line projection error and the surface projection error; generating a surface patch according to the camera pose and the three-dimensional point cloud; and generating a three-dimensional model according to the patch.
Wherein, the camera pose refers to the external parameters of the image. The external parameters T_i of image i can be expressed by the formula:

T_i = [ R_i | t_i ]

where the rotation is represented by a rotation vector, which can be transformed into the rotation matrix R_i through the Rodrigues formula.
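A short sketch of this conversion, assuming example values for the rotation vector and translation; cv2.Rodrigues implements the Rodrigues formula referred to above.

import cv2
import numpy as np

rvec = np.array([0.1, -0.2, 0.05])        # example rotation vector (assumed values)
R, _ = cv2.Rodrigues(rvec)                # 3x3 rotation matrix via the Rodrigues formula
tvec = np.array([0.5, 0.0, 1.2])          # example translation (assumed values)
T = np.hstack([R, tvec.reshape(3, 1)])    # 3x4 extrinsic matrix [R | t]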
The three-dimensional point cloud is generated by combining point features, line features and surface features with point projection errors, line projection errors and surface projection errors.
The patch is merged according to the point features, line features and surface features, combined with the point projection error, the line projection error and the surface projection error. For example, when a patch lies in a three-dimensional plane obtained from the surface projection error, the patch is merged with that three-dimensional plane; when one edge of a patch intersects a three-dimensional line segment obtained from the line projection error of the image, the patch takes the three-dimensional line segment as a new edge to form a new patch. That is, the generation of the patch is constrained by the point features, line features and surface features together with the point projection error, line projection error and surface projection error.
A patch is generated by combining point characteristics, line characteristics, surface characteristics, point projection errors, line projection errors and surface projection errors, so that the patch is constrained by the points, the lines and the surface, the edge of the patch is clearer, and a three-dimensional model generated according to the patch is finer.
In one embodiment of the present invention, calculating a point projection error of a point feature, a line projection error of a line feature, and a plane projection error of a plane feature further comprises:
Step c01: calculate a total projection error e from the point projection error, the line projection error and the surface projection error:

e = w_p · e_p + w_l · e_l + w_π · e_π

wherein w_p is the weight of the point projection error, w_l is the weight of the line projection error, w_π is the weight of the surface projection error, e_p denotes the point projection error, e_l denotes the line projection error, and e_π denotes the surface projection error; generate a three-dimensional point cloud and/or a patch under the constraint of the total projection error; and generate a three-dimensional model from the three-dimensional point cloud and/or the patch.
The weights are added into the total projection errors calculated according to the point projection errors, the line projection errors and the surface projection errors, so that the weights can be adjusted according to the differences of the points, the lines and the surfaces of the total projection errors, error items of the points, the lines and the surfaces are distinguished, the weights can be adjusted according to actual conditions when three-dimensional point clouds and surface patches are generated according to the total projection errors, and a three-dimensional model generated according to the three-dimensional point clouds and/or the surface patches is finer.
In an embodiment of the present invention, generating the three-dimensional model according to the point projection error, the line projection error, and the surface projection error further includes:
Step d01: performing parallax fusion of the left and right views by combining a block dense reconstruction algorithm with weights set according to the point features, line features or plane features, wherein in the block dense reconstruction formula (reproduced in the original only as an embedded image) w is an adaptive weight set according to whether a pixel belongs to a point feature, a line feature or a face feature, λ is a custom-defined parameter, and w_f is the weight set according to the point feature, the line feature or the face feature.
The block dense reconstruction algorithm is an image optimization algorithm, patch refers to a range of 3 × 3 or 5 × 5 centered on a certain pixel, and the average error minimization is performed by using the patches of the pixels in the left and right views in the conventional block dense reconstruction algorithm.
By introducing the point features, line features and face features as constraints into the block dense reconstruction algorithm and setting the weight w_f according to them, the point features, line features and surface features can be used to constrain the generation of the three-dimensional point cloud.
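Purely as an illustration of how such weighting could enter a patch-based cost, the sketch below scales a sum-of-squared-differences patch cost by a weight chosen from the feature type of the centre pixel; the weight values, cost function and function names are assumptions and do not reproduce the patent's block dense reconstruction formula.

import numpy as np

FEATURE_WEIGHT = {"point": 1.5, "line": 2.0, "plane": 1.0, "none": 0.5}  # assumed values

def weighted_patch_cost(left, right, x, y, d, half=2, feature="none"):
    # SSD cost of a (2*half+1) x (2*half+1) patch (half=1 gives 3x3, half=2 gives 5x5)
    # at pixel (x, y) in the left view against disparity d, scaled by the feature weight.
    pl = left[y - half:y + half + 1, x - half:x + half + 1].astype(np.float32)
    pr = right[y - half:y + half + 1, x - d - half:x - d + half + 1].astype(np.float32)
    ssd = np.sum((pl - pr) ** 2)
    return FEATURE_WEIGHT[feature] * ssd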
In an embodiment of the present invention, generating the three-dimensional model according to the point projection error, the line projection error, and the surface projection error further includes:
step d 02: extracting semantic features of each image; generating a semantic texture surface patch according to the point projection error, the line projection error, the surface projection error and the semantic features; and generating a three-dimensional model according to the semantic texture patches.
The semantic features refer to semantic information of an image, the semantic information is divided into a visual layer, an object layer and a concept layer, the visual layer is a commonly understood bottom layer and contains textures, shapes and the like, the object layer is a middle layer and generally contains attribute features, such as the state of a certain object at a certain moment, and the concept layer is a high layer and is the object expressed by the image and closest to human understanding. In this step, the semantic features may be a visual layer, an object layer, or a concept layer, and different semantic features are extracted according to actual needs, which is not particularly limited in this embodiment of the present application.
Through extracting the semantic features of each image, generating semantic texture patches by combining point projection errors, line projection errors and plane projection errors, and generating a three-dimensional model according to the semantic texture patches, different parts in the three-dimensional model can be colored, textured and the like more conveniently and rapidly, and the service efficiency of the three-dimensional model is improved.
In an embodiment of the present invention, after generating the three-dimensional model according to the semantic texture patch, the method further includes:
step d 03: and performing multi-detail level optimization on the three-dimensional model.
In this step, multi-detail-level optimization refers to optimization according to the Levels of Detail (LOD) of the model, a concept proposed by Clark in 1976: when an object covers only a small area of the screen, a coarser model can be used to describe it. This provides a geometric hierarchy of models for visible-surface determination algorithms, allowing complex scenes to be drawn rapidly.
When the three-dimensional model generated by the semantic texture surface patch is subjected to multi-detail-level optimization, the multi-detail-level optimization can be performed by combining constraints of points, lines and surfaces of the semantic texture surface patch, so that the multi-detail-level optimization efficiency is higher.
Fig. 2 shows a functional block diagram of a three-dimensional reconstruction apparatus 200 according to an embodiment of the present invention. As shown in fig. 2, the apparatus includes: an extraction module 210, a calculation module 220, and a generation module 230.
An extraction module 210 for extracting a point feature, a line feature and a face feature of each of a plurality of images, the plurality of images representing multiple views of a target object;
a calculating module 220, configured to calculate a point projection error of the point feature, a line projection error of the line feature, and a surface projection error of the surface feature;
a generating module 230, configured to generate a three-dimensional model according to the point projection error, the line projection error, and the surface projection error;
in some embodiments, the calculation module 220 further comprises:
a first calculation unit, configured to let the point projection error be e_p, the calculation formula being:

e_p = Σ_j ‖ p_j − K_i ( R_i · P_j + t_i ) ‖²

wherein the projected point is normalized to pixel coordinates before the difference is taken, j is the index of the point feature, p_j is the point feature, P_j is the three-dimensional point corresponding to the point feature p_j on the image, K_i is an internal parameter of the image, and R_i and t_i are the external parameters T_i of the image;
a second calculation unit, configured to let the line projection error be e_l, the calculation formula being:

e_l = Σ_k ( d(s_k, l'_k)² + d(e_k, l'_k)² )

wherein k is the index of the line feature, s_k and e_k are respectively the start point and the end point of the line feature l_k, d(·,·) denotes the point-to-line distance in the image, and l'_k is the projection on the image of the three-dimensional line segment L_k corresponding to the line feature l_k, obtained from the camera pose together with K*_i, the cofactor matrix of K_i; K_i is the internal parameter of the image, expressed in the form:

K_i = | f_x   0    c_x |
      |  0   f_y   c_y |
      |  0    0     1  |

wherein i is the number of the image, f_x and f_y are the pixel focal lengths of the image, and c_x and c_y are the pixel coordinates of the principal point of the image;
a third calculation unit, configured to let the surface projection error be e_π, which measures, for each face feature, how far the corresponding three-dimensional points and three-dimensional line segments deviate from the corresponding three-dimensional plane, wherein j is the index of the point feature, k is the index of the line feature, m is the index of the face feature, p_j is the point feature, P_j is the three-dimensional point corresponding to p_j, π_m is the face feature, l_k is the line feature, L_k is the three-dimensional line segment corresponding to l_k and n_k is its normal vector, and n_m and d_m are respectively the normal vector of the three-dimensional plane Π_m corresponding to the two-dimensional plane π_m and the distance from the coordinate origin to that plane.
In some embodiments, the calculation module 220 further comprises:
a fourth calculation unit, configured to calculate a total projection error e from the point projection error, the line projection error and the surface projection error:

e = w_p · e_p + w_l · e_l + w_π · e_π

wherein w_p is the weight of the point projection error, w_l is the weight of the line projection error, w_π is the weight of the surface projection error, e_p denotes the point projection error, e_l denotes the line projection error, and e_π denotes the surface projection error;
the first generating unit is used for generating a three-dimensional point cloud and/or a patch according to the total projection error constraint;
and the second generation unit is used for generating a three-dimensional model according to the three-dimensional point cloud and/or the patch.
In some embodiments, the generation module 230 further comprises:
the third generation unit is used for obtaining a camera pose and a three-dimensional point cloud according to the point projection error, the line projection error and the surface projection error;
the fourth generating unit is used for generating a patch according to the camera pose and the three-dimensional point cloud;
and the fifth generating unit is used for generating the three-dimensional model according to the patch.
In some embodiments, the generation module 230 further comprises:
a sixth generating unit, configured to perform disparity fusion of the left and right views through a block dense reconstruction algorithm combined with weights set according to the point features, line features or plane features, wherein in the block dense reconstruction formula (reproduced in the original only as an embedded image) w is an adaptive weight set according to whether a pixel belongs to a point feature, a line feature or a face feature, λ is a custom-defined parameter, and w_f is the weight set according to the point feature, the line feature or the face feature.
In some embodiments, the generation module 230 further comprises:
a seventh generating unit, configured to extract semantic features of each image;
the eighth generating unit is used for generating semantic texture patches according to the point projection errors, the line projection errors, the surface projection errors and the semantic features;
and the ninth generating unit is used for generating the three-dimensional model according to the semantic texture surface patch.
In some embodiments, the three-dimensional reconstruction device 200 further comprises:
and the fifth calculation unit is used for performing multi-detail-level optimization on the three-dimensional model.
Fig. 3 is a schematic structural diagram of a three-dimensional reconstruction apparatus according to an embodiment of the present invention, and the specific embodiment of the present invention does not limit the specific implementation of the three-dimensional reconstruction apparatus.
As shown in fig. 3, the three-dimensional reconstruction apparatus may include: a processor 302, a memory 306, a communication interface 304, and a communication bus 308.
The processor 302, memory 306, and communication interface 304 communicate with each other via a communication bus 308.
The memory 306 is configured to store at least one executable instruction 310, and the executable instruction 310 causes the processor 302 to perform the relevant steps in the above-described three-dimensional reconstruction method embodiment.
The embodiment of the present invention further provides a computer-readable storage medium, where at least one executable instruction is stored in the storage medium, and when the executable instruction runs on a three-dimensional reconstruction device, the three-dimensional reconstruction device may execute the three-dimensional reconstruction method in any method embodiment described above.
The algorithms or displays presented herein are not inherently related to any particular computer, virtual system, or other apparatus. Various general purpose systems may also be used with the teachings herein. The required structure for constructing such a system will be apparent from the description above. In addition, embodiments of the present invention are not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the embodiments of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the invention and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that: that the invention as claimed requires more features than are expressly recited in each claim.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a unit claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, and so on does not indicate any ordering; these words may be interpreted as names. The steps in the above embodiments should not be construed as limiting the order of execution unless otherwise specified.

Claims (10)

1. A method of three-dimensional reconstruction, comprising:
extracting point features, line features and plane features of each of a plurality of images, the plurality of images representing multiple views of a target object;
calculating a point projection error of the point feature, a line projection error of the line feature, and a plane projection error of the plane feature;
and generating a three-dimensional model of the target object according to the point projection error, the line projection error and the plane projection error.
2. The three-dimensional reconstruction method of claim 1, wherein the calculating of the point projection error of the point feature, the line projection error of the line feature, and the plane projection error of the plane feature further comprises:
letting the point projection error be $e_{p_j}$, whose calculation formula is:
$e_{p_j} = \left\| \, p_j - \pi\!\left( K_i \left( R_i P_j + t_i \right) \right) \right\|$
wherein $j$ denotes the index of the point feature, $p_j$ is the point feature, $P_j$ is the three-dimensional point corresponding to the point feature $p_j$ on the image, $K_i$ is the internal parameter matrix of the image, $R_i$ and $t_i$ are the external parameters of the image $i$, and $\pi(\cdot)$ denotes the perspective division to pixel coordinates;
letting the line projection error be $e_{l_k}$, whose calculation formula is:
$e_{l_k} = d\!\left( s_k, l'_k \right) + d\!\left( e_k, l'_k \right)$
wherein $k$ denotes the index of the line feature, $s_k$ and $e_k$ are respectively the start point and the end point of the line feature $l_k$, $d(\cdot,\cdot)$ is the point-to-line distance in the image, and $l'_k$ is the projection on the image of the three-dimensional line segment $L_k$ corresponding to the line feature $l_k$, whose calculation formula is:
$l'_k = \mathcal{K}_i \, L_k$
wherein $\mathcal{K}_i$ denotes the cofactor matrix of $K_i$, and $K_i$ denotes the internal parameters of the image, expressed in the form:
$K_i = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}$
wherein $i$ is the index of the image, $f_x$ and $f_y$ are the pixel focal lengths of the image, and $c_x$ and $c_y$ are the pixel coordinates of the principal point of the image;
letting the plane projection error be $e_{\pi_m}$, whose calculation formula is:
$e_{\pi_m} = \sum_{j} \left| \, n_m^{\top} P_j - d_{\Pi} \, \right| + \sum_{k} \left| \, n_m^{\top} L_k - d_{L} \, \right|$
wherein $j$ denotes the index of the point feature, $k$ denotes the index of the line feature, $m$ denotes the index of the plane feature, $p_j$ is the point feature, $P_j$ is the three-dimensional point corresponding to $p_j$, $\pi_m$ is the plane feature, $l_k$ is the line feature, $L_k$ is the three-dimensional line segment corresponding to $l_k$, $n_m$ is the normal vector of the three-dimensional plane $\Pi_m$, and $d_{\Pi}$ and $d_{L}$ are respectively the distances from the origin of coordinates to the three-dimensional plane $\Pi_m$ corresponding to the two-dimensional plane $\pi_m$ and to the straight line.
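For illustration only (not part of the claims), the three error terms defined above can be evaluated along the following lines. This is a minimal sketch assuming a standard pinhole projection, an endpoint-to-line distance for the line term, and a point-to-plane distance for the plane term; the helper names are illustrative choices, not the exact formulas of the original filing.

    import numpy as np

    def project(P, K, R, t):
        # Pinhole projection of a 3D point P with intrinsic matrix K and pose (R, t).
        x = K @ (R @ np.asarray(P) + np.asarray(t))
        return x[:2] / x[2]

    def point_projection_error(p, P, K, R, t):
        # e_p: pixel distance between the observed point feature p and the projection of its 3D point P.
        return np.linalg.norm(np.asarray(p) - project(P, K, R, t))

    def line_projection_error(s, e, A, B, K, R, t):
        # e_l: distance of the observed endpoints s and e to the image line through the
        # projections of the 3D segment endpoints A and B.
        a, b = project(A, K, R, t), project(B, K, R, t)
        line = np.cross(np.append(a, 1.0), np.append(b, 1.0))   # homogeneous image line through a and b
        line = line / np.linalg.norm(line[:2])                  # normalize so point-line distances are in pixels
        return abs(line @ np.append(s, 1.0)) + abs(line @ np.append(e, 1.0))

    def plane_projection_error(points_3d, n, d):
        # e_pi: summed deviation of the 3D points attached to a plane feature from the plane
        # n . X = d, where n is the plane normal and d its distance to the origin.
        n = np.asarray(n, dtype=float)
        n = n / np.linalg.norm(n)
        return sum(abs(n @ np.asarray(P) - d) for P in points_3d)

In a full pipeline these per-feature errors would be summed over all observations and minimized jointly over the camera poses and the three-dimensional structure.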
3. The three-dimensional reconstruction method of claim 1, wherein said generating a three-dimensional model according to said point projection error, said line projection error and said plane projection error further comprises:
obtaining a camera pose and a three-dimensional point cloud according to the point projection error, the line projection error and the plane projection error;
generating a patch according to the camera pose and the three-dimensional point cloud;
and generating the three-dimensional model according to the patch.
4. The three-dimensional reconstruction method according to any one of claims 1 to 3, wherein the calculating of the point projection error of the point feature, the line projection error of the line feature and the plane projection error of the plane feature further comprises:
calculating a total projection error from the point projection error, the line projection error and the plane projection error as:
$E = \lambda_{p} E_{p} + \lambda_{l} E_{l} + \lambda_{\pi} E_{\pi}$
wherein $\lambda_{p}$ is the weight of the point projection error, $\lambda_{l}$ is the weight of the line projection error, $\lambda_{\pi}$ is the weight of the plane projection error, $E_{p}$ represents the point projection error, $E_{l}$ represents the line projection error, and $E_{\pi}$ represents the plane projection error;
generating a three-dimensional point cloud and/or a patch under the constraint of the total projection error;
and generating the three-dimensional model according to the three-dimensional point cloud and/or the patch.
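As a sketch of the weighted combination described in claim 4 (the argument names and default weights are illustrative assumptions, not part of the claim):

    def total_projection_error(point_errors, line_errors, plane_errors,
                               w_point=1.0, w_line=1.0, w_plane=1.0):
        # E = w_p * sum(e_p) + w_l * sum(e_l) + w_pi * sum(e_pi): each weight scales how
        # strongly the point, line and plane constraints influence the reconstruction.
        return (w_point * sum(point_errors)
                + w_line * sum(line_errors)
                + w_plane * sum(plane_errors))

Minimizing this scalar over the camera poses and the structure, for example with a non-linear least-squares solver, is one way the constraint described in the claim could be realized.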
5. The three-dimensional reconstruction method of claim 1, wherein the generating of the three-dimensional model according to the point projection error, the line projection error and the plane projection error further comprises:
performing disparity fusion of the left and right views by combining a block dense reconstruction algorithm with weights set according to the point feature, the line feature or the plane feature, wherein the block dense reconstruction algorithm formula comprises:
$w(p, q) = \lambda \, e^{-\Delta(p, q) / \gamma}$
wherein $w(p, q)$ is the adaptive weight set according to the difference of the point feature, the line feature and the plane feature between pixels $p$ and $q$, $\Delta(p, q)$ denotes that feature difference, $\gamma$ is a custom parameter, and $\lambda$ is the weight set according to the point feature, the line feature or the plane feature.
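A rough sketch of left/right disparity fusion with block-wise, feature-dependent weights, in the spirit of adaptive-support-weight stereo matching; the exponential weight form, the window radius, and the feature_map and feature_weight inputs are assumptions chosen to match the symbols above, not the exact formula of the filing.

    import numpy as np

    def fuse_disparity(cost, feature_map, feature_weight, gamma=10.0, radius=2):
        # cost:           H x W x D matching-cost volume between the left and right view
        # feature_map:    H x W scalar map summarizing local point/line/plane evidence
        # feature_weight: H x W per-pixel weight chosen from the feature type (point, line or plane)
        cost = np.asarray(cost, dtype=float)
        aggregated = np.zeros_like(cost)
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                neighbour_cost = np.roll(cost, (dy, dx), axis=(0, 1))
                neighbour_feat = np.roll(feature_map, (dy, dx), axis=(0, 1))
                # Neighbours whose feature evidence differs from the centre pixel contribute less;
                # the per-pixel weight re-weights pixels lying on point, line or plane features.
                w = feature_weight * np.exp(-np.abs(feature_map - neighbour_feat) / gamma)
                aggregated += w[..., None] * neighbour_cost
        return aggregated.argmin(axis=2)   # winner-take-all disparity per pixel

Pixels whose neighbourhood carries similar point, line or plane evidence reinforce each other, while pixels across a feature boundary contribute little, which is the effect the adaptive weight in the claim is aimed at.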
6. The three-dimensional reconstruction method of claim 1, wherein said generating a three-dimensional model from said point projection error, said line projection error, and said plane projection error, further comprises:
extracting semantic features of each image;
generating semantic texture patches according to the point projection error, the line projection error, the plane projection error and the semantic features;
and generating the three-dimensional model according to the semantic texture patches.
7. The method of claim 6, wherein after generating the three-dimensional model from the semantic texture patches, the method further comprises:
and performing multi-level-of-detail optimization on the three-dimensional model.
8. A three-dimensional reconstruction apparatus, comprising:
an extraction module, configured to extract a point feature, a line feature and a plane feature of each of a plurality of images, the plurality of images being captured from different angles;
a calculation module, configured to calculate a point projection error of the point feature, a line projection error of the line feature, and a plane projection error of the plane feature;
and a generation module, configured to generate a three-dimensional model according to the point projection error, the line projection error and the plane projection error.
9. A three-dimensional reconstruction apparatus, comprising: a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface communicate with one another via the communication bus;
the memory is configured to store at least one executable instruction that causes the processor to perform the operations of the three-dimensional reconstruction method of any one of claims 1-7.
10. A computer-readable storage medium having stored therein at least one executable instruction that, when executed on a three-dimensional reconstruction device, causes the three-dimensional reconstruction device to perform operations of the three-dimensional reconstruction method of any one of claims 1-7.
CN202210999929.4A 2022-08-19 2022-08-19 Three-dimensional reconstruction method, device and computer-readable storage medium Active CN115063485B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210999929.4A CN115063485B (en) 2022-08-19 2022-08-19 Three-dimensional reconstruction method, device and computer-readable storage medium
PCT/CN2023/113315 WO2024037562A1 (en) 2022-08-19 2023-08-16 Three-dimensional reconstruction method and apparatus, and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210999929.4A CN115063485B (en) 2022-08-19 2022-08-19 Three-dimensional reconstruction method, device and computer-readable storage medium

Publications (2)

Publication Number Publication Date
CN115063485A true CN115063485A (en) 2022-09-16
CN115063485B CN115063485B (en) 2022-11-29

Family

ID=83208595

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210999929.4A Active CN115063485B (en) 2022-08-19 2022-08-19 Three-dimensional reconstruction method, device and computer-readable storage medium

Country Status (2)

Country Link
CN (1) CN115063485B (en)
WO (1) WO2024037562A1 (en)


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112102411B (en) * 2020-11-02 2021-02-12 中国人民解放军国防科技大学 Visual positioning method and device based on semantic error image
CN115063485B (en) * 2022-08-19 2022-11-29 深圳市其域创新科技有限公司 Three-dimensional reconstruction method, device and computer-readable storage medium

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015154601A1 (en) * 2014-04-08 2015-10-15 中山大学 Non-feature extraction-based dense sfm three-dimensional reconstruction method
CN110021065A (en) * 2019-03-07 2019-07-16 杨晓春 A kind of indoor environment method for reconstructing based on monocular camera
WO2021120175A1 (en) * 2019-12-20 2021-06-24 驭势科技(南京)有限公司 Three-dimensional reconstruction method, apparatus and system, and storage medium
CN111784842A (en) * 2020-06-29 2020-10-16 北京百度网讯科技有限公司 Three-dimensional reconstruction method, device, equipment and readable storage medium
CN113313832A (en) * 2021-05-26 2021-08-27 Oppo广东移动通信有限公司 Semantic generation method and device of three-dimensional model, storage medium and electronic equipment
CN114241050A (en) * 2021-12-20 2022-03-25 东南大学 Camera pose optimization method based on Manhattan world hypothesis and factor graph
CN114708293A (en) * 2022-03-22 2022-07-05 广东工业大学 Robot motion estimation method based on deep learning point-line feature and IMU tight coupling

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MICHAEL BLEYER ET AL.: "PatchMatch Stereo - Stereo Matching with Slanted Support Windows", BMVC 2011 *
YANG KUI ET AL.: "Fast dense stereo matching based on recursive adaptive weights" (基于递推自适应权重的快速稠密立体匹配), Journal of Beijing University of Aeronautics and Astronautics (北京航空航天大学学报) *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024037562A1 (en) * 2022-08-19 2024-02-22 深圳市其域创新科技有限公司 Three-dimensional reconstruction method and apparatus, and computer-readable storage medium

Also Published As

Publication number Publication date
CN115063485B (en) 2022-11-29
WO2024037562A1 (en) 2024-02-22

Similar Documents

Publication Publication Date Title
US11410320B2 (en) Image processing method, apparatus, and storage medium
CN111243093B (en) Three-dimensional face grid generation method, device, equipment and storage medium
CN108010123B (en) Three-dimensional point cloud obtaining method capable of retaining topology information
CN112651881B (en) Image synthesizing method, apparatus, device, storage medium, and program product
US10169891B2 (en) Producing three-dimensional representation based on images of a person
US9437034B1 (en) Multiview texturing for three-dimensional models
CN114332415B (en) Three-dimensional reconstruction method and device of power transmission line corridor based on multi-view technology
CN111998862B (en) BNN-based dense binocular SLAM method
CN109685879B (en) Method, device, equipment and storage medium for determining multi-view image texture distribution
WO2024037562A1 (en) Three-dimensional reconstruction method and apparatus, and computer-readable storage medium
CN112734914A (en) Image stereo reconstruction method and device for augmented reality vision
CN117197388A (en) Live-action three-dimensional virtual reality scene construction method and system based on generation of antagonistic neural network and oblique photography
CN116092035A (en) Lane line detection method, lane line detection device, computer equipment and storage medium
CN113077504B (en) Large scene depth map generation method based on multi-granularity feature matching
CN115953471A (en) Indoor scene multi-scale vector image retrieval and positioning method, system and medium
Jisen A study on target recognition algorithm based on 3D point cloud and feature fusion
CN117501313A (en) Hair rendering system based on deep neural network
CN115984583B (en) Data processing method, apparatus, computer device, storage medium, and program product
WO2023078052A1 (en) Three-dimensional object detection method and apparatus, and computer-readable storage medium
CN116664895B (en) Image and model matching method based on AR/AI/3DGIS technology
Kordelas et al. Accurate stereo 3D point cloud generation suitable for multi-view stereo reconstruction
Zhang et al. Design of a 3D reconstruction model of multiplane images based on stereo vision
Dan et al. Depth estimation from a single outdoor image based on scene classification
Hu et al. ADMap: Anti-disturbance framework for reconstructing online vectorized HD map
Zhang et al. Viewpoint estimation of image object based on parameters sharing network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant