CN113724369A - Scene-oriented three-dimensional reconstruction viewpoint planning method and system

Info

Publication number
CN113724369A
CN113724369A
Authority
CN
China
Prior art keywords
image
viewpoint
dimensional reconstruction
scene
view
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110877752.6A
Other languages
Chinese (zh)
Inventor
石启新
张潇
张涛
王小波
张玉伟
胡英坤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Xuzhou Power Supply Co
Xuzhou Power Supply Co of State Grid Jiangsu Electric Power Co Ltd
Original Assignee
State Grid Xuzhou Power Supply Co
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Xuzhou Power Supply Co filed Critical State Grid Xuzhou Power Supply Co
Priority to CN202110877752.6A
Publication of CN113724369A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/60: Analysis of geometric attributes
    • G06T7/62: Analysis of geometric attributes of area, perimeter, diameter or volume
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10004: Still image; Photographic image
    • G06T2207/10012: Stereo images

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a scene-oriented three-dimensional reconstruction viewpoint planning method and system. The method comprises the following steps: S1, performing an initial three-dimensional reconstruction of the scene from a multi-view image set of the scene; S2, taking each observation view of the multi-view image set as a view under test, computing a synthesized image at the view under test from the neighborhood color images and the corresponding depth maps, computing the hole area of the synthesized image and the error between the synthesized image and the observed image at the view under test, computing a quality evaluation score from the hole area and the error, and taking images whose quality evaluation score exceeds a first set threshold as redundant viewpoints; S3, taking virtual viewpoints whose hole area exceeds a second set threshold as candidate viewpoints, and adding actual photographs at the candidate viewpoint positions; S4, excluding the redundant viewpoints and registering the newly added actual photographs to the initial three-dimensional reconstruction model of step S1. By reducing redundant viewpoints and adding viewpoints at suitable positions, the invention achieves efficient and highly complete three-dimensional reconstruction.

Description

Scene-oriented three-dimensional reconstruction viewpoint planning method and system
Technical Field
The present invention relates to the field of computer vision technologies, and in particular, to a method and a system for planning a three-dimensional reconstruction viewpoint of a scene.
Background
Image-based scene three-dimensional reconstruction models a scene from multi-angle image data captured of it. When a scene is photographed with a handheld camera, some areas are inevitably missed, so that parts of the scene lack sufficient observations and the reconstruction result is incomplete. Conversely, photo coverage of other areas may be too dense, and the resulting data redundancy leads to longer computation times. To address these problems, it is necessary to develop a scene-oriented three-dimensional reconstruction viewpoint planning method and system that can remove and add viewpoints by evaluating the quality of images synthesized from the three-dimensional reconstruction, thereby increasing both the reconstruction speed and the reconstruction completeness.
Disclosure of Invention
The invention aims to provide a scene-oriented three-dimensional reconstruction viewpoint planning method and system so as to overcome the above defects in the prior art.
In order to achieve this purpose, the technical scheme adopted by the invention is as follows:
A scene-oriented three-dimensional reconstruction viewpoint planning method comprises the following steps:
S1, performing an initial three-dimensional reconstruction of the scene from a multi-view image set of the scene to form an initial three-dimensional reconstruction model;
S2, taking each observation view of the multi-view image set as a view under test, computing a synthesized image at the view under test from the neighborhood color images and the corresponding depth maps, computing the hole area of the synthesized image and the error between the synthesized image and the observed image at the view under test, computing a quality evaluation score from the hole area and the error, and taking images whose quality evaluation score exceeds a first set threshold as redundant viewpoints;
S3, taking virtual viewpoints whose hole area exceeds a second set threshold as candidate viewpoints, and adding actual photographs at the candidate viewpoint positions;
S4, excluding the redundant viewpoints of step S2 and registering the actual photographs added in step S3 to the initial three-dimensional reconstruction model of step S1.
Further, the step S1 specifically includes:
S10, for the multi-view image set I = {I_1, I_2, I_3, …, I_n} of the scene, extracting SIFT feature points from each key frame to obtain F = {F_1, F_2, F_3, …, F_n}, where F_i is the feature point set corresponding to image I_i; matching the feature points between key frames; and, based on the matching result, performing sparse three-dimensional reconstruction and camera pose recovery on the matched feature points with the SfM (structure from motion) method, computing the camera parameters of each input image I_i ∈ I;
S11, down-sampling the input images, executing a multi-view stereo three-dimensional reconstruction algorithm on the down-sampled image data, computing the depth map of each view i, and fusing the depth maps to obtain the initial three-dimensional reconstruction model S_L.
Further, the camera parameters in the step S10 include camera intrinsic parameters and camera extrinsic parameters, where the intrinsic parameters are

K_i = [ f 0 c_x ; 0 f c_y ; 0 0 1 ],

f being the focal length and (c_x, c_y) the principal point, and the camera extrinsic parameters are the rotation R_i and the translation t_i.
Further, the step S10 includes performing bundle adjustment on the sparse point cloud model to optimize the parameters of all cameras and the positions of the three-dimensional point cloud.
Further, the step S11 includes calculating the projection depth maps D = {D_1, D_2, D_3, …, D_n} of the initial three-dimensional model S_L at each input view.
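To make steps S10 and S11 concrete, here is a minimal sketch of the S10 front end in Python, assuming OpenCV is available and the images are preloaded as grayscale arrays; the function names and the Lowe ratio value are illustrative choices, not taken from the patent:

```python
import cv2
import numpy as np

def extract_sift_features(images):
    # S10: extract SIFT feature points F_i for every key frame I_i
    sift = cv2.SIFT_create()
    return [sift.detectAndCompute(img, None) for img in images]

def match_pair(desc_a, desc_b, ratio=0.75):
    # Match feature points between two key frames (Lowe's ratio test is an
    # illustrative choice; the patent only says "matching")
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(desc_a, desc_b, k=2)
    return [m for m, n in knn if m.distance < ratio * n.distance]

def intrinsic_matrix(f, cx, cy):
    # The intrinsic matrix K_i of step S10: focal length f, principal point (cx, cy)
    return np.array([[f, 0.0, cx],
                     [0.0, f, cy],
                     [0.0, 0.0, 1.0]])
```

The matched feature points would then be handed to an SfM solver (for example an incremental pipeline such as COLMAP) for the sparse reconstruction, pose recovery, and bundle adjustment, and the down-sampled images to a multi-view stereo step for S11.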
Further, the step S2 includes:
S20, using the neighborhood color image set {I_1^j, I_2^j, …, I_{n_1}^j} of view j and the corresponding depth maps {D_1^j, D_2^j, …, D_{n_1}^j}: for any pixel p ∈ I_j, the color value I_{t→j}(p) projected from each neighborhood view t onto p is computed, and the color values of all neighborhood views are averaged,

Î_j(p) = (1/n_1) Σ_{t=1}^{n_1} I_{t→j}(p),

where n_1 is the number of neighborhood views and I_{t→j}(p) is the value of pixel p projected via neighborhood view t;
S21, performing step S20 for every pixel yields the synthesized image Î_j at view j;
S22, the hole area L_c is calculated from the number of missing-region pixels contained in the synthesized image Î_j, the error L_d is calculated from the difference between the synthesized image Î_j and the original observed image I_j at view j, and the quality evaluation score is c_j = L_c + λ·L_d, where λ is a user-defined constant;
S23, the quality evaluation scores c_j are sorted, and the views whose score c_j exceeds the first set threshold x are taken as redundant viewpoints.
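A minimal numeric sketch of the S22/S23 scoring follows, assuming the synthesized image marks hole pixels as NaN and that both images are float arrays of the same shape; the variable names and the NaN hole convention are our assumptions:

```python
import numpy as np

def quality_score(synth, observed, lam=1.0):
    # S22: hole area L_c = number of missing-region pixels in the synthesized image
    holes = np.isnan(synth).any(axis=-1)
    L_c = int(holes.sum())
    # Error L_d from the difference between synthesized and observed image,
    # evaluated only where the synthesis produced a value
    diff = np.abs(synth - observed)[~holes]
    L_d = float(diff.mean()) if diff.size else 0.0
    # Quality evaluation score c_j = L_c + lambda * L_d
    return L_c + lam * L_d

def redundant_views(scores, x):
    # S23: sort the scores and keep views whose score exceeds the first threshold x
    order = sorted(range(len(scores)), key=lambda j: scores[j], reverse=True)
    return [j for j in order if scores[j] > x]
```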
Further, the step S3 includes:
S30, setting the closest and farthest distances between the camera and the scene, and uniformly sampling new viewpoints m around the existing viewpoints;
S31, using the neighborhood image set {I_1^m, I_2^m, …, I_{n_2}^m} of view m and the corresponding depth maps {D_1^m, D_2^m, …, D_{n_2}^m}: for any pixel p ∈ I_m, the color value I_{t→m}(p) projected from each neighborhood view t onto p is computed, and the color values of all neighborhood views are averaged,

Î_m(p) = (1/n_2) Σ_{t=1}^{n_2} I_{t→m}(p),

where n_2 is the number of neighborhood views and I_{t→m}(p) is the value of pixel p projected via neighborhood view t; performing this operation for each pixel synthesizes the image Î_m at the virtual view m;
S32, the hole area h_m is calculated from the number of missing-region pixels contained in the synthesized image Î_m; if the hole area h_m exceeds the second set threshold T, the virtual viewpoint m is taken as a candidate viewpoint, and an actual photograph is added at the candidate viewpoint position.
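The sketch below illustrates one plausible reading of S30 and S32: candidate centers are drawn uniformly in the shell between the set nearest and farthest distances around each existing viewpoint, then kept if the synthesized image's hole area exceeds T. The sampling scheme, the `synthesize` callback, and all names are assumptions; the patent only specifies uniform sampling and the threshold test:

```python
import numpy as np

def sample_new_viewpoints(existing_centers, d_near, d_far, per_view=8, rng=None):
    # S30: uniform directions on the sphere, radii within [d_near, d_far]
    rng = np.random.default_rng() if rng is None else rng
    out = []
    for c in existing_centers:
        dirs = rng.normal(size=(per_view, 3))
        dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
        radii = rng.uniform(d_near, d_far, size=(per_view, 1))
        out.append(np.asarray(c) + dirs * radii)
    return np.concatenate(out, axis=0)

def candidate_viewpoints(viewpoints, synthesize, T):
    # S32: keep viewpoints whose synthesized image has hole area h_m > T;
    # `synthesize` returns (image, filled_mask) for a virtual viewpoint
    chosen = []
    for m, v in enumerate(viewpoints):
        _, filled = synthesize(v)
        h_m = int((~filled).sum())
        if h_m > T:
            chosen.append(m)
    return chosen
```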
Further, the step S4 includes:
S40, removing the redundant viewpoints from the input image set and adding the actual photographs to form a new image set I′ = {I_1, I_2, I_3, …, I_g} ∪ {J_1, J_2, …, J_f}, where {I_1, I_2, I_3, …, I_g} is the input image set with the redundant viewpoints removed and {J_1, J_2, …, J_f} are the newly added actual photographs;
S41, registering the newly added actual photographs {J_1, J_2, …, J_f} to the initial three-dimensional reconstruction model S_L to obtain the poses of {J_1, J_2, …, J_f} relative to the model S_L: for each image J_i ∈ {J_1, J_2, …, J_f}, assuming its intrinsic parameters are known, feature points are extracted in J_i and matched with the feature points of {I_1, I_2, I_3, …, I_g}; a BA (bundle adjustment) operation is performed on the matching result to optimize the extrinsic parameters of J_i, including the camera rotation R_i and the camera translation t_i; then bundle adjustment is performed on image J_i together with the image set {I_1, I_2, I_3, …, I_g}, optimizing the camera translation t_i and camera rotation R_i of image J_i by minimizing the reprojection error;
S42, executing dense three-dimensional reconstruction on the new image set I′ = {I_1, I_2, I_3, …, I_g} ∪ {J_1, J_2, …, J_f}, computing the depth map of each input viewpoint, and obtaining the high-quality three-dimensional model S_H by fusing the depth maps.
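For S41, a compact way to obtain an initial pose for a new photograph J_i is PnP on 2D-3D correspondences, which the bundle adjustment described above would then refine; this sketch assumes the matched features of J_i are already associated with 3D points of S_L, and it is an illustrative stand-in rather than the patent's exact BA procedure:

```python
import cv2
import numpy as np

def register_new_image(pts3d, pts2d, K):
    # pts3d: (N, 3) model points of S_L matched to features of J_i
    # pts2d: (N, 2) pixel locations of those features in J_i
    # K: known intrinsic matrix of J_i (the patent assumes intrinsics are known)
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts3d.astype(np.float64), pts2d.astype(np.float64),
        K.astype(np.float64), None)
    if not ok:
        raise RuntimeError("could not register new image")
    R, _ = cv2.Rodrigues(rvec)  # camera rotation R_i
    return R, tvec              # pose (R_i, t_i), to be refined by bundle adjustment
```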
The invention also provides a system for implementing the above scene-oriented three-dimensional reconstruction viewpoint planning method, comprising:
a three-dimensional reconstruction module, used for performing the initial three-dimensional reconstruction of a scene from the multi-view image set of the scene to form an initial three-dimensional reconstruction model;
a redundant viewpoint calculation module, used for taking each observation view of the multi-view image set as a view under test, computing a synthesized image at the view under test from the neighborhood color images and the corresponding depth maps, computing the hole area of the synthesized image and the error between the synthesized image and the observed image at the view under test, computing a quality evaluation score from the hole area and the error, and taking images whose quality evaluation score exceeds a first set threshold as redundant viewpoints;
a candidate viewpoint calculation module, used for taking virtual viewpoints whose hole area exceeds a second set threshold as candidate viewpoints and adding actual photographs at the candidate viewpoint positions;
a registration reconstruction module, used for excluding the redundant viewpoints, registering the newly added actual photographs into the initial three-dimensional reconstruction model to optimize the images and camera parameters, executing dense three-dimensional reconstruction on the new image set, computing the depth map of each input viewpoint, and obtaining a high-quality three-dimensional model by fusing the depth maps;
wherein the three-dimensional reconstruction module, the redundant viewpoint calculation module, the candidate viewpoint calculation module and the registration reconstruction module are connected in sequence.
Compared with the prior art, the invention has the following advantages: it improves the reconstruction speed by reducing redundant viewpoints and improves the reconstruction quality by adding viewpoints at suitable positions. The scene does not need to be pre-scanned with complex equipment such as a laser radar; the positions of redundant viewpoints and of new viewpoints are judged entirely from the information contained in the input data, achieving efficient and highly complete three-dimensional reconstruction.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic diagram of a scene-oriented three-dimensional reconstruction viewpoint planning method of the present invention.
Fig. 2 is a frame diagram of the scene-oriented three-dimensional reconstruction viewpoint planning system of the present invention.
Detailed Description
The preferred embodiments of the present invention are described in detail below with reference to the accompanying drawings, so that the advantages and features of the invention can be more easily understood by those skilled in the art and the protection scope of the invention is defined more clearly.
Referring to fig. 1, the present embodiment discloses a three-dimensional reconstruction viewpoint planning method for a scene, which includes the following steps:
step S1, performing initial three-dimensional reconstruction of the scene according to the multi-view image set of the scene to form an initial three-dimensional reconstruction model, specifically:
first, for a multi-view image set I ═ { I ] of a scene1,I2,I3,...InSIFT feature point extraction is carried out on each key frame, and F is obtained as F ═ F1,F2,F3,...FnIn which FiAs an image IiMatching the corresponding characteristic point set between the key frames with the characteristic points, performing sparse three-dimensional reconstruction and camera pose recovery by using an SFM (structure from motion) method based on the matched characteristic points according to the matching result, and calculating each input image IiSelecting the camera parameters belonging to the I; the camera parameters comprise camera intrinsic parameters and camera extrinsic parameters, the camera intrinsic parameters are
Figure BDA0003190949810000041
Where f focal length, cx,cyPrincipal point, camera extrinsic parameter is rotation RiAnd translation ti. On this basis, Bundle Adjustment (Bundle Adjustment) is performed on the sparse point cloud model to optimize the parameters of all cameras and the position of the three-dimensional point cloud.
Then, the input image is down-sampled, a multi-view stereo three-dimensional reconstruction algorithm is executed on the down-sampled image data, and each view is calculatedAfter the depth map of the angle i is fused, obtaining an initial three-dimensional reconstruction model SL. Calculating an initial three-dimensional model SLProjection depth map D ═ D at each input view angle1,D2,D3,...Dn}。
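As an illustration of the fusion at the end of step S1, a naive depth-map fusion that back-projects every valid depth pixel into a single world-space point cloud is sketched below; a standard pinhole model with world-to-camera extrinsics (x_cam = R·x_world + t) is assumed, and the consistency filtering used by real MVS systems is omitted:

```python
import numpy as np

def fuse_depth_maps(depths, colors, Ks, Rs, ts):
    # S11 (simplified): back-project all valid depth pixels of all views
    # into world coordinates and merge them into one colored point cloud.
    points, rgbs = [], []
    for depth, color, K, R, t in zip(depths, colors, Ks, Rs, ts):
        h, w = depth.shape
        us, vs = np.meshgrid(np.arange(w), np.arange(h))
        z = depth.ravel()
        keep = z > 0
        pix = np.stack([us.ravel(), vs.ravel(), np.ones(h * w)])[:, keep]
        cam = np.linalg.inv(K) @ pix * z[keep]        # pixels -> camera frame
        world = R.T @ (cam - t.reshape(3, 1))          # camera -> world frame
        points.append(world.T)
        rgbs.append(color.reshape(-1, 3)[keep])
    return np.concatenate(points), np.concatenate(rgbs)
```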
Step S2, each observation view of the multi-view image set is taken as a view under test; a synthesized image at the view under test is computed from the neighborhood color images and the corresponding depth maps; the hole area of the synthesized image and the error between the synthesized image and the observed image at the view under test are computed; a quality evaluation score is computed from the hole area and the error; and images whose quality evaluation score exceeds a first set threshold are taken as redundant viewpoints. One function of this step (viewpoint planning) is to quickly eliminate a group of input images according to the current model built from the low-resolution images, with almost no influence on the quality of the reconstruction result, while improving the computational efficiency when the three-dimensional model S_H is later built from high-definition images. The specific implementation is as follows:
first, to synthesize an image at viewing angle j, a neighborhood color image set of viewing angle j is used
Figure BDA0003190949810000051
And corresponding depth map
Figure BDA0003190949810000052
For any pixel p ∈ IjComputing color value projection from neighborhood view t to p
Figure BDA0003190949810000053
And averaging the color values of all neighborhood views
Figure BDA0003190949810000054
Wherein n is1For the number of neighboring views to be,
Figure BDA0003190949810000055
is the pixel value of pixel p projected via the neighborhood view.
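To make the projection I_{t→j}(p) concrete, the following is a hedged sketch of a depth-based forward warp from one neighborhood view into the target view, assuming the same pinhole model with world-to-camera extrinsics as above; the patent does not spell out its projection details, so this is one standard realization with illustrative names:

```python
import numpy as np

def warp_to_target(color_t, depth_t, K_t, R_t, t_t, K_j, R_j, t_j, target_hw):
    # Back-project every pixel of neighborhood view t using its depth, move it
    # to world coordinates, and re-project it into target view j.
    h, w = depth_t.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([us.ravel(), vs.ravel(), np.ones(h * w)])
    z = depth_t.ravel()
    cam_t = np.linalg.inv(K_t) @ pix * z            # points in camera t
    world = R_t.T @ (cam_t - t_t.reshape(3, 1))     # camera t -> world
    proj = K_j @ (R_j @ world + t_j.reshape(3, 1))  # world -> pixels of view j
    uj = np.round(proj[0] / proj[2]).astype(int)
    vj = np.round(proj[1] / proj[2]).astype(int)
    H, W = target_hw
    keep = (z > 0) & (proj[2] > 0) & (uj >= 0) & (uj < W) & (vj >= 0) & (vj < H)
    acc = np.zeros((H, W, 3))
    cnt = np.zeros((H, W))
    np.add.at(acc, (vj[keep], uj[keep]), color_t.reshape(-1, 3)[keep])
    np.add.at(cnt, (vj[keep], uj[keep]), 1)
    filled = cnt > 0                                # pixels never hit remain holes
    acc[filled] /= cnt[filled][:, None]
    return acc, filled
```

Averaging the warps of all n_1 neighborhood views pixel-wise yields Î_j; the pixels that no neighborhood view reaches form the missing region whose size is the hole area.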
Then, the synthesized image Î_j at view j is computed according to the above step.
Next, the hole area L_c is calculated from the number of missing-region pixels contained in the synthesized image Î_j, and the error L_d is calculated from the difference between the synthesized image Î_j and the original observed image I_j at view j; the quality evaluation score is c_j = L_c + λ·L_d, where λ is a user-defined constant. The score c_j is used to evaluate whether view j is a redundant view: when j is redundant, its surroundings contain many neighborhood images, so the image of view j synthesized from those neighborhood images, Î_j, has high completeness and is also similar to the observed image stored at view j itself; such a view can potentially be excluded.
Finally, the quality evaluation scores c_j are sorted, and the views whose quality evaluation score c_j exceeds the first set threshold x are taken as redundant viewpoints.
Step S3, virtual viewpoints whose hole area exceeds the second set threshold are taken as candidate viewpoints, and actual photographs are added at the candidate viewpoint positions. For a given input image set I = {I_1, I_2, I_3, …, I_n}, the other function of viewpoint planning is to identify incomplete regions based on the model S_L quickly built from the current low-resolution images, and to supplement captured images in those regions. The specific operation steps are as follows:
first, the closest and farthest distances of the camera from the scene are set, and a new viewpoint m is uniformly sampled around the existing viewpoint.
Second, the neighborhood image set {I_1^m, I_2^m, …, I_{n_2}^m} of view m and the corresponding depth maps {D_1^m, D_2^m, …, D_{n_2}^m} are used. For any pixel p ∈ I_m, the color value I_{t→m}(p) projected from each neighborhood view t onto p is computed, and the color values of all neighborhood views are averaged,

Î_m(p) = (1/n_2) Σ_{t=1}^{n_2} I_{t→m}(p),

where n_2 is the number of neighborhood views and I_{t→m}(p) is the value of pixel p projected via neighborhood view t. Performing this operation for each pixel synthesizes the image Î_m at the virtual view m.
Finally, the hole area h_m is calculated from the number of missing-region pixels contained in the synthesized image Î_m. The larger h_m is, the lower the completeness: the larger the missing region of Î_m, the larger the hole area h_m. If the hole area h_m exceeds the second set threshold T, the virtual viewpoint m is taken as a candidate viewpoint, and an actual photograph is added at the candidate viewpoint position.
Step S4, the redundant viewpoints of step S2 are excluded, and the actual photographs added in step S3 are registered to the initial three-dimensional reconstruction model of step S1, specifically:
First, the redundant viewpoints are removed from the input image set and the actual photographs are added, forming a new image set I′ = {I_1, I_2, I_3, …, I_g} ∪ {J_1, J_2, …, J_f}, where {I_1, I_2, I_3, …, I_g} is the input image set with the redundant viewpoints removed and {J_1, J_2, …, J_f} are the newly added actual photographs.
Second, the newly added actual photographs {J_1, J_2, …, J_f} are registered to the initial three-dimensional reconstruction model S_L to obtain the poses of {J_1, J_2, …, J_f} relative to the model S_L: for each image J_i ∈ {J_1, J_2, …, J_f}, assuming its intrinsic parameters are known, feature points are extracted in J_i and matched with the feature points of {I_1, I_2, I_3, …, I_g}; a BA operation is performed on the matching result to optimize the extrinsic parameters of J_i, including the camera rotation R_i and the camera translation t_i; then bundle adjustment is performed on image J_i together with the image set {I_1, I_2, I_3, …, I_g}, optimizing the camera translation t_i and camera rotation R_i of image J_i by minimizing the reprojection error.
Finally, dense three-dimensional reconstruction is executed on the new image set I′ = {I_1, I_2, I_3, …, I_g} ∪ {J_1, J_2, …, J_f}; the depth map of each input viewpoint is computed, and the high-quality three-dimensional model S_H is obtained by fusing the depth maps.
Referring to fig. 2, the present invention further provides a system for implementing the above scene-oriented three-dimensional reconstruction viewpoint planning method, comprising: a three-dimensional reconstruction module 1, used for performing the initial three-dimensional reconstruction of a scene from the multi-view image set of the scene to form an initial three-dimensional reconstruction model; a redundant viewpoint calculation module 2, used for taking each observation view of the multi-view image set as a view under test, computing a synthesized image at the view under test from the neighborhood color images and the corresponding depth maps, computing the hole area of the synthesized image and the error between the synthesized image and the observed image at the view under test, computing a quality evaluation score from the hole area and the error, and taking images whose quality evaluation score exceeds a first set threshold as redundant viewpoints; a candidate viewpoint calculation module 3, used for taking virtual viewpoints whose hole area exceeds a second set threshold as candidate viewpoints and adding actual photographs at the candidate viewpoint positions; and a registration reconstruction module 4, used for excluding the redundant viewpoints, registering the newly added actual photographs into the initial three-dimensional reconstruction model to optimize the images and camera parameters, executing dense three-dimensional reconstruction on the new image set, computing the depth map of each input viewpoint, and obtaining a high-quality three-dimensional model by fusing the depth maps. The three-dimensional reconstruction module 1, the redundant viewpoint calculation module 2, the candidate viewpoint calculation module 3 and the registration reconstruction module 4 are connected in sequence.
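As an illustration of how the four sequentially connected modules could be wired together in software (the module boundaries follow the text; all names and callables here are illustrative stand-ins, not part of the patent):

```python
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class ViewpointPlanningSystem:
    reconstruct_initial: Callable    # three-dimensional reconstruction module (1)
    find_redundant: Callable         # redundant viewpoint calculation module (2)
    find_candidates: Callable        # candidate viewpoint calculation module (3)
    register_and_densify: Callable   # registration reconstruction module (4)

    def run(self, images: Sequence):
        model = self.reconstruct_initial(images)             # S1: initial model S_L
        redundant = set(self.find_redundant(images, model))  # S2: redundant views
        new_images = self.find_candidates(images, model)     # S3: photos at candidates
        kept = [im for i, im in enumerate(images) if i not in redundant]
        return self.register_and_densify(kept + list(new_images), model)  # S4: S_H
```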
The invention uses the quality of image viewpoint synthesis to locate redundant viewpoints and the regions where photographs need to be supplemented; it can be used to reduce redundant viewpoints in the three-dimensional reconstruction process and to supplement viewpoints in uncovered regions, achieving efficient and highly complete three-dimensional reconstruction.
Although the embodiments of the present invention have been described with reference to the accompanying drawings, various changes or modifications may be made by the patentee within the scope of the appended claims, and such changes and modifications shall fall within the protection scope of the invention as long as they do not exceed the scope described in the claims.

Claims (9)

1. A scene-oriented three-dimensional reconstruction viewpoint planning method, characterized by comprising the following steps:
S1, performing an initial three-dimensional reconstruction of the scene from a multi-view image set of the scene to form an initial three-dimensional reconstruction model;
S2, taking each observation view of the multi-view image set as a view under test, computing a synthesized image at the view under test from the neighborhood color images and the corresponding depth maps, computing the hole area of the synthesized image and the error between the synthesized image and the observed image at the view under test, computing a quality evaluation score from the hole area and the error, and taking images whose quality evaluation score exceeds a first set threshold as redundant viewpoints;
S3, taking virtual viewpoints whose hole area exceeds a second set threshold as candidate viewpoints, and adding actual photographs at the candidate viewpoint positions;
S4, excluding the redundant viewpoints of step S2 and registering the actual photographs added in step S3 to the initial three-dimensional reconstruction model of step S1.
2. The scene-oriented three-dimensional reconstruction viewpoint planning method according to claim 1, wherein the step S1 specifically includes:
S10, for the multi-view image set I = {I_1, I_2, I_3, …, I_n} of the scene, extracting SIFT feature points from each key frame to obtain F = {F_1, F_2, F_3, …, F_n}, where F_i is the feature point set corresponding to image I_i; matching the feature points between key frames; and, based on the matching result, performing sparse three-dimensional reconstruction and camera pose recovery on the matched feature points with the SfM (structure from motion) method, computing the camera parameters of each input image I_i ∈ I;
S11, down-sampling the input images, executing a multi-view stereo three-dimensional reconstruction algorithm on the down-sampled image data, computing the depth map of each view i, and fusing the depth maps to obtain the initial three-dimensional reconstruction model S_L.
3. The scene-oriented three-dimensional reconstruction viewpoint planning method according to claim 2, wherein the camera parameters in the step S10 include camera intrinsic parameters and camera extrinsic parameters, the intrinsic parameters being

K_i = [ f 0 c_x ; 0 f c_y ; 0 0 1 ],

where f is the focal length and (c_x, c_y) is the principal point, and the camera extrinsic parameters being the rotation R_i and the translation t_i.
4. The scene-oriented three-dimensional reconstruction viewpoint planning method according to claim 2, wherein the step S10 further includes performing bundle adjustment on the sparse point cloud model to optimize the parameters of all cameras and the positions of the three-dimensional point cloud.
5. The scene-oriented three-dimensional reconstruction viewpoint planning method according to claim 2, wherein the step S11 further includes calculating the projection depth maps D = {D_1, D_2, D_3, …, D_n} of the initial three-dimensional model S_L at each input view.
6. The scene-oriented three-dimensional reconstruction viewpoint planning method according to claim 1, wherein the step S2 includes:
S20, using the neighborhood color image set {I_1^j, I_2^j, …, I_{n_1}^j} of view j and the corresponding depth maps {D_1^j, D_2^j, …, D_{n_1}^j}: for any pixel p ∈ I_j, computing the color value I_{t→j}(p) projected from each neighborhood view t onto p, and averaging the color values of all neighborhood views,

Î_j(p) = (1/n_1) Σ_{t=1}^{n_1} I_{t→j}(p),

where n_1 is the number of neighborhood views and I_{t→j}(p) is the value of pixel p projected via neighborhood view t;
S21, computing, by step S20, the synthesized image Î_j at view j;
S22, calculating the hole area L_c from the number of missing-region pixels contained in the synthesized image Î_j, calculating the error L_d from the difference between the synthesized image Î_j and the original observed image I_j at view j, and computing the quality evaluation score c_j = L_c + λ·L_d, where λ is a user-defined constant;
S23, sorting the quality evaluation scores c_j, and taking the views whose quality evaluation score c_j exceeds the first set threshold x as redundant viewpoints.
7. The scene-oriented three-dimensional reconstruction viewpoint planning method according to claim 1, wherein the step S3 includes:
S30, setting the closest and farthest distances between the camera and the scene, and uniformly sampling new viewpoints m around the existing viewpoints;
S31, using the neighborhood image set {I_1^m, I_2^m, …, I_{n_2}^m} of view m and the corresponding depth maps {D_1^m, D_2^m, …, D_{n_2}^m}: for any pixel p ∈ I_m, computing the color value I_{t→m}(p) projected from each neighborhood view t onto p, and averaging the color values of all neighborhood views,

Î_m(p) = (1/n_2) Σ_{t=1}^{n_2} I_{t→m}(p),

where n_2 is the number of neighborhood views and I_{t→m}(p) is the value of pixel p projected via neighborhood view t; performing this operation for each pixel synthesizes the image Î_m at the virtual view m;
S32, calculating the hole area h_m from the number of missing-region pixels contained in the synthesized image Î_m; if the hole area h_m exceeds the second set threshold T, taking the virtual viewpoint m as a candidate viewpoint and adding an actual photograph at the candidate viewpoint position.
8. The scene-oriented three-dimensional reconstruction viewpoint planning method according to claim 1, wherein the step S4 includes:
S40, removing the redundant viewpoints from the input image set and adding the actual photographs to form a new image set I′ = {I_1, I_2, I_3, …, I_g} ∪ {J_1, J_2, …, J_f}, where {I_1, I_2, I_3, …, I_g} is the input image set with the redundant viewpoints removed and {J_1, J_2, …, J_f} are the newly added actual photographs;
S41, registering the newly added actual photographs {J_1, J_2, …, J_f} to the initial three-dimensional reconstruction model S_L to obtain the poses of {J_1, J_2, …, J_f} relative to the model S_L: for each image J_i ∈ {J_1, J_2, …, J_f}, assuming its intrinsic parameters are known, extracting feature points in J_i and matching them with the feature points of {I_1, I_2, I_3, …, I_g}, performing a BA operation on the matching result to optimize the extrinsic parameters of J_i, including the camera rotation R_i and the camera translation t_i, and then performing bundle adjustment on image J_i together with the image set {I_1, I_2, I_3, …, I_g}, optimizing the camera translation t_i and camera rotation R_i of image J_i by minimizing the reprojection error;
S42, executing dense three-dimensional reconstruction on the new image set I′ = {I_1, I_2, I_3, …, I_g} ∪ {J_1, J_2, …, J_f}, computing the depth map of each input viewpoint, and obtaining the high-quality three-dimensional model S_H by fusing the depth maps.
9. A system for implementing the scene-oriented three-dimensional reconstruction viewpoint planning method according to any one of claims 1 to 8, comprising:
a three-dimensional reconstruction module, used for performing the initial three-dimensional reconstruction of a scene from the multi-view image set of the scene to form an initial three-dimensional reconstruction model;
a redundant viewpoint calculation module, used for taking each observation view of the multi-view image set as a view under test, computing a synthesized image at the view under test from the neighborhood color images and the corresponding depth maps, computing the hole area of the synthesized image and the error between the synthesized image and the observed image at the view under test, computing a quality evaluation score from the hole area and the error, and taking images whose quality evaluation score exceeds a first set threshold as redundant viewpoints;
a candidate viewpoint calculation module, used for taking virtual viewpoints whose hole area exceeds a second set threshold as candidate viewpoints and adding actual photographs at the candidate viewpoint positions;
a registration reconstruction module, used for excluding the redundant viewpoints, registering the newly added actual photographs into the initial three-dimensional reconstruction model to optimize the images and camera parameters, executing dense three-dimensional reconstruction on the new image set, computing the depth map of each input viewpoint, and obtaining a high-quality three-dimensional model by fusing the depth maps;
wherein the three-dimensional reconstruction module, the redundant viewpoint calculation module, the candidate viewpoint calculation module and the registration reconstruction module are connected in sequence.
CN202110877752.6A 2021-08-01 2021-08-01 Scene-oriented three-dimensional reconstruction viewpoint planning method and system Pending CN113724369A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110877752.6A CN113724369A (en) 2021-08-01 2021-08-01 Scene-oriented three-dimensional reconstruction viewpoint planning method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110877752.6A CN113724369A (en) 2021-08-01 2021-08-01 Scene-oriented three-dimensional reconstruction viewpoint planning method and system

Publications (1)

Publication Number Publication Date
CN113724369A true CN113724369A (en) 2021-11-30

Family

ID=78674638

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110877752.6A Pending CN113724369A (en) 2021-08-01 2021-08-01 Scene-oriented three-dimensional reconstruction viewpoint planning method and system

Country Status (1)

Country Link
CN (1) CN113724369A (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111105460A (en) * 2019-12-26 2020-05-05 电子科技大学 RGB-D camera pose estimation method for indoor scene three-dimensional reconstruction
CN111314688A (en) * 2020-03-16 2020-06-19 北京迈格威科技有限公司 Disparity map hole filling method and device and electronic system


Similar Documents

Publication Publication Date Title
CN112435325B (en) VI-SLAM and depth estimation network-based unmanned aerial vehicle scene density reconstruction method
US10540806B2 (en) Systems and methods for depth-assisted perspective distortion correction
CN112367514B (en) Three-dimensional scene construction method, device and system and storage medium
KR102013978B1 (en) Method and apparatus for fusion of images
EP2328125B1 (en) Image splicing method and device
CN111968129A (en) Instant positioning and map construction system and method with semantic perception
US11176704B2 (en) Object pose estimation in visual data
JP6921686B2 (en) Generator, generation method, and program
US11783443B2 (en) Extraction of standardized images from a single view or multi-view capture
KR101804205B1 (en) Apparatus and method for image processing
CN112434709A (en) Aerial survey method and system based on real-time dense three-dimensional point cloud and DSM of unmanned aerial vehicle
CN206563985U (en) 3-D imaging system
EP1999685B1 (en) Method of and system for storing 3d information
JP2002524937A (en) Method and apparatus for synthesizing a high resolution image using a high resolution camera and a low resolution camera
CN115330594A (en) Target rapid identification and calibration method based on unmanned aerial vehicle oblique photography 3D model
Gao et al. Multi-source data-based 3D digital preservation of largescale ancient chinese architecture: A case report
CN107330856B (en) Panoramic imaging method based on projective transformation and thin plate spline
CN111523411B (en) Synthetic aperture imaging method based on semantic patching
Zhou et al. MR video fusion: interactive 3D modeling and stitching on wide-baseline videos
CN113724369A (en) Scene-oriented three-dimensional reconstruction viewpoint planning method and system
KR101718309B1 (en) The method of auto stitching and panoramic image genertation using color histogram
CN114913064A (en) Large parallax image splicing method and device based on structure keeping and many-to-many matching
Zhao et al. 3dfill: Reference-guided image inpainting by self-supervised 3d image alignment
US20230222635A1 (en) Cloud based intelligent image enhancement system
CN117196943A (en) Model architecture-based 3D image simulation splicing method

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
AD01: Patent right deemed abandoned (effective date of abandoning: 20240419)