CN113345084A - Three-dimensional modeling system and three-dimensional modeling method


Info

Publication number
CN113345084A
Authority
CN
China
Prior art keywords
ground
aerial
video
dimensional
point cloud
Prior art date
Legal status
Granted
Application number
CN202110724089.6A
Other languages
Chinese (zh)
Other versions
CN113345084B (en)
Inventor
常远
段龙梅
徐彤
郭春阳
赵先洋
吕大邦
尹秋文
刘凡
Current Assignee
Jilin Traffic Planning And Design Institute
Original Assignee
Jilin Traffic Planning And Design Institute
Priority date
Filing date
Publication date
Application filed by Jilin Traffic Planning And Design Institute
Priority to CN202110724089.6A
Publication of CN113345084A
Application granted
Publication of CN113345084B
Status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/22 - Matching criteria, e.g. proximity measures


Abstract

The invention provides a three-dimensional modeling system and a three-dimensional modeling method. The system comprises an aerial video acquisition unit, a ground video acquisition unit and a processor unit. The method comprises the following steps: S1, obtaining an aerial panoramic video and a ground panoramic video of the area to be modeled; S2, performing three-dimensional modeling according to the aerial panoramic video and the ground panoramic video to obtain a three-dimensional live-action model. The modeling method increases the multi-view coverage of the area and of the objects in it along two dimensions, field angle and scene continuity, and completes as much scene information acquisition as possible within a single acquisition task. By effectively fusing aerial panoramic photogrammetry with ground mobile panoramic three-dimensional reconstruction, the method obtains an accurate geometric model and omnidirectional texture information of the target area. The close-range panoramic data also compensate for the weakness of the aerial panorama in covering side and bottom scenes, which reduces the difficulty of data acquisition.

Description

Three-dimensional modeling system and three-dimensional modeling method
Technical Field
The invention relates to the field of three-dimensional modeling, in particular to a three-dimensional modeling system and a three-dimensional modeling method.
Background
With the rapid development of cities in China, the trends of digital cities and smart cities positively influence every industry in society, and the fine expression of the spatial information of cities and surface objects can provide effective auxiliary services for management decisions in related fields. A three-dimensional model can qualitatively reflect the shape, color, texture and other appearance of an object, and can also quantitatively describe geometric information such as position, length, area and volume of the modeled object; it has therefore attracted wide attention. Three-dimensional models are constructed mainly in two ways: modeling based on graphics and modeling based on real-scene sensors. Compared with drawing a three-dimensional virtual model from two-dimensional graphics, a three-dimensional model constructed from real-scene sensors restores the real scene information of the target object more truly and objectively.
Live-action three-dimensional reconstruction is a research hotspot in the field of photogrammetry, and in recent years the development of technologies such as oblique photogrammetry, ground mobile measurement and ground/airborne laser scanning has provided diversified data sources and processing modes for it. Because precision laser scanners are expensive and relatively inconvenient to operate, oblique photogrammetry and ground mobile measurement have become the more common modeling approaches.
In oblique photogrammetry, multiple sensors acquire images of a target from different angles, simultaneously obtaining the top information, side profiles and textures of the ground and of the objects in the area, and products such as three-dimensional real-scene models are generated in combination with office processing. Oblique photogrammetry can comprehensively sense the complex scene of the target object, and the degree of automation and the modeling efficiency of the processing workflow are also improved. However, for ground objects such as urban buildings, oblique photography has difficulty acquiring complete information of all side faces near the ground, and the modeling result often suffers from blurred textures and distorted geometry.
A ground mobile measurement system carries multiple sensors such as multi-azimuth cameras, shoots continuously around a scene in a vehicle-mounted or pedestrian collection mode, and obtains texture information of the ground and of the side faces of ground objects. Ground mobile measurement can make up for the insufficient information obtained by oblique photography in densely built-up areas and provides image data with high definition and large overlap. However, a ground mobile measurement system cannot simultaneously acquire the top information of the ground and of objects, and on its own it still cannot achieve complete reconstruction of a scene.
Disclosure of Invention
The present invention provides a three-dimensional modeling system and a three-dimensional modeling method for solving the above problems.
In order to achieve the purpose, the invention adopts the following specific technical scheme:
a three-dimensional modeling system, comprising: the system comprises an aerial video acquisition unit for acquiring aerial panoramic videos, a ground video acquisition unit for acquiring ground panoramic videos and a processor unit; the processor unit is used for carrying out three-dimensional modeling according to the aerial panoramic video and the ground panoramic video to obtain a three-dimensional live-action model;
the aerial video acquisition unit comprises an unmanned aerial vehicle for moving in the air and an aerial panoramic camera for shooting aerial panoramic video; the unmanned aerial vehicle is connected with the aerial panoramic camera to drive the aerial panoramic camera to collect aerial mobile data;
the ground video acquisition unit comprises a ground moving device for ground movement and a ground panoramic camera for shooting ground panoramic video; the ground moving device is connected with the ground panoramic camera and drives the ground panoramic camera to acquire ground moving data;
the processor unit comprises a video splicing module for video splicing, a ground video frame extraction module for extracting a ground key frame, a ground dense point cloud generation module for generating a ground dense point cloud, an aerial video frame extraction module for extracting an aerial key frame, an aerial dense point cloud generation module for generating an aerial dense point cloud, a point cloud fusion module for generating an air-ground three-dimensional point cloud, a three-dimensional grid construction module for three-dimensional grid model construction, a mapping image generation module for generating a mapping image set, an optimal mapping image selection module for selecting an optimal mapping image, and a three-dimensional live-action model generation module for generating a three-dimensional live-action model.
A three-dimensional modeling method comprising the steps of:
s1, simultaneously acquiring videos through an aerial video acquisition unit and a ground video acquisition unit to obtain an aerial panoramic video and a ground panoramic video of the area to be modeled;
and S2, performing three-dimensional modeling according to the aerial panoramic video and the ground panoramic video through the processor unit to obtain a three-dimensional live-action model.
Preferably, before step S1, the method further includes the following steps:
s0, uniformly distributing mark point groups on the ground and the object side surface of the area to be modeled, establishing a coordinate system, and measuring the real three-dimensional coordinates of all mark points contained in the mark point groups.
Preferably, step S2 includes the steps of:
s201, splicing the aerial panoramic video and the ground panoramic video by using a spherical panoramic model through a video splicing module of a processor to obtain an air-ground panoramic video;
s202, performing video frame self-adaptive extraction on the ground panoramic video through a ground video frame extraction module of the processor to obtain a ground key frame group for three-dimensional reconstruction; performing aerial triangulation on the ground key frame group according to the mark point group through a ground dense point cloud generating module of the processor, acquiring a ground multi-view image set according to the ground key frame group, and performing stereo reconstruction according to the ground multi-view image set to obtain ground dense point cloud;
performing video frame self-adaptive extraction on the aerial panoramic video through an aerial video frame extraction module of the processor to obtain an aerial key frame group for three-dimensional reconstruction; performing aerial triangulation on the aerial key frame group according to the measurement adjustment of the mark points through an aerial dense point cloud generating module of the processor, acquiring an aerial multi-view image set according to the aerial key frame group, and performing three-dimensional reconstruction according to the aerial multi-view image set to obtain aerial dense point cloud;
s203, point cloud registration is carried out on the ground dense point cloud and the air dense point cloud through a point cloud fusion module of the processor, and the ground dense point cloud and the air dense point cloud after registration are fused to obtain an air-ground three-dimensional point cloud;
s204, constructing a three-dimensional grid model according to the air-ground three-dimensional point cloud through a three-dimensional grid construction module of the processor; generating a mapping image for each grid patch of the three-dimensional grid model through a texture image generation module of the processor to obtain a mapping candidate image set of each grid patch; selecting an optimal mapping image from the mapping candidate image set through an optimal mapping image selection module of the processor; and performing texture mapping according to the optimal mapping image through a three-dimensional live-action model generation module of the processor to generate a three-dimensional live-action model.
Preferably, in step S202, the adaptive video frame extraction of the ground panoramic video includes the following steps:
comparing the similarity of adjacent video frames of the ground panoramic video through a ground video frame extraction module, and removing the staying fragment video frames;
acquiring a left-view slice and a right-view slice of each frame of video frame in the ground panoramic video, calculating according to the relative horizontal displacement between adjacent side-view slices, and removing the video frame of the rotating segment;
setting a first frame video frame of the ground panoramic video after removal as a first ground key frame, calculating the overlapping rate between a subsequent video frame and a previous key frame, comparing the overlapping rate with a preset overlapping rate threshold value, selecting the video frame of which the overlapping rate with the previous key frame in the video frames meets the preset overlapping rate threshold value as a current key frame until all the video frames are traversed to obtain a ground key frame group;
the method for acquiring the ground multi-view image set according to the ground key frame group comprises the following steps:
and performing optimal intersection visual angle slicing on adjacent key frames according to the left-view slice and the right-view slice of each frame of the ground key frame in the ground key frame group to obtain a ground multi-view image set.
Preferably, in step S202, the adaptive video frame extraction for the aerial panoramic video includes the following steps:
comparing the similarity of adjacent video frames of the aerial panoramic video through an aerial video frame extraction module, and removing the staying fragment video frames;
obtaining downward-looking slices of each frame of video frame in the aerial panoramic video, calculating according to the change of barycentric coordinates between adjacent downward-looking slices, and removing rotating segment video frames;
setting a first video frame of the removed aerial panoramic video as a first aerial key frame, calculating the overlapping rate between a subsequent video frame and a previous key frame, comparing the overlapping rate with a preset overlapping rate threshold value, and selecting the video frame of which the overlapping rate with the previous key frame in the video frames meets the preset overlapping rate threshold value as a current key frame until all the video frames are traversed to obtain an aerial key frame group;
the method for acquiring the aerial multi-view image set according to the aerial key frame group comprises the following steps:
determining the acquisition direction in the range of adjacent air key frames in the air key frame group according to the air key frame position calculated by air triangulation;
performing linear projection of two visual angles perpendicular to the acquisition direction on each frame of the air key frame to obtain a left-view slice and a right-view slice of each frame of the air key frame;
and performing optimal intersection visual angle slicing on adjacent key frames according to the left-view slice and the right-view slice of each frame of the air key frame to obtain an air multi-view image set.
Preferably, in step S203, the point cloud registration of the ground dense point cloud and the aerial dense point cloud includes the following steps:
s2031, marking corresponding mark points in the ground dense point cloud and the aerial dense point cloud by combining the mark point group and the natural mark points of the region to be modeled through a point cloud fusion module;
s2032, registering the ground dense point cloud and the aerial dense point cloud by an iterative closest point method, and calculating the three-dimensional point location root mean square error and the registration sampling point distance root mean square error of the registered landmark points;
s2033, if the three-dimensional point-position root mean square error and the registration sampling-point distance root mean square error of the registered mark points are both within the preset error range after registration, deriving the rigid body transformation used in registration, and fusing the ground dense point cloud and the aerial dense point cloud according to the rigid body transformation to obtain the air-ground three-dimensional point cloud;
and if the root mean square error of the three-dimensional point positions of the landmark points after the registration or the root mean square error of the distances between the registration sampling points exceeds the preset error range, returning to the step S202, and performing three-dimensional modeling again.
Preferably, in step S204, constructing the three-dimensional mesh model comprises the steps of:
s2041, thinning the air-ground three-dimensional point cloud through a three-dimensional grid construction module by adopting a Poisson disc sampling algorithm;
s2042, reconstructing the thinned air-ground three-dimensional point cloud by using a Poisson reconstruction method to obtain a three-dimensional grid model.
Preferably, in step S204, obtaining the mapping candidate image set of each mesh patch includes the following steps:
s2043, calculating a triangular surface normal vector, an acquisition direction and a projection area of each grid surface patch through a texture image generation module, and determining a mapping visual angle of each grid surface patch;
s2044, modeling the adjacency relation between the grid surface patches through a Markov random field, and calculating the optimal solution of the Markov random field combination through a graph cut method;
s2045, obtaining a mapping candidate image set of each grid patch according to the mapping view angle of the grid patch and the optimal solution of the Markov random field combination.
Preferably, in step S204, the selecting the best mapping image from the mapping candidate image set includes the following steps:
s2046, performing Tenengrad evaluation function calculation on the mapping candidate image of each grid patch through an optimal mapping image selection module to obtain the definition evaluation of each mapping candidate image, and removing the mapping candidate images with the fuzziness higher than a preset fuzziness threshold;
s2047, performing brightness detection on each mapping candidate image in the removed mapping candidate image set, and selecting the mapping candidate image with the minimum brightness abnormality indication parameter value as the optimal mapping image.
The invention can obtain the following technical effects:
(1) compared with oblique photography and ground mobile measurement, the panoramic video provides multi-view coverage of the area and of the objects in it along two dimensions, field angle and scene continuity; scene information acquisition is completed as far as possible within a single acquisition task of one unmanned aerial vehicle and one ground moving device; three-dimensional scene reconstruction is performed by combining the aerial and ground panoramic videos, achieving efficient and fine modeling of the ground and objects and providing an effective geographic-information data basis for smart-city perception, intelligent government decision-making and real-time emergency management;
(2) aerial panoramic photogrammetry is effectively fused with ground mobile panoramic three-dimensional reconstruction, air-ground collaborative geospatial information is acquired and perceived, an accurate geometric model and omnidirectional texture information of the ground and objects in the target area are obtained, and the three-dimensional appearance of the scene is truly reconstructed;
(3) the weakness of the aerial panorama in covering side and bottom scenes is compensated by the close-range panoramic data, the advantage of panoramic data in covering the scene in all directions in the space-time dimension is fully exploited, and the difficulty of data acquisition is further reduced; the degree of automation and the efficiency of preprocessing, aerial triangulation, multi-view dense reconstruction, model texture mapping and the other stages of three-dimensional model production are improved, so that a more accurate reconstruction result is achieved at a lower cost in labor and materials.
Drawings
FIG. 1 is a flow chart of a three-dimensional modeling method according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail below with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not to be construed as limiting the invention.
The three-dimensional modeling system provided by the embodiment of the invention comprises: an aerial video acquisition unit for acquiring an aerial panoramic video, a ground video acquisition unit for acquiring a ground panoramic video, and a processor unit, wherein the aerial panoramic video embodies the top information and upper side information of the area to be modeled; the processor unit is used for performing three-dimensional modeling according to the aerial panoramic video and the ground panoramic video, processing both videos and extracting the scene information of the area to be modeled to obtain a three-dimensional live-action model;
the aerial video acquisition unit comprises an unmanned aerial vehicle for moving in the air and an aerial panoramic camera for shooting the aerial panoramic video; the unmanned aerial vehicle is connected with the aerial panoramic camera and drives the aerial panoramic camera to collect aerial mobile data; different lenses of the aerial panoramic camera synchronously acquire vertical-view images and horizontal-view images; for example, with a six-lens panoramic camera, one vertically downward lens acquires bottom orthographic-view images and the other five horizontally oriented lenses acquire horizontal-direction images;
the ground video acquisition unit comprises a ground moving device for ground movement and a ground panoramic camera for shooting the ground panoramic video; the ground moving device is connected with the ground panoramic camera and drives the ground panoramic camera to acquire ground mobile data; different lenses of the ground panoramic camera synchronously acquire vertical-view images and horizontal-view images; for example, with a six-lens panoramic camera, one vertically upward lens acquires top-view images and the other five horizontally oriented lenses acquire horizontal-direction images;
the processor unit comprises a video splicing module for video splicing, a ground video frame extraction module for extracting a ground key frame, a ground dense point cloud generation module for generating a ground dense point cloud, an aerial video frame extraction module for extracting an aerial key frame, an aerial dense point cloud generation module for generating an aerial dense point cloud, a point cloud fusion module for generating an air-ground three-dimensional point cloud, a three-dimensional grid construction module for three-dimensional grid model construction, a mapping image generation module for generating a mapping image set, an optimal mapping image selection module for selecting an optimal mapping image, and a three-dimensional live-action model generation module for generating a three-dimensional live-action model; each module respectively carries out different processing, ensures that each step is carried out in sequence, does not generate interference, and reduces errors generated in the processing.
The above details describe the structure of the three-dimensional modeling system provided by the present invention, and in correspondence with the system, the present invention further provides a method for performing three-dimensional modeling on a region to be modeled by using the three-dimensional modeling system.
As shown in fig. 1, the three-dimensional modeling method provided in the embodiment of the present invention includes the following steps:
s1, simultaneously acquiring videos through an aerial video acquisition unit and a ground video acquisition unit to obtain an aerial panoramic video and a ground panoramic video of the area to be modeled;
the aerial panoramic video reflects the top information and the upper side information of the area to be modeled, and the ground panoramic video reflects the near-ground information and the lower side information of the area to be modeled;
s2, performing three-dimensional modeling according to the aerial panoramic video and the ground panoramic video through the processor unit to obtain a three-dimensional live-action model;
and extracting information of the area to be modeled according to the aerial panoramic video and the ground panoramic video, processing the information, and using the processed information data for three-dimensional modeling to obtain a three-dimensional live-action model.
In an embodiment of the present invention, before step S1, the method further includes the following steps:
s0, uniformly distributing mark point groups on the ground and the object side surface of the area to be modeled, establishing a coordinate system, and measuring the real three-dimensional coordinates of all mark points contained in the mark point groups;
and establishing a conversion relation between the coordinates in the image and the real three-dimensional coordinates through the mark point group.
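As a hedged illustration of this step (not taken from the patent), the sketch below fits a rigid rotation-plus-translation between matched marker points in model coordinates and their surveyed real-world coordinates using the standard SVD-based (Kabsch) method; the function name and the choice of a purely rigid fit are assumptions.

```python
import numpy as np

def fit_rigid_transform(model_pts: np.ndarray, world_pts: np.ndarray):
    """Estimate R, t so that world ≈ R @ model + t from matched marker points.

    model_pts, world_pts: (N, 3) arrays of corresponding 3-D coordinates."""
    mu_m = model_pts.mean(axis=0)
    mu_w = world_pts.mean(axis=0)
    H = (model_pts - mu_m).T @ (world_pts - mu_w)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_w - R @ mu_m
    return R, t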
In one embodiment of the present invention, step S2 includes the following steps:
s201, acquiring horizontal 360-degree and vertical 180-degree panoramic images of each frame by using a spherical panoramic model through a video splicing module of a processor, splicing the air panoramic video and the ground panoramic video, and obtaining an air-ground panoramic video completely covering a scene in a space-time dimension;
s202, performing video frame self-adaptive extraction on the ground panoramic video through a ground video frame extraction module of the processor, removing redundant video frames containing repeated information, obtaining a ground key frame group for three-dimensional reconstruction, and reducing the data volume of subsequent processing; performing aerial triangulation on the ground key frame group according to the mark point group through a ground dense point cloud generating module of the processor, acquiring a ground multi-view image set according to the ground key frame group, and performing stereo reconstruction according to the ground multi-view image set to obtain ground dense point cloud, wherein the ground dense point cloud reflects the characteristics of a ground panoramic video;
performing video frame self-adaptive extraction on the aerial panoramic video through an aerial video frame extraction module of the processor, and removing redundant video frames containing repeated information to obtain an aerial key frame group for three-dimensional reconstruction; performing aerial triangulation on the aerial key frame group according to the measurement adjustment of the mark points through an aerial dense point cloud generating module of the processor, acquiring an aerial multi-view image set according to the aerial key frame group, and performing stereo reconstruction according to the aerial multi-view image set to obtain aerial dense point cloud which reflects the characteristics of an aerial panoramic video;
s203, point cloud registration is carried out on the ground dense point cloud and the air dense point cloud through a point cloud fusion module of the processor, registration parameters enabling the same feature points in the air panoramic video and the ground panoramic video to be coincident are obtained through the point cloud registration, the ground dense point cloud and the air dense point cloud after registration are fused according to the registration parameters, and an air-ground three-dimensional point cloud is obtained and reflects the features of a region to be modeled, which are obtained according to the air panoramic video and the ground panoramic video;
s204, constructing a three-dimensional grid model according to the air-ground three-dimensional point cloud through a three-dimensional grid construction module of the processor; generating a mapping image for each grid patch of the three-dimensional grid model through a texture image generation module of the processor based on a geometric visual angle, wherein each grid patch has a plurality of mapping images in the aerial panoramic video and the ground panoramic video, and primarily screening the mapping images to obtain a mapping candidate image set of each grid patch; selecting an optimal mapping image from a mapping candidate image set through an optimal mapping image selecting module of a processor based on radiation quality evaluation indexes such as image definition, color and the like; and performing texture mapping on the three-dimensional grid model according to the optimal mapping image through a three-dimensional live-action model generation module of the processor to generate a three-dimensional live-action model of the region to be modeled.
In one embodiment of the present invention, in step S202, the adaptive video frame extraction for the ground panoramic video includes the following steps:
comparing the similarity of adjacent video frames of the ground panoramic video through a ground video frame extraction module, and removing the staying fragment video frames;
performing rapid dwell-frame elimination according to the Structural Similarity (SSIM) index; the SSIM structural similarity measure is obtained by formula (2-1), where (x, y) are the two frames currently being compared;
SSIM(x,y) = [B(x,y)]^α · [C(x,y)]^β · [S(x,y)]^γ (2-1)
wherein:
B(x,y) = (2·μ_x·μ_y + c_1) / (μ_x² + μ_y² + c_1)
C(x,y) = (2·σ_x·σ_y + c_2) / (σ_x² + σ_y² + c_2)
S(x,y) = (σ_xy + c_3) / (σ_x·σ_y + c_3)
where B(x,y) is the luminance difference, C(x,y) is the contrast difference and S(x,y) is the structural difference; μ_x and μ_y are the means of x and y, σ_x and σ_y are the standard deviations of the images x and y, and σ_xy is the covariance of the x and y images; c_1, c_2 and c_3 are constants that keep the denominators from being 0, and α, β and γ are coefficients greater than 0.
For the initial frames with high redundancy, similarity comparison is not performed directly on adjacent frames in sequence; instead a frame interval k_inter is preset (k_inter = 5). If the similarity of two frames separated by k_inter exceeds the upper similarity threshold sim_MAX (0.94), there is no motion between the two frames, all intermediate frames are eliminated, and the frame interval k_inter is expanded before comparing again. If the similarity is lower than the lower threshold sim_MIN (0.90), the interval between the two images is too large, so the frame interval k_inter is reduced and the comparison is repeated until the similarity is moderate; the new video frame is then selected as a key frame, and dwell screening continues on the subsequent video frames.
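A minimal sketch of this adaptive dwell-frame screening, using scikit-image's SSIM and the interval and threshold values quoted above (k_inter = 5, sim_MAX = 0.94, sim_MIN = 0.90); the exact stepping logic is our simplification, not the patent's reference implementation.

```python
import cv2
from skimage.metrics import structural_similarity as ssim

K_INTER, SIM_MAX, SIM_MIN = 5, 0.94, 0.90   # interval and thresholds quoted in the text

def to_gray(frame):
    return cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

def screen_dwell_frames(frames):
    """Return the indices of frames kept after removing dwell (stationary) segments."""
    kept = [0]
    i, step = 0, K_INTER
    while i + step < len(frames):
        s = ssim(to_gray(frames[i]), to_gray(frames[i + step]), data_range=255)
        if s > SIM_MAX:
            step += K_INTER          # no motion over the interval: drop intermediates, widen
        elif s < SIM_MIN and step > 1:
            step -= 1                # interval too large: shrink and compare again
        else:
            i += step                # moderate similarity: accept as the next kept frame
            kept.append(i)
            step = K_INTER
    return kept
```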
Acquiring a left-view slice and a right-view slice of each video frame in the ground panoramic video, calculating the relative horizontal displacement between adjacent side-view slices, and removing the video frames of rotating segments:
The side-view slice images are selected and the relative horizontal displacement between each pair of adjacent slices is calculated. Starting from the first image, the displacement is judged: if three consecutive images have negative relative displacement, recording starts with the first negative image; if three consecutive displacements are positive and the two following images are not negative, recording stops with the last negative image. The recorded images form a suspected rotating segment.
A judgment threshold is set, the left and right endpoint side-view slice indices SL and SR of the suspected rotating segment are found and recorded, together with the segment length around = SR - SL; the similarity between the left endpoint SL and the slice of the right-end extended region (SR + around), and the similarity between the right endpoint SR and the slice of the left-end extended region (SL - around), are calculated. If the similarity variance is greater than the threshold (0.04), the segment is judged to be an abnormal rotating segment, its left endpoint is recorded as TL and its right endpoint as TR, and the images between TL and TR are marked and removed.
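A hedged sketch of the suspected-rotating-segment search, operating on a precomputed list of signed horizontal displacements between adjacent side-view slices; the run-length reading of the rule and the handling of a segment still open at the end are our assumptions, and the similarity-variance confirmation step is not shown.

```python
def find_suspected_rotation_segments(displacements):
    """Scan signed horizontal displacements between adjacent side-view slices and
    return (start, end) index pairs of suspected rotating segments (inclusive)."""
    segments, start = [], None
    neg_run = pos_run = 0
    last_neg = None
    for i, d in enumerate(displacements):
        if d < 0:
            neg_run += 1
            pos_run = 0
            last_neg = i
            if start is None and neg_run >= 3:
                start = i - neg_run + 1      # first negative frame of the run
        else:
            pos_run += 1
            neg_run = 0
            if start is not None and pos_run >= 3:
                segments.append((start, last_neg))
                start = None
    if start is not None and last_neg is not None:
        segments.append((start, last_neg))   # segment still open at the end
    return segments
```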
Setting a first frame video frame of the ground panoramic video after removal as a first ground key frame, calculating the overlapping rate between a subsequent video frame and a previous key frame, comparing the overlapping rate with a preset overlapping rate threshold value, selecting the video frame of which the overlapping rate with the previous key frame in the video frames meets the preset overlapping rate threshold value as a current key frame until all the video frames are traversed to obtain a ground key frame group;
on the basis of ORB matching, an interframe homography transformation matrix H can be established through a RANSAC outlier elimination strategy,
[x_2, y_2, 1]^T ∝ H · [x_1, y_1, 1]^T
where H is the 3×3 homography matrix and x_1, y_1 and x_2, y_2 respectively represent the coordinates of corresponding points in the two video frames;
the homography matrix H reflects the homography transformation of the matching point pairs between the two images. A square window with vertices (x_a, y_a), (x_b, y_b), (x_c, y_c), (x_d, y_d), centred on the image principal point, is defined in the previous frame; through H its vertices are transformed to the point locations (x_a', y_a'), (x_b', y_b'), (x_c', y_c'), (x_d', y_d') in the next frame. The window-centre coordinates before and after the transformation, (x_cen, y_cen) and (x_cen', y_cen'), are then obtained and the displacement change (Δx_cen, Δy_cen) is calculated;
The baseline Δbase is estimated by formula (2-6), i.e. the L2 norm of the x- and y-direction displacements, and the overlap rate overlap is calculated by formula (2-7), where L is the length of the multi-view slice in pixels:
Δbase = sqrt(Δx_cen² + Δy_cen²) (2-6)
overlap = (L - Δbase) / L (2-7)
and setting an upper limit threshold and a lower limit threshold for the overlap rate overlap, and selecting a video frame which meets the overlap requirement with the previous key frame interval in the video frames as a current key frame until the frame sequence traversal is finished.
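An illustrative sketch of the homography-based overlap estimate between two adjacent slices using OpenCV ORB matching with RANSAC; the window half-size and the reading of formula (2-7) as overlap = (L - Δbase)/L are assumptions.

```python
import cv2
import numpy as np

def overlap_between(img_prev, img_next, window_half=200):
    """Estimate the overlap rate between two adjacent slices via an ORB + RANSAC
    homography: baseline = L2 norm of the window-centre shift (2-6),
    overlap = (L - baseline) / L with L the slice length in pixels (2-7)."""
    g1 = cv2.cvtColor(img_prev, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(img_next, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(g1, None)
    k2, d2 = orb.detectAndCompute(g2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)    # RANSAC outlier rejection

    h, w = g1.shape
    cx, cy, r = w / 2.0, h / 2.0, float(window_half)
    window = np.float32([[cx - r, cy - r], [cx + r, cy - r],
                         [cx + r, cy + r], [cx - r, cy + r]]).reshape(-1, 1, 2)
    warped = cv2.perspectiveTransform(window, H).reshape(-1, 2)
    shift = warped.mean(axis=0) - np.array([cx, cy])
    baseline = float(np.hypot(shift[0], shift[1]))          # (2-6)
    return (w - baseline) / w                               # (2-7), L taken as the slice width
```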
The method for acquiring the ground multi-view image set according to the ground key frame group comprises the following steps:
and performing optimal intersection view-angle slicing on adjacent key frames according to the left-view slice and the right-view slice of each ground key frame in the ground key frame group, to obtain a ground multi-view image set whose overlapping and intersecting views cover the complete scene.
In one embodiment of the present invention, in step S202, the adaptive extraction of video frames from the aerial panoramic video includes the following steps:
comparing the similarity of adjacent video frames of the aerial panoramic video through an aerial video frame extraction module, and removing the staying fragment video frames;
the method for removing the staying segment video frame of the air panoramic video is the same as the method for removing the staying segment video frame of the ground panoramic video, and the description is omitted here.
Obtaining downward-looking slices of each frame of video frame in the aerial panoramic video, calculating according to the change of barycentric coordinates between adjacent downward-looking slices, and removing rotating segment video frames;
judging the rotation state through the homography matrix: the homography matrix H reflects the homography transformation of the matching point pairs between two images, and image rotation is judged from the barycentre shift of a square interest window on the basis of the homography. A square window with vertices (x_a, y_a), (x_b, y_b), (x_c, y_c), (x_d, y_d) is defined in one image, and the corresponding matching points (x_a', y_a'), (x_b', y_b'), (x_c', y_c'), (x_d', y_d') are calculated according to H; the barycentric coordinates (x_cen, y_cen) and (x_cen', y_cen') of the two windows and the change (Δx_cen, Δy_cen) between them are calculated, and when this change is small the frame pair is in an in-place rotation (stationary) state and is eliminated.
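A short sketch of this barycentre-shift rotation check, assuming the homography H has already been estimated as in the previous sketch; the pixel threshold for declaring an in-place rotation is an illustrative value.

```python
import cv2
import numpy as np

def is_stationary_rotation(H, img_shape, window_half=200, shift_thresh=2.0):
    """Flag a frame pair as in-place rotation when the homography moves the
    barycentre of a principal-point-centred square window by only a few pixels."""
    h, w = img_shape[:2]
    cx, cy, r = w / 2.0, h / 2.0, float(window_half)
    window = np.float32([[cx - r, cy - r], [cx + r, cy - r],
                         [cx + r, cy + r], [cx - r, cy + r]]).reshape(-1, 1, 2)
    warped = cv2.perspectiveTransform(window, H).reshape(-1, 2)
    shift = warped.mean(axis=0) - np.array([cx, cy])
    return float(np.hypot(shift[0], shift[1])) < shift_thresh
```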
Setting a first video frame of the removed aerial panoramic video as a first aerial key frame, calculating the overlapping rate between a subsequent video frame and a previous key frame, comparing the overlapping rate with a preset overlapping rate threshold value, and selecting the video frame of which the overlapping rate with the previous key frame in the video frames meets the preset overlapping rate threshold value as a current key frame until all the video frames are traversed to obtain an aerial key frame group;
the method for selecting the air key frame group for the air panoramic video is the same as the method for selecting the air key frame group for the ground panoramic video, and the description is omitted here.
The method for acquiring the aerial multi-view image set according to the aerial key frame group comprises the following steps:
determining the acquisition direction in the range of adjacent air key frames in the air key frame group according to the air key frame position calculated by air triangulation;
performing linear projection of two visual angles perpendicular to the acquisition direction on each frame of the air key frame to obtain a left-view slice and a right-view slice of each frame of the air key frame;
and performing optimal intersection visual angle slicing on adjacent key frames according to the left-view slice and the right-view slice of each frame of the air key frame to obtain an air multi-view image set.
In one embodiment of the present invention, the point cloud registration of the ground dense point cloud and the aerial dense point cloud in step S203 comprises the following steps:
s2031, marking corresponding mark points in the ground dense point cloud and the aerial dense point cloud by combining the mark point group and the natural mark points of the region to be modeled through a point cloud fusion module;
s2032, registering the ground dense point cloud and the aerial dense point cloud by an iterative closest point method, and calculating the three-dimensional point location root mean square error and the registration sampling point distance root mean square error of the registered landmark points;
the point cloud registration is to input two point cloud data source point clouds q (source) and target point clouds P (target) and output a transformation F, so that the transformed point clouds F (q) and P have high coincidence degree as much as possible. In this embodiment, F is a rigid transformation, and includes only translation and rotation. The transformation formula is as follows:
F([x, y, z]^T) = R·[x, y, z]^T + t (3-1)
t = [x_0, y_0, z_0]^T (3-3)
where [x, y, z]^T are the three-dimensional coordinates of a point in the source point cloud, R is the 3×3 rotation transformation matrix given by formula (3-2), and t is the translation transformation matrix. The error between the source point set Q and the target point set P under the transformation (R, t) is denoted E(R, t), and solving for the optimal transformation F is in fact solving for the optimal (R, t) that satisfies min E(R, t), as shown in formula (3-4), where N_Q is the size of the source point set Q and Q_i and P_i are a pair of corresponding points in the source and target point sets:
E(R, t) = (1/N_Q) · Σ_{i=1..N_Q} ||P_i - (R·Q_i + t)||² (3-4)
The basic principle of the iterative closest point method is to find the closest point pair (Q) in a source point cloud Q and a target point cloud P to be matched according to certain constraint conditionsi,Pi) Then, the optimal transformation parameters R and t are calculated so that the error function E (R, t) in equation (3-4) is minimized. The iterative closest point method is essentially the optimal matching based on the least square method, and reduces errors through iterative solution, and finally obtains a rotation transformation matrix R and a translation transformation matrix t which enable an error function to be minimum or meet a certain convergence criterion.
The iterative process is a continuous loop of determining the corresponding relation of the point sets and calculating the optimal rigid body transformation, and the specific algorithm steps are as follows:
(1) Take a subset of points p from the target point cloud P, p ∈ P.
(2) Find the corresponding point set q in the source point cloud Q, q ∈ Q, such that ||q - p|| = min.
(3) Calculate the rotation matrix R and the translation matrix t such that the error function E(R, t) is minimized.
(4) Transform p using the rotation and translation matrices obtained in step (3) to obtain a new point set p', p' = R·p + t, p ∈ P.
(5) Calculate the average distance between p' and q:
d = (1/n) · Σ_{i=1..n} ||p'_i - q_i||²
where n is the size of the point set p'.
(6) If d is smaller than a given threshold value or the iteration times are larger than a preset maximum iteration time, stopping the iteration; otherwise, returning to the step (2) until the convergence condition is met.
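A sketch of the iterative-closest-point registration using Open3D's ICP; the correspondence distance, initial transform and iteration limit are illustrative values rather than the patent's settings.

```python
import numpy as np
import open3d as o3d

def register_ground_to_aerial(ground_pcd, aerial_pcd, init=None, max_dist=0.5):
    """Point-to-point ICP between the ground and aerial dense point clouds;
    returns the 4x4 rigid transform and the inlier RMSE reported by Open3D."""
    if init is None:
        init = np.eye(4)
    result = o3d.pipelines.registration.registration_icp(
        ground_pcd, aerial_pcd, max_dist, init,
        o3d.pipelines.registration.TransformationEstimationPointToPoint(),
        o3d.pipelines.registration.ICPConvergenceCriteria(max_iteration=100))
    return result.transformation, result.inlier_rmse
```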
S2033, the mark points are used as three-dimensional check points, and the air-ground point cloud registration accuracy is checked on the basis of the three-dimensional point-position root mean square error of the registered mark points and the root mean square error of the distances between registered sampling points. If both are within the preset error range, the rigid body transformation obtained in registration is derived, and the ground dense point cloud and the aerial dense point cloud are fused according to this rigid body transformation to obtain the air-ground three-dimensional point cloud; in this embodiment, the preset error range is 3 cm;
and if the root mean square error of the three-dimensional point positions of the landmark points after the registration or the root mean square error of the distances between the registration sampling points exceeds the preset error range, returning to the step S202, and performing three-dimensional modeling again.
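A small sketch of the accuracy check, computing the three-dimensional point-position RMSE of the marked checkpoints under the exported rigid transform and comparing it with the 3 cm tolerance mentioned above; the array shapes and the helper name are assumptions.

```python
import numpy as np

def checkpoint_rmse(transform, ground_marks, aerial_marks):
    """Three-dimensional point-position RMSE of the marked checkpoints after
    applying the 4x4 rigid transform obtained from registration."""
    ones = np.ones((ground_marks.shape[0], 1))
    moved = (np.hstack([ground_marks, ones]) @ transform.T)[:, :3]
    return float(np.sqrt(np.mean(np.sum((moved - aerial_marks) ** 2, axis=1))))

# accept the registration only if the RMSE is within the 3 cm tolerance
# accept = checkpoint_rmse(T, ground_pts, aerial_pts) <= 0.03   # units: metres
```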
In one embodiment of the present invention, in step S204, constructing a three-dimensional mesh model comprises the steps of:
s2041, thinning the air-ground three-dimensional point cloud through the three-dimensional grid construction module by adopting a Poisson disc sampling algorithm, and removing redundant three-dimensional point data;
s2042, reconstructing the thinned air-ground three-dimensional point cloud by using a Poisson reconstruction method to obtain a three-dimensional grid model.
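A hedged sketch of steps S2041 and S2042 with Open3D. Open3D does not expose Poisson-disc sampling for point clouds, so voxel downsampling is substituted for the thinning step; Poisson surface reconstruction then produces the mesh. The parameter values are illustrative.

```python
import open3d as o3d

def point_cloud_to_mesh(pcd, voxel_size=0.05, depth=9):
    """Thin the fused air-ground point cloud and rebuild a triangle mesh.
    Voxel downsampling stands in for the Poisson-disc thinning named in the
    patent; Poisson surface reconstruction then yields the mesh."""
    thin = pcd.voxel_down_sample(voxel_size)
    thin.estimate_normals(
        search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.2, max_nn=30))
    mesh, _densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        thin, depth=depth)
    return mesh
```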
In an embodiment of the present invention, in step S204, obtaining a mapping candidate image set of each mesh patch includes the following steps:
s2043, calculating a triangular surface normal vector, an acquisition direction and a projection area of each grid surface patch through a texture image generation module, and determining a mapping visual angle of each grid surface patch;
s2044, modeling the adjacency relation between the grid surface patches through a Markov random field, and calculating the optimal solution of the Markov random field combination through a graph cut method;
s2045, obtaining a mapping candidate image set of each grid patch according to the mapping view angle of the grid patch and the optimal solution of the Markov random field combination;
N calibrated images and the three-dimensional mesh model are input, and the set of all patches is K = {K_1, K_2, …, K_k}. The texture image corresponding to each face serves as its label, denoted by the vector M = {m_1, m_2, …, m_k}, which means that the texture image corresponding to the i-th face is the m_i-th image. The richness of image information is described on the basis of the view direction and the patch normal direction, the texture is described by texture color consistency, and a target energy function is constructed jointly on the basis of a Markov random field:
E(M)=Eq(M)+λEc(M) (4-1)
Eq(M) denotes the image quality described by the viewing direction and the patch normal (formula (4-2)), where N_i(f_i) is the normal of the patch and V_i is the viewing direction;
Ec(M) denotes the texture color consistency (formula (4-3)), where e_i,j denotes the common edge of adjacent patches and d(p_i(x), p_j(x)) denotes the Euclidean distance between two pixel points in color space;
the minimum of the above energy function is solved to obtain the current optimal texture label; based on this label, the 4 images whose poses are closest to it are acquired, and together they constitute the candidate image set.
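The two energy terms can be illustrated with the small helpers below; how the patent weights the view-quality term is not spelled out, so the cosine between the patch normal and the viewing direction is used as an assumed instantiation, and the graph-cut minimisation itself is not reproduced.

```python
import numpy as np

def view_quality(patch_normal, view_dir):
    """Ingredient of the data term Eq: quality of observing a patch from one view,
    taken here as the cosine between the unit patch normal N_i(f_i) and the unit
    direction V_i from the patch towards the camera (back-facing views score 0)."""
    n = patch_normal / np.linalg.norm(patch_normal)
    v = view_dir / np.linalg.norm(view_dir)
    return max(float(np.dot(n, v)), 0.0)

def color_consistency(edge_pixels_i, edge_pixels_j):
    """Ingredient of the smoothness term Ec: mean Euclidean color distance of the
    pixels sampled along the common edge e_i,j of two adjacent patches."""
    diff = edge_pixels_i.astype(float) - edge_pixels_j.astype(float)
    return float(np.mean(np.linalg.norm(diff, axis=-1)))

# E(M) = Eq(M) + lambda * Ec(M); minimising over the labels M is done with a
# graph-cut solver in the patent and is not reproduced here.
```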
In one embodiment of the present invention, in step S204, selecting the best mapping image from the mapping candidate image set includes the following steps:
s2046, performing Tenengrad evaluation function calculation on the mapping candidate image of each grid patch through an optimal mapping image selection module to obtain the definition evaluation of each mapping candidate image, and removing the mapping candidate images with the fuzziness higher than a preset fuzziness threshold;
the Tenengrad function extracts gradient values in the horizontal direction and the vertical direction through a Sobel operator, the square sum of the gradient values is calculated to serve as an evaluation function, and the definition of an image based on the Tenengrad gradient function is defined as follows:
D(f) = Σ_y Σ_x [G(x, y)]², for G(x, y) > T (4-4)
G(x, y) = sqrt(G_x²(x, y) + G_y²(x, y)) (4-5)
where T is a given edge-detection threshold, and G_x and G_y are the convolutions of the pixel at coordinates (x, y) with the Sobel operator in the horizontal and vertical directions respectively; the Sobel operator templates are:
G_x: [-1 0 1; -2 0 2; -1 0 1],  G_y: [-1 -2 -1; 0 0 0; 1 2 1]
according to the formula, the Tenengrad gradient D (f) of each level of pyramid image of each image is respectively calculatedi) I-0, 1, 2, 3 …, i-0 representing the original image, and i-j representing the j-level down-samplingAnd (5) imaging. Because the images with different resolutions have different sizes, the pyramid sequence image is first stretched to the size of the original level-0 image.
For each image, after the gradient calculation of each image of the pyramid is completed, the down-sampling level i of the image and the corresponding Tenengrad gradient value D (f) thereofi) Form a series of observation pairs (i, D (f)i) I ═ 0, 1, 2, 3 …; and performing linear regression on a series of observation pairs of each image by adopting least square fitting, and taking the absolute value of the slope of the fitted linear model as the Tenengrad gradient change rate of the image. From the calculation of least squares, the Tenengrad gradient change rate of the image is:
Figure BDA0003137233050000162
wherein N is the number of observation pairs, and the higher the gradient change rate is, the better the definition of the image is.
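A sketch of the Tenengrad gradient change rate computed over an image pyramid with OpenCV; the edge threshold and the number of pyramid levels are illustrative values.

```python
import cv2
import numpy as np

def tenengrad(gray, thresh=50.0):
    """Sum of squared Sobel gradient magnitudes above an edge threshold (4-4)/(4-5)."""
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    g = np.sqrt(gx ** 2 + gy ** 2)
    return float(np.sum(g[g > thresh] ** 2))

def tenengrad_change_rate(gray, levels=4):
    """Absolute slope of the least-squares line through (i, D(f_i)) over pyramid
    levels i, each level first resized back to the level-0 size."""
    h, w = gray.shape[:2]
    scores, img = [], gray
    for i in range(levels):
        if i > 0:
            img = cv2.pyrDown(img)
        restored = cv2.resize(img, (w, h), interpolation=cv2.INTER_LINEAR)
        scores.append(tenengrad(restored))
    slope = np.polyfit(np.arange(levels), np.array(scores), 1)[0]
    return abs(float(slope))
```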
S2047, performing local gray mean-based cast/da brightness detection on each mapping candidate image in the removed mapping candidate image set, eliminating images with abnormal brightness, such as underexposure, overexposure and the like in the texture image set, and selecting the mapping candidate image with the minimum brightness abnormal indication parameter value as an optimal mapping image;
When the brightness is abnormal, the image brightness deviates from the brightness reference value. By calculating the mean gray value of the image and its deviation from the reference value, the image brightness can be measured and it can be judged whether the image suffers from a brightness anomaly such as over-exposure or under-exposure; the parameter cast indicates whether the brightness is abnormal, and the parameter da is the mean deviation from the reference value. The RGB image must first be converted into the corresponding grayscale image for calculation. The mean deviation da from the reference value is calculated as follows:
da = (1/N) · Σ_i x_i - refValue
where N is the product of the width and height of the grayscale image, i.e. the total number of image pixels, x_i is the gray value of each pixel of the grayscale image, and refValue is the brightness reference value.
meanValue_i is the mean gray value of the grayscale image, which is used to determine the brightness reference value refValue.
The deviation Ma from the reference value is defined in terms of Hist, the histogram of the grayscale image.
The luminance abnormality indication parameter cast is calculated as follows:
D=|da| (4-12)
M=|Ma| (4-13)
cast=D/M (4-14)
When cast ≥ 1, the image has a brightness anomaly: da > 0 indicates that the image is over-exposed and da < 0 indicates that it is under-exposed; cast < 1 indicates that the image is properly exposed and has no brightness anomaly.
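A hedged sketch of the cast/da brightness check; the formulas for refValue and Ma are not legible in the source, so a mid-grey reference of 128 and a histogram-weighted mean absolute deviation are used as stand-ins and are assumptions, not the patent's definitions.

```python
import cv2
import numpy as np

def brightness_anomaly(img_bgr, ref_value=128.0):
    """cast/da brightness check on the grayscale image; ref_value = 128 and the
    histogram-weighted mean absolute deviation Ma are assumed definitions."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    da = float(gray.mean() - ref_value)                        # mean offset from the reference
    hist = cv2.calcHist([gray], [0], None, [256], [0, 256]).ravel()
    hist = hist / hist.sum()
    ma = float(np.sum(np.abs(np.arange(256) - ref_value) * hist))
    cast = abs(da) / ma if ma > 0 else 0.0                     # (4-12)-(4-14)
    return cast, da, cast >= 1.0 and da > 0, cast >= 1.0 and da < 0
```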
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
While embodiments of the present invention have been shown and described above, it should be understood that the above embodiments are exemplary and should not be taken as limiting the invention. Variations, modifications, substitutions and alterations of the above-described embodiments may be made by those of ordinary skill in the art without departing from the scope of the present invention.
The above embodiments of the present invention should not be construed as limiting the scope of the present invention. Any other corresponding changes and modifications made according to the technical idea of the present invention should be included in the protection scope of the claims of the present invention.

Claims (10)

1. A three-dimensional modeling system, comprising: the system comprises an aerial video acquisition unit for acquiring aerial panoramic videos, a ground video acquisition unit for acquiring ground panoramic videos and a processor unit; the processor unit is used for carrying out three-dimensional modeling according to the aerial panoramic video and the ground panoramic video to obtain a three-dimensional live-action model;
the aerial video acquisition unit comprises an unmanned aerial vehicle for moving in the air and an aerial panoramic camera for shooting aerial panoramic videos; the unmanned aerial vehicle is connected with the aerial panoramic camera and drives the aerial panoramic camera to collect aerial mobile data;
the ground video acquisition unit comprises a ground moving device for ground movement and a ground panoramic camera for shooting ground panoramic video; the ground moving device is connected with the ground panoramic camera and drives the ground panoramic camera to acquire ground moving data;
the processor unit comprises a video splicing module for video splicing, a ground video frame extraction module for extracting a ground key frame, a ground dense point cloud generation module for generating a ground dense point cloud, an aerial video frame extraction module for extracting an aerial key frame, an aerial dense point cloud generation module for generating an aerial dense point cloud, a point cloud fusion module for generating an air-ground three-dimensional point cloud, a three-dimensional grid construction module for three-dimensional grid model construction, a mapping image generation module for generating a mapping image set, an optimal mapping image selection module for selecting an optimal mapping image, and a three-dimensional live-action model generation module for generating a three-dimensional live-action model.
2. A three-dimensional modeling method, comprising the steps of:
s1, simultaneously acquiring videos through an aerial video acquisition unit and a ground video acquisition unit to obtain an aerial panoramic video and a ground panoramic video of the area to be modeled;
and S2, performing three-dimensional modeling according to the aerial panoramic video and the ground panoramic video through a processor unit to obtain a three-dimensional real scene model.
3. The three-dimensional modeling method of claim 2, further comprising, before said step S1, the steps of:
and S0, uniformly distributing mark point groups on the ground and the object side surface of the region to be modeled, establishing a coordinate system, and measuring the real three-dimensional coordinates of all mark points contained in the mark point groups.
4. The three-dimensional modeling method of claim 3, wherein said step S2 includes the steps of:
s201, splicing the aerial panoramic video and the ground panoramic video by using a spherical panoramic model through a video splicing module of a processor to obtain an air-ground panoramic video;
s202, performing video frame self-adaptive extraction on the ground panoramic video through a ground video frame extraction module of the processor to obtain a ground key frame group for three-dimensional reconstruction; performing aerial triangulation on the ground key frame group according to the mark point group through a ground dense point cloud generating module of the processor, acquiring a ground multi-view image set according to the ground key frame group, and performing three-dimensional reconstruction according to the ground multi-view image set to obtain ground dense point cloud;
performing video frame adaptive extraction on the aerial panoramic video through an aerial video frame extraction module of the processor to obtain an aerial key frame group for three-dimensional reconstruction; performing aerial triangulation on the aerial key frame group according to the measurement adjustment of the mark point through an aerial dense point cloud generating module of the processor, acquiring an aerial multi-view image set according to the aerial key frame group, and performing stereo reconstruction according to the aerial multi-view image set to obtain aerial dense point cloud;
s203, point cloud registration is carried out on the ground dense point cloud and the air dense point cloud through a point cloud fusion module of the processor, and the ground dense point cloud and the air dense point cloud after registration are fused to obtain an air-ground three-dimensional point cloud;
s204, constructing a three-dimensional grid model according to the air-ground three-dimensional point cloud through a three-dimensional grid construction module of the processor; generating a mapping image for each grid patch of the three-dimensional grid model through a texture image generation module of the processor to obtain a mapping candidate image set of each grid patch; selecting an optimal mapping image from the mapping candidate image set through an optimal mapping image selection module of the processor; and performing texture mapping according to the optimal mapping image through a three-dimensional live-action model generation module of the processor to generate a three-dimensional live-action model.
5. The three-dimensional modeling method according to claim 4, wherein in step S202, performing video frame adaptive extraction on the ground panoramic video comprises the following steps:
comparing the similarity between adjacent video frames of the ground panoramic video through the ground video frame extraction module, and removing video frames of stationary (dwell) segments;
acquiring a left-view slice and a right-view slice of each video frame in the ground panoramic video, and removing video frames of rotation-only segments according to the relative horizontal displacement between adjacent side-view slices;
setting the first video frame of the filtered ground panoramic video as the first ground key frame, calculating the overlap rate between each subsequent video frame and the previous key frame, comparing the overlap rate with a preset overlap rate threshold, and selecting a video frame whose overlap rate with the previous key frame meets the preset overlap rate threshold as the current key frame, until all video frames are traversed, so as to obtain the ground key frame group;
the step of obtaining the ground multi-view image set according to the ground key frame group comprises the following steps:
performing optimal intersection view-angle slicing on adjacent key frames according to the left-view slice and the right-view slice of each frame in the ground key frame group to obtain the ground multi-view image set.
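The adaptive key-frame extraction of claim 5 can be sketched roughly as follows in Python with OpenCV. The dwell-segment test (simple frame differencing) and the overlap measure (ORB feature-match ratio) are illustrative assumptions rather than the measures specified by the patent, and the side-view rotation filter is omitted:

```python
import cv2
import numpy as np

def mean_abs_diff(a, b):
    # Normalized mean absolute difference between two grayscale frames.
    return float(np.mean(cv2.absdiff(a, b))) / 255.0

def match_overlap(a, b, detector, matcher):
    # Crude overlap estimate: fraction of ORB features in frame `a`
    # that find a match in frame `b` (stand-in for a true overlap rate).
    ka, da = detector.detectAndCompute(a, None)
    kb, db = detector.detectAndCompute(b, None)
    if da is None or db is None or len(ka) == 0:
        return 0.0
    matches = matcher.match(da, db)
    return len(matches) / float(len(ka))

def extract_key_frames(video_path, still_thresh=0.01, overlap_thresh=0.6):
    cap = cv2.VideoCapture(video_path)
    detector = cv2.ORB_create(1000)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    key_frames, prev, prev_key = [], None, None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Drop dwell-segment frames: almost identical to the previous frame.
        if prev is not None and mean_abs_diff(gray, prev) < still_thresh:
            prev = gray
            continue
        prev = gray
        if prev_key is None:
            key_frames.append(frame)   # first frame becomes the first key frame
            prev_key = gray
            continue
        # Promote the frame to a key frame once overlap with the previous
        # key frame has dropped to the preset threshold.
        if match_overlap(gray, prev_key, detector, matcher) <= overlap_thresh:
            key_frames.append(frame)
            prev_key = gray
    cap.release()
    return key_frames
```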
6. The three-dimensional modeling method according to claim 5, wherein in said step S202, performing video frame adaptive extraction on the aerial panoramic video comprises the following steps:
comparing the similarity between adjacent video frames of the aerial panoramic video through the aerial video frame extraction module, and removing video frames of stationary (dwell) segments;
acquiring a down-view slice of each video frame in the aerial panoramic video, and removing video frames of rotation-only segments according to the change of barycentric coordinates between adjacent down-view slices;
setting the first video frame of the filtered aerial panoramic video as the first aerial key frame, calculating the overlap rate between each subsequent video frame and the previous key frame, comparing the overlap rate with a preset overlap rate threshold, and selecting a video frame whose overlap rate with the previous key frame meets the preset overlap rate threshold as the current key frame, until all video frames are traversed, so as to obtain the aerial key frame group;
acquiring the aerial multi-view image set according to the aerial key frame group comprises the following steps:
determining an acquisition direction within the range of adjacent aerial key frames in the aerial key frame group according to the aerial key frame positions calculated by aerial triangulation;
performing linear projection at two viewing angles perpendicular to the acquisition direction on each aerial key frame to obtain a left-view slice and a right-view slice of each aerial key frame;
performing optimal intersection view-angle slicing on adjacent key frames according to the left-view slice and the right-view slice of each aerial key frame to obtain the aerial multi-view image set.
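The left-view/right-view slicing of claim 6 presupposes cutting perspective views out of a spherical panorama. A minimal sketch, assuming the panorama frames are stored as equirectangular images (the storage format is an assumption, not stated in the claims), is:

```python
import cv2
import numpy as np

def perspective_slice(equirect, yaw_deg, pitch_deg=0.0, fov_deg=90.0, out_size=1024):
    """Render a rectilinear (pinhole) view of an equirectangular panorama
    looking toward (yaw, pitch); used here to cut left/right side-view slices."""
    h, w = equirect.shape[:2]
    f = 0.5 * out_size / np.tan(np.radians(fov_deg) / 2.0)
    # Pixel grid of the virtual pinhole camera, centered on the optical axis.
    x = np.arange(out_size) - out_size / 2.0
    y = np.arange(out_size) - out_size / 2.0
    xx, yy = np.meshgrid(x, y)
    zz = np.full_like(xx, f)
    # Rotate the viewing rays by pitch (about x) then yaw (about y).
    pitch, yaw = np.radians(pitch_deg), np.radians(yaw_deg)
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(pitch), -np.sin(pitch)],
                   [0, np.sin(pitch), np.cos(pitch)]])
    Ry = np.array([[np.cos(yaw), 0, np.sin(yaw)],
                   [0, 1, 0],
                   [-np.sin(yaw), 0, np.cos(yaw)]])
    rays = np.stack([xx, yy, zz], axis=-1) @ (Ry @ Rx).T
    # Convert rays to spherical angles and then to equirectangular pixel coords.
    lon = np.arctan2(rays[..., 0], rays[..., 2])
    lat = np.arcsin(rays[..., 1] / np.linalg.norm(rays, axis=-1))
    map_x = ((lon / np.pi + 1.0) * 0.5 * (w - 1)).astype(np.float32)
    map_y = ((lat / (np.pi / 2) + 1.0) * 0.5 * (h - 1)).astype(np.float32)
    return cv2.remap(equirect, map_x, map_y, cv2.INTER_LINEAR)

# For an (assumed) acquisition direction at yaw 0, the two perpendicular views are:
# pano = cv2.imread("aerial_keyframe.png")
# left, right = perspective_slice(pano, -90.0), perspective_slice(pano, 90.0)
```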
7. The three-dimensional modeling method of claim 6, wherein in said step S203, point cloud registration of said ground dense point cloud and said aerial dense point cloud comprises the steps of:
S2031, marking corresponding mark points in the ground dense point cloud and the aerial dense point cloud through the point cloud fusion module in combination with the mark point groups and natural mark points of the area to be modeled;
S2032, registering the ground dense point cloud and the aerial dense point cloud by an iterative closest point method, and calculating the three-dimensional point location root mean square error and the registration sampling point distance root mean square error of the registered mark points;
S2033, if the three-dimensional point location root mean square error and the registration sampling point distance root mean square error of the registered mark points are both within a preset error range, deriving the rigid body transformation of the registration, and fusing the ground dense point cloud and the aerial dense point cloud according to the rigid body transformation to obtain the air-ground three-dimensional point cloud;
and if the three-dimensional point location root mean square error or the registration sampling point distance root mean square error of the registered mark points exceeds the preset error range, returning to step S202 and performing three-dimensional modeling again.
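The ICP registration and RMSE gate of steps S2032/S2033 could be approximated with Open3D as below. The initial transformation stands in for the marker-point correspondences of S2031, and `inlier_rmse` stands in for the registration sampling-point distance RMSE; the marker-point RMSE check against surveyed coordinates is not shown:

```python
import numpy as np
import open3d as o3d

def register_and_fuse(ground_pcd, aerial_pcd, init_T=np.eye(4),
                      max_corr_dist=0.5, rmse_limit=0.1):
    """Sketch of S2032/S2033: point-to-point ICP followed by an RMSE gate.
    `init_T` would normally come from the marked control points (S2031)."""
    result = o3d.pipelines.registration.registration_icp(
        ground_pcd, aerial_pcd, max_corr_dist, init_T,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    if result.inlier_rmse > rmse_limit:
        return None  # caller should go back to S202 and rebuild
    # Apply the derived rigid-body transformation and merge the two clouds.
    fused = ground_pcd.transform(result.transformation) + aerial_pcd
    return fused
```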
8. The three-dimensional modeling method of claim 7, wherein in said step S204, constructing said three-dimensional mesh model comprises the steps of:
S2041, performing thinning on the air-ground three-dimensional point cloud by the three-dimensional grid construction module through a Poisson disc sampling algorithm;
S2042, performing Poisson surface reconstruction on the thinned air-ground three-dimensional point cloud to obtain the three-dimensional grid model.
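Steps S2041/S2042 map naturally onto Open3D, with one plainly named substitution: Open3D does not expose Poisson-disc thinning for raw point clouds, so voxel down-sampling is used as a stand-in for the thinning step in this sketch:

```python
import open3d as o3d

def build_mesh(fused_pcd, voxel=0.05, depth=10):
    """Sketch of S2041/S2042: thin the fused air-ground point cloud, then
    run screened Poisson surface reconstruction to get the grid model."""
    thinned = fused_pcd.voxel_down_sample(voxel_size=voxel)   # stand-in for Poisson-disc thinning
    thinned.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=4 * voxel, max_nn=30))
    mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        thinned, depth=depth)
    return mesh
```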
9. The three-dimensional modeling method of claim 8, wherein in step S204, obtaining the mapping candidate image set of each grid patch comprises the following steps:
S2043, calculating a triangular face normal vector, an acquisition direction and a projection area of each grid patch through the mapping image generation module, and determining a mapping view angle of each grid patch;
S2044, modeling the adjacency relation between grid patches through a Markov random field, and calculating the combinatorial optimal solution of the Markov random field through a graph cut method;
S2045, obtaining the mapping candidate image set of each grid patch according to the mapping view angle of the grid patch and the optimal solution of the Markov random field.
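A reduced sketch of the per-patch view scoring behind S2043 follows; the Markov-random-field smoothness term and its graph-cut optimization (S2044) are deliberately left out, so this is only the data term, with illustrative names:

```python
import numpy as np

def best_view_per_patch(vertices, faces, cam_centers, cam_dirs):
    """Score each (patch, view) pair by how directly the triangle faces the
    camera, weighted by triangle area as a projected-area proxy, and pick
    the best view per patch."""
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    cross = np.cross(v1 - v0, v2 - v0)
    area = 0.5 * np.linalg.norm(cross, axis=1)                        # triangle areas
    normal = cross / np.linalg.norm(cross, axis=1, keepdims=True)     # face normals
    centroid = (v0 + v1 + v2) / 3.0
    scores = []
    for c, d in zip(cam_centers, cam_dirs):
        to_cam = c - centroid
        to_cam /= np.linalg.norm(to_cam, axis=1, keepdims=True)
        facing = np.einsum('ij,ij->i', normal, to_cam)                # cos of view angle
        in_front = (centroid - c) @ d > 0                             # patch lies along view dir
        scores.append(np.where(in_front, np.clip(facing, 0.0, None) * area, 0.0))
    return np.argmax(np.stack(scores, axis=1), axis=1)                # best view index per patch
```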
10. The three-dimensional modeling method of claim 9, wherein in step S204, selecting the optimal mapping image from the mapping candidate image set comprises the following steps:
S2046, performing Tenengrad evaluation function calculation on the mapping candidate images of each grid patch through the optimal mapping image selection module to obtain a sharpness evaluation of each mapping candidate image, and removing mapping candidate images whose blur degree is higher than a preset blur threshold;
S2047, performing brightness detection on each remaining mapping candidate image in the mapping candidate image set, and selecting the mapping candidate image with the minimum brightness abnormality indication parameter value as the optimal mapping image.
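The sharpness and brightness screening of S2046/S2047 could look like the following; the Tenengrad threshold value and the use of distance from mid-grey as the "brightness abnormality indication parameter" are illustrative assumptions:

```python
import cv2
import numpy as np

def tenengrad(image_gray):
    # Tenengrad focus measure: mean squared Sobel gradient magnitude.
    gx = cv2.Sobel(image_gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(image_gray, cv2.CV_64F, 0, 1, ksize=3)
    return float(np.mean(gx ** 2 + gy ** 2))

def pick_best_mapping_image(candidates, blur_thresh=100.0):
    """Sketch of S2046/S2047: drop blurry candidates, then prefer the one
    whose mean brightness is closest to mid-grey."""
    grays = [cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) for img in candidates]
    sharp = [(img, g) for img, g in zip(candidates, grays)
             if tenengrad(g) >= blur_thresh]          # low Tenengrad = blurry
    if not sharp:
        return None
    return min(sharp, key=lambda p: abs(float(np.mean(p[1])) - 128.0))[0]
```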
CN202110724089.6A 2021-06-29 2021-06-29 Three-dimensional modeling system and three-dimensional modeling method Active CN113345084B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110724089.6A CN113345084B (en) 2021-06-29 2021-06-29 Three-dimensional modeling system and three-dimensional modeling method

Publications (2)

Publication Number Publication Date
CN113345084A true CN113345084A (en) 2021-09-03
CN113345084B CN113345084B (en) 2022-10-21

Family

ID=77481352

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110724089.6A Active CN113345084B (en) 2021-06-29 2021-06-29 Three-dimensional modeling system and three-dimensional modeling method

Country Status (1)

Country Link
CN (1) CN113345084B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106251399A (en) * 2016-08-30 2016-12-21 广州市绯影信息科技有限公司 A kind of outdoor scene three-dimensional rebuilding method based on lsd slam
CN112767542A (en) * 2018-03-22 2021-05-07 影石创新科技股份有限公司 Three-dimensional reconstruction method of multi-view camera, VR camera and panoramic camera
CN110544294A (en) * 2019-07-16 2019-12-06 深圳进化动力数码科技有限公司 dense three-dimensional reconstruction method based on panoramic video
US20210183080A1 (en) * 2019-12-13 2021-06-17 Reconstruct Inc. Interior photographic documentation of architectural and industrial environments using 360 panoramic videos
CN111462326A (en) * 2020-03-31 2020-07-28 武汉大学 Low-cost 360-degree panoramic video camera urban pipeline three-dimensional reconstruction method and system
CN112927360A (en) * 2021-03-24 2021-06-08 广州蓝图地理信息技术有限公司 Three-dimensional modeling method and system based on fusion of tilt model and laser point cloud data

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
Xujie Zhang et al.: "A UAV-based panoramic oblique photogrammetry (POP) approach using spherical projection", ISPRS Journal of Photogrammetry and Remote Sensing *
余飞 et al.: "Measurable three-dimensional reconstruction method for highways based on UAV video", 《工程勘察》 *
单杰 et al.: "Progress in large-scale three-dimensional city modeling", 《测绘学报》 *
王果 et al.: "Three-dimensional reconstruction of open-pit mine slopes based on UAV oblique photography", 《中国矿业》 *
高鹏 et al.: "Multi-source panoramic data collection and integrated publication for county-level tourist scenic areas", 《城市建筑》 *
黎娟: "Research on refined real-scene modeling and visualization based on air-ground fusion", 《中国优秀硕士学位论文全文数据库 基础科学辑》 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114332383A (en) * 2022-03-17 2022-04-12 青岛市勘察测绘研究院 Scene three-dimensional modeling method and device based on panoramic video
CN114332383B (en) * 2022-03-17 2022-06-28 青岛市勘察测绘研究院 Scene three-dimensional modeling method and device based on panoramic video
WO2024082440A1 (en) * 2022-10-20 2024-04-25 中铁第四勘察设计院集团有限公司 Three-dimensional model generation method and apparatus, and electronic device

Also Published As

Publication number Publication date
CN113345084B (en) 2022-10-21

Similar Documents

Publication Publication Date Title
CN111275750B (en) Indoor space panoramic image generation method based on multi-sensor fusion
CN112132972B (en) Three-dimensional reconstruction method and system for fusing laser and image data
US7509241B2 (en) Method and apparatus for automatically generating a site model
CN104156536B (en) The visualization quantitatively calibrating and analysis method of a kind of shield machine cutter abrasion
CN108648194B (en) Three-dimensional target identification segmentation and pose measurement method and device based on CAD model
Hoppe et al. Online Feedback for Structure-from-Motion Image Acquisition.
Barazzetti et al. True-orthophoto generation from UAV images: Implementation of a combined photogrammetric and computer vision approach
Pylvanainen et al. Automatic alignment and multi-view segmentation of street view data using 3d shape priors
JP2003519421A (en) Method for processing passive volume image of arbitrary aspect
WO2012126500A1 (en) 3d streets
CN113345084B (en) Three-dimensional modeling system and three-dimensional modeling method
CN110782498B (en) Rapid universal calibration method for visual sensing network
CN205451195U Real-time three-dimensional point cloud reconstruction system based on multiple cameras
CN115937288A (en) Three-dimensional scene model construction method for transformer substation
CN111060006A (en) Viewpoint planning method based on three-dimensional model
Wendel et al. Automatic alignment of 3D reconstructions using a digital surface model
JP4568845B2 (en) Change area recognition device
CN115330594A (en) Target rapid identification and calibration method based on unmanned aerial vehicle oblique photography 3D model
CN114998545A (en) Three-dimensional modeling shadow recognition system based on deep learning
CN114463521B (en) Building target point cloud rapid generation method for air-ground image data fusion
Barrile et al. 3D modeling with photogrammetry by UAVs and model quality verification
CN115222884A (en) Space object analysis and modeling optimization method based on artificial intelligence
CN115019208A (en) Road surface three-dimensional reconstruction method and system for dynamic traffic scene
CN114812558A (en) Monocular vision unmanned aerial vehicle autonomous positioning method combined with laser ranging
Zhao et al. Alignment of continuous video onto 3D point clouds

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant