CN118015055A - Multi-source survey data fusion processing method and system based on depth fusion algorithm - Google Patents

Multi-source survey data fusion processing method and system based on depth fusion algorithm Download PDF

Info

Publication number
CN118015055A
CN118015055A (application CN202410411417.0A)
Authority
CN
China
Prior art keywords
point cloud
image
representing
fusion
scanning point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410411417.0A
Other languages
Chinese (zh)
Inventor
徐一岗
袁丁
贾凯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Dinoni Information Technology Co ltd
Original Assignee
Jiangsu Dinoni Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Dinoni Information Technology Co ltd filed Critical Jiangsu Dinoni Information Technology Co ltd
Priority to CN202410411417.0A priority Critical patent/CN118015055A/en
Publication of CN118015055A publication Critical patent/CN118015055A/en
Pending legal-status Critical Current

Landscapes

  • Image Processing (AREA)

Abstract

The invention relates to the technical field of three-dimensional modeling, in particular to a multi-source survey data fusion processing method and system based on a depth fusion algorithm.

Description

Multi-source survey data fusion processing method and system based on depth fusion algorithm
Technical Field
The invention relates to the technical field of three-dimensional modeling, in particular to a multi-source survey data fusion processing method and system based on a depth fusion algorithm.
Background
With the rapid development of economic construction in China, urban construction is accelerating and many fields are moving toward digitalization and intelligence. The demand for spatial information data is gradually shifting from traditional two dimensions to three dimensions, and the demand for real-scene three-dimensional models in particular keeps growing. How to acquire three-dimensional data quickly, accurately and efficiently, construct real-scene three-dimensional models, and reduce the production cycle and cost is therefore a key problem in this field. In recent years, modeling techniques based on unmanned aerial vehicle oblique photography, ground close-range photogrammetry, airborne LiDAR, terrestrial three-dimensional laser scanning and the like have emerged. Much research shows, however, that each technique has its own limitations, which has motivated research on multi-source data fusion: exploiting the strengths of each technique while overcoming its shortcomings, so as to construct real-scene three-dimensional models without blind spots, a topic of important theoretical and application value.
For example, patent publication number CN113012205A proposes a three-dimensional reconstruction method based on multi-source data fusion. The method comprises the following steps: firstly, acquiring images and laser point cloud data by using an oblique photography technology and a laser radar technology; secondly, unifying the space coordinate systems of the two data to perform coarse registration of the data; in addition, an improved ICP algorithm is designed, and the point cloud is resampled by utilizing a voxel grid method on the basis of the traditional ICP algorithm, so that the convergence speed of the algorithm is increased, and the registration accuracy is improved. Experimental results show that the multi-source data fusion method ensures the integrity of the three-dimensional model, makes up the limitation of single-technology modeling, and has good application prospect.
The above patent still leaves the problems raised in the background unresolved: three-dimensional modeling of a building from a single data source suffers from loss of detail and makes detailed data difficult to acquire, and data from different sources differ in format and coordinate system, so they are difficult to fuse directly. To solve these problems, the present application designs a multi-source survey data fusion processing method and system based on a depth fusion algorithm.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a multi-source survey data fusion processing method and system based on a depth fusion algorithm. First, multi-source survey data of a target building are acquired through multi-source equipment and preprocessed. Second, the multi-view images and the target close-range images are jointly calibrated to obtain an image point cloud, and the image point cloud and the laser scanning point cloud are registered according to a rotation matrix and a translation matrix to obtain the registration point cloud of the target building. Finally, a point cloud depth fusion network is constructed and trained, depth fusion point cloud data are output, the depth fusion point cloud data are imported into point cloud processing software for processing, and a three-dimensional model of the target building is output.
In order to achieve the above purpose, the present invention provides the following technical solutions:
The multi-source survey data fusion processing method based on the depth fusion algorithm comprises the following steps:
S1: the method comprises the steps that multi-source survey data of a target building are obtained through multi-source equipment, and are preprocessed, wherein the multi-source survey data comprise laser scanning point clouds, multi-view images and target close-range images, and the multi-source equipment comprises a ground laser scanner, an unmanned aerial vehicle carried inclined camera and a ground digital camera;
s2: performing joint calibration on the preprocessed multi-view image and the target close-range image to obtain a building fusion image, constructing a three-dimensional model of the building fusion image through modeling software, and converting the three-dimensional model into an image point cloud;
s3: converting the image point cloud and the laser scanning point cloud into a unified coordinate system through rigid transformation, registering the image point cloud and the laser scanning point cloud according to a rotation matrix and a translation matrix, and obtaining a registration point cloud of a target building;
s4: constructing a point cloud depth fusion network, taking the registration point cloud of the target building as an input parameter of the point cloud depth fusion network, training through the point cloud depth fusion network, outputting depth fusion point cloud data, importing the depth fusion point cloud data into point cloud processing software for processing, and outputting a three-dimensional model of the target building;
the specific steps of acquiring the multi-source survey data of the target building through the multi-source equipment are as follows:
S1.1.1: determining the whole scanning range of a ground laser scanner through on-site survey, making a scanning point layout scheme and a scanning mode according to a principle of maximizing precision, arranging the ground laser scanner at a measuring station, and scanning a target building to obtain a laser scanning point cloud;
S1.1.2: determining an inclined photographing range of an unmanned aerial vehicle-mounted inclined camera through on-site survey, and arranging an image control point setting control mode according to the inclined photographing range, wherein the image control point setting control mode comprises edge leveling control, center leveling control, overall leveling control, edge leveling control and center Gao Chengshe control;
S1.1.3: determining the shielding condition of the periphery of a target building near the ground through on-site survey, and performing supplementary shooting on the target building behind the shielding object through a ground digital camera to obtain a target near-view image;
the calculation formula of the maximum precision is as follows:
Wherein, Representing the maximization accuracy, argmax (), represents the maximum optimization function, s represents the tilt of the ground laser scanner to the target building,Representing the laser incidence angle of the ground laser scanner to the target building,Representing the engineering measurement constant(s),Indicating the ranging accuracy of the ground laser scanner,Represents the horizontal angle measurement accuracy of the ground laser scanner,Representing the vertical angular accuracy of the ground laser scanner,Represents the lateral scan angle of the terrestrial laser scanner,Representing a vertical scan angle of the ground laser scanner;
The preprocessing of the multi-source survey data comprises laser scanning point cloud preprocessing and image preprocessing, wherein the laser scanning point cloud preprocessing comprises laser scanning point cloud compression, laser scanning point cloud noise reduction and laser scanning point cloud splicing, the image preprocessing comprises unmanned aerial vehicle oblique photography image preprocessing and target close-range image preprocessing, the unmanned aerial vehicle oblique photography image preprocessing comprises regional network joint adjustment and multi-view image matching, and the target close-range image preprocessing comprises camera calibration and coordinate conversion;
The specific steps of laser scanning point cloud compression are as follows:
S1.2.1: converting the three-dimensional coordinate of the scanning point cloud and the angular resolution of the scanner, which are obtained by the ground laser scanner, into the coordinate of the scanning point cloud image, wherein each coordinate is colored according to the reflection value of a pixel, the reflection value image is obtained, and the calculation formula of the coordinate of the scanning point cloud image is as follows:
Wherein, AndRepresenting the image coordinates of the scanning point cloud corresponding to the three-dimensional coordinates of the scanning point cloud,Representing the angular resolution of the scanner, X, Y and Z representing the three-dimensional coordinates of the scan point cloud;
s1.2.2: dividing the reflection value image into three plane images according to a minimum spanning tree algorithm, and calculating the plane gravity center position of each plane image;
s1.2.3: setting a fitting plane equation, calculating coefficients of the fitting plane equation according to three barycentric position coordinates, calculating distances from all other scanning point cloud image coordinates to a fitting plane, comparing the distances from the scanning point cloud image coordinates to the fitting plane with a plane distance threshold, if the distances are smaller than or equal to the plane distance threshold, storing the scanning point cloud image coordinates, and if the distances are larger than the plane distance threshold, deleting the scanning point cloud image coordinates;
S1.2.4: removing abnormal points in the image coordinates of the residual scanning point clouds according to the RANSAC algorithm, performing plane fitting on the image coordinates of the scanning point clouds after removing the abnormal points by a least square method, and recombining three plane images after plane fitting to obtain compressed laser scanning point clouds;
The specific steps of the S2 are as follows:
S2.1: calculating total time delay according to shooting time of the unmanned aerial vehicle carried inclined camera and the ground digital camera, and acquiring acceleration and angular velocity of each moment of the unmanned aerial vehicle carried inclined camera;
S2.2: the pixel coordinates of the multi-view image and the pixel coordinates of the target close-range image are calibrated in a combined mode through camera parameters calibrated by the target close-range image camera, the multi-view image and the target close-range image under a unified coordinate system are obtained, and a calculation formula of the combined calibration is as follows:
Wherein, Representing pixel coordinates of the jointly scaled multi-view image,Representing a rotational quaternion from the IMU coordinate system to the world coordinate system, E ()' representing a target close-range image camera calibration function,Represents an internal reference matrix of the ground digital camera,Represents the external parameter matrix of the ground digital camera,Representing the amount of acceleration in the world coordinate system,Representing the gravity vector in the world coordinate system, m representing the multi-view image voxel coordinate matrix,Represents matrix dot multiplication, B represents constant matrix, u (t) represents shooting time delay matrix of unmanned aerial vehicle carrying inclined camera and ground digital camera, B represents accelerometer bias of unmanned aerial vehicle,The method comprises the steps that gyroscope noise of the unmanned aerial vehicle is represented, a represents acceleration of the unmanned aerial vehicle at the current moment, and d represents angular velocity of the unmanned aerial vehicle at the current moment;
S2.3: traversing the multi-view image and the target near-view image, extracting the corner feature and the gray centroid of each image, and calculating the feature description vector of the image according to the included angle between the corner feature and the gray centroid, wherein the calculation formula of the feature description vector of the image is as follows:
Wherein, Representing the feature description vector of the image, i representing the sequence number of the individual corner features of the image, n representing the number of corner features of the image,Representing the angle between the ith corner feature and the gray centroid,The weighted weight values representing the ith corner feature, F ()'s represent fourier transform functions,AndThe abscissa and ordinate representing the gray centroid,Representing the i-th corner feature of the image,Representing the average value of the corner features of the image;
S2.4: repeatedly calculating the matching degree of each multi-view image feature description vector and the target close-range image feature description vector, and fusing the multi-view image with the highest matching degree with the target close-range image until the fusion of all the target close-range images is completed, so as to obtain a building fusion image;
s2.5: constructing a three-dimensional model of the building fusion image through modeling software, and converting the three-dimensional model into an image point cloud;
the specific steps of registering the image point cloud and the laser scanning point cloud according to the rotation matrix and the translation matrix are as follows:
s3.1: calculating the corresponding nearest point of each point in the image point cloud in the laser scanning point cloud;
S3.2: calculating a minimum average distance according to corresponding point pairs of the image point cloud and the laser scanning point cloud, and determining a rotation matrix and a translation matrix through the minimum average distance;
S3.3: transforming the image point cloud according to the rotation matrix and the translation matrix, obtaining an iterative image point cloud, calculating the average distance between the iterative image point cloud and the laser scanning point cloud, stopping iterative calculation if the average distance is smaller than a point cloud distance threshold, and continuing iteration by taking the iterative image point cloud as a new image point cloud if the average distance is greater than or equal to the point cloud distance threshold, wherein the calculation formula of the average distance is as follows:
D = (1/M)·Σ_{j=1}^{M} ‖p_j − (R·q_j + T)‖, wherein D represents the average distance between the iterative image point cloud and the laser scanning point cloud, j represents the serial number of a single point cloud datum, M represents the total number of point cloud data, p_j represents the j-th laser scanning point, q_j represents the j-th iterative image point, ‖·‖ represents the Euclidean distance function, R represents the rotation matrix, and T represents the translation matrix;
the constructing a point cloud depth fusion network comprises the following steps:
The input layer establishes a characteristic sequence according to input parameters, wherein the characteristic sequence comprises an image point cloud characteristic sequence and a laser scanning point cloud characteristic sequence;
The feature extraction layer is used for extracting feature vectors of the feature sequence, and comprises a geometric feature extraction module, a surface feature extraction module and a space feature extraction module, wherein the feature vectors comprise surface features, space features and geometric features;
And the feature fusion layer is used for fusing the surface features, the space features and the geometric features to generate depth fusion point cloud data.
A multi-source survey data fusion processing system based on a depth fusion algorithm, the system comprising:
a multi-source survey data acquisition module for acquiring multi-source survey data of a target building by multi-source equipment;
a multi-source survey data preprocessing module for preprocessing multi-source survey data;
The image point cloud conversion module is used for carrying out joint calibration on the multi-view image and the target close-range image to obtain an image point cloud;
The point cloud fusion module is used for registering the laser scanning point cloud and the image point cloud and acquiring depth fusion point cloud data through a point cloud depth fusion network;
The multi-source survey data pre-processing module comprises:
the laser scanning point cloud preprocessing unit is used for carrying out laser scanning point cloud compression, laser scanning point cloud noise reduction and laser scanning point cloud splicing on the laser scanning point cloud;
the multi-view image preprocessing unit is used for carrying out regional network joint adjustment and multi-view image matching on the multi-view images;
The target close-range image preprocessing unit is used for carrying out camera calibration and coordinate conversion on the target close-range image;
the image point cloud conversion module comprises:
the image joint calibration unit is used for jointly calibrating the pixel coordinates of the multi-view image and the pixel coordinates of the target close-range image through the camera parameters calibrated by the target close-range image camera to obtain the multi-view image and the target close-range image under a unified coordinate system;
the image feature vector calculation unit is used for traversing the multi-view image and the target close-range image, extracting the corner feature and the gray centroid of each image, and calculating the feature description vector of the image according to the included angle between the corner feature and the gray centroid;
the image fusion unit is used for calculating the matching degree of each multi-view image feature description vector and the target close-range image feature description vector, and fusing the multi-view image with the highest matching degree with the target close-range image until the fusion of all the target close-range images is completed, so as to obtain a building fusion image;
the image point cloud conversion unit is used for constructing a three-dimensional model of the building fusion image through modeling software and converting the three-dimensional model into image point clouds;
The point cloud fusion module comprises:
the point cloud registration unit is used for registering the image point cloud and the laser scanning point cloud according to the rotation matrix and the translation matrix;
The depth fusion unit is used for constructing a point cloud depth fusion network, taking the registration point cloud of the target building as an input parameter of the point cloud depth fusion network, training through the point cloud depth fusion network, and outputting depth fusion point cloud data.
Compared with the prior art, the invention has the beneficial effects that:
According to the invention, three technical means, namely unmanned aerial vehicle oblique photography, three-dimensional laser scanning and ground camera photography, are combined to obtain multi-source survey data, and the complementary advantages of the data sources are exploited through depth fusion, so that the problems of model distortion and voids in single-technology modeling are solved and the accuracy of three-dimensional modeling is improved.
Drawings
Other features, objects and advantages of the present invention will become more apparent upon reading of the detailed description of non-limiting embodiments, made with reference to the following drawings in which:
FIG. 1 is a flow chart of a multi-source survey data fusion processing method based on a depth fusion algorithm according to embodiment 1 of the present invention;
FIG. 2 is a schematic diagram illustrating an image control point configuration method according to embodiment 1 of the present invention;
FIG. 3 is a schematic diagram of the calibration steps of the camera according to embodiment 1 of the present invention;
Fig. 4 is a schematic view of a corner feature judgment region according to embodiment 1 of the present invention;
FIG. 5 is a diagram of a point cloud depth fusion network according to embodiment 1 of the present invention;
Fig. 6 is a block diagram of a multi-source survey data fusion processing system based on a depth fusion algorithm according to embodiment 2 of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments.
Example 1:
Referring to fig. 1, an embodiment of the present invention is provided: the multi-source survey data fusion processing method based on the depth fusion algorithm comprises the following steps:
S1: the method comprises the steps that multi-source survey data of a target building are obtained through multi-source equipment, and are preprocessed, wherein the multi-source survey data comprise laser scanning point clouds, multi-view images and target close-range images, and the multi-source equipment comprises a ground laser scanner, an unmanned aerial vehicle carried inclined camera and a ground digital camera;
s2: performing joint calibration on the preprocessed multi-view image and the target close-range image to obtain a building fusion image, constructing a three-dimensional model of the building fusion image through modeling software, and converting the three-dimensional model into an image point cloud;
s3: converting the image point cloud and the laser scanning point cloud into a unified coordinate system through rigid transformation, registering the image point cloud and the laser scanning point cloud according to a rotation matrix and a translation matrix, and obtaining a registration point cloud of a target building;
s4: constructing a point cloud depth fusion network, taking the registration point cloud of the target building as an input parameter of the point cloud depth fusion network, training through the point cloud depth fusion network, outputting depth fusion point cloud data, importing the depth fusion point cloud data into point cloud processing software for processing, and outputting a three-dimensional model of the target building;
the specific steps of acquiring the multi-source survey data of the target building through the multi-source equipment are as follows:
S1.1.1: determining the whole scanning range of a ground laser scanner through on-site survey, making a scanning point layout scheme and a scanning mode according to a principle of maximizing precision, arranging the ground laser scanner at a measuring station, and scanning a target building to obtain a laser scanning point cloud;
S1.1.2: determining an inclined photographing range of an unmanned aerial vehicle-mounted inclined camera through on-site survey, and arranging an image control point setting control mode according to the inclined photographing range, wherein the image control point setting control mode comprises edge leveling control, center leveling control, overall leveling control, edge leveling control and center Gao Chengshe control;
S1.1.3: determining the shielding condition of the periphery of a target building near the ground through on-site survey, and performing supplementary shooting on the target building behind the shielding object through a ground digital camera to obtain a target near-view image;
the calculation formula of the maximum precision is as follows:
Wherein, Representing the maximization accuracy, argmax (), represents the maximum optimization function, s represents the tilt of the ground laser scanner to the target building,Representing the laser incidence angle of the ground laser scanner to the target building,Representing the engineering measurement constant(s),Indicating the ranging accuracy of the ground laser scanner,Represents the horizontal angle measurement accuracy of the ground laser scanner,Representing the vertical angular accuracy of the ground laser scanner,Represents the lateral scan angle of the terrestrial laser scanner,Representing a vertical scan angle of the ground laser scanner;
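By way of non-limiting illustration, the sketch below mimics the precision-maximizing station selection of step S1.1.1: candidate scanner stations are scored with a placeholder precision model built from the quantities named above (slant distance, incidence angle, ranging and angle-measurement accuracies) and the argmax is kept. The function station_precision and all numeric values are assumptions for illustration only, not the patent's actual formula.

```python
import math

def station_precision(slant_dist, incidence_angle, range_acc=0.005,
                      h_angle_acc=8e-6, v_angle_acc=8e-6, k_const=1.0):
    """Placeholder precision model (assumption): the combined error grows with
    slant distance, incidence angle and the scanner's nominal accuracies, and
    the returned precision is its reciprocal."""
    range_term = (range_acc / max(math.cos(incidence_angle), 1e-3)) ** 2
    angle_term = (slant_dist * h_angle_acc) ** 2 + (slant_dist * v_angle_acc) ** 2
    return 1.0 / (k_const * math.sqrt(range_term + angle_term) + 1e-12)

def choose_station(candidates, target):
    """Keep the candidate station with maximal precision (the argmax of S1.1.1).
    candidates: list of (x, y, z) scanner positions;
    target: (x, y, z, nx, ny, nz) facade point with its unit outward normal."""
    tx, ty, tz, nx, ny, nz = target
    best, best_p = None, -1.0
    for sx, sy, sz in candidates:
        dx, dy, dz = tx - sx, ty - sy, tz - sz
        slant = math.sqrt(dx * dx + dy * dy + dz * dz)
        # Incidence angle between the laser ray and the facade normal.
        cos_inc = abs(dx * nx + dy * ny + dz * nz) / (slant + 1e-12)
        precision = station_precision(slant, math.acos(min(1.0, cos_inc)))
        if precision > best_p:
            best, best_p = (sx, sy, sz), precision
    return best, best_p

# Toy usage: three candidate stations in front of a south-facing facade point.
stations = [(0.0, -30.0, 1.5), (20.0, -25.0, 1.5), (-15.0, -40.0, 1.5)]
facade_point = (0.0, 0.0, 10.0, 0.0, -1.0, 0.0)
print(choose_station(stations, facade_point))
```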
Referring to fig. 2, the image control point setting control modes of the embodiment of the invention comprise edge leveling control, center leveling control, overall leveling control, and combined edge leveling with center elevation control. In edge leveling control, 4 leveling control points are laid at the edge corner points of the target building; in center leveling control, 4 leveling control points are laid in the center area of the target building; in overall leveling control, 8 leveling control points are laid uniformly over the edge corner points and the center area of the target building; in the combined edge leveling and center elevation control, 4 leveling control points are laid at the edge corner points of the target building and 4 elevation control points are laid in the center area;
The preprocessing of the multi-source survey data comprises laser scanning point cloud preprocessing and image preprocessing, wherein the laser scanning point cloud preprocessing comprises laser scanning point cloud compression, laser scanning point cloud noise reduction and laser scanning point cloud splicing, the image preprocessing comprises unmanned aerial vehicle oblique photography image preprocessing and target close-range image preprocessing, the unmanned aerial vehicle oblique photography image preprocessing comprises regional network joint adjustment and multi-view image matching, and the target close-range image preprocessing comprises camera calibration and coordinate conversion;
The specific steps of laser scanning point cloud compression are as follows:
S1.2.1: converting the three-dimensional coordinate of the scanning point cloud and the angular resolution of the scanner, which are obtained by the ground laser scanner, into the coordinate of the scanning point cloud image, wherein each coordinate is colored according to the reflection value of a pixel, the reflection value image is obtained, and the calculation formula of the coordinate of the scanning point cloud image is as follows:
Wherein, AndRepresenting the image coordinates of the scanning point cloud corresponding to the three-dimensional coordinates of the scanning point cloud,Representing the angular resolution of the scanner, X, Y and Z representing the three-dimensional coordinates of the scan point cloud;
s1.2.2: dividing the reflection value image into three plane images according to a minimum spanning tree algorithm, and calculating the plane gravity center position of each plane image;
S1.2.3: setting a fitting plane equation, calculating coefficients of the fitting plane equation according to three barycentric position coordinates, calculating distances from all other scanning point cloud image coordinates to a fitting plane, comparing the distances from the scanning point cloud image coordinates to the fitting plane with a plane distance threshold, if the distances are smaller than or equal to the plane distance threshold, storing the scanning point cloud image coordinates, and if the distances are larger than the plane distance threshold, deleting the scanning point cloud image coordinates, wherein the plane distance threshold is determined by a person skilled in the art according to a large number of experiments;
S1.2.4: removing abnormal points in the image coordinates of the residual scanning point clouds according to the RANSAC algorithm, performing plane fitting on the image coordinates of the scanning point clouds after removing the abnormal points by a least square method, and recombining three plane images after plane fitting to obtain compressed laser scanning point clouds;
The camera calibration obtains the camera calibration parameters through a hybrid differential evolution particle swarm algorithm. Referring to fig. 3, the camera parameters are taken as the dependent variables of an objective function, and the pixel values of the target close-range image are taken as the independent variables of the objective function and as the initial particle swarm. The particle displacements and velocities in the swarm are initialized, and upper and lower limits of the particle velocity are set. The fitness of each particle is then calculated, namely through the distance between the actual pixel value of the target close-range image and the target pixel value. The particle with the largest fitness is selected as the global optimal particle, the historical optimal particles are recorded, and tracking iteration is carried out according to the positions of these two particles. The optimal individuals and the globally optimal population are obtained through selective updating, crossover and mutation, iteration continues according to the fitness of the new generation of particles, and whether the maximum number of iterations has been reached is judged; if it has, the camera parameters are output.
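The hybrid differential evolution / particle swarm search described above can be sketched as follows, assuming a generic objective that stands in for the pixel-distance fitness. The PSO coefficients, the DE mutation and crossover rates, and the toy camera-parameter bounds (fx, fy, cx, cy) are assumptions; a real calibration would evaluate a reprojection-style error instead.

```python
import numpy as np

def hybrid_de_pso(objective, bounds, n_particles=30, iters=100,
                  w=0.7, c1=1.5, c2=1.5, f_scale=0.5, cr=0.9, seed=0):
    """Minimize `objective` over box `bounds`: a PSO velocity/position update
    followed by a DE-style mutation/crossover step each generation (schematic)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, (n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        # PSO update toward personal and global bests.
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        # DE mutation + binomial crossover on top of the swarm.
        for i in range(n_particles):
            a, b, c = x[rng.choice(n_particles, 3, replace=False)]
            mutant = np.clip(a + f_scale * (b - c), lo, hi)
            mask = rng.random(len(lo)) < cr
            trial = np.where(mask, mutant, x[i])
            if objective(trial) < objective(x[i]):
                x[i] = trial
        # Update personal and global bests.
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# Toy objective standing in for the pixel-distance fitness of candidate camera
# parameters (here simply the distance to a known optimum, purely illustrative).
true_params = np.array([800.0, 800.0, 320.0, 240.0])   # fx, fy, cx, cy (assumed)
obj = lambda p: float(np.sum((p - true_params) ** 2))
bounds = np.array([[500, 1200], [500, 1200], [200, 500], [100, 400]], float)
print(hybrid_de_pso(obj, bounds, iters=50))
```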
The specific steps of the S2 are as follows:
S2.1: calculating total time delay according to shooting time of the unmanned aerial vehicle carried inclined camera and the ground digital camera, and acquiring acceleration and angular velocity of each moment of the unmanned aerial vehicle carried inclined camera;
S2.2: the pixel coordinates of the multi-view image and the pixel coordinates of the target close-range image are calibrated in a combined mode through camera parameters calibrated by the target close-range image camera, the multi-view image and the target close-range image under a unified coordinate system are obtained, and a calculation formula of the combined calibration is as follows:
Wherein, Representing pixel coordinates of the jointly scaled multi-view image,Representing a rotational quaternion from the IMU coordinate system to the world coordinate system, E ()' representing a target close-range image camera calibration function,Represents an internal reference matrix of the ground digital camera,Represents the external parameter matrix of the ground digital camera,Representing the amount of acceleration in the world coordinate system,Representing the gravity vector in the world coordinate system, m representing the multi-view image voxel coordinate matrix,Represents matrix dot multiplication, B represents constant matrix, u (t) represents shooting time delay matrix of unmanned aerial vehicle carrying inclined camera and ground digital camera, B represents accelerometer bias of unmanned aerial vehicle,The method comprises the steps that gyroscope noise of the unmanned aerial vehicle is represented, a represents acceleration of the unmanned aerial vehicle at the current moment, and d represents angular velocity of the unmanned aerial vehicle at the current moment;
S2.3: traversing the multi-view image and the target near-view image, extracting the corner feature and the gray centroid of each image, and calculating the feature description vector of the image according to the included angle between the corner feature and the gray centroid, wherein the calculation formula of the feature description vector of the image is as follows:
Wherein, Representing the feature description vector of the image, i representing the sequence number of the individual corner features of the image, n representing the number of corner features of the image,Representing the angle between the ith corner feature and the gray centroid,The weighted weight values representing the ith corner feature, F ()'s represent fourier transform functions,AndThe abscissa and ordinate representing the gray centroid,Representing the i-th corner feature of the image,Representing the average value of the corner features of the image;
S2.4: repeatedly calculating the matching degree of each multi-view image feature description vector and the target close-range image feature description vector, and fusing the multi-view image with the highest matching degree with the target close-range image until the fusion of all the target close-range images is completed, so as to obtain a building fusion image;
s2.5: constructing a three-dimensional model of the building fusion image through modeling software, and converting the three-dimensional model into an image point cloud;
Referring to fig. 4, a schematic diagram of a corner feature judgment area according to an embodiment of the present invention, the steps of extracting the corner feature are as follows:
s2.3.1: randomly selecting any pixel point in the image, marking the pixel point as P point, and calculating the gray value of P
S2.3.2: taking P as a circle center, taking the size of 3 pixels as a radius to make a circle, and selecting 16 pixel points on the boundary of the circle;
S2.3.3: setting a gray threshold t, if gray values of 12 continuous pixel points in the 16 pixel points are smaller than Or is greater thanJudging the P point as a corner feature of the image;
s2.3.4: selecting the next pixel point, repeating the steps S2.3.1 to S2.3.3 until all the pixel points are processed, and counting all the corner points;
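A minimal sketch of the corner test in steps S2.3.1–S2.3.4, together with the corner-to-gray-centroid orientation used by the descriptor in S2.3, is given below. The 16-pixel circle offsets and the intensity-centroid computation follow the common FAST/ORB formulation and are assumptions where the patent does not spell them out.

```python
import numpy as np

# Offsets of the 16 pixels on a radius-3 circle around the candidate (FAST layout).
CIRCLE16 = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
            (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

def is_corner(img, y, x, t=20, run=12):
    """S2.3.2/S2.3.3: corner if 12 contiguous circle pixels are all darker than
    I(P) - t or all brighter than I(P) + t (wrap-around run test)."""
    p = float(img[y, x])
    ring = np.array([float(img[y + dy, x + dx]) for dx, dy in CIRCLE16])
    for flags in (ring < p - t, ring > p + t):
        doubled = np.concatenate([flags, flags])          # handle wrap-around runs
        best = cur = 0
        for f in doubled:
            cur = cur + 1 if f else 0
            best = max(best, cur)
        if best >= run:
            return True
    return False

def gray_centroid_angle(patch):
    """Angle between the patch centre and its intensity centroid (ORB-style
    orientation), standing in for the corner-to-gray-centroid included angle."""
    ys, xs = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
    m00 = patch.sum()
    cx = (xs * patch).sum() / (m00 + 1e-12)
    cy = (ys * patch).sum() / (m00 + 1e-12)
    centre = ((patch.shape[1] - 1) / 2.0, (patch.shape[0] - 1) / 2.0)
    return np.arctan2(cy - centre[1], cx - centre[0])

# Toy usage on a synthetic image with a bright square (its corners should fire).
img = np.zeros((40, 40), np.float32)
img[10:30, 10:30] = 255
corners = [(y, x) for y in range(3, 37) for x in range(3, 37) if is_corner(img, y, x)]
print(len(corners), "corner candidates;",
      "orientation of one patch:", gray_centroid_angle(img[7:22, 7:22]))
```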
the specific steps of registering the image point cloud and the laser scanning point cloud according to the rotation matrix and the translation matrix are as follows:
s3.1: calculating the corresponding nearest point of each point in the image point cloud in the laser scanning point cloud;
S3.2: calculating a minimum average distance according to corresponding point pairs of the image point cloud and the laser scanning point cloud, and determining a rotation matrix and a translation matrix through the minimum average distance;
S3.3: transforming the image point cloud according to the rotation matrix and the translation matrix, obtaining an iterative image point cloud, calculating the average distance between the iterative image point cloud and the laser scanning point cloud, stopping iterative calculation if the average distance is smaller than a point cloud distance threshold, and continuing iteration by taking the iterative image point cloud as a new image point cloud if the average distance is greater than or equal to the point cloud distance threshold, wherein the calculation formula of the average distance is as follows:
D = (1/M)·Σ_{j=1}^{M} ‖p_j − (R·q_j + T)‖, wherein D represents the average distance between the iterative image point cloud and the laser scanning point cloud, j represents the serial number of a single point cloud datum, M represents the total number of point cloud data, p_j represents the j-th laser scanning point, q_j represents the j-th iterative image point, ‖·‖ represents the Euclidean distance function, R represents the rotation matrix, and T represents the translation matrix;
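The registration loop of S3.1–S3.3 can be sketched as a standard point-to-point iteration, assuming roughly pre-aligned clouds. The SVD (Kabsch) solution for R and T and the KD-tree nearest-neighbour search are conventional choices rather than the patent's exact implementation, and the threshold and toy data are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Closed-form R, T minimizing the mean squared distance between paired
    points (SVD / Kabsch), as used to update the transform in S3.2."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # keep a proper rotation (no reflection)
        Vt[-1] *= -1
        R = Vt.T @ U.T
    T = dst_c - R @ src_c
    return R, T

def icp(image_pts, laser_pts, dist_threshold=1e-4, max_iters=50):
    """S3.1–S3.3: pair each image point with its nearest laser point, solve R, T,
    transform the image cloud, and repeat until the average distance is small."""
    tree = cKDTree(laser_pts)
    cur = image_pts.copy()
    R_total, T_total = np.eye(3), np.zeros(3)
    for _ in range(max_iters):
        d, idx = tree.query(cur)                           # S3.1 nearest correspondences
        if d.mean() < dist_threshold:                      # stop once already converged
            break
        R, T = best_rigid_transform(cur, laser_pts[idx])   # S3.2 solve R, T
        cur = cur @ R.T + T                                # S3.3 iterate the image cloud
        R_total, T_total = R @ R_total, R @ T_total + T
    err = tree.query(cur)[0].mean()
    return R_total, T_total, err

# Toy usage: recover a known small rotation/translation of a random cloud.
rng = np.random.default_rng(0)
laser = rng.uniform(-1, 1, (500, 3))
angle = np.deg2rad(5.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
image = laser @ R_true.T + np.array([0.05, -0.02, 0.01])
R_est, T_est, err = icp(image, laser)
print("final average distance:", err)
```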
referring to fig. 5, the construction of the point cloud depth fusion network according to the embodiment of the present invention includes:
The input layer establishes a characteristic sequence according to input parameters, wherein the characteristic sequence comprises an image point cloud characteristic sequence and a laser scanning point cloud characteristic sequence;
The feature extraction layer is used for extracting feature vectors of the feature sequence, and comprises a geometric feature extraction module, a surface feature extraction module and a space feature extraction module, wherein the feature vectors comprise surface features, space features and geometric features;
the feature fusion layer is used for fusing the surface features, the space features and the geometric features to generate depth fusion point cloud data;
The input layer is used for accumulating according to input parameters, establishing a gray differential equation, calculating the least square parameter of the gray differential equation, calculating fitting values of the feature sequences by discretizing the least square parameter solution, and obtaining an image point cloud feature sequence and a laser scanning point cloud feature sequence;
the feature extraction layer comprises a geometric feature extraction module, a surface feature extraction module and a space feature extraction module;
The surface feature extraction module comprises a surface shape feature extraction submodule, a surface structure feature extraction submodule and a feature cascading module. The surface shape feature extraction submodule comprises a multi-layer perceptron network: each layer applies a symmetric average-pooling operation to the image point cloud feature sequence, describes its shape through one-dimensional convolution and normalization, and finally activates through a ReLU activation function to obtain the surface shape features. The surface structure feature extraction submodule comprises a group of learnable Gaussian kernels; it learns the image point cloud feature sequence through the kernel function to obtain the radial distance, azimuth angle and inclination angle of each surface, and encodes the local structural feature of each surface through convolution to obtain the surface structure features. The feature cascading module cascades the surface shape features and the surface structure features to obtain the surface features;
The geometric feature extraction module acquires the center point position of the image point cloud feature sequence through an attention mechanism, calculates neighbor points of the center point according to a K neighbor algorithm, takes the center point and the neighbor points as a source of a Gaussian kernel function, outputs local geometric features of the center point through the Gaussian kernel function, finally maps all the local geometric features to a high-dimensional space through a residual error connection network, carries out pooling operation, and outputs the geometric features;
The space feature extraction module trains the laser scanning point cloud feature sequence through a multi-layer perceptron network and outputs space features according to the maximum pooling;
the feature fusion layer maps the spatial features, the surface features and the geometric features into a tuple sequence in which each tuple gathers the corresponding m-th input of the geometric features, the m-th input of the spatial features and the m-th input of the surface features. The corresponding input features in each candidate sequence of the tuple sequence are dot-multiplied, the correlation score of the dot-multiplied candidate sequence is calculated through the Pearson correlation coefficient, and the correlation score is compressed by a sigmoid function and compared with a correlation threshold: if the correlation score is larger than the correlation threshold the candidate sequence is retained, and if it is smaller than or equal to the correlation threshold the candidate sequence is filtered out. The local features of the candidate sequences retained in the tuple sequence are collected and enhanced, the enhanced candidate sequences are spliced along the head of the tuple sequence, the local features are fused through residual connection, and a pooling operation is performed on the residually fused feature vectors to obtain the depth fusion point cloud data.
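To make the fusion-layer gating concrete, the sketch below dot-multiplies the corresponding geometric, spatial and surface inputs of each tuple, scores the candidates with a Pearson correlation squashed by a sigmoid, keeps only those above a threshold, and fuses them with a residual connection followed by pooling. The feature dimensions, the enhancement step and the threshold value are illustrative assumptions, not the trained network of the embodiment.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fuse_features(geom, spat, surf, corr_threshold=0.5):
    """Schematic feature-fusion layer: per-tuple element-wise products,
    Pearson-based correlation score squashed by a sigmoid, thresholded gating,
    then residual fusion and pooling into one fused vector."""
    fused_rows = []
    for g, s, f in zip(geom, spat, surf):          # tuple sequence (g_m, s_m, f_m)
        cand = np.stack([g * s, s * f, g * f])     # candidate element-wise products
        mean_pattern = cand.mean(axis=0)
        scores = np.array([np.corrcoef(row, mean_pattern)[0, 1] for row in cand])
        gate = sigmoid(scores)
        keep = gate > corr_threshold               # filter weakly correlated candidates
        if not keep.any():
            continue
        local = cand[keep] * (1.0 + gate[keep][:, None])   # simple local enhancement
        fused = local.mean(axis=0) + g                     # residual connection
        fused_rows.append(fused)
    if not fused_rows:
        return np.zeros(geom.shape[1])
    return np.max(np.stack(fused_rows), axis=0)            # pooling over the sequence

# Toy usage with random feature sequences of length 8 and dimension 16.
rng = np.random.default_rng(0)
geom, spat, surf = (rng.normal(size=(8, 16)) for _ in range(3))
print(fuse_features(geom, spat, surf).shape)
```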
Example 2:
referring to fig. 6, the present invention provides an embodiment: a multi-source survey data fusion processing system based on a depth fusion algorithm, the system comprising:
a multi-source survey data acquisition module for acquiring multi-source survey data of a target building by multi-source equipment;
a multi-source survey data preprocessing module for preprocessing multi-source survey data;
The image point cloud conversion module is used for carrying out joint calibration on the multi-view image and the target close-range image to obtain an image point cloud;
The point cloud fusion module is used for registering the laser scanning point cloud and the image point cloud and acquiring depth fusion point cloud data through a point cloud depth fusion network;
The multi-source survey data pre-processing module comprises:
the laser scanning point cloud preprocessing unit is used for carrying out laser scanning point cloud compression, laser scanning point cloud noise reduction and laser scanning point cloud splicing on the laser scanning point cloud;
the multi-view image preprocessing unit is used for carrying out regional network joint adjustment and multi-view image matching on the multi-view images;
The target close-range image preprocessing unit is used for carrying out camera calibration and coordinate conversion on the target close-range image;
the image point cloud conversion module comprises:
the image joint calibration unit is used for jointly calibrating the pixel coordinates of the multi-view image and the pixel coordinates of the target close-range image through the camera parameters calibrated by the target close-range image camera to obtain the multi-view image and the target close-range image under a unified coordinate system;
the image feature vector calculation unit is used for traversing the multi-view image and the target close-range image, extracting the corner feature and the gray centroid of each image, and calculating the feature description vector of the image according to the included angle between the corner feature and the gray centroid;
the image fusion unit is used for calculating the matching degree of each multi-view image feature description vector and the target close-range image feature description vector, and fusing the multi-view image with the highest matching degree with the target close-range image until the fusion of all the target close-range images is completed, so as to obtain a building fusion image;
the image point cloud conversion unit is used for constructing a three-dimensional model of the building fusion image through modeling software and converting the three-dimensional model into image point clouds;
The point cloud fusion module comprises:
the point cloud registration unit is used for registering the image point cloud and the laser scanning point cloud according to the rotation matrix and the translation matrix;
The depth fusion unit is used for constructing a point cloud depth fusion network, taking the registration point cloud of the target building as an input parameter of the point cloud depth fusion network, training through the point cloud depth fusion network, and outputting depth fusion point cloud data.
While embodiments of the present invention have been shown and described above, it will be understood that the above embodiments are illustrative and are not to be construed as limiting the invention, and that changes, modifications, substitutions and variations may be made to the above embodiments by one of ordinary skill in the art within the scope of the invention.

Claims (9)

1. The multi-source survey data fusion processing method based on the depth fusion algorithm is characterized by comprising the following steps of:
S1: the method comprises the steps that multi-source survey data of a target building are obtained through multi-source equipment, and are preprocessed, wherein the multi-source survey data comprise laser scanning point clouds, multi-view images and target close-range images, and the multi-source equipment comprises a ground laser scanner, an unmanned aerial vehicle carried inclined camera and a ground digital camera;
s2: performing joint calibration on the preprocessed multi-view image and the target close-range image to obtain a building fusion image, constructing a three-dimensional model of the building fusion image through modeling software, and converting the three-dimensional model into an image point cloud;
s3: converting the image point cloud and the laser scanning point cloud into a unified coordinate system through rigid transformation, registering the image point cloud and the laser scanning point cloud according to a rotation matrix and a translation matrix, and obtaining a registration point cloud of a target building;
S4: and constructing a point cloud depth fusion network, taking the registration point cloud of the target building as an input parameter of the point cloud depth fusion network, training through the point cloud depth fusion network, outputting depth fusion point cloud data, importing the depth fusion point cloud data into point cloud processing software for processing, and outputting a three-dimensional model of the target building.
2. The method for processing multi-source survey data based on depth fusion algorithm according to claim 1, wherein the steps of acquiring the multi-source survey data of the target building by the multi-source device are as follows:
S1.1.1: determining the whole scanning range of a ground laser scanner through on-site survey, making a scanning point layout scheme and a scanning mode according to a principle of maximizing precision, arranging the ground laser scanner at a measuring station, and scanning a target building to obtain a laser scanning point cloud;
S1.1.2: determining an inclined photographing range of an unmanned aerial vehicle-mounted inclined camera through on-site survey, and arranging an image control point setting control mode according to the inclined photographing range, wherein the image control point setting control mode comprises edge leveling control, center leveling control, overall leveling control, edge leveling control and center Gao Chengshe control;
S1.1.3: the shielding condition of the periphery of the target building near the ground is determined through on-site survey, and the target building behind the shielding object is subjected to supplementary shooting through a ground digital camera, so that a target near-view image is obtained.
3. The multi-source survey data fusion processing method based on the depth fusion algorithm of claim 2, wherein the maximizing accuracy is obtained by applying the maximum optimization function argmax(·) to a precision model whose arguments are: the slant distance s of the ground laser scanner to the target building, the laser incidence angle of the ground laser scanner on the target building, the engineering measurement constants, the ranging accuracy of the ground laser scanner, the horizontal angle measurement accuracy of the ground laser scanner, the vertical angle measurement accuracy of the ground laser scanner, the lateral scan angle of the ground laser scanner, and the vertical scan angle of the ground laser scanner.
4. The depth fusion algorithm-based multi-source survey data fusion processing method according to claim 1, wherein the preprocessing of the multi-source survey data comprises laser scanning point cloud preprocessing and image preprocessing, the laser scanning point cloud preprocessing comprises laser scanning point cloud compression, laser scanning point cloud noise reduction and laser scanning point cloud stitching, the image preprocessing comprises unmanned aerial vehicle oblique photography image preprocessing and target close-range image preprocessing, the unmanned aerial vehicle oblique photography image preprocessing comprises regional network joint adjustment and multi-view image matching, and the target close-range image preprocessing comprises camera calibration and coordinate conversion.
5. The multi-source survey data fusion processing method based on the depth fusion algorithm of claim 4, wherein the specific laser scanning point cloud compression steps are as follows:
S1.2.1: converting the three-dimensional coordinate of the scanning point cloud and the angular resolution of the scanner, which are obtained by the ground laser scanner, into the coordinate of the scanning point cloud image, wherein each coordinate is colored according to the reflection value of a pixel, the reflection value image is obtained, and the calculation formula of the coordinate of the scanning point cloud image is as follows:
Wherein, And/>Representing scanning point cloud image coordinates corresponding to the scanning point cloud three-dimensional coordinates,/>Representing the angular resolution of the scanner, X, Y and Z representing the three-dimensional coordinates of the scan point cloud;
s1.2.2: dividing the reflection value image into three plane images according to a minimum spanning tree algorithm, and calculating the plane gravity center position of each plane image;
s1.2.3: setting a fitting plane equation, calculating coefficients of the fitting plane equation according to three barycentric position coordinates, calculating distances from all other scanning point cloud image coordinates to a fitting plane, comparing the distances from the scanning point cloud image coordinates to the fitting plane with a plane distance threshold, if the distances are smaller than or equal to the plane distance threshold, storing the scanning point cloud image coordinates, and if the distances are larger than the plane distance threshold, deleting the scanning point cloud image coordinates;
S1.2.4: and eliminating abnormal points in the image coordinates of the residual scanning point cloud according to the RANSAC algorithm, performing plane fitting on the image coordinates of the scanning point cloud with the abnormal points eliminated by a least square method, and recombining three plane images after plane fitting to obtain the compressed laser scanning point cloud.
6. The method for fusion processing of multi-source survey data based on a depth fusion algorithm of claim 5, wherein the step S2 comprises the following specific steps:
S2.1: calculating total time delay according to shooting time of the unmanned aerial vehicle carried inclined camera and the ground digital camera, and acquiring acceleration and angular velocity of each moment of the unmanned aerial vehicle carried inclined camera;
S2.2: the pixel coordinates of the multi-view image and the pixel coordinates of the target close-range image are calibrated in a combined mode through camera parameters calibrated by the target close-range image camera, the multi-view image and the target close-range image under a unified coordinate system are obtained, and a calculation formula of the combined calibration is as follows:
Wherein, Representing pixel coordinates of the jointly calibrated multiview image,/>Representing a rotation quaternion from an IMU coordinate system to a world coordinate system, E ()' representing a target close-range image camera calibration function,/>Represents an internal reference matrix of the ground digital camera,External parameter matrix representing ground digital camera,/>Representing the acceleration amount in the world coordinate system,/>Represents the gravity vector in the world coordinate system, m represents the three-dimensional pixel coordinate matrix of the multi-view image,/>Represents matrix dot multiplication, B represents constant matrix, u (t) represents shooting time delay matrix of unmanned aerial vehicle carrying inclined camera and ground digital camera, B represents accelerometer bias of unmanned aerial vehicle,The method comprises the steps that gyroscope noise of the unmanned aerial vehicle is represented, a represents acceleration of the unmanned aerial vehicle at the current moment, and d represents angular velocity of the unmanned aerial vehicle at the current moment;
S2.3: traversing the multi-view image and the target near-view image, extracting the corner feature and the gray centroid of each image, and calculating the feature description vector of the image according to the included angle between the corner feature and the gray centroid, wherein the calculation formula of the feature description vector of the image is as follows:
Wherein, Representing feature description vectors of an image, i representing sequence numbers of individual corner features of the image, n representing the number of corner features of the image,/>Representing the included angle between the ith corner feature and the gray centroid,/>Weighted weight values representing the ith corner feature, F ()'s representing fourier transform functions,/>And/>Representing the abscissa and ordinate of the gray centroid,/>Representing the ith corner feature of an image,/>Representing the average value of the corner features of the image;
S2.4: repeatedly calculating the matching degree of each multi-view image feature description vector and the target close-range image feature description vector, and fusing the multi-view image with the highest matching degree with the target close-range image until the fusion of all the target close-range images is completed, so as to obtain a building fusion image;
S2.5: and constructing a three-dimensional model of the building fusion image through modeling software, and converting the three-dimensional model into an image point cloud.
7. The method for fusion processing of multi-source survey data based on a depth fusion algorithm according to claim 6, wherein the registering of the image point cloud and the laser scanning point cloud according to the rotation matrix and the translation matrix comprises the following specific steps:
s3.1: calculating the corresponding nearest point of each point in the image point cloud in the laser scanning point cloud;
S3.2: calculating a minimum average distance according to corresponding point pairs of the image point cloud and the laser scanning point cloud, and determining a rotation matrix and a translation matrix through the minimum average distance;
S3.3: transforming the image point cloud according to the rotation matrix and the translation matrix to obtain an iterative image point cloud, calculating the average distance between the iterative image point cloud and the laser scanning point cloud, stopping iterative calculation if the average distance is smaller than a point cloud distance threshold, and continuing iteration by taking the iterative image point cloud as a new image point cloud if the average distance is greater than or equal to the point cloud distance threshold, wherein the calculation formula of the average distance is as follows:
D = (1/M)·Σ_{j=1}^{M} ‖p_j − (R·q_j + T)‖, wherein D represents the average distance between the iterative image point cloud and the laser scanning point cloud, j represents the serial number of a single point cloud datum, M represents the total number of point cloud data, p_j represents the j-th laser scanning point, q_j represents the j-th iterative image point, ‖·‖ represents the Euclidean distance function, R represents the rotation matrix, and T represents the translation matrix.
8. The depth fusion algorithm-based multi-source survey data fusion processing method of claim 7, wherein the constructing a point cloud depth fusion network comprises:
The input layer establishes a feature sequence according to the input parameters, wherein the feature sequence comprises an image point cloud feature sequence and a laser scanning point cloud feature sequence;
The feature extraction layer is used for extracting feature vectors from the feature sequence, and comprises a geometric feature extraction module, a surface feature extraction module and a spatial feature extraction module, wherein the feature vectors comprise surface features, spatial features and geometric features;
And the feature fusion layer is used for fusing the surface features, the spatial features and the geometric features to generate depth fusion point cloud data.
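A hedged PyTorch sketch of the claimed network structure: three shared-MLP branches stand in for the geometric, surface and spatial feature extraction modules, and a small fully connected head stands in for the feature fusion layer. The branch design, layer sizes, and the fact that the output is a fused global feature rather than fused point cloud data are assumptions; the claim does not specify these details.

```python
import torch
import torch.nn as nn

class FeatureBranch(nn.Module):
    """Shared-MLP branch (PointNet-style) standing in for one feature extraction module."""
    def __init__(self, in_dim=3, out_dim=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv1d(in_dim, 32, 1), nn.ReLU(),
            nn.Conv1d(32, out_dim, 1), nn.ReLU(),
        )

    def forward(self, pts):                   # pts: (B, N, in_dim)
        x = self.mlp(pts.transpose(1, 2))     # (B, out_dim, N)
        return x.max(dim=2).values            # global feature per cloud

class PointCloudDepthFusionNet(nn.Module):
    """Sketch of the input / feature-extraction / feature-fusion layers."""
    def __init__(self):
        super().__init__()
        self.geometric = FeatureBranch()
        self.surface = FeatureBranch()
        self.spatial = FeatureBranch()
        self.fusion = nn.Sequential(nn.Linear(3 * 64 * 2, 256), nn.ReLU(),
                                    nn.Linear(256, 128))

    def forward(self, image_pc, laser_pc):    # two (B, N, 3) feature sequences
        feats = [branch(pc) for pc in (image_pc, laser_pc)
                 for branch in (self.geometric, self.surface, self.spatial)]
        return self.fusion(torch.cat(feats, dim=1))   # fused depth feature

net = PointCloudDepthFusionNet()
fused = net(torch.rand(2, 1024, 3), torch.rand(2, 1024, 3))
```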
9. A multi-source survey data fusion processing system based on a depth fusion algorithm, implemented based on the method of any of claims 1-8, the system comprising:
a multi-source survey data acquisition module for acquiring multi-source survey data of a target building by multi-source equipment;
a multi-source survey data preprocessing module for preprocessing multi-source survey data;
The image point cloud conversion module is used for carrying out joint calibration on the multi-view image and the target close-range image to obtain an image point cloud;
and the point cloud fusion module is used for registering the laser scanning point cloud and the image point cloud and acquiring depth fusion point cloud data through a point cloud depth fusion network.
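A thin orchestration sketch of the four modules of claim 9; the callables and dictionary keys are hypothetical placeholders for the corresponding processing steps, not interfaces defined by the patent.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class MultiSourceSurveyPipeline:
    """Orchestration sketch of claim 9; each callable stands in for one module."""
    acquire: Callable               # multi-source survey data acquisition module
    preprocess: Callable            # multi-source survey data preprocessing module
    to_image_point_cloud: Callable  # image point cloud conversion module
    fuse: Callable                  # point cloud fusion module

    def run(self, target_building):
        raw = self.acquire(target_building)
        data = self.preprocess(raw)  # hypothetical dict of cleaned multi-source data
        image_pc = self.to_image_point_cloud(data["multiview"], data["close_range"])
        return self.fuse(data["laser_pc"], image_pc)
```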
CN202410411417.0A 2024-04-08 2024-04-08 Multi-source survey data fusion processing method and system based on depth fusion algorithm Pending CN118015055A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410411417.0A CN118015055A (en) 2024-04-08 2024-04-08 Multi-source survey data fusion processing method and system based on depth fusion algorithm

Publications (1)

Publication Number Publication Date
CN118015055A 2024-05-10

Family

ID=90952751

Country Status (1)

Country Link
CN (1) CN118015055A (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111724477A (en) * 2020-07-06 2020-09-29 中铁二局第一工程有限公司 Method for constructing multi-level three-dimensional terrain model through multi-source data fusion
US20210035314A1 (en) * 2018-10-12 2021-02-04 Tencent Technology (Shenzhen) Company Limited Map element extraction method and apparatus, and server
CN112927360A (en) * 2021-03-24 2021-06-08 广州蓝图地理信息技术有限公司 Three-dimensional modeling method and system based on fusion of tilt model and laser point cloud data
CN112967219A (en) * 2021-03-17 2021-06-15 复旦大学附属华山医院 Two-stage dental point cloud completion method and system based on deep learning network
CN113012205A (en) * 2020-11-17 2021-06-22 浙江华云电力工程设计咨询有限公司 Three-dimensional reconstruction method based on multi-source data fusion
US11222217B1 (en) * 2020-08-14 2022-01-11 Tsinghua University Detection method using fusion network based on attention mechanism, and terminal device
US20220366681A1 (en) * 2021-05-10 2022-11-17 Tsinghua University VISION-LiDAR FUSION METHOD AND SYSTEM BASED ON DEEP CANONICAL CORRELATION ANALYSIS
CN115980700A (en) * 2023-01-17 2023-04-18 河海大学 Method and system for three-dimensional data acquisition of highway infrastructure
CN116129067A (en) * 2022-12-23 2023-05-16 龙岩学院 Urban live-action three-dimensional modeling method based on multi-source geographic information coupling
CN117197789A (en) * 2023-09-20 2023-12-08 重庆邮电大学 Curtain wall frame identification method and system based on multi-scale boundary feature fusion

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ANRAN XU et al.: "Fast and High Accuracy 3D Point Cloud Registration for Automatic Reconstruction From Laser Scanning Data", IEEE Access, 26 April 2023 (2023-04-26) *
张星: "Research on 3D Visualization Based on the Fusion of Laser Point Cloud and Imagery", China Master's Theses Full-text Database, Information Science and Technology, 15 April 2019 (2019-04-15) *
朱庆; 李世明; 胡翰; 钟若飞; 吴波; 谢林甫: "A Review of Multi-Point-Cloud Data Fusion Methods for 3D City Modeling", Geomatics and Information Science of Wuhan University, no. 12, 25 September 2018 (2018-09-25) *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination