CN111340942A - Three-dimensional reconstruction system based on unmanned aerial vehicle and method thereof - Google Patents
- Publication number
- CN111340942A (application CN202010116115.2A)
- Authority
- CN
- China
- Prior art keywords
- point cloud
- cloud data
- point
- edge
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G01B11/24—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/89—Lidar systems specially adapted for specific applications for mapping or imaging
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/08—Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computer Networks & Wireless Communication (AREA)
- Electromagnetism (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Computer Graphics (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
Abstract
The invention relates to a three-dimensional reconstruction system based on an unmanned aerial vehicle and a method thereof. The system comprises a workbench whose upper surface carries a conveying device in its middle, with a plurality of positioning carriers on the conveying device, each positioning carrier providing two loading positions. Along the side of the workbench beside the conveying device are arranged, in sequence, a lens-ring feeding and bending mechanism, a pile-head bending and welding mechanism, a nose-bridge bending and welding mechanism, a reinforcing-rod grinding and welding mechanism, a deburring mechanism, a nose-pad welding mechanism, a temple marking and welding mechanism, a rotary riveting mechanism, a leg-sleeve assembling mechanism, and a temple hot-pressing and bending mechanism. The conveying device carries the lens ring to each mechanism for the corresponding processing operation, and the parts are then assembled and welded together according to a set process. Fully mechanized operation replaces manual loading and unloading, improves processing and assembly quality and efficiency, raises the yield of finished sunglasses, and reduces labor cost. The system can also assemble other types of spectacle frame bodies and has good market application value.
Description
Technical Field
The invention belongs to the technical field of three-dimensional reconstruction, and particularly relates to a three-dimensional reconstruction system and a three-dimensional reconstruction method based on an unmanned aerial vehicle.
Background
Existing methods of acquiring images for three-dimensional reconstruction control the periodic variation of the brightness of at least two spatially separated light sources and capture images with three cameras from at least three positions. Image-based three-dimensional reconstruction is the process in which a computer automatically computes and matches two or more two-dimensional images of an object or scene, derives the two-dimensional geometric information and depth information of the object or scene, and builds a three-dimensional model. This approach has several disadvantages. First, when the scene to be reconstructed cannot be directly photographed, for example because the object or scene does not yet exist and is fictional, or because it is still in the planning phase and varies with time, image-based modeling techniques cannot be used. Second, because objects in the scene become two-dimensional objects in the image, it is difficult for the user to interact with these two-dimensional graphical objects to obtain the required information. The approach also requires cameras and photographic equipment capable of capturing realistic images, and the resulting large image files require sufficient storage space. In the photovoltaic industry, pipelines are often damaged, but large-scale renovation cannot be carried out immediately; the current scene is usually recorded first so that the damage can be located during subsequent rectification and renovation. Three-dimensional reconstruction is widely used in such practical scenarios: reconstructing a three-dimensional model of the whole environment at regular intervals captures the changes between scans, and the differences can be found during later comparison and maintenance. In traditional three-dimensional reconstruction the environment is scanned with a hand-held laser, which is not very reliable; a single laser radar yields only point cloud information, from which reconstruction produces a model skeleton that cannot truly restore the real scene, so a vision sensor must also be used to obtain shape, texture, color, and other information.
With the development of unmanned aerial vehicle technology, the price of consumer-grade unmanned aerial vehicles keeps falling, and laser radar has become miniaturized and portable, making geographic surveying and mapping with a laser radar carried on a small or medium-sized unmanned aerial vehicle feasible. At present, unmanned aerial vehicle lidar surveying generally combines ground target markers with a ground base station, but this adapts poorly to different landforms: it requires manual exploration in advance to place markers around the mapping site, which is a cumbersome and inefficient process. Moreover, the acquired point cloud data must be processed offline at the base station, so real-time performance is poor.
In recent years, with the development of computer vision and the growth of graphics computing power, techniques have appeared that model urban terrain three-dimensionally by recovering structure from motion over a sequence of images. However, modeling precision from photographs alone is not high enough; model holes and distortion occur easily, and post-modeling adjustment relies on manual editing and optimization, so the model is not accurate enough and subjective human factors are easily introduced. The three-dimensional map functions of services such as Baidu Maps likewise rely on manual modeling, which scales poorly.
Accordingly, the prior art is deficient and needs improvement.
Disclosure of Invention
The invention provides a three-dimensional reconstruction method based on three-dimensional laser scanning and a camera, which fuses laser data and visual data to avoid the fictitious-scene problem of a single camera, thereby solving the above problems.
To remedy the defects of the prior art, the invention provides a three-dimensional urban terrain reconstruction method that can scan a target area quickly and at low cost, realizing automatic, real-time three-dimensional reconstruction of urban terrain while ensuring high modeling precision. The technical scheme provided by the invention is as follows:
a three-dimensional reconstruction system based on a three-dimensional reconstruction method, the method comprising:
step S1, calibrate the three-dimensional reconstruction coordinate system; acquire the point cloud data of the first frame of three-dimensional measurement data of the measured object, this point cloud being called the global point cloud data and the coordinate system of the first frame being called the global coordinate system;
step S2, perform three-dimensional measurement on the surface of a local area of the measured object and reconstruct it using the binocular stereo vision principle to obtain the point cloud data of the local area, the local area overlapping the area covered by the global point cloud data;
step S3, transform the local point cloud data into the global coordinate system, register the local point cloud against the global point cloud using the overlap area, and update the global point cloud data;
step S4, keeping the measured object still, change the measurement viewing angle and repeat steps S2 to S3 until the whole object has been measured;
step S5, perform global optimization on the global point cloud data updated after measurement is complete, obtaining the point cloud model.
Step S2 specifically comprises the following steps:
step S21, measure the surface of the local area in three dimensions to obtain a first speckle image and a second speckle image of the local area;
step S22, determine, for each pixel in the first speckle image, its integer-pixel corresponding point in the second speckle image;
step S23, using the integer-pixel corresponding points and the coordinates of each pixel in the first speckle image, search the second speckle image for the sub-pixel corresponding points;
step S24, perform three-dimensional reconstruction using the binocular stereo vision principle, combining a Kirsch edge-detection step with the sub-pixel correspondences of the second speckle image, to obtain the local point cloud data of the surface of the measured object.
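The registration of step S3 can be sketched as follows, assuming matched point pairs from the overlap area are already available. This is an illustrative sketch, not the patent's algorithm: the function name `rigid_register` and the SVD-based Kabsch solution are assumptions, chosen because Kabsch is a standard way to align two corresponded point clouds rigidly.

```python
import numpy as np

def rigid_register(local_pts, global_pts):
    """Estimate the rotation R and translation t that map local_pts onto
    global_pts (corresponding points from the overlap area) with the
    Kabsch/SVD method, then return the transformed local cloud, R, t."""
    cl = local_pts.mean(axis=0)
    cg = global_pts.mean(axis=0)
    H = (local_pts - cl).T @ (global_pts - cg)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cg - R @ cl
    return local_pts @ R.T + t, R, t
```

Once a local frame is aligned this way, appending the transformed points to the global cloud corresponds to the "update the global point cloud data" part of step S3.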
The Kirsch algorithm includes:
step 1, smooth the original image with a Gaussian filter of specified standard deviation σ, then compute the local gradient g(x, y) and edge direction α(x, y) at each point;
step 2, apply the Kirsch operator to the computed gradient image. Assuming the image has H × W pixels, its edge pixels generally number no more than 5 × H (an image with a dominant target may use a looser limit). Take an initial threshold T0 and compute the Kirsch operator K(i) of each pixel i; if K(i) > T0 holds, i is an edge point and the edge count N is incremented by 1. If the number of edge points reaches 5 × H while i is still less than the total number of pixels, the threshold was chosen too low and many non-edge pixels have been accepted; the minimum K(i) among the accepted points, denoted Kmin, is then taken as the new threshold. The whole process is adjusted as follows:
(1) if K(i) > T, i is an edge point; record its coordinates and the minimum value Kmin = min[K(i)], and increment N by 1;
(2) once N ≥ 5 × H, adjust the threshold to the smallest value satisfying the minimum edge requirement, i.e., T = Kmin;
(3) compare the recorded edge points against the new threshold, keep only the points greater than the new threshold as edge points, recompute Kmin under the new threshold, and record the new edge count newN;
(4) restart the count from N = newN;
(5) continue processing the remaining pixels from step 1, returning to (2) whenever N ≥ 5 × H;
(6) if the scan ends with N < 5 × H, let T2 = T and T1 = βT2, where 0 < β < 1 is a constant determined experimentally;
step 3, edge extraction: threshold the gradient image produced in step 1 with the two thresholds T1 and T2; pixels above T2 are called strong edge pixels, ridge pixels between T1 and T2 are called weak edge pixels, and a weak edge is included in the output only when it is connected to a strong edge.
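The operator and the adaptive threshold of step 2 can be sketched as follows. This is an illustrative reading, not the patent's exact procedure: the eight compass masks are the standard Kirsch masks, `adaptive_edge_points` condenses the iterative rules (1) to (6) into a single shrink-the-edge-set loop, and all names and defaults are assumptions.

```python
import numpy as np

KIRSCH_BASE = np.array([[ 5,  5,  5],
                        [-3,  0, -3],
                        [-3, -3, -3]])

def kirsch_masks():
    """The 8 Kirsch compass masks: the base mask with its border ring
    rotated one step at a time."""
    idx = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    m = KIRSCH_BASE.copy()
    masks = []
    for _ in range(8):
        masks.append(m)
        ring = [m[i] for i in idx]
        ring = ring[-1:] + ring[:-1]          # rotate the border one step
        m = m.copy()
        for i, v in zip(idx, ring):
            m[i] = v
    return masks

def kirsch_response(img):
    """K(i): the maximum of the 8 directional mask responses at each
    interior pixel (borders are left at zero for simplicity)."""
    img = img.astype(float)
    H, W = img.shape
    out = np.zeros_like(img)
    for mask in kirsch_masks():
        resp = np.zeros_like(img)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                resp[1:-1, 1:-1] += mask[dy + 1, dx + 1] * \
                    img[1 + dy:H - 1 + dy, 1 + dx:W - 1 + dx]
        out = np.maximum(out, resp)
    return out

def adaptive_edge_points(K, T0, budget=None):
    """Raise the threshold until the edge count fits the 5*H budget,
    mirroring rules (1)-(6) above: whenever too many pixels qualify,
    the minimum K among current edge points becomes the new threshold."""
    H = K.shape[0]
    budget = 5 * H if budget is None else budget
    T = T0
    edges = K > T
    while edges.sum() >= budget:
        T = K[edges].min()        # T = Kmin among current edge points
        edges = K > T             # keep only points strictly above it
    return edges, T
```

Each pass through the loop strictly shrinks the edge set, so it terminates once the count drops below the budget.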
The three-dimensional reconstruction method comprises the following steps:
step 1, data acquisition:
step 1.1, using the onboard GPS of the quad-rotor unmanned aerial vehicle, acquire a positioning information set in RMC format in real time and send it frame by frame, in order, to the ground base station for storage, where the a-th piece of positioning information comprises the a-th GPS timestamp RMCa.timetag, the a-th latitude and longitude RMCa.position, and the a-th heading RMCa.track;
step 1.2, using the airborne laser radar of the quad-rotor unmanned aerial vehicle, acquire an urban terrain data set D and send it frame by frame, in order, to the ground base station for storage, where the j-th piece of urban terrain data dj comprises: the j-th point number dj.pointid, the j-th spatial coordinate point (xj, yj, zj), the j-th adjustment time dj.adjust_time, the j-th azimuth dj.azimuth, the j-th distance dj.distance, the j-th reflection intensity dj.intensity, the j-th radar channel dj.laser_id, and the j-th point timestamp dj.time;
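The two record types above can be modeled as simple typed records. The field names follow the text; the class names `RmcFix` and `LidarPoint` and the concrete types are assumptions for illustration only.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class RmcFix:
    """One frame of RMC positioning information (step 1.1)."""
    timetag: float                    # GPS timestamp
    position: Tuple[float, float]     # (latitude, longitude)
    track: float                      # heading over ground, degrees

@dataclass
class LidarPoint:
    """One urban-terrain lidar return d_j (step 1.2)."""
    pointid: int
    xyz: Tuple[float, float, float]   # spatial coordinate (x_j, y_j, z_j)
    adjust_time: float
    azimuth: float
    distance: float
    intensity: int
    laser_id: int
    time: float                       # per-point timestamp
```

Storing frames as lists of such records keeps the later timestamp-based integration step straightforward.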
step 2, data integration:
select, from the urban terrain data set D and the positioning information set, each piece of urban terrain data and positioning information that satisfies formula (1), thereby obtaining n data items and forming a data set Pfail:
to the rotated n spatial coordinate points:
step 4, point cloud denoising:
step 4.1, identify the invalid points in the n point cloud data PN with a threshold method and zero them out, thereby obtaining a cleaned point cloud data set;
step 4.2, denoise and smooth the cleaned point cloud data set using a KNN algorithm doubly constrained by distance and neighbor count, obtaining a denoised point cloud data set;
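The double constraint of step 4.2 (a distance radius plus a minimum neighbor count) can be sketched with a brute-force neighbor test; the function name and the parameter defaults are illustrative assumptions, not values from the source.

```python
import numpy as np

def knn_denoise(points, k=8, radius=0.5):
    """Keep a point only if at least k other points lie within `radius`
    of it (the distance-and-count double constraint); isolated outliers
    and sparse noise fail the test and are dropped."""
    diff = points[:, None, :] - points[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    # count neighbours inside the radius, excluding the point itself
    counts = (dist <= radius).sum(axis=1) - 1
    return points[counts >= k]
```

For large clouds the O(n²) distance matrix would be replaced by a spatial index (e.g. a k-d tree), but the constraint itself is unchanged.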
step 5, point cloud rarefying:
thin the denoised point cloud data set using a point cloud reduction algorithm based on K-means++ clustering, obtaining the thinned point cloud data;
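A minimal sketch of K-means++-based thinning: the cloud is partitioned into m clusters and each cluster is replaced by its centroid. The seeding and Lloyd iterations below are textbook K-means++, assumed as a plausible reading of step 5 rather than the patent's exact reduction algorithm; names and defaults are illustrative.

```python
import numpy as np

def kmeans_thin(points, m, iters=20, seed=0):
    """Thin a point cloud to m representative points: k-means++ seeding,
    a few Lloyd iterations, then one centroid per cluster."""
    rng = np.random.default_rng(seed)
    # --- k-means++ initialisation: spread the seeds out ---
    centers = [points[rng.integers(len(points))]]
    for _ in range(m - 1):
        d2 = np.min([np.square(points - c).sum(1) for c in centers], axis=0)
        centers.append(points[rng.choice(len(points), p=d2 / d2.sum())])
    centers = np.array(centers)
    # --- Lloyd iterations: assign, then recompute centroids ---
    for _ in range(iters):
        d = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        for j in range(m):
            if (labels == j).any():
                centers[j] = points[labels == j].mean(axis=0)
    return centers
```

The reduction ratio is controlled directly by m, which is what makes clustering-based thinning attractive for capping point cloud size.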
step 6, perform visualization processing on the thinned point cloud data to obtain a three-dimensional point cloud model of the urban terrain.
Compared with the prior art, in the present unmanned-aerial-vehicle-based multi-view three-dimensional reconstruction method and system, the unmanned aerial vehicle carries the image acquisition device, which acquires a plurality of two-dimensional images of a target building from a plurality of preset directions at a plurality of preset viewing angles; an image processor then generates a three-dimensional model of the target building from these two-dimensional images using preset image processing software, so that the model is generated from images of the target building taken at multiple viewing angles.
The method comprehensively acquires the surface data of the target building or cultural relic, is simple to implement, uses inexpensive equipment, has a short working period and low cost investment, and the resulting three-dimensional reconstruction has high precision and good effect.
The invention adopts the timestamp data integration method to effectively match GPS data and laser radar data, accelerates the data integration speed, improves the efficiency of extracting effective data from mass point cloud data, and can carry out three-dimensional reconstruction on different types of landforms.
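The timestamp-based matching of GPS and lidar data can be sketched as a nearest-timestamp lookup. This assumes the GPS timestamps are sorted ascending, and is an illustrative reading of the integration step, not the patent's formula (1); the function name is an assumption.

```python
import bisect

def match_by_timestamp(gps_times, point_time):
    """Return the index of the GPS fix whose timestamp is closest to a
    lidar point's timestamp; gps_times must be sorted ascending."""
    i = bisect.bisect_left(gps_times, point_time)
    if i == 0:
        return 0
    if i == len(gps_times):
        return len(gps_times) - 1
    before, after = gps_times[i - 1], gps_times[i]
    return i if after - point_time < point_time - before else i - 1
```

Binary search makes each lookup O(log n), which is what speeds up integration over a linear scan of mass point cloud data.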
The method adopts a KNN algorithm doubly constrained by distance and neighbor count to automatically remove outliers and noisy points, and can smooth the point cloud to a certain degree, thereby accelerating the denoising speed.
Has good market application value.
Drawings
To explain the embodiments or the prior-art technical solutions more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings in the following description show only some embodiments of the invention; a person skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic view of the present invention;
Detailed Description
In order to facilitate an understanding of the invention, the invention is described in more detail below with reference to the accompanying drawings and specific examples. Preferred embodiments of the present invention are shown in the drawings. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete.
It will be understood that when an element is referred to as being "secured to" another element, it can be directly on the other element or intervening elements may also be present. When an element is referred to as being "connected" to another element, it can be directly connected to the other element or intervening elements may also be present. The use of the terms "fixed," "integrally formed," "left," "right," and the like in this specification is for illustrative purposes only, and elements having similar structures are designated by the same reference numerals in the figures.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention.
As shown in fig. 1, the three-dimensional reconstruction system operates according to the following method:
step S1, calibrate the three-dimensional reconstruction coordinate system; acquire the point cloud data of the first frame of three-dimensional measurement data of the measured object, this point cloud being called the global point cloud data and the coordinate system of the first frame being called the global coordinate system;
step S2, perform three-dimensional measurement on the surface of a local area of the measured object and reconstruct it using the binocular stereo vision principle to obtain the point cloud data of the local area, the local area overlapping the area covered by the global point cloud data;
step S3, transform the local point cloud data into the global coordinate system, register the local point cloud against the global point cloud using the overlap area, and update the global point cloud data;
step S4, keeping the measured object still, change the measurement viewing angle and repeat steps S2 to S3 until the whole object has been measured;
step S5, perform global optimization on the global point cloud data updated after measurement is complete, obtaining the point cloud model.
Step S2 specifically comprises the following steps:
step S21, measure the surface of the local area in three dimensions to obtain a first speckle image and a second speckle image of the local area;
step S22, determine, for each pixel in the first speckle image, its integer-pixel corresponding point in the second speckle image;
step S23, using the integer-pixel corresponding points and the coordinates of each pixel in the first speckle image, search the second speckle image for the sub-pixel corresponding points;
step S24, perform three-dimensional reconstruction using the binocular stereo vision principle, combining a Kirsch edge-detection step with the sub-pixel correspondences of the second speckle image, to obtain the local point cloud data of the surface of the measured object.
The Kirsch algorithm includes:
step 1, smooth the original image with a Gaussian filter of specified standard deviation σ, then compute the local gradient g(x, y) and edge direction α(x, y) at each point;
step 2, apply the Kirsch operator to the computed gradient image. Assuming the image has H × W pixels, its edge pixels generally number no more than 5 × H (an image with a dominant target may use a looser limit). Take an initial threshold T0 and compute the Kirsch operator K(i) of each pixel i; if K(i) > T0 holds, i is an edge point and the edge count N is incremented by 1. If the number of edge points reaches 5 × H while i is still less than the total number of pixels, the threshold was chosen too low and many non-edge pixels have been accepted; the minimum K(i) among the accepted points, denoted Kmin, is then taken as the new threshold. The whole process is adjusted as follows:
(1) if K(i) > T, i is an edge point; record its coordinates and the minimum value Kmin = min[K(i)], and increment N by 1;
(2) once N ≥ 5 × H, adjust the threshold to the smallest value satisfying the minimum edge requirement, i.e., T = Kmin;
(3) compare the recorded edge points against the new threshold, keep only the points greater than the new threshold as edge points, recompute Kmin under the new threshold, and record the new edge count newN;
(4) restart the count from N = newN;
(5) continue processing the remaining pixels from step 1, returning to (2) whenever N ≥ 5 × H;
(6) if the scan ends with N < 5 × H, let T2 = T and T1 = βT2, where 0 < β < 1 is a constant determined experimentally;
step 3, edge extraction: threshold the gradient image produced in step 1 with the two thresholds T1 and T2; pixels above T2 are called strong edge pixels, ridge pixels between T1 and T2 are called weak edge pixels, and a weak edge is included in the output only when it is connected to a strong edge.
The three-dimensional reconstruction method comprises the following steps:
step 1, data acquisition:
step 1.1, using the onboard GPS of the quad-rotor unmanned aerial vehicle, acquire a positioning information set in RMC format in real time and send it frame by frame, in order, to the ground base station for storage, where the a-th piece of positioning information comprises the a-th GPS timestamp RMCa.timetag, the a-th latitude and longitude RMCa.position, and the a-th heading RMCa.track;
step 1.2, using the airborne laser radar of the quad-rotor unmanned aerial vehicle, acquire an urban terrain data set D and send it frame by frame, in order, to the ground base station for storage, where the j-th piece of urban terrain data dj comprises: the j-th point number dj.pointid, the j-th spatial coordinate point (xj, yj, zj), the j-th adjustment time dj.adjust_time, the j-th azimuth dj.azimuth, the j-th distance dj.distance, the j-th reflection intensity dj.intensity, the j-th radar channel dj.laser_id, and the j-th point timestamp dj.time;
step 2, data integration:
select, from the urban terrain data set D and the positioning information set, each piece of urban terrain data and positioning information that satisfies formula (1), thereby obtaining n data items and forming a data set Pfail:
to the rotated n spatial coordinate points:
step 4, point cloud denoising:
step 4.1, identify the invalid points in the n point cloud data PN with a threshold method and zero them out, thereby obtaining a cleaned point cloud data set;
step 4.2, denoise and smooth the cleaned point cloud data set using a KNN algorithm doubly constrained by distance and neighbor count, obtaining a denoised point cloud data set;
step 5, point cloud rarefying:
thin the denoised point cloud data set using a point cloud reduction algorithm based on K-means++ clustering, obtaining the thinned point cloud data;
step 6, perform visualization processing on the thinned point cloud data to obtain a three-dimensional point cloud model of the urban terrain.
Compared with the prior art, in the present unmanned-aerial-vehicle-based multi-view three-dimensional reconstruction method and system, the unmanned aerial vehicle carries the image acquisition device, which acquires a plurality of two-dimensional images of a target building from a plurality of preset directions at a plurality of preset viewing angles; an image processor then generates a three-dimensional model of the target building from these two-dimensional images using preset image processing software, so that the model is generated from images of the target building taken at multiple viewing angles.
The method comprehensively acquires the surface data of the target building or cultural relic, is simple to implement, uses inexpensive equipment, has a short working period and low cost investment, and the resulting three-dimensional reconstruction has high precision and good effect.
The invention adopts the timestamp data integration method to effectively match GPS data and laser radar data, accelerates the data integration speed, improves the efficiency of extracting effective data from mass point cloud data, and can carry out three-dimensional reconstruction on different types of landforms.
The method adopts a KNN algorithm doubly constrained by distance and neighbor count to automatically remove outliers and noisy points, and can smooth the point cloud to a certain degree, thereby accelerating the denoising speed.
Has good market application value.
The technical features described above may be combined with one another to form various embodiments not individually enumerated, all of which are regarded as falling within the scope of the invention described in this specification. Modifications and variations may occur to those skilled in the art in light of the above teachings, and all such modifications and variations are intended to fall within the scope of the invention as defined by the appended claims.
Claims (3)
1. A three-dimensional reconstruction system based on a three-dimensional reconstruction method, the method comprising:
step S1, calibrate the three-dimensional reconstruction coordinate system; acquire the point cloud data of the first frame of three-dimensional measurement data of the measured object, this point cloud being called the global point cloud data and the coordinate system of the first frame being called the global coordinate system;
step S2, perform three-dimensional measurement on the surface of a local area of the measured object and reconstruct it using the binocular stereo vision principle to obtain the point cloud data of the local area, the local area overlapping the area covered by the global point cloud data;
step S3, transform the local point cloud data into the global coordinate system, register the local point cloud against the global point cloud using the overlap area, and update the global point cloud data;
step S4, keeping the measured object still, change the measurement viewing angle and repeat steps S2 to S3 until the whole object has been measured;
step S5, perform global optimization on the global point cloud data updated after measurement is complete, obtaining the point cloud model;
wherein step S2 specifically comprises the following steps:
step S21, measure the surface of the local area in three dimensions to obtain a first speckle image and a second speckle image of the local area;
step S22, determine, for each pixel in the first speckle image, its integer-pixel corresponding point in the second speckle image;
step S23, using the integer-pixel corresponding points and the coordinates of each pixel in the first speckle image, search the second speckle image for the sub-pixel corresponding points;
step S24, perform three-dimensional reconstruction using the binocular stereo vision principle, combining a Kirsch edge-detection step with the sub-pixel correspondences of the second speckle image, to obtain the local point cloud data of the surface of the measured object.
2. The three-dimensional reconstruction system of claim 1, wherein the Kirsch algorithm comprises:
step 1, smoothing the original image with a Gaussian filter of specified standard deviation σ, then computing the local gradient g(x, y) and the edge direction α(x, y) at each point;
step 2, performing the Kirsch computation on the resulting gradient image: assuming the image has H × W pixels, the edge pixels of an image generally do not exceed 5 × H (an image with a salient target may use a more relaxed limit); take an initial threshold T0 and compute the Kirsch operator K(i) for each pixel i; if K(i) > T0 holds, i is an edge point and the edge-point count N is incremented by 1; once the number of edge points exceeds 5 × H while i is still less than the total number of pixels in the image, the threshold was set too low, so that many pixels that are not edge points were also accepted; the minimum K(i) among those satisfying the condition, denoted Kmin, is then taken as the new threshold, and the whole process is adjusted as follows:
(1) if K(i) > T, i is an edge point; record the coordinates of edge point i and the minimum value Kmin = min[K(i)], and increment N by 1;
(2) once N ≥ 5 × H, adjust the threshold to the minimum that satisfies the lowest edge requirement, i.e., T = Kmin;
(3) compare the recorded edge points with the new threshold, take the points greater than the new threshold as the new edge points, recompute Kmin under the new threshold, and record the number newN of new edge points;
(4) restart the edge-point count from the new edge points, i.e., set N = newN;
(5) continue processing the remaining pixels as in step 2, and return to (2) if N ≥ 5 × H again;
(6) if N < 5 × H, let T2 = T and T1 = βT2 (where 0 < β < 1, β being a constant determined experimentally);
step 3, edge extraction: threshold the gradient image generated in step 1 with the two thresholds T1 and T2; pixels with values greater than T2 are called strong edge pixels, and pixels with values between T1 and T2 are called weak edge pixels; a weak edge is included in the output only when it is connected to a strong edge.
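A minimal sketch of the Kirsch response and the dual-threshold edge extraction of step 3, assuming the eight standard 3×3 Kirsch compass masks and taking T1 and T2 as given (the adaptive threshold search of step 2 is omitted here for brevity):

```python
import numpy as np

# The eight 3x3 Kirsch compass masks; K(i) is the maximum response over all directions.
KIRSCH = [np.array(m) for m in (
    [[ 5,  5,  5], [-3, 0, -3], [-3, -3, -3]],
    [[ 5,  5, -3], [ 5, 0, -3], [-3, -3, -3]],
    [[ 5, -3, -3], [ 5, 0, -3], [ 5, -3, -3]],
    [[-3, -3, -3], [ 5, 0, -3], [ 5,  5, -3]],
    [[-3, -3, -3], [-3, 0, -3], [ 5,  5,  5]],
    [[-3, -3, -3], [-3, 0,  5], [-3,  5,  5]],
    [[-3, -3,  5], [-3, 0,  5], [-3, -3,  5]],
    [[-3,  5,  5], [-3, 0,  5], [-3, -3, -3]],
)]

def kirsch_response(img):
    """K(i): maximum of the 8 compass-mask responses at each interior pixel."""
    img = img.astype(float)
    H, W = img.shape
    out = np.zeros_like(img)
    for y in range(1, H - 1):
        for x in range(1, W - 1):
            patch = img[y - 1:y + 2, x - 1:x + 2]
            out[y, x] = max((patch * m).sum() for m in KIRSCH)
    return out

def hysteresis(K, T1, T2):
    """Keep strong edges (K > T2) plus weak edges (T1 < K <= T2) connected to them."""
    strong = K > T2
    weak = (K > T1) & ~strong
    keep = strong.copy()
    changed = True
    while changed:                       # grow strong edges into 4-connected weak pixels
        grown = keep.copy()
        grown[1:, :] |= keep[:-1, :]; grown[:-1, :] |= keep[1:, :]
        grown[:, 1:] |= keep[:, :-1]; grown[:, :-1] |= keep[:, 1:]
        new = keep | (grown & weak)
        changed = bool((new ^ keep).any())
        keep = new
    return keep
```

On a simple vertical step image, both columns adjacent to the step respond, and the weaker of the two survives only because it touches the stronger one, which is exactly the behaviour step 3 describes.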
3. The three-dimensional reconstruction system of claim 1, wherein the three-dimensional reconstruction method comprises:
step 1, data acquisition:
step 1.1, using the onboard GPS on the quad-rotor unmanned aerial vehicle to acquire a positioning information set in RMC format in real time, and sending it frame by frame, in sequence, to the ground base station for storage, wherein any α-th piece of positioning information comprises: an α-th GPS timestamp RMCα.timetag, an α-th longitude and latitude RMCα.position, and an α-th piece of heading information RMCα.track;
step 1.2, using the airborne laser radar on the quad-rotor unmanned aerial vehicle to acquire an urban terrain data set D, and sending it frame by frame, in sequence, to the ground base station for storage, wherein any j-th piece of urban terrain data dj comprises: a j-th point number dj.pointid, a j-th spatial coordinate point (xj, yj, zj), a j-th adjustment time dj.adjusttime, a j-th azimuth angle dj.azimuth, a j-th distance dj.distance, a j-th reflection intensity dj.intensity, a j-th radar channel dj.laser_id, and a j-th point timestamp dj.time;
step 2, data integration:
selecting, from the urban terrain data set D and the positioning information set, each piece of urban terrain data and positioning information that satisfies formula (1), thereby obtaining n pieces of data forming a data set Pfail:
to the rotated n spatial coordinate points:
step 4, point cloud denoising:
step 4.1, using a threshold method to obtain the invalid points in the n pieces of point cloud data PN and clearing them to zero, thereby obtaining a culled point cloud data set;
step 4.2, denoising and smoothing the culled point cloud data set using a KNN algorithm doubly constrained by distance and number, to obtain a denoised point cloud data set;
step 5, point cloud thinning:
thinning the denoised point cloud data set using a point cloud reduction algorithm based on K-means++ clustering, to obtain thinned point cloud data;
step 6, visualizing the thinned point cloud data to obtain a three-dimensional point cloud model of the urban terrain.
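Steps 4 and 5 above can be sketched as follows. The claim does not fix the exact distance/number constraints or the cluster count, so `k`, `r`, `m`, and `n_clusters` are illustrative assumptions, and one plausible reading of the doubly constrained KNN rule is used: keep a point only if enough of its k nearest neighbours are sufficiently close.

```python
import numpy as np

def knn_denoise(pts, k=8, r=1.0, m=4):
    """Keep a point only if at least m of its k nearest neighbours lie within
    radius r -- one plausible reading of a distance-and-number double constraint."""
    d = np.sqrt(((pts[:, None] - pts[None]) ** 2).sum(-1))
    np.fill_diagonal(d, np.inf)                   # a point is not its own neighbour
    knn = np.sort(d, axis=1)[:, :k]               # distances to the k nearest points
    return pts[(knn < r).sum(axis=1) >= m]

def kmeans_pp_thin(pts, n_clusters, iters=10, seed=0):
    """Thin a cloud to n_clusters representative points via k-means++ seeding
    followed by standard Lloyd refinement."""
    rng = np.random.default_rng(seed)
    centers = pts[[rng.integers(len(pts))]]
    for _ in range(n_clusters - 1):               # k-means++: sample far-away points
        d2 = ((pts[:, None] - centers[None]) ** 2).sum(-1).min(axis=1)
        centers = np.vstack([centers, pts[rng.choice(len(pts), p=d2 / d2.sum())]])
    for _ in range(iters):                        # Lloyd steps: assign, then re-centre
        label = ((pts[:, None] - centers[None]) ** 2).sum(-1).argmin(axis=1)
        centers = np.array([pts[label == c].mean(0) if (label == c).any() else centers[c]
                            for c in range(n_clusters)])
    return centers
```

Applied in sequence, a dense cloud with an isolated outlier loses the outlier in the denoising step, and the remaining points are reduced to a handful of cluster representatives in the thinning step.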
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010116115.2A CN111340942A (en) | 2020-02-25 | 2020-02-25 | Three-dimensional reconstruction system based on unmanned aerial vehicle and method thereof |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111340942A true CN111340942A (en) | 2020-06-26 |
Family
ID=71183637
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010116115.2A Withdrawn CN111340942A (en) | 2020-02-25 | 2020-02-25 | Three-dimensional reconstruction system based on unmanned aerial vehicle and method thereof |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111340942A (en) |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109459759A (en) * | 2018-11-13 | 2019-03-12 | 中国科学院合肥物质科学研究院 | City Terrain three-dimensional rebuilding method based on quadrotor drone laser radar system |
CN110189400A (en) * | 2019-05-20 | 2019-08-30 | 深圳大学 | A kind of three-dimensional rebuilding method, three-dimensional reconstruction system, mobile terminal and storage device |
Non-Patent Citations (1)
Title |
---|
YU Weibo et al.: "Improved Kirsch Face Edge Detection Method Based on the Canny Algorithm", Microcomputer Information (《微计算机信息》) *
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112668610A (en) * | 2020-12-08 | 2021-04-16 | 上海裕芮信息技术有限公司 | Building facade recognition model training method, system, equipment and memory |
CN113985383A (en) * | 2021-12-27 | 2022-01-28 | 广东维正科技有限公司 | Method, device and system for surveying and mapping house outline and readable medium |
CN116486012A (en) * | 2023-04-27 | 2023-07-25 | 中国民用航空总局第二研究所 | Aircraft three-dimensional model construction method, storage medium and electronic equipment |
CN116486012B (en) * | 2023-04-27 | 2024-01-23 | 中国民用航空总局第二研究所 | Aircraft three-dimensional model construction method, storage medium and electronic equipment |
CN118118911A (en) * | 2024-04-30 | 2024-05-31 | 中国电子科技集团公司第五十四研究所 | Multi-unmanned aerial vehicle collaborative deployment method with safety and communication double constraints |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110648398B (en) | Real-time ortho image generation method and system based on unmanned aerial vehicle aerial data | |
CN111629193B (en) | Live-action three-dimensional reconstruction method and system | |
CN107316325B (en) | Airborne laser point cloud and image registration fusion method based on image registration | |
CN112505065B (en) | Method for detecting surface defects of large part by indoor unmanned aerial vehicle | |
CN111340942A (en) | Three-dimensional reconstruction system based on unmanned aerial vehicle and method thereof | |
EP3228984B1 (en) | Surveying system | |
KR100912715B1 (en) | Method and apparatus of digital photogrammetry by integrated modeling for different types of sensors | |
EP3488603B1 (en) | Methods and systems for processing an image | |
CN110319772B (en) | Visual large-span distance measurement method based on unmanned aerial vehicle | |
CN111141264B (en) | Unmanned aerial vehicle-based urban three-dimensional mapping method and system | |
CN113192193B (en) | High-voltage transmission line corridor three-dimensional reconstruction method based on Cesium three-dimensional earth frame | |
US20200357141A1 (en) | Systems and methods for calibrating an optical system of a movable object | |
JP2003519421A (en) | Method for processing passive volume image of arbitrary aspect | |
JP2012118666A (en) | Three-dimensional map automatic generation device | |
CN109459759B (en) | Urban terrain three-dimensional reconstruction method based on quad-rotor unmanned aerial vehicle laser radar system | |
KR102557775B1 (en) | Drone used 3d mapping method | |
CN106969721A (en) | A kind of method for three-dimensional measurement and its measurement apparatus | |
CN110458945B (en) | Automatic modeling method and system by combining aerial oblique photography with video data | |
Gao et al. | Multi-source data-based 3D digital preservation of largescale ancient chinese architecture: A case report | |
Bertram et al. | Generation the 3D model building by using the quadcopter | |
CN110780313A (en) | Unmanned aerial vehicle visible light stereo measurement acquisition modeling method | |
CN110021041B (en) | Unmanned scene incremental gridding structure reconstruction method based on binocular camera | |
Cavegn et al. | Evaluation of Matching Strategies for Image-Based Mobile Mapping | |
CN113963047B (en) | Method for locally and rapidly updating live-action fine modeling based on mobile phone image | |
CN112304250B (en) | Three-dimensional matching equipment and method between moving objects |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WW01 | Invention patent application withdrawn after publication |
Application publication date: 20200626 |