CN114283070B - Method for manufacturing terrain section by fusing unmanned aerial vehicle image and laser point cloud - Google Patents
- Publication number
- CN114283070B CN114283070B CN202210213772.8A CN202210213772A CN114283070B CN 114283070 B CN114283070 B CN 114283070B CN 202210213772 A CN202210213772 A CN 202210213772A CN 114283070 B CN114283070 B CN 114283070B
- Authority
- CN
- China
- Prior art keywords
- points
- section
- unmanned aerial vehicle
- grid
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Abstract
The invention discloses a method for producing a terrain section by fusing unmanned aerial vehicle images and laser point clouds, comprising: acquiring unmanned aerial vehicle image and laser point cloud data; generating a digital elevation model and performing orthorectification of the unmanned aerial vehicle images; constructing a section line with ground feature attribute labels; extracting skeleton features and detail points; and combining the section line with the ground feature attribute labels, the skeleton features and the detail points to generate a refined, sparsified terrain section result. The laser point cloud and the unmanned aerial vehicle images are used jointly to produce the section line, effectively overcoming the shortcomings of a single data source. The method is suitable for high-precision section line production in complex areas, and can also obtain attribute point information of ground features such as roads and houses, thereby meeting the various requirements of engineering survey and topographic mapping.
Description
Technical Field
The invention relates to the technical field of unmanned aerial vehicle surveying and mapping, and in particular to a method for producing a terrain section by fusing unmanned aerial vehicle images and laser point clouds.
Background
The terrain section is highly important terrain feature information: it reflects the elevation relief of a local area along a given plan-view line direction and vividly displays the terrain type and features of that area. Terrain sections are therefore widely used in engineering survey and construction for railways, highways and the like, as well as in land surveying and mapping. In the traditional method of acquiring a terrain section, an operator collects the coordinates of terrain points in the target area with instruments such as a total station or GPS/RTK, and then computes and draws a section vector diagram. In actual projects, however, the complex terrain of the survey area (such as mountains, canyons and dense forests) sometimes prevents operators from reaching the points to be measured, so section measurement accuracy cannot be guaranteed. Moreover, manual measurement has low working efficiency and high labor cost, and increasingly fails to meet the demands of current engineering construction and topographic mapping on section production.
With the development of software and hardware, unmanned aerial vehicles have gradually become an important means of acquiring surveying and mapping remote sensing data. Compared with traditional manned aerial photography, unmanned aerial vehicles have advantages such as low cost and flexibility, and play an increasingly important role in urban modeling and engineering survey and design. A digital image sensor carried on an unmanned aerial vehicle platform can acquire high-resolution image data, and the real-scene three-dimensional model obtained by image-based three-dimensional reconstruction can be used for section measurement. Compared with manual measurement, this greatly improves working efficiency while controlling cost. The problem, however, is that visible-light images cannot observe the ground in areas with heavy cover, so an accurate terrain section line cannot be obtained from a real-scene three-dimensional model in densely vegetated areas. Airborne laser radar, with its strong penetrating capability, can effectively solve terrain measurement in vegetation-covered areas and is therefore widely applied in surveying and mapping. However, engineering survey tasks for railways, highways and the like generally require the accurate positions of ground features such as roads and houses on the section, and laser radar only acquires three-dimensional ground coordinates without the spectral and texture information of ground features, so attributes such as roads cannot be directly interpreted from the point cloud.
In summary, both unmanned aerial vehicle images and laser point clouds can be used to produce terrain sections, but each has its shortcomings, and relying on a single technical means cannot yield a terrain section measurement method suitable for all conditions. Current unmanned aerial vehicle platforms can carry a laser radar and a visible-light image sensor simultaneously; how to fully and jointly use the three-dimensional ground coordinates measured by the laser radar and the spectral and texture information provided by the images, so as to obtain a terrain section that meets surveying and mapping demands, remains a difficulty.
Disclosure of Invention
Therefore, the invention aims to provide a method for producing a terrain section by fusing unmanned aerial vehicle images and laser point clouds, in which the laser point cloud and the unmanned aerial vehicle images are jointly used to produce the section line, effectively overcoming the shortcomings of a single data source. The method is suitable for high-precision section line production in complex areas (such as vegetation-covered areas), and can also obtain attribute point information of ground features such as roads and houses, thereby meeting the various requirements of engineering survey and topographic mapping.
In order to achieve the purpose, the invention discloses a method for manufacturing a terrain cross section by fusing an unmanned aerial vehicle image and a laser point cloud, which comprises the following steps:
s1, acquiring unmanned aerial vehicle images and laser point cloud data;
s2, extracting ground point data from the obtained laser point cloud data to generate a digital elevation model, and performing orthorectification on the unmanned aerial vehicle image based on the digital elevation model;
s3, constructing an irregular triangulation network with the extracted ground point data; according to input section plane position information, calculating from the irregular triangulation network the section line and its position on the orthoimage; and adding ground feature attribute labels to the section line by combining the image texture information of the unmanned aerial vehicle images, finally obtaining a section line with ground feature attribute labels;
s4, extracting key points from the section line and using them to obtain the skeleton features of the section line shape; and calculating detail points of local feature change by a local detection method;
and s5, combining the section line with ground feature attribute labels, the skeleton features and the detail points to generate a refined and sparsified terrain section result.
Further preferably, in S2, when the ground point data is extracted from the acquired laser point cloud data to generate the digital elevation model, the method includes the following steps:
s201, performing gross error point detection on the acquired laser point cloud, and removing gross error points according to a detection result;
s202, filtering the laser point cloud after the gross error points are removed, obtaining ground points and non-ground points with a filtering algorithm (including but not limited to a semi-global filtering algorithm, a progressive TIN densification filtering algorithm and a morphological filtering algorithm); and constructing a triangulation network with the ground points and interpolating to obtain the digital elevation model.
Further preferably, in S201, the following method is adopted to perform gross error point detection on the acquired laser point cloud:
dividing the acquired laser point cloud data into a plurality of cuboid grids according to a preset grid size, and counting the number of laser points falling into each cuboid grid; when the number of laser points in a cuboid grid is less than a preset threshold, the grid is marked as a suspected gross-error grid, and the laser point counts of its 26 neighborhood grids in three-dimensional space are examined; if at least one of the 26 neighborhood grids is a non-suspected grid, the cuboid grid is a normal grid; otherwise, all points in the cuboid grid are judged to be gross error points.
Further preferably, in S2, the method for performing orthorectification on the unmanned aerial vehicle image based on the digital elevation model includes:
projecting four vertexes of the acquired unmanned aerial vehicle image onto the digital elevation model to obtain the coverage range of the unmanned aerial vehicle image on the ground;
dividing the coverage area into two-dimensional grids along X and Y directions, and back-projecting a central point to the unmanned aerial vehicle image by utilizing a collinear condition equation for each grid to obtain an accurate position point of the central point on the unmanned aerial vehicle image;
and then calculating the gray value of the grid cell containing the precise position point by bilinear interpolation from the gray values of its neighborhood pixels; the gray values of all grid points are calculated in the same way to complete the orthorectification of the unmanned aerial vehicle image.
Further preferably, in S3, an irregular triangulation network is constructed using the extracted ground point data, and a section line is calculated using the irregular triangulation network according to the input section plane position information, including the steps of:
the input section plane position information comprises plane coordinates of a plurality of section nodes; according to the plane coordinates of the section nodes, the section is intercepted on the irregular triangular net to obtain an original section line L;
calculating the plane coordinates of the intersection point P between the section line L and each triangle edge of the triangulation network, and computing the elevation value of P by linear interpolation from the elevation values of the two endpoints of the intersected triangle edge;
and performing the elevation interpolation calculation on all the triangles to obtain the elevation values of all the section points, thereby forming a complete section line.
Further preferably, in S3, when an irregular triangulation network is constructed using the extracted ground point data, and the position of the cross-sectional line on the ortho-image is calculated using the irregular triangulation network based on the input cross-sectional plane position information, the following method is employed:
setting the plane coordinates of a section line node as (X, Y), and the affine transformation parameters of the unmanned aerial vehicle orthoimage as (a0, a1, a2, b0, b1, b2), the coordinates (x, y) of the node on the orthoimage are calculated as:
x = a0 + a1·X + a2·Y, y = b0 + b1·X + b2·Y.
further preferably, in S3, the ground feature attribute labels include house, road, scarp and river.
Further preferably, in S4, extracting key points from the cross section line, and obtaining the skeleton feature of the cross section line by using the key points, the method includes the following steps:
obtaining key points of a section line as initial skeleton points, wherein the key points comprise end points, highest points and lowest points;
connecting two adjacent skeleton points with a line to serve as a processing unit, and judging in sequence whether the interior of each processing unit contains other skeleton points;
and acquiring a vector set of all the skeleton points as the skeleton characteristics of the section line shape.
Further preferably, the method for sequentially determining whether each processing unit includes other skeleton points includes the following steps:
calculating, for any section node P inside each processing unit, the perpendicular distance d1 from P to the line connecting the two skeleton points, and the vertical (plumb) distance d2; and selecting the larger of d1 and d2 as the feature saliency value of P;
calculating the feature saliency values of all section points in the unit and finding the node Pmax with the maximum feature saliency value; if the feature saliency value of Pmax is greater than a first preset threshold T1, Pmax is considered a skeleton point and added to the skeleton point queue; otherwise, the processing unit is considered to contain no further skeleton points.
More preferably, in S4, when calculating the detail points of local feature change by the local detection method, three adjacent nodes P(i-1), P(i) and P(i+1) are taken in sequence starting from an end point of the section line; the perpendicular distance and the vertical (plumb) distance from the middle node P(i) to the line connecting P(i-1) and P(i+1) are calculated, and the larger of the two is taken as the local saliency value of P(i); if the local saliency value is greater than a second preset threshold T2, P(i) is considered a detail point. Each section node is judged in sequence in this way.
Compared with the prior art, the method for producing a terrain section by fusing unmanned aerial vehicle images and laser point clouds disclosed in this application has at least the following advantages:
1. The laser point cloud and the unmanned aerial vehicle images are used jointly to produce the section line, effectively overcoming the shortcomings of a single data source. The method is suitable for high-precision section line production in complex areas (such as vegetation-covered areas), and can also obtain attribute point information of ground features such as roads and houses, thereby meeting the various requirements of engineering survey and topographic mapping.
2. The method provides an effective way to refine and sparsify the section line nodes, obtaining both the skeleton points that reflect the overall terrain shape and the key points of local feature detail, and alleviating the large point count and high redundancy of traditional section results.
3. While guaranteeing the quality of the terrain section measurement result, the method greatly reduces the field workload of traditional terrain measurement, improves operating efficiency and safety, lowers the economic cost of production, and has strong practical application and popularization value.
Drawings
Fig. 1 is a schematic flow chart of the method for manufacturing a topographic cross section by fusing an unmanned aerial vehicle image and a laser point cloud.
Fig. 2 is a schematic diagram of the least square optimization of the center line of the top surface of the steel rail in the method for manufacturing the topographic cross section by fusing the unmanned aerial vehicle image and the laser point cloud provided by the invention.
Fig. 3 is a schematic diagram of track straight/curve judgment.
Detailed Description
The invention is described in further detail below with reference to the figures and the detailed description.
As shown in fig. 1, an embodiment of the invention provides a method for manufacturing a topographic cross section by fusing an image of an unmanned aerial vehicle and a laser point cloud, which includes the following steps:
s1, acquiring unmanned aerial vehicle images and laser point cloud data; it should be noted that this also includes the accurate exterior orientation elements of the images and the interior orientation parameters of the camera;
s2, extracting ground point data from the obtained laser point cloud data to generate a digital elevation model, and performing orthorectification on the unmanned aerial vehicle image based on the digital elevation model;
s3, constructing an irregular triangulation network with the extracted ground point data; according to input section plane position information, calculating from the irregular triangulation network the section line and its position on the orthoimage; and adding ground feature attribute labels to the section line by combining the image texture information of the unmanned aerial vehicle images, finally obtaining a section line with ground feature attribute labels;
s4, extracting key points from the section line and using them to obtain the skeleton features of the section line shape; and calculating detail points of local feature change by a local detection method;
and s5, combining the section line with ground feature attribute labels, the skeleton features and the detail points to generate a refined and sparsified terrain section result.
In S2, when extracting ground point data from the acquired laser point cloud data to generate the digital elevation model, low points and noise points are first removed from the laser point cloud by a grid detection method, the point cloud is then filtered, and the digital elevation model is generated from the ground points, specifically comprising the following steps:
s201, performing gross error point detection on the acquired laser point cloud, and removing gross error points according to detection results;
s202, filtering the laser point cloud after the gross error points are removed, obtaining ground points and non-ground points with a filtering algorithm (specifically, a semi-global filtering algorithm, a progressive TIN densification filtering algorithm or a morphological filtering algorithm); and constructing a triangulation network with the ground points and interpolating to obtain the digital elevation model.
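As one illustration of the filtering step, the sketch below implements a far simpler stand-in than the semi-global, progressive-TIN or morphological filters named above: within each XY grid cell, points close to the cell's lowest elevation are labelled ground. The function name and the `cell`/`dz` parameters are illustrative, not taken from the patent.

```python
import numpy as np

def simple_ground_filter(points, cell=2.0, dz=0.5):
    """Crude ground/non-ground split: within each XY grid cell, points
    within dz of the cell's lowest elevation are labelled ground."""
    ij = np.floor((points[:, :2] - points[:, :2].min(axis=0)) / cell).astype(int)
    # pack the 2-D cell index into a single key per point
    keys = ij[:, 0] * (ij[:, 1].max() + 1) + ij[:, 1]
    ground = np.zeros(len(points), dtype=bool)
    for k in np.unique(keys):
        sel = keys == k
        zmin = points[sel, 2].min()
        # low points of the cell are ground; higher returns are taken
        # as vegetation, buildings or other non-ground objects
        ground[sel] = points[sel, 2] <= zmin + dz
    return ground
```

A real implementation would follow this with the TIN construction and interpolation to a raster DEM; the cell-minimum rule here only conveys the general shape of a morphological-style filter.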
In S201, the following method is adopted to perform gross error point detection on the acquired laser point cloud:
dividing the acquired laser point cloud data into a plurality of cuboid grids according to a preset grid size, and counting the number of laser (LiDAR) points falling into each cuboid grid; as shown in fig. 2, if the number of points in a grid is less than a preset threshold, the points in that grid are considered suspected gross error points and the grid a suspected gross-error grid. For each suspected gross-error grid, the LiDAR point counts of its 26 neighborhood grids in three-dimensional space are examined; if at least one of the 26 neighborhood grids is a non-suspected grid, the grid is a normal grid; otherwise, all points inside it are judged to be gross error points.
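The grid-voting detection described above can be sketched as follows; the `cell` size and `min_pts` threshold are assumed example values, not values from the patent.

```python
import numpy as np

def detect_gross_errors(points, cell=5.0, min_pts=3):
    """Flag gross-error (outlier) LiDAR points via 3-D grid voting.
    Returns a boolean mask, True where a point is judged a gross error."""
    idx = np.floor((points - points.min(axis=0)) / cell).astype(int)
    cells, inv, counts = np.unique(idx, axis=0, return_inverse=True,
                                   return_counts=True)
    # "normal" cells hold at least min_pts points and are never suspected
    normal = {tuple(c) for c, n in zip(cells, counts) if n >= min_pts}
    gross = np.zeros(len(points), dtype=bool)
    for k, c in enumerate(cells):
        if counts[k] >= min_pts:
            continue
        cx, cy, cz = c
        # a suspected cell is rescued if any of its 26 three-dimensional
        # neighbours is a non-suspected (normal) cell
        rescued = any(
            (cx + dx, cy + dy, cz + dz) in normal
            for dx in (-1, 0, 1) for dy in (-1, 0, 1) for dz in (-1, 0, 1)
            if (dx, dy, dz) != (0, 0, 0))
        if not rescued:
            gross[inv == k] = True
    return gross
```

A dense point cluster keeps all its points, while an isolated return far from any well-populated cell is flagged for removal before filtering.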
In S2, performing orthorectification on the unmanned aerial vehicle image based on the digital elevation model, using the following method:
projecting four vertexes of the acquired unmanned aerial vehicle image onto the digital elevation model to obtain the coverage range of the unmanned aerial vehicle image on the ground;
dividing the coverage area into two-dimensional grids along X and Y directions, and back-projecting a central point to the unmanned aerial vehicle image by utilizing a collinear condition equation for each grid to obtain an accurate position point of the central point on the unmanned aerial vehicle image;
and then calculating the gray value of the grid cell containing the precise position point by bilinear interpolation from the gray values of its neighborhood pixels; the gray values of all grid points are calculated in the same way to complete the orthorectification of the unmanned aerial vehicle image.
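The bilinear resampling used to fill each orthoimage grid cell can be sketched as below; the back-projection through the collinearity equations is assumed to have already produced the fractional pixel position (x, y) on the UAV frame.

```python
import numpy as np

def bilinear_sample(image, x, y):
    """Gray value at fractional pixel (x, y) by bilinear interpolation
    over the four neighbouring pixels of a 2-D gray image."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    v00, v10 = image[y0, x0], image[y0, x0 + 1]
    v01, v11 = image[y0 + 1, x0], image[y0 + 1, x0 + 1]
    # weights are the areas of the opposite sub-rectangles
    return float((1 - dx) * (1 - dy) * v00 + dx * (1 - dy) * v10
                 + (1 - dx) * dy * v01 + dx * dy * v11)
```

Looping this sampler over every ground grid cell yields the corrected orthoimage; positions falling outside the frame would need a bounds check omitted here for brevity.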
In S3, an irregular triangulation network is constructed using the extracted ground point data, and a section line is calculated using the irregular triangulation network according to the input section plane position information, including the steps of:
the input section plane position information comprises plane coordinates of a plurality of section nodes; according to the plane coordinates of the section nodes, the section is intercepted on the irregular triangular net to obtain an original section line L;
calculating the plane coordinates of the intersection point P between the section line L and each triangle edge of the triangulation network, and computing the elevation value of P by linear interpolation from the elevation values of the two endpoints of the intersected triangle edge;
and performing the elevation interpolation calculation on all the triangles to obtain the elevation values of all the section points to form a complete section line.
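The per-edge computation reads as a plain 2-D segment intersection with the elevation carried along the triangle edge; a minimal sketch with illustrative names, where `p1`/`p2` are the (x, y) section endpoints and `a`/`b` the (x, y, z) edge endpoints:

```python
def edge_intersection_z(p1, p2, a, b):
    """Intersect the 2-D section segment p1->p2 with a TIN triangle edge
    a->b and return (x, y, z) of the crossing, with z linearly
    interpolated between the edge endpoints. None if no crossing."""
    (x1, y1), (x2, y2) = p1, p2
    ax, ay, az = a
    bx, by, bz = b
    d = (x2 - x1) * (by - ay) - (y2 - y1) * (bx - ax)
    if abs(d) < 1e-12:
        return None                      # parallel: no single crossing
    # s: parameter along the section segment, t: along the triangle edge
    s = ((ax - x1) * (by - ay) - (ay - y1) * (bx - ax)) / d
    t = ((ax - x1) * (y2 - y1) - (ay - y1) * (x2 - x1)) / d
    if not (0 <= s <= 1 and 0 <= t <= 1):
        return None
    x = x1 + s * (x2 - x1)
    y = y1 + s * (y2 - y1)
    z = az + t * (bz - az)               # elevation by linear interpolation
    return (x, y, z)
```

Collecting these crossings for every edge the section passes through, sorted by chainage, gives the complete section line of the step above.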
Further preferably, in S3, when an irregular triangulation network is constructed using the extracted ground point data, and the position of the cross-sectional line on the ortho-image is calculated using the irregular triangulation network based on the input cross-sectional plane position information, the following method is employed:
setting the plane coordinates of a section line node as (X, Y), and the affine transformation parameters of the unmanned aerial vehicle orthoimage as (a0, a1, a2, b0, b1, b2), the coordinates (x, y) of the node on the orthoimage are calculated as:
x = a0 + a1·X + a2·Y, y = b0 + b1·X + b2·Y.
According to the calculated image coordinates (x, y), the ground feature attribute label of the node can be judged from the image texture at that position; the attribute labels include house, road, scarp, river and the like, so attribute information is assigned to the relevant nodes, and a complete terrain section line carrying ground feature semantic attribute information is obtained.
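A minimal sketch of the affine mapping and attribute lookup; `label_map`, a per-pixel label raster derived from interpreting the orthoimage texture, is a hypothetical stand-in for the interpretation step, which the patent does not specify in code form.

```python
def node_to_orthoimage(X, Y, params):
    """Map a node's ground plane coordinates (X, Y) to orthoimage
    coordinates (x, y) with the six affine parameters
    (a0, a1, a2, b0, b1, b2), per the affine formula in the text."""
    a0, a1, a2, b0, b1, b2 = params
    return a0 + a1 * X + a2 * Y, b0 + b1 * X + b2 * Y

def node_attribute(X, Y, params, label_map):
    """Look up the ground feature label (e.g. 'road', 'house') of a
    node in a label raster indexed as label_map[row][col]."""
    x, y = node_to_orthoimage(X, Y, params)
    return label_map[int(round(y))][int(round(x))]
```

Attaching the returned label to each section node yields the section line with ground feature attribute labels described above.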
Further preferably, in S4, extracting key points in the cross-sectional line, and obtaining the skeleton feature of the cross-sectional line shape by using the key points, the method includes:
obtaining key points of a section line as initial skeleton points, wherein the key points comprise end points, highest points and lowest points;
connecting two adjacent skeleton points with a line to serve as a processing unit, and judging in sequence whether the interior of each processing unit contains other skeleton points;
and acquiring a vector set of all the skeleton points as the skeleton characteristics of the section line shape.
As shown in fig. 3, it is further preferable that the method sequentially determines whether or not each of the processing units includes another skeleton point, including the steps of:
calculating, for any section node P inside each processing unit, the perpendicular distance d1 from P to the line connecting the two skeleton points, and the vertical (plumb) distance d2; and selecting the larger of d1 and d2 as the feature saliency value of P;
calculating the feature saliency values of all section points in the unit and finding the node Pmax with the maximum feature saliency value; if the feature saliency value of Pmax is greater than a first preset threshold T1, Pmax is considered a skeleton point and added to the skeleton point queue; otherwise, the processing unit is considered to contain no further skeleton points.
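The refinement above reads as a Douglas-Peucker-style recursion with a two-part saliency measure. A sketch under the assumption that section nodes are (chainage, elevation) pairs with strictly increasing chainage; names and threshold value are illustrative:

```python
import math

def _saliency(p, a, b):
    """Feature saliency: the larger of the perpendicular distance and
    the vertical (plumb) distance from node p to the chord a-b."""
    (ax, az), (bx, bz), (px, pz) = a, b, p
    perp = abs((bx - ax) * (az - pz) - (ax - px) * (bz - az)) \
        / math.hypot(bx - ax, bz - az)
    vert = abs(pz - (az + (bz - az) * (px - ax) / (bx - ax)))
    return max(perp, vert)

def extract_skeleton(nodes, t1):
    """Return indices of skeleton points: seed with the end points plus
    the highest and lowest points, then split each processing unit at
    its most salient interior node while saliency exceeds t1."""
    zs = [z for _, z in nodes]
    keep = {0, len(nodes) - 1, zs.index(max(zs)), zs.index(min(zs))}
    changed = True
    while changed:
        changed = False
        pts = sorted(keep)
        for a, b in zip(pts, pts[1:]):
            if b - a < 2:          # no interior nodes in this unit
                continue
            best = max(range(a + 1, b),
                       key=lambda i: _saliency(nodes[i], nodes[a], nodes[b]))
            if _saliency(nodes[best], nodes[a], nodes[b]) > t1:
                keep.add(best)     # most salient node becomes a skeleton point
                changed = True
    return sorted(keep)
```

On a flat profile with a single peak, the peak and its shoulders survive while flat interior nodes are dropped, which is exactly the sparsification the text describes.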
More preferably, in S4, when calculating the detail points of local feature change by the local detection method, three adjacent nodes P(i-1), P(i) and P(i+1) are taken in sequence starting from an end point of the section line; the perpendicular distance and the vertical (plumb) distance from the middle node P(i) to the line connecting P(i-1) and P(i+1) are calculated, and the larger of the two is taken as the local saliency value of P(i); if the local saliency value is greater than a second preset threshold T2, P(i) is considered a detail point. Each section node is judged in sequence in this way.
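The local detail-point test over node triples can be sketched likewise, under the same assumption that nodes are (chainage, elevation) pairs with strictly increasing chainage:

```python
import math

def detail_points(nodes, t2):
    """Slide over consecutive node triples along the section line; the
    middle node is a detail point when the larger of its perpendicular
    and vertical (plumb) distances to the chord joining its two
    neighbours exceeds the second preset threshold t2."""
    out = []
    for i in range(1, len(nodes) - 1):
        (ax, az), (px, pz), (bx, bz) = nodes[i - 1], nodes[i], nodes[i + 1]
        perp = abs((bx - ax) * (az - pz) - (ax - px) * (bz - az)) \
            / math.hypot(bx - ax, bz - az)
        vert = abs(pz - (az + (bz - az) * (px - ax) / (bx - ax)))
        if max(perp, vert) > t2:
            out.append(i)
    return out
```

Unlike the skeleton recursion, this test is purely local, so it picks up small breaks in slope that a long chord would smooth over.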
Finally, the ground feature attribute points, the skeleton points and the detail points are combined to obtain the refined and sparsified terrain section point result, which is output and saved.
It should be understood that the above examples are given only for clarity of illustration and are not intended to limit the embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to exhaustively list all embodiments here, and obvious variations or modifications derived therefrom remain within the scope of the invention.
Claims (10)
1. A method for manufacturing a terrain section by fusing unmanned aerial vehicle images and laser point clouds is characterized by comprising the following steps:
s1, acquiring unmanned aerial vehicle images and laser point cloud data;
s2, extracting ground point data from the obtained laser point cloud data to generate a digital elevation model, and performing orthorectification on the unmanned aerial vehicle image based on the digital elevation model;
s3, constructing an irregular triangulation network with the extracted ground point data; according to input section plane position information, calculating from the irregular triangulation network the section line and its position on the orthoimage; and adding ground feature attribute labels to the section line by combining the image texture information of the unmanned aerial vehicle images, finally obtaining a section line with ground feature attribute labels;
s4, extracting key points from the section line and using them to obtain the skeleton features of the section line shape; and calculating detail points of local feature change by a local detection method;
and s5, combining the section line with ground feature attribute labels, the skeleton features and the detail points to generate a refined and sparsified terrain section result.
2. The method for manufacturing the terrain section fusing the unmanned aerial vehicle image and the laser point cloud as claimed in claim 1, wherein in S2, when extracting ground point data from the obtained laser point cloud data and generating the digital elevation model, the method comprises the following steps:
s201, performing gross error point detection on the acquired laser point cloud, and removing gross error points according to detection results;
s202, filtering the laser point cloud after the gross error points are removed, and obtaining ground points and non-ground points with a filtering algorithm; and constructing a triangulation network with the ground points and interpolating to obtain the digital elevation model.
3. The method for manufacturing the terrain cross section fusing the unmanned aerial vehicle image and the laser point cloud according to claim 2, wherein in S201, the laser point cloud obtained is subjected to gross error point detection by adopting the following method:
dividing the acquired laser point cloud data into a plurality of cuboid grids according to a preset grid size, and counting the number of laser points falling into each cuboid grid; when the number of laser points in a cuboid grid is less than a preset threshold, the grid is marked as a suspected gross-error grid, and the laser point counts of its 26 neighborhood grids in three-dimensional space are examined; if at least one of the 26 neighborhood grids is a non-suspected grid, the cuboid grid is a normal grid; otherwise, all points in the cuboid grid are judged to be gross error points.
4. The method for manufacturing a terrain profile fusing an unmanned aerial vehicle image and a laser point cloud according to claim 1, wherein in S2, the unmanned aerial vehicle image is subjected to orthorectification based on a digital elevation model by adopting the following method:
projecting four vertexes of the acquired unmanned aerial vehicle image onto the digital elevation model to obtain the coverage range of the unmanned aerial vehicle image on the ground;
dividing the coverage area into two-dimensional grids along X and Y directions, and back-projecting a central point to the unmanned aerial vehicle image by utilizing a collinear condition equation for each grid to obtain an accurate position point of the central point on the unmanned aerial vehicle image;
and calculating the gray value of the grid cell containing the accurate position point by bilinear interpolation from the gray values of its neighborhood pixels; the gray values of all grid points are calculated in the same way to complete the orthorectification of the unmanned aerial vehicle image.
5. The method for creating a topographic cross-section fusing an unmanned aerial vehicle image and a laser point cloud according to claim 1, wherein an irregular triangulation network is constructed in S3 using the extracted ground point data, and a cross-sectional line is calculated using the irregular triangulation network according to input cross-sectional plane position information, comprising:
the input section plane position information comprises plane coordinates of a plurality of section nodes; according to the plane coordinates of the section nodes, the section is intercepted on the irregular triangular net to obtain an original section line L;
calculating the plane coordinates of the intersection point P between the section line L and each triangle edge of the triangulation network, and computing the elevation value of P by linear interpolation from the elevation values of the two endpoints of the intersected triangle edge;
and performing the elevation interpolation calculation on all the triangles to obtain the elevation values of all the section points to form a complete section line.
6. The method of claim 1, wherein in step S3, an irregular triangulation network is constructed using the extracted ground point data, and the position of the cross-sectional line on the orthographic image is calculated using the irregular triangulation network based on the input cross-sectional plane position information, by using the following method:
setting the plane coordinates of a section line node as (X, Y), and the affine transformation parameters of the unmanned aerial vehicle orthoimage as (a0, a1, a2, b0, b1, b2), the coordinates (x, y) of the node on the orthoimage are calculated as:
x = a0 + a1·X + a2·Y, y = b0 + b1·X + b2·Y.
7. the method of claim 1, wherein in step S3, the ground feature labels include house, road, scarp, river.
8. The method for manufacturing the terrain cross section by fusing the unmanned aerial vehicle image and the laser point cloud according to claim 1, wherein in S4, key points are extracted from a cross section line, and a skeleton feature of a cross section line shape is obtained by using the key points, and the method comprises the following steps:
obtaining key points of a section line as initial skeleton points, wherein the key points comprise end points, highest points and lowest points;
taking the connecting line of two adjacent skeleton points as a processing unit, and sequentially judging whether the interior of each processing unit contains other skeleton points;
and acquiring a vector set of all the skeleton points as the skeleton characteristics of the section line shape.
9. The method for manufacturing the terrain cross section fusing the unmanned aerial vehicle image and the laser point cloud according to claim 8, wherein sequentially judging whether the interior of each processing unit contains other skeleton points comprises:
for any section node P inside a processing unit, calculating the perpendicular distance d1 and the plumb (vertical) distance d2 from P to the line connecting the two skeleton points of the unit, and selecting the larger of d1 and d2 as the characteristic saliency value of P;
calculating the characteristic saliency values of all section points in the unit to obtain the node Pmax with the maximum characteristic saliency value; if the saliency value of Pmax is greater than a first preset threshold T1, the node Pmax is regarded as a skeleton point and added to the skeleton point queue; otherwise, the interior of the processing unit is regarded as containing no skeleton points.
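Read together, claims 8 and 9 describe a Douglas-Peucker-style recursive split of the section line, with the saliency of a node taken as the larger of its perpendicular and plumb distances to the chord between two skeleton points. A minimal Python sketch under that reading (nodes are (chainage, elevation) pairs; all names are illustrative, not from the patent):

```python
import math

def saliency(node, a, b):
    """Larger of the perpendicular distance and the plumb (vertical)
    distance from `node` to the chord a-b."""
    ax, az = a
    bx, bz = b
    px, pz = node
    t = 0.0 if bx == ax else (px - ax) / (bx - ax)
    d_plumb = abs(pz - (az + t * (bz - az)))          # vertical offset from the chord
    num = abs((bx - ax) * (az - pz) - (ax - px) * (bz - az))
    d_perp = num / math.hypot(bx - ax, bz - az)       # point-to-line distance
    return max(d_perp, d_plumb)

def most_salient(nodes, i, j):
    """Index and saliency of the most salient node strictly between i and j."""
    best_k, best_v = None, -1.0
    for k in range(i + 1, j):
        v = saliency(nodes[k], nodes[i], nodes[j])
        if v > best_v:
            best_k, best_v = k, v
    return best_k, best_v

def extract_skeleton(nodes, threshold):
    """Recursive split: keep a node as a skeleton point when its saliency
    within the current processing unit exceeds the threshold."""
    keep = {0, len(nodes) - 1}                        # end points are always kept
    stack = [(0, len(nodes) - 1)]
    while stack:
        i, j = stack.pop()
        if j - i < 2:
            continue
        k, v = most_salient(nodes, i, j)
        if v > threshold:
            keep.add(k)
            stack.append((i, k))                      # re-examine both halves
            stack.append((k, j))
    return sorted(keep)
```

On a flat profile this returns only the two end points; a single peak above the threshold is retained as a skeleton point.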
10. The method of claim 9, wherein in step S4, when calculating the detail points of local feature changes by the local detection method, three adjacent nodes P(i-1), P(i) and P(i+1) are sequentially taken starting from an end point of the section line; the perpendicular distance d1 and the plumb distance d2 from the middle point P(i) to the line connecting P(i-1) and P(i+1) are calculated, and the larger of d1 and d2 is taken as the local saliency value of P(i); if the local saliency value is greater than a second preset threshold T2, P(i) is regarded as a detail point; and whether each section node is a detail point is judged in turn.
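The local three-point test of claim 10 can be sketched as follows (self-contained; names are illustrative, not from the patent):

```python
import math

def local_saliency(prev, mid, nxt):
    """Larger of the perpendicular and plumb distances from `mid` to the
    chord prev-nxt; points are (chainage, elevation) pairs."""
    ax, az = prev
    bx, bz = nxt
    px, pz = mid
    t = 0.0 if bx == ax else (px - ax) / (bx - ax)
    d_plumb = abs(pz - (az + t * (bz - az)))          # vertical offset from chord
    num = abs((bx - ax) * (az - pz) - (ax - px) * (bz - az))
    d_perp = num / math.hypot(bx - ax, bz - az)       # point-to-line distance
    return max(d_perp, d_plumb)

def detail_points(nodes, threshold):
    """Indices of interior nodes whose local saliency exceeds the
    second preset threshold."""
    return [i for i in range(1, len(nodes) - 1)
            if local_saliency(nodes[i - 1], nodes[i], nodes[i + 1]) > threshold]
```

Unlike the global skeleton split, this test only looks at a node's immediate neighbours, so it picks up small local breaks in the terrain that the skeleton pass would smooth over.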
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210213772.8A CN114283070B (en) | 2022-03-07 | 2022-03-07 | Method for manufacturing terrain section by fusing unmanned aerial vehicle image and laser point cloud |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114283070A CN114283070A (en) | 2022-04-05 |
CN114283070B true CN114283070B (en) | 2022-05-03 |
Family
ID=80882272
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210213772.8A Active CN114283070B (en) | 2022-03-07 | 2022-03-07 | Method for manufacturing terrain section by fusing unmanned aerial vehicle image and laser point cloud |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114283070B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115048838B (en) * | 2022-06-14 | 2024-04-09 | 湖南大学 | Human body feature-based rapid human skeleton finite element model modeling method |
CN115546266B (en) * | 2022-11-24 | 2023-03-17 | 中国铁路设计集团有限公司 | Multi-strip airborne laser point cloud registration method based on local normal correlation |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101154295A (en) * | 2006-09-28 | 2008-04-02 | 长江航道规划设计研究院 | Three-dimensional simulation electronic chart of navigation channel |
CN102682479A (en) * | 2012-04-13 | 2012-09-19 | 国家基础地理信息中心 | Method for generating three-dimensional terrain feature points on irregular triangulation network |
CN107092020A (en) * | 2017-04-19 | 2017-08-25 | 北京大学 | Merge the surface evenness monitoring method of unmanned plane LiDAR and high score image |
CN110046563A (en) * | 2019-04-02 | 2019-07-23 | 中国能源建设集团江苏省电力设计院有限公司 | A kind of transmission line of electricity measuring height of section modification method based on unmanned plane point cloud |
CN110390255A (en) * | 2019-05-29 | 2019-10-29 | 中国铁路设计集团有限公司 | High-speed rail environmental change monitoring method based on various dimensions feature extraction |
CN111429498A (en) * | 2020-03-26 | 2020-07-17 | 中国铁路设计集团有限公司 | Railway business line three-dimensional center line manufacturing method based on point cloud and image fusion technology |
CN111597605A (en) * | 2020-04-02 | 2020-08-28 | 中国国家铁路集团有限公司 | Railway dynamic simulation cockpit system |
CN112562079A (en) * | 2020-12-22 | 2021-03-26 | 中铁第四勘察设计院集团有限公司 | Method, device and equipment for thinning topographic section data |
Non-Patent Citations (5)
Title |
---|
"Road Profile Estimation Using a 3D Sensor and Inte"; Tao Ni et al.; Sensors; 2020-07-31; pp. 1-17 *
"Research on Fusion Technology of UAV Laser Point Cloud and Multispectral Data"; Chen Jie; China New Technologies and Products; 2021-07-31; pp. 5-7 *
"Application of UAV Aerial Photography in Railway Engineering"; Deng Jiwei; Railway Investigation and Surveying; 2020-04-30; pp. 23-27 *
"Research on the Application of UAV Comprehensive Railway Line Inspection"; Zhao Hai et al.; Scientific and Technological Achievements; 2020-07-28; pp. 1-5 *
"Fast Terrain Section Generation Algorithm for Massive Low-Altitude Airborne LiDAR Point Clouds"; Zhou Jianhong et al.; Journal of Geomatics Science and Technology; 2018-07-23; pp. 170-174 *
Also Published As
Publication number | Publication date |
---|---|
CN114283070A (en) | 2022-04-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN102506824B (en) | Method for generating digital orthophoto map (DOM) by urban low altitude unmanned aerial vehicle | |
CN111597666B (en) | Method for applying BIM to transformer substation construction process | |
US7944547B2 (en) | Method and system of generating 3D images with airborne oblique/vertical imagery, GPS/IMU data, and LIDAR elevation data | |
CN114283070B (en) | Method for manufacturing terrain section by fusing unmanned aerial vehicle image and laser point cloud | |
CN113607135B (en) | Unmanned aerial vehicle inclination photogrammetry method for road and bridge construction field | |
CN105783878A (en) | Small unmanned aerial vehicle remote sensing-based slope deformation detection and calculation method | |
CN104952107A (en) | Three-dimensional bridge reconstruction method based on vehicle-mounted LiDAR point cloud data | |
CN111105496A (en) | High-precision DEM construction method based on airborne laser radar point cloud data | |
CN112100715A (en) | Three-dimensional oblique photography technology-based earthwork optimization method and system | |
CN113916130B (en) | Building position measuring method based on least square method | |
CN111667569B (en) | Three-dimensional live-action soil visual accurate measurement and calculation method based on Rhino and Grasshopper | |
CN114859374B (en) | Newly-built railway cross measurement method based on unmanned aerial vehicle laser point cloud and image fusion | |
CN109146990B (en) | Building outline calculation method | |
CN110889899A (en) | Method and device for generating digital earth surface model | |
Sun et al. | Building displacement measurement and analysis based on UAV images | |
CN110046563B (en) | Power transmission line section elevation correction method based on unmanned aerial vehicle point cloud | |
CN111006645A (en) | Unmanned aerial vehicle surveying and mapping method based on motion and structure reconstruction | |
Mao et al. | Precision evaluation and fusion of topographic data based on UAVs and TLS surveys of a loess landslide | |
Rebelo et al. | Building 3D city models: Testing and comparing Laser scanning and low-cost UAV data using FOSS technologies | |
Ahmad et al. | Generation of three dimensional model of building using photogrammetric technique | |
CN113744393B (en) | Multi-level slope landslide change monitoring method | |
WO2022104251A1 (en) | Image analysis for aerial images | |
CN114004949A (en) | Airborne point cloud assisted mobile measurement system arrangement parameter calibration method and system | |
Song et al. | Multi-feature airborne LiDAR strip adjustment method combined with tensor voting algorithm | |
Chen et al. | 3D model construction and accuracy analysis based on UAV tilt photogrammetry |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||