CN114049474B - High-precision remote sensing rapid mapping method and device and storage medium - Google Patents

High-precision remote sensing rapid mapping method and device and storage medium

Info

Publication number
CN114049474B
Authority
CN
China
Prior art keywords
image
point cloud
remote sensing
fused
boundary line
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210034221.5A
Other languages
Chinese (zh)
Other versions
CN114049474A (en)
Inventor
袁铁彪
潘牧
陈淑鑫
么大锁
孙锦飒
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin City Beidou Satellite Navigation Positioning Technology Co ltd
Tianjin Renai College
Original Assignee
Tianjin City Beidou Satellite Navigation Positioning Technology Co ltd
Tianjin Renai College
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin City Beidou Satellite Navigation Positioning Technology Co ltd and Tianjin Renai College
Priority to CN202210034221.5A
Publication of CN114049474A
Application granted
Publication of CN114049474B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00: Manipulating 3D models or images for computer graphics
    • G06T 19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Landscapes

  • Engineering & Computer Science (AREA)
  • Architecture (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a high-precision remote sensing rapid mapping method, device, and storage medium. The method comprises: acquiring a multi-view image and a point cloud image of a target area at the current moment; acquiring satellite positioning data of the flight device at the current moment and determining the center points of the multi-view image and the point cloud image from the satellite positioning data; overlap-comparing the point cloud image and the multi-view image against a standard image of the target area with the center point as the reference, and applying deformation correction to the multi-view image to obtain a first image to be fused; applying deformation correction to the point cloud image according to the overlap comparison, determining a contrast boundary line from the pixel gray-level differences among the multiple multi-view images during overlap, and filtering out the noise points of the corrected point cloud image lying outside the contrast boundary line to obtain a second image to be fused; and fusing the first image to be fused and the second image to be fused to obtain a fused imaging image of the target area. The method improves the boundary recognition rate of the image, achieves high imaging accuracy, and facilitates remote sensing mapping of roads.

Description

High-precision remote sensing rapid mapping method and device and storage medium
Technical Field
The invention relates to the technical field of remote sensing mapping, and in particular to a high-precision remote sensing rapid mapping method, device, and storage medium.
Background
Road calibration and identification in traditional remote sensing mapping are unsatisfactory: because roads show little variation in elevation, three-dimensional laser scanning produces a weak response, and the boundary recognition rate obtained from the point cloud data is poor. Moreover, in processing the formed image, correcting the image deformation caused by the tilted angle at which an unmanned aerial vehicle photographs ground objects requires multidimensional processing, which involves a large amount of computation and a complex calculation process.
Disclosure of Invention
To solve, or at least partially solve, the technical problems mentioned in the background, the invention provides a high-precision remote sensing rapid mapping method, device, and storage medium, which improve the boundary recognition rate of the point cloud image, keep the image processing simple, and help improve mapping precision.
In a first aspect, the invention provides a high-precision remote sensing rapid mapping method comprising the following steps:
acquiring a multi-view image and a point cloud image of a target area at the current moment, wherein a first imaging device producing the multi-view image and a second imaging device producing the point cloud image are both mounted on the same flight device, and the multi-view image is a plurality of images with pixel gray-level differences captured by the first imaging device from a plurality of shooting angles at the current moment;
acquiring satellite positioning data of the flight device at the current moment, and determining the center points of the multi-view image and the point cloud image according to the satellite positioning data;
performing, with the center point as a reference, an overlap comparison of the point cloud image and the multi-view image against a standard image of the target area;
carrying out deformation correction on the multi-view image according to the overlap comparison to obtain a first image to be fused;
carrying out deformation correction on the point cloud image according to the overlap comparison, determining a contrast boundary line from the pixel gray-level differences among the multiple multi-view images during overlap, and filtering out the noise points of the corrected point cloud image lying outside the contrast boundary line to obtain a second image to be fused;
and fusing the first image to be fused and the second image to be fused to obtain a fused imaging image of the target area.
In this scheme, a first imaging device for generating the multi-view image (such as a binocular or multi-view camera) and a second imaging device for generating the point cloud image (such as a three-dimensional laser scanner) are both installed on the flight device; at a given moment, the flight device performs three-dimensional point cloud imaging and multi-view visual imaging of the target area simultaneously from a specific position in the air.
In this scheme, the first imaging device can be a binocular or multi-view camera; photographing the target area with it yields one or more groups of binocular vision images, and the different shooting angles of the camera produce the pixel gray-level differences.
In this scheme, a positioning device for acquiring positioning data is also installed on the flight device; for example, differential satellite positioning data are acquired by a satellite positioning device. The satellite positioning data are then used to determine the center points of the multi-view image and the point cloud image.
In this scheme, the point cloud image and the multiple multi-view images are overlap-compared against the standard image with the center point as the reference. After the comparison, the point cloud image and the multi-view images first undergo a single-pass deformation correction; specifically, the parts that do not overlap the standard image are deleted (the noise points in the non-overlapping parts are removed). To improve the boundary recognition rate of the point cloud image, the deformation-corrected point cloud image is then filtered further: during the overlap comparison the pixel gray-level differences among the multiple multi-view images become more pronounced, a contrast boundary line is determined from those differences, and the noise points of the point cloud image lying outside the contrast boundary line are filtered out.
With the above technical scheme, the point cloud image and the multi-view image are overlap-compared against a pre-constructed standard image of the target area; the deformed images can be corrected in a single comparison pass and the noise points of the point cloud image removed, which improves imaging accuracy. Through image fusion, the point cloud data are combined with the depth and parallax of the multi-view visual images to enhance the elevation information of the road image, so the boundary recognition rate of the point cloud data is high, which facilitates road mapping and calibration.
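As an illustration only, the following Python sketch shows one way the single-pass deformation correction against the standard image could be realized; the patent does not name an alignment algorithm, so the ECC (enhanced correlation coefficient) model, the affine motion assumption, and all function names here are assumptions of this sketch.

```python
import cv2
import numpy as np

def correct_deformation(captured: np.ndarray, standard: np.ndarray) -> np.ndarray:
    """Align a captured grayscale image to the standard image of the same
    target area (both centered on the satellite-positioned center point),
    then delete the non-overlapping fringe."""
    cap = captured.astype(np.float32)
    std = standard.astype(np.float32)
    warp = np.eye(2, 3, dtype=np.float32)  # affine model for tilt-induced skew
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 100, 1e-6)
    _, warp = cv2.findTransformECC(std, cap, warp, cv2.MOTION_AFFINE, criteria)
    h, w = std.shape
    corrected = cv2.warpAffine(cap, warp, (w, h),
                               flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)
    mask = cv2.warpAffine(np.ones_like(cap), warp, (w, h),
                          flags=cv2.INTER_NEAREST + cv2.WARP_INVERSE_MAP)
    return corrected * (mask > 0)  # pixels outside the overlap are zeroed out
```

An affine model is a plausible choice because, over a small target area, the tilt of the airborne camera introduces approximately affine distortion.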
Preferably, filtering the noise points of the corrected point cloud image that lie outside the contrast boundary line specifically includes:
removing the discrete noise points of the point cloud image outside the contrast boundary line.
In this technical scheme, when the multiple multi-view images are overlapped, the pixel gray-level differences between them become more pronounced, so a contrast boundary line can be generated from those differences; the contrast boundary line then allows the point cloud image to be filtered further, improving the boundary recognition rate of the point cloud data. Specifically, the noise points outside the contrast boundary line may be removed to filter the point cloud image.
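A minimal sketch of this removal variant, assuming the contrast boundary line is available as a closed polygon of vertices (the polygon representation and the helper name are illustrative):

```python
import numpy as np
from matplotlib.path import Path

def remove_outside_noise(points_xy: np.ndarray, boundary_polygon: np.ndarray) -> np.ndarray:
    """points_xy: (N, 2) planar point cloud coordinates; boundary_polygon:
    (M, 2) vertices of the contrast boundary line, treated as a closed ring."""
    inside = Path(boundary_polygon).contains_points(points_xy)
    return points_xy[inside]  # discrete exterior noise points are dropped
```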
Preferably, filtering the noise points of the corrected point cloud image that lie outside the contrast boundary line may instead include:
converging the noise points of the point cloud image that lie outside the contrast boundary line.
In this technical scheme, the noise points outside the contrast boundary line are randomly distributed; converging these discretely distributed noise points also helps improve the boundary recognition rate of the point cloud data.
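A matching sketch of the convergence variant; snapping each exterior noise point onto its nearest boundary vertex is one assumed reading of "converging" the discretely distributed noise:

```python
import numpy as np
from matplotlib.path import Path

def converge_outside_noise(points_xy: np.ndarray, boundary_polygon: np.ndarray) -> np.ndarray:
    inside = Path(boundary_polygon).contains_points(points_xy)
    exterior = points_xy[~inside]
    # Snap every exterior noise point onto its nearest boundary vertex.
    dists = np.linalg.norm(exterior[:, None, :] - boundary_polygon[None, :, :], axis=2)
    snapped = boundary_polygon[np.argmin(dists, axis=1)]
    return np.vstack([points_xy[inside], snapped])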
Preferably, the high-precision remote sensing rapid mapping method further comprises: determining the shooting angle differences among the plurality of shooting angles of the multi-view image, and determining the pixel gray-level difference from those shooting angle differences.
In this scheme, different shooting angles cause differences in image pixels, i.e., pixel gray-scale differences.
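The patent does not give the mapping from shooting-angle difference to gray-level difference; the sketch below uses a made-up linear scaling of the detection threshold purely for illustration:

```python
import numpy as np

def gray_difference(view_a: np.ndarray, view_b: np.ndarray,
                    angle_a_deg: float, angle_b_deg: float) -> np.ndarray:
    """Mark pixels whose gray-level difference between two views exceeds a
    threshold tied to the views' angular separation."""
    diff = np.abs(view_a.astype(np.int16) - view_b.astype(np.int16))
    threshold = 5.0 + 0.5 * abs(angle_a_deg - angle_b_deg)  # placeholder scaling
    return (diff > threshold).astype(np.uint8)  # candidate boundary pixels
```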
Preferably, the high-precision remote sensing rapid mapping method further comprises: a plurality of target areas jointly forming an area to be imaged, and splicing the fused imaging images of all the target areas to obtain a global imaging image of the area to be imaged, as sketched below.
In this scheme, the area to be imaged is divided into a plurality of target areas that jointly constitute it. The flight device performs local imaging within its own imaging range, flies on to the next target area after finishing one, and the images of all the target areas are spliced to generate a global imaging image of the area to be imaged.
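A sketch of the splicing step: each fused tile is placed on a world canvas by its positioned center point. The ground resolution and the overwrite policy for overlapping tiles are assumptions:

```python
import numpy as np

def stitch_tiles(tiles: list, centers_m: list, resolution_m: float = 0.05) -> np.ndarray:
    """tiles: equally sized (H, W) fused tiles; centers_m: (x, y) ground
    coordinates of each tile center, in meters."""
    centers_px = np.asarray(centers_m, dtype=float) / resolution_m
    h, w = tiles[0].shape
    min_xy = centers_px.min(axis=0) - (w // 2, h // 2)
    max_xy = centers_px.max(axis=0) + (w // 2, h // 2)
    canvas = np.zeros((int(max_xy[1] - min_xy[1]) + 1,
                       int(max_xy[0] - min_xy[0]) + 1), dtype=tiles[0].dtype)
    for tile, c in zip(tiles, centers_px):
        x0 = int(c[0] - min_xy[0]) - w // 2
        y0 = int(c[1] - min_xy[1]) - h // 2
        canvas[y0:y0 + h, x0:x0 + w] = tile  # later tiles overwrite any overlap
    return canvas
```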
Preferably, the flight device is further provided with an attitude sensing device; the attitude data of the flight device at the current moment are acquired by the attitude sensing device, and the fused imaging image is calibrated according to the positioning data and the attitude data.
In this scheme, the image can be calibrated using the positioning data and the attitude data; the specific calibration procedure is described in the prior art.
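Since the specific calibration is left to the prior art, the sketch below shows one plausible reading: the pitch, yaw, and roll at exposure time tilt the camera boresight, and intersecting the tilted ray with the ground plane corrects the tile center obtained from satellite positioning. A flat ground plane, a nadir-mounted camera, and the SciPy Euler convention are assumptions:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def calibrated_center(rtk_xyz, pitch_deg, yaw_deg, roll_deg):
    """rtk_xyz: (x, y, z) of the UAV with z = height above the (flat) ground.
    Returns the (x, y) ground point actually imaged by the tilted camera."""
    # Intrinsic yaw(Z) -> pitch(Y) -> roll(X) rotation, body frame to world frame.
    rot = Rotation.from_euler("ZYX", [yaw_deg, pitch_deg, roll_deg], degrees=True)
    boresight = rot.apply([0.0, 0.0, -1.0])   # nadir-mounted camera axis
    scale = rtk_xyz[2] / -boresight[2]        # extend the ray to the ground plane
    ground_hit = np.asarray(rtk_xyz, dtype=float) + scale * boresight
    return ground_hit[:2]
```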
Preferably, the flying device is an unmanned aerial vehicle, and the attitude data includes a pitch angle, a yaw angle and a roll angle of the unmanned aerial vehicle.
Preferably, before the overlap comparison of the point cloud image and the multi-view image against the standard image of the target area with the center coordinate as a reference, the method further includes:
constructing a standard image of the target area.
In this scheme, to increase the processing speed of the image overlap comparison and reduce the computational load, a standard image of the target area may be constructed in advance; that is, a standard image library may be built by collecting in advance a large number of objects in the target area, such as facilities, equipment, buildings, and people.
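A minimal sketch of how such a pre-built library could be keyed so that the standard image for any positioned center point is fetched directly; the grid-keying scheme is an assumption:

```python
import numpy as np

GRID_M = 50.0  # one standard tile per 50 m grid cell (assumed)
standard_library: dict[tuple[int, int], np.ndarray] = {}

def library_key(center_xy) -> tuple[int, int]:
    return (int(round(center_xy[0] / GRID_M)), int(round(center_xy[1] / GRID_M)))

def lookup_standard_image(center_xy) -> np.ndarray:
    # The standard tile whose grid cell contains the positioned center point.
    return standard_library[library_key(center_xy)]
```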
In a second aspect, the present application further provides a high-precision remote sensing rapid mapping apparatus, including:
a memory for storing program instructions;
a processor for invoking the program instructions stored in the memory to implement the high-precision remote sensing rapid mapping method according to any one of the first aspect.
In a third aspect, the present application further provides a computer-readable storage medium storing program code for implementing the high-precision remote sensing rapid mapping method according to any one of the technical solutions of the first aspect.
Compared with the prior art, the technical scheme provided by the embodiments of the invention has the following advantages. The high-precision remote sensing rapid mapping method acquires a multi-view image and a point cloud image of a target area at the same moment, determines the center points of the point cloud image and the multi-view image from the positioning data at that moment, and overlap-compares the point cloud image and the multi-view image against a standard image with the center point as the reference. Deformation correction of the multi-view image and the point cloud image is completed in a single comparison pass, which effectively handles the image deformation caused by the tilted angle at which the unmanned aerial vehicle photographs ground objects; no multidimensional correction of the image is needed, so the comparison process is simple, the amount of computation small, and the processing fast. A contrast boundary line is then determined from the pixel gray-level differences of the multi-view images during the overlap comparison, and the point cloud image is filtered with that boundary line, further correcting the point cloud data (removing or converging the noise points outside the contrast boundary line) and further improving the boundary recognition rate of the point cloud image. Image fusion finally yields a comparatively accurate fused imaging image of the target area, which facilitates remote sensing mapping of roads.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in their description are briefly introduced below; it will be apparent that those skilled in the art can derive other drawings from these drawings without inventive effort.
Fig. 1 is a schematic flow diagram of the high-precision remote sensing rapid mapping method provided by an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
For convenience of understanding, the high-precision remote sensing rapid mapping method provided by an embodiment of the present invention is described in detail below. Referring to fig. 1, the method includes the following steps:
step S1, acquiring a multi-view image and a point cloud image of a target area at the current moment, wherein a first imaging device of the multi-view image and a second imaging device of the point cloud image are both arranged on the same flying device, and the multi-view image is a plurality of images with pixel gray level difference, which are shot by the first imaging device from a plurality of shooting angles at the current moment;
step S2, acquiring satellite positioning data of the flying device at the current moment, and determining the central points of the multi-view image and the point cloud image according to the satellite positioning data;
step S3, taking the central point as a reference, and performing overlapping comparison on the point cloud image and the multi-view image with the standard image of the target area;
step S4, carrying out deformation correction on the multi-view image according to the overlapping ratio to obtain a first image to be fused;
step S5, performing deformation correction on the point cloud image according to the overlapping ratio, determining a contrast boundary line according to the pixel gray level difference of the multiple multi-view images during overlapping, and filtering noise points of the point cloud corrected image, which are positioned outside the contrast boundary line, to obtain a second image to be fused;
and step S6, fusing the first image to be fused and the second image to be fused to obtain a fused imaging image of the target area.
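Tying steps S3 to S6 together, the following sketch composes the helper functions sketched earlier in this description (correct_deformation, gray_difference, remove_outside_noise). Treating the largest gray-difference contour as the contrast boundary line is an assumed detail, and the point cloud's own deformation correction is elided:

```python
import cv2
import numpy as np

def map_target_area(view_a, view_b, angle_a_deg, angle_b_deg, cloud_xy, standard):
    # S3-S4: one-pass deformation correction against the standard image
    # (reusing correct_deformation from the sketch above).
    first_fused = correct_deformation(view_a, standard)
    # S5: contrast boundary line from the two views' gray-level difference
    # (reusing gray_difference); the largest difference contour is taken
    # as the boundary line, which is an assumed detail.
    mask = gray_difference(view_a, view_b, angle_a_deg, angle_b_deg)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boundary = max(contours, key=cv2.contourArea).squeeze(1).astype(float)
    # S5: drop exterior point cloud noise (reusing remove_outside_noise).
    second_fused = remove_outside_noise(cloud_xy, boundary)
    return first_fused, second_fused  # S6: inputs to the final fusion
```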
In some embodiments of the present invention, a first imaging device for generating the multi-view image (such as a binocular or multi-view camera) and a second imaging device for generating the point cloud image (such as a three-dimensional laser scanner) are both installed on the flight device; at a given moment, the flight device performs three-dimensional point cloud imaging and multi-view visual imaging of the target area simultaneously from a specific position in the air.
In some embodiments of the present invention, the first imaging device can be a binocular or multi-view camera; photographing the target area with it yields one or more groups of binocular vision images, and the different shooting angles of the camera produce the pixel gray-level differences.
In some embodiments of the present invention, a positioning device for acquiring positioning data is installed on the flight device; for example, a satellite positioning device acquires differential satellite positioning data, which are then used to determine the center points of the multi-view image and the point cloud image.
In some specific embodiments of the present invention, the point cloud image and the multiple multi-view images are overlap-compared against the standard image with the center point as the reference. After the comparison, the point cloud image and the multi-view images first undergo a single-pass deformation correction; specifically, the parts that do not overlap the standard image are deleted (the noise points in the non-overlapping parts are removed). To improve the boundary recognition rate of the point cloud image, the deformation-corrected point cloud image is filtered further: during the overlap comparison the pixel gray-level differences among the multiple multi-view images become more pronounced, a contrast boundary line is determined from those differences, and the noise points of the point cloud image lying outside the contrast boundary line are filtered out.
With this technical scheme, the point cloud image and the multi-view image are overlap-compared against a pre-constructed standard image of the target area; the deformed images can be corrected in a single comparison pass and the noise points of the point cloud image removed, improving imaging accuracy. Through image fusion, the point cloud data are combined with the depth and parallax of the multi-view visual images to enhance the elevation information of the road image, so the boundary recognition rate of the point cloud data is high, facilitating road mapping and calibration.
In some embodiments of the invention, filtering the noise points of the corrected point cloud image that lie outside the contrast boundary line specifically includes:
removing the discrete noise points of the point cloud image outside the contrast boundary line.
In some embodiments of the present invention, when the multiple multi-view images are overlapped, the pixel gray-level differences between them become more pronounced, so a contrast boundary line can be generated from those differences; the contrast boundary line then allows the point cloud image to be filtered further, improving the boundary recognition rate of the point cloud data. Specifically, the noise points outside the contrast boundary line may be removed to filter the point cloud image.
In some embodiments of the invention, filtering the noise points of the corrected point cloud image that lie outside the contrast boundary line may instead include:
converging the noise points of the point cloud image that lie outside the contrast boundary line.
In some embodiments of the present invention, the noise points outside the contrast boundary line are randomly distributed; converging these discretely distributed noise points helps improve the boundary recognition rate of the point cloud data.
In some embodiments of the present invention, the high-precision remote sensing rapid mapping method further includes: determining the shooting angle differences among the plurality of shooting angles of the multi-view image, and determining the pixel gray-level difference from those shooting angle differences.
In some embodiments of the invention, different capture angles cause differences in image pixels, i.e., pixel gray scale differences.
In some embodiments of the present invention, the high-precision remote sensing rapid mapping method further includes: a plurality of target areas jointly forming an area to be imaged, and splicing the fused imaging images of all the target areas to obtain a global imaging image of the area to be imaged.
In some embodiments of the present invention, the area to be imaged is divided into a plurality of target areas that jointly constitute it. The flight device performs local imaging within its own imaging range, flies on to the next target area after finishing one, and the images of all the target areas are spliced to generate a global imaging image of the area to be imaged.
In some specific embodiments of the present invention, an attitude sensing device is further arranged on the flight device; the attitude data of the flight device at the current moment are acquired by the attitude sensing device, and the fused imaging image is calibrated according to the positioning data and the attitude data.
In some embodiments of the present invention, the image can be calibrated using the positioning data and the attitude data; the specific calibration procedure is described in the prior art.
In some embodiments of the invention, the flying device is a drone and the attitude data includes a pitch angle, a yaw angle, and a roll angle of the drone.
In some embodiments of the present invention, before the overlap comparison of the point cloud image and the multi-view image against the standard image of the target area with the center coordinate as a reference, the method further includes:
constructing a standard image of the target area.
In some embodiments of the present invention, to increase the processing speed of the image overlap comparison and reduce the computational load, a standard image of the target area may be constructed in advance; that is, a standard image library may be built by collecting in advance a large number of objects in the target area, such as facilities, equipment, buildings, and people.
In still other embodiments of the present invention, a high-precision remote sensing rapid mapping apparatus is provided, including:
a memory for storing program instructions;
a processor for calling the program instructions stored in the memory to implement the high-precision remote sensing rapid mapping method according to any one of the above embodiments.
In further specific embodiments of the present invention, a computer-readable storage medium is provided, storing program code for implementing the high-precision remote sensing rapid mapping method according to any one of the above embodiments.
In summary, the high-precision remote sensing rapid mapping method acquires a multi-view image and a point cloud image of a target area at the same moment, determines the center points of the point cloud image and the multi-view image from the positioning data at that moment, and overlap-compares the point cloud image and the multi-view image against a standard image with the center point as the reference. Deformation correction of the multi-view image and the point cloud image is completed in a single comparison pass, which effectively handles the image deformation caused by the tilted angle at which the unmanned aerial vehicle photographs ground objects; no multidimensional correction is needed, so the comparison process is simple, the computation small, and the processing fast. A contrast boundary line is determined from the pixel gray-level differences of the multi-view images during the overlap comparison, the point cloud image is filtered with that boundary line to further correct the point cloud data (removing or converging the noise points outside the contrast boundary line) and further improve the boundary recognition rate of the point cloud image, and image fusion finally yields a comparatively accurate fused imaging image of the target area, which facilitates remote sensing mapping of roads.
For ease of understanding, a specific flow of the high-precision remote sensing rapid mapping method is illustrated with an example. High-precision mapping of an area to be imaged (for example, a specific small area) is carried out mainly by an unmanned aerial vehicle (or rotorcraft) carrying a three-dimensional solid-state laser scanner and a multi-view high-definition camera, combined with RTK differential high-precision positioning three-dimensional mapping technology. First, the three-dimensional laser scans a point cloud image within a range corresponding to 80% of the radius covered by the point cloud imaging device (the three-dimensional solid-state laser scanner) on the unmanned aerial vehicle. At the same time, a multi-view camera module (or binocular camera group) fits photographs with pixel gray-level differences (the differences are produced by the difference in camera shooting angles), i.e., a multi-view image (or binocular image). Then the absolute coordinate of the photograph's center is picked up by RTK satellite positioning, and the three-dimensional swing angles of the unmanned aerial vehicle at that moment (the pitch, yaw, and roll angles) are obtained to calibrate and correct the image. In this stage, the three-dimensional laser point cloud data and the pictures of the binocular camera are superposed, compared, fused, and computed with the center point as the reference to obtain a highly accurate three-dimensional road model image, and multiple images are spliced by their center mark points to obtain the whole image of the region.
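To make the fusion in this example concrete, the sketch below derives ground elevation from binocular disparity (Z = f*B/d) and blends it with the elevation raster of the laser point cloud; the focal length, baseline, flight altitude, and the equal-weight blend are all illustrative assumptions:

```python
import numpy as np

def fused_elevation(disparity_px: np.ndarray, cloud_z: np.ndarray,
                    focal_px: float = 1200.0, baseline_m: float = 0.3,
                    flight_alt_m: float = 100.0) -> np.ndarray:
    """Blend stereo-derived ground elevation with the laser point cloud's
    elevation raster (both rasters share the same grid)."""
    d = np.where(disparity_px > 0, disparity_px, np.nan)
    stereo_z = flight_alt_m - (focal_px * baseline_m) / d  # Z = f*B/d below the UAV
    return np.nanmean(np.stack([stereo_z, cloud_z]), axis=0)  # simple equal blend
```

Supplementing the laser point cloud with stereo depth in this way is what lets the fused image recover road boundaries even where the elevation change alone is too small for the laser scan to resolve.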
In image processing, to correct the image deformation caused by the tilted angle at which an unmanned aerial vehicle photographs ground objects, the prior art must correct the object in a multidimensional way, which requires a large amount of computation and a complex calculation process.
It is noted that, in this document, relational terms such as "first" and "second" may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The foregoing are merely exemplary embodiments of the present invention, which enable those skilled in the art to understand or practice the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A high-precision remote sensing rapid mapping method, characterized by comprising the following steps:
acquiring a multi-view image and a point cloud image of a target area at the current moment, wherein a first imaging device producing the multi-view image and a second imaging device producing the point cloud image are both mounted on the same flight device, and the multi-view image is a plurality of images with pixel gray-level differences captured by the first imaging device from a plurality of shooting angles at the current moment;
acquiring satellite positioning data of the flight device at the current moment, and determining the center points of the multi-view image and the point cloud image according to the satellite positioning data;
performing, with the center point as a reference, an overlap comparison of the point cloud image and the multi-view image against a standard image of the target area;
according to the overlap comparison, carrying out deformation correction on the multi-view image to obtain a first image to be fused;
according to the overlap comparison, carrying out deformation correction on the point cloud image, determining a contrast boundary line from the pixel gray-level differences among the multiple multi-view images during overlap, and filtering out the noise points of the corrected point cloud image lying outside the contrast boundary line to obtain a second image to be fused;
and fusing the first image to be fused and the second image to be fused to obtain a fused imaging image of the target area.
2. The high-precision remote sensing rapid mapping method according to claim 1, wherein filtering the noise points of the corrected point cloud image outside the contrast boundary line specifically comprises:
removing the discrete noise points of the point cloud image outside the contrast boundary line.
3. The high-precision remote sensing rapid mapping method according to claim 1, wherein filtering the noise points of the corrected point cloud image outside the contrast boundary line specifically comprises:
converging the noise points of the point cloud image that lie outside the contrast boundary line.
4. The high-precision remote sensing rapid mapping method according to any one of claims 1 to 3, further comprising: determining the shooting angle differences among the plurality of shooting angles of the multi-view image, and determining the pixel gray-level difference from those shooting angle differences.
5. The high-precision remote sensing rapid mapping method according to claim 4, further comprising: a plurality of target areas jointly forming an area to be imaged, and splicing the fused imaging images of all the target areas to obtain a global imaging image of the area to be imaged.
6. The method according to claim 1, wherein an attitude sensing device is further arranged on the flight device, attitude data of the flight device at the current moment are acquired by the attitude sensing device, and the fused imaging image is calibrated according to the positioning data and the attitude data.
7. The high-precision remote sensing rapid mapping method according to claim 6, wherein the flight device is an unmanned aerial vehicle, and the attitude data comprise a pitch angle, a yaw angle and a roll angle of the unmanned aerial vehicle.
8. The high-precision remote sensing rapid mapping method according to claim 1, wherein before the overlap comparison of the point cloud image and the multi-view image against the standard image of the target area with the center coordinate as a reference, the method further comprises:
constructing a standard image of the target area.
9. A high-precision remote sensing rapid mapping device is characterized by comprising:
a memory for storing program instructions;
a processor for calling the program instructions stored in the memory to implement the high-precision remote sensing rapid mapping method according to any one of claims 1 to 8.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores program code for implementing the high-precision remote sensing rapid mapping method according to any one of claims 1 to 8.
CN202210034221.5A 2022-01-13 2022-01-13 High-precision remote sensing rapid mapping method and device and storage medium Active CN114049474B (en)

Priority Applications (1)

Application Number: CN202210034221.5A | Priority Date: 2022-01-13 | Filing Date: 2022-01-13 | Title: High-precision remote sensing rapid mapping method and device and storage medium

Publications (2)

Publication Number | Publication Date
CN114049474A | 2022-02-15
CN114049474B | 2022-03-29

Family

ID=80196402

Family Applications (1)

Application Number: CN202210034221.5A | Title: High-precision remote sensing rapid mapping method and device and storage medium | Priority Date: 2022-01-13 | Filing Date: 2022-01-13 | Status: Active

Country Status (1)

Country Link
CN (1) CN114049474B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103020963A (en) * 2012-11-29 2013-04-03 北京航空航天大学 Multi-view stereo matching method based on self-adaptive watershed image segmentation
CN107483911A (en) * 2017-08-25 2017-12-15 秦山 A kind of signal processing method and system based on more mesh imaging sensors
CN108509918A (en) * 2018-04-03 2018-09-07 中国人民解放军国防科技大学 Target detection and tracking method fusing laser point cloud and image
CN109506642A (en) * 2018-10-09 2019-03-22 浙江大学 A kind of robot polyphaser vision inertia real-time location method and device
CN110428008A (en) * 2019-08-02 2019-11-08 深圳市唯特视科技有限公司 A kind of target detection and identification device and method based on more merge sensors
CN112686935A (en) * 2021-01-12 2021-04-20 武汉大学 Airborne depth sounding radar and multispectral satellite image registration method based on feature fusion
CN113177565A (en) * 2021-03-15 2021-07-27 北京理工大学 Binocular vision position measuring system and method based on deep learning
CN113888416A (en) * 2021-09-10 2022-01-04 北京和德宇航技术有限公司 Processing method of satellite remote sensing image data

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Zan Gojcic et al., "Learning multiview 3D point cloud registration", IEEE, 2020. *
Du Ruijian et al., "Mapping method of multi-view high-resolution texture images and binocular 3D point clouds" (in Chinese), Chinese Optics, vol. 13, no. 5, October 2020. *
Zhao Long et al., "Improved ICP algorithm for point cloud stitching and fusion of multiple image groups" (in Chinese), SOFTWARE, December 2014. *

Also Published As

Publication number | Publication date
CN114049474A | 2022-02-15

Similar Documents

Publication Publication Date Title
CN109920011B (en) External parameter calibration method, device and equipment for laser radar and binocular camera
CN107316325B (en) Airborne laser point cloud and image registration fusion method based on image registration
CN109887087B (en) SLAM mapping method and system for vehicle
KR101666959B1 (en) Image processing apparatus having a function for automatically correcting image acquired from the camera and method therefor
EP1378790B1 (en) Method and device for correcting lens aberrations in a stereo camera system with zoom
CN107492069B (en) Image fusion method based on multi-lens sensor
US20160212418A1 (en) Multiple camera system with auto recalibration
US20220215573A1 (en) Camera pose information detection method and apparatus, and corresponding intelligent driving device
CN106600644B (en) Parameter correction method and device for panoramic camera
Lao et al. A robust method for strong rolling shutter effects correction using lines with automatic feature selection
CN111429527B (en) Automatic external parameter calibration method and system for vehicle-mounted camera
CN112785655A (en) Method, device and equipment for automatically calibrating external parameters of all-round camera based on lane line detection and computer storage medium
CN109709977B (en) Method and device for planning movement track and moving object
Sai et al. Geometric accuracy assessments of orthophoto production from uav aerial images
CN106846385B (en) Multi-sensing remote sensing image matching method, device and system based on unmanned aerial vehicle
KR102159134B1 (en) Method and system for generating real-time high resolution orthogonal map for non-survey using unmanned aerial vehicle
CN114550042A (en) Road vanishing point extraction method, vehicle-mounted sensor calibration method and device
CN116385504A (en) Inspection and ranging method based on unmanned aerial vehicle acquisition point cloud and image registration
WO2020114433A1 (en) Depth perception method and apparatus, and depth perception device
CN107941241B (en) Resolution board for aerial photogrammetry quality evaluation and use method thereof
CN114049474B (en) High-precision remote sensing rapid mapping method and device and storage medium
CN117036666B (en) Unmanned aerial vehicle low-altitude positioning method based on inter-frame image stitching
CN113096016A (en) Low-altitude aerial image splicing method and system
CN111127379B (en) Rendering method of light field camera 2.0 and electronic equipment
CN110363806B (en) Method for three-dimensional space modeling by using invisible light projection characteristics

Legal Events

Date Code Title Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant