CN115909108A - Point cloud correction method and system in unmanned aerial vehicle surveying and mapping based on artificial intelligence

Point cloud correction method and system in unmanned aerial vehicle surveying and mapping based on artificial intelligence

Info

Publication number
CN115909108A
CN115909108A (application CN202211577038.6A)
Authority
CN
China
Prior art keywords
image
region
interest
pair
acquiring
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211577038.6A
Other languages
Chinese (zh)
Inventor
金葵葵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN202211577038.6A priority Critical patent/CN115909108A/en
Publication of CN115909108A publication Critical patent/CN115909108A/en
Pending legal-status Critical Current

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a point cloud correction method and system for unmanned aerial vehicle surveying and mapping based on artificial intelligence, and relates to the field of unmanned aerial vehicle surveying and mapping. An optimal frame image is selected through matching of image overlapping regions, so that images affected by the jelly (rolling-shutter) effect are filtered out; sparse or low-precision point clouds are then corrected or supplemented using the texture information of the RGB image, compensating for the image defects caused by resonance. The method comprises the following steps: acquiring a first image, a second image and a first depth image; acquiring regions of interest in the first image and the second image; acquiring a difference degree index between the region of interest in the first image and the region of interest in the second image, and determining an image pair to be selected; if the three-dimensional angular velocity and the three-dimensional acceleration are unchanged, taking the image pair to be selected as the selected image pair; and correcting the first depth image according to the gray scale gradient of the selected image pair. A typical application scenario of the invention is correcting point cloud data with RGB images during urban surveying and mapping by an unmanned aerial vehicle.

Description

Point cloud correction method and system in unmanned aerial vehicle surveying and mapping based on artificial intelligence
Technical Field
The application relates to the field of unmanned aerial vehicle surveying and mapping, in particular to a point cloud correction method and system in unmanned aerial vehicle surveying and mapping based on artificial intelligence.
Background
The field of unmanned aerial vehicle surveying and mapping is developing rapidly. High-precision point cloud data can be obtained by the laser radar mounted on the unmanned aerial vehicle, from which point cloud images of a scene are derived; each image is annotated with geographic information, including the three-dimensional coordinates of the image center point as well as the partition lines and texture information of buildings.
Resonance, shake and other disturbances inevitably occur during urban surveying and mapping flights of an unmanned aerial vehicle, making the point cloud information obtained by the laser radar inaccurate or even causing data loss. In addition, because of the long shooting distance, the point cloud data of urban buildings may suffer from partial loss or low precision. The resulting sparse point cloud data degrade mapping precision and image quality, so the point cloud data need to be corrected to obtain a high-precision point cloud image.
Disclosure of Invention
The invention provides a point cloud correction method and system for unmanned aerial vehicle surveying and mapping based on artificial intelligence. The point cloud correction method comprises the following steps: acquiring a first image, a second image and a first depth image; acquiring a region of interest in an overlapping region of the first image and the second image; acquiring a difference degree index between the region of interest in the first image and the region of interest in the second image; determining an image pair to be selected according to the difference degree index and a preset difference degree threshold; if the three-dimensional angular velocity and the three-dimensional acceleration at the moment corresponding to the image pair to be selected are unchanged, taking the image pair to be selected as the selected image pair; and correcting the first depth image according to the gray scale gradient of the selected image pair. Compared with the prior art, selecting the optimal frame image through image overlapping-region matching yields the best reference image for point cloud correction, so a more stable image is obtained and robustness is improved; using the gradient distribution and color distribution of the ROI (region of interest) in the two images gives a more accurate image difference degree, so that images affected by the jelly effect can be filtered out effectively.
To address the above technical problems, the invention provides a point cloud correction method and system in unmanned aerial vehicle surveying and mapping based on artificial intelligence.
In a first aspect, a method for point cloud correction in unmanned aerial vehicle mapping based on artificial intelligence is proposed, comprising:
Acquiring a first image, a second image and a first depth image.
Acquiring a region of interest in an overlapping region of the first image and the second image; the region of interest is the portion of the object to be measured in the overlapping region.
Acquiring a difference degree index of the region of interest in the first image and the region of interest in the second image.
Determining an image pair to be selected according to the difference degree index and a preset difference degree threshold.
Judging whether the three-dimensional angular velocity and the three-dimensional acceleration at the moment corresponding to the image pair to be selected are unchanged: if so, the image pair to be selected is taken as the selected image pair, and the second image in the selected image pair is the optimal frame image.
The first depth image is corrected according to the gray scale gradients of the selected image pair.
Further, according to the point cloud correction method in unmanned aerial vehicle mapping based on artificial intelligence, the difference degree index comprises a color difference degree, a gray gradient difference degree and an edge difference degree.
Further, in the point cloud correction method in unmanned aerial vehicle mapping based on artificial intelligence, the step of acquiring the color difference degree comprises:
Dividing the part of the region of interest within a preset distance threshold range into different regions according to hue.
Calculating the hue, saturation and lightness of each region, and the average hue, average saturation and average lightness of all the regions.
Obtaining the color difference degree according to the hue, saturation and lightness of each region and the average hue, average saturation and average lightness of all the regions.
Further, in the method for correcting the point cloud in unmanned aerial vehicle mapping based on artificial intelligence, the step of obtaining the gray gradient difference degree includes:
and carrying out gray level processing on the first image and the second image to respectively obtain a first gray level image and a second gray level image.
And obtaining gradient information of the first gray image according to the gradient values of the pixel points in the region of interest in the first gray image.
And obtaining gradient information of the second gray scale image according to the gradient values of the pixel points in the region of interest in the second gray scale image.
And obtaining the gray gradient difference degree according to the gradient information of the first gray image and the gradient information of the second gray image.
Further, in the point cloud correction method in unmanned aerial vehicle mapping based on artificial intelligence, the step of acquiring the edge difference degree comprises:
and acquiring a first image edge pixel point set and a second image edge pixel point set, wherein the first image edge pixel point set is a set formed by the edge pixels of the interested region in the first image, and the second image edge pixel point set is a set formed by the edge pixels of the interested region in the second image.
And obtaining the edge difference degree according to the first image edge pixel point set and the second image edge pixel point set.
Further, the method for correcting the point cloud in unmanned aerial vehicle mapping based on artificial intelligence further comprises, before acquiring the first image, the second image and the first depth image in the mapping process:
and carrying out self-adaptive adjustment on the damping of the unmanned aerial vehicle holder according to a preset adjustment interval.
In a second aspect, the present invention provides a point cloud correction system in unmanned aerial vehicle surveying and mapping based on artificial intelligence, comprising: an image acquisition module, a region-of-interest acquisition module, a difference degree calculation module, a to-be-selected image pair acquisition module, a selected image pair acquisition module and an image correction module.
The image acquisition module is used for acquiring a first image, a second image and a first depth image.
The region-of-interest acquisition module is used for acquiring a region of interest in an overlapping region of the first image and the second image; the region of interest is the part to be corrected in the overlapping region.
The difference degree calculation module is used for acquiring a difference degree index of the region of interest in the first image and the region of interest in the second image.
The to-be-selected image pair acquisition module is used for determining the image pair to be selected according to the difference degree index and a preset difference degree threshold.
The selected image pair obtaining module is used for judging whether the three-dimensional angular velocity and the three-dimensional acceleration at the moment corresponding to the image pair to be selected are unchanged: and if the judgment result is yes, the image pair to be selected is the selected image pair.
The image correction module is configured to correct the first depth image according to a gray scale gradient of the selected image pair.
The invention provides a point cloud correction method and system for unmanned aerial vehicle surveying and mapping based on artificial intelligence. The point cloud correction method comprises: acquiring a first image, a second image and a first depth image; acquiring a region of interest in an overlapping region of the first image and the second image; acquiring a difference degree index between the region of interest in the first image and the region of interest in the second image; determining an image pair to be selected according to the difference degree index and a preset difference degree threshold; if the three-dimensional angular velocity and the three-dimensional acceleration at the moment corresponding to the image pair to be selected are unchanged, taking the image pair to be selected as the selected image pair; and correcting the first depth image according to the gray scale gradient of the selected image pair.
Compared with the prior art, selecting the optimal frame image through image overlapping-region matching yields the best reference image for point cloud correction, so a more stable image is obtained and robustness is improved; using the gradient distribution and color distribution of the ROI (region of interest) in the two images gives a more accurate image difference degree, so that images affected by the jelly effect can be filtered out effectively.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a schematic flow chart of a point cloud correction method in unmanned aerial vehicle surveying and mapping based on artificial intelligence according to an embodiment of the present invention.
Fig. 2 is a schematic flow chart of another point cloud correction method in unmanned aerial vehicle mapping based on artificial intelligence according to an embodiment of the present invention.
Fig. 3 is a schematic flow chart of a point cloud correction system in unmanned aerial vehicle surveying and mapping based on artificial intelligence according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, belong to the protection scope of the present invention.
In the following description, for purposes of explanation and not limitation, specific details are set forth such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
The terms "first", "second" and "first" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or to implicitly indicate the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature; in the description of the present embodiment, "a plurality" means two or more unless otherwise specified.
Example 1
The embodiment of the invention provides a point cloud correction method in unmanned aerial vehicle surveying and mapping based on artificial intelligence, which comprises the following steps as shown in figure 1:
s101, acquiring a first image, a second image and a first depth image.
In this embodiment, the first image is obtained by a global shutter camera deployed on the unmanned aerial vehicle, and the second image is obtained by a rolling shutter camera; both cameras output RGB-format images. Because of its exposure mode, the first image obtained by the global shutter camera does not exhibit a jelly effect caused by the resonance of the unmanned aerial vehicle, and the first images correspond one to one with the second images.
A global shutter exposes the whole scene at the same time. All pixels on the photosensitive component collect light and are exposed simultaneously: when exposure starts, the photosensitive component begins to collect light, and when exposure ends, the light collection circuit is switched off. The values of the photosensitive component are then read out to form a photograph.
A rolling shutter, unlike a global shutter, exposes the photosensitive component line by line. When exposure starts, the photosensitive component is scanned and exposed row by row until all pixels have been exposed; of course, all of this is completed in a very short time.
If the photographed object moves at high speed relative to the camera, a global shutter produces a blurred picture when the exposure time is too long. In the rolling shutter mode, if the progressive scanning speed is insufficient, the result may show skew, wobble or partial exposure; the phenomena occurring in the rolling shutter mode are defined as the jelly effect.
In this embodiment, the first depth image is a TOF (Time of Flight) point cloud image obtained by performing high-frequency scanning with a laser radar.
Lidar is an active optical sensor that emits a laser beam toward a target while moving along a particular measurement path. The receiver in the lidar sensor detects and analyzes the laser light reflected from the target. These receivers record the precise time from when the laser pulse leaves the system to when it returns, from which the range distance between the sensor and the target is calculated. These distance measurements, together with position information, are converted into measurements of the actual three-dimensional points of the reflecting target in object space.
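As an illustrative sketch only (not text from the patent), the range follows directly from the recorded round-trip time; the function name and example value below are assumptions for demonstration:

```python
# Illustrative sketch: range from the laser pulse round-trip time (not the patent's code).
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_range(round_trip_time_s: float) -> float:
    """The pulse travels to the target and back, hence the factor of 1/2."""
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

# Example: a round trip of 2 microseconds corresponds to roughly 300 m.
print(tof_range(2.0e-6))
```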
S102, acquiring a region of interest in an overlapping region of the first image and the second image; the region of interest is a part to be corrected in the overlapping region.
In the field of image processing, a region of interest (ROI) is an image region selected from an image as the focus of analysis; the area is delineated for further processing. Using an ROI to define the target to be processed reduces processing time and increases accuracy. In this embodiment, the region of interest is the portion to be corrected in the overlapping region, namely the building portion.
S103, acquiring a difference degree index of the region of interest in the first image and the region of interest in the second image.
In this embodiment, the difference index includes a color difference, a gray gradient difference, and an edge difference.
The color in this embodiment refers to the hue (H), saturation (S) and value (V) of an image after the RGB image is converted into an HSV image. HSV is a color model: H (hue) represents hue, S (saturation) represents saturation, and V (value) represents lightness.
Hue (H): on the standard 0-360° color wheel, hue is measured by position. In common use, a hue is identified by a color name such as red, green or orange. Black and white have no hue.
Saturation (S): indicates color purity; when the purity is 0, the color is gray. White, black and the other grays have no saturation. At maximum saturation, each hue is at its purest shade. The value range is 0-100%.
Value (V): the brightness of the color. When it is 0, the color is black; the maximum value is the brightest state of the color. The value range is 0-100%.
The color difference refers to a difference in hue (H), saturation (S), and lightness (V).
The image can be regarded as a two-dimensional discrete function; the gray gradient is the derivative of this function, and finite differences are used in place of differentiation to obtain the gray gradient of the image. Commonly used gray gradient operators include the Roberts, Sobel, Prewitt and Laplacian operators.
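As a minimal sketch (not the patent's implementation), the gray gradient of an image can be approximated with the Sobel operator; the kernel size and library calls are ordinary OpenCV usage assumed for illustration:

```python
import cv2
import numpy as np

def gray_gradient(image_bgr: np.ndarray) -> np.ndarray:
    """Gray gradient magnitude of a color image via Sobel finite differences."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)  # difference along x
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)  # difference along y
    return np.sqrt(gx ** 2 + gy ** 2)                # per-pixel gradient magnitude
```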
In this embodiment, the edge difference degree refers to a difference between pixel points on the edge line of the building in the ROI in the first image and the second image.
And S104, determining the image pair to be selected according to the difference index and a preset difference threshold value.
In this embodiment, because the first image and the second image are acquired simultaneously and correspond one to one at each moment, the image pair to be selected consists of a first image and its corresponding second image.
And S105, if the three-dimensional angular velocity and the three-dimensional acceleration at the moment corresponding to the image pair to be selected are not changed, the image pair to be selected is a selected image pair.
This embodiment considers the influence of changes in the three-dimensional angular velocity and three-dimensional acceleration of the unmanned aerial vehicle on the imaging result. If the three-dimensional angular velocity and three-dimensional acceleration of the unmanned aerial vehicle change, the second image in the image pair to be selected may exhibit a jelly effect; when the three-dimensional angular velocity and three-dimensional acceleration at the moment corresponding to the image pair to be selected do not change, the image pair to be selected is determined to be the selected image pair, and the second image in the selected image pair is the optimal frame image.
And S106, correcting the first depth image according to the gray gradient of the selected image pair.
The image correction refers to restoration processing performed on a distorted image. The reasons for image distortion are: image distortion due to aberrations, distortion, bandwidth limitations, etc. of the imaging system; geometric distortion of the image due to imaging device shooting attitude and scanning nonlinearity; image distortion due to motion blur, radiation distortion, introduction of noise, etc. The basic idea of image correction is to establish a corresponding mathematical model according to the cause of image distortion, extract the required information from the contaminated or distorted image signal, and restore the original appearance of the image along the inverse process of distorting the image. The actual restoration process is to design a filter that can compute an estimate of the true image from the distorted image, so that it approximates the true image to the maximum extent according to a predefined error criterion.
In this embodiment, the image correction is to supplement the point cloud with missing data and correct the point cloud data with abnormal data.
Compared with the traditional technical scheme, the invention has the beneficial effects that:
1. the optimal frame image is selected through image overlapping area matching, the best contrast image can be obtained for point cloud correction, and therefore a more stable image is obtained and the robustness is good.
2. By utilizing the gradient distribution and the color distribution of the ROI (region of interest) in the two images, the more accurate image difference degree can be obtained, and the images with the jelly effect can be well filtered.
3. The point cloud data is corrected through the optimal frame image, point cloud with sparse point cloud precision can be well corrected or supplemented by utilizing abundant texture information in the RGB image, the defects of laser radar scanning can be overcome, and the implementation is simple.
Example 2
The embodiment of the invention provides a point cloud correction method in unmanned aerial vehicle surveying and mapping based on artificial intelligence, which comprises the following steps:
s201, acquiring a first image, a second image and a first depth image.
In this embodiment, a first image is obtained by a global shutter camera deployed on an unmanned aerial vehicle, and a second image is obtained by a rolling shutter camera, where the first depth image refers to a TOF point cloud image obtained by a high-frequency scanning strategy.
The unmanned aerial vehicle deploys a global shutter camera and a rolling shutter camera to collect the corresponding images, and the TOF point cloud image is obtained through a high-frequency scanning strategy.
Because the global shutter camera uses global exposure during imaging, the images obtained at the higher flying heights used in unmanned aerial vehicle surveying and mapping have poorer definition, but the global shutter does not produce a jelly effect under the resonance and shake of the unmanned aerial vehicle; the rolling shutter camera, owing to its line-by-line exposure during imaging, can produce a jelly effect under the resonance of the unmanned aerial vehicle.
When the global shutter camera and the rolling shutter camera are deployed on the unmanned aerial vehicle, the viewing angles of the two cameras are kept as close as possible so that no large offset is produced and the corresponding regions between the images can subsequently be matched; at the same time, the damping of the unmanned aerial vehicle gimbal is adaptively adjusted according to a preset adjustment interval threshold to obtain the first image and the second image.
S202, acquiring a region of interest in an overlapping region of the first image and the second image.
In this embodiment, the region of interest is the building portion in the overlapping region. The purpose of this step is to acquire the overlapping region of the first image and the second image collected by the different cameras, obtain the ROI (region of interest) from the overlapping region using regional image features, and register the overlapping region so that the image features of the ROIs in the first image and the second image are the same.
Performing overlapping region matching on the obtained first image and the second image, wherein the overlapping region matching specifically comprises the following contents:
Firstly, given that the flight direction of the unmanned aerial vehicle is known from prior information and the global shutter camera is arranged on the left side of the rolling shutter camera, a target imaged in the first image necessarily appears on the left side of the second image, and this region is the overlapping region.
Secondly, the overlap matching between the left region of the second image and the central region of the first image is completed through the prior region offset α. In this embodiment the region offset α = 100; α can be adjusted according to the situation and the deployment positions of the cameras, and setting the region offset reduces the computation of the subsequent image matching.
In this embodiment a building is the object to be corrected. After edge detection is performed on all buildings in the overlapping region to obtain the building edge contours, if a corner point of a building edge contour in the first image has pixel coordinates (a, b), the corresponding corner point in the second image should have pixel coordinates (a, b - α).
The edge detection method used in this embodiment is the Canny edge detection algorithm, which preserves the edge information of the image well and yields an accurate edge contour; the image is then denoised after edge contour detection.
Further, when the matching of a corner point fails, the pixel coordinates where the corresponding corner point should appear in the second image are taken as the center, a 3 × 3 corner search box is constructed in this embodiment and a horizontal bidirectional search is performed with a step size of 1. The search stops once the corresponding corner point is matched, and the region offset α is then updated to a new value, giving a rough overlapping region, i.e., the ROI. A sketch of this matching is given below.
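The following Python sketch illustrates offset-based corner matching with a small horizontal search. Apart from α = 100 and the 3 × 3 box taken from the embodiment, the patch comparison, tolerance and function names are assumptions of this illustration, not the patent's implementation:

```python
import numpy as np

ALPHA = 100        # prior region offset used in this embodiment
SEARCH_STEPS = 1   # 3 x 3 box: search one pixel left and right with step size 1

def corners_match(gray1, gray2, p1, p2, half=1, tol=10.0):
    """Hypothetical predicate: compare small patches around the two corner candidates."""
    (a1, b1), (a2, b2) = p1, p2
    patch1 = gray1[a1 - half:a1 + half + 1, b1 - half:b1 + half + 1].astype(float)
    patch2 = gray2[a2 - half:a2 + half + 1, b2 - half:b2 + half + 1].astype(float)
    return patch1.shape == patch2.shape and patch1.size > 0 and np.abs(patch1 - patch2).mean() < tol

def match_corner(gray1, gray2, corner, alpha=ALPHA):
    """Expected location in the second image is (a, b - alpha); search horizontally on failure."""
    a, b = corner
    if corners_match(gray1, gray2, corner, (a, b - alpha)):
        return (a, b - alpha), alpha
    for db in range(1, SEARCH_STEPS + 1):                       # bidirectional search
        for cand in ((a, b - alpha - db), (a, b - alpha + db)):
            if corners_match(gray1, gray2, corner, cand):
                return cand, b - cand[1]                        # updated region offset
    return None, alpha                                          # no match found
```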
S203, acquiring a difference degree index of the region of interest in the first image and the region of interest in the second image.
The difference index includes color difference, gray gradient difference and edge difference.
The purpose of this step is to perform pixel matching on the ROIs in the first image and the second image. Because of its exposure mode, the first image obtained by the global shutter camera does not exhibit a jelly effect caused by the resonance of the unmanned aerial vehicle, so this embodiment uses the first image to evaluate the second image and judges whether the difference between the first image and the second image is within the preset difference threshold range.
Obtaining the color difference degree, the gray gradient difference degree and the edge difference degree specifically comprises the following steps:
s2031, obtaining the color difference degree of the region of interest in the first image and the region of interest in the second image.
The aim of this step is to obtain the color difference degree C_diff of the ROIs by performing color analysis on the ROI in the first image and the ROI in the second image.
Firstly, HSV color space conversion is carried out on an RGB image of the ROI to obtain information of hue (H), saturation (S) and brightness (V) of the ROI. The color space conversion method specifically comprises the following steps:
max = max(R, G, B), min = min(R, G, B), V = max(R, G, B),
S = (max - min) / max if max ≠ 0, and S = 0 if max = 0,
H = 60° × (G - B) / (max - min) if max = R,
H = 60° × (2 + (B - R) / (max - min)) if max = G,
H = 60° × (4 + (R - G) / (max - min)) if max = B,
and H = H + 360° if the result is negative,
where R is the pixel value of the red channel in the image, G is the pixel value of the green channel in the image, and B is the pixel value of the blue channel in the image.
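A minimal per-pixel sketch of this conversion (the function name and the choice of [0, 1] input range are assumptions of this illustration):

```python
def rgb_to_hsv(r: float, g: float, b: float):
    """Convert one RGB pixel (components in [0, 1]) to (H in degrees, S, V)."""
    mx, mn = max(r, g, b), min(r, g, b)
    v = mx
    s = 0.0 if mx == 0 else (mx - mn) / mx
    if mx == mn:
        h = 0.0                                   # achromatic: hue undefined, set to 0
    elif mx == r:
        h = 60.0 * (g - b) / (mx - mn)
    elif mx == g:
        h = 60.0 * (2.0 + (b - r) / (mx - mn))
    else:
        h = 60.0 * (4.0 + (r - g) / (mx - mn))
    if h < 0:
        h += 360.0
    return h, s, v
```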
This embodiment focuses on the color information of the ROI edge region. Because the two cameras produce their image sequences under almost the same viewing angle and the same illumination, color information does not need to be matched globally: the middle region matches naturally, and only the color information of the region edge and the building region needs to be matched. The invention therefore focuses on the color difference information at the left edge of the ROI.
In this embodiment, the edge region from the left edge of the ROI to the position shifted ω columns to the right is considered, where ω is a preset distance threshold (ω = 200 in this embodiment). Pixels with the same hue are grouped into the same region block, so this edge region of 200 columns is divided into different region blocks.
The hue (H_A1, H_A2, …, H_Av), saturation (S_A1, S_A2, …, S_Av) and lightness (V_A1, V_A2, …, V_Av) of all the region blocks are calculated; at the same time, for the c-th region block in the first image, the average hue, average saturation and average lightness of all the corresponding region blocks of the ROI in the second image are calculated as
H̄_Bc = (1/m) Σ_{d=1..m} H_Bcd,  S̄_Bc = (1/m) Σ_{d=1..m} S_Bcd,  V̄_Bc = (1/m) Σ_{d=1..m} V_Bcd,
where H̄_Bc is the average hue, S̄_Bc the average saturation and V̄_Bc the average lightness over all the region blocks in the second image corresponding to the c-th region block in the first image; c is a positive integer in [1, v] and d is a positive integer in [1, m]; H_Bcd, S_Bcd and V_Bcd are the hue, saturation and lightness of the d-th region block in the second image corresponding to the c-th region block in the first image.
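A minimal sketch of the per-block statistics, assuming the block segmentation (grouping edge-region pixels by hue) and the block correspondence have already been obtained; names and data layout are illustrative assumptions:

```python
import numpy as np

def block_hsv(hsv_img: np.ndarray, labels: np.ndarray, block_id: int) -> np.ndarray:
    """Mean (H, S, V) of one region block, given an HSV image and an integer label map."""
    mask = labels == block_id
    return hsv_img[mask].mean(axis=0)

def corresponding_mean(hsv_img2: np.ndarray, labels2: np.ndarray, block_ids) -> np.ndarray:
    """Average hue, saturation and lightness over the m blocks of the second image that
    correspond to one block of the first image."""
    triples = np.stack([block_hsv(hsv_img2, labels2, d) for d in block_ids])
    return triples.mean(axis=0)
```

Note that averaging hue values directly ignores the 0°/360° wrap-around; the normalization of hue performed before computing the color difference degree keeps the indices on a comparable scale, but a circular mean would handle the wrap-around more carefully.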
Since the value range of the hue H is 0 ≤ H ≤ 360, the hue needs to be normalized before the color difference degree is computed.
The color difference degree C_diff of the edge region is obtained from the hue, saturation and lightness of each region block in the first image and the corresponding block averages in the second image. When C_diff approaches 0, the color distributions of the ROI edge regions in the two images match normally and the region offset α is reasonable and accurate; when C_diff is large, the region offset α needs to be adjusted further. The color information matching of the building region is calculated in the same way as the edge color difference, finally giving the color difference degree of the building region. The color difference degree of the building region reflects the jelly effect, if any, in the image sequence B: the more severe the jelly effect, the greater the color difference degree.
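The exact expression for C_diff appears only as an image in the publication, so the sketch below uses one plausible reading, the mean absolute difference of normalized (H, S, V) between each first-image block and its second-image block average; this is an assumption, not the patent's formula:

```python
import numpy as np

def color_difference(first_blocks: np.ndarray, second_block_means: np.ndarray) -> float:
    """first_blocks and second_block_means have shape (v, 3), holding (H, S, V) per block
    with hue already normalized (H / 360). Smaller values mean better color agreement."""
    return float(np.abs(first_blocks - second_block_means).mean())
```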
S2032, obtaining the gray gradient difference degree of the region of interest in the first image and the region of interest in the second image.
Graying processing is performed on the first image and the second image, and the gray gradient of the building region in the ROI of the first image is matched with the gray gradient of the building region in the ROI of the second image to obtain the gray gradient difference degree G_diff.
The gray gradient of a pixel in the image is obtained by calculating the gray gradients in the x-axis direction, the y-axis direction and the diagonal direction within the edge area of the building edge contour. The calculation is as follows:
G_x(x, y) = I(x + 1, y) - I(x, y),
G_y(x, y) = I(x, y + 1) - I(x, y),
G_xy(x, y) = I(x + 1, y + 1) - I(x, y),
where G_x(x, y) is the gray gradient in the x-axis direction, G_y(x, y) is the gray gradient in the y-axis direction, G_xy(x, y) is the gray gradient in the diagonal direction, I(x, y) is the gray value of the pixel at coordinate (x, y), I(x + 1, y) is the gray value of the pixel at coordinate (x + 1, y), and I(x, y + 1) is the gray value of the pixel at coordinate (x, y + 1).
From this pixel gray gradient calculation, the gray gradient difference degree G_diff of the first image and the second image is obtained by comparing, pixel by pixel, G_k and G'_k, where G_k represents the gradient information of the k-th pixel in the first image and G'_k represents the gradient information of the k-th pixel in the second image.
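The formula for G_diff is likewise an image in the publication; the sketch below shows one plausible reading (mean absolute pixel-wise difference of the gradient magnitudes over matching ROIs), as an assumption only:

```python
import numpy as np

def pixel_gradients(gray: np.ndarray) -> np.ndarray:
    """Forward differences along x, y and the diagonal, combined into a magnitude per pixel
    (the combination into a single magnitude is an assumption of this sketch)."""
    g = gray.astype(float)
    gx = np.zeros_like(g); gx[:-1, :] = g[1:, :] - g[:-1, :]
    gy = np.zeros_like(g); gy[:, :-1] = g[:, 1:] - g[:, :-1]
    gd = np.zeros_like(g); gd[:-1, :-1] = g[1:, 1:] - g[:-1, :-1]
    return np.sqrt(gx ** 2 + gy ** 2 + gd ** 2)

def gray_gradient_difference(gray1: np.ndarray, gray2: np.ndarray) -> float:
    """Pixel-wise comparison of gradient information G_k and G'_k over equally sized ROIs."""
    return float(np.abs(pixel_gradients(gray1) - pixel_gradients(gray2)).mean())
```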
S2033, obtaining the edge difference degree of the region of interest in the first image and the region of interest in the second image.
The edge difference degree E_diff is obtained from the pixel coordinate set e_A of the building edge in the ROI of the first image and the corresponding edge pixel coordinate set e_B of the ROI in the second image.
To improve computational efficiency and the validity and accuracy of the result, this embodiment slides a 10 × 20 window within the ROI of the image sequence pair. The sliding direction is the column direction of the image pixels, the starting position is the top of the detected edge region, and the sliding step size is variable, adjusted according to the edge difference degree detected in the previous step. The specific procedure of the window is as follows:
1. The initial step size of the sliding window is 10 in this embodiment. When the edge difference degree detected at the initial position approaches 0, the step size is doubled and the edge difference degree of the next sliding window region is detected.
2. When the edge difference degree of a window reaches the condition set by the threshold D, detection continues at the current step size; D = 1 is a preset empirical threshold that can be adjusted according to the actual precision requirement.
3. The sliding window stops automatically after sliding 3 times; it can also stop early if the edge disappears before the three slides are completed.
The edge difference degree of the i-th sliding window, E_diff_i (i = 1, 2, 3), is computed from the coordinates of the building edge pixels inside the window: x_A_j and y_A_j denote the abscissa and ordinate of the j-th pixel of the building edge line in the first image, and x_B_j and y_B_j denote the abscissa and ordinate of the j-th pixel of the building edge line in the second image. The coordinates of the corresponding points in the first image and the second image should satisfy the corner-coordinate rule of step S202.
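The window-level expression for the edge difference degree is an image in the publication; the sketch below adopts one plausible reading, the mean deviation of corresponding edge coordinates from the corner rule (row_B = row_A, col_B = col_A - α), as an assumption only:

```python
import numpy as np

def edge_difference(edge_a: np.ndarray, edge_b: np.ndarray, alpha: int) -> float:
    """edge_a, edge_b: arrays of shape (n, 2) holding (row, col) coordinates of the j-th
    building-edge pixel inside one sliding window, ordered consistently. Deviations from
    the corner rule col_B = col_A - alpha are counted as edge difference."""
    rows_a, cols_a = edge_a[:, 0], edge_a[:, 1]
    rows_b, cols_b = edge_b[:, 0], edge_b[:, 1]
    return float((np.abs(rows_a - rows_b) + np.abs(cols_a - alpha - cols_b)).mean())
```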
Finally, the difference degree Z is constructed from the three indices obtained above, namely the color difference degree, the gray gradient difference degree and the edge difference degree. The three indices are normalized to remove the effect of their differing dimensions, and Z itself is normalized to the range [0, 1].
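How the three normalized indices are combined into Z is again given as an image; the sketch below simply averages the normalized indices and clips to [0, 1], which is an illustrative assumption rather than the patent's exact construction:

```python
import numpy as np

def difference_degree(c_diff: float, g_diff: float, e_diff: float,
                      scales=(1.0, 1.0, 1.0)) -> float:
    """scales: assumed per-index normalization constants (e.g. running maxima over the
    image sequence) so that each index falls in [0, 1]; Z is their mean, clipped to [0, 1]."""
    c, g, e = (min(x / s, 1.0) for x, s in zip((c_diff, g_diff, e_diff), scales))
    return float(np.clip((c + g + e) / 3.0, 0.0, 1.0))
```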
And S204, determining the image pair to be selected according to the difference index and a preset difference threshold value.
When the value of the difference degree Z approaches 1, the jelly effect of the second image is more severe; the closer Z is to 0, the higher the matching degree between the first image and the second image and the smaller the jelly effect of the second image. The image pair to be selected consists of a first image and its corresponding second image.
A difference degree threshold z is set; when the value of Z approaches 0, that is Z ∈ [0, z], the image pair formed by the first image and the second image is taken as the image pair to be selected. Gradient information, edge information and color information in the images are important bases for reflecting image difference, and the lack of any one of them can cause errors in the matching between the images.
And S205, if the three-dimensional angular velocity and the three-dimensional acceleration of the unmanned aerial vehicle at the moment corresponding to the pair of images to be selected are not changed, the pair of images to be selected is the pair of selected images.
After the image pair to be selected is obtained, whether it is the selected image pair is determined according to whether the three-dimensional angular velocity and the three-dimensional acceleration of the unmanned aerial vehicle have changed; when the three-dimensional angular velocity and the three-dimensional acceleration of the unmanned aerial vehicle do not change in the time period from the previous moment to the current moment, the image pair to be selected is the selected image pair.
The IMU readings at the time the first image and the second image are acquired and the IMU readings at the time the previous frame was acquired are read, and it is checked whether the IMU readings changed during this period. If they changed, the second image in the image pair to be selected may exhibit a jelly effect; image pairs continue to be selected until the IMU readings are unchanged, at which point the image pair to be selected is the selected image pair and its second image is the optimal frame image.
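A minimal sketch of this check, treating the IMU reading as a 6-vector of three-dimensional angular velocity and acceleration; the tolerance for "unchanged" is an assumption:

```python
import numpy as np

IMU_TOLERANCE = 1e-3  # assumed numerical tolerance for "unchanged" readings

def is_selected_pair(imu_prev: np.ndarray, imu_now: np.ndarray) -> bool:
    """imu_prev, imu_now: 6-vectors (3D angular velocity + 3D acceleration) at the previous
    frame time and at the candidate pair's time; promote the pair only if nothing changed."""
    return bool(np.all(np.abs(imu_now - imu_prev) < IMU_TOLERANCE))
```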
And S206, correcting the first depth image according to the gray gradient of the selected image pair.
The purpose of this step is to use the gray gradient of the pixels of the second image (obtained by the rolling shutter camera) in the selected image pair obtained in S205 to correct the point cloud distribution of the corresponding region: point clouds with anomalies are corrected and missing point clouds are completed.
The gray gradient of a building region generally does not change much except at the edges, and abnormal or missing point cloud information also generally occurs at the edges, so the point cloud at the edges can be corrected well using the gray gradient information of the grayscale image at the edges.
In this embodiment, the first depth image is a TOF (Time of Flight) point cloud image obtained by a high-frequency scanning strategy. At the building edges, the depth information of the point cloud near the edge is corrected according to the gray gradient of the grayscale image of the second image in the selected image pair: in edge areas with consistent gray gradients, the depth gradients of the point cloud should also remain consistent. In this embodiment, a point whose gray gradient is consistent but whose depth value differs is an abnormal point cloud; conversely, a point whose gray gradient and depth value are both consistent is a normal point cloud.
Abnormal point clouds are corrected using the depth values of adjacent point clouds whose gray levels are consistent; point cloud data missing at the edges are completed by nearest-neighbor interpolation, and finally a coordinate system conversion is applied to obtain the depth image of the building region.
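A heavily simplified sketch of this edge correction over a depth patch aligned with the grayscale gradient patch; the tolerances, the left-neighbour comparison and the row-wise fill are assumptions of this illustration, not the patent's procedure:

```python
import numpy as np

def correct_edge_depth(depth: np.ndarray, grad: np.ndarray,
                       grad_tol: float = 2.0, depth_tol: float = 0.5) -> np.ndarray:
    """depth: edge-region depth patch with NaN for missing points; grad: matching gray
    gradient patch. A point whose gradient agrees with its left neighbour but whose depth
    does not is treated as abnormal and replaced; NaN depths are filled from the left."""
    out = depth.astype(float).copy()
    rows, cols = out.shape
    for r in range(rows):
        for c in range(1, cols):
            left = out[r, c - 1]
            if np.isnan(left):
                continue
            if np.isnan(out[r, c]):
                out[r, c] = left                                  # nearest-neighbour fill
            elif (abs(grad[r, c] - grad[r, c - 1]) < grad_tol
                  and abs(out[r, c] - left) > depth_tol):
                out[r, c] = left                                  # abnormal point corrected
    return out
```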
Example 3
The embodiment of the invention provides a point cloud correction system in unmanned aerial vehicle surveying and mapping based on artificial intelligence, which is characterized by comprising the following components as shown in figure 3: an image acquisition module 301, a region of interest acquisition module 302, a disparity calculation module 303, a candidate image pair acquisition module 304, a selected image pair acquisition module 305, and an image correction module 306.
The image obtaining module 301 is configured to obtain a first image, a second image, and a first depth image.
The region of interest obtaining module 302 is configured to obtain a region of interest in an overlapping region of the first image and the second image; the region of interest is a part to be corrected in the overlapping region.
The difference degree calculating module 303 is configured to obtain a difference degree index between the region of interest in the first image and the region of interest in the second image.
The candidate image pair obtaining module 304 is configured to determine a candidate image pair according to the difference index and a preset difference threshold.
The selected image pair obtaining module 305 is configured to determine whether the three-dimensional angular velocity and the three-dimensional acceleration at the time corresponding to the to-be-selected image pair are unchanged: and if the judgment result is yes, the image pair to be selected is the selected image pair.
The image correction module 306 is configured to correct the first depth image according to the gray scale gradient of the selected image pair.
In conclusion, the invention provides a point cloud correction method in unmanned aerial vehicle surveying and mapping based on artificial intelligence. The unmanned aerial vehicle carries a global shutter camera and a rolling shutter camera to acquire images; according to the camera imaging rules, the texture information, edge information and color information within the optimal ROI are selected for matching, the image with the least fluctuation and the best imaging is selected as the reference image, and the jelly-effect images that the unmanned aerial vehicle may produce are filtered out of the acquired images. The TOF point cloud data are then corrected according to the edge gradient information in the optimal frame image, so as to obtain accurate point cloud data, generate a depth image and meet the surveying and mapping precision requirement.
In this disclosure, words such as "including", "comprising" and "having" are open-ended terms that mean "including, but not limited to", and are used interchangeably with that phrase. As used herein, the word "or" refers to, and is used interchangeably with, the word "and/or", unless the context clearly dictates otherwise. The word "such as" is used herein to mean, and is used interchangeably with, the phrase "such as but not limited to".
It should also be noted that the various components or steps may be broken down and/or re-combined in the methods and systems of the present invention. Such decomposition and/or recombination should be considered as equivalents of the present disclosure.
The above-mentioned embodiments are merely examples for clearly illustrating the present invention and do not limit its scope. It will be apparent to those skilled in the art that other variations and modifications may be made on the basis of the foregoing description, and it is neither necessary nor possible to exhaustively enumerate all embodiments herein. All designs identical or similar to the present invention fall within the scope of the present invention.

Claims (7)

1. A point cloud correction method in unmanned aerial vehicle surveying and mapping based on artificial intelligence is characterized by comprising the following steps:
acquiring a first image, a second image and a first depth image;
acquiring a region of interest in an overlapping region of the first image and the second image; the region of interest is a part to be corrected in the overlapping region;
acquiring a difference degree index of the region of interest in the first image and the region of interest in the second image;
determining an image pair to be selected according to the difference index and a preset difference threshold;
judging whether the three-dimensional angular velocity and the three-dimensional acceleration of the image pair to be selected at the corresponding moment are unchanged:
if the judgment result is yes, the image pair to be selected is a selected image pair;
the first depth image is corrected according to the gray scale gradients of the selected image pair.
2. The method of claim 1, wherein the disparity indicator comprises color disparity, gray scale gradient disparity, and edge disparity.
3. The method of claim 2, wherein the step of obtaining the color difference comprises:
dividing the region of interest into different regions according to the hue of the part in the region of interest within the preset distance threshold range;
calculating the hue, saturation and lightness of each region and the average value of hue, the average value of saturation and the average value of lightness of all the regions;
and obtaining the color difference according to the hue, the saturation and the brightness of each area and the average value of the hue, the average value of the saturation and the average value of the brightness of all the areas.
4. The method of claim 2, wherein the step of obtaining the gray scale gradient difference comprises:
carrying out gray processing on the first image and the second image to respectively obtain a first gray image and a second gray image;
obtaining gradient information of the first gray image according to gradient values of pixel points in the region of interest in the first gray image;
obtaining gradient information of a second gray scale image according to gradient values of pixel points in the region of interest in the second gray scale image;
and obtaining the gray gradient difference degree according to the gradient information of the first gray image and the gradient information of the second gray image.
5. The method of claim 2, wherein the step of obtaining the edge difference comprises:
acquiring a first image edge pixel point set and a second image edge pixel point set, wherein the first image edge pixel point set is a set formed by the edge pixels of the interested region in the first image, and the second image edge pixel point set is a set formed by the edge pixels of the interested region in the second image;
and obtaining the edge difference degree according to the first image edge pixel point set and the second image edge pixel point set.
6. The method of claim 1, further comprising, before acquiring the first image, the second image and the first depth image during the mapping process:
and carrying out self-adaptive adjustment on the damping of the unmanned aerial vehicle holder according to a preset adjustment interval.
7. A point cloud correction system in unmanned aerial vehicle surveying and mapping based on artificial intelligence, which is characterized by comprising: the system comprises an image acquisition module, a region-of-interest acquisition module, a difference calculation module, a to-be-selected image pair acquisition module, a selected image pair acquisition module and an image correction module;
the image acquisition module is used for acquiring a first image, a second image and a first depth image;
the interesting region acquiring module is used for acquiring an interesting region in an overlapping region of the first image and the second image; the region of interest is a part to be corrected in the overlapping region;
the difference degree calculation module is used for acquiring a difference degree index of the region of interest in the first image and the region of interest in the second image;
the image pair to be selected acquisition module is used for determining an image pair to be selected according to the difference degree index and a preset difference degree threshold value;
the selected image pair obtaining module is used for judging whether the three-dimensional angular velocity and the three-dimensional acceleration at the moment corresponding to the image pair to be selected are unchanged: if the judgment result is yes, the image pair to be selected is a selected image pair;
the image correction module is configured to correct the first depth image according to a gray scale gradient of the selected image pair.
CN202211577038.6A 2022-12-11 2022-12-11 Point cloud correction method and system in unmanned aerial vehicle surveying and mapping based on artificial intelligence Pending CN115909108A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211577038.6A CN115909108A (en) 2022-12-11 2022-12-11 Point cloud correction method and system in unmanned aerial vehicle surveying and mapping based on artificial intelligence

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211577038.6A CN115909108A (en) 2022-12-11 2022-12-11 Point cloud correction method and system in unmanned aerial vehicle surveying and mapping based on artificial intelligence

Publications (1)

Publication Number Publication Date
CN115909108A true CN115909108A (en) 2023-04-04

Family

ID=86496816

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211577038.6A Pending CN115909108A (en) 2022-12-11 2022-12-11 Point cloud correction method and system in unmanned aerial vehicle surveying and mapping based on artificial intelligence

Country Status (1)

Country Link
CN (1) CN115909108A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN119360252A (en) * 2024-12-24 2025-01-24 旭日蓝天(武汉)科技有限公司 A method and system for accurate landing of unmanned aerial vehicle based on visual recognition
CN119360252B (en) * 2024-12-24 2025-03-25 旭日蓝天(武汉)科技有限公司 A method and system for accurate landing of unmanned aerial vehicle based on visual recognition

Similar Documents

Publication Publication Date Title
CN115439424B (en) Intelligent detection method for aerial video images of unmanned aerial vehicle
EP3438777B1 (en) Method, apparatus and computer program for a vehicle
CN110799918B (en) Method, apparatus and computer-readable storage medium for vehicle, and vehicle
CN107463918B (en) Lane line extraction method based on fusion of laser point cloud and image data
US10909395B2 (en) Object detection apparatus
JP6545997B2 (en) Image processing device
US8400505B2 (en) Calibration method, calibration device, and calibration system including the device
CN104574393B (en) A kind of three-dimensional pavement crack pattern picture generates system and method
CN112541953B (en) Vehicle detection method based on radar signal and video synchronous coordinate mapping
CN109343041B (en) Monocular distance measuring method for advanced intelligent auxiliary driving
US20120026329A1 (en) Method and system for measuring vehicle speed based on movement of video camera
CN111091592A (en) Image processing method, image processing apparatus, electronic device, and readable storage medium
CN112348775A (en) Vehicle-mounted all-round-looking-based pavement pool detection system and method
CN106488139A (en) Image compensation method, device and unmanned plane that a kind of unmanned plane shoots
CN113450418A (en) Improved method, device and system for underwater calibration based on complex distortion model
CN117061868A (en) Automatic photographing device based on image recognition
CN111951339B (en) Image processing method for parallax calculation using heterogeneous binocular cameras
CN115909108A (en) Point cloud correction method and system in unmanned aerial vehicle surveying and mapping based on artificial intelligence
CN114544006B (en) Low-altitude remote sensing image correction system and method based on ambient illumination condition
CN119205936B (en) A paper chart deformation error detection method and system based on machine vision
CN113706424B (en) Jelly effect image correction method and system based on artificial intelligence
CN114719873A (en) Low-cost fine map automatic generation method and device and readable medium
CN111833384B (en) Method and device for rapidly registering visible light and infrared images
CN116503492A (en) Binocular camera module calibration method and calibration device in automatic driving system
CN117671033A (en) Quick calibration method and system for main point of camera image based on night light tracking

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination