CN110033411B - High-efficiency road construction site panoramic image splicing method based on unmanned aerial vehicle - Google Patents
- Publication number: CN110033411B (application CN201910292872.2A)
- Authority: CN (China)
- Prior art keywords: point, image, pixel, suture line, images
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
Abstract
The invention relates to an efficient method, based on an unmanned aerial vehicle (UAV), for splicing panoramic images of a highway construction site. By correcting the geographic information coordinates and attitude parameters of the aerial images, selecting key splicing areas, matching feature points efficiently, and splicing images rapidly based on an optimal suture line and image fusion, the method solves three problems that arise while the UAV cruises: deviation of the local coordinate system, low efficiency of matching feature points over the whole image, and splicing blur and ghosting caused by dynamic targets. The invention is suitable for the overall safety supervision and management of highway engineering construction sites.
Description
Technical Field
The invention relates to a method for efficiently splicing panoramic images of a highway construction site based on an unmanned aerial vehicle.
Background
As traffic construction in China accelerates, quality and safety accidents in traffic engineering, particularly highway engineering, occur easily, frequently, and at a high rate. At present, safety supervision and management of highway construction sites mostly rely on traditional means such as manual observation with telescopes and cameras, which suffer from poor autonomy, low flexibility, limited observation areas, blind spots, strong terrain dependence, and an inability to manage the construction site globally. In view of these problems, scholars at home and abroad have studied vision-based engineering safety management methods. With the recent development of image acquisition hardware such as UAVs and of software technologies such as computer vision and image processing, several methods now generate construction site panoramas from UAV imagery and image splicing, laying a research foundation for overall safety supervision of the whole construction site. However, these methods are often not truly effective in practical engineering applications.
There are three reasons. First, the UAV develops local coordinate system deviations while cruising, such as viewing angle fluctuations in the cruise plane; traditional methods do not correct the UAV's geographic information coordinates and attitude parameters, so local images are distorted and the splicing errors become very obvious when a panoramic image is generated. Second, most current image splicing algorithms match feature points over the entire image area, so processing efficiency is low, and real-time or quasi-real-time splicing of multiple high-resolution images of an actual construction site is very difficult. Third, because of unavoidable natural wind and the need to track dynamic targets on the construction site, the panoramas generated by traditional methods contain blur and ghosting. Providing an efficient and accurate panoramic image splicing method that accounts for the UAV's position and attitude changes during cruising is therefore a problem to be solved urgently.
Disclosure of Invention
To remedy these defects, the invention provides an efficient UAV-based method for splicing panoramic images of a road construction site, solving the problems of local coordinate system deviation, inefficient whole-image feature point matching, and splicing blur and ghosting caused by dynamic targets during the UAV's cruise.
The technical scheme adopted by the invention is as follows: an efficient UAV-based road construction site panoramic image splicing method, comprising the following steps:
firstly, extracting the geographic information and attitude parameters of the images acquired by the unmanned aerial vehicle, converting the unmanned aerial vehicle's position information from the geographic coordinate system to a local coordinate system based on Gaussian projection and coordinate rotation and translation, correcting the homography matrix according to the unmanned aerial vehicle's attitude parameters, and eliminating the image distortion errors caused by wind-induced vibration deviations of the cruise angle;
secondly, matching the images corrected by the geographic information and attitude parameters pairwise between neighbors, selecting a local key splicing area for the feature points based on the local pixel variation maximum, and matching feature points within the key area based on ORB features;
and thirdly, iteratively performing the optimal suture line search between adjacent images and the segmented weighted image boundary fusion algorithm, based on the principle of minimum color and geometric error and the weighted average of the overlap area, thereby eliminating splicing blur and ghosting and obtaining the final panoramic spliced image.
The invention also has the following technical characteristics:
1. the first step specifically includes:
step one, a flight control platform is used to control the flight direction and speed of the unmanned aerial vehicle, guaranteeing a 50% overlap rate between adjacent images and enabling continuous processing of multiple images;
step two, the images obtained in step one are numbered consecutively, the extracted geographic information and attitude parameters are corrected, and homography matrix conversion and registration are performed on the multiple images;
the forward formula of the Gauss-Krüger projection relating the plane rectangular coordinates to the geographic coordinates is as follows:
wherein x and y are the abscissa and ordinate of the plane rectangular coordinate system, L and B are the longitude and latitude of the ellipsoidal geographic coordinate system, s is the meridian arc length from the equator, and N and η are the radius of curvature in the prime vertical and an intermediate variable, respectively, computed as follows:
wherein a is the semi-major axis of the ellipsoid, and e and e' are the first and second eccentricities of the ellipsoid, respectively.
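Since the patent's own formula images are not reproduced in this text, the coordinate conversion of step one can be sketched with the standard textbook Gauss-Krüger forward series; the WGS84 ellipsoid parameters and the 500 km false easting below are illustrative assumptions, not values stated in the patent.

```python
import math

# Sketch of the Gauss-Krueger forward projection series the text refers to.
# WGS84 parameters and the 500 km false easting are assumptions.
A_AXIS = 6378137.0                  # semi-major axis a (m)
F = 1.0 / 298.257223563             # flattening
E2 = F * (2.0 - F)                  # first eccentricity squared e^2
EP2 = E2 / (1.0 - E2)               # second eccentricity squared e'^2

def meridian_arc(B):
    """Meridian arc length s from the equator to latitude B (radians)."""
    e2, e4, e6 = E2, E2**2, E2**3
    return A_AXIS * ((1 - e2/4 - 3*e4/64 - 5*e6/256) * B
                     - (3*e2/8 + 3*e4/32 + 45*e6/1024) * math.sin(2*B)
                     + (15*e4/256 + 45*e6/1024) * math.sin(4*B)
                     - (35*e6/3072) * math.sin(6*B))

def gauss_forward(lat_deg, lon_deg, lon0_deg, false_easting=500000.0):
    """Plane coordinates (x northing, y easting) of geographic (B, L)."""
    B = math.radians(lat_deg)
    l = math.radians(lon_deg - lon0_deg)   # longitude difference from lon0
    t = math.tan(B)
    c = math.cos(B)
    eta2 = EP2 * c * c                     # intermediate variable eta^2
    N = A_AXIS / math.sqrt(1 - E2 * math.sin(B)**2)  # prime-vertical radius
    x = (meridian_arc(B)
         + N*t*c**2 * l**2 / 2
         + N*t*c**4 * (5 - t**2 + 9*eta2 + 4*eta2**2) * l**4 / 24
         + N*t*c**6 * (61 - 58*t**2 + t**4) * l**6 / 720)
    y = (N*c*l
         + N*c**3 * (1 - t**2 + eta2) * l**3 / 6
         + N*c**5 * (5 - 18*t**2 + t**4 + 14*eta2 - 58*t**2*eta2) * l**5 / 120
         + false_easting)
    return x, y
```

On the central meridian the series collapses as expected: the easting reduces to the false easting and the northing to the meridian arc length.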
2. In the second step, the method for selecting the key splicing area includes:
wherein x and y represent the pixel coordinates in the width and height directions, respectively, I represents the gray level of the image, and I_i ∩ I_{i+1} represents the overlap area of the i-th and (i+1)-th images; the overlap rate is controlled at 50% by the flight control platform.
3. In the second step, after the selection of the key splicing region is completed, the ORB features are extracted as follows. First, any pixel on the image is taken as a center and a circle of fixed radius is drawn; the gray value of each pixel on the circle is compared with that of the center pixel, and the number of gray differences exceeding a set threshold is counted, which decides whether the center pixel is a candidate feature point. The circular template has a radius of 3 pixels, so a point p to be detected is compared with the 16 surrounding circle pixels to judge whether enough of them differ in attribute from p; in the gray image, the algorithm compares the gray value of each circle point with p, and if n consecutive pixels are all brighter or all darker than p, then p is a corner point, with n = 9. Next, N point pairs are selected in a fixed pattern around the keypoint p, and the combined comparison results of the N point pairs form the descriptor: with the keypoint p as center and d as radius, a circle O is drawn and N point pairs are selected within it, where N may be 512; a two-dimensional coordinate system is established with the keypoint as center and the line from the keypoint to the centroid of the sampling region as the X axis. Two points are matched successfully when their similarity exceeds a threshold.
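The FAST corner test described above can be sketched in pure Python; the standard 16-pixel Bresenham circle offsets and the test images are illustrative, and the brightness threshold is an assumed parameter.

```python
# Sketch of the FAST segment test above: a radius-3 circle of 16 pixels
# around candidate p; p is a corner if n = 9 contiguous circle pixels are
# all brighter or all darker than p by more than a threshold.
CIRCLE = [(0,-3),(1,-3),(2,-2),(3,-1),(3,0),(3,1),(2,2),(1,3),
          (0,3),(-1,3),(-2,2),(-3,1),(-3,0),(-3,-1),(-2,-2),(-1,-3)]

def is_fast_corner(img, x, y, thresh, n=9):
    """img is a list of rows (img[row][col]); (x, y) = (col, row)."""
    p = img[y][x]
    # +1 brighter, -1 darker, 0 similar, for each of the 16 circle pixels
    marks = []
    for dx, dy in CIRCLE:
        v = img[y + dy][x + dx]
        if v - p > thresh:
            marks.append(1)
        elif p - v > thresh:
            marks.append(-1)
        else:
            marks.append(0)
    # look for a run of >= n equal non-zero marks, with wrap-around
    doubled = marks + marks
    run, prev = 0, 0
    for m in doubled:
        run = run + 1 if (m != 0 and m == prev) else (1 if m != 0 else 0)
        prev = m
        if run >= n:
            return True
    return False

# Illustrative 9x9 test image: a bright quadrant whose corner sits at (4, 4).
corner_img = [[100 if (r <= 4 and c <= 4) else 0 for c in range(9)]
              for r in range(9)]
```

On this synthetic image, 11 contiguous circle pixels are darker than the center, so the segment test fires; on a flat image no pixel passes.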
4. In the third step, the objective optimization function of the optimal suture line is specifically:
E(x,y) = E_color(x,y)^2 + E_geometry(x,y) (4)
E_color = ΔI_i = I_{i+1} − I_i (6)
wherein E represents the objective optimization function of the optimal suture line, E_color represents the difference of color values of overlapping pixel points on the two original images, and E_geometry represents the structural difference of overlapping pixel points on the two original images; S_x and S_y respectively represent the Sobel gradient operators, I_i and I_{i+1} respectively represent the two adjacent images, and ⊗ represents the convolution operation.
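Equations (4) and (6) can be sketched numerically as follows. Note that equation (5) for E_geometry appears only as an image in the source; the product of absolute Sobel gradients of ΔI used below is therefore an assumption, chosen to be consistent with the surrounding mention of S_x, S_y, and the convolution operation.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
SOBEL_Y = SOBEL_X.T

def conv3(img, k):
    """3x3 'same' convolution with edge padding (no SciPy dependency)."""
    p = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def seam_energy(img_i, img_j):
    """E = E_color^2 + E_geometry per equations (4) and (6).

    E_color = delta I = I_{i+1} - I_i. The geometry term below (product of
    absolute Sobel gradients of delta I) is an assumed stand-in for the
    source's unreproduced equation (5).
    """
    d = img_j.astype(float) - img_i.astype(float)      # E_color
    e_geom = np.abs(conv3(d, SOBEL_X)) * np.abs(conv3(d, SOBEL_Y))
    return d**2 + e_geom
```

Identical overlaps give zero energy everywhere, and a constant exposure offset contributes only through the squared color term, since Sobel gradients of a constant vanish.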
5. The image boundary segmentation weighting fusion algorithm in the third step specifically comprises the following steps:
in the formula, (x, y) ∈ R denotes a pixel point in the key splicing region R, f(x, y) denotes the image after weighted fusion, f_i(x, y) denotes the i-th original image, i = 1, 2 indexes the two consecutive adjacent images to be spliced, d_i(x, y) denotes a segmented weighting coefficient that varies with pixel position, changing linearly along the height direction of the image in the range 0-1, and h is the image height of the key splicing region. The segmented weighted fusion algorithm is fast to compute and has a clear physical meaning: in each image to be stitched, the farther a position is from the previous image (the closer y is to h), the closer the corresponding weight coefficient is to 1 and the greater its effect in the image fusion. After image fusion near the optimal suture line is completed, the splicing blur and ghosting produced by traditional methods are eliminated.
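The segmented weighted fusion above can be sketched in pure Python; discretizing the linear weight as d = y / (h − 1) over the rows of the overlap region is an illustrative choice.

```python
# Sketch of the segmented weighted fusion over the key splicing region:
# the weight d rises linearly from 0 to 1 along the region height h, so
# rows near the previous image keep f1 and rows near the next image keep
# f2. Discretizing d as y / (h - 1) is an illustrative choice.
def fuse_region(f1, f2):
    """f1, f2: equal-size lists of pixel rows for the overlap region R."""
    h = len(f1)
    fused = []
    for y in range(h):
        d = y / (h - 1) if h > 1 else 1.0   # segmented weighting coefficient
        fused.append([(1.0 - d) * p1 + d * p2
                      for p1, p2 in zip(f1[y], f2[y])])
    return fused
```

For a three-row overlap the output transitions from the first image's values through their average to the second image's values.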
The invention has the following beneficial effects. Aiming at the local coordinate system deviation, inefficient whole-image feature point matching, and dynamic-target blur and ghosting that arise during the UAV's cruise, the method achieves panoramic high-resolution image splicing for a construction site UAV by correcting the geographic information coordinates and attitude parameters of the aerial images, selecting key splicing areas, matching feature points efficiently, and splicing rapidly based on an optimal suture line and image fusion. The method improves both the computational efficiency of panoramic high-resolution image splicing and the accuracy of the splicing result, and markedly reduces the manual effort required by traditional methods. It also satisfies the requirements of online safety monitoring, early warning, and real-time data processing on a construction site: acquired images are transmitted and spliced directly, with an output delay below ten seconds. The invention raises the automation, intelligence, and accuracy of overall construction site safety supervision and provides a solution for the safety supervision and management of traffic engineering construction sites.
Drawings
FIG. 1 is a flow chart of one embodiment of the present invention;
FIG. 2 is a flow chart of a core algorithm of the present invention;
FIG. 3 is a diagram showing the result of selecting the key area in step two of the present invention;
FIG. 4 is a diagram showing the ORB feature point matching result in the key area in step two of the present invention;
FIG. 5 is a graph of the optimal stitch line results of step three of the present invention, wherein the black broken line represents the optimal stitch line of two adjacent images;
FIG. 6 is a global high-definition splicing result diagram of a highway engineering construction site performed by the embodiment of the invention;
FIG. 7 is a diagram of the blur and ghost elimination effect of the present invention, wherein FIG. 7(a) is a local stitching blur and ghost map generated by the conventional method, and FIG. 7(b) is a high resolution result map generated by the present invention.
Detailed Description
Example 1
The embodiment is a method for efficiently splicing panoramic images of a highway engineering construction site based on unmanned aerial vehicle geographic information and attitude parameter correction, as shown in fig. 1, the method comprises the following steps:
the method comprises the steps of firstly, extracting geographic information and attitude parameters of an image acquired by the unmanned aerial vehicle, realizing conversion of position information of the unmanned aerial vehicle from a geographic coordinate system to a local coordinate system based on Gaussian projection and coordinate rotation translation, correcting a homography matrix according to the attitude parameters of the unmanned aerial vehicle, and eliminating image distortion errors caused by deviation of a wind-induced vibration cruise angle.
For example, in one embodiment, the resolution of a single original color image is 5472 × 3684, and geographical information such as longitude, latitude, elevation and the like of the corresponding shooting position and attitude parameters such as pitch angle, heading angle, roll angle and the like are extracted from the original image. And then, carrying out homography matrix conversion on the plurality of images according to pairwise matching of adjacent images to obtain a continuous registration result of the plurality of images.
And secondly, performing adjacent pairwise matching on the images after geographic information and attitude parameter correction, selecting a local key splicing area of the feature points based on the local pixel variation maximum, and performing feature point matching in the key area based on ORB (Oriented FAST and Rotated BRIEF) features.
The local key splicing area is selected within the overlap area of the images to be spliced. The height of the key splicing region is 50% of the image height, i.e., 1842 pixels. The width follows from the inclination angle of the road edge in the image, 4-6 degrees, which corresponds to 125-190 pixels across half the image; 150 pixels is adopted for automatic frame selection. Because site conditions such as wind cause lateral image offsets of up to 100 pixels, 100 pixels are added on each side of adjacent images during frame selection to guarantee that the target area is captured. The key splicing region is therefore 300 pixels wide in the front image and 500 pixels wide in the rear image, ensuring reliable feature point matching within the region. Fig. 3 shows the key region selection result for adjacent images, and fig. 4 shows the ORB feature point matching result within the key region.
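The 50%-height cropping above can be sketched as follows. The flight layout assumed here (the front image's bottom half overlapping the rear image's top half) is an illustrative assumption; in practice the 300- and 500-pixel width windows from the text would further narrow the cropped columns.

```python
# Sketch of cropping the key splicing regions from two adjacent images.
# front/rear are 2-D arrays (lists of rows). The assumed layout is that
# the bottom half of the front image overlaps the top half of the rear
# image; with a 50% overlap rate the region height is img_h // 2
# (1842 rows for a 3684-row image).
def crop_key_regions(front, rear):
    h = len(front)
    kh = h // 2
    front_key = [row[:] for row in front[h - kh:]]   # last 50% of front
    rear_key = [row[:] for row in rear[:kh]]         # first 50% of rear
    return front_key, rear_key
```

On a toy 6-row image pair, the front crop keeps rows 3-5 and the rear crop keeps rows 0-2.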
And thirdly, iteratively searching an optimal suture line of adjacent images and fusing image boundaries based on a color and geometric error minimum principle and overlapping area weighted average, and eliminating splicing blur and ghost phenomena to obtain a final panoramic spliced image.
The optimal suture line is found by first differencing the overlapping parts of the two images according to the minimum color and geometric error principle to produce a difference image. Dynamic programming is then applied to the difference image: starting from the first row of the overlap area, a suture line is grown from every pixel of that row, and the best of these suture lines is finally selected. The specific steps are: initialize one suture line per column of the first row, set its intensity value to the criterion value of that point, and set its current point to the column of that suture line; then extend row by row down to the last row, where each suture line's current intensity is added to each of the 3 adjacent criterion values in the next row for comparison, the pixel among those 3 with the minimum resulting intensity becomes the extension direction, the suture line's intensity is updated to that minimum, and its current point is updated to the column of that next-row pixel; finally, the suture line with the minimum intensity value among all suture lines is selected as the optimal suture line. The image input to the model is made to match the input size used during training. The black fold line in fig. 5 represents the optimal suture line of the two adjacent images.
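The seam-growing procedure above can be sketched in pure Python. Following the text literally, each seam extends greedily to the cheapest of the 3 neighbouring pixels in the next row; the small energy grids in the test are illustrative.

```python
# Sketch of the suture-line search described above: one seam is grown from
# every column of the first row of the difference image, each row extends
# to the cheapest of the 3 adjacent pixels below (clipped at the borders),
# and the seam with the minimum accumulated intensity wins.
def best_seam(energy):
    """energy: list of rows of criterion values; returns (cost, columns)."""
    h, w = len(energy), len(energy[0])
    # each candidate seam: (accumulated intensity, list of column indices)
    seams = [(energy[0][c], [c]) for c in range(w)]
    for row in range(1, h):
        new_seams = []
        for cost, path in seams:
            c = path[-1]
            choices = [j for j in (c - 1, c, c + 1) if 0 <= j < w]
            j = min(choices, key=lambda cc: energy[row][cc])
            new_seams.append((cost + energy[row][j], path + [j]))
        seams = new_seams
    return min(seams, key=lambda s: s[0])
```

A zero-cost column or diagonal in the energy grid is recovered exactly as the optimal suture line.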
The embodiment was developed under MATLAB 2016a and OpenCV 2.0 and applies directly to construction site images shot by a consumer-grade unmanned aerial vehicle at a cruising height of 30 meters, without special shooting or detection equipment. The method offers high splicing precision, high speed, and low cost; it can be used for offline overall safety assessment of the construction site or for quasi-real-time monitoring with a processing delay within 5 seconds, improving the automation, intelligence, accuracy, and processing efficiency of overall site safety supervision.
Fig. 6 to 7 are graphs of stitching effects of an embodiment of the present invention, where fig. 6 is a high-definition panorama after 8 images are continuously stitched, fig. 7(a) is a graph of local stitching blur and ghost generated by a conventional method, and fig. 7(b) is a graph of a high-resolution result generated by the present invention.
Example 2
This embodiment is substantially the same as example 1 except that: the first step specifically comprises the following steps:
the method comprises the steps of controlling the flight direction and speed of the unmanned aerial vehicle by adopting a flight control platform PIX4D, ensuring that the overlapping rate of adjacent images is ensured to be 50%, and realizing continuous processing of a plurality of images.
And secondly, continuously numbering the images obtained in the first step, correcting the extracted geographic information and attitude parameters, and performing homography matrix conversion and registration on the plurality of images.
The forward formula of the Gauss-Krüger projection relating the plane rectangular coordinates to the geographic coordinates is as follows:
wherein x and y are the abscissa and ordinate of the plane rectangular coordinate system, L and B are the longitude and latitude of the ellipsoidal geographic coordinate system, s is the meridian arc length from the equator, and N and η are the radius of curvature in the prime vertical and an intermediate variable, respectively, computed as follows:
wherein a is the semi-major axis of the ellipsoid, and e and e' are the first and second eccentricities of the ellipsoid, respectively.
the method has the advantages that local coordinate conversion is achieved by extracting the geographic coordinate information of the unmanned aerial vehicle; the overlapping rate of adjacent images is ensured to be 50% through the flight control platform, and the continuous splicing of the construction site global images can be realized.
The construction site is fully covered by 8 images with the overlapping rate of 50% collected from a road pavement construction section, and the geographic coordinates are projected to a plane coordinate system to calculate the relative position by adopting Gaussian-Kluker projection. In the embodiment, geographic information and attitude parameters of 8 continuous images acquired by the unmanned aerial vehicle and having an overlapping rate of 50% are extracted, and based on gaussian projection and coordinate rotation translation, correction of the geographic information and attitude parameters of the unmanned aerial vehicle is realized, and the result is shown in table 1.
TABLE 1 geographical information and attitude parameter correction results
The other steps were the same as in example 1.
Example 3
This embodiment is substantially the same as example 1 except that: in the second step, the selection principle of the key area is
Wherein x and y represent the pixel coordinates in the width and height directions, respectively, I represents the gray level of the image, and I_i ∩ I_{i+1} represents the overlap area of the i-th and (i+1)-th images; the overlap rate is controlled at 50% by the flight control platform.
After the key area selection is completed, the extraction flow of the ORB features is as follows:
based on FAST corner detection and BRIEF feature descriptors, ORB features have good robustness and real-time, and the computation cost and memory requirement are both low. Firstly, taking any pixel on an image as a circle center, making a circle on the image by using a fixed radius, counting the gray value of the pixel through which a peripheral arc passes, then comparing the gray values of the peripheral arc pixel and a central point pixel, counting the number of gray difference values larger than a set threshold value, and taking the number as a basis for judging whether the central pixel point is a candidate characteristic point. The radius of a commonly used circular template is 3 pixels, a point p to be detected is compared with pixels in a circle formed by 16 pixels around the point p to be detected, whether enough pixels are different from the p in attribute is judged, if yes, the p can be an angular point, in a gray image, an algorithm is to compare the gray value of each point with the p, and if n continuous pixels are brighter or darker than the p, the p can be the angular point. Through tests, n is 9, and the processing effect, the speed and the robustness obtained by the algorithm are very good. Then, N point pairs are selected in a certain pattern around the key point P, and the comparison results of the N point pairs are combined to be used as a descriptor. And D is taken as the radius of the circle O with the key point P as the center of the circle, and N point pairs are selected in a certain mode in the circle O. In practical application, N may be 512. And establishing a two-dimensional coordinate system by taking the key point as a circle center and taking a connecting line of the key point and the centroid of the point taking area as an X axis. Under different rotation angles, the points extracted in the same point extraction mode are consistent, so that the problem of rotation consistency is solved. 
And finally, a threshold is set on the feature descriptors, for example A: 10101011 and B: 10101010; when the similarity of the two points is greater than the threshold, the two points are matched successfully.
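The descriptor comparison above can be sketched as a bit-similarity test (equivalently, 1 minus the normalized Hamming distance); the 0.8 threshold is an illustrative assumption.

```python
# Sketch of the descriptor similarity test above: the fraction of equal
# bits between two binary descriptor strings. The 0.8 match threshold is
# illustrative, not a value stated in the patent.
def bit_similarity(a, b):
    assert len(a) == len(b)
    return sum(x == y for x, y in zip(a, b)) / len(a)

def is_match(a, b, thresh=0.8):
    return bit_similarity(a, b) > thresh
```

The example pair from the text, A: 10101011 and B: 10101010, differs only in its last bit, giving a similarity of 7/8 = 0.875.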
The other steps and parameters were the same as in example 1.
Example 4
This embodiment is substantially the same as example 1 except that: in the third step, the objective optimization function of the optimal suture line is specifically as follows:
E(x,y) = E_color(x,y)^2 + E_geometry(x,y) (4)
E_color = ΔI_i = I_{i+1} − I_i (6)
wherein E represents the objective optimization function of the optimal suture line, E_color represents the difference of color values of overlapping pixel points on the two original images, and E_geometry represents the structural difference of overlapping pixel points on the two original images; S_x and S_y respectively represent the Sobel gradient operators, I_i and I_{i+1} respectively represent the two adjacent images, and ⊗ represents the convolution operation.
The other steps and parameters were the same as in example 1.
Example 5
This embodiment is substantially the same as example 1 except that a segmented weighted fusion algorithm is adopted in step three. If the images were simply superposed during splicing, obvious seams would appear at the joins; the segmented weighted fusion algorithm is introduced to eliminate these seams. The weighted-average weight function is chosen by a gradual fade-out method, using the Euclidean distance from a pixel point to the center of the overlap area as the weight function; across the overlap region, the weight function changes gradually from 1 to 0. The weighted average algorithm handles exposure differences well, and its fusion is fast, simple to implement, and real-time. On a consumer notebook with 8GB DDR3 memory and an i7-4790 CPU, under MATLAB 2016a and OpenCV 2.0, the processing time is about 4 s, versus about 13 s for a traditional image splicing method based on greedy SIFT feature matching, an efficiency improvement of nearly 2 times.
The other steps and parameters were the same as in example 1.
Claims (1)
1. An efficient road construction site panoramic image splicing method based on an unmanned aerial vehicle is characterized by comprising the following steps:
firstly, a flight control platform is used to control the flight direction and speed of the unmanned aerial vehicle, guaranteeing a 50% overlap rate between adjacent images and enabling continuous processing of multiple images; the obtained images are numbered consecutively, the extracted geographic information and attitude parameters of the unmanned aerial vehicle are corrected, homography matrix conversion and registration of the multiple images are performed, and the image distortion errors caused by wind-induced vibration deviations of the cruise angle are eliminated;
the forward formula of the Gauss-Krüger projection relating the plane rectangular coordinates to the geographic coordinates is as follows:
wherein x and y are the abscissa and ordinate of the plane rectangular coordinate system, L and B are the longitude and latitude of the ellipsoidal geographic coordinate system, s is the meridian arc length from the equator, and N and η are the radius of curvature in the prime vertical and an intermediate variable, respectively, computed as follows:
wherein a is the semi-major axis of the ellipsoid, and e and e' are the first and second eccentricities of the ellipsoid, respectively;
and secondly, performing pairwise matching of adjacent images corrected by the geographic information and attitude parameters, and selecting the local key splicing area for feature points based on the local maximum of pixel variation, wherein the key splicing area is selected as follows:
wherein x and y represent the coordinate values of a pixel in the width and height directions, respectively, I represents the gray level of the image, and I_i ∩ I_{i+1} represents the overlapping area of the i-th and (i+1)-th images, whose overlap rate is controlled at 50% by the flight control platform, i.e. 1842 pixels; from analysis of the images, the inclination angle of the road-width edge is 4 to 6 degrees, so the road transversely occupies 125 to 190 pixels in the half image, and 150 pixels are selected for automatic frame selection; because the images shift transversely under the influence of wind speed and other site conditions, with an offset value within 100 pixels, 100 pixels are added on each of the left and right sides of adjacent images during frame selection, ensuring that the selected target area, i.e. the key splicing region, is 300 pixels wide in the previous image and 500 pixels wide in the subsequent image, which guarantees the matching effect of the feature points in the key splicing region;
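The key-region selection above amounts to cropping narrow strips before feature matching. A minimal sketch using the strip widths stated in the claim (300 and 500 pixels); which edge the strip is taken from depends on the flight direction, which the claim does not fix, so the right/left choice here is an assumption:

```python
import numpy as np

PREV_W, NEXT_W = 300, 500   # key-splice strip widths stated in the claim

def key_regions(prev_img, next_img):
    """Crop the key splicing regions: the last PREV_W pixel columns of the
    previous image and the first NEXT_W columns of the next image, so that
    ORB matching only runs inside these strips instead of the full frames."""
    return prev_img[:, -PREV_W:], next_img[:, :NEXT_W]
```

Restricting matching to these strips is what makes the method fast: descriptors are computed on a few hundred columns rather than the whole 3600-plus-pixel-wide frame.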
after the key splicing area is selected, feature-point matching is carried out in the key area based on ORB features, and the ORB matching method is as follows: first, taking any pixel on the image as the centre, a circle of fixed radius is drawn on the image, the gray values of the pixels crossed by the surrounding arc are recorded, each arc pixel's gray value is compared with that of the centre pixel, and the number of gray differences exceeding a set threshold is counted as the basis for judging whether the centre pixel is a candidate feature point; the radius of the circular template is 3 pixels, and the point p to be detected is compared with the 16 surrounding pixels on the circle to judge whether enough of them differ from p in attribute, in which case p is a corner point; in the gray image, the algorithm compares the gray value of each circle pixel with that of p, and if N consecutive pixels are all brighter or all darker than p, then p is a corner point, with N taken as 9; then n point pairs are selected around the key point p in a fixed pattern and the comparison results of the n point pairs are combined as the descriptor: taking the key point p as the centre and d as the radius, a circle O is drawn, n point pairs are selected in circle O in a fixed pattern, with n taken as 512; a two-dimensional coordinate system is established with the key point as the origin and the line connecting the key point to the centroid of the sampling area as the X axis; when the similarity of two descriptors is greater than a threshold, the two points are matched successfully;
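The radius-3 circle test described above is the FAST segment test that ORB builds on. A self-contained sketch in plain NumPy (not the patent's implementation; a production system would use an optimised library detector):

```python
import numpy as np

# The 16 Bresenham-circle offsets of radius 3 used by the FAST detector
# (dx, dy relative to the candidate pixel p), in clockwise order.
CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2),
          (1, 3), (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1),
          (-2, -2), (-1, -3)]

def is_fast_corner(img, x, y, thresh=20, n=9):
    """Return True if pixel (x, y) is a FAST-N corner: at least n
    CONTIGUOUS pixels on the radius-3 circle are all brighter than
    p + thresh or all darker than p - thresh (N = 9 in the claim)."""
    p = int(img[y, x])
    ring = [int(img[y + dy, x + dx]) for dx, dy in CIRCLE]
    ring = ring + ring[:n - 1]                 # wrap around for contiguity
    brighter = [v > p + thresh for v in ring]
    darker = [v < p - thresh for v in ring]
    for flags in (brighter, darker):
        run = 0
        for f in flags:
            run = run + 1 if f else 0
            if run >= n:
                return True
    return False
```

Corners found this way become the key points around which the 512 binary point-pair comparisons form the rotation-aware descriptor.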
and thirdly, iteratively carrying out the optimal suture line search for adjacent images and the image boundary segmentation weighting fusion algorithm, based on the principle of minimum color and geometric error and the weighted average of the overlapping area, eliminating splicing blur and ghosting to obtain the final panoramic spliced image; the process of searching for the optimal suture line is: first, a difference operation is performed on the overlapping part of the two images according to the principle of minimum color and geometric error to generate a difference image; then, applying the idea of dynamic programming to the difference image, starting from the first row of the overlapping area, a suture line is established taking each pixel of that row as a starting point; finally, the optimal suture line is selected from among these suture lines; the specific steps are: initialize each pixel point of the first row as a suture line, initialize the intensity value of each suture line to the criterion value of that point, and set the current point of each suture line to the column of that point; extend each suture line downward row by row until the last row, where the extension method is to add the criterion values of the 3 pixels in the next row adjacent to the current point to the suture line's intensity and compare the results, take the one of the 3 next-row pixels yielding the minimum intensity as the extension direction, update the suture-line intensity to that minimum value, and update the current point of the suture line to the column of the chosen next-row pixel; finally, select as the optimal suture line the one with the minimum intensity value among all the suture lines, so that the picture input to the model conforms to the size of the picture input during training; the target optimization function of the optimal suture line is specifically as follows:
E(x, y) = E_color(x, y)^2 + E_geometry(x, y)   (4)
E_color = ΔI_i = I_{i+1} − I_i   (6)
wherein E represents the target optimization function of the optimal suture line; E_color represents the difference between the color values of overlapping pixel points on the two original images; E_geometry represents the structural difference of overlapping pixel points on the two original images; S_x and S_y represent the Sobel gradient operators in the x and y directions, respectively; I_i and I_{i+1} represent the two adjacent images; and ⊗ represents the convolution operation;
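The row-by-row seam extension described above is equivalent to a minimum-cost-path dynamic program over the criterion image E. A compact sketch (pure NumPy, function name ours; E would come from equation (4) applied to the overlap):

```python
import numpy as np

def best_seam(E):
    """Dynamic-programming search for the optimal suture line: E is the
    (h, w) criterion/difference image; every column of the first row starts
    a seam, each seam extends to the cheapest of the 3 adjacent pixels in
    the next row, and the seam with the smallest accumulated intensity
    wins. Returns the seam's column index for every row, top to bottom."""
    h, w = E.shape
    cost = E[0].astype(float).copy()        # seam intensity so far
    back = np.zeros((h, w), dtype=int)      # chosen predecessor column
    for r in range(1, h):
        new_cost = np.empty(w)
        for c in range(w):
            lo, hi = max(0, c - 1), min(w, c + 2)
            prev = lo + int(np.argmin(cost[lo:hi]))
            back[r, c] = prev
            new_cost[c] = cost[prev] + E[r, c]
        cost = new_cost
    # trace the minimum-intensity seam back from the last row
    seam = [int(np.argmin(cost))]
    for r in range(h - 1, 0, -1):
        seam.append(int(back[r, seam[-1]]))
    return seam[::-1]
```

Splicing along this seam routes the cut through pixels where the two images already agree, which is what suppresses ghosting before the weighted fusion smooths the remaining boundary.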
the image boundary segmentation weighting fusion algorithm specifically comprises the following steps:
in the formula, f(x, y) represents the weighted fused image of the key splicing area, f_i(x, y) represents the i-th original image, with i = 1, 2 denoting the two consecutive adjacent images to be spliced, d_i(x, y) represents the segmented weighting coefficient, which varies with the pixel position, changes linearly along the height direction of the image and takes values in the range 0 to 1, and h is the image height of the key splicing area.
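A minimal sketch of the weighted fusion defined above, assuming the simplest linear case where d_1 ramps from 1 to 0 over the strip height h and d_2 = 1 − d_1 (the function name is ours; the patent's coefficients are piecewise but likewise linear in the height direction):

```python
import numpy as np

def blend_overlap(f1, f2):
    """Weighted fusion of two overlapping key-splice strips of equal shape:
    the coefficient d1 ramps linearly from 1 to 0 over the strip height, so
    the output fades from image 1 into image 2 without a visible seam."""
    h = f1.shape[0]
    d1 = np.linspace(1.0, 0.0, h)[:, None]   # (h, 1), broadcasts over width
    return d1 * f1 + (1.0 - d1) * f2
```

Because the two weights always sum to 1, overall brightness is preserved while exposure differences between the frames are averaged away across the strip.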
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910292872.2A CN110033411B (en) | 2019-04-12 | 2019-04-12 | High-efficiency road construction site panoramic image splicing method based on unmanned aerial vehicle |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110033411A CN110033411A (en) | 2019-07-19 |
CN110033411B true CN110033411B (en) | 2021-01-12 |
Family
ID=67238177
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910292872.2A Active CN110033411B (en) | 2019-04-12 | 2019-04-12 | High-efficiency road construction site panoramic image splicing method based on unmanned aerial vehicle |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110033411B (en) |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110569927A (en) * | 2019-09-19 | 2019-12-13 | 浙江大搜车软件技术有限公司 | Method, terminal and computer equipment for scanning and extracting panoramic image of mobile terminal |
CN110796734B (en) * | 2019-10-31 | 2024-01-26 | 中国民航科学技术研究院 | Airport clearance inspection method and device based on high-resolution satellite technology |
SG10201913798WA (en) * | 2019-12-30 | 2021-07-29 | Sensetime Int Pte Ltd | Image processing method and apparatus, and electronic device |
CN111680703B (en) * | 2020-06-01 | 2022-06-03 | 中国电建集团昆明勘测设计研究院有限公司 | 360-degree construction panorama linkage positioning method based on image feature point detection and matching |
CN112308774A (en) * | 2020-09-15 | 2021-02-02 | 北京中科遥数信息技术有限公司 | Unmanned aerial vehicle-based map reconstruction method and system, transmission equipment and storage medium |
CN112184662B (en) * | 2020-09-27 | 2023-12-15 | 成都数之联科技股份有限公司 | Camera external parameter initial method and system applied to unmanned aerial vehicle image stitching |
CN112215304A (en) * | 2020-11-05 | 2021-01-12 | 珠海大横琴科技发展有限公司 | Gray level image matching method and device for geographic image splicing |
CN112907452A (en) * | 2021-04-09 | 2021-06-04 | 长春理工大学 | Optimal suture line searching method for image stitching |
CN113286081B (en) * | 2021-05-18 | 2023-04-07 | 中国民用航空总局第二研究所 | Target identification method, device, equipment and medium for airport panoramic video |
CN113450255A (en) * | 2021-06-04 | 2021-09-28 | 西安超越申泰信息科技有限公司 | Aerial image splicing method and device |
CN117687426A (en) * | 2024-01-31 | 2024-03-12 | 成都航空职业技术学院 | Unmanned aerial vehicle flight control method and system in low-altitude environment |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103426153A (en) * | 2013-07-24 | 2013-12-04 | 广州地理研究所 | Unmanned aerial vehicle remote sensing image quick splicing method |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101916452B (en) * | 2010-07-26 | 2012-04-25 | 中国科学院遥感应用研究所 | Method for automatically stitching unmanned aerial vehicle remote sensing images based on flight control information |
CN106023086B (en) * | 2016-07-06 | 2019-02-22 | 中国电子科技集团公司第二十八研究所 | A kind of aerial images and geodata joining method based on ORB characteristic matching |
CN107808362A (en) * | 2017-11-15 | 2018-03-16 | 北京工业大学 | A kind of image split-joint method combined based on unmanned plane POS information with image SURF features |
CN109389555B (en) * | 2018-09-14 | 2023-03-31 | 复旦大学 | Panoramic image splicing method and device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110033411B (en) | High-efficiency road construction site panoramic image splicing method based on unmanned aerial vehicle | |
CN109509230B (en) | SLAM method applied to multi-lens combined panoramic camera | |
CN109800689B (en) | Target tracking method based on space-time feature fusion learning | |
CN115439424B (en) | Intelligent detection method for aerial video images of unmanned aerial vehicle | |
CN106529587B (en) | Vision course recognition methods based on object detection | |
CN110992263B (en) | Image stitching method and system | |
CN111830953A (en) | Vehicle self-positioning method, device and system | |
CN106886748B (en) | TLD-based variable-scale target tracking method applicable to unmanned aerial vehicle | |
CN106373088A (en) | Quick mosaic method for aviation images with high tilt rate and low overlapping rate | |
CN112163995B (en) | Splicing generation method and device for oversized aerial strip images | |
Ma et al. | Crlf: Automatic calibration and refinement based on line feature for lidar and camera in road scenes | |
CN110109465A (en) | A kind of self-aiming vehicle and the map constructing method based on self-aiming vehicle | |
CN104217459B (en) | A kind of spheroid character extracting method | |
CN113689331B (en) | Panoramic image stitching method under complex background | |
CN110047108A (en) | UAV position and orientation determines method, apparatus, computer equipment and storage medium | |
CN112163588A (en) | Intelligent evolution-based heterogeneous image target detection method, storage medium and equipment | |
CN116740288B (en) | Three-dimensional reconstruction method integrating laser radar and oblique photography | |
CN111738071B (en) | Inverse perspective transformation method based on motion change of monocular camera | |
CN104636724A (en) | Vehicle-mounted camera rapid pedestrian and vehicle detection method based on goal congruence | |
CN111967337A (en) | Pipeline line change detection method based on deep learning and unmanned aerial vehicle images | |
CN112947526A (en) | Unmanned aerial vehicle autonomous landing method and system | |
CN116228539A (en) | Unmanned aerial vehicle remote sensing image stitching method | |
WO2024147898A1 (en) | Parking space detection method and system | |
CN107423766B (en) | Method for detecting tail end motion pose of series-parallel automobile electrophoretic coating conveying mechanism | |
CN113393501A (en) | Method and system for determining matching parameters of road image and point cloud data and related equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||