CN114926332A - Unmanned aerial vehicle panoramic image splicing method based on unmanned aerial vehicle mother vehicle - Google Patents

Unmanned aerial vehicle panoramic image splicing method based on unmanned aerial vehicle mother vehicle

Info

Publication number
CN114926332A
CN114926332A (application CN202210423537.3A)
Authority
CN
China
Prior art keywords
image
unmanned aerial vehicle
value
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210423537.3A
Other languages
Chinese (zh)
Inventor
肖文平 (Xiao Wenping)
何敖东 (He Aodong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Hinge Electronic Technologies Co Ltd
Original Assignee
Shanghai Hinge Electronic Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Hinge Electronic Technologies Co Ltd filed Critical Shanghai Hinge Electronic Technologies Co Ltd
Priority to CN202210423537.3A priority Critical patent/CN114926332A/en
Publication of CN114926332A publication Critical patent/CN114926332A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20112 Image segmentation details
    • G06T2207/20132 Image cropping
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The invention provides an unmanned aerial vehicle panoramic image stitching method based on an unmanned aerial vehicle mother vehicle, which comprises the following steps: the unmanned aerial vehicle mother vehicle acquires images shot by the unmanned aerial vehicle; cylindrical projection is performed on the acquired images to obtain cylindrical projection images; corner detection is performed on the cylindrical projection images to obtain corner points; feature extraction and feature matching are performed on the cylindrical projection images according to the obtained corner points to obtain feature matching point pairs of the images to be stitched; and the cylindrical projection images are stitched and fused according to the extracted feature matching points to form a panoramic image. With the technical scheme provided by the invention, the unmanned aerial vehicle mother vehicle can stitch the panoramic image in real time, and the improved stitching algorithm raises the stitching quality of the panoramic image without increasing the computational complexity.

Description

Unmanned aerial vehicle panoramic image splicing method based on unmanned aerial vehicle mother vehicle
Technical Field
The invention relates to the field of panoramic image stitching for unmanned aerial vehicles, and in particular to an unmanned aerial vehicle panoramic image stitching method based on an unmanned aerial vehicle mother vehicle.
Background
With the development of technologies such as inertial navigation control, high-precision attitude sensors and small servo motors in recent years, unmanned aerial vehicle technology has gradually matured, and its degree of automation and intelligence has improved markedly. Collecting evidence images requires multiple viewing angles; images taken from multiple angles must be stitched together and displayed in order to give the user more comprehensive and more accurate imagery that can serve as strong evidence. At the present stage, however, panoramic stitching algorithms are complex and computationally heavy, making real-time processing difficult.
Disclosure of Invention
To address the above defects in the prior art, the invention provides an unmanned aerial vehicle panoramic image stitching method based on an unmanned aerial vehicle mother vehicle, which comprises the following steps: the unmanned aerial vehicle mother vehicle acquires an image shot by the unmanned aerial vehicle;
performing cylindrical projection on the acquired image to obtain a cylindrical projection image; performing corner detection on the cylindrical projection image to obtain corner points; performing feature extraction and feature matching on the cylindrical projection image according to the obtained corner points to obtain feature matching point pairs of the images to be stitched;
stitching and fusing the cylindrical projection images according to the extracted feature matching points to form a panoramic image;
binarizing the cylindrical projection image to obtain a binarized image;
performing contour detection on the binarized image according to an adaptive threshold to obtain detected contour coordinates;
cropping, from the cylindrical projection image, the area enclosed by the contour coordinates to form a new cylindrical projection image;
and acquiring optimal matching point pairs from the feature matching point pairs, and performing image stitching through the optimal matching point pairs.
In a further preferred unmanned aerial vehicle panoramic image stitching method based on an unmanned aerial vehicle mother vehicle, the stitched panoramic image is corrected end to end, and the corrected panoramic image is cropped to obtain the final panoramic image.
In a further preferred unmanned aerial vehicle panoramic image stitching method based on an unmanned aerial vehicle mother vehicle, the image is preprocessed before the cylindrical projection is performed;
the preprocessing at least comprises: acquiring the camera pose and the unmanned aerial vehicle pose corresponding to each image, and constructing an orthographic projection transformation matrix; and converting the image into the orthographic projection direction through the orthographic projection transformation matrix.
In a further preferred unmanned aerial vehicle panoramic image stitching method based on an unmanned aerial vehicle mother vehicle, the corner detection at least comprises:
acquiring the coordinates and feature values of all corner points to form a first corner matrix;
setting a filtering condition, filtering the first corner matrix, and deleting corners that do not meet the condition to form a second corner matrix;
setting a first window of fixed size, traversing the second corner matrix with the first window, and, if the sum of the corner feature values at all coordinates of a window area is not 0, acquiring the coordinate of the corner with the maximum feature value in that window area;
and taking the corner with the maximum feature value as the center of an area enclosed by the first window, acquiring the coordinate feature_coordinate of the enclosed area, and extracting the pixel values of the corresponding area from the cylindrical projection image through those coordinates, recorded as feature_value.
In a further preferred unmanned aerial vehicle panoramic image stitching method based on an unmanned aerial vehicle mother vehicle, the acquisition of the first corner matrix at least comprises:
obtaining intensity factors Dx, Dy and Dxy in the x direction, the y direction and the xy direction;
filtering the acquired intensity factors Dx, Dy and Dxy according to a preset rule;
the preset-rule filtering comprises: within a preset window area, calculating the sum of Dx (or Dy, or Dxy) over all points in the window area, and, if the sum exceeds 255, setting Dx (or Dy, or Dxy) at the center point of the window area to 255; traversing the whole image in turn to obtain the filtered intensity factors Dx', Dy' and Dxy';
and calculating the feature value of each candidate corner from the intensity factors, and, when the feature value of a candidate corner is greater than a preset threshold, recording the coordinates and feature value of the corner to form the first corner matrix.
In a further preferred unmanned aerial vehicle panoramic image stitching method based on an unmanned aerial vehicle mother vehicle, the condition for filtering the first corner matrix comprises:
comparing the corner feature values in the first corner matrix with a preset threshold, keeping all corners greater than the preset threshold and setting the feature values of corners below the preset threshold to 0; and setting the feature values of corners within a preset edge area to 0 to form the second corner matrix.
In a further preferred unmanned aerial vehicle panoramic image stitching method based on an unmanned aerial vehicle mother vehicle, the acquisition of the feature matching point pairs comprises:
extracting features of the cylindrical projection image through the corner points;
for the adjacent i-th and (i+1)-th images, taking the yi coordinate of a corner feature of the i-th image as the comparison reference, traversing the corner features of the (i+1)-th image within a square area centered on the yi coordinate with preset side length C4, and recording all corners that meet the condition as candidate feature points;
calculating the similarity distance between the feature point in the i-th image and each corresponding candidate feature point to form a first distance matrix;
and acquiring the two elements dmin and dmin1 with the smallest similarity distances in the first distance matrix, comparing dmin with dmin1, and, if a second preset condition is met, taking the feature point of the i-th image and the candidate feature point of the (i+1)-th image corresponding to that element of the first distance matrix as mutually matched feature points.
In a further preferred unmanned aerial vehicle panoramic image stitching method based on an unmanned aerial vehicle mother vehicle, for the stitching of the i-th image and the adjacent (i+1)-th image, the i-th image is traversed along the y axis by the yj value of the feature coordinate feature_coordinate;
and, if the y coordinate of a corresponding feature point in the first feature coordinate matrix of the (i+1)-th image lies in the range [yj-range, yj+range], the distance between the feature_value corresponding to the feature point coordinate feature_coordinate in the i-th image and the feature_value at the feature point coordinate feature_coordinate of each such feature point in the (i+1)-th image is calculated to form the first distance matrix.
In a further preferred unmanned aerial vehicle panoramic image stitching method based on an unmanned aerial vehicle mother vehicle, the acquisition of mutually matched feature points comprises:
sorting the elements of the first distance matrix and obtaining the minimum value dmin and the second minimum value dmin1 from the sorting result;
if dmin/dmin1 <= C1, storing the feature point coordinates in the i-th image and the corresponding feature point coordinates in the (i+1)-th image to form an element of the first pairing matrix;
and traversing all feature points in turn and obtaining the coordinates of the i-th image that match the (i+1)-th image to form the final first pairing matrix, which stores the coordinates of the mutually matched feature points of the i-th and (i+1)-th images.
In a further preferred unmanned aerial vehicle panoramic image stitching method based on an unmanned aerial vehicle mother vehicle, the optimal matching point pair in the y-axis direction is computed from the feature matching point set of the images to be stitched, and the acquisition of the optimal matching point pair comprises:
acquiring the first pairing matrix, each element of which contains a pair of mutually matched feature points; calculating shifted_value; then taking the first matching point out of every element of the first pairing matrix and subtracting shifted_value from each, obtaining n diff_values, whose set is recorded as difference;
and traversing all elements of the first pairing matrix in turn, finding the mutually matched feature points for which the inpoint count is maximal, and defining them as the optimal matched feature points, the shift_value corresponding to the optimal matched feature points being the best_shift_value sought.
In a further preferred unmanned aerial vehicle panoramic image stitching method based on an unmanned aerial vehicle mother vehicle, the acquisition of shifted_value comprises:
taking the first element out of the first pairing matrix, the element containing the coordinates of a pair of matching points recorded as the first matching point and the second matching point; subtracting the coordinates of the first matching point from the coordinates of the second matching point, the resulting value being recorded as shift_value;
and then taking all elements out of the first pairing matrix, taking all second matching points out of those elements, subtracting shift_value from each, and recording the set of results as shifted_value.
In a further preferred unmanned aerial vehicle panoramic image stitching method based on an unmanned aerial vehicle mother vehicle, finding the mutually matched feature points with the maximal inpoint count at least comprises:
setting inpoint to 0 and best_shift_value to 0; taking an element diff_value out of difference and calculating the sum of squares of all its values; if the sum of squares is smaller than a preset first threshold, increasing inpoint by 1; after all diff_values have been traversed in turn, recording inpoint, and, if the value of inpoint is larger than the preset second threshold, assigning the value of inpoint to the preset second threshold and the corresponding shift_value to best_shift_value;
traversing all elements of the first pairing matrix in turn to obtain the final best_shift_value;
best_shift_value being the coordinate difference obtained by subtracting the first matching point from the second matching point of the optimal matching point pair of the i-th and (i+1)-th images.
In a further preferred unmanned aerial vehicle panoramic image stitching method based on an unmanned aerial vehicle mother vehicle, during panoramic image stitching, the coordinates of adjacent images are calibrated through best_shift_value, the coordinate points of the common area of adjacent images are adjusted so that the common coordinate points lie in the same reference coordinate system, and image fusion is performed on the adjacent images.
In a further preferred unmanned aerial vehicle panoramic image stitching method based on an unmanned aerial vehicle mother vehicle, the stitching and fusing of the cylindrical projection images comprises:
presetting a window of fixed size, traversing the image along the y-axis direction, and stitching and fusing adjacent images according to the value of the abscissa x;
when W/2 - C2 <= x <= W/2 + C2, the pixel value of the corresponding coordinate is calculated by the following formulas:
Proportion = (x - seam + C2)/(2*C2)
I(x, y) = (1 - Proportion)*img2[x, y] + Proportion*img1[x, y]
when x < W/2 - C2, the pixel value of the corresponding coordinate is I(x, y) = img1[x, y];
when x > W/2 + C2, the pixel value of the corresponding coordinate is I(x, y) = img2[x, y];
wherein W is the width of the img1 image, C2 is a preset threshold, img1 and img2 are the images to be stitched, and (x, y) are the coordinates of the image to be stitched.
In a further preferred unmanned aerial vehicle panoramic image stitching method based on an unmanned aerial vehicle mother vehicle, when the unmanned aerial vehicle mother vehicle has established a communication connection with the unmanned aerial vehicle:
when the unmanned aerial vehicle mother vehicle issues a panoramic photographing instruction and sends it to the unmanned aerial vehicle, the unmanned aerial vehicle calls a panoramic photographing program to take panoramic photographs and sends the images to be stitched to the unmanned aerial vehicle mother vehicle frame by frame in stitching order;
after the unmanned aerial vehicle mother vehicle completes stitching of the panoramic image, the geographic position of the stitched panoramic image is marked according to the current GPS information and pose information of the unmanned aerial vehicle, and the panoramic image is sent to a vehicle-mounted Ethernet display screen for display.
Beneficial effects:
1. In the technical scheme provided by the invention, images are shot by the unmanned aerial vehicle and sent to the unmanned aerial vehicle mother vehicle, which stitches the received images into a panoramic image in real time and then sends the panoramic image to the vehicle-mounted Ethernet display screen for display. Thanks to the powerful computing capability of the unmanned aerial vehicle mother vehicle, panoramic image stitching can be completed in real time, which helps the unmanned aerial vehicle carry out target search and tracking tasks.
2. In the technical scheme provided by the invention, the stitching algorithm is improved so that the stitching quality of the panoramic image is improved without increasing the computational complexity.
Drawings
The following drawings are only schematic illustrations and explanations of the present invention, and do not limit the scope of the present invention.
Fig. 1 is a schematic diagram of the architecture for panoramic image stitching between the unmanned aerial vehicle mother vehicle and the unmanned aerial vehicle in an embodiment of the invention.
Fig. 2 is a flowchart of a method by which the unmanned aerial vehicle mother vehicle realizes panoramic image stitching in an embodiment of the invention.
Fig. 3 is a flowchart of a method for adaptively cropping the panoramic image during panoramic image stitching in an embodiment of the invention.
Detailed Description
In order to more clearly understand the technical features, objects and effects herein, embodiments of the present invention will now be described with reference to the accompanying drawings, in which like reference numerals refer to like parts throughout. For the sake of simplicity, the drawings are intended to show the relevant parts of the invention schematically and not to represent the actual structure of the product. In addition, for simplicity and clarity of understanding, only one of the components having the same structure or function is schematically illustrated or labeled in some of the drawings.
As for the control system, functional modules and application programs (APPs) are well known to those skilled in the art and may take any suitable form, either hardware or software; they may be a plurality of discretely arranged functional modules, or a plurality of functional units integrated into one piece of hardware. In its simplest form, the control system may be a controller, such as a combinational logic controller or a microprogrammed controller, provided that it enables the operations described herein. Of course, the control system may also be integrated as different modules into one physical device without departing from the basic principle and scope of the invention.
The term "connected" in the present invention may include direct connection, indirect connection, communication connection, and electrical connection, unless otherwise specified.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, values, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, values, steps, operations, elements, components, and/or groups thereof. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
It should be understood that the term "vehicle" or "vehicular" or other similar terms as used herein generally includes motor vehicles, such as passenger automobiles including Sport Utility Vehicles (SUVs), buses, trucks, various commercial vehicles, watercraft including a variety of boats, ships, aircraft, and the like, and includes hybrid vehicles, electric vehicles, plug-in hybrid electric vehicles, hydrogen-powered vehicles, and other alternative fuel vehicles (e.g., fuels derived from non-petroleum sources). As referred to herein, a hybrid vehicle is a vehicle having two or more power sources, such as both gasoline-powered and electric-powered vehicles.
Further, the controller of the present disclosure may be embodied as a non-transitory computer readable medium containing executable program instructions executed by a processor, controller, or the like. Examples of computer readable media include, but are not limited to, ROM, RAM, Compact Disc (CD)-ROM, magnetic tape, floppy disk, flash drive, smart card, and optical data storage device. The computer readable recording medium can also be distributed over network-coupled computer systems so that the computer readable medium is stored and executed in a distributed fashion, such as by a telematics server or a Controller Area Network (CAN).
The invention provides an unmanned aerial vehicle panoramic image stitching system based on an unmanned aerial vehicle mother vehicle, comprising at least an unmanned aerial vehicle and an unmanned aerial vehicle mother vehicle, wherein the unmanned aerial vehicle mother vehicle is provided with a TSN (Time-Sensitive Networking) gateway and the two communicate through the TSN gateway;
when the unmanned aerial vehicle mother vehicle has established a communication connection with the unmanned aerial vehicle:
when the unmanned aerial vehicle mother vehicle issues a panoramic photographing instruction and sends it to the unmanned aerial vehicle, the unmanned aerial vehicle calls a panoramic photographing program to take panoramic photographs and sends the images to be stitched to the unmanned aerial vehicle mother vehicle frame by frame in stitching order;
after receiving the images, the unmanned aerial vehicle mother vehicle calls a panoramic stitching algorithm to stitch the received images into a panoramic image;
the geographic position of the panoramic image is marked according to the current GPS information and pose information of the unmanned aerial vehicle;
and the panoramic image is sent to a vehicle-mounted Ethernet display screen for display.
The target can then be searched for in the panoramic image by setting the features of the tracked target.
The invention also provides an unmanned aerial vehicle panoramic image stitching method based on an unmanned aerial vehicle mother vehicle, which, as shown in Fig. 2, specifically comprises some or all of the following steps:
acquiring an image shot by the unmanned aerial vehicle;
performing cylindrical projection on the acquired image to obtain a cylindrical projection image;
performing corner detection on the cylindrical projection image to obtain corner points;
performing feature extraction and feature matching on the cylindrical projection image according to the acquired corner points;
stitching the cylindrical projection images according to the extracted feature matches to form a panoramic image;
performing end-to-end correction on the panoramic image;
and cropping the corrected panoramic image to obtain the final panoramic image.
Because the unmanned aerial vehicle flies in the sky, its flight environment is complex, and the influence of wind and light can make the captured images differ greatly; therefore, the images need to be preprocessed before stitching. The preprocessing step precedes the cylindrical projection;
the preprocessing at least comprises: acquiring the camera pose and the unmanned aerial vehicle pose corresponding to each image, and constructing an orthographic projection transformation matrix;
converting the image into the orthographic projection direction through the orthographic projection transformation matrix;
in this embodiment, the orthographic projection direction is defined as the camera of the unmanned aerial vehicle shooting vertically downward.
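A minimal sketch of this reprojection step follows, assuming a pinhole camera model and the standard pure-rotation homography H = K*R*inv(K); the function name, the use of OpenCV/NumPy and the rotation convention are illustrative assumptions rather than details taken from the patent.

```python
import cv2
import numpy as np

def warp_to_nadir(img, K, R):
    """Reproject a pinhole image so the optical axis points vertically downward.

    K: 3x3 camera intrinsic matrix.
    R: 3x3 rotation from the current camera attitude (built from the UAV pose
       and camera pose) to the nadir-looking attitude. For a pure rotation the
       image-to-image mapping is the homography H = K @ R @ inv(K).
    """
    H = K @ R @ np.linalg.inv(K)
    h, w = img.shape[:2]
    # Warp onto a canvas of the same size; uncovered pixels are left black.
    return cv2.warpPerspective(img, H, (w, h), flags=cv2.INTER_LINEAR)
```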
specifically, due to different camera performance differences and different photographing depths of field, the feature point extraction is affected, so that the registration accuracy is reduced. In order to solve the technical problem, in this embodiment, the image is subjected to cylindrical projection, and the specific steps of the cylindrical projection include:
acquiring an image and a focal length corresponding to the image;
constructing a cylindrical projection relational expression, and converting the image into a cylindrical projection image through the cylindrical projection relational expression;
and calculating the accurate position of each pixel point through bilinear interpolation to obtain the projected image.
The projection relations are:
u = f*arctan((x - width/2)/f) + f*arctan(width/(2*f))
v = f*(y - height/2)/sqrt((x - width/2)^2 + f^2) + height/2
wherein u and v are the pixel coordinates of the image after projection, x and y are the pixel coordinates of the image before projection, f is the focal length of the camera, and width and height are respectively the width and height of the image before projection;
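A minimal sketch of the cylindrical warp follows, inverting the projection relations above and delegating the bilinear interpolation step to OpenCV's remap; the function name and the same-size output canvas are assumptions.

```python
import cv2
import numpy as np

def cylindrical_project(img, f):
    """Warp an image onto a cylinder of radius f (focal length in pixels)."""
    height, width = img.shape[:2]
    cx, cy = width / 2.0, height / 2.0
    # For every output pixel (u, v), recover the source pixel (x, y) by
    # inverting u = f*atan((x - cx)/f) + f*atan(width/(2f)) and
    # v = f*(y - cy)/sqrt((x - cx)^2 + f^2) + cy.
    u, v = np.meshgrid(np.arange(width), np.arange(height))
    offset = f * np.arctan(width / (2.0 * f))
    theta = (u - offset) / f
    x = f * np.tan(theta) + cx
    y = (v - cy) / np.cos(theta) + cy          # 1/cos(theta) = sqrt(tan^2+1)
    # Bilinear interpolation at the exact (sub-pixel) source positions;
    # positions falling outside the source image become black border pixels.
    return cv2.remap(img, x.astype(np.float32), y.astype(np.float32),
                     interpolation=cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_CONSTANT, borderValue=0)
```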
specifically, after cylindrical projection, based on the imaging characteristics of the unmanned aerial vehicle, the applicant finds that more outlier points can be generated on the edge of the unmanned aerial vehicle, and the outlier points can influence the detection of the following corner points. Therefore, the cylindrical projection image is subjected to binarization processing through the self-adaptive threshold value, contour detection is carried out, contour coordinates are obtained, a new cylindrical projection image is obtained according to the contour coordinates, and points in an outlier range are eliminated.
The method specifically comprises the following steps:
carrying out binarization on the cylindrical projection image to obtain a binarization image;
carrying out contour detection on the binary image according to a self-adaptive threshold value to obtain a detected contour coordinate;
and intercepting the area image surrounded by the contour coordinates in the cylindrical projection image according to the contour coordinates to form a new cylindrical projection image.
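A minimal sketch of this cropping step follows. Otsu's method stands in for the unspecified adaptive threshold, and the largest external contour is taken as the valid image region; both choices are assumptions.

```python
import cv2

def crop_to_largest_contour(cyl_img):
    """Binarize the warped image and crop it to the bounding box of its content."""
    gray = cv2.cvtColor(cyl_img, cv2.COLOR_BGR2GRAY)
    # Otsu's threshold as a stand-in for the patent's adaptive threshold.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return cyl_img
    # Keep the region enclosed by the largest detected contour.
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    return cyl_img[y:y + h, x:x + w]
```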
In the prior art, SIFT and SURF algorithms are mainly adopted for feature extraction in image stitching; although these algorithms achieve good results, their computational cost is particularly large, and a real-time panoramic image cannot be obtained. To solve this technical problem, this embodiment adopts the following steps without reducing the stitching quality.
The specific steps are as follows:
if the image is a color image, converting it into a grayscale image;
calculating the gradient of the grayscale image at each point in x and y with a Sobel or Canny gradient operator to obtain Ix and Iy of each point in the x and y directions;
obtaining the intensity factors in the x, y and xy directions, defined as Dx, Dy and Dxy;
wherein Dx = Ix^2, Dy = Iy^2, Dxy = Ix*Iy;
filtering the obtained intensity factors Dx, Dy and Dxy;
in the prior art, filtering adopts mean filtering, median filtering or Gaussian filtering; although these have a denoising effect, noise still remains. To remove the noise thoroughly and reduce interference, this embodiment improves the conventional filtering as follows:
calculating the sum of all values in a preset window area, for example a 5*5 area; if the sum exceeds 255, the value at the center point is set to 255; traversing the whole image in this way yields the filtered intensity factors Dx', Dy' and Dxy';
for each point in the image, constructing the matrix
M = [Dx' Dxy'; Dxy' Dy']
and obtaining R = det(M) - k*trace(M)^2; points where R is greater than a preset threshold are stored as corner points, R being the corner feature value.
The coordinates and feature values of all corner points are acquired to form the first corner matrix.
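A minimal sketch of this corner-response computation follows, assuming Sobel gradients and implementing the saturating window filter described above; the Harris constant k, the window size and the response threshold are illustrative values.

```python
import cv2
import numpy as np

def first_corner_matrix(gray, k=0.04, win=5, thresh=10000.0):
    """Corner response R = det(M) - k*trace(M)^2 from saturating-filtered factors."""
    g = gray.astype(np.float32)
    Ix = cv2.Sobel(g, cv2.CV_32F, 1, 0, ksize=3)
    Iy = cv2.Sobel(g, cv2.CV_32F, 0, 1, ksize=3)
    Dx, Dy, Dxy = Ix * Ix, Iy * Iy, Ix * Iy

    def saturating_filter(D):
        # Sum D over a win x win window; where the sum exceeds 255 the centre
        # value is clamped to 255, otherwise the original value is kept.
        s = cv2.boxFilter(D, -1, (win, win), normalize=False)
        return np.where(s > 255, 255.0, D)

    Dxp, Dyp, Dxyp = map(saturating_filter, (Dx, Dy, Dxy))
    R = (Dxp * Dyp - Dxyp * Dxyp) - k * (Dxp + Dyp) ** 2
    ys, xs = np.nonzero(R > thresh)
    # Rows of (x, y, feature value) for every candidate corner, plus the map R.
    return np.stack([xs, ys, R[ys, xs]], axis=1), R
```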
Specifically, corner detection yields many corners, but not every corner is a real corner, and corner positions still contain errors. To improve the precision and obtain real corners, this embodiment provides a solution that specifically comprises:
comparing the corner feature values in the first corner matrix with a preset threshold, keeping all corners greater than the preset threshold and setting the feature values of corners below the preset threshold to 0; setting the feature values of corners within a preset edge area to 0 to form the second corner matrix;
setting a first window of fixed size, traversing the second corner matrix with the first window, and, if the sum of the corner feature values at all coordinates of a window area is not 0, acquiring the coordinate of the maximum corner feature value in that window area;
taking the corner with the acquired maximum corner feature value as the center of an area enclosed by the first window, acquiring the coordinate feature_coordinate of the enclosed area, extracting the pixel values of the cylindrical projection image at the corresponding coordinates, and recording them as feature_value;
and assembling all feature_coordinates and feature_values into the corresponding first feature coordinate matrix and first feature value matrix.
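A minimal sketch of the threshold filtering and window traversal follows, reusing the response map R from the previous sketch; the window size, edge-area width and threshold are illustrative.

```python
import numpy as np

def second_corner_matrix(R, cyl_img, win=9, border=16, thresh=10000.0):
    """Keep one corner per window and record the patch around it as feature_value."""
    R = R.copy()
    R[R < thresh] = 0                        # drop corners below the preset threshold
    R[:border, :], R[-border:, :] = 0, 0     # zero corners in the preset edge area
    R[:, :border], R[:, -border:] = 0, 0
    feature_coordinate, feature_value = [], []
    h, w = R.shape
    half = win // 2
    for y0 in range(0, h - win + 1, win):
        for x0 in range(0, w - win + 1, win):
            window = R[y0:y0 + win, x0:x0 + win]
            if window.sum() == 0:            # no surviving corner in this window
                continue
            dy, dx = np.unravel_index(int(window.argmax()), window.shape)
            cy, cx = y0 + dy, x0 + dx        # corner with the maximum feature value
            patch = cyl_img[cy - half:cy + half + 1, cx - half:cx + half + 1]
            feature_coordinate.append((cx, cy))
            feature_value.append(patch.astype(np.float32).ravel())
    return feature_coordinate, feature_value
```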
Specifically, once the first feature coordinate matrix and the first feature value matrix have been acquired, the pairing of feature points in adjacent images must be obtained. For example, if the feature coordinates in the first image are (a1, b1), (a2, b2), (a3, b3) and the feature coordinates in the second image are (c1, d1), (c2, d2), (c3, d3), (c4, d4), it is necessary to determine which coordinates of the first image pair with which feature coordinates of the second image.
Specifically, the following pairing scheme is adopted in this embodiment:
to reduce the amount of calculation, this embodiment considers only the y direction, comparing the feature coordinates of adjacent images through the similarity of all feature values of the corresponding areas;
for the stitching of the i-th image and the adjacent (i+1)-th image, the i-th image is traversed along the y axis by the yj value of feature_coordinate, corresponding to the range [yj-range, yj+range] in the (i+1)-th image;
if the y coordinate of a corresponding feature point in the first feature coordinate matrix of the (i+1)-th image lies in the range [yj-range, yj+range], the distance between the feature_value corresponding to the feature point coordinate feature_coordinate in the i-th image and the feature_value of each such feature point in the (i+1)-th image is calculated to form the first distance matrix;
the elements of the first distance matrix are sorted, and the minimum value dmin and the second minimum value dmin1 are obtained from the sorting result; the value of C1 ranges from 0.4 to 0.7;
when dmin/dmin1 <= C1, the feature point coordinates in the i-th image and the corresponding feature point coordinates in the (i+1)-th image are stored to form an element of the first pairing matrix;
all feature points are traversed in turn, and all coordinates of the i-th image that match the (i+1)-th image are obtained to form the final first pairing matrix;
the first pairing matrix stores the coordinates of the mutually matched feature points of the i-th and (i+1)-th images.
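A minimal sketch of the y-band candidate search and ratio test follows; comparing feature_value patches by Euclidean distance is an assumption, since the patent says only "similarity distance".

```python
import numpy as np

def match_features(coords_i, values_i, coords_j, values_j, y_range=20, C1=0.5):
    """Build the first pairing matrix between image i and image i+1."""
    pairing = []
    for (xj, yj), v in zip(coords_i, values_i):
        # Candidates: corners of image i+1 whose y lies in [yj - range, yj + range].
        cand = [(float(np.linalg.norm(v - vj)), cj)
                for cj, vj in zip(coords_j, values_j)
                if abs(cj[1] - yj) <= y_range and vj.shape == v.shape]
        if len(cand) < 2:
            continue
        cand.sort(key=lambda t: t[0])          # sorted first distance matrix
        dmin, dmin1 = cand[0][0], cand[1][0]
        if dmin1 > 0 and dmin / dmin1 <= C1:   # ratio test, C1 in [0.4, 0.7]
            pairing.append(((xj, yj), cand[0][1]))
    return pairing  # [((x_i, y_i), (x_{i+1}, y_{i+1})), ...]
```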
specifically, the initial feature points of this embodiment are paired with corner points, which is different from the method for extracting feature points of SIFT in the prior art, where there are a plurality of corner points, theoretically, the paired points can be perfectly matched, but in the actual process, there are noise and calculation errors, which cause the matching of each point to be different, and the minimum point of the mean error used in the method in the prior art is treated as the optimal matching. However, this method requires many iterations, and is computationally expensive. In order to solve this problem, the following method is adopted in this embodiment:
obtaining a first element in a first pairing matrix, taking out the first element from the first pairing matrix, wherein the first element comprises a pair of matching points and is marked as a first matching point and a second matching point, subtracting the coordinate of the first matching point from the coordinate of the second matching point in the matching point pair, and marking the obtained value as shift _ value;
then all elements are taken out from the first pairing matrix, shift _ value is respectively subtracted from the coordinates of all second matching points taken out from all elements, and an obtained result set is recorded as shifted _ value; then all the first matching points are taken out from all the elements, shifted _ values are subtracted respectively, n diff _ values are obtained respectively, and the set of n different diff _ values is recorded as difference;
setting endpoint to be 0, and setting best _ shift to be 0, taking out an element from difference, wherein each element comprises a diff _ value, the diff _ value comprises a plurality of values, calculating the sum of squares of the diff _ value and all values, if the sum of squares is smaller than a threshold value, increasing 1 to the endpoint, after all diff-values are completed in sequence, recording the endpoint, if the value of the endpoint is larger than a preset second threshold value, assigning the value of the endpoint to the preset second threshold value, and assigning the value of the corresponding shift-value to be best _ shift _ value;
and sequentially traversing all elements of the first pairing matrix, and repeating the calculation to obtain the final best _ shift _ value.
And best _ shift _ value is the coordinate difference value obtained by subtracting the first matching point from the second matching point in the optimal matching point pair in the ith image and the (i + 1) th image.
Specifically, when the optimal matching point pair is obtained, the best _ shift _ value is obtained through the optimal matching point to splice the images;
specifically, there is a best _ shift _ value between adjacent images, and if there are 6 spliced images, there are 5 best _ shift _ values accordingly.
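A minimal sketch of this consensus search over candidate translations follows (in effect a translation-only variant of RANSAC in which every pair proposes a hypothesis); the inlier threshold is illustrative.

```python
import numpy as np

def best_shift(pairing, first_thresh=25.0):
    """Return the shift_value agreed on by the largest number of matched pairs."""
    p1 = np.array([a for a, _ in pairing], dtype=np.float64)  # first matching points
    p2 = np.array([b for _, b in pairing], dtype=np.float64)  # second matching points
    best_inpoint, best_shift_value = -1, None
    for shift_value in p2 - p1:               # every pair proposes a candidate shift
        diff_value = (p2 - shift_value) - p1  # residual of all pairs under that shift
        inpoint = int(np.sum(np.sum(diff_value ** 2, axis=1) < first_thresh))
        if inpoint > best_inpoint:            # keep the shift with the most inliers
            best_inpoint, best_shift_value = inpoint, shift_value
    return best_shift_value
```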
During stitching, the coordinates of adjacent images are calibrated through best_shift_value, the coordinate points of the common area of adjacent images are adjusted into a unified coordinate system, and image fusion is then performed on the adjacent images;
for image fusion, the prior art generally adopts weighted-mean fusion according to the distance of each pixel point from the boundary line; this is a rather simple fusion mode, and the image formed after fusion suffers from serious color differences. To solve the color difference without increasing the computational complexity, this embodiment provides an improved method, which specifically comprises:
presetting a window of fixed size, traversing the image along the y-axis direction, and stitching and fusing adjacent images according to the value of the abscissa x;
when W/2 - C2 <= x <= W/2 + C2, the pixel value of the corresponding coordinate is calculated by the following formulas:
Proportion = (x - seam + C2)/(2*C2)
I(x, y) = (1 - Proportion)*img2[x, y] + Proportion*img1[x, y]
when x < W/2 - C2, the pixel value of the corresponding coordinate is I(x, y) = img1[x, y];
when x > W/2 + C2, the pixel value of the corresponding coordinate is I(x, y) = img2[x, y];
wherein W is the width of the img1 image, C2 is a preset threshold, img1 and img2 are the images to be stitched, and (x, y) are the coordinates of the image to be stitched.
Specifically, adjacent images are stitched and fused in turn to form the panoramic image.
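A minimal sketch of the linear blending follows, assuming the seam lies at W/2 and orienting the blending proportion so that img1 fills the region x < W/2 - C2 and img2 the region x > W/2 + C2, consistent with the piecewise cases above; colour (H x W x 3) inputs are assumed.

```python
import numpy as np

def fuse_pair(img1, img2, C2=16):
    """Blend two aligned images linearly across the band [W/2 - C2, W/2 + C2]."""
    H, W = img1.shape[:2]
    seam = W // 2                              # assumed seam position
    x = np.arange(W, dtype=np.float32)
    # Proportion falls from 1 to 0 across the band, so img1 dominates on the left;
    # clipping extends the piecewise cases outside the band.
    proportion = np.clip((seam + C2 - x) / (2.0 * C2), 0.0, 1.0)
    proportion = proportion[None, :, None]     # broadcast over rows and channels
    out = (proportion * img1.astype(np.float32)
           + (1.0 - proportion) * img2.astype(np.float32))
    return out.astype(img1.dtype)
```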
Because the panorama is synthesized from multiple images, accumulated errors arise; and since only the y-direction similarity is considered during synthesis in this embodiment, the error accumulated over many images in the y direction grows too large and an artifact appears. To eliminate the artifact, this embodiment corrects the stitched panoramic image, specifically comprising:
acquiring the best_shift_values of adjacent images among all stitched images, and accumulating the best_shift_values of all stitched images in the x and y directions to obtain sum_x and sum_y;
if sum_y is not equal to 0, then, taking 0 as the starting point and the absolute value of sum_y as the end point, a number of equally spaced adjustment values Zqi equal to the width Wq of the panoramic image is obtained, i.e. Wq adjustment values are obtained:
Zqi = i*(abs(sum_y) - 0)/Wq
then each column of pixels in the x direction is cyclically shifted by Zqi pixels along the y direction; the columns of the panoramic image are traversed in turn to obtain the artifact-corrected panoramic image.
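A minimal sketch of this drift correction follows, rolling each pixel column by a linearly growing amount; the roll direction relative to the sign of sum_y is an assumption.

```python
import numpy as np

def correct_end_to_end_drift(pano, sum_y):
    """Spread the accumulated vertical offset sum_y linearly across the width."""
    Wq = pano.shape[1]
    out = pano.copy()
    for i in range(Wq):
        Zqi = int(i * abs(sum_y) / Wq)        # Zqi = i * (|sum_y| - 0) / Wq
        # Roll the column against the drift direction (sign convention assumed).
        out[:, i] = np.roll(pano[:, i], -int(np.sign(sum_y)) * Zqi, axis=0)
    return out
```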
Specifically, because multiple images are used for stitching, the cylindrical projection and the edge adjustment strategy used in this implementation produce irregular black borders at the upper and lower edges of the image along the y-axis direction. To remove the black borders while keeping the stitched image as complete as possible, this embodiment provides an adaptive irregular-black-border removal method, shown in Fig. 3, which specifically comprises:
converting the obtained panoramic image into a grayscale image; binarizing the grayscale image with a preset comparison threshold C3;
when a pixel value of the grayscale image is larger than C3, setting it to 255, and when it is smaller than C3, setting it to 0, to obtain the binary image;
specifically, C3 can be set to a number in the range 2-10;
setting an adaptive comparison reference threshold TC = (width of the panoramic image)/L, where L is 50-200;
traversing the binary image from top to bottom along the y axis; when, at a given yi coordinate, the number of coordinate points with pixel value 0 among all (x, yi) of the binary image is less than TC, taking yi as the upper boundary top_border of the image, x ranging over [0, width of the panoramic image];
traversing the binary image from bottom to top along the y axis; when, at a given yi coordinate, the number of coordinate points with pixel value 0 among all (x, yi) of the binary image is less than TC, obtaining the lower boundary bottom_border of the image, x ranging over [0, width of the panoramic image];
and cropping the corresponding image from the panoramic image through the acquired upper and lower boundaries to obtain the final panoramic image.
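A minimal sketch of the adaptive black-border removal follows; the default C3 and L values sit inside the ranges given above.

```python
import cv2
import numpy as np

def crop_black_borders(pano, C3=5, L=100):
    """Crop the panorama to the first/last rows containing almost no black pixels."""
    gray = cv2.cvtColor(pano, cv2.COLOR_BGR2GRAY)
    binary = np.where(gray > C3, 255, 0)          # binarize with comparison value C3
    TC = pano.shape[1] / L                        # adaptive threshold: width / L
    zeros_per_row = np.sum(binary == 0, axis=1)   # black-pixel count of each row y
    valid_rows = np.nonzero(zeros_per_row < TC)[0]
    top_border, bottom_border = valid_rows[0], valid_rows[-1]
    return pano[top_border:bottom_border + 1, :]
```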
What has been described above is only a preferred embodiment of the present invention, and the present invention is not limited to the above examples. It will be apparent to those skilled in the art that the form of this embodiment is not restrictive, and the manner of adaptation is likewise not limited thereto. Other modifications and variations directly derivable or suggested to one skilled in the art without departing from the basic idea of the present invention are to be considered within the scope of protection of the present invention.

Claims (15)

1. An unmanned aerial vehicle panoramic image stitching method based on an unmanned aerial vehicle mother vehicle, characterized by comprising the following steps: the unmanned aerial vehicle mother vehicle acquires an image shot by the unmanned aerial vehicle;
performing cylindrical projection on the acquired image to obtain a cylindrical projection image; performing corner detection on the cylindrical projection image to obtain corner points; performing feature extraction and feature matching on the cylindrical projection image according to the obtained corner points to obtain feature matching point pairs of the images to be stitched;
stitching and fusing the cylindrical projection images according to the extracted feature matching points to form a panoramic image;
binarizing the cylindrical projection image to obtain a binarized image;
performing contour detection on the binarized image according to an adaptive threshold to obtain detected contour coordinates;
cropping, from the cylindrical projection image, the area enclosed by the contour coordinates to form a new cylindrical projection image;
and acquiring optimal matching point pairs from the feature matching point pairs, and performing image stitching through the optimal matching point pairs.
2. The unmanned aerial vehicle panoramic image stitching method based on an unmanned aerial vehicle mother vehicle according to claim 1, wherein the stitched panoramic image is corrected end to end, and the corrected panoramic image is cropped to obtain the final panoramic image.
3. The unmanned aerial vehicle panoramic image stitching method based on an unmanned aerial vehicle mother vehicle according to claim 1, wherein the image is preprocessed before the cylindrical projection is performed;
the preprocessing at least comprises: acquiring the camera pose and the unmanned aerial vehicle pose corresponding to each image, and constructing an orthographic projection transformation matrix; and converting the image into the orthographic projection direction through the orthographic projection transformation matrix.
4. The unmanned aerial vehicle panoramic image stitching method based on an unmanned aerial vehicle mother vehicle according to claim 1, wherein the corner detection at least comprises:
acquiring the coordinates and feature values of all corner points to form a first corner matrix;
setting a filtering condition, filtering the first corner matrix, and deleting corners that do not meet the condition to form a second corner matrix;
setting a first window of fixed size, traversing the second corner matrix with the first window, and, if the sum of the corner feature values at all coordinates of a window area is not 0, acquiring the coordinate of the corner with the maximum feature value in that window area;
and taking the corner with the maximum feature value as the center of an area enclosed by the first window, acquiring the coordinate feature_coordinate of the enclosed area, and extracting the pixel values of the corresponding area from the cylindrical projection image through those coordinates, recorded as feature_value.
5. The unmanned aerial vehicle panoramic image stitching method based on an unmanned aerial vehicle mother vehicle according to claim 4, wherein the acquisition of the first corner matrix at least comprises:
obtaining intensity factors Dx, Dy and Dxy in the x direction, the y direction and the xy direction;
filtering the obtained intensity factors Dx, Dy and Dxy according to a preset rule;
the preset-rule filtering comprises: within a preset window area, calculating the sum of Dx (or Dy, or Dxy) over all points in the window area, and, if the sum exceeds 255, setting Dx (or Dy, or Dxy) at the center point of the window area to 255; traversing the whole image in turn to obtain the filtered intensity factors Dx', Dy' and Dxy';
and calculating the feature value of each candidate corner from the intensity factors, and, when the feature value of a candidate corner is greater than a preset threshold, recording the coordinates and feature value of the corner to form the first corner matrix.
6. The unmanned aerial vehicle panoramic image stitching method based on an unmanned aerial vehicle mother vehicle according to claim 4, wherein the condition for filtering the first corner matrix comprises:
comparing the corner feature values in the first corner matrix with a preset threshold, keeping all corners greater than the preset threshold and setting the feature values of corners below the preset threshold to 0; and setting the feature values of corners within a preset edge area to 0 to form the second corner matrix.
7. The unmanned aerial vehicle panoramic image stitching method based on an unmanned aerial vehicle mother vehicle according to claim 1, wherein the obtaining of the feature matching point pairs comprises:
extracting features of the cylindrical projection image through the corner points;
for the adjacent i-th and (i+1)-th images, taking the yi coordinate of a corner feature of the i-th image as the comparison reference, traversing the corner features of the (i+1)-th image within a square area centered on the yi coordinate with preset side length C4, and recording all corners that meet the condition as candidate feature points;
calculating the similarity distance between the feature point in the i-th image and each corresponding candidate feature point to form a first distance matrix;
and acquiring the two elements dmin and dmin1 with the smallest similarity distances in the first distance matrix, comparing dmin with dmin1, and, if a second preset condition is met, determining that the feature point of the i-th image and the candidate feature point of the (i+1)-th image corresponding to that element of the first distance matrix match each other.
8. The unmanned aerial vehicle panoramic image stitching method based on an unmanned aerial vehicle mother vehicle according to claim 7, wherein, for the stitching of the i-th image and the adjacent (i+1)-th image, the i-th image is traversed along the y axis by the yj value of the feature coordinate feature_coordinate;
and, if the y coordinate of a corresponding feature point in the first feature coordinate matrix of the (i+1)-th image lies in the range [yj-range, yj+range], the distance between the feature_value corresponding to the feature point coordinate feature_coordinate in the i-th image and the feature_value at the feature point coordinate feature_coordinate of each such feature point in the (i+1)-th image is calculated to form the first distance matrix.
9. The unmanned aerial vehicle panoramic image stitching method based on an unmanned aerial vehicle mother vehicle according to claim 7, wherein the obtaining of mutually matched feature points comprises:
sorting the elements of the first distance matrix and obtaining the minimum value dmin and the second minimum value dmin1 from the sorting result;
if dmin/dmin1 <= C1, storing the feature point coordinates in the i-th image and the corresponding feature point coordinates in the (i+1)-th image to form an element of the first pairing matrix;
and traversing all feature points in turn and obtaining the coordinates of the i-th image that match the (i+1)-th image to form the final first pairing matrix, which stores the coordinates of the mutually matched feature points of the i-th and (i+1)-th images.
10. The unmanned aerial vehicle panoramic image stitching method based on an unmanned aerial vehicle mother vehicle according to claim 1, wherein the optimal matching point pair in the y-axis direction is computed from the feature matching point set of the images to be stitched, and the obtaining of the optimal matching point pair comprises:
acquiring the first pairing matrix, each element of which contains a pair of mutually matched feature points; calculating shifted_value; then taking the first matching point out of every element of the first pairing matrix and subtracting shifted_value from each, obtaining n diff_values, whose set is recorded as difference;
and traversing all elements of the first pairing matrix in turn, finding the mutually matched feature points for which the inpoint count is maximal, and defining them as the optimal matched feature points, the shift_value corresponding to the optimal matched feature points being the best_shift_value sought.
11. The unmanned aerial vehicle panoramic image stitching method based on an unmanned aerial vehicle mother vehicle according to claim 10, wherein the obtaining of shifted_value comprises:
taking the first element out of the first pairing matrix, the element containing the coordinates of a pair of matching points recorded as the first matching point and the second matching point; subtracting the coordinates of the first matching point from the coordinates of the second matching point, the resulting value being recorded as shift_value;
and then taking all elements out of the first pairing matrix, taking all second matching points out of those elements, subtracting shift_value from each, and recording the set of results as shifted_value.
12. The unmanned aerial vehicle panoramic image stitching method based on an unmanned aerial vehicle mother vehicle according to claim 10, wherein finding the mutually matched feature points with the maximal inpoint count at least comprises:
setting inpoint to 0 and best_shift_value to 0; taking an element diff_value out of difference and calculating the sum of squares of all its values; if the sum of squares is smaller than a preset first threshold, increasing inpoint by 1; after all diff_values have been traversed in turn, recording inpoint, and, if the value of inpoint is larger than the preset second threshold, assigning the value of inpoint to the preset second threshold and the corresponding shift_value to best_shift_value;
traversing all elements of the first pairing matrix in turn to obtain the final best_shift_value;
and best_shift_value is the coordinate difference obtained by subtracting the first matching point from the second matching point of the optimal matching point pair of the i-th and (i+1)-th images.
13. The unmanned aerial vehicle panoramic image stitching method based on an unmanned aerial vehicle mother vehicle according to claim 10, wherein, during panoramic image stitching, the coordinates of adjacent images are calibrated through best_shift_value, the coordinate points of the common area of adjacent images are adjusted so that the common coordinate points lie in the same reference coordinate system, and image fusion is performed on the adjacent images.
14. The unmanned aerial vehicle panoramic image stitching method based on an unmanned aerial vehicle mother vehicle according to claim 1, wherein the stitching and fusing of the cylindrical projection images comprises:
presetting a window of fixed size, traversing the image along the y-axis direction, and stitching and fusing adjacent images according to the value of the abscissa x;
when W/2 - C2 <= x <= W/2 + C2, the pixel value of the corresponding coordinate is calculated by the following formulas:
Proportion = (x - seam + C2)/(2*C2)
I(x, y) = (1 - Proportion)*img2[x, y] + Proportion*img1[x, y]
when x < W/2 - C2, the pixel value of the corresponding coordinate is I(x, y) = img1[x, y];
when x > W/2 + C2, the pixel value of the corresponding coordinate is I(x, y) = img2[x, y];
wherein W is the width of the img1 image, C2 is a preset threshold, img1 and img2 are the images to be stitched, and (x, y) are the coordinates of the image to be stitched.
15. The unmanned aerial vehicle panoramic image stitching method based on an unmanned aerial vehicle mother vehicle according to claim 1, wherein, when the unmanned aerial vehicle mother vehicle has established a communication connection with the unmanned aerial vehicle:
when the unmanned aerial vehicle mother vehicle issues a panoramic photographing instruction and sends it to the unmanned aerial vehicle, the unmanned aerial vehicle calls a panoramic photographing program to take panoramic photographs and sends the images to be stitched to the unmanned aerial vehicle mother vehicle frame by frame in stitching order;
and, after the unmanned aerial vehicle mother vehicle completes stitching of the panoramic image, the geographic position of the stitched panoramic image is marked according to the current GPS information and pose information of the unmanned aerial vehicle, and the panoramic image is sent to a vehicle-mounted Ethernet display screen for display.
CN202210423537.3A 2022-04-21 2022-04-21 Unmanned aerial vehicle panoramic image splicing method based on unmanned aerial vehicle mother vehicle Pending CN114926332A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210423537.3A CN114926332A (en) 2022-04-21 2022-04-21 Unmanned aerial vehicle panoramic image splicing method based on unmanned aerial vehicle mother vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210423537.3A CN114926332A (en) 2022-04-21 2022-04-21 Unmanned aerial vehicle panoramic image splicing method based on unmanned aerial vehicle mother vehicle

Publications (1)

Publication Number Publication Date
CN114926332A (en) 2022-08-19

Family

ID=82807318

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210423537.3A Pending CN114926332A (en) 2022-04-21 2022-04-21 Unmanned aerial vehicle panoramic image splicing method based on unmanned aerial vehicle mother vehicle

Country Status (1)

Country Link
CN (1) CN114926332A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115294482A (en) * 2022-09-26 2022-11-04 山东常生源生物科技股份有限公司 Edible fungus yield estimation method based on unmanned aerial vehicle remote sensing image
CN115294482B (en) * 2022-09-26 2022-12-20 山东常生源生物科技股份有限公司 Edible fungus yield estimation method based on unmanned aerial vehicle remote sensing image

Similar Documents

Publication Publication Date Title
CN110569704B (en) Multi-strategy self-adaptive lane line detection method based on stereoscopic vision
CN111563415B (en) Binocular vision-based three-dimensional target detection system and method
CN107844750A (en) A kind of water surface panoramic picture target detection recognition methods
CN112396650A (en) Target ranging system and method based on fusion of image and laser radar
CN111882612A (en) Vehicle multi-scale positioning method based on three-dimensional laser detection lane line
CN111178236A (en) Parking space detection method based on deep learning
CN105825173A (en) Universal road and lane detection system and method
CN110119679B (en) Object three-dimensional information estimation method and device, computer equipment and storage medium
US11887336B2 (en) Method for estimating a relative position of an object in the surroundings of a vehicle and electronic control unit for a vehicle and vehicle
CN109544635B (en) Camera automatic calibration method based on enumeration heuristic
CN111738033B (en) Vehicle driving information determination method and device based on plane segmentation and vehicle-mounted terminal
CN111768332A (en) Splicing method of vehicle-mounted all-around real-time 3D panoramic image and image acquisition device
CN110060259A (en) A kind of fish eye lens effective coverage extracting method based on Hough transformation
CN113327296B (en) Laser radar and camera online combined calibration method based on depth weighting
CN111553845A (en) Rapid image splicing method based on optimized three-dimensional reconstruction
CN114719873B (en) Low-cost fine map automatic generation method and device and readable medium
CN110738668B (en) Method and system for intelligently controlling high beam and vehicle
CN106780309A (en) A kind of diameter radar image joining method
CN116978009A (en) Dynamic object filtering method based on 4D millimeter wave radar
CN115222884A (en) Space object analysis and modeling optimization method based on artificial intelligence
CN114926332A (en) Unmanned aerial vehicle panoramic image splicing method based on unmanned aerial vehicle mother vehicle
JP2002175534A (en) Method for detecting road white line
JP2022152922A (en) Electronic apparatus, movable body, imaging apparatus, and control method for electronic apparatus, program, and storage medium
CN114926331A (en) Panoramic image splicing method applied to vehicle
Geiger Monocular road mosaicing for urban environments

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination