CN115393196B - Infrared multi-sequence image seamless splicing method for unmanned aerial vehicle area array swinging - Google Patents
- Publication number: CN115393196B
- Application number: CN202211306562.XA
- Authority: CN (China)
- Prior art keywords: image, images, point, spliced, points
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
- G06T5/40—Image enhancement or restoration using histogram techniques
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features
- G06T2207/10048—Infrared image (image acquisition modality)
Abstract
The invention discloses a seamless stitching method for infrared multi-sequence images from unmanned aerial vehicle area-array sweep scanning. The method stretches the gray levels of the infrared images and extracts feature points; establishes a vector coordinate relation from the original coordinates; matches the feature points of images with overlapping areas and computes central control points from the combined coordinates; partitions each image along the course and wingspan directions and computes the coordinate offset of each region; determines the offset of every pixel from its distance to the center point and its angles to the four directions perpendicular to the overlap regions; and computes the cost and value matrices of adjacent images, solving a weighted optimal suture line to achieve seamless stitching. The method accounts for the characteristics of aerial infrared multi-sequence images from area-array sweep scanning, deforms the images using the feature points and the central control points, and takes the position of minimum difference in the overlap region as the suture line, achieving fast seamless stitching. The implementation is simple and efficient, deformation distortion is small, and stitching precision is high.
Description
Technical Field
The invention relates to the field of image stitching, and in particular to a seamless stitching method for infrared multi-sequence images acquired by unmanned aerial vehicle area-array sweep scanning.
Background
Aerial infrared images are formed by carrying an infrared sensor on an aviation platform and remotely recording the infrared energy radiated and reflected by ground objects, thereby acquiring their radiation and temperature information. Compared with visible-light images, infrared imaging works around the clock and can be applied in fields such as natural disasters, environmental pollution, and battlefield reconnaissance. Image stitching is a key technology in aerial image processing: a group of images with overlapping areas are registered and fused into a panoramic image with a wider field of view, and the stitching precision directly affects downstream applications. However, infrared images have low signal-to-noise ratio and low contrast, so ground targets are indistinct, spatial correlation is high, feature point extraction is difficult, and registration errors between homologous points are large. Moreover, infrared images from UAV area-array sweep scanning exhibit large parallax: the images are acquired from multiple viewpoints over undulating terrain and therefore do not satisfy the single-point perspective assumption.
Existing image stitching techniques are mainly based on image matching in the overlap area and can be divided into gray-level-based, transform-domain-based and feature-based methods. The feature-based methods obtain matched feature point pairs in the overlap area and apply projective or affine transformation via a homography matrix. However, these methods are sensitive to illumination, rotation, blur and noise, demand a high degree of overlap, and require a global alignment strategy for stitching multiple images, making them computationally complex and time-consuming. They are therefore ill-suited to infrared multi-sequence images from UAV area-array sweep scanning, where matched feature points are few, the mismatch rate is high, and the stitching result is poor.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a registration, deformation and seam-stitching method for infrared multi-sequence images from UAV area-array sweep scanning, to solve the technical problem that conventional feature point matching with homography transformation stitches such images poorly.
The purpose of the invention is realized by the following technical scheme: a seamless stitching method for infrared multi-sequence images from unmanned aerial vehicle area-array sweep scanning, comprising the following steps:
(1) Preprocessing an image to be spliced, wherein the preprocessing is to perform gray stretching on an infrared image and enhance the image contrast, and the image to be spliced is an infrared image swept by an unmanned aerial vehicle area array;
(2) Extracting the feature points in the image in the step (1) by using an SIFT algorithm to obtain scale-invariant feature points in the image;
(3) Establishing a vector coordinate relation according to the minimum coordinate of the image, the image size and the sweep sequence in the step (2);
(4) According to the vector coordinates of the images to be stitched, calculating the serial numbers of the overlapping adjacent images in the course and span directions; matching the feature points of adjacent images pairwise, removing mismatched feature points with the RANSAC algorithm to obtain accurately matched feature point pairs, and taking the midpoints of the corresponding feature point pairs in a local coordinate system as control points;
(5) Dividing the image to be spliced into four areas according to the connecting line of the central point and the vertex of the image to be spliced, wherein the four areas respectively correspond to left and right adjacent images of a course and images in upper and lower adjacent directions in a wingspan direction, respectively calculating coordinate offset between all characteristic points and control points in the four directions, and calculating the average value of coordinate offset of all points in each direction as the offset of the image in the direction;
(6) Fixing the position of the central point of the image to be spliced, determining the offset of the rest pixels according to the distance from the central point and the angles from the four vertical overlapping area directions, and carrying out coordinate transformation on all the pixels according to the respective corresponding offsets to obtain a deformed image; calculating the cost and value matrix of adjacent images, and carrying out weighting to solve the optimal suture line;
(7) Carrying out image stitching of the deformed images obtained in step (6) along the suture lines in the four adjacent directions.
Specifically, the gray stretching in step (1) determines a gray threshold range from the histogram of the image to be stitched and normalizes the gray values into the range 0 to 255.
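As an illustrative sketch of this preprocessing step (the function name and the percentile-based choice of threshold range are assumptions; the patent fixes only "histogram-derived range, normalized to 0-255"):

```python
import numpy as np

def gray_stretch(img, low_pct=1.0, high_pct=99.0):
    """Stretch a (typically high-bit-depth) infrared image to 0-255.

    The low/high percentiles stand in for the histogram-derived gray
    threshold range described in step (1); values outside are clipped.
    """
    lo, hi = np.percentile(img, [low_pct, high_pct])
    out = (img.astype(np.float64) - lo) / max(hi - lo, 1e-9)
    return (np.clip(out, 0.0, 1.0) * 255.0).astype(np.uint8)

# toy 14-bit-style frame; with 0/100 percentiles the full range is used
raw = np.array([[100, 500], [900, 1300]], dtype=np.uint16)
stretched = gray_stretch(raw, 0.0, 100.0)
```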
Further, the control point calculation of step (4) is implemented by the following sub-steps:
(3.1) determining adjacent image groups of the images to be spliced in the course and span directions: the unmanned aerial vehicle acquires images of the whole area according to an S-shaped route, and if the images are the first or last image acquired by the first line of the course, or the first and last images acquired by the last line of the course, the images to be spliced only have 2 adjacent images; if the first image and the last image are acquired from the course middle row, the image to be spliced comprises 3 adjacent images; if the image group to be spliced is the intermediate image obtained by the course intermediate line, the image group to be spliced comprises 4 images;
(3.2) acquiring the feature point coordinates, feature point descriptions, image coordinates and size information of the image to be spliced and the adjacent images;
(3.3) respectively matching the feature points of the image to be stitched and each adjacent image, and removing mismatched points from the preliminary matches with the RANSAC algorithm; if the number Q of matching feature point pairs retained after removal is less than 10, it is considered that no reliable matched feature point pairs exist;
and (3.4) combining the image coordinate information and the coordinates of the feature points relative to the image, endowing the coordinates under a local coordinate system to the reserved correct matching feature points, and taking the middle points of the matching feature points as central control points.
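Sub-steps (3.3) and (3.4) can be sketched as follows. Since adjacent sweep frames are related mainly by a shift, a pure-translation RANSAC is used here in place of the model left unspecified by the text; all names, the iteration count and the tolerance are illustrative:

```python
import numpy as np

def ransac_translation(p1, p2, iters=200, tol=3.0, seed=0):
    """RANSAC outlier rejection for a pure-translation match model.

    Each hypothesis is the shift implied by one candidate pair; the
    hypothesis with the most inliers (residual < tol) wins.
    """
    rng = np.random.default_rng(seed)
    best_mask = np.zeros(len(p1), dtype=bool)
    for _ in range(iters):
        k = rng.integers(len(p1))
        t = p2[k] - p1[k]                         # hypothesised shift
        mask = np.linalg.norm(p1 + t - p2, axis=1) < tol
        if mask.sum() > best_mask.sum():
            best_mask = mask
    return best_mask

# synthetic matches: true shift (10, 5) plus two gross outliers
p1 = np.array([[0, 0], [4, 2], [7, 1], [3, 9], [5, 5]], dtype=float)
p2 = p1 + np.array([10.0, 5.0])
p2[3] += 40.0                                     # mismatched pair
p2[4] -= 25.0                                     # mismatched pair
inliers = ransac_translation(p1, p2)
Q = inliers.sum()                                 # patent keeps the set only if Q >= 10
controls = (p1[inliers] + p2[inliers]) / 2.0      # midpoints -> control points (3.4)
```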
Further, the step (5) is realized by the following sub-steps:
(4.1) determining the midpoint of the images to be spliced and the maximum and minimum coordinates of all images in the image group under a local coordinate system;
(4.2) taking the midpoint of the image to be stitched as the pole and a vertically downward ray as the polar axis, calculating the included angles between the midpoint and the four vertices; the lines connecting the midpoint to the four vertices divide the image into 4 regions. With the x axis pointing down and the y axis pointing right, the angles are

$$\theta_i = \operatorname{atan2}(y_i - y_c,\; x_i - x_c) \bmod 360^\circ, \qquad i = 1, 2, 3, 4,$$

wherein $\theta_1$, $\theta_2$, $\theta_3$, $\theta_4$ are the angles formed by the upper-left, lower-left, lower-right and upper-right vertices with the image midpoint, $(x_c, y_c)$ are the horizontal and vertical coordinates of the image center point, and $(x_1, y_1)$, $(x_2, y_2)$, $(x_3, y_3)$, $(x_4, y_4)$ are those of the upper-left, lower-left, lower-right and upper-right vertices respectively;
(4.3) calculating the coordinate offsets between all matched feature points and control points in each of the four regions and taking the mean as the coordinate offset of that region:

$$\Delta x = \frac{1}{n}\sum_{k=1}^{n}\left(x_k^{c} - x_k^{f}\right), \qquad \Delta y = \frac{1}{n}\sum_{k=1}^{n}\left(y_k^{c} - y_k^{f}\right),$$

wherein $(x_k^{f}, y_k^{f})$ is the $k$-th extracted feature point of the image to be stitched, $(x_k^{c}, y_k^{c})$ is its central control point, $k$ is the feature point index, $n$ is the number of feature point pairs in the region, and $\Delta x$, $\Delta y$ are the average coordinate offsets of all feature point pairs in the $x$ and $y$ directions.
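A minimal sketch of sub-steps (4.2) and (4.3), partitioning matched feature points into the four angular regions and averaging the control-minus-feature offsets per region. The x-down, y-right coordinate convention follows the embodiment; the function names are assumed:

```python
import numpy as np

def vertex_angles(center, verts):
    """Angles (degrees, 0-360) of the four vertices seen from the image
    center, pole at the center, polar axis pointing down (x down, y right)."""
    d = verts - center
    return np.mod(np.degrees(np.arctan2(d[:, 1], d[:, 0])), 360.0)

def region_offsets(center, verts, feats, ctrls):
    """Mean control-minus-feature offset per angular region (sub-step 4.3)."""
    bounds = np.sort(vertex_angles(center, verts))
    d = feats - center
    ang = np.mod(np.degrees(np.arctan2(d[:, 1], d[:, 0])), 360.0)
    region = np.searchsorted(bounds, ang) % 4     # sector index, wraps at 360
    offsets = {}
    for r in range(4):
        sel = region == r
        if sel.any():
            offsets[r] = (ctrls[sel] - feats[sel]).mean(axis=0)
    return offsets

center = np.array([50.0, 50.0])
verts = np.array([[0.0, 0.0], [0.0, 100.0], [100.0, 100.0], [100.0, 0.0]])
feats = center + np.array([[50.0, 0.0], [0.0, 50.0]])   # one point per region
ctrls = feats + np.array([2.0, 3.0])                    # uniform true offset
offsets = region_offsets(center, verts, feats, ctrls)
```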
Further, the coordinate offset calculation for all pixels in the image in step (6) is implemented by the following sub-steps:
(5.1) calculating the bisecting angles of the angles formed between the center point and the four vertices of the image to be stitched:

$$\varphi_{12} = \tfrac{1}{2}(\theta_1 + \theta_2), \quad \varphi_{23} = \tfrac{1}{2}(\theta_2 + \theta_3), \quad \varphi_{34} = \tfrac{1}{2}(\theta_3 + \theta_4), \quad \varphi_{41} = \tfrac{1}{2}(\theta_4 + \theta_1 + 360^\circ) \bmod 360^\circ,$$

wherein $\varphi_{12}$ is the bisecting angle of the included angle between the first and second vertices and the center point, $\varphi_{23}$ that between the second and third vertices, $\varphi_{34}$ that between the third and fourth vertices, and $\varphi_{41}$ that between the fourth and first vertices;
(5.2) calculating the bisector of the overlap area between the image to be stitched and each adjacent image:

$$L_u = \tfrac{1}{2}(d_u + s_u), \quad L_d = \tfrac{1}{2}(d_d + s_d), \quad L_l = \tfrac{1}{2}(d_l + s_l), \quad L_r = \tfrac{1}{2}(d_r + s_r),$$

wherein $L_l$, $L_r$, $L_u$, $L_d$ are the distances from the image midpoint to the bisectors of the left, right, upper and lower overlap regions, $d_u$, $d_d$, $d_l$, $d_r$ are the distances from the center point of the image to be stitched to its upper, lower, left and right edges, and $s_u$, $s_d$, $s_l$, $s_r$ are the minimum distances from the center point to the upper, lower, left and right adjacent images;
(5.3) calculating the angle and distance of every pixel in the image from the center point:

$$\theta_p = \arctan\frac{y_p - y_c}{x_p - x_c} + \theta_0, \qquad r_p = \sqrt{(x_p - x_c)^2 + (y_p - y_c)^2},$$

wherein $\theta_p$ is the angle of the pixel from the center point, $r_p$ is the distance of the pixel from the center point, $(x_c, y_c)$ are the horizontal and vertical coordinates of the image center point, $(x_p, y_p)$ are those of the pixel, and $\theta_0$ is 0, 180 or 360 degrees: 180 degrees when the pixel is to the upper left of the center point, 360 degrees to the lower left, 0 degrees to the lower right, and 180 degrees to the upper right;
(5.4) judging on which side of the overlap-region bisector the pixel lies: if it lies on the side far from the center of the image to be stitched, its coordinate offset is the average coordinate offset of that direction; if it lies on the side near the center, its offset is weighted according to the pixel's angle to the two adjacent vertices and its projection distance from the midpoint:

$$w = \frac{r_p \cos\alpha}{D}, \qquad \delta x_p = w\,\Delta x, \qquad \delta y_p = w\,\Delta y,$$

wherein $w$ is the deformation weight coefficient, $\delta x_p$ and $\delta y_p$ are the pixel's offsets in the $x$ and $y$ coordinate directions, $r_p \cos\alpha$ is the projection distance of the pixel from the midpoint within its region, $D$ is the perpendicular distance from the midpoint to the edge for that region, $\alpha$ is the included angle between the perpendicular of the adjacent direction and the line connecting the pixel to the center point, and $\Delta x$, $\Delta y$ are the average coordinate offsets of the adjacent direction in the $x$ and $y$ directions.
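The per-pixel weighting of sub-step (5.4) can be sketched as below. The linear ramp of the weight from the center toward the region edge is an assumption (the patent's exact formula was given only in a figure); the region's perpendicular is taken as the downward +x direction for concreteness:

```python
import numpy as np

def pixel_offset(px, center, D, delta, far_side=False):
    """Weighted per-pixel shift for sub-step (5.4).

    Pixels on the far side of the overlap bisector get the full region
    offset `delta`; pixels on the near side are scaled by the ratio of
    the pixel's projection distance (r * cos(alpha)) to the mid-to-edge
    distance D.
    """
    d = np.asarray(px, float) - np.asarray(center, float)
    r = float(np.hypot(d[0], d[1]))
    if far_side:
        return np.asarray(delta, float)
    cos_a = d[0] / r if r > 0 else 0.0          # angle to the +x perpendicular
    w = min(max(r * cos_a / D, 0.0), 1.0)       # projection distance over D
    return w * np.asarray(delta, float)
```

A pixel exactly at the region edge (projection distance equal to D) receives the full offset; a pixel halfway receives half of it.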
Further, the suture line calculation in step (6) is implemented by the following sub-steps:
(6.1) determining the start and end points of the suture line at the edges of the overlap area: for images adjacent in the course direction, the start and end points lie at the upper and lower edges of the overlap area; for images adjacent in the span direction, they lie at the left and right edges;
(6.2) calculating the gray difference $E$ of the overlap area of the adjacent images:

$$E(x, y) = \left| I_1(x, y) - I_2(x, y) \right|,$$

wherein $I_1(x, y)$ is the gray value of the overlap area of the image to be stitched and $I_2(x, y)$ that of the adjacent image;
(6.3) finding the maximum and minimum gray differences over all rows and determining their mean $E_m$:

$$E_m = \tfrac{1}{2}\left(E_{\min} + E_{\max}\right),$$

wherein $E_{\min}$ is the minimum gray difference in the overlap region and $E_{\max}$ the maximum;
(6.4) performing a gray-weighted distance transform on the overlap-region gray image to obtain the transform map $T$;
(6.5) starting from the start point of the suture line, taking the point of the next row with the minimum cost function as the position of the next suture point, the cost function being calculated from the mid value $E_m$ of the maximum and minimum gray differences in the overlap region;
and (6.6) after the suture line is determined, taking images on two sides of the suture line to construct a mask.
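The row-by-row seam search of sub-step (6.5) can be sketched as a walk through the overlap region's cost matrix. This greedy variant moves to the cheapest of the three neighbours below the current column; the cost values here are a toy example, not the patent's weighted cost:

```python
import numpy as np

def best_seam(cost):
    """Top-to-bottom seam through a cost matrix: start at the cheapest
    point of the first row, then per row pick the cheapest of the three
    columns adjacent to the current one."""
    h, w = cost.shape
    col = int(np.argmin(cost[0]))
    seam = [col]
    for y in range(1, h):
        lo, hi = max(col - 1, 0), min(col + 2, w)
        col = lo + int(np.argmin(cost[y, lo:hi]))
        seam.append(col)
    return seam

# toy 4x4 cost matrix with a cheap channel down column 2
cost = np.array([[5, 4, 0, 6],
                 [7, 3, 1, 8],
                 [6, 2, 0, 9],
                 [5, 4, 1, 7]], dtype=float)
seam = best_seam(cost)
```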
Further, the image stitching in the step (7) is realized by the following sub-steps:
(7.1) establishing all image masks;
(7.2) if repeated masks appear in a triple- or quadruple-overlap area, only the mask of the image earliest in the shooting order is kept; where a pixel is covered by no mask, the pixel is taken from the first image that covers that position.
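Sub-steps (7.1) and (7.2) amount to first-wins mask compositing with a fallback to the first covering image, sketched here with assumed function and array names:

```python
import numpy as np

def compose(masks, images):
    """Composite deformed images using their seam masks.

    Masks are visited in shooting order, so where masks overlap
    (triple/quadruple overlap) the earliest image wins; any pixel no
    mask claims falls back to the first image.
    """
    out = np.zeros(images[0].shape, dtype=float)
    claimed = np.zeros(images[0].shape, dtype=bool)
    for m, img in zip(masks, images):          # shooting order: earliest wins
        take = m & ~claimed
        out[take] = img[take]
        claimed |= take
    holes = ~claimed
    out[holes] = images[0][holes]              # fallback for unmasked pixels
    return out

imgs = [np.full((2, 2), 1.0), np.full((2, 2), 2.0)]
masks = [np.array([[True, True], [False, False]]),
         np.array([[False, True], [True, False]])]
mosaic = compose(masks, imgs)
```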
The invention has the following beneficial effects:
the invention provides a simple, quick and effective seamless splicing method for infrared multi-sequence images of unmanned aerial vehicle area array swinging, multiple unmanned aerial vehicle infrared images can be spliced into a high-quality wide-view-field panoramic image, the high-quality panoramic splicing result can directly promote the application of the infrared images in the fields of natural disaster monitoring and early warning, environmental pollution discovery and management, battlefield environment patrol and reconnaissance and the like, good data support is provided for further infrared profit research, and the development and application of infrared data are promoted.
Drawings
In order to more clearly illustrate the detailed description of the invention or the technical solutions in the prior art, the drawings used in the detailed description or the prior art description will be briefly described below.
Fig. 1 is a flowchart of a seamless splicing method for infrared multi-sequence images of area array sweep of an unmanned aerial vehicle according to embodiment 1 of the present invention;
fig. 2 is a flowchart of the feature point and central control point extraction step provided in step two of embodiment 1 of the present invention;
fig. 3 is a specific mathematical model of image deformation provided in step four, step five and step six of embodiment 1 of the present invention;
fig. 4 is a flowchart of image deformation provided in step four, step five, and step six of embodiment 1 of the present invention;
fig. 5 is a flowchart of image stitching provided in step seven and step eight of embodiment 1 of the present invention.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings, and it should be understood that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "far", "near", and the like indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings only for the convenience of description of the present invention and simplification of description, but do not indicate or imply that the method referred to must have a specific orientation, be constructed and operated in a specific orientation, and thus, should not be construed as limiting the present invention.
In addition, the technical features involved in the different embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
Example 1
The embodiment of the invention provides a rapid seamless stitching method for infrared images from an unmanned aerial vehicle, comprising the following steps, as shown in figure 1:
s1, preprocessing an image to be spliced;
in the embodiment of the invention, the infrared images to be stitched all come from an infrared imager on a UAV performing area-array sweep scanning. The images are preprocessed by gray stretching to enhance contrast; the gray stretching determines a gray threshold range from the image histogram and normalizes the gray values into the range 0 to 255;
s2, extracting image feature points;
in the embodiment of the invention, the SIFT algorithm is utilized to extract the feature points in the image;
s3, establishing a vector coordinate relation of the images to be spliced;
in the embodiment of the invention, a vector coordinate relation is established from the minimum coordinate of each image to be stitched, the image size and the sweep sequence; from this relation, a rough stitching layout can be determined, along with each image's adjacent images and overlap areas;
s4, matching the characteristic points of the images with the overlapped areas and calculating a central control point by combining coordinates;
in the embodiment of the invention, according to each image's coordinates after geometric correction, the serial numbers of overlapping adjacent images in the course and wingspan directions are calculated; feature points of adjacent images are matched pairwise, and the RANSAC algorithm eliminates mismatched feature points to obtain accurately matched pairs. The midpoints of the corresponding feature point pairs in a local coordinate system serve as control points: the purpose of a control point is that the homologous points of both images are transformed to its position, achieving coordinate alignment. The course direction is the UAV's flight direction; the wingspan direction is perpendicular to it;
s5, carrying out image partitioning according to the course and the wingspan direction and calculating the coordinate offset of each partition;
in the embodiment of the invention, the registered image is divided into four regions by the lines connecting its center point to its vertices, corresponding respectively to the left and right adjacent images along the course and the upper and lower adjacent images along the wingspan. The coordinate offsets between feature points and control points are calculated in each of the four directions; because these offsets are essentially consistent, the mean offset of all points in each direction is taken as the image's offset in that direction. The vertices here are simply the four corner points of the image.
And S6, determining the offset of all pixels according to the distance from the center point and the angles from the four vertical overlapping area directions.
In the embodiment of the invention, the center point is held fixed, the overlap areas in the up, down, left and right directions are each divided into two parts along the overlap direction, all pixels in the part far from the center are coordinate-transformed by the average offset, and the offsets of the remaining pixels are determined from their distance to the center point and their angles to the four directions perpendicular to the overlap areas.
S7, calculating cost and value matrixes of adjacent images and weighting to solve an optimal suture line;
in the embodiment of the invention, the cost and value matrixes of adjacent images are calculated, the optimal suture line is weighted and solved, and the areas on the two sides of the suture line generate corresponding masks;
s8, stitching the images;
in the embodiment of the invention, image splicing is carried out according to the mask generated by the suture lines of the deformed images in four directions.
Specifically, in the embodiment of the invention, the images to be stitched are preprocessed by gray-level stretching, after which image feature points are extracted and a vector coordinate relation of the images is established. Feature points of images with overlap areas are matched and central control points are computed from the combined coordinates. Each image is partitioned along the course and wingspan directions and the coordinate offset of each region is calculated; the offset of every pixel is then determined from its distance to the center point and its angles to the four directions perpendicular to the overlap areas. Finally, the cost and value matrices of adjacent images are calculated, the weighted optimal suture line is solved, and the images are stitched.
Optionally, as shown in fig. 2, the extracting step of the feature points and the central control point includes:
s21, carrying out gray level stretching on an image to be spliced, and extracting the image feature points after the gray level stretching based on an SIFT algorithm;
s22, establishing a vector position relation according to the image coordinates and the flight strip relation, solving and determining adjacent images of the images to be spliced, and performing primary matching on feature points of the adjacent images;
in an embodiment of the invention, the adjacent image groups in the course and span directions are determined as follows. The UAV acquires images of the whole area along an S-shaped route: if an image is the first or last acquired in the first course line, or the first or last acquired in the last course line, it has only 2 adjacent images; if it is the first or last image of an intermediate course line, it has 3 adjacent images; if it is an intermediate image of an intermediate course line, it has 4 adjacent images. The feature point coordinates, feature point descriptors, image coordinates and size information of the image to be stitched and its adjacent images are then acquired;
s23, eliminating the wrong matching points based on RANSAC, and reserving the middle points of the correct matching feature point pairs as control points;
in the embodiment of the invention, the feature points of the image to be stitched and each adjacent image are matched respectively, and the preliminary matches are filtered by the RANSAC algorithm; if the number Q of matched feature point pairs retained after filtering is less than 10, it is considered that no reliable matched pairs exist. Combining the image coordinate information with each feature point's coordinates relative to its image, the retained correctly matched feature points are assigned coordinates in a local coordinate system, and the midpoint of each matched pair is taken as a central control point;
specifically, in the embodiment of the invention, the image to be stitched is gray-stretched and feature points are extracted from the stretched image with the SIFT algorithm; more feature points can be extracted after gray stretching. A vector position relation is established from the image coordinates and the flight-strip relation, the adjacent images of the image to be stitched are determined, the feature points of adjacent images are preliminarily matched, mismatched points are eliminated with RANSAC, and the midpoints of the correctly matched feature point pairs are kept as control points.
Optionally, as shown in fig. 3 and fig. 4, the image deformation specifically includes the following steps:
S41, connecting the centre point of the image with its four vertexes, dividing the image into four regions that correspond respectively to the four adjacent images;
In the embodiment of the invention, a coordinate system is established with the image centre as the origin, the downward direction as the x axis and the rightward direction as the y axis. The origin is connected with the four vertexes of the effective image area, dividing the image into 4 regions, and the angles θ1, θ2, θ3 and θ4 of the centre-to-vertex lines are calculated as:

θ1 = atan2(y_lt − y0, x_lt − x0), θ2 = atan2(y_lb − y0, x_lb − x0), θ3 = atan2(y_rb − y0, x_rb − x0), θ4 = atan2(y_rt − y0, x_rt − x0)

wherein θ1, θ2, θ3 and θ4 are the angles formed by the upper-left, lower-left, lower-right and upper-right vertexes with the image centre point, (x0, y0) are the coordinates of the image centre point, (x_lt, y_lt) are the coordinates of the upper-left vertex, (x_lb, y_lb) of the lower-left vertex, (x_rb, y_rb) of the lower-right vertex, and (x_rt, y_rt) of the upper-right vertex;
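A sketch of this angle computation (the original formulas appear only as images in the source, so expressing them with `atan2` under the patent's x-down/y-right convention is an assumption):

```python
import math

def vertex_angles(center, vertices):
    """Angle of each centre-to-vertex line, in degrees in [0, 360).
    Coordinates follow the patent's convention: x downward, y rightward."""
    cx, cy = center
    return [math.degrees(math.atan2(vy - cy, vx - cx)) % 360.0
            for vx, vy in vertices]

# a 100 x 100 image centred at (50, 50); vertexes listed as
# upper-left, lower-left, lower-right, upper-right in (x, y) order
angles = vertex_angles((50, 50), [(0, 0), (100, 0), (100, 100), (0, 100)])
```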
S42, calculating the coordinate differences between all feature points and control points in each of the four regions and taking their average as the average offset of all pixels in that direction:

Δx_avg = (1/n) · Σ_{i=1..n} (x_ci − x_pi), Δy_avg = (1/n) · Σ_{i=1..n} (y_ci − y_pi)

wherein p_i = (x_pi, y_pi) is an extracted feature point of the image to be stitched, c_i = (x_ci, y_ci) is its central control point, i is the feature-point index, n is the number of matched feature point pairs, (x_ci − x_pi) and (y_ci − y_pi) are the x- and y-direction coordinate offsets of a matched pair, and Δx_avg and Δy_avg are the average coordinate offsets of all pairs in the x and y directions;
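For one region, the averaging of step S42 amounts to the following (the point values below are made up for illustration):

```python
import numpy as np

# matched (feature point, control point) pairs inside one region
p = np.array([[12.0, 40.0], [15.0, 42.0], [11.0, 39.0]])   # feature points
c = np.array([[14.0, 41.0], [17.0, 44.0], [13.0, 40.0]])   # central control points

d = c - p                          # per-pair (dx, dy) coordinate offsets
mean_dx, mean_dy = d.mean(axis=0)  # average offset applied to every pixel of this region
```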
The coordinate offsets of all pixels in the image in step (6) are calculated through the following substeps:
calculating the bisection angles of the angles formed at the centre point by adjacent vertexes of the image to be spliced:

φ1 = (θ1 + θ2)/2, φ2 = (θ2 + θ3)/2, φ3 = (θ3 + θ4)/2, φ4 = (θ4 + θ1)/2 (taken modulo 360°)

wherein φ1 is the bisection angle of the angle between the first and second vertexes about the centre point, φ2 that between the second and third vertexes, φ3 that between the third and fourth vertexes, and φ4 that between the fourth and first vertexes.
S43, dividing each overlap region into two areas along its bisector in the overlap direction:

d_l = (a_l + b_l)/2, d_r = (a_r + b_r)/2, d_u = (a_u + b_u)/2, d_d = (a_d + b_d)/2

wherein d_l, d_r, d_u and d_d are the distances from the image midpoint to the bisector of the left, right, upper and lower overlap regions, a_u, a_d, a_l and a_r are the distances from the centre point of the image to be spliced to its upper, lower, left and right edges, and b_u, b_d, b_l and b_r are the minimum distances from the centre point of the image to be spliced to the upper, lower, left and right adjacent images;
calculating the angle and distance of every pixel in the image from the centre point:

θ = arctan((y − y0)/(x − x0)) + C, r = sqrt((x − x0)² + (y − y0)²)

wherein θ is the angle of the pixel from the centre point, r is the distance of the pixel from the centre point, (x0, y0) are the coordinates of the image centre point, (x, y) are the coordinates of the pixel, and C is 0, 180 or 360 degrees: 180 degrees when the pixel is to the upper left of the centre point, 360 degrees when it is to the lower left, 0 degrees when it is to the lower right, and 180 degrees when it is to the upper right;
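The per-quadrant additive constant of this step is exactly what `atan2` supplies automatically; a sketch:

```python
import math

def pixel_polar(center, pixel):
    """Angle (degrees, in [0, 360)) and distance of a pixel from the image centre.
    atan2 handles the patent's quadrant-dependent constant implicitly."""
    cx, cy = center
    px, py = pixel
    dx, dy = px - cx, py - cy
    return math.degrees(math.atan2(dy, dx)) % 360.0, math.hypot(dx, dy)
```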
S44, on the side away from the centre point of the image to be spliced, transforming the coordinates of all pixels by the average offset; on the side close to the centre point, determining the offset of each pixel from its distance to the centre point and its angles to the perpendiculars of the four overlap directions:

Δx_p = w · Δx_avg, Δy_p = w · Δy_avg

wherein w is the deformation weight coefficient, Δx_p and Δy_p are the offsets of the pixel in the x and y coordinate directions, l is the projection distance of the pixel from the midpoint of its region, d is the perpendicular distance from the midpoint to the region edge, β1 and β2 are the angles between the perpendiculars of the adjacent directions and the line joining the pixel to the centre point, and Δx_avg and Δy_avg are the average coordinate offsets of the adjacent direction in the x and y directions; w is determined from l, d, β1 and β2.
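A sketch of the near-side attenuation: the exact weight formula is not legible in the source, so a linear ratio of the pixel's projection distance l to the region's perpendicular depth d is assumed here; the patent additionally blends by the angles to the two adjacent vertexes. The far side simply receives the full average offset.

```python
def pixel_offset(l, d, mean_dx, mean_dy, far_side=False):
    """Offset applied to one pixel of a region.

    far_side=True  -> the full average offset of the region (S44, far side).
    far_side=False -> the offset attenuated toward the image centre; the
                      linear weight l/d is an assumption, not the patent's
                      exact coefficient.
    """
    w = 1.0 if far_side else max(0.0, min(1.0, l / d))
    return w * mean_dx, w * mean_dy
```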
Specifically, in the embodiment of the invention, the image centre point is connected with the four vertexes, dividing the image into four regions corresponding respectively to the four adjacent images; the coordinate differences between all feature points and control points in each region are calculated and averaged as the average offset of all pixels in that direction; each overlap region is divided into two areas along the overlap direction; on the side away from the centre point of the image to be spliced, all pixels are coordinate-transformed by the average offset, while on the side close to the centre point, the offset of each pixel is determined from its distance to the centre point and its angles to the four perpendicular overlap-region directions.
Optionally, as shown in fig. 5, the image stitching includes the following steps:
S51, determining the start point and end point of the suture line;
In the embodiment of the invention, the start and end points of the suture line are determined on the edges of the overlap region: for images adjacent in the course direction, the start and end points lie on the upper and lower edges of the overlap region; for images adjacent in the span direction, they lie on the left and right edges of the overlap region;
S52, calculating the gray difference E of the overlap region of adjacent images:

E(x, y) = |f1(x, y) − f2(x, y)|

wherein f1(x, y) is the gray value of the overlap region of the image to be stitched and f2(x, y) is the gray value of the overlap region of the adjacent image;
S53, obtaining the maximum and minimum gray differences over all rows and columns, and the average E_avg of the maximum and minimum gray differences:

E_avg = (E_min + E_max)/2

wherein E_min is the minimum gray difference in the overlap region and E_max is the maximum gray difference in the overlap region;
S54, performing a gray-weighted distance transformation on the gray image of the overlap region;
S55, starting from the start point of the suture line, calculating the point with the minimum cost function in the next row as the position of the next point of the suture line, the cost function being the average of the gray difference values of the two adjacent points;
in the embodiment of the invention, starting from the starting point of the suture line, the point with the minimum cost function in the next row is calculated as the position of the next point of the suture line, and the cost function calculation method is the average value of the gray difference values of two adjacent points;
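A minimal sketch of this seam trace over a cost map (greedy, three candidate columns per row; the real cost values would come from the gray-weighted distance transform above, and the map here is made up for illustration):

```python
import numpy as np

def greedy_seam(cost, start_col):
    """From the start point, step row by row to the cheapest of the three
    neighbouring columns in the next row (greedy version of step S55)."""
    h, w = cost.shape
    cols = [start_col]
    for r in range(1, h):
        c = cols[-1]
        cand = [j for j in (c - 1, c, c + 1) if 0 <= j < w]
        cols.append(min(cand, key=lambda j: cost[r, j]))
    return cols

cost = np.array([[0.0, 1.0, 2.0],
                 [5.0, 0.0, 5.0],
                 [5.0, 5.0, 0.0]])
seam = greedy_seam(cost, 0)     # column index of the seam in each row
```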
S56, taking the images on the two sides of the suture line to construct a mask;
In the embodiment of the invention, if repeated masks appear in a triple- or quadruple-overlap area, only the mask of the image earliest in the shooting sequence is taken; if no mask covers a position, the image that appears first at that position is taken;
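The priority rule for multiply-overlapped areas can be sketched as follows (masks listed in shooting order; boolean arrays are an assumed representation):

```python
import numpy as np

def resolve_masks(masks):
    """Assign each pixel to the earliest image (in shooting order) whose
    mask covers it; -1 marks pixels covered by no mask."""
    owner = np.full(masks[0].shape, -1, dtype=int)
    for k, m in enumerate(masks):
        owner[(owner == -1) & m] = k
    return owner

m0 = np.array([[True, True], [False, False]])
m1 = np.array([[False, True], [True, False]])
owner = resolve_masks([m0, m1])
```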
Specifically, in the embodiment of the present invention, the start and end points of the suture line are determined; the gray difference of the overlap region of adjacent images is calculated; the maximum and minimum gray differences of all rows and columns and their average are obtained; a gray-weighted distance transformation is performed on the gray image of the overlap region; starting from the start point of the suture line, the point with the minimum cost function in the next row is taken as the position of the next point of the suture line, the cost function being the average of the gray difference values of the two adjacent points; and the images on the two sides of the suture line are taken to construct a mask.
The above-described embodiments are intended to illustrate rather than to limit the invention, and any modifications and variations of the present invention are within the spirit of the invention and the scope of the claims.
Claims (7)
1. An unmanned aerial vehicle area array swinging infrared multi-sequence image seamless splicing method is characterized by comprising the following steps:
(1) Preprocessing an image to be spliced, wherein the preprocessing is to perform gray stretching on an infrared image and enhance the image contrast, and the image to be spliced is an infrared image swept by an unmanned aerial vehicle area array;
(2) Extracting the feature points in the image in the step (1) by utilizing an SIFT algorithm to obtain scale-invariant feature points in the image;
(3) Establishing a vector coordinate relation according to the minimum coordinate of the image, the image size and the sweep sequence in the step (2);
(4) Calculating the sequence numbers of the overlapping images adjacent in the course and wingspan directions of the unmanned aerial vehicle according to the vector coordinates of the images to be spliced; matching the feature points of adjacent images pairwise, eliminating wrongly matched feature points with the RANSAC algorithm to obtain accurately matched feature point pairs, and taking the midpoint of each matched pair in the local coordinate system as a control point;
(5) Dividing the image to be spliced into four areas according to the connecting line of the central point and the vertex of the image to be spliced, wherein the four areas respectively correspond to left and right adjacent images of a course and images in upper and lower adjacent directions in a wingspan direction, respectively calculating coordinate offset between all characteristic points and control points in the four directions, and calculating the average value of coordinate offset of all points in each direction as the offset of the image in the direction;
(6) Fixing the position of the central point of the image to be spliced, determining the offset of the rest pixels according to the distance from the central point and the angles from the four vertical overlapping area directions, and carrying out coordinate transformation on all the pixels according to the respective corresponding offsets to obtain a deformed image; calculating the cost and value matrix of adjacent images, and carrying out weighting to solve the optimal suture line;
(7) And (4) carrying out image splicing on the deformed images obtained in the step (6) on sewing lines in four adjacent directions.
2. The unmanned aerial vehicle area array sweeping infrared multi-sequence image seamless splicing method as claimed in claim 1, wherein the gray stretching in the step (1) is to determine a gray value threshold range according to a histogram of the image to be spliced and normalize the gray value to be within a range of 0-255.
3. The unmanned aerial vehicle area array sweeping infrared multi-sequence image seamless splicing method according to claim 1, wherein the control point calculation method in the step (5) is realized through the following sub-steps:
(3.1) determining adjacent image groups of the images to be spliced in the course and span directions: the unmanned aerial vehicle acquires images of the whole area according to the S-shaped route, and if the images are the first or last image acquired in the first line of the course or the first and last images acquired in the last line of the course, the images to be spliced only have 2 adjacent images; if the first image and the last image are obtained from the course middle line, the images to be spliced comprise 3 adjacent images; if the image group to be spliced is the intermediate image obtained by the course intermediate line, the image group to be spliced comprises 4 images;
(3.2) acquiring the feature point coordinates, feature point descriptions, image coordinates and size information of the image to be spliced and the adjacent images;
(3.3) respectively matching the feature points of the image to be spliced and the adjacent images, removing wrongly matched points from the preliminarily matched feature points according to the RANSAC algorithm; if the number Q of matched feature point pairs retained after removal is less than 10, considering that no reliable matched feature point pairs exist;
and (3.4) combining the image coordinate information and the coordinates of the feature points relative to the image, endowing the coordinates under the local coordinate system to the reserved correct matching feature points, and taking the middle points of the matching feature points as central control points.
4. The unmanned aerial vehicle area array sweeping infrared multi-sequence image seamless splicing method according to claim 1, wherein the step (5) is realized by the following sub-steps:
(4.1) determining the midpoint of the images to be spliced and the maximum and minimum coordinates of all the images in the image group under the local coordinate system;
(4.2) taking the midpoint of the image to be spliced as the pole and the vertically downward ray as the polar axis, calculating the included angles between the midpoint of the image to be spliced and the four vertexes, the lines from the midpoint to the four vertexes dividing the image to be spliced into 4 regions:

θ1 = atan2(y_lt − y0, x_lt − x0), θ2 = atan2(y_lb − y0, x_lb − x0), θ3 = atan2(y_rb − y0, x_rb − x0), θ4 = atan2(y_rt − y0, x_rt − x0)

wherein θ1, θ2, θ3 and θ4 are the angles formed by the upper-left, lower-left, lower-right and upper-right vertexes with the image midpoint, (x0, y0) are the coordinates of the image centre point, (x_lt, y_lt) are the coordinates of the upper-left vertex, (x_lb, y_lb) of the lower-left vertex, (x_rb, y_rb) of the lower-right vertex, and (x_rt, y_rt) of the upper-right vertex;
(4.3) calculating the coordinate offsets of all matched feature points from the control points in each of the four regions, and taking the average as the coordinate offset of the region:

Δx_avg = (1/n) · Σ_{i=1..n} (x_ci − x_pi), Δy_avg = (1/n) · Σ_{i=1..n} (y_ci − y_pi)

wherein p_i = (x_pi, y_pi) is an extracted feature point of the image to be stitched, c_i = (x_ci, y_ci) is its central control point, i is the feature-point index, n is the number of matched feature point pairs, (x_ci − x_pi) and (y_ci − y_pi) are the x- and y-direction coordinate offsets of a matched pair, and Δx_avg and Δy_avg are the average coordinate offsets of all pairs in the x and y directions.
5. The seamless splicing method for the infrared multi-sequence images swept by the unmanned aerial vehicle area array according to claim 1, wherein the calculating method for the coordinate offset of all pixels in the images in the step (6) is realized by the following sub-steps:
(5.1) calculating the bisection angles of the angles formed at the centre point by adjacent vertexes of the image to be spliced:

φ1 = (θ1 + θ2)/2, φ2 = (θ2 + θ3)/2, φ3 = (θ3 + θ4)/2, φ4 = (θ4 + θ1)/2 (taken modulo 360°)

wherein φ1 is the bisection angle of the angle between the first and second vertexes about the centre point, φ2 that between the second and third vertexes, φ3 that between the third and fourth vertexes, and φ4 that between the fourth and first vertexes;
(5.2) calculating the bisector of the overlap region between the image to be spliced and each adjacent image:

d_l = (a_l + b_l)/2, d_r = (a_r + b_r)/2, d_u = (a_u + b_u)/2, d_d = (a_d + b_d)/2

wherein d_l, d_r, d_u and d_d are the distances from the image midpoint to the bisector of the left, right, upper and lower overlap regions, a_u, a_d, a_l and a_r are the distances from the centre point of the image to be spliced to its upper, lower, left and right edges, and b_u, b_d, b_l and b_r are the minimum distances from the centre point of the image to be spliced to the upper, lower, left and right adjacent images;
(5.3) calculating the angle and distance of every pixel in the image from the centre point:

θ = arctan((y − y0)/(x − x0)) + C, r = sqrt((x − x0)² + (y − y0)²)

wherein θ is the angle of the pixel from the centre point, r is the distance of the pixel from the centre point, (x0, y0) are the coordinates of the image centre point, (x, y) are the coordinates of the pixel, and C is 0, 180 or 360 degrees: 180 degrees when the pixel is to the upper left of the centre point, 360 degrees when it is to the lower left, 0 degrees when it is to the lower right, and 180 degrees when it is to the upper right;
(5.4) judging on which side of the bisector each pixel of the overlap region lies: if it is on the side away from the centre of the image to be spliced, its coordinate offset is the average coordinate offset of that direction; if it is on the side toward the centre of the image to be spliced, its coordinate offset is weighted according to its angles to the two adjacent vertexes and its projection distance from the midpoint:

Δx_p = w · Δx_avg, Δy_p = w · Δy_avg

wherein w is the deformation weight coefficient, Δx_p and Δy_p are the offsets of the pixel in the x and y coordinate directions, l is the projection distance of the pixel from the midpoint of its region, d is the perpendicular distance from the midpoint to the region edge, β1 and β2 are the angles between the perpendiculars of the adjacent directions and the line joining the pixel to the centre point, and Δx_avg and Δy_avg are the average coordinate offsets of the adjacent direction in the x and y directions; w is determined from l, d, β1 and β2.
6. The unmanned aerial vehicle area array sweeping infrared multi-sequence image seamless splicing method according to claim 1, wherein the suture line calculation method in the step (7) is realized through the following sub-steps:
(6.1) determining the start and end points of the suture line on the edges of the overlap region: for images adjacent in the course direction, the start and end points lie on the upper and lower edges of the overlap region; for images adjacent in the span direction, they lie on the left and right edges of the overlap region;
(6.2) calculating the gray difference E of the overlap region of adjacent images:

E(x, y) = |f1(x, y) − f2(x, y)|

wherein f1(x, y) is the gray value of the overlap region of the image to be stitched and f2(x, y) is the gray value of the overlap region of the adjacent image;
(6.3) finding the maximum and minimum gray differences of all rows and columns and determining the average E_avg of the maximum and minimum gray differences:

E_avg = (E_min + E_max)/2

wherein E_min is the minimum gray difference in the overlap region and E_max is the maximum gray difference in the overlap region;
(6.4) performing a gray-weighted distance transformation on the gray image of the overlap region;
(6.5) starting from the start point of the suture line, calculating the point with the minimum cost function in the next row as the position of the next point of the suture line, the cost function being the average of the gray difference values of the two adjacent points;
and (6.6) after the suture line is determined, taking images of two sides of the suture line to construct a mask.
7. The unmanned aerial vehicle area array sweeping infrared multi-sequence image seamless splicing method according to claim 1, wherein the image splicing in the step (7) is realized by the following sub-steps:
(7.1) establishing all image masks;
and (7.2) if repeated masks appear in the triple-overlapped or quadruple-overlapped area, only taking the mask of the image at the forefront of the shooting sequence, and if no mask exists, taking the image at the position corresponding to the first appearing image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211306562.XA CN115393196B (en) | 2022-10-25 | 2022-10-25 | Infrared multi-sequence image seamless splicing method for unmanned aerial vehicle area array swinging |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115393196A CN115393196A (en) | 2022-11-25 |
CN115393196B true CN115393196B (en) | 2023-03-24 |
Family
ID=84129183
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117132913B (en) * | 2023-10-26 | 2024-01-26 | 山东科技大学 | Ground surface horizontal displacement calculation method based on unmanned aerial vehicle remote sensing and feature recognition matching |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104156968A (en) * | 2014-08-19 | 2014-11-19 | 山东临沂烟草有限公司 | Large-area complex-terrain-region unmanned plane sequence image rapid seamless splicing method |
CN107808362A (en) * | 2017-11-15 | 2018-03-16 | 北京工业大学 | A kind of image split-joint method combined based on unmanned plane POS information with image SURF features |
CN107945113A (en) * | 2017-11-17 | 2018-04-20 | 北京天睿空间科技股份有限公司 | The antidote of topography's splicing dislocation |
CN109118429A (en) * | 2018-08-02 | 2019-01-01 | 武汉大学 | A kind of medium-wave infrared-visible light multispectral image rapid generation |
CN109961399A (en) * | 2019-03-15 | 2019-07-02 | 西安电子科技大学 | Optimal stitching line method for searching based on Image distance transform |
AU2020101709A4 (en) * | 2020-05-18 | 2020-09-17 | Zhejiang University | Crop yield prediction method and system based on low-altitude remote sensing information from unmanned aerial vehicle |
CN112862683A (en) * | 2021-02-07 | 2021-05-28 | 同济大学 | Adjacent image splicing method based on elastic registration and grid optimization |
CN113506216A (en) * | 2021-06-24 | 2021-10-15 | 煤炭科学研究总院 | Rapid suture line optimization method for panoramic image splicing |
WO2021213508A1 (en) * | 2020-04-24 | 2021-10-28 | 安翰科技(武汉)股份有限公司 | Capsule endoscopic image stitching method, electronic device, and readable storage medium |
WO2022027313A1 (en) * | 2020-08-05 | 2022-02-10 | 深圳市大疆创新科技有限公司 | Panoramic image generation method, photography apparatus, flight system, and storage medium |
CN114936971A (en) * | 2022-06-08 | 2022-08-23 | 浙江理工大学 | Unmanned aerial vehicle remote sensing multispectral image splicing method and system for water area |
CN115082314A (en) * | 2022-06-28 | 2022-09-20 | 中国科学院光电技术研究所 | Method for splicing optical surface defect images in step mode through self-adaptive feature extraction |
Non-Patent Citations (3)
Title |
---|
Yang Guopeng; Zhou Xin; Wei Hongbo; Xing Ping. "Large-area seamless mosaicking of sequence images from an area-array whisk-broom aerial camera". Science of Surveying and Mapping, vol. 45, no. 3, 2020. *
V T Manu; B M Mehtre. "Visual artifacts based image splicing detection in uncompressed images". 2015 IEEE International Conference on Computer Graphics, Vision and Information Security (CGVIS), 2016. *
Yuan Yan et al. "Whisk-broom image mosaicking technology based on projective transformation combined with SIFT". Modern Electronics Technique, no. 9, 2015. *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||