CN111709876B - Image splicing method, device, equipment and storage medium - Google Patents

Image splicing method, device, equipment and storage medium

Info

Publication number
CN111709876B
CN111709876B
Authority
CN
China
Prior art keywords
image
images
original
feature points
reduced
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010312548.5A
Other languages
Chinese (zh)
Other versions
CN111709876A (en)
Inventor
田江浩
禹卫东
张志敏
王宇
邓云凯
王沛
范怀涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Aerospace Information Research Institute of CAS
Original Assignee
Aerospace Information Research Institute of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Aerospace Information Research Institute of CAS filed Critical Aerospace Information Research Institute of CAS
Priority to CN202010312548.5A priority Critical patent/CN111709876B/en
Publication of CN111709876A publication Critical patent/CN111709876A/en
Application granted granted Critical
Publication of CN111709876B publication Critical patent/CN111709876B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4038Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing
    • G06T2207/10044Radar image

Abstract

The embodiments of the present application disclose a method, apparatus, device, and storage medium for image stitching. The method includes: acquiring K original images sequentially arranged along a first dimension, where K is an integer greater than or equal to 1; reducing each original image to obtain K reduced images sequentially arranged along the first dimension; determining the target image overlap rate corresponding to each pair of adjacent reduced images; cutting each pair of adjacent reduced images according to its target image overlap rate to obtain the K-1 overlap regions between adjacent reduced images; and stitching the K original images according to the K-1 overlap regions of the adjacent reduced images.

Description

Image splicing method, device, equipment and storage medium
Technical Field
The embodiments of the present application relate to image processing, and in particular, but not exclusively, to a method, apparatus, device, and storage medium for image stitching.
Background
Scanning synthetic aperture radar (ScanSAR) is an important operating mode of spaceborne synthetic aperture radar. Its wide-swath mapping capability helps shorten the global observation period and enables monitoring of rapidly changing, large-scale surface phenomena. However, owing to limitations of the imaging equipment and of a remote sensing satellite's mapping area, the field of view of a single image often cannot meet practical requirements, so the many small sub-images with overlap regions produced by ScanSAR generally need to be stitched into one wide scene image. When stitching ScanSAR images with the prior art, the images are too large, the feature points too numerous, and the matching speed too low.
Disclosure of Invention
In view of this, embodiments of the present application provide a method, an apparatus, a device, and a storage medium for image stitching.
The technical solutions of the embodiments of the present application are realized as follows:
In a first aspect, an embodiment of the present application provides a method for image stitching, the method including: acquiring K original images sequentially arranged along a first dimension, where K is an integer greater than or equal to 1; reducing each original image to obtain K reduced images sequentially arranged along the first dimension; determining the target image overlap rate corresponding to each pair of adjacent reduced images; cutting each pair of adjacent reduced images according to its target image overlap rate to obtain the K-1 overlap regions between adjacent reduced images; and stitching the K original images according to the K-1 overlap regions of the adjacent reduced images.
In a second aspect, an embodiment of the present application provides an apparatus for image stitching, the apparatus including: an acquisition module, configured to acquire K original images sequentially arranged along a first dimension, where K is an integer greater than or equal to 1; a reducing module, configured to reduce each original image to obtain K reduced images sequentially arranged along the first dimension; a determining module, configured to determine the target image overlap rate corresponding to each pair of adjacent reduced images; a cutting module, configured to cut each pair of adjacent reduced images according to its target image overlap rate to obtain the K-1 overlap regions between adjacent reduced images; and a stitching module, configured to stitch the K original images according to the K-1 overlap regions of the adjacent reduced images.
In a third aspect, an embodiment of the present application provides a device for image stitching, including a memory and a processor, where the memory stores a computer program executable on the processor, and the processor, when executing the program, implements the steps of the image stitching method of any embodiment of the present application.
In a fourth aspect, the present application provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the steps of the above method.
In the embodiments of the present application, the K images arranged along a first dimension are first reduced; the reduced images are then cut according to the computed overlap rate between each pair of adjacent images, yielding the K-1 overlap regions between adjacent reduced images; finally, the K original images are stitched according to those K-1 overlap regions. In this way, with the number of correctly matched feature points remaining almost unchanged, the cutting preprocessing effectively removes a large number of useless feature points, improving computational efficiency while preserving registration accuracy.
Drawings
Fig. 1A shows two spaceborne ScanSAR images to be transversely stitched according to an embodiment of the present application;
Fig. 1B is a schematic flowchart of the implementation of an image stitching method according to an embodiment of the present application;
Fig. 2 is a schematic flowchart of the implementation of another image stitching method according to an embodiment of the present application;
Fig. 3 is a schematic diagram of the cutting and equal division of image overlap regions provided by an embodiment of the present application;
Fig. 4 is a schematic flowchart of the implementation of yet another image stitching method according to an embodiment of the present application;
Fig. 5 shows an image stitching result after image preprocessing and one without image preprocessing, provided in an embodiment of the present application;
Fig. 6 is a schematic structural diagram of an image stitching apparatus according to an embodiment of the present application;
Fig. 7 is a hardware entity diagram of an image stitching device according to an embodiment of the present application.
Detailed Description
Image stitching is a research hotspot in the ScanSAR field. The problems in stitching spaceborne ScanSAR images manifest in two aspects:
First, the original spaceborne ScanSAR images have high resolution and far too many pixels, so the data volume processed during stitching is excessive and the stitching speed is low. A single original spaceborne ScanSAR image is 1.5 GB to 2 GB before stitching, and each image to be stitched consists of hundreds of millions of pixels. When the SIFT algorithm extracts feature points over the whole image, the feature points are too numerous and extraction is too slow.
Second, when full-image feature point extraction is performed on an original spaceborne ScanSAR image, many erroneous feature points are extracted, which degrades matching accuracy. Image stitching relies on identical or similar feature points between the images to be stitched; the relative positions of the images are found from these feature points so that accurate stitching can be performed. For spaceborne ScanSAR images, the feature points usable for matching between two adjacent images come from the overlap region. In the prior art, full-image feature extraction also picks up feature points that are highly similar but lie outside the overlap region, causing mismatches during feature matching and reducing matching accuracy.
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
It should be understood that some of the embodiments described herein are only for explaining the technical solutions of the present application, and are not intended to limit the technical scope of the present application.
Fig. 1A shows two spaceborne ScanSAR images to be transversely stitched according to an embodiment of the present application. As shown in fig. 1A, assume the longitude of point A1 in image A is lon(A1) and that of point A2 is lon(A2); the longitude of point B1 in image B is lon(B1) and that of point B2 is lon(B2).
The transverse (longitude-direction) overlap rate between image A and image B is calculated: formula (1) expresses the overlap rate a of image A, and formula (2) expresses the overlap rate b of image B.
a = (lon(A2) - lon(B1)) / (lon(A2) - lon(A1))   (1)

b = (lon(A2) - lon(B1)) / (lon(B2) - lon(B1))   (2)

(Formulas (1) and (2) are reconstructed from the surrounding definitions, assuming image A lies west of image B with A1, B1 the western and A2, B2 the eastern corner points.)
Formula (1) gives the proportion of image A occupied by the overlap region, and formula (2) gives the proportion of image B occupied by the overlap region. The reduced image A is cut using the computed overlap rate a to obtain the overlap region of the reduced image A, and the reduced image B is cut using the computed overlap rate b to obtain the overlap region of the reduced image B.
The proportion of the whole image occupied by the overlap region is calculated from the geographic position information that accompanies the satellite images, i.e., the true longitude-latitude coordinates of the corner points of adjacent images. The overlap rate is used for the cutting preprocessing of the image overlap regions.
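As a minimal sketch of this calculation (the function and argument names are illustrative, not from the patent), the two overlap rates of formulas (1) and (2) can be computed directly from the corner longitudes, assuming image A lies west of image B:

```python
def overlap_rates(lon_a1, lon_a2, lon_b1, lon_b2):
    """Overlap rates of two horizontally adjacent images A (west) and B (east).

    lon_a1, lon_a2: longitudes of the western and eastern edges of image A;
    lon_b1, lon_b2: longitudes of the western and eastern edges of image B.
    Returns (a, b) per formulas (1) and (2).
    """
    overlap = lon_a2 - lon_b1           # longitudinal width of the shared strip
    a = overlap / (lon_a2 - lon_a1)     # share of image A covered by the overlap
    b = overlap / (lon_b2 - lon_b1)     # share of image B covered by the overlap
    return a, b
```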
Referring to fig. 1B, an image stitching method provided in the embodiment of the present application executes the following steps:
s101, acquiring K original images which are sequentially arranged according to a first dimension, wherein K is an integer which is more than or equal to 1;
here, the first dimension may be obtained from attributes of the image including location information, creation time, size, occupied space, file type, and the like of the image. Wherein the position information of the image may include longitude and latitude information where a picture taken by the image is located. The first dimension may be longitude information of the image, latitude information of the image, or time of image capturing. For example, K original images sequentially arranged in the first dimension may be acquired, the K original images may be sequentially arranged in accordance with longitude or latitude information of the images, or the K original images may be sequentially arranged in accordance with the time of image capturing.
When the pictures shot by the satellites are spliced, original images generated by the satellite data have respective numbers, and the numbers are consistent with the arrangement sequence of the images. Here, the first dimension may be an order in which the original images are obtained, that is, an order in which the original images are obtained is an order in which they are to be stitched.
S102, reducing each original image to obtain K reduced images which are sequentially arranged according to the first dimension;
here, the reduction factor may be set according to the calculation accuracy of the user. It is recommended to reduce the image side length by one-half, one-third, one-fourth, etc. of the original image, but it is not recommended to reduce the original image too small, which affects the calculation accuracy of the transform matrix offset.
Step S103, determining the target image overlap rate corresponding to each pair of adjacent reduced images;
As shown in fig. 1A, the proportion of each of image A and image B occupied by their overlap region can be calculated by formulas (1) and (2).
The proportion of the whole image occupied by the overlap region is calculated from the geographic position information that accompanies the satellite images, i.e., the true longitude-latitude coordinates of the corner points of adjacent images; the overlap rate is used for the cutting preprocessing of the image overlap regions.
Step S104, cutting each pair of adjacent reduced images according to its target image overlap rate to obtain the K-1 overlap regions between adjacent reduced images;
here, an image to be left-right stitched is taken as an example: the leftmost image has only an overlapping area with its right-adjacent image, so only the overlapping area to the right of the image needs to be cut. The rightmost image has only an overlapping region with its left-adjacent image, so only the overlapping region on the left side of the image needs to be cut. Since the middle image has an overlapping area with both the left and right adjacent images, the overlapping area of the left and right portions needs to be cut. After the K images are cut in this way, an overlapping area between K-1 pairs of the two adjacent reduced images is obtained.
And S105, stitching the K original images according to the K-1 overlap regions of the adjacent reduced images.
In this embodiment of the present application, the K images arranged along a specific first dimension are first reduced; the reduced images are then cut according to the computed overlap rate between each pair of adjacent images, yielding the K-1 overlap regions between adjacent reduced images; finally, the K original images are stitched according to those K-1 overlap regions. In this way, with the number of correctly matched feature points remaining almost unchanged, the cutting preprocessing effectively removes a large number of useless feature points, improving computational efficiency while preserving registration accuracy.
An image stitching method provided in an embodiment of the present application, as shown in fig. 2, executes the following steps:
step S201, K original images which are sequentially arranged according to a first dimension are obtained, wherein K is an integer which is larger than or equal to 1;
step S202, reducing each original image to obtain K reduced images which are sequentially arranged according to the first dimension;
step S203, determining the image overlapping rate between two adjacent original images in the K original images, and determining the image overlapping rate between the two adjacent original images as the target image overlapping rate; or, determining an image overlapping rate between two adjacent reduced images in the K reduced images as the target image overlapping rate.
Since the overlap ratio between images does not change due to the scaling of the images. Therefore, when calculating the overlapping ratio of two adjacent images, the calculation of the overlapping ratio may be completed using the original image or the reduced image.
Step S204, cutting two corresponding adjacent reduced images according to the overlapping rate of each target image to obtain an overlapping area between K-1 pairs of the two adjacent reduced images;
step S205, extracting by using a scale invariant feature transform algorithm and matching J of each pair of overlapping regions by using Euclidean distance p For the matched feature points, J is an integer which is more than or equal to 1, and p is an integer which is more than or equal to 1 and less than or equal to K-1;
scale-invariant feature transform (SIFT) algorithm is a description used in the field of image processing. This description has scale invariance and keypoints can be detected in the image. Euclidean distance (Euclidean metric) discrimination is a commonly used distance definition. The method can carry out feature point matching on the overlapped area between two adjacent images. Extracting J of each pair of said overlapping regions p And matching the feature points.
Step S206, compensating the coordinates of the J_p pairs of matched feature points in each reduced image using a compensation formula to obtain the J_p pairs of compensated matched feature points;
Each pixel of an image has a corresponding coordinate in its matrix. Cutting the image removes some pixels, and the coordinates of the remaining pixels in the new, cut matrix are offset relative to their coordinates before cutting. This coordinate offset must be compensated before the transformation matrix can be calculated.
Step S207, according to the J_p pairs of compensated matched feature points of each pair of reduced images, obtaining the K-1 target transformation matrices corresponding to the K reduced images using a random sample consensus algorithm and affine transformation;
random Sample Consensus (RANSAC) is an algorithm for obtaining valid Sample data by calculating mathematical model parameters of data according to a set of Sample data sets containing abnormal data. Affine transformation, also called affine mapping, refers to a geometric transformation in which one vector space is linearly transformed and then translated into another vector space.
The transformation matrix is used for splicing two adjacent images, namely, the two adjacent images with an overlapping area can be spliced by using the transformation matrix.
According to compensated J of each pair of reduced images p For the matched feature points, obtaining by using random sampling consensus algorithm and affine transformationAnd obtaining K-1 target transformation matrixes corresponding to the K reduced images.
And S208, projecting the K original images using the K-1 transformation matrices to obtain the stitched image.
Here, taking originals to be stitched left-to-right as an example, projection means placing the original images into a new matrix according to their mutual positional relationships (obtained from the transformation matrices), with the leftmost original image as reference, to form one stitched large image.
The K original images are projected using the K-1 transformation matrices to obtain the stitched image.
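A sketch of this projection step under the assumption of horizontal stitching with the leftmost image as reference; the chaining of the 2x3 affine matrices into one 3x3 homogeneous transform per image is an implementation detail not spelled out in the patent:

```python
import cv2
import numpy as np

def project(images, transforms, canvas_size):
    """Warp K original grayscale images into one mosaic, leftmost as reference.

    transforms:  K-1 2x3 affine matrices, each mapping image p+1 into image p's
                 frame (the target transformation matrices).
    canvas_size: (width, height) of the output, assumed large enough.
    """
    canvas = np.zeros((canvas_size[1], canvas_size[0]), dtype=images[0].dtype)
    canvas[:images[0].shape[0], :images[0].shape[1]] = images[0]
    chain = np.eye(3)
    for img, m in zip(images[1:], transforms):
        chain = chain @ np.vstack([m, [0.0, 0.0, 1.0]])  # accumulate into the reference frame
        warped = cv2.warpAffine(img, chain[:2], canvas_size)
        canvas = np.where(warped > 0, warped, canvas)    # naive paste; blending follows later
    return canvas
```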
In this embodiment of the present application, the overlap rate of two adjacent images may first be computed from either the original images or the reduced images. Feature points are then extracted from the reduced and cut preprocessed images with the SIFT algorithm and matched, and the matched feature points are compensated. Finally, according to the compensated matched feature points, the K-1 target transformation matrices corresponding to the K reduced images are obtained using the random sample consensus algorithm and affine transformation, and the K original images are projected using these K-1 matrices to obtain the stitched image. The cutting preprocessing thus has a screening effect: it effectively removes erroneous matched feature points from the non-overlap regions and improves matching accuracy. The scaling preprocessing shrinks the original images so that feature point extraction and matching are faster, at the cost of having to compensate the matched feature points of adjacent images. Together these steps effectively solve the problems of oversized spaceborne ScanSAR images, excessive useless feature points, and overly slow stitching.
The image stitching method provided in an embodiment of the present application executes the following steps:
S301, acquiring K original images sequentially arranged along a first dimension, where K is an integer greater than or equal to 1;
S302, reducing each original image to obtain K reduced images sequentially arranged along the first dimension;
Step S303, determining the target image overlap rate corresponding to each pair of adjacent reduced images;
Step S304, cutting each pair of adjacent reduced images according to its target image overlap rate to obtain the K-1 overlap regions between adjacent reduced images;
Step S305, equally dividing each pair of overlap regions into M parts along a second dimension to obtain the M cut images of each pair of overlap regions;
Here, M may depend on the number of cores of the central processing unit (CPU). For example, when the computer's CPU has 4 cores, M takes the value 4.
When the first dimension is physical latitude, the second dimension is physical longitude; when the first dimension is physical longitude, the second dimension is physical latitude; when the first dimension is the horizontal direction, the second dimension is the vertical direction. Each pair of overlap regions is divided equally into M parts along the second dimension to obtain M cut images per pair, which can be processed in parallel during feature point extraction and matching.
Fig. 3 is a schematic diagram of the cutting and equal division of the image overlap regions provided in an embodiment of the present application. As shown in fig. 3, 301 is a feature point in one of the M equal sub-images of image A, and 302 is a feature point in one of the M equal sub-images of image B; feature point extraction is performed on images A and B with the SIFT algorithm. Cutting out the overlap region greatly reduces the extraction of useless feature points. Extracting the feature points most likely to match from the image overlap regions with the scale-invariant feature transform algorithm both increases the stitching speed and improves matching accuracy.
S306, extracting the feature points of each of the M cut images with the scale-invariant feature transform algorithm and matching them by Euclidean distance to obtain the J_p pairs of matched feature points of the corresponding pair of overlap regions, where J_p is an integer greater than or equal to 1 and p is an integer greater than or equal to 1 and less than or equal to K-1;
When the scale-invariant feature transform algorithm is used for feature point extraction, the extraction over the M cut images can run in parallel, improving processing speed.
As shown in fig. 3, after the overlap region between two horizontally adjacent images is cut and divided into M parts, the corresponding left and right sub-regions are matched against each other.
Step S307, acquiring the original matrix corresponding to each reduced image;
When each image is processed, each pixel corresponds to one entry of a matrix; thus each image corresponds to a matrix. Here, the original matrix corresponding to each reduced image is acquired.
Step S308, obtaining the compensation formula from the number of columns in the original matrix and the overlap rate;
Here, the number of columns of the matrix is its horizontal side length.
The column coordinate value requiring compensation, Cut_A, can be expressed as formula (3):

Cut_A = C_A - [C_A * a]   (3);

where C_A is the number of columns of image A, a is the overlap rate of image A, and [·] denotes rounding down to an integer.
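A minimal sketch of this column compensation, using the illustrative names from the earlier sketches; it applies formula (3) to the matched-point coordinates measured in the cut strip of the left image:

```python
def compensate_columns(pts, c_a, a):
    """Map column coordinates from the cut strip back to the full reduced image.

    Implements formula (3): cutting removes the first Cut_A = C_A - [C_A * a]
    columns of image A, so that count is added back to every column coordinate.
    pts: (x, y) matched-point coordinates measured in the cut strip of image A.
    """
    cut_a = c_a - int(c_a * a)            # columns removed to the left of the strip
    return [(x + cut_a, y) for (x, y) in pts]
```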
Step S309, compensating the coordinates of the J_p pairs of matched feature points in each reduced image using the compensation formula to obtain the J_p pairs of compensated matched feature points;
Each pixel of an image has a corresponding coordinate in its matrix. Cutting the image removes some pixels, and the coordinates of the remaining pixels in the new, cut matrix are offset relative to their coordinates before cutting. This coordinate offset must be compensated before the transformation matrix can be calculated.
The coordinates of the J_p pairs of matched feature points in each reduced image are compensated using compensation formula (3) to obtain the J_p pairs of compensated matched feature points.
Step S310, according to the J_p pairs of compensated matched feature points of each pair of reduced images, obtaining the K-1 initial transformation matrices corresponding to the K reduced images using a random sample consensus algorithm and affine transformation;
and (3) calculating an initial transformation matrix by affine transformation on the matched feature points, wherein the initial transformation matrix is specifically expressed as a formula (4):
Figure GDA0003926983680000101
where ax and ay represent the scaling amount of the abscissa and the ordinate, bx and by represent the rotation amount and the shearing amount of the abscissa and the ordinate, and cx and cy represent the offset amount of the abscissa and the ordinate. According to the change of the feature point coordinates before and after transformation, corresponding parameters can be obtained, and further an initial matrix of the two-dimensional pixel coordinate transformation relation of two adjacent images is obtained.
The random sample consensus algorithm randomly draws 4 pairs of already-matched feature points from the two images and calculates a transformation matrix by affine transformation. Using this matrix, all feature points of image B are transformed into the coordinate system referenced to image A, the distances between the transformed coordinates of image B's feature points and the coordinates of image A's feature points are computed, feature points with a Euclidean distance of no more than 3 are selected as correct points, and the number of correct points for that transformation matrix is counted. This process is repeated 2000 times, and the transformation matrix with the most correct points is selected as the best one, completing the screening and improving the accuracy of the transformation matrix.
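A sketch of this screening loop as described: 4 random pairs per trial, 2000 trials, and a 3-pixel inlier threshold; the least-squares affine fit is one straightforward way to realize the per-trial affine estimate:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2x3 affine from >=3 point pairs, so that dst ~ A @ src + t."""
    x = np.hstack([src, np.ones((len(src), 1))])        # (n, 3) homogeneous sources
    sol, *_ = np.linalg.lstsq(x, dst, rcond=None)       # (3, 2) stacked [A | t]
    return sol.T                                        # 2x3 matrix [A, t]

def ransac_affine(pts_a, pts_b, iters=2000, thresh=3.0):
    """Select the best affine matrix mapping image B points onto image A.

    Per the description: 4 random matched pairs per trial, 2000 trials, and
    matches with a Euclidean reprojection error of at most 3 pixels count as
    correct points; the matrix with the most correct points wins.
    """
    pts_a = np.asarray(pts_a, dtype=float)
    pts_b = np.asarray(pts_b, dtype=float)
    rng = np.random.default_rng()
    best_m, best_count = None, -1
    for _ in range(iters):
        idx = rng.choice(len(pts_b), size=4, replace=False)
        m = fit_affine(pts_b[idx], pts_a[idx])
        proj = pts_b @ m[:, :2].T + m[:, 2]             # B points in A's frame
        count = int(np.sum(np.linalg.norm(proj - pts_a, axis=1) <= thresh))
        if count > best_count:
            best_m, best_count = m, count
    return best_m
```

OpenCV's cv2.estimateAffine2D offers a built-in RANSAC variant; the explicit loop above simply mirrors the screening procedure as described.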
Step S311, obtaining the offset between each original image and its reduced image;
First, the side length cols0 of the stitched original images is calculated using the initial transformation matrix;
then, the side length cols1 of the stitched reduced images is calculated using the initial transformation matrix;
then, the offset that the scaling causes in the initial transformation matrix is calculated by formula (5):
offset = cols1 / N - cols0   (5);
where N is the scaling factor of the original image's side length.
Step S312, compensating each initial transformation matrix by the offset to obtain the target transformation matrices;
Here, the case in which the image side length is reduced to one half of the original side length is taken as an example: matrix (6) is the target transformation matrix obtained from the reduced image after offset compensation, and matrix (7) is the transformation matrix calculated from the original image.

[Matrices (6) and (7): numeric values given as images in the original publication]

As can be seen from matrices (6) and (7), after offset compensation the target matrix calculated from the reduced image has very little error and is almost identical to the transformation matrix calculated from the original image.
And S313, projecting the K original images using the K-1 transformation matrices to obtain the stitched image.
In this embodiment of the present application, each overlap region is cut into M parts and the M cut regions are processed in parallel to obtain the matched feature points; a compensation formula is provided to compensate the coordinates of the matched feature points; an initial transformation matrix is then obtained using the random sample consensus algorithm and affine transformation; an offset formula is provided to calculate the offset between each original image and its reduced image; and finally the initial transformation matrix is compensated by the calculated offset to obtain the target transformation matrix. The M cut regions can thus be accelerated in parallel during feature point extraction and matching, improving processing efficiency, and the target matrix obtained by compensating the initial transformation matrix with the offset formula can accurately register two adjacent images.
The image stitching method provided in an embodiment of the present application executes the following steps:
Step S320, acquiring K original images sequentially arranged along a first dimension, where K is an integer greater than or equal to 1;
S321, reducing each original image to obtain K reduced images sequentially arranged along the first dimension;
Step S322, determining the target image overlap rate corresponding to each pair of adjacent reduced images;
Step S323, cutting each pair of adjacent reduced images according to its target image overlap rate to obtain the K-1 overlap regions between adjacent reduced images;
Step S324, extracting feature points with the scale-invariant feature transform algorithm and matching them by Euclidean distance to obtain the J_p pairs of matched feature points of each pair of overlap regions, where J_p is an integer greater than or equal to 1 and p is an integer greater than or equal to 1 and less than or equal to K-1;
Step S325, compensating the coordinates of the J_p pairs of matched feature points in each reduced image using the compensation formula to obtain the J_p pairs of compensated matched feature points.
Step S326, according to the J_p pairs of compensated matched feature points of each pair of reduced images, obtaining the K-1 initial transformation matrices corresponding to the K reduced images using the random sample consensus algorithm and affine transformation;
step S327, obtaining I of each pair of original images j Angular point coordinates of where I j Is an integer of 2 or more, j is an integer of 1 or more and K or less;
here, I j For four corner points of each pair of original images, or for three corner points of each pair of original images. I is j A minimum of 2, i.e. a minimum of 2 angular point coordinates, needs to be taken.
Step S328, obtaining I of each pair of the reduced images j Coordinates of the angular points;
as step S327, it is also necessary to acquire I for each pair of reduced images j Angular point coordinates.
Step S329, according to the I_j corner point coordinates of each pair of original images, stitching the corner points of each pair of original images using the initial transformation matrix to obtain the side length of the pre-stitched original images;
To obtain the side length of the pre-stitched original images, the I_j corner point coordinates of each pair of original images are computed through the initial transformation matrix.
Step S330, according to the I_j corner point coordinates of each pair of reduced images, stitching the corner points of each pair of reduced images using the initial transformation matrix to obtain the side length of the pre-stitched reduced images;
Likewise, to obtain the side length of the pre-stitched reduced images, the I_j corner point coordinates of each pair of reduced images are computed through the initial transformation matrix.
Step S331, dividing the side length of the pre-stitched reduced images by the scaling factor of the original side length, then subtracting the side length of the pre-stitched original images, to obtain the offset between each original image and its reduced image.
Here, the offset is calculated by formula (5):
offset = cols1 / N - cols0   (5);
where cols0 is the side length of the stitched original images calculated with the initial transformation matrix; cols1 is the side length of the stitched reduced images calculated with the initial transformation matrix; N is the scaling factor of the original image's side length; and offset is the computed offset that the scaling causes in the initial transformation matrix.
S332, compensating each initial transformation matrix by the offset to obtain the target transformation matrices;
Each initial transformation matrix is compensated by the computed offset that the scaling causes, yielding the target transformation matrices.
And S333, projecting the K original images using the K-1 transformation matrices to obtain the stitched image.
This embodiment of the present application describes how to select suitable corner points to pre-stitch the original images and the reduced images separately, how to calculate the side lengths of the pre-stitched original and reduced images, and finally how to compute the offset with the offset formula provided herein. The target matrix obtained by compensating the initial transformation matrix with this offset can then accurately register two adjacent images.
Synthetic aperture radar (SAR) is an active remote sensor operating in the microwave band: it actively transmits microwaves and receives their echoes. Compared with optical sensors, synthetic aperture radar is not limited by sunlight or weather conditions and can observe the Earth all day, in all weather, and in all directions, giving it important applications in modern microwave remote sensing.
ScanSAR is a synthetic aperture radar operating mode developed on the basis of SAR. The scanning angle of view is changed within a preset angular range in a certain sequence, greatly widening the observation swath. This is very important for observing rapidly changing, large-scale surface phenomena, such as surveying and monitoring natural surface disasters (vegetation growth changes, floods, and the like) and large-scale marine phenomena.
ScanSAR changes the antenna elevation angle within one synthetic aperture time, switching beams and scanning several times along the range direction to acquire multiple strips of data, each called a Burst. In the spaceborne ScanSAR operating mode, the Bursts have certain overlap regions in both the azimuth and range directions. Conventional image stitching joins multiple adjacent images that have overlap regions but were illuminated at different times and from different viewing angles into one wide scene image without stitching gaps.
In existing spaceborne ScanSAR image stitching, the overlap region between adjacent images stays between 10% and 40%, and the feature points that can be matched must come from the overlap region. In other words, when feature points are extracted over the whole image, most of them are invalid and do not help image matching; this fails to improve matching accuracy while greatly increasing the time consumed extracting and matching useless feature points, reducing efficiency. The method provided in the embodiments of the present application effectively solves this problem and greatly accelerates the stitching of spaceborne ScanSAR images.
Fig. 4 is a schematic flowchart of the implementation of another image stitching method provided in an embodiment of the present application. As shown in fig. 4, the method includes the following steps:
S401, acquiring K original images sequentially arranged along a first dimension, where K is an integer greater than or equal to 1;
Step S402, calculating the overlap rate of each image among the K original images;
the overlap ratio is a ratio representing how much the overlapping area of the graph occupies in the image. This ratio does not change with the scaling of the image and the overlap ratio of the original image is still applicable to the scaled down image. The method for calculating the overlapping rate is as shown in fig. 1A, the overlapping rate of the overlapping area of the a diagram and the B diagram occupying the a diagram can be calculated by using formula (1), and the overlapping rate of the overlapping area of the B diagram and the a diagram occupying the B diagram can be calculated by using formula (2).
S403, reducing the side length of each image to N times that of the original image to obtain the reduced images;
Here, N is preferably 0.5: reducing the image too far greatly degrades the calculation accuracy of the transformation matrix offset.
S404, cutting each reduced image according to its overlap rate to obtain the overlap region of each image with its adjacent image;
Because the overlap rate between images does not change with scaling, the overlap rate here is calculated from the original images.
Take images to be stitched left-to-right as an example: the leftmost image overlaps only its right neighbor, so only the overlap region on its right side needs to be cut out; the rightmost image overlaps only its left neighbor, so only the overlap region on its left side needs to be cut out; a middle image overlaps both neighbors, so overlap regions on both sides must be cut out.
Acquiring the overlap regions, again taking left-right stitched images as an example: the overlap rate between two adjacent images is calculated from the actual geographic coordinates (known information) of the four corner points of each image, and the overlap region of each image with its neighbor is then determined from that overlap rate (an approximate range; precision is not required).
Step S405, equally dividing the overlap region between each reduced image and its adjacent reduced image into M parts along the direction perpendicular to the stitching;
M may depend on the number of cores of the central processing unit (CPU) and is used for parallel accelerated processing during feature point extraction and matching.
S406, extracting feature points simultaneously from each cut overlap region to be matched using the SIFT algorithm, and matching them to obtain the matched feature points of each pair of reduced images;
The scale-invariant feature transform (SIFT) algorithm is a descriptor used in the field of image processing; the descriptor is scale-invariant and can detect key points in an image.
The matching method, taking left-right stitched images as an example: as shown in fig. 3, after the overlap region between two horizontally adjacent images is cut and divided into M parts, the corresponding left and right sub-regions are matched against each other.
The matching uses the Euclidean distance criterion to match the feature points of the overlap region between two adjacent images.
Fig. 5 shows an image stitching result after image preprocessing and one without image preprocessing, provided in an embodiment of the present application. In fig. 5, 51 is the left image and 52 the right image of the two spaceborne ScanSAR images to be transversely stitched that are used for verification. Table 1 records the number of feature points extracted and the number of matched feature point pairs for the two images (51 and 52 in fig. 5) without cutting preprocessing, and the corresponding numbers after cutting preprocessing. As table 1 shows, without overlap region cutting preprocessing, the SIFT algorithm extracts far more feature points than after preprocessing: for 51 and 52 in fig. 5, about 8 to 9 times as many (the multiple depends on the images' own overlap rate; the smaller the overlap rate, the larger the multiple). Before and after cutting preprocessing, the number of feature points that can be successfully matched is almost the same, and only matched feature points help image stitching. Therefore, with the number of correctly matched feature points essentially unchanged, the cutting preprocessing effectively removes a large number of useless feature points, improving computational efficiency while preserving registration accuracy.
Table 1. Comparison of the number of extracted feature points and the number of matched feature points before and after cutting preprocessing
[Table 1: values given as an image in the original publication]
Step S407, compensating the coordinates of the matched feature points in each reduced image using the compensation formula;
Each pixel of an image has a corresponding coordinate in its matrix. Cutting the image removes some pixels, and the coordinates of the remaining pixels in the new, cut matrix are offset relative to their coordinates before cutting. This coordinate offset must be compensated before the transformation matrix can be calculated.
The column coordinate value requiring compensation, Cut_A, can be expressed as:

Cut_A = C_A - [C_A * a]   (3);

where C_A is the number of columns of image A, a is the overlap rate of image A, and [·] denotes rounding down to an integer.
Step S408, obtaining the initial transformation matrix from the matched feature points in each reduced image using the RANSAC algorithm and affine transformation;
Random sample consensus (RANSAC) is an algorithm that estimates the parameters of a mathematical model from a set of sample data containing outliers in order to obtain valid sample data.
Affine transformation, also called affine mapping, is a geometric transformation in which one vector space is linearly transformed and then translated into another vector space.
An initial transformation matrix is calculated from the matched feature points by affine transformation, expressed as formula (4) (reconstructed in the standard homogeneous form consistent with the parameter description below):

    | ax  bx  cx |
    | by  ay  cy |   (4)
    |  0   0   1 |

where ax and ay represent the scaling of the abscissa and ordinate, bx and by represent the rotation and shear of the abscissa and ordinate, and cx and cy represent the offsets of the abscissa and ordinate. From the change in feature point coordinates before and after the transformation, the corresponding parameters can be obtained, giving the matrix of the two-dimensional pixel-coordinate transformation between two adjacent images.
For example, 4 pairs of feature points are randomly drawn from the two images that have been matched, and a transformation matrix is calculated by affine transformation. Using this matrix, all feature points of image B are transformed into the coordinate system referenced to image A, the distances between the transformed coordinates of image B's feature points and those of image A's feature points are computed, feature points with a Euclidean distance of no more than 3 are selected as correct points, and the number of correct points for that transformation matrix is counted. The process is repeated 2000 times, and the transformation matrix with the most correct points is designated the optimal transformation matrix, completing the screening and improving the accuracy of the transformation matrix.
After stitching is complete, matched feature point pairs whose Euclidean distance is no greater than 3 are designated correct points; otherwise they are error points. Dividing the number of correct points by the total number of matched feature points gives the matched feature point accuracy. Table 2 records the matched feature point accuracy without cutting preprocessing and with cutting preprocessing.
Table 2. Comparison of matched feature point accuracy before and after cutting preprocessing

                                 Without cutting preprocessing   With cutting preprocessing
Matched feature point accuracy   95.48%                          99.03%
As can be seen from table 2, although more matched feature points are obtained without the cutting preprocessing, most of the extra ones are mismatched and do not help stitching accuracy. The cutting preprocessing therefore also has a screening effect: it effectively removes erroneous matched feature points from the non-overlap regions, improves the accuracy of the matched feature points, and benefits the accuracy of image stitching.
Step S409, calculating the offset between each original image and its reduced image using the initial transformation matrix;
First, the side length cols0 of the stitched original images is calculated using the initial transformation matrix; then the side length cols1 of the stitched reduced images is calculated using the initial transformation matrix; finally, the offset that the scaling causes in the initial transformation matrix is calculated by formula (5):
offset = cols1 / N - cols0   (5);
S410, compensating the initial transformation matrix by the offset to obtain the target transformation matrix;
The initial transformation matrix is compensated by the offset: the translation term cx in the initial matrix is compensated to cx + offset, giving the target transformation matrix.
The offset refers to the displacement of all pixel positions of the image, i.e., the position error of the pixels. Applying a transformation matrix computed from the reduced images directly to the original images would introduce errors into the stitched pixel coordinates. Once the offset is compensated into the initial transformation matrix, the resulting target transformation matrix shifts the images as a whole during stitching, eliminating the error caused by computing with reduced images.
Here, the case in which the image side length is reduced to one half of the original is taken as an example; matrices (6) and (7) below compare the offset-compensated target transformation matrix with the transformation matrix corresponding to the original image, where (6) is the compensated matrix from the reduced image and (7) is the matrix calculated from the original image.

[Matrices (6) and (7): numeric values given as images in the original publication]

As can be seen from matrices (6) and (7), the target transformation matrix has very little error and is almost identical to the matrix calculated from the original image. Therefore, the scaling preprocessing can shrink the original image, greatly accelerating feature point extraction and matching, with only an offset compensation of the transformation matrix required before image projection.
S411, projecting the original images to be stitched using the target transformation matrices to obtain the stitched large image;
Here, taking left-right stitched images as an example, projection means placing the images to be stitched into a new matrix according to their mutual positional relationships (obtained from the transformation matrices), with the leftmost image as reference, forming one stitched large image.
And S412, smoothing the transition across the overlap region between each pair of stitched images using a weighted average algorithm to eliminate the stitching seams and obtain a complete, wide spaceborne ScanSAR image.
Projection simply places each image at its stitching location. After projection, the pixel values in the overlap regions between adjacent images are superimposed; in other words, the overlap regions become bright. In this step, a weighted average algorithm is applied to the pixel values in the image overlap regions to remove the superposition effect and achieve a smooth transition of pixel values between adjacent images.
The purpose of the weighted average processing is that, within the overlap region of two images, image A carries greater weight near image A and image B carries greater weight near image B. No conspicuous brightness change then occurs, and no visible stitching strip or gap appears, eliminating the stitching gaps or strips and yielding a complete stitched spaceborne ScanSAR image.
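A minimal sketch of such a weighted average over a horizontal overlap strip; the linear (feathering) weight profile is an assumption, since the patent only requires that the weights favor each image near its own side:

```python
import numpy as np

def feather_blend(strip_a, strip_b, width):
    """Linearly weighted transition across a horizontal overlap of `width` columns.

    strip_a, strip_b: equal-sized grayscale views of the same overlap region,
    taken from image A (left) and image B (right). The weight on A falls from
    1 to 0 going left to right while B's rises symmetrically, so pixel values
    change smoothly and no visible stitch seam remains.
    """
    w = np.linspace(1.0, 0.0, width)[None, :]           # per-column weight for A
    blended = strip_a.astype(float) * w + strip_b.astype(float) * (1.0 - w)
    return blended.astype(strip_a.dtype)
```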
A comparison of image stitching speed before and after the scaling and cutting preprocessing is given in table 3, which records the time required for image stitching without scaling and cutting preprocessing and the time required after it.
Table 3. Comparison of image stitching time before and after scaling and cutting preprocessing

                                 Without scaling and cutting preprocessing   With scaling and cutting preprocessing
Image stitching time (seconds)   263.29                                      14.83

The preprocessed images are thus stitched far faster, and the preprocessing method has clear advantages for stitching large images.
Fig. 5 shows an image stitching result after image preprocessing and one without image preprocessing, provided in an embodiment of the present application, where 51 is the left image to be stitched, 52 the right image to be stitched, 53 the overlap portion of 51 and 52 shown on 51, 54 the overlap portion of 51 and 52 shown on 52, 55 the stitching result after image preprocessing, and 56 the stitching result without image preprocessing. Comparing 55 with 56 shows that the image preprocessing method improves processing speed without affecting the stitching result.
The embodiments of the present application thus combine preprocessing by image scaling and overlap region cutting, feature point extraction and matching based on the scale-invariant feature transform algorithm, transformation matrix calculation based on affine transformation and the random sample consensus algorithm, image projection based on scaling compensation of the transformation matrix, and image fusion based on a weighted average algorithm. This solves the problems of oversized images, excessive feature points, and overly slow matching in ScanSAR image stitching.
Based on the foregoing embodiments, an image stitching apparatus is provided in the embodiments of the present application. The modules included in the apparatus, and the units included in the modules, may be implemented by a processor in an image stitching device, or of course by specific logic circuits. In implementation, the processor may be a central processing unit (CPU), a microprocessor (MPU), a digital signal processor (DSP), a field-programmable gate array (FPGA), or the like.
Fig. 6 is a schematic structural diagram of an image stitching apparatus according to an embodiment of the present application, and as shown in fig. 6, the apparatus 600 includes an obtaining module 601, a reducing module 602, a determining module 603, a cutting module 604, and a stitching module 605, where:
an obtaining module 601, configured to obtain K original images sequentially arranged according to a first dimension, where K is an integer greater than or equal to 1;
a reducing module 602, configured to reduce each original image to obtain K reduced images sequentially arranged according to the first dimension;
a determining module 603, configured to determine a target image overlapping rate corresponding to two adjacent reduced images;
a cutting module 604, configured to cut two corresponding adjacent reduced images according to an overlap ratio of each target image, so as to obtain an overlap area between K-1 pairs of the two adjacent reduced images;
and a stitching module 605, configured to stitch the K original images according to the overlapping area of the K-1 pair of the two adjacent reduced images.
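Purely as an illustrative sketch, and not the implementation of this application, the five modules of apparatus 600 could be organized as one class whose methods mirror modules 601 to 605; every name below is an assumption, and the method bodies are deliberately left abstract.

```python
from typing import List, Tuple
import numpy as np

class ImageStitchingApparatus:
    """Skeleton mirroring apparatus 600; each method corresponds to
    one module and would carry the logic of the method embodiments."""

    def acquire(self, sources: List[str]) -> List[np.ndarray]:
        """Obtaining module 601: load the K original images in order
        along the first dimension."""
        raise NotImplementedError

    def reduce(self, originals: List[np.ndarray],
               factor: float) -> List[np.ndarray]:
        """Reducing module 602: downscale each original image."""
        raise NotImplementedError

    def determine_overlap(self,
                          reduced: List[np.ndarray]) -> List[float]:
        """Determining module 603: target overlap rate for each pair
        of adjacent reduced images."""
        raise NotImplementedError

    def cut(self, reduced: List[np.ndarray],
            rates: List[float]) -> List[Tuple[np.ndarray, np.ndarray]]:
        """Cutting module 604: crop the K-1 pairs of overlap regions."""
        raise NotImplementedError

    def stitch(self, originals: List[np.ndarray],
               overlaps: List[Tuple[np.ndarray, np.ndarray]]) -> np.ndarray:
        """Stitching module 605: register via the overlap regions and
        splice the K original images into one image."""
        raise NotImplementedError
```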
The above description of the apparatus embodiment is similar to that of the method embodiments and provides similar beneficial effects. For technical details not disclosed in the apparatus embodiment of the present application, refer to the description of the method embodiments of the present application.
It should be noted that, in the embodiment of the present application, if the image stitching method is implemented in the form of a software functional module and is sold or used as a standalone product, it may also be stored in a computer-readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for enabling an image stitching apparatus (which may be a tablet computer, a notebook computer, a desktop computer, a server cluster, etc.) to execute all or part of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read Only Memory (ROM), a magnetic disk, or an optical disk. Thus, embodiments of the present application are not limited to any specific combination of hardware and software.
Correspondingly, an embodiment of the present application provides an image stitching device. Fig. 7 is a schematic diagram of the hardware entity of the image stitching device according to the embodiment of the present application. As shown in Fig. 7, the hardware entity of the device 700 includes a memory 701 and a processor 702, where the memory 701 stores a computer program operable on the processor 702, and the processor 702, when executing the program, implements the steps of the image stitching method provided in the above embodiments.
The Memory 701 is configured to store instructions and applications executable by the processor 702, and may also buffer data (e.g., image data, audio data, voice communication data, and video communication data) to be processed or already processed by each module in the apparatus 700 for image stitching and the processor 702, and may be implemented by a FLASH Memory (FLASH) or a Random Access Memory (RAM).
Correspondingly, the present application provides a computer readable storage medium, on which a computer program is stored, which when executed by a processor implements the steps in the method for image stitching provided in the above embodiments.
Here, it should be noted that the above description of the storage medium and device embodiments is similar to that of the method embodiments and provides similar beneficial effects. For technical details not disclosed in the storage medium and device embodiments of the present application, refer to the description of the method embodiments of the present application.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in the various embodiments of the present application, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application. The above-mentioned serial numbers of the embodiments of the present application are merely for description, and do not represent the advantages and disadvantages of the embodiments.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional like elements in a process, method, article, or apparatus that comprises the element.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; can be located in one place or distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may be separately regarded as one unit, or two or more units may be integrated into one unit; the integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that: all or part of the steps for realizing the method embodiments can be completed by hardware related to program instructions, the program can be stored in a computer readable storage medium, and the program executes the steps comprising the method embodiments when executed; and the aforementioned storage medium includes: various media that can store program codes, such as a removable Memory device, a Read Only Memory (ROM), a magnetic disk, or an optical disk.
Alternatively, the integrated units described above in the present application may be stored in a computer-readable storage medium if they are implemented in the form of software functional modules and sold or used as independent products. Based on such understanding, the technical solutions of the embodiments of the present application or portions thereof that contribute to the related art may be embodied in the form of a software product, where the computer software product is stored in a storage medium and includes several instructions for enabling an apparatus for image stitching to execute all or part of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a removable storage device, a ROM, a magnetic or optical disk, or other various media that can store program code.
The methods disclosed in the several method embodiments provided in the present application may be combined arbitrarily without conflict to obtain new method embodiments.
The features disclosed in the several product embodiments presented in this application can be combined arbitrarily, without conflict, to arrive at new product embodiments.
The features disclosed in the several method or apparatus embodiments provided in the present application may be combined arbitrarily, without conflict, to arrive at new method embodiments or apparatus embodiments.
The above description is only for the embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (8)

1. A method of image stitching, the method comprising:
acquiring K original images which are sequentially arranged according to a first dimension, wherein K is an integer which is more than or equal to 1;
reducing each original image to obtain K reduced images which are sequentially arranged according to the first dimension;
determining the overlapping rate of the target images corresponding to the two adjacent reduced images;
cutting two corresponding adjacent reduced images according to the overlapping rate of each target image to obtain an overlapping area between K-1 pairs of the two adjacent reduced images;
extracting, by using a scale-invariant feature transform algorithm, and matching, by using the Euclidean distance, J_p pairs of matched feature points for each pair of the overlapping regions, wherein J is an integer greater than or equal to 1, and p is an integer greater than or equal to 1 and less than or equal to K-1;
compensating, by using a compensation formula, the coordinates of the J_p pairs of matched feature points in each of the reduced images to obtain compensated J_p pairs of matched feature points;
obtaining, according to the compensated J_p pairs of matched feature points of each pair of the reduced images, K-1 target transformation matrices corresponding to the K reduced images by using a random sample consensus algorithm and affine transformation;
and projecting the K original images by using the K-1 target transformation matrices to obtain a stitched image.
2. The method according to claim 1, wherein the determining the target image overlapping ratio corresponding to two adjacent reduced images comprises:
determining the image overlapping rate between two adjacent original images in the K original images, and determining the image overlapping rate between the two adjacent original images as the target image overlapping rate; or,
and determining the image overlapping rate between two adjacent reduced images in the K reduced images as the target image overlapping rate.
3. The method of claim 1, wherein the extracting, by using a scale-invariant feature transform algorithm, and matching, by using the Euclidean distance, J_p pairs of matched feature points for each pair of the overlapping regions comprises:
equally dividing each pair of the overlapping regions into M parts along a second dimension to obtain M cut images for each pair of the overlapping regions;
extracting feature points of each of the M cut images by using a scale-invariant feature transform algorithm and matching them by using the Euclidean distance to obtain the J_p pairs of matched feature points of the corresponding pair of the overlapping regions.
4. The method of claim 1, wherein the compensating, by using a compensation formula, the coordinates of the J_p pairs of matched feature points in each reduced image to obtain compensated J_p pairs of matched feature points comprises:
acquiring an original matrix corresponding to each reduced image;
obtaining the compensation formula according to the number of columns of the original matrix and the overlapping rate;
compensating, by using the compensation formula, the coordinates of the J_p pairs of matched feature points in each reduced image to obtain the compensated J_p pairs of matched feature points.
5. The method of claim 1, wherein the obtaining, according to the compensated J_p pairs of matched feature points of each pair of the reduced images, K-1 target transformation matrices corresponding to the K reduced images by using a random sample consensus algorithm and affine transformation comprises:
obtaining, according to the compensated J_p pairs of matched feature points of each pair of the reduced images, K-1 initial transformation matrices corresponding to the K reduced images by using a random sample consensus algorithm and affine transformation;
obtaining the offset between each original image and its reduced image;
and compensating each initial transformation matrix according to the offset to obtain a target transformation matrix.
6. The method of claim 5, wherein the obtaining the offset between each original image and its reduced image comprises:
acquiring I_j corner point coordinates of each pair of the original images, wherein I_j is an integer greater than or equal to 2 and less than or equal to 4, and j is an integer greater than or equal to 1 and less than or equal to 4;
acquiring I_j corner point coordinates of each pair of the reduced images;
stitching, according to the I_j corner point coordinates of each pair of the original images, each pair of the original images by using the initial transformation matrix to obtain the side length of the pre-stitched original image;
stitching, according to the I_j corner point coordinates of each pair of the reduced images, each pair of the reduced images by using the initial transformation matrix to obtain the side length of the pre-stitched reduced image;
and dividing the side length of the pre-stitched reduced image by the scaling multiple relative to the side length of the original image, and subtracting the side length of the pre-stitched original image, to obtain the offset between each original image and its reduced image.
7. An apparatus for image stitching, the apparatus comprising:
the acquisition module is used for acquiring K original images which are sequentially arranged according to a first dimension, wherein K is an integer greater than or equal to 1;
the reducing module is used for reducing each original image to obtain K reduced images which are sequentially arranged according to the first dimension;
the determining module is used for determining the overlapping rate of the target images corresponding to the two adjacent reduced images;
the cutting module is used for cutting the two corresponding adjacent reduced images according to the overlapping rate of each target image to obtain an overlapping area between the K-1 pair of the two adjacent reduced images;
the extraction module is used for extracting, by using a scale-invariant feature transform algorithm, and matching J_p pairs of matched feature points for each pair of the overlapping regions, wherein J is an integer greater than or equal to 1, and p is an integer greater than or equal to 1 and less than or equal to K-1;
the compensation module is used for compensating, by using a compensation formula, the coordinates of the J_p pairs of matched feature points in each of the reduced images to obtain compensated J_p pairs of matched feature points;
the transformation module is used for obtaining, according to the compensated J_p pairs of matched feature points of each pair of the reduced images, K-1 target transformation matrices corresponding to the K reduced images by using a random sample consensus algorithm and affine transformation;
and the projection module is used for projecting the K original images by using the K-1 target transformation matrices to obtain a stitched image.
8. An apparatus for image stitching, comprising a memory and a processor, the memory storing a computer program operable on the processor, wherein the processor executes the program to perform the steps of the method for image stitching according to any one of claims 1 to 6.
CN202010312548.5A 2020-04-20 2020-04-20 Image splicing method, device, equipment and storage medium Active CN111709876B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010312548.5A CN111709876B (en) 2020-04-20 2020-04-20 Image splicing method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010312548.5A CN111709876B (en) 2020-04-20 2020-04-20 Image splicing method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111709876A CN111709876A (en) 2020-09-25
CN111709876B true CN111709876B (en) 2023-02-03

Family

ID=72536673

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010312548.5A Active CN111709876B (en) 2020-04-20 2020-04-20 Image splicing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111709876B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113223065B (en) * 2021-03-30 2023-02-03 西南电子技术研究所(中国电子科技集团公司第十研究所) Automatic matching method for SAR satellite image and optical image
CN115625435B (en) * 2022-11-14 2023-05-26 广东瑞洲科技有限公司 Intelligent identification cutting method, device and system
CN116909240B (en) * 2023-09-13 2023-11-17 深圳市今天国际智能机器人有限公司 Loading and unloading vehicle dynamic path planning method and system of AGV trolley and related medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104614725A (en) * 2015-01-21 2015-05-13 中国科学院电子学研究所 Scanning synthetic aperture radar image quality improving method and device
WO2016169699A1 (en) * 2015-04-23 2016-10-27 Forest Vision As A system, an apparatus and a method for determining mass change in a study area using remote sensing data
CN110211076A (en) * 2019-05-09 2019-09-06 上海联影智能医疗科技有限公司 Image split-joint method, image mosaic device and readable storage medium storing program for executing
CN110208798A (en) * 2019-05-27 2019-09-06 西安空间无线电技术研究所 A kind of spaceborne mosaic SAR image processing method of high score wide cut and system
CN110515078A (en) * 2019-07-27 2019-11-29 西南电子技术研究所(中国电子科技集团公司第十研究所) Beam position design method for airspace covering

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104614725A (en) * 2015-01-21 2015-05-13 中国科学院电子学研究所 Scanning synthetic aperture radar image quality improving method and device
EP2985627A2 (en) * 2015-01-21 2016-02-17 Institute of Electronics, Chinese Academy of Sciences Method and device for improving quality of scansar image
WO2016169699A1 (en) * 2015-04-23 2016-10-27 Forest Vision As A system, an apparatus and a method for determining mass change in a study area using remote sensing data
CN110211076A (en) * 2019-05-09 2019-09-06 上海联影智能医疗科技有限公司 Image split-joint method, image mosaic device and readable storage medium storing program for executing
CN110208798A (en) * 2019-05-27 2019-09-06 西安空间无线电技术研究所 A kind of spaceborne mosaic SAR image processing method of high score wide cut and system
CN110515078A (en) * 2019-07-27 2019-11-29 西南电子技术研究所(中国电子科技集团公司第十研究所) Beam position design method for airspace covering

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Research on Spaceborne SAR Mosaic Mode Based on Azimuth Multichannel; Yu Weidong et al.; Journal of Electronics & Information Technology; 2014-12-15 (No. 12); pp. 2994-3000 *
Optimization of a Stitching Algorithm for UAV Aerial Images; Wu Chengdong et al.; Journal of Shenyang Jianzhu University (Natural Science Edition); 2015-01-15 (No. 01); pp. 182-192 *

Also Published As

Publication number Publication date
CN111709876A (en) 2020-09-25

Similar Documents

Publication Publication Date Title
CN111709876B (en) Image splicing method, device, equipment and storage medium
AU2017100064A4 (en) Image Stitching
CN110827200B (en) Image super-resolution reconstruction method, image super-resolution reconstruction device and mobile terminal
KR101165523B1 (en) Geospatial modeling system and related method using multiple sources of geographic information
CN111598993B (en) Three-dimensional data reconstruction method and device based on multi-view imaging technology
JP5216834B2 (en) Object display device and object display method
CN111179358A (en) Calibration method, device, equipment and storage medium
CN108986152B (en) Foreign matter detection method and device based on difference image
KR20110120317A (en) Registration of 3d point cloud data to 2d electro-optical image data
US20150104097A1 (en) Image processing apparatus and image processing method
Gonçalves et al. CHAIR: Automatic image registration based on correlation and Hough transform
CN114155150A (en) Image stitching method and device applied to large parallax scene
CN115236655A (en) Landslide identification method, system, equipment and medium based on fully-polarized SAR
CN115035235A (en) Three-dimensional reconstruction method and device
CN112233062A (en) Surface feature change detection method, electronic device, and storage medium
CN113077523B (en) Calibration method, calibration device, computer equipment and storage medium
CN112288813B (en) Pose estimation method based on multi-view vision measurement and laser point cloud map matching
CN113223176A (en) Method and device for acquiring multi-dimensional pipeline characteristic parameters
CN115176284A (en) Image processing apparatus, image processing method, and image processing program
CN116977671A (en) Target tracking method, device, equipment and storage medium based on image space positioning
CN111738061A (en) Binocular vision stereo matching method based on regional feature extraction and storage medium
Sustika et al. Generative adversarial network with residual dense generator for remote sensing image super resolution
CN116091998A (en) Image processing method, device, computer equipment and storage medium
CN115661689A (en) Red tide region determining method, storage medium and electronic device
EP3839882B1 (en) Radiometric correction in image mosaicing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant