CN113723465B - Improved feature extraction method and image stitching method based on same - Google Patents


Info

Publication number
CN113723465B
Authority
CN
China
Prior art keywords
image
feature
images
algorithm
feature extraction
Prior art date
Legal status
Active
Application number
CN202110883189.3A
Other languages
Chinese (zh)
Other versions
CN113723465A (en)
Inventor
石振锋
张萌菲
张孟琦
Current Assignee
Harbin Institute of Technology
Original Assignee
Harbin Institute of Technology
Priority date
Filing date
Publication date
Application filed by Harbin Institute of Technology
Priority to CN202110883189.3A
Publication of CN113723465A
Application granted
Publication of CN113723465B
Status: Active

Classifications

    • G06F18/2113 Selection of the most significant subset of features by ranking or filtering the set of features, e.g. using a measure of variance or of feature cross-correlation
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T7/33 Determination of transform parameters for the alignment of images (image registration) using feature-based methods
    • G06T2207/20221 Image fusion; image merging


Abstract

An improved feature extraction method, and an image stitching method based on it, belong to the technical field of digital image processing. The invention increases the stitching speed of aerial images without reducing stitching precision. The feature extraction method uses the FAST-9 algorithm to extract features from the images to be stitched, I_A and I_B, obtains feature points with Harris corner detection, and obtains binary feature strings with an improved BRISK algorithm, realizing feature extraction. The image stitching method combines the ORB-based image stitching algorithm with the characteristics of the aerial images returned by an unmanned aerial vehicle, giving an improved, fast ORB-based stitching algorithm that obtains a panoramic image quickly and efficiently; simulation experiments verify that it effectively increases the stitching speed of several stitching algorithms without loss of accuracy. The invention can be widely applied to the stitching of aerial images, and a global image of the photographed region can be obtained quickly and accurately.

Description

Improved feature extraction method and image stitching method based on same
Technical Field
The invention belongs to the technical field of digital image processing, and particularly relates to an improved feature extraction algorithm and an image stitching algorithm based on it.
Background
In the harsh environment after a disaster, acquiring on-site image information is dangerous and slow. Images acquired with an ordinary camera tend to have a small field of view, so when the scene to be photographed is large the acquired images have low resolution, while panoramic cameras or wide-angle lenses often introduce distortion. Unmanned aerial vehicle (UAV) photography is flexible and maneuverable, and is therefore well suited to on-site image acquisition after a disaster. When a disaster occurs, rapidly acquiring global image information of the affected area and grasping the overall distribution of damage is very important for subsequent rescue work.
Global image information depends on image stitching technology, a long-standing concern in image processing and computer vision. Image stitching rests on two key techniques: image registration and image fusion. Image registration aligns two images obtained under different shooting conditions in the same coordinate system; common registration methods fall into two families, area-based and feature-based. Image fusion mainly addresses visible seams caused by uneven illumination and similar effects.
The Harris corner detection algorithm is one of the classic early feature detection algorithms, but it is not scale invariant (Smith S M, Brady J M. SUSAN: A New Approach to Low Level Image Processing [J]. International Journal of Computer Vision, 1997, 23(1): 45-78.). The SIFT (Scale Invariant Feature Transform) algorithm was therefore proposed in 1999 and refined by Lowe in 2004 (Lowe D G. Distinctive Image Features from Scale-Invariant Keypoints [J]. International Journal of Computer Vision, 2004, 60(2): 91-110.). SIFT performs very well in terms of robustness: the extracted features remain stable even when the images are rotated, scaled, affine transformed, or subjected to different noise and illumination conditions. However, the SIFT operator uses 128-dimensional feature vectors to describe the feature points, so computing the feature vectors takes a significant amount of time. To address this cost of the SIFT algorithm, many researchers proposed improvements. In 2006, Bay et al. proposed the SURF (Speeded Up Robust Features) operator, based on invariant techniques (Bay H, Tuytelaars T, Van Gool L. SURF: Speeded Up Robust Features [C]. European Conference on Computer Vision, 2006: 404-417.). SURF changes how the scale pyramid is generated and how the descriptor's feature vector is constituted; it keeps the robustness and interference resistance of the SIFT detection operator while greatly accelerating feature extraction, running 3-5 times faster than SIFT. In 2011, Rublee et al. improved the FAST feature detector and the BRIEF feature descriptor, producing the ORB (Oriented FAST and Rotated BRIEF) operator (Rublee E, Rabaud V, Konolige K, Bradski G. ORB: An efficient alternative to SIFT or SURF [C]. International Conference on Computer Vision, 2011: 2564-2571.), which can be computed quickly and performs well under varying lighting conditions and rotation, but performs poorly when the scale varies.
Feature-based image registration algorithms preserve image quality and are robust, but have high complexity and a large amount of computation; area-based registration algorithms handle rotation, translation and scaling of images well, but place high demands on the overlap and size of the images and are not suitable for stitching UAV images. Therefore, achieving high registration speed, reducing ineffective search and mismatching, and balancing robustness against stitching quality, so as to finally realize fast automatic stitching, is the problem that current UAV image stitching algorithms need to solve.
Disclosure of Invention
The invention provides an improved feature extraction method and an image stitching method based on it, in order to increase the stitching speed of aerial images without reducing stitching precision:
an improved region-based feature extraction method, comprising the steps of:
s1, utilizing a FAST-9 algorithm to splice imagesI A AndI B extracting features and obtaining coordinates of feature points, sequencing the obtained feature points from good to bad by utilizing Harris corner detection, and screening to obtain feature points with better corner record performance;
s2, determining the direction of each characteristic point descriptor through the intensity centroid;
s3, obtaining a binary characteristic string through an improved BRISK algorithm, namely the extracted characteristic information.
2. Further defined, when S1 extracts features from the images to be stitched I_A and I_B with the FAST-9 algorithm, the range over which features are extracted is as follows:
let the overlap rate between the images to be stitched I_A and I_B be γ, γ ∈ (0, 1), and set a feature extraction threshold δ, δ ∈ (0, γ). Both images have width w and height h. The overlap region lies on the top, bottom, left or right side of the image to be stitched, and the corresponding regions in which features are extracted are denoted Ω_t, Ω_b, Ω_l and Ω_r, with the following value ranges:
Ω_t = {(x, y) | 0 ≤ x ≤ w, 0 ≤ y ≤ δh}
Ω_b = {(x, y) | 0 ≤ x ≤ w, (1 - δ)h ≤ y ≤ h}
Ω_l = {(x, y) | 0 ≤ x ≤ δw, 0 ≤ y ≤ h}
Ω_r = {(x, y) | (1 - δ)w ≤ x ≤ w, 0 ≤ y ≤ h}
further defined, the method of determining the direction of each feature point descriptor by the intensity centroid as set forth in S2 is as follows:
determining the direction of the feature point descriptors through the intensity centroid: let the position of the feature point in the image to be stitched be O; the moments of the neighbourhood B in which the feature point is located are defined as
m_pq = Σ_(x,y)∈B x^p y^q I(x, y),
where p, q ∈ {0, 1} and I(x, y) is the brightness at (x, y); the brightness centroid of the neighbourhood is defined from the moments as
C = (m_10 / m_00, m_01 / m_00);
this gives the vector OC from the feature point position O to the brightness centroid C, so the direction of the feature region is defined as
θ = atan2(m_01, m_10),
where atan2 is the quadrant-aware version of arctan, i.e. the output is the angle between the vector OC and the positive X-axis.
Further defined, the improved BRISK algorithm of S3 is as follows:
on a smoothed pixel patch p, the corresponding binary test is defined as
τ(p; x, y) = 1 if p(x) < p(y), and 0 otherwise,
where p(x) is the brightness of the patch p at point x; the final feature is an n-dimensional binary vector
f_n(p) = Σ_(1≤i≤n) 2^(i-1) τ(p; x_i, y_i).
the other scheme of the invention is as follows: the image stitching method based on the characteristic extraction method comprises the following steps:
(1) Image preprocessing: input the images to be stitched, I_A and I_B, and perform image rotation, image enhancement and smoothing preprocessing;
(2) ORB feature extraction: acquiring a binary feature string according to the feature extraction method;
(3) Eliminating mismatches: obtain feature point pairs with the k-nearest-neighbour algorithm, then screen the candidate matching points of I_A and I_B with the random sample consensus algorithm, eliminating a large number of mismatches; with the Euclidean distance between feature descriptors as the main criterion for feature registration, set a threshold t to select well-matched feature points: for each feature point in the image to be stitched, search for the potential matching points closest to it in the image to be matched, and when the nearest distance d_1 and the second-nearest distance d_2 satisfy the inequality d_1/d_2 ≤ t, the nearest point is considered a correct feature matching point;
(4) Image registration: after obtaining at least 4 pairs of matched points, solve the transformation model between the images to be registered from the following formula, and apply the solved matrix parameters to the image to be stitched I_B to obtain the transformed image I_B':
(x', y', 1)^T = H (x, y, 1)^T,
where (x, y, 1)^T denotes the homogeneous coordinates of a feature point of the image to be stitched and (x', y', 1)^T the homogeneous coordinates of the feature point registered with it;
(5) Image fusion: take the transformed image I_B' and the image to be stitched I_A as input, compute for their pixel points the transform distances d_1 and d_2, obtain the α used for Alpha fusion according to formula (1), substitute it into formula (2), and obtain the fused image, i.e. the final image, according to the Alpha fusion algorithm, where formulas (1) and (2) are:
α(x, y) = d_1 / (d_1 + d_2)    (1)
I(x, y) = α(x, y) I_A(x, y) + (1 - α(x, y)) I_B'(x, y)    (2).
The present invention also provides a computer device comprising a memory and a processor, the memory storing a computer program; when the processor runs the computer program stored in the memory, the processor executes the improved region-based feature extraction method described above.
The invention also provides a computer device which comprises a memory and a processor, wherein the memory stores a computer program, and when the processor runs the computer program stored in the memory, the processor executes the image stitching method.
The beneficial effects of the invention are:
The invention increases the stitching speed of aerial images while maintaining stitching precision. Specifically:
By improving the feature extraction scheme, an improved region-based feature extraction method is obtained and applied in the feature-extraction-based image stitching method; specific comparison experiments verify the good performance of the algorithm in aerial image stitching.
The invention applies the improvement to the SIFT-based, SURF-based and ORB-based image stitching algorithms, and simulation experiments show that the improved region-based feature extraction algorithm effectively reduces the number of extracted features, is applicable to a variety of image stitching algorithms, and effectively reduces stitching time while guaranteeing stitching precision. For the SIFT-based stitching algorithm, whose feature descriptors are complex, the improved algorithm was at least 2 times faster than the original algorithm in twenty groups of simulation experiments; when the number of pictures is large the advantage is especially marked, the total time of the improved algorithm being less than 1/4 of that of the original algorithm. The improved algorithm also performs very well in SURF-based stitching simulation, with total stitching time 1/2 of the original; in addition, its average PSNR value is consistent with the original algorithm, so the stitching speed is increased without reducing precision.
In the simulation based on the ORB algorithm, even though the ORB feature extraction operator already has low computational complexity and fast feature extraction, so that the original ORB-based stitching is already much faster than the previous two algorithms, the improved region-based feature extraction algorithm still increases the stitching speed further: over 20 pairs of images it saves about 20% of the stitching time compared with the original algorithm, while the average PSNR value changes little. That is, the stitching is accelerated without affecting accuracy.
The invention can be widely applied to the stitching of aerial images; a global image of the photographed region can be obtained quickly and accurately.
Drawings
FIG. 1 is a basic flow chart of image stitching;
FIG. 2 is an exemplary diagram of image transformation effects, in which a in fig. 2 is the original image, b is a rigid transformation, c is a similarity transformation, d is an affine transformation, and e is a perspective transformation;
FIG. 3 is an image stitching effect diagram, in which a in fig. 3 is the images to be stitched, b is the stitching effect based on SIFT features, c is the stitching effect based on SURF features, and d is the stitching effect based on ORB features;
FIG. 4 is a diagram of the "bow"-shaped shooting path;
FIG. 5 is a simulation of the aerial photography process, in which a in fig. 5 demonstrates the shooting process and b shows the style of the returned pictures;
FIG. 6 is a schematic diagram of the feature extraction regions;
FIG. 7 is the panoramic stitching result, in which a in fig. 7 is the images to be stitched and b is the improved ORB-based stitching effect.
Detailed Description
Example 1: improved region-based feature extraction method
There are many ways to implement image stitching, and the implementation details of different algorithms vary somewhat, but the steps are roughly the same. In general, image stitching follows the flow shown in fig. 1.
The general steps of image stitching are as follows:
(1) Feature extraction
By analyzing the image, the position coordinates of the feature points are obtained by finding solutions that satisfy the corresponding extremum conditions; to describe the feature points, corresponding feature descriptors are constructed as description vectors of the feature points. For the subsequent work, the extracted features should remain unchanged under uneven illumination, image translation, rotation and scaling. The feature detection algorithms that meet these conditions and currently perform well mainly include the SIFT operator, the SURF operator and the ORB feature detection operator, which introduce a scale space.
(2) Image registration
Image registration takes two images to be registered and maps one onto the other by constructing a spatial transformation model, so that coordinate points of the two images located at the same spatial position coincide, after which the image pixels are further matched. Feature-based image registration determines the feature points and feature vectors of the images to be stitched, selects qualifying feature point pairs through a corresponding algorithm, and finally solves for the corresponding transformation model.
Let I_1(x, y) and I_2(u, v) denote the pixel points of images I_1 and I_2 respectively; the spatial coordinate transformation used to register the images relates the coordinates (x, y) and (u, v) as
(u, v) = f(x, y).
some of the spatial transformation models commonly used at present include rigid transformation, similarity transformation, affine transformation, perspective transformation, and the like. The perspective transformation model is the most general transformation model, and a specific transformation effect of each transformation model is shown in fig. 2.
According to the camera imaging principle, when the camera moves under certain conditions, the transformation between the coordinates of different images of the same scene acquired by the camera is a 3×3 matrix, called the homography matrix. The transformation between images generally belongs to perspective transformation; the homography matrix of the perspective transformation model is given below.
Under perspective transformation, parallel straight lines in the image may no longer be parallel, with the originally parallel lines meeting at a point at infinity; the transformed effect is shown as image e in fig. 2. The corresponding normalized homography matrix is
H = [ h_11  h_12  h_13 ; h_21  h_22  h_23 ; h_31  h_32  1 ].
The transformation has 8 degrees of freedom, and for a perspective transformation with 8 degrees of freedom at least 4 non-collinear feature matching pairs are needed to calculate all parameters of the matrix.
(3) Image fusion
Due to differences in shooting angle, illumination and shooting environment, directly stitching the images tends to produce obvious seams, and blurring and distortion may occur in the overlap area. To achieve a good stitching effect, a suitable image fusion method must be selected.
Considering the characteristics of the images shot here, the Alpha fusion method is used to realize image fusion in the image stitching simulation. Alpha fusion involves an important concept, the Alpha channel. A normally captured picture has only the three RGB channels; beyond the three primary-colour channels describing a digital image, the channel that represents the transparency of each pixel is called the Alpha channel. Under the Alpha channel concept, a pixel is represented by four channels (r, g, b, α), where (αr, αg, αb) gives the contribution of each colour to the pixel value at that point, and α(x, y) ∈ [0, 1] represents the transparency of pixel (x, y). Transparency is divided into 256 levels: pure white is opaque (α = 1, level 255) and pure black is completely transparent (α = 0, level 0).
For image fusion, the two images to be fused are regarded as foreground and background respectively; the foreground image is then extracted from a single background colour to obtain a foreground image with an Alpha channel, called the Mask. On the Mask, α(x, y) = 1 at every pixel inside the image and α(x, y) = 0 outside it. To suppress the jagged-edge phenomenon, the Alpha values at the edge pixels of the Mask satisfy α(x, y) ∈ (0, 1).
After the Alpha value at each pixel is determined, the images to be fused are blended per RGB channel and the three channel components are finally combined into one output pixel:
I_i(x, y) = α(x, y) F_i(x, y) + (1 - α(x, y)) B_i(x, y),
where α(x, y) ∈ [0, 1] and i = 1, 2, 3 indexes the three pixel channels, F being the foreground and B the background.
Under the same environment configuration and parameter settings, the invention carried out preliminary image stitching simulation experiments on aerial images with a resolution of 7952×5304, following the stitching flows based on SIFT features, SURF features and ORB features respectively. The experiment compares the performance of the three algorithms in aerial image stitching; the specific stitching effects are shown in fig. 3 and the simulation results in table 1.
Table 1 comparison of the results of the aerial image stitching simulation experiments of three different algorithms
From the stitching effects shown in fig. 3, the three results are visually almost indistinguishable, and the effect after image fusion is ideal.
The performance of the image stitching algorithms is judged mainly by the peak signal-to-noise ratio (PSNR) of the stitched image, defined as
PSNR = 10 lg[ (2^n - 1)^2 / MSE ],  MSE = (1 / (M N)) Σ_x Σ_y [ I_o(x, y) - I(x, y) ]^2,
where M and N are the width and length of the image; n, generally taken as 8, is the number of bits per pixel; I_o denotes the original picture and I the stitched and fused image. The larger the PSNR value, the better the stitching and fusion effect.
The simulation results are evaluated as follows: 20 pictures are cut with overlap, the cut images are stitched by the different stitching algorithms, each stitched result is compared with the original picture to obtain the corresponding PSNR value, and the PSNR values are finally averaged as the evaluation PSNR. The invention records the time-related index values and the index values measuring the stitching effect during the simulation experiments; the specific values are shown in table 1.
As can be seen from the data in table 1, in this application scenario the ORB-based image stitching algorithm has a great speed advantage and differs little in precision from the SIFT-based algorithm, so the invention builds its improvement on the ORB-based image stitching algorithm.
From the results in table 1, the computation in image stitching comes mainly from image registration, so improving the image registration algorithm directly affects the stitching speed. Image registration needs only 4 pairs of sufficiently well-matched feature points, yet the existing approach extracts features directly over both registered images, producing a large number of feature point pairs. Feature extraction and description requires convolution and other complex computation, adding considerable time complexity to the algorithm. The data in table 1 show that feature extraction takes up about 3/4 of the algorithm time in image stitching, which verifies this.
To meet the needs of the real application scene, during real-time shooting the unmanned aerial vehicle follows the shooting path shown in fig. 4 and stores the shooting data. Fig. 5 illustrates the process of the UAV shooting and collecting data and the style of the returned pictures.
According to the distribution characteristics of the overlap regions of the images to be stitched, only the feature points in the overlapping parts are really useful for feature matching. Therefore, before feature extraction, the images to be stitched are first segmented into blocks, and ORB feature extraction is then performed on each block, fundamentally reducing the amount of computation and improving the timeliness of the algorithm. The feature extraction regions are shown schematically in fig. 6.
Let the overlap rate between the images to be stitched be γ, γ ∈ (0, 1), and set a feature extraction threshold δ, δ ∈ (0, γ). The images have width w and height h. The overlap region lies on the top, bottom, left or right side of the image, and the corresponding regions in which features are extracted are denoted Ω_t, Ω_b, Ω_l and Ω_r, with the following value ranges:
Ω_t = {(x, y) | 0 ≤ x ≤ w, 0 ≤ y ≤ δh}
Ω_b = {(x, y) | 0 ≤ x ≤ w, (1 - δ)h ≤ y ≤ h}
Ω_l = {(x, y) | 0 ≤ x ≤ δw, 0 ≤ y ≤ h}
Ω_r = {(x, y) | (1 - δ)w ≤ x ≤ w, 0 ≤ y ≤ h}
the ORB-based image stitching algorithm combines the characteristics of the aerial image returned by the unmanned aerial vehicle, and provides an improved region-based feature extraction method, which comprises the following specific steps:
s1, utilizing a FAST-9 algorithm to splice imagesI A AndI B and carrying out feature extraction, obtaining coordinates of feature points, sequencing the obtained feature points from good to bad by utilizing Harris corner detection, and screening to obtain feature points with better corner record performance.
S1, using FAST-9 algorithm to splice imagesI A AndI B in the process of feature extraction, the range of values of feature extraction is as follows:
let the overlap rate between the images to be stitched I_A and I_B be γ, γ ∈ (0, 1), and set a feature extraction threshold δ, δ ∈ (0, γ). Both images have width w and height h. The overlap region lies on the top, bottom, left or right side of the image to be stitched, and the corresponding regions in which features are extracted are denoted Ω_t, Ω_b, Ω_l and Ω_r, with the following value ranges (origin at the top-left corner):
Ω_t = {(x, y) | 0 ≤ x ≤ w, 0 ≤ y ≤ δh}
Ω_b = {(x, y) | 0 ≤ x ≤ w, (1 - δ)h ≤ y ≤ h}
Ω_l = {(x, y) | 0 ≤ x ≤ δw, 0 ≤ y ≤ h}
Ω_r = {(x, y) | (1 - δ)w ≤ x ≤ w, 0 ≤ y ≤ h}
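The four regions above can be computed directly from the image size and the threshold δ. A minimal sketch follows; the function name, the (x0, y0, x1, y1) return convention and the top-left origin are illustrative assumptions, not taken from the patent:

```python
def extraction_region(w, h, side, delta):
    """Return the sub-rectangle (x0, y0, x1, y1) of a w-by-h image in which
    features are extracted, given the side on which the overlap lies and the
    extraction threshold delta in (0, gamma). Origin is the top-left corner."""
    if not 0.0 < delta < 1.0:
        raise ValueError("delta must lie in (0, 1)")
    band_h = int(delta * h)   # height of a horizontal extraction band
    band_w = int(delta * w)   # width of a vertical extraction band
    regions = {
        "top":    (0, 0, w, band_h),        # Omega_t
        "bottom": (0, h - band_h, w, h),    # Omega_b
        "left":   (0, 0, band_w, h),        # Omega_l
        "right":  (w - band_w, 0, w, h),    # Omega_r
    }
    return regions[side]
```

Restricting extraction to these bands is what removes the bulk of the feature extraction work: FAST-9 then only ever scans a δ-fraction of each image.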
s2, determining the direction of each characteristic point descriptor through the intensity centroid:
determining the direction of the feature point descriptors through the intensity centroid: let the position of the feature point in the image to be stitched be O; the moments of the neighbourhood B in which the feature point is located are defined as
m_pq = Σ_(x,y)∈B x^p y^q I(x, y),
where p, q ∈ {0, 1} and I(x, y) is the brightness at (x, y). The brightness centroid of the neighbourhood is defined from the moments as
C = (m_10 / m_00, m_01 / m_00).
This gives the vector OC from the feature point position O to the brightness centroid C, so the direction of the feature region is defined as
θ = atan2(m_01, m_10),
where atan2 is the quadrant-aware version of arctan, i.e. the output is the angle between the vector OC and the positive X-axis.
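The moment and atan2 computation above can be sketched in a few lines. Taking the patch coordinates relative to the patch centre O is an assumption consistent with the intensity-centroid idea, not a detail fixed by the patent:

```python
import math

def orientation(patch):
    """Feature direction by the intensity centroid: m_pq = sum x^p y^q I(x, y),
    theta = atan2(m01, m10). `patch` is a list of rows of grey values,
    indexed patch[y][x]; coordinates are taken relative to the patch centre O."""
    h, w = len(patch), len(patch[0])
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    m10 = m01 = 0.0
    for y, row in enumerate(patch):
        for x, brightness in enumerate(row):
            m10 += (x - cx) * brightness   # first-order moment in x
            m01 += (y - cy) * brightness   # first-order moment in y
    # angle between the vector from O to the brightness centroid C
    # and the positive X-axis:
    return math.atan2(m01, m10)
```

A patch whose brightness increases to the right yields θ ≈ 0; one whose brightness increases downward yields θ ≈ π/2.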
S3, obtaining a binary characteristic string through an improved BRISK algorithm, namely the extracted characteristic information.
The improved BRISK algorithm is as follows:
on a smoothed pixel patch p, the corresponding binary test is defined as
τ(p; x, y) = 1 if p(x) < p(y), and 0 otherwise,
where p(x) is the brightness of the patch p at point x. The final feature is an n-dimensional binary vector
f_n(p) = Σ_(1≤i≤n) 2^(i-1) τ(p; x_i, y_i).
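The binary test and the n-dimensional feature string can be sketched as follows. Here the descriptor is kept as a list of bits rather than a packed integer, and the sampling pattern is supplied by the caller; in BRISK/ORB it comes from a fixed pattern around the keypoint, which is not reproduced here:

```python
def binary_descriptor(patch, pairs):
    """tau(p; x, y) = 1 if p(x) < p(y) else 0, evaluated over n sampling
    pairs. `patch` is a smoothed grey patch indexed patch[y][x] and `pairs`
    is a list of ((x1, y1), (x2, y2)) point pairs."""
    return [1 if patch[y1][x1] < patch[y2][x2] else 0
            for (x1, y1), (x2, y2) in pairs]

def hamming(d1, d2):
    """Distance between two binary feature strings, used when matching."""
    return sum(a != b for a, b in zip(d1, d2))
```

Binary strings of this kind are what makes ORB-style matching fast: comparing two features is a Hamming distance rather than a Euclidean distance over 128 floats.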
example 2: image stitching method based on feature extraction method described in embodiment 1
(1) Image preprocessing
Input the images to be stitched, I_A and I_B, and perform image rotation, image enhancement and smoothing preprocessing on each.
(2) ORB feature extraction
The binary feature string is acquired according to the feature extraction method described in embodiment 1.
(3) Eliminating mismatching
Feature point pairs are obtained with the k-nearest-neighbour algorithm and then screened with the random sample consensus algorithm (RANSAC) over the candidate matching points of I_A and I_B, eliminating a large number of mismatches. Meanwhile, with the Euclidean distance between feature descriptors as the main criterion for feature registration, a threshold t is set to select well-matched feature points: for each feature point in the image to be stitched, the potential matching points closest to it are searched in the image to be matched, and when the nearest distance d_1 and the second-nearest distance d_2 satisfy the inequality d_1/d_2 ≤ t, the nearest point is considered a correct feature matching point.
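The d_1/d_2 ratio screening can be sketched as below, using Hamming distance on binary descriptors; the brute-force two-nearest-neighbour search and the default threshold value 0.8 are illustrative assumptions (the patent does not fix t here):

```python
def ratio_test_match(desc_a, desc_b, t=0.8):
    """Ratio-test matching: for each descriptor in desc_a find its two
    nearest neighbours in desc_b and keep the match only when d1/d2 <= t.
    Descriptors are equal-length bit lists; returns (i_a, i_b) pairs."""
    def dist(a, b):
        return sum(u != v for u, v in zip(a, b))  # Hamming distance
    matches = []
    for i, da in enumerate(desc_a):
        ranked = sorted(range(len(desc_b)), key=lambda j: dist(da, desc_b[j]))
        d1 = dist(da, desc_b[ranked[0]])
        d2 = dist(da, desc_b[ranked[1]]) if len(ranked) > 1 else float("inf")
        # ambiguous matches (d1 close to d2) are rejected
        if d2 > 0 and d1 / d2 <= t:
            matches.append((i, ranked[0]))
    return matches
```

In the full pipeline the pairs surviving this test would then be fed to RANSAC, which removes the remaining geometric outliers.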
(4) Image registration
After at least 4 pairs of matchable points are obtained, the transformation model between the images to be registered is solved from the following equation:

(x′, y′, 1)^T = H · (x, y, 1)^T

where H is the 3 × 3 transformation matrix, (x, y, 1)^T denotes the homogeneous coordinates of a feature point in the image to be stitched, and (x′, y′, 1)^T denotes the homogeneous coordinates of the feature point exactly registered with it.
The solved matrix parameters are applied to the image to be stitched I_B to obtain the transformed image I_B′.
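The registration step can be sketched as follows, under the assumption that the transformation model determined by a minimum of 4 point pairs is the standard planar homography; the direct-linear-transform (DLT) solver and the function names below are illustrative, not the patent's own implementation:

```python
import numpy as np

def solve_homography(src, dst):
    """DLT: solve the 3x3 matrix H with (x', y', 1)^T ~ H (x, y, 1)^T
    from >= 4 non-degenerate point correspondences. Each pair yields
    two homogeneous linear equations in the 9 entries of H; the
    solution is the null vector of the stacked system."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)       # right singular vector of least
    return H / H[2, 2]             # singular value, rescaled

def warp_point(H, x, y):
    """Apply H to a point in homogeneous coordinates."""
    u, v, w = H @ np.array([x, y, 1.0])
    return u / w, v / w

# Four corners of a unit square translated by (2, 3) recover a
# pure-translation homography.
H = solve_homography([(0, 0), (1, 0), (1, 1), (0, 1)],
                     [(2, 3), (3, 3), (3, 4), (2, 4)])
```

With more than 4 RANSAC-filtered pairs, the same SVD gives the least-squares estimate, which is why step (3) feeds as many verified matches as possible into this solver.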
(5) Image fusion
Taking the transformed image I_B′ and the image to be stitched I_A as input, the transition distances d1 and d2 are computed for the pixel points of I_B′ and I_A according to formula (1); from these, the coefficient α for Alpha fusion is obtained and substituted into formula (2), and the fused image, i.e., the final image, is obtained by the Alpha fusion algorithm, where formulas (1) and (2) are as follows:
(1)
(2)。
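The fusion step can be sketched as follows. Formula (1)'s exact distance definition is not reproduced in this text, so the sketch assumes the common feathering choice in which d1 and d2 are a pixel's distances to the two image boundaries and α = d1/(d1 + d2); the function name is hypothetical:

```python
def alpha_blend(pix_a, pix_b, d1, d2):
    """Distance-weighted Alpha fusion for one overlap pixel.
    Assumption (not the patent's exact formula): alpha = d1/(d1+d2),
    where d1, d2 are the pixel's transition distances toward the
    boundaries of I_A and I_B' respectively, so the weight of each
    image fades smoothly toward its own boundary."""
    alpha = d1 / (d1 + d2)
    return alpha * pix_a + (1 - alpha) * pix_b

# Midway through the overlap the two images contribute equally;
# at I_B''s boundary (d2 = 0) the output is I_A's pixel alone.
mid = alpha_blend(100.0, 200.0, 1.0, 1.0)
edge = alpha_blend(100.0, 200.0, 1.0, 0.0)
```

This smooth ramp of α across the overlap is what removes the visible seam that a hard cut between the two images would leave.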
(6) Quality assessment: peak signal-to-noise ratio (PSNR)
The peak signal-to-noise ratio is one of the indexes used for objective evaluation of image quality. It reflects how closely the processed image approximates the reference image, but its error-sensitivity analysis does not account for the visual characteristics of the human eye; subjective visual perception therefore also needs to be considered when comparing experimental results, so that image quality is evaluated comprehensively and objectively. PSNR is defined as:

PSNR = 10 · log10( (2^n − 1)^2 / MSE ), MSE = (1/(M·N)) · Σ_{i,j} (I_o(i, j) − I(i, j))^2

where M and N are the width and height of the image; n is the number of bits per pixel, generally taken as 8; I_o denotes the original image and I the processed image. The larger the PSNR value, the better the image stitching and fusion effect.
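The PSNR computation can be sketched directly from that definition (illustrative function name; grayscale images as nested lists):

```python
import math

def psnr(ref, img, bits=8):
    """Peak signal-to-noise ratio between a reference image ref (I_o)
    and a processed image img (I): 10*log10(((2^bits - 1)^2) / MSE),
    where MSE is the mean squared pixel difference over the M x N image."""
    m = len(ref)
    n = len(ref[0])
    mse = sum((ref[i][j] - img[i][j]) ** 2
              for i in range(m) for j in range(n)) / (m * n)
    if mse == 0:
        return float('inf')  # identical images: PSNR is unbounded
    peak = (2 ** bits - 1) ** 2
    return 10 * math.log10(peak / mse)
```

For 8-bit images the peak value is 255, so the worst possible case (every pixel off by the full range) yields 0 dB, and higher values mean a closer match to the reference.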
Algorithm simulation and performance comparison:
To ensure that the experimental results verify a general rule to some extent and to avoid chance outcomes, the invention performs image stitching simulation and performance evaluation on 20 groups of images. The environment and parameter configuration of the simulation experiments are shown in Table 2.
Table 2 Simulation environment and parameter configuration
20 aerial images with a resolution of 7952 × 5304 were each cut into images to be stitched at an overlap rate of 75%, with the original images used as reference images to measure the performance of the image stitching algorithms. Under the environment and parameter configuration of Table 2, the improved region-based feature extraction algorithm was applied to the SIFT-based, SURF-based, and ORB-based image stitching algorithms, and simulation experiments were performed to verify its performance. The results are shown in Table 3.
Table 3 Simulation results of the improved feature extraction algorithm applied to the three stitching algorithms
According to the results in Table 3, the designed mismatch-elimination algorithm effectively reduces the number of feature point matches, reduces mismatches, and improves the speed and accuracy of image stitching to a certain extent. The improved region-based feature extraction algorithm effectively reduces the number of extracted features, is applicable to various image stitching algorithms, and effectively reduces the time required for stitching while preserving stitching accuracy. This is especially true for the SIFT-based stitching algorithm with its complex feature descriptor: across the twenty groups of simulation experiments, the improved algorithm's stitching speed is at least 2 times that of the original algorithm, the advantage becomes particularly pronounced as the number of pictures grows, and the total time consumed by the improved algorithm is less than 1/4 of that of the original. The improved algorithm also performs very well in the SURF-based stitching simulation, where the total stitching time is half that of the original algorithm; moreover, the average PSNR value is consistent with the original algorithm's, so the stitching speed is improved without reducing stitching accuracy.
In the simulation of the improved ORB-based algorithm, even though the ORB feature extraction operator already has low computational complexity and high extraction speed, so that original ORB-based stitching is much faster than the two preceding algorithms, the improved region-based feature extraction algorithm proposed herein still increases the stitching speed further: over the 20 pairs of stitched images it saves about 20% of the time compared with the original algorithm, while the average PSNR value changes little. That is, the stitching is accelerated without affecting accuracy.
Using the improved ORB-based image stitching method, the invention stitches 12 aerial images with a resolution of 5964 × 5304; the resulting panorama is shown in Fig. 7. The improved algorithm takes only 15.25 seconds to obtain a seamless high-resolution panorama of 10120 × 7951, demonstrating good performance in aerial image stitching with both speed and accuracy.
Although the improved algorithm of the invention incurs a slight loss in accuracy, the loss is not visually apparent. At the same time, the algorithm effectively improves stitching speed, making it highly applicable to rapid acquisition of post-disaster panoramas and superior to existing algorithms.

Claims (4)

1. An improved region-based feature extraction method, comprising the steps of:
S1, extracting features from the images to be stitched I_A and I_B by the FAST-9 algorithm and obtaining the coordinates of the feature points; sorting the obtained feature points from best to worst by Harris corner detection, and screening out the feature points with stronger corner response;
s2, determining the direction of each characteristic point descriptor through the intensity centroid;
s3, obtaining a binary characteristic string through an improved BRISK algorithm, namely the extracted characteristic information;
in S1, when features are extracted from the images to be stitched I_A and I_B by the FAST-9 algorithm, the value range for feature extraction is as follows:
the overlap rate between the images to be stitched I_A and I_B is γ, γ ∈ (0, 1); a threshold δ for feature extraction is set, δ ∈ (0, γ); the images to be stitched I_A and I_B both have width w and height h; the overlap region lies on the top, bottom, left, or right side of the image to be stitched, and the corresponding regions from which features are to be extracted are denoted Ω_t, Ω_b, Ω_l, Ω_r, with the specific value ranges given by the following formula:
the method for determining the direction of each feature point descriptor through the intensity centroid as described in S2 is as follows:
determining the direction of the feature point descriptor through the intensity centroid: let the position of a feature point in the image to be stitched be O; the moments of the neighborhood of the feature point are defined as:

m_pq = Σ_{x,y} x^p y^q I(x, y)
where p, q ∈ {0, 1}; using these moments, the brightness centroid of the neighborhood of the feature point is defined as:

C = (m10/m00, m01/m00)

a vector OC from the feature point position O to the brightness centroid C is obtained, and the direction of the feature region is thereby defined as:

θ = atan2(m01, m10)

where atan2 is the quadrant-aware version of arctan, i.e., the output is the angle between the vector OC and the positive X-axis;
the modified BRISK algorithm of S3 is as follows:
in a smoothed pixel patch p, the corresponding binary test is defined as:

τ(p; x, y) = 1, if p(x) < p(y); 0, otherwise

where p(x) is the intensity of patch p at point x; the final feature is an n-dimensional binary vector:

f_n(p) = Σ_{1 ≤ i ≤ n} 2^(i−1) · τ(p; x_i, y_i)
2. an image stitching method based on the feature extraction method of claim 1, characterized in that the image stitching method comprises the steps of:
(1) Image preprocessing: input the images to be stitched I_A and I_B, and perform image rotation, image enhancement, and smoothing preprocessing;
(2) ORB feature extraction: acquiring a binary feature string according to the feature extraction method of claim 1;
(3) Eliminating mismatches: feature point pairs are obtained by a k-nearest-neighbor algorithm and then screened by the random sample consensus algorithm to obtain the feature points of I_A and I_B to be matched, eliminating a large number of mismatches; meanwhile, the Euclidean distance between feature descriptors is used as the main criterion for feature registration: a threshold t is set, and for each feature point in the image to be stitched, the closest potential matching point is searched for in the image to be matched; when the closest distance d1 and the second-closest distance d2 satisfy the inequality d1/d2 < t, the closest point is taken as the correct feature matching point;
(4) Image registration: after at least 4 pairs of matchable points are obtained, the transformation model between the images to be registered is solved from the following equation, and the solved matrix parameters are applied to the image to be stitched I_B to obtain the transformed image I_B′:

(x′, y′, 1)^T = H · (x, y, 1)^T

where H is the 3 × 3 transformation matrix, (x, y, 1)^T denotes the homogeneous coordinates of a feature point in the image to be stitched, and (x′, y′, 1)^T denotes the homogeneous coordinates of the feature point registered with it;
(5) Image fusion: taking the transformed image I_B′ and the image to be stitched I_A as input, the transition distances d1 and d2 are computed for the pixel points of I_B′ and I_A according to formula (1); from these, the coefficient α for Alpha fusion is obtained and substituted into formula (2), and the fused image, i.e., the final image, is obtained by the Alpha fusion algorithm, where formulas (1) and (2) are as follows:
(1)
(2)。
3. a computer device comprising a memory and a processor, the memory having a computer program stored therein, the processor performing the improved region-based feature extraction method of claim 1 when the processor runs the computer program stored in the memory.
4. A computer device comprising a memory and a processor, the memory having a computer program stored therein, the processor performing the image stitching method of claim 2 when the processor runs the computer program stored in the memory.
CN202110883189.3A 2021-08-02 2021-08-02 Improved feature extraction method and image stitching method based on same Active CN113723465B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110883189.3A CN113723465B (en) 2021-08-02 2021-08-02 Improved feature extraction method and image stitching method based on same

Publications (2)

Publication Number Publication Date
CN113723465A CN113723465A (en) 2021-11-30
CN113723465B true CN113723465B (en) 2024-04-05


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106023086A (en) * 2016-07-06 2016-10-12 中国电子科技集团公司第二十八研究所 Aerial photography image and geographical data splicing method based on ORB feature matching
CN108961162A (en) * 2018-03-12 2018-12-07 北京林业大学 A kind of unmanned plane forest zone Aerial Images joining method and system
CN111080529A (en) * 2019-12-23 2020-04-28 大连理工大学 Unmanned aerial vehicle aerial image splicing method for enhancing robustness




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant