CN106683046A - Real-time image splicing method for police unmanned aerial vehicle investigation and evidence obtaining
- Publication number: CN106683046A (application number CN201610954653.2A)
- Authority
- CN
- China
- Legal status: Granted
Classifications
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
- G06T5/70—Denoising; Smoothing
- G06T5/73—Deblurring; Sharpening
- G06T5/92—Dynamic range modification of images or parts thereof based on global image properties
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections
- G06T2207/10032—Satellite or aerial image; Remote sensing
- G06T2207/20024—Filtering details
- G06T2207/20221—Image fusion; Image merging
Abstract
The invention discloses a real-time image stitching method for police unmanned aerial vehicle investigation and evidence collection. The method comprises three steps: an improved ORB algorithm, image registration, and image fusion. A multi-scale space is constructed; a saliency-analysis model is used to obtain an optimal corner-detection threshold and extract feature points; an ORB descriptor describes the feature points; and fast matching is achieved by combining the Hamming distance with RANSAC. Experimental results indicate that the improved ORB algorithm retains the speed advantage of ORB while improving the matching rate on images with changes in scale, viewing angle, rotation, and illumination.
Description
Technical Field
The invention belongs to the technical field of police unmanned aerial vehicle shooting and evidence obtaining, and particularly relates to a real-time image splicing method for reconnaissance and evidence obtaining of a police unmanned aerial vehicle.
Background
In recent years, the rising rate of computer crime at home and abroad has posed serious threats to national and social security, caused serious losses to lawful public and private property, and raised new challenges and requirements for computer forensics technology. As a new research field, computer forensics is of great significance for fighting crime and maintaining social stability. Rotary-wing unmanned aerial vehicles applied to reconnaissance and evidence collection offer fast response, high real-time performance, and authentic, reliable imagery, and can effectively address the shortage and low efficiency of reconnaissance and evidence-collection means. However, because UAV aerial images carry a large amount of information from many viewing angles, they pose certain challenges to subsequent information analysis.
When a UAV is used for reconnaissance and evidence collection, the acquired images must be stitched in real time to reflect the scene promptly and accurately. The accuracy and efficiency of image feature-point matching affect the quality of image stitching. Many algorithms have been applied to image feature-point matching. SIFT, the classic feature-point matching algorithm, achieves high matching accuracy but is computationally expensive and cannot meet real-time requirements. Bay et al. later improved on it with the SURF feature-extraction algorithm. In recent years many new feature-point matching algorithms, such as BRIEF, ORB, BRISK, and FREAK, have emerged.
ORB is an algorithm built on FAST feature extraction and BRIEF feature description. It is very fast, but it lacks scale invariance, and in the feature-extraction stage it uses a fixed detection threshold that ignores differences in salient features between images. Improving the ORB algorithm therefore has strong practical value.
Disclosure of Invention
Aiming at the above problems, the invention provides a real-time image stitching method for police unmanned aerial vehicle reconnaissance and evidence collection.
To achieve this technical purpose, the invention adopts a real-time image stitching method for police unmanned aerial vehicle reconnaissance and evidence collection that comprises three steps: an improved ORB algorithm, image registration, and image fusion;
the improved ORB algorithm comprises: improving ORB by applying a saliency model based on spatial- and frequency-domain analysis, combined with the KSW entropy method, to select the optimal threshold in the feature-extraction stage, and constructing a multi-scale space with a Gaussian pyramid;
an image I is input and first converted from the RGB color space to the CIE Lab color space, in which the image has three channels: a luminance channel (L channel) and two color channels (a channel and b channel). For the color channels, fine texture details are removed with Gaussian blur in the spatial domain to obtain the feature map of each channel; for the luminance channel, a high-frequency-emphasis Butterworth high-pass filter is applied to obtain the L-channel feature map. Finally the feature maps of all channels are combined into the saliency map of the original image;
any input image can be represented by a magnitude spectrum and a phase spectrum. The phase spectrum carries the image's texture-detail information, while the magnitude spectrum carries its light-dark contrast information. If only the phase spectrum is retained, the resulting salient features contain some background information, which interferes with feature-point detection and causes mismatches. A high-pass filter sharpens object edges while preserving edge information as much as possible by attenuating and suppressing low-frequency components. To enhance image detail, sharpen object edges, and reduce noise interference, a high-frequency-emphasis Butterworth high-pass filter is adopted;
The nth-order Butterworth high-pass filter with cut-off frequency D_0 is defined as:

H(u, v) = 1 / (1 + [D_0 / D(u, v)]^(2n))

where D(u, v) = [(u - M/2)^2 + (v - N/2)^2]^(1/2) is the distance from the frequency point (u, v) to the center of the M × N frequency rectangle;
inputting an image I, the feature maps of the three channels L, a, and b are combined to give the final salient feature SM:

SM(x, y) = ||I_L - I_L(x, y)|| + ||I_a - I_a(x, y)|| + ||I_b - I_b(x, y)||

where I_L(x, y) is the pixel value of the feature map obtained by passing the luminance channel of the original image through the high-frequency-emphasis Butterworth high-pass filter, I_a(x, y) and I_b(x, y) are the corresponding feature-map pixel values of the color channels, I_L, I_a, I_b (without arguments) are the mean feature vectors of the corresponding channel images, and || · || is the Euclidean (two-norm) distance;
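The channel filtering and the SM combination above can be sketched in NumPy. This is a minimal illustration, not the patent's exact implementation: the cut-off frequency d0, order n, and high-frequency-emphasis coefficients a and b are assumed values, and the Gaussian blur of the color channels is omitted for brevity.

```python
import numpy as np

def butterworth_highpass(shape, d0=30.0, n=2):
    """n-th order Butterworth high-pass transfer function H(u, v)."""
    M, N = shape
    u = np.arange(M) - M / 2.0
    v = np.arange(N) - N / 2.0
    # distance from each frequency point to the center of the frequency rectangle
    D = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)
    return 1.0 / (1.0 + (d0 / np.maximum(D, 1e-9)) ** (2 * n))

def hfe_filter(channel, a=0.5, b=1.5):
    """High-frequency-emphasis filtering of one channel: (a + b * H) applied in frequency space."""
    H = butterworth_highpass(channel.shape)
    F = np.fft.fftshift(np.fft.fft2(channel))
    return np.real(np.fft.ifft2(np.fft.ifftshift((a + b * H) * F)))

def saliency_map(L, a_ch, b_ch):
    """SM(x,y) = |mean(I_L) - I_L| + |mean(I_a) - I_a| + |mean(I_b) - I_b|."""
    IL = hfe_filter(L)                      # luminance channel: Butterworth HFE
    sm = np.abs(IL.mean() - IL)
    for c in (a_ch, b_ch):                  # color channels (Gaussian blur omitted here)
        sm += np.abs(c.mean() - c)
    return sm
```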
the threshold should vary reasonably with the gray-scale characteristics of the image; the optimal threshold is determined from the image's salient features combined with the KSW entropy method, as follows:
Let the image gray-level range be [0, L-1]. A threshold t divides the pixels into two classes A and B, with probability distributions {p_0, p_1, ..., p_t} and {p_(t+1), p_(t+2), ..., p_(L-1)}, where p_i is the frequency of occurrence of gray level i. Let P_t = p_0 + p_1 + ... + p_t. The entropies of classes A and B are then:

H_A = -Σ_{i=0..t} (p_i / P_t) ln(p_i / P_t)

H_B = -Σ_{i=t+1..L-1} (p_i / (1 - P_t)) ln(p_i / (1 - P_t))
The total entropy of the image is H = H_A + H_B, and the optimal threshold T is the value of t that maximizes it:

T = k · arg max_t (H_A + H_B)

where k is a scaling factor; since the feature-point threshold is related to the pixel contrast of the image, the gray-level difference of the image, i.e. the optimal threshold, is determined by the image's salient features together with the entropy method;
feature points are then extracted with the FAST corner-detection algorithm using the optimal threshold obtained above, and described with the rBRIEF descriptor for subsequent image registration;
the image registration comprises feature-point matching and the screening of matching points with the RANSAC algorithm;
feature-point matching uses a distance function to find, for each feature point, the closest feature points in the other feature-point set. The distance between two binary descriptors can be measured by the Hamming distance, i.e. the number of positions at which two equal-length strings differ; the smaller the Hamming distance, the more similar the two binary descriptors;
for each feature point the shortest and next-shortest Hamming distances are computed to obtain a set of candidate matching pairs; two feature points are considered matched when the ratio of the shortest to the next-shortest distance is below a threshold;
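A minimal sketch of Hamming-distance matching with the shortest/next-shortest ratio test. The ratio threshold 0.8 is an assumed value (the text does not fix it), and descriptors are taken as uint8 byte arrays:

```python
import numpy as np

def hamming(a, b):
    """Hamming distance between two binary descriptors stored as uint8 arrays."""
    return int(np.unpackbits(np.bitwise_xor(a, b)).sum())

def ratio_match(desc1, desc2, ratio=0.8):
    """Accept a match when shortest / next-shortest Hamming distance < ratio."""
    matches = []
    for i, d in enumerate(desc1):
        # sort candidates in desc2 by Hamming distance to descriptor i
        dists = sorted((hamming(d, q), j) for j, q in enumerate(desc2))
        (d_best, j_best), (d_second, _) = dists[0], dists[1]
        if d_second > 0 and d_best / d_second < ratio:
            matches.append((i, j_best))
    return matches
```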
screening matching points with the RANSAC algorithm proceeds by randomly selecting a number of samples to estimate model parameters and then classifying the remaining data against the estimated model: data within the allowed error range are inliers, the rest are outliers, and mismatched point pairs are removed through repeated hypothesis-and-verification;
the RANSAC algorithm uses a homography matrix H, which describes the transformation between the point coordinates of the two images (translation, rotation, scaling, etc.); through H, the position of a point of one image can be found in the other. For a pair of matching points p_1(x, y) in image 1 and p_2(x', y') in image 2, the transformation relationship is:

[x', y', 1]^T ~ H · [x, y, 1]^T

with H a 3 × 3 matrix whose element h_33 is normalized to 1;
the 8 free parameters of H can be computed from 4 pairs of matching points, and the RANSAC algorithm comprises the following steps:
(1) set the iteration counter to 0, and choose a maximum number of iterations N, an inlier-count threshold T1, and an error threshold T2;
(2) randomly select 4 pairs from the n candidate matching pairs and compute the parameters of the transformation matrix H between the two images;
(3) for each remaining feature point, compute the distance between its coordinates after transformation by H and the coordinates of its matched point; if the distance is smaller than the error threshold T2, count the pair as an inlier, and tally the total number of inliers;
(4) if the number of inliers exceeds the inlier-count threshold T1, save the current model as the optimal model; otherwise increment the iteration counter and return to step (2) for the next iteration;
(5) when the maximum number of iterations N is reached, return the largest inlier set found and the corresponding transformation matrix H;
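Steps (1)-(5) can be sketched as a compact RANSAC loop. For brevity this sketch fits a 2-D translation (a one-pair minimal sample) rather than the full 8-parameter homography, and the threshold values are illustrative:

```python
import numpy as np

def ransac_translation(p1, p2, max_iter=100, t1=10, t2=1.0, seed=0):
    """RANSAC over matched point arrays p1, p2 (n x 2) with a translation model."""
    rng = np.random.default_rng(seed)
    best_t, best_mask = None, np.zeros(len(p1), dtype=bool)
    for _ in range(max_iter):
        i = rng.integers(len(p1))            # (2) minimal random sample
        t = p2[i] - p1[i]                    # candidate model parameters
        err = np.linalg.norm(p1 + t - p2, axis=1)
        mask = err < t2                      # (3) inliers within error threshold T2
        if mask.sum() > best_mask.sum():     # (4) keep the best model so far
            best_t, best_mask = t, mask
            if mask.sum() > t1:              # enough inliers: accept and stop early
                break
    return best_t, best_mask                 # (5) best model and its inlier set
```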
image stitching: after registration, the images are joined through image fusion, the last step of stitching, which has two main parts: merging the images and eliminating the seam. Merging removes redundant pixel information in the overlap region and aligns the images to be stitched according to the registration result; seam elimination applies weighted-average fusion near the seam. The weighting function can use the gradual-in gradual-out method, which has low complexity, runs fast, and achieves a smooth transition across the overlap region. It computes a linearly varying weight from the distance of each pixel to be fused to the boundary of the overlap region; the fusion formula is:

f(x, y) = d_1 · f_1(x, y) + d_2 · f_2(x, y)

where d_1, d_2 are the weights of pixel (x, y) in the two images of the overlap region, satisfying d_1 + d_2 = 1 and 0 < d_1, d_2 < 1;
d_1 and d_2 are computed as:

d_1 = (x_r - x) / (x_r - x_l),  d_2 = (x - x_l) / (x_r - x_l)

where x is the abscissa of the pixel to be fused, and x_l, x_r are the abscissas of the left and right boundaries of the image overlap region;
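The gradual-in gradual-out fusion can be sketched for two aligned images, assuming the overlap spans columns [x_l, x_r) and using the linear weights above:

```python
import numpy as np

def fade_blend(img1, img2, xl, xr):
    """Blend two aligned images whose overlap spans columns [xl, xr):
    d1 = (xr - x) / (xr - xl) falls linearly 1 -> 0, d2 = 1 - d1."""
    out = img1.astype(float).copy()
    x = np.arange(xl, xr)
    d1 = (xr - x) / float(xr - xl)           # weight of the left image
    out[:, xl:xr] = d1 * img1[:, xl:xr] + (1.0 - d1) * img2[:, xl:xr]
    out[:, xr:] = img2[:, xr:]               # right of the overlap: right image only
    return out
```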
in this method, a multi-scale space is first constructed; the optimal corner-detection threshold is obtained from the saliency-analysis model and feature points are extracted; the feature points are described with the ORB descriptor; and fast matching is finally achieved by combining the Hamming distance with RANSAC.
Drawings
FIG. 1 shows an algorithm flow diagram of the present invention;
FIG. 2 is a graph showing the gradual-in and gradual-out weight variation;
Detailed Description
The invention is further illustrated with reference to the following figures and detailed description.
With reference to fig. 1, we first review the ORB algorithm, which combines the FAST feature-point detector with the BRIEF feature descriptor and improves and optimizes both; the two parts of the algorithm introduced here are feature-point detection and feature-point description.
1) Feature point detection
The ORB algorithm uses a gaussian pyramid structure and calculates its principal direction for each feature point, so that the detected feature points have scale invariance and rotation invariance.
(1) First, a scale space is established by constructing an image pyramid; unlike SIFT, each layer contains only one image.
(2) The number n of feature points to extract at each layer is computed from a formula. Feature points are detected on the images at each scale with the FAST algorithm and sorted by FAST corner response, keeping the top 2n points; Harris corner responses are then computed for these points, which are sorted again, and the top n are kept as that layer's feature points.
(3) The principal direction of each feature point is computed. ORB proposes the gray-centroid method: there is an offset between a corner's gray centroid and the center of its neighborhood, and this offset vector is taken as the feature point's direction.
The moments of the neighborhood S of any feature point p are defined as:

m_{pq} = Σ_{(x,y)∈S} x^p y^q · I(x, y),  p, q ∈ {0, 1}

where I(x, y) is the gray value at point (x, y). The centroid of the neighborhood S is:

C = (m_10 / m_00, m_01 / m_00)

The angle between the feature point and the centroid defines the feature point's principal direction:

θ = arctan(m_01 / m_10)
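The gray-centroid direction can be sketched as follows; taking the moments about the patch center is an assumption consistent with measuring the offset from the corner to its neighborhood centroid:

```python
import numpy as np

def principal_direction(patch):
    """theta = arctan2(m01, m10) with first-order moments about the patch center."""
    ys, xs = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
    cy = (patch.shape[0] - 1) / 2.0
    cx = (patch.shape[1] - 1) / 2.0
    m10 = np.sum((xs - cx) * patch)          # first-order moment in x
    m01 = np.sum((ys - cy) * patch)          # first-order moment in y
    return np.arctan2(m01, m10)
```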
2) Description of characteristic points
The ORB algorithm improves the BRIEF descriptor: the rBRIEF description method gives the descriptor rotation invariance. A BRIEF descriptor is essentially a binary string of length m: m point pairs are selected around the feature point, the gray values within each pair are compared, and the results are encoded as binary.
The binary comparison criterion function τ is defined as:

τ(p; x, y) = 1 if p(x) < p(y), and 0 otherwise

where p(x) is the gray value at x in the neighborhood. To suppress noise interference, the ORB algorithm takes a 5 × 5 image block at the feature point and, after smoothing, replaces the point's gray value with the average gray value of the block. Selecting m point pairs near the feature point and comparing them yields a binary string of length m as the feature descriptor:

f_m(p) = Σ_{1≤i≤m} 2^(i-1) · τ(p; x_i, y_i)
the ORB algorithm uses the above calculated principal directions of feature points to determine the direction of a feature descriptor in order to make the descriptor rotationally invariant. The m point pairs around the feature point are combined into a matrix S:
defining a rotation matrix corresponding to the characteristic point direction theta as RθCharacteristic point pair matrix S corresponding to direction thetaθ=RθAnd S. Wherein,θ is the principal direction of the feature point.
The feature descriptors after determining the direction are: gm(p,θ)=fm(p)|(xi,yi)∈Sθ
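The steering step S_θ = R_θ · S and the bit-packing of f_m can be sketched as follows; the helper names are illustrative:

```python
import numpy as np

def steer(S, theta):
    """S_theta = R_theta @ S: rotate the 2 x m test-point matrix by theta."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return R @ S

def f_m(tau_bits):
    """f_m(p) = sum_i 2^(i-1) * tau_i: pack the m binary tests into an integer."""
    return sum(bit << i for i, bit in enumerate(tau_bits))
```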
In order to improve the discrimination performance of the descriptors, the ORB uses greedy search to select 256 test point pairs with the largest variance and the lowest correlation from all possible binary tests to form the required feature descriptors.
On the basis, the invention discloses a real-time image splicing method for reconnaissance and evidence obtaining of an unmanned aerial vehicle for police, which comprises three steps of improved ORB algorithm, image registration and image fusion;
the improved ORB algorithm comprises: improving ORB by applying a saliency model based on spatial- and frequency-domain analysis, combined with the KSW entropy method, to select the optimal threshold in the feature-extraction stage, and constructing a multi-scale space with a Gaussian pyramid;
an image I is input and first converted from the RGB color space to the CIE Lab color space, in which the image has three channels: a luminance channel (L channel) and two color channels (a channel and b channel). For the color channels, fine texture details are removed with Gaussian blur in the spatial domain to obtain the feature map of each channel; for the luminance channel, a high-frequency-emphasis Butterworth high-pass filter is applied to obtain the L-channel feature map. Finally the feature maps of all channels are combined into the saliency map of the original image;
any input image can be represented by a magnitude spectrum and a phase spectrum. The phase spectrum carries the image's texture-detail information, while the magnitude spectrum carries its light-dark contrast information. If only the phase spectrum is retained, the resulting salient features contain some background information, which interferes with feature-point detection and causes mismatches. A high-pass filter sharpens object edges while preserving edge information as much as possible by attenuating and suppressing low-frequency components. To enhance image detail, sharpen object edges, and reduce noise interference, a high-frequency-emphasis Butterworth high-pass filter is adopted;
The nth-order Butterworth high-pass filter with cut-off frequency D_0 is defined as:

H(u, v) = 1 / (1 + [D_0 / D(u, v)]^(2n))

where D(u, v) = [(u - M/2)^2 + (v - N/2)^2]^(1/2) is the distance from the frequency point (u, v) to the center of the M × N frequency rectangle;
inputting an image I, the feature maps of the three channels L, a, and b are combined to give the final salient feature SM:

SM(x, y) = ||I_L - I_L(x, y)|| + ||I_a - I_a(x, y)|| + ||I_b - I_b(x, y)||

where I_L(x, y) is the pixel value of the feature map obtained by passing the luminance channel of the original image through the high-frequency-emphasis Butterworth high-pass filter, I_a(x, y) and I_b(x, y) are the corresponding feature-map pixel values of the color channels, I_L, I_a, I_b (without arguments) are the mean feature vectors of the corresponding channel images, and || · || is the Euclidean (two-norm) distance;
the threshold should vary reasonably with the gray-scale characteristics of the image; the optimal threshold is determined from the image's salient features combined with the KSW entropy method, as follows:
Let the image gray-level range be [0, L-1]. A threshold t divides the pixels into two classes A and B, with probability distributions {p_0, p_1, ..., p_t} and {p_(t+1), p_(t+2), ..., p_(L-1)}, where p_i is the frequency of occurrence of gray level i. Let P_t = p_0 + p_1 + ... + p_t. The entropies of classes A and B are then:

H_A = -Σ_{i=0..t} (p_i / P_t) ln(p_i / P_t)

H_B = -Σ_{i=t+1..L-1} (p_i / (1 - P_t)) ln(p_i / (1 - P_t))
The total entropy of the image is H = H_A + H_B, and the optimal threshold T is the value of t that maximizes it:

T = k · arg max_t (H_A + H_B)

where k is a proportionality coefficient; since the feature-point threshold is related to the pixel contrast of the image, the gray-level difference of the image, i.e. the optimal threshold, is determined by the image's salient features together with the entropy method;
feature points are then extracted with the FAST corner-detection algorithm using the optimal threshold obtained above, and described with the rBRIEF descriptor for subsequent image registration;
the image registration comprises feature-point matching and the screening of matching points with the RANSAC algorithm;
feature-point matching uses a distance function to find, for each feature point, the closest feature points in the other feature-point set. The distance between two binary descriptors can be measured by the Hamming distance, i.e. the number of positions at which two equal-length strings differ; the smaller the Hamming distance, the more similar the two binary descriptors;
for each feature point the shortest and next-shortest Hamming distances are computed to obtain a set of candidate matching pairs; two feature points are considered matched when the ratio of the shortest to the next-shortest distance is below a threshold;
screening matching points with the RANSAC algorithm proceeds by randomly selecting a number of samples to estimate model parameters and then classifying the remaining data against the estimated model: data within the allowed error range are inliers, the rest are outliers, and mismatched point pairs are removed through repeated hypothesis-and-verification;
the RANSAC algorithm uses a homography matrix H, which describes the transformation between the point coordinates of the two images (translation, rotation, scaling, etc.); through H, the position of a point of one image can be found in the other. For a pair of matching points p_1(x, y) in image 1 and p_2(x', y') in image 2, the transformation relationship is:

[x', y', 1]^T ~ H · [x, y, 1]^T

with H a 3 × 3 matrix whose element h_33 is normalized to 1;
the 8 free parameters of H can be computed from 4 pairs of matching points, and the RANSAC algorithm comprises the following steps:
(1) set the iteration counter to 0, and choose a maximum number of iterations N, an inlier-count threshold T1, and an error threshold T2;
(2) randomly select 4 pairs from the n candidate matching pairs and compute the parameters of the transformation matrix H between the two images;
(3) for each remaining feature point, compute the distance between its coordinates after transformation by H and the coordinates of its matched point; if the distance is smaller than the error threshold T2, count the pair as an inlier, and tally the total number of inliers;
(4) if the number of inliers exceeds the inlier-count threshold T1, save the current model as the optimal model; otherwise increment the iteration counter and return to step (2) for the next iteration;
(5) when the maximum number of iterations N is reached, return the largest inlier set found and the corresponding transformation matrix H;
image stitching: after registration, the images are joined through image fusion, the last step of stitching, which has two main parts: merging the images and eliminating the seam. Merging removes redundant pixel information in the overlap region and aligns the images to be stitched according to the registration result; seam elimination applies weighted-average fusion near the seam. The weighting function can use the gradual-in gradual-out method, which has low complexity, runs fast, and achieves a smooth transition across the overlap region. It computes a linearly varying weight from the distance of each pixel to be fused to the boundary of the overlap region; the fusion formula is:

f(x, y) = d_1 · f_1(x, y) + d_2 · f_2(x, y)

where d_1, d_2 are the weights of pixel (x, y) in the two images of the overlap region, satisfying d_1 + d_2 = 1 and 0 < d_1, d_2 < 1;
d_1 and d_2 are computed as:

d_1 = (x_r - x) / (x_r - x_l),  d_2 = (x - x_l) / (x_r - x_l)

where x is the abscissa of the pixel to be fused, and x_l, x_r are the abscissas of the left and right boundaries of the image overlap region.
As shown in fig. 2, d_1 decreases gradually from 1 to 0 while the corresponding d_2 increases from 0 to 1, achieving a smooth transition across the overlap region of the images.
In this method, a multi-scale space is first constructed; the optimal corner-detection threshold is obtained from the saliency-analysis model and feature points are extracted; the feature points are described with the ORB descriptor; and fast matching is finally achieved by combining the Hamming distance with RANSAC.
Table 1 and Table 2 compare the registration rate and registration time of the improved ORB algorithm against the BRISK and FAST-ORB algorithms, where FAST-ORB denotes feature extraction with the FAST algorithm followed by description with the ORB descriptor. Compared with the other algorithms, the improved ORB algorithm achieves a higher registration rate. Its registration time is slightly higher than that of the other two because of the added saliency-analysis step, but remains nearly the same, preserving the speed advantage. The experimental results show that the improved ORB algorithm retains ORB's speed while improving the matching rate on images with changes in scale, rotation, viewing angle, and illumination.
TABLE 1 image registration ratio comparison results
TABLE 2 image registration time comparison results
Claims (1)
1. The image real-time splicing method for police unmanned aerial vehicle reconnaissance and evidence collection is characterized by comprising three steps of improved ORB algorithm, image registration and image fusion;
the improved ORB algorithm comprises: improving ORB by applying a saliency model based on spatial- and frequency-domain analysis, combined with the KSW entropy method, to select the optimal threshold in the feature-extraction stage, and constructing a multi-scale space with a Gaussian pyramid;
an image I is input and first transformed from the RGB color space to the CIELab color space; in Lab space the image contains three channels: a brightness channel (the L channel) and two color channels (the a and b channels); for the color channels, fine texture details are eliminated by Gaussian blur in the spatial domain to obtain the feature map of the corresponding channel; for the brightness channel, the feature map of the L channel is obtained with a high-frequency-emphasis Butterworth high-pass filter; finally, the feature maps of all channels are combined into a saliency map of the original image;
any input image can be represented by a magnitude spectrum and a phase spectrum; the phase spectrum contains the texture-detail information of the image, while the magnitude spectrum contains the light-dark contrast information; if only the phase spectrum is retained, the resulting salient features contain partial background information, which interferes with feature-point detection and causes mismatches; a high-pass filter sharpens target edges and maximally preserves edge information by attenuating and suppressing low-frequency components; therefore, to enhance image detail, sharpen target edges, and reduce noise interference, a high-frequency-emphasis Butterworth high-pass filter is adopted;
the n-th order Butterworth high-pass filter with cut-off frequency D0 is defined as:

H(u, v) = 1 / (1 + [D0 / D(u, v)]^(2n))

where D(u, v) = [(u − P/2)² + (v − Q/2)²]^(1/2) is the distance from the frequency point (u, v) to the centre (P/2, Q/2) of the P×Q frequency rectangle; the high-frequency-emphasis form multiplies the spectrum by a + b·H(u, v) with offset a and gain b, so that low-frequency content is preserved while high frequencies are boosted;
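As an illustrative sketch (not part of the claimed method), the filter above can be applied in the frequency domain as follows; the cut-off `d0`, order `n`, offset `a` and gain `b` are assumed example values:

```python
import numpy as np

def butterworth_highpass(shape, d0, n):
    """n-th order Butterworth high-pass transfer function H(u,v) = 1 / (1 + (D0/D(u,v))^(2n))."""
    P, Q = shape
    u = np.arange(P) - P / 2.0
    v = np.arange(Q) - Q / 2.0
    D = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)  # distance to the frequency-rectangle centre
    D = np.maximum(D, 1e-6)                         # avoid division by zero at the centre
    return 1.0 / (1.0 + (d0 / D) ** (2 * n))

def high_freq_emphasis(channel, d0=30, n=2, a=0.5, b=1.5):
    """High-frequency-emphasis filtering: multiply the centred spectrum by a + b*H_hp."""
    F = np.fft.fftshift(np.fft.fft2(channel))
    H = a + b * butterworth_highpass(channel.shape, d0, n)
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * H)))
```

Applying `high_freq_emphasis` to the L channel yields the brightness feature map used below.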
inputting an image I, the feature maps of the three channels L, a and b are combined to obtain the final salient feature SM:

SM(x, y) = ||Ī_L − I_L(x, y)|| + ||Ī_a − I_a(x, y)|| + ||Ī_b − I_b(x, y)||

where I_L(x, y) is the pixel value of the feature map obtained by passing the brightness channel of the original image through the high-frequency-emphasis Butterworth high-pass filter, I_a(x, y) and I_b(x, y) are the pixel values of the feature maps of the color channels, Ī_L, Ī_a, Ī_b are the mean feature vectors of the corresponding channel images, and ||·|| is the Euclidean (two-norm) distance;
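The combination step amounts to summing, per channel, the distance of each feature-map pixel to that channel's mean; a minimal numpy sketch, assuming the three per-channel feature maps have already been computed, is:

```python
import numpy as np

def saliency_map(feat_L, feat_a, feat_b):
    """SM(x, y) = sum over channels of the distance from each pixel to the channel mean."""
    sm = np.zeros_like(feat_L, dtype=float)
    for f in (feat_L, feat_a, feat_b):
        sm += np.abs(f.mean() - f)   # per-pixel distance to the channel's mean value
    return sm
```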
the threshold should change reasonably with the gray-scale characteristics of the image; the optimal threshold is therefore determined from the image salient features in combination with the KSW entropy method, as follows:
let the image gray-level range be [0, L−1]; a threshold t divides the pixels into two classes A and B; with p_i the frequency of gray level i and P_t = p_0 + p_1 + … + p_t, the normalized distributions of the two classes are {p_0/P_t, …, p_t/P_t} and {p_(t+1)/(1−P_t), …, p_(L−1)/(1−P_t)}, and the entropies of classes A and B are:

H_A(t) = −Σ_{i=0..t} (p_i/P_t) ln(p_i/P_t)
H_B(t) = −Σ_{i=t+1..L−1} (p_i/(1−P_t)) ln(p_i/(1−P_t))

the total entropy of the image is H(t) = H_A(t) + H_B(t), and the optimal threshold is T = K · argmax_t H(t), where K is a proportionality coefficient; since the feature-point threshold is related to the pixel contrast of the image, the gray-level difference of the image, i.e. the optimal threshold, is determined jointly by the salient features of the image and the entropy method;
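A minimal sketch of KSW maximum-entropy threshold selection on an 8-bit gray image, using the standard normalized class distributions; the proportionality coefficient K is omitted here:

```python
import numpy as np

def ksw_threshold(gray):
    """Return the gray level t that maximises the total entropy H_A(t) + H_B(t)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    eps = 1e-12
    best_t, best_h = 0, -np.inf
    for t in range(255):
        Pt = p[: t + 1].sum()
        if Pt < eps or Pt > 1 - eps:
            continue                       # one class would be empty
        pa = p[: t + 1] / Pt               # normalized class-A distribution
        pb = p[t + 1:] / (1 - Pt)          # normalized class-B distribution
        Ha = -np.sum(pa * np.log(pa + eps))
        Hb = -np.sum(pb * np.log(pb + eps))
        if Ha + Hb > best_h:
            best_h, best_t = Ha + Hb, t
    return best_t
```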
the optimal threshold so obtained is used with the FAST corner-detection algorithm to extract feature points, and the feature points are described with the rBRIEF descriptor for subsequent image registration;
the image registration comprises feature-point matching and screening of the matching points with the RANSAC algorithm;
feature-point matching uses a distance function to find, between the two sets of feature points, the pairs with the shortest distance; the distance between two binary descriptors is measured by the Hamming distance, i.e. the number of positions at which two equal-length strings differ; the smaller the Hamming distance, the more similar the two binary descriptors;
for each feature point, the shortest and second-shortest Hamming distances are computed to obtain a set of candidate matching pairs; two feature points are considered matched when the ratio of the shortest distance to the second-shortest distance is less than a threshold;
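A minimal sketch of this nearest/second-nearest ratio test with Hamming distances, for descriptors packed as 32-byte (256-bit) arrays; the ratio value 0.7 is an assumed example:

```python
import numpy as np

def hamming_dist(a, b):
    """Hamming distance between two binary descriptors packed as uint8 arrays."""
    return int(np.unpackbits(np.bitwise_xor(a, b)).sum())

def ratio_match(desc1, desc2, ratio=0.7):
    """Keep (i, j) when the shortest distance is below ratio * second-shortest distance."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.array([hamming_dist(d, e) for e in desc2])
        order = np.argsort(dists)
        if len(order) >= 2 and dists[order[0]] < ratio * dists[order[1]]:
            matches.append((i, int(order[0])))
    return matches
```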
screening matching points with the RANSAC algorithm comprises: randomly selecting a certain number of samples to estimate the model parameters, classifying the remaining data according to the estimated parameters into points within the allowable error range (inliers) and points outside it (outliers), and removing mismatched point pairs through repeated hypothesis and verification;
the RANSAC algorithm uses a homography matrix H, which describes the transformation between the coordinates of points in the two images, including translation, rotation, scaling and so on; through the matrix H, the position of a point of one image can be found in the other image; suppose a pair of matching points p1(x, y) in image 1 and p2(x′, y′) in image 2; their transformation relationship is:

s·[x′, y′, 1]^T = H·[x, y, 1]^T,  H = [h11 h12 h13; h21 h22 h23; h31 h32 1]

where s is a scale factor;
the 8 parameters of the matrix H can be calculated from 4 pairs of matching points, and the RANSAC algorithm comprises the following steps:
(1) setting an initial value of iteration times as 0, a maximum iteration time N, an internal point number threshold T1 and an error threshold T2;
(2) randomly selecting 4 pairs from the n pairs of points to be matched, and calculating parameters of a transformation matrix H between the two images;
(3) for each of the remaining feature points, calculating the distance between its coordinates after transformation by H and the coordinates of its matching point; if the distance is smaller than the error threshold T2, the pair is considered an inlier; then counting the number of inliers;
(4) if the number of inliers is greater than the inlier-count threshold T1, saving the current model as the optimal model; otherwise, incrementing the iteration count and returning to step (2) for the next iteration;
(5) if the maximum iteration number N is reached, returning a group of interior points with the maximum number of corresponding interior points, and obtaining a transformation matrix H;
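Steps (1)-(5) can be sketched as follows; the iteration count, error threshold, and the direct linear solve for the 8 parameters of H are illustrative choices, not the claimed implementation:

```python
import numpy as np

def homography_from_points(src, dst):
    """Solve the 8 parameters of H (with h33 = 1) from 4 point pairs via a linear system."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def project(H, pts):
    """Map 2-D points through the homography H (homogeneous divide)."""
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]

def ransac_homography(src, dst, n_iter=200, t2=3.0, seed=0):
    """Sample 4 pairs, fit H, count points within t2 pixels, keep the best model."""
    rng = np.random.default_rng(seed)
    best_H, best_inliers = None, np.zeros(len(src), bool)
    for _ in range(n_iter):
        idx = rng.choice(len(src), 4, replace=False)
        try:
            H = homography_from_points(src[idx], dst[idx])
        except np.linalg.LinAlgError:
            continue                       # degenerate (e.g. collinear) sample
        err = np.linalg.norm(project(H, src) - dst, axis=1)
        inliers = err < t2
        if inliers.sum() > best_inliers.sum():
            best_H, best_inliers = H, inliers
    return best_H, best_inliers
```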
the image fusion means: after the images are registered, they are stitched together by image fusion, the last step of image splicing, which mainly comprises two parts: merging the images and eliminating the splicing seam; merging the images eliminates redundant pixel information in the overlap region and aligns the images to be spliced according to the registration result; the splicing seam is eliminated by weighted-average fusion near the seam; the weighting function can use the gradual-in gradual-out method, which has low complexity and high speed and achieves a smooth transition in the overlap region; the gradual-in gradual-out method computes a weight from the distance of the pixel to be fused to the boundary of the overlap region, the weight varying linearly, and the fusion formula is:

f(x, y) = d1·f1(x, y) + d2·f2(x, y) for (x, y) in the overlap region, f(x, y) = f1(x, y) in the region covered only by image 1, and f(x, y) = f2(x, y) in the region covered only by image 2;
in the formula, d1 and d2 are the weights of the pixel (x, y) in the corresponding overlapping parts of the two images, satisfying d1 + d2 = 1 and 0 < d1, d2 < 1;
d1 and d2 are calculated as:

d1 = (x_r − x_i)/(x_r − x_l),  d2 = (x_i − x_l)/(x_r − x_l)

where x_i is the abscissa of the pixel to be fused, and x_l, x_r are the abscissas of the left and right boundaries of the image overlap region.
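A minimal sketch of the gradual-in gradual-out fusion over a vertical overlap band, assuming single-channel images and overlap column bounds `xl`, `xr`:

```python
import numpy as np

def feather_blend(img1, img2, xl, xr):
    """Linear gradual-in/gradual-out fusion over the overlap columns [xl, xr]."""
    out = img1.astype(float).copy()
    for x in range(xl, xr + 1):
        d1 = (xr - x) / float(xr - xl)   # weight of image 1: 1 at the left border, 0 at the right
        d2 = 1.0 - d1                    # weight of image 2, so that d1 + d2 = 1
        out[:, x] = d1 * img1[:, x] + d2 * img2[:, x]
    out[:, xr + 1:] = img2[:, xr + 1:]   # right of the overlap: image 2 only
    return out
```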
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610954653.2A CN106683046B (en) | 2016-10-27 | 2016-10-27 | Image real-time splicing method for police unmanned aerial vehicle reconnaissance and evidence obtaining |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106683046A true CN106683046A (en) | 2017-05-17 |
CN106683046B CN106683046B (en) | 2020-07-28 |
Family
ID=58840349
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610954653.2A Active CN106683046B (en) | 2016-10-27 | 2016-10-27 | Image real-time splicing method for police unmanned aerial vehicle reconnaissance and evidence obtaining |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106683046B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104167003A (en) * | 2014-08-29 | 2014-11-26 | 福州大学 | Method for fast registering remote-sensing image |
CN104751465A (en) * | 2015-03-31 | 2015-07-01 | 中国科学技术大学 | ORB (oriented brief) image feature registration method based on LK (Lucas-Kanade) optical flow constraint |
US20160042251A1 (en) * | 2014-07-03 | 2016-02-11 | Oim Squared Inc. | Interactive content generation |
Non-Patent Citations (2)
Title |
---|
YUANYUAN DU 等: "Markless augmented reality registration algorithm based on ORB", 《2014 12TH INTERNATIONAL CONFERENCE ON SIGNAL PROCESSING (ICSP)》 * |
佘建国 等: "基于 ORB 和改进 RANSAC 算法的图像拼接技术", 《江苏科技大学学报:自然科学版》 * |
Cited By (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107464252A (en) * | 2017-06-30 | 2017-12-12 | 南京航空航天大学 | A kind of visible ray based on composite character and infrared heterologous image-recognizing method |
CN107369170A (en) * | 2017-07-04 | 2017-11-21 | 云南师范大学 | Image registration treating method and apparatus |
CN107490377A (en) * | 2017-07-17 | 2017-12-19 | 五邑大学 | Indoor map-free navigation system and navigation method |
CN109919971A (en) * | 2017-12-13 | 2019-06-21 | 北京金山云网络技术有限公司 | Image processing method, device, electronic equipment and computer readable storage medium |
CN109919971B (en) * | 2017-12-13 | 2021-07-20 | 北京金山云网络技术有限公司 | Image processing method, image processing device, electronic equipment and computer readable storage medium |
CN108319961A (en) * | 2018-01-23 | 2018-07-24 | 西南科技大学 | A kind of image ROI rapid detection methods based on local feature region |
CN108319961B (en) * | 2018-01-23 | 2022-03-25 | 西南科技大学 | Image ROI rapid detection method based on local feature points |
CN108961276A (en) * | 2018-04-04 | 2018-12-07 | 山东鲁能智能技术有限公司 | The distribution line inspection automatic data collection method and system of view-based access control model servo |
CN108961276B (en) * | 2018-04-04 | 2020-09-25 | 国网智能科技股份有限公司 | Distribution line inspection data automatic acquisition method and system based on visual servo |
CN108805812A (en) * | 2018-06-04 | 2018-11-13 | 东北林业大学 | Multiple dimensioned constant ORB algorithms for image mosaic |
CN108921848A (en) * | 2018-09-29 | 2018-11-30 | 长安大学 | Bridge Defect Detecting device and detection image joining method based on more mesh cameras |
CN109543561A (en) * | 2018-10-31 | 2019-03-29 | 北京航空航天大学 | Saliency of taking photo by plane method for detecting area and device |
CN109712071A (en) * | 2018-12-14 | 2019-05-03 | 电子科技大学 | Unmanned plane image mosaic and localization method based on track constraint |
CN109712071B (en) * | 2018-12-14 | 2022-11-29 | 电子科技大学 | Unmanned aerial vehicle image splicing and positioning method based on track constraint |
CN111353933A (en) * | 2018-12-20 | 2020-06-30 | 重庆金山医疗器械有限公司 | Image splicing and fusing method and system |
CN109801220A (en) * | 2019-01-23 | 2019-05-24 | 北京工业大学 | Mapping parameters method in a kind of splicing of line solver Vehicular video |
CN109801220B (en) * | 2019-01-23 | 2023-03-28 | 北京工业大学 | Method for solving mapping parameters in vehicle-mounted video splicing on line |
CN110211363A (en) * | 2019-04-12 | 2019-09-06 | 张长阵 | Intelligent Household appliance switch platform |
CN110132302A (en) * | 2019-05-20 | 2019-08-16 | 中国科学院自动化研究所 | Merge binocular vision speedometer localization method, the system of IMU information |
CN112884649A (en) * | 2021-02-06 | 2021-06-01 | 哈尔滨理工大学 | B-spline-based image stitching feature point extraction algorithm |
CN114143517A (en) * | 2021-10-26 | 2022-03-04 | 深圳华侨城卡乐技术有限公司 | Fusion mask calculation method and system based on overlapping area and storage medium |
CN114283065A (en) * | 2021-12-28 | 2022-04-05 | 北京理工大学 | ORB feature point matching system and matching method based on hardware acceleration |
CN114283065B (en) * | 2021-12-28 | 2024-06-11 | 北京理工大学 | ORB feature point matching system and method based on hardware acceleration |
Also Published As
Publication number | Publication date |
---|---|
CN106683046B (en) | 2020-07-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106683046B (en) | Image real-time splicing method for police unmanned aerial vehicle reconnaissance and evidence obtaining | |
Zhang et al. | Deep-IRTarget: An automatic target detector in infrared imagery using dual-domain feature extraction and allocation | |
Ashraf et al. | Dogfight: Detecting drones from drones videos | |
WO2018076138A1 (en) | Target detection method and apparatus based on large-scale high-resolution hyper-spectral image | |
CN112818903A (en) | Small sample remote sensing image target detection method based on meta-learning and cooperative attention | |
US20210081695A1 (en) | Image processing method, apparatus, electronic device and computer readable storage medium | |
CN103020985B (en) | A kind of video image conspicuousness detection method based on field-quantity analysis | |
US8861853B2 (en) | Feature-amount calculation apparatus, feature-amount calculation method, and program | |
CN107507172A (en) | Merge the extra high voltage line insulator chain deep learning recognition methods of infrared visible ray | |
CN103020992B (en) | A kind of video image conspicuousness detection method based on motion color-associations | |
CN105405154A (en) | Target object tracking method based on color-structure characteristics | |
CN109614936B (en) | Layered identification method for remote sensing image airplane target | |
Yang et al. | Beyond digital domain: Fooling deep learning based recognition system in physical world | |
CN105427350A (en) | Color image replication tamper detection method based on local quaternion index moment | |
CN112560852A (en) | Single-stage target detection method with rotation adaptive capacity based on YOLOv3 network | |
CN108010065A (en) | Low target quick determination method and device, storage medium and electric terminal | |
CN104966054A (en) | Weak and small object detection method in visible image of unmanned plane | |
CN115661754B (en) | Pedestrian re-recognition method based on dimension fusion attention | |
Sun et al. | IRDCLNet: Instance segmentation of ship images based on interference reduction and dynamic contour learning in foggy scenes | |
CN111241943B (en) | Scene recognition and loopback detection method based on background target and triple loss | |
Liu et al. | Multi-scale feature fusion UAV image object detection method based on dilated convolution and attention mechanism | |
CN105023264A (en) | Infrared image remarkable characteristic detection method combining objectivity and background property | |
Jin Kim et al. | Learned contextual feature reweighting for image geo-localization | |
CN104573703A (en) | Method for quickly identifying power transmission line based on partial derivative distribution and boundary strategy | |
Liang et al. | Cross-layer triple-branch parallel fusion network for small object detection in uav images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right |
Effective date of registration: 20220214 Address after: No.19 Keyuan Road, Lixia District, Jinan City, Shandong Province Patentee after: Shandong public safety inspection and Testing Technology Co.,Ltd. Address before: 250014 No. 19, ASTRI Road, Ji'nan, Shandong Patentee before: INFORMATION Research Institute OF SHANDONG ACADEMY OF SCIENCES |