CN105631860A - Local sorted orientation histogram descriptor-based image correspondence point extraction method - Google Patents


Info

Publication number
CN105631860A
CN105631860A (application CN201510965338.5A; granted as CN105631860B)
Authority
CN
China
Prior art keywords: dimension, neighborhood, feature point, pixel, image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510965338.5A
Other languages
Chinese (zh)
Other versions
CN105631860B (en)
Inventor
王山虎
郝雪涛
王峰
吕江安
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Center for Resource Satellite Data and Applications CRESDA
Original Assignee
China Center for Resource Satellite Data and Applications CRESDA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Center for Resource Satellite Data and Applications CRESDA filed Critical China Center for Resource Satellite Data and Applications CRESDA
Priority to CN201510965338.5A priority Critical patent/CN105631860B/en
Publication of CN105631860A publication Critical patent/CN105631860A/en
Application granted granted Critical
Publication of CN105631860B publication Critical patent/CN105631860B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing
    • G06T2207/10044Radar image

Abstract

The invention discloses an image correspondence point extraction method based on a local sorted orientation histogram (LSOH) descriptor. Because scene features in SAR images are highly similar, the block gradient location-orientation histogram descriptor of SIFT (Scale Invariant Feature Transform) lacks discriminative power; to address this, a novel descriptor based on a local sorted orientation histogram is proposed. Directional derivatives in eight directions are computed for each pixel in a feature neighborhood, the directions of the largest and second-largest derivatives are taken as feature primitives, the neighborhood is divided into blocks, the distribution of feature primitives is counted, and a descriptor vector is formed. By adding the second-largest direction and its position information, the neighborhood is characterized more completely and the discriminative power of the descriptor is improved. Experimental results on mountain, urban, and rural scenes show that the method better captures differences between features, strengthens the discriminative power of the descriptor, extracts more matching points, and is better suited to SAR image matching.

Description

Image correspondence point extraction method based on a local sorted orientation histogram descriptor
Technical field
The invention belongs to the field of image processing and relates to a method for extracting correspondence points from remote sensing images; it is applicable to SAR image registration, mosaicking, block adjustment, and similar tasks.
Background technology
SAR image correspondence point extraction is the task of extracting matching feature points from two SAR images that differ in geometry and gray-level distortion; it is a key technology for many applications such as image registration, image mosaicking, and target detection and recognition.
In recent years, methods based on invariant features have been a research hotspot for automatic correspondence point extraction. Such methods divide into four stages: feature detection, feature description, feature matching, and match screening. First, stable point, line, or region features are extracted from the image with a detector; then the structure, shape, texture, and other information of each feature neighborhood are extracted to form a descriptor; matching points are obtained by computing the similarity between descriptors; finally, a consistency-check algorithm filters out the correct matches. SIFT (Scale Invariant Feature Transform) is the most widely used method of this kind. It first detects stable blob points in the Gaussian scale space of the image and determines a neighborhood size, giving scale invariance; it then computes a principal direction for each feature point from the gradient orientation histogram of its neighborhood, giving rotation invariance; finally it computes a block gradient orientation histogram to form the descriptor, which has strong discriminative power as well as local affine and gray-level invariance. Many later algorithms improve on SIFT within its block + feature primitive + histogram framework. SURF (Speeded-Up Robust Features) accumulates Haar wavelet responses in the x and y directions, together with their absolute values, into a histogram; GLOH (Gradient Location-Orientation Histogram) divides the feature neighborhood into sector regions in log-polar coordinates and accumulates gradient orientation histograms; WLD (Weber Local Descriptor) accumulates a two-dimensional histogram in which one dimension is the gradient direction and the other is the ratio of gray-value change in a 3*3 neighborhood relative to the center pixel.
Natural scene images processed in computer vision have relatively high resolution and contain rich structural and textural information; extracting these features distinguishes different local image patterns well, so the descriptors above work effectively. SAR images, however, are a kind of remote sensing image whose resolution is far lower than that of natural scene images. SAR mainly images large regions of the Earth's surface, and common surface scenes include mountains, water bodies, buildings, and vegetation. These scenes contain limited detail, and features within one image are highly similar to those of like scenes, making them hard to distinguish; for example, when imaging rural areas, the similarity between different farm fields is very high. For SAR image matching, therefore, the discriminative power of the descriptors above is limited.
Summary of the invention
The technical problem solved by the invention is: to overcome the insufficient ability of the SIFT descriptor vector to distinguish similar SAR image scenes, an image correspondence point extraction method based on a Local Sorted Orientation Histogram (LSOH) descriptor is proposed. It adopts SIFT's block + feature primitive + histogram framework but improves the way feature primitives are extracted, so that differences between features are weighed better, the discriminative power of the descriptor is strengthened, more matching points are extracted, and the method is better suited to SAR image matching.
The technical solution of the invention is an image correspondence point extraction method based on a local sorted orientation histogram descriptor, comprising the following steps:
(1) Input a reference image and an image to be matched, and process both as original images with the method of steps (2) to (4);
(2) Build a Gaussian scale space by convolving the original image with Gaussian kernels of steadily increasing variance, build a difference-of-Gaussians scale space by subtracting adjacent layers of the Gaussian scale space, compare each pixel with its 8 neighbors in the same layer and with the 3*3 neighborhoods at the same position in the layers above and below, and select the extreme points as feature points;
(3) For each feature point, compute the gradient magnitude and direction of every pixel in its neighborhood, accumulate each pixel's gradient magnitude into the histogram bin of its gradient direction to build a gradient orientation histogram, and take the direction of maximum amplitude as the principal direction; then rotate the neighborhood clockwise by the angle of the principal direction so that the principal direction becomes horizontal;
(4) Build the local sorted orientation histogram descriptor of each feature point, in the following concrete steps:
(41) For each pixel in the feature point neighborhood, compute the directional derivatives of the pixel in 8 directions from its 3*3 neighborhood using Sobel operators, and find the directions of the largest and second-largest responses;
(42) Use dimensions 0 to 7 to represent the 8 possible orientations of the largest-response direction, with east as dimension 0, northeast as dimension 1, and, continuing counterclockwise, north, northwest, west, southwest, south, and southeast as dimensions 2 to 7; use dimensions 8 to 15 to represent the 8 possible orientations of the second-largest direction, with east as dimension 8, northeast as dimension 9, and dimensions 10 to 15 determined counterclockwise in the same way. According to the result of step (41), determine the dimensions of the largest and second-largest response directions and weight them with the corresponding directional response values, forming a 16-dimensional vector in which only the dimensions of the largest and second-largest directions have values and all other dimensions are 0;
(43) Divide the feature point neighborhood into 4*4 sub-blocks and, for each sub-block, accumulate the direction weights of every pixel into the corresponding direction bins, namely:

H(i) = Σ_{j=0}^{N-1} p_j(i),  i ∈ [0, 15]

where p_j(i) denotes the value of the i-th dimension of the direction vector of the j-th pixel in the sub-block, and N is the number of pixels in the sub-block;

(44) Concatenate the 16-dimensional local sorted orientation histograms of the 4*4 sub-blocks in order to obtain a 256-dimensional descriptor vector, forming the histogram descriptor;
(5) For the two images, traverse each feature point on the reference image and find its nearest-neighbor and second-nearest-neighbor feature points on the image to be matched by comparing histogram descriptors, computing the Euclidean distances from the reference feature point to the two candidates. If the ratio of these distances is below a specified threshold, judge the reference feature point and its nearest neighbor on the image to be matched to be a matching pair; finally, screen out all the correct matches with the RANSAC algorithm to obtain the correspondence points of the two images.
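The extremum test of step (2) above can be sketched in a few lines. This is a minimal illustration, not the patent's implementation; the function name and the tie-handling convention are my own.

```python
import numpy as np

def is_scale_space_extremum(dog, s, y, x):
    """Step (2): test whether voxel (s, y, x) of a difference-of-Gaussians
    stack (indexed scale, row, column) is an extremum among its 26
    neighbours -- the 3*3 patches at the same position in its own layer and
    in the layers above and below. A tie with a neighbour still counts as
    an extremum in this sketch."""
    cube = dog[s - 1:s + 2, y - 1:y + 2, x - 1:x + 2]
    v = dog[s, y, x]
    return v == cube.max() or v == cube.min()
```

In practice the surviving extrema would then pass through the contrast and edge tests of step 4 of the detailed description before being accepted as feature points.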
Compared with the prior art, the advantage of the invention is as follows. Within the block + feature primitive + histogram framework, SIFT extracts the gradient direction of each pixel as the basic feature primitive, but the gradient direction describes only the direction of greatest gray-level change in the pixel's neighborhood. The local sorted orientation histogram descriptor of the invention extends the gradient direction and characterizes the neighborhood more completely: it computes the directional derivatives in eight directions around a pixel, sorts these response values, and obtains the direction positions of the largest and second-largest responses. Under the influence of noise and deformation, the values in a pixel's neighborhood change in all directions, which may affect the ordering of the directional derivatives; in general, however, directions with larger responses are more stable than directions with smaller responses. Balancing discriminative power, robustness, and descriptor dimensionality, the local sorted orientation histogram descriptor of the invention therefore uses the two largest directions to characterize the fundamental properties of each pixel, significantly strengthening the discriminative power of the descriptor and markedly improving its ability to distinguish similar SAR image scenes.
Brief description of the drawings
Fig. 1 is a schematic diagram of the directional derivative computation templates of the invention;
Fig. 2 is a schematic diagram of the direction coding of the invention, where Fig. 2(a) is the coding of the largest direction and Fig. 2(b) is the coding of the second-largest direction;
Fig. 3 is a schematic diagram of directional derivative computation in the invention;
Fig. 4 is the direction histogram of one pixel in the invention;
Fig. 5 is a schematic diagram of two different local image patterns in the invention;
Fig. 6 is the flow chart of the method of the invention;
Fig. 7 compares the performance of SIFT (a) and LSOH (b) on an urban scene;
Fig. 8 compares the performance of SIFT (a) and LSOH (b) on a rural scene;
Fig. 9 compares the performance of SIFT (a) and LSOH (b) on a mountain scene.
Detailed description of the invention
For SAR image matching, the high degree of similarity between scene features must be fully taken into account, and the discriminative power of the descriptor strengthened with emphasis.
SIFT partitions the feature point neighborhood into 4*4 blocks, accumulates the gradient orientation histogram of each sub-block, and concatenates them to form the descriptor vector. The gradient direction has a certain discriminative power, but it is only the direction of greatest change in pixel gray value; relying solely on this largest direction and ignoring the others leads to insufficient discrimination. In view of this, the invention proposes a new Local Sorted Orientation Histogram (LSOH) descriptor: it first computes the directional derivatives of each point in eight directions, then takes the largest and second-largest directional responses as the description of that point, and finally partitions the feature point neighborhood into 4*4 blocks to form the histogram descriptor.
Concretely, for each pixel in the feature point neighborhood, the directional derivatives of the point in 8 directions are first computed from its 3*3 neighborhood with Sobel operators; the templates are shown in Fig. 1. The directional response values of each pair of opposite directions (such as east and west) are equal in absolute value and opposite in sign, so only the responses of 4 directions need be computed, the other 4 being their negations. These 8 response values are then sorted. Responses symmetric about the center pixel have opposite signs: a positive value means the gray value increases along the direction, and a negative value means it decreases. The responses are therefore compared directly by signed value, without taking absolute values, to find the largest and second-largest response directions. Finally, following the coding rule of Fig. 2, dimensions 0 to 7 represent the 8 orientations of the largest direction (east is dimension 0, northeast is dimension 1, and dimensions 2 to 7 follow counterclockwise), and dimensions 8 to 15 represent the 8 orientations of the second-largest direction (east is dimension 8, northeast is dimension 9, and dimensions 10 to 15 follow counterclockwise). The dimensions of the largest and second-largest directions are determined and weighted with the corresponding response values, ultimately forming a 16-dimensional vector in which only two dimensions have values and the rest are 0.
The left part of Fig. 3 gives the values of a central pixel and its 3*3 neighborhood; the directional derivatives of the center pixel in the eight directions are shown on the right, and as a histogram in Fig. 4(a). Sorting them gives north as the largest direction and northeast as the second largest, i.e. dimensions 2 and 9, with response values 39 and 32 respectively; dimensions 2 and 9 of the direction vector are therefore weighted with 39 and 32, all other dimensions are 0, and the resulting direction vector is shown in Fig. 4(b).
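The per-pixel encoding can be sketched as follows. The four 3*3 templates are one plausible set of rotated Sobel kernels; the patent's Fig. 1 templates are not reproduced in the text, so the kernels, the function name, and the test patch are assumptions of this sketch.

```python
import numpy as np

# Assumed rotated-Sobel directional-derivative templates (Fig. 1 analogue).
SOBEL_E  = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_NE = np.array([[ 0, 1, 2], [-1, 0, 1], [-2, -1, 0]], dtype=float)
SOBEL_N  = np.array([[ 1, 2, 1], [ 0, 0, 0], [-1, -2, -1]], dtype=float)
SOBEL_NW = np.array([[ 2, 1, 0], [ 1, 0, -1], [ 0, -1, -2]], dtype=float)

def direction_vector(patch):
    """Map a 3*3 patch to the 16-dim per-pixel vector of the LSOH scheme.

    Dimensions 0-7 encode the largest-response direction (E=0, NE=1, then
    counterclockwise to SE=7); dimensions 8-15 encode the second-largest
    direction the same way. Only those two dimensions are nonzero, each
    weighted by its signed response value."""
    patch = np.asarray(patch, dtype=float)
    # Responses in 4 directions; the opposite 4 are their negations.
    half = [np.sum(patch * k) for k in (SOBEL_E, SOBEL_NE, SOBEL_N, SOBEL_NW)]
    responses = half + [-r for r in half]   # order: E, NE, N, NW, W, SW, S, SE
    order = np.argsort(responses)           # ascending, signed (no abs)
    d1, d2 = order[-1], order[-2]           # largest and second largest
    v = np.zeros(16)
    v[d1] = responses[d1]
    v[8 + d2] = responses[d2]
    return v
```

For a patch whose gray values rise toward the southeast, for example, the largest response lands in dimension 7 (southeast) and the second-largest in dimension 8 (east as second direction), mirroring the worked example of Figs. 3 and 4.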
After the direction vector of each pixel is obtained, the descriptor is constructed in the same way as in SIFT: the feature neighborhood is divided into 4*4 sub-blocks and the direction distribution of each sub-block is accumulated, i.e. the direction weights of every pixel are added into the corresponding direction bins, as in formula (1):

H(i) = Σ_{j=0}^{N-1} p_j(i),  i ∈ [0, 15]    (1)

where p_j(i) denotes the value of the i-th dimension of the direction vector of the j-th pixel and N is the number of pixels in the sub-block. The 16-dimensional histogram vectors of the 16 sub-blocks of the 4*4 partition are then concatenated in order to obtain the 256-dimensional descriptor vector.
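The accumulation of formula (1) and the final concatenation can be sketched as below; the function name and the (H, W, 16) array layout are conventions of this sketch, not the patent's.

```python
import numpy as np

def lsoh_descriptor(direction_vectors):
    """Cascade per-pixel 16-dim direction vectors into the 256-dim descriptor.

    `direction_vectors` is an (H, W, 16) array of per-pixel vectors for a
    feature neighborhood whose sides are multiples of 4. The neighborhood is
    split into a 4*4 grid of sub-blocks, each sub-block's vectors are summed
    per dimension (formula (1)), and the 16 sub-histograms are concatenated."""
    dv = np.asarray(direction_vectors, dtype=float)
    h, w, _ = dv.shape
    assert h % 4 == 0 and w % 4 == 0
    bh, bw = h // 4, w // 4
    hist = np.empty((4, 4, 16))
    for r in range(4):
        for c in range(4):
            block = dv[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            hist[r, c] = block.sum(axis=(0, 1))  # H(i) = sum over pixels j of p_j(i)
    return hist.reshape(256)
```

For an 8*8 neighborhood each sub-block covers 2*2 pixels, so a stack of all-ones vectors yields a descriptor whose every entry is 4.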
Within the block + feature primitive + histogram framework, SIFT extracts the gradient direction of each pixel as the basic feature primitive. Compared with relying simply on gray values or gradient magnitudes, the gradient direction copes with more complex gray-level changes and can stably distinguish different local image patterns. But the gradient direction describes only the direction of greatest gray-level change in the pixel neighborhood. The LSOH of the invention can be viewed as an extension of the gradient direction that characterizes the neighborhood more completely: it computes the directional derivatives in eight directions around the pixel, sorts the response values, and obtains the direction positions of the largest and second-largest responses. Under the influence of noise and deformation the neighborhood values change in all directions, which may affect the ordering of the directional derivatives; in general, however, directions with larger responses are more stable than those with smaller responses. Balancing discriminative power, robustness, and descriptor dimensionality, LSOH therefore uses the two largest directions to characterize the fundamental properties of each pixel.
When the histogram of each block is accumulated, the largest and second-largest direction responses are not added into the same direction bins but are counted separately. Fig. 5 gives an intuitive example: Figs. 5(a) and 5(b) are two different local image patterns. In Fig. 5(a), the largest direction of the first sub-pattern is northeast with response 80 and its second-largest is due east with response 60; the largest direction of the second sub-pattern is due east with response 110 and its second-largest is northeast with response 100. If the largest and second-largest directions are not distinguished and all northeast and east responses are accumulated together, the resulting histogram (third row) has a northeast response of 180 and an east response of 170. In Fig. 5(b), the largest direction of the first sub-pattern is due east with response 70 and its second-largest is northeast with response 40; the largest direction of the second sub-pattern is northeast with response 140 and its second-largest is due east with response 100. Its merged histogram is identical to that of Fig. 5(a) (third row), so the two local image patterns (a) and (b) cannot be distinguished. If, however, the largest and second-largest directions are distinguished and their responses accumulated separately, the histograms obtained are as shown in the fourth row, and this form of histogram effectively distinguishes the two local image patterns.
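The Fig. 5 argument can be checked with a few lines of arithmetic. Each sub-pattern is written as (max direction, max value, second direction, second value) using the patent's direction codes E=0, NE=1; the numbers are those quoted from Fig. 5, while the function names are mine.

```python
# Sub-patterns of Fig. 5(a) and 5(b): (max_dir, max_val, second_dir, second_val).
pattern_a = [(1, 80, 0, 60), (0, 110, 1, 100)]
pattern_b = [(0, 70, 1, 40), (1, 140, 0, 100)]

def merged_hist(pattern):
    """8-bin histogram that ignores the max/second-max distinction."""
    h = [0.0] * 8
    for d1, v1, d2, v2 in pattern:
        h[d1] += v1
        h[d2] += v2
    return h

def split_hist(pattern):
    """16-bin histogram: bins 0-7 for max directions, 8-15 for second max."""
    h = [0.0] * 16
    for d1, v1, d2, v2 in pattern:
        h[d1] += v1
        h[8 + d2] += v2
    return h
```

The merged histograms of the two patterns come out identical (NE = 180, E = 170 in both), while the split histograms differ, which is exactly the point of counting the two directions separately.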
Fig. 6 shows the flow chart of the method of the invention; its core is to replace the block gradient orientation histogram descriptor of SIFT with LSOH. The main steps are as follows:
(1) Feature detection
Step 1: build the scale space by convolving the original image with Gaussian kernels of steadily increasing variance, as in formula (2):

L(x, y, σ) = I(x, y) * G(x, y, σ)    (2)

where I(x, y) is the original image, L(x, y, σ) is its Gaussian scale-space image, * denotes convolution, and G(x, y, σ) is the Gaussian function with standard deviation σ, as in formula (3):

G(x, y, σ) = (1 / (2πσ²)) · e^(-(x² + y²) / (2σ²))    (3)
Step 2: build the difference-of-Gaussians scale space C(x, y, σ) by subtracting adjacent layers of the Gaussian scale space, as in formula (4):

C(x, y, σ) = I(x, y) * (G(x, y, kσ) - G(x, y, σ)) = L(x, y, kσ) - L(x, y, σ)    (4)

where k > 1, and L(x, y, kσ) and L(x, y, σ) are adjacent scale-space images.
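Formulas (2) to (4) can be sketched with a separable Gaussian convolution in plain numpy; the kernel radius, edge padding, and function names are choices of this sketch rather than anything specified in the patent.

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    """1-D Gaussian kernel for G(., sigma), normalized to sum to 1."""
    if radius is None:
        radius = int(3 * sigma + 0.5)  # assumed truncation at ~3 sigma
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def gaussian_blur(img, sigma):
    """Separable convolution L(x, y, sigma) = I(x, y) * G (formula (2))."""
    k = gaussian_kernel(sigma)
    pad = len(k) // 2
    out = np.pad(np.asarray(img, dtype=float), pad, mode='edge')
    out = np.apply_along_axis(lambda r: np.convolve(r, k, 'valid'), 1, out)
    out = np.apply_along_axis(lambda c: np.convolve(c, k, 'valid'), 0, out)
    return out

def difference_of_gaussians(img, sigma, k=2**0.5):
    """C(x, y, sigma) = L(x, y, k*sigma) - L(x, y, sigma) (formula (4))."""
    return gaussian_blur(img, k * sigma) - gaussian_blur(img, sigma)
```

As a sanity check, a constant image is unchanged by the normalized blur, so its difference-of-Gaussians response is zero everywhere.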
Step 3: compare each pixel with its 8 neighbors in the same layer and with the 3*3 neighborhoods at the same position in the layers above and below, and select the extreme points as candidate feature points.

Detecting extreme points in the difference-of-Gaussians scale space is equivalent to detecting extreme points of the Laplacian response in the scale space, and greatly reduces the amount of computation.

Step 4: set thresholds manually according to the desired number of feature points to remove low-contrast points, which are easily affected by noise, and edge points, which are hard to match, obtaining the feature points.
The contrast of an extreme point from step 3 is measured by its response after interpolation, computed as in formula (10) of step 5. Edge points are judged by the ratio of the principal curvatures, which can be obtained from the eigenvalues of the Hessian matrix; the Hessian matrix is formula (5):

H = | D_xx  D_xy |
    | D_xy  D_yy |    (5)

where D denotes the two-dimensional partial derivatives. Let α be the larger eigenvalue of H, β the smaller, and r their ratio, so that α = rβ. Borrowing the computation method of the Harris corner detector, the eigenvalues need not be computed explicitly; considering the trace and determinant of H, their ratio is formula (6):

Tr(H)² / Det(H) = (α + β)² / (αβ) = (rβ + β)² / (rβ²) = (r + 1)² / r    (6)

For r ≥ 1, (r + 1)² / r increases with r and is minimal at r = 1, i.e. when α and β are equal. Feature points can therefore be screened according to formula (7), where T is a specified threshold; a feature point that does not satisfy the formula is discarded:

Tr(H)² / Det(H) < T    (7)
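The edge test of formulas (5) to (7) reduces to a few arithmetic operations. The patent leaves the threshold T free; writing it as (r + 1)²/r with r = 10 follows common SIFT practice and is an assumption of this sketch, as is the rejection of points with non-positive determinant.

```python
def passes_edge_test(dxx, dyy, dxy, r=10.0):
    """Edge rejection per formulas (5)-(7): keep a point only when
    Tr(H)^2 / Det(H) < (r + 1)^2 / r.

    Returns False for edge-like points (one principal curvature much larger
    than the other) and for saddle points, whose determinant is negative."""
    tr = dxx + dyy
    det = dxx * dyy - dxy * dxy
    if det <= 0:
        return False
    return tr * tr / det < (r + 1) ** 2 / r
```

An isotropic blob (equal curvatures) passes, while a point with one curvature a hundred times the other, as along a contour edge, is rejected.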
Step 5: improve the positioning accuracy of the feature points using a Taylor expansion.

Expanding the difference-of-Gaussians scale space D(x, y, σ) in a Taylor series at the extreme point X₀ = (x₀, y₀, σ₀)ᵀ and discarding terms above second order gives formula (8):

D(X) = D + (∂Dᵀ/∂X) X + (1/2) Xᵀ (∂²D/∂X²) X

∂D/∂X = (∂D/∂x, ∂D/∂y, ∂D/∂σ)ᵀ

∂²D/∂X² = | ∂²D/∂x²   ∂²D/∂x∂y  ∂²D/∂x∂σ |
          | ∂²D/∂x∂y  ∂²D/∂y²   ∂²D/∂y∂σ |
          | ∂²D/∂x∂σ  ∂²D/∂y∂σ  ∂²D/∂σ²  |    (8)

where D and its first and second derivatives take their values at the extreme point X₀, and X = (x, y, σ)ᵀ is the offset relative to X₀. Setting the first partial derivative of D(X) with respect to X to zero gives the extreme point position of formula (9):

X_new = -(∂²D/∂X²)⁻¹ (∂D/∂X)    (9)

If the position or scale of the computed X_new differs from that of X₀ by more than 0.5, the extreme point should be moved to the sample position closest to X_new. The value D(X) at the final extreme point measures the local image contrast, as in formula (10):

D(X_new) = D + (1/2) (∂Dᵀ/∂X) X_new    (10)

If the absolute value of this quantity is less than 0.03, the extreme point is discarded.
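Formulas (9) and (10) amount to solving one 3*3 linear system. The sketch below assumes the gradient and Hessian have already been estimated by finite differences at the extremum; folding the 0.5-offset check into a single keep flag, rather than iterating to the neighbouring sample as the text describes, is a simplification of this sketch.

```python
import numpy as np

def refine_extremum(grad, hessian, d_value, contrast_thresh=0.03):
    """Sub-pixel refinement of a DoG extremum (formulas (8)-(10)).

    `grad` is the 3-vector (dD/dx, dD/dy, dD/dsigma), `hessian` the 3*3
    second-derivative matrix at the detected extremum, and `d_value` is D
    there. Returns (offset, contrast, keep): the offset X_new of formula (9),
    the interpolated D(X_new) of formula (10), and whether the point survives
    the contrast test with the offset inside half a sample step."""
    grad = np.asarray(grad, dtype=float)
    hessian = np.asarray(hessian, dtype=float)
    offset = -np.linalg.solve(hessian, grad)        # formula (9)
    contrast = d_value + 0.5 * grad.dot(offset)     # formula (10)
    keep = abs(contrast) >= contrast_thresh and np.all(np.abs(offset) <= 0.5)
    return offset, contrast, keep
```

For the paraboloid D = 1 - (x - 0.2)² - y² - σ² sampled at the origin, the refinement recovers the true peak offset 0.2 and the true peak value 1.0.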
(2) Feature description

First, the gradient magnitude m and direction θ of each pixel in the feature point neighborhood are computed according to:

m(x, y) = sqrt((L(x+1, y) - L(x-1, y))² + (L(x, y+1) - L(x, y-1))²)
θ(x, y) = tan⁻¹((L(x, y+1) - L(x, y-1)) / (L(x+1, y) - L(x-1, y)))

where L(x, y) is the gray value at (x, y). A gradient orientation histogram is then accumulated: the gradient directions are divided into bins of 10°, 36 bins in total, and if the gradient direction of a pixel falls into the i-th bin its gradient magnitude is added to that bin. The direction represented by the bin of maximum amplitude is the principal direction; any other bin whose amplitude exceeds 0.8 times the maximum is also taken as a principal direction. Assigning multiple principal directions effectively improves the robustness of the algorithm to noise. After the bin of the principal direction is obtained, a more accurate principal direction is interpolated from the amplitudes of the two adjacent bins.
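The gradient computation and the 36-bin orientation histogram can be sketched as follows. Using arctan2 for a full 0-360° angle, and returning bin centers instead of the neighbor-interpolated peak described above, are simplifications of this sketch; the function names are mine.

```python
import numpy as np

def grad_mag_ori(L, x, y):
    """Finite-difference gradient magnitude and orientation (degrees) at
    pixel (x, y) of gray image L, indexed L[y, x]."""
    dx = L[y, x + 1] - L[y, x - 1]
    dy = L[y + 1, x] - L[y - 1, x]
    m = np.hypot(dx, dy)
    theta = np.degrees(np.arctan2(dy, dx)) % 360.0
    return m, theta

def principal_directions(mags, thetas):
    """Accumulate magnitudes into a 36-bin (10-degree) orientation histogram;
    the peak bin gives the principal direction, and any bin reaching 0.8x the
    peak adds another. Bin centers stand in for the interpolated peak."""
    hist = np.zeros(36)
    for m, t in zip(mags, thetas):
        hist[int(t // 10) % 36] += m
    peak = hist.max()
    return [b * 10 + 5 for b in range(36) if hist[b] >= 0.8 * peak]
```

On the ramp image L[y, x] = 3y + x, for instance, the central differences give dx = 2 and dy = 6, so the magnitude is sqrt(40).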
After the principal direction of a feature point is obtained, the neighborhood is rotated clockwise by the angle of the principal direction so that the principal direction becomes horizontal, and the descriptor vector is then generated with LSOH on the rotated image: the directional derivatives of each point in eight directions are computed, the largest and second-largest directional responses are taken as the description of that point, the feature point neighborhood is partitioned into 4*4 sub-blocks, the direction distribution of each sub-block is accumulated by adding the direction weights of every pixel into the corresponding direction bins, and the 4*4 sub-blocks are concatenated into the 256-dimensional histogram descriptor vector.
(3) Feature matching and match screening

After the descriptors are generated, matching pairs are established from the Euclidean distances between descriptors using the ratio method: if the ratio of a feature point's distance to its nearest neighbor over its distance to its second-nearest neighbor is below a specified threshold, the feature point and its nearest neighbor are judged to be a matching pair. The accuracy of this method is higher than that of a simple nearest-neighbor distance threshold. Because the descriptor dimensionality is high, the Best-Bin-First (BBF) method is adopted to accelerate matching, and the correct matches are finally screened out with the Random Sample Consensus (RANSAC) algorithm.
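The ratio test can be sketched with a brute-force nearest-neighbor search, which stands in for the Best-Bin-First approximation; the 0.8 default ratio and the function name are assumptions of this sketch, and RANSAC screening would follow on the returned pairs.

```python
import numpy as np

def ratio_match(desc_ref, desc_tgt, ratio=0.8):
    """Nearest/second-nearest ratio test on Euclidean descriptor distance.

    `desc_ref` and `desc_tgt` are (n, d) arrays of descriptors from the
    reference image and the image to be matched. A reference descriptor is
    matched to its nearest target descriptor only when that distance is
    below `ratio` times the distance to the second-nearest one."""
    matches = []
    for i, d in enumerate(desc_ref):
        dists = np.linalg.norm(desc_tgt - d, axis=1)
        order = np.argsort(dists)
        nearest, second = order[0], order[1]
        if dists[nearest] < ratio * dists[second]:
            matches.append((i, nearest))
    return matches
```

A descriptor whose best and second-best candidates are nearly equidistant is discarded, which is what suppresses the ambiguous matches that highly similar SAR scenes otherwise produce.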
A correspondence point extraction algorithm must address both discriminative power and invariance. SIFT achieves scale invariance by building a scale space, rotation invariance by computing a principal direction, and robustness to affine distortion through Gaussian weighting and bilinear interpolation of the descriptor. LSOH is built on the basis of SIFT and changes only the feature primitive used to form the descriptor, so it does not affect invariance and changes only discriminative power. The experiments below therefore focus on comparing the discriminative power of LSOH and SIFT in different scenes. Typical scenes include flat rural terrain, urban areas with moderate relief, and mountain areas with large relief. Both descriptors are matched on the features extracted by the SIFT detector, so the numbers of features are identical; the experiments mainly compare the number of matches after ratio-method matching and the number of correct matches after RANSAC screening. The experiments also compare the matching speed of the two descriptors; the computer used has a Core i5-2400 CPU at 3.10 GHz and 2 GB of memory.
Three representative experimental results, for urban, rural, and mountain scenes acquired by different sensors, are given below, with the concrete results analyzed as follows.
1. Urban scene experiment

Cities contain large numbers of buildings that produce double- and multiple-bounce reflections, so images acquired under different imaging conditions show large gray-level distortions. Fig. 7 shows two images of Beijing: Fig. 7(a) is a RADARSAT-1 image with 3 m resolution and Fig. 7(b) is an ALOS image with 5 m resolution. The speckle noise is severe, the resolution is relatively low, the similarity between scene features is high and hard to distinguish, and matching is therefore difficult. Table 1 and Fig. 7 give the match extraction results of SIFT and LSOH: SIFT obtains 66 matches, of which 8 are correct, 12% of all its matches; LSOH obtains 87 matches, of which 17 are correct, 20% of all its matches. In both absolute number and relative proportion, LSOH outperforms SIFT. Because LSOH extracts more matches it should take longer than SIFT, but because the proportion of correct matches it extracts is higher, RANSAC takes less time, and the overall time is no more than that of SIFT.

Table 1. Comparison of correspondence point results in the urban scene
Two, rural area scene experiment
Grass roots physical features is smooth, mostly is the scene such as farmland, river, is affected less by image-forming condition, but if the image of Various Seasonal acquisition, scene itself there occurs change, and the match point also resulting in extraction is less. Fig. 8 and Biao 2 gives the rural image near two width Beijing that different phase obtains, it can be seen that farmland has a greater change. SIFT is extracted 13 correct match points, accounts for 27%, and LOSH is extracted 21 correct match points in whole match point, accounts for 27% in whole match point. Two kinds of method relative scales are identical, but in absolute quantity, LSOH is the twice nearly of SIFT, and therefore, LSOH is consuming time more than SIFT.
Table 2. Comparison of corresponding-point results under the rural scene
3. Mountain scene experiment
Mountain terrain has large relief, producing layover, shadow, and similar phenomena. Differences in incidence angle during imaging cause the two images to have different geometric distortion, and the radiometric distortion is also large. Fig. 9 shows two mountain images near Mianyang, Sichuan. Both are illuminated from left to right, but at different incidence angles: in the left image the slopes facing the sensor are compressed less than in the right image, the size and position of the shadow regions differ, and the gray-level properties of the two images change considerably. Fig. 9 and Table 3 give the corresponding-point extraction results. SIFT extracted 16 correct match points, 33% of all its matches; LSOH extracted 24 correct match points, 39% of all its matches. In both absolute number and relative proportion, LSOH outperforms SIFT. LSOH extracts more match points than SIFT at a slightly higher correct proportion, and after this trade-off the two algorithms run at comparable speed.
Table 3. Comparison of corresponding-point results under the mountain scene
Content not described in detail in this specification belongs to techniques well known to those skilled in the art.

Claims (1)

1. An image correspondence point extraction method based on a local sorted orientation histogram descriptor, characterized by comprising the following steps:
(1) Input a reference image and an image to be matched, and process both images as original images with the method of steps (2) to (4);
(2) Convolve the original image with Gaussian kernels of steadily increasing variance to build a Gaussian scale space, subtract adjacent layers of the Gaussian scale space to build a difference-of-Gaussians scale space, compare each pixel with its 8-neighborhood in its own layer and with the 3*3 neighborhoods at the same position in the layers above and below, and keep the extreme points found in this way as feature points;
(3) For each feature point, compute the gradient magnitude and direction of every pixel in its neighborhood, accumulate the gradient magnitudes into the bins of the corresponding gradient directions to build a gradient orientation histogram, take the direction of maximum amplitude as the principal direction, and then rotate the neighborhood clockwise by the angle of the principal direction so that the principal direction becomes horizontal;
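The principal-direction computation of step (3) can be sketched per neighborhood as below; the 36-bin histogram and the bin-centre return convention are assumptions, and the clockwise rotation of the neighborhood (an image-resampling step) is omitted.

```python
import numpy as np

def principal_direction(patch, n_bins=36):
    """Step (3): magnitude-weighted gradient orientation histogram of a
    feature-point neighborhood; the peak bin gives the principal direction,
    returned as the bin-centre angle in degrees, counterclockwise from east."""
    gy, gx = np.gradient(patch.astype(float))      # np.gradient returns (d/dy, d/dx)
    mag = np.hypot(gx, gy)                         # gradient magnitude
    ang = np.degrees(np.arctan2(gy, gx)) % 360.0   # gradient direction in [0, 360)
    bins = (ang * n_bins / 360.0).astype(int) % n_bins
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), mag.ravel())     # accumulate magnitudes per bin
    return (hist.argmax() + 0.5) * (360.0 / n_bins)
```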
(4) Build the local sorted orientation histogram descriptor of each feature point, with the following concrete steps:
(41) For each pixel in the feature-point neighborhood, use the Sobel operator on its 3*3 neighborhood to compute the directional derivatives of the pixel in 8 directions, and find the directions of the maximum and second-maximum responses;
(42) Use dimensions 0 to 7 to represent the 8 orientations of the maximum-response direction, where east is dimension 0, northeast is dimension 1, and dimensions 2 to 7 are assigned counterclockwise in turn to north, northwest, west, southwest, south, and southeast; use dimensions 8 to 15 to represent the 8 orientations of the second-maximum direction, where east is dimension 8, northeast is dimension 9, and dimensions 10 to 15 are assigned counterclockwise in turn; according to the result of step (41), determine the dimensions of the maximum- and second-maximum-response directions and weight them with their respective directional response values, forming a 16-dimensional vector in which only the dimensions of the maximum and second-maximum directions are nonzero and all other dimensions are 0;
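Steps (41) and (42) can be sketched per pixel as below. Approximating the eight directional derivatives by projecting the Sobel gradient (gx, gy) onto the eight compass directions is an assumption of this sketch; the patent applies the Sobel operator per direction.

```python
import numpy as np

# 8 compass directions, counterclockwise from east (dims 0..7 and 8..15)
ANGLES = np.radians(np.arange(8) * 45.0)   # E, NE, N, NW, W, SW, S, SE

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
SOBEL_Y = SOBEL_X.T

def direction_vector(nbhd3x3):
    """Steps (41)-(42): directional derivatives of one pixel in 8 directions
    from its 3x3 neighborhood; encode the strongest direction in dims 0-7
    and the second strongest in dims 8-15, weighted by their responses."""
    gx = float((nbhd3x3 * SOBEL_X).sum())
    gy = float((nbhd3x3 * SOBEL_Y).sum())
    resp = gx * np.cos(ANGLES) + gy * np.sin(ANGLES)  # derivative along each direction
    order = np.argsort(resp)                           # ascending responses
    first, second = order[-1], order[-2]
    v = np.zeros(16)
    v[first] = resp[first]          # maximum direction, dims 0-7
    v[8 + second] = resp[second]    # second-maximum direction, dims 8-15
    return v
```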
(43) Divide the feature-point neighborhood into 4*4 sub-blocks, and for each sub-block accumulate the direction weights of every pixel in the sub-block into the bins of the corresponding directions, specifically:
H(i) = Σ_{j=0}^{N-1} p_j(i),  i ∈ [0, 15]
where p_j(i) denotes the value of the i-th dimension of the direction vector of the j-th pixel in the sub-block, and N is the number of pixels in the sub-block;
(44) Concatenate the 16-dimensional local sorted orientation histograms of the 4*4 sub-blocks in turn to obtain a 256-dimensional descriptor vector, forming the histogram descriptor;
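Steps (43) and (44), including the accumulation H(i) = Σ_j p_j(i), can be sketched as follows; the 16×16 neighborhood (giving 4×4-pixel sub-blocks) is an assumed size, not stated in the claim.

```python
import numpy as np

def lsoh_descriptor(dir_vectors):
    """Steps (43)-(44): accumulate H(i) = sum_j p_j(i) over each sub-block
    and concatenate.  dir_vectors has shape (16, 16, 16): the 16-dim
    direction vector of every pixel in an assumed 16x16 neighborhood,
    split into a 4x4 grid of 4x4-pixel sub-blocks."""
    desc = []
    for by in range(4):
        for bx in range(4):
            block = dir_vectors[4*by:4*by + 4, 4*bx:4*bx + 4, :]
            desc.append(block.sum(axis=(0, 1)))   # H(i), i in [0, 15]
    return np.concatenate(desc)                   # 16 blocks x 16 dims = 256
```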
(5) For the two images, traverse each feature point on the reference image and find its nearest-neighbor and second-nearest-neighbor feature points on the image to be matched; using the histogram descriptors, compute the Euclidean distances from the feature point on the reference image to its nearest and second-nearest neighbors on the image to be matched; if the ratio of the nearest-neighbor distance to the second-nearest-neighbor distance is below a specified threshold, judge the feature point on the reference image and its nearest neighbor on the image to be matched to be a match; finally, screen out the correct match points with the RANSAC algorithm to obtain the corresponding points of the two images.
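The nearest/second-nearest screening of step (5) can be sketched as below; reading the threshold as a Lowe-style distance ratio (0.8 here) is an assumption, and the final RANSAC step (e.g. fitting a transform and keeping inliers) is left out.

```python
import numpy as np

def match_descriptors(desc_ref, desc_tgt, ratio=0.8):
    """Step (5) sketch: for each reference descriptor, find its nearest and
    second-nearest neighbors in the target set by Euclidean distance and
    keep the pair when the distance ratio is below `ratio` (the 0.8 value
    is an assumption).  RANSAC outlier rejection would follow."""
    matches = []
    for i, d in enumerate(desc_ref):
        dists = np.linalg.norm(desc_tgt - d, axis=1)  # Euclidean distances
        order = np.argsort(dists)
        nearest, second = order[0], order[1]
        if dists[nearest] < ratio * dists[second]:    # ratio test
            matches.append((i, int(nearest)))
    return matches
```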
CN201510965338.5A 2015-12-21 2015-12-21 Local sorted orientation histogram descriptor-based image correspondence point extraction method Active CN105631860B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510965338.5A CN105631860B (en) 2015-12-21 2015-12-21 Local sorted orientation histogram descriptor-based image correspondence point extraction method

Publications (2)

Publication Number Publication Date
CN105631860A true CN105631860A (en) 2016-06-01
CN105631860B CN105631860B (en) 2018-07-03

Family

ID=56046746

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510965338.5A Active CN105631860B (en) Local sorted orientation histogram descriptor-based image correspondence point extraction method

Country Status (1)

Country Link
CN (1) CN105631860B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100080469A1 (en) * 2008-10-01 2010-04-01 Fuji Xerox Co., Ltd. Novel descriptor for image corresponding point matching
US20110026832A1 (en) * 2009-05-20 2011-02-03 Lemoigne-Stewart Jacqueline J Automatic extraction of planetary image features
CN102663401A (en) * 2012-04-18 2012-09-12 哈尔滨工程大学 Image characteristic extracting and describing method
CN103020945A (en) * 2011-09-21 2013-04-03 中国科学院电子学研究所 Remote sensing image registration method of multi-source sensor
CN103295014A (en) * 2013-05-21 2013-09-11 上海交通大学 Image local feature description method based on pixel location arrangement column diagrams
CN104866851A (en) * 2015-03-01 2015-08-26 江西科技学院 Scale-invariant feature transform (SIFT) algorithm for image matching

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
DI HUANG et al.: "HSOG: A Novel Local Image Descriptor Based on Histograms of the Second-Order Gradient", IEEE TRANSACTIONS ON IMAGE PROCESSING *
SHANHU WANG et al.: "BFSIFT: A Novel Method to Find Feature Matches for SAR Image Registration", IEEE GEOSCIENCE AND REMOTE SENSING LETTERS *
LYU JIANGAN et al.: "Stereo Matching of Three-Line-Array CCD Imagery Based on SIFT Features", SPACECRAFT RECOVERY & REMOTE SENSING *
WANG SHANHU et al.: "Automatic Extraction of Corresponding Points in SAR Images Based on Large-Scale Bilateral SIFT", JOURNAL OF ELECTRONICS & INFORMATION TECHNOLOGY *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113128573A (en) * 2021-03-31 2021-07-16 北京航天飞腾装备技术有限责任公司 Infrared-visible light heterogeneous image matching method
CN113379006A (en) * 2021-08-16 2021-09-10 北京国电通网络技术有限公司 Image recognition method and device, electronic equipment and computer readable medium
CN113379006B (en) * 2021-08-16 2021-11-02 北京国电通网络技术有限公司 Image recognition method and device, electronic equipment and computer readable medium
CN114694040A (en) * 2022-05-31 2022-07-01 潍坊绘圆地理信息有限公司 Data identification method for optical remote sensing data block registration based on dynamic threshold
CN117349764A (en) * 2023-12-05 2024-01-05 河北三臧生物科技有限公司 Intelligent analysis method for stem cell induction data
CN117349764B (en) * 2023-12-05 2024-02-27 河北三臧生物科技有限公司 Intelligent analysis method for stem cell induction data

Also Published As

Publication number Publication date
CN105631860B (en) 2018-07-03

Similar Documents

Publication Publication Date Title
CN107067415B (en) A kind of object localization method based on images match
CN110097093B (en) Method for accurately matching heterogeneous images
Fan et al. Registration of optical and SAR satellite images by exploring the spatial relationship of the improved SIFT
CN104574347A (en) On-orbit satellite image geometric positioning accuracy evaluation method on basis of multi-source remote sensing data
CN111080529A (en) Unmanned aerial vehicle aerial image splicing method for enhancing robustness
CN104599258B (en) A kind of image split-joint method based on anisotropic character descriptor
CN104200461B (en) The remote sensing image registration method of block and sift features is selected based on mutual information image
CN104361590B (en) High-resolution remote sensing image registration method with control points distributed in adaptive manner
CN103065135A (en) License number matching algorithm based on digital image processing
CN103729643A (en) Recognition and pose determination of 3d objects in multimodal scenes
CN105427298A (en) Remote sensing image registration method based on anisotropic gradient dimension space
CN103400384A (en) Large viewing angle image matching method capable of combining region matching and point matching
CN104867126A (en) Method for registering synthetic aperture radar image with change area based on point pair constraint and Delaunay
CN110992263B (en) Image stitching method and system
CN104050675B (en) Feature point matching method based on triangle description
CN103400388A (en) Method for eliminating Brisk (binary robust invariant scale keypoint) error matching point pair by utilizing RANSAC (random sampling consensus)
CN102122359B (en) Image registration method and device
CN102903109B (en) A kind of optical image and SAR image integration segmentation method for registering
CN104200463A (en) Fourier-Merlin transform and maximum mutual information theory based image registration method
CN105631860A (en) Local sorted orientation histogram descriptor-based image correspondence point extraction method
JP5289412B2 (en) Local feature amount calculation apparatus and method, and corresponding point search apparatus and method
CN109493384A (en) Camera position and orientation estimation method, system, equipment and storage medium
CN102446356A (en) Parallel and adaptive matching method for acquiring remote sensing images with homogeneously-distributed matched points
CN102663733A (en) Characteristic points matching method based on characteristic assembly
CN103336964A (en) SIFT image matching method based on module value difference mirror image invariant property

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant