CN102663391B: Image multifeature extraction and fusion method and system

Publication number: CN102663391B (grant); application number: CN201210045645.8A; prior publication: CN102663391A
Country: China
Inventors: 王军 (Wang Jun), 吴金勇 (Wu Jinyong), 王一科 (Wang Yike), 龚灼 (Gong Zhuo)
Original assignee: China Security and Surveillance Technology PRC Inc.
Current assignee: Zhongan Xiao Co., Ltd.
Legal status: Expired - Fee Related
Abstract

The invention provides an image multi-feature extraction and fusion method and system comprising the following steps: extract color features from the image to be matched, match them against the target image, and determine the color similarity; if the color similarity exceeds a preset color similarity threshold, proceed to the next step; extract auxiliary features from the image to be matched, or from the matching region of the image to be matched, and match them against the target image to determine the auxiliary-feature similarity, the auxiliary features comprising at least one of texture features and shape features; on the basis of the color similarity and the auxiliary-feature similarity, make a comprehensive judgment and obtain the comprehensive similarity between the image to be matched and the target image. The method and system match in a cascaded, coarse-to-fine manner, can quickly and accurately determine the similar region between the target image and the image to be matched, achieve fast, efficient, and accurate matching, and save manpower and material resources.

Description

Image multi-feature extraction and fusion method and system
Technical field
The present invention relates to the technical field of image processing, and in particular to an image multi-feature extraction and fusion method and system.
Background art
With the construction of large-scale security surveillance systems such as "Safe City" and "Smart City", the volume of surveillance video data grows larger by the day. How to retrieve the needed images and videos quickly and accurately from massive image and video data has become an increasingly important problem, and the greatest current difficulty in such retrieval is how to obtain a fast and accurate robust feature.
Content-based image and video retrieval extracts the visual features a user is interested in, including color, texture, shape, and other visual signatures, and retrieves the user's query image from large image collections. It realizes retrieval by genuine visual content and is an important breakthrough over keyword search, raising the scientific and technological level of the work, improving management, strengthening law-enforcement supervision, and raising the level of public-security surveillance. For example, the Chinese patent "A comprehensive multi-feature image retrieval method" (publication number: CN101551823, publication date: 2009-10-07) combines the obtained color, texture, and shape features into a total similarity by weighted summation, and "A design-patent image retrieval method with multi-feature fusion" (publication number: CN101847163A, publication date: 2010-09-29) obtains the final inter-image similarity by a distance-weighted fusion of the normalized features.
Existing multi-feature extraction and fusion techniques mainly suffer from the following deficiencies:

1. Slow speed: existing multi-feature extraction and fusion techniques all compare whole images against whole images. As video trends toward high definition, this approach becomes extremely slow and cannot be applied to real-time video retrieval.

2. High false-detection rate under changes such as scaling and rotation of the target: the prior art mostly combines color, edge, and texture features, and because all three features have high false-detection rates under scaling, rotation, and similar changes, it is difficult to put into practice.

3. Existing multi-feature extraction and fusion techniques combine color, edge, and texture in parallel and obtain a final confidence by weighted summation; since color, edge, and texture are not mutually comparable, this weighted summation can produce large errors.
Summary of the invention
Aspects and advantages of the present invention are set forth in part in the following description, are in part apparent from the description, or may be learned by practicing the invention.

To overcome the problems of existing multi-feature extraction and fusion techniques, such as slow speed, accumulation of the unfavorable factors of each feature, and high false-detection rates, the invention provides an image multi-feature extraction and fusion method and system. It exploits the advantages of each feature through a cascade, matching progressively from coarse to fine; it can quickly and accurately determine the similar region between the target image and the image to be matched instead of directly matching two whole images, achieving fast, efficient, and accurate matching and saving manpower and material resources.
The technical solution adopted by the present invention to solve the above technical problems is as follows:

According to one aspect of the present invention, an image multi-feature extraction and fusion method is provided, comprising the following steps:

S1. Divide the image to be matched into multiple image regions, extract color features from those regions, coarsely match them against the target image to determine color similarities, and select the most similar image region of the image to be matched as the matching region; if the color similarity of the matching region exceeds a preset color similarity threshold, proceed to the next step;

S2. Extract auxiliary features from the matching region and match them against the target image to determine the auxiliary-feature similarity, the auxiliary features comprising at least one of texture features and shape features;

S3. On the basis of the color similarity and the auxiliary-feature similarity, perform a fusion judgment of the color and texture features and decide whether the confidence requirement is met; if it is, output a comprehensive confidence: if the texture-feature similarity is greater than a preset texture-feature similarity threshold, the comprehensive confidence is determined by, or based on, the texture-feature similarity; otherwise, the proportion of the color similarity in the comprehensive confidence is raised;
Wherein the coarse matching in step S1 comprises the steps:

A1. Perform color space conversion and color layering on the image to be matched;

The color layering comprises: dividing hue H into 8 parts and saturation S and value V into 3 parts each, and quantizing the color space into layers according to the subjective color perception of the human eye, so that the color space is divided into 72 colors; the formulas are as follows:
$$H = \begin{cases} 0 & h \in [316, 20] \\ 1 & h \in [21, 40] \\ 2 & h \in [41, 75] \\ 3 & h \in [76, 155] \\ 4 & h \in [156, 190] \\ 5 & h \in [191, 270] \\ 6 & h \in [271, 295] \\ 7 & h \in [296, 315] \end{cases}$$

$$S = \begin{cases} 0 & s \in [0, 0.2] \\ 1 & s \in [0.2, 0.7] \\ 2 & s \in [0.7, 1] \end{cases}$$

$$V = \begin{cases} 0 & v \in [0, 0.2] \\ 1 & v \in [0.2, 0.7] \\ 2 & v \in [0.7, 1] \end{cases}$$
A2. Determine the color similarity by color-histogram template matching based on color: for each divided image region, compute the similarity between the sample color region and the image region of the image to be matched by the absolute-value distance method, according to the color regions obtained by the division;

Let the two color regions be I and Q; divide each image by the concentric-rectangle division method into n concentric rectangles; then, with the 72-dimensional HSV histograms obtained by the layering, the distance $D_i$ of corresponding parts is:

$$D_i = \sum_{j=0}^{71} \left| h_I(j) - h_Q(j) \right|$$

where $h_I(j)$ and $h_Q(j)$ are the values of the j-th histogram bin of color regions I and Q respectively; the similarities are sorted and saved;
The auxiliary-feature matching in step S2 comprises the steps:

B1. Convert the color image to a gray-scale image: for an image with N gray levels, the co-occurrence matrix is an N×N matrix $M = [m_{hk}]$, where the element $m_{hk}$ at position (h, k) is the number of pixel pairs, separated by the given displacement, in which one pixel has gray level h and the other gray level k.

The four feature quantities extracted from the texture co-occurrence matrix are:

Contrast: $CON = \sum_h \sum_k (h-k)^2 m_{hk}$

Energy: $ASM = \sum_h \sum_k m_{hk}^2$

Entropy: $ENT = -\sum_h \sum_k m_{hk} \lg m_{hk}$

Correlation: $COR = \left[ \sum_h \sum_k hk\, m_{hk} - \mu_x \mu_y \right] / (\sigma_x \sigma_y)$

where $m_x(h) = \sum_k m_{hk}$ and $m_y(k) = \sum_h m_{hk}$ are the row and column sums of the matrix M, and $\mu_x$, $\mu_y$, $\sigma_x$, $\sigma_y$ are the means and standard deviations of $m_x$ and $m_y$;
B2. Generate the difference-of-Gaussian scale space (DoG scale space) by convolving the image with difference-of-Gaussian kernels of different scales:

$$D(x, y, \sigma) = (G(x, y, k\sigma) - G(x, y, \sigma)) * I(x, y)$$

where $G(x, y, \sigma)$ is a variable-scale Gaussian function, (x, y) are the spatial coordinates, and σ is the scale coordinate:

$$G(x, y, \sigma) = \frac{1}{2\pi\sigma^2}\, e^{-(x^2+y^2)/2\sigma^2}$$
B3. Compare each sample point with all of its neighbors to see whether it is larger or smaller than its neighbors in image space and scale space: the point being checked is compared with its 8 neighbors at the same scale and the 9×2 corresponding points at the scales directly above and below, 26 points in total, to ensure that extreme points are detected in both scale space and the two-dimensional image space;

B4. Accurately determine the position and scale of each key point by fitting a three-dimensional quadratic function, while removing low-contrast key points and unstable edge response points, to strengthen matching stability and improve noise resistance;

B5. Use the gradient direction distribution of the pixels in each key point's neighborhood to assign a direction parameter to the key point, giving the operator rotational invariance;
$$m(x, y) = \sqrt{(L(x+1, y) - L(x-1, y))^2 + (L(x, y+1) - L(x, y-1))^2}$$

$$\theta(x, y) = \arctan\frac{L(x, y+1) - L(x, y-1)}{L(x+1, y) - L(x-1, y)}$$

The two formulas above give the modulus and direction of the gradient at (x, y); the scale used for L is the scale at which each key point lies;
B6. Rotate the coordinate axes to the direction of the key point to ensure rotational invariance;

B7. Take an 8×8 window centered on the key point, the central point being the position of the current key point and each small cell representing one pixel of the scale-space neighborhood of the key point; then compute an 8-direction gradient orientation histogram on each 4×4 sub-block and accumulate the value of each gradient direction to form one seed point;

B8. Describe each key point with 4×4 = 16 seed points, producing 128 data values per key point and finally forming a 128-dimensional SIFT feature vector;

B9. Use the Euclidean distance between key-point feature vectors as the similarity measure for key points in the two images: take a key point in the sample image and find the two key points with the nearest Euclidean distances in the image to be matched; if the nearest distance divided by the second-nearest distance is less than a preset ratio threshold, accept this pair of matching points.
According to one embodiment of the present invention, in step S2 the auxiliary features are texture features, and the texture features comprise one or more of the following: gray-level co-occurrence matrix texture features and rotation- and scale-invariant texture features.
According to another aspect of the present invention, an image multi-feature extraction and fusion system is provided, comprising a matching module, the matching module comprising:

A color matching module for dividing the image to be matched into multiple image regions, extracting color features from those regions, coarsely matching them against the target image to determine color similarities, and selecting the most similar image region of the image to be matched as the matching region; if the color similarity of the matching region exceeds a preset color similarity threshold, processing passes to the auxiliary-feature matching module;

An auxiliary-feature matching module for extracting auxiliary features from the matching region and matching them against the target image to determine the auxiliary-feature similarity, the auxiliary features comprising at least one of texture features and shape features;

A comprehensive judgment module for performing, on the basis of the color similarity and the auxiliary-feature similarity, a fusion judgment of the color and texture features and deciding whether the confidence requirement is met; if it is, outputting a comprehensive confidence: if the texture-feature similarity is greater than a preset texture-feature similarity threshold, the comprehensive confidence is determined by, or based on, the texture-feature similarity; otherwise, the proportion of the color similarity in the comprehensive confidence is raised;
When performing the coarse matching, the color matching module performs color space conversion and color layering on the image to be matched; the color layering comprises: dividing hue H into 8 parts and saturation S and value V into 3 parts each, and quantizing the color space into layers according to the subjective color perception of the human eye, so that the color space is divided into 72 colors; the formulas are as follows:

$$H = \begin{cases} 0 & h \in [316, 20] \\ 1 & h \in [21, 40] \\ 2 & h \in [41, 75] \\ 3 & h \in [76, 155] \\ 4 & h \in [156, 190] \\ 5 & h \in [191, 270] \\ 6 & h \in [271, 295] \\ 7 & h \in [296, 315] \end{cases}$$

$$S = \begin{cases} 0 & s \in [0, 0.2] \\ 1 & s \in [0.2, 0.7] \\ 2 & s \in [0.7, 1] \end{cases}$$

$$V = \begin{cases} 0 & v \in [0, 0.2] \\ 1 & v \in [0.2, 0.7] \\ 2 & v \in [0.7, 1] \end{cases}$$
The color matching module is also used to determine the color similarity by color-histogram template matching based on color: for each divided image region, compute the similarity between the sample color region and the image region of the image to be matched by the absolute-value distance method, according to the color regions obtained by the division;

Let the two color regions be I and Q; divide each image by the concentric-rectangle division method into n concentric rectangles; then, with the 72-dimensional HSV histograms obtained by the layering, the distance $D_i$ of corresponding parts is:

$$D_i = \sum_{j=0}^{71} \left| h_I(j) - h_Q(j) \right|$$

where $h_I(j)$ and $h_Q(j)$ are the values of the j-th histogram bin of color regions I and Q respectively; the similarities are sorted and saved;
The auxiliary-feature matching module performs the auxiliary-feature matching by executing the steps:

B1. Convert the color image to a gray-scale image: for an image with N gray levels, the co-occurrence matrix is an N×N matrix $M = [m_{hk}]$, where the element $m_{hk}$ at position (h, k) is the number of pixel pairs, separated by the given displacement, in which one pixel has gray level h and the other gray level k.

The four feature quantities extracted from the texture co-occurrence matrix are:

Contrast: $CON = \sum_h \sum_k (h-k)^2 m_{hk}$

Energy: $ASM = \sum_h \sum_k m_{hk}^2$

Entropy: $ENT = -\sum_h \sum_k m_{hk} \lg m_{hk}$

Correlation: $COR = \left[ \sum_h \sum_k hk\, m_{hk} - \mu_x \mu_y \right] / (\sigma_x \sigma_y)$

where $m_x(h) = \sum_k m_{hk}$ and $m_y(k) = \sum_h m_{hk}$ are the row and column sums of the matrix M, and $\mu_x$, $\mu_y$, $\sigma_x$, $\sigma_y$ are the means and standard deviations of $m_x$ and $m_y$;
B2. Generate the difference-of-Gaussian scale space (DoG scale space) by convolving the image with difference-of-Gaussian kernels of different scales:

$$D(x, y, \sigma) = (G(x, y, k\sigma) - G(x, y, \sigma)) * I(x, y)$$

where $G(x, y, \sigma)$ is a variable-scale Gaussian function, (x, y) are the spatial coordinates, and σ is the scale coordinate:

$$G(x, y, \sigma) = \frac{1}{2\pi\sigma^2}\, e^{-(x^2+y^2)/2\sigma^2}$$
B3. Compare each sample point with all of its neighbors to see whether it is larger or smaller than its neighbors in image space and scale space: the point being checked is compared with its 8 neighbors at the same scale and the 9×2 corresponding points at the scales directly above and below, 26 points in total, to ensure that extreme points are detected in both scale space and the two-dimensional image space;

B4. Accurately determine the position and scale of each key point by fitting a three-dimensional quadratic function, while removing low-contrast key points and unstable edge response points, to strengthen matching stability and improve noise resistance;

B5. Use the gradient direction distribution of the pixels in each key point's neighborhood to assign a direction parameter to the key point, giving the operator rotational invariance;
$$m(x, y) = \sqrt{(L(x+1, y) - L(x-1, y))^2 + (L(x, y+1) - L(x, y-1))^2}$$

$$\theta(x, y) = \arctan\frac{L(x, y+1) - L(x, y-1)}{L(x+1, y) - L(x-1, y)}$$

The two formulas above give the modulus and direction of the gradient at (x, y); the scale used for L is the scale at which each key point lies;
B6. Rotate the coordinate axes to the direction of the key point to ensure rotational invariance;

B7. Take an 8×8 window centered on the key point, the central point being the position of the current key point and each small cell representing one pixel of the scale-space neighborhood of the key point; then compute an 8-direction gradient orientation histogram on each 4×4 sub-block and accumulate the value of each gradient direction to form one seed point;

B8. Describe each key point with 4×4 = 16 seed points, producing 128 data values per key point and finally forming a 128-dimensional SIFT feature vector;

B9. Use the Euclidean distance between key-point feature vectors as the similarity measure for key points in the two images: take a key point in the sample image and find the two key points with the nearest Euclidean distances in the image to be matched; if the nearest distance divided by the second-nearest distance is less than a preset ratio threshold, accept this pair of matching points.
Of course, the matching module may also be configured to perform one or more other steps of the image multi-feature extraction and fusion method described above.

The multi-feature extraction and fusion method of the present invention is mainly used in image matching or in image and video retrieval. It overcomes the accumulation of each feature's unfavorable factors in existing multi-feature extraction and fusion methods, exploits the advantages of each feature through a cascade, and matches progressively from coarse to fine. The cascaded matching can quickly and accurately determine the similar region between the target image and the image to be matched, rather than directly comparing two whole images, achieving fast, efficient, and accurate matching and saving manpower and material resources.
Specifically, relative to the prior art, the present invention brings the following beneficial effects:

1. By the cascade method, matching proceeds progressively from coarse to fine and the search area is reduced, so matching speed is greatly improved.

2. With the present invention, shape features (such as edge features) can be dropped in favor of texture features as the auxiliary features, and the more robust SIFT (or SURF) algorithm, which detects well under scaling and rotation changes, can be adopted, greatly improving accuracy.

3. The cascade combines color and auxiliary features in series and in parallel, and the final confidence is taken according to whether the two similarities exceed their preset thresholds. This not only greatly increases matching speed but also avoids the error the prior art incurs by directly applying a weighted sum to mutually incomparable features such as color, edge, and texture.

4. When applied to retrieving an intended target in an image (such as a vehicle, pedestrian, or animal), the present invention first obtains the approximate region of the target from the image by color-based template matching, and then applies more precise auxiliary features (texture features may be used, such as rotation- and scale-invariant features) to match the target region exactly, thus obtaining an accurate similarity for the target.
By reading the description, those of ordinary skill in the art will better understand the features and aspects of these and other embodiments.
Brief description of the drawings

The present invention is described in detail below with reference to the accompanying drawings and examples, from which the advantages and implementations of the invention will be more apparent. The content shown in the drawings serves only to explain the invention and does not constitute a limitation of the invention in any sense. In the drawings:
Fig. 1 is a general flowchart of an image multi-feature extraction and fusion method according to an embodiment of the invention.

Fig. 2 is a flowchart of the coarse matching in Fig. 1.

Fig. 3 is a structural diagram of a matching module according to an embodiment of the invention.
Embodiments
The invention provides an image multi-feature extraction and fusion system. The system may be a host machine or a dedicated device, a network system, or a software system installable on a host machine or dedicated device; the key is that it comprises a matching module. As shown in Fig. 3, the matching module comprises:

A color matching module for extracting color features from the image to be matched, performing color matching against the target image, and determining the color similarity; if the color similarity exceeds a preset color similarity threshold, processing passes to the auxiliary-feature matching module;

An auxiliary-feature matching module for extracting auxiliary features from the image to be matched and matching them against the target image to determine the auxiliary-feature similarity, the auxiliary features comprising at least one of texture features and shape features;

A comprehensive judgment module for making, on the basis of the color similarity and the auxiliary-feature similarity, a comprehensive judgment and deriving the comprehensive similarity between the image to be matched and the target image.
According to embodiments of the invention, the color matching module is configured to perform color-based template matching between the image to be matched and the target image to determine the color similarity. Preferably, the color matching module is also configured to use the color-based template matching to determine the matching region of the image to be matched that is most similar to the target image; the auxiliary-feature matching module is then configured to extract auxiliary features from that matching region and match them against the auxiliary features of the target image to determine the auxiliary-feature similarity.

In the following embodiments, texture features are used as the auxiliary features for the description. Of course, those of ordinary skill in the art may also use shape features (for example edge features) as the auxiliary features, or use texture and shape features together, and all of these fall within the scope of the present invention.
As shown in Figs. 1 and 2, the image to be matched (the original image) undergoes color layering, and the layered color histogram and other color features are used for the first-level coarse matching; the most similar target region found by coarse matching serves as the target region for the next level of fine matching. This excludes most regions whose color differs greatly from the target, and because the condition is not harsh, it does not cause cumulative errors from missed targets. Fine matching with the auxiliary features (texture features in this example) is then carried out on the basis of the coarse matching; the fine matching is a further confirmation, not a direct negation of the coarse matching. On the basis of the coarse and fine matching, a comprehensive judgment is made from the similarities obtained by each, yielding the comprehensive confidence between the image to be matched and the target image. The concrete steps are as follows:
1. Color space conversion and color layering;

2. First-level coarse matching (using color-based template matching): on the basis of the color layering, coarsely match by color histogram and other features and determine the probable region (the matching region) most similar to the intended target; if the color similarity exceeds the preset color similarity threshold, proceed to the next step, otherwise terminate the matching of the current image to be matched.

3. Second-level fine matching: extract texture features for exact matching; texture features extracted from the matching region of the image to be matched are matched against the texture features of the target image to determine the texture-feature similarity;

4. Third-level comprehensive judgment: combine the coarse and fine matching according to certain rules to make a comprehensive judgment and obtain the comprehensive confidence.
The flowcharts of the method are shown in Figs. 1 and 2. In the specific embodiment shown in Fig. 1, the processing flow of the matching module comprises the following steps (a sketch of this cascade appears after the list):

101. Obtain the original image (the image to be matched);

102. Perform color space conversion according to the input parameters;

103. Perform color layering;

104. Perform color matching (coarse matching) against the target image;

105. Determine the most similar matching region in the image to be matched;

106. Extract texture features from the matching region;

107. Perform texture-feature matching (fine matching) against the target image;

108. Perform the fusion judgment of color and texture features;

109. Decide whether the confidence requirement is met; if so, output the comprehensive confidence, otherwise exit.
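As an illustration only, the control flow of steps 101-109 can be sketched as a short Python function. The stage callables, the threshold value, and the return convention below are assumptions for the sketch, not part of the patented method:

```python
from typing import Callable, Optional, Tuple
import numpy as np

Region = Tuple[slice, slice]

def cascade_match(
    image: np.ndarray,
    target: np.ndarray,
    color_stage: Callable[[np.ndarray, np.ndarray], Tuple[Region, float]],
    texture_stage: Callable[[np.ndarray, np.ndarray], float],
    color_thresh: float = 0.6,   # assumed value; the text leaves the threshold to the user
) -> Optional[Tuple[Region, float, float]]:
    """Steps 101-109: coarse color stage, early exit, fine texture stage on the
    best region only, then hand both similarities to the fusion judgment."""
    region, color_sim = color_stage(image, target)       # steps 102-105
    if color_sim <= color_thresh:                        # coarse stage rejects: exit
        return None
    rs, cs = region
    texture_sim = texture_stage(image[rs, cs], target)   # steps 106-107
    return region, color_sim, texture_sim                # input to steps 108-109
```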
In the specific embodiment shown in Fig. 2, the processing flow of the coarse matching comprises the following steps:

201. Input the layered image;

202. Divide the image according to the input parameters and the size of the target image;

203. Take out one image region;

204. Compute its HSV (hue, saturation, value) color histogram;

205. Perform HSV histogram matching;

206. Sort and save the similarities;

207. Determine whether this is the last image region;

208. If so, take out the most similar matching region; otherwise return to step 203 and repeat the matching for the next image region.
The steps of this embodiment are described in detail below in turn:

Step 1: color space conversion and color layering

1) Color space conversion

Because the color layering is done in the HSV (hue, saturation, value) color space, the image is first converted from the RGB (red, green, blue) color space to the HSV color space:
$$H = \begin{cases} \arccos\dfrac{(R-G)+(R-B)}{2\sqrt{(R-G)^2+(R-B)(G-B)}} & (B \le G) \\[2ex] 2\pi - \arccos\dfrac{(R-G)+(R-B)}{2\sqrt{(R-G)^2+(R-B)(G-B)}} & (B > G) \end{cases} \quad (1)$$

$$S = \frac{\max(R,G,B)-\min(R,G,B)}{\max(R,G,B)} \quad (2)$$

$$V = \frac{\max(R,G,B)}{255} \quad (3)$$
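As a concrete illustration of formulas (1)-(3), the following sketch converts one RGB pixel (components in 0-255) to HSV, returning H in degrees to match the quantization table below. The epsilon guard for gray pixels and the clipping of the arccos argument are added assumptions:

```python
import numpy as np

def rgb_to_hsv_pixel(R: float, G: float, B: float):
    """RGB (0-255) to HSV per formulas (1)-(3); a sketch, not a tested library routine."""
    num = (R - G) + (R - B)
    den = 2.0 * np.sqrt((R - G) ** 2 + (R - B) * (G - B)) + 1e-12  # guard for gray pixels
    theta = np.arccos(np.clip(num / den, -1.0, 1.0))               # radians
    H = theta if B <= G else 2.0 * np.pi - theta                   # formula (1)
    mx, mn = max(R, G, B), min(R, G, B)
    S = (mx - mn) / mx if mx > 0 else 0.0                          # formula (2)
    V = mx / 255.0                                                 # formula (3)
    return np.degrees(H), S, V
```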
2) Color layering

Color layering maps the color space into a certain subset, thereby improving image matching speed. A typical image color system has about 2^24 colors, while the colors the human eye can truly distinguish are limited, so the color space must be layered for image processing. The dimensionality of the layering is very important: the higher the layering dimension, the higher the matching precision, but matching speed drops accordingly.

Color layering divides into equal-interval and unequal-interval quantization. If the dimension of equal-interval layering is too low, precision drops greatly; if it is too high, computation becomes complex. Based on analysis and experiment, this embodiment selects unequal-interval color quantization, with the following steps:
According to human perception, hue H is divided into 8 parts and saturation S and value V into 3 parts each, and the color space is quantized into layers according to the subjective color perception of the human eye; the formulas are as follows:

$$H = \begin{cases} 0 & h \in [316, 20] \\ 1 & h \in [21, 40] \\ 2 & h \in [41, 75] \\ 3 & h \in [76, 155] \\ 4 & h \in [156, 190] \\ 5 & h \in [191, 270] \\ 6 & h \in [271, 295] \\ 7 & h \in [296, 315] \end{cases} \quad (4)$$

$$S = \begin{cases} 0 & s \in [0, 0.2] \\ 1 & s \in [0.2, 0.7] \\ 2 & s \in [0.7, 1] \end{cases} \quad (5)$$

$$V = \begin{cases} 0 & v \in [0, 0.2] \\ 1 & v \in [0.2, 0.7] \\ 2 & v \in [0.7, 1] \end{cases} \quad (6)$$

By the above method, the color space is divided into 72 colors.
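A direct transcription of formulas (4)-(6) follows. The composite bin index 9H + 3S + V is an assumed encoding; the text only specifies the 8×3×3 partition into 72 colors:

```python
def quantize_hsv(h: float, s: float, v: float) -> int:
    """Map (h in degrees, s and v in [0,1]) to one of the 72 colors of (4)-(6)."""
    h = h % 360
    if h >= 316 or h <= 20:  H = 0
    elif h <= 40:  H = 1
    elif h <= 75:  H = 2
    elif h <= 155: H = 3
    elif h <= 190: H = 4
    elif h <= 270: H = 5
    elif h <= 295: H = 6
    else:          H = 7   # 296..315
    S = 0 if s <= 0.2 else (1 if s <= 0.7 else 2)
    V = 0 if v <= 0.2 else (1 if v <= 0.7 else 2)
    return 9 * H + 3 * S + V   # assumed encoding of the (H, S, V) triple
```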
Step 2: first-level coarse matching: on the basis of the color layering, coarsely match by color histogram and determine the probable region most similar to the target

1) Image region division

For matching an intended target, so that the sample target and the targets in the image to be matched are matched well, the image to be matched is divided at the first level into blocks of nearly the same size as the sample target. The horizontal and vertical step sizes can be set according to the required matching precision: for higher precision set the step smaller, for higher speed set it larger.
2) Color template and feature matching

For each divided image region, compute the similarity between the sample color region and the region to be matched according to the color regions obtained by the division; the absolute-value distance method is used here.

Let the two color regions be I and Q; divide each image by the concentric-rectangle division method into n concentric rectangles; then, with the 72-dimensional HSV histograms obtained by the layering above, the distance $D_i$ of corresponding parts is:

$$D_i = \sum_{j=0}^{71} \left| h_I(j) - h_Q(j) \right| \quad (7)$$

where $h_I(j)$ and $h_Q(j)$ are the values of the j-th histogram bin of color regions I and Q respectively; the computed results are sorted, and the most similar region is selected as the matching region.
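The sliding-window coarse match of formula (7) might look like the sketch below, which operates on label maps already quantized to the 72 colors. Normalizing the histograms, omitting the concentric-rectangle subdivision, and the default step size are simplifying assumptions:

```python
import numpy as np

def hist72(labels: np.ndarray) -> np.ndarray:
    """Normalized 72-bin histogram of a quantized-color label map (values 0..71)."""
    h = np.bincount(labels.ravel(), minlength=72).astype(np.float64)
    return h / max(h.sum(), 1.0)

def coarse_match(image_labels: np.ndarray, target_labels: np.ndarray, step: int = 8):
    """Slide a target-sized window over the image and score each window with the
    absolute distance of formula (7); smaller distance means more similar."""
    th, tw = target_labels.shape
    tgt = hist72(target_labels)
    best = (None, np.inf)
    H, W = image_labels.shape
    for y in range(0, H - th + 1, step):
        for x in range(0, W - tw + 1, step):
            win = hist72(image_labels[y:y + th, x:x + tw])
            d = np.abs(win - tgt).sum()          # D = sum_j |h_I(j) - h_Q(j)|
            if d < best[1]:
                best = ((y, x), d)
    return best                                   # top-left corner and distance
```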
Step 3: fine matching: extract auxiliary features (texture features in this example) for exact matching

The texture features may comprise one or more of the following: gray-level co-occurrence matrix texture features and rotation- and scale-invariant texture features (such as SIFT features).
1) Gray-level co-occurrence matrix texture features

First convert the color image to a gray-scale image: for an image with N gray levels, the co-occurrence matrix is an N×N matrix $M = [m_{hk}]$, where the element $m_{hk}$ at position (h, k) is the number of pixel pairs, separated by the given displacement, in which one pixel has gray level h and the other gray level k.

The four feature quantities extracted from the texture co-occurrence matrix are:

Contrast: $CON = \sum_h \sum_k (h-k)^2 m_{hk} \quad (8)$

Energy: $ASM = \sum_h \sum_k m_{hk}^2 \quad (9)$

Entropy: $ENT = -\sum_h \sum_k m_{hk} \lg m_{hk} \quad (10)$

Correlation: $COR = \left[ \sum_h \sum_k hk\, m_{hk} - \mu_x \mu_y \right] / (\sigma_x \sigma_y) \quad (11)$

where $m_x(h) = \sum_k m_{hk}$ and $m_y(k) = \sum_h m_{hk}$ are the row and column sums of the matrix M, and $\mu_x$, $\mu_y$, $\sigma_x$, $\sigma_y$ are the means and standard deviations of $m_x$ and $m_y$.
The concrete steps in this embodiment are as follows (a sketch follows the list):

a) Divide the gray scale of the image into 64 gray levels;

b) Construct gray-level co-occurrence matrices in four directions: M(1,0), M(0,1), M(1,1), M(1,-1);

c) Compute the four texture feature quantities on each co-occurrence matrix;

d) Take the mean and standard deviation of each feature quantity, $\mu_{CON}, \sigma_{CON}, \mu_{ASM}, \sigma_{ASM}, \mu_{ENT}, \sigma_{ENT}, \mu_{COR}, \sigma_{COR}$, as the eight components of the texture features.
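Steps a)-d) can be realized with a short NumPy sketch. The mapping of M(1,0) etc. onto (row, column) offsets and the use of log base 10 for "lg" are assumptions:

```python
import numpy as np

def glcm(q: np.ndarray, dh: int, dk: int, levels: int = 64) -> np.ndarray:
    """Normalized gray-level co-occurrence matrix for displacement (dh, dk)."""
    H, W = q.shape
    y0, y1 = max(0, -dh), H - max(0, dh)
    x0, x1 = max(0, -dk), W - max(0, dk)
    a = q[y0:y1, x0:x1]
    b = q[y0 + dh:y1 + dh, x0 + dk:x1 + dk]
    M = np.zeros((levels, levels))
    np.add.at(M, (a.ravel(), b.ravel()), 1)   # count co-occurring gray-level pairs
    return M / M.sum()

def texture_features(M: np.ndarray):
    """CON, ASM, ENT, COR of formulas (8)-(11)."""
    n = M.shape[0]
    h = np.arange(n)[:, None]
    k = np.arange(n)[None, :]
    con = ((h - k) ** 2 * M).sum()                       # contrast (8)
    asm = (M ** 2).sum()                                 # energy (9)
    nz = M[M > 0]
    ent = -(nz * np.log10(nz)).sum()                     # entropy (10)
    mx, my = M.sum(axis=1), M.sum(axis=0)                # row / column marginals
    i = np.arange(n)
    mux, muy = (i * mx).sum(), (i * my).sum()
    sx = np.sqrt((((i - mux) ** 2) * mx).sum())
    sy = np.sqrt((((i - muy) ** 2) * my).sum())
    cor = ((h * k * M).sum() - mux * muy) / (sx * sy + 1e-12)  # correlation (11)
    return con, asm, ent, cor

def glcm_descriptor(gray: np.ndarray) -> np.ndarray:
    """Eight-component texture vector of steps a)-d): mean and std of each
    feature over the four directions M(1,0), M(0,1), M(1,1), M(1,-1)."""
    q = (gray.astype(np.int64) * 64) // 256              # step a): 64 gray levels
    f = np.array([texture_features(glcm(q, dh, dk))
                  for dh, dk in [(1, 0), (0, 1), (1, 1), (1, -1)]])
    return np.concatenate([f.mean(axis=0), f.std(axis=0)])
```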
2) SIFT (scale-invariant feature transform) features

The SIFT algorithm extracts local features: it finds extreme points in scale space and extracts position, scale, and rotation invariants.

Its main detection steps are as follows:

a) detect scale-space extreme points;

b) accurately localize the extreme points;

c) assign a direction parameter to each key point;

d) generate key point descriptors.
● Generation of the scale space

Scale-space theory simulates the multi-scale characteristics of image data, and the Gaussian convolution kernel is the only linear kernel that realizes scale change, so the scale space of a two-dimensional image is defined as:

$$L(x, y, \sigma) = G(x, y, \sigma) * I(x, y) \quad (12)$$

where $G(x, y, \sigma)$ is a variable-scale Gaussian function:

$$G(x, y, \sigma) = \frac{1}{2\pi\sigma^2}\, e^{-(x^2+y^2)/2\sigma^2} \quad (13)$$

(x, y) are the spatial coordinates and σ is the scale coordinate.
To detect stable key points effectively in scale space, the difference-of-Gaussian scale space (DoG scale space) is proposed; it is generated by convolving the image with difference-of-Gaussian kernels of different scales:

$$D(x, y, \sigma) = (G(x, y, k\sigma) - G(x, y, \sigma)) * I(x, y) = L(x, y, k\sigma) - L(x, y, \sigma) \quad (14)$$

Structure of the image pyramid: the pyramid has O octaves in total, each octave has S layers, and the images of the next octave are obtained by downsampling those of the previous octave; O and S are set by the user.
● Detection of spatial extreme points

To find the extreme points of the scale space, each sample point is compared with all of its neighbors to see whether it is larger or smaller than its neighbors in image space and scale space. The point being checked is compared with its 8 neighbors at the same scale and the 9×2 corresponding points at the scales directly above and below, 26 points in total, to ensure that extreme points are detected in both scale space and the two-dimensional image space.
● Parameters to be determined when building the scale space

σ: scale-space coordinate

o: octave coordinate

s: sub-level coordinate

The relation of σ to o and s is $\sigma(o, s) = \sigma_0 \cdot 2^{o + s/S}$,

$o \in o_{min} + [0, \ldots, O-1], \quad s \in [0, \ldots, S-1]$

where $\sigma_0$ is the scale of the base layer.

The spatial coordinate x is a function of the octave: if $x_0$ is the spatial coordinate in octave 0, then $x = 2^o x_0$, $o \in \mathbb{Z}$, $x_0 \in [0, \ldots, N_0-1] \times [0, \ldots, M_0-1]$.

If $(M_0, N_0)$ is the resolution of the base octave o = 0, the resolutions of the other octaves follow by halving per octave: $(M_o, N_o) = (\lfloor M_0/2^o \rfloor, \lfloor N_0/2^o \rfloor)$.

The following parameters are generally used:

$$\sigma_n = 0.5, \quad \sigma_0 = 1.6 \cdot 2^{1/S}, \quad o_{min} = -1, \quad S = 3$$

In octave o = -1, the image is enlarged to twice its size by bilinear interpolation (for the enlarged image, $\sigma_n = 1$).
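Under the parameters above, a minimal DoG pyramid can be sketched as follows. It is an illustration of equation (14) rather than a full SIFT implementation: it assumes o_min = 0 (no initial 2x upsampling), uses plain 2x subsampling between octaves, and keeps only S+1 blur levels per octave:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_pyramid(image: np.ndarray, O: int = 4, S: int = 3):
    """Difference-of-Gaussian stack per octave, with sigma(o, s) = sigma0 * 2**(s/S)
    inside each octave and the 2**o factor supplied by downsampling."""
    sigma0 = 1.6 * 2.0 ** (1.0 / S)          # base-layer scale, as in the text
    img = image.astype(np.float64)
    dogs = []
    for o in range(O):
        blurred = [gaussian_filter(img, sigma0 * 2.0 ** (s / S)) for s in range(S + 1)]
        dogs.append(np.stack([blurred[s + 1] - blurred[s] for s in range(S)]))
        img = img[::2, ::2]                  # halve resolution for the next octave
    return dogs
```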
● Accurate determination of extreme point positions

The position and scale of each key point are accurately determined (to sub-pixel precision) by fitting a three-dimensional quadratic function, while low-contrast key points and unstable edge response points are removed (the DoG operator produces a strong edge response), to strengthen matching stability and improve noise resistance.
Removal of edge responses

A poorly defined difference-of-Gaussian extremum has a large principal curvature across the edge and a small principal curvature perpendicular to the edge. The principal curvatures are obtained from a 2×2 Hessian matrix H:

$$H = \begin{pmatrix} D_{xx} & D_{xy} \\ D_{xy} & D_{yy} \end{pmatrix} \quad (16)$$

The derivatives are estimated from differences of adjacent sample points.

The principal curvatures of D are proportional to the eigenvalues of H. Let α be the largest eigenvalue and β the smallest; then

$$\mathrm{Tr}(H) = D_{xx} + D_{yy} = \alpha + \beta \quad (17)$$

$$\mathrm{Det}(H) = D_{xx} D_{yy} - (D_{xy})^2 = \alpha\beta \quad (18)$$

Let α = rβ; then:

$$\frac{\mathrm{Tr}(H)^2}{\mathrm{Det}(H)} = \frac{(\alpha+\beta)^2}{\alpha\beta} = \frac{(r\beta+\beta)^2}{r\beta^2} = \frac{(r+1)^2}{r} \quad (19)$$

The value of $(r+1)^2/r$ is smallest when the two eigenvalues are equal and increases with r. Therefore, to check whether the ratio of principal curvatures is below some threshold r, it suffices to check:

$$\frac{\mathrm{Tr}(H)^2}{\mathrm{Det}(H)} < \frac{(r+1)^2}{r} \quad (20)$$

Generally r = 10 is taken.
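Formulas (16)-(20) amount to a few finite differences on one DoG layer; a sketch, assuming the extremum lies at integer coordinates away from the image border:

```python
import numpy as np

def passes_edge_test(D: np.ndarray, y: int, x: int, r: float = 10.0) -> bool:
    """Reject edge-like extrema with the principal-curvature ratio test of (16)-(20).
    D is one DoG layer; derivatives are estimated by adjacent-sample differences."""
    dxx = D[y, x + 1] + D[y, x - 1] - 2.0 * D[y, x]
    dyy = D[y + 1, x] + D[y - 1, x] - 2.0 * D[y, x]
    dxy = (D[y + 1, x + 1] - D[y + 1, x - 1] - D[y - 1, x + 1] + D[y - 1, x - 1]) / 4.0
    tr = dxx + dyy                                       # Tr(H), eq. (17)
    det = dxx * dyy - dxy * dxy                          # Det(H), eq. (18)
    if det <= 0:                                         # curvatures of opposite sign: reject
        return False
    return tr * tr / det < (r + 1.0) ** 2 / r            # eq. (20)
```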
● Assignment of key point directions

The gradient direction distribution of the pixels in each key point's neighborhood is used to assign a direction parameter to the key point, giving the operator rotational invariance:
$$m(x, y) = \sqrt{(L(x+1, y) - L(x-1, y))^2 + (L(x, y+1) - L(x, y-1))^2} \quad (21)$$

$$\theta(x, y) = \arctan\frac{L(x, y+1) - L(x, y-1)}{L(x+1, y) - L(x-1, y)} \quad (22)$$

The two formulas above give the modulus and direction of the gradient at (x, y); the scale used for L is the scale at which each key point lies.
In the actual computation, we sample within a neighborhood window centered on the key point and accumulate the gradient directions of the neighborhood pixels in a histogram. The gradient histogram ranges over 0-360 degrees, with one bin every 10 degrees, 36 bins in total. The peak of the histogram represents the principal direction of the neighborhood gradients at the key point and is taken as the direction of the key point.

When another peak in the gradient orientation histogram has at least 80% of the energy of the main peak, that direction is taken as an auxiliary direction of the key point. A key point may therefore be assigned multiple directions (one principal direction and more than one auxiliary direction), which strengthens the robustness of matching.
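A sketch of this orientation assignment follows, per formulas (21)-(22). The square, unweighted window and its radius are simplifying assumptions (full SIFT also applies a Gaussian weight), and the key point is assumed to lie away from the border:

```python
import numpy as np

def keypoint_orientations(L: np.ndarray, y: int, x: int, radius: int = 8):
    """Principal and auxiliary directions from a 36-bin, magnitude-weighted
    gradient orientation histogram around (y, x)."""
    patch = L[y - radius:y + radius + 1, x - radius:x + radius + 1].astype(np.float64)
    gy, gx = np.gradient(patch)                    # central differences
    m = np.sqrt(gx ** 2 + gy ** 2)                 # gradient modulus, formula (21)
    theta = np.degrees(np.arctan2(gy, gx)) % 360   # gradient direction, formula (22)
    hist = np.zeros(36)
    np.add.at(hist, (theta // 10).astype(int) % 36, m)  # one 10-degree bin per column
    peak = hist.max()
    # the main peak plus any bin holding at least 80% of its energy
    return [i * 10.0 for i, v in enumerate(hist) if peak > 0 and v >= 0.8 * peak]
```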
At this point the key points of the image are fully detected, and each key point carries three pieces of information: position, scale, and direction. A SIFT feature region can be determined from these.
● Generation of feature point descriptors

First rotate the coordinate axes to the direction of the key point to ensure rotational invariance.

Next take an 8×8 window centered on the key point; the central point is the position of the current key point, and each small cell represents one pixel of the scale-space neighborhood of the key point. Then compute an 8-direction gradient orientation histogram on each 4×4 sub-block and accumulate the value of each gradient direction to form one seed point.

In the actual computation, to strengthen the robustness of matching, each key point is described with 4×4 = 16 seed points, so 128 data values are produced per key point, finally forming a 128-dimensional SIFT feature vector. At this point the SIFT feature vector is free of geometric deformation factors such as scale change and rotation; the feature vector is then length-normalized, which further removes the influence of illumination changes.
After the SIFT feature vectors of the two images are generated, we use the Euclidean distance between key-point feature vectors as the similarity measure for key points in the two images in the next step. Take a key point in the sample image and find the two key points with the nearest Euclidean distances in the image to be matched; if the nearest distance divided by the second-nearest distance is less than a certain ratio threshold, accept this pair of matching points. Lowering this ratio threshold reduces the number of SIFT matching points but makes them more stable.
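With OpenCV this ratio test is a few lines. The sketch assumes opencv-python (cv2.SIFT_create is in the main package from version 4.4) and a 0.8 ratio threshold, which the text leaves as a tunable value:

```python
import cv2

def sift_ratio_match(sample_gray, scene_gray, ratio: float = 0.8):
    """SIFT keypoints plus the nearest/second-nearest ratio test described above."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(sample_gray, None)
    k2, d2 = sift.detectAndCompute(scene_gray, None)
    if d1 is None or d2 is None:
        return []
    matcher = cv2.BFMatcher(cv2.NORM_L2)                 # Euclidean distance on descriptors
    good = []
    for pair in matcher.knnMatch(d1, d2, k=2):           # two nearest neighbors per keypoint
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])                         # accept: nearest much closer than second
    return good
```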
3) Combined features

Image matching with any single feature has its own advantages; to improve matching accuracy, the present invention combines the texture features to construct a structured feature for image matching.

Because color and texture features have different physical meanings and are not directly comparable, the features must be normalized; the formula is as follows:

$$D = w_1 d_1 + w_2 d_2 \quad (23)$$

where $d_1$ and $d_2$ are the distances between the color feature quantities and between the texture feature quantities of the two images respectively, and $w_1$, $w_2$ are the feature weights ($0 \le w_1 \le 1$, and $w_1 + w_2 = 1$).
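Formula (23) presupposes that the two distances have first been made comparable. The sketch below normalizes each by a per-feature scale factor, which is one possible choice (the text prescribes normalization but not a specific scheme); the scale factors, for example the maximum observed distance of each feature, are assumptions:

```python
def fused_distance(d_color: float, d_texture: float,
                   color_scale: float, texture_scale: float,
                   w1: float = 0.5, w2: float = 0.5) -> float:
    """Formula (23): D = w1*d1 + w2*d2 on scale-normalized distances."""
    assert abs(w1 + w2 - 1.0) < 1e-9 and 0.0 <= w1 <= 1.0
    return w1 * (d_color / color_scale) + w2 * (d_texture / texture_scale)
```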
Step 4: comprehensive judgment: combine the coarse and fine matching according to certain rules and obtain the comprehensive confidence

The rules of the comprehensive judgment are as follows (a sketch follows the list):

a) If both the coarse and the fine matching have high similarity, this is certainly a similar matching target, and the comprehensive confidence is computed mainly from the texture-feature (e.g. SIFT) similarity;

b) If the similarity of the coarse matching is middling but the similarity of the fine matching is high, then owing to the stability of SIFT features the comprehensive confidence is still computed mainly from the texture-feature similarity, but the proportion of the color similarity is raised;

c) If the similarity of the coarse matching is high and the similarity of the fine matching is middling, the proportions of the two similarities in the comprehensive confidence are close to each other and are determined by the actual situation;

d) If both similarities are very low, the target is not considered similar.
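Rules a)-d) can be collected into one function. The thresholds and weights below are illustrative assumptions, since the text fixes only the qualitative behavior (which similarity leads, and when the color weight rises):

```python
def composite_confidence(color_sim: float, texture_sim: float,
                         hi: float = 0.8, lo: float = 0.4):
    """Rules a)-d) of the comprehensive judgment; returns None for rule d)."""
    if color_sim >= hi and texture_sim >= hi:      # a) both high: texture-led confidence
        return 0.8 * texture_sim + 0.2 * color_sim
    if texture_sim >= hi:                          # b) texture high, color middling:
        return 0.6 * texture_sim + 0.4 * color_sim #    texture leads, color weight raised
    if color_sim >= hi:                            # c) color high, texture middling:
        return 0.5 * texture_sim + 0.5 * color_sim #    comparable weights
    if color_sim < lo and texture_sim < lo:        # d) both low: not a similar target
        return None
    return 0.5 * (color_sim + texture_sim)         # fallback for mixed mid-range cases
```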
In the present embodiment, the method is realized mainly by a cascaded matching approach, determining progressively from coarse to fine, and comprises three parts: first-level coarse matching, second-level fine matching, and third-level comprehensive judgment. The coarse matching determines the approximate region that may contain the target by template matching on the color histogram after color layering; the fine matching makes a further determination on the basis of the coarse matching, mainly by obtaining texture features; the comprehensive judgment takes the separate similarities obtained by the color coarse matching and the texture fine matching and derives the comprehensive similarity (or comprehensive confidence).

Compared with the prior art, the method of the invention overcomes several shortcomings of existing multi-feature extraction and fusion methods: it exploits the advantages of each feature through a cascade, matches progressively from coarse to fine, can accurately determine the similar region between the target image and the image to be matched rather than directly comparing two whole images, achieves fast, efficient, and accurate matching, and saves manpower and material resources.

The present invention can be applied not only to the retrieval of static images but also to video retrieval. Those of ordinary skill in the art will appreciate that, when applied to video retrieval, the most similar matching region in the image to be matched may be obtained not only by the color-based template matching described above but also by other techniques, for example by obtaining the moving target region through background subtraction.

The preferred embodiments of the present invention have been described above with reference to the accompanying drawings. Those skilled in the art may implement the invention in many variants without departing from its scope and spirit; for example, a feature illustrated or described as part of one embodiment can be used in another embodiment to yield a further embodiment. The above are only preferred feasible embodiments of the invention and do not thereby limit its scope of rights; all equivalent changes made using the contents of the description and drawings of the invention are encompassed within its scope of rights.

Claims (3)

1. An image multi-feature extraction and fusion method, characterized in that it comprises the following steps:

S1. Divide the image to be matched into multiple image regions, extract color features from those regions, coarsely match them against the target image to determine color similarities, and select the most similar image region of the image to be matched as the matching region; if the color similarity of the matching region exceeds a preset color similarity threshold, proceed to the next step;

S2. Extract auxiliary features from the matching region and match them against the target image to determine the auxiliary-feature similarity, the auxiliary features comprising at least one of texture features and shape features;

S3. On the basis of the color similarity and the auxiliary-feature similarity, perform a fusion judgment of the color and texture features and decide whether the confidence requirement is met; if it is, output a comprehensive confidence: if the texture-feature similarity is greater than a preset texture-feature similarity threshold, the comprehensive confidence is determined by, or based on, the texture-feature similarity; otherwise, the proportion of the color similarity in the comprehensive confidence is raised;
Wherein the coarse matching in step S1 comprises the steps:

A1. Perform color space conversion and color layering on the image to be matched;

The color layering comprises: dividing hue H into 8 parts and saturation S and value V into 3 parts each, and quantizing the color space into layers according to the subjective color perception of the human eye, so that the color space is divided into 72 colors; the formulas are as follows:

$$H = \begin{cases} 0 & h \in [316, 20] \\ 1 & h \in [21, 40] \\ 2 & h \in [41, 75] \\ 3 & h \in [76, 155] \\ 4 & h \in [156, 190] \\ 5 & h \in [191, 270] \\ 6 & h \in [271, 295] \\ 7 & h \in [296, 315] \end{cases}$$

$$S = \begin{cases} 0 & s \in [0, 0.2] \\ 1 & s \in [0.2, 0.7] \\ 2 & s \in [0.7, 1] \end{cases}$$

$$V = \begin{cases} 0 & v \in [0, 0.2] \\ 1 & v \in [0.2, 0.7] \\ 2 & v \in [0.7, 1] \end{cases}$$
A2. Determine the color similarity by color-histogram template matching based on color: for each divided image region, compute the similarity between the sample color region and the image region of the image to be matched by the absolute-value distance method, according to the color regions obtained by the division;

Let the two color regions be I and Q; divide each image by the concentric-rectangle division method into n concentric rectangles; then, with the 72-dimensional HSV histograms obtained by the layering, the distance $D_i$ of corresponding parts is:

$$D_i = \sum_{j=0}^{71} \left| h_I(j) - h_Q(j) \right|$$

where $h_I(j)$ and $h_Q(j)$ are the values of the j-th histogram bin of color regions I and Q respectively; the similarities are sorted and saved;
The auxiliary-feature matching in step S2 comprises the steps:

B1. Convert the color image to a gray-scale image: for an image with N gray levels, the co-occurrence matrix is an N×N matrix $M = [m_{hk}]$, where the element $m_{hk}$ at position (h, k) is the number of pixel pairs, separated by the given displacement, in which one pixel has gray level h and the other gray level k;

The four feature quantities extracted from the texture co-occurrence matrix are:

Contrast: $CON = \sum_h \sum_k (h-k)^2 m_{hk}$

Energy: $ASM = \sum_h \sum_k m_{hk}^2$

Entropy: $ENT = -\sum_h \sum_k m_{hk} \lg m_{hk}$

Correlation: $COR = \left[ \sum_h \sum_k hk\, m_{hk} - \mu_x \mu_y \right] / (\sigma_x \sigma_y)$

where $m_x(h) = \sum_k m_{hk}$ and $m_y(k) = \sum_h m_{hk}$ are the row and column sums of the matrix M, and $\mu_x$, $\mu_y$, $\sigma_x$, $\sigma_y$ are the means and standard deviations of $m_x$ and $m_y$;
B2. Generate the difference-of-Gaussian scale space (DoG scale space) by convolving the image with difference-of-Gaussian kernels of different scales:

$$D(x, y, \sigma) = (G(x, y, k\sigma) - G(x, y, \sigma)) * I(x, y)$$

where $G(x, y, \sigma)$ is a variable-scale Gaussian function, (x, y) are the spatial coordinates, and σ is the scale coordinate;

$$G(x, y, \sigma) = \frac{1}{2\pi\sigma^2}\, e^{-(x^2+y^2)/2\sigma^2}$$
B3. Compare each sample point with all of its neighbors to see whether it is larger or smaller than its neighbors in image space and scale space: the point being checked is compared with its 8 neighbors at the same scale and the 9×2 corresponding points at the scales directly above and below, 26 points in total, to ensure that extreme points are detected in both scale space and the two-dimensional image space;

B4. Accurately determine the position and scale of each key point by fitting a three-dimensional quadratic function, while removing low-contrast key points and unstable edge response points;

B5. Use the gradient direction distribution of the pixels in each key point's neighborhood to assign a direction parameter to the key point, giving the operator rotational invariance;
$$m(x, y) = \sqrt{(L(x+1, y) - L(x-1, y))^2 + (L(x, y+1) - L(x, y-1))^2}$$

$$\theta(x, y) = \arctan\frac{L(x, y+1) - L(x, y-1)}{L(x+1, y) - L(x-1, y)}$$

The two formulas above give the modulus and direction of the gradient at (x, y); the scale used for L is the scale at which each key point lies;
B6. Rotate the coordinate axes to the direction of the key point to ensure rotational invariance;

B7. Take an 8×8 window centered on the key point, the central point being the position of the current key point and each small cell representing one pixel of the scale-space neighborhood of the key point; then compute an 8-direction gradient orientation histogram on each 4×4 sub-block and accumulate the value of each gradient direction to form one seed point;

B8. Describe each key point with 4×4 = 16 seed points, producing 128 data values per key point and finally forming a 128-dimensional SIFT feature vector;

B9. Use the Euclidean distance between key-point feature vectors as the similarity measure for key points in the two images: take a key point in the sample image and find the two key points with the nearest Euclidean distances in the image to be matched; if the nearest distance divided by the second-nearest distance is less than a preset ratio threshold, accept this pair of matching points.
2. The image multi-feature extraction and fusion method according to claim 1, characterized in that in step S2 the auxiliary features are texture features, and the texture features comprise one or more of the following: gray-level co-occurrence matrix texture features and rotation- and scale-invariant texture features.
3. An image multi-feature extraction and fusion system, characterized in that it comprises a matching module, the matching module comprising:

A color matching module for dividing the image to be matched into multiple image regions, extracting color features from those regions, coarsely matching them against the target image to determine color similarities, and selecting the most similar image region of the image to be matched as the matching region; if the color similarity of the matching region exceeds a preset color similarity threshold, processing passes to the auxiliary-feature matching module;

An auxiliary-feature matching module for extracting auxiliary features from the matching region and matching them against the target image to determine the auxiliary-feature similarity, the auxiliary features comprising at least one of texture features and shape features;

A comprehensive judgment module for performing, on the basis of the color similarity and the auxiliary-feature similarity, a fusion judgment of the color and texture features and deciding whether the confidence requirement is met; if it is, outputting a comprehensive confidence: if the texture-feature similarity is greater than a preset texture-feature similarity threshold, the comprehensive confidence is determined by, or based on, the texture-feature similarity; otherwise, the proportion of the color similarity in the comprehensive confidence is raised;
When performing the coarse matching, the color matching module performs color space conversion and color layering on the image to be matched; the color layering comprises: dividing hue H into 8 parts and saturation S and value V into 3 parts each, and quantizing the color space into layers according to the subjective color perception of the human eye, so that the color space is divided into 72 colors; the formulas are as follows:

$$H = \begin{cases} 0 & h \in [316, 20] \\ 1 & h \in [21, 40] \\ 2 & h \in [41, 75] \\ 3 & h \in [76, 155] \\ 4 & h \in [156, 190] \\ 5 & h \in [191, 270] \\ 6 & h \in [271, 295] \\ 7 & h \in [296, 315] \end{cases}$$

$$S = \begin{cases} 0 & s \in [0, 0.2] \\ 1 & s \in [0.2, 0.7] \\ 2 & s \in [0.7, 1] \end{cases}$$

$$V = \begin{cases} 0 & v \in [0, 0.2] \\ 1 & v \in [0.2, 0.7] \\ 2 & v \in [0.7, 1] \end{cases}$$
Also determine described color similarity for being undertaken by color histogram based on the template matches of color: the image-region that each is divided, according to splitting the color region obtained, absolute value distance method is adopted to calculate the similarity of the image-region of sample color region and image to be matched
If two color regions are respectively I, Q, by concentric rectangles division methods, image is divided, obtain a n concentric rectangles, according to the 72 dimension HSV histograms that layering obtains, the distance D of corresponding part ifor:
D_i = \sum_{j=0}^{71} \left| h_i(j) - h_q(j) \right|
wherein h_i(j) and h_q(j) are the values of the j-th histogram dimension for color regions I and Q respectively, and the similarity ranking is preserved (a code sketch of this distance follows);
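A minimal NumPy sketch of the absolute value (L1) distance between two 72-bin histograms:

    import numpy as np

    def hist_abs_distance(h_i, h_q):
        """Absolute value (L1) distance between two 72-bin HSV histograms."""
        h_i = np.asarray(h_i, dtype=float)
        h_q = np.asarray(h_q, dtype=float)
        return float(np.sum(np.abs(h_i - h_q)))

The per-rectangle distances D_i can then be combined over the n concentric rectangles to rank candidate regions by similarity.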
said auxiliary feature matching module performs the following steps to carry out the auxiliary feature matching:
B1. Convert the color image to a gray-level image; for an image with N gray levels, the co-occurrence matrix is an N × N matrix M = [m_hk], where the value of the element m_hk at position (h, k) records the number of times a pixel pair occurs, at the given displacement, in which one pixel has gray level h and the other has gray level k;
the four feature quantities extracted from the texture co-occurrence matrix are (a code sketch follows the formulas):
Contrast: \mathrm{CON} = \sum_h \sum_k (h - k)^2 m_{hk}

Energy: \mathrm{ASM} = \sum_h \sum_k (m_{hk})^2

Entropy: \mathrm{ENT} = -\sum_h \sum_k m_{hk} \lg(m_{hk})

Correlation: \mathrm{COR} = \left[ \sum_h \sum_k h k \, m_{hk} - \mu_x \mu_y \right] / (\sigma_x \sigma_y)
wherein m_x is the vector of column sums of matrix M and m_y is the vector of row sums; μ_x, μ_y, σ_x, σ_y are respectively the means and standard deviations of m_x and m_y;
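A minimal NumPy sketch of the four feature quantities, assuming M is normalized to joint probabilities and using the standard weighted definitions of the marginal means and standard deviations (the claim does not spell these out):

    import numpy as np

    def glcm_features(M):
        """Contrast, energy, entropy and correlation of an N x N
        gray-level co-occurrence matrix M."""
        M = np.asarray(M, dtype=float)
        M = M / M.sum()                       # normalize to joint probabilities
        N = M.shape[0]
        h = np.arange(N).reshape(-1, 1)       # row gray levels
        k = np.arange(N).reshape(1, -1)       # column gray levels
        con = np.sum((h - k) ** 2 * M)        # contrast
        asm = np.sum(M ** 2)                  # energy (angular second moment)
        p = M[M > 0]
        ent = -np.sum(p * np.log10(p))        # entropy, lg taken as log base 10
        m_x = M.sum(axis=0)                   # column sums
        m_y = M.sum(axis=1)                   # row sums
        levels = np.arange(N)
        mu_x = np.sum(levels * m_x)
        mu_y = np.sum(levels * m_y)
        sigma_x = np.sqrt(np.sum((levels - mu_x) ** 2 * m_x))
        sigma_y = np.sqrt(np.sum((levels - mu_y) ** 2 * m_y))
        cor = (np.sum(h * k * M) - mu_x * mu_y) / (sigma_x * sigma_y)
        return con, asm, ent, cor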
B2. Convolve the image with difference-of-Gaussian kernels at different scales to generate the difference-of-Gaussian scale space (DoG scale space):
D(x,y,σ)=(G(x,y,kσ)-G(x,y,σ))*I(x,y)
wherein G(x, y, σ) is the variable-scale Gaussian function, (x, y) are the spatial coordinates, and σ is the scale coordinate:
G(x, y, \sigma) = \frac{1}{2\pi\sigma^2} \, e^{-(x^2 + y^2)/2\sigma^2}
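A minimal sketch of one DoG layer, using scipy's Gaussian filter in place of an explicit convolution with G(x, y, σ); the scale factor k = √2 is a common choice and an assumption here, not a value fixed by the claim:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def dog_layer(image, sigma, k=np.sqrt(2)):
        """One difference-of-Gaussians layer: D = (G(k*sigma) - G(sigma)) * I."""
        image = np.asarray(image, dtype=float)
        return gaussian_filter(image, k * sigma) - gaussian_filter(image, sigma)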
B3. Compare each sample point with all of its neighbors to determine whether it is larger or smaller than its neighbors in both the image domain and the scale domain: the middle detection point is compared against its 8 neighbors at the same scale and the 9 × 2 points at the corresponding positions of the adjacent scales above and below, 26 points in total, to guarantee that extreme points are detected in both scale space and the two-dimensional image space;
B4. Accurately determine the position and scale of each key point by fitting a three-dimensional quadratic function, while removing low-contrast key points and unstable edge response points;
B5. Use the gradient direction distribution of the pixels in each key point's neighborhood to assign a direction parameter to each key point, so that the operator possesses rotational invariance:
m(x, y) = \sqrt{(L(x+1, y) - L(x-1, y))^2 + (L(x, y+1) - L(x, y-1))^2}

\theta(x, y) = \arctan\!\left( \frac{L(x, y+1) - L(x, y-1)}{L(x+1, y) - L(x-1, y)} \right)
the two formulas above give the modulus (magnitude) and direction of the gradient at (x, y), where the scale used for L is the scale at which each key point was found (a code sketch follows);
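A minimal sketch of these two formulas at a single pixel, assuming L is a 2-D array indexed [row, column] and using atan2 so the direction covers the full (-π, π] range:

    import numpy as np

    def gradient_mag_ori(L, x, y):
        """Gradient magnitude and orientation at (x, y) of the smoothed image L."""
        dx = L[y, x + 1] - L[y, x - 1]
        dy = L[y + 1, x] - L[y - 1, x]
        m = np.hypot(dx, dy)         # sqrt(dx**2 + dy**2)
        theta = np.arctan2(dy, dx)   # orientation in radians
        return m, theta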
B6. Rotate the coordinate axes to the direction of the key point to guarantee rotational invariance;
B7. Take an 8 × 8 window centered on the key point, where the central mark is the position of the current key point and each small cell represents one pixel of the scale space in the key point's neighborhood; then, on each 4 × 4 sub-block, compute the gradient orientation histogram over 8 directions and accumulate the value of each gradient direction to form one seed point;
B8. Describe each key point with 4 × 4 = 16 seed points, so that 128 values are produced per key point, finally forming a 128-dimensional SIFT feature vector;
B9. Adopt the Euclidean distance between key point feature vectors as the similarity measure for key points in the two images: take a key point in the sample image and find the two key points in the image to be matched with the smallest Euclidean distances; if the nearest distance divided by the second-nearest distance is less than a preset ratio threshold, accept this pair of matching points (a code sketch of this ratio test follows).
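A minimal NumPy sketch of this nearest/second-nearest ratio test over 128-dimensional descriptors; the 0.8 ratio is an illustrative assumption, as the claim only requires a preset threshold:

    import numpy as np

    def ratio_test_matches(desc_sample, desc_match, ratio=0.8):
        """Match SIFT descriptors by Euclidean distance with the ratio test.

        desc_sample: (P, 128) descriptors from the sample image.
        desc_match:  (Q, 128) descriptors from the image to be matched (Q >= 2).
        Returns a list of (sample_index, match_index) accepted pairs."""
        matches = []
        for i, d in enumerate(desc_sample):
            dists = np.linalg.norm(desc_match - d, axis=1)
            j1, j2 = np.argsort(dists)[:2]   # two nearest key points
            if dists[j1] < ratio * dists[j2]:
                matches.append((i, int(j1)))
        return matches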
CN201210045645.8A 2012-02-27 2012-02-27 Image multifeature extraction and fusion method and system Expired - Fee Related CN102663391B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210045645.8A CN102663391B (en) 2012-02-27 2012-02-27 Image multifeature extraction and fusion method and system


Publications (2)

Publication Number Publication Date
CN102663391A CN102663391A (en) 2012-09-12
CN102663391B true CN102663391B (en) 2015-03-25

Family

ID=46772875

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210045645.8A Expired - Fee Related CN102663391B (en) 2012-02-27 2012-02-27 Image multifeature extraction and fusion method and system

Country Status (1)

Country Link
CN (1) CN102663391B (en)

Families Citing this family (43)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103903244B (en) * 2012-12-25 2017-08-29 腾讯科技(深圳)有限公司 A kind of similar block search method and device of image
CN103106265B (en) * 2013-01-30 2016-10-12 北京工商大学 Similar image sorting technique and system
CN103473544A (en) * 2013-04-28 2013-12-25 南京理工大学 Robust human body feature rapid extraction method
CN104700532B (en) * 2013-12-11 2018-03-09 杭州海康威视数字技术股份有限公司 A kind of video alarm method and apparatus
CN104699726B (en) * 2013-12-18 2018-03-23 杭州海康威视数字技术股份有限公司 A kind of vehicle image search method and device applied to traffic block port
CN103761454B (en) * 2014-01-23 2017-02-08 南昌航空大学 Matching method of protein points between two gel images based on protein point multi-dimensional features
CN104361573B (en) * 2014-09-26 2017-10-03 北京航空航天大学 The SIFT feature matching algorithm of Fusion of Color information and global information
CN104240261B (en) * 2014-10-11 2017-12-15 中科九度(北京)空间信息技术有限责任公司 Image registration method and device
CN104376334B (en) * 2014-11-12 2018-05-29 上海交通大学 A kind of pedestrian comparison method of multi-scale feature fusion
CN105469383A (en) * 2014-12-30 2016-04-06 北京大学深圳研究生院 Wireless capsule endoscopy redundant image screening method based on multi-feature fusion
CN104574271B (en) * 2015-01-20 2018-02-23 复旦大学 A kind of method of advertising logo insertion digital picture
CN104751470A (en) * 2015-04-07 2015-07-01 东南大学 Image quick-matching method
CN104811622B (en) * 2015-04-30 2017-03-15 努比亚技术有限公司 Image color implantation method and device
CN104809245A (en) * 2015-05-13 2015-07-29 信阳师范学院 Image retrieval method
CN104834732A (en) * 2015-05-13 2015-08-12 信阳师范学院 Texture image retrieving method
CN105163043B (en) * 2015-08-31 2018-04-13 北京奇艺世纪科技有限公司 The method and apparatus that a kind of picture is converted to output video
CN106933816A (en) * 2015-12-29 2017-07-07 北京大唐高鸿数据网络技术有限公司 Across camera lens object retrieval system and method based on global characteristics and local feature
CN107193816B (en) * 2016-03-14 2021-03-30 杭州华为企业通信技术有限公司 Image searching method, virtual character image obtaining method and device
CN106708943A (en) * 2016-11-22 2017-05-24 安徽睿极智能科技有限公司 Image retrieval reordering method and system based on arrangement fusion
WO2018123607A1 (en) * 2016-12-27 2018-07-05 ソニー株式会社 Upward-facing marker, image processing device, image processing method, and program
CN107239780A (en) * 2017-04-29 2017-10-10 安徽慧视金瞳科技有限公司 A kind of image matching method of multiple features fusion
CN107389697B (en) * 2017-07-10 2019-08-30 北京交通大学 A kind of crack detection method based on half interactive mode
CN108170711A (en) * 2017-11-28 2018-06-15 苏州市东皓计算机系统工程有限公司 A kind of image indexing system of computer
CN108537769A (en) * 2018-01-08 2018-09-14 黑龙江省农业科学院植物保护研究所 A kind of recognition methods of the leaf blight of corn, device, equipment and medium
CN110245667A (en) * 2018-03-08 2019-09-17 中华映管股份有限公司 Object discrimination method and its device
CN108230409B (en) * 2018-03-28 2020-04-17 南京大学 Image similarity quantitative analysis method based on multi-factor synthesis of color and content
CN108334644B (en) * 2018-03-30 2019-03-15 百度在线网络技术(北京)有限公司 Image-recognizing method and device
CN108765365A (en) * 2018-04-03 2018-11-06 东南大学 A kind of rotor winding image qualification detection method
CN108829711B (en) * 2018-05-04 2021-06-01 上海得见计算机科技有限公司 Image retrieval method based on multi-feature fusion
CN109308456B (en) * 2018-08-31 2021-06-08 北京字节跳动网络技术有限公司 Target object information determination method, device, equipment and storage medium
CN109255387A (en) * 2018-09-20 2019-01-22 珠海市君天电子科技有限公司 A kind of image matching method, device, electronic equipment and storage medium
CN109325497A (en) * 2018-09-20 2019-02-12 珠海市君天电子科技有限公司 A kind of image binaryzation method, device, electronic equipment and storage medium
CN109948655A (en) * 2019-02-21 2019-06-28 华中科技大学 It is a kind of based on multi-level endoscope operation instruments detection method
CN110047142A (en) * 2019-03-19 2019-07-23 中国科学院深圳先进技术研究院 No-manned plane three-dimensional map constructing method, device, computer equipment and storage medium
CN110134111A (en) * 2019-05-16 2019-08-16 哈尔滨理工大学 A kind of calculator room equipment fault detection means and method based on signal lamp identification
CN110378379B (en) * 2019-06-17 2023-10-13 东南大学 Aviation image feature point matching method
CN110866460B (en) * 2019-10-28 2020-11-27 衢州学院 Method and device for detecting specific target area in complex scene video
CN110826445B (en) * 2019-10-28 2021-04-23 衢州学院 Method and device for detecting specific target area in colorless scene video
CN110826446B (en) * 2019-10-28 2020-08-21 衢州学院 Method and device for segmenting field of view region of texture-free scene video
CN112122175B (en) * 2020-08-12 2021-08-10 浙江大学 Material enhanced feature recognition and selection method of color sorter
CN112528056B (en) * 2020-11-29 2021-09-07 枞阳县中邦科技信息咨询有限公司 Double-index field data retrieval system and method
CN112767426B (en) * 2021-01-07 2023-11-17 珠海格力电器股份有限公司 Target matching method and device and robot
CN116389855A (en) * 2023-06-01 2023-07-04 旷智中科(北京)技术有限公司 Video tagging method based on OCR


Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101789005A (en) * 2010-01-22 2010-07-28 深圳创维数字技术股份有限公司 Image searching method based on region of interest (ROI)

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Hybrid-feature visual environment perception method for robots; Yang Junyou et al.; Journal of Image and Graphics; 20120131; Vol. 17, No. 1; pp. 114-122 *

Also Published As

Publication number Publication date
CN102663391A (en) 2012-09-12

Similar Documents

Publication Publication Date Title
CN102663391B (en) Image multifeature extraction and fusion method and system
CN102662949B (en) Method and system for retrieving specified object based on multi-feature fusion
CN103077512B (en) Based on the feature extracting and matching method of the digital picture that major component is analysed
CN105956582B (en) A kind of face identification system based on three-dimensional data
CN100461204C (en) Method for recognizing facial expression based on 2D partial least square method
CN100433016C (en) Image retrieval algorithm based on abrupt change of information
CN106844739B (en) Remote sensing image change information retrieval method based on neural network collaborative training
CN103034860A (en) Scale-invariant feature transform (SIFT) based illegal building detection method
CN105654122B (en) Based on the matched spatial pyramid object identification method of kernel function
CN103345760B (en) A kind of automatic generation method of medical image object shapes template mark point
CN107644227A (en) A kind of affine invariant descriptor of fusion various visual angles for commodity image search
CN103473545A (en) Text-image similarity-degree measurement method based on multiple features
Xie et al. Fabric defect detection method combing image pyramid and direction template
CN113379777A (en) Shape description and retrieval method based on minimum circumscribed rectangle vertical internal distance proportion
CN1286064C (en) An image retrieval method based on marked interest point
Zhao et al. Hyperspectral target detection method based on nonlocal self-similarity and rank-1 tensor
CN111127407B (en) Fourier transform-based style migration forged image detection device and method
CN107103327B (en) Dyeing counterfeit image detection method based on color statistical difference
Kekre et al. SAR image segmentation using vector quantization technique on entropy images
CN111414958B (en) Multi-feature image classification method and system for visual word bag pyramid
CN106096650B (en) Based on the SAR image classification method for shrinking self-encoding encoder
CN112232249A (en) Remote sensing image change detection method and device based on depth features
CN103455798B (en) Histogrammic human body detecting method is flowed to based on maximum geometry
CN112396089B (en) Image matching method based on LFGC network and compression excitation module
CN107220651A (en) A kind of method and device for extracting characteristics of image

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C41 Transfer of patent application or patent right or utility model
TR01 Transfer of patent right

Effective date of registration: 20160509

Address after: 200062, No. 28, Danba Road, Putuo District, Shanghai, No. 5, No. 6, first floor

Patentee after: Zhongan Xiao Co.,Ltd.

Address before: 41, 518034 floor, Press Plaza, Shennan Avenue, Futian District, Guangdong, Shenzhen

Patentee before: ANKE SMART CITY TECHNOLOGY (PRC) Co.,Ltd.

PP01 Preservation of patent right

Effective date of registration: 20190710

Granted publication date: 20150325

PP01 Preservation of patent right
PD01 Discharge of preservation of patent
PD01 Discharge of preservation of patent

Date of cancellation: 20220710

Granted publication date: 20150325

PP01 Preservation of patent right
PP01 Preservation of patent right

Effective date of registration: 20220811

Granted publication date: 20150325

PD01 Discharge of preservation of patent
PD01 Discharge of preservation of patent

Date of cancellation: 20230523

Granted publication date: 20150325

CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20150325