CN102663391A - Image multifeature extraction and fusion method and system - Google Patents


Info

Publication number
CN102663391A
CN102663391A (application CN201210045645A)
Authority
CN
China
Prior art keywords
image
color
similarity
supplemental characteristic
matched
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2012100456458A
Other languages
Chinese (zh)
Other versions
CN102663391B (en)
Inventor
王军
吴金勇
王一科
龚灼
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhongan Xiao Co ltd
Original Assignee
China Security and Surveillance Technology PRC Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Security and Surveillance Technology PRC Inc filed Critical China Security and Surveillance Technology PRC Inc
Priority to CN201210045645.8A priority Critical patent/CN102663391B/en
Publication of CN102663391A publication Critical patent/CN102663391A/en
Application granted granted Critical
Publication of CN102663391B publication Critical patent/CN102663391B/en
Status: Expired - Fee Related (anticipated expiration)

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides an image multi-feature extraction and fusion method and system, comprising the following steps: extracting color features from an image to be matched, performing color matching against the target image, and determining the color similarity; if the color similarity exceeds a set color similarity threshold, proceeding to the next step; extracting auxiliary features from the image to be matched and matching them against the target image to determine the auxiliary-feature similarity, the auxiliary features comprising at least one of texture features and shape features; and, on the basis of the color similarity and the auxiliary-feature similarity, making a comprehensive judgment to obtain the comprehensive similarity between the image to be matched and the target image. Alternatively, the auxiliary features extracted from the matching region of the image to be matched may be matched against the target image. The method and system match in a cascaded, coarse-to-fine manner and can accurately determine the region of the image to be matched that is similar to the target image, achieving fast and efficient matching and saving manpower and resources.

Description

Image multi-feature extraction and fusion method and system
Technical field
The present invention relates to the technical field of image processing, and in particular to an image multi-feature extraction and fusion method and system.
Background art
With the construction of "Safe City", "Smart City", and other large-scale security surveillance systems, the volume of surveillance video data grows by the day. Retrieving the needed images and videos quickly and accurately from massive image and video data has become an increasingly important problem, and the greatest difficulty in retrieval at present is how to obtain a robust feature quickly and accurately.
Content-based image and video retrieval extracts the features of interest in an image, including visual features such as color, texture, and shape, and searches a large image collection for images matching the user's input, realizing retrieval by actual visual content. Compared with keyword search this is an important breakthrough: it raises the technical level of the work, improves management, strengthens law-enforcement supervision, and raises the level of public security surveillance. For example, Chinese patent "A comprehensive multi-feature image retrieval method" (publication number CN101551823, published 2009-10-07) combines the obtained color, texture, and shape features into a total similarity by weighted summation; "A multi-feature-fusion design-patent image retrieval method" (publication number CN101847163A, published 2010-09-29) normalizes each feature and fuses the weighted distances to obtain the final similarity between images.
Existing multi-feature extraction and fusion techniques mainly have the following deficiencies:
1. Slow speed: existing multi-feature extraction and fusion techniques all compare whole images against whole images. As video becomes progressively high-definition, this approach is extremely slow and cannot be applied to real-time video retrieval.
2. High false-detection rate under target scaling, rotation, and similar changes: the prior art mostly combines color, edge, and texture, and because these three features have high false-detection rates under such changes, it is difficult to use in practice.
3. Existing multi-feature extraction and fusion techniques combine color, edge, and texture in parallel and obtain the final confidence by a weighted sum. Because color, edge, and texture are not comparable, this weighted-sum approach can produce large errors.
Summary of the invention
Aspects and advantages of the present invention are stated in part in the description below, may be apparent from that description, or may be learned by practicing the invention.
To overcome the slow speed, the accumulation of each feature's unfavorable factors, and the high false-detection rate of existing multi-feature extraction and fusion techniques, the present invention provides an image multi-feature extraction and fusion method and system. Through a cascade it makes full use of each feature's strengths, matching progressively from coarse to fine; it can progressively and accurately determine the region of the image to be matched that is similar to the target image, rather than directly matching two whole images; it achieves fast, efficient, and accurate matching and saves manpower and material resources.
The technical scheme adopted by the present invention to solve the above technical problems is as follows:
According to one aspect of the present invention, an image multi-feature extraction and fusion method is provided, comprising the following steps:
S1. Extract color features from the image to be matched, perform color matching against the target image, and determine the color similarity; if the color similarity exceeds a set color similarity threshold, proceed to the next step;
S2. Extract auxiliary features from the image to be matched and match them against the target image to determine the auxiliary-feature similarity, the auxiliary features comprising at least one of texture features and shape features;
S3. On the basis of the color similarity and the auxiliary-feature similarity, make a comprehensive judgment to obtain the comprehensive similarity between the image to be matched and the target image.
According to one embodiment of the present invention, in step S1 the color similarity is determined by performing color-based template matching between the image to be matched and the target image.
Preferably, in step S1 the color-based template matching is also used to determine the matching region of the image to be matched that is most similar to the target image; in step S2 the auxiliary-feature similarity is determined by extracting auxiliary features from that matching region and matching them against the auxiliary features of the target image.
According to one embodiment of the present invention, in step S1 the image to be matched first undergoes color space transformation and color layering computation, after which the color-based template matching is performed.
According to one embodiment of the present invention, in step S1 the color-based template matching is performed via color histograms.
According to one embodiment of the present invention, in step S2 the auxiliary features are texture features, comprising one or more of: gray-level co-occurrence matrix texture features and rotation/scale-invariant texture features.
According to one embodiment of the present invention, in step S3 the comprehensive judgment is made as follows:
If the texture-feature similarity is greater than a set texture-feature similarity threshold, the comprehensive similarity is determined by, or dominated by, the texture-feature similarity; otherwise the proportion of the color similarity in the comprehensive similarity is increased.
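As a concrete illustration, the judgment rule above can be sketched as follows. The specific weights and the 0.7 threshold are assumptions for illustration only; the text fixes just the qualitative rule (texture dominates above its threshold, color's share rises otherwise).

```python
def fuse_similarity(color_sim, texture_sim,
                    texture_threshold=0.7, texture_weight=0.8):
    """Comprehensive judgment of step S3, a minimal sketch.

    Above the texture threshold, the comprehensive similarity is
    dominated by the texture similarity; below it, the color
    similarity's proportion is raised instead.
    """
    if texture_sim > texture_threshold:
        return texture_weight * texture_sim + (1 - texture_weight) * color_sim
    return texture_weight * color_sim + (1 - texture_weight) * texture_sim
```

A final accept/reject decision would then compare the returned value against a preset confidence threshold, as step 109 of the flow in Fig. 1 describes.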
According to another aspect of the present invention, an image multi-feature extraction and fusion system is provided, comprising a matching module, the matching module comprising:
a color matching module, for extracting color features from the image to be matched, performing color matching against the target image, and determining the color similarity; if the color similarity exceeds a set color similarity threshold, processing passes to the auxiliary-feature matching module;
an auxiliary-feature matching module, for extracting auxiliary features from the image to be matched and matching them against the target image to determine the auxiliary-feature similarity, the auxiliary features comprising at least one of texture features and shape features; and
a comprehensive judgment module, for making a comprehensive judgment on the basis of the color similarity and the auxiliary-feature similarity, obtaining the comprehensive similarity between the image to be matched and the target image.
According to one embodiment of the present invention, the color matching module is configured to determine the color similarity by performing color-based template matching between the image to be matched and the target image.
Preferably, the color matching module is configured to also use the color-based template matching to determine the matching region of the image to be matched that is most similar to the target image; the auxiliary-feature matching module is configured to determine the auxiliary-feature similarity by extracting auxiliary features from that matching region and matching them against the auxiliary features of the target image.
Of course, the matching module may also be configured to carry out one or more other steps of the image multi-feature extraction and fusion method described above.
The multi-feature extraction and fusion method of the present invention is mainly used in image matching or in image and video retrieval. It overcomes the accumulation of each feature's unfavorable factors in existing multi-feature extraction and fusion methods, makes full use of each feature's strengths through a cascade, and matches progressively from coarse to fine. The cascade matching can also progressively and accurately determine the region of the image to be matched that is similar to the target image, rather than directly comparing two whole images, achieving fast, efficient, and accurate matching and saving manpower and material resources.
Specifically, relative to the prior art, the present invention can bring the following beneficial effects:
1. Through the cascade method, matching proceeds progressively from coarse to fine, the detection region can be narrowed, and matching speed is greatly improved.
2. With the present invention, shape features (such as edge features) can be replaced by texture features as the auxiliary feature, adopting the more robust SIFT (or SURF) algorithm, which detects well under scaling and rotation changes, so accuracy can be greatly improved.
3. The present invention can combine color and auxiliary features in a cascade of serial and parallel stages, finally deriving the final confidence from whether the two similarities exceed preset thresholds. This not only greatly improves matching speed but also avoids the error the prior art causes by directly taking a weighted sum of non-comparable features such as color, edge, and texture.
4. When applied to retrieval of specific image targets (such as vehicles, pedestrians, and animals), the present invention first obtains the approximate region of the target from the image via color-based template matching, and then matches the target region accurately with a more precise auxiliary feature (texture features may be used, such as rotation/scale-invariant features), thereby obtaining an accurate target similarity.
By reading the specification, those of ordinary skill in the art will better understand the features and aspects of these and other embodiments.
Description of drawings
Advantages and implementations of the present invention will become more apparent from the following description with reference to the accompanying drawings and examples, in which the content shown is for explanation of the invention only and does not constitute a limitation of the invention in any sense. In the drawings:
Fig. 1 is a general flowchart of an image multi-feature extraction and fusion method according to an embodiment of the invention.
Fig. 2 is a flowchart of the coarse matching in Fig. 1.
Fig. 3 is a structural diagram of a matching module according to an embodiment of the invention.
Embodiment
The present invention provides an image multi-feature extraction and fusion system. The system may be a host or a special-purpose device, a network system, or a software system installed on a host or special-purpose device; the key point is that it comprises a matching module. As shown in Fig. 3, the matching module comprises:
a color matching module, for extracting color features from the image to be matched, performing color matching against the target image, and determining the color similarity; if the color similarity exceeds a set color similarity threshold, processing passes to the auxiliary-feature matching module;
an auxiliary-feature matching module, for extracting auxiliary features from the image to be matched and matching them against the target image to determine the auxiliary-feature similarity, the auxiliary features comprising at least one of texture features and shape features; and
a comprehensive judgment module, for making a comprehensive judgment on the basis of the color similarity and the auxiliary-feature similarity, obtaining the comprehensive similarity between the image to be matched and the target image.
According to an embodiment of the invention, the color matching module is configured to determine the color similarity by performing color-based template matching between the image to be matched and the target image. Preferably, the color matching module is configured to also use the color-based template matching to determine the matching region of the image to be matched that is most similar to the target image; the auxiliary-feature matching module is configured to determine the auxiliary-feature similarity by extracting auxiliary features from that matching region and matching them against the auxiliary features of the target image.
In the embodiments below, a texture feature is used as the auxiliary feature by way of example. Of course, those of ordinary skill in the art may also use a shape feature (for example, an edge feature) as the auxiliary feature, or use texture and shape features together; all of this falls within the scope of the present invention.
As shown in Fig. 1 and Fig. 2, the image to be matched (the original image) undergoes color layering, and first-level coarse matching is performed using the layered color histogram and other color features. The coarse matching yields the most similar target region, which serves as the region for the next-level fine matching; it excludes most targets whose color features differ greatly, and because the condition is not strict, it does not miss targets or cause cumulative error. On the basis of the coarse matching, fine matching is performed in combination with the auxiliary feature (texture, in this example); the fine matching is a further confirmation, not a direct negation of the coarse matching. On the basis of the coarse and fine matching, a comprehensive judgment is made from the similarities each obtained, giving the comprehensive confidence between the image to be matched and the target image. The concrete steps are as follows:
1. Color space transformation and color layering computation.
2. First-level coarse matching (using color-based template matching): on the basis of the color layering, coarse matching is performed via color histograms and other features to determine the probable region (matching region) most similar to the intended target. If the color similarity exceeds the set color similarity threshold, proceed to the next step; otherwise, end the matching of the current image to be matched.
3. Second-level fine matching: texture features are extracted for accurate matching; the texture features extracted from the matching region of the image to be matched are matched against those of the target image, and the texture-feature similarity is determined.
4. Third-level comprehensive judgment: the coarse and fine matching are combined to make a comprehensive judgment, and the comprehensive confidence is derived according to a certain rule.
The flowchart of this method is given in Fig. 1 and Fig. 2. In the specific embodiment shown in Fig. 1, the processing flow of the matching module comprises the following steps:
101. Obtain the original image (the image to be matched);
102. Perform color space transformation in accordance with the input parameters;
103. Perform color layering computation;
104. Perform color matching (i.e. coarse matching) against the target image;
105. Determine the most similar matching region in the image to be matched;
106. Extract texture features from the matching region;
107. Perform texture-feature matching (i.e. fine matching) against the target image;
108. Perform the fusion judgment of color and texture features;
109. Judge whether the confidence requirement is met; if so, output the comprehensive confidence, otherwise exit.
In the specific embodiment shown in Fig. 2, the coarse-matching flow comprises the following steps:
201. Input the layered image;
202. Divide the image into regions according to the size of the target image, in accordance with the input parameters;
203. Take out an image region;
204. Compute its HSV (hue, saturation, value) color histogram;
205. Match the HSV histograms;
206. Sort and save the similarities;
207. Judge whether this is the last image region;
208. If so, take out the most similar matching region; otherwise return to step 203 and match the next image region.
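The scanning loop of steps 201–208 can be sketched as below. Treating the layered image as a per-pixel index array in [0, 72) and the stride parameter are assumptions about the data layout; the histogram comparison uses the absolute-value distance introduced later in equation (7).

```python
import numpy as np

def coarse_match(layered, target_hist, win_h, win_w, step=4):
    """First-level coarse matching (steps 201-208), a minimal sketch.

    `layered` holds one 72-level color index per pixel; a window the
    size of the target slides with stride `step`, each window's
    normalized histogram is compared to `target_hist` with the
    absolute-value distance, and the best window is returned.
    """
    best = None
    H, W = layered.shape
    for y in range(0, H - win_h + 1, step):
        for x in range(0, W - win_w + 1, step):
            region = layered[y:y + win_h, x:x + win_w]
            hist = np.bincount(region.ravel(), minlength=72).astype(float)
            hist /= hist.sum()
            d = np.abs(hist - target_hist).sum()
            if best is None or d < best[0]:
                best = (d, (y, x))
    return best[1], best[0]   # most similar region and its distance
```

A finer step improves localization at the cost of speed, exactly the precision/speed trade-off the region-division step describes.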
The steps of the present embodiment are described in detail below in turn.
First step: color space transformation and color layering computation
1) Color space transformation
Since the color layering is to be done in HSV (hue, saturation, value) color space, the image is first converted from RGB (red, green, blue) color space to HSV color space:
$$H=\begin{cases}\arccos\dfrac{(R-G)+(R-B)}{2\sqrt{(R-G)^2+(R-B)(G-B)}}, & B\le G\\[1ex]2\pi-\arccos\dfrac{(R-G)+(R-B)}{2\sqrt{(R-G)^2+(R-B)(G-B)}}, & B>G\end{cases}\qquad(1)$$

$$S=\frac{\max(R,G,B)-\min(R,G,B)}{\max(R,G,B)}\qquad(2)$$

$$V=\frac{\max(R,G,B)}{255}\qquad(3)$$
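The conversion above can be transcribed directly; this is a sketch, not the patent's implementation, and the handling of the degenerate gray case (zero denominator), which the formulas leave implicit, is an added assumption.

```python
import math

def rgb_to_hsv(R, G, B):
    """RGB -> HSV per equations (1)-(3): R, G, B in [0, 255];
    H in radians [0, 2*pi), S and V in [0, 1]."""
    mx, mn = max(R, G, B), min(R, G, B)
    denom = math.sqrt((R - G) ** 2 + (R - B) * (G - B))
    if denom == 0:
        H = 0.0                          # gray pixel: hue undefined
    else:
        theta = math.acos(((R - G) + (R - B)) / (2 * denom))
        H = theta if B <= G else 2 * math.pi - theta   # Eq. (1)
    S = (mx - mn) / mx if mx else 0.0    # Eq. (2)
    V = mx / 255.0                       # Eq. (3)
    return H, S, V
```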
2) Color layering computation
Color layering maps the color space into a certain subset, thereby improving image matching speed. A typical image color system has nearly 2^24 colors, while the colors the human eye can really distinguish are limited; therefore, when processing images, the color space needs to be layered. The dimension of the layering is very important: the higher the layering dimension, the higher the matching precision, but the matching speed drops accordingly.
Color layering divides into equal-interval and unequal-interval layering. If the dimension of equal-interval layering is too low, precision drops greatly; if it is too high, computation becomes complex. Through analysis and experiment, the present embodiment selects unequal-interval color layering, with the following steps:
In accordance with human perception, the hue H is divided into 8 parts and the saturation S and value V into 3 parts each; the color space is quantized into layers according to the subjective perception characteristics of human color vision, by the following formulas:
$$H=\begin{cases}0,& h\in[316,20]\\1,& h\in[21,40]\\2,& h\in[41,75]\\3,& h\in[76,155]\\4,& h\in[156,190]\\5,& h\in[191,270]\\6,& h\in[271,295]\\7,& h\in[296,315]\end{cases}\qquad(4)$$

$$S=\begin{cases}0,& s\in[0,0.2]\\1,& s\in(0.2,0.7]\\2,& s\in(0.7,1]\end{cases}\qquad(5)$$

$$V=\begin{cases}0,& v\in[0,0.2]\\1,& v\in(0.2,0.7]\\2,& v\in(0.7,1]\end{cases}\qquad(6)$$
By the above method the color space is divided into 72 (8 × 3 × 3) colors.
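The layering of equations (4)–(6) can be written out as below. The packed index 9·H + 3·S + V is a common convention and an assumption here — the text only states that 72 colors result; hue is taken in degrees.

```python
def quantize_hsv(h_deg, s, v):
    """Unequal-interval color layering per equations (4)-(6), a sketch.

    h_deg is the hue in degrees [0, 360); s, v in [0, 1].
    Returns a combined color index in [0, 71]."""
    if h_deg >= 316 or h_deg <= 20:
        H = 0
    elif h_deg <= 40:
        H = 1
    elif h_deg <= 75:
        H = 2
    elif h_deg <= 155:
        H = 3
    elif h_deg <= 190:
        H = 4
    elif h_deg <= 270:
        H = 5
    elif h_deg <= 295:
        H = 6
    else:
        H = 7
    S = 0 if s <= 0.2 else (1 if s <= 0.7 else 2)
    V = 0 if v <= 0.2 else (1 if v <= 0.7 else 2)
    return 9 * H + 3 * S + V
```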
Second step: first-level coarse matching: on the basis of the color layering, coarse matching via color histograms determines the probable region most similar to the target
1) Image region division
For matching an intended target, so that the sample target and the target in the image to be matched match well, in the first level we divide the image to be matched into regions differing little in size from the sample target. The horizontal and vertical movement step lengths can be set according to the required matching precision: for higher precision set the step smaller; for higher speed set the step larger.
2) Color template and feature matching
For each divided image region, the similarity between the sample color region and the region to be matched is calculated from the color regions obtained by segmentation; the absolute-value distance method is adopted here.
Let the two color regions be I and Q. The image is divided by the concentric-rectangle division method, giving n concentric rectangles. From the 72-dimensional HSV histograms obtained by the layering above, the distance $D_i$ of corresponding parts is:
$$D_i=\sum_{j=0}^{71}\big|h_i(j)-h_q(j)\big|\qquad(7)$$
where $h_i(j)$ and $h_q(j)$ are the values of the j-th histogram dimension of the corresponding color regions I and Q. The calculation results are sorted, and the most similar region is found as the matching region.
Third step: fine matching: extract the auxiliary feature (texture, in this example) for accurate matching
The texture features may comprise one or more of: gray-level co-occurrence matrix texture features, and rotation/scale-invariant texture features (such as the SIFT feature).
1) Gray-level co-occurrence matrix texture features
First the color image is converted to a gray-level image. For an image with N gray levels, the co-occurrence matrix is an N×N matrix $M=[m_{hk}]$, where the element $m_{hk}$ at position (h, k) is the number of pixel pairs, at the given offset, whose gray levels are h and k respectively.
The four feature quantities extracted from the texture co-occurrence matrix are:
Contrast: $$CON=\sum_h\sum_k(h-k)^2 m_{hk}\qquad(8)$$
Energy: $$ASM=\sum_h\sum_k(m_{hk})^2\qquad(9)$$
Entropy: $$ENT=-\sum_h\sum_k m_{hk}\lg(m_{hk})\qquad(10)$$
Correlation: $$COR=\Big[\sum_h\sum_k h\,k\,m_{hk}-\mu_x\mu_y\Big]\Big/\sigma_x\sigma_y\qquad(11)$$
where $m_x$ is the vector of column sums of the matrix M, $m_y$ is the vector of row sums, and $\mu_x$, $\mu_y$, $\sigma_x$, $\sigma_y$ are respectively the means and standard deviations of $m_x$ and $m_y$.
The concrete steps in the present embodiment are as follows:
a. Divide the image gray scale into 64 gray levels;
b. Construct gray-level co-occurrence matrices in four directions: M(1,0), M(0,1), M(1,1), M(1,−1);
c. Compute the four texture feature quantities on each co-occurrence matrix;
d. Take the mean and standard deviation of each feature quantity — $\mu_{CON}$, $\sigma_{CON}$, $\mu_{ASM}$, $\sigma_{ASM}$, $\mu_{ENT}$, $\sigma_{ENT}$, $\mu_{COR}$, $\sigma_{COR}$ — as the eight components of the texture feature.
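Steps a–d can be sketched as follows. Normalizing the co-occurrence matrix to a joint distribution is an assumption (the text counts raw pair frequencies), and the base-10 logarithm follows the "lg" of equation (10).

```python
import numpy as np

def glcm(gray, dh, dk, levels=64):
    """Gray-level co-occurrence matrix for one offset (dh, dk);
    `gray` already holds gray-level indices in [0, levels)."""
    M = np.zeros((levels, levels))
    H, W = gray.shape
    for y in range(H):
        for x in range(W):
            y2, x2 = y + dh, x + dk
            if 0 <= y2 < H and 0 <= x2 < W:
                M[gray[y, x], gray[y2, x2]] += 1
    return M / M.sum()   # normalize to a joint distribution (assumption)

def glcm_features(M):
    """Contrast, energy, entropy, correlation of equations (8)-(11)."""
    h, k = np.indices(M.shape)
    con = ((h - k) ** 2 * M).sum()          # Eq. (8)
    asm = (M ** 2).sum()                    # Eq. (9)
    nz = M[M > 0]
    ent = -(nz * np.log10(nz)).sum()        # Eq. (10), lg = log10
    mx, my = M.sum(axis=1), M.sum(axis=0)   # marginal sums
    i = np.arange(M.shape[0])
    mux, muy = (i * mx).sum(), (i * my).sum()
    sx = np.sqrt((((i - mux) ** 2) * mx).sum())
    sy = np.sqrt((((i - muy) ** 2) * my).sum())
    cor = ((h * k * M).sum() - mux * muy) / (sx * sy) if sx and sy else 0.0
    return con, asm, ent, cor
```

Per step d, these four quantities would be computed on each of the four direction matrices M(1,0), M(0,1), M(1,1), M(1,−1), and the mean and standard deviation of each quantity taken as the eight texture components.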
2) SIFT (scale-invariant feature transform) features
The SIFT algorithm is an algorithm for extracting local features: it seeks extreme points in scale space and extracts invariants of position, scale, and rotation.
Its main detection steps are as follows:
a) Detect scale-space extreme points;
b) Accurately localize the extreme points;
c) Assign a direction parameter to each keypoint;
d) Generate the keypoint descriptors.
● Generation of the scale space
The purpose of scale-space theory is to simulate the multi-scale characteristics of image data. The Gaussian convolution kernel is the only linear kernel that realizes scale change, so the scale space of a two-dimensional image is defined as:
$$L(x,y,\sigma)=G(x,y,\sigma)*I(x,y)\qquad(12)$$
where $G(x,y,\sigma)$ is the variable-scale Gaussian function
$$G(x,y,\sigma)=\frac{1}{2\pi\sigma^2}e^{-(x^2+y^2)/2\sigma^2}\qquad(13)$$
(x, y) are spatial coordinates and σ is the scale coordinate.
To detect stable keypoints effectively, the difference-of-Gaussian scale space (DoG scale space) was proposed, generated by convolving the image with difference-of-Gaussian kernels of different scales:
$$D(x,y,\sigma)=(G(x,y,k\sigma)-G(x,y,\sigma))*I(x,y)=L(x,y,k\sigma)-L(x,y,\sigma)\qquad(14)$$
Construction of the image pyramid: the pyramid has O octaves in total, each with S layers; the images of the next octave are obtained by downsampling those of the previous one; O and S are set by the user.
● Scale-space extreme point detection
To find the extreme points of the scale space, each sample point is compared with all its neighbors to see whether it is larger or smaller than its neighbors in the image domain and the scale domain. The middle check point is compared with its 8 neighbors at the same scale and the corresponding 9 × 2 points of the two adjacent scales — 26 points in total — to ensure that extreme points are detected in both scale space and the two-dimensional image space.
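The 26-point comparison can be sketched as a strict-extremum test over a 3 × 3 × 3 cube of the DoG stack; treating ties as non-extrema is an assumption beyond the text.

```python
import numpy as np

def is_scale_space_extremum(dog, s, y, x):
    """26-neighbor extremum test in a DoG stack, a minimal sketch.

    `dog` is a (scales, H, W) array; the point (s, y, x) counts as an
    extremum when it is the unique maximum or unique minimum of its
    8 same-scale neighbors plus 9 neighbors in each adjacent scale."""
    cube = dog[s - 1:s + 2, y - 1:y + 2, x - 1:x + 2]
    v = dog[s, y, x]
    return (v == cube.max() and (cube == v).sum() == 1) \
        or (v == cube.min() and (cube == v).sum() == 1)
```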
● Parameters that need to be determined to build the scale space
σ — scale-space coordinate
O — octave coordinate
S — sub-level coordinate
σ is related to the octave o and sub-level s by
$$\sigma(o,s)=\sigma_0\,2^{\,o+s/S},\quad o\in o_{\min}+[0,\dots,O-1],\quad s\in[0,\dots,S-1]$$
where $\sigma_0$ is the base scale level.
The spatial coordinate x is a function of the octave. Let $x_0$ be the spatial coordinate of octave 0; then
$$x=2^o x_0,\quad o\in\mathbb{Z},\quad x_0\in[0,\dots,N_0-1]\times[0,\dots,M_0-1]$$
If $(M_0,N_0)$ is the resolution of the base octave o = 0, the resolutions of the other octaves are obtained from:
$$N_o=\lfloor N_0/2^o\rfloor,\qquad M_o=\lfloor M_0/2^o\rfloor$$
The following parameters are generally used:
$$\sigma_n=0.5,\quad\sigma_0=1.6\cdot 2^{1/S},\quad o_{\min}=-1,\quad S=3$$
In octave o = −1, the image is doubled in size with bilinear interpolation (for the enlarged image, $\sigma_n=1$).
● Accurately determining the extreme point position
The position and scale of a keypoint are accurately determined (to sub-pixel precision) by fitting a three-dimensional quadratic function; at the same time, low-contrast keypoints and unstable edge response points are removed (the DoG operator produces a strong edge response), to strengthen matching stability and improve noise resistance.
Removal of edge responses
A poorly defined extremum of the difference-of-Gaussian operator has a large principal curvature across the edge and a small principal curvature in the perpendicular direction. The principal curvatures are obtained from a 2×2 Hessian matrix H:
$$H=\begin{bmatrix}D_{xx}&D_{xy}\\D_{xy}&D_{yy}\end{bmatrix}\qquad(16)$$
The derivatives are estimated from differences of adjacent sample points.
The principal curvatures of D are proportional to the eigenvalues of H. Let α be the larger eigenvalue and β the smaller; then
$$Tr(H)=D_{xx}+D_{yy}=\alpha+\beta\qquad(17)$$
$$Det(H)=D_{xx}D_{yy}-(D_{xy})^2=\alpha\beta\qquad(18)$$
Let α = rβ; then:
$$\frac{Tr(H)^2}{Det(H)}=\frac{(\alpha+\beta)^2}{\alpha\beta}=\frac{(r\beta+\beta)^2}{r\beta^2}=\frac{(r+1)^2}{r}\qquad(19)$$
The value of $(r+1)^2/r$ is smallest when the two eigenvalues are equal and increases with r. Therefore, to check whether the principal-curvature ratio is below some threshold r, it suffices to check:
$$\frac{Tr(H)^2}{Det(H)}<\frac{(r+1)^2}{r}\qquad(20)$$
Generally r = 10 is taken.
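Equations (16)–(20) reduce to a few lines of code; rejecting a non-positive determinant (eigenvalues of opposite sign) follows standard SIFT practice and is an assumption beyond the text above.

```python
def passes_edge_test(dxx, dyy, dxy, r=10.0):
    """Edge-response test of equations (16)-(20), a minimal sketch.

    Rejects a keypoint whose principal-curvature ratio exceeds r,
    or whose Hessian determinant is non-positive (a degenerate case
    also discarded in practice)."""
    tr = dxx + dyy                  # Eq. (17)
    det = dxx * dyy - dxy * dxy     # Eq. (18)
    if det <= 0:
        return False
    return tr * tr / det < (r + 1) ** 2 / r   # Eq. (20)
```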
● the key point direction is distributed
Utilize the gradient direction distribution character of key point neighborhood territory pixel to be each key point assigned direction parameter, make operator possess rotational invariance.
m ( x , y ) = ( L ( x + 1 , y ) - L ( x - 1 , y ) ) 2 + ( L ( x , y + 1 ) - L ( x , y - 1 ) ) 2 - - - ( 21 )
&theta; ( x , y ) = &alpha; tan 2 ( ( L ( x , y + 1 ) - L ( x , y - 1 ) ) / ( ( L ( x + 1 , y ) - L ( x - 1 , y ) ) ) - - - ( 22 )
Last two formulas are that (x y) locates the mould value and the direction formula of gradient.The yardstick that belongs to separately for each key point of the used yardstick of L wherein.
When actual computation, we sample in the neighborhood window that with the key point is the center, and with the gradient direction of statistics with histogram neighborhood territory pixel.The scope of histogram of gradients is 0~360 degree, wherein per a 10 degree post, 36 posts altogether.Histogrammic peak value has then been represented the principal direction of this key point place neighborhood gradient, promptly as the direction of this key point.
In gradient orientation histogram, when existing another to be equivalent to the peak value of main peak value 80% energy, then this direction is thought the auxilliary direction of this key point.A key point may designatedly have a plurality of directions (principal direction, auxilliary direction more than), and this can strengthen the robustness of coupling.
At this point keypoint detection is complete, and each keypoint carries three pieces of information: position, scale, and orientation. Together these determine a SIFT feature region.
● Feature descriptor generation
First the coordinate axes are rotated to the orientation of the keypoint to guarantee rotation invariance.
Next an 8 × 8 window is taken centered on the keypoint; the central point marks the position of the current keypoint, and each small cell represents one pixel of the keypoint's scale-space neighborhood. On each 4 × 4 sub-block an 8-direction gradient orientation histogram is computed, and the accumulated value of each gradient direction forms one seed point.
In the actual computation, to strengthen the robustness of matching, each keypoint is described by 4 × 4 = 16 seed points, so each keypoint produces 128 values, i.e. a final 128-dimensional SIFT feature vector. At this point the SIFT feature vector is free of geometric deformation factors such as scale change and rotation; normalizing the vector to unit length further removes the influence of illumination change.
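The final descriptor assembly and length normalization can be sketched as follows (an illustrative sketch; real SIFT additionally clips large components before renormalizing, which this sketch omits):

```python
import numpy as np

def normalize_descriptor(seed_histograms):
    """Flatten 4x4 seed points x 8 orientations into a 128-D vector and
    normalize it to unit length, removing the influence of uniform
    illumination change as described above."""
    v = np.asarray(seed_histograms, dtype=float).reshape(128)
    n = np.linalg.norm(v)
    return v / n if n > 0 else v
```

Because the normalization divides out any global scaling of the gradient magnitudes, descriptors computed from the same patch under different brightness gains coincide.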
After the SIFT feature vectors of the two images have been generated, the Euclidean distance between keypoint feature vectors is used as the similarity measure for keypoints in the two images. Take a keypoint in the reference image and find the two keypoints in the image to be matched with the smallest Euclidean distances to it; if the nearest distance divided by the second-nearest distance is below a given ratio threshold, the pair is accepted as a match. Lowering this ratio threshold reduces the number of SIFT matches but makes them more stable.
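The nearest/second-nearest ratio test described above can be sketched as follows (an illustrative Python sketch; the brute-force search and the default ratio 0.8 are our assumptions — the patent only requires "a certain proportion threshold"):

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.8):
    """Euclidean nearest/second-nearest ratio test.

    desc_a, desc_b: arrays of shape (n, d) of keypoint descriptors.
    For each descriptor in desc_a, find its two nearest neighbours in
    desc_b and accept the pair only if
        nearest_distance < ratio * second_nearest_distance.
    Lowering `ratio` yields fewer but more stable matches.
    Returns a list of (index_in_a, index_in_b) pairs.
    """
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)  # brute-force distances
        order = np.argsort(dists)
        if len(order) >= 2 and dists[order[0]] < ratio * dists[order[1]]:
            matches.append((i, int(order[0])))
    return matches
```

The test rejects ambiguous matches: when the two best candidates are nearly equidistant, the ratio approaches 1 and the pair is discarded.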
3) Combined features
Each single feature has its own advantages for image matching; to improve matching accuracy, the present invention combines the color and texture features into a composite feature for image matching.
Because color and texture features have different physical meanings, they are not directly comparable, and the features must first be normalized. The formula is as follows:
D = w1·d1 + w2·d2    (23)
where d1 and d2 are the distances between the color feature vectors and between the texture feature vectors of the two images respectively, and w1, w2 are the feature weights (0 ≤ w1 ≤ 1 and w1 + w2 = 1).
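Equation (23) can be sketched as a small helper (an illustrative sketch; the parameter name w_color and the range check are our additions):

```python
def fused_distance(d_color, d_texture, w_color=0.5):
    """Weighted combination of equation (23): D = w1*d1 + w2*d2 with
    w1 + w2 = 1.  Both distances are assumed to have already been
    normalized to a common range (e.g. [0, 1]), since color and texture
    distances have different physical meanings and are not directly
    comparable."""
    if not 0.0 <= w_color <= 1.0:
        raise ValueError("w_color must lie in [0, 1]")
    return w_color * d_color + (1.0 - w_color) * d_texture
```

Setting w_color to 1 or 0 recovers the pure color distance or pure texture distance respectively.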
Step 4, comprehensive judgment: combine coarse matching and fine matching to make a comprehensive judgment and derive a comprehensive confidence according to the following rules.
The rules of the comprehensive judgment are:
a) If both coarse matching and fine matching show high similarity, the region is confirmed as a similar matching target, and the comprehensive confidence is computed mainly from the texture feature (e.g. SIFT) similarity;
b) If the coarse-matching similarity is only moderate but the fine-matching similarity is high, then, because of the stability of SIFT features, the comprehensive confidence is still computed mainly from the texture similarity, but the weight of the color similarity is increased;
c) If the coarse-matching similarity is high but the fine-matching similarity is only moderate, the two similarities are weighted roughly equally in the comprehensive confidence, with the exact weights set according to the actual situation;
d) If both similarities are low, the region is not considered a similar target.
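Rules a)–d) can be sketched as a small decision function (an illustrative sketch; the thresholds high = 0.8 and low = 0.3 and the concrete weights are our assumptions — the patent leaves them to be set according to actual conditions):

```python
def comprehensive_confidence(color_sim, texture_sim, high=0.8, low=0.3):
    """Sketch of judgment rules a)-d).

    color_sim is the coarse-matching (color) similarity and texture_sim
    the fine-matching (texture, e.g. SIFT) similarity, both in [0, 1].
    Returns (confidence, is_similar_target).
    """
    if color_sim >= high and texture_sim >= high:   # rule a): texture dominates
        conf = 0.8 * texture_sim + 0.2 * color_sim
    elif texture_sim >= high:                       # rule b): texture still leads,
        conf = 0.6 * texture_sim + 0.4 * color_sim  # but color weight is raised
    elif color_sim >= high:                         # rule c): roughly equal weights
        conf = 0.5 * texture_sim + 0.5 * color_sim
    elif color_sim < low and texture_sim < low:     # rule d): not a similar target
        return 0.0, False
    else:                                           # intermediate case (assumed)
        conf = 0.5 * texture_sim + 0.5 * color_sim
    return conf, True
```

The else-branch for intermediate similarities is our addition; the patent only specifies the four cases a)–d).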
In this embodiment the method is realized mainly through a cascade of matching stages, proceeding step by step from coarse to fine, and comprises three parts: first-stage coarse matching, second-stage fine matching, and third-stage comprehensive judgment. Coarse matching determines the approximate region containing the target by template matching against the color histogram obtained after color layering; fine matching further confirms the result of coarse matching by extracting texture features; the comprehensive judgment derives a comprehensive similarity (or comprehensive confidence) from the individual similarities produced by the color-based coarse matching and the texture-based fine matching.
Adopt the method for the invention, compared with prior art, overcome the some shortcomings that exist in existing many feature extractions and the fusion method; Make full use of the advantage of each characteristic through the mode of cascade; From slightly progressively mating, can accurately confirm the similar area in target image and the image to be matched, and no longer be the direct contrast between two sub-pictures to essence; Reach effect quick and the efficiently and accurately coupling, saved manpower and materials.
The present invention not only can be applied in the retrieval to static images, also can be applied in the middle of the video frequency searching simultaneously.Those of ordinary skills should be appreciated that; When being applied to video frequency searching; The acquisition of the most similar matching area is except can also passing through other technology, such as obtaining motion target area through background subtraction through the above-mentioned template matching method based on color in the image to be matched.
Above with reference to description of drawings the preferred embodiments of the present invention, those skilled in the art do not depart from the scope and spirit of the present invention, and can have multiple flexible program to realize the present invention.For example, the characteristic that illustrates or describe as the part of an embodiment can be used for another embodiment to obtain another embodiment.More than be merely the preferable feasible embodiment of the present invention, be not so limit to interest field of the present invention, the equivalence that all utilizations instructions of the present invention and accompanying drawing content are done changes, and all is contained within the interest field of the present invention.

Claims (10)

1. A multi-feature extraction and fusion method for images, characterized in that it comprises the following steps:
S1. extracting a color feature from an image to be matched, performing color matching with a target image, and determining a color similarity; if said color similarity exceeds a set color similarity threshold, proceeding to the next step;
S2. extracting an auxiliary feature from the image to be matched, performing auxiliary feature matching with said target image, and determining an auxiliary feature similarity, said auxiliary feature comprising at least one of a texture feature and a shape feature;
S3. making a comprehensive judgment on the basis of said color similarity and said auxiliary feature similarity to obtain a comprehensive similarity between said image to be matched and said target image.
2. The multi-feature extraction and fusion method for images according to claim 1, characterized in that, in said step S1, said color similarity is determined by performing color-based template matching between said image to be matched and the target image.
3. The multi-feature extraction and fusion method for images according to claim 2, characterized in that, in said step S1, the color-based template matching is simultaneously used to determine the matching region of said image to be matched that is most similar to said target image; and in said step S2, said auxiliary feature similarity is determined by matching an auxiliary feature extracted from said matching region of said image to be matched against the auxiliary feature of said target image.
4. The multi-feature extraction and fusion method for images according to claim 2 or 3, characterized in that, in said step S1, color space conversion and color layering computation are first performed on the image to be matched, and the color-based template matching is then performed.
5. The multi-feature extraction and fusion method for images according to claim 2 or 3, characterized in that, in said step S1, the color-based template matching is performed by means of color histograms.
6. The multi-feature extraction and fusion method for images according to claim 2 or 3, characterized in that, in said step S2, said auxiliary feature is a texture feature, and said texture feature comprises one or more of the following: a gray-level co-occurrence matrix texture feature and a rotation- and scale-invariant texture feature.
7. The multi-feature extraction and fusion method for images according to claim 6, characterized in that, in said step S3, the comprehensive judgment is made as follows:
if said texture feature similarity is greater than a set texture feature similarity threshold, said comprehensive similarity is determined by, or mainly by, the texture feature similarity; otherwise the proportion of the color similarity in the comprehensive similarity is increased.
8. A multi-feature extraction and fusion system for images, characterized in that it comprises a matching module, said matching module comprising:
a color matching module, configured to extract a color feature from an image to be matched, perform color matching with a target image, and determine a color similarity; if said color similarity exceeds a set color similarity threshold, processing passes to an auxiliary feature matching module;
the auxiliary feature matching module, configured to extract an auxiliary feature from the image to be matched, perform auxiliary feature matching with said target image, and determine an auxiliary feature similarity, said auxiliary feature comprising at least one of a texture feature and a shape feature;
a comprehensive judgment module, configured to make a comprehensive judgment on the basis of said color similarity and said auxiliary feature similarity and obtain a comprehensive similarity between said image to be matched and said target image.
9. The multi-feature extraction and fusion system for images according to claim 8, characterized in that said color matching module is configured to determine said color similarity by performing color-based template matching between said image to be matched and the target image.
10. The multi-feature extraction and fusion system for images according to claim 9, characterized in that said color matching module is configured to simultaneously use the color-based template matching to determine the matching region of said image to be matched that is most similar to said target image; and said auxiliary feature matching module is configured to determine said auxiliary feature similarity by matching an auxiliary feature extracted from said matching region of said image to be matched against the auxiliary feature of said target image.
CN201210045645.8A 2012-02-27 2012-02-27 Image multifeature extraction and fusion method and system Expired - Fee Related CN102663391B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210045645.8A CN102663391B (en) 2012-02-27 2012-02-27 Image multifeature extraction and fusion method and system


Publications (2)

Publication Number Publication Date
CN102663391A true CN102663391A (en) 2012-09-12
CN102663391B CN102663391B (en) 2015-03-25

Family

ID=46772875

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210045645.8A Expired - Fee Related CN102663391B (en) 2012-02-27 2012-02-27 Image multifeature extraction and fusion method and system

Country Status (1)

Country Link
CN (1) CN102663391B (en)

Cited By (43)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103106265A (en) * 2013-01-30 2013-05-15 北京工商大学 Method and system of classifying similar images
CN103473544A (en) * 2013-04-28 2013-12-25 南京理工大学 Robust human body feature rapid extraction method
CN103761454A (en) * 2014-01-23 2014-04-30 南昌航空大学 Matching method of protein points between two gel images based on protein point multi-dimensional features
CN103903244A (en) * 2012-12-25 2014-07-02 腾讯科技(深圳)有限公司 Image similar block searching method and apparatus
CN104240261A (en) * 2014-10-11 2014-12-24 中科九度(北京)空间信息技术有限责任公司 Image registration method and device
CN104361573A (en) * 2014-09-26 2015-02-18 北京航空航天大学 Color information and global information fused SIFT (scale invariant feature transform) feature matching algorithm
CN104376334A (en) * 2014-11-12 2015-02-25 上海交通大学 Pedestrian comparison method based on multi-scale feature fusion
CN104574271A (en) * 2015-01-20 2015-04-29 复旦大学 Method for embedding advertisement icon into digital image
CN104699726A (en) * 2013-12-18 2015-06-10 杭州海康威视数字技术股份有限公司 Vehicle image retrieval method and device for traffic block port
CN104700532A (en) * 2013-12-11 2015-06-10 杭州海康威视数字技术股份有限公司 Video alarm method and video alarm device
CN104751470A (en) * 2015-04-07 2015-07-01 东南大学 Image quick-matching method
CN104811622A (en) * 2015-04-30 2015-07-29 努比亚技术有限公司 Method and device for migrating image colors
CN104809245A (en) * 2015-05-13 2015-07-29 信阳师范学院 Image retrieval method
CN104834732A (en) * 2015-05-13 2015-08-12 信阳师范学院 Texture image retrieving method
CN105163043A (en) * 2015-08-31 2015-12-16 北京奇艺世纪科技有限公司 Method and device for converting picture into output video
CN105469383A (en) * 2014-12-30 2016-04-06 北京大学深圳研究生院 Wireless capsule endoscopy redundant image screening method based on multi-feature fusion
CN106708943A (en) * 2016-11-22 2017-05-24 安徽睿极智能科技有限公司 Image retrieval reordering method and system based on arrangement fusion
CN106933816A (en) * 2015-12-29 2017-07-07 北京大唐高鸿数据网络技术有限公司 Across camera lens object retrieval system and method based on global characteristics and local feature
WO2017157261A1 (en) * 2016-03-14 2017-09-21 华为技术有限公司 Image search method, and virtual character image acquisition method and device
CN107239780A (en) * 2017-04-29 2017-10-10 安徽慧视金瞳科技有限公司 A kind of image matching method of multiple features fusion
CN107389697A (en) * 2017-07-10 2017-11-24 北京交通大学 A kind of crack detection method based on half interactive mode
CN108170711A (en) * 2017-11-28 2018-06-15 苏州市东皓计算机系统工程有限公司 A kind of image indexing system of computer
CN108230409A (en) * 2018-03-28 2018-06-29 南京大学 Image similarity quantitative analysis method based on color and content multi-factor comprehensive
CN108334644A (en) * 2018-03-30 2018-07-27 百度在线网络技术(北京)有限公司 Image-recognizing method and device
CN108537769A (en) * 2018-01-08 2018-09-14 黑龙江省农业科学院植物保护研究所 A kind of recognition methods of the leaf blight of corn, device, equipment and medium
CN108765365A (en) * 2018-04-03 2018-11-06 东南大学 A kind of rotor winding image qualification detection method
CN108829711A (en) * 2018-05-04 2018-11-16 上海得见计算机科技有限公司 A kind of image search method based on multi-feature fusion
CN109255387A (en) * 2018-09-20 2019-01-22 珠海市君天电子科技有限公司 A kind of image matching method, device, electronic equipment and storage medium
CN109308456A (en) * 2018-08-31 2019-02-05 北京字节跳动网络技术有限公司 The information of target object determines method, apparatus, equipment and storage medium
CN109325497A (en) * 2018-09-20 2019-02-12 珠海市君天电子科技有限公司 A kind of image binaryzation method, device, electronic equipment and storage medium
CN109948655A (en) * 2019-02-21 2019-06-28 华中科技大学 It is a kind of based on multi-level endoscope operation instruments detection method
CN110100149A (en) * 2016-12-27 2019-08-06 索尼公司 Survey label, image processing apparatus, image processing method and program
CN110134111A (en) * 2019-05-16 2019-08-16 哈尔滨理工大学 A kind of calculator room equipment fault detection means and method based on signal lamp identification
CN110245667A (en) * 2018-03-08 2019-09-17 中华映管股份有限公司 Object discrimination method and its device
CN110378379A (en) * 2019-06-17 2019-10-25 东南大学 Aerial image characteristic point matching method
CN110826446A (en) * 2019-10-28 2020-02-21 衢州学院 Method and device for segmenting field of view region of texture-free scene video
CN110826445A (en) * 2019-10-28 2020-02-21 衢州学院 Method and device for detecting specific target area in colorless scene video
CN110866460A (en) * 2019-10-28 2020-03-06 衢州学院 Method and device for detecting specific target area in complex scene video
WO2020186678A1 (en) * 2019-03-19 2020-09-24 中国科学院深圳先进技术研究院 Three-dimensional map constructing method and apparatus for unmanned aerial vehicle, computer device, and storage medium
CN112122175A (en) * 2020-08-12 2020-12-25 浙江大学 Material enhanced feature recognition and selection method of color sorter
CN112528056A (en) * 2020-11-29 2021-03-19 泰州芯源半导体科技有限公司 Double-index field data retrieval system and method
CN112767426A (en) * 2021-01-07 2021-05-07 珠海格力电器股份有限公司 Target matching method and device and robot
CN116389855A (en) * 2023-06-01 2023-07-04 旷智中科(北京)技术有限公司 Video tagging method based on OCR

Citations (1)

Publication number Priority date Publication date Assignee Title
CN101789005A (en) * 2010-01-22 2010-07-28 深圳创维数字技术股份有限公司 Image searching method based on region of interest (ROI)

Patent Citations (1)

Publication number Priority date Publication date Assignee Title
CN101789005A (en) * 2010-01-22 2010-07-28 深圳创维数字技术股份有限公司 Image searching method based on region of interest (ROI)

Non-Patent Citations (3)

Title
廖倩倩 (Liao Qianqian): "Research on image feature extraction and retrieval methods based on the fusion of color and texture", China Master's Theses Full-text Database, Information Science and Technology, no. 05, 15 November 2007 (2007-11-15), pages 138-1210 *
杨俊友 (Yang Junyou) et al.: "Hybrid-feature visual environment perception method for robots", 《中国图象图形学报》 (Journal of Image and Graphics), vol. 17, no. 1, 31 January 2012 (2012-01-31), pages 114-122 *

Cited By (62)

Publication number Priority date Publication date Assignee Title
CN103903244A (en) * 2012-12-25 2014-07-02 腾讯科技(深圳)有限公司 Image similar block searching method and apparatus
CN103903244B (en) * 2012-12-25 2017-08-29 腾讯科技(深圳)有限公司 A kind of similar block search method and device of image
CN103106265A (en) * 2013-01-30 2013-05-15 北京工商大学 Method and system of classifying similar images
CN103106265B (en) * 2013-01-30 2016-10-12 北京工商大学 Similar image sorting technique and system
CN103473544A (en) * 2013-04-28 2013-12-25 南京理工大学 Robust human body feature rapid extraction method
CN104700532A (en) * 2013-12-11 2015-06-10 杭州海康威视数字技术股份有限公司 Video alarm method and video alarm device
CN104700532B (en) * 2013-12-11 2018-03-09 杭州海康威视数字技术股份有限公司 A kind of video alarm method and apparatus
CN104699726A (en) * 2013-12-18 2015-06-10 杭州海康威视数字技术股份有限公司 Vehicle image retrieval method and device for traffic block port
CN104699726B (en) * 2013-12-18 2018-03-23 杭州海康威视数字技术股份有限公司 A kind of vehicle image search method and device applied to traffic block port
CN103761454B (en) * 2014-01-23 2017-02-08 南昌航空大学 Matching method of protein points between two gel images based on protein point multi-dimensional features
CN103761454A (en) * 2014-01-23 2014-04-30 南昌航空大学 Matching method of protein points between two gel images based on protein point multi-dimensional features
CN104361573B (en) * 2014-09-26 2017-10-03 北京航空航天大学 The SIFT feature matching algorithm of Fusion of Color information and global information
CN104361573A (en) * 2014-09-26 2015-02-18 北京航空航天大学 Color information and global information fused SIFT (scale invariant feature transform) feature matching algorithm
CN104240261A (en) * 2014-10-11 2014-12-24 中科九度(北京)空间信息技术有限责任公司 Image registration method and device
CN104240261B (en) * 2014-10-11 2017-12-15 中科九度(北京)空间信息技术有限责任公司 Image registration method and device
CN104376334A (en) * 2014-11-12 2015-02-25 上海交通大学 Pedestrian comparison method based on multi-scale feature fusion
CN105469383A (en) * 2014-12-30 2016-04-06 北京大学深圳研究生院 Wireless capsule endoscopy redundant image screening method based on multi-feature fusion
CN104574271B (en) * 2015-01-20 2018-02-23 复旦大学 A kind of method of advertising logo insertion digital picture
CN104574271A (en) * 2015-01-20 2015-04-29 复旦大学 Method for embedding advertisement icon into digital image
CN104751470A (en) * 2015-04-07 2015-07-01 东南大学 Image quick-matching method
CN104811622A (en) * 2015-04-30 2015-07-29 努比亚技术有限公司 Method and device for migrating image colors
CN104834732A (en) * 2015-05-13 2015-08-12 信阳师范学院 Texture image retrieving method
CN104809245A (en) * 2015-05-13 2015-07-29 信阳师范学院 Image retrieval method
CN105163043A (en) * 2015-08-31 2015-12-16 北京奇艺世纪科技有限公司 Method and device for converting picture into output video
CN105163043B (en) * 2015-08-31 2018-04-13 北京奇艺世纪科技有限公司 The method and apparatus that a kind of picture is converted to output video
CN106933816A (en) * 2015-12-29 2017-07-07 北京大唐高鸿数据网络技术有限公司 Across camera lens object retrieval system and method based on global characteristics and local feature
WO2017157261A1 (en) * 2016-03-14 2017-09-21 华为技术有限公司 Image search method, and virtual character image acquisition method and device
CN106708943A (en) * 2016-11-22 2017-05-24 安徽睿极智能科技有限公司 Image retrieval reordering method and system based on arrangement fusion
CN110100149A (en) * 2016-12-27 2019-08-06 索尼公司 Survey label, image processing apparatus, image processing method and program
CN110100149B (en) * 2016-12-27 2021-08-24 索尼公司 Survey mark, image processing apparatus, image processing method, and program
CN107239780A (en) * 2017-04-29 2017-10-10 安徽慧视金瞳科技有限公司 A kind of image matching method of multiple features fusion
CN107389697A (en) * 2017-07-10 2017-11-24 北京交通大学 A kind of crack detection method based on half interactive mode
CN108170711A (en) * 2017-11-28 2018-06-15 苏州市东皓计算机系统工程有限公司 A kind of image indexing system of computer
CN108537769A (en) * 2018-01-08 2018-09-14 黑龙江省农业科学院植物保护研究所 A kind of recognition methods of the leaf blight of corn, device, equipment and medium
CN110245667A (en) * 2018-03-08 2019-09-17 中华映管股份有限公司 Object discrimination method and its device
CN108230409B (en) * 2018-03-28 2020-04-17 南京大学 Image similarity quantitative analysis method based on multi-factor synthesis of color and content
CN108230409A (en) * 2018-03-28 2018-06-29 南京大学 Image similarity quantitative analysis method based on color and content multi-factor comprehensive
US10762373B2 (en) 2018-03-30 2020-09-01 Baidu Online Network Technology (Beijing) Co., Ltd. Image recognition method and device
CN108334644B (en) * 2018-03-30 2019-03-15 百度在线网络技术(北京)有限公司 Image-recognizing method and device
CN108334644A (en) * 2018-03-30 2018-07-27 百度在线网络技术(北京)有限公司 Image-recognizing method and device
CN108765365A (en) * 2018-04-03 2018-11-06 东南大学 A kind of rotor winding image qualification detection method
CN108829711A (en) * 2018-05-04 2018-11-16 上海得见计算机科技有限公司 A kind of image search method based on multi-feature fusion
CN108829711B (en) * 2018-05-04 2021-06-01 上海得见计算机科技有限公司 Image retrieval method based on multi-feature fusion
CN109308456A (en) * 2018-08-31 2019-02-05 北京字节跳动网络技术有限公司 The information of target object determines method, apparatus, equipment and storage medium
CN109325497A (en) * 2018-09-20 2019-02-12 珠海市君天电子科技有限公司 A kind of image binaryzation method, device, electronic equipment and storage medium
CN109255387A (en) * 2018-09-20 2019-01-22 珠海市君天电子科技有限公司 A kind of image matching method, device, electronic equipment and storage medium
CN109948655A (en) * 2019-02-21 2019-06-28 华中科技大学 It is a kind of based on multi-level endoscope operation instruments detection method
WO2020186678A1 (en) * 2019-03-19 2020-09-24 中国科学院深圳先进技术研究院 Three-dimensional map constructing method and apparatus for unmanned aerial vehicle, computer device, and storage medium
CN110134111A (en) * 2019-05-16 2019-08-16 哈尔滨理工大学 A kind of calculator room equipment fault detection means and method based on signal lamp identification
CN110378379A (en) * 2019-06-17 2019-10-25 东南大学 Aerial image characteristic point matching method
CN110378379B (en) * 2019-06-17 2023-10-13 东南大学 Aviation image feature point matching method
CN110866460A (en) * 2019-10-28 2020-03-06 衢州学院 Method and device for detecting specific target area in complex scene video
CN110826445B (en) * 2019-10-28 2021-04-23 衢州学院 Method and device for detecting specific target area in colorless scene video
CN110826446B (en) * 2019-10-28 2020-08-21 衢州学院 Method and device for segmenting field of view region of texture-free scene video
CN110826445A (en) * 2019-10-28 2020-02-21 衢州学院 Method and device for detecting specific target area in colorless scene video
CN110826446A (en) * 2019-10-28 2020-02-21 衢州学院 Method and device for segmenting field of view region of texture-free scene video
CN112122175A (en) * 2020-08-12 2020-12-25 浙江大学 Material enhanced feature recognition and selection method of color sorter
CN112528056A (en) * 2020-11-29 2021-03-19 泰州芯源半导体科技有限公司 Double-index field data retrieval system and method
CN112767426A (en) * 2021-01-07 2021-05-07 珠海格力电器股份有限公司 Target matching method and device and robot
WO2022148091A1 (en) * 2021-01-07 2022-07-14 珠海格力电器股份有限公司 Target matching method and device, and robot
CN112767426B (en) * 2021-01-07 2023-11-17 珠海格力电器股份有限公司 Target matching method and device and robot
CN116389855A (en) * 2023-06-01 2023-07-04 旷智中科(北京)技术有限公司 Video tagging method based on OCR

Also Published As

Publication number Publication date
CN102663391B (en) 2015-03-25

Similar Documents

Publication Publication Date Title
CN102663391B (en) Image multifeature extraction and fusion method and system
CN102662949B (en) Method and system for retrieving specified object based on multi-feature fusion
CN103077512B (en) Based on the feature extracting and matching method of the digital picture that major component is analysed
CN105956582B (en) A kind of face identification system based on three-dimensional data
CN104063702B (en) Three-dimensional gait recognition based on shielding recovery and partial similarity matching
CN103530638B (en) Method for pedestrian matching under multi-cam
CN100433016C (en) Image retrieval algorithm based on abrupt change of information
CN102509104B (en) Confidence map-based method for distinguishing and detecting virtual object of augmented reality scene
CN103761295B (en) Automatic picture classification based customized feature extraction method for art pictures
CN106844739B (en) Remote sensing image change information retrieval method based on neural network collaborative training
CN103034860A (en) Scale-invariant feature transform (SIFT) based illegal building detection method
Smith et al. Classification of archaeological ceramic fragments using texture and color descriptors
CN101004791A (en) Method for recognizing facial expression based on 2D partial least square method
CN103699578B (en) Image retrieval method based on spectrum analysis
CN101930537A (en) Method and system for identifying three-dimensional face based on bending invariant related features
CN105654122B (en) Based on the matched spatial pyramid object identification method of kernel function
CN107644227A (en) A kind of affine invariant descriptor of fusion various visual angles for commodity image search
CN102446356A (en) Parallel and adaptive matching method for acquiring remote sensing images with homogeneously-distributed matched points
Liu et al. A novel feature fusion approach for VHR remote sensing image classification
CN111414958B (en) Multi-feature image classification method and system for visual word bag pyramid
CN105405138A (en) Water surface target tracking method based on saliency detection
CN113379777A (en) Shape description and retrieval method based on minimum circumscribed rectangle vertical internal distance proportion
CN107894996A (en) The image intelligent analysis method for clapping device is supervised based on intelligence
CN111127407B (en) Fourier transform-based style migration forged image detection device and method
CN109711387B (en) Gait image preprocessing method based on multi-class energy maps

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C41 Transfer of patent application or patent right or utility model
TR01 Transfer of patent right

Effective date of registration: 20160509

Address after: 200062, No. 28, Danba Road, Putuo District, Shanghai, No. 5, No. 6, first floor

Patentee after: Zhongan Xiao Co.,Ltd.

Address before: 41, 518034 floor, Press Plaza, Shennan Avenue, Futian District, Guangdong, Shenzhen

Patentee before: ANKE SMART CITY TECHNOLOGY (PRC) Co.,Ltd.

PP01 Preservation of patent right
PP01 Preservation of patent right

Effective date of registration: 20190710

Granted publication date: 20150325

PD01 Discharge of preservation of patent

Date of cancellation: 20220710

Granted publication date: 20150325

PD01 Discharge of preservation of patent
PP01 Preservation of patent right

Effective date of registration: 20220811

Granted publication date: 20150325

PP01 Preservation of patent right
PD01 Discharge of preservation of patent

Date of cancellation: 20230523

Granted publication date: 20150325

PD01 Discharge of preservation of patent
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20150325

CF01 Termination of patent right due to non-payment of annual fee