CN101980250B - Method for identifying target based on dimension reduction local feature descriptor and hidden conditional random field - Google Patents

Method for identifying target based on dimension reduction local feature descriptor and hidden conditional random field

Info

Publication number
CN101980250B
CN101980250B (application CN201010515864.9A)
Authority
CN
China
Prior art keywords
vector
descriptor
point
image
conditional random
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201010515864.9A
Other languages
Chinese (zh)
Other versions
CN101980250A (en)
Inventor
Li Chao (李超)
Chi Yitao (池毅韬)
Guo Xinyi (郭信谊)
Xiong Zhang (熊璋)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN201010515864.9A priority Critical patent/CN101980250B/en
Publication of CN101980250A publication Critical patent/CN101980250A/en
Application granted granted Critical
Publication of CN101980250B publication Critical patent/CN101980250B/en

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides a method for identifying a target based on a dimension-reduced local feature descriptor and a hidden conditional random field. The method builds a target identification model for recognizing objects; the model is obtained by supervised training on a set of training images in which each object corresponds to a distinct label value. Model building comprises the following steps: computing the SIFT (Scale-Invariant Feature Transform) descriptor vectors of the training images of the different objects, the descriptors of each image forming a high-dimensional vector set; reducing the dimensionality of each SIFT descriptor set with the Neighbor Preserving Embedding (NPE) method; and pairing each dimension-reduced vector set with the label of the object in its source image to form a two-tuple, so that every image yields one two-tuple and the set of two-tuples serves as the training samples of the hidden conditional random field model. Identification of a given test image by the model comprises: computing the SIFT feature descriptor set of the test image; reducing its dimensionality with the NPE method; inputting the reduced vector set into the hidden conditional random field obtained by training; and outputting the resulting object label as the identification result.

Description

Target identification method based on dimension-reduced local feature descriptors and hidden conditional random fields
Technical field
The invention belongs to the class of target identification methods based on dimension-reduced local feature descriptors and hidden conditional random fields. Specifically, it combines local feature extraction from images, dimensionality reduction and hidden conditional random fields, all from the current field of computer vision, to model and discriminate target images.
Background technology
Target identification is one of the most important directions of computer vision and the basis of subsequent higher-level processing such as target classification, video retrieval and behavior understanding. Many methods already exist, including detection based on change detection, detection based on contour feature modeling, color-rarity detection based on the EM algorithm, region-based detection and frame-difference detection. These classical methods are concise and easy to understand, but their results are not satisfactory: simple feature information is insufficient to discriminate objects, and in the improved algorithms that followed, some features still partially cancel each other out, so the target identification methods that have so far been comparatively successful all apply only under particular scenes.
Local features are a recently emerging feature extraction approach in computer vision and are widely used in target identification, image registration, image retrieval and three-dimensional reconstruction. Local features are invariant to geometric and illumination transformations, robust to noise, occlusion and background interference, and highly discriminative.
For the target identification task, extracting local features completes only the most basic step. The extracted information comprises the feature points and the descriptors associated with them. Descriptor matching, match screening and a probabilistic model are still needed afterwards to complete identification, and this does not yet include the process of building an object representation codebook; moreover, throughout the process of matching local features and then recognizing, the correspondence to the physical surface of the recognized object must also be used.
The present invention proposes a target identification method based on dimension-reduced local feature descriptors and hidden conditional random fields. It first extracts SIFT (Scale-Invariant Feature Transform) feature descriptors from the image; then, on the premise of preserving the structure of the high-dimensional space of SIFT descriptors, it reduces the dimensionality of the high-dimensional descriptors with the Neighbor Preserving Embedding (NPE) method; finally it builds a Hidden Conditional Random Fields (HCRF) model for target identification.
Summary of the invention
The target identification method based on dimension-reduced local feature descriptors and hidden conditional random fields of the present invention addresses the following problem: extracting the SIFT feature descriptors of an image, reducing the dimensionality of the descriptors with the NPE method, and modeling with hidden conditional random fields to complete the target identification task.
The target identification method based on dimension-reduced local feature descriptors and hidden conditional random fields proposed by the present invention aims to build a target identification model for object recognition and comprises two stages, model building and identification (a code sketch of the data flow follows the step lists below). The model building steps are:
(1) for every image in the training sample set, each containing an object with a corresponding label value, extract its SIFT feature descriptors;
(2) reduce the dimensionality of the extracted high-dimensional SIFT feature descriptors with the NPE method to obtain the reduced vector set;
(3) the reduced vector set of each image, together with the label number of its object, forms one training sample of the HCRF model; learning over all samples yields the hidden conditional random field model used for object recognition.
The identification steps are:
(1) for every image in the test sample set to be identified, each containing an object, extract its SIFT feature descriptors;
(2) reduce the dimensionality of the extracted high-dimensional SIFT feature descriptors with the NPE method to obtain the reduced vector set;
(3) input the reduced vector set of each image into the trained hidden conditional random field model and output the object label number as the final recognition result.
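A minimal sketch of the identification data flow, assuming OpenCV's SIFT implementation is available; the names `sift_descriptor_set`, `identify`, `A_npe` and `hcrf_predict` are placeholders introduced for this sketch (the NPE projection and HCRF scoring are sketched in later blocks), not terminology from the patent.

```python
import cv2

def sift_descriptor_set(image_path):
    """Step (1): the 128-D SIFT descriptors of one image."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    _, desc = cv2.SIFT_create().detectAndCompute(img, None)
    return desc                                  # shape (num_keypoints, 128)

def identify(image_path, A_npe, hcrf_predict):
    """Steps (1)-(3) of identification for one test image."""
    X = sift_descriptor_set(image_path)          # SIFT descriptors
    Y = X @ A_npe                                # NPE reduction, y_t = A_npe^T x_t
    return hcrf_predict(Y)                       # label output by the trained HCRF
```

Training follows the same flow, except that each reduced vector set is paired with its object label number and the resulting samples are used to learn the HCRF parameters.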
For every image in the training sample set (containing an object with a corresponding label value) or in the test sample set to be identified, extracting the corresponding SIFT features involves two processes, feature point detection and descriptor computation. The feature point detection steps are (a sketch of both steps follows the list):
(1) Scale-space extremum detection: extrema on the scale space are detected by traversing the points of the image D(x, y, σ) obtained from the Difference-of-Gaussian (DoG) operation, expressed as
$$D(x,y,\sigma) = \big(G(x,y,k\sigma) - G(x,y,\sigma)\big) * I(x,y) = L(x,y,k\sigma) - L(x,y,\sigma)$$
where k is the scale factor between two adjacent scales, G(x, y, σ) is a Gaussian function with zero mean and mean square deviation σ, L(x, y, σ) is the Gaussian smoothing of an image at the variable scale σ, I(x, y) is the source image, and * denotes convolution. The gray value of each point of D(x, y, σ) is compared with those of its 8 neighbors in the same level and the 9 neighbors in each of the levels above and below; if it is the maximum or minimum of this neighborhood, the point is taken as a candidate key point;
(2) Accurate feature point localization: if the detected local extremum is $X_0 = (x_0, y_0, \sigma)$, D(x, y, σ) is expanded in a Taylor series, the expansion is differentiated, and setting the derivative to zero gives the accurate position corresponding to the local extremum $X_0$:
$$X_{acc} = -\left(\frac{\partial^2 D}{\partial X^2}\right)^{-1}\frac{\partial D}{\partial X}\bigg|_{X = X_0};$$
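A minimal sketch of the two detection steps, assuming `dog` is a NumPy array of shape (levels, H, W) holding one octave of Difference-of-Gaussian images; the function names and the central-difference derivatives are implementation assumptions for this sketch, not prescribed by the patent.

```python
import numpy as np

def is_extremum(dog, s, y, x):
    """Step (1): compare a point with its 26 neighbours in scale space."""
    cube = dog[s-1:s+2, y-1:y+2, x-1:x+2]
    c = dog[s, y, x]
    return c == cube.max() or c == cube.min()

def refine(dog, s, y, x):
    """Step (2): Taylor refinement X_acc = -(d2D/dX2)^-1 dD/dX at X0 = (x, y, s)."""
    g = 0.5 * np.array([dog[s, y, x+1] - dog[s, y, x-1],       # dD/dx
                        dog[s, y+1, x] - dog[s, y-1, x],       # dD/dy
                        dog[s+1, y, x] - dog[s-1, y, x]])      # dD/dsigma
    H = np.empty((3, 3))
    H[0, 0] = dog[s, y, x+1] - 2*dog[s, y, x] + dog[s, y, x-1]
    H[1, 1] = dog[s, y+1, x] - 2*dog[s, y, x] + dog[s, y-1, x]
    H[2, 2] = dog[s+1, y, x] - 2*dog[s, y, x] + dog[s-1, y, x]
    H[0, 1] = H[1, 0] = 0.25*(dog[s, y+1, x+1] - dog[s, y+1, x-1]
                              - dog[s, y-1, x+1] + dog[s, y-1, x-1])
    H[0, 2] = H[2, 0] = 0.25*(dog[s+1, y, x+1] - dog[s+1, y, x-1]
                              - dog[s-1, y, x+1] + dog[s-1, y, x-1])
    H[1, 2] = H[2, 1] = 0.25*(dog[s+1, y+1, x] - dog[s+1, y-1, x]
                              - dog[s-1, y+1, x] + dog[s-1, y-1, x])
    offset = -np.linalg.solve(H, g)                            # sub-pixel offset
    return np.array([x, y, s], dtype=float) + offset
```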
The descriptor computation steps are (a sketch of both steps follows the list):
(1) Principal direction determination: for each Gaussian-smoothed image L(x, y, σ), the gradient magnitude m(x, y) and direction θ(x, y) of the points around a feature point are computed by the following two formulas:
$$m(x,y) = \sqrt{\big(L(x+1,y)-L(x-1,y)\big)^2 + \big(L(x,y+1)-L(x,y-1)\big)^2}$$
$$\theta(x,y) = \arctan\!\big(\big(L(x,y+1)-L(x,y-1)\big) / \big(L(x+1,y)-L(x-1,y)\big)\big)$$
The range 0°–360° is divided into 36 bins of 10° each; a histogram of the magnitudes m(x, y) is accumulated according to the directions θ(x, y), and the direction of the histogram peak is taken as the principal direction of the feature point;
(2) Descriptor computation: with the feature point as the center, the coordinate axes are rotated so that the x axis coincides with the principal direction of the feature point; a 16 × 16 window is taken and divided evenly into 4 × 4 square sub-regions; the points in the window are weighted with a Gaussian function whose mean square deviation is half the width of the descriptor window, i.e. 8; for each sub-region a histogram of gradient magnitudes over 8 directions (both senses of the horizontal, vertical, main diagonal and anti-diagonal directions) is computed, the accumulated magnitude in each direction becoming one component of the feature descriptor; this yields a 4 × 4 × 8 = 128-dimensional vector, which is normalized to produce the final descriptor vector.
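A simplified sketch of the two descriptor steps, assuming `L` is a Gaussian-smoothed image stored as a NumPy array and the keypoint lies far enough from the image border; rotation of the window to the principal direction and interpolation between bins are omitted, so this illustrates the histogramming rather than the exact procedure of the patent.

```python
import numpy as np

def grad(L, y, x):
    """m(x, y) and theta(x, y) by finite differences."""
    dx = L[y, x+1] - L[y, x-1]
    dy = L[y+1, x] - L[y-1, x]
    return np.hypot(dx, dy), np.arctan2(dy, dx)

def principal_direction(L, y, x, radius=8):
    """Step (1): 36-bin (10 degree) orientation histogram, peak bin wins."""
    hist = np.zeros(36)
    for v in range(y - radius, y + radius):
        for u in range(x - radius, x + radius):
            m, t = grad(L, v, u)
            hist[int(np.degrees(t) % 360) // 10] += m
    return np.argmax(hist) * 10.0

def descriptor_128(L, y, x):
    """Step (2): 4 x 4 sub-regions x 8 directions = 128-D, Gaussian weighted."""
    d = np.zeros((4, 4, 8))
    sigma = 8.0                                   # half the 16-pixel window
    for dv in range(-8, 8):
        for du in range(-8, 8):
            m, t = grad(L, y + dv, x + du)
            w = np.exp(-(du*du + dv*dv) / (2*sigma*sigma))
            d[(dv+8)//4, (du+8)//4, int(np.degrees(t) % 360) // 45] += w * m
    v = d.ravel()
    return v / (np.linalg.norm(v) + 1e-12)        # normalised descriptor vector
```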
For every image in the training sample set (containing an object with a corresponding label value) or in the test sample set to be identified, the extracted SIFT feature descriptors are reduced in dimensionality with the NPE method. The NPE method reduces the dimensionality of the vectors of an undirected graph whose vertices are high-dimensional vectors of the same dimension and whose edge weights are the mutual distances, while keeping the edge weights invariant. For a given vector sequence $x = [x_1, x_2, \ldots, x_m]$, the reduced vector sequence is $y = [y_1, y_2, \ldots, y_m]$, and the mapping from $x_t$ to $y_t$ is expressed as
$$y_t = A_{npe}^T x_t, \qquad x_t \in \mathbb{R}^D,\ y_t \in \mathbb{R}^d,$$
where $d = r \times c$, $d \ll D$, and $A_{npe}$ is a $D \times d$ transformation matrix. The NPE steps are as follows (a sketch of all three steps follows the list):
(1) Construct the adjacency graph: let G be a graph with m nodes and let t and s be the sequence numbers of feature points of the images; the adjacency graph is constructed as follows:
a) if $x_t$ and $x_s$ belong to the same source object, compute their Euclidean distance $dist(t,s) = \|x_t - x_s\|$; otherwise set $dist(t,s) = C$, where C is a predefined constant;
b) if $x_s$ lies within the K nearest neighbors of $x_t$, create a directed edge from $x_t$ to $x_s$;
(2) Compute the weight matrix: each data point can be reconstructed by a linear combination of its neighbors; subject to the constraint $\sum_s W_{ts} = 1$, minimize the objective function $\sum_t \|x_t - \sum_s W_{ts} x_s\|$ to obtain the optimal weight matrix W representing the local neighborhood linear dependence, where $W_{ts}$ is the coefficient with which the neighbor $x_s$ reconstructs $x_t$ after normalization by spatial distance;
(3) Compute the projection matrix: minimize the cost function
$$\Phi(Y) = \sum_t \Big(y_t - \sum_s W_{ts} y_s\Big)^2 = a^T X M X^T a, \qquad M = (I-W)^T(I-W),$$
where I is the identity matrix, subject to the constraint $y^T y = a^T X X^T a = 1$. The transformation vector a is obtained by solving the generalized eigenvalue problem $X M X^T a = \lambda X X^T a$ for its smallest eigenvalues; let the column vectors $a_1, a_2, \ldots, a_d$ be the solutions ordered by eigenvalue $\lambda_1 \le \lambda_2 \le \cdots \le \lambda_d$. The final mapping is then
$$x_t \rightarrow y_t = A^T x_t, \qquad A = (a_1, a_2, \ldots, a_d),$$
where A is a D × d matrix.
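A compact sketch of the three NPE steps on a descriptor matrix X of shape (m, D) whose rows are descriptors, using plain k-nearest-neighbour adjacency; the same-object/constant-C rule of step (1) and the spatial-distance normalization of step (2) are simplified away, and the small regularization terms are numerical conveniences added for this sketch.

```python
import numpy as np
from scipy.linalg import eigh

def npe_fit(X, d=6, k=5):
    """Learn the D x d NPE transformation matrix A from descriptors X (m, D)."""
    m, D = X.shape
    # step (1): pairwise distances and k-nearest-neighbour adjacency
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    # step (2): reconstruction weights with sum_s W_ts = 1
    W = np.zeros((m, m))
    for t in range(m):
        nbrs = np.argsort(dist[t])[1:k + 1]       # k nearest neighbours of x_t
        Z = X[nbrs] - X[t]
        G = Z @ Z.T + 1e-6 * np.eye(k)            # regularised local Gram matrix
        w = np.linalg.solve(G, np.ones(k))
        W[t, nbrs] = w / w.sum()
    # step (3): generalized eigenproblem X M X^T a = lambda X X^T a
    M = (np.eye(m) - W).T @ (np.eye(m) - W)
    vals, vecs = eigh(X.T @ M @ X, X.T @ X + 1e-6 * np.eye(D))
    return vecs[:, :d]                            # columns a_1..a_d (smallest eigenvalues)

# usage: A = npe_fit(X, d=6); Y = X @ A reduces each 128-D descriptor to 6-D
```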
The Hidden Conditional Random Fields (HCRF) model infers a label value z from an input sequence of observation vectors of equal dimension, $y = \{y_1, y_2, \ldots, y_m\}$. A parameterized hidden conditional random field consists of the hidden state layer, the input observation vectors and the label value, and the HCRF models and infers the conditional probability of the label by
$$P(z \mid y, \theta, \omega) = \sum_h P(z, h \mid y, \theta, \omega) = \frac{\sum_h e^{\Psi(z,h,y;\theta,\omega)}}{\sum_{z' \in Z,\, h \in H} e^{\Psi(z',h,y;\theta,\omega)}}$$
where $h = \{h_1, h_2, \ldots, h_m\}$ corresponds to the observation sequence y, $h_i \in H$, and H is the set of possible hidden states. With the parameters $\theta = [\theta_h, \theta_z, \theta_e]$ and window size ω, the potential function $\Psi(z, h, y; \theta, \omega)$ is
$$\Psi(z,h,y;\theta,\omega) = \sum_{j} \varphi(y,j,\omega)\cdot\theta_h(h_j) + \sum_{j} \theta_z(z,h_j) + \sum_{(j,k)\in E} \theta_e(z,h_j,h_k)$$
where E is an undirected graph, (j, k) denotes one of its edges, and each vertex of the graph corresponds to a hidden state; $\varphi(y,j,\omega)$ can represent an arbitrary feature of the observation window; among the parameters $\theta = [\theta_h, \theta_z, \theta_e]$, $\theta_h$ are the parameters associated with the hidden states $h_i \in H$, $\theta_z$ measures the compatibility between a hidden state $h_i$ and the label z, and $\theta_e$ measures the compatibility between connected states j and k and the label z;
(1) In the training process of the HCRF model, the optimal value of the parameter group $\theta = [\theta_h, \theta_z, \theta_e]$ is determined by
$$\theta^* = \arg\max_\theta L(\theta)$$
where the objective function L(θ) is
$$L(\theta) = \sum_{i=1}^{n} \log P(z_i \mid y_i, \theta, \omega) - \frac{1}{2\sigma_\theta^2}\|\theta\|^2$$
where n is the total number of training sample sequences and the parameter θ is assumed to follow a Gaussian distribution with variance $\sigma_\theta^2$;
(2) In the inference process, for an input observation vector sequence y, the inferred label value $\tilde z$ is
$$\tilde z = \arg\max_{z \in Z} P(z \mid y, \omega, \theta^*)$$
(a brute-force sketch of this conditional probability and inference follows).
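A brute-force sketch of the HCRF conditional probability and inference for a short observation sequence: the hidden assignments h are enumerated exhaustively rather than summed out efficiently, the observation feature φ(y, j, ω) is taken to be the reduced vector y_j itself, and the parameter shapes are assumptions made for this sketch, not the patent's implementation.

```python
import numpy as np
from itertools import product

def psi(z, h, Y, theta_h, theta_z, theta_e, edges):
    """Potential Psi(z, h, y): observation, state-label and edge-label terms.
    Assumed shapes: theta_h (n_hidden, d), theta_z (n_labels, n_hidden),
    theta_e (n_labels, n_hidden, n_hidden), Y (m, d)."""
    s = sum(float(Y[j] @ theta_h[h[j]]) for j in range(len(h)))   # phi(y,j) . theta_h(h_j)
    s += sum(float(theta_z[z, h[j]]) for j in range(len(h)))      # theta_z(z, h_j)
    s += sum(float(theta_e[z, h[j], h[k]]) for j, k in edges)     # theta_e(z, h_j, h_k)
    return s

def p_label(Y, n_labels, n_hidden, theta_h, theta_z, theta_e, edges):
    """P(z | y, theta) by exhaustive summation over the hidden assignments h."""
    m = len(Y)
    log_num = np.array([
        np.logaddexp.reduce([psi(z, h, Y, theta_h, theta_z, theta_e, edges)
                             for h in product(range(n_hidden), repeat=m)])
        for z in range(n_labels)])
    return np.exp(log_num - np.logaddexp.reduce(log_num))

# inference as in the formula above: z_hat = int(np.argmax(p_label(...)))
```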
Brief description of the drawings
Fig. 1 is the flow chart of model building and identification.
Fig. 2 shows the gradients and the Gaussian weighting range over the 16 × 16 region around a feature point.
Fig. 3 shows the final descriptor.
Fig. 4 is a schematic diagram of the hidden conditional random field model for a single target with 4 hidden states.
Specific technical scheme
The model building and target identification processes are shown in Fig. 1. The image collection used to train the target recognition model contains L objects, the l-th object corresponding to $k_l$ training images. A source image $img_i$ containing a particular object yields, after computation, a set of SIFT feature points, in which the information of each feature point can be represented by the tuple
$$Sift_j := \langle j, (x,y), \sigma, \theta, descriptor_{128\times 1} \rangle$$
where j is the index of the feature point in the set, (x, y) its position in the source image, σ the corresponding scale information, θ the principal direction information, and $descriptor_{128\times 1}$ the 128-dimensional descriptor vector of the feature point.
The decisive part of the feature point information is the descriptor vector, which is the main basis of matching. From the SIFT features computed for a source image $img_i$, the descriptor parts are extracted to form the descriptor vector set (an illustrative layout is sketched below):
$$SiftSet_i = \{SiftDescriptor_j\} = \{\langle j, descriptor_{128\times 1}\rangle\}$$
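An illustrative layout of the per-keypoint tuple Sift_j and of the descriptor set SiftSet_i described above; the class and field names are assumptions made for readability, not part of the patent.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class SiftFeature:
    j: int                  # index of the feature point within the set
    xy: tuple               # (x, y) position in the source image
    sigma: float            # scale information
    theta: float            # principal direction
    descriptor: np.ndarray  # 128-dimensional descriptor vector

def descriptor_set(features):
    # SiftSet_i keeps only the <j, descriptor> pairs used for matching
    return {f.j: f.descriptor for f in features}
```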
The SIFT descriptors are then reduced in dimensionality by the NPE method, the original dimension being D = 128 and the reduced dimension being chosen as d = 6. The reduction process is expressed as
$$SiftSet_i^{(red)} = \{SiftDescriptor_j\}^{(red)} = \{\langle j, A_{npe}^T\, descriptor_{128\times 1}\rangle\}$$
where $A_{npe}$ is the dimensionality-reduction transformation matrix.
Each image containing a source object also corresponds to a label value $obj_i$, where $obj_i = l$, $1 \le l \le L$. The input set of the training process is
$$\{\langle SiftSet_i^{(red)}, obj_i\rangle\}, \qquad i = 1, \ldots, n,$$
where n is the total number of training images. The model is trained on these samples to obtain the corresponding model parameters; for an input test image, SIFT feature extraction and NPE dimensionality reduction yield the reduced vector set, which is fed into the model to obtain the output target recognition result.
SIFT feature extraction first computes the Gaussian smoothing of the image and the Difference of Gaussians (DoG). Computing the Gaussian smoothing uses the notion of scale space. The scale space is divided into octaves, each octave corresponding to a different sampling rate of the image: the sampling step of the first octave is 1, of the second 2, of the third 4, and of the k-th octave $2^{k-1}$. Each octave is further divided into S levels, the mean square deviation of the Gaussian used for smoothing at level s being $\sigma_s = 2^{s/S}\sigma_0$, where $\sigma_0 = 1.6$ and S is usually taken as 5. Within an octave, adjacent levels are subtracted to obtain the Difference-of-Gaussian images, so each octave contains S − 1 DoG images. To judge whether a point of a DoG image is a possible feature point, it is compared with the 26 surrounding points in its own level and in the two adjacent levels; if it is an extremum it is selected as a candidate feature point (a sketch of the pyramid construction follows).
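A sketch of the scale-space construction just described, assuming SciPy's Gaussian filter is available: each octave halves the sampling rate, each octave holds S Gaussian levels smoothed with σ_s = 2^{s/S}·σ_0, and adjacent levels are subtracted to give S − 1 DoG images per octave.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_pyramid(image, octaves=4, S=5, sigma0=1.6):
    """DoG pyramid: S Gaussian levels per octave, S - 1 difference images."""
    img = np.asarray(image, dtype=np.float64)
    pyramid = []
    for _ in range(octaves):
        levels = [gaussian_filter(img, 2 ** (s / S) * sigma0) for s in range(S)]
        dog = np.stack([levels[s + 1] - levels[s] for s in range(S - 1)])
        pyramid.append(dog)                        # candidate points are extrema of these stacks
        img = img[::2, ::2]                        # next octave: sampling step doubles
    return pyramid
```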
The gradients around a feature point used to compute the descriptor, and the corresponding Gaussian weighting range, are shown in Fig. 2. In the Gaussian-smoothed image, the 16 × 16 points around the feature point are taken and, according to
$$m(x,y) = \sqrt{\big(L(x+1,y)-L(x-1,y)\big)^2 + \big(L(x,y+1)-L(x,y-1)\big)^2}$$
$$\theta(x,y) = \arctan\!\big(\big(L(x,y+1)-L(x,y-1)\big) / \big(L(x+1,y)-L(x-1,y)\big)\big)$$
the magnitude and direction at each position are determined. The points of the region are weighted with a Gaussian function whose mean square deviation is half the width of the descriptor window, i.e. 8, to strengthen invariance to illumination and geometric change. For each sub-region a histogram of gradient magnitudes over 8 directions (both senses of the horizontal, vertical, main diagonal and anti-diagonal directions) is computed, the accumulated magnitude in each direction becoming one component of the feature descriptor; this yields a 4 × 4 × 8 = 128-dimensional vector, which is normalized to produce the final descriptor vector, as shown in Fig. 3.
Fig. 4 shows the abstract hidden conditional random field model corresponding to a single object. The model has three layers: the top layer is the label corresponding to the target, which is part of the input when training the model and the final output during identification; the middle layer is the undirected graph formed by the hidden states, in which every two hidden-state vertices are connected by an edge, and the values of the edges between the object label and the hidden states, the values of the hidden states and the edge weights are all adjusted continuously during training; each hidden state in turn corresponds to one observation vector, and for the present invention these observation vectors are the vectors of the set obtained after NPE dimensionality reduction. Fig. 4 shows only 4 hidden states with their 4 corresponding observation vectors. In practice, an image of an object containing some texture typically yields hundreds or even thousands of feature point descriptors from the SIFT algorithm, and hence a reduced vector set of similar size, which forms the input of training and identification for the HCRF-based target recognition model (a short sketch of the edge set for the four-state example follows).
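A short sketch of the fully connected hidden-state layer of Fig. 4: with four hidden-state vertices, every pair is joined by an edge, giving the kind of edge list consumed by the HCRF scoring sketch given earlier.

```python
# 4 hidden-state vertices, an edge between every pair of them
m = 4
edges = [(j, k) for j in range(m) for k in range(j + 1, m)]
print(edges)   # [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
```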

Claims (1)

1. A target identification method based on dimension-reduced local feature descriptors and hidden conditional random fields, characterized in that: its goal is to build a target identification model for object recognition, comprising two stages, model building and object identification, wherein the model building steps comprise:
1.1, for every image in the training sample set, each containing an object with a corresponding label value, extract its scale-invariant feature transform (SIFT) feature descriptors;
1.2, reduce the dimensionality of the extracted high-dimensional SIFT feature descriptors with the neighbor preserving embedding method to obtain the reduced vector set;
1.3, the reduced vector set of each image, together with the label number of its object, forms one training sample of the hidden conditional random field model; learning over all samples yields the hidden conditional random field model used for object recognition; the identification steps comprise:
2.1, for every image in the test set containing an object, extract its SIFT feature descriptors;
2.2, reduce the dimensionality of the extracted high-dimensional SIFT feature descriptors with the neighbor preserving embedding method to obtain the reduced vector set;
2.3, input the reduced vector set of each image into the trained hidden conditional random field model and output the object label number as the final recognition result;
wherein, for every image in the training sample set (containing an object with a corresponding label value) or in the test sample set to be identified, extracting the corresponding SIFT features involves two processes, feature point detection and descriptor computation, wherein the feature point detection steps are:
3.1, scale-space extremum detection: extrema on the scale space are detected by traversing the points of the image D(x, y, σ) obtained from the Difference-of-Gaussian operation; D(x, y, σ) is expressed as
$$D(x,y,\sigma) = \big(G(x,y,k\sigma) - G(x,y,\sigma)\big) * I(x,y) = L(x,y,k\sigma) - L(x,y,\sigma)$$
wherein k is the scale factor between two adjacent scales; G(x, y, σ) is a Gaussian function with zero mean and mean square deviation σ; L(x, y, σ) is the Gaussian smoothing of an image at the variable scale σ; I(x, y) is the source image and * denotes convolution; the gray value of each point of D(x, y, σ) is compared with those of its 8 neighbors in the same level and the 9 neighbors in each of the levels above and below; if it is the maximum or minimum of this neighborhood, the point is taken as a candidate key point;
3.2, accurate feature point localization: if the detected local extremum is $X_0 = (x_0, y_0, \sigma)$, D(x, y, σ) is expanded in a Taylor series, the expansion is differentiated, and setting the derivative to zero gives the accurate position corresponding to the local extremum $X_0$:
$$X_{acc} = -\left(\frac{\partial^2 D}{\partial X^2}\right)^{-1}\frac{\partial D}{\partial X}\bigg|_{X = X_0};$$
the descriptor computation steps comprise:
4.1, principal direction determination: for each Gaussian-smoothed image L(x, y, σ), the gradient magnitude m(x, y) and direction θ(x, y) of the points around a feature point are computed by the following two formulas:
$$m(x,y) = \sqrt{\big(L(x+1,y)-L(x-1,y)\big)^2 + \big(L(x,y+1)-L(x,y-1)\big)^2}$$
$$\theta(x,y) = \arctan\!\big(\big(L(x,y+1)-L(x,y-1)\big) / \big(L(x+1,y)-L(x-1,y)\big)\big)$$
the range 0°–360° is divided into 36 bins of 10° each; a histogram of the magnitudes m(x, y) is accumulated according to the directions θ(x, y), and the direction of the histogram peak is taken as the principal direction of the feature point;
4.2, descriptor computation: with the feature point as the center, the coordinate axes are rotated so that the x axis coincides with the principal direction of the feature point; a 16 × 16 window is taken and divided evenly into 4 × 4 square sub-regions; the points in the window are weighted with a Gaussian function whose mean square deviation is half the width of the descriptor window, i.e. 8; for each sub-region a histogram of gradient magnitudes over 8 directions (both senses of the horizontal, vertical, main diagonal and anti-diagonal directions) is computed, the accumulated magnitude in each direction becoming one component of the feature descriptor; this yields a 4 × 4 × 8 = 128-dimensional vector, which is normalized to produce the final descriptor vector;
wherein the described neighbor preserving embedding method reduces the dimensionality of an undirected graph whose vertices are high-dimensional vectors of the same dimension and whose edge weights are the mutual distances, while keeping the edge weights invariant; for a given vector sequence $x = [x_1, x_2, \ldots, x_m]$, the reduced vector sequence is $y = [y_1, y_2, \ldots, y_m]$, and the mapping from $x_t$ to $y_t$ is expressed as
$$y_t = A_{npe}^T x_t, \qquad x_t \in \mathbb{R}^D,\ y_t \in \mathbb{R}^d,$$
wherein $d = r \times c$, $d \ll D$, and $A_{npe}$ is a $D \times d$ transformation matrix; its steps are as follows:
5.1, construct the adjacency graph: let G be a graph with m nodes and let t and s be the sequence numbers of feature points of the images; the adjacency graph is constructed as follows:
a) if $x_t$ and $x_s$ belong to the same source object, compute their Euclidean distance $dist(t,s) = \|x_t - x_s\|$; otherwise set $dist(t,s) = C$, where C is a predefined constant;
b) if $x_s$ lies within the K nearest neighbors of $x_t$, create a directed edge from $x_t$ to $x_s$;
5.2, compute the weight matrix: each data point is reconstructed by a linear combination of its neighbors; subject to the constraint $\sum_s W_{ts} = 1$, minimize the objective function $\sum_t \|x_t - \sum_s W_{ts} x_s\|$ to obtain the optimal weight matrix W representing the local neighborhood linear dependence, wherein $W_{ts}$ is the coefficient with which the neighbor $x_s$ reconstructs $x_t$ after normalization by spatial distance;
5.3, compute the projection matrix: minimize the cost function
$$\Phi(Y) = \sum_t \Big(y_t - \sum_s W_{ts} y_s\Big)^2 = a^T X M X^T a, \qquad M = (I-W)^T(I-W),$$
wherein I is the identity matrix, subject to the constraint $y^T y = a^T X X^T a = 1$; the transformation vector a is obtained by solving the generalized eigenvalue problem $X M X^T a = \lambda X X^T a$ for its smallest eigenvalues; let the column vectors $a_1, a_2, \ldots, a_d$ be the solutions ordered by eigenvalue $\lambda_1 \le \lambda_2 \le \cdots \le \lambda_d$; the final mapping is then
$$x_t \rightarrow y_t = A^T x_t, \qquad A = (a_1, a_2, \ldots, a_d),$$
wherein A is a D × d matrix;
wherein the described hidden conditional random field model infers the label value z from an input sequence of observation vectors of equal dimension, $y = \{y_1, y_2, \ldots, y_m\}$; a parameterized hidden conditional random field consists of the hidden state layer, the input observation vectors and the label value, and the hidden conditional random field (HCRF) models and infers the conditional probability of the label by
$$P(z \mid y, \theta, \omega) = \sum_h P(z, h \mid y, \theta, \omega) = \frac{\sum_h e^{\Psi(z,h,y;\theta,\omega)}}{\sum_{z' \in Z,\, h \in H} e^{\Psi(z',h,y;\theta,\omega)}}$$
wherein $h = \{h_1, h_2, \ldots, h_m\}$ corresponds to the observation sequence y, $h_i \in H$, and H is the set of possible hidden states; with the parameters $\theta = [\theta_h, \theta_z, \theta_e]$ and window size ω, the potential function $\Psi(z, h, y; \theta, \omega)$ is
$$\Psi(z,h,y;\theta,\omega) = \sum_{j} \varphi(y,j,\omega)\cdot\theta_h(h_j) + \sum_{j} \theta_z(z,h_j) + \sum_{(j,k)\in E} \theta_e(z,h_j,h_k)$$
wherein E is an undirected graph, (j, k) denotes one of its edges, and each vertex of the graph corresponds to a hidden state; $\varphi(y,j,\omega)$ can represent an arbitrary feature of the observation window; among the parameters $\theta = [\theta_h, \theta_z, \theta_e]$, $\theta_h$ are the parameters associated with the hidden states $h_i \in H$, $\theta_z$ measures the compatibility between a hidden state $h_i$ and the label z, and $\theta_e$ measures the compatibility between connected states j and k and the label z;
6.1, in the training process of the hidden conditional random field model, the optimal value of the parameter group $\theta = [\theta_h, \theta_z, \theta_e]$ is determined by
$$\theta^* = \arg\max_\theta L(\theta)$$
wherein the objective function L(θ) is
$$L(\theta) = \sum_{i=1}^{n} \log P(z_i \mid y_i, \theta, \omega) - \frac{1}{2\sigma_\theta^2}\|\theta\|^2$$
wherein n is the total number of training sample sequences and the parameter θ is assumed to follow a Gaussian distribution with variance $\sigma_\theta^2$;
6.2, in the inference process, for an input observation vector sequence y, the inferred label value $\tilde z$ is
$$\tilde z = \arg\max_{z \in Z} P(z \mid y, \omega, \theta^*).$$
CN201010515864.9A 2010-10-15 2010-10-15 Method for identifying target based on dimension reduction local feature descriptor and hidden conditional random field Expired - Fee Related CN101980250B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201010515864.9A CN101980250B (en) 2010-10-15 2010-10-15 Method for identifying target based on dimension reduction local feature descriptor and hidden conditional random field

Publications (2)

Publication Number Publication Date
CN101980250A CN101980250A (en) 2011-02-23
CN101980250B true CN101980250B (en) 2014-06-18

Family

ID=43600752

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201010515864.9A Expired - Fee Related CN101980250B (en) 2010-10-15 2010-10-15 Method for identifying target based on dimension reduction local feature descriptor and hidden conditional random field

Country Status (1)

Country Link
CN (1) CN101980250B (en)

Families Citing this family (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10395125B2 (en) * 2016-10-06 2019-08-27 Smr Patents S.A.R.L. Object detection and classification with fourier fans
CN102364497B (en) * 2011-05-06 2013-06-05 北京师范大学 Image semantic extraction method applied in electronic guidance system
CN102194108B (en) * 2011-05-13 2013-01-23 华南理工大学 Smile face expression recognition method based on clustering linear discriminant analysis of feature selection
US8755605B2 (en) 2011-07-11 2014-06-17 Futurewei Technologies, Inc. System and method for compact descriptor for visual search
KR101611778B1 (en) 2011-11-18 2016-04-15 닛본 덴끼 가부시끼가이샤 Local feature descriptor extracting apparatus, method for extracting local feature descriptor, and computer-readable recording medium recording a program
CN102496003B (en) * 2011-11-21 2014-03-12 中国科学院自动化研究所 Target locating method based on identification block
CN102682091A (en) * 2012-04-25 2012-09-19 腾讯科技(深圳)有限公司 Cloud-service-based visual search method and cloud-service-based visual search system
CN102819844A (en) * 2012-08-22 2012-12-12 上海海事大学 Laser particle image registering method for estimating relative motion of mobile robot
WO2014166376A1 (en) * 2013-04-12 2014-10-16 北京大学 Method for acquiring compact global feature descriptor of image and image retrieval method
CN104112151B (en) * 2013-04-18 2018-11-27 航天信息股份有限公司 The verification method and device of card image
CN103218822B (en) * 2013-05-06 2016-02-17 河南理工大学 Based on the image characteristic point automatic testing method of disappearance importance
CN103577804B (en) * 2013-10-21 2017-01-04 中国计量学院 Based on SIFT stream and crowd's Deviant Behavior recognition methods of hidden conditional random fields
CN103632149A (en) * 2013-12-17 2014-03-12 上海电机学院 Face recognition method based on image feature analysis
CN104615613B (en) * 2014-04-30 2018-04-17 北京大学 The polymerization of global characteristics description
CN104517128A (en) * 2015-01-20 2015-04-15 厦门水贝自动化科技有限公司 Infrared monitoring method and device for crab shelling
CN104777802A (en) * 2015-01-20 2015-07-15 厦门水贝自动化科技有限公司 Soft-shell crab intensive-breeding and monitoring system
CN105095862B (en) * 2015-07-10 2018-05-29 南开大学 A kind of human motion recognition method based on depth convolution condition random field
CN106203384B (en) * 2016-07-19 2020-01-31 天津大学 multi-resolution cell division recognition method
CN106407982B (en) * 2016-09-23 2019-05-14 厦门中控智慧信息技术有限公司 A kind of data processing method and equipment
CN106644162B (en) * 2016-10-12 2020-04-21 云南大学 Ring main unit wire core temperature soft measurement method based on neighborhood preserving embedding regression algorithm
CN111369535B (en) * 2020-03-05 2023-04-07 笑纳科技(苏州)有限公司 Cell detection method
CN113542525B (en) * 2021-06-30 2023-02-10 中国人民解放军战略支援部队信息工程大学 Steganography detection feature selection method based on MMD residual error

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101556647A (en) * 2009-05-20 2009-10-14 哈尔滨理工大学 mobile robot visual orientation method based on improved SIFT algorithm
CN101782969A (en) * 2010-02-26 2010-07-21 浙江大学 Reliable image characteristic matching method based on physical positioning information

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7430315B2 (en) * 2004-02-13 2008-09-30 Honda Motor Co. Face recognition system

Also Published As

Publication number Publication date
CN101980250A (en) 2011-02-23

Similar Documents

Publication Publication Date Title
CN101980250B (en) Method for identifying target based on dimension reduction local feature descriptor and hidden conditional random field
Li et al. Automatic pavement crack detection by multi-scale image fusion
CN103077512B (en) Based on the feature extracting and matching method of the digital picture that major component is analysed
CN109697692B (en) Feature matching method based on local structure similarity
CN108492298B (en) Multispectral image change detection method based on generation countermeasure network
CN109165540B (en) Pedestrian searching method and device based on prior candidate box selection strategy
CN112085772B (en) Remote sensing image registration method and device
Dong et al. Local descriptor learning for change detection in synthetic aperture radar images via convolutional neural networks
Niu et al. Fast and effective Keypoint-based image copy-move forgery detection using complex-valued moment invariants
CN107424161B (en) Coarse-to-fine indoor scene image layout estimation method
Yue et al. Robust loop closure detection based on bag of superpoints and graph verification
CN103927511A (en) Image identification method based on difference feature description
Yuan et al. Learning to count buildings in diverse aerial scenes
CN104881671A (en) High resolution remote sensing image local feature extraction method based on 2D-Gabor
CN104680158A (en) Face recognition method based on multi-scale block partial multi-valued mode
CN108171119B (en) SAR image change detection method based on residual error network
Liu et al. Regularization based iterative point match weighting for accurate rigid transformation estimation
CN112488128A (en) Bezier curve-based detection method for any distorted image line segment
Ouyang et al. Fingerprint pose estimation based on faster R-CNN
CN114511012A (en) SAR image and optical image matching method based on feature matching and position matching
CN113128518B (en) Sift mismatch detection method based on twin convolution network and feature mixing
CN105654042B (en) The proving temperature character identifying method of glass-stem thermometer
CN114332172A (en) Improved laser point cloud registration method based on covariance matrix
Srivastava et al. Drought stress classification using 3D plant models
Xu et al. Object detection using principal contour fragments

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C53 Correction of patent for invention or patent application
CB03 Change of inventor or designer information

Inventor after: Li Chao, Chi Yitao, Guo Xinyi, Xiong Zhang
Inventor before: Chi Yitao, Li Chao, Guo Xinyi, Xiong Zhang

COR Change of bibliographic data

Free format text: CORRECT: INVENTOR; FROM: CHI YITAO LI CHAO GUO XINYI XIONG ZHANG TO: LI CHAO CHI YITAO GUO XINYI XIONG ZHANG

C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20140618

Termination date: 20161015