CN107219510A - Radar target identification method based on unlimited largest interval linear discriminant projection model - Google Patents
- Publication number
- CN107219510A CN201710351886.8A CN201710351886A CN107219510A
- Authority
- CN
- China
- Prior art keywords
- represent
- distribution
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/02—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
- G01S7/41—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/13—Satellite images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
Abstract
The present invention proposes a radar target recognition method based on an infinite max-margin linear discriminant projection model, to solve the technical problem of the low recognition rate of existing radar target recognition methods. The steps are: obtain the power-spectrum-feature training sample set X and its corresponding class label set y; build the infinite max-margin linear discriminant projection model; define the expression of the model's joint pseudo-posterior distribution; set the initial values of the parameters in that expression; write out the conditional posterior distribution of each parameter in the joint pseudo-posterior; sample from the conditional posteriors so obtained; obtain the power-spectrum-feature test sample set and its corresponding test label set; obtain the recognition rate of the radar targets. The invention can be used for detection and recognition of radar high-resolution range profiles.
Description
Technical field
The invention belongs to the field of radar technology and relates to a radar target recognition method, in particular to a radar target recognition method based on an infinite max-margin linear discriminant projection model, usable for detection and recognition of radar high-resolution range profiles.
Background art
Radar target recognition is a technique by which a radar identifies the targets it has found within its search volume; its principle is to use the target's radar echo signal to decide the target type. Wideband radar usually operates in the optical region, where a radar target can be regarded as composed of a large number of scattering points of different intensities. A high-resolution range profile (HRRP) is the vector of the echo signals of the scattering points on the target body obtained with a wideband radar signal. It reflects the distribution of the scattering points along the radar line of sight and contains important structural information about the target, such as target size and scattering-point structure. At the end of the 20th century, some scholars therefore proposed completing automatic radar target recognition with HRRPs; see [S.P.Jacobs. Automatic target recognition using high-resolution radar range profiles. PhD Dissertation, Washington Univ., St.Louis, MO, 1999]. Since the beginning of the 21st century, this technique has attracted wide attention in the radar community.
In the high-frequency region, the scattering-center model describes the electromagnetic scattering characteristics of a target. According to this model, when the target attitude changes greatly, migration through resolution cells (MTRC) occurs: some scattering centers move from one range cell to another, causing drastic variation of the target HRRP; when the attitude change is small, no MTRC occurs. HRRPs are therefore sensitive to target attitude, and the HRRPs of one target have a multimodal distribution character, which often makes the data nonlinearly separable in HRRP space and degrades the effectiveness and applicability of conventional target recognition methods. Automatic HRRP recognition methods fall into three classes: methods based on template matching, methods based on compressed sensing, and methods based on classifier models. Template-matching methods build a series of reference images (templates) from labeled training images and assign a test image to the class of the closest template; since the number and dimension of the stored templates are generally large, these methods have high complexity. Compressed-sensing methods build an overcomplete dictionary, compute the sparse representation of a test sample over the training samples, reconstruct the image, and classify by a minimum-reconstruction-error criterion; however, their recognition rate can drop substantially under the influence of noise or occlusion in the HRRP. Classifier-model methods first extract features from the target region to obtain training and test sample sets, build a classifier from the labeled training samples and estimate its parameters, and then feed the unlabeled test samples into the classifier to compute class labels, completing target recognition; target recognition methods based on classifier models have good robustness and a higher recognition rate.
In existing machine learning theory, principal component analysis (PCA), independent component analysis (ICA), and factor analysis (FA) are all unsupervised feature extraction methods; the features they extract are not necessarily suited to the downstream classification task. To make full use of the class information, many supervised feature extraction methods have been devised; a representative one is linear discriminant analysis (LDA), which maximizes the between-class distance of the projected samples while minimizing the within-class distance, effectively improving the separability of the data. However, LDA depends heavily on the distribution of the data, which limits its application. For this problem, many scholars have proposed solutions; a representative one is subclass discriminant analysis (SDA), which divides the original data into subclasses and modifies the within-class and between-class covariance matrices accordingly, broadening the scope of application of LDA. Although such methods achieve good results on some data sets, they are two-stage feature extraction methods: they are not tightly coupled to the downstream classification task and may cause a loss of classification performance. To avoid this loss, the article [B.Chen, H.Zhang, X.Zhang et al., Max-margin discriminant projection via data augmentation [J]. IEEE Transactions on Knowledge and Data Engineering, 2015, 27(7):1964-1976] proposed the MMLDP model, which couples a Bayesian SVM to the projection: the projected features serve as latent-variable inputs to the SVM classifier, and the projection space and the SVM classifier are learned jointly under a Bayesian framework, effectively avoiding the performance loss of stage-wise learning and improving the recognition rate.
Whether learned stage-wise or jointly, the above methods share one problem: they are global feature extraction methods, expecting a single global projection matrix to make the samples separable in the projected space. For nonlinearly separable data, and especially data with a multimodal distribution structure, such global projection methods ignore the implicit structure inside the data, may cause a loss of classification performance, and yield a low recognition rate.
Content of the invention
The object of the invention is to overcome the shortcomings of the prior art by proposing a radar target recognition method based on an infinite max-margin linear discriminant projection model, solving the technical problem of the low recognition rate of existing radar target recognition methods.
The technical idea of the invention is: for multimodally distributed data, introduce the idea of the mixture-of-experts model into the MMLDP model, yielding an infinite max-margin linear discriminant projection model (infinite max-margin linear discriminant projection, iMMLDP). Specifically, iMMLDP divides the data set into an 'infinite' number of subsets with a Dirichlet process (DP) mixture model, learns a local MMLDP classifier in each subset, and combines the local classifiers to realize a globally nonlinear projection and classification. In addition, iMMLDP unifies the partition of the data set, the learning of the projection subspaces, and the learning of the classifiers under one Bayesian framework, letting the classification error guide the partition of the data set and the learning of the projection subspaces, which better guarantees linear separability in each local region.
According to this technical idea, the technical scheme adopted to realize the object of the invention comprises the following steps:
(1) Obtain the power-spectrum-feature training sample set X and its corresponding class label set y:
(1a) The radar receives N high-resolution range profiles of targets from C classes, giving N HRRPs;
(1b) Extract the power-spectrum feature of each of the N HRRPs to obtain the training sample set X={x_1,x_2,…,x_n,…,x_N}, and represent the corresponding class labels by the label set y={y_1,y_2,…,y_n,…,y_N}, where N is the total number of training samples, x_n is the n-th sample, and y_n∈{1,2,…,C} is the class label of the n-th sample;
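Step (1b) can be sketched as follows; this is a minimal illustration in Python/NumPy, and the function and variable names are this sketch's, not the patent's. The 2-norm normalization and the shift insensitivity of the power spectrum are stated later in the measured-data experiment.

```python
import numpy as np

def power_spectrum_feature(hrrp):
    """Shift-insensitive power-spectrum feature of one HRRP.

    The 2-norm normalization removes amplitude sensitivity; taking the
    squared magnitude of the FFT discards the phase, which removes
    sensitivity to (circular) range shifts of the profile.
    """
    x = np.asarray(hrrp, dtype=float)
    x = x / np.linalg.norm(x)          # norm-2 amplitude normalization
    return np.abs(np.fft.fft(x)) ** 2  # power spectrum

# training set X: one power-spectrum feature per column, labels y in {1,...,C}
profiles = np.random.rand(5, 256)      # 5 synthetic profiles of 256 range cells
X = np.stack([power_spectrum_feature(p) for p in profiles], axis=1)
```

Because the phase is discarded, circularly shifting a profile leaves its feature unchanged, which is the translation insensitivity the method relies on.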
(2) Build the infinite max-margin linear discriminant projection model: unify the Dirichlet process (DP) mixture model, the projection model, and the latent-variable SVM classifier under a Bayesian framework to obtain the infinite max-margin linear discriminant projection model;
(3) Define the joint pseudo-posterior distribution p(ω_c, A_c, Z, {μ_c, Σ_c}_{c=1}^D, h, υ, …) of the infinite max-margin linear discriminant projection model:
Here ω_c represents the class-c classifier parameter, distributed as N(0, β_c^{-1}I), with classifier hyperparameter β_c ~ Ga(a_0, b_0), c ∈ {1,2,…,D}, D being the initial total number of clusters; A_c = [a_{c1}, a_{c2}, …, a_{ck}, …, a_{cK}] represents the projection matrix of the cluster-c samples, whose k-th column a_{ck} has a zero-mean Gaussian prior with precision κ_k, κ_k ~ Ga(c_0, d_0), K being the total number of columns of A_c; Z = [z_1, z_2, …, z_n, …, z_N] represents the feature sample set obtained by projecting the sample set X, and is also the latent-variable input of the SVM classifier; z_n, the feature obtained by projecting the n-th sample x_n, has initial distribution N(A^T x_n, I); {μ_c, Σ_c} represents the distribution parameters of the cluster-c samples, μ_c their mean and Σ_c their covariance matrix, with Normal-Wishart initial distribution {μ_c, Σ_c} ~ NW({μ_c, Σ_c} | μ_0, W_0, ν_0, β_0), where μ_0, W_0, ν_0, β_0 are the initial distribution parameters; h = [h_1, h_2, …, h_n, …, h_N] represents the set of cluster indicator variables, h_n ∈ {1,2,…,D} indicating which cluster the training sample x_n belongs to; υ = [υ_1, υ_2, …, υ_c, …] represents the parameters of the stick-breaking construction of the DP mixture model, with υ distributed as Beta(1, α) and α distributed as Ga(e_0, f_0); X represents the training samples and y their class labels; N(·) denotes the Gaussian distribution, Ga(·) the Gamma distribution, NW(·) the Normal-Wishart distribution, and (·)^T the transpose;
(4) Set the initial values of the parameters in the expression of the joint pseudo-posterior distribution:
Set the initial value of the class-c classifier parameter ω_c to a (K+1)-dimensional random vector drawn from N(0,1); the initial value of the classifier precision β of ω_c to a random draw from Ga(10^3, 1); the initial value of the cluster-c projection matrix A_c to the eigenvectors corresponding to the K largest eigenvalues of the covariance matrix of the training set X; the initial value of the feature sample set Z to a K×N random matrix drawn from N(0,1); the initial value of the cluster-c distribution parameters {μ_c, Σ_c} to a draw from the Normal-Wishart distribution NW({μ_c, Σ_c} | μ_0, W_0, ν_0, β_0), with μ_0 = 0, W_0 = 10^{-5}I, ν_0 = K+1, β_0 = 10^{-3}; the initial distribution of the DP mixture parameter υ to Beta(1,1); and the initial value of the clustering parameter α to a random draw from Ga(1, 10^{-10});
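The initialization in step (4) can be sketched as below; this is a sketch under the stated settings, with illustrative names, and the Gamma convention (shape-rate, as usual in Bayesian models) is an assumption noted in the comments.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_projection(X, K):
    """PCA initialization of a cluster projection matrix A_c: the
    eigenvectors of the K largest eigenvalues of the training covariance,
    as step (4) prescribes."""
    cov = np.cov(X)                              # X is d x N, one sample per column
    vals, vecs = np.linalg.eigh(cov)             # eigenvalues in ascending order
    return vecs[:, np.argsort(vals)[::-1][:K]]   # d x K, top-K eigenvectors

d, N, K = 8, 50, 3
X = rng.standard_normal((d, N))
A_c = init_projection(X, K)
Z = rng.standard_normal((K, N))            # feature samples Z, N(0,1) init
beta_c = rng.gamma(shape=1e3, scale=1.0)   # Ga(10^3, 1) classifier precision
alpha = rng.gamma(shape=1.0, scale=1e10)   # Ga(1, 10^-10): rate 10^-10, i.e. scale 10^10
```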
(5) Write out the conditional posterior distribution of each parameter in the joint pseudo-posterior of the infinite max-margin linear discriminant projection model: the conditional posterior p(ω_c | −) of the class-c classifier parameter ω_c; the conditional posterior p(a_{c,k} | −) of the k-th column a_{c,k} of the cluster-c projection matrix A_c; the conditional posterior p(z_{k,n} | −) of the k-th entry z_{k,n} of the n-th feature sample in Z; the conditional posterior p({μ_c, Σ_c} | −) of the cluster-c distribution parameters; the conditional posterior p(h | −) of the cluster indicator variables h; and the conditional posterior p(υ | −) of the DP mixture parameter υ;
(6) Sample the conditional posteriors written out in step (5): following the Gibbs sampling method, cycle through the conditional posteriors of the joint pseudo-posterior for I_0 iterations, where I_0 is a natural number; from iteration I_0 + 1 onward, every S_P iterations save the sampled parameters ω_c, A_c and the distribution parameters {μ_c, Σ_c} of each projection subspace, until T_0 parameter samples have been saved;
(7) Obtain the power-spectrum-feature test sample set X̃ and its corresponding test label set ỹ:
(7a) The radar receives Ñ high-resolution range profiles of targets from the C classes, giving Ñ HRRPs;
(7b) Extract the power-spectrum feature of each of the Ñ HRRPs to obtain the test sample set X̃ = {x̃_1, …, x̃_Ñ}, and represent the corresponding class labels by the test label set ỹ = {ỹ_1, …, ỹ_Ñ}, where Ñ is the total number of test samples, x̃_n is the n-th test sample, and ỹ_n is its class label;
(8) Obtain the recognition rate of the radar targets:
(8a) Cluster the test sample set X̃ with the cluster indicator variable h_n, and project each cluster through the cluster-c projection matrix A_c into the corresponding projection subspace of the infinite max-margin linear discriminant projection model, obtaining the test feature sample set Z̃, which contains multiple subsets;
(8b) Classify the test feature sample set Z̃ with the saved samples of the class-c classifier parameters ω_c, obtaining the predicted label set of the test sample set X̃;
(8c) Compare the test label set ỹ with the predicted label set of X̃ to obtain the recognition rate of the radar targets.
Compared with the prior art, the invention has the following advantages:
1) The invention divides the radar HRRP data set into an 'infinite' number of subsets through the DP process and learns one MMLDP model inside each subset; because clustering and classification are learned jointly, the linear structure hidden in the data can be mined effectively. Compared with existing radar target recognition techniques, the recognition rate is improved, and the effect is especially evident when the targets have a multimodal distribution character.
2) The invention unifies the partition of the data set, the projection subspaces, and the SVM classifiers under one Bayesian framework and learns them jointly, avoiding the classification performance loss brought by two-stage methods. Meanwhile, since each subset learns only a simple projection model and classifier, the model is less complex than the prior art's single projection model and classifier trained on the whole data set.
Brief description of the drawings
Fig. 1 is the implementation flowchart of the invention, representing the flow of the whole experiment.
Fig. 2 shows the posterior number of clusters in the measured-data experiment.
Embodiment (explanation and illustration of the technical scheme)
The invention is described in detail below with reference to the drawings and a specific embodiment:
With reference to Fig. 1, the radar target recognition method based on the infinite max-margin linear discriminant projection model comprises the following steps:
Step (1): obtain the power-spectrum-feature training sample set X and its corresponding class label set y:
(1a) The radar receives 600 HRRPs of targets from 3 classes, giving 600 HRRPs.
(1b) Extract the power-spectrum feature of each of the 600 HRRPs to obtain the training sample set X = {x_1, x_2, …, x_n, …, x_N}, and represent the corresponding class labels by the label set y = {y_1, y_2, …, y_n, …, y_N}, where x_n is the n-th sample and y_n ∈ {1,2,3} its class label.
Step (2): build the infinite max-margin linear discriminant projection model, as follows:
(2a) Write the DP mixture model in its stick-breaking construction:
υ_c | α ~ Beta(1, α), θ_c | G_0 ~ G_0, c = 1, 2, …, ∞
h_n | π(υ) ~ Mult(π(υ)), x_n | h_n = c, θ_c ~ p(x | θ_c), n = 1, …, N
where υ_c is the stick-breaking proportion, π_c(υ) = υ_c ∏_{j<c}(1 − υ_j) is the length broken off the stick at step c, and α is the prior concentration parameter of υ_c, α ~ Ga(e_0, f_0); θ_c is the distribution parameter of x_n, distributed as the base distribution G_0, here taken to be Normal-Wishart, i.e. G_0 ~ NW({μ_c, Σ_c} | μ_0, W_0, ν_0, β_0); h_n is the indicator of x_n: h_n = c means x_n belongs to cluster c, i.e. x_n ~ p(x | θ_c).
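The stick-breaking construction of step (2a), truncated at a finite level, can be sketched as follows; this is a minimal illustration, and the truncation level D and function names are this sketch's assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def stick_breaking_weights(alpha, D):
    """Truncated stick-breaking construction of the DP mixing weights:
    v_c ~ Beta(1, alpha); pi_c = v_c * prod_{j<c} (1 - v_j)."""
    v = rng.beta(1.0, alpha, size=D)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))  # stick left before break c
    return v * remaining

D, N = 10, 100
pi = stick_breaking_weights(alpha=1.0, D=D)
# cluster indicators h_n ~ Mult(pi), renormalized over the truncation level
h = rng.choice(D, size=N, p=pi / pi.sum())
```

A larger concentration α spreads the mass over more sticks, which is how the model keeps the number of effective clusters open-ended.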
(2b) Write the projection model inside each cluster:
The projection model of the MMLDP model can be expressed as z_n ~ N(A^T x_n, I), with each column a_k given a zero-mean Gaussian prior of precision κ_k, κ_k ~ Ga(c_0, d_0), where A = [a_1, a_2, …, a_K] is the global projection matrix, a_k its k-th column, x_n the n-th sample, κ_k the parameter of the prior of a_k, and c_0 and d_0 the hyperparameters of κ_k.
The projection model inside each cluster can then be expressed as:
z_n | h_n = c, A_c ~ N(A_c^T x_n, I_p), A_c = [a_{c1}, a_{c2}, …, a_{ck}, …, a_{cK}]
that is, when the cluster indicator h_n equals c, z_n follows a normal distribution with mean A_c^T x_n and covariance matrix I_p, where p denotes the dimension of the identity matrix I.
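The per-cluster generative step z_n | h_n = c ~ N(A_c^T x_n, I_p) amounts to a linear projection plus unit-variance Gaussian noise; a one-line sketch, with illustrative dimensions:

```python
import numpy as np

rng = np.random.default_rng(2)
d, K = 8, 3                          # input dimension d, subspace dimension K
A_c = rng.standard_normal((d, K))    # projection matrix of cluster c
x_n = rng.standard_normal(d)

mean = A_c.T @ x_n                   # mean of z_n given h_n = c
z_n = mean + rng.standard_normal(K)  # z_n | h_n = c ~ N(A_c^T x_n, I_p)
```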
(2c) Write the latent-variable SVM classifier model inside each cluster:
Let the parameter of the c-th latent-variable SVM classifier obey the prior ω_c ~ N(0, β_c^{-1}I). Substituting the prior of each latent-variable SVM classifier into the corresponding classifier gives the latent-variable classifier model inside each cluster, which can be expressed through the pseudo-likelihood φ(y_n, λ_n | ω_c, z_n),
where λ = [λ_1, λ_2, …, λ_c, …, λ_C] are the latent variables of the latent-variable SVM classifiers and λ_c the latent variable of the c-th latent-variable SVM classifier.
(2d) Unify the DP mixture model, the projection model, and the latent-variable SVM classifier under the Bayesian framework to obtain the infinite max-margin linear discriminant projection model, whose hierarchical structure is:
h_n | π(υ) ~ Disc(π(υ))
x_n | h_n = c, {μ_c, Σ_c} ~ p(x_n | {μ_c, Σ_c}), n = 1, …, N
y_n, {λ_n}_c | {z_n, h_n = c}, ω_c ~ φ(y_n, λ_n | ω_c, z_n), c = 1, …, ∞
{μ_c, Σ_c} ~ NW(μ_0, W_0, ν_0, β_0)
From this hierarchical expression, the likelihood distribution function of the n-th sample x_n is obtained.
Step (3): define the joint pseudo-posterior distribution of the infinite max-margin linear discriminant projection model, in the concrete form given in step (3) above.
Step (4): set the initial values of the parameters in the expression of the joint pseudo-posterior distribution:
Set the initial value of the class-c classifier parameter ω_c to a (K+1)-dimensional random vector drawn from N(0,1); the initial value of the classifier precision β of ω_c to a random draw from Ga(10^3, 1); the initial value of the cluster-c projection matrix A_c to the eigenvectors corresponding to the K largest eigenvalues of the covariance matrix of the training set X; the initial value of the feature sample set Z to a K×N random matrix drawn from N(0,1); the initial value of the cluster-c distribution parameters {μ_c, Σ_c} to a draw from the Normal-Wishart distribution NW({μ_c, Σ_c} | μ_0, W_0, ν_0, β_0), with μ_0 = 0, W_0 = 10^{-5}I, ν_0 = K+1, β_0 = 10^{-3}; the initial distribution of the DP mixture parameter υ to Beta(1,1); and the initial value of the clustering parameter α to a random draw from Ga(1, 10^{-10}).
Step (5): write out the conditional posterior distribution of each parameter in the joint pseudo-posterior of the infinite max-margin linear discriminant projection model, as follows:
(5a) The conditional posterior of the class-c classifier parameter ω_c, where β_c is the hyperparameter of the class-c classifier.
(5b) The conditional posterior of the k-th column a_{c,k} of the cluster-c projection matrix A_c, where x_n here refers to the cluster-c samples and κ_{c,k} is the parameter inside the prior of a_{c,k}, κ_{ck} ~ Ga(c_0, d_0).
(5c) The conditional posterior of the k-th entry z_{k,n} of the n-th feature sample in Z, whose mean and covariance matrix are those of the corresponding conditional posterior.
(5d) The conditional posterior of the cluster-c distribution parameters {μ_c, Σ_c}, with updated parameters ν'_c = ν_0 + N_c and β'_c = β_0 + N_c, where N_c is the number of cluster-c samples, with the mean of the cluster-c samples and the mean of their covariance matrix entering the update.
(5e) The conditional posterior of the cluster indicator variables: p(h | −) = Disc(h; π), where the weights π_c are subject to the normalization constraint.
(5f) The conditional posterior of υ: p(υ_c | −) ∝ p(h | υ) Beta(υ_c; 1, α) ~ Beta(υ_c; ρ_a, ρ_b), with ρ_a = 1 + N_c, where N_k denotes the number of cluster-k samples.
(5g) The conditional posterior of the latent variable λ_n of the latent-variable SVM classifier, where IG(·) denotes the inverse Gaussian distribution.
(5h) The conditional posterior of β_c.
(5i) The conditional posterior of κ_{c,k}.
(5j) The conditional posterior of α, with e'_0 = e_0 + D − 1.
Step (6): sample the conditional posteriors written out in step (5): following the Gibbs sampling method, cycle through the conditional posteriors of the joint pseudo-posterior for 400 iterations; from iteration 401 onward, every 10 iterations save the parameters ω_c, A_c and the distribution parameters {μ_c, Σ_c} of each projection subspace, until 10 parameter samples have been saved.
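The sampling schedule of step (6) — 400 burn-in sweeps, then keeping every 10th state until 10 samples are stored — can be sketched as a generic loop; the sweep itself is only a stand-in here, since the actual conditional updates are those of step (5).

```python
import numpy as np

rng = np.random.default_rng(3)

def gibbs_run(sample_step, burn_in=400, thin=10, n_keep=10):
    """Sampling schedule of step (6): discard the first `burn_in` sweeps,
    then store every `thin`-th state until `n_keep` posterior samples are
    kept. `sample_step` stands in for one full sweep over all the
    conditional posteriors (omega_c, A_c, Z, {mu_c, Sigma_c}, h, v, ...)."""
    saved, state = [], None
    for it in range(1, burn_in + thin * n_keep + 1):
        state = sample_step(state)
        if it > burn_in and (it - burn_in) % thin == 0:
            saved.append(state)
    return saved

# toy stand-in sweep: draws one scalar "parameter" per iteration
samples = gibbs_run(lambda state: rng.standard_normal())
```

Thinning the chain this way reduces the autocorrelation between the saved parameter samples that step (8b) later averages over.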
Step (7): obtain the power-spectrum-feature test sample set X̃ and its corresponding test label set ỹ:
(7a) The radar receives 2400 HRRPs of targets from the 3 classes, giving 2400 HRRPs.
(7b) Extract the power-spectrum feature of each of the 2400 HRRPs to obtain the test sample set X̃, and represent the corresponding class labels by the test label set ỹ, where x̃_n is the n-th test sample and ỹ_n its class label.
Step (8): obtain the recognition rate of the radar targets:
(8a) Cluster the test sample set X̃ with the cluster indicator variable h_n, and project each cluster through the cluster-c projection matrix A_c into the corresponding projection subspace of the infinite max-margin linear discriminant projection model, obtaining the test feature sample set Z̃, which contains multiple subsets; the cluster indicator h_n determines the size and number of the subsets of Z̃, and is drawn as
p(h_n | −) = Disc(h_n; π)
(8b) Classify the test feature sample set Z̃ with the saved samples of the class-c classifier parameters ω_c, obtaining the predicted label set of the test sample set X̃, as follows:
ρ = [ρ_1, ρ_2, …, ρ_c, …, ρ_D]
where ω_c^{(t)} denotes the weight vector of the latent-variable SVM classifier in the c-th cluster at the t-th saved sample, m = 1, 2, 3, t = 1, 2, …, 10; ρ_c denotes the average output of the c-th latent-variable SVM classifier, and the predicted label is the value of m maximizing the average output.
(8c) Compare the test label set ỹ with the predicted label set of X̃ to obtain the recognition rate of the radar targets.
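The averaging and argmax of step (8b) can be sketched as below; the array shapes and names are this sketch's assumptions, not the patent's notation.

```python
import numpy as np

rng = np.random.default_rng(4)

def predict(z, omegas):
    """Average the saved classifier draws and take the class with the
    largest mean score, as in step (8b). `omegas` has shape (T, C, K+1):
    T saved Gibbs samples of C class weight vectors over a K-dimensional
    feature plus a bias term."""
    z1 = np.concatenate(([1.0], z))   # prepend the bias term
    scores = omegas @ z1              # (T, C) classifier outputs per saved sample
    rho = scores.mean(axis=0)         # rho_c: average output of classifier c
    return int(np.argmax(rho)) + 1    # class labels run 1..C

T, C, K = 10, 3, 5
omegas = rng.standard_normal((T, C, K + 1))
label = predict(rng.standard_normal(K), omegas)
```

Averaging over the 10 saved Gibbs samples rather than using a single draw approximates the posterior mean of the classifier output, which is what makes the Bayesian prediction robust.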
The technical effect of the invention is elaborated below through a simulation experiment and a measured-data experiment.
1. Simulation experiment
(1) Experimental conditions
The experiment uses benchmark data sets from the UCI Machine Learning Repository; three relatively complex data sets, Heart, Splice, and Twonorm, were chosen.
(2) Experimental content and analysis of results
Using the raw data as model input, four methods — LDA+SVM, MMLDP, k-means+MMLDP, and iMMLDP — are used for classification and recognition. Each method is repeated 10 times; each time, 50% of the data set is randomly selected as training samples and the rest used as test samples. The average recognition performance over the 10 runs under different latent-space dimensions is taken as the final recognition rate, and the best recognition rate of each method over the latent-space dimensions is listed in Table 1.
Table 1. Recognition rates of the different methods on the different data sets.
Table 1 shows that the classification performance of iMMLDP on the different data sets is better than that of the other methods. That iMMLDP outperforms MMLDP shows that first clustering the data and learning one projection model per cluster is better than a global projection model. That iMMLDP outperforms k-means+MMLDP shows that jointly learning the clustering, the classifiers, and the projections reveals the implicit structure of the data and thus yields a clear improvement in recognition performance.
2. Measured-data experiment
(1) Experimental conditions
The invention processes measured HRRP data of three classes of aircraft targets from a C-band radar. The amplitude sensitivity of the HRRPs is removed by 2-norm normalization, and the power-spectrum feature is extracted to remove the shift sensitivity; the sample data set thus consists of HRRP power-spectrum features. In the experiment, the truncation level of the stick-breaking construction is set to 10.
(2) Experimental content and analysis of results
600 power-spectrum features are chosen as training samples and 2400 as test samples. With a 3-dimensional latent space, the recognition results are as shown in Table 2.
Table 2. Clustering and performance analysis of iMMLDP on the radar HRRPs.
The posterior number of clusters obtained by iMMLDP is shown in Fig. 2; it can be seen that the effective number of clusters is three. Table 2 shows that good classification accuracy is obtained inside each cluster, which benefits from learning the cluster subspaces and the classifiers jointly: good classification performance is guaranteed in each cluster, the global classification performance is thereby lifted, and the recognition rate of the radar targets is improved.
Claims (5)
1. A radar target recognition method based on an unlimited largest interval linear discriminant projection model, comprising the following steps:
(1) Obtain the power spectrum feature training sample set X and its corresponding class label set y:
(1a) The radar receives N high-resolution range profiles of targets belonging to C classes, obtaining N high-resolution range profiles;
(1b) Extract the power spectrum feature of each of the N high-resolution range profiles, obtaining the power spectrum feature training sample set X={x1,x2,…,xn,…,xN}, and represent the class labels corresponding to X by the class label set y: y={y1,y2,…,yn,…,yN}, where N denotes the total number of training samples, xn denotes the n-th sample, and yn denotes the class label of the n-th sample, yn∈{1,2,…,C};
(2) Build the unlimited largest interval linear discriminant projection model: unify the Dirichlet process (DP) mixture model, the projection model, and the latent-variable SVM classifier under a Bayesian model framework, obtaining the unlimited largest interval linear discriminant projection model;
(3) Define the joint pseudo posterior distribution of the unlimited largest interval linear discriminant projection model, whose expression is:
$$p\left(\omega_c, A_c, Z, \{\mu_c, \Sigma_c\}_{c=1}^{D}, h, \upsilon \mid X, y\right)$$
where ωc denotes the classifier parameters of the c-th class, obeying the distribution ωc~N(ωc|0,βc⁻¹I), with classifier hyperparameter βc~Ga(a0,b0), c∈{1,2,…,D}, and D denotes the initial total number of clusters; Ac=[ac1,ac2,…,ack,…,acK] denotes the projection matrix of the c-th class samples, c∈{1,2,…,D}, where ack denotes the k-th column of Ac, ack~N(0,κk⁻¹I), κk~Ga(c0,d0), and K denotes the total number of columns of Ac; Z=[z1,z2,…,zn,…,zN] denotes the feature sample set obtained by projecting the sample set X, which is also the input of the latent-variable SVM classifier, where zn denotes the feature sample obtained by projecting the n-th sample xn, and the initial distribution of zn is N(AᵀX n, I); {μc,Σc} denotes the distribution parameters of the c-th class samples, with μc the mean and Σc the covariance matrix of the c-th class; the initial distribution of the cluster distribution parameters {μc,Σc} is a Normal-Wishart distribution, i.e. {μc,Σc}~NW({μc,Σc}|μ0,W0,ν0,β0), where μ0,W0,ν0,β0 are initial distribution parameters; h=[h1,h2,…,hn,…,hN] denotes the cluster indicator variable set, where hn indicates which cluster the training sample xn belongs to, hn∈{1,2,…,D}; υ=[υ1,υ2,…,υc,…,υC] denotes the parameters of the Dirichlet process (DP) mixture model constructed by stick-breaking, where υ is distributed as Beta(1,α) and α is distributed as Ga(e0,f0); X denotes the training samples and y the corresponding class labels; N(·) denotes the Gaussian distribution, Ga(·) the Gamma distribution, NW(·) the Normal-Wishart distribution, and (·)ᵀ the transpose operation;
(4) Set the initial values of the parameters in the expression of the joint pseudo posterior distribution of the unlimited largest interval linear discriminant projection model:
The initial value of the c-th class classifier parameters ωc is set to a (K+1)-dimensional random vector obeying the N(0,1) distribution; the initial value of the covariance precision βc of the c-th class classifier parameters is a random variable obeying the Ga(10³,1) distribution; the initial value of the c-th class projection matrix Ac consists of the eigenvectors corresponding to the K largest eigenvalues of the covariance matrix of the training sample set X; the initial value of the feature sample set Z is a K×N random matrix obeying the N(0,1) distribution; the initial value of the c-th class distribution parameters {μc,Σc} is a random draw from the Normal-Wishart distribution NW({μc,Σc}|μ0,W0,ν0,β0), where μ0=0, W0=10⁻⁵I, ν0=K+1, β0=10⁻³; the initial distribution of the parameter υ of the Dirichlet process DP mixture model is set to Beta(1,1), and the initial value of the concentration parameter α is a random number obeying the Ga(1,10⁻¹⁰) distribution;
(5) Write out the conditional posterior distribution of each parameter in the joint pseudo posterior distribution expression of the unlimited largest interval linear discriminant projection model: the conditional posterior distribution p(ωc|-) of the c-th class classifier parameters ωc, the conditional posterior distribution p(ac,k|-) of the k-th column ac,k of the c-th class projection matrix Ac, the conditional posterior distribution p(zk,n|-) of zk,n, the k-th row of the n-th sample of the feature sample set Z, the conditional posterior distribution p({μc,Σc}|-) of the c-th class distribution parameters {μc,Σc}, the conditional posterior distribution p(h|-) of the cluster indicator variable h, and the conditional posterior distribution p(υ|-) of the parameter υ of the Dirichlet process DP mixture model;
(6) Sample the conditional posterior distributions of the parameters written out in step (5): according to the Gibbs sampling method, cycle through the conditional posterior distributions of step (5) in turn for I0 iterations of burn-in, where I0 is a natural number; from the (I0+1)-th iteration onward, every S_P iterations save the sampled results of the parameters ωc, Ac and the distribution parameters in each projection subspace, until T0 such parameter samples have been saved;
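The burn-in and thinning scheme of step (6) can be sketched as a generic Gibbs loop; the update-function interface below is hypothetical and stands in for the conditional posteriors written out in claim 4:

```python
import numpy as np

def gibbs_sampling(update_fns, state, I0=100, SP=5, T0=20):
    """Cycle through the conditional-posterior updates, discard the first
    I0 burn-in cycles, then keep every SP-th state until T0 samples are
    stored.  `update_fns` maps parameter names to functions state -> new
    value (an illustrative interface, not the patent's exact one)."""
    saved = []
    it = 0
    while len(saved) < T0:
        it += 1
        for name, update in update_fns.items():
            state[name] = update(state)        # draw from p(param | rest)
        if it > I0 and (it - I0) % SP == 0:    # past burn-in, thinned
            saved.append({k: np.copy(v) for k, v in state.items()})
    return saved

# toy example: a single stand-in conditional for one scalar parameter
rng = np.random.default_rng(0)
updates = {"mu": lambda s: rng.normal(0.0, 1.0)}
samples = gibbs_sampling(updates, {"mu": 0.0}, I0=10, SP=2, T0=5)
```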
(7) Obtain the power spectrum feature test sample set and its corresponding test class label set:
(7a) The radar receives high-resolution range profiles of targets belonging to the C classes, obtaining the test high-resolution range profiles;
(7b) Extract the power spectrum feature of each test high-resolution range profile, obtaining the power spectrum feature test sample set, and represent the class labels corresponding to the test sample set by the test class label set, in which the total number of test samples, the n-th test sample, and its class label are denoted analogously to the training case;
(8) Obtain the recognition rate for radar targets:
(8a) Use the cluster indicator variable hn to cluster the test sample set, and project the clustering result through the c-th class projection matrix Ac into the projection subspaces of the unlimited largest interval linear discriminant projection model, obtaining the test feature sample set, which comprises multiple subsets;
(8b) Use the saved sampled results of the c-th class classifier parameters ωc to classify the test feature sample set, obtaining the class label set of the test sample set;
(8c) Compare the test class label set with the obtained class label set of the test sample set, obtaining the recognition rate for radar targets.
2. The radar target recognition method based on the unlimited largest interval linear discriminant projection model according to claim 1, characterized in that building the unlimited largest interval linear discriminant projection model described in step (2) is realized by the following steps:
(2a) Express the Dirichlet process DP mixture model constructed by stick-breaking:
υc|α~Beta(1,α), θc|G0~G0, c=1,2,…,∞
hn|π(υ)~Mult(π(υ)), xn|hn=c; θc~p(x|θc), n=1,…,N
where υc is the stick-breaking scale parameter, πc(υ)=υc∏_{j=1}^{c-1}(1−υj) denotes the length broken off at each step, and α is the prior distribution parameter of υc, α~Ga(e0,f0); θc denotes the distribution parameter of xn, and the distribution of θc is G0, where G0 denotes the base distribution, here taken as a Normal-Wishart distribution, i.e. G0~NW({μc,Σc}|μ0,W0,ν0,β0); hn is the indicator of xn: when hn=c, xn belongs to the c-th cluster, i.e. xn~p(x|θc);
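The stick-breaking construction of (2a) can be sketched as follows; the function name is illustrative, and the truncation at D components (with the last stick taking the remaining mass) is a standard device for finite computation, not specified verbatim in the claim:

```python
import numpy as np

def stick_breaking_weights(alpha, D, rng=None):
    """Truncated stick-breaking: draw v_c ~ Beta(1, alpha) and set
    pi_c = v_c * prod_{j<c} (1 - v_j)."""
    rng = rng or np.random.default_rng()
    v = rng.beta(1.0, alpha, size=D)
    v[-1] = 1.0                          # truncation: last stick takes the rest
    pieces = np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
    return v * pieces

pi = stick_breaking_weights(alpha=1.0, D=10)
```

The weights sum to one by the telescoping product, so `pi` is a valid mixing distribution over the D clusters.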
(2b) Express the projection model within each cluster:
The projection model based on the MMLDP model can be expressed as: zn~N(AᵀX n, I),
where A denotes the global projection matrix A=[a1,a2,…,ak], ak denotes the k-th column of A, xn denotes the n-th sample, κk is the prior distribution parameter of ak, and c0 and d0 are the hyperparameters of κk;
The projection model within each cluster can then be expressed as:
zn|hn=c, Ac~N(Acᵀxn, Ip), Ac=[ac1,ac2,…,ack,…,acK]
$$a_{ck} \sim N\!\left(0, \kappa_k^{-1} I\right), \qquad \kappa_k \sim Ga(c_0, d_0)$$
where, when the cluster indicator variable hn equals c, zn obeys a normal distribution with mean Acᵀxn and covariance matrix Ip, and p denotes the dimension of the identity matrix I;
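As a minimal sketch of this per-cluster projection model (function and variable names hypothetical), each sample assigned to a cluster is mapped through that cluster's matrix Ac with unit-variance Gaussian noise:

```python
import numpy as np

def project_cluster(X_c, A_c, rng=None):
    """Projection model of (2b) within one cluster: each row x_n of X_c
    is mapped to z_n ~ N(A_c^T x_n, I_p)."""
    rng = rng or np.random.default_rng()
    mean = X_c @ A_c                     # row n holds A_c^T x_n
    return mean + rng.standard_normal(mean.shape)

X_c = np.random.default_rng(0).normal(size=(5, 8))     # 5 samples, 8-dim
A_c = np.random.default_rng(1).normal(size=(8, 3))     # K = 3 projection columns
Z_c = project_cluster(X_c, A_c)
```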
(2c) Express the model of the latent-variable SVM classifier within each cluster:
Set the prior distribution of the c-th latent-variable SVM classifier parameters to ωc~N(ωc|0,βc⁻¹I). Substituting the prior distribution of each latent-variable SVM classifier into the corresponding latent-variable SVM classifier, the model of the latent-variable classifier within each cluster can be expressed as:
$$\omega_c \sim N\!\left(\omega_c \mid 0, \beta_c^{-1} I\right), \qquad \beta_c \sim Ga(a_0, b_0)$$
$$p(\omega_c, \lambda, \beta_c \mid y) \propto \prod_{n=1}^{N} \Phi(y_n, \lambda_n \mid \omega_c)\, p(\omega_c \mid \beta_c)\, p(\beta_c \mid a_0, b_0)$$
$$\propto \prod_{n=1}^{N} \lambda_n^{-1/2} \exp\!\left(-\frac{\left(\lambda_n + 1 - y_n\left(\omega_c^{T} x_n + b\right)\right)^2}{2\lambda_n}\right) N\!\left(\omega_c \mid 0, \beta_c^{-1} I\right) Ga(a_0, b_0)$$
where λ=[λ1,λ2,…,λn,…,λN] denotes the latent variables in the latent-variable SVM classifier, and λn denotes the latent variable associated with the n-th sample;
(2d) Unify the Dirichlet process DP mixture model, the projection model, and the latent-variable SVM classifier under the Bayesian model framework, obtaining the unlimited largest interval linear discriminant projection model, whose hierarchical structure is:
$$\upsilon_c \mid \alpha \sim Beta(1, \alpha); \quad \alpha \sim Ga(e_0, f_0); \qquad \pi_c(\upsilon) = \upsilon_c \prod_{j=1}^{c-1} (1 - \upsilon_j)$$
hn| π (υ)~Disc (π (υ))
xn|hn=c, { μc,Σc}~p (xn|{μc,Σc), n=1 ..., N.
$$z_n \mid h_n = c, A_c \sim N\!\left(A_c^{T} x_n, I_p\right), \qquad A_c = [a_{c,1}, \ldots, a_{c,K}]$$
yn,{λn}c|{zn,hn=c }, ωc~φ (yn,λn|ωc,zn), c=1 ..., ∞
$$\omega_c \sim N\!\left(\omega_c \mid 0, \beta_c^{-1} I\right), \qquad \beta_c \sim Ga(a_0, b_0)$$
$$a_{c,k} \sim N\!\left(0, \kappa_{c,k}^{-1} I\right), \qquad \kappa_{c,k} \sim Ga(c_0, d_0)$$
{μc,Σc}~NW (μ0,W0,ν0,β0)
From the hierarchical expression above, the likelihood distribution function of the n-th sample xn is obtained:
$$p\left(x_n \mid \{\mu_c, \Sigma_c\}, \pi\right) = \sum_{c=1}^{D} \pi_c\, N\!\left(x_n; \mu_c, \Sigma_c\right).$$
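This mixture likelihood can be evaluated directly; the sketch below (helper names hypothetical) implements the truncated sum over D components with a plain multivariate normal density:

```python
import numpy as np

def gauss_pdf(x, mu, Sigma):
    """Multivariate normal density N(x; mu, Sigma)."""
    d = len(x)
    diff = x - mu
    quad = diff @ np.linalg.solve(Sigma, diff)
    return np.exp(-0.5 * quad) / np.sqrt((2 * np.pi) ** d * np.linalg.det(Sigma))

def mixture_likelihood(x, pis, mus, Sigmas):
    """p(x | {mu_c, Sigma_c}, pi) = sum_c pi_c N(x; mu_c, Sigma_c)."""
    return sum(p * gauss_pdf(x, m, S) for p, m, S in zip(pis, mus, Sigmas))

# toy check with two equally weighted 2-D components
x = np.zeros(2)
lik = mixture_likelihood(x, [0.5, 0.5],
                         [np.zeros(2), np.ones(2)],
                         [np.eye(2), np.eye(2)])
```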
3. The radar target recognition method based on the unlimited largest interval linear discriminant projection model according to claim 1, characterized in that the joint pseudo posterior distribution of the unlimited largest interval linear discriminant projection model defined in step (3) has the concrete form:
$$p\left(\omega_c, A_c, Z, \{\mu_c, \Sigma_c\}_{c=1}^{D}, h, \upsilon \mid X, y\right)$$
$$\propto \prod_{n=1}^{N} \left[ N\!\left(x_n \mid \{\mu_c, \Sigma_c\}, h_n = c\right) \phi\!\left(y_n, \lambda_n \mid \omega_c, z_n\right) N\!\left(z_n \mid A_c^{T} x_n, I\right) p\!\left(h_n = c \mid \upsilon\right) \right]$$
$$\times \prod_{c=1}^{D} NW\!\left(\{\mu_c, \Sigma_c\} \mid \mu_0, W_0, \nu_0, \beta_0\right) \prod_{c=1}^{D} \left[ N\!\left(\omega_c \mid 0, \beta_c^{-1} I\right) Ga\!\left(\beta_c \mid a_0, b_0\right) \right]$$
$$\times \prod_{c=1}^{D} \prod_{k=1}^{K} \left[ N\!\left(a_{c,k} \mid 0, \kappa_{c,k}^{-1} I\right) Ga\!\left(\kappa_{c,k} \mid c_0, d_0\right) \right] \prod_{c=1}^{D} Beta\!\left(\upsilon_c \mid 1, \alpha\right) Ga\!\left(\alpha \mid e_0, f_0\right).$$
4. The target recognition method based on the unlimited largest interval linear discriminant projection model according to claim 1, characterized in that writing out the conditional posterior distribution of each parameter in the joint pseudo posterior distribution expression described in step (5) is realized by the following steps:
(5a) Express the conditional posterior distribution of the c-th class classifier parameters ωc, whose form is:
$$p(\omega_c \mid -) = N\!\left(\omega_c;\, \mu_{\omega_c}, \Sigma_{\omega_c}\right)$$
where βc is the hyperparameter of the c-th class classifier;
(5b) Express the conditional posterior distribution of the k-th column ac,k of the c-th class projection matrix Ac, whose form is:
$$p(a_{c,k} \mid -) = N\!\left(a_{c,k};\, \mu_{a_{c,k}}, \Sigma_{a_{c,k}}\right)$$
Wherein:
$$\mu_{a_{c,k}} = \Sigma_{a_{c,k}} \sum_{\tau_n = c} z_{k,n}\, x_n, \qquad \Sigma_{a_{c,k}} = \left( \sum_{\tau_n = c} x_n x_n^{T} + \kappa_{c,k} I \right)^{-1}$$
where xn refers to the samples of the c-th class, and κc,k denotes the parameter in the prior distribution of ac,k, κc,k~Ga(c0,d0);
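A Gibbs draw from this conditional posterior can be sketched as follows, assuming X_c stacks the cluster-c samples row-wise and z_k holds their k-th projected coordinates (names hypothetical):

```python
import numpy as np

def sample_projection_column(X_c, z_k, kappa_ck, rng=None):
    """Draw a_{c,k} from (5b):
    Sigma = (sum_n x_n x_n^T + kappa_{c,k} I)^{-1},
    mu    = Sigma @ sum_n z_{k,n} x_n."""
    rng = rng or np.random.default_rng()
    d = X_c.shape[1]
    Sigma = np.linalg.inv(X_c.T @ X_c + kappa_ck * np.eye(d))
    mu = Sigma @ (X_c.T @ z_k)
    return rng.multivariate_normal(mu, Sigma)

X_c = np.random.default_rng(1).normal(size=(50, 4))
z_k = X_c @ np.array([1.0, 0.0, 0.0, 0.0])   # toy projection targets
a_ck = sample_projection_column(X_c, z_k, kappa_ck=1.0)
```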
(5c) Express the conditional posterior distribution of zk,n, the k-th row of the n-th sample of the feature sample set Z, whose form is:
$$p(z_{k,n} \mid -) = N\!\left(z_{k,n};\, \mu_{z_{k,n}}, \Sigma_{z_{k,n}}\right)$$
Wherein:
$$\Sigma_{z_{k,n}} = \left( \frac{\omega_{c,k}^2}{\lambda_n} + 1 \right)^{-1}, \qquad \mu_{z_{k,n}} = \Sigma_{z_{k,n}} \left( a_{c,k}^{T} x_n + y_n \omega_{c,k} \left( 1 + \frac{\xi_n^{-k}}{\lambda_n} \right) \right)$$
where μ_{z_{k,n}} and Σ_{z_{k,n}} are respectively the mean and the covariance matrix of the corresponding conditional posterior distribution;
(5d) Express the conditional posterior distribution of the distribution parameters {μc,Σc} of the c-th class, whose form is:
$$p\left(\{\mu_c, \Sigma_c\} \mid -\right) \propto \prod_{\tau_n = c} N\!\left(x_n \mid \{\mu_c, \Sigma_c\}\right) NW\!\left(\{\mu_c, \Sigma_c\} \mid \mu_0, W_0, \nu_0, \beta_0\right) \sim NW\!\left(\mu_c', W_c', \upsilon_c', \beta_c'\right)$$
Wherein:
$$\mu_c' = \left( N_c \bar{x}_c + \beta_0 \mu_0 \right) / \beta_c'$$
$$W_c' = \left( W_0^{-1} + N_c \bar{\Sigma}_c + N_c \beta_0 \left( \mu_0 - \bar{x}_c \right)\left( \mu_0 - \bar{x}_c \right)^{T} / \beta_c' \right)^{-1}$$
$$\upsilon_c' = \upsilon_0 + N_c$$
$$\beta_c' = \beta_0 + N_c$$
where Nc denotes the number of samples in the c-th class, x̄c denotes the mean of the c-th class samples, and Σ̄c denotes the average covariance matrix of the c-th class samples;
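The four update equations of (5d) can be sketched directly (function name hypothetical; Σ̄c is computed as the average scatter of the cluster samples about their mean):

```python
import numpy as np

def nw_posterior_params(X_c, mu0, W0, nu0, beta0):
    """Normal-Wishart posterior updates for one cluster:
    beta' = beta0 + N_c,  nu' = nu0 + N_c,
    mu'   = (N_c * xbar + beta0 * mu0) / beta',
    W'    = (W0^{-1} + N_c*Sbar + N_c*beta0*(mu0-xbar)(mu0-xbar)^T / beta')^{-1}."""
    Nc, _ = X_c.shape
    xbar = X_c.mean(axis=0)
    Sbar = (X_c - xbar).T @ (X_c - xbar) / Nc   # average scatter of the cluster
    beta_p = beta0 + Nc
    nu_p = nu0 + Nc
    mu_p = (Nc * xbar + beta0 * mu0) / beta_p
    diff = (mu0 - xbar)[:, None]
    W_p = np.linalg.inv(np.linalg.inv(W0) + Nc * Sbar
                        + Nc * beta0 * (diff @ diff.T) / beta_p)
    return mu_p, W_p, nu_p, beta_p

X_c = np.random.default_rng(0).normal(size=(20, 3))
mu_p, W_p, nu_p, beta_p = nw_posterior_params(X_c, np.zeros(3), np.eye(3),
                                              nu0=4, beta0=1e-3)
```

With the weakly informative β0 used in step (4), μc' stays close to the empirical cluster mean, as expected.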
(5e) Express the conditional posterior distribution of the cluster indicator variable, whose form is:
p(h|-) = Disc(h; π)
where πc satisfies the constraint given below;
(5f) Express the conditional posterior distribution of υ, whose form is:
p(υc|-) ∝ p(h|υ)Beta(υc; 1, α) ~ Beta(υc; ρa, ρb)
$$\pi_c = \upsilon_c \prod_{j=1}^{c-1} \left( 1 - \upsilon_j \right), \qquad c = 1, \ldots, D$$
where ρa = 1 + Nc, and Nk denotes the number of samples in the k-th class;
(5g) Express the conditional posterior distribution of the latent variable λn of the latent-variable SVM classifier, whose form is:
$$p\left(\lambda_n^{-1} \mid -\right) = IG\!\left( \left| 1 - y_n \omega_c^{T} z_n \right|^{-1}, 1 \right)$$
where IG(·) denotes the inverse Gaussian distribution;
(5h) Express the conditional posterior distribution of βc, whose form is:
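Since λn⁻¹ follows an inverse Gaussian (Wald) distribution, λn can be drawn with NumPy's Wald sampler and then inverted; a sketch under that reading (names hypothetical):

```python
import numpy as np

def sample_lambda(y_n, omega_c, z_n, rng=None):
    """Gibbs draw for lambda_n from (5g): lambda_n^{-1} is inverse
    Gaussian with mean |1 - y_n omega_c^T z_n|^{-1} and shape 1, so draw
    the reciprocal via the Wald sampler and invert it."""
    rng = rng or np.random.default_rng()
    mean = 1.0 / abs(1.0 - y_n * (omega_c @ z_n))
    return 1.0 / rng.wald(mean, 1.0)

lam = sample_lambda(1.0, np.array([0.3, -0.2]), np.array([1.0, 2.0]))
```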
$$p(\beta_c \mid -) \propto N\!\left(\omega_c \mid 0, \beta_c^{-1} I\right) Ga(a_0, b_0) \sim Ga\!\left(a_0', b_0'\right)$$
Wherein
(5i) Express the conditional posterior distribution of κc,k, whose form is:
$$p(\kappa_{c,k} \mid -) \propto N\!\left(a_{c,k} \mid 0, \kappa_{c,k}^{-1} I\right) Ga(c_0, d_0) \sim Ga\!\left(c_0', d_0'\right)$$
Wherein
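Steps (5h) and (5i) share the same Normal–Gamma conjugate structure. The source does not spell out the updated parameters (a₀′, b₀′) and (c₀′, d₀′); the sketch below assumes the standard conjugate update for a K-dimensional zero-mean Normal with isotropic precision — shape′ = shape + K/2, rate′ = rate + ‖v‖²/2 — which is an assumption, not the patent's stated formula:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_precision(v, shape0, rate0, rng):
    """Draw a precision (beta_c or kappa_{c,k}) from its Gamma conditional
    posterior, given a K-dim zero-mean Normal vector v with covariance
    precision^{-1} * I, assuming the standard conjugate update."""
    shape_post = shape0 + 0.5 * v.shape[0]
    rate_post = rate0 + 0.5 * (v @ v)
    # NumPy's gamma sampler takes shape and *scale* = 1/rate
    return rng.gamma(shape_post, 1.0 / rate_post)

omega_c = np.array([0.5, -0.2, 0.3])          # toy weight vector
beta_c = sample_precision(omega_c, 1.0, 1.0, rng)
```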
(5j) represents the conditional posterior distribution of α, whose expression is:
$p(\alpha\mid -)\propto\prod_{c=1}^{D-1}\mathrm{Beta}(\upsilon_c;1,\alpha)\,\mathrm{Ga}(\alpha;e_0,f_0)\sim\mathrm{Ga}(\alpha;e_0^{\prime},f_0^{\prime})$
Wherein
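Step (5j) also yields a closed-form Gamma posterior for the concentration parameter α. The source omits e₀′ and f₀′; for a product of Beta(υ_c; 1, α) stick-breaking terms the standard result is e₀′ = e₀ + (D − 1) and f₀′ = f₀ − Σ_c log(1 − υ_c), which the sketch below assumes:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_alpha(upsilon, e0, f0, rng):
    """Draw alpha from its Gamma conditional posterior given the
    stick-breaking fractions upsilon_1..upsilon_{D-1}, assuming the
    standard update e0' = e0 + (D-1), f0' = f0 - sum(log(1 - upsilon))."""
    e_post = e0 + upsilon.shape[0]             # upsilon holds the D-1 fractions
    f_post = f0 - np.sum(np.log1p(-upsilon))   # log1p for numerical stability
    return rng.gamma(e_post, 1.0 / f_post)     # scale = 1/rate

upsilon = np.array([0.3, 0.5, 0.2])  # toy stick-breaking fractions
alpha = sample_alpha(upsilon, e0=1.0, f0=1.0, rng=rng)
```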
5. The target identification method based on the unlimited largest interval linear discriminant projection model according to claim 1,
characterized in that classifying the test feature sample set described in step (8b), and obtaining the category label set of the test
sample set, is realized by the following steps:
$\rho_c=\frac{1}{T_0}\sum_{t=1}^{T_0}\left(\omega_c^{t}\right)^{T}\hat{z}_n^{t}$
$\rho=[\rho_1,\rho_2,\ldots,\rho_c,\ldots,\rho_D]$
$\hat{b}_n=\arg\max_{m}(\rho)$
$\hat{b}=[\hat{b}_1,\hat{b}_2,\ldots,\hat{b}_n,\ldots,\hat{b}_{\hat{N}}]$
Wherein $\hat{b}_n$ represents the predicted category label of test sample $\hat{z}_n$, $\omega_c^{t}$ represents the weight coefficient of the latent-variable SVM
classifier of the c-th cluster at the t-th sampling, m = 1, 2, ..., C, t = 1, 2, ..., T<sub>0</sub>, ρ<sub>c</sub> represents the average output
of the c-th latent-variable SVM classifier, and arg max solves for the m value corresponding to the maximum.
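The decision rule above — average each cluster's SVM output over the T₀ retained Gibbs samples, then take the arg max — can be sketched as follows. The array layout and the use of a single projected feature matrix across samples are illustrative assumptions:

```python
import numpy as np

def predict_labels(omega_samples, z_hat):
    """Classify projected test features by averaging each cluster's
    latent-variable SVM decision value over T0 Gibbs samples.

    omega_samples: shape (T0, D, K) -- weight vectors per sample and cluster
    z_hat:         shape (N, K)     -- projected test features
    """
    # rho[n, c] = (1/T0) * sum_t (omega_c^t)^T z_hat_n
    rho = np.einsum('tck,nk->nc', omega_samples, z_hat) / omega_samples.shape[0]
    return np.argmax(rho, axis=1)  # predicted label b_hat_n for each test sample

omega_samples = np.array([[[1.0, 0.0], [0.0, 1.0]],
                          [[0.8, 0.1], [0.1, 0.9]]])  # T0=2, D=2, K=2
z_hat = np.array([[2.0, 0.5], [0.2, 3.0]])            # N=2 test samples
labels = predict_labels(omega_samples, z_hat)
```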
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710351886.8A CN107219510B (en) | 2017-05-18 | 2017-05-18 | Radar target identification method based on unlimited largest interval linear discriminant projection model |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107219510A true CN107219510A (en) | 2017-09-29 |
CN107219510B CN107219510B (en) | 2019-11-01 |
Family
ID=59945358
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710351886.8A Active CN107219510B (en) | 2017-05-18 | 2017-05-18 | Radar target identification method based on unlimited largest interval linear discriminant projection model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107219510B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104007431A (en) * | 2014-05-29 | 2014-08-27 | 西安电子科技大学 | Radar HRRP target recognition method based on dpLVSVM model |
CN104865562A (en) * | 2015-06-12 | 2015-08-26 | 西安电子科技大学 | Identification method for radar disoperative target based on mixed model |
CN105116400A (en) * | 2015-08-31 | 2015-12-02 | 西安电子科技大学 | Radar high-resolution range profile target recognition method based on iMMFA (infinite max-margin factor analysis) model |
CN105334504A (en) * | 2015-09-08 | 2016-02-17 | 西安电子科技大学 | Radar target identification method based on large-boundary nonlinear discrimination projection model |
CN106127236A (en) * | 2016-06-17 | 2016-11-16 | 西安电子科技大学 | Classification of Polarimetric SAR Image method based on Di Li Cray MRF mixed model |
Non-Patent Citations (3)
Title |
---|
ZHANG XUEFENG et al.: "A Target Recognition Method Based on the Dirichlet Process Latent-Variable Support Vector Machine Model", Journal of Electronics & Information Technology * |
ZHANG XUEFENG et al.: "Infinite Max-Margin Beta Process Factor Analysis Model", Journal of Xidian University (Natural Science Edition) * |
WEN WEI et al.: "A Ship Target Detection Method for Polarimetric SAR Images Based on Multi-Polarization Scattering Mechanisms", Journal of Electronics & Information Technology * |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108828533A (en) * | 2018-04-26 | 2018-11-16 | 电子科技大学 | The similar structure of sample keeps non-linear projection feature extracting method in one type |
CN108828533B (en) * | 2018-04-26 | 2021-12-31 | 电子科技大学 | Method for extracting similar structure-preserving nonlinear projection features of similar samples |
CN109376574A (en) * | 2018-08-14 | 2019-02-22 | 西安电子科技大学 | Refuse to sentence radar HRRP target identification method based on CNN |
CN109376574B (en) * | 2018-08-14 | 2021-05-14 | 西安电子科技大学 | CNN-based (probabilistic neural network-based) HRRP (high-resolution Radar) target identification method for radar capable of refusing judgment |
CN110007286A (en) * | 2019-04-22 | 2019-07-12 | 电子科技大学 | A kind of true and false target one-dimensional range profile feature extracting method of linear discriminant study |
CN110007286B (en) * | 2019-04-22 | 2022-05-24 | 电子科技大学 | Linear discriminant learning true and false target one-dimensional range profile feature extraction method |
CN113406623A (en) * | 2021-05-07 | 2021-09-17 | 中山大学 | Target identification method, device and medium based on radar high-resolution range profile |
CN113361472A (en) * | 2021-07-01 | 2021-09-07 | 西安电子科技大学 | Radar HRRP target identification method based on ILFACs model |
Also Published As
Publication number | Publication date |
---|---|
CN107219510B (en) | 2019-11-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105334504B (en) | The radar target identification method of nonlinear discriminant projection model based on big border | |
CN107219510B (en) | Radar target identification method based on unlimited largest interval linear discriminant projection model | |
US10902296B2 (en) | Method and apparatus for training classification model, method and apparatus for classifying | |
CN101667245B (en) | Human face detection method by cascading novel detection classifiers based on support vectors | |
CN104200237A (en) | High speed automatic multi-target tracking method based on coring relevant filtering | |
CN105389583A (en) | Image classifier generation method, and image classification method and device | |
CN104298999B (en) | EO-1 hyperion feature learning method based on recurrence autocoding | |
CN105989336A (en) | Scene identification method based on deconvolution deep network learning with weight | |
CN103440508B (en) | The Remote Sensing Target recognition methods of view-based access control model word bag model | |
CN105760900A (en) | Hyperspectral image classification method based on affinity propagation clustering and sparse multiple kernel learning | |
CN103839078A (en) | Hyperspectral image classifying method based on active learning | |
CN105930846A (en) | Neighborhood information and SVGDL (support vector guide dictionary learning)-based polarimetric SAR image classification method | |
CN106203483A (en) | A kind of zero sample image sorting technique of multi-modal mapping method of being correlated with based on semanteme | |
CN106054189A (en) | Radar target recognition method based on dpKMMDP model | |
CN103473308B (en) | High-dimensional multimedia data classifying method based on maximum margin tensor study | |
CN104778482A (en) | Hyperspectral image classifying method based on tensor semi-supervised scale cutting dimension reduction | |
Wurtz et al. | Methodology and performance comparison of statistical learning pulse shape classifiers as demonstrated with organic liquid scintillator | |
CN104504391B (en) | A kind of hyperspectral image classification method based on sparse features and markov random file | |
Hu et al. | A comparative study of sampling analysis in scene classification of high-resolution remote sensing imagery | |
CN107133648A (en) | The sparse one-dimensional range profile recognition methods for keeping projecting is merged based on self-adapting multi-dimension | |
CN107578063B (en) | Image Spectral Clustering based on fast selecting landmark point | |
CN106778714A (en) | LDA face identification methods based on nonlinear characteristic and model combination | |
CN104809471A (en) | Hyperspectral image residual error fusion classification method based on space spectrum information | |
CN107423697A (en) | Activity recognition method based on non-linear fusion depth 3D convolution description | |
CN104978569A (en) | Sparse representation based incremental face recognition method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||