CN113657541A - Domain adaptive target identification method based on deep knowledge integration - Google Patents
- Publication number: CN113657541A (application CN202110987414.8A)
- Authority: CN (China)
- Prior art keywords: matrix, mapping matrix, target, feature, samples
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F18/214: Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
- G06F18/21: Design or setup of recognition systems or techniques; extraction of features in feature space; blind source separation
- G06F18/24: Classification techniques
- G06N3/045: Neural networks; combinations of networks
- G06N3/08: Neural networks; learning methods
- Y02T10/40: Engine management systems
Abstract
The invention belongs to the technical field of target recognition, and specifically relates to a domain-adaptive target recognition method based on deep knowledge integration. The invention realizes deep knowledge integration at both the feature level and the decision level. At the feature level, a common mapping matrix and unique mapping matrices are designed to integrate knowledge and improve the robustness of recognition performance: the common mapping matrix mines the knowledge shared by heterogeneous features, while the unique mapping matrices preserve the knowledge specific to each feature type. At the decision level, feature weights are designed to quantify the importance of the different features, and these weights are updated from target-domain samples by online learning, which overcomes the difference in data distribution between domains and realizes domain-adaptive target recognition. The proposed method is therefore an intelligent domain-adaptive target recognition method based on deep knowledge integration.
Description
Technical Field
The invention belongs to the technical field of target recognition, and specifically relates to a domain-adaptive target recognition method based on deep knowledge integration.
Background
Automatic target recognition identifies and classifies targets from sensor data, and plays an important role in military and civil applications such as battlefield reconnaissance, terrain exploration, and automatic driving. As the technology has developed, many strong target recognition methods have been proposed one after another, and how to combine the advantages of different methods to improve recognition performance has become one of the hot spots of target recognition research.
The paper "Q. Yu, H. Hu, X. Geng, Y. Jiang and J. An, 'High-Performance SAR Automatic Target Recognition Under Limited Data Condition Based on a Deep Feature Fusion Network,' IEEE Access, vol. 7, pp. 165646-165658, 2019" proposes a feature-level knowledge integration method that concatenates the features extracted from different convolutional layers of a neural network as the input to a final classifier, thereby integrating knowledge from features of different scales. However, this method does not explore the relationship between the different features, ignoring both the common knowledge and the unique knowledge that exist across them. The paper "J. Zhang, M. Xing and Y. Xie, 'FEC: A Feature Fusion Framework for SAR Target Recognition Based on Electromagnetic Scattering Features and Deep CNN Features,' IEEE Transactions on Geoscience and Remote Sensing, vol. 59, no. 3, pp. 2174-2187, March 2021" proposes a decision-level knowledge integration method that concatenates the decision-level outputs of different methods and trains a new classifier to combine their knowledge, but it ignores how important each feature is to the integration. Meanwhile, owing to environmental changes or changes in the observation angle of the target, a distribution difference exists between source-domain and target-domain data, and most current knowledge integration methods do not consider domain adaptation: a model learned on the source domain often cannot cope with the changes arising in the target domain, so its performance there degrades. Research on domain-adaptive target recognition based on deep knowledge integration is therefore expected to further improve target recognition performance.
Disclosure of Invention
The invention aims to overcome the above defects by providing a domain-adaptive target recognition method based on deep knowledge integration. The method realizes domain-adaptive target recognition through deep knowledge integration at the feature level and the decision level together with online learning. At the feature level, the invention designs a common mapping matrix and unique mapping matrices to integrate knowledge and improve the robustness of recognition performance: the common mapping matrix mines the knowledge shared by heterogeneous features, while the unique mapping matrices preserve the knowledge specific to each feature type. At the decision level, the invention designs feature weights to quantify the importance of the different features, and updates these weights from target-domain samples by online learning, overcoming the difference in data distribution between domains and realizing domain-adaptive target recognition. The proposed method is therefore an intelligent domain-adaptive target recognition method.
The technical scheme of the invention is as follows: a domain-adaptive target recognition method based on deep knowledge integration, as shown in fig. 1, comprising the following steps:
S1, collecting original image samples in the source domain and the target domain respectively and preprocessing them;
S2, training multiple feature extractors with the source-domain samples and extracting multiple types of features;
S3, establishing and solving the objective function of the feature-level knowledge integration model using the different types of features obtained in step S2, specifically comprising:
S31, designing transformation matrices {Θ_i}, a common mapping matrix A_0 and unique mapping matrices {A_i} that map the different types of features into a unified label space, and measuring the difference between the mapped features and the true labels to obtain the feature-level knowledge integration objective;
S32, solving the objective obtained in step S31 with a three-step iterative method to obtain the transformation matrices {Θ_i}, the common mapping matrix A_0 and the unique mapping matrices {A_i};
S4, establishing and solving the decision-level knowledge integration online model using the target-domain samples, specifically comprising:
S41, processing the labeled target-domain samples with the feature extractors obtained in step S2 and the matrices {Θ_i}, A_0 and {A_i} obtained in step S3, designing feature weights, and establishing the decision-level knowledge integration objective;
S42, solving the objective obtained in step S41 to obtain the feature weights;
S5, processing the sample under test with the feature extractors obtained in step S2, the matrices {Θ_i}, A_0 and {A_i} obtained in step S3, and the feature weights obtained in step S4 to obtain the recognition result.
The beneficial effects of the invention are as follows: the method fully mines the common and unique knowledge of heterogeneous features at the feature level, quantifies the importance of the different features at the decision level, and updates the feature weights from target-domain data by online learning, realizing deep knowledge integration at both the feature level and the decision level, overcoming the difference in data distribution between domains, improving the robustness of target recognition, and realizing domain-adaptive target recognition. The proposed method is therefore an intelligent domain-adaptive target recognition method based on deep knowledge integration.
Drawings
FIG. 1 is a schematic diagram of the framework of the present invention;
FIG. 2 is a flow chart of the algorithm of the present invention;
FIG. 3 is a comparison graph of recognition accuracy for a background art method and a method of the present invention.
Detailed Description
The technical scheme of the invention is described in detail below with reference to the accompanying drawings and embodiments:
as shown in fig. 2, the present invention includes:
step 1, respectively collecting original image samples in a source domain and a target domain and preprocessing the original image samples.
And acquiring original images of each target under different pitch angles in a static state by using a radar, and observing the target under different azimuth angles under each fixed pitch angle. And marking the acquired images as a source domain and a target domain according to the difference of the pitch angles.
In the source domainCollecting a large amount of marked samples, and marking as Z ═ Z after cutting pretreatmentjj is 1,2, …, n, where n is the total number of samples in the source domain training set. Label of all the source domain samples as matrixWhere L is the total number of classes of the sample. Matrix arrayEach column in (a) represents a one-hot tag [0, …,0, 1,0, …,0 ] of the corresponding sample]TThe ith position is 1 and the remaining elements are all 0, indicating that the label number is L, L ∈ {1,2, …, L }.
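The one-hot label matrix described above can be built in a few lines; this is a minimal numpy sketch (the function name is illustrative, not from the patent):

```python
import numpy as np

def one_hot_labels(labels, L):
    """Build the L x n one-hot label matrix Y: each column has a single 1
    at the row of the corresponding sample's class number (1-based)."""
    labels = np.asarray(labels)
    n = labels.shape[0]
    Y = np.zeros((L, n))
    Y[labels - 1, np.arange(n)] = 1.0
    return Y

# Example: three samples with class labels 2, 1, 3 out of L = 3 classes
Y = one_hot_labels([2, 1, 3], L=3)
```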
In the target domain, a small number of labeled samples are collected online, i.e. sequentially, one sample at a time. After cropping and preprocessing they are recorded as ẑ_j, with corresponding one-hot labels ŷ_j, j = 1, 2, …, n_T, where n_T is the total number of labeled target-domain samples.
Step 2: train N feature extractors f_i(·), i ∈ {1, 2, …, N}, with the source-domain samples obtained in step 1, and record the features of the source-domain samples under the different extractors as X_i = f_i(Z) ∈ R^{d_i×n},
where X_i denotes the features extracted by f_i(·) on the source-domain samples Z, and d_i is the dimension of the i-th feature type.
Step 3: establish and solve the objective function of the feature-level knowledge integration model from the different types of features obtained in step 2. The specific steps are as follows.
Step 3-1: design transformation matrices {Θ_i}, a common mapping matrix A_0 and unique mapping matrices {A_i} that map the different types of features into a unified label space, measure the difference between the mapped features and the true labels, and establish the feature-level knowledge integration model, whose overall objective is recorded as

J(A_0, {A_i}, {Θ_i}) = Σ_{i=1}^{N} ||Y − (A_0 + A_i)Θ_i X_i||_F^2 + α||A_0||_F^2 + β Σ_{i=1}^{N} ||A_i||_F^2 + γ Σ_{i=1}^{N} ||Θ_i||_F^2,   (2)

where the first sum is the classification loss and the remaining terms form the regularization. Θ_i ∈ R^{M×d_i} maps features of different dimensions to the specified dimension M; A_0 + A_i ∈ R^{L×M} further maps each feature type into the unified label space; the Frobenius norm measures the difference between the mapped features and the true labels, yielding the classification loss. Frobenius-norm constraints on the transformation matrices {Θ_i}, the common mapping matrix A_0 and the unique mapping matrices {A_i} give the regularization term, where the regularization coefficients α, β and γ control the degree of parameter regularization. A_0, {A_i} and {Θ_i} are obtained by minimizing the objective:

(A_0, {A_i}, {Θ_i}) = arg min J(A_0, {A_i}, {Θ_i}).   (3)
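Evaluating this objective is straightforward; the sketch below assumes the feature-level objective is the sum of Frobenius-norm classification losses ||Y − (A_0 + A_i)Θ_i X_i||_F² plus Frobenius regularization of A_0, {A_i} and {Θ_i}, as the description states (function and variable names are illustrative):

```python
import numpy as np

def feature_level_objective(Y, Xs, A0, As, Thetas, alpha, beta, gamma):
    """Classification loss in the label space plus Frobenius-norm
    regularization of A0, the {A_i} and the {Theta_i}."""
    loss = sum(np.linalg.norm(Y - (A0 + Ai) @ Ti @ Xi, 'fro') ** 2
               for Xi, Ai, Ti in zip(Xs, As, Thetas))
    reg = (alpha * np.linalg.norm(A0, 'fro') ** 2
           + beta * sum(np.linalg.norm(Ai, 'fro') ** 2 for Ai in As)
           + gamma * sum(np.linalg.norm(Ti, 'fro') ** 2 for Ti in Thetas))
    return loss + reg

# Small random instance: L = 3 classes, M = 4, n = 10 samples, N = 2 feature types
rng = np.random.default_rng(0)
L_cls, M, n = 3, 4, 10
Xs = [rng.standard_normal((5, n)), rng.standard_normal((6, n))]
Y = np.eye(L_cls)[:, rng.integers(0, L_cls, n)]
A0 = rng.standard_normal((L_cls, M))
As = [rng.standard_normal((L_cls, M)) for _ in Xs]
Thetas = [rng.standard_normal((M, X.shape[0])) for X in Xs]
J = feature_level_objective(Y, Xs, A0, As, Thetas, 1e3, 1e2, 1e2)
```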
Step 3-2: optimize Eq. (3) with a three-step iterative method, as follows.
Step 3-2-1: fix the transformation matrices {Θ_i} and the unique mapping matrices {A_i} and optimize the common mapping matrix A_0. The objective becomes

min_{A_0} Σ_{i=1}^{N} ||Y − (A_0 + A_i)Θ_i X_i||_F^2 + α||A_0||_F^2.   (4)

This is an unconstrained optimization problem with the closed-form solution

A_0 = [ Σ_{i=1}^{N} (Y − A_iΘ_iX_i)(Θ_iX_i)^T ] [ Σ_{i=1}^{N} Θ_iX_i(Θ_iX_i)^T + αI ]^{-1},   (5)

where I is the M-order identity matrix.
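A minimal numpy sketch of this closed-form ridge-type update, assuming the objective for A_0 is the residual sum Σ_i ||Y − (A_0 + A_i)Θ_iX_i||_F² plus α||A_0||_F² (names are illustrative):

```python
import numpy as np

def update_A0(Y, Xs, As, Thetas, alpha):
    """Closed-form update of the common mapping matrix A0 with {Theta_i}
    and {A_i} fixed: accumulate residual cross-terms and a regularized
    Gram matrix, then solve (I is the M-order identity)."""
    M = Thetas[0].shape[0]
    num = np.zeros((Y.shape[0], M))
    den = alpha * np.eye(M)
    for Xi, Ai, Ti in zip(Xs, As, Thetas):
        P = Ti @ Xi                      # features mapped to dimension M
        num += (Y - Ai @ P) @ P.T
        den += P @ P.T
    return num @ np.linalg.inv(den)
```

At the returned A_0 the gradient of the quadratic objective vanishes, which is an easy sanity check.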
Step 3-2-2: fix the transformation matrices {Θ_i} and the common mapping matrix A_0 and optimize the unique mapping matrices {A_i}. The objective becomes

min_{{A_i}} Σ_{i=1}^{N} ||Y − (A_0 + A_i)Θ_iX_i||_F^2 + β Σ_{i=1}^{N} ||A_i||_F^2.   (6)

Eq. (6) splits into N independent unconstrained least-squares problems, each of which can be expressed as

min_{A_i} ||Y − (A_0 + A_i)Θ_iX_i||_F^2 + β||A_i||_F^2,   (7)

whose solution gives the optimized unique mapping matrices:

A_i = (Y − A_0Θ_iX_i)(Θ_iX_i)^T [ Θ_iX_i(Θ_iX_i)^T + βI ]^{-1}.   (8)
Step 3-2-3: finally, fix the common mapping matrix A_0 and the unique mapping matrices {A_i} and optimize the transformation matrices {Θ_i}. The objective is

min_{{Θ_i}} Σ_{i=1}^{N} ||Y − (A_0 + A_i)Θ_iX_i||_F^2 + γ Σ_{i=1}^{N} ||Θ_i||_F^2.   (9)

Solving as in step 3-2-2, in vectorized form, gives the optimized transformation matrices:

vec(Θ_i) = [ X_iX_i^T ⊗ (A_0 + A_i)^T(A_0 + A_i) + γ(I_1 ⊗ I_2) ]^{-1} vec( (A_0 + A_i)^T Y X_i^T ),   (10)

where I_1 is the d_i-order identity matrix, I_2 is the M-order identity matrix, ⊗ denotes the Kronecker product, and vec(·) stacks the columns of a matrix.
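Because Θ_i is multiplied on both sides (by A_0 + A_i on the left and X_i on the right), its normal equations are of Sylvester type and can be solved with a Kronecker product. A sketch under that assumption (names are illustrative):

```python
import numpy as np

def update_theta(Y, Xi, A0, Ai, gamma):
    """Vectorized closed-form update of one Theta_i with A0 and A_i fixed:
    solve (X_i X_i^T kron B^T B + gamma I) vec(Theta) = vec(B^T Y X_i^T),
    where B = A0 + A_i. Column-major ('F') ordering matches vec()."""
    B = A0 + Ai                          # combined L x M mapping
    M, d = B.shape[1], Xi.shape[0]
    lhs = np.kron(Xi @ Xi.T, B.T @ B) + gamma * np.eye(M * d)
    rhs = (B.T @ Y @ Xi.T).reshape(-1, order='F')   # vec(B^T Y X_i^T)
    theta_vec = np.linalg.solve(lhs, rhs)
    return theta_vec.reshape(M, d, order='F')
```

The stationarity condition B^T(BΘX_i − Y)X_i^T + γΘ = 0 provides a direct correctness check.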
Iterate steps 3-2-1, 3-2-2 and 3-2-3 until convergence to obtain the transformation matrices {Θ_i}, the common mapping matrix A_0 and the unique mapping matrices {A_i}. The algorithm is considered to have converged when the change of the objective Eq. (2) is less than a given threshold δ:

|J^{(t+1)} − J^{(t)}| < δ,   (11)

where t is the number of iterations.
Step 4-1: with the feature extractors obtained in step 2, extract the multiple types of features x̂_i = f_i(ẑ), i ∈ {1, 2, …, N}, of the labeled target-domain samples obtained in step 1. According to the transformation matrices {Θ_i}, the common mapping matrix A_0 and the unique mapping matrices {A_i} of step 3, compute the corresponding transformed features after the different feature types of a target-domain sample are mapped into the label space:

ŷ_i = (A_0 + A_i)Θ_i x̂_i.   (12)

To explore the importance of the different features in the target domain, establish the decision-level knowledge integration online model objective:

min_{ξ_i > 0} Σ_{i=1}^{N} [ ξ_i ||ŷ_i − ŷ||_2^2 − λ log ξ_i ],   (13)

where ξ_i represents the importance of the i-th transformed feature ŷ_i, ||ŷ_i − ŷ||_2^2 describes the distance between the transformed feature and the true label, log ξ_i is a constraint term that prevents the estimated ξ_i from approaching 0, and λ is a balance parameter.
Step 4-2: optimize the decision-level knowledge integration online model established in step 4-1, as follows.
Step 4-2-1: convert the objective into a minimized negative log-likelihood problem.
Let e_i = ||ŷ_i − ŷ||_2^2. Given the feature weight ξ_i, assume the residual ŷ_i − ŷ obeys the normal distribution N(0, ξ_i^{-1}I), with likelihood function

p(ŷ_i | ξ_i) ∝ ξ_i^{1/2} exp(−ξ_i e_i / 2).   (14)

Maximizing the likelihood Eq. (14) is equivalent to minimizing its negative logarithm:

min_{ξ_i} Σ_{i=1}^{N} [ ξ_i e_i / 2 − (1/2) log ξ_i ].   (15)

It can be seen that minimizing Eq. (15) is equivalent to solving Eq. (13).
Step 4-2-2: further convert the objective into a maximum a posteriori estimation problem and solve for the feature weights.
Let ξ_i obey the prior distribution ξ_i ~ Γ(γ_1, γ_2), where γ_1 and γ_2 are parameters, with probability density function

p(ξ_i) = γ_2^{γ_1} / Γ(γ_1) · ξ_i^{γ_1 − 1} exp(−γ_2 ξ_i).   (16)

The posterior probability distribution of ξ_i is then

p(ξ_i | {e_{i,j}}) ∝ ξ_i^{γ_1 + n_T/2 − 1} exp( −(γ_2 + (1/2) Σ_{j=1}^{n_T} e_{i,j}) ξ_i ).   (17)

Thus the posterior of ξ_i is the gamma distribution Γ(γ_1 + n_T/2, γ_2 + (1/2) Σ_j e_{i,j}). Given n_T labeled target-domain samples, maximum a posteriori estimation yields the feature weight

ξ_i = (γ_1 + n_T/2 − 1) / (γ_2 + (1/2) Σ_{j=1}^{n_T} e_{i,j}).   (18)
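A minimal sketch of the MAP weight estimate, assuming a Gaussian likelihood with precision ξ_i and a Γ(γ_1, γ_2) prior, so that the posterior is gamma and its mode is (a − 1)/b for shape a > 1 (names are illustrative):

```python
import numpy as np

def map_feature_weight(errors, gamma1, gamma2):
    """MAP estimate of one feature weight xi_i from the squared residuals
    e_ij of n_T labeled target-domain samples: the gamma posterior has
    shape gamma1 + n/2 and rate gamma2 + sum(errors)/2, and its mode is
    (shape - 1) / rate."""
    errors = np.asarray(errors, dtype=float)
    a = gamma1 + errors.size / 2.0
    b = gamma2 + errors.sum() / 2.0
    return (a - 1.0) / b
```

Larger accumulated errors give a feature a smaller weight, which matches the intended behavior.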
and 4-2-3, updating the characteristic weight obtained by solving in an online mode.
Order toThe representation is based onThe cumulative count at which each sample updates the feature weight,and isWhich is indicative of the corresponding accumulated error,and is wherein ,andrespectively corresponding initial values. Thus, the feature weights can be updated online:
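The online update only needs the two running statistics; this sketch assumes the gamma-posterior MAP form of the weight, with the class name and initial values being illustrative:

```python
class OnlineFeatureWeight:
    """Online update of one feature weight xi_i: keep a cumulative sample
    count c and a cumulative error s, and refresh the weight after every
    labeled target-domain sample."""
    def __init__(self, gamma1, gamma2, c0=0.0, s0=0.0):
        self.g1, self.g2 = gamma1, gamma2
        self.c, self.s = c0, s0          # cumulative count and error

    def update(self, error):
        """Accumulate one new squared residual and return the new weight."""
        self.c += 1.0
        self.s += error
        return self.weight()

    def weight(self):
        return (self.g1 + self.c / 2.0 - 1.0) / (self.g2 + self.s / 2.0)
```

A feature that starts making large errors on target-domain samples sees its weight shrink with each update.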
For a sample z under test, the weighted fusion of the mapped features gives the prediction in the label space, ŷ = Σ_{i=1}^{N} ξ_i (A_0 + A_i)Θ_i f_i(z), and the predicted label number ĉ of the target under test is obtained as the index of the maximum entry of ŷ, giving the target recognition result.
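The fused prediction rule above can be sketched as follows, assuming the weighted-sum fusion just described (names are illustrative):

```python
import numpy as np

def predict_label(feats, A0, As, Thetas, xis):
    """Fuse the N feature types with their weights xi_i and take the
    arg-max over the label space: the predicted class number is the
    index (1-based) of the largest entry of the fused score vector."""
    score = sum(xi * (A0 + Ai) @ Ti @ x
                for x, Ai, Ti, xi in zip(feats, As, Thetas, xis))
    return int(np.argmax(score)) + 1     # labels are numbered from 1
```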
Examples
The model was tested on the ten target classes of the MSTAR (Moving and Stationary Target Acquisition and Recognition) data set. The sensor that acquired the data set is a high-resolution spotlight synthetic aperture radar operating in the X band with HH polarization and a resolution of 0.3 m × 0.3 m. The acquired data were preprocessed, and slice images of 128 × 128 pixels containing the various targets were extracted. The data are mostly SAR slice images of stationary vehicles and comprise ten target classes: BMP2, BTR70, T72, 2S1, BRDM2, BTR60, D7, T62, ZIL131 and ZSU234. Sample data observed at a 17° pitch angle serve as the source domain and sample data observed at 15° as the target domain; this is recorded as the standard operating condition and contains all 10 target classes, with the specific sample counts given in Table 1. Alternatively, sample data observed at 17° serve as the source domain and sample data observed at 30° as the target domain; this is recorded as the extended operating condition and contains 3 target classes, with the specific sample counts given in Table 2.
Table 1. Number of samples of each class under the standard operating condition
Table 2. Number of samples of each class under the extended operating condition
To remove the effect of background clutter, the sample images were center-cropped to 64 × 64. A convolutional neural network and a sparse-representation method were used as the two feature extractors, so N = 2, with extracted feature dimensions d_1 = 128 and d_2 = 256. The remaining parameters were set to M = 25, α = 10^3, β = γ = 10^2, λ = 10, γ_1 = 10 and γ_2 = 5, and the convergence threshold was set to δ = 10^-3. All source-domain samples were used as labeled source-domain samples, 10 samples were randomly drawn from the target-domain samples as labeled target-domain samples for training, and the remaining samples were used as samples under test.
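The center-cropping step of the embodiment is a one-liner in practice; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def center_crop(img, size=64):
    """Center-crop a slice image (e.g. a 128 x 128 MSTAR chip) to
    size x size to suppress background clutter around the target."""
    h, w = img.shape[:2]
    top, left = (h - size) // 2, (w - size) // 2
    return img[top:top + size, left:left + size]

chip = np.zeros((128, 128))
patch = center_crop(chip, 64)
```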
Experiments under the two operating conditions were designed to verify the superiority of the proposed algorithm, comparing the performance of the background-art methods and the proposed method under the standard and extended operating conditions respectively. As shown in Fig. 3, under the standard operating condition the target-domain samples are observed at a 15° pitch angle and the source domain at 17°, so the data-distribution difference between them is small; both the proposed method and the background-art methods perform well, with recognition rates above 95%, and the proposed method is highest at 98.6%. Under the extended operating condition the target-domain samples are observed at a 30° pitch angle and differ more from the source-domain samples; the recognition performance of the background-art methods drops sharply, while the proposed method still maintains 98.4% accuracy, a clear advantage. In summary, the experimental results show that the method effectively combines the advantages of different feature types, realizes domain adaptation, improves the accuracy of target recognition and enhances the generalization ability of the model.
Claims (5)
1. A domain-adaptive target recognition method based on deep knowledge integration, characterized by comprising the following steps:
S1, collecting original image samples in the source domain and the target domain respectively and preprocessing them;
S2, training multiple feature extractors with the source-domain samples and extracting multiple types of features;
S3, establishing and solving the objective function of the feature-level knowledge integration model using the different types of features obtained in step S2, specifically comprising:
S31, designing transformation matrices {Θ_i}, a common mapping matrix A_0 and unique mapping matrices {A_i} that map the different types of features into a unified label space, and measuring the difference between the mapped features and the true labels to obtain the feature-level knowledge integration objective;
S32, solving the objective obtained in step S31 with a three-step iterative method to obtain the transformation matrices {Θ_i}, the common mapping matrix A_0 and the unique mapping matrices {A_i};
S4, establishing and solving the decision-level knowledge integration online model using the target-domain samples, specifically comprising:
S41, processing the labeled target-domain samples with the feature extractors obtained in step S2 and the matrices {Θ_i}, A_0 and {A_i} obtained in step S3, designing feature weights, and establishing the decision-level knowledge integration objective;
S42, solving the objective obtained in step S41 to obtain the feature weights;
S5, processing the sample under test with the feature extractors obtained in step S2, the matrices {Θ_i}, A_0 and {A_i} obtained in step S3, and the feature weights obtained in step S4 to obtain the recognition result.
2. The domain-adaptive target recognition method based on deep knowledge integration according to claim 1, characterized in that the specific method of step S1 is as follows:
a radar acquires original images of each target at different pitch angles in a static state, observing the target at different azimuth angles under each fixed pitch angle, and the acquired images are assigned to the source domain or the target domain according to the pitch angle;
in the source domain, labeled samples are collected and, after cropping and preprocessing, recorded as Z = {z_j}, j = 1, 2, …, n, where n is the total number of samples in the source-domain training set; the labels of all source-domain samples form the matrix Y ∈ R^{L×n}, where L is the total number of classes; each column of Y is the one-hot label [0, …, 0, 1, 0, …, 0]^T of the corresponding sample: the l-th position is 1 and the remaining elements are 0, indicating label number l, l ∈ {1, 2, …, L}.
3. The domain-adaptive target recognition method based on deep knowledge integration according to claim 2, characterized in that the specific method of step S2 is as follows:
training N feature extractors f_i(·), i ∈ {1, 2, …, N}, with the source-domain samples obtained in step S1, and recording the features of the source-domain samples under the different extractors as X_i = f_i(Z) ∈ R^{d_i×n}, where d_i is the dimension of the i-th feature type.
4. The domain-adaptive target recognition method based on deep knowledge integration according to claim 3, characterized in that the specific method of step S3 is as follows:
S31, designing transformation matrices {Θ_i}, a common mapping matrix A_0 and unique mapping matrices {A_i} that map the different types of features into a unified label space, measuring the difference between the mapped features and the true labels, and establishing the feature-level knowledge integration model, whose objective function is

J(A_0, {A_i}, {Θ_i}) = Σ_{i=1}^{N} ||Y − (A_0 + A_i)Θ_iX_i||_F^2 + α||A_0||_F^2 + β Σ_{i=1}^{N} ||A_i||_F^2 + γ Σ_{i=1}^{N} ||Θ_i||_F^2,   (2)

where the first sum is the classification loss and the remaining terms the regularization; Θ_i ∈ R^{M×d_i} maps features of different dimensions to the specified dimension M; A_0 + A_i ∈ R^{L×M} maps the different feature types into the unified label space; the Frobenius norm measures the difference between the mapped features and the true labels, giving the classification loss; Frobenius-norm constraints on {Θ_i}, A_0 and {A_i} give the regularization term, with regularization coefficients α, β and γ controlling the degree of parameter regularization; A_0, {A_i} and {Θ_i} are obtained by minimizing the objective:

(A_0, {A_i}, {Θ_i}) = arg min J(A_0, {A_i}, {Θ_i});   (3)
S32, optimizing Eq. (3) with a three-step iterative method, as follows:
S321, fixing the transformation matrices {Θ_i} and the unique mapping matrices {A_i} and optimizing the common mapping matrix A_0, the objective becoming

min_{A_0} Σ_{i=1}^{N} ||Y − (A_0 + A_i)Θ_iX_i||_F^2 + α||A_0||_F^2;   (4)

this is an unconstrained optimization problem with the closed-form solution

A_0 = [ Σ_{i=1}^{N} (Y − A_iΘ_iX_i)(Θ_iX_i)^T ] [ Σ_{i=1}^{N} Θ_iX_i(Θ_iX_i)^T + αI ]^{-1},   (5)

where I is the M-order identity matrix;
step S322: fixing the transformation matrices {Θ_i} and the common mapping matrix A_0 and optimizing the unique mapping matrices {A_i}, the objective becoming

min_{{A_i}} Σ_{i=1}^{N} ||Y − (A_0 + A_i)Θ_iX_i||_F^2 + β Σ_{i=1}^{N} ||A_i||_F^2;   (6)

Eq. (6) splits into N independent unconstrained least-squares problems, each expressed as

min_{A_i} ||Y − (A_0 + A_i)Θ_iX_i||_F^2 + β||A_i||_F^2,   (7)

whose solution gives the optimized unique mapping matrices

A_i = (Y − A_0Θ_iX_i)(Θ_iX_i)^T [ Θ_iX_i(Θ_iX_i)^T + βI ]^{-1};   (8)
step S323: finally, fixing the common mapping matrix A_0 and the unique mapping matrices {A_i} and optimizing the transformation matrices {Θ_i}, with objective

min_{{Θ_i}} Σ_{i=1}^{N} ||Y − (A_0 + A_i)Θ_iX_i||_F^2 + γ Σ_{i=1}^{N} ||Θ_i||_F^2;   (9)

solving as in step S322 gives the optimized transformation matrices

vec(Θ_i) = [ X_iX_i^T ⊗ (A_0 + A_i)^T(A_0 + A_i) + γ(I_1 ⊗ I_2) ]^{-1} vec( (A_0 + A_i)^T Y X_i^T ),   (10)

where I_1 is the d_i-order identity matrix and I_2 the M-order identity matrix;
iterating steps S321, S322 and S323 until convergence gives the transformation matrices {Θ_i}, the common mapping matrix A_0 and the unique mapping matrices {A_i}; the algorithm is considered converged when the change of the objective Eq. (2) is less than a given threshold δ:

|J^{(t+1)} − J^{(t)}| < δ,   (11)

where t is the number of iterations.
5. The domain-adaptive target recognition method based on deep knowledge integration according to claim 4, characterized in that the specific method of step S4 is as follows:
step S41, extracting the multiple types of features x̂_i = f_i(ẑ), i ∈ {1, 2, …, N}, of the labeled target-domain samples obtained in step S1 with the feature extractors obtained in step S2, and, according to the matrices {Θ_i}, A_0 and {A_i} of step S3, computing the corresponding transformed features after the different feature types of a target-domain sample are mapped into the label space:

ŷ_i = (A_0 + A_i)Θ_i x̂_i;   (12)

to obtain the importance of the different features in the target domain, establishing the decision-level knowledge integration online model objective

min_{ξ_i > 0} Σ_{i=1}^{N} [ ξ_i ||ŷ_i − ŷ||_2^2 − λ log ξ_i ],   (13)

where ξ_i represents the importance of the i-th transformed feature ŷ_i, ||ŷ_i − ŷ||_2^2 describes the distance between the transformed feature and the true label, log ξ_i is a constraint term preventing the estimated ξ_i from approaching 0, and λ is a balance parameter;
s42, carrying out optimization solution on the decision-level knowledge integration online model established in the step S41, and specifically comprising the following steps:
Step S421: The objective function is converted into a minimum logarithmic function problem:
Given the feature weight ξ_i, the error between the transformed feature and the true label is assumed to obey a normal distribution, with likelihood function:
Maximizing the likelihood function of equation (14) is equivalent to minimizing its negative logarithm:
It follows that minimizing equation (15) is equivalent to solving equation (13).
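The equivalence used here is the standard one: maximizing a Gaussian likelihood is the same as minimizing its negative logarithm, and the maximizing variance is the mean squared residual. A small numerical check (with hypothetical residuals; equation (14) itself is not reproduced in this text):

```python
import numpy as np

def gaussian_log_likelihood(residuals, var):
    """Log-likelihood of i.i.d. zero-mean Gaussian residuals with variance var."""
    r = np.asarray(residuals, dtype=float)
    n = r.size
    return float(-0.5 * n * np.log(2 * np.pi * var) - np.sum(r ** 2) / (2 * var))
```

Evaluating this over a grid of variances, the argmax of the log-likelihood coincides with the argmin of the negative log-likelihood, and lands at mean(r²).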
Step S422: The objective function is further converted into a maximum a posteriori probability estimation problem and the feature weights are solved:
Assume ξ_i obeys the Gamma distribution ξ_i ~ Γ(γ_1, γ_2), where γ_1 and γ_2 are parameters. The probability density function is then:
The posterior probability distribution of ξ_i is computed as:
Thus the posterior probability distribution of ξ_i is again a Gamma distribution. Given the labeled samples in the target domain, the feature weight is estimated by maximum a posteriori probability:
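Under the standard Normal-precision/Gamma conjugacy (an assumption consistent with the claim that the posterior of ξ_i is again a Gamma distribution), the MAP estimate has a closed form:

```python
import numpy as np

def map_feature_weight(residuals, gamma1, gamma2):
    """MAP estimate of xi_i under a Gamma(gamma1, gamma2) prior.

    Assumed conjugate update (Gaussian errors whose precision is xi_i):
      posterior = Gamma(gamma1 + n/2, gamma2 + sum(r^2)/2)
    whose mode (shape - 1)/rate gives the MAP estimate for shape > 1.
    """
    r = np.asarray(residuals, dtype=float)
    shape = gamma1 + r.size / 2.0
    rate = gamma2 + np.sum(r ** 2) / 2.0
    return (shape - 1.0) / rate
```

Larger accumulated error drives the rate up and the estimated weight down, matching the intended "less reliable feature, smaller weight" behavior.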
Step S423: The solved feature weights are updated in an online manner:
For each feature type, maintain the cumulative count of samples used to update the feature weight and the corresponding accumulated error, each initialized to its corresponding initial value; the feature weights are then updated online:
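The online update can be sketched by maintaining the cumulative count and accumulated error per feature type and re-estimating the MAP weight after each sample; the member names `count` and `err_sum` are hypothetical stand-ins for the elided symbols:

```python
class OnlineWeight:
    """Online update of one feature weight from running accumulators.

    Keeps a cumulative sample count and an accumulated squared error, and
    re-estimates the Gamma-posterior MAP weight after each new labeled
    target-domain sample (Gamma(g1, g2) prior assumed, as in the claim).
    """
    def __init__(self, gamma1=2.0, gamma2=1.0):
        self.count = 0       # cumulative number of samples seen
        self.err_sum = 0.0   # accumulated squared error
        self.g1, self.g2 = gamma1, gamma2

    def update(self, residual):
        self.count += 1
        self.err_sum += float(residual) ** 2
        # MAP re-estimate of the posterior weight after this sample
        return (self.g1 + self.count / 2.0 - 1.0) / (self.g2 + self.err_sum / 2.0)
```

Because only two scalars are stored per feature type, the update is constant-time per sample, which is what makes the decision-level integration usable online.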
Step S5: The feature extractors f_i obtained in step S2 extract the different types of features of the target-domain sample under test. Using the transformation matrices {Θ_i}, the common mapping matrix A_0 and the unique mapping matrices {A_i} obtained in step S3, together with the feature weights ξ_i obtained by online updating in step S4, the deep knowledge integrated at the feature level and the decision level is mapped to the label space:
The predicted label of the target under test is obtained by taking the index corresponding to the maximum value of the resulting label-space vector, yielding the target identification result:
This completes the domain-adaptive target recognition.
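End to end, step S5 can be sketched as follows; the per-view combination (A_0 + A_i) Θ_i x_i and the weighted sum across views are assumptions about how the mappings and weights combine, not the patent's exact formula:

```python
import numpy as np

def predict_label(features, thetas, A0, A_list, xis):
    """Map each feature type to the label space and take the weighted argmax.

    features : list of per-view feature vectors x_i
    thetas   : list of transformation matrices Theta_i
    A0       : common mapping matrix
    A_list   : unique mapping matrices A_i
    xis      : decision-level feature weights xi_i
    """
    score = None
    for x, Th, Ai, xi in zip(features, thetas, A_list, xis):
        s = xi * ((A0 + Ai) @ (Th @ x))   # decision-level weighted vote per view
        score = s if score is None else score + s
    return int(np.argmax(score))          # index of maximum = predicted label
```

Feature-level integration enters through the shared A_0, decision-level integration through the weights ξ_i.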
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110987414.8A CN113657541B (en) | 2021-08-26 | 2021-08-26 | Domain self-adaptive target recognition method based on depth knowledge integration |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113657541A true CN113657541A (en) | 2021-11-16 |
CN113657541B CN113657541B (en) | 2023-10-10 |
Family
ID=78492902
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110987414.8A Active CN113657541B (en) | 2021-08-26 | 2021-08-26 | Domain self-adaptive target recognition method based on depth knowledge integration |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113657541B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180024968A1 (en) * | 2016-07-22 | 2018-01-25 | Xerox Corporation | System and method for domain adaptation using marginalized stacked denoising autoencoders with domain prediction regularization |
CN110210545A (en) * | 2019-05-27 | 2019-09-06 | 河海大学 | Infrared remote sensing water body classifier construction method based on transfer learning |
CN110598636A (en) * | 2019-09-09 | 2019-12-20 | 哈尔滨工业大学 | Ship target identification method based on feature migration |
CN111723823A (en) * | 2020-06-24 | 2020-09-29 | 河南科技学院 | Underwater target detection method based on third-party transfer learning |
Non-Patent Citations (1)
Title |
---|
TAO Yang; HU Hao; BAO Lingliang: "Blind domain-adaptive classifier based on feature enhancement of a reconstruction classification network", Information & Communications, no. 06 *
Also Published As
Publication number | Publication date |
---|---|
CN113657541B (en) | 2023-10-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107515895B (en) | Visual target retrieval method and system based on target detection | |
Han et al. | Combining 3D‐CNN and Squeeze‐and‐Excitation Networks for Remote Sensing Sea Ice Image Classification | |
CN109684922B (en) | Multi-model finished dish identification method based on convolutional neural network | |
Liu et al. | Remote sensing image change detection based on information transmission and attention mechanism | |
CN110889865B (en) | Video target tracking method based on local weighted sparse feature selection | |
CN105760821A (en) | Classification and aggregation sparse representation face identification method based on nuclear space | |
CN110991257B (en) | Polarized SAR oil spill detection method based on feature fusion and SVM | |
CN110414616B (en) | Remote sensing image dictionary learning and classifying method utilizing spatial relationship | |
CN114926693A (en) | SAR image small sample identification method and device based on weighted distance | |
CN111882554B (en) | SK-YOLOv 3-based intelligent power line fault detection method | |
CN111639697B (en) | Hyperspectral image classification method based on non-repeated sampling and prototype network | |
CN113344045A (en) | Method for improving SAR ship classification precision by combining HOG characteristics | |
CN111126155B (en) | Pedestrian re-identification method for generating countermeasure network based on semantic constraint | |
CN109558803B (en) | SAR target identification method based on convolutional neural network and NP criterion | |
CN117152503A (en) | Remote sensing image cross-domain small sample classification method based on false tag uncertainty perception | |
CN113627240B (en) | Unmanned aerial vehicle tree species identification method based on improved SSD learning model | |
CN110827327B (en) | Fusion-based long-term target tracking method | |
Liang et al. | Comparison-based convolutional neural networks for cervical Cell/Clumps detection in the limited data scenario | |
Dewari et al. | Agricultural Insect Pest’s Recognition System Using Deep Learning Model | |
CN113902975B (en) | Scene perception data enhancement method for SAR ship detection | |
WO2023273337A1 (en) | Representative feature-based method for detecting dense targets in remote sensing image | |
Kundur et al. | Insect pest image detection and classification using deep learning | |
CN113409351B (en) | Unsupervised field self-adaptive remote sensing image segmentation method based on optimal transmission | |
CN113657541A (en) | Domain adaptive target identification method based on deep knowledge integration | |
Sun et al. | Feature space fusion classification of remote sensing image based on ant colony optimisation algorithm |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||