CN113657541A - Domain adaptive target identification method based on deep knowledge integration - Google Patents

Domain adaptive target identification method based on deep knowledge integration

Info

Publication number
CN113657541A
CN113657541A
Authority: CN (China)
Prior art keywords: matrix, mapping matrix, target, feature, samples
Prior art date
Legal status
Granted
Application number
CN202110987414.8A
Other languages
Chinese (zh)
Other versions
CN113657541B (en)
Inventor
郭贤生
张玉坤
段林甫
陆浩然
袁杨鹏
黄健
李林
Current Assignee
Yangtze River Delta Research Institute of UESTC Huzhou
Original Assignee
Yangtze River Delta Research Institute of UESTC Huzhou
Priority date
Filing date
Publication date
Application filed by Yangtze River Delta Research Institute of UESTC Huzhou
Priority to CN202110987414.8A
Publication of CN113657541A
Application granted
Publication of CN113657541B
Legal status: Active

Classifications

    • G06F18/214 — Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F18/21 — Design or setup of recognition systems or techniques; extraction of features in feature space; blind source separation
    • G06F18/24 — Classification techniques
    • G06N3/045 — Neural networks; combinations of networks
    • G06N3/08 — Neural networks; learning methods
    • Y02T10/40 — Engine management systems (climate-change mitigation tag)


Abstract

The invention belongs to the technical field of target recognition, and in particular relates to a domain-adaptive target recognition method based on deep knowledge integration. The invention realizes deep knowledge integration at both the feature level and the decision level. At the feature level, a common mapping matrix and unique mapping matrices are designed to integrate knowledge and improve the robustness of target recognition: the common mapping matrix fully mines the knowledge shared by heterogeneous features, while the unique mapping matrices retain the knowledge specific to each feature type. At the decision level, feature weights are designed to quantify the importance of the different feature types; these weights are updated with target-domain samples through online learning, which overcomes the data-distribution difference between domains and realizes domain-adaptive target recognition. The proposed method is therefore an intelligent domain-adaptive target recognition method.

Description

Domain adaptive target identification method based on deep knowledge integration
Technical Field
The invention belongs to the technical field of target recognition, and in particular relates to a domain-adaptive target recognition method based on deep knowledge integration.
Background
Automatic target recognition technology identifies and classifies targets from sensor data, and plays an important role in military and civilian fields such as battlefield reconnaissance, terrain exploration and autonomous driving. As the technology has developed, a variety of effective target recognition methods have been proposed one after another. How to combine the advantages of different methods to improve recognition performance has become one of the hot topics in target recognition research.
The document "Q. Yu, H. Hu, X. Geng, Y. Jiang and J. An, 'High-Performance SAR Automatic Target Recognition Under Limited Data Condition Based on a Deep Feature Fusion Network,' IEEE Access, vol. 7, pp. 165646-165658, 2019" proposes a feature-level knowledge-integration method, which concatenates the features extracted by different convolutional layers of a neural network into the feature vector fed to the final classifier, thereby integrating knowledge from features of different scales. However, this method does not explore the relationship between different features, and the common knowledge and unique knowledge that exist among them are ignored. The document "J. Zhang, M. Xing and Y. Xie, 'FEC: A Feature Fusion Framework for SAR Target Recognition Based on Electromagnetic Scattering Features and Deep CNN Features,' IEEE Transactions on Geoscience and Remote Sensing, vol. 59, no. 3, pp. 2174-2187, March 2021" proposes a decision-level knowledge-integration method, which concatenates the decision-level outputs of different methods and trains a new classifier to combine their knowledge; however, this method ignores how important the different features are in the integration. Meanwhile, environmental changes or changes in the observation angle of the target cause distribution differences between source-domain and target-domain data, while most current knowledge-integration methods do not consider domain adaptation; a model learned in the source domain often cannot cope with the changes encountered in the target domain, so its performance there degrades. Research on a domain-adaptive target recognition method based on deep knowledge integration is therefore expected to further improve target recognition performance.
Disclosure of Invention
The purpose of the invention is to overcome the above defects by providing a domain-adaptive target recognition method based on deep knowledge integration. The method realizes domain-adaptive target recognition through deep knowledge integration at the feature level and the decision level, combined with online learning. At the feature level, the invention designs a common mapping matrix and unique mapping matrices to integrate knowledge and improve the robustness of target recognition: the common mapping matrix fully mines the knowledge shared by heterogeneous features, while the unique mapping matrices retain the knowledge specific to each feature type. At the decision level, the invention designs feature weights to quantify the importance of the different feature types; the weights are updated with target-domain samples through online learning, overcoming the data-distribution difference between domains and realizing domain-adaptive target recognition. The proposed method is therefore an intelligent domain-adaptive target recognition method.
The technical scheme of the invention is as follows: a domain-adaptive target recognition method based on deep knowledge integration which, as shown in fig. 1, includes the following steps:

S1, collecting original image samples in the source domain and the target domain respectively, and preprocessing them;

S2, training multiple feature extractors with the source-domain samples, and extracting multiple feature types;

S3, establishing and solving an objective function of the feature-level knowledge-integration model using the different feature types obtained in step S2, specifically comprising:

S31, designing transformation matrices {Θ_i}, a common mapping matrix A_0 and unique mapping matrices {A_i} that map the different feature types into a unified label space, and measuring the discrepancy between the label space and the true labels to obtain the objective function of feature-level knowledge integration;

S32, optimizing and solving the objective function obtained in step S31 with a three-step iterative method to obtain the transformation matrices {Θ_i}, the common mapping matrix A_0 and the unique mapping matrices {A_i};

S4, establishing and solving a decision-level knowledge-integration online model using the target-domain samples, specifically comprising:

S41, processing the labelled target-domain samples using the feature extractors obtained in step S2 and the transformation matrices {Θ_i}, the common mapping matrix A_0 and the unique mapping matrices {A_i} obtained in step S3, designing feature weights, and establishing the decision-level knowledge-integration objective function;

S42, solving the objective function obtained in step S41 to obtain the feature weights;

S5, processing the sample to be tested using the feature extractors obtained in step S2, the transformation matrices {Θ_i}, the common mapping matrix A_0 and the unique mapping matrices {A_i} obtained in step S3, and the feature weights obtained in step S4, to obtain the recognition result.
The beneficial effects of the invention are as follows: the method fully mines the common knowledge and the unique knowledge of heterogeneous features at the feature level, quantifies the importance of the different feature types at the decision level, and updates the feature weights with target-domain data through online learning. It thus realizes deep knowledge integration at both the feature level and the decision level, overcomes the data-distribution difference between domains, improves the robustness of target recognition, and realizes domain-adaptive target recognition. The proposed method is therefore an intelligent domain-adaptive target recognition method.
Drawings
FIG. 1 is a schematic diagram of the framework of the present invention;
FIG. 2 is a flow chart of the algorithm of the present invention;
FIG. 3 is a comparison graph of recognition accuracy for a background art method and a method of the present invention.
Detailed Description
The technical scheme of the invention is described in detail below with reference to the accompanying drawings and embodiments:
as shown in fig. 2, the present invention includes:
step 1, respectively collecting original image samples in a source domain and a target domain and preprocessing the original image samples.
And acquiring original images of each target under different pitch angles in a static state by using a radar, and observing the target under different azimuth angles under each fixed pitch angle. And marking the acquired images as a source domain and a target domain according to the difference of the pitch angles.
In the source domain $\mathcal{D}_S$, a large number of labelled samples are collected and, after cropping and preprocessing, recorded as $Z = \{z_j\}$, $j = 1, 2, \ldots, n$, where $n$ is the total number of samples in the source-domain training set. The labels of all source-domain samples are collected in a matrix $Y \in \{0,1\}^{L \times n}$, where $L$ is the total number of classes. Each column of $Y$ is the one-hot label $[0, \ldots, 0, 1, 0, \ldots, 0]^T$ of the corresponding sample: the $l$-th position is 1 and all remaining elements are 0, indicating label number $l \in \{1, 2, \ldots, L\}$.
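As an illustration of the label convention above, the following is a minimal NumPy sketch (the function name `one_hot_labels` is ours, not from the patent) that builds the L × n one-hot label matrix:

```python
import numpy as np

def one_hot_labels(labels, L):
    """Build the L x n one-hot label matrix: column j has a 1 in row (label_j - 1).

    labels : iterable of class numbers in {1, ..., L} (1-based, as in the text)
    """
    labels = np.asarray(labels)
    n = labels.shape[0]
    Y = np.zeros((L, n))
    Y[labels - 1, np.arange(n)] = 1.0
    return Y

Y = one_hot_labels([1, 3, 2], L=3)  # 3 samples, 3 classes
```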
In the target domain $\mathcal{D}_T$, a small number of labelled samples are collected in an online fashion, i.e. sequentially, one sample at a time. After cropping and preprocessing they are recorded as $\hat{Z} = \{\hat{z}_k\}$, with corresponding one-hot labels $\hat{Y} = \{\hat{y}_k\}$, $k = 1, 2, \ldots, n_T$, where $n_T$ is the total number of labelled target-domain samples.
Step 2: Using the source-domain samples obtained in Step 1, train $N$ feature extractors $f_i(\cdot)$, $i \in \{1, 2, \ldots, N\}$, and record the different types of source-domain features as $X_i = f_i(Z) \in \mathbb{R}^{d_i \times n}$, where $X_i$ denotes the features extracted by $f_i(\cdot)$ from the source-domain samples $Z$ and $d_i$ is the dimension of the $i$-th feature type.
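The patent's extractors are trained models (a CNN and a sparse-representation method in the embodiment below). As a lightweight stand-in that merely shows the expected shapes X_i ∈ R^{d_i × n}, the sketch below uses random linear projections; the helper name, toy sizes, and the choice of random projections are all our assumptions:

```python
import numpy as np

def make_random_projection_extractor(d_in, d_out, rng):
    """Stand-in for a trained extractor f_i: maps each sample (column) to a d_i-dim feature."""
    W = rng.standard_normal((d_out, d_in))
    return lambda Z: W @ Z  # X_i = f_i(Z), shape (d_i, n)

rng = np.random.default_rng(0)
d_in, n = 64, 20                     # toy: 64-dim vectorised samples, 20 of them
Z = rng.standard_normal((d_in, n))   # n preprocessed samples as columns
dims = [128, 256]                    # d_1, d_2, matching the embodiment's dimensions
extractors = [make_random_projection_extractor(d_in, d, rng) for d in dims]
features = [f(Z) for f in extractors]  # each X_i in R^{d_i x n}
```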
Step 3: Establish and solve the objective function of the feature-level knowledge-integration model from the different feature types obtained in Step 2. The specific steps are as follows.

Step 3-1: Design transformation matrices $\{\Theta_i\}$, a common mapping matrix $A_0$ and unique mapping matrices $\{A_i\}$ that map the different feature types into a unified label space, measure the discrepancy between the label space and the true labels, and establish the feature-level knowledge-integration model. The total objective function is recorded as

$$\mathcal{J} = \mathcal{L}\left(A_0, \{A_i\}, \{\Theta_i\}\right) + \Omega\left(A_0, \{A_i\}, \{\Theta_i\}\right) \tag{1}$$

where $\mathcal{L}$ denotes the classification loss and $\Omega$ denotes the regularization term. Expanded, the objective is

$$\mathcal{J} = \sum_{i=1}^{N}\left\|(A_0 + A_i)\Theta_i X_i - Y\right\|_F^2 + \alpha\|A_0\|_F^2 + \beta\sum_{i=1}^{N}\|A_i\|_F^2 + \gamma\sum_{i=1}^{N}\|\Theta_i\|_F^2 \tag{2}$$

where $\Theta_i \in \mathbb{R}^{M \times d_i}$ maps features of different dimensions to a specified dimension $M$, and $A_0 + A_i \in \mathbb{R}^{L \times M}$ further maps the $i$-th feature type into the unified label space. The Frobenius norm measures the discrepancy between the features in the label space and the true labels, giving the classification-loss term; Frobenius-norm constraints on the transformation matrices $\{\Theta_i\}$, the common mapping matrix $A_0$ and the unique mapping matrices $\{A_i\}$ give the regularization term, with regularization coefficients $\alpha$, $\beta$, $\gamma$ controlling the degree of parameter regularization. $A_0$, $\{A_i\}$ and $\{\Theta_i\}$ are obtained by minimizing the objective function:

$$\left(A_0, \{A_i\}, \{\Theta_i\}\right) = \arg\min_{A_0, \{A_i\}, \{\Theta_i\}} \mathcal{J}\left(A_0, \{A_i\}, \{\Theta_i\}\right). \tag{3}$$
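The feature-level objective described above (classification loss plus Frobenius-norm regularizers on A_0, {A_i} and {Θ_i}) can be evaluated with a short NumPy sketch; the function name and argument layout are our assumptions:

```python
import numpy as np

def feature_level_objective(A0, A_list, Theta_list, X_list, Y, alpha, beta, gamma):
    """Classification loss sum_i ||(A0+A_i) Theta_i X_i - Y||_F^2 plus Frobenius regularizers."""
    loss = sum(np.linalg.norm((A0 + Ai) @ Ti @ Xi - Y, 'fro') ** 2
               for Ai, Ti, Xi in zip(A_list, Theta_list, X_list))
    reg = (alpha * np.linalg.norm(A0, 'fro') ** 2
           + beta * sum(np.linalg.norm(Ai, 'fro') ** 2 for Ai in A_list)
           + gamma * sum(np.linalg.norm(Ti, 'fro') ** 2 for Ti in Theta_list))
    return loss + reg

# toy check: with all matrices zero, only the loss term N * ||Y||_F^2 remains
rng = np.random.default_rng(1)
L_, M_, n_ = 3, 4, 6
X_list = [rng.standard_normal((5, n_)), rng.standard_normal((7, n_))]
Y_ = np.zeros((L_, n_)); Y_[0] = 1.0                       # ||Y||_F^2 = 6
J0 = feature_level_objective(np.zeros((L_, M_)),
                             [np.zeros((L_, M_)) for _ in range(2)],
                             [np.zeros((M_, 5)), np.zeros((M_, 7))],
                             X_list, Y_, 1.0, 1.0, 1.0)
```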
Step 3-2: Optimize formula (3) obtained in Step 3-1 with a three-step iterative method, as follows.

Step 3-2-1: Fix the transformation matrices $\{\Theta_i\}$ and the unique mapping matrices $\{A_i\}$ and optimize the common mapping matrix $A_0$; the objective function becomes

$$\min_{A_0} \sum_{i=1}^{N}\left\|(A_0 + A_i)\Theta_i X_i - Y\right\|_F^2 + \alpha\|A_0\|_F^2. \tag{4}$$

This is an unconstrained optimization problem with the closed-form solution

$$A_0 = \left[\sum_{i=1}^{N}(Y - A_i\Theta_i X_i)(\Theta_i X_i)^T\right]\left[\sum_{i=1}^{N}\Theta_i X_i(\Theta_i X_i)^T + \alpha I\right]^{-1} \tag{5}$$

where $I$ is the $M$-order identity matrix.
Step 3-2-2: fixed transformation matrix { ΘiAnd common mapping matrix A0For the unique mapping matrix { AiOptimizing, and converting the objective function into:
Figure BDA0003231190620000051
splitting equation (6) into N independent unconstrained least squares problems, each of which can be expressed as:
Figure BDA0003231190620000052
solving to obtain an optimized unique mapping matrix { Ai}:
Figure BDA0003231190620000053
Step 3-2-3: finally, the public mapping matrix A is fixed0And a unique mapping matrix { A }iH, for transformation matrix { theta }iOptimizing, and the objective function is:
Figure BDA0003231190620000054
the same solving process as the step 3-2-2 is carried out to obtain an optimized transformation matrix { theta }i}:
Figure BDA0003231190620000055
wherein ,I1Is diOrder unit array, I2Is an M-order unit matrix.
Iterating steps 3-2-1, 3-2-2 and 3-2-3 until convergence, and obtaining a transformation matrix { theta }i}, public mappingsMatrix A0And a unique mapping matrix { A }i-considering the algorithm to converge when the degree of variation of the objective function equation (2) is less than a given threshold δ:
Figure BDA0003231190620000056
where t is the number of iterations.
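The three-step iteration of Steps 3-2-1 to 3-2-3 can be sketched in NumPy as alternating ridge-type updates. This is a minimal reconstruction under our notational assumptions (A_0, A_i ∈ R^{L×M}, Θ_i ∈ R^{M×d_i}), not the patent's reference implementation; since each sub-update solves its convex subproblem exactly, the objective is non-increasing across iterations:

```python
import numpy as np

def solve_feature_level(X_list, Y, M, alpha, beta, gamma, delta=1e-6, max_iter=100, seed=0):
    """Alternating minimisation sketch: X_list holds the (d_i, n) feature matrices,
    Y is the (L, n) one-hot label matrix. Returns A0, {A_i}, {Theta_i} and the
    objective-value history."""
    rng = np.random.default_rng(seed)
    L, N = Y.shape[0], len(X_list)
    Theta = [0.01 * rng.standard_normal((M, Xi.shape[0])) for Xi in X_list]
    A0 = np.zeros((L, M))
    A = [np.zeros((L, M)) for _ in range(N)]

    def objective():
        loss = sum(np.linalg.norm((A0 + A[i]) @ Theta[i] @ X_list[i] - Y, 'fro') ** 2
                   for i in range(N))
        return (loss + alpha * np.linalg.norm(A0, 'fro') ** 2
                + beta * sum(np.linalg.norm(Ai, 'fro') ** 2 for Ai in A)
                + gamma * sum(np.linalg.norm(Ti, 'fro') ** 2 for Ti in Theta))

    history = [objective()]
    for _ in range(max_iter):
        P = [Theta[i] @ X_list[i] for i in range(N)]          # P_i = Theta_i X_i
        # Step 1: common mapping matrix A0 (ridge closed form; the identity is M x M)
        lhs = sum(Pi @ Pi.T for Pi in P) + alpha * np.eye(M)  # symmetric
        rhs = sum((Y - A[i] @ P[i]) @ P[i].T for i in range(N))
        A0 = np.linalg.solve(lhs, rhs.T).T
        # Step 2: each unique mapping matrix A_i independently (ridge closed form)
        for i in range(N):
            lhs_i = P[i] @ P[i].T + beta * np.eye(M)
            A[i] = np.linalg.solve(lhs_i, ((Y - A0 @ P[i]) @ P[i].T).T).T
        # Step 3: each Theta_i via the vectorised (Kronecker) normal equations
        for i, Xi in enumerate(X_list):
            B = A0 + A[i]
            d_i = Xi.shape[0]
            G = np.kron(Xi @ Xi.T, B.T @ B) + gamma * np.eye(d_i * M)
            v = np.linalg.solve(G, (B.T @ Y @ Xi.T).reshape(-1, order='F'))
            Theta[i] = v.reshape((M, d_i), order='F')
        history.append(objective())
        if abs(history[-2] - history[-1]) / max(history[-2], 1e-12) < delta:
            break
    return A0, A, Theta, history
```

Each of the three updates is the exact minimiser of its ridge subproblem, which is why the relative-change stopping rule of equation (11) is safe to use here.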
Step 4: Using the transformation matrices $\{\Theta_i\}$, the common mapping matrix $A_0$ and the unique mapping matrices $\{A_i\}$ obtained in Step 3, establish and solve the decision-level knowledge-integration online model with the target-domain samples. The specific steps are as follows.

Step 4-1: Using the multiple feature extractors obtained in Step 2, extract the multiple feature types $\hat{X}_i = f_i(\hat{Z}) \in \mathbb{R}^{d_i \times n_T}$, $i \in \{1, 2, \ldots, N\}$, from the labelled target-domain samples $\hat{Z}$ obtained in Step 1. With the transformation matrices $\{\Theta_i\}$, the common mapping matrix $A_0$ and the unique mapping matrices $\{A_i\}$ from Step 3, compute the corresponding transformed features of the target-domain samples in the label space:

$$\hat{Y}_i = (A_0 + A_i)\Theta_i \hat{X}_i. \tag{12}$$

To explore the importance of the different feature types in the target domain, the objective function of the decision-level knowledge-integration online model is established:

$$\min_{\{\xi_i\}} \sum_{i=1}^{N}\left(\xi_i e_i^2 - \lambda\log\xi_i\right) \tag{13}$$

where $\xi_i$ represents the importance of the $i$-th transformed feature $\hat{Y}_i$; $e_i = \|\hat{Y}_i - \hat{Y}\|_F$ describes the distance between the transformed features and the true labels; $\log\xi_i$ is a constraint term that prevents the estimated $\xi_i$ from getting very close to 0; and $\lambda$ is a balancing parameter.
Step 4-2, carrying out optimization solution on the decision-level knowledge integration online model established in the step 4-1, and specifically comprising the following steps:
and 4-2-1, converting the objective function into a minimized logarithm function problem.
Note the book
Figure BDA0003231190620000065
Given feature weight ξiIs provided with
Figure BDA0003231190620000066
Obey normal distribution
Figure BDA0003231190620000067
With a likelihood function:
Figure BDA0003231190620000068
the maximum likelihood function equation (14) is equivalent to minimizing its negative log function:
Figure BDA0003231190620000069
it can be seen that minimizing equation (15) is equivalent to solving equation (13).
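The equivalence noted above can be checked numerically: for a Gaussian likelihood parameterized by precision ξ, the negative log-likelihood is minimized at ξ = 1/e². A small sketch under that assumption (the function name is ours):

```python
import numpy as np

def neg_log_likelihood(xi, e):
    """Negative log of the Gaussian likelihood with precision xi (additive constants dropped)."""
    return 0.5 * xi * e ** 2 - 0.5 * np.log(xi)

# The analytic minimiser over xi is 1 / e^2; check it on a grid for e = 0.5.
e = 0.5
grid = np.linspace(1e-3, 20.0, 200001)
xi_star = grid[np.argmin(neg_log_likelihood(grid, e))]  # should be close to 1 / 0.25 = 4
```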
Step 4-2-2: Further convert the objective function into a maximum a posteriori (MAP) estimation problem and solve for the feature weights.

Assume $\xi_i$ obeys the Gamma distribution $\xi_i \sim \Gamma(\gamma_1, \gamma_2)$, where $\gamma_1$ and $\gamma_2$ are parameters, with probability density function

$$p(\xi_i) = \frac{\gamma_2^{\gamma_1}}{\Gamma(\gamma_1)}\,\xi_i^{\gamma_1 - 1} e^{-\gamma_2 \xi_i}. \tag{16}$$

The posterior probability distribution of $\xi_i$ is computed as

$$p(\xi_i \mid e_i) \propto p(e_i \mid \xi_i)\,p(\xi_i) \propto \xi_i^{\gamma_1 + \frac{1}{2} - 1}\exp\left[-\left(\gamma_2 + \frac{e_i^2}{2}\right)\xi_i\right]. \tag{17}$$

Thus the posterior of $\xi_i$ is again a Gamma distribution, $\Gamma\!\left(\gamma_1 + \frac{1}{2},\ \gamma_2 + \frac{e_i^2}{2}\right)$. Given one labelled target-domain sample, the feature weight is estimated by maximizing the posterior probability (the posterior mode):

$$\xi_i = \frac{\gamma_1 + \frac{1}{2} - 1}{\gamma_2 + \frac{e_i^2}{2}}. \tag{18}$$
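The MAP estimate can be sketched as the mode of the Gamma posterior; a grid check of the unnormalized log posterior confirms the closed form. Function names and the example values are our assumptions:

```python
import numpy as np

def map_feature_weight(e, gamma1, gamma2):
    """Mode of the Gamma posterior Gamma(gamma1 + 1/2, gamma2 + e^2 / 2)."""
    return (gamma1 + 0.5 - 1.0) / (gamma2 + 0.5 * e ** 2)

def log_posterior(xi, e, gamma1, gamma2):
    # unnormalised log posterior: Gamma(gamma1, gamma2) prior times the precision-xi Gaussian likelihood
    return (gamma1 + 0.5 - 1.0) * np.log(xi) - (gamma2 + 0.5 * e ** 2) * xi

xi_map = map_feature_weight(e=1.0, gamma1=10.0, gamma2=5.0)  # (10 + 0.5 - 1) / (5 + 0.5)
```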
Step 4-2-3: Update the solved feature weights in an online fashion.

Let $c_i^{(k)}$ denote the cumulative count of feature-weight updates after the $k$-th labelled target-domain sample, $c_i^{(k)} = c_i^{(k-1)} + 1$, and let $E_i^{(k)}$ denote the corresponding accumulated error, $E_i^{(k)} = E_i^{(k-1)} + \left(e_i^{(k)}\right)^2$, where $c_i^{(0)}$ and $E_i^{(0)}$ are the corresponding initial values. The feature weights can then be updated online:

$$\xi_i^{(k)} = \frac{\gamma_1 + \frac{c_i^{(k)}}{2} - 1}{\gamma_2 + \frac{E_i^{(k)}}{2}}. \tag{19}$$
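The online update can be sketched as a small accumulator per feature type. The exact accumulator form below is our reconstruction of the description above (count and accumulated squared error feeding the Gamma-posterior mode), not verbatim from the patent:

```python
class OnlineFeatureWeight:
    """Per-feature-type accumulator: cumulative update count c_i and accumulated squared error E_i."""

    def __init__(self, gamma1, gamma2, count0=0, err0=0.0):
        self.gamma1, self.gamma2 = gamma1, gamma2
        self.count = count0   # c_i: labelled target-domain samples folded in so far
        self.err = err0       # E_i: accumulated squared error of this feature type

    def update(self, e):
        """Fold in one labelled sample's error e and return the updated MAP weight."""
        self.count += 1
        self.err += e ** 2
        # posterior after c_i samples: Gamma(gamma1 + c_i/2, gamma2 + E_i/2); return its mode
        return (self.gamma1 + 0.5 * self.count - 1.0) / (self.gamma2 + 0.5 * self.err)
```

One accumulator per feature type keeps the update O(1) per incoming labelled sample, which is what makes the decision-level model "online".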
Step 5: Using the feature extractors $f_i(\cdot)$ obtained in Step 2, extract the different feature types of a target-domain sample to be tested, denoted $x_i^{\mathrm{test}}$, $i \in \{1, 2, \ldots, N\}$. With the transformation matrices $\{\Theta_i\}$, the common mapping matrix $A_0$ and the unique mapping matrices $\{A_i\}$ obtained in Step 3, together with the online-updated feature weights $\xi_i$ from Step 4, map the sample into the label space through the deep knowledge integration of the feature level and the decision level:

$$y^{\mathrm{test}} = \sum_{i=1}^{N}\xi_i\,(A_0 + A_i)\Theta_i\, x_i^{\mathrm{test}}. \tag{20}$$

The predicted label number of the target to be tested is the index of the maximum element of $y^{\mathrm{test}}$, giving the target recognition result:

$$\hat{l} = \arg\max_{l \in \{1, \ldots, L\}}\ \left[y^{\mathrm{test}}\right]_l. \tag{21}$$
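Step 5's fused prediction can be sketched as a weighted sum over feature types followed by an argmax. The 1-based label convention follows the text; all names and the toy matrices are our assumptions:

```python
import numpy as np

def predict_label(x_list, A0, A_list, Theta_list, xi):
    """Fuse the N feature types of one test sample into the label space and take the argmax.

    x_list : list of d_i-dimensional feature vectors of the sample to be tested
    xi     : list of feature weights from the online update
    """
    y = sum(xi[i] * (A0 + A_list[i]) @ Theta_list[i] @ x_list[i]
            for i in range(len(x_list)))
    return int(np.argmax(y)) + 1   # 1-based label number, matching the text

# toy check: one feature type, mapping the first feature coordinate onto class 2
A0 = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 0.0]])   # L = 3 classes, M = 2
label = predict_label([np.array([1.0, 0.0])], A0,
                      [np.zeros((3, 2))], [np.eye(2)], [1.0])
```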
examples
The model is tested on the ten-class target set of the MSTAR (Moving and Stationary Target Acquisition and Recognition) data set. The sensor used to acquire the data set is a high-resolution spotlight synthetic aperture radar operating in the X band with HH polarization, at a resolution of 0.3 m × 0.3 m. The acquired data are preprocessed, and slice images of 128 × 128 pixels containing the various targets are extracted. The data are mostly SAR slice images of stationary vehicles, covering ten target classes: BMP2, T72, BTR70, 2S1, BRDM2, BTR60, D7, T62, ZIL131 and ZSU234. Sample data observed at a pitch angle of 17° are taken as source-domain samples and sample data observed at 15° as target-domain samples; this setting, covering all 10 target classes, is recorded as the standard operating condition, with the specific sample numbers listed in Table 1. Sample data observed at 17° are taken as source-domain samples and sample data observed at 30° as target-domain samples; this setting, covering 3 target classes, is recorded as the extended operating condition, with the specific sample numbers listed in Table 2.
TABLE 1 number of specific samples under standard operating conditions
TABLE 2 number of specific samples under the extended operating condition
To remove the effect of background clutter, each sample image is center-cropped to 64 × 64. In this embodiment, a convolutional neural network and a sparse-representation method are used as the two feature extractors, so N = 2, with extracted feature dimensions d₁ = 128 and d₂ = 256. The remaining parameters are set to M = 25, α = 10³, β = γ = 10², λ = 10, γ₁ = 10, γ₂ = 5, and the convergence threshold is set to δ = 10⁻³. All source-domain samples are used as labelled source-domain samples; 10 samples are randomly drawn from the target-domain samples as labelled target-domain samples to participate in training, and the remaining samples are used as samples to be tested.
Experiments under different operating conditions were designed to verify the superiority of the proposed algorithm, comparing the performance of the background-art methods and the proposed method under the standard and extended operating conditions. As shown in Fig. 3, under the standard operating condition the target-domain samples are observed at a pitch angle of 15° and the source domain at 17°, so the data-distribution difference between them is small; both the proposed method and the background-art methods achieve good recognition performance, with recognition rates above 95%, and the proposed method achieves the highest rate at 98.6%. Under the extended operating condition, the target-domain samples are observed at a pitch angle of 30° and differ more strongly from the source-domain samples; the recognition performance of the background-art methods drops considerably, while the proposed method still maintains a recognition accuracy of 98.4%, a clear advantage. In summary, the experimental results demonstrate that the method effectively combines the advantages of different feature types, realizes domain adaptation, improves the accuracy of target recognition, and enhances the generalization ability of the model.

Claims (5)

1. A domain-adaptive target recognition method based on deep knowledge integration, characterized by comprising the following steps:

S1, collecting original image samples in the source domain and the target domain respectively, and preprocessing them;

S2, training multiple feature extractors with the source-domain samples, and extracting multiple feature types;

S3, establishing and solving an objective function of the feature-level knowledge-integration model using the different feature types obtained in step S2, specifically comprising:

S31, designing transformation matrices {Θ_i}, a common mapping matrix A_0 and unique mapping matrices {A_i} that map the different feature types into a unified label space, and measuring the discrepancy between the label space and the true labels to obtain the objective function of feature-level knowledge integration;

S32, optimizing and solving the objective function obtained in step S31 with a three-step iterative method to obtain the transformation matrices {Θ_i}, the common mapping matrix A_0 and the unique mapping matrices {A_i};

S4, establishing and solving a decision-level knowledge-integration online model using the target-domain samples, specifically comprising:

S41, processing the labelled target-domain samples using the feature extractors obtained in step S2 and the transformation matrices {Θ_i}, the common mapping matrix A_0 and the unique mapping matrices {A_i} obtained in step S3, designing feature weights, and establishing the decision-level knowledge-integration objective function;

S42, solving the objective function obtained in step S41 to obtain the feature weights;

S5, processing the sample to be tested using the feature extractors obtained in step S2, the transformation matrices {Θ_i}, the common mapping matrix A_0 and the unique mapping matrices {A_i} obtained in step S3, and the feature weights obtained in step S4, to obtain the recognition result.
2. The domain-adaptive target recognition method based on deep knowledge integration according to claim 1, wherein the specific method of step S1 is as follows:

a radar acquires original images of each stationary target at different pitch angles, the target being observed at different azimuth angles at each fixed pitch angle, and the acquired images are assigned to the source domain or the target domain according to their pitch angle;

in the source domain $\mathcal{D}_S$, labelled samples are collected and, after cropping and preprocessing, recorded as $Z = \{z_j\}$, $j = 1, 2, \ldots, n$, where $n$ is the total number of samples in the source-domain training set; the labels of all source-domain samples are collected in a matrix $Y \in \{0,1\}^{L \times n}$, where $L$ is the total number of classes; each column of $Y$ is the one-hot label $[0, \ldots, 0, 1, 0, \ldots, 0]^T$ of the corresponding sample, in which the $l$-th position is 1 and all remaining elements are 0, indicating label number $l \in \{1, 2, \ldots, L\}$;

in the target domain $\mathcal{D}_T$, labelled samples are collected in an online fashion, i.e. sequentially, one sample at a time; after cropping and preprocessing they are recorded as $\hat{Z} = \{\hat{z}_k\}$, with corresponding one-hot labels $\hat{Y} = \{\hat{y}_k\}$, $k = 1, 2, \ldots, n_T$, where $n_T$ is the total number of labelled target-domain samples.
3. The domain-adaptive target recognition method based on deep knowledge integration according to claim 2, wherein the specific method of step S2 is as follows:

using the source-domain samples obtained in step S1, train $N$ feature extractors $f_i(\cdot)$, $i \in \{1, 2, \ldots, N\}$, and record the different types of source-domain features as $X_i = f_i(Z) \in \mathbb{R}^{d_i \times n}$, where $X_i$ denotes the features extracted by $f_i(\cdot)$ from the source-domain samples $Z$ and $d_i$ is the dimension of the $i$-th feature type.
4. The method for field adaptive target recognition based on deep knowledge integration according to claim 3, wherein the specific method of step S3 is as follows:
Step S31, designing transformation matrices {Θi}, a common mapping matrix A0 and unique mapping matrices {Ai}, mapping the different types of features to a unified label space, measuring the difference between the mapped features and the real labels, and establishing a feature-level knowledge integration model, whose objective function is:

[equation (2) image omitted]

wherein the first term denotes the classification loss and the second denotes the regularization term; the transformation matrices {Θi} map features of different dimensions to a specified dimension M; the common mapping matrix A0 and the unique mapping matrices {Ai} map the different types of features to the unified label space. The difference between the features in the label space and the real labels is measured with the Frobenius norm to obtain the classification loss term; Frobenius norm constraints are imposed on the transformation matrices {Θi}, the common mapping matrix A0 and the unique mapping matrices {Ai} to obtain the regularization term, where α, β and γ are regularization coefficients controlling the degree of parameter regularization. A0, {Ai} and {Θi} are obtained by minimizing the objective function:

[equation (3) image omitted]
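The feature-level objective appears only as an image in the patent; a plausible concrete form consistent with the surrounding description (a Frobenius-norm classification loss in a shared label space plus Frobenius-norm regularization weighted by α, β, γ) would be the following, where Y is the label matrix and Xi the i-th class of source features — the exact arrangement (A0 + Ai)ΘiXi is an assumption, not the patent's verbatim formula:

```latex
\min_{A_0,\{A_i\},\{\Theta_i\}}
\sum_{i=1}^{N} \bigl\| Y - (A_0 + A_i)\,\Theta_i X_i \bigr\|_F^2
\;+\; \alpha \|A_0\|_F^2
\;+\; \beta \sum_{i=1}^{N} \|A_i\|_F^2
\;+\; \gamma \sum_{i=1}^{N} \|\Theta_i\|_F^2,
\qquad \Theta_i \in \mathbb{R}^{M \times d_i}
```

Under this form, each of the three alternating subproblems in step S32 is a ridge-regression problem with a closed-form solution, which matches the structure of the solving procedure described below.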
Step S32, solving formula (3) obtained in step S31 by a three-step iterative method, the specific steps being as follows:
Step S321, fixing the transformation matrices {Θi} and the unique mapping matrices {Ai}, and optimizing the common mapping matrix A0; the objective function becomes:

[equation (4) image omitted]

This is an unconstrained optimization problem, whose closed-form solution is:

[equation (5) image omitted]

wherein I is an identity matrix of order M;
Step S322: fixing the transformation matrices {Θi} and the common mapping matrix A0, and optimizing the unique mapping matrices {Ai}; the objective function becomes:

[equation (6) image omitted]

Equation (6) splits into N independent unconstrained least squares problems, each of which can be expressed as:

[equation (7) image omitted]

Solving yields the optimized unique mapping matrices {Ai}:

[equation (8) image omitted]
Step S323: finally, the public mapping matrix A is fixed0And a unique mapping matrix { A }iH, for transformation matrix { theta }iOptimizing, and the objective function is:
Figure FDA0003231190610000036
obtaining an optimized transformation matrix { theta ] in the same process as the solving process of the step S322i}:
Figure FDA0003231190610000037
321. Wherein, I1Is diOrder unit array, I2Is an M-order unit matrix.
Steps S321, S322 and S323 are iterated until convergence, yielding the transformation matrices {Θi}, the common mapping matrix A0 and the unique mapping matrices {Ai}; the algorithm is considered converged when the degree of variation of the objective function (2) is less than a given threshold δ:

[equation (11) image omitted]

where t is the number of iterations.
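As an illustration only, the three-step iteration of step S32 can be sketched numerically. Since equation images (4)-(11) are not reproduced in this text, the objective form (a ridge-style loss with mapping (A0 + Ai)ΘiXi), the variable names, and the exact update formulas below are all assumptions consistent with the claim's description, not the patent's verbatim method:

```python
import numpy as np

def three_step_iteration(Xs, Y, M, alpha=1.0, beta=1.0, gamma=1.0,
                         delta=1e-6, max_iter=100, seed=0):
    """Alternating minimization sketch for the assumed objective
    sum_i ||Y - (A0 + A_i) Theta_i X_i||_F^2 + alpha*||A0||^2
    + beta*sum_i ||A_i||^2 + gamma*sum_i ||Theta_i||^2."""
    rng = np.random.default_rng(seed)
    N = len(Xs)
    C = Y.shape[0]                                   # label-space dimension
    Thetas = [rng.standard_normal((M, X.shape[0])) * 0.1 for X in Xs]
    A0 = np.zeros((C, M))
    As = [np.zeros((C, M)) for _ in range(N)]

    def objective():
        loss = sum(np.linalg.norm(Y - (A0 + As[i]) @ Thetas[i] @ Xs[i]) ** 2
                   for i in range(N))
        reg = (alpha * np.linalg.norm(A0) ** 2
               + beta * sum(np.linalg.norm(A) ** 2 for A in As)
               + gamma * sum(np.linalg.norm(T) ** 2 for T in Thetas))
        return loss + reg

    prev = objective()
    for _ in range(max_iter):
        # S321: closed-form ridge solution for the common mapping matrix A0
        G = sum((Thetas[i] @ Xs[i]) @ (Thetas[i] @ Xs[i]).T for i in range(N))
        R = sum((Y - As[i] @ Thetas[i] @ Xs[i]) @ (Thetas[i] @ Xs[i]).T
                for i in range(N))
        A0 = R @ np.linalg.inv(G + alpha * np.eye(M))
        # S322: N independent ridge problems, one per unique matrix A_i
        for i in range(N):
            P = Thetas[i] @ Xs[i]
            As[i] = (Y - A0 @ P) @ P.T @ np.linalg.inv(P @ P.T + beta * np.eye(M))
        # S323: vectorized (Kronecker) ridge solution for each Theta_i,
        # from B^T B Theta X X^T + gamma*Theta = B^T Y X^T with B = A0 + A_i
        for i in range(N):
            B = A0 + As[i]
            d = Xs[i].shape[0]
            K = np.kron(Xs[i] @ Xs[i].T, B.T @ B) + gamma * np.eye(d * M)
            rhs = (B.T @ Y @ Xs[i].T).reshape(-1, order='F')
            Thetas[i] = np.linalg.solve(K, rhs).reshape(M, d, order='F')
        cur = objective()
        if abs(prev - cur) / max(prev, 1e-12) < delta:  # convergence test (11)
            break
        prev = cur
    return A0, As, Thetas
```

Each of the three updates is the exact minimizer of its own subproblem, so the objective is non-increasing across iterations and the relative-change test mirrors the convergence criterion with threshold δ.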
5. The domain adaptive target identification method based on deep knowledge integration according to claim 4, wherein the specific method of step S4 is as follows:
Step S41, for the labeled target domain samples obtained in step S1 [notation image omitted], extracting multiple types of features [notation images omitted], where i ∈ {1, 2, …, N}, with the multiple feature extractors obtained in step S2; according to the transformation matrices {Θi}, the common mapping matrix A0 and the unique mapping matrices {Ai} of step S3, calculating the corresponding transformed features obtained after the different types of features of the target domain samples are transformed to the label space:

[equation (12) image omitted]
To obtain the importance degrees of the different features in the target domain, a decision-level knowledge integration online model objective function is established:

[equation (13) image omitted]

wherein ξi represents the importance degree of the i-th class of transformed features; the distance term describes the distance between the transformed features and the real labels; log(ξi) is a constraint term that prevents the estimated ξi from being very close to 0; and λ is a balance parameter;
Step S42, solving the decision-level knowledge integration online model established in step S41, specifically comprising the following steps:

Step S421, converting the objective function into a negative log-likelihood minimization problem:

Denote the residual between the transformed features and the real label by [notation image omitted]. Given the feature weight ξi, the residual is assumed to obey a normal distribution [notation image omitted], with likelihood function:

[equation (14) image omitted]

Maximizing the likelihood function (14) is equivalent to minimizing its negative log function:

[equation (15) image omitted]

so that minimizing formula (15) is equivalent to solving formula (13);
Step S422, further converting the objective function into a maximum a posteriori probability estimation problem and solving for the feature weights:

Let ξi obey the distribution ξi ~ Γ(γ1, γ2), where γ1 and γ2 are parameters; the probability density function is then:

[equation (16) image omitted]

The posterior probability distribution of ξi is calculated as:

[equation (17) image omitted]

Thus the posterior distribution of ξi is again a gamma distribution. Given each labeled sample in the target domain, the feature weights are obtained by maximum a posteriori estimation:

[equation (18) image omitted]
Step S423, updating the solved feature weights in an online manner:

Let [notation image omitted] denote the cumulative count of feature-weight updates over the samples processed so far, and let [notation image omitted] denote the corresponding accumulated error, with the respective initial values as given; the feature weights are then updated online:

[equation (19) image omitted]
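A minimal sketch of the MAP weight estimate (step S422) together with the online update (step S423), assuming a Gamma(γ1, γ2) shape/rate prior on ξi and Gaussian residuals with precision ξi, so that the posterior is again a gamma distribution whose mode gives the MAP estimate. The bookkeeping (cumulative update count, accumulated error) follows the claim, but since equations (16)-(19) are images, the concrete formulas here are assumptions:

```python
def map_feature_weight(count, acc_error, gamma1=2.0, gamma2=1.0, dim=1):
    """MAP estimate of a feature weight xi under the assumed model:
    prior xi ~ Gamma(gamma1, gamma2) (shape/rate) and Gaussian residuals
    with precision xi.  After `count` labeled samples with accumulated
    squared error `acc_error`, the posterior is
    Gamma(gamma1 + count*dim/2, gamma2 + acc_error/2); return its mode."""
    shape = gamma1 + 0.5 * dim * count
    rate = gamma2 + 0.5 * acc_error
    return max(shape - 1.0, 0.0) / rate

class OnlineWeight:
    """Online update sketch: keep a cumulative update count and an
    accumulated error per feature type, refreshing the MAP weight as
    each labeled target-domain sample arrives."""
    def __init__(self, gamma1=2.0, gamma2=1.0, dim=1):
        self.count, self.acc_error = 0, 0.0
        self.gamma1, self.gamma2, self.dim = gamma1, gamma2, dim
        self.xi = map_feature_weight(0, 0.0, gamma1, gamma2, dim)

    def update(self, sq_error):
        # Accumulate the count and the squared error, then re-estimate xi.
        self.count += 1
        self.acc_error += sq_error
        self.xi = map_feature_weight(self.count, self.acc_error,
                                     self.gamma1, self.gamma2, self.dim)
        return self.xi
```

With this update, feature types whose transformed features track the real labels (small accumulated error) receive large weights ξi, while persistently inaccurate ones are down-weighted but never driven exactly to 0, as the log-barrier constraint in the objective intends.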
Step S5, extracting the different types of features of the target domain sample to be tested with the feature extractors fi(·) obtained in step S2, denoted [notation image omitted]; using the transformation matrices {Θi}, the common mapping matrix A0 and the unique mapping matrices {Ai} obtained in step S3, together with the feature weights ξi updated online in step S4, mapping the features, after feature-level and decision-level deep knowledge integration, to the label space:

[equation (20) image omitted]

Taking the index corresponding to the maximum value gives the predicted label of the target to be tested and the target identification result:

[equation (21) image omitted]

thereby completing the domain adaptive target identification.
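Step S5's fused prediction can be sketched as follows, assuming the feature-level mapping (A0 + Ai)Θi x described in the earlier claims and a ξi-weighted sum at the decision level; equations (20)-(21) are images in the patent, so this arrangement is an assumption:

```python
import numpy as np

def predict(x_feats, A0, As, Thetas, xis):
    """Decision-level fusion sketch: map each feature type of one test
    sample to the label space via (A0 + A_i) @ Theta_i @ x_i, combine the
    per-type scores with the online feature weights xi_i, and take the
    argmax as the predicted label."""
    score = sum(xi * (A0 + Ai) @ Ti @ x
                for x, Ai, Ti, xi in zip(x_feats, As, Thetas, xis))
    return int(np.argmax(score))
```

For example, with an identity mapping and a single feature type, the predicted label is simply the index of the largest score component.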
CN202110987414.8A 2021-08-26 2021-08-26 Domain self-adaptive target recognition method based on depth knowledge integration Active CN113657541B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110987414.8A CN113657541B (en) 2021-08-26 2021-08-26 Domain self-adaptive target recognition method based on depth knowledge integration


Publications (2)

Publication Number Publication Date
CN113657541A true CN113657541A (en) 2021-11-16
CN113657541B CN113657541B (en) 2023-10-10

Family

ID=78492902

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110987414.8A Active CN113657541B (en) 2021-08-26 2021-08-26 Domain self-adaptive target recognition method based on depth knowledge integration

Country Status (1)

Country Link
CN (1) CN113657541B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180024968A1 (en) * 2016-07-22 2018-01-25 Xerox Corporation System and method for domain adaptation using marginalized stacked denoising autoencoders with domain prediction regularization
CN110210545A (en) * 2019-05-27 2019-09-06 河海大学 Infrared remote sensing water body classifier construction method based on transfer learning
CN110598636A (en) * 2019-09-09 2019-12-20 哈尔滨工业大学 Ship target identification method based on feature migration
CN111723823A (en) * 2020-06-24 2020-09-29 河南科技学院 Underwater target detection method based on third-party transfer learning


Non-Patent Citations (1)

Title
TAO Yang; HU Hao; BAO Lingliang: "Blind domain adaptive classifier based on feature enhancement of a reconstruction classification network", Information & Communications, no. 06 *

Also Published As

Publication number Publication date
CN113657541B (en) 2023-10-10

Similar Documents

Publication Publication Date Title
CN107515895B (en) Visual target retrieval method and system based on target detection
Han et al. Combining 3D‐CNN and Squeeze‐and‐Excitation Networks for Remote Sensing Sea Ice Image Classification
CN109684922B (en) Multi-model finished dish identification method based on convolutional neural network
Liu et al. Remote sensing image change detection based on information transmission and attention mechanism
CN110889865B (en) Video target tracking method based on local weighted sparse feature selection
CN105760821A (en) Classification and aggregation sparse representation face identification method based on nuclear space
CN110991257B (en) Polarized SAR oil spill detection method based on feature fusion and SVM
CN110414616B (en) Remote sensing image dictionary learning and classifying method utilizing spatial relationship
CN114926693A (en) SAR image small sample identification method and device based on weighted distance
CN111882554B (en) SK-YOLOv 3-based intelligent power line fault detection method
CN111639697B (en) Hyperspectral image classification method based on non-repeated sampling and prototype network
CN113344045A (en) Method for improving SAR ship classification precision by combining HOG characteristics
CN111126155B (en) Pedestrian re-identification method for generating countermeasure network based on semantic constraint
CN109558803B (en) SAR target identification method based on convolutional neural network and NP criterion
CN117152503A (en) Remote sensing image cross-domain small sample classification method based on false tag uncertainty perception
CN113627240B (en) Unmanned aerial vehicle tree species identification method based on improved SSD learning model
CN110827327B (en) Fusion-based long-term target tracking method
Liang et al. Comparison-based convolutional neural networks for cervical Cell/Clumps detection in the limited data scenario
Dewari et al. Agricultural Insect Pest’s Recognition System Using Deep Learning Model
CN113902975B (en) Scene perception data enhancement method for SAR ship detection
WO2023273337A1 (en) Representative feature-based method for detecting dense targets in remote sensing image
Kundur et al. Insect pest image detection and classification using deep learning
CN113409351B (en) Unsupervised field self-adaptive remote sensing image segmentation method based on optimal transmission
CN113657541A (en) Domain adaptive target identification method based on deep knowledge integration
Sun et al. Feature space fusion classification of remote sensing image based on ant colony optimisation algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant