CN110598636B - Ship target identification method based on feature migration - Google Patents


Info

Publication number: CN110598636B (application CN201910866137.8A)
Authority: CN (China)
Prior art keywords: HOG, domain, target, source domain, target domain
Legal status: Active
Original language: Chinese (zh)
Other versions: CN110598636A
Inventors: 陈浩, 郭斌, 李宏博, 高通
Assignee: Harbin Institute of Technology (original assignee: Harbin Institute of Technology)
Application filed by Harbin Institute of Technology; priority to CN201910866137.8A
Publication of CN110598636A; application granted; publication of CN110598636B

Classifications

    • G06F18/214: Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F18/2411: Classification techniques based on the proximity to a decision surface, e.g. support vector machines
    • G06V10/50: Extraction of image or video features by using histograms, e.g. histogram of oriented gradients [HoG]
    • G06V20/00: Scenes; scene-specific elements
    • G06V2201/07: Indexing scheme relating to image or video recognition or understanding; target detection


Abstract

A ship target identification method based on feature migration, belonging to the field of ship target identification. The method addresses the problem that, in the prior art, the target to be recognized often differs from the known training data in appearance and imaging quality, which degrades recognition performance. The method extracts HOG features from ship images of different resolutions; using a transfer learning approach based on subspace alignment and probability adaptation, it maps the source-domain and target-domain HOG features into the same feature space. Probability adaptation and instance weight adjustment are then performed in that space to regenerate new source-domain and target-domain vectorized HOG features. Finally, a support vector machine is trained on the new source-domain vectorized HOG features, and the trained machine performs target identification on the image to be identified. The method can be applied to the identification of ship targets in remote sensing images.

Description

Ship target identification method based on feature migration
Technical Field
The invention belongs to the field of ship target identification, and particularly relates to a ship target identification method based on feature migration.
Background
For an optical sensor, changes in observation position, altitude, and similar factors change the resolution of the images acquired of a target. Because images of the same object at different resolutions tend to follow different distributions, a traditional machine learning method, which generally assumes that training and test data are identically distributed, performs poorly: the target to be recognized in an optical remote sensing image often differs from the known training data in appearance and imaging quality, so it cannot be recognized reliably. How to improve the target recognition rate across images of different resolutions acquired by the same sensor is therefore one of the problems to be solved.
Disclosure of Invention
The invention aims to solve the problem that, in existing methods, the target to be recognized differs from the known training data in appearance and imaging quality, which degrades recognition, and provides a ship target identification method based on feature migration for typical remote sensing target recognition.
The technical solution adopted by the invention to solve the above problem is as follows. A ship target identification method based on feature migration comprises the following steps:
Step one: select high-resolution ship images as the training set images, crop a target slice from each image in the training set, and take the high-resolution ship images of the training set as the source domain, thereby obtaining the source-domain target slices;
step two: compute the HOG features of the source-domain target slices obtained in step one and vectorize them to obtain the vectorized HOG features of the source domain;
step three: for the low-resolution ship image to be identified, crop its target slice and take the low-resolution image as the target domain, thereby obtaining the target-domain target slice; compute the HOG features of the target-domain slice and vectorize them to obtain the vectorized HOG features of the target domain;
step four: apply standard normalization to the vectorized HOG features of the source domain and of the target domain, respectively, to obtain the normalized vectorized HOG features of each domain; perform PCA on the normalized vectorized HOG features of each domain to obtain a basis of the source-domain subspace and a basis of the target-domain subspace; perform subspace alignment on the two bases to generate a new coordinate space; map the normalized vectorized HOG features of both domains into the new coordinate space to obtain the vectorized HOG features of the source and target domains in the new coordinate space;
step five: perform probability adaptation and instance weight adjustment on the source- and target-domain vectorized HOG features in the new coordinate space, regenerating new source-domain and new target-domain vectorized HOG features;
step six: input the source-domain vectorized HOG features regenerated in step five into a support vector machine for training, stopping when the error function of the support vector machine no longer decreases, to obtain a trained support vector machine;
step seven: input the target-domain vectorized HOG features regenerated in step five into the trained support vector machine to obtain the target identification result.
The invention has the following beneficial effects. The proposed ship target identification method based on feature migration extracts HOG features from ship images of different resolutions and, using a transfer learning method based on subspace alignment and probability adaptation, maps the source-domain and target-domain HOG features into the same feature space. Probability adaptation and instance weight adjustment are then performed in that space to regenerate new source-domain and target-domain vectorized HOG features; a support vector machine is trained on the new source-domain features, and the trained machine performs target identification on the image to be identified.
The method achieves a target recognition accuracy of up to 98% on the image to be recognized, an improvement of at least 3 percentage points over the prior art.
Drawings
FIG. 1 is a flow chart of a method of feature migration based ship target identification of the present invention;
FIG. 2 is a schematic diagram of a low-resolution destroyer;
FIG. 3 is a schematic diagram of a high-resolution destroyer;
FIG. 4 is a schematic diagram of a low-resolution cruiser;
FIG. 5 is a schematic diagram of a high-resolution cruiser;
FIG. 6 is a schematic diagram of a low-resolution aircraft carrier;
FIG. 7 is a schematic diagram of a high-resolution aircraft carrier;
FIG. 8 compares, for a destroyer, a cruiser, and an aircraft carrier at 1 m resolution, the identification accuracy of the method of the present invention with BDA (balanced distribution adaptation), JDA (joint distribution adaptation), SA (subspace alignment), TCA (transfer component analysis), and PCA (principal component analysis);
FIG. 9 compares the MMD distance at 1 m resolution for the method of the present invention and for BDA, JDA, SA, TCA, and PCA;
FIG. 10 compares, at 1 m resolution, the mean identification accuracy over the three targets for the method of the present invention and for BDA, JDA, SA, TCA, and PCA; that is, the accuracy of each of the six methods is averaged over the three target classes.
Detailed Description
Embodiment one: as shown in FIG. 1, the method for identifying a ship target based on feature migration comprises the following steps:
Step one: select high-resolution ship images as the training set images, crop a target slice from each image in the training set, and take the high-resolution ship images of the training set as the source domain, thereby obtaining the source-domain target slices;
the high-resolution ship images have a resolution of 0.5 m or finer;
step two: compute the HOG features of the source-domain target slices obtained in step one and vectorize them to obtain the vectorized HOG features of the source domain (each slice has a corresponding vectorized HOG feature);
step three: for the low-resolution ship image to be identified, crop its target slice and take the low-resolution image as the target domain, thereby obtaining the target-domain target slice; compute the HOG features of the target-domain slice and vectorize them to obtain the vectorized HOG features of the target domain;
the low-resolution ship images have a resolution of 1 m or coarser;
when extracting the HOG features of the high-resolution training-set slices and of the low-resolution slices to be recognized, the two cases are handled separately. For a high-resolution target slice: select cell and block sizes suited to the high-resolution target, obtain the gradient map with a filter, and finally obtain the HOG features of the high-resolution target. For a low-resolution target slice: select cell and block sizes suited to the low-resolution target, obtain the gradient map with a filter, and finally obtain the HOG features of the low-resolution target.
By selecting suitable cell and block sizes, the HOG features of the high-resolution target and of the low-resolution target are guaranteed to have the same dimension, which improves the accuracy of identifying the target in the image to be detected.
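To make the dimension-matching idea above concrete, here is a minimal sketch (not part of the patent) of how the HOG descriptor length depends on the cell and block geometry. The image sizes, cell sizes, and the usual HOG layout (square cells, blocks of 2×2 cells with a stride of one cell, 9 orientation bins) are illustrative assumptions; scaling the cell size in proportion to the resolution keeps the source- and target-domain descriptors the same length.

```python
def hog_length(img_size, cell, block_cells=2, orientations=9):
    """Length of a vectorized HOG descriptor for a square img_size x img_size
    slice with square cells of `cell` pixels and blocks of
    block_cells x block_cells cells, sliding with a stride of one cell."""
    cells = img_size // cell              # cells per side
    blocks = cells - block_cells + 1      # block positions per side
    return blocks * blocks * block_cells * block_cells * orientations

# Assumed example: a 128-px slice at 0.5 m/px vs. a 64-px slice at 1 m/px.
hi = hog_length(128, cell=16)   # larger cells on the high-resolution target
lo = hog_length(64, cell=8)     # proportionally smaller cells on the low-resolution target
assert hi == lo                 # same dimension -> the features can be aligned
```

Doubling the cell size when the pixel count doubles leaves the cell grid, and hence the descriptor length, unchanged.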
Step four: apply standard normalization to the vectorized HOG features of the source domain and of the target domain, respectively, to obtain the normalized vectorized HOG features of each domain;
perform PCA on the normalized vectorized HOG features of the source domain and of the target domain, respectively, to obtain a basis of the source-domain subspace and a basis of the target-domain subspace;
perform subspace alignment on the source-domain and target-domain subspace bases to generate a new coordinate space; map the normalized vectorized HOG features of both domains into the new coordinate space to obtain the vectorized HOG features of the source and target domains in the new coordinate space;
step five: perform probability adaptation and instance weight adjustment on the source- and target-domain vectorized HOG features in the new coordinate space, regenerating new source-domain and new target-domain vectorized HOG features;
step six: input the source-domain vectorized HOG features regenerated in step five into a support vector machine for training, stopping when the error function of the support vector machine no longer decreases, to obtain a trained support vector machine;
step seven: input the target-domain vectorized HOG features regenerated in step five into the trained support vector machine to obtain the target identification result.
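The patent specifies only that the SVM is trained until its error function stops decreasing. As an illustration (not the patent's implementation), the sketch below trains a binary linear SVM on the regularized hinge loss by subgradient descent with exactly that stopping rule; the learning rate, regularization strength, and tolerance are assumed values.

```python
import numpy as np

def train_svm(X, y, lam=0.01, lr=0.1, max_epochs=500, tol=1e-6):
    """X: (n, d) source-domain vectorized HOG features; y: (n,) labels in {-1, +1}."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    prev_err = np.inf
    for _ in range(max_epochs):
        margins = y * (X @ w + b)
        # Regularized hinge loss: the "error function" monitored for stopping.
        err = np.mean(np.maximum(0.0, 1.0 - margins)) + lam * (w @ w)
        if prev_err - err < tol:          # error no longer decreases -> stop
            break
        prev_err = err
        viol = margins < 1.0              # only margin violators contribute
        gw = 2.0 * lam * w - (y[viol, None] * X[viol]).sum(axis=0) / n
        gb = -y[viol].sum() / n
        w -= lr * gw
        b -= lr * gb
    return w, b

def predict(w, b, X):
    """Class labels (+1 / -1) for vectorized HOG features X."""
    return np.sign(X @ w + b)
```

A multi-class version for the three ship types would combine several such binary classifiers (e.g. one-vs-rest), a design choice the patent leaves open.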
This embodiment mainly addresses visible-light ship target images acquired by the same optical sensor, and improves the recognition rate for ship targets at different resolutions.
Embodiment two differs from embodiment one in that, in step one, the size of the target slice is set according to the size of the target in the high-resolution ship image.
For the identification of ship targets, the training set images are from a Google Earth data source, the targets in the training set images are divided into aircraft carriers, destroyers and cruisers, and the target slices corresponding to each training set image are the same in size.
Embodiment three differs from embodiment two in that the specific process of step four is as follows:
Step 4.1, generating the subspaces:
apply standard normalization (mean 0, variance 1) to the vectorized HOG features of the source domain and of the target domain, respectively, to obtain the normalized vectorized HOG features of each domain;
perform PCA on the normalized vectorized HOG features of the source domain, select the eigenvectors corresponding to the d largest eigenvalues, and take the selected eigenvectors as the basis B_S of the source-domain subspace;
similarly, perform PCA on the normalized vectorized HOG features of the target domain, select the eigenvectors corresponding to the d largest eigenvalues, and take the selected eigenvectors as the basis B_T of the target-domain subspace;
B_S, B_T ∈ R^{p×d} are orthonormal matrices (i.e. B_S'B_S = I_d and B_T'B_T = I_d), where I_d is the d-dimensional identity matrix and the prime denotes transposition.
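A minimal sketch of step 4.1 (the data shapes, d = 2, and the random inputs are placeholders, not values from the patent): z-score each feature dimension, then take the principal directions via SVD as the orthonormal subspace basis.

```python
import numpy as np

def subspace_basis(H, d):
    """H: (n_samples, p) vectorized HOG features. Returns a (p, d) orthonormal
    basis spanned by the eigenvectors of the d largest eigenvalues."""
    H = (H - H.mean(axis=0)) / (H.std(axis=0) + 1e-12)  # mean 0, variance 1
    # Right singular vectors of the standardized data are the PCA
    # eigenvectors, already sorted by decreasing singular value.
    _, _, Vt = np.linalg.svd(H, full_matrices=False)
    return Vt[:d].T                                     # columns span the subspace

rng = np.random.default_rng(0)
B_S = subspace_basis(rng.normal(size=(40, 6)), d=2)
assert np.allclose(B_S.T @ B_S, np.eye(2), atol=1e-8)   # B_S' B_S = I_d
```

The orthonormality check mirrors the condition B_S'B_S = I_d stated above.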
Step 4.2, solving the transformation matrix between the source domain and the target domain:
each normalized vectorized HOG feature γ_S of the source domain is mapped into the source-domain subspace by computing γ_S B_S, and each normalized vectorized HOG feature γ_T of the target domain is mapped into the target-domain subspace by computing γ_T B_T;
a transformation matrix F is learned that aligns the source subspace coordinate system with the target subspace coordinate system. The transformation matrix F is learned by minimizing the Bregman matrix divergence T(F), giving the optimal solution F*:

F* = argmin_F T(F)   (1)

T(F) = ‖B_S F − B_T‖_F^2   (2)

where T(F) is the Bregman matrix divergence and ‖·‖_F^2 is the squared Frobenius norm. Since the Frobenius norm is invariant to orthogonal operations, equation (1) can be written in the form of equation (3):

F* = argmin_F ‖B_S'B_S F − B_S'B_T‖_F^2 = argmin_F ‖F − B_S'B_T‖_F^2   (3)

As can be seen from equation (3), the optimal solution of the transformation matrix is F* = B_S'B_T. Using the optimal solution F*, the source-domain subspace basis is aligned with the target-domain subspace basis, generating the new coordinate space B_a, expressed as B_a = B_S B_S'B_T; B_a is called the transfer system from the source domain to the target domain.
If the source-domain and target-domain subspace bases are identical, the optimal solution F* of the transformation matrix F is the identity matrix.
Step 4.3, computing the vectorized HOG features of the source and target domains in the new coordinate space:

Z_S = γ_S B_a   (4)

where γ_S is a normalized vectorized HOG feature of the source domain and Z_S is the corresponding vectorized HOG feature of the source domain in the new coordinate space;

Z_T = γ_T B_T   (5)

where γ_T is a normalized vectorized HOG feature of the target domain and Z_T is the corresponding vectorized HOG feature of the target domain in the new coordinate space.
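The alignment and projection of steps 4.2 and 4.3 can be sketched with random placeholder bases and features (nothing below comes from the patent's data): the closed-form alignment F* = B_S'B_T, the transfer system B_a = B_S B_S'B_T, and the projections of eqs. (4) and (5).

```python
import numpy as np

rng = np.random.default_rng(1)

def random_basis(p, d):
    """A random p x d orthonormal basis via QR decomposition."""
    q, _ = np.linalg.qr(rng.normal(size=(p, d)))
    return q

p, d = 8, 3
B_S, B_T = random_basis(p, d), random_basis(p, d)
F_star = B_S.T @ B_T              # optimal transformation matrix F*
B_a = B_S @ F_star                # transfer system from source to target

gamma_S = rng.normal(size=(5, p)) # stand-ins for normalized vectorized HOG features
gamma_T = rng.normal(size=(4, p))
Z_S = gamma_S @ B_a               # eq. (4)
Z_T = gamma_T @ B_T               # eq. (5)

# When the two bases coincide, F* = B_S' B_S = I_d, as stated in the text.
F_identity = B_S.T @ B_S
```

Note that only the source features are re-expressed through the aligned basis B_a; the target features stay in their own subspace, which is what makes the two domains comparable afterwards.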
After the subspace alignment, the source and target domains lie in the same feature space. In the new space, however, the distributions of the source- and target-domain vectorized HOG features are still not identical. To improve accuracy, a joint probability adaptation method is used to adjust the probability distributions of the two domains in the new space.
Embodiment four differs from embodiment three in that the specific process of step five is as follows:
Step 5.1: after subspace alignment, the labeled source-domain vectorized HOG features in the new coordinate space are

D_S = {(z_S^i, y_S^i)}_{i=1}^{n}

and the unlabeled target-domain vectorized HOG features in the new coordinate space are

D_T = {z_T^j}_{j=1}^{m}

where z_S^i is the i-th sample of the source-domain vectorized HOG features in the new coordinate space, y_S^i is the label of the i-th sample, i = 1, 2, …, n, and n is the number of source-domain samples; z_T^j is the j-th sample of the target-domain vectorized HOG features in the new coordinate space, j = 1, 2, …, m, and m is the number of target-domain samples.
The marginal distribution P_s(Z_S) of the source-domain vectorized HOG features in the new coordinate space is not equal to the marginal distribution P_t(Z_T) of the target-domain vectorized HOG features, i.e. P_s(Z_S) ≠ P_t(Z_T), and the conditional distribution P_s(y_s|Z_S) of the source domain is not equal to the conditional distribution P_t(y_t|Z_T) of the target domain, i.e. P_s(y_s|Z_S) ≠ P_t(y_t|Z_T).
Balanced distribution adaptation is performed by minimizing D(Z_S, Z_T), the weighted sum of the marginal distribution distance and the conditional distribution distance:

D(Z_S, Z_T) = (1 − β) D(P_s(Z_S), P_t(Z_T)) + β D(P_s(y_s|Z_S), P_t(y_t|Z_T))   (6)

where D(P_s(Z_S), P_t(Z_T)) is the marginal distribution distance, D(P_s(y_s|Z_S), P_t(y_t|Z_T)) is the conditional distribution distance, and β is a balance factor.
When β → 0, the discrepancy between the distributions of the source-domain and target-domain vectorized HOG features is large, so adapting the marginal distribution is more important; when β → 1, the marginal distributions of the two domains are similar, so adapting the conditional distributions is more important. The balance factor β can therefore adaptively adjust the importance of each distribution and achieve good results.
Step 5.2: the marginal and conditional distribution distances are estimated with the MMD (maximum mean discrepancy), and formula (6) is rewritten as formula (7):

D(Z_S, Z_T) = (1 − β) ‖ (1/n) Σ_{i=1}^{n} z_S^i − (1/m) Σ_{j=1}^{m} z_T^j ‖_H^2 + β Σ_{c=1}^{C} ‖ (1/n_c) Σ_{z_S^i ∈ Z_S^{(c)}} z_S^i − (1/m_c) Σ_{z_T^j ∈ Z_T^{(c)}} z_T^j ‖_H^2   (7)

where Z_S^{(c)} and Z_T^{(c)} are the sets of samples belonging to class c in the new coordinate space in, respectively, the source domain and the target domain, n_c is the number of source-domain samples of class c, m_c is the number of target-domain samples of class c, c ∈ {1, 2, …, C} is the class label, C is the total number of classes, and H denotes the reproducing kernel Hilbert space.
The first term on the right-hand side of the equation is the marginal distribution distance between the source and target domains, and the second term is the conditional distribution distance between the two domains.
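The balanced distance of eq. (7) can be sketched with a linear kernel, where the MMD reduces to the squared distance between sample means (a simplification on my part; in practice the target labels yt would be pseudo-labels predicted by a base classifier, and the data here are placeholders):

```python
import numpy as np

def bda_distance(Zs, ys, Zt, yt, beta, classes):
    """(1-beta) * marginal MMD^2 + beta * sum over classes of conditional MMD^2,
    estimated with a linear kernel (mean embeddings)."""
    marginal = np.sum((Zs.mean(axis=0) - Zt.mean(axis=0)) ** 2)
    conditional = 0.0
    for c in classes:
        conditional += np.sum((Zs[ys == c].mean(axis=0)
                               - Zt[yt == c].mean(axis=0)) ** 2)
    return (1.0 - beta) * marginal + beta * conditional
```

When the two domains have identical samples and labels, both terms vanish and the distance is zero, which matches the intent of eq. (7).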
Step 5.3: through mathematical transformation and regularization, formula (7) is reduced to the form of formula (8):

min_A tr(A^T Z ((1 − β) K_0 + β Σ_{c=1}^{C} K_c) Z^T A)   (8)

where A is the probability-adaptation transformation matrix, the superscript T denotes transposition, K_0 is the maximum mean discrepancy matrix of the marginal distribution, K_c is the maximum mean discrepancy matrix of the conditional distribution of class c, and Z = {Z_S, Z_T} is the set of the source-domain and target-domain vectorized HOG features in the new coordinate space; Z_S is the set of all source-domain samples and Z_T the set of all target-domain samples in that space.
K_0 and K_c satisfy:

(K_0)_{ij} = 1/n^2 if z_i, z_j ∈ Z_S; 1/m^2 if z_i, z_j ∈ Z_T; −1/(nm) otherwise.

(K_c)_{ij} = 1/n_c^2 if z_i, z_j ∈ Z_S^{(c)}; 1/m_c^2 if z_i, z_j ∈ Z_T^{(c)}; −1/(n_c m_c) if z_i ∈ Z_S^{(c)}, z_j ∈ Z_T^{(c)} or z_i ∈ Z_T^{(c)}, z_j ∈ Z_S^{(c)}; 0 otherwise.

Formula (8) has two parts: the first adapts the marginal distribution, and the second adapts the conditional distributions.
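The MMD matrices above can be assembled directly from the sample counts and labels; the sketch below is illustrative (in practice the target labels come from pseudo-labeling), and each matrix sums to zero by construction:

```python
import numpy as np

def mmd_matrices(n, m, ys, yt, classes):
    """K0 couples all n source and m target samples; each Kc couples only the
    samples of class c. Sample order: n source rows first, then m target rows."""
    N = n + m
    src = np.arange(N) < n                        # boolean mask of source samples
    K0 = np.where(np.outer(src, src), 1.0 / n**2,
         np.where(np.outer(~src, ~src), 1.0 / m**2, -1.0 / (n * m)))
    labels = np.concatenate([ys, yt])
    Ks = []
    for c in classes:
        in_s = src & (labels == c)
        in_t = ~src & (labels == c)
        nc, mc = in_s.sum(), in_t.sum()
        Kc = np.zeros((N, N))
        Kc[np.ix_(in_s, in_s)] = 1.0 / nc**2
        Kc[np.ix_(in_t, in_t)] = 1.0 / mc**2
        Kc[np.ix_(in_s, in_t)] = -1.0 / (nc * mc)
        Kc[np.ix_(in_t, in_s)] = -1.0 / (nc * mc)
        Ks.append(Kc)
    return K0, Ks
```

The zero-sum property reflects that the MMD of two identical empirical distributions is zero.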
and introducing structure sparsification and adjusting the weight of the instance. Matching feature distributions based on MMD minimization in the equation matches low and high order statistics for domain adaptation, but the distributions do not match exactly. Especially when the domain differences are particularly large, there will always be some source instances that are not related to the target instance. Thus, incorporating the instance re-weighting process with the BDA, it is important to reload the source instance.
Step 5.4: the l_{2,1} norm is applied to the probability-adaptation transformation matrix A for structured sparse regularization, i.e. row sparsity is introduced into A.
Since each row of A corresponds to one sample in the set Z of vectorized HOG features (i.e. to one vectorized HOG feature in the set Z), adaptive instance weights can be determined from the row sparsity. The regularization term for instance weight adjustment is defined as:

Ω(A) = ‖A_s‖_{2,1} + ‖A_t‖_F^2   (9)

where A_s = A_{1:n,:} is the block of the transformation matrix corresponding to the source-domain vectorized HOG features in the new coordinate space, and A_t = A_{n+1:n+m,:} is the block corresponding to the target-domain vectorized HOG features; A_{1:n,:} denotes all columns of rows 1 to n of A, and A_{n+1:n+m,:} all columns of rows n+1 to n+m.
In equation (9), only the source-domain block is regularized with the l_{2,1} norm, because the goal is to re-weight the source-domain vectorized HOG features according to their correlation with the target-domain features. By minimizing equation (10), source-domain features that are correlated (uncorrelated) with the target-domain features are adaptively re-weighted to have greater (smaller) importance in the new representation. This regularizer provides robustness to the domain differences caused by uncorrelated vectorized HOG features.
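A small sketch of the row-sparsity machinery (the matrix values are placeholders): the l_{2,1} norm is the sum of the row l2 norms, and the diagonal matrix G used later in the gradient assigns 1/(2‖a^i‖) to nonzero source rows, 0 to zero source rows, and 1 to target rows.

```python
import numpy as np

def l21(A):
    """l2,1 norm: sum of the l2 norms of the rows of A."""
    return np.sum(np.linalg.norm(A, axis=1))

def diag_G(A, n):
    """Diagonal gradient matrix G for A whose first n rows are source samples."""
    g = np.ones(A.shape[0])                       # target rows get 1
    rows = np.linalg.norm(A[:n], axis=1)
    g[:n] = np.where(rows > 0, 1.0 / (2.0 * np.maximum(rows, 1e-12)), 0.0)
    return np.diag(g)
```

A source row driven to zero drops out entirely, which is exactly how an irrelevant source instance is down-weighted.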
Step 5.5: formulas (8) and (9) are combined to obtain the optimization problem of formula (10):

min_A tr(A^T Z ((1 − β) K_0 + β Σ_{c=1}^{C} K_c) Z^T A) + λ (‖A_s‖_{2,1} + ‖A_t‖_F^2)   (10)
s.t. A^T Z H_0 Z^T A = I

where I is the identity matrix, tr denotes the trace of a matrix, λ is the regularization parameter that trades off probability adaptation against instance weight adjustment, and H_0 = I − (1/N)1 is the centering matrix, with N = n + m and 1 the N × N matrix of ones.
According to constrained optimization theory, the Lagrangian function L is obtained:

L = tr(A^T Z ((1 − β) K_0 + β Σ_{c=1}^{C} K_c) Z^T A) + λ (‖A_s‖_{2,1} + ‖A_t‖_F^2) + tr((I − A^T Z H_0 Z^T A) Φ)   (11)

where Φ is the Lagrange multiplier.
Step 5.6: differentiating the Lagrangian L and setting ∂L/∂A = 0 gives equation (12):

(Z ((1 − β) K_0 + β Σ_{c=1}^{C} K_c) Z^T + λG) A = Z H_0 Z^T A Φ   (12)

where G is a diagonal gradient matrix; the probability-adaptation transformation matrix A is solved from formula (12) as a generalized eigenvalue problem.
Since ‖A_s‖_{2,1} is non-smooth at zero, its subgradient is computed through the diagonal gradient matrix G, whose i-th diagonal element G_ii equals:

G_ii = 1/(2‖a^i‖) if z_i ∈ Z_S and a^i ≠ 0; 0 if z_i ∈ Z_S and a^i = 0; 1 if z_i ∈ Z_T

where z_i is an element of the set Z and a^i is the i-th row of the matrix A. Since the diagonal gradient matrix G depends on the unknown matrix A, an alternating optimization method is employed: one variable is fixed while the other is updated, iteratively.
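The alternating scheme can be sketched as follows. This is a hedged illustration, not the patent's implementation: it is written in a kernelized form with a linear kernel K = Z Z^T (an assumption on my part, so that the rows of A index samples as the text requires), the number of iterations and the small regularizer eps are arbitrary choices, and M stands for the combined MMD matrix (1 − β)K_0 + β Σ_c K_c.

```python
import numpy as np

def solve_A(Z, M, H0, lam, n, k, iters=3, eps=1e-3):
    """Z: (N, p) stacked source+target features, first n rows = source.
    Alternates between solving eq. (12) for A with G fixed and refreshing G."""
    N = Z.shape[0]
    K = Z @ Z.T                          # linear-kernel Gram matrix, N x N
    G = np.eye(N)                        # initial gradient matrix
    A = np.zeros((N, k))
    for _ in range(iters):
        left = K @ M @ K + lam * G       # probability adaptation + re-weighting
        right = K @ H0 @ K + eps * np.eye(N)   # constraint matrix, regularized
        # Generalized eigenproblem left*A = right*A*Phi: take the k
        # eigenvectors with the smallest eigenvalues of right^{-1} left.
        vals, vecs = np.linalg.eig(np.linalg.solve(right, left))
        order = np.argsort(vals.real)[:k]
        A = vecs[:, order].real
        # Refresh G from the row norms of the source block of A.
        g = np.ones(N)
        rows = np.linalg.norm(A[:n], axis=1)
        g[:n] = np.where(rows > 1e-12, 1.0 / (2.0 * np.maximum(rows, 1e-12)), 0.0)
        G = np.diag(g)
    return A
```

Fixing G turns the non-smooth problem into a plain generalized eigenproblem; refreshing G then re-weights the source rows, which is the alternation the text describes.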
Step 5.7: the new source-domain and new target-domain vectorized HOG features are regenerated, respectively, as:

Z′_S = Z_S A_s   (13)

where Z′_S denotes the regenerated new source-domain vectorized HOG features;

Z′_T = Z_T A_t   (14)

where Z′_T denotes the regenerated new target-domain vectorized HOG features.
Embodiment five differs from embodiment four in that, in step five, the balance factor β takes values in the range [0, 1].
Embodiment six differs from embodiment five in that, in step five, the regularization parameter λ takes values in the range [0.1, 10].
The following examples were used to demonstrate the beneficial effects of the present invention:
the source of the experimental data set is 252 destroyers, 160 cruisers and 192 aircraft carriers, with 0.5m resolution. The target areas are 160 destroyers, 160 cruisers and 212 aircraft carriers, 1m resolution. Fig. 2 to 7 are schematic diagrams of images of a source domain and a target domain.
The test results of fig. 8 to 10 show that the method provided by the invention is superior to the existing transfer learning method and the method without transfer learning.
The above examples merely describe the computational model and workflow of the present invention in detail and are not intended to limit its embodiments. Other variations and modifications can be made by those skilled in the art based on the above description; it is not intended to be exhaustive or to limit the invention to the precise form disclosed, and all such modifications and variations are contemplated as falling within the scope of the invention.

Claims (4)

1. A ship target identification method based on feature migration, characterized by comprising the following steps:
step one, selecting high-resolution ship images as the training-set images and intercepting a target slice from each image in the training set; the high-resolution ship images of the training set serve as the source domain, thereby obtaining the target slices of the source domain;
step two, calculating the HOG features of the source-domain target slices obtained in step one, and vectorizing the calculated HOG features to obtain the vectorized HOG features of the source domain;
step three, for the low-resolution ship images to be identified, intercepting a target slice from each image; the low-resolution ship images to be identified serve as the target domain, thereby obtaining the target slices of the target domain;
calculating the HOG features of the target-domain target slices, and vectorizing the calculated HOG features to obtain the vectorized HOG features of the target domain;
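The HOG extraction of steps one to three can be illustrated with a minimal NumPy sketch. This is not the patented implementation: the slice size (64×64), cell size and bin count are assumed parameters, and the descriptor below is a simplified cell-wise orientation histogram without HOG block normalization.

```python
import numpy as np

def simple_hog(img, cell=8, bins=9):
    """Simplified HOG: per-cell histograms of gradient orientation,
    weighted by gradient magnitude, concatenated and l2-normalized."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # unsigned orientation
    h, w = img.shape
    hists = []
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            m = mag[i:i + cell, j:j + cell].ravel()
            a = ang[i:i + cell, j:j + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            hists.append(hist)
    v = np.concatenate(hists)
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

rng = np.random.default_rng(0)
source_slices = rng.random((4, 64, 64))   # stand-ins for source-domain slices
target_slices = rng.random((3, 64, 64))   # stand-ins for target-domain slices
gamma_S = np.array([simple_hog(s) for s in source_slices])
gamma_T = np.array([simple_hog(s) for s in target_slices])
```

Each row of gamma_S / gamma_T is one vectorized HOG feature; for a 64×64 slice with 8×8 cells and 9 bins the descriptor has 8·8·9 = 576 dimensions.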
step four, performing standard normalization on the vectorized HOG features of the source domain and of the target domain respectively, to obtain the normalized vectorized HOG features of the source domain and the normalized vectorized HOG features of the target domain;
performing PCA transformation on the normalized vectorized HOG features of the source domain and of the target domain respectively, to obtain the basis of the source-domain subspace and the basis of the target-domain subspace;
performing subspace alignment on the basis of the source-domain subspace and the basis of the target-domain subspace to generate a new coordinate space; mapping the normalized vectorized HOG features of the source domain and of the target domain into the new coordinate space, to obtain the vectorized HOG features of the source domain and the target domain in the new coordinate space;
the specific process of step four is as follows:
step 4.1, generating the subspaces:
performing standard normalization on the vectorized HOG features of the source domain and of the target domain respectively, to obtain the normalized vectorized HOG features of the source domain and of the target domain;
performing PCA transformation on the normalized vectorized HOG features of the source domain, selecting the eigenvectors corresponding to the d largest eigenvalues, and taking the selected eigenvectors as the basis B_S of the source-domain subspace;
similarly, performing PCA transformation on the normalized vectorized HOG features of the target domain, selecting the eigenvectors corresponding to the d largest eigenvalues, and taking the selected eigenvectors as the basis B_T of the target-domain subspace;
step 4.2, solving the transformation matrix between the source domain and the target domain:
the normalized vectorized HOG features γ_S of the source domain are mapped into the source-domain subspace by computing γ_S·B_S, and the normalized vectorized HOG features γ_T of the target domain are mapped into the target-domain subspace by computing γ_T·B_T;
the transformation matrix F is learned by minimizing the Bregman matrix divergence T(F), giving the optimal solution F* of the transformation matrix:

F* = argmin_F T(F)   (1)

T(F) = ‖B_S F − B_T‖_F^2   (2)

where T(F) is the Bregman matrix divergence and ‖·‖_F^2 is the squared Frobenius norm; since the Frobenius norm is invariant to orthogonal operations, formula (1) can be written in the form of formula (3):

F* = argmin_F ‖B_S^T B_S F − B_S^T B_T‖_F^2 = argmin_F ‖F − B_S^T B_T‖_F^2   (3)

hence the optimal solution of the transformation matrix is obtained as F* = B_S^T B_T, where B_S^T denotes the transpose of B_S (whose columns are orthonormal); with the optimal solution F*, the source-domain subspace basis is aligned with the target-domain subspace basis, generating the new coordinate space B_a, which is expressed as B_a = B_S B_S^T B_T;
step 4.3, calculating the vectorized HOG features of the source domain and of the target domain in the new coordinate space:

Z_S = γ_S B_a   (4)

where γ_S is the normalized vectorized HOG features of the source domain and Z_S denotes the vectorized HOG features of the source domain in the new coordinate space;

Z_T = γ_T B_T   (5)

where γ_T is the normalized vectorized HOG features of the target domain and Z_T denotes the vectorized HOG features of the target domain in the new coordinate space;
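Step four can be sketched numerically as follows. Assumptions: the PCA basis is taken as the top-d right singular vectors of the centered feature matrix (equivalent to the top eigenvectors of the covariance), and the data are random stand-ins for the normalized vectorized HOG features.

```python
import numpy as np

def pca_basis(X, d):
    """Top-d PCA basis of X (samples in rows): (n_features, d), orthonormal columns."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:d].T

rng = np.random.default_rng(1)
gamma_S = rng.random((40, 30))   # stand-in normalized source-domain HOG features
gamma_T = rng.random((35, 30))   # stand-in normalized target-domain HOG features
d = 5
B_S = pca_basis(gamma_S, d)      # basis of the source-domain subspace
B_T = pca_basis(gamma_T, d)      # basis of the target-domain subspace
F_star = B_S.T @ B_T             # optimal transformation matrix F* = B_S^T B_T
B_a = B_S @ F_star               # aligned coordinate space B_a = B_S B_S^T B_T
Z_S = gamma_S @ B_a              # formula (4)
Z_T = gamma_T @ B_T              # formula (5)
```

After this step, Z_S and Z_T live in the same d-dimensional aligned coordinate space and can be compared directly.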
step five, performing probability adaptation and instance weight adjustment on the vectorized HOG features of the source domain and the target domain in the new coordinate space, and regenerating new source-domain vectorized HOG features and new target-domain vectorized HOG features; the specific process is as follows:
step 5.1, after subspace alignment, the labeled source-domain vectorized HOG features in the new coordinate space are D_S = {(z_i^s, y_i^s)}_{i=1}^{n}, and the unlabeled target-domain vectorized HOG features in the new coordinate space are D_T = {z_j^t}_{j=1}^{m}, where z_i^s is the i-th sample of the source-domain vectorized HOG features in the new coordinate space, y_i^s is the label of the i-th sample, i = 1, 2, …, n, and n is the number of samples of the source-domain vectorized HOG features; z_j^t is the j-th sample of the target-domain vectorized HOG features in the new coordinate space, j = 1, 2, …, m, and m is the number of samples of the target-domain vectorized HOG features;
the marginal distribution P_s(Z_S) of the source-domain vectorized HOG features in the new coordinate space and the marginal distribution P_t(Z_T) of the target-domain vectorized HOG features are not equal, i.e. P_s(Z_S) ≠ P_t(Z_T); likewise, the conditional distribution P_s(y_s|Z_S) of the source-domain vectorized HOG features and the conditional distribution P_t(y_t|Z_T) of the target-domain vectorized HOG features are not equal, i.e. P_s(y_s|Z_S) ≠ P_t(y_t|Z_T);
balanced distribution adaptation is performed by minimizing the sum D(Z_S|Z_T) of the marginal distribution distance and the conditional distribution distance:

D(Z_S|Z_T) = (1−β)·D(P_s(Z_S), P_t(Z_T)) + β·D(P_s(y_s|Z_S), P_t(y_t|Z_T))   (6)

where D(P_s(Z_S), P_t(Z_T)) is the marginal distribution distance, D(P_s(y_s|Z_S), P_t(y_t|Z_T)) is the conditional distribution distance, and β is a balance factor;
step 5.2, estimating the marginal distribution distance and the conditional distribution distance with the maximum mean discrepancy (MMD), formula (6) is rewritten in the form of formula (7):

D(Z_S|Z_T) = (1−β)·‖(1/n)Σ_{i=1}^{n} z_i^s − (1/m)Σ_{j=1}^{m} z_j^t‖_H^2 + β·Σ_{c=1}^{C} ‖(1/n_c)Σ_{z_i^s ∈ D_S^(c)} z_i^s − (1/m_c)Σ_{z_j^t ∈ D_T^(c)} z_j^t‖_H^2   (7)

where D_S^(c) and D_T^(c) denote the samples belonging to class c in the source domain and the target domain in the new coordinate space, respectively; n_c is the number of source-domain samples belonging to class c, m_c is the number of target-domain samples belonging to class c, c ∈ {1, 2, …, C} is the class label, C is the total number of classes, and H denotes the reproducing kernel Hilbert space;
step 5.3, through mathematical transformation and regularization, formula (7) is simplified into the form of formula (8):

min tr( A^T Z ( (1−β)K_0 + β Σ_{c=1}^{C} K_c ) Z^T A )   (8)

where A is the probability-adaptation transformation matrix, the superscript T denotes transposition, K_0 is the MMD matrix of the marginal distribution, K_c is the MMD matrix of the conditional distribution for class c, and Z is the set formed by the vectorized HOG features of the source domain and of the target domain in the new coordinate space;
Z = {Z_S, Z_T}, where Z_S is the set of all samples of the source-domain vectorized HOG features in the new coordinate space and Z_T is the set of all samples of the target-domain vectorized HOG features in the new coordinate space;
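The MMD matrices K_0 and K_c above can be written as outer products of indicator-style weight vectors. A sketch, assuming the construction commonly used in balanced distribution adaptation; since target-domain labels are unavailable, K_c would in practice be built from pseudo-labels produced by a base classifier.

```python
import numpy as np

def mmd_matrix_marginal(n, m):
    """(n+m)x(n+m) MMD coefficient matrix K_0 for the marginal distance."""
    e = np.concatenate([np.full(n, 1.0 / n), np.full(m, -1.0 / m)])
    return np.outer(e, e)

def mmd_matrix_class(ys, yt_pseudo, c):
    """Conditional MMD matrix K_c for class c (pseudo-labels for the target)."""
    src = (ys == c).astype(float)
    tgt = (yt_pseudo == c).astype(float)
    nc, mc = src.sum(), tgt.sum()
    e = np.concatenate([src / nc if nc else src,
                        -(tgt / mc) if mc else tgt])
    return np.outer(e, e)

# toy example: n=3 source samples, m=2 target samples
K0 = mmd_matrix_marginal(3, 2)
Kc = mmd_matrix_class(np.array([0, 0, 1]), np.array([0, 1]), c=0)
```

By construction the entries of each matrix sum to zero, so tr(AᵀZ K Zᵀ A) vanishes when the two domains have identical (class-conditional) means in the embedded space.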
step 5.4, applying l_{2,1}-norm structured-sparsity regularization to the probability-adaptation transformation matrix A, i.e. introducing row sparsity into A;
each row of the probability-adaptation transformation matrix A corresponds to one sample in the vectorized HOG feature set Z, and the adaptive instance weights are determined by the row sparsity, so the regularization term for instance weight adjustment is defined as:

Ω(A) = ‖A_s‖_{2,1} + ‖A_t‖_F^2   (9)

where A_s = A_{1:n,:} is the sub-matrix of A corresponding to the probability adaptation of the source-domain vectorized HOG features in the new coordinate space, and A_t = A_{n+1:n+m,:} is the sub-matrix of A corresponding to the probability adaptation of the target-domain vectorized HOG features in the new coordinate space;
step 5.5, jointly optimizing formula (8) and formula (9) yields the optimization problem of formula (10):

min_{A^T Z H_0 Z^T A = I} tr( A^T Z ( (1−β)K_0 + β Σ_{c=1}^{C} K_c ) Z^T A ) + λ( ‖A_s‖_{2,1} + ‖A_t‖_F^2 )   (10)

where I denotes the identity matrix, tr denotes the trace of a matrix, λ is the regularization parameter that trades off probability adaptation against instance weight adjustment, and H_0 is the centering matrix;
according to constrained-optimization theory, the Lagrangian function L is obtained:

L = tr( A^T Z ( (1−β)K_0 + β Σ_{c=1}^{C} K_c ) Z^T A ) + λ( ‖A_s‖_{2,1} + ‖A_t‖_F^2 ) + tr( (I − A^T Z H_0 Z^T A) Φ )   (11)

where Φ is the Lagrange multiplier;
step 5.6, differentiating the Lagrangian function L and setting ∂L/∂A = 0 yields equation (12):

( Z ( (1−β)K_0 + β Σ_{c=1}^{C} K_c ) Z^T + λG ) A = Z H_0 Z^T A Φ   (12)

where G is the diagonal gradient matrix; the probability-adaptation transformation matrix A is solved according to formula (12);
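Formula (12) has the form of a generalized eigenproblem, whose eigenvectors with the smallest eigenvalues form the columns of A. A numerical sketch under stated assumptions: a linear kernel over the stacked features, only the marginal MMD term, and G initialized to the identity (an iterative solver would refresh G from the current A).

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, d, k = 20, 15, 5, 3
Z = rng.random((n + m, d))               # stacked Z_S, Z_T in the aligned space
Kz = Z @ Z.T                             # linear kernel over all samples
e = np.concatenate([np.full(n, 1.0 / n), np.full(m, -1.0 / m)])
K0 = np.outer(e, e)                      # marginal MMD matrix
H0 = np.eye(n + m) - np.ones((n + m, n + m)) / (n + m)   # centering matrix
lam = 1.0
G = np.eye(n + m)                        # gradient matrix, identity at start
lhs = Kz @ K0 @ Kz + lam * G
rhs = Kz @ H0 @ Kz + 1e-6 * np.eye(n + m)   # small ridge keeps rhs invertible
# generalized eigenproblem lhs·A = rhs·A·Phi: keep the k smallest eigenvectors
w, V = np.linalg.eig(np.linalg.solve(rhs, lhs))
idx = np.argsort(w.real)[:k]
A = V[:, idx].real
A_s, A_t = A[:n], A[n:]                  # source and target sub-matrices
```

In a full implementation the conditional terms β·ΣK_c (from pseudo-labels) would be added to K0 and the solve repeated with an updated G until convergence.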
step 5.7, regenerating the new source-domain vectorized HOG features and the new target-domain vectorized HOG features respectively:

Z′_S = Z_S A_s   (13)

where Z′_S denotes the regenerated new source-domain vectorized HOG features;

Z′_T = Z_T A_t   (14)

where Z′_T denotes the regenerated new target-domain vectorized HOG features;
step six, inputting the source-domain vectorized HOG features regenerated in step five into a support vector machine for training, to obtain a trained support vector machine;
step seven, inputting the target-domain vectorized HOG features regenerated in step five into the trained support vector machine, to obtain the target identification result.
2. The ship target identification method based on feature migration according to claim 1, wherein in step one the size of the target slice is set according to the size of the target in the high-resolution ship image.
3. The ship target identification method based on feature migration according to claim 2, wherein in step five the value range of the balance factor β is β ∈ [0, 1].
4. The ship target identification method based on feature migration according to claim 3, wherein in step five the value range of the regularization parameter λ is λ ∈ [0.1, 10].
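Steps six and seven of claim 1 reduce to standard supervised classification once the adapted features exist. A sketch with scikit-learn's SVC standing in for the support vector machine; the features and labels below are random placeholders for Z′_S and Z′_T, and the kernel choice is an assumption (the claims do not specify SVM parameters).

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)
Zs_new = rng.random((30, 8))               # placeholder regenerated source features Z'_S
ys = rng.integers(0, 3, size=30)           # labels: e.g. destroyer/cruiser/carrier
Zt_new = rng.random((10, 8))               # placeholder regenerated target features Z'_T

clf = SVC(kernel="linear").fit(Zs_new, ys)   # step six: train on source domain
pred = clf.predict(Zt_new)                   # step seven: identify target-domain ships
```

The trained classifier transfers to the target domain because the preceding steps have already aligned the two feature distributions.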
CN201910866137.8A 2019-09-09 2019-09-09 Ship target identification method based on feature migration Active CN110598636B (en)

Publications (2)

Publication Number Publication Date
CN110598636A CN110598636A (en) 2019-12-20
CN110598636B true CN110598636B (en) 2023-01-17

Family

ID=68859314


Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111583201B (en) * 2020-04-26 2022-04-05 浙江大学 Transfer learning method for constructing super-resolution pathology microscope
CN111931558A (en) * 2020-06-22 2020-11-13 武汉第二船舶设计研究所(中国船舶重工集团公司第七一九研究所) Ship category identification method and system
CN113657541B (en) * 2021-08-26 2023-10-10 电子科技大学长三角研究院(衢州) Domain self-adaptive target recognition method based on depth knowledge integration
CN116229442B (en) * 2023-01-03 2024-05-28 武汉工程大学 Text image synthesis and instantiation weight transfer learning method

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102368905A (en) * 2008-11-06 2012-03-07 南卡罗来纳医科大学研究发展基金会 Lysosomotropic inhibitors of acid ceramidase
CN103268363A (en) * 2013-06-06 2013-08-28 哈尔滨工业大学 Elastic HOG (histograms of oriented gradient) feature-based Chinese calligraphy image retrieval method matched with DDTW (Derivative dynamic time wrapping)
CN104751198A (en) * 2013-12-27 2015-07-01 华为技术有限公司 Method and device for identifying target object in image
CN105868794A (en) * 2016-04-19 2016-08-17 哈尔滨工业大学 Method for ship target fuzzy recognition based on inverse synthetic aperture radar (ISAR) image
WO2017070322A1 (en) * 2015-10-21 2017-04-27 Toth, Landy Controlled and precise treatment of cardiac tissues
CN107292246A (en) * 2017-06-05 2017-10-24 河海大学 Infrared human body target identification method based on HOG PCA and transfer learning
CN107832711A (en) * 2017-11-13 2018-03-23 常州大学 A kind of recognition methods again of the pedestrian based on transfer learning
CN108399420A (en) * 2018-01-30 2018-08-14 北京理工雷科电子信息技术有限公司 A kind of visible light naval vessel false-alarm elimination method based on depth convolutional network
CN108647702A (en) * 2018-04-13 2018-10-12 湖南大学 A kind of extensive food materials image classification method based on transfer learning
KR101914717B1 (en) * 2017-09-28 2018-11-02 전남대학교산학협력단 Human Action Recognition Using Rreproducing Kernel Hilbert Space for Product manifold of Symmetric Positive definite Matrices
CN109325507A (en) * 2018-10-11 2019-02-12 湖北工业大学 A kind of image classification algorithms and system of combination super-pixel significant characteristics and HOG feature
CN109359623A (en) * 2018-11-13 2019-02-19 西北工业大学 High spectrum image based on depth Joint Distribution adaptation network migrates classification method
CN109373438A (en) * 2018-09-11 2019-02-22 太原理工大学 Heating energy-saving control method and system based on transfer learning algorithm
CN109766811A (en) * 2018-12-31 2019-05-17 复旦大学 The end-to-end detection and recognition methods of sea ship in a kind of satellite-borne SAR image
CN110210545A (en) * 2019-05-27 2019-09-06 河海大学 Infrared remote sensing water body classifier construction method based on transfer learning

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11061165B2 (en) * 2008-10-08 2021-07-13 Chester A. Wallace System, method and apparatus for exploration
US11501152B2 (en) * 2017-05-05 2022-11-15 Intel Corporation Efficient learning and using of topologies of neural networks in machine learning
CN109389174B (en) * 2018-10-23 2021-04-13 四川大学 Crowd gathering sensitive image detection method

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
A Transfer Learning Method For Ship Recognition In Multi-Optical Remote Sensing Satellites; Hongbo Li; 2018 IEEE/CIC International Conference on Communications in China (ICCC Workshops); 20190328; pp. 43-48 *
A Transfer Learning Method of Ship Identification Based on Weighted Hog Features; Hongbo Li; IGARSS 2019 - 2019 IEEE International Geoscience and Remote Sensing Symposium; 20190802; pp. 1302-1305 *
Recognition of seed varieties using neural networks analysis of electrophoretic images; M. Jedra; Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks; 20001231; pp. 521-526 *
An optical remote sensing image ship target detection technique; Fang Mengliang; Computer Technology and Development; 20190830; pp. 136-141 *
A novel image feature recognition method for end-face features of space launch vehicles; Chen Hao et al.; Transducer and Microsystem Technologies; 20161231; pp. 46-49 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant