CN111325094A - High-resolution range profile-based ship type identification method and system - Google Patents

High-resolution range profile-based ship type identification method and system

Info

Publication number
CN111325094A
Authority
CN
China
Prior art keywords
loss function
neural network
network model
ship
ship type
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010046491.9A
Other languages
Chinese (zh)
Inventor
但波
付哲泉
王亮
高山
戢治洪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Naval Aeronautical University
Original Assignee
Naval Aeronautical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Naval Aeronautical University filed Critical Naval Aeronautical University
Priority to CN202010046491.9A priority Critical patent/CN111325094A/en
Publication of CN111325094A publication Critical patent/CN111325094A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S7/41Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G01S7/417Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section involving the use of neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing
    • G06T2207/10044Radar image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Remote Sensing (AREA)
  • Astronomy & Astrophysics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a ship type identification method and system based on a high-resolution range profile. The method comprises: obtaining high-resolution range profiles of ships of different ship types in a database, each ship of a ship type corresponding to a plurality of high-resolution range profiles; performing feature extraction on the high-resolution range profiles to obtain a feature set; obtaining a neural network model that takes the feature set as input and the ship type as output; training the neural network model according to a central loss function and a Softmax loss function; obtaining a high-resolution range profile of a ship to be identified; and identifying the ship type of the ship to be identified by using the trained neural network model. The ship type identification method and system based on the high-resolution range profile can improve the accuracy of ship type classification and identification.

Description

High-resolution range profile-based ship type identification method and system
Technical Field
The invention relates to the field of ship type identification, in particular to a ship type identification method and system based on a high-resolution range profile.
Background
Radar target identification judges the type of a target from its radar echo signal. Broadband radars typically operate in the optical region, where a target can be viewed as a large number of scattering points of varying intensity. A high-resolution range profile (HRRP) is the vector sum of the echoes of the scattering points on a ship acquired by a broadband radar signal. It reflects the distribution of the scattering points along the radar line of sight, contains important structural characteristics of the ship, and is widely used in radar-based ship type identification. The ship type is identified by acquiring a high-resolution range profile of the ship, extracting features from it, and constructing a neural network.
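As a toy Python illustration of how a range profile arises as the vector sum of scattering-point echoes (the scatterer positions, amplitudes, range-cell size and wavelength below are arbitrary assumptions, not the patent's acquisition procedure):

import numpy as np

def toy_hrrp(scatterer_ranges, amplitudes, cell_size=0.5, n_cells=64,
             wavelength=0.03):
    # magnitude of the per-range-cell coherent (vector) sum of the echoes
    # of point scatterers; all parameter values here are illustrative
    profile = np.zeros(n_cells, dtype=complex)
    for r, a in zip(scatterer_ranges, amplitudes):
        cell = int(r // cell_size)
        if 0 <= cell < n_cells:
            profile[cell] += a * np.exp(-1j * 4 * np.pi * r / wavelength)
    return np.abs(profile)

hrrp = toy_hrrp(scatterer_ranges=[3.1, 3.3, 10.2, 18.7],
                amplitudes=[1.0, 0.6, 0.8, 1.2])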
In the prior art, the final classification of the target features is done with a loss function. The loss function measures the difference between the predicted value and the true value and is generally denoted L(y_, y), where y_ denotes the predicted value and y denotes the true value. For the Softmax loss function, the magnitude of the loss value is related to the modulus (norm) of the feature. Once a sample is already classified correctly, continued Softmax optimization mainly drives the feature norm to keep growing; this change does not further improve the classification and does not improve the generalization ability of the model, because the ideal situation in target classification is for features of the same class to be as close to each other as possible. The Softmax loss function is easy to optimize and converges quickly, and it is therefore generally used as the loss function in multi-class convolutional neural networks. Softmax Loss is formally the composition of the Softmax function and the cross-entropy loss; by giving every target class the maximum log-likelihood in probability space, it guarantees that different classes are classified correctly. However, during iterative training of the network, Softmax only distinguishes the features of different classes and makes no attempt to separate them more clearly. In the ship type classification and identification task, the features processed by Softmax divide the feature hyperplane or hypersphere into blocks corresponding to the number of ship types, but by its nature Softmax cannot guarantee intra-class aggregation and inter-class separation of the features, so the trained network model cannot accurately classify and identify ship types. The center loss function (Center Loss, CL), by contrast, constructs a class center for the features of each target class and penalizes target features that are far from their class center, making the intra-class distances of the target features more compact and thereby reducing the intra-class distance while increasing the inter-class distance. However, the center loss function is not suitable for direct feature classification, because computing the centers over the features of all samples consumes huge computing resources when the training data set is large.
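A small numerical illustration of this point in Python (the logit values are arbitrary assumptions): scaling up the logits of a sample that is already classified correctly keeps reducing the Softmax cross-entropy loss without changing the decision, which is the norm-growth effect described above.

import numpy as np

def softmax_cross_entropy(logits, label):
    # standard Softmax followed by cross-entropy on the true label
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return -np.log(p[label])

z = np.array([2.0, 1.0, 0.5])            # sample already classified as class 0
for scale in (1, 3, 10):
    print(scale, softmax_cross_entropy(scale * z, 0))   # loss keeps dropping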
Disclosure of Invention
The invention aims to provide a ship type identification method and system based on a high-resolution range profile, which can improve the accuracy of ship type classification identification.
In order to achieve the purpose, the invention provides the following scheme:
a ship type identification method based on high-resolution range profile comprises the following steps:
acquiring high-resolution range profiles of ships of different ship types in a database; each ship of a ship type corresponds to a plurality of high-resolution range profiles;
performing feature extraction on the high-resolution range profile to obtain a feature set;
acquiring a neural network model; the neural network model takes the characteristic set as input and takes the ship type as output;
training the neural network model according to a central loss function and a Softmax loss function;
acquiring a high-resolution range profile of a ship to be identified;
and identifying the ship type of the ship to be identified by using the trained neural network model.
Optionally, the training the neural network model according to the central loss function and the Softmax loss function specifically includes:
using the formula
L_AMSC = L_AMS + λ·L_C = −(1/M)·Σ_{i=1}^{M} log( e^{s·(cos θ_{y_i,i} − μ)} / ( e^{s·(cos θ_{y_i,i} − μ)} + Σ_{j≠y_i} e^{s·cos θ_{j,i}} ) ) + (λ/2)·Σ_{i=1}^{M} ||x_i − c_{y_i}||²,
determining a joint loss function; L_AMSC is the joint loss function, L_AMS is the Softmax loss function, L_C is the central loss function, and λ is the weight of L_C in L_AMSC; x_i ∈ R^d is the feature of the i-th sample and belongs to the y_i-th category, d being the dimension of the features; W_j ∈ R^d is the j-th column of the weight matrix W ∈ R^(d×N) of the last fully connected layer; W_{y_i}^T·x_i is the weighted result of the target feature of the i-th sample, and after normalization of the features and weights W_j^T·x_i = cos θ_{j,i}; M is the number of high-resolution range profiles of ships of different ship types in the database; N is the number of ship types; s is a scaling factor for scaling the cosine values; μ is an integer greater than 1 for controlling the size of the angular margin between features; c_{y_i} ∈ R^d is the feature center of the y_i-th ship type;
and training the neural network model according to the joint loss function.
Optionally, the training the neural network model according to the central loss function and the Softmax loss function further includes:
using the formula
L_C = (1/2)·Σ_{i=1}^{M} ||x_i − c_{y_i}||²,
determining a central loss function;
using the formula
L_AMS = −(1/M)·Σ_{i=1}^{M} log( e^{s·(cos θ_{y_i,i} − μ)} / ( e^{s·(cos θ_{y_i,i} − μ)} + Σ_{j≠y_i} e^{s·cos θ_{j,i}} ) ),
determining a Softmax loss function.
Optionally, the training the neural network model according to the joint loss function specifically includes:
obtaining a deviation value of the joint loss function at the t-th iteration;
updating a full-connection matrix, convolution kernel parameters and a characteristic center of each ship type of the joint loss function in the neural network according to the deviation value;
calculating a deviation value of the updated joint loss function according to the updated neural network model and the updated characteristic center of each ship type of the joint loss function;
judging whether the deviation value of the updated joint loss function is larger than a set threshold value or not;
if the deviation value of the updated joint loss function is larger than the set threshold value, returning to the step of updating the full connection matrix, the convolution kernel parameters and the characteristic center of each ship type of the joint loss function in the neural network according to the deviation value;
and if the deviation value of the updated joint loss function is not greater than the set threshold, taking the currently updated neural network model as the trained neural network model.
A system for vessel type identification based on high resolution range profile, comprising:
the first acquisition module is used for acquiring high-resolution range profiles of ships of different ship types in the database; each ship of a ship type corresponds to a plurality of high-resolution range profiles;
the characteristic set determining module is used for extracting characteristics of the high-resolution range profile to obtain a characteristic set;
the neural network model acquisition module is used for acquiring a neural network model; the neural network model takes the characteristic set as input and takes the ship type as output;
the neural network model training module is used for training the neural network model according to a central loss function and a Softmax loss function;
the second acquisition module is used for acquiring a high-resolution range profile of the ship to be identified;
and the ship type identification module is used for identifying the ship type of the ship to be identified by using the trained neural network model.
Optionally, the neural network model training module specifically includes:
a joint loss function determination unit for determining a joint loss function using the formula
L_AMSC = L_AMS + λ·L_C = −(1/M)·Σ_{i=1}^{M} log( e^{s·(cos θ_{y_i,i} − μ)} / ( e^{s·(cos θ_{y_i,i} − μ)} + Σ_{j≠y_i} e^{s·cos θ_{j,i}} ) ) + (λ/2)·Σ_{i=1}^{M} ||x_i − c_{y_i}||²;
L_AMSC is the joint loss function, L_AMS is the Softmax loss function, L_C is the central loss function, and λ is the weight of L_C in L_AMSC; x_i ∈ R^d is the feature of the i-th sample and belongs to the y_i-th category, d being the dimension of the features; W_j ∈ R^d is the j-th column of the weight matrix W ∈ R^(d×N) of the last fully connected layer; W_{y_i}^T·x_i is the weighted result of the target feature of the i-th sample, and after normalization of the features and weights W_j^T·x_i = cos θ_{j,i}; M is the number of high-resolution range profiles of ships of different ship types in the database; N is the number of ship types; s is a scaling factor for scaling the cosine values; μ is an integer greater than 1 for controlling the size of the angular margin between features; c_{y_i} ∈ R^d is the feature center of the y_i-th ship type;
and the neural network model training unit is used for training the neural network model according to the joint loss function.
Optionally, the method further includes:
a central loss function determination module for determining a central loss function using the formula
L_C = (1/2)·Σ_{i=1}^{M} ||x_i − c_{y_i}||²;
a Softmax loss function determination module for determining a Softmax loss function using the formula
L_AMS = −(1/M)·Σ_{i=1}^{M} log( e^{s·(cos θ_{y_i,i} − μ)} / ( e^{s·(cos θ_{y_i,i} − μ)} + Σ_{j≠y_i} e^{s·cos θ_{j,i}} ) ).
Optionally, the neural network model training unit specifically includes:
the first deviation value calculation subunit is used for obtaining the deviation value of the joint loss function at the t-th iteration;
the updating subunit is used for updating the full-connection matrix, the convolution kernel parameters and the characteristic center of each ship type of the joint loss function in the neural network according to the deviation value;
the deviation value updating subunit is used for calculating the deviation value of the updated joint loss function according to the updated neural network model and the updated characteristic center of each ship type of the joint loss function;
a judging subunit, configured to judge whether a deviation value of the updated joint loss function is greater than a set threshold;
a continuous updating subunit, configured to, if the deviation value of the updated joint loss function is greater than the set threshold, return to the step of updating the full-connection matrix, the convolution kernel parameters, and the feature center of each ship type of the joint loss function in the neural network according to the deviation value;
and the trained neural network model determining subunit is used for taking the currently updated neural network model as the trained neural network model if the deviation value of the updated joint loss function is not greater than the set threshold value.
According to the specific embodiment provided by the invention, the invention discloses the following technical effects:
In the ship type identification method and system based on the high-resolution range profile, the neural network model is trained with the central loss function and the Softmax loss function jointly. The Softmax loss function increases the inter-class difference of the features of different targets through a feature boundary constraint, while the central loss function reduces the intra-class difference of the target features through center clustering, which solves the problem that an ordinary Softmax loss only distinguishes the features of different classes without separating them. The features thus satisfy intra-class aggregation and inter-class dispersion, and the accuracy of ship type classification and identification is improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings without inventive exercise.
Fig. 1 is a schematic flow chart of a ship type identification method based on a high-resolution range profile according to the present invention;
fig. 2 is a schematic structural diagram of a ship type identification system based on a high-resolution range profile provided by the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention aims to provide a ship type identification method and system based on a high-resolution range profile, which can improve the accuracy of ship type classification identification.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
Fig. 1 is a schematic flow chart of a method for identifying a ship type based on a high-resolution range profile, as shown in fig. 1, the method for identifying a ship type based on a high-resolution range profile includes:
s101, acquiring high-resolution range images of ships of different ship types in a database; each ship of the ship type corresponds to a plurality of the high-resolution range images.
And S102, performing feature extraction on the high-resolution range profile to obtain a feature set.
S103, acquiring a neural network model; the neural network model takes the characteristic set as input and takes the ship type as output.
And S104, training the neural network model according to the central loss function and the Softmax loss function.
Using the formula
L_C = (1/2)·Σ_{i=1}^{M} ||x_i − c_{y_i}||²,
the central loss function is determined. According to the formula
Δc_j = [ Σ_{i=1}^{M} δ(y_i = j)·(c_j − x_i) ] / [ 1 + Σ_{i=1}^{M} δ(y_i = j) ]
and the formula
c_j^{t+1} = c_j^t − α·Δc_j^t,
the centers of the central loss function are updated, where δ(y_i = j) equals 1 when y_i is the j-th ship type and 0 otherwise, and α is the update rate of the centers.
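For illustration, the center loss and the center-update rule above can be sketched in Python (PyTorch) as follows; the toy sizes and the value of alpha are assumptions for illustration and are not fixed by the present disclosure.

import torch

def center_loss(feats, labels, centers):
    # L_C = 1/2 * sum_i ||x_i - c_{y_i}||^2 over the batch
    return 0.5 * ((feats - centers[labels]) ** 2).sum()

def update_centers(feats, labels, centers, alpha=0.5):
    # c_j <- c_j - alpha * delta_c_j, where delta_c_j averages (c_j - x_i)
    # over the samples with y_i = j; the +1 in the denominator avoids a
    # division by zero for ship types absent from the batch
    new_centers = centers.clone()
    for j in range(centers.size(0)):
        mask = labels == j                                   # delta(y_i = j)
        delta = (centers[j] - feats[mask]).sum(dim=0) / (1 + mask.sum())
        new_centers[j] = centers[j] - alpha * delta
    return new_centers

# toy example: M = 4 features of dimension d = 3, N = 2 ship types
feats = torch.randn(4, 3)
labels = torch.tensor([0, 0, 1, 1])
centers = torch.zeros(2, 3)
print(center_loss(feats, labels, centers))
centers = update_centers(feats, labels, centers)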
Using the formula
L_AMS = −(1/M)·Σ_{i=1}^{M} log( e^{s·(cos θ_{y_i,i} − μ)} / ( e^{s·(cos θ_{y_i,i} − μ)} + Σ_{j≠y_i} e^{s·cos θ_{j,i}} ) ),
the Softmax loss function is determined. The specific determination process of the Softmax loss function is as follows:
Using the formula
L_S = −(1/M)·Σ_{i=1}^{M} log( e^{W_{y_i}^T·x_i + b_{y_i}} / Σ_{j=1}^{N} e^{W_j^T·x_i + b_j} ),
a first loss function is determined. By adding multiplicative angular constraints, larger feature boundaries and better recognition performance are obtained. The A-Softmax loss function normalizes the weight vectors (so that ||W_j|| becomes 1) and changes
||x_i||·cos θ_{y_i,i}
into
||x_i||·ψ(θ_{y_i,i}),
a form that strengthens the boundary constraint. The A-Softmax formula is:
L_AS = −(1/M)·Σ_{i=1}^{M} log( e^{||x_i||·ψ(θ_{y_i,i})} / ( e^{||x_i||·ψ(θ_{y_i,i})} + Σ_{j≠y_i} e^{||x_i||·cos θ_{j,i}} ) )
where ψ (θ) is generally defined as a piecewise function as follows:
ψ(θ)=(-1)kcos(μθ)-2k,
Figure BDA0002369584280000075
k∈[0,μ-1]。
μ is typically an integer greater than 1 and is used to control the size of the angular margin between features. When μ = 1, L_AS reduces to the conventional Softmax loss function. As a parameter closely tied to the classification boundary of the features, a larger μ imposes a stronger angular-margin constraint and makes the model harder to converge during training, so that various training tricks are needed and application becomes difficult. The multiplicative boundary constraint on θ inside ψ(θ) is therefore converted into the more concise subtractive form ψ(θ) = cos θ − μ, which is more versatile and simpler than the A-Softmax loss function. In actual operation, after the features and weights are normalized the input becomes
f = W_{y_i}^T·x_i = cos θ_{y_i,i}.
therefore, only Ψ (f) ═ f — μ needs to be calculated during the forward propagation. And no extra calculation is needed in the back propagation process because Ψ' (f) ═ 1, which is easier to implement than the above formula.
Feature similarity can be calculated in two forms: Euclidean distance, where a smaller distance between feature points means a higher vector similarity, and cosine distance, where a smaller angle between feature points means a higher vector similarity. A cosine layer is constructed that uses the cosine as the similarity between two features while normalizing both the weights and the features. Introducing a hyper-parameter s as a scale factor to scale the cosine values, the Softmax loss function is obtained as:
L_AMS = −(1/M)·Σ_{i=1}^{M} log( e^{s·(cos θ_{y_i,i} − μ)} / ( e^{s·(cos θ_{y_i,i} − μ)} + Σ_{j≠y_i} e^{s·cos θ_{j,i}} ) )
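A minimal PyTorch sketch of this additive-margin Softmax loss is given below; the values s = 30.0 and mu = 0.35 are illustrative assumptions (the additive margin is typically fractional, whereas the multiplicative A-Softmax margin above is an integer), not values fixed by the present disclosure.

import torch
import torch.nn.functional as F

def am_softmax_loss(feats, W, labels, s=30.0, mu=0.35):
    # features x_i and the columns of W are both normalized, so that
    # W_j^T x_i = cos(theta_{j,i}); the additive margin mu is subtracted
    # from the target-class cosine only, and s rescales the cosines
    cos = F.normalize(feats, dim=1) @ F.normalize(W, dim=0)     # (M, N)
    onehot = F.one_hot(labels, num_classes=W.size(1)).float()
    logits = s * (cos - mu * onehot)
    return F.cross_entropy(logits, labels)   # = -(1/M) * sum_i log p(y_i)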
Using the formula
L_AMSC = L_AMS + λ·L_C = −(1/M)·Σ_{i=1}^{M} log( e^{s·(cos θ_{y_i,i} − μ)} / ( e^{s·(cos θ_{y_i,i} − μ)} + Σ_{j≠y_i} e^{s·cos θ_{j,i}} ) ) + (λ/2)·Σ_{i=1}^{M} ||x_i − c_{y_i}||²,
the joint loss function is determined. L_AMSC is the joint loss function, L_AMS is the Softmax loss function, L_C is the central loss function, and λ is the weight of L_C in L_AMSC; x_i ∈ R^d is the feature of the i-th sample and belongs to the y_i-th category, d being the dimension of the features; W_j ∈ R^d is the j-th column of the weight matrix W ∈ R^(d×N) of the last fully connected layer; W_{y_i}^T·x_i is the weighted result of the target feature of the i-th sample, and after normalization of the features and weights W_j^T·x_i = cos θ_{j,i}; M is the number of high-resolution range profiles of ships of different ship types in the database; N is the number of ship types; s is a scaling factor for scaling the cosine values; μ is an integer greater than 1 for controlling the size of the angular margin between features; c_{y_i} ∈ R^d is the feature center of the y_i-th ship type.
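Combining the two terms gives the joint loss, sketched below by reusing the center_loss and am_softmax_loss helpers from the sketches above; the weight lam (λ) = 0.01 is an illustrative assumption, the disclosure does not fix its value.

def joint_loss(feats, W, labels, centers, s=30.0, mu=0.35, lam=0.01):
    # L_AMSC = L_AMS + lambda * L_C, computed on one batch
    return am_softmax_loss(feats, W, labels, s=s, mu=mu) \
           + lam * center_loss(feats, labels, centers)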
And training the neural network model according to the joint loss function.
The deviation value of the joint loss function at the t-th iteration, ∂L_AMSC^t/∂x_i^t, is obtained for each sample. Using the formulas
W^{t+1} = W^t − η^t·∂L_AMSC^t/∂W^t  and  θ_C^{t+1} = θ_C^t − η^t·Σ_{i=1}^{M} (∂L_AMSC^t/∂x_i^t)·(∂x_i^t/∂θ_C^t)
and the formula
c_j^{t+1} = c_j^t − α·Δc_j^t,
the fully connected matrix W and the convolution kernel parameters θ_C in the neural network and the feature center c_j of each ship type in the joint loss function are updated, where η^t is the learning rate at the t-th iteration.
And calculating the deviation value of the updated joint loss function according to the updated neural network model and the updated characteristic center of each ship type of the joint loss function.
And judging whether the deviation value of the updated joint loss function is larger than a set threshold value.
And if the deviation value of the updated joint loss function is larger than the set threshold value, returning to the step of updating the full-connection matrix, the convolution kernel parameters and the characteristic center of each ship type of the joint loss function in the neural network according to the deviation value.
And if the deviation value of the updated joint loss function is not greater than the set threshold, taking the currently updated neural network model as the trained neural network model.
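The iterative scheme above might be organized as in the following PyTorch sketch, reusing the joint_loss and update_centers helpers from the earlier sketches. The optimizer, learning rates, threshold and epoch limit are illustrative assumptions, W is assumed to be a tensor created with requires_grad=True, and the "deviation value" is interpreted here simply as the value of the joint loss.

import torch

def train(model, W, centers, loader, threshold=1e-3, lr=1e-3, alpha=0.5,
          lam=0.01, max_epochs=200):
    # backpropagate L_AMSC to update the convolution kernel parameters
    # theta_C (inside `model`) and the fully connected matrix W; update the
    # class centers c_j with the separate center rule; stop once the
    # deviation value of the joint loss is no longer above the threshold
    optimizer = torch.optim.SGD(list(model.parameters()) + [W], lr=lr)
    for _ in range(max_epochs):
        for hrrp_feats, labels in loader:        # batches from the feature set
            feats = model(hrrp_feats)            # network output features x_i
            loss = joint_loss(feats, W, labels, centers, lam=lam)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            with torch.no_grad():                # centers use their own rule
                centers.copy_(update_centers(feats.detach(), labels,
                                             centers, alpha))
            if loss.item() <= threshold:
                return model, W, centers         # trained neural network model
    return model, W, centers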
And S105, acquiring a high-resolution range profile of the ship to be identified.
And S106, identifying the ship type of the ship to be identified by using the trained neural network model.
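Putting steps S101 to S106 together, a hedged end-to-end sketch might look as follows. The convolutional architecture, the feature dimension, the number of ship types and the data loading are placeholders that the patent does not specify, and train refers to the training sketch above.

import torch
import torch.nn as nn
import torch.nn.functional as F

class HRRPNet(nn.Module):
    # placeholder 1-D convolutional network over HRRP feature vectors; the
    # architecture is an assumption, the present disclosure does not fix one
    def __init__(self, feat_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(8))
        self.fc = nn.Linear(16 * 8, feat_dim)

    def forward(self, x):                 # x: (batch, n_range_cells)
        h = self.conv(x.unsqueeze(1))     # (batch, 16, 8)
        return self.fc(h.flatten(1))      # features x_i fed to the joint loss

n_types, feat_dim = 5, 64                               # illustrative sizes
model = HRRPNet(feat_dim)
W = torch.randn(feat_dim, n_types, requires_grad=True)  # last FC weight matrix
centers = torch.zeros(n_types, feat_dim)                # feature centers c_j

# S101-S104: train on the database HRRPs, where `loader` yields
# (hrrp_batch, ship_type_batch) pairs built from the extracted feature set:
# model, W, centers = train(model, W, centers, loader)

def identify(model, W, hrrp):
    # S105-S106: identify the ship type of a ship to be identified from its
    # high-resolution range profile (hrrp: (batch, n_range_cells))
    with torch.no_grad():
        cos = F.normalize(model(hrrp), dim=1) @ F.normalize(W, dim=0)
        return cos.argmax(dim=1)          # index of the predicted ship type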
Fig. 2 is a schematic structural diagram of the ship type identification system based on a high-resolution range profile provided by the present invention. As shown in fig. 2, the system comprises: a first acquisition module 201, a feature set determination module 202, a neural network model acquisition module 203, a neural network model training module 204, a second acquisition module 205 and a ship type identification module 206.
The first acquisition module 201 is configured to acquire high-resolution range profiles of ships of different ship types in the database; each ship of a ship type corresponds to a plurality of the high-resolution range profiles.
The feature set determining module 202 is configured to perform feature extraction on the high-resolution range profile to obtain a feature set.
The neural network model obtaining module 203 is used for obtaining a neural network model; the neural network model takes the characteristic set as input and takes the ship type as output.
The neural network model training module 204 is configured to train the neural network model according to a central loss function and a Softmax loss function.
The second acquisition module 205 is configured to acquire a high-resolution range profile of the ship to be identified.
The ship type identification module 206 is configured to identify the ship type of the ship to be identified by using the trained neural network model.
The neural network model training module 204 specifically includes: a joint loss function determination unit and a neural network model training unit.
The joint loss function determination unit is configured to determine the joint loss function using the formula
L_AMSC = L_AMS + λ·L_C = −(1/M)·Σ_{i=1}^{M} log( e^{s·(cos θ_{y_i,i} − μ)} / ( e^{s·(cos θ_{y_i,i} − μ)} + Σ_{j≠y_i} e^{s·cos θ_{j,i}} ) ) + (λ/2)·Σ_{i=1}^{M} ||x_i − c_{y_i}||².
L_AMSC is the joint loss function, L_AMS is the Softmax loss function, L_C is the central loss function, and λ is the weight of L_C in L_AMSC; x_i ∈ R^d is the feature of the i-th sample and belongs to the y_i-th category, d being the dimension of the features; W_j ∈ R^d is the j-th column of the weight matrix W ∈ R^(d×N) of the last fully connected layer; W_{y_i}^T·x_i is the weighted result of the target feature of the i-th sample, and after normalization of the features and weights W_j^T·x_i = cos θ_{j,i}; M is the number of high-resolution range profiles of ships of different ship types in the database; N is the number of ship types; s is a scaling factor for scaling the cosine values; μ is an integer greater than 1 for controlling the size of the angular margin between features; c_{y_i} ∈ R^d is the feature center of the y_i-th ship type.
And the neural network model training unit is used for training the neural network model according to the joint loss function.
The ship type identification system based on the high-resolution range profile further comprises: a central loss function determination module and a Softmax loss function determination module.
The central loss function determination module is configured to determine the central loss function using the formula
L_C = (1/2)·Σ_{i=1}^{M} ||x_i − c_{y_i}||².
The Softmax loss function determination module is configured to determine the Softmax loss function using the formula
L_AMS = −(1/M)·Σ_{i=1}^{M} log( e^{s·(cos θ_{y_i,i} − μ)} / ( e^{s·(cos θ_{y_i,i} − μ)} + Σ_{j≠y_i} e^{s·cos θ_{j,i}} ) ).
The neural network model training unit specifically comprises: the first deviation value calculating subunit, the updating subunit, the deviation value updating subunit, the judging subunit, the continuous updating subunit and the trained neural network model determining subunit.
The first deviation value calculation subunit is used for obtaining the deviation value of the joint loss function at the t-th iteration.
The updating subunit is used for updating the fully connected matrix, the convolution kernel parameters and the feature center of each ship type of the joint loss function in the neural network according to the deviation value.
And the deviation value updating subunit is used for calculating the deviation value of the updated joint loss function according to the updated neural network model and the updated characteristic center of each ship type of the joint loss function.
And the judgment subunit is used for judging whether the deviation value of the updated joint loss function is greater than a set threshold value.
And the continuous updating subunit is configured to return to the step of updating the full connection matrix, the convolution kernel parameter, and the feature center of each ship type of the joint loss function in the neural network according to the deviation value if the deviation value of the updated joint loss function is greater than the set threshold value.
And the trained neural network model determining subunit is used for taking the currently updated neural network model as the trained neural network model if the deviation value of the updated joint loss function is not greater than the set threshold value.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. For the system disclosed by the embodiment, the description is relatively simple because the system corresponds to the method disclosed by the embodiment, and the relevant points can be referred to the method part for description.
The principles and embodiments of the present invention have been described herein using specific examples, which are provided only to help understand the method and the core concept of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, the specific embodiments and the application range may be changed. In view of the above, the present disclosure should not be construed as limiting the invention.

Claims (8)

1. A ship type identification method based on high-resolution range profile is characterized by comprising the following steps:
acquiring high-resolution range profiles of ships of different ship types in a database; each ship of a ship type corresponds to a plurality of high-resolution range profiles;
performing feature extraction on the high-resolution range profile to obtain a feature set;
acquiring a neural network model; the neural network model takes the characteristic set as input and takes the ship type as output;
training the neural network model according to a central loss function and a Softmax loss function;
acquiring a high-resolution range profile of a ship to be identified;
and identifying the ship type of the ship to be identified by using the trained neural network model.
2. The method for identifying a ship type based on a high-resolution range profile as claimed in claim 1, wherein the training of the neural network model according to a central loss function and a Softmax loss function specifically comprises:
using the formula
L_AMSC = L_AMS + λ·L_C = −(1/M)·Σ_{i=1}^{M} log( e^{s·(cos θ_{y_i,i} − μ)} / ( e^{s·(cos θ_{y_i,i} − μ)} + Σ_{j≠y_i} e^{s·cos θ_{j,i}} ) ) + (λ/2)·Σ_{i=1}^{M} ||x_i − c_{y_i}||²,
determining a joint loss function; wherein L_AMSC is the joint loss function, L_AMS is the Softmax loss function, L_C is the central loss function, and λ is the weight of L_C in L_AMSC; x_i ∈ R^d is the feature of the i-th sample and belongs to the y_i-th category, d being the dimension of the features; W_j ∈ R^d is the j-th column of the weight matrix W ∈ R^(d×N) of the last fully connected layer; W_{y_i}^T·x_i is the weighted result of the target feature of the i-th sample, and after normalization of the features and weights W_j^T·x_i = cos θ_{j,i}; M is the number of high-resolution range profiles of ships of different ship types in the database; N is the number of ship types; s is a scaling factor for scaling the cosine values; μ is an integer greater than 1 for controlling the size of the angular margin between features; c_{y_i} ∈ R^d is the feature center of the y_i-th ship type;
and training the neural network model according to the joint loss function.
3. The method for identifying ship types based on high-resolution range profiles as claimed in claim 2, wherein the training of the neural network model according to the central loss function and the Softmax loss function further comprises:
using the formula
L_C = (1/2)·Σ_{i=1}^{M} ||x_i − c_{y_i}||²,
determining a central loss function;
using the formula
L_AMS = −(1/M)·Σ_{i=1}^{M} log( e^{s·(cos θ_{y_i,i} − μ)} / ( e^{s·(cos θ_{y_i,i} − μ)} + Σ_{j≠y_i} e^{s·cos θ_{j,i}} ) ),
determining a Softmax loss function.
4. The method according to claim 3, wherein the training of the neural network model according to the joint loss function specifically includes:
obtaining a deviation value of the joint loss function at the t-th iteration;
updating a full-connection matrix, convolution kernel parameters and a characteristic center of each ship type of the joint loss function in the neural network according to the deviation value;
calculating a deviation value of the updated joint loss function according to the updated neural network model and the updated characteristic center of each ship type of the joint loss function;
judging whether the deviation value of the updated joint loss function is larger than a set threshold value or not;
if the deviation value of the updated joint loss function is larger than the set threshold value, returning to the step of updating the full connection matrix, the convolution kernel parameters and the characteristic center of each ship type of the joint loss function in the neural network according to the deviation value;
and if the deviation value of the updated joint loss function is not greater than the set threshold, taking the currently updated neural network model as the trained neural network model.
5. A system for identifying a ship type based on a high-resolution range profile, comprising:
the first acquisition module is used for acquiring high-resolution range profiles of ships of different ship types in the database; each ship of a ship type corresponds to a plurality of high-resolution range profiles;
the characteristic set determining module is used for extracting characteristics of the high-resolution range profile to obtain a characteristic set;
the neural network model acquisition module is used for acquiring a neural network model; the neural network model takes the characteristic set as input and takes the ship type as output;
the neural network model training module is used for training the neural network model according to a central loss function and a Softmax loss function;
the second acquisition module is used for acquiring a high-resolution range profile of the ship to be identified;
and the ship type identification module is used for identifying the ship type of the ship to be identified by using the trained neural network model.
6. The system for identifying ship types based on high-resolution range profiles as claimed in claim 5, wherein the neural network model training module specifically comprises:
a joint loss function determination unit for determining a joint loss function using the formula
L_AMSC = L_AMS + λ·L_C = −(1/M)·Σ_{i=1}^{M} log( e^{s·(cos θ_{y_i,i} − μ)} / ( e^{s·(cos θ_{y_i,i} − μ)} + Σ_{j≠y_i} e^{s·cos θ_{j,i}} ) ) + (λ/2)·Σ_{i=1}^{M} ||x_i − c_{y_i}||²;
wherein L_AMSC is the joint loss function, L_AMS is the Softmax loss function, L_C is the central loss function, and λ is the weight of L_C in L_AMSC; x_i ∈ R^d is the feature of the i-th sample and belongs to the y_i-th category, d being the dimension of the features; W_j ∈ R^d is the j-th column of the weight matrix W ∈ R^(d×N) of the last fully connected layer; W_{y_i}^T·x_i is the weighted result of the target feature of the i-th sample, and after normalization of the features and weights W_j^T·x_i = cos θ_{j,i}; M is the number of high-resolution range profiles of ships of different ship types in the database; N is the number of ship types; s is a scaling factor for scaling the cosine values; μ is an integer greater than 1 for controlling the size of the angular margin between features; c_{y_i} ∈ R^d is the feature center of the y_i-th ship type;
and the neural network model training unit is used for training the neural network model according to the joint loss function.
7. The system of claim 6, further comprising:
a central loss function determination module for determining a central loss function using the formula
L_C = (1/2)·Σ_{i=1}^{M} ||x_i − c_{y_i}||²;
a Softmax loss function determination module for determining a Softmax loss function using the formula
L_AMS = −(1/M)·Σ_{i=1}^{M} log( e^{s·(cos θ_{y_i,i} − μ)} / ( e^{s·(cos θ_{y_i,i} − μ)} + Σ_{j≠y_i} e^{s·cos θ_{j,i}} ) ).
8. The system according to claim 7, wherein the neural network model training unit specifically comprises:
the first deviation value calculation subunit is used for obtaining the deviation value of the joint loss function at the t-th iteration;
the updating subunit is used for updating the full-connection matrix, the convolution kernel parameters and the characteristic center of each ship type of the joint loss function in the neural network according to the deviation value;
the deviation value updating subunit is used for calculating the deviation value of the updated joint loss function according to the updated neural network model and the updated characteristic center of each ship type of the joint loss function;
a judging subunit, configured to judge whether a deviation value of the updated joint loss function is greater than a set threshold;
a continuous updating subunit, configured to, if the deviation value of the updated joint loss function is greater than the set threshold, return to the step of updating the full-connection matrix, the convolution kernel parameters, and the feature center of each ship type of the joint loss function in the neural network according to the deviation value;
and the trained neural network model determining subunit is used for taking the currently updated neural network model as the trained neural network model if the deviation value of the updated joint loss function is not greater than the set threshold value.
CN202010046491.9A 2020-01-16 2020-01-16 High-resolution range profile-based ship type identification method and system Pending CN111325094A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010046491.9A CN111325094A (en) 2020-01-16 2020-01-16 High-resolution range profile-based ship type identification method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010046491.9A CN111325094A (en) 2020-01-16 2020-01-16 High-resolution range profile-based ship type identification method and system

Publications (1)

Publication Number Publication Date
CN111325094A true CN111325094A (en) 2020-06-23

Family

ID=71172539

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010046491.9A Pending CN111325094A (en) 2020-01-16 2020-01-16 High-resolution range profile-based ship type identification method and system

Country Status (1)

Country Link
CN (1) CN111325094A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110263603A (en) * 2018-05-14 2019-09-20 桂林远望智能通信科技有限公司 Face identification method and device based on center loss and residual error visual simulation network
CN108446689A (en) * 2018-05-30 2018-08-24 南京开为网络科技有限公司 A kind of face identification method
CN109033938A (en) * 2018-06-01 2018-12-18 上海阅面网络科技有限公司 A kind of face identification method based on ga s safety degree Fusion Features
CN109085918A (en) * 2018-06-28 2018-12-25 天津大学 Acupuncture needling manipulation training method based on myoelectricity
CN110598801A (en) * 2019-09-24 2019-12-20 东北大学 Vehicle type recognition method based on convolutional neural network

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
FENG WANG et al.: "Additive Margin Softmax for Face Verification", IEEE SIGNAL PROCESSING LETTERS *
MINGCHAO JIANG et al.: "Additive Margin Softmax with Center Loss for Face Recognition", ICVIP 2018 *
余东行 et al.: "Ship detection in remote sensing images combining saliency features and convolutional neural networks" (联合显著性特征与卷积神经网络的遥感影像舰船检测), Journal of Image and Graphics (中国图象图形学报) *
余成波 et al.: "Face recognition under joint supervision of center loss and Softmax loss" (中心损失与Softmax损失联合监督下的人脸识别), Journal of Chongqing University (重庆大学学报) *
郭晨 et al.: "Radar ship target recognition based on deep multi-scale one-dimensional convolutional neural network" (基于深度多尺度一维卷积神经网络的雷达舰船目标识别), Journal of Electronics & Information Technology (电子与信息学报) *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113406623A (en) * 2021-05-07 2021-09-17 中山大学 Target identification method, device and medium based on radar high-resolution range profile
CN114898222A (en) * 2022-04-21 2022-08-12 中国人民解放军91977部队 Ship target track identification method and device
CN114898222B (en) * 2022-04-21 2024-01-02 中国人民解放军91977部队 Ship target track identification method and device

Similar Documents

Publication Publication Date Title
CN111507335B (en) Method and device for automatically labeling training images used for deep learning network
CN109919108B (en) Remote sensing image rapid target detection method based on deep hash auxiliary network
CN110472483B (en) SAR image-oriented small sample semantic feature enhancement method and device
US11670071B2 (en) Fine-grained image recognition
CN111160407B (en) Deep learning target detection method and system
CN109063719B (en) Image classification method combining structure similarity and class information
CN111160249A (en) Multi-class target detection method of optical remote sensing image based on cross-scale feature fusion
CN111428557A (en) Method and device for automatically checking handwritten signature based on neural network model
CN113095333B (en) Unsupervised feature point detection method and unsupervised feature point detection device
Liu et al. An improved InceptionV3 network for obscured ship classification in remote sensing images
CN110569738A (en) natural scene text detection method, equipment and medium based on dense connection network
CN113743417B (en) Semantic segmentation method and semantic segmentation device
CN117409190B (en) Real-time infrared image target detection method, device, equipment and storage medium
CN112529068B (en) Multi-view image classification method, system, computer equipment and storage medium
CN110852327A (en) Image processing method, image processing device, electronic equipment and storage medium
CN111325094A (en) High-resolution range profile-based ship type identification method and system
CN111738319B (en) Clustering result evaluation method and device based on large-scale samples
CN114926693A (en) SAR image small sample identification method and device based on weighted distance
CN111079807B (en) Ground object classification method and device
CN107564013B (en) Scene segmentation correction method and system fusing local information
CN117437423A (en) Weak supervision medical image segmentation method and device based on SAM collaborative learning and cross-layer feature aggregation enhancement
US20230401670A1 (en) Multi-scale autoencoder generation method, electronic device and readable storage medium
CN113011376B (en) Marine ship remote sensing classification method and device, computer equipment and storage medium
CN115331021A (en) Dynamic feature extraction and description method based on multilayer feature self-difference fusion
CN114612802A (en) System and method for classifying fine granularity of ship target based on MBCNN

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200623