CN117671374A - Method and device for identifying image target of inverse synthetic aperture radar - Google Patents


Info

Publication number
CN117671374A
Authority
CN
China
Prior art keywords
target
angle
module
attention
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311683226.1A
Other languages
Chinese (zh)
Inventor
李家宽
叶春茂
余继周
申伦豪
冯博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Radio Measurement
Original Assignee
Beijing Institute of Radio Measurement
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Radio Measurement filed Critical Beijing Institute of Radio Measurement
Priority to CN202311683226.1A priority Critical patent/CN117671374A/en
Publication of CN117671374A publication Critical patent/CN117671374A/en


Landscapes

  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention discloses a method and a device for identifying an image target of an inverse synthetic aperture radar. The method comprises the following steps: acquiring an inverse synthetic aperture radar image of a target by applying a range-Doppler algorithm to target radar echo data, and dividing the inverse synthetic aperture radar images into a training set and a testing set; estimating an incident line-of-sight angle from the distance, azimuth and pitch information of the target contained in the radar echo data; constructing a convolutional neural network based on angle-guided attention; training the convolutional neural network with the training set to obtain a trained model; and testing the trained model with the testing set to obtain a target recognition result for the inverse synthetic aperture radar image. By introducing the incident line-of-sight angle of the target relative to the radar and coupling the target attitude with the target ISAR image features, the invention improves target recognition performance and accuracy.

Description

Method and device for identifying image target of inverse synthetic aperture radar
Technical Field
The invention belongs to the technical field of radars, and particularly relates to a method and a device for identifying an image target of an inverse synthetic aperture radar.
Background
Inverse synthetic aperture radar (ISAR) is a two-dimensional high-resolution imaging technique: high resolution in the range dimension is achieved by transmitting a large-bandwidth signal and applying pulse compression, while high resolution in the azimuth dimension is achieved through the Doppler frequency shift generated by the rotation of the target relative to the radar. ISAR has become an important means of observing sea, land, air and space targets. Since an ISAR image contains fine physical structure information of a target, it can be used for target feature extraction and radar automatic target recognition, and it plays an important role in modern military applications.
Conventional automatic target recognition from ISAR images first extracts effective image features based on manual experience and then designs a classifier to complete recognition. In practice, manually defined features require a great deal of prior knowledge and usually entail substantial information loss, so their ability to describe ISAR images is poor, which limits recognition accuracy and efficiency. To address this, the prior art has applied deep learning algorithms, whose capacity to describe fine-grained ISAR image features has advanced ISAR image recognition, but the problem of low target recognition accuracy remains.
Disclosure of Invention
The invention aims to provide a target identification method and device for inverse synthetic aperture radar images, together with a computer device and a computer-readable storage medium. By introducing the incident line-of-sight angle of the target relative to the radar and coupling the target attitude with the target ISAR image features, target recognition performance and accuracy can be improved.
One aspect of the present invention provides an inverse synthetic aperture radar image target recognition method, including:
step S1: acquiring an inverse synthetic aperture radar image of a target by adopting a range-Doppler algorithm for target radar echo data, and dividing the inverse synthetic aperture radar image into a training set and a testing set;
step S2: estimating an incident line of sight angle according to the distance, azimuth and pitching information of the target contained in the radar echo data;
step S3: constructing a convolutional neural network based on angle-guided attention, wherein the convolutional neural network comprises a convolutional layer, a pooling layer, a plurality of mixed attention residual modules, an angle-guided attention structure and a full connection layer, each mixed attention residual module comprises a convolutional layer, a batch normalization layer, an activation function layer and a mixed attention module, and the angle-guided attention structure comprises a feature mapping module and an angle coding module;
step S4: training a convolutional neural network based on angle guidance attention by using a training set to obtain a trained model;
step S5: and testing the trained model by using the test set to obtain a target recognition result of the inverse synthetic aperture radar image.
Preferably, the step S2 includes:
step S21: calculating the space coordinates of the target according to the distance, azimuth angle and pitch angle between the target and the radar, and obtaining a unit vector in the opposite direction of the sight direction of the radar according to the space coordinates of the target;
performing polynomial fitting on the space coordinates of the target at a plurality of moments to obtain a track of the target, obtaining a target speed through derivation, and obtaining a unit vector of the target moving direction according to the target speed;
and calculating to obtain an included angle between the opposite direction of the radar sight line direction and the target movement direction as an incident sight line angle.
Preferably, the step S3 includes:
step S31: sequentially stacking a convolution layer and a pooling layer, and reducing the dimension of an input image;
step S32: constructing a mixed attention residual error module consisting of a plurality of sequentially laminated convolution layers, batch normalization layers, an activation function layer and a mixed attention module;
step S33: constructing an angle guiding attention structure comprising an angle coding module and a feature mapping module;
step S34: and carrying out average pooling on the output of the last mixed attention residual error module, and carrying out classification and identification on the obtained result through the full connection layer.
Preferably, the step S32 includes:
the method comprises the steps that an original input is subjected to a convolution layer, a batch normalization layer and an activation function layer to obtain a first feature map;
carrying out average pooling and maximum pooling on the first feature map along the space dimension to obtain a channel attention weight, and multiplying the channel attention weight with the first feature map to obtain a channel attention feature;
carrying out average pooling and maximum pooling on the channel attention features along the channel dimension to obtain a spatial attention weight, and multiplying the spatial attention weight by the channel attention features to obtain a mixed attention feature which is used as the output of the mixed attention module;
and after the mixed attention characteristic is connected with the original input residual, obtaining an output characteristic diagram of the mixed attention residual module through an activation function.
Preferably, the step S33 includes:
taking an output feature map of the mixed attention residual error module as input of a feature mapping module, performing convolution operation on the input through a first convolution layer to realize dimension reduction of channel dimension, constructing nonlinear mapping through an activation function after batch normalization, and performing convolution operation through a second convolution layer to adjust the number of channels to be consistent with the original number to obtain the input feature map as output of the feature mapping module;
selecting an incident sight angle corresponding to an inverse synthetic aperture radar image as input of an angle coding module, calculating the incident sight angle to obtain an angle coding vector, forwarding the obtained angle coding vector to a multi-layer perceptron to construct nonlinear mapping, and obtaining an angle guiding attention weight through an activation function to serve as output of the angle coding module;
multiplying the output of the angle coding module and the output of the feature mapping structure along the channel dimension, and then carrying out residual connection with the input of the feature mapping module to obtain the output of the angle guiding attention structure.
Preferably, in step S4, the convolutional neural network based on angle guidance attention is trained by using a 5-fold cross validation method, the training loss function uses cross entropy loss, and the optimizer uses Adam optimizer.
Another aspect of the present invention provides an inverse synthetic aperture radar image target recognition apparatus, comprising:
the image acquisition module acquires an inverse synthetic aperture radar image of the target by adopting a range-Doppler algorithm on target radar echo data, and divides the inverse synthetic aperture radar image into a training set and a testing set;
the incident view angle estimation module is used for estimating the incident view angle according to the distance, azimuth and pitching information of the target contained in the radar echo data;
the system comprises a convolutional neural network construction module, an angle-guided attention-based convolutional neural network construction module, a characteristic mapping module and an angle coding module, wherein the convolutional neural network construction module comprises a convolutional layer, a pooling layer, a plurality of mixed attention residual modules, an angle-guided attention residual module and a full-connection layer, each mixed attention residual module comprises a convolutional layer, a batch normalization layer, an activation function layer and a mixed attention module, and the angle-guided attention structure comprises a characteristic mapping module and an angle coding module;
the training module is used for training the convolutional neural network based on the angle guiding attention by using the training set to obtain a trained model;
and the test module is used for testing the trained model by using the test set to obtain a target identification result of the inverse synthetic aperture radar image.
A further aspect of the invention provides a computer device comprising a memory storing a computer program and a processor implementing the steps of the method described above when the computer program is executed by the processor.
A further aspect of the invention provides a computer readable storage medium having stored thereon a computer program, characterized in that the computer program when executed by a processor implements the steps of the method described above.
According to the method and device for identifying an inverse synthetic aperture radar image target, the computer device and the computer-readable storage medium described above, the target attitude and the target ISAR image features are coupled by introducing the incident line-of-sight angle of the target relative to the radar, so that target recognition performance and accuracy can be improved.
Drawings
For a clearer description of the technical solutions of the present invention, the drawings used in the description of the embodiments are briefly introduced below. It is apparent that the drawings described below show only some embodiments of the present invention, and that other drawings can be obtained from them by those skilled in the art without inventive effort:
FIG. 1 is a flow chart of a method for target recognition of an inverse synthetic aperture radar image in accordance with one embodiment of the present invention;
FIG. 2 is a schematic diagram of the architecture of a convolutional neural network of one embodiment of the present invention;
FIG. 3 is a schematic diagram of an inverse SAR image target recognition device according to an embodiment of the present invention;
fig. 4 is a block diagram of a computer device according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The inventor of the present invention recognized that the rotating components of targets such as aircraft modulate the radar echo. The frequency-domain characteristics of the modulated echo are influenced by the rotation parameters (rotation speed) and structural parameters (blade length and number) of the rotating component. Because these parameters differ between target models, the modulated echo characteristics differ, and so do the strip-like scattering structures presented in the ISAR images. Moreover, under different incident line-of-sight angles the rotating component may be occluded, so the modulation phenomenon in ISAR images of the same target changes accordingly. Guiding the network's attention to the modulation phenomenon in the images therefore helps target identification, and feeding the incident line-of-sight angle into the network and coupling it with the extracted features helps to further improve recognition accuracy.
Based on this recognition, embodiments of the present invention provide an inverse synthetic aperture radar image target recognition method. Fig. 1 is a flowchart of an inverse synthetic aperture radar image target recognition method according to an embodiment of the present invention, and as shown in fig. 1, the inverse synthetic aperture radar image target recognition method according to an embodiment of the present invention includes steps S1 to S5.
Step S1: and acquiring an inverse synthetic aperture radar image of the target through a range-Doppler algorithm after pulse compression and translational compensation of target radar echo data, and dividing the inverse synthetic aperture radar image into a training set and a testing set.
Step S2: the estimation of the incident line of sight angle is achieved by accurate distance, azimuth and pitch information of the target contained in the radar echo data.
The step S2 specifically includes the following substeps:
step S21: establishing a coordinate system by taking a radar as an origin of the coordinate system according toDistance of time target to radar ∈>Azimuth angle->Pitch angle->Calculating the spatial coordinates of the object in the coordinate system>The calculation formula is as follows:
according to the space coordinates of the targetThe unit vector that gets the opposite direction of the radar line-of-sight direction can be expressed as:
step S22: selecting a time window with fixed duration, wherein the window contains space coordinates of the target at a plurality of momentsPolynomial fitting is performed on the spatial coordinates of the target at a plurality of moments to obtain the track of the target +.>Deriving a target speed +.>. At->The speed of the object can be expressed as +.>The calculation formula is as follows:
the unit vector of the target movement direction at this time can be expressed as:
step S23: the incident line of sight angle is defined as the angle between the opposite direction of the radar line of sight direction and the target movement direction, and is calculated by the formulaAnd calculating to obtain the incident line of sight angle.
Step S3: a convolutional neural network based on angle-guided attention was constructed as shown in fig. 2. The convolutional neural network comprises a structure 1, a structure 2, a structure 3, an angle guiding attention structure, a structure 4, a structure 5 and a full connection layer which are connected in sequence. The structure 1 comprises a 7×7 convolution layer and a pooling layer, so as to realize the effects of fast dimension reduction and calculation amount reduction on the input image. The structure 2-structure 5 respectively comprises 3, 4, 6 and 3 mixed attention residual modules, and each mixed attention residual module comprises a convolution layer, a batch normalization layer, an activation function layer and a mixed attention module, so that extraction of different levels of features of input features is realized. The angle guiding attention structure comprises a feature mapping module and an angle coding module, and achieves the functions of embedding incident line of sight angles into a network and guiding attention weight distribution. The full connection layer realizes classification and identification.
The step S3 specifically includes the following substeps:
step S31: and the convolution layer and the pooling layer are sequentially stacked, so that the dimension of the input image is reduced, and the calculated amount is reduced.
Step S32: and constructing a mixed attention residual error module consisting of a convolution layer, a batch normalization layer, an activation function layer and a mixed attention module which are sequentially stacked. The specific construction process of the mixed attention residual module is as follows:
(1) The output of the previous stage is taken as the original input $X$ of the mixed attention residual module. After the convolution layer, batch normalization layer and activation function layer, a first feature map $F_1$ is obtained:
$F_1 = \mathrm{ReLU}(\mathrm{BN}(\mathrm{Conv}(X)))$
where Conv(·) denotes the convolution operation, BN(·) denotes batch normalization, and ReLU(·) denotes activation by the ReLU function.
(2) Average pooling and maximum pooling are applied to the first feature map $F_1$ along the spatial dimension to obtain the average-pooled and maximum-pooled features. The two features are forwarded to the same multi-layer perceptron (MLP), the two resulting features are added element-wise, and the sum is activated by a sigmoid function to obtain the channel attention weight $M_c$:
$M_c = \sigma\big(\mathrm{MLP}(\mathrm{Avgpool}(F_1)) + \mathrm{MLP}(\mathrm{Maxpool}(F_1))\big)$
where $\sigma$ is the sigmoid function, MLP(·) denotes a linear transformation and activation operation on the feature, $+$ denotes element-wise addition, Avgpool(·) denotes the average pooling operation, and Maxpool(·) denotes the maximum pooling operation.
The channel attention weight $M_c$ is then multiplied with the first feature map $F_1$ to obtain the channel attention feature $F_c$:
$F_c = M_c \otimes F_1$
(3) Average pooling and maximum pooling are applied to the channel attention feature $F_c$ along the channel dimension to obtain the average-pooled and maximum-pooled features. The two features are concatenated along the channel dimension, the channel dimension is reduced to 1 by a convolution operation, and the result is activated by a sigmoid function to obtain the spatial attention weight $M_s$:
$M_s = \sigma\big(\mathrm{Conv}(\mathrm{Concat}(\mathrm{Avgpool}(F_c), \mathrm{Maxpool}(F_c)))\big)$
where Concat(·) denotes concatenation along the channel dimension.
The spatial attention weight $M_s$ is then multiplied with the channel attention feature $F_c$ to obtain the mixed attention feature $F_m$, which serves as the output of the mixed attention module:
$F_m = M_s \otimes F_c$
(4) After the mixed attention feature $F_m$ is residual-connected with the original input $X$, the output feature map $Y$ of the mixed attention residual module is obtained through an activation function:
$Y = \mathrm{ReLU}(F_m + X)$
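The mixed attention module of step S32 can be sketched in plain numpy as below. This is a simplified illustration, not the patent's implementation: the shared-MLP weights are supplied externally, the learned spatial convolution over the two concatenated pooled maps is replaced by a fixed average of the maps, the initial convolution/batch-normalization stage is omitted, and the residual connects the first feature map rather than the pre-convolution input.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mixed_attention(F1, W1, W2):
    """CBAM-style mixed attention on a feature map F1 of shape (C, H, W).

    W1, W2: shared-MLP weights (C -> C//r -> C) applied to both the
    average- and max-pooled channel descriptors.
    """
    # Channel attention: spatial avg/max pooling -> shared MLP -> sigmoid.
    avg = F1.mean(axis=(1, 2))                     # (C,)
    mx = F1.max(axis=(1, 2))                       # (C,)
    mlp = lambda v: W2 @ np.maximum(W1 @ v, 0.0)   # ReLU hidden layer
    Mc = sigmoid(mlp(avg) + mlp(mx))               # channel weights
    Fc = Mc[:, None, None] * F1                    # channel attention feature

    # Spatial attention: channel avg/max pooling -> 2D weight map -> sigmoid.
    # (A fixed average of the two maps stands in for the learned conv here.)
    savg = Fc.mean(axis=0)
    smx = Fc.max(axis=0)
    Ms = sigmoid(0.5 * (savg + smx))               # (H, W) spatial weights
    Fm = Ms[None, :, :] * Fc                       # mixed attention feature
    return np.maximum(Fm + F1, 0.0)                # residual connection + ReLU

rng = np.random.default_rng(1)
C, r = 8, 2
F1 = rng.standard_normal((C, 6, 6))
W1 = rng.standard_normal((C // r, C)) * 0.1
W2 = rng.standard_normal((C, C // r)) * 0.1
out = mixed_attention(F1, W1, W2)
print(out.shape)
```

The output keeps the input shape, so the module can be stacked repeatedly inside structures 2-5.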
The resulting output is taken as input to the next module or structure.
Step S33: the method comprises the following steps of constructing an angle guiding attention structure comprising an angle coding module and a feature mapping module, wherein the specific construction process of the angle guiding attention structure is as follows:
(1) The feature mapping module consists of two convolution layers, a batch normalization layer and an activation function layer. Its input is the output feature map $Y$ of the preceding mixed attention residual module. The first convolution layer performs a convolution to reduce the channel dimension; after batch normalization, a nonlinear mapping is constructed through an activation function; finally, the second convolution layer adjusts the number of channels back to that of the input, yielding the input feature mapping. The output of the feature mapping module can be expressed as:
$F_{map} = \mathrm{Conv}_2(\mathrm{ReLU}(\mathrm{BN}(\mathrm{Conv}_1(Y))))$
(2) The incident line-of-sight angle $\alpha$ corresponding to the inverse synthetic aperture radar image is selected as the input of the angle coding module, and is encoded to obtain an angle coding vector $AE$ of dimension $C \times 1$, where the constant $C$ denotes the number of channels of $F_{map}$.
The obtained angle coding vector $AE$ is forwarded to an MLP to construct a nonlinear mapping and activated by a sigmoid function to obtain the angle-guided attention weight, which serves as the output of the angle coding module:
$W_a = \sigma(\mathrm{MLP}(AE))$
(3) The output $W_a$ of the angle coding module is multiplied with the output $F_{map}$ of the feature mapping module along the channel dimension and then residual-connected with the input $Y$ of the feature mapping module, giving the output $Z$ of the angle-guided attention structure:
$Z = W_a \otimes F_{map} + Y$
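A numpy sketch of the angle-guided attention structure of step S33 follows. The patent's exact angle-encoding formula is not reproduced in this text, so a sinusoidal encoding over the C channels is assumed here as one plausible instantiation; the function name and the externally supplied MLP weights are likewise illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def angle_guided_attention(Y, Fmap, alpha, W1, W2):
    """Modulate the feature-mapping output Fmap (C, H, W) by a weight vector
    derived from the incident line-of-sight angle alpha (radians), then
    residual-connect with the block input Y.
    """
    C = Fmap.shape[0]
    # Angle coding vector AE (C x 1). A sinusoidal encoding is ASSUMED here;
    # the patent's own formula is given only as an image in the source.
    k = np.arange(C)
    AE = np.where(k % 2 == 0,
                  np.sin(alpha * (k // 2 + 1)),
                  np.cos(alpha * (k // 2 + 1)))
    # MLP + sigmoid -> angle-guided attention weight Wa (C,).
    Wa = sigmoid(W2 @ np.maximum(W1 @ AE, 0.0))
    # Channel-wise multiplication, then residual connection with Y.
    return Wa[:, None, None] * Fmap + Y

rng = np.random.default_rng(2)
C = 8
Y = rng.standard_normal((C, 4, 4))
Fmap = rng.standard_normal((C, 4, 4))
W1 = rng.standard_normal((C // 2, C)) * 0.1
W2 = rng.standard_normal((C, C // 2)) * 0.1
Z = angle_guided_attention(Y, Fmap, np.deg2rad(30.0), W1, W2)
print(Z.shape)
```

Because the angle weight acts per channel, the structure behaves like a channel attention mechanism whose weights are conditioned on the target attitude rather than on the features themselves.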
the resulting output is taken as input to the next module.
Step S34: after finishing input, structure 1, structure 2, structure 3, angle guiding attention structure, structure 4 and structure 5, carrying out average pooling on the output of the last mixed attention residual module of the structure 5, and carrying out classification and identification on the obtained result through the full connection layer.
Step S4: and training the convolutional neural network based on the angle guiding attention by using the training set to obtain a trained model. Training is carried out by adopting a 5-fold cross validation mode, a cross entropy loss is adopted as a training loss function, and a training model is stored after an optimizer is trained by adopting an Adam optimizer.
Step S5: and testing the trained model by using the test set to obtain a target recognition result of the inverse synthetic aperture radar image.
The following describes advantageous effects of the inverse synthetic aperture radar image target recognition method according to the embodiment of the present invention by way of comparative examples and examples.
Comparative example: and generating ISAR image samples to construct a data set by adopting a range Doppler algorithm for three types of aircraft target radar echoes, dividing the data set into a training set and a testing set, training by adopting the existing ResNet34 network structure, and testing the trained network by using the testing set to obtain three types of aircraft target recognition results.
Examples: the network shown in fig. 2 of the present invention was trained by using the same training set and test set as in the comparative example, and the trained network was tested using the test set to obtain three types of aircraft target recognition results.
The recognition accuracy, the network model parameters, and the calculated amounts obtained by the above comparative examples and examples are recorded in table 1 below.
Table 1 comparison of identification properties of comparative examples and examples
As can be seen from table 1, the embodiment of the invention improves the recognition rate for all three types of aircraft, and the average recognition rate of the whole network is improved by 3.56% compared with the existing network. This indicates that the angle-guided attention module and the mixed attention residual module effectively guide the network to focus on differences in the modulation phenomenon and strengthen the relationship between the target features extracted by the network and the corresponding incident line-of-sight angle, thereby obtaining a better recognition effect. In addition, the network structure of the invention improves recognition performance considerably over the existing network while the number of introduced parameters remains lightweight, achieving a balance between recognition performance and computational efficiency.
In summary, in the inverse synthetic aperture radar image target recognition method of the embodiment of the invention, the angle is encoded into a C×1 vector whose length equals the number of output feature channels, a nonlinear mapping is constructed through a multi-layer perceptron, and the result is activated through an activation function. The angle information thus plays a role similar to a channel attention mechanism, improving recognition performance by assigning channel attention weights to the feature mapping.
Compared with the prior art, the method for identifying the image target of the inverse synthetic aperture radar has the following beneficial effects:
1) Compared with the prior art, the convolutional neural network extracts deeper features of ISAR images through layer-by-layer convolutional operation and nonlinear mapping, and avoids the problems of redundancy process and feature loss of artificial feature extraction;
2) Whereas existing methods separate feature extraction from classifier design, the invention integrates feature extraction and classification recognition through a deep learning algorithm, which is simpler and more convenient;
3) The invention considers the differences in modulation phenomena in ISAR images under different incident line-of-sight angles and embeds the incident line-of-sight angle information into the network, thereby coupling the target feature representation with the target attitude and improving the accuracy of ISAR image target recognition.
The embodiment of the invention also provides an inverse synthetic aperture radar image target recognition device. Fig. 3 is a schematic diagram of an inverse synthetic aperture radar image target recognition apparatus according to an embodiment of the present invention. As shown in fig. 3, the inverse synthetic aperture radar image target recognition apparatus of the present embodiment includes:
the image acquisition module 101 acquires an inverse synthetic aperture radar image of a target by adopting a range-Doppler algorithm on target radar echo data, and divides the inverse synthetic aperture radar image into a training set and a testing set;
the incident view angle estimation module 102 performs estimation of the incident view angle according to the distance, azimuth and pitch information of the target contained in the radar echo data;
the convolutional neural network construction module 103 constructs a convolutional neural network based on angle-guided attention, wherein the convolutional neural network comprises a convolutional layer, a pooling layer, a plurality of mixed attention residual modules, an angle-guided attention structure and a full connection layer, each mixed attention residual module comprises a convolutional layer, a batch normalization layer, an activation function layer and a mixed attention module, and the angle-guided attention structure comprises a feature mapping module and an angle coding module;
the training module 104 is used for training the convolutional neural network based on the angle guiding attention by using a training set to obtain a trained model;
and the test module 105 tests the trained model by using the test set to obtain a target recognition result of the inverse synthetic aperture radar image.
Specific examples of the device for identifying an image target of the inverse synthetic aperture radar of this embodiment may be referred to above as a limitation of the method for identifying an image target of the inverse synthetic aperture radar, and will not be described herein. The above-mentioned respective modules in the inverse synthetic aperture radar image target recognition apparatus may be implemented in whole or in part by software, hardware, and combinations thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
Embodiments of the present invention also provide a computer device, which may be a server, and an internal structure thereof may be as shown in fig. 4. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The database of the computer device is used to store operating parameter data for each of the frames. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program, when executed by a processor, implements the steps of the inverse synthetic aperture radar image target recognition method of the present embodiment.
Those skilled in the art will appreciate that the structure shown in FIG. 4 is only a block diagram of part of the structure related to the present solution and does not constitute a limitation of the computer device to which the present solution applies; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
The embodiment of the invention also provides a computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the inverse synthetic aperture radar image target recognition method according to the embodiment of the invention.
While certain exemplary embodiments of the present invention have been described above by way of illustration only, it will be apparent to those of ordinary skill in the art that modifications may be made to the described embodiments in various different ways without departing from the spirit and scope of the invention. Accordingly, the drawings and description are to be regarded as illustrative in nature and not as restrictive of the scope of the invention, which is defined by the appended claims.

Claims (9)

1. An inverse synthetic aperture radar image target recognition method, comprising:
step S1: obtaining an inverse synthetic aperture radar image of a target by applying a range-Doppler algorithm to radar echo data of the target, and dividing the inverse synthetic aperture radar images into a training set and a test set;
step S2: estimating an incident line-of-sight angle from the range, azimuth, and pitch information of the target contained in the radar echo data;
step S3: constructing an angle-guided-attention-based convolutional neural network comprising a convolutional layer, a pooling layer, a plurality of mixed attention residual modules, an angle-guided attention module, and a fully connected layer, wherein each mixed attention residual module comprises a convolutional layer, a batch normalization layer, an activation function layer, and a mixed attention module, and the angle-guided attention module comprises a feature mapping module and an angle encoding module;
step S4: training the angle-guided-attention-based convolutional neural network with the training set to obtain a trained model;
step S5: testing the trained model with the test set to obtain a target recognition result for the inverse synthetic aperture radar image.
2. The method according to claim 1, wherein the step S2 comprises:
step S21: calculating the spatial coordinates of the target from the range, azimuth angle, and pitch angle between the target and the radar, and obtaining from the spatial coordinates a unit vector pointing opposite to the radar line-of-sight direction;
step S22: performing a polynomial fit on the spatial coordinates of the target at a plurality of times to obtain the target trajectory, differentiating the fit to obtain the target velocity, and obtaining from the velocity a unit vector along the target's direction of motion;
step S23: calculating the angle between the reversed radar line-of-sight direction and the target's direction of motion as the incident line-of-sight angle.
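The angle-estimation steps of claim 2 can be sketched in NumPy as follows. This is a minimal illustration, not the patent's implementation: the function names, the polynomial degree, and the use of `np.polyfit`/`np.polyder` for the trajectory fit and differentiation are all assumptions.

```python
import numpy as np

def target_coords(r, az, el):
    """Range r, azimuth az, and pitch (elevation) el in radians
    -> Cartesian coordinates of the target in the radar frame."""
    return np.array([r * np.cos(el) * np.cos(az),
                     r * np.cos(el) * np.sin(az),
                     r * np.sin(el)])

def incident_los_angle(times, ranges, azimuths, pitches, deg=3):
    """Incident line-of-sight angle (degrees) at each time: the angle between
    the reversed radar line of sight and the target's motion direction, with
    the velocity taken from a polynomial fit of the trajectory."""
    coords = np.stack([target_coords(r, a, e)
                       for r, a, e in zip(ranges, azimuths, pitches)])
    # Fit each Cartesian coordinate against time, then differentiate the
    # fitted polynomial to obtain the velocity at each sample time.
    vel = np.stack([np.polyval(np.polyder(np.polyfit(times, coords[:, i], deg)),
                               times) for i in range(3)], axis=1)
    u_los = -coords / np.linalg.norm(coords, axis=1, keepdims=True)  # reversed LOS
    u_vel = vel / np.linalg.norm(vel, axis=1, keepdims=True)         # motion direction
    cosang = np.clip(np.sum(u_los * u_vel, axis=1), -1.0, 1.0)
    return np.degrees(np.arccos(cosang))
```

As a sanity check, a target closing radially on the radar gives an angle of 0°, and a target receding radially gives 180°.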
3. The method according to claim 1 or 2, wherein the step S3 comprises:
step S31: stacking a convolutional layer and a pooling layer in sequence to reduce the dimensionality of the input image;
step S32: constructing a mixed attention residual module consisting of a plurality of sequentially stacked convolutional layers, batch normalization layers, an activation function layer, and a mixed attention module;
step S33: constructing an angle-guided attention module comprising an angle encoding module and a feature mapping module;
step S34: average-pooling the output of the last mixed attention residual module, and classifying the result through the fully connected layer.
4. The method according to claim 3, wherein the step S32 comprises:
passing an original input through a convolutional layer, a batch normalization layer, and an activation function layer to obtain a first feature map;
performing average pooling and maximum pooling on the first feature map along the spatial dimensions to obtain a channel attention weight, and multiplying the channel attention weight with the first feature map to obtain a channel attention feature;
performing average pooling and maximum pooling on the channel attention feature along the channel dimension to obtain a spatial attention weight, and multiplying the spatial attention weight with the channel attention feature to obtain a mixed attention feature as the output of the mixed attention module;
connecting the mixed attention feature with the original input through a residual connection, and applying an activation function to obtain the output feature map of the mixed attention residual module.
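The channel-then-spatial attention of claim 4 follows the CBAM pattern (Woo et al., cited below). A minimal NumPy sketch of the data flow, under explicit assumptions: the shared MLP and convolution that a full CBAM applies to the pooled statistics, and the conv/batch-norm sublayers of the residual block, are replaced by identity so only the pooling, gating, and residual structure is shown.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mixed_attention(x):
    """CBAM-style mixed attention on a (C, H, W) feature map.
    Assumption: the learned MLP/conv on the pooled statistics is omitted."""
    # Channel attention: average- and max-pool over the spatial dims (H, W)
    ch_w = sigmoid(x.mean(axis=(1, 2)) + x.max(axis=(1, 2)))    # (C,)
    ch_feat = x * ch_w[:, None, None]                           # channel attention feature
    # Spatial attention: average- and max-pool over the channel dim
    sp_w = sigmoid(ch_feat.mean(axis=0) + ch_feat.max(axis=0))  # (H, W)
    return ch_feat * sp_w[None, :, :]                           # mixed attention feature

def mixed_attention_residual(x):
    """Residual connection with the original input, then a ReLU activation."""
    return np.maximum(x + mixed_attention(x), 0.0)
```

The residual connection lets the block fall back to (a rectified copy of) its input when the attention branch contributes little, which stabilizes training of deep stacks.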
5. The method according to claim 3 or 4, wherein the step S33 comprises:
taking the output feature map of the mixed attention residual module as the input of the feature mapping module; applying a convolution in a first convolutional layer to reduce the channel dimension, constructing a nonlinear mapping through an activation function after batch normalization, and applying a convolution in a second convolutional layer to restore the number of channels to that of the input feature map, as the output of the feature mapping module;
taking the incident line-of-sight angle corresponding to the inverse synthetic aperture radar image as the input of the angle encoding module; computing an angle encoding vector from the incident line-of-sight angle, feeding the angle encoding vector to a multi-layer perceptron to construct a nonlinear mapping, and obtaining the angle-guided attention weight through an activation function, as the output of the angle encoding module;
multiplying the output of the angle encoding module with the output of the feature mapping module along the channel dimension, and then forming a residual connection with the input of the feature mapping module to obtain the output of the angle-guided attention module.
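A minimal sketch of the angle-guided attention flow in claim 5. The patent does not specify the encoding or the perceptron, so several assumptions are labeled explicitly: a sinusoidal angle encoding stands in for the angle encoding vector, a single linear layer `w_mlp` stands in for the multi-layer perceptron, and the two-convolution feature mapping branch is replaced by identity.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def angle_encoding(theta, dim=8):
    """Sinusoidal encoding of the incident line-of-sight angle (radians).
    Assumption: the patent's exact encoding is unspecified."""
    freqs = 2.0 ** np.arange(dim // 2)
    return np.concatenate([np.sin(freqs * theta), np.cos(freqs * theta)])

def angle_guided_attention(feat, theta, w_mlp):
    """feat: (C, H, W) feature map; theta: incident line-of-sight angle;
    w_mlp: (C, dim) weights of a one-layer stand-in for the MLP."""
    code = angle_encoding(theta, w_mlp.shape[1])   # angle encoding vector (dim,)
    gate = sigmoid(w_mlp @ code)                   # angle-guided attention weight (C,)
    # Feature mapping branch is identity here; multiply along the channel
    # dimension, then add the residual connection with the branch input.
    return feat + feat * gate[:, None, None]
```

Because the gate lies in (0, 1), the module rescales each channel between 1x and 2x of the residual input, coupling the target's pose (via the angle) to the ISAR image features.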
6. The method according to any one of claims 1 to 5, wherein in step S4 the angle-guided-attention-based convolutional neural network is trained with 5-fold cross-validation, the training loss function is the cross-entropy loss, and the optimizer is the Adam optimizer.
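The two training ingredients named in claim 6 can be sketched framework-free as follows: a k-fold index splitter and the softmax cross-entropy loss. The function names and the shuffling seed are illustrative; in practice these would come from a library such as scikit-learn (`KFold`) and a deep-learning framework's built-in loss and Adam optimizer.

```python
import numpy as np

def kfold_indices(n, k=5, seed=0):
    """Split n sample indices into k disjoint (train, validation) folds."""
    idx = np.random.default_rng(seed).permutation(n)
    folds = np.array_split(idx, k)
    return [(np.concatenate(folds[:i] + folds[i + 1:]), folds[i])
            for i in range(k)]

def cross_entropy(logits, labels):
    """Mean softmax cross-entropy over a batch (the training loss of claim 6)."""
    z = logits - logits.max(axis=1, keepdims=True)        # numerical stability
    log_p = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_p[np.arange(len(labels)), labels].mean()
```

With uniform logits over 3 classes the loss equals ln 3, a useful sanity check that a classifier has started learning once its loss drops below that value.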
7. An inverse synthetic aperture radar image target recognition apparatus, comprising:
an image acquisition module, configured to obtain an inverse synthetic aperture radar image of a target by applying a range-Doppler algorithm to radar echo data of the target, and to divide the inverse synthetic aperture radar images into a training set and a test set;
an incident line-of-sight angle estimation module, configured to estimate the incident line-of-sight angle from the range, azimuth, and pitch information of the target contained in the radar echo data;
a convolutional neural network construction module, configured to construct an angle-guided-attention-based convolutional neural network comprising a convolutional layer, a pooling layer, a plurality of mixed attention residual modules, an angle-guided attention module, and a fully connected layer, wherein each mixed attention residual module comprises a convolutional layer, a batch normalization layer, an activation function layer, and a mixed attention module, and the angle-guided attention module comprises a feature mapping module and an angle encoding module;
a training module, configured to train the angle-guided-attention-based convolutional neural network with the training set to obtain a trained model; and
a test module, configured to test the trained model with the test set to obtain a target recognition result for the inverse synthetic aperture radar image.
8. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any of claims 1-6 when the computer program is executed.
9. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1-6.
CN202311683226.1A 2023-12-09 2023-12-09 Method and device for identifying image target of inverse synthetic aperture radar Pending CN117671374A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311683226.1A CN117671374A (en) 2023-12-09 2023-12-09 Method and device for identifying image target of inverse synthetic aperture radar

Publications (1)

Publication Number Publication Date
CN117671374A 2024-03-08

Family

ID=90071071

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311683226.1A Pending CN117671374A (en) 2023-12-09 2023-12-09 Method and device for identifying image target of inverse synthetic aperture radar

Country Status (1)

Country Link
CN (1) CN117671374A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116363175A (en) * 2022-12-21 2023-06-30 北京化工大学 Polarized SAR image registration method based on attention mechanism

Citations (5)

Publication number Priority date Publication date Assignee Title
CN111220958A (en) * 2019-12-10 2020-06-02 西安宁远电子电工技术有限公司 Radar target Doppler image classification and identification method based on one-dimensional convolutional neural network
CN114564982A (en) * 2022-01-19 2022-05-31 中国电子科技集团公司第十研究所 Automatic identification method for radar signal modulation type
CN115808666A (en) * 2022-07-29 2023-03-17 中国人民解放军空军预警学院 Rotor category identification method based on residual multi-scale depth network model
CN116051428A (en) * 2023-03-31 2023-05-02 南京大学 Deep learning-based combined denoising and superdivision low-illumination image enhancement method
CN116758435A (en) * 2023-06-16 2023-09-15 西安电子科技大学 ISAR target deformation steady recognition method based on dual-channel fusion network

Non-Patent Citations (2)

Title
Sanghyun Woo et al.: "CBAM: Convolutional Block Attention Module", arXiv:1807.06521v2, 18 July 2018, pages 1-17 *
Guo Shuai et al.: "Multi-station cooperative target recognition method based on an angle-guided Transformer fusion network", Journal of Radars, 23 April 2023, pages 516-528 *


Similar Documents

Publication Publication Date Title
CN110472483B (en) SAR image-oriented small sample semantic feature enhancement method and device
CN109086700B (en) Radar one-dimensional range profile target identification method based on deep convolutional neural network
CN110472627B (en) End-to-end SAR image recognition method, device and storage medium
CN109683161B (en) Inverse synthetic aperture radar imaging method based on depth ADMM network
CN106355151B A three-dimensional SAR image steganalysis method based on a deep belief network
CN111126134B (en) Radar radiation source deep learning identification method based on non-fingerprint signal eliminator
CN105137408B (en) The radar angle ultra-resolution method that a kind of optimal antenna directional diagram is chosen
CN117671374A (en) Method and device for identifying image target of inverse synthetic aperture radar
CN110136162B (en) Unmanned aerial vehicle visual angle remote sensing target tracking method and device
CN114565655B (en) Depth estimation method and device based on pyramid segmentation attention
CN108345856B (en) SAR automatic target recognition method based on heterogeneous convolutional neural network integration
Wang et al. SAR target recognition based on probabilistic meta-learning
Zhang et al. Polarimetric HRRP recognition based on ConvLSTM with self-attention
CN113850783B (en) Sea surface ship detection method and system
CN108646247A (en) Inverse synthetic aperture radar imaging method based on Gamma process linear regression
CN116188893A (en) Image detection model training and target detection method and device based on BEV
CN110490894A (en) Background separating method before the video decomposed based on improved low-rank sparse
CN115775261A (en) Sea surface multi-target tracking method and system based on Gaussian distance matching
Qin et al. ISAR resolution enhancement using residual network
CN112819199A (en) Precipitation prediction method, device, equipment and storage medium
Yu et al. Application of a convolutional autoencoder to half space radar hrrp recognition
CN111781599A (en) SAR moving ship target speed estimation method based on CV-EstNet
CN112132880B (en) Real-time dense depth estimation method based on sparse measurement and monocular RGB image
CN116206196B (en) Ocean low-light environment multi-target detection method and detection system thereof
CN115861595B (en) Multi-scale domain self-adaptive heterogeneous image matching method based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination