CN115965794A - SAR moving target detection method, device and storage medium - Google Patents

SAR moving target detection method, device and storage medium

Info

Publication number
CN115965794A
CN115965794A (application CN202211528281.9A)
Authority
CN
China
Prior art keywords
complex, moving target, image, SAR, complex field
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211528281.9A
Other languages
Chinese (zh)
Inventor
穆慧琳
赵思源
丁畅
童宁宁
宋玉伟
郑桂妹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Air Force Engineering University of PLA
Original Assignee
Air Force Engineering University of PLA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Air Force Engineering University of PLA filed Critical Air Force Engineering University of PLA
Priority to CN202211528281.9A priority Critical patent/CN115965794A/en
Publication of CN115965794A publication Critical patent/CN115965794A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides an SAR moving target detection method, device and storage medium, belonging to the technical field of microwave remote sensing and comprising the following steps: acquiring a dual-channel SAR complex image; preprocessing the dual-channel SAR complex image to obtain a data set; constructing a deep complex-field convolutional neural network using complex-field residual dense blocks; and inputting the dual-channel SAR complex image blocks in the data set into the deep complex-field convolutional neural network to obtain a moving target image. The deep complex-field convolutional neural network comprises: a moving target feature extraction module, which extracts feature maps of the SAR complex image blocks in the data set using complex-field residual dense blocks (RDB); a feature fusion module, which fuses the feature maps of the SAR complex image blocks; and a moving target image generation module, which generates the SAR moving target prediction image from the fused features. The invention extends the ordinary convolutional neural network to the complex field and achieves high detection performance.

Description

SAR moving target detection method, device and storage medium
Technical Field
The invention belongs to the technical field of microwave remote sensing, and particularly relates to a Synthetic Aperture Radar (SAR) moving target detection method, device and storage medium based on a complex-field convolutional neural network.
Background
Synthetic Aperture Radar (SAR) is a high-resolution imaging radar with all-day, all-weather and long-range capability that can provide abundant surface electromagnetic scattering information. On the basis of its inherent imaging function, an SAR system can realize Ground Moving Target Indication (GMTI), that is, detect ground moving targets by signal processing while imaging the static scene. However, moving targets are usually submerged in ground clutter and are difficult to detect. Meanwhile, SAR imaging algorithms are usually designed for static scenes; affected by its motion parameters, a moving target suffers azimuth displacement and defocusing, which makes detection even more difficult.
In a single-channel SAR-GMTI system, platform motion broadens the ground clutter spectrum, so slow moving targets are submerged in clutter and hard to detect. Multi-channel SAR moving target detection adds spatial degrees of freedom to realize joint space-time processing, thereby overcoming the shortcomings of single-channel systems. However, traditional multi-channel methods mainly address targets with radial velocity: they cannot detect targets with only tangential velocity and suffer from blind speeds. Both displaced phase center antenna (DPCA) and along-track interferometry (ATI) methods require high registration accuracy between the two channels. Space-time adaptive processing (STAP), used in multi-antenna systems, requires independent and identically distributed training samples to estimate the clutter covariance matrix, but in real environments, owing to various non-ideal factors, samples around the cell under test rarely satisfy this condition. It is therefore urgent to go beyond traditional clutter suppression and provide a moving target detection method suited to complex clutter environments.
In recent years, with the continuous development of deep learning, deep learning theory has been increasingly applied to SAR image processing, for example SAR image terrain classification and speckle noise suppression, with good results. However, many of these applications use only the amplitude information of the SAR image and ignore the phase information.
Disclosure of Invention
To solve the problem that traditional moving target detection methods struggle to detect slow moving targets and targets with only tangential velocity in complex clutter environments, the invention applies deep learning to dual-channel SAR complex images and provides an SAR moving target detection method, device and storage medium based on a complex-field convolutional neural network.
In order to achieve the above purpose, the invention provides the following technical scheme:
a SAR moving target detection method comprises the following steps:
acquiring a dual-channel SAR complex image;
preprocessing a dual-channel SAR complex image to obtain a data set;
constructing a deep complex-field convolutional neural network using complex-field residual dense blocks;
inputting the dual-channel SAR complex image blocks in the data set into the deep complex-field convolutional neural network to obtain a moving target image;
the deep complex-field convolutional neural network comprising:
a moving target feature extraction module, used for extracting feature maps of the dual-channel SAR complex image blocks in the data set by using complex-field residual dense blocks (RDB);
a feature fusion module, used for fusing the feature maps of the SAR complex image blocks;
and a moving target image generation module, used for generating the SAR moving target prediction image from the feature-fused SAR complex image blocks.
Preferably, the preprocessing of the dual-channel SAR complex image specifically comprises image cutting and amplitude normalization, and includes the following steps:
cutting the dual-channel SAR complex image into 50 × 50 image blocks with a sliding window of step length 25 and size 50 × 50, and superposing several simulated moving target signals on the image blocks at different signal-to-clutter-plus-noise ratios (SCNR) to obtain the complex image blocks;
carrying out amplitude normalization on each complex image block according to the maximum and minimum values of the channel-1 image, giving the normalized complex image I_i (i = 1, 2 denotes the channel);
obtaining the moving target image truth value and carrying out amplitude normalization on it according to its maximum and minimum values, giving the normalized moving target image O;
and forming the data set from the processed dual-channel SAR complex images and the corresponding moving target image pairs.
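As a concrete illustration of the cutting and normalization steps above, the following minimal NumPy sketch cuts a dual-channel complex image into 50 × 50 blocks with a stride-25 sliding window and applies an assumed min-max amplitude normalization based on the channel-1 magnitude extrema; the exact normalization formula is not reproduced in the text, so the phase handling and the helper names cut_patches and normalize_amplitude are illustrative assumptions.

```python
# Sketch of the patch extraction and amplitude normalization described above.
# The normalization form (min-max scaling of the magnitude by the channel-1
# extrema, with phase preserved) is an assumption of this sketch.
import numpy as np

def cut_patches(img, size=50, stride=25):
    """Cut a 2-channel complex SAR image (2, H, W) into (2, size, size) blocks."""
    _, H, W = img.shape
    return [img[:, r:r + size, c:c + size]
            for r in range(0, H - size + 1, stride)
            for c in range(0, W - size + 1, stride)]

def normalize_amplitude(patch):
    """Normalize both channels by the magnitude extrema of channel 1 (assumed form)."""
    mag1 = np.abs(patch[0])
    lo, hi = mag1.min(), mag1.max()
    scale = (np.abs(patch) - lo) / (hi - lo + 1e-12)
    return scale * np.exp(1j * np.angle(patch))   # keep the original phase

# Example: a random stand-in for a dual-channel TerraSAR-X complex image
dummy = (np.random.randn(2, 200, 200) + 1j * np.random.randn(2, 200, 200)).astype(np.complex64)
patches = [normalize_amplitude(p) for p in cut_patches(dummy)]
print(len(patches), patches[0].shape)   # 49 patches of shape (2, 50, 50)
```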
Preferably, the obtaining of the moving target image truth value comprises the following steps:
recording the position and velocity of real moving targets in the imaging scene with an airborne optical system, and then extracting the real moving target image truth value from the TerraSAR-X complex image;
and for simulated moving targets, generating the simulated moving target image truth value from the moving target simulation parameters.
Preferably, the moving target feature extraction module comprises a single-channel feature extraction network and an inter-channel feature extraction network;
in the single-channel feature extraction network, the real part and the imaginary part of the channel-1 SAR complex image block are separated and treated as two independent channels forming the network input of size 50 × 50 × 2, which specifically includes:
first, extracting shallow features with two complex-field convolution layers and a complex-field ReLU, each convolution layer containing 32 convolution kernels of size 3 × 3 with the sliding step set to 1; assuming the input of the l-th complex-field convolution layer is F_{l-1}, containing K feature maps, the convolution kernels are W_l (M being the number of feature maps output by the layer) and the complex-field bias is b_l, the complex-field convolution layer is then expressed as:
F_l^m = Σ_{k=1}^{K} W_l^{m,k} ∗ F_{l-1}^k + b_l^m, m = 1, …, M,
where ∗ is the complex convolution operation; the complex-field ReLU is expressed as:
CReLU(F_l) = ReLU(Re(F_l)) + j·ReLU(Im(F_l)),
where Re(·) and Im(·) respectively denote the real part and the imaginary part of a complex quantity;
then applying 3 complex-field residual dense blocks (RDB) to extract hierarchical features from the convolution layers;
in the inter-channel feature extraction network, the real parts and imaginary parts of the SAR complex image blocks of all channels are separated, the real parts of all channels are concatenated and the imaginary parts of all channels are concatenated, and the two groups are then concatenated as the network input of size 50 × 50 × 4; hierarchical features are extracted with the same structure as the single-channel feature extraction network.
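The complex-field convolution and complex-field ReLU described above can be sketched in PyTorch by carrying the real and imaginary parts as separate real tensors; the class ComplexConv2d, the padding choice and the helper complex_relu are assumptions of this sketch rather than the patent's implementation, while the layer sizes (32 kernels of 3 × 3, stride 1) follow the text.

```python
# Minimal sketch of a complex-field convolution layer and complex-field ReLU.
import torch
import torch.nn as nn

class ComplexConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1):
        super().__init__()
        self.conv_r = nn.Conv2d(in_ch, out_ch, kernel_size, stride=1, padding=padding)
        self.conv_i = nn.Conv2d(in_ch, out_ch, kernel_size, stride=1, padding=padding)

    def forward(self, x_r, x_i):
        # (W_r + jW_i) * (x_r + jx_i) = (W_r*x_r - W_i*x_i) + j(W_r*x_i + W_i*x_r)
        y_r = self.conv_r(x_r) - self.conv_i(x_i)
        y_i = self.conv_r(x_i) + self.conv_i(x_r)
        return y_r, y_i

def complex_relu(x_r, x_i):
    # CReLU: apply ReLU to the real and imaginary parts separately
    return torch.relu(x_r), torch.relu(x_i)

# Shallow feature extraction: two complex conv layers + complex ReLU on a 50x50 block
conv1, conv2 = ComplexConv2d(1, 32), ComplexConv2d(32, 32)
x_r, x_i = torch.randn(1, 1, 50, 50), torch.randn(1, 1, 50, 50)   # channel-1 real/imag
f_r, f_i = complex_relu(*conv2(*complex_relu(*conv1(x_r, x_i))))
print(f_r.shape)   # torch.Size([1, 32, 50, 50])
```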
Preferably, the applying of 3 complex-field residual dense blocks RDB to extract hierarchical features from the convolution layers specifically comprises:
in each complex-field RDB, first extracting local features with 6 complex-field convolution layers and a complex-field ReLU, each convolution layer containing 32 convolution kernels of size 3 × 3, with the output of the previous complex-field RDB and the outputs of all preceding convolution layers in the current RDB concatenated in order of generation as the input of the current convolution layer of the current RDB;
then applying 32 convolution kernels of size 1 × 1 to fuse the feature maps from the previous complex-field RDB and all convolution layers in the current complex-field RDB; and finally introducing local residual learning, adding the feature map of the previous complex-field RDB to the feature map of the current complex-field RDB after the 1 × 1 convolution.
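A hedged sketch of one complex-field residual dense block following the description above (6 densely connected complex convolution layers, 1 × 1 local fusion and a local residual connection); the helper names and the zero-padding are assumptions.

```python
# Sketch of one complex-field residual dense block (RDB); complex feature maps are
# represented as (real, imag) tuples of real tensors.
import torch
import torch.nn as nn

class ComplexConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, k=3, padding=1):
        super().__init__()
        self.conv_r = nn.Conv2d(in_ch, out_ch, k, 1, padding)
        self.conv_i = nn.Conv2d(in_ch, out_ch, k, 1, padding)
    def forward(self, x):
        x_r, x_i = x
        return (self.conv_r(x_r) - self.conv_i(x_i),
                self.conv_r(x_i) + self.conv_i(x_r))

def crelu(x):
    return torch.relu(x[0]), torch.relu(x[1])

def ccat(xs):
    # concatenate a list of complex feature maps along the channel axis
    return (torch.cat([x[0] for x in xs], dim=1),
            torch.cat([x[1] for x in xs], dim=1))

class ComplexRDB(nn.Module):
    def __init__(self, ch=32, n_layers=6):
        super().__init__()
        self.layers = nn.ModuleList(
            [ComplexConv2d(ch * (i + 1), ch, k=3, padding=1) for i in range(n_layers)])
        self.fuse = ComplexConv2d(ch * (n_layers + 1), ch, k=1, padding=0)  # 1x1 local fusion

    def forward(self, x):
        feats = [x]                        # output of the previous RDB
        for layer in self.layers:
            feats.append(crelu(layer(ccat(feats))))   # dense connections
        fused = self.fuse(ccat(feats))     # 1x1 conv over all concatenated maps
        return fused[0] + x[0], fused[1] + x[1]       # local residual learning

x = (torch.randn(1, 32, 50, 50), torch.randn(1, 32, 50, 50))
y = ComplexRDB()(x)
print(y[0].shape)   # torch.Size([1, 32, 50, 50])
```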
Preferably, in the feature fusion module, the feature maps of all complex-field RDBs are concatenated and 32 convolution kernels of size 1 × 1 are applied to fuse the RDB feature maps; then one complex-field convolution layer and a complex-field ReLU further extract features, the convolution layer containing 32 convolution kernels of size 3 × 3 with the sliding step set to 1; and finally global residual learning is introduced to realize feature fusion of the feature maps of the SAR complex image blocks.
Preferably, in the moving target image generation module, one complex-field convolution layer and a complex-field ReLU are applied to extract features, the convolution layer containing 32 convolution kernels of size 3 × 3 with the sliding step set to 1; then one complex-field convolution layer and the Abs activation function generate the moving target image, this convolution layer containing 1 convolution kernel of size 3 × 3 with the sliding step set to 1; the Abs activation function is expressed as:
Abs(F_L) = sqrt( Re(F_L)² + Im(F_L)² ),
where F_L is the output of the L-th complex-field convolution layer.
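As a small illustration, the Abs activation at the end of the generation module simply takes the magnitude of the final complex-valued feature map; the helper name abs_activation below is an assumption of this sketch.

```python
# Minimal sketch of the Abs activation: the real-valued moving target image is the
# magnitude of the last complex-field convolution layer's output (single kernel).
import torch

def abs_activation(f_r, f_i):
    # Abs(F_L) = sqrt(Re(F_L)^2 + Im(F_L)^2)
    return torch.sqrt(f_r ** 2 + f_i ** 2)

f_r, f_i = torch.randn(1, 1, 50, 50), torch.randn(1, 1, 50, 50)
pred = abs_activation(f_r, f_i)        # non-negative 50 x 50 prediction image
print(pred.shape)                      # torch.Size([1, 1, 50, 50])
```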
Preferably, before the moving target image is obtained with the deep complex-field convolutional neural network, a loss function is constructed and the network is trained with a complex-field gradient descent algorithm; the specific training process is as follows:
the deep complex-field convolutional neural network adopts the MSE loss function, and the model parameters are updated continuously by minimizing the loss function; the MSE loss function E is expressed as:
E = (1/(W·H)) Σ_{w,h} ( Ô_{w,h} − O_{w,h} )²,
where Ô_{w,h} is the output of the deep complex-field convolutional neural network, O_{w,h} is the moving target image truth value, and W and H are the image block dimensions;
first, forward propagation is carried out on a given sample to obtain the output values of all network nodes;
then the loss function is calculated, and the weights W_l and biases b_l are updated with the complex-field gradient descent algorithm to finally minimize the loss function;
and the updated parameters are substituted into the loss function, the parameter updating process being repeated until the loss function reaches its minimum, at which point updating ends.
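The training objective and update rule can be illustrated with a toy example in which the MSE between a prediction and a truth image is minimized by repeated gradient steps; PyTorch autograd stands in for the hand-derived complex-field backpropagation, and the single parameter, learning rate and constant truth image are placeholders of the sketch.

```python
# Toy illustration of training by minimizing the MSE loss with gradient descent.
import torch

def mse_loss(pred, truth):
    # E = mean over all pixels (w, h) of (pred - truth)^2
    return ((pred - truth) ** 2).mean()

w = torch.randn(1, requires_grad=True)      # stand-in for the network parameters
truth = torch.full((50, 50), 0.3)           # stand-in moving target image truth value
mu = 0.1                                    # learning rate (assumed)

for step in range(100):
    pred = w * torch.ones(50, 50)           # stand-in for the forward pass
    loss = mse_loss(pred, truth)
    loss.backward()                         # gradients of E w.r.t. the parameters
    with torch.no_grad():
        w -= mu * w.grad                    # w <- w - mu * dE/dw
        w.grad.zero_()

print(float(w), float(loss))                # w is close to 0.3, loss close to 0
```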
Another object of the present invention is to provide a SAR moving target detection apparatus, comprising:
the image acquisition module, used for acquiring a dual-channel SAR complex image;
the image preprocessing module, used for preprocessing the dual-channel SAR complex image to obtain a data set;
the network construction module, used for constructing a deep complex-field convolutional neural network with complex-field residual dense blocks;
the target prediction module, used for inputting the dual-channel SAR complex image blocks in the data set into the deep complex-field convolutional neural network to obtain a moving target image;
the deep complex-field convolutional neural network comprising:
a moving target feature extraction module, used for extracting feature maps of the SAR complex image blocks in the data set by using complex-field residual dense blocks (RDB);
a feature fusion module, used for fusing the feature maps of the SAR complex image blocks;
and a moving target image generation module, used for generating the SAR moving target prediction image from the feature-fused SAR complex image blocks.
The present invention also provides a computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the SAR moving target detection method.
The SAR moving target detection method and device provided by the invention have the following beneficial effects:
1. The method extends the ordinary convolutional neural network to the complex field and makes full use of the amplitude and phase information of the SAR complex image;
2. The invention introduces complex-field residual dense blocks, extracts rich local features through densely connected complex-field convolution layers, adaptively learns more effective features through local feature fusion, and jointly and adaptively learns global hierarchical features through global feature fusion; the moving target detection method of the invention has high detection performance and is particularly suitable for slow moving targets and moving targets with only tangential velocity.
Drawings
In order to more clearly illustrate the embodiments of the present invention and the design thereof, the drawings required for the embodiments will be briefly described below. The drawings in the following description are only some embodiments of the invention and it will be clear to a person skilled in the art that other drawings can be derived from them without inventive effort.
Fig. 1 is a schematic flow diagram of an SAR moving target detection method according to embodiment 1 of the present invention;
FIG. 2 is a block diagram of a deep complex field convolutional neural network according to the present invention;
FIG. 3 is a block diagram of a complex field residual error dense block according to the present invention;
FIG. 4 is a single channel SAR test image;
FIG. 5 is a true value image of an SAR moving target;
fig. 6 is an image of a SAR moving target generated based on a pre-trained complex-field convolutional neural network.
Detailed Description
In order that those skilled in the art will better understand the technical solutions of the present invention and can practice the same, the present invention will be described in detail with reference to the accompanying drawings and specific examples. The following examples are only for illustrating the technical solutions of the present invention more clearly, and the protection scope of the present invention is not limited thereby.
Example 1
The invention provides an SAR moving target detection method, which specifically comprises the following steps as shown in figure 1:
step 1, obtaining a double-channel SAR complex image of a satellite-borne SAR system Terras SAR-X.
Step 2, preprocessing the dual-channel SAR complex image to obtain a data set; specifically, image cutting and amplitude normalization are performed on the dual-channel SAR complex image, and the processing comprises the following steps:
Step 2.1, cutting the dual-channel SAR complex image into 50 × 50 image blocks with a sliding window of step length 25 and size 50 × 50. To increase the number of moving target samples, several simulated moving target signals are superposed on the complex image blocks at different signal-to-clutter-plus-noise ratios (SCNR) to obtain the complex image blocks Ĩ_i(t̂, t_m), where t̂ and t_m denote fast time and slow time respectively, i = 1, 2 denotes the channel, the underlying SAR complex image is the one acquired by TerraSAR-X, C is the mean clutter amplitude, and the SCNR of each sample obeys the uniform distribution U(0 dB, 20 dB).
V = 7400 m/s is the platform velocity, B = 150 MHz is the transmitted signal bandwidth, c = 3 × 10^8 m/s is the speed of light, and λ = 0.03 m is the carrier wavelength. The number N of moving targets in a sample is a random integer from 1 to 5; the n-th moving target position (x_n, R_n) is uniformly distributed in the imaging scene, and its radial velocity v_rn and tangential velocity v_an obey the uniform distributions |v_rn| ~ U(1 m/s, 20 m/s) and |v_an| ~ U(1 m/s, 20 m/s), respectively. The length of the moving target signal is ΔT_n = B_a/γ_n, where B_a is the Doppler bandwidth, and the signal Doppler rate is γ_n = γ_dc·γ_dn/(γ_dc − γ_dn), with γ_dc = −2V²/(λR_n), γ_dn = −2(V − v_an)²/(λR_n), and f_dn = 2v_rn/λ. ψ_i(f_dn) is the interferometric phase caused by the target radial velocity, with ψ_1(f_dn) = 0 and ψ_2(f_dn) = 2π f_dn d/V, where d = 2.4 m is 1/2 of the baseline length.
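For illustration, the sketch below instantiates the simulation parameters listed above under the assumption that each simulated target contributes an azimuth linear-FM signal with Doppler rate γ_n, Doppler centroid f_dn and per-channel interferometric phase ψ_i; the assumed slant range, slow-time extent and signal form are placeholders, since the exact signal expression is not reproduced in the text.

```python
# Illustrative sketch of the simulated moving target parameters listed above.
# The signal model (azimuth linear-FM chirp per channel) is an assumption.
import numpy as np

V, lam, d = 7400.0, 0.03, 2.4          # platform velocity, wavelength, half baseline
R_n = 20e3                              # assumed slant range of the target
v_rn = np.random.uniform(1, 20)         # radial velocity   ~ U(1, 20) m/s
v_an = np.random.uniform(1, 20)         # tangential velocity ~ U(1, 20) m/s

gamma_dc = -2 * V**2 / (lam * R_n)                  # stationary-scene Doppler rate
gamma_dn = -2 * (V - v_an)**2 / (lam * R_n)         # moving-target Doppler rate
gamma_n = gamma_dc * gamma_dn / (gamma_dc - gamma_dn)   # residual rate after focusing
f_dn = 2 * v_rn / lam                               # Doppler centroid from v_rn
psi = [0.0, 2 * np.pi * f_dn * d / V]               # interferometric phase, channels 1/2

t_m = np.linspace(-0.1, 0.1, 512)                   # slow-time axis (assumed extent)
signals = [np.exp(1j * (np.pi * gamma_n * t_m**2 + 2 * np.pi * f_dn * t_m + p))
           for p in psi]                            # one azimuth chirp per channel
print(gamma_n, f_dn, signals[0].shape)
```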
Step 2.2, carrying out amplitude normalization on the complex image block Ĩ_i according to the maximum and minimum values of the channel-1 image, obtaining the normalized complex image I_i.
Step 2.3, recording the positions of real moving targets in the imaging scene with the airborne optical system and then extracting the real moving target images from the TerraSAR-X complex image; for the simulated moving targets, generating the simulated moving target image truth value from the moving target simulation parameters. The moving target image truth value is then amplitude-normalized according to its maximum and minimum values, giving the normalized moving target image O.
A total of 1200k dual-channel SAR complex image and moving target image pairs are generated to form the data set, from which 800k are randomly selected as training samples, 200k as validation samples and 200k as test samples.
Step 3, constructing the deep complex-field convolutional neural network with complex-field residual dense blocks.
As shown in FIG. 2, the deep complex-field convolutional neural network mainly comprises three modules: a moving target feature extraction module, a feature fusion module and a moving target image generation module. The moving target feature extraction module comprises a single-channel feature extraction network and an inter-channel feature extraction network.
In the single-channel feature extraction network, the real part and the imaginary part of the channel-1 SAR complex image block are separated and treated as two independent channels forming the network input of size 50 × 50 × 2. First, shallow features are extracted with two complex-field convolution layers and a complex-field ReLU. Each convolution layer contains 32 convolution kernels of size 3 × 3 with the sliding step set to 1. Assuming the input of the l-th complex-field convolution layer is F_{l-1}, containing K feature maps, the convolution kernels are W_l (M being the number of feature maps output by the layer) and the complex-field bias is b_l, the complex-field convolution layer is expressed as:
F_l^m = Σ_{k=1}^{K} W_l^{m,k} ∗ F_{l-1}^k + b_l^m, m = 1, …, M,
where ∗ is the complex convolution operation. The complex-field ReLU is expressed as:
CReLU(F_l) = ReLU(Re(F_l)) + j·ReLU(Im(F_l)),
where Re(·) and Im(·) denote the real and imaginary parts of a complex quantity, respectively. Then 3 complex-field residual dense blocks (RDB) are applied to extract hierarchical features from the convolution layers. As shown in FIG. 3, in each complex-field RDB, local features are first extracted with 6 complex-field convolution layers and a complex-field ReLU, each containing 32 convolution kernels of size 3 × 3; the output of the previous complex-field RDB and the outputs of all preceding convolution layers in the current RDB are concatenated in order of generation as the input of the current convolution layer. Then 32 convolution kernels of size 1 × 1 fuse the feature maps from the previous complex-field RDB and all convolution layers in the current complex-field RDB. Finally, local residual learning is introduced: the feature map of the previous complex-field RDB is added to the 1 × 1-convolved feature map of the current complex-field RDB to improve data flow.
In the inter-channel feature extraction network, the real parts and imaginary parts of the SAR complex image blocks of all channels are separated, the real parts of all channels are concatenated and the imaginary parts of all channels are concatenated, and the two groups are then concatenated to form the network input of size 50 × 50 × 4. Hierarchical features are extracted with the same structure as the single-channel feature extraction network.
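A small sketch of how the 50 × 50 × 4 inter-channel input can be assembled from the real and imaginary parts of both channels; the array shapes follow the text, while the stacking order is an assumption of the sketch.

```python
# Assemble the inter-channel input: real parts of both channels, then imaginary
# parts of both channels, concatenated into a 4-channel real tensor per block.
import numpy as np

patch = np.random.randn(2, 50, 50) + 1j * np.random.randn(2, 50, 50)  # dual-channel block
real_parts = np.real(patch)              # (2, 50, 50): Re of channel 1 and channel 2
imag_parts = np.imag(patch)              # (2, 50, 50): Im of channel 1 and channel 2
inter_channel_input = np.concatenate([real_parts, imag_parts], axis=0)
print(inter_channel_input.shape)         # (4, 50, 50), i.e. the 50 x 50 x 4 input
```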
In the feature fusion module, the feature maps of all complex-field RDBs are concatenated, and 32 convolution kernels of size 1 × 1 are applied to fuse the RDB feature maps. Features are then further extracted with one complex-field convolution layer and a complex-field ReLU; this convolution layer contains 32 convolution kernels of size 3 × 3 with the sliding step set to 1. Finally, global residual learning is introduced to improve data flow.
In the moving target image generation module, one complex-field convolution layer and a complex-field ReLU are applied to extract features; this convolution layer contains 32 convolution kernels of size 3 × 3 with the sliding step set to 1. The moving target image is then generated with one complex-field convolution layer and the Abs activation function; this convolution layer contains 1 convolution kernel of size 3 × 3 with the sliding step set to 1. The Abs activation function is expressed as:
Abs(F_L) = sqrt( Re(F_L)² + Im(F_L)² ),
where F_L is the output of the L-th complex-field convolution layer.
Step 4, constructing a loss function and training the deep complex-field convolutional neural network with a complex-field gradient descent algorithm; the specific process is as follows:
the MSE loss function is adopted in the deep complex field convolution neural network, model parameters are continuously updated through the minimum loss function, and the MSE loss function is expressed as:
Figure BDA0003969968020000093
/>
wherein, O wh Is the moving object image true value. Firstly, forward transmission is carried out on a given sample to obtain output values of all network neural nodes; then, a loss function is calculated and updated using a complex field gradient descent algorithm
Figure BDA0003969968020000101
And &>
Figure BDA0003969968020000102
Finally, the minimum loss function is obtained. The network parameters after the t-th update are expressed as:
Figure BDA0003969968020000103
Figure BDA0003969968020000104
where μ is the learning rate. According to
Figure BDA0003969968020000105
And
Figure BDA0003969968020000106
then it is possible to obtain:
Figure BDA0003969968020000107
wherein the error term is represented as
Figure BDA0003969968020000108
When L = L, the signal is transmitted,
Figure BDA0003969968020000109
when L =1, \8230;, L-1, consideration is given to +>
Figure BDA00039699680200001010
When using a complex field ReLU as activation function, a->
Figure BDA00039699680200001011
The calculation is as follows:
Figure BDA00039699680200001012
where rot180 represents a 180 ° rotation of the matrix. When the activation function is not to be used,
Figure BDA00039699680200001013
is calculated as
Figure BDA00039699680200001014
Due to the fact that
Figure BDA00039699680200001015
Is counted to>
Figure BDA00039699680200001016
The parameters are thus updated as:
Figure BDA0003969968020000111
and substituting the updated parameters into the loss function, and repeatedly executing the parameter updating process until the loss function is minimum, and ending the updating. During training, an Nvidia Ge Force GTX 1080GPU is used for acceleration.
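A hedged sketch of the training loop on GPU follows: forward pass, MSE loss, gradient step. DummyNet, the batch size, learning rate and epoch count are placeholders, and plain SGD via autograd stands in for the complex-field gradient descent derived above.

```python
# Sketch of one training loop iteration on GPU (placeholders throughout).
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

class DummyNet(nn.Module):                       # stand-in for the complex-field CNN
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(4, 1, 3, padding=1)
    def forward(self, x):
        return torch.relu(self.conv(x))

net = DummyNet().to(device)
opt = torch.optim.SGD(net.parameters(), lr=1e-3)  # mu, the learning rate (assumed)
loss_fn = nn.MSELoss()

# stand-in batch: 8 blocks of the 4-channel real/imag input and their truth images
x = torch.randn(8, 4, 50, 50, device=device)
truth = torch.rand(8, 1, 50, 50, device=device)

for epoch in range(5):
    pred = net(x)                    # forward propagation through all nodes
    loss = loss_fn(pred, truth)      # MSE between prediction and truth value
    opt.zero_grad()
    loss.backward()                  # back-propagate to get parameter gradients
    opt.step()                       # parameter update with step size mu
    print(epoch, float(loss))
```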
Step 5, inputting the dual-channel SAR complex image blocks in the test set into the trained deep complex-field convolutional neural network, and taking the output as the moving target image result of the input image.
The beneficial effects of the invention are verified as follows: in the dual-channel SAR moving target detection experiment, the moving targets are submerged in clutter and are difficult to detect directly, as shown in the single-channel SAR test image of FIG. 4. FIG. 5 is the moving target image truth value; the targets are defocused in the azimuth direction owing to their tangential velocity. FIG. 6 is the predicted moving target image obtained with the trained network; it can be seen that the prediction agrees well with the truth value, and the images of all 6 moving targets are successfully obtained.
The detection method provided by this embodiment has the following advantages:
1. The method extends the ordinary convolutional neural network, including the network input and the network parameters, to the complex field, making full use of the amplitude and phase information of the SAR complex image and therefore better suited to SAR complex images;
2. The embodiment introduces complex-field residual dense blocks, extracts rich local features through densely connected complex-field convolution layers, adaptively learns more effective features through local feature fusion, and jointly and adaptively learns global hierarchical features through global feature fusion;
3. The embodiment extends the real-field gradient descent algorithm to the complex field and obtains the optimal network parameters by iteration;
4. The measured-data processing results show that the moving target detection method of the invention has high detection performance and is particularly suitable for slow moving targets and moving targets with only tangential velocity.
The SAR moving target detection method based on the complex-field convolutional neural network provided by the invention takes the motion parameters of the target, which are reflected in the SAR phase, into account, and exploits the differences between moving targets and clutter in both amplitude and phase; its detection performance is significantly improved over traditional processing methods. The network is trained with the training data, and the network parameters are updated continuously by minimizing the loss function, finally yielding an implicit clutter suppression and moving target detection model suited to SAR complex images.
The above-mentioned embodiments are only preferred embodiments of the present invention, and the scope of the present invention is not limited thereto, and any simple modifications or equivalent substitutions of the technical solutions that can be obviously obtained by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention.

Claims (10)

1. An SAR moving target detection method, characterized by comprising the following steps:
acquiring a dual-channel SAR complex image;
preprocessing a dual-channel SAR complex image to obtain a data set;
constructing a deep complex-field convolutional neural network using complex-field residual dense blocks;
inputting the dual-channel SAR complex image blocks in the data set into the deep complex-field convolutional neural network to obtain a moving target image;
the deep complex-field convolutional neural network comprising:
a moving target feature extraction module, used for extracting feature maps of the dual-channel SAR complex image blocks in the data set by using complex-field residual dense blocks (RDB);
a feature fusion module, used for fusing the feature maps of the SAR complex image blocks;
and a moving target image generation module, used for generating the SAR moving target prediction image from the feature-fused SAR complex image blocks.
2. The SAR moving target detection method according to claim 1, wherein the preprocessing of the dual-channel SAR complex image specifically comprises image cutting and amplitude normalization, and includes the following steps:
cutting the dual-channel SAR complex image into 50 × 50 image blocks with a sliding window of step length 25 and size 50 × 50, and superposing several simulated moving target signals on the image blocks at different signal-to-clutter-plus-noise ratios (SCNR) to obtain the complex image blocks;
carrying out amplitude normalization on each complex image block according to the maximum and minimum values of the channel-1 image, giving the normalized complex image I_i;
obtaining the moving target image truth value and carrying out amplitude normalization on it according to its maximum and minimum values, giving the normalized moving target image O;
and forming the data set from the processed dual-channel SAR complex images and the corresponding moving target image pairs.
3. The SAR moving target detection method according to claim 2, wherein the obtaining of the moving target image truth value comprises the following steps:
recording the position and velocity of real moving targets in the imaging scene with an airborne optical system, and then extracting the real moving target image truth value from the complex image;
and for simulated moving targets, generating the simulated moving target image truth value from the moving target simulation parameters.
4. The SAR moving target detection method according to claim 2, wherein the moving target feature extraction module comprises a single-channel feature extraction network and an inter-channel feature extraction network;
in the single-channel feature extraction network, the real part and the imaginary part of the channel-1 SAR complex image block are separated and treated as two independent channels forming the network input of size 50 × 50 × 2, which specifically includes:
first, extracting shallow features with two complex-field convolution layers and a complex-field ReLU, each convolution layer containing 32 convolution kernels of size 3 × 3 with the sliding step set to 1; assuming the input of the l-th complex-field convolution layer is F_{l-1}, containing K feature maps, the convolution kernels are W_l (M being the number of feature maps output by the layer) and the complex-field bias is b_l, the complex-field convolution layer is expressed as:
F_l^m = Σ_{k=1}^{K} W_l^{m,k} ∗ F_{l-1}^k + b_l^m, m = 1, …, M,
where ∗ is the complex convolution operation; the complex-field ReLU is expressed as:
CReLU(F_l) = ReLU(Re(F_l)) + j·ReLU(Im(F_l)),
where Re(·) and Im(·) respectively denote the real part and the imaginary part of a complex quantity;
then applying 3 complex-field residual dense blocks (RDB) to extract hierarchical features from the convolution layers;
in the inter-channel feature extraction network, the real parts and imaginary parts of the SAR complex image blocks of all channels are separated, the real parts of all channels are concatenated and the imaginary parts of all channels are concatenated, and the two groups are then concatenated as the network input of size 50 × 50 × 4; hierarchical features are extracted with the same structure as the single-channel feature extraction network.
5. The SAR moving target detection method according to claim 4, wherein the applying of 3 complex-field residual dense blocks RDB to extract hierarchical features from the convolution layers specifically comprises:
in each complex-field RDB, extracting local features with 6 complex-field convolution layers and a complex-field ReLU, each convolution layer containing 32 convolution kernels of size 3 × 3, with the output of the previous complex-field RDB and the outputs of all preceding convolution layers in the current RDB concatenated in order of generation as the input of the current convolution layer of the current RDB;
then applying 32 convolution kernels of size 1 × 1 to fuse the feature maps from the previous complex-field RDB and all convolution layers in the current complex-field RDB;
and finally introducing local residual learning, adding the feature map of the previous complex-field RDB to the feature map of the current complex-field RDB after the 1 × 1 convolution.
6. The SAR moving target detection method according to claim 5, wherein in the feature fusion module, the feature maps of all complex-field RDBs are concatenated and 32 convolution kernels of size 1 × 1 are applied to fuse the RDB feature maps; then one complex-field convolution layer and a complex-field ReLU further extract features, the convolution layer containing 32 convolution kernels of size 3 × 3 with the sliding step set to 1; and finally global residual learning is introduced to realize feature fusion of the feature maps of the SAR complex image blocks.
7. The SAR moving target detection method according to claim 6, wherein in the moving target image generation module, one complex-field convolution layer and a complex-field ReLU are applied to extract features, this convolution layer containing 32 convolution kernels of size 3 × 3 with the sliding step set to 1; the moving target image is then generated with one complex-field convolution layer and the Abs activation function, this convolution layer containing 1 convolution kernel of size 3 × 3 with the sliding step set to 1; the Abs activation function is expressed as:
Abs(F_L) = sqrt( Re(F_L)² + Im(F_L)² ),
where Re(·) and Im(·) respectively denote the real part and the imaginary part of a complex quantity, and F_L is the output of the L-th complex-field convolution layer.
8. The SAR moving target detection method according to claim 4, wherein before the moving target image is obtained with the deep complex-field convolutional neural network, the method further comprises constructing a loss function and training the deep complex-field convolutional neural network with a complex-field gradient descent algorithm; the specific training process is as follows:
the deep complex-field convolutional neural network adopts the MSE loss function, and the model parameters are updated continuously by minimizing the loss function; the MSE loss function E is expressed as:
E = (1/(W·H)) Σ_{w,h} ( Ô_{w,h} − O_{w,h} )²,
where Ô_{w,h} is the output of the deep complex-field convolutional neural network, O_{w,h} is the moving target image truth value, and W and H are the image block dimensions;
first, forward propagation is carried out on a given sample to obtain the output values of all network nodes;
then the loss function is calculated, and the weights W_l and biases b_l are updated with the complex-field gradient descent algorithm to finally minimize the loss function;
and the updated parameters are substituted into the loss function, the parameter updating process being repeated until the loss function reaches its minimum, at which point updating ends.
9. An SAR moving target detection apparatus, characterized by comprising:
an image acquisition module, used for acquiring a dual-channel SAR complex image;
an image preprocessing module, used for preprocessing the dual-channel SAR complex image to obtain a data set;
a network construction module, used for constructing a deep complex-field convolutional neural network with complex-field residual dense blocks;
a target prediction module, used for inputting the dual-channel SAR complex image blocks in the data set into the deep complex-field convolutional neural network to obtain a moving target image;
the deep complex-field convolutional neural network comprising:
a moving target feature extraction module, used for extracting feature maps of the SAR complex image blocks in the data set by using complex-field residual dense blocks (RDB);
a feature fusion module, used for fusing the feature maps of the SAR complex image blocks;
and a moving target image generation module, used for generating the SAR moving target prediction image from the feature-fused SAR complex image blocks.
10. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the SAR moving target detection method according to any one of claims 1 to 8.
CN202211528281.9A 2022-11-29 2022-11-29 SAR moving target detection method, device and storage medium Pending CN115965794A (en)

Publications (1)

Publication Number Publication Date
CN115965794A true CN115965794A (en) 2023-04-14



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination