CN115375786A - One-dimensional synthetic aperture depth convolution neural network and image reconstruction method - Google Patents

Info

Publication number: CN115375786A
Application number: CN202210907445.2A
Authority: CN (China)
Prior art keywords: image, network, dimensional, brightness, synthetic aperture
Legal status (assumed, not a legal conclusion): Pending
Other languages: Chinese (zh)
Inventors: 李�浩, 吴袁超, 李一楠, 窦昊锋, 宋广南, 杨小娇, 李鹏飞
Current Assignee: Xian Institute of Space Radio Technology
Original Assignee: Xian Institute of Space Radio Technology
Application filed by Xian Institute of Space Radio Technology

Classifications

    • G06T 11/00 — 2D [two-dimensional] image generation
    • G06T 11/001 — texturing; colouring; generation of texture or colour
    • G06N 3/02 — neural networks; G06N 3/08 — learning methods
    • G06V 10/40 — extraction of image or video features
    • G06V 10/774 — generating sets of training patterns, e.g. bagging or boosting
    • G06V 10/82 — image or video recognition or understanding using neural networks

Abstract

The invention discloses a one-dimensional synthetic aperture deep convolutional neural network and an image reconstruction method. Compared with traditional brightness-temperature reconstruction methods, the method markedly suppresses Gibbs oscillation without reducing spatial resolution and improves the quality of the reconstructed brightness-temperature image. The method is simple and reliable, effectively improves the imaging quality of the synthetic aperture radiometer, raises the effective utilization of satellite remote-sensing data, and has broad market application prospects.

Description

One-dimensional synthetic aperture depth convolution neural network and image reconstruction method
Technical Field
The invention relates to a one-dimensional synthetic aperture deep convolutional neural network and an image reconstruction method, belonging to the technical field of space microwave remote sensing.
Background
The front end of a one-dimensional synthetic aperture radiometer is a sparsely arranged one-dimensional antenna array. Every pair of antennas in the array forms a baseline pair that performs interferometric measurement of the scene signal and outputs a visibility function (each baseline pair corresponds to one visibility sample). The traditional one-dimensional synthetic aperture image reconstruction method exploits the Fourier transform relationship between the visibility function and the scene brightness temperature, reconstructing the brightness-temperature image by Fourier transforming the visibility function.
For a one-dimensional synthetic aperture radiometer, the number of antennas is limited and the set of sampled baselines is correspondingly limited, so the spatial-frequency domain is truncated and the imaging result exhibits pronounced Gibbs oscillation errors in regions of high brightness-temperature contrast. These oscillations spread across the whole brightness-temperature image, enlarging the reconstruction error and ultimately degrading the detection accuracy of the radiometer. Existing synthetic aperture radiometer imaging algorithms generally reduce Gibbs oscillation by windowing, but windowing lowers the spatial resolution of the image. A more effective way to reduce Gibbs oscillation errors in the image is therefore desirable.
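The truncation effect and the resolution cost of windowing can be reproduced in a few lines. The sketch below is not part of the patent; the Blackman window and the 13-baseline cutoff are illustrative choices. It reconstructs a point source from a truncated set of spatial frequencies, with and without a window:

```python
import numpy as np

def reconstruct(n_baselines, window=None):
    """Brightness profile of a point source from truncated spatial frequencies."""
    u = np.arange(-n_baselines, n_baselines + 1)   # sampled integer baselines
    theta = np.linspace(-0.5, 0.5, 1001)           # direction axis
    vis = np.ones(len(u))                          # point source -> flat visibility
    if window is not None:
        vis = vis * window(len(u))                 # taper the visibility samples
    img = np.real(np.exp(2j * np.pi * np.outer(theta, u)) @ vis)
    return img / img.max()

def peak_sidelobe(img):
    """Largest |value| beyond the first null to the right of the main lobe."""
    i = int(np.argmax(img))
    while img[i + 1] < img[i]:                     # walk down past the main lobe
        i += 1
    return np.abs(img[i:]).max()

raw = reconstruct(13)                      # truncated sampling: strong Gibbs sidelobes
win = reconstruct(13, window=np.blackman)  # windowed: lower sidelobes, wider main lobe
print(peak_sidelobe(raw), peak_sidelobe(win))
```

Running this shows the windowed reconstruction has far smaller sidelobes than the raw truncated one, at the cost of a wider main lobe, which is exactly the resolution trade-off the patent seeks to avoid.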
Disclosure of Invention
The technical problem solved by the invention is as follows: aiming at the reconstruction of one-dimensional synthetic aperture microwave brightness-temperature images, a one-dimensional synthetic aperture deep convolutional neural network and an image reconstruction method are provided. By designing a deep convolutional neural network structure for the one-dimensional synthetic aperture radiometer, the method reconstructs the one-dimensional synthetic aperture microwave brightness-temperature image; it is simple and reliable, effectively improves image quality, and raises the inversion accuracy of the synthetic aperture radiometer.
The technical solution of the invention is as follows:
The invention discloses a one-dimensional synthetic aperture image reconstruction method based on a deep convolutional neural network, comprising the following steps:
(1) Collect color image training samples of different scenes;
(2) Convert each color image training sample, by mapping and interpolation, into a one-dimensional brightness-temperature distribution training sample with a fixed number of pixels;
(3) Compute the visibility function from each one-dimensional brightness-temperature distribution training sample, obtaining by simulation the visibility function training samples corresponding to all color image training samples;
(4) Construct the one-dimensional synthetic aperture deep convolutional neural network and initialize the network parameters;
(5) Input the visibility function training samples into the network for image reconstruction, obtaining network-reconstructed brightness-temperature images for training;
(6) Compute the image error between the one-dimensional brightness-temperature distribution training samples and the corresponding network-reconstructed brightness-temperature images;
(7) From the image error results, obtain the curve of image error versus training iteration and judge whether the error has stabilized; if so, finish training, yielding the trained one-dimensional synthetic aperture deep convolutional neural network; otherwise, update the network parameters by back-propagation with standard stochastic gradient descent and return to step (5);
(8) Obtain, by simulation or measurement, one-dimensional brightness-temperature distribution verification samples and their corresponding visibility function verification samples;
(9) Input the visibility function verification samples into the trained network to obtain network-reconstructed brightness-temperature images for verification; reconstruct the same visibility function verification samples with the traditional one-dimensional synthetic aperture brightness-temperature reconstruction method to obtain traditionally reconstructed brightness-temperature images;
(10) Compare the network-reconstructed and traditionally reconstructed brightness-temperature images to evaluate the effect of the network reconstruction.
In the above reconstruction method, the color image training sample is converted into a one-dimensional brightness-temperature distribution training sample with a fixed number of pixels by mapping and interpolation, specifically: convert the color image to gray-scale values in the range 0-255, map the gray values to brightness-temperature values of 0-300 K, and resize the image to the fixed number of pixels by linear interpolation, completing the creation of the one-dimensional brightness-temperature distribution.
In the above reconstruction method, the visibility function is expressed as:

$$V(u) = \frac{1}{\sqrt{\Omega_k \Omega_j}} \int F_{nk}(\theta)\, F_{nj}^{*}(\theta)\, T_B(\theta)\, e^{-j 2\pi u \sin\theta}\, \mathrm{d}\theta$$

where $V(u)$ is the visibility function; $T_B(\theta)$ is the brightness-temperature distribution corresponding to the one-dimensional brightness-temperature map; $F_{nk}(\theta)$ and $F_{nj}(\theta)$ are the antenna patterns of elements $k$ and $j$; $\Omega_k$ and $\Omega_j$ are the antenna solid angles of elements $k$ and $j$; $\theta$ is the azimuth angle; and $u = (x_j - x_k)/\lambda_0$ is the spatial frequency, with $x_k$, $x_j$ the geometric position coordinates of elements $k$, $j$ and $\lambda_0$ the wavelength.
In the above reconstruction method, the one-dimensional synthetic aperture deep convolutional neural network comprises a feature enhancement module and an image reconstruction module.

The feature enhancement module extracts effective feature information from the visibility function data and outputs it to the image reconstruction module; it consists of a feature extraction unit and a reverse mapping unit. The feature extraction unit contains two convolutional layers that extract feature information from the visibility function data; the reverse mapping unit consists of two transposed convolutional layers that upsample the convolutional-layer output. A batch normalization layer and a linear rectification (ReLU) function follow each convolutional and transposed convolutional layer.

The image reconstruction module consists of two fully connected layers and six convolutional layers. It receives the feature information from the feature enhancement module, extracts and reconstructs the brightness-temperature image from it, and outputs the network-reconstructed one-dimensional brightness-temperature image. The two fully connected layers use a nonlinear activation function to map the feature information into the subsequent convolutional layers; the six convolutional layers further extract high-order features, perform dimension conversion and image reconstruction, and output the network-reconstructed one-dimensional brightness-temperature image.
In the above reconstruction method, whether the image error has stabilized is judged as follows: compute the image error of the network-reconstructed training images at each training iteration and obtain the curve of image error versus iteration number; when, over several consecutive iterations, the change in image error between any two adjacent iterations is within 0.1 K, the image error is judged to have stabilized.
In the reconstruction method, the network parameters are updated by back-propagation with standard stochastic gradient descent:

$$W_{p+1} = W_p - \alpha \frac{\partial L(W_p)}{\partial W_p}$$

where $W_p$ is the network parameter at the $p$-th iteration, $p$ is the update count, $\alpha$ is the learning rate, and $\partial L(W_p)/\partial W_p$ is the partial derivative of the image error with respect to the network parameters, i.e. the gradient of the image error.
In the above reconstruction method, the network-reconstructed brightness-temperature image for verification is compared with the traditionally reconstructed one to evaluate the network reconstruction, specifically: compute the network reconstruction error from the network-reconstructed verification image and the one-dimensional brightness-temperature distribution verification sample; compute the traditional reconstruction error from the traditionally reconstructed image and the same verification sample; if the network reconstruction error is smaller than the traditional reconstruction error, use the network to reconstruct brightness-temperature images; otherwise, readjust the network parameters and return to step (5).
In the above reconstruction method, the image error is computed as:

$$L(W) = \frac{1}{m}\sum_{k=1}^{m}\sqrt{\frac{1}{n}\sum_{i=1}^{n}\bigl(T_k(i) - \mathrm{inv}T_k(i)\bigr)^2}$$

where $T_k$ is the brightness temperature of the $k$-th one-dimensional brightness-temperature distribution sample and $\mathrm{inv}T_k$ is the brightness temperature of the corresponding network-reconstructed image; $n$ is the number of brightness-temperature pixels of the one-dimensional brightness-temperature distribution and the reconstructed image, $m$ is the number of samples, $k$ is the sample index, and $W = \{w_1, \ldots, w_i\}$ is the set of all network parameters.
The invention also discloses a one-dimensional synthetic aperture deep convolutional neural network comprising a feature enhancement module and an image reconstruction module.

The feature enhancement module extracts effective feature information from the training sample and outputs it to the image reconstruction module; it consists of a feature extraction unit and a reverse mapping unit.

The feature extraction unit contains two convolutional layers that extract feature information from the visibility function data; the reverse mapping unit consists of two transposed convolutional layers that upsample the convolutional-layer output to the same size as the input of the feature enhancement module. A batch normalization layer and a ReLU function follow each convolutional and transposed convolutional layer.

The image reconstruction module, consisting of two fully connected layers and six convolutional layers, receives the feature information output by the feature enhancement module, extracts and reconstructs the brightness-temperature image, and outputs the network-reconstructed one-dimensional brightness-temperature image.

The two fully connected layers use a nonlinear activation function to map the feature information into the subsequent convolutional layers; the six convolutional layers further extract high-order features, use them to perform the frequency-domain to spatial-domain dimension conversion and image reconstruction, and output the network-reconstructed one-dimensional brightness-temperature image.
In the one-dimensional synthetic aperture deep convolutional neural network, the batch normalization layer and the ReLU function are applied as follows.

The batch normalization layer normalizes the output data of the convolutional and transposed convolutional layers:

$$\mathrm{BN}(x) = \alpha \cdot \frac{x - E(x)}{\sqrt{D(x)}} + \beta$$

where $x$ is the input to the batch normalization layer, $\mathrm{BN}(x)$ is the normalized output, $E(x)$ and $D(x)$ are the mean and variance of the input data, and $\alpha$ and $\beta$ are the scale and shift factors, parameters learned by the network during training.
The linear rectification (ReLU) function applies a nonlinearity to the data, increasing the nonlinear expressive power of the model:

ReLU(x) = max(0, x)

where x is the input to the ReLU function and ReLU(x), its output, is always greater than or equal to 0.
In the one-dimensional synthetic aperture deep convolutional neural network, the six convolutional layers further extract high-order features, perform the dimension conversion and image reconstruction, and output the network-reconstructed one-dimensional brightness-temperature image, specifically: the six convolutional layers capture the high-order relationship between the frequency-domain visibility function and the spatial-domain brightness-temperature image, together with the system characteristics of the synthetic aperture radiometer, and learn the frequency-domain to spatial-domain dimension conversion, thereby achieving brightness-temperature image reconstruction.
Compared with the prior art, the invention has the following beneficial effects:
(1) By designing a one-dimensional synthetic aperture deep convolutional neural network and learning the mapping from visibility function input to brightness-temperature output, the method realizes one-dimensional synthetic aperture image reconstruction while effectively suppressing Gibbs oscillation errors without reducing spatial resolution, improving image quality; once trained, the network can rapidly reconstruct the output of a one-dimensional synthetic aperture radiometer;
(2) Training a deep neural network designed for the one-dimensional synthetic aperture and reconstructing images with the trained network effectively reduces Gibbs oscillation errors and improves the detection accuracy of the synthetic aperture radiometer.
Drawings
FIG. 1 is a diagram of a one-dimensional synthetic aperture depth convolution neural network of the present invention;
FIG. 2 is a block diagram of a feature enhancement module of the present invention;
FIG. 3 is a block diagram of an image reconstruction module according to the present invention;
FIG. 4 shows the brightness-temperature reconstruction method based on the deep convolutional neural network of the present invention: (a) the network training process; (b) the network verification and application process;
FIG. 5 is a schematic view of an array arrangement according to the present invention;
FIG. 6 is a schematic view of a point source scene in accordance with the present invention;
FIG. 7 shows the brightness-temperature reconstruction results: (a) the IFFT-reconstructed image; (b) the network-reconstructed image.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
As shown in fig. 1, the invention discloses a one-dimensional synthetic aperture image reconstruction method based on a deep convolutional neural network, comprising:
step (1), collecting color image training samples of different scenes;
Step (2): convert the color image training sample, by mapping and interpolation, into a one-dimensional brightness-temperature distribution training sample with a fixed number of pixels, specifically: convert the color image to gray-scale values in the range 0-255, map the gray values to brightness-temperature values of 0-300 K, and resize the image to the fixed number of pixels by linear interpolation, completing the creation of the one-dimensional brightness-temperature distribution.
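A minimal sketch of this preprocessing step follows. The standard luminance weights for the gray conversion and the 301-pixel target size are illustrative assumptions, not specified by the patent:

```python
import numpy as np

def to_brightness_temperature(rgb, n_pixels=301):
    """Map a colour image to a 1-D brightness-temperature profile (0-300 K)."""
    rgb = np.asarray(rgb, dtype=float)
    # colour -> grey level in 0..255 (standard luminance weights, assumed here)
    grey = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    # grey level 0..255 -> brightness temperature 0..300 K
    tb = grey * (300.0 / 255.0)
    # collapse to one dimension (rows averaged here, for illustration only)
    profile = tb.mean(axis=0)
    # linear interpolation to the fixed number of pixels
    x_old = np.linspace(0.0, 1.0, profile.size)
    x_new = np.linspace(0.0, 1.0, n_pixels)
    return np.interp(x_new, x_old, profile)

scene = np.random.default_rng(0).integers(0, 256, size=(64, 128, 3))
tb_profile = to_brightness_temperature(scene)
print(tb_profile.shape, tb_profile.min(), tb_profile.max())
```

Any scene image then yields a fixed-length brightness-temperature vector suitable as a training label.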
Step (3): compute the visibility function from the one-dimensional brightness-temperature distribution training sample, obtaining by simulation the visibility function training samples corresponding to all color image training samples. The visibility function is:

$$V(u) = \frac{1}{\sqrt{\Omega_k \Omega_j}} \int F_{nk}(\theta)\, F_{nj}^{*}(\theta)\, T_B(\theta)\, e^{-j 2\pi u \sin\theta}\, \mathrm{d}\theta$$

where $V(u)$ is the visibility function; $T_B(\theta)$ is the brightness-temperature distribution corresponding to the one-dimensional brightness-temperature map; $F_{nk}(\theta)$ and $F_{nj}(\theta)$ are the antenna patterns of elements $k$ and $j$; $\Omega_k$ and $\Omega_j$ are the antenna solid angles of elements $k$ and $j$; $\theta$ is the azimuth angle; and $u = (x_j - x_k)/\lambda_0$ is the spatial frequency, with $x_k$, $x_j$ the geometric position coordinates of elements $k$, $j$ and $\lambda_0$ the wavelength.
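The patent's visibility function can be evaluated numerically by discretizing the integral. The sketch below assumes identical (and, by default, isotropic) element patterns, a simplification for illustration:

```python
import numpy as np

def visibility(tb, theta, x_k, x_j, wavelength=1.0, pattern=None):
    """Discretized evaluation of the visibility integral for one baseline.

    tb      : brightness-temperature samples T_B(theta)
    theta   : uniformly spaced azimuth angles (radians)
    pattern : element antenna pattern F_n(theta); taken identical for both
              elements here, which is a simplifying assumption
    """
    if pattern is None:
        pattern = np.ones_like(theta)              # isotropic elements (assumed)
    u = (x_j - x_k) / wavelength                   # spatial frequency of the baseline
    dtheta = theta[1] - theta[0]
    omega = (np.abs(pattern) ** 2).sum() * dtheta  # 1-D analogue of the solid angle
    kernel = pattern * np.conj(pattern) * tb * np.exp(-2j * np.pi * u * np.sin(theta))
    return kernel.sum() * dtheta / omega

theta = np.linspace(-np.pi / 3, np.pi / 3, 2001)
tb = np.full_like(theta, 300.0)                    # uniform 300 K scene
v0 = visibility(tb, theta, 0.0, 0.0)               # zero baseline
print(abs(v0))
```

For a zero baseline the normalization makes the visibility equal the scene brightness temperature, a quick sanity check on the discretization.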
Step (4): construct the one-dimensional synthetic aperture deep convolutional neural network and initialize the network parameters.

Step (5): input the visibility function training samples into the one-dimensional synthetic aperture deep convolutional neural network for image reconstruction, obtaining network-reconstructed brightness-temperature images for training.

The one-dimensional synthetic aperture deep convolutional neural network comprises a feature enhancement module and an image reconstruction module.

The feature enhancement module extracts effective feature information from the visibility function data and outputs it to the image reconstruction module; it consists of a feature extraction unit and a reverse mapping unit. The feature extraction unit contains two convolutional layers that extract feature information from the visibility function data; the reverse mapping unit consists of two transposed convolutional layers that upsample the convolutional-layer output. A batch normalization layer and a ReLU function follow each convolutional and transposed convolutional layer.

The image reconstruction module consists of two fully connected layers and six convolutional layers. It receives the feature information from the feature enhancement module, extracts and reconstructs the brightness-temperature image, and outputs the network-reconstructed one-dimensional brightness-temperature image. The two fully connected layers use a nonlinear activation function to map the feature information into the subsequent convolutional layers; the six convolutional layers further extract high-order features, perform dimension conversion and image reconstruction, and output the network-reconstructed one-dimensional brightness-temperature image.
Step (6): compute the image error between the one-dimensional brightness-temperature distribution training samples and the corresponding network-reconstructed brightness-temperature images.

Step (7): from the image error results, obtain the curve of image error versus training iteration and judge whether the error has stabilized; if so, finish training, yielding the trained one-dimensional synthetic aperture deep convolutional neural network; otherwise, update the network parameters by back-propagation with standard stochastic gradient descent and return to step (5), as shown in fig. 4(a).

Whether the image error has stabilized is judged as follows: the image error of the network-reconstructed training images is computed at each training iteration, and the curve of image error versus iteration number is obtained; when the change in image error between any two adjacent iterations over several consecutive iterations is within 0.1 K, the error is judged stable. In this embodiment, the error change between every two adjacent iterations is within 0.1 K for 5 consecutive iterations.
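The 0.1 K stabilization criterion can be expressed as a small helper function; the function name and the error-history values are illustrative:

```python
def error_stabilized(errors, tol_k=0.1, n_iters=5):
    """True when the last n_iters successive changes of the image error
    are all within tol_k kelvin (the 0.1 K criterion of the method)."""
    if len(errors) < n_iters + 1:
        return False
    recent = errors[-(n_iters + 1):]
    # every change between adjacent iterations must be within tolerance
    return all(abs(b - a) <= tol_k for a, b in zip(recent, recent[1:]))

history = [40.0, 12.0, 5.0, 2.1, 2.05, 2.02, 2.01, 2.01, 2.0]
print(error_stabilized(history))
```

The training loop would call this after each iteration and stop as soon as it returns True.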
The standard stochastic gradient descent update, with the gradient obtained by back-propagation, is:

$$W_{p+1} = W_p - \alpha \frac{\partial L(W_p)}{\partial W_p}$$

where $W_p$ is the network parameter at the $p$-th iteration, $p$ is the update count, $\alpha$ is the learning rate, and $\partial L(W_p)/\partial W_p$ is the partial derivative of the image error with respect to the network parameters, i.e. the gradient of the image error.
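The update rule amounts to a plain gradient step. In the sketch below a toy quadratic loss with a known minimum stands in for the image error (an illustrative assumption, not the patent's network):

```python
import numpy as np

def sgd_step(w, grad, lr=0.01):
    """One update W_{p+1} = W_p - alpha * dL/dW of the standard SGD rule."""
    return w - lr * grad

# illustrative quadratic "image error" L(w) = ||w - w*||^2 with known minimum
target = np.array([1.0, -2.0, 0.5])
w = np.zeros(3)
for _ in range(500):
    grad = 2.0 * (w - target)        # analytic gradient of the quadratic loss
    w = sgd_step(w, grad, lr=0.05)
print(np.round(w, 4))                # converges to the target parameters
```

Repeated steps drive the parameters to the loss minimum, mirroring how the network parameters descend the image-error surface during training.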
Step (8): obtain, by simulation or measurement, one-dimensional brightness-temperature distribution verification samples and their corresponding visibility function verification samples.

Step (9): input the visibility function verification samples into the trained one-dimensional synthetic aperture deep convolutional neural network to obtain network-reconstructed brightness-temperature images for verification; reconstruct the same visibility function verification samples with the traditional one-dimensional synthetic aperture brightness-temperature reconstruction method to obtain traditionally reconstructed brightness-temperature images.

Step (10): compare the network-reconstructed and traditionally reconstructed brightness-temperature images to evaluate the network reconstruction, as shown in fig. 4(b), specifically: compute the network reconstruction error from the network-reconstructed verification image and the one-dimensional brightness-temperature distribution verification sample; compute the traditional reconstruction error from the traditionally reconstructed image and the same verification sample; if the network reconstruction error is smaller, use the network for brightness-temperature reconstruction; otherwise, readjust the network parameters and return to step (5).
The image error is computed as:

$$L(W) = \frac{1}{m}\sum_{k=1}^{m}\sqrt{\frac{1}{n}\sum_{i=1}^{n}\bigl(T_k(i) - \mathrm{inv}T_k(i)\bigr)^2}$$

where $T_k$ is the brightness temperature of the $k$-th one-dimensional brightness-temperature distribution sample and $\mathrm{inv}T_k$ is the brightness temperature of the corresponding network-reconstructed image; $n$ is the number of brightness-temperature pixels of the one-dimensional brightness-temperature distribution and the reconstructed image, $m$ is the number of samples, $k$ is the sample index, and $W = \{w_1, \ldots, w_i\}$ is the set of all network parameters.
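The error metric can be sketched as a per-sample RMSE in kelvin averaged over samples; this exact form is an assumption, since the patent text does not fully pin it down:

```python
import numpy as np

def image_error(t_true, t_recon):
    """Per-sample RMSE (kelvin) averaged over the m samples; one plausible
    reading of the patent's error formula, assumed here for illustration."""
    t_true = np.asarray(t_true, dtype=float)
    t_recon = np.asarray(t_recon, dtype=float)
    per_sample = np.sqrt(np.mean((t_true - t_recon) ** 2, axis=1))  # one RMSE per sample
    return float(per_sample.mean())

truth = np.array([[300.0, 280.0, 250.0],
                  [100.0, 120.0, 150.0]])
recon = truth + 1.0                    # every pixel off by exactly 1 K
print(image_error(truth, recon))
```

A kelvin-valued metric like this is what makes the 0.1 K stabilization threshold of step (7) directly interpretable.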
As shown in fig. 2, the invention discloses a one-dimensional synthetic aperture deep convolutional neural network comprising a feature enhancement module and an image reconstruction module.

The feature enhancement module extracts effective feature information from the training sample and outputs it to the image reconstruction module; it consists of a feature extraction unit and a reverse mapping unit.

The feature extraction unit contains two convolutional layers (C1-C2) that extract feature information from the visibility function data; the reverse mapping unit consists of two transposed convolutional layers (TC1-TC2) that upsample the convolutional-layer output to the same size as the input of the feature enhancement module. A batch normalization layer and a ReLU function follow each convolutional and transposed convolutional layer.

The image reconstruction module receives the feature information output by the feature enhancement module, extracts and reconstructs the brightness-temperature image, and outputs the network-reconstructed one-dimensional brightness-temperature image; it consists of two fully connected layers (FC1 and FC2) and six convolutional layers, as shown in fig. 3.

The two fully connected layers use a nonlinear activation function to map the feature information into the subsequent convolutional layers. The six convolutional layers further extract high-order features, use them to perform the frequency-domain to spatial-domain dimension conversion and image reconstruction, and output the network-reconstructed one-dimensional brightness-temperature image; specifically, they capture the high-order relationship between the frequency-domain visibility function and the spatial-domain brightness-temperature image, together with the system characteristics of the synthetic aperture radiometer, and learn the frequency-domain to spatial-domain conversion, thereby achieving brightness-temperature image reconstruction.
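The downsample/upsample behaviour of the feature extraction and reverse mapping units can be sketched with plain strided and transposed 1-D convolutions in NumPy. The kernel, stride, channel-free layout, and input length are illustrative assumptions; the patent does not specify them:

```python
import numpy as np

def conv1d(x, kernel, stride=2):
    """Valid 1-D convolution with stride (downsamples the feature vector)."""
    n_out = (len(x) - len(kernel)) // stride + 1
    return np.array([np.dot(x[i * stride:i * stride + len(kernel)], kernel)
                     for i in range(n_out)])

def transposed_conv1d(y, kernel, stride=2):
    """Transposed 1-D convolution with stride (upsamples back)."""
    n_out = (len(y) - 1) * stride + len(kernel)
    x = np.zeros(n_out)
    for i, v in enumerate(y):
        x[i * stride:i * stride + len(kernel)] += v * kernel
    return x

rng = np.random.default_rng(1)
vis = rng.normal(size=64)                 # stand-in for visibility-function data
k = np.array([0.25, 0.5, 0.25])

# feature-extraction unit: two strided convolutions
f1 = conv1d(vis, k)                       # 64 -> 31
f2 = conv1d(f1, k)                        # 31 -> 15
# reverse-mapping unit: two transposed convolutions
g1 = transposed_conv1d(f2, k)             # 15 -> 31
g2 = transposed_conv1d(g1, k)             # 31 -> 63 (padding would restore 64 exactly)
print(len(f2), len(g2))
```

The transposed pair nearly inverts the size reduction of the strided pair; in a real framework, padding choices make the output size match the module input exactly, as the patent's reverse mapping unit requires.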
Adding a batch normalization layer and a linear rectification function, wherein the specific method comprises the following steps:
the batch normalization layer performs batch normalization processing on output data of the convolution layer and the transposed convolution layer, and the formula is as follows:
BN(x) = α · (x − E(x)) / √(D(x)) + β
where x denotes the input data of the batch normalization layer, BN(x) denotes the output after batch normalization, E(x) and D(x) are respectively the mean and variance of the input data, and α and β denote a scale factor and a shift factor, which are parameters learned by the network itself during training.
The linear rectification function makes the data nonlinear and increases the nonlinear expressive power of the model; the formula is as follows:
ReLU(x)=max(0,x)
where x denotes the input data of the linear rectification function and ReLU(x) denotes the output after processing by the linear rectification function; the output of the linear rectification function is a number greater than or equal to 0.
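Both operations are easy to verify numerically. The sketch below follows the two formulas above, with a small ε added inside the square root for numerical safety (the patent formula does not mention it):

```python
import numpy as np

def batch_norm(x, alpha=1.0, beta=0.0, eps=1e-8):
    # BN(x) = alpha * (x - E(x)) / sqrt(D(x)) + beta
    return alpha * (x - x.mean()) / np.sqrt(x.var() + eps) + beta

def relu(x):
    # ReLU(x) = max(0, x): outputs are always >= 0
    return np.maximum(0.0, x)

x = np.array([1.0, 2.0, 3.0, 4.0])
y = relu(batch_norm(x))
```

With α = 1 and β = 0 the normalized batch has zero mean and unit variance, and the rectification then clips the negative half to zero.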
Example 1
In this embodiment, a one-dimensional linear array of 6 elements is used for simulation; fig. 5 shows the array used in the simulation. The minimum element spacing is 0.5 wavelengths, and the maximum baseline of the feed array is 13 wavelengths. A corresponding one-dimensional synthetic aperture deep convolutional neural network is designed based on this array, and 1000 pictures of different scenes are selected and converted into original-scene brightness temperature image samples for network training according to the network training steps. The target observation scene is set as the point source scene shown in fig. 6, and the corresponding visibility function is obtained from that point source scene.
The traditional one-dimensional synthetic aperture brightness temperature reconstruction method (inverse fast Fourier transform, IFFT) is used to reconstruct the brightness temperature from the visibility function of the point source scene; the result is shown in fig. 7 (a). The same visibility function is then input into the trained network for brightness temperature reconstruction; the result is shown in fig. 7 (b). The brightness temperature image reconstructed by the one-dimensional synthetic aperture deep convolutional neural network is closer to the original scene: its sidelobe Gibbs oscillation is reduced by 70% compared with the IFFT-reconstructed image, while the null beam widths of the images reconstructed by the two methods are essentially the same, indicating that the spatial resolution is not degraded.
Therefore, the one-dimensional synthetic aperture image reconstruction method based on the deep convolutional neural network can significantly suppress Gibbs oscillation and improve the quality of the reconstructed brightness temperature image without reducing the spatial resolution.
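The Gibbs oscillation that the network is designed to suppress can be reproduced with a short numpy experiment: truncating the spectrum of a point source at a maximum baseline of 13 wavelengths (as in this example) and applying the traditional IFFT reconstruction produces sinc-like sidelobes around the main lobe. The grid size and scene below are illustrative, not the patent's simulation code:

```python
import numpy as np

n = 256
tb = np.zeros(n)
tb[n // 2] = 300.0                        # 300 K point source scene

# Visibility samples limited to |u| <= 13 wavelengths (finite array baseline)
spectrum = np.fft.fftshift(np.fft.fft(tb))
u = np.arange(n) - n // 2
spectrum[np.abs(u) > 13] = 0.0

# Traditional IFFT brightness temperature reconstruction
recon = np.real(np.fft.ifft(np.fft.ifftshift(spectrum)))

main_lobe = recon[n // 2]
worst_sidelobe = np.max(np.abs(recon[: n // 2 - 15]))  # oscillation away from the peak
```

The ratio of `worst_sidelobe` to `main_lobe` is the quantity the trained network reduces, per the comparison between figs. 7 (a) and 7 (b).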
According to the invention, a one-dimensional synthetic aperture deep convolutional neural network consisting of a feature enhancement module and an image reconstruction module is designed to learn the mapping between the visibility function input and the brightness temperature output. The network is trained on brightness temperature image samples and its performance is verified. This overcomes the problems that the sampling baseline of the radiometer is limited and that the imaging result contains obvious Gibbs oscillation errors, improves the quality of one-dimensional synthetic aperture image reconstruction, and makes the reconstructed brightness temperature more consistent with the brightness temperature distribution of the original scene.
The invention can effectively improve the detection precision of a synthetic aperture radiometer, and provides both a one-dimensional synthetic aperture deep convolutional neural network and an image reconstruction method.
The above-described embodiments are merely preferred embodiments of the present invention, and general changes and substitutions by those skilled in the art within the technical scope of the present invention are included in the protection scope of the present invention.

Claims (11)

1. A one-dimensional synthetic aperture image reconstruction method based on a deep convolutional neural network is characterized by comprising the following steps:
(1) Collecting color image training samples of different scenes;
(2) Mapping and interpolating the color image training samples to obtain one-dimensional brightness temperature distribution map training samples with the same number of pixel points;
(3) Calculating a visibility function by utilizing the one-dimensional brightness and temperature distribution graph training sample, and simulating to obtain visibility function training samples corresponding to all the color image training samples;
(4) Constructing a one-dimensional synthetic aperture depth convolution neural network, and initializing network parameters;
(5) Inputting the visibility function training sample into a one-dimensional synthetic aperture depth convolution neural network for image reconstruction to obtain a network reconstruction brightness temperature image for training;
(6) Calculating the image error between each one-dimensional brightness temperature distribution map training sample and the corresponding network-reconstructed brightness temperature image for training;
(7) Obtaining a curve of the image error as a function of the number of training iterations from the image error results, and judging whether the image error has stabilized; if so, finishing training to obtain the trained one-dimensional synthetic aperture deep convolutional neural network; otherwise, updating the network parameters with the standard back-propagation stochastic gradient descent algorithm and returning to step (5);
(8) Obtaining a one-dimensional brightness-temperature distribution diagram verification sample for network verification and a visibility function verification sample corresponding to the one-dimensional brightness-temperature distribution diagram verification sample through simulation or measurement;
(9) Inputting the visibility function verification sample into the trained one-dimensional synthetic aperture deep convolutional neural network to obtain a network-reconstructed brightness temperature image for verification; the visibility function verification sample is also reconstructed with the traditional one-dimensional synthetic aperture brightness temperature reconstruction method to obtain a traditional reconstructed brightness temperature image;
(10) Comparing and analyzing the network-reconstructed brightness temperature image for verification with the traditional reconstructed brightness temperature image, so as to evaluate the effect of the network-reconstructed image.
2. The one-dimensional synthetic aperture image reconstruction method based on the deep convolutional neural network as claimed in claim 1, wherein: the color image training samples are processed into one-dimensional brightness temperature distribution map training samples with the same number of pixel points through mapping and interpolation, the specific method being: converting each color image into a grayscale image with values in the range 0-255, mapping the grayscale values to brightness temperature values of 0-300 K, and adjusting the image size to a fixed, identical number of pixels by linear interpolation, thereby completing the creation of the one-dimensional brightness temperature distribution map.
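A minimal sketch of this preprocessing, assuming a single image row and an arbitrary target pixel count of 64 (the patent fixes the pixel count but does not state its value):

```python
import numpy as np

def to_brightness_temperature(gray_row, n_pixels=64):
    """Map 0-255 gray values to 0-300 K and resample to a fixed pixel count."""
    tb = gray_row.astype(float) * (300.0 / 255.0)   # gray value -> brightness temperature (K)
    x_old = np.linspace(0.0, 1.0, len(tb))
    x_new = np.linspace(0.0, 1.0, n_pixels)
    return np.interp(x_new, x_old, tb)               # linear interpolation to n_pixels

row = np.array([0, 51, 102, 153, 204, 255])          # toy grayscale row
tb = to_brightness_temperature(row)
```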
3. The one-dimensional synthetic aperture image reconstruction method based on the deep convolutional neural network as claimed in claim 1, wherein: the visibility function is expressed by the formula:
V(u) = (1/√(Ω_k·Ω_j)) ∫ F_nk(θ) F*_nj(θ) T_B(θ) e^(−j2πuθ) dθ
where V(u) is the visibility function, T_B(θ) is the brightness temperature distribution corresponding to the one-dimensional brightness temperature distribution map, F_nk(θ) and F_nj(θ) are the antenna patterns of elements k and j respectively, Ω_k and Ω_j are the antenna solid angles of elements k and j respectively, θ is the azimuth angle, u is the spatial frequency, u = (x_j − x_k)/λ_0, where x_k and x_j are the geometric position coordinates of elements k and j, and λ_0 is the wavelength.
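Discretizing the integral above as a Riemann sum, with idealized unit antenna patterns and unit solid angles, gives a simple numerical check; the Gaussian scene and grid are illustrative only:

```python
import numpy as np

def visibility(tb, theta, u, f_k, f_j, omega_k=1.0, omega_j=1.0):
    """Riemann-sum approximation of
    V(u) = (1/sqrt(Omega_k*Omega_j)) * integral F_nk * conj(F_nj) * T_B * exp(-j2*pi*u*theta) dtheta."""
    dtheta = theta[1] - theta[0]
    integrand = f_k * np.conj(f_j) * tb * np.exp(-2j * np.pi * u * theta)
    return integrand.sum() * dtheta / np.sqrt(omega_k * omega_j)

theta = np.linspace(-1.0, 1.0, 512)
tb = 300.0 * np.exp(-(theta / 0.2) ** 2)   # toy Gaussian brightness temperature scene
f = np.ones_like(theta)                     # idealized unit antenna patterns
v0 = visibility(tb, theta, u=0.0, f_k=f, f_j=f)
```

At u = 0 the visibility reduces to the integrated brightness temperature, so `v0` is real and positive, as expected.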
4. The one-dimensional synthetic aperture image reconstruction method based on the deep convolutional neural network as claimed in claim 1, wherein: the one-dimensional synthetic aperture depth convolution neural network comprises: the device comprises a feature enhancement module and an image reconstruction module;
the characteristic enhancement module is used for extracting effective characteristic information from the visibility function data and outputting it to the image reconstruction module, and consists of a characteristic extraction unit and a reverse mapping unit; the characteristic extraction unit comprises two convolutional layers and is used for extracting characteristic information from the visibility function data; the reverse mapping unit is composed of two transposed convolutional layers and up-samples the output of the convolutional layers; a batch normalization layer and a linear rectification function are added after each convolutional layer and each transposed convolutional layer.
The image reconstruction module consists of two fully-connected layers and six convolutional layers; it receives the characteristic information input by the characteristic enhancement module, reconstructs a brightness temperature image from that characteristic information, and outputs the one-dimensional brightness temperature image reconstructed by the network; the two fully-connected layers use a nonlinear activation function to map the characteristic information into the subsequent convolutional layers; and the six convolutional layers further extract high-order characteristics, perform dimension conversion and image reconstruction using the characteristic information, and output the one-dimensional brightness temperature image reconstructed by the network.
5. The one-dimensional synthetic aperture image reconstruction method based on the deep convolutional neural network as claimed in claim 1, wherein: the method for judging whether the image error has stabilized is specifically as follows: calculating the image error of the network-reconstructed brightness temperature image for training at each training iteration, obtaining a curve of the image error as a function of the number of iterations from the image error result of each iteration, and judging that the image error has stabilized when the change in image error between any two adjacent iterations over multiple consecutive iterations is within 0.1 K.
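The stopping rule in this claim can be expressed directly; `window` below is an assumed number of consecutive iterations, since the claim says "multiple" without fixing a value:

```python
def error_is_stable(errors, window=5, tol=0.1):
    """True when every adjacent-pair change over the last `window` iterations is within tol (K)."""
    if len(errors) < window + 1:
        return False
    recent = errors[-(window + 1):]
    return all(abs(a - b) <= tol for a, b in zip(recent, recent[1:]))

# Toy training history of image errors (K): large early steps, then a plateau
history = [5.0, 2.1, 1.0, 0.52, 0.50, 0.48, 0.46, 0.45, 0.44]
stable = error_is_stable(history)
```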
6. The one-dimensional synthetic aperture image reconstruction method based on the deep convolutional neural network as claimed in claim 1, characterized in that: the formula of the standard back-propagation stochastic gradient descent algorithm is as follows:
W_(p+1) = W_p − α · ∂e(W)/∂W_p
where W_p is the network parameter at the p-th iteration, p is the number of parameter updates, α is the learning rate, and ∂e(W)/∂W_p is the partial derivative of the image error with respect to the network parameters, i.e. the gradient of the image error.
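A toy instance of this update rule on the quadratic error e(W) = ‖W‖²/2, whose gradient is simply W, showing the iterates shrinking toward the minimum:

```python
import numpy as np

def sgd_step(w, grad, lr=0.1):
    # W_{p+1} = W_p - alpha * dE/dW_p
    return w - lr * grad

# For e(w) = ||w||^2 / 2 the gradient is w itself
w = np.array([1.0, -2.0])
for _ in range(100):
    w = sgd_step(w, grad=w)
```

Each step multiplies the parameter vector by (1 − α), so after 100 steps it is numerically close to the zero-error minimum.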
7. The one-dimensional synthetic aperture image reconstruction method based on the deep convolutional neural network as claimed in claim 1, wherein: the network-reconstructed brightness temperature image for verification is compared and analyzed with the traditional reconstructed brightness temperature image to evaluate the effect of network reconstruction, the specific method being: calculating the network reconstruction image error from the network-reconstructed brightness temperature image for verification and the one-dimensional brightness temperature distribution map verification sample; calculating the traditional reconstruction image error from the traditional reconstructed brightness temperature image and the one-dimensional brightness temperature distribution map verification sample; and judging whether the network reconstruction image error is smaller than the traditional reconstruction image error: if so, the network is used to reconstruct the brightness temperature image; otherwise, the network parameters are readjusted and the process returns to step (5).
8. The one-dimensional synthetic aperture image reconstruction method based on the deep convolutional neural network as claimed in claim 1, wherein: the image error is calculated by the following formula:
e(W) = (1/(m·n)) Σ_(k=1)^m ‖invT_k − T_k‖²
where T_k is the brightness temperature of the k-th one-dimensional brightness temperature distribution map sample and invT_k is the brightness temperature of the network-reconstructed image corresponding to the k-th sample; n is the number of brightness temperature pixel points of the one-dimensional brightness temperature distribution map and of the reconstructed brightness temperature image, m is the number of samples, k is the sample index, and W = {u_1, …, u_i} is the set of all network parameters.
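The per-pixel mean-squared error above can be computed as follows; the 2×2 arrays are toy stand-ins for m samples of n pixels each:

```python
import numpy as np

def image_error(tb_true, tb_recon):
    """e(W) = (1/(m*n)) * sum over k of ||invT_k - T_k||^2, for m samples of n pixels."""
    tb_true = np.asarray(tb_true, float)
    tb_recon = np.asarray(tb_recon, float)
    m, n = tb_true.shape
    return np.sum((tb_recon - tb_true) ** 2) / (m * n)

truth = np.array([[300.0, 0.0], [150.0, 150.0]])   # two 2-pixel "images" (K)
recon = np.array([[299.0, 1.0], [150.0, 150.0]])
err = image_error(truth, recon)
```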
9. A one-dimensional synthetic aperture depth convolutional neural network, comprising: the device comprises a feature enhancement module and an image reconstruction module;
the characteristic enhancement module extracts effective characteristic information from the training sample, outputs the effective characteristic information to the image reconstruction module, and consists of a characteristic extraction unit and a reverse mapping unit;
the characteristic extraction unit comprises two convolutional layers and is used for extracting characteristic information from the visibility function data; the reverse mapping unit is composed of two transposed convolutional layers and up-samples the output of the convolutional layers to obtain an output with the same size as the input data of the characteristic enhancement module; a batch normalization layer and a linear rectification function are added after each convolutional layer and each transposed convolutional layer;
the image reconstruction module receives the characteristic information output by the characteristic enhancement module, reconstructs a brightness temperature image from that characteristic information, and outputs the one-dimensional brightness temperature image reconstructed by the network; it consists of two fully-connected layers and six convolutional layers;
the two fully-connected layers use a nonlinear activation function to map the characteristic information into the subsequent convolutional layers; and the six convolutional layers further extract high-order characteristics, realize the dimension conversion between the frequency domain and the spatial domain together with image reconstruction using the characteristic information, and output the one-dimensional brightness temperature image reconstructed by the network.
10. The one-dimensional synthetic aperture deep convolutional neural network of claim 9, wherein: the method for adding the batch normalization layer and the linear rectification function comprises the following specific steps:
the batch normalization layer performs batch normalization processing on the output data of the convolutional layers and the transposed convolutional layers; the formula is as follows:
BN(x) = α · (x − E(x)) / √(D(x)) + β
where x denotes the input data of the batch normalization layer, BN(x) denotes the output after batch normalization, E(x) and D(x) are respectively the mean and variance of the input data, and α and β denote a scale factor and a shift factor, which are parameters learned by the network itself during training.
The linear rectification function makes the data nonlinear and increases the nonlinear expressive power of the model; the formula is as follows:
ReLU(x)=max(0,x)
where x denotes the input data of the linear rectification function and ReLU(x) denotes the output after processing by the linear rectification function; the output of the linear rectification function is a number greater than or equal to 0.
11. The one-dimensional synthetic aperture deep convolutional neural network of claim 9, wherein: the six convolutional layers further extract high-order characteristics, realize dimension conversion and image reconstruction using the characteristic information, and output the one-dimensional brightness temperature image reconstructed by the network, the specific method being: through the six convolutional layers, the high-order characteristic information between the frequency-domain visibility function and the spatial-domain brightness temperature image, as well as the system characteristics of the synthetic aperture radiometer, are obtained, and the dimension conversion from the frequency domain to the spatial domain is learned, thereby realizing brightness temperature image reconstruction.
Publications (1)

Publication Number Publication Date
CN115375786A true CN115375786A (en) 2022-11-22



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination