CN114972041B - Polarization radar image super-resolution reconstruction method and device based on residual error network - Google Patents

Polarization radar image super-resolution reconstruction method and device based on residual error network

Info

Publication number
CN114972041B
Authority
CN
China
Prior art keywords
data
resolution
image
image data
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210900724.6A
Other languages
Chinese (zh)
Other versions
CN114972041A (en)
Inventor
陈思伟
李铭典
崔兴超
肖顺平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National University of Defense Technology
Original Assignee
National University of Defense Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National University of Defense Technology
Priority to CN202210900724.6A
Publication of CN114972041A
Application granted
Publication of CN114972041B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G06T3/4076Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution using the original low-resolution images to iteratively correct the high-resolution images
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S7/41Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4046Scaling of whole images or parts thereof, e.g. expanding or contracting using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The application relates to a polarization radar image super-resolution reconstruction method and device based on a residual error network, belonging to the technical field of radar imaging remote sensing. The method comprises the following steps: constructing a polarization radar image training data set; inputting the low-resolution image data in the training data set into a residual-error-network-based polarization radar image super-resolution reconstruction network model for training, to obtain a trained polarization radar image super-resolution reconstruction network model; and inputting the low-resolution polarization radar image to be reconstructed into the trained polarization radar image super-resolution reconstruction network model for super-resolution reconstruction, to obtain a high-resolution polarization radar image. With this method, the grouped residual convolution module and the residual attention backbone network module in the polarization radar image super-resolution network model exploit, respectively, the correlation between the real and imaginary parts of the image data and the correlation across the spatial domain and the multiple channels of the shallow and deep image features, so that the super-resolution reconstruction accuracy is effectively improved.

Description

Polarization radar image super-resolution reconstruction method and device based on residual error network
Technical Field
The application relates to the technical field of radar imaging remote sensing, in particular to a polarized radar image super-resolution reconstruction method and device based on a residual error network.
Background
The polarized radar can acquire multi-polarization scattering information of a target, which is beneficial to the interpretation of the target scattering mechanism and the inversion of characteristic parameters. Among imaging radars, inverse synthetic aperture radar (ISAR) is a typical example: by transmitting large-bandwidth signals it can observe and monitor space targets, and it plays an important role in maintaining space security.
A high-resolution polarization radar image contains more target detail information, which facilitates target detection and classification. However, generating high-resolution (HR) radar images requires a large bandwidth and a large coherent integration angle, both of which are limited by the radar system. In addition, under fixed radar hardware conditions, an increase in resolution reduces the imaging swath. Therefore, improving the resolution of polarization radar images without increasing cost, while maintaining the original imaging swath, is of great significance. Currently, the super-resolution field in computer vision mostly relies on supervised learning, which requires constructing low-resolution (LR) and high-resolution image data pairs so that a network model can learn the mapping from LR image data to HR image data. LR image data are usually generated by processing HR image data with a down-sampling function or a degradation network. However, none of these studies generate the corresponding LR and HR image data pairs from radar system parameters.
In addition, compared with optical images, polarization radar images have characteristics such as a high dynamic range, complex-valued data and multiple channels, so directly transferring a network model from computer vision to polarization radar image processing does not achieve good results. It is therefore necessary to construct a network model that simultaneously accounts for the high dynamic range, complex-valued data and multi-channel characteristics of polarization radar images to perform super-resolution reconstruction of polarization radar images.
Disclosure of Invention
Based on the above, it is necessary to provide a method and an apparatus for reconstructing super-resolution of polarization radar images based on a residual error network.
A polarization radar image super-resolution reconstruction method based on a residual error network comprises the following steps:
imaging processing is carried out on echo data of an observation target, and a polarization radar image training data set is obtained; the polarization radar image training data set comprises a plurality of groups of data pairs consisting of high-resolution image data and low-resolution image data obtained by imaging, and the high-resolution image data and the low-resolution image data are complex data;
inputting low-resolution image data into a pre-constructed polarized radar image super-resolution reconstruction network model based on a residual error network, and outputting high-resolution reconstructed image data; the polarization radar image super-resolution reconstruction network model comprises a grouping residual error convolution module, a residual error attention backbone network module, an up-sampling module and a post-processing module; performing data recombination on a real part and an imaginary part in the low-resolution image data according to a grouping residual convolution module to obtain total real data, and performing feature extraction on the total real data to obtain image shallow layer features; according to the residual attention backbone network module, deep feature extraction, space attention weighting and channel attention weighting are carried out on the image shallow feature to obtain weighted features; according to the up-sampling module, up-sampling is carried out on the weighted features to obtain up-sampling features; processing the up-sampling characteristics according to a post-processing module to obtain high-resolution reconstructed image data;
inputting the high-resolution reconstructed image data and the high-resolution image data into a pre-constructed loss function, and training the polarization radar image super-resolution reconstruction network model according to the loss function to obtain a trained polarization radar image super-resolution reconstruction network model;
and inputting the low-resolution polarization radar image to be reconstructed into the trained polarization radar image super-resolution reconstruction network model for super-resolution reconstruction to obtain a high-resolution polarization radar image.
In one embodiment, the method further comprises:
establishing a polarized radar observation model; wherein the polarized radar observation model comprises two or more basic scattering structures;
and performing data simulation on the observation target according to a basic scattering structure observation target in the polarization radar observation model and electromagnetic simulation software to obtain echo data of the observation target with different bandwidths at different observation angles.
In one embodiment, the imaging processing of the echo data of the observation target to obtain a polarimetric radar image training data set includes:
and imaging the echo data according to the central imaging angle, the imaging aperture and the bandwidth of the echo data to obtain a plurality of groups of data pairs including high-resolution image data and low-resolution image data, and forming a polarimetric radar image training data set by the plurality of groups of data pairs.
In one embodiment, the imaging processing of the echo data according to the central imaging angle, the imaging aperture and the bandwidth of the echo data to obtain a plurality of sets of data pairs including high resolution image data and low resolution image data includes:
calculating to obtain an imaging aperture required by echo data according to the azimuth resolution, and determining a coherent accumulation angle required by the echo data according to the imaging aperture and the central imaging angle;
calculating a bandwidth value required by the echo data according to the range resolution;
imaging the echo data according to the angle interval of the coherent accumulation angle and the bandwidth value to obtain a plurality of groups of data pairs including high-resolution image data and low-resolution image data; the angle interval comprises an angle interval corresponding to the high-resolution image data and an angle interval corresponding to the low-resolution image data, and the central imaging angles of the same group of data pairs formed by the high-resolution image data and the low-resolution image data are the same.
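The relations above can be illustrated with the minimal sketch below, assuming the standard ISAR relations azimuth resolution = wavelength / (2 × aperture) and range resolution = c / (2 × bandwidth); these relations, the function name and the example values are illustrative assumptions rather than statements of the patent.

```python
import math

C0 = 3e8  # speed of light, m/s

def imaging_parameters(wavelength, rho_az, rho_rg, theta_center):
    """Return (coherent-accumulation-angle interval, bandwidth) for the requested
    azimuth resolution rho_az and range resolution rho_rg, assuming the standard
    ISAR relations rho_az = wavelength / (2 * aperture) and rho_rg = C0 / (2 * B)."""
    aperture = wavelength / (2.0 * rho_az)            # imaging aperture (rad)
    interval = (theta_center - aperture / 2.0,        # coherent accumulation angle
                theta_center + aperture / 2.0)        # interval around the centre (rad)
    bandwidth = C0 / (2.0 * rho_rg)                   # required bandwidth (Hz)
    return interval, bandwidth

# Example: 3 cm wavelength, 0.075 m resolution in both directions.
interval, bw = imaging_parameters(0.03, 0.075, 0.075, math.radians(0.0))
print(interval, bw)  # 0.2 rad aperture around the centre, 2 GHz bandwidth
```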
In one embodiment, the data reconstruction of the real part and the imaginary part in the low-resolution image data according to the grouping residual convolution module to obtain total real data, and the feature extraction of the total real data to obtain the image shallow feature includes:
splitting a real part and an imaginary part in the low-resolution image data and performing modulus value processing to obtain total real data, and performing grouping processing to the total real data to obtain multiple groups of real data;
carrying out normalization processing on the multiple groups of real data according to the maximum value and the minimum value in the multiple groups of real data to obtain multiple groups of normalized real data;
inputting a plurality of groups of normalized real number data into a residual error attention unit for feature extraction to obtain a plurality of groups of shallow layer features, wherein the residual error attention unit comprises two convolution layers, an activation layer and an attention module;
and splicing the multiple groups of shallow features in the channel dimension to obtain the image shallow features.
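A minimal sketch of this grouped shallow-feature extraction is given below. It assumes the six complex channels are the coherency-matrix elements described later, uses a simplified stand-in for the residual attention (RESA) unit, and assumes one plausible channel grouping (diagonal terms, real parts, imaginary parts, modulus values); the names SimpleRESA and grouped_shallow_features and the grouping details are illustrative, not the patent's exact design.

```python
import torch
import torch.nn as nn

class SimpleRESA(nn.Module):
    """Stand-in residual attention unit: two convolutional layers, a PReLU
    activation and a squeeze-style channel-attention gate (a simplification of
    the RESA unit described in the patent, which also uses spatial attention)."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.PReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )
        self.att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        y = self.body(x)
        return x + y * self.att(y)      # residual connection + attention weighting


def grouped_shallow_features(x_lr, resa_units, adjust=255.0):
    """x_lr: complex tensor of shape (N, 6, h, w) holding T11, T22, T33, T12, T13,
    T23 (channel order assumed).  Split into four 3-channel real groups (diagonal
    terms, real parts, imaginary parts, modulus values of the off-diagonal terms),
    min-max normalise with the adjustment factor, run each group through its own
    RESA unit and concatenate the results along the channel dimension."""
    diag, off = x_lr[:, :3].real, x_lr[:, 3:]
    groups = [diag, off.real, off.imag, off.abs()]
    total = torch.cat(groups, dim=1)                  # 12-channel total real data
    x_min, x_max = total.amin(), total.amax()
    feats = [resa(adjust * (g - x_min) / (x_max - x_min + 1e-12))
             for g, resa in zip(groups, resa_units)]
    return torch.cat(feats, dim=1)                    # image shallow feature
```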
In one embodiment, the residual attention backbone network module comprises a plurality of residual attention units;
according to the residual attention backbone network module, deep feature extraction, space attention weighting and channel attention weighting are carried out on the image shallow feature to obtain weighted features, wherein the weighted features comprise:
and sequentially carrying out deep feature extraction, spatial attention weighting and channel attention weighting on the image shallow feature according to the residual attention units to obtain a weighted image deep feature, and adding the weighted image deep feature and the image shallow feature to obtain a weighted feature.
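A corresponding sketch of the residual attention backbone network module is shown below; it reuses the SimpleRESA stand-in from the previous sketch, the number of units is an arbitrary choice, and the real module applies explicit spatial and channel attention inside each unit.

```python
import torch.nn as nn

class ResidualAttentionBackbone(nn.Module):
    """Serially connected residual attention units plus a long skip connection:
    the weighted feature is backbone(F0) + F0."""
    def __init__(self, channels, num_units=8):
        super().__init__()
        self.units = nn.Sequential(*[SimpleRESA(channels) for _ in range(num_units)])

    def forward(self, f0):
        return self.units(f0) + f0      # weighted deep feature + shallow feature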
In one embodiment, the upsampling module comprises two convolution layers, a sub-pixel rearrangement layer;
according to the up-sampling module, up-sampling is carried out on the weighted features to obtain up-sampling features, and the up-sampling features comprise:
and sequentially carrying out channel expansion, channel reduction and characteristic dimension reduction on the weighted characteristics according to the first convolutional layer, the sub-pixel rearrangement layer and the second convolutional layer to obtain the up-sampling characteristics.
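The up-sampling module can be sketched with a standard sub-pixel rearrangement (PixelShuffle), as below; the layer widths and the default scale are illustrative, and the output channel count of 9 follows the embodiment described later in the detailed description.

```python
import torch.nn as nn

class UpsamplingModule(nn.Module):
    """First convolution expands the channels, the sub-pixel rearrangement
    (PixelShuffle) trades s*s channels for an s-times larger image, and the
    second convolution reduces the feature dimension to `out_channels`."""
    def __init__(self, in_channels, out_channels=9, scale=2):
        super().__init__()
        self.expand = nn.Conv2d(in_channels, in_channels * scale * scale,
                                kernel_size=3, padding=1)
        self.shuffle = nn.PixelShuffle(scale)   # (N, C*s*s, h, w) -> (N, C, s*h, s*w)
        self.reduce = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1)

    def forward(self, fw):
        return self.reduce(self.shuffle(self.expand(fw)))
```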
In one embodiment, the upsampling feature is processed according to a post-processing module to obtain high-resolution reconstructed image data:
the post-processing module carries out inverse normalization processing on the up-sampling characteristics according to the maximum value and the minimum value in the multiple groups of real number data in the grouped residual convolution module to obtain inverse normalization data;
and performing data recombination on the reverse normalized data to obtain high-resolution reconstructed image data.
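A sketch of the post-processing step is given below, under the assumption that the nine real channels carry the diagonal elements followed by the real and imaginary parts of the off-diagonal elements; the exact channel ordering is not fixed here and is assumed for illustration only.

```python
import torch

def postprocess(up_feat, x_min, x_max, adjust=255.0):
    """Undo the min-max normalisation applied in the grouped residual convolution
    module and reassemble the 9 real channels into 6 complex channels (assumed
    layout: 3 diagonal terms, then the real parts, then the imaginary parts of
    T12, T13, T23)."""
    x = up_feat / adjust * (x_max - x_min) + x_min          # de-normalisation
    diag = x[:, 0:3]                                        # T11, T22, T33
    off = torch.complex(x[:, 3:6], x[:, 6:9])               # T12, T13, T23
    return torch.cat([diag.to(off.dtype), off], dim=1)      # (N, 6, s*h, s*w)
```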
In one embodiment, inputting the high-resolution reconstructed image data and the high-resolution image data into a loss function which is constructed in advance, and training the polarized radar image super-resolution reconstruction network model according to the loss function to obtain a trained polarized radar image super-resolution reconstruction network model, the method comprises the following steps:
inputting high-resolution reconstructed image data and high-resolution image data into a loss function which is constructed in advance, and calculating a single-channel loss function in a polarized radar image training data set according to the loss function;
carrying out weighted summation on the single-channel loss function to obtain a multi-channel weighted loss function;
and training the polarization radar image super-resolution reconstruction network model according to the multi-channel weighting loss function to obtain the trained polarization radar image super-resolution reconstruction network model.
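A sketch of the multi-channel weighted loss follows; the per-channel L1 term and the reciprocal-of-maximum weighting are assumptions consistent with, but not fixed by, the description above.

```python
import torch

def multichannel_weighted_loss(y_hat, y_hr, eps=1e-12):
    """Multi-channel weighted loss: a per-channel L1 term (assumed choice) weighted
    by the reciprocal of the per-channel maximum of the reference data, which is
    one way of using the maximum value in each channel as the weighting, then
    summed over the channels."""
    losses = []
    for k in range(y_hr.shape[1]):
        l_k = (y_hat[:, k] - y_hr[:, k]).abs().mean()       # single-channel loss
        w_k = 1.0 / (y_hr[:, k].abs().amax() + eps)         # assumed channel weight
        losses.append(w_k * l_k)
    return torch.stack(losses).sum()
```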
A polarized radar image super-resolution reconstruction apparatus based on a residual error network, the apparatus comprising:
the training set construction module is used for carrying out imaging processing on echo data of an observation target to obtain a polarimetric radar image training data set; the polarization radar image training data set comprises a plurality of groups of data pairs consisting of high-resolution image data and low-resolution image data obtained by imaging, and the high-resolution image data and the low-resolution image data are complex data;
the model construction module is used for inputting low-resolution image data into a pre-constructed polarization radar image super-resolution reconstruction network model based on a residual error network and outputting high-resolution reconstruction image data; the polarization radar image super-resolution reconstruction network model comprises a grouping residual error convolution module, a residual error attention backbone network module, an up-sampling module and a post-processing module; performing data recombination on a real part and an imaginary part in the low-resolution image data according to a grouping residual convolution module to obtain total real data, and performing feature extraction on the total real data to obtain image shallow layer features; according to the residual attention backbone network module, deep feature extraction, spatial attention weighting and channel attention weighting are carried out on the image shallow feature to obtain weighted features; according to the up-sampling module, up-sampling is carried out on the weighted features to obtain up-sampling features; processing the up-sampling characteristics according to a post-processing module to obtain high-resolution reconstructed image data;
the model training module is used for inputting the high-resolution reconstructed image data and the high-resolution image data into a loss function which is constructed in advance, and training the polarized radar image super-resolution reconstruction network model according to the loss function to obtain a trained polarized radar image super-resolution reconstruction network model;
and the test module is used for inputting the low-resolution polarization radar image to be reconstructed into the trained polarization radar image super-resolution reconstruction network model for super-resolution reconstruction to obtain the high-resolution polarization radar image.
According to the polarization radar image super-resolution reconstruction method and device based on the residual error network, the polarization radar image training data set comprising high-resolution image data and low-resolution image data is obtained by imaging the echo data of the observation target; inputting a polarized radar image training data set into a pre-constructed polarized radar image super-resolution reconstruction network model based on a residual error network for training to obtain a trained polarized radar image super-resolution reconstruction network model, wherein a grouped residual error convolution module in the polarized radar image super-resolution reconstruction network model performs data reconstruction and feature extraction on low-resolution image data according to the correlation between the real part and the imaginary part of the low-resolution image data, so that the error of image shallow feature extraction is effectively reduced; according to the correlation between the spatial domain and multiple channels of the image shallow feature, effective reconstruction of spatial information and channel information is achieved by deep feature extraction, spatial attention weighting and channel attention weighting on the image shallow feature, so that the super-resolution reconstruction precision is effectively improved, and important technical support is provided for detection and identification of subsequent radar targets.
Drawings
FIG. 1 is a schematic flowchart of a super-resolution reconstruction method for a polarization radar image based on a residual error network in an embodiment;
FIG. 2 is a schematic structural diagram of a polarization radar image super-resolution reconstruction network model based on a residual error network in an embodiment;
FIG. 3 is a diagram illustrating a residual attention unit in one embodiment;
fig. 4 is a schematic structural diagram of a residual attention backbone network module in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In one embodiment, as shown in fig. 1, there is provided a polarization radar image super-resolution reconstruction method based on a residual error network, including the following steps:
s1, imaging processing is carried out on echo data of an observation target to obtain a polarimetric radar image training data set; the polarimetric radar image training data set comprises a plurality of groups of data pairs consisting of high-resolution image data and low-resolution image data obtained by imaging, and the high-resolution image data and the low-resolution image data are complex data.
It can be understood that echo data of different imaging apertures and different bandwidths of an observation target under different observation angles are subjected to imaging processing to obtain high-resolution and low-resolution image data pairs corresponding to polarization radar system parameters, and a plurality of groups of image data pairs are integrated into a training set to provide a super-resolution data set capable of being trained and tested for a polarization radar image super-resolution reconstruction network model. For the constructed polarized radar image training data set, the data set can be enhanced or amplified by using data enhancement means such as rotation, cropping and translation.
S2, inputting low-resolution image data into a pre-constructed polarization radar image super-resolution reconstruction network model based on a residual error network, and outputting high-resolution reconstructed image data; the polarization radar image super-resolution reconstruction network model comprises a grouped residual convolution module, a residual attention backbone network module, an up-sampling module and a post-processing module.
The method can be understood that the real part and the imaginary part of the low-resolution image data are fully utilized by carrying out data recombination on the real part and the imaginary part in the low-resolution image data according to a grouping residual convolution module in the polarized radar image super-resolution reconstruction network model, so that the image shallow feature in the low-resolution image data is effectively extracted; according to the residual attention backbone network module, deep feature extraction, spatial attention weighting and channel attention weighting are carried out on the image shallow feature, and the correlation between spatial domains and multiple channels in the image shallow feature and the deep feature is fully utilized, so that effective reconstruction of spatial information and channel information in the image shallow feature is guaranteed.
And S3, inputting the high-resolution reconstructed image data and the high-resolution image data into a loss function which is constructed in advance, and training the polarized radar image super-resolution reconstruction network model according to the loss function to obtain the trained polarized radar image super-resolution reconstruction network model.
It can be understood that according to the characteristic that high-resolution reconstructed image data output by the polarized radar image super-resolution reconstruction network model is multi-channel data, the pre-constructed loss function is a multi-channel weighted loss function.
And S4, inputting the low-resolution polarized radar image to be reconstructed into the trained polarized radar image super-resolution reconstruction network model for super-resolution reconstruction to obtain a high-resolution polarized radar image.
It should be noted that the polarization radar image super-resolution reconstruction network model can also be extended to other polarization radar image domains, such as polarimetric interferometric SAR images, dual-polarization SAR images and fully polarimetric SAR images.
According to the polarization radar image super-resolution reconstruction method based on the residual error network, the polarization radar image training data set comprising high-resolution image data and low-resolution image data is obtained by imaging the echo data of an observation target; inputting a polarized radar image training data set into a pre-constructed polarized radar image super-resolution reconstruction network model based on a residual error network for training to obtain a trained polarized radar image super-resolution reconstruction network model, wherein a grouped residual error convolution module in the polarized radar image super-resolution reconstruction network model performs data reconstruction and feature extraction on low-resolution image data according to the correlation between the real part and the imaginary part of the low-resolution image data, so that the error of image shallow feature extraction is effectively reduced; the residual attention backbone network module in the polarization radar image super-resolution reconstruction network model realizes effective reconstruction of spatial information and channel information by performing deep feature extraction, spatial attention weighting and channel attention weighting on shallow features of an image according to correlation between the spatial domain and multiple channels of the shallow features of the image, so that the super-resolution reconstruction precision is effectively improved.
In one embodiment, the method further comprises: establishing a polarization radar observation model, where the polarization radar observation model comprises two or more basic scattering structures, the types of the basic scattering structures can be the same or different, and the basic scattering structures are arranged in an asymmetric L shape; and performing data simulation on the observation target according to the basic scattering structures in the polarization radar observation model and electromagnetic simulation software, to obtain echo data of the observation target under different observation angles and with different bandwidths.
Specifically, taking the polarization ISAR system as an example of a polarization radar system, polarization ISAR observation models are established for the cube, cylinder, dihedral, flat plate, top hat, narrow dihedral, sphere and trihedral structures, where each observation model comprises three identical basic scattering structures consistent with the structure of the observation target.
The corresponding targets are observed according to the basic scattering structures in each observation model, and data simulation of the observation targets is performed with electromagnetic simulation software, obtaining the echo data of the eight observation targets (cube, cylinder, dihedral, flat plate, top hat, narrow dihedral, sphere and trihedral) at pitch angles of 45° and 60°, with azimuth angles from -90° to 90° and an azimuth angle step of 0.2°.
It can be understood that by arranging three identical basic scattering structures in the polarization radar observation model, the resolution degradation of the observation target in both the azimuth and range directions can be captured, and by arranging the three identical basic scattering structures in an asymmetric L shape, more training samples can be obtained through simulation.
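For reference, the observation geometry described above can be summarized as a plain configuration, as in the illustrative sketch below; the dictionary keys and structure are not taken from the patent, only the numerical values in the text.

```python
# Observation geometry used for the electromagnetic simulation, as described above.
SIMULATION_CONFIG = {
    "targets": ["cube", "cylinder", "dihedral", "flat plate", "top hat",
                "narrow dihedral", "sphere", "trihedral"],
    "scatterers_per_model": 3,          # three identical basic scattering structures
    "layout": "asymmetric L-shape",
    "pitch_angles_deg": [45.0, 60.0],
    "azimuth_range_deg": (-90.0, 90.0),
    "azimuth_step_deg": 0.2,
}
```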
In one embodiment, the echo data is subjected to imaging processing according to the central imaging angle, the imaging aperture and the bandwidth of the echo data to obtain a plurality of groups of data pairs including high-resolution image data and low-resolution image data, and the plurality of groups of data pairs form a polarized radar image training data set.
In one embodiment, the high-resolution imaging aperture $\Delta\theta_{HR}$ required by the echo data is calculated from the azimuth resolution; according to the central imaging angle $\theta_c$ and the high-resolution imaging aperture $\Delta\theta_{HR}$, the angle interval in which the coherent accumulation angle corresponding to the high-resolution image data lies is determined as $[\theta_c-\Delta\theta_{HR}/2,\ \theta_c+\Delta\theta_{HR}/2]$; the bandwidth $B_{HR}$ required by the echo data is calculated from the range resolution. The echo data with bandwidth $B_{HR}$ within the angle interval $[\theta_c-\Delta\theta_{HR}/2,\ \theta_c+\Delta\theta_{HR}/2]$ are imaged to obtain the high-resolution image data $Y_i^{HR}$ of size $C\times H\times W$, where $C$, $H$ and $W$ denote the number of channels, the height and the width, and $i$ denotes the $i$-th data pair in the polarization radar image training data set.
Likewise, the low-resolution imaging aperture $\Delta\theta_{LR}$ required by the echo data is calculated from the azimuth resolution; according to the central imaging angle $\theta_c$ and the low-resolution imaging aperture $\Delta\theta_{LR}$, the angle interval in which the coherent accumulation angle corresponding to the low-resolution image data lies is $[\theta_c-\Delta\theta_{LR}/2,\ \theta_c+\Delta\theta_{LR}/2]$; the bandwidth $B_{LR}$ required by the echo data is calculated from the range resolution. The echo data with bandwidth $B_{LR}$ within this angle interval are imaged to obtain the low-resolution image data $X_i^{LR}$ of size $c\times h\times w$, where $c$, $h$ and $w$ respectively denote the number of channels, the height and the width of the low-resolution image data, which are related to $C$, $H$ and $W$ through the super-resolution multiple.
The high-resolution image data and the low-resolution image data that share the same central imaging angle $\theta_c$ form one group of training data pairs; the echo data of different central imaging angles are processed in the same way to obtain $N$ groups of training data pairs, and the $N$ groups of training data pairs constitute the polarization radar image training data set $\{(X_i^{LR},\ Y_i^{HR})\}_{i=1}^{N}$.
Specifically, the echo data of the eight observation targets are imaged with a polar-format imaging algorithm: echo data with a 4 GHz bandwidth and a 10° imaging aperture are selected to obtain the HR image data, and echo data with a 2 GHz bandwidth and a 5° imaging aperture are selected to obtain the LR image data.
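A minimal sketch of this pairing step is given below; `image_echo` is a hypothetical imaging routine (for example a polar-format algorithm) standing in for the imaging processing described above, and is not defined in the patent.

```python
def build_training_pairs(echo, central_angles, image_echo):
    """For each central imaging angle, image the echo twice, once with the
    high-resolution bandwidth/aperture (4 GHz, 10 deg in the embodiment) and once
    with the low-resolution ones (2 GHz, 5 deg), and pair the results.
    `image_echo(echo, center_deg, aperture_deg, bandwidth_hz)` is a placeholder
    for a polar-format imaging routine."""
    pairs = []
    for theta_c in central_angles:
        hr = image_echo(echo, theta_c, aperture_deg=10.0, bandwidth_hz=4e9)
        lr = image_echo(echo, theta_c, aperture_deg=5.0, bandwidth_hz=2e9)
        pairs.append((lr, hr))        # (X_i^LR, Y_i^HR) with the same theta_c
    return pairs
```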
In one embodiment, the low-resolution image data $X_i^{LR}$ in the polarization radar image training data set are represented by the polarization coherency matrix $\mathbf{T}$. Under the reciprocity condition, the polarization coherency matrix $\mathbf{T}$ is constructed from the Pauli vector $\mathbf{k}$ and is expressed as $\mathbf{T}=\langle \mathbf{k}\mathbf{k}^{H}\rangle$. The polarization coherency matrix $\mathbf{T}$ is a $3\times 3$ matrix whose element $T_{pq}$ denotes the entry in the $p$-th row and $q$-th column, and the superscript $*$ denotes the conjugate operation. Since the lower-triangular elements of $\mathbf{T}$ are the conjugates of the upper-triangular elements, only the following six elements of the matrix need to be considered: $T_{11}$, $T_{22}$, $T_{33}$, $T_{12}$, $T_{13}$ and $T_{23}$, where $T_{11}$, $T_{22}$ and $T_{33}$ are non-negative real data and $T_{12}$, $T_{13}$ and $T_{23}$ are complex data.
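The sketch below shows how the six coherency-matrix channels could be formed from the scattering-matrix elements; it assumes the conventional Pauli vector $\mathbf{k}=\tfrac{1}{\sqrt{2}}[S_{HH}+S_{VV},\ S_{HH}-S_{VV},\ 2S_{HV}]^{T}$, which the patent text does not spell out.

```python
import numpy as np

def coherency_channels(s_hh, s_hv, s_vv):
    """Build the six coherency-matrix channels T11, T22, T33, T12, T13, T23 from
    the scattering-matrix elements, assuming the conventional Pauli vector
    k = (1/sqrt(2)) [S_HH + S_VV, S_HH - S_VV, 2*S_HV]^T and T = k k^H per pixel
    (before any multilook averaging)."""
    k = np.stack([s_hh + s_vv, s_hh - s_vv, 2.0 * s_hv]) / np.sqrt(2.0)
    t = np.einsum("i...,j...->ij...", k, np.conj(k))   # T = k k^H for every pixel
    return {"T11": t[0, 0].real, "T22": t[1, 1].real, "T33": t[2, 2].real,
            "T12": t[0, 1], "T13": t[0, 2], "T23": t[1, 2]}
```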
as shown in fig. 2, the low resolution image data of the number of channels 6
Figure 860530DEST_PATH_IMAGE047
Inputting a polarized radar image super-resolution reconstruction network model based on a residual error network, wherein,
Figure 297328DEST_PATH_IMAGE048
Figure 862301DEST_PATH_IMAGE049
the defined field representing the data is a complex field,
Figure 42747DEST_PATH_IMAGE050
respectively representing the number, height and width of channels of data, the data in each channel being
Figure 376776DEST_PATH_IMAGE051
Firstly, according to the grouping residual error convolution module pair in the network model
Figure 984475DEST_PATH_IMAGE052
The real part and the imaginary part are split and subjected to modulus value processing to obtain total real data with 12 channels
Figure 36745DEST_PATH_IMAGE053
Wherein, in the step (A),
Figure 755302DEST_PATH_IMAGE054
the domain of definition representing the data is a real number domain,
Figure 474996DEST_PATH_IMAGE055
comprises the following elements
Figure 253596DEST_PATH_IMAGE056
Then, the total real data is processed
Figure 793162DEST_PATH_IMAGE057
Grouping to obtain four groups of real data
Figure 580990DEST_PATH_IMAGE059
Are respectively as
Figure 155190DEST_PATH_IMAGE060
Figure 104692DEST_PATH_IMAGE061
Figure 865974DEST_PATH_IMAGE062
And
Figure 191914DEST_PATH_IMAGE063
. Wherein the content of the first and second substances,
Figure 355042DEST_PATH_IMAGE064
respectively representing the processes of taking imaginary part, taking real part and taking modulus value to the data, and superscripting
Figure 741024DEST_PATH_IMAGE065
Denotes the first
Figure 255182DEST_PATH_IMAGE065
Grouping real number data;
then according to the maximum value in the four groups of real data
Figure 384812DEST_PATH_IMAGE066
And minimum value
Figure 402446DEST_PATH_IMAGE067
Normalizing the four groups of real data to obtain four groups of normalized real data
Figure 224909DEST_PATH_IMAGE068
Is represented as
Figure 960784DEST_PATH_IMAGE069
Wherein the content of the first and second substances,
Figure 628525DEST_PATH_IMAGE070
is an adjustment factor;
four groups of normalized real data
Figure 766246DEST_PATH_IMAGE071
Inputting Residual Attention Unit (RESA) to perform feature extraction to obtain four groups of shallow features
Figure 759609DEST_PATH_IMAGE072
Is represented as
Figure 982780DEST_PATH_IMAGE073
Wherein the content of the first and second substances,
Figure 454213DEST_PATH_IMAGE074
for the functional representation of RESA, the residual attention unit is shown in FIG. 3, and includes two convolutional layers, an active layer, and an attention module;
finally, four groups of shallow layer characteristics
Figure 712019DEST_PATH_IMAGE075
Splicing in channel dimension to obtain shallow layer characteristics of image
Figure 610705DEST_PATH_IMAGE076
Specifically, the network model is trained with the polarization radar image data set at a pitch angle of 45°. In the network model, the RESA units in the grouped residual convolution module can be replaced by other pre-trained feature extraction modules to extract the shallow features, and the number of RESA units can be increased or decreased; the activation layer in the RESA unit adopts the Parametric Rectified Linear Unit (PReLU); the attention mechanism can be a non-local attention mechanism, or can be replaced by other attention models; the adjustment coefficient $\lambda$ used in the normalization and de-normalization processes is chosen as 255.
It should be noted that, for the input data form of the present invention, in addition to characterizing the data with the elements of the polarization coherency matrix, the polarization scattering matrix $\mathbf{S}$ and other statistics derived from the elements of the polarization scattering matrix $\mathbf{S}$ can also be used to characterize the data.
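As a quick check of the channel bookkeeping described above (6 complex channels split into 12 real channels arranged in four groups), a brief usage sketch of the grouped_shallow_features helper from the earlier sketch is given below; the tensor sizes are illustrative.

```python
import torch
import torch.nn as nn

# Assumes SimpleRESA and grouped_shallow_features from the earlier sketch.
resas = nn.ModuleList([SimpleRESA(3) for _ in range(4)])
x_lr = torch.randn(1, 6, 64, 64, dtype=torch.complex64)  # T11..T33, T12, T13, T23
f0 = grouped_shallow_features(x_lr, resas)
print(f0.shape)  # torch.Size([1, 12, 64, 64]) -> image shallow feature F0
```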
In one embodiment, as shown in fig. 4, the image shallow feature $F_0$ is input into the residual attention backbone network module, which is formed by connecting $M$ residual attention units in series. Deep feature extraction, spatial attention weighting and channel attention weighting are performed on the image shallow feature sequentially by the residual attention units, yielding the weighted image deep feature, which is added to the input image shallow feature $F_0$ to obtain the weighted feature $F_W$, expressed as $F_W=f_{\mathrm{backbone}}(F_0)+F_0$, where $f_{\mathrm{backbone}}(\cdot)$ denotes the functional representation of the serially connected residual attention units.
In one embodiment, the weighted feature $F_W$ is input into the up-sampling module, which comprises two convolutional layers and a sub-pixel rearrangement layer connected in series. The first convolutional layer expands the number of channels of the input $F_W$ by a factor determined by the super-resolution multiple $s$; the sub-pixel rearrangement layer then reduces the number of channels of its input to $1/s^{2}$ of the original and expands the $h\times w$ image in each channel to $sh\times sw$, realizing the up-sampling of the data; finally, the feature dimension is reduced to 9 by the second convolutional layer, yielding the output up-sampling feature $F_{up}\in\mathbb{R}^{9\times sh\times sw}$.
in one embodiment, the upsampling feature
Figure 265436DEST_PATH_IMAGE094
Inputting the data into a post-processing module, wherein the post-processing module is used for processing the data according to the maximum value in the multiple groups of real number data in the grouping residual convolution module
Figure 284208DEST_PATH_IMAGE095
And minimum value
Figure 165576DEST_PATH_IMAGE096
To the up-sampling feature
Figure 662417DEST_PATH_IMAGE097
Performing inverse normalization processing to obtain inverse normalization data
Figure 578420DEST_PATH_IMAGE098
Is shown as
Figure 502514DEST_PATH_IMAGE099
Then, the data is denormalized
Figure 136757DEST_PATH_IMAGE100
The data is reassembled, specifically,
Figure 171710DEST_PATH_IMAGE101
wherein each channel has the elements of
Figure 942219DEST_PATH_IMAGE102
Will be
Figure 37214DEST_PATH_IMAGE103
Reconstruction into high resolution reconstructed image data
Figure 158754DEST_PATH_IMAGE104
Is shown as
Figure 720099DEST_PATH_IMAGE105
Wherein the content of the first and second substances,
Figure 610695DEST_PATH_IMAGE106
in one embodiment, the high resolution reconstructed map is generatedImage data
Figure 876591DEST_PATH_IMAGE107
And high resolution image data
Figure 219848DEST_PATH_IMAGE108
Inputting a pre-constructed loss function
Figure 862182DEST_PATH_IMAGE109
Loss function
Figure 607284DEST_PATH_IMAGE110
Is defined as
Figure 44081DEST_PATH_IMAGE111
Wherein, the first and the second end of the pipe are connected with each other,
Figure 874634DEST_PATH_IMAGE112
respectively represent
Figure 789500DEST_PATH_IMAGE113
To middle
Figure 654688DEST_PATH_IMAGE114
Data of each channel;
according to a loss function
Figure 527966DEST_PATH_IMAGE115
Computing a single-channel loss function in a polarimetric radar image training dataset
Figure 580236DEST_PATH_IMAGE116
For single channel loss function
Figure 564373DEST_PATH_IMAGE117
Carrying out weighted summation to obtain a multi-channel weighted loss function
Figure 18488DEST_PATH_IMAGE118
Is represented as
Figure 62667DEST_PATH_IMAGE119
Wherein the content of the first and second substances,
Figure 602233DEST_PATH_IMAGE120
is shown as
Figure 390060DEST_PATH_IMAGE121
A training pair
Figure 698682DEST_PATH_IMAGE122
The maximum value in the channel is that of the channel,
Figure 913762DEST_PATH_IMAGE123
is the total number of channels;
according to a multi-channel weighted loss function
Figure 675045DEST_PATH_IMAGE124
And training the polarization radar image super-resolution reconstruction network model to obtain the trained polarization radar image super-resolution reconstruction network model.
Specifically, the network model optimizer selects an Adaptive Moment Estimation (ADAM) optimizer, wherein the optimizer parameters are set to be
Figure 266563DEST_PATH_IMAGE125
Figure 695271DEST_PATH_IMAGE126
And
Figure 815673DEST_PATH_IMAGE127
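A minimal training-loop sketch is shown below, reusing the multichannel_weighted_loss sketch from earlier; the model composition, the data iterator and the hyperparameters are assumptions for illustration, and the patent's specific ADAM parameter values are not reproduced.

```python
import torch

def train(model, pairs, epochs=100, lr=1e-4):
    """Minimal training loop with the ADAM optimizer.  `model` is assumed to chain
    the grouped residual convolution module, the residual attention backbone, the
    up-sampling module and the post-processing step; `pairs` yields (x_lr, y_hr)
    tensors as built earlier."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x_lr, y_hr in pairs:
            y_hat = model(x_lr)
            loss = multichannel_weighted_loss(y_hat, y_hr)  # from the earlier sketch
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```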
in order To further verify the beneficial effect of the polarization radar image super-resolution reconstruction method based on the residual error network provided by the invention, in a specific embodiment, for an observation target with a three-face angle structure, the method and a Bicubic (Bicubic) method according To the invention respectively perform super-resolution reconstruction processing on a low-resolution polarization radar image with a pitch angle of 60 degrees, a Peak Signal-To-Noise Ratio (PSNR) index is adopted To quantitatively evaluate a total backscattering power characteristic SPAN of a reconstruction result, and average PSNR indexes of eight kinds of observation targets obtained by different methods are shown in table 1.
TABLE 1 average PSNR indicators obtained by different methods
As can be seen from Table 1, compared with the Bicubic method, the average PSNR index obtained by the proposed method is improved by 4.12 dB; that is, the reconstruction result obtained by the proposed method has a higher peak signal-to-noise ratio and higher reconstruction accuracy, and can provide important technical support for subsequent radar target detection and recognition.
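For reference, a simple PSNR computation on a real-valued image such as the SPAN feature can be sketched as follows; the choice of the reference maximum as the peak value is a common convention and not necessarily the exact definition used in the evaluation above.

```python
import numpy as np

def psnr(reference, reconstruction):
    """Peak signal-to-noise ratio in dB between two real-valued images (e.g. the
    total backscattering power SPAN), taking the maximum of the reference as the
    peak value."""
    mse = np.mean((reference - reconstruction) ** 2)
    peak = np.max(np.abs(reference))
    return 10.0 * np.log10(peak ** 2 / mse)
```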
It should be understood that, although the steps in the flowchart of fig. 1 are shown in order as indicated by the arrows, the steps are not necessarily performed in order as indicated by the arrows. The steps are not limited to being performed in the exact order illustrated and, unless explicitly stated herein, may be performed in other orders. Moreover, at least some of the steps in fig. 1 may include multiple sub-steps or multiple stages that are not necessarily performed at the same time, but may be performed at different times, and the order of performing the sub-steps or stages is not necessarily sequential, but may be performed alternately or alternately with other steps or at least some of the sub-steps or stages of other steps.
In one embodiment, a polarization radar image super-resolution reconstruction apparatus based on a residual error network is provided, which includes: a training set construction module, a model construction module, a model training module and a test module, wherein:
the training set construction module is used for carrying out imaging processing on echo data of an observation target to obtain a polarimetric radar image training data set; the polarimetric radar image training data set comprises a plurality of groups of data pairs consisting of high-resolution image data and low-resolution image data obtained by imaging, and the high-resolution image data and the low-resolution image data are complex data.
It can be understood that echo data of different imaging apertures and different bandwidths of an observation target under different observation angles are subjected to imaging processing to obtain high-resolution and low-resolution image data pairs corresponding to polarization radar system parameters, and a plurality of groups of image data pairs are integrated into a training set to provide a super-resolution data set capable of being trained and tested for a polarization radar image super-resolution reconstruction network model. For the constructed polarized radar image training data set, the data set can be enhanced or amplified by using data enhancement means such as rotation, cropping and translation.
The model construction module is used for inputting low-resolution image data into a pre-constructed polarization radar image super-resolution reconstruction network model based on a residual error network and outputting high-resolution reconstruction image data; the polarization radar image super-resolution reconstruction network model comprises a grouping residual error convolution module, a residual error attention backbone network module, an up-sampling module and a post-processing module; according to the grouping residual convolution module, carrying out data recombination on a real part and an imaginary part in the low-resolution image data to obtain total real number data, and carrying out feature extraction on the total real number data to obtain image shallow layer features; according to the residual attention backbone network module, deep feature extraction, space attention weighting and channel attention weighting are carried out on the image shallow feature to obtain weighted features; according to the up-sampling module, up-sampling is carried out on the weighted features to obtain up-sampling features; and processing the up-sampling characteristics according to the post-processing module to obtain high-resolution reconstructed image data.
The method can be understood that the real part and the imaginary part of the low-resolution image data are fully utilized by carrying out data recombination on the real part and the imaginary part in the low-resolution image data according to a grouping residual convolution module in the polarized radar image super-resolution reconstruction network model, so that the image shallow feature in the low-resolution image data is effectively extracted; according to the residual attention backbone network module, deep feature extraction, spatial attention weighting and channel attention weighting are carried out on the image shallow feature, and the correlation between spatial domains and multiple channels in the image shallow feature and the deep feature is fully utilized, so that effective reconstruction of spatial information and channel information in the image shallow feature is guaranteed.
And the model training module is used for inputting the high-resolution reconstructed image data and the high-resolution image data into a loss function which is constructed in advance, and training the polarized radar image super-resolution reconstruction network model according to the loss function to obtain the trained polarized radar image super-resolution reconstruction network model.
It can be understood that according to the characteristic that high-resolution reconstructed image data output by the polarized radar image super-resolution reconstruction network model is multi-channel data, the pre-constructed loss function is a multi-channel weighted loss function.
And the test module is used for inputting the low-resolution polarized radar image to be reconstructed into the trained polarized radar image super-resolution reconstruction network model for super-resolution reconstruction to obtain a high-resolution polarized radar image.
The polarization radar image super-resolution reconstruction network model provided by the invention can also be extended to other polarization radar image domains, such as polarimetric interferometric SAR images, dual-polarization SAR images and fully polarimetric SAR images.
The specific limitations of the polarization radar image super-resolution reconstruction device based on the residual error network can refer to the limitations of the polarization radar image super-resolution reconstruction method based on the residual error network, and are not described herein again. All or part of the modules in the polarization radar image super-resolution reconstruction device based on the residual error network can be realized by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is specific and detailed, but not to be understood as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent application shall be subject to the appended claims.

Claims (8)

1. A polarization radar image super-resolution reconstruction method based on a residual error network is characterized by comprising the following steps:
imaging processing is carried out on echo data of an observation target, and a polarization radar image training data set is obtained; the polarimetric radar image training data set comprises a plurality of groups of data pairs consisting of high-resolution image data and low-resolution image data obtained by imaging, wherein the high-resolution image data and the low-resolution image data are complex data;
inputting the low-resolution image data into a pre-constructed polarization radar image super-resolution reconstruction network model based on a residual error network, and outputting high-resolution reconstruction image data; the polarization radar image super-resolution reconstruction network model comprises a grouping residual error convolution module, a residual error attention backbone network module, an up-sampling module and a post-processing module; performing data recombination on a real part and an imaginary part in the low-resolution image data according to the grouped residual convolution module to obtain total real data, and performing feature extraction on the total real data to obtain image shallow layer features; carrying out deep feature extraction, spatial attention weighting and channel attention weighting on the image shallow feature according to the residual attention backbone network module to obtain weighted features; according to the up-sampling module, up-sampling is carried out on the weighted features to obtain up-sampling features; processing the up-sampling characteristics according to the post-processing module to obtain high-resolution reconstructed image data;
inputting the high-resolution reconstructed image data and the high-resolution image data into a pre-constructed loss function, and training the polarized radar image super-resolution reconstruction network model according to the loss function to obtain a trained polarized radar image super-resolution reconstruction network model;
inputting a low-resolution polarization radar image to be reconstructed into the trained polarization radar image super-resolution reconstruction network model for super-resolution reconstruction to obtain a high-resolution polarization radar image;
performing data recombination on a real part and an imaginary part in the low-resolution image data according to the grouped residual convolution module to obtain total real data, and performing feature extraction on the total real data to obtain image shallow layer features, wherein the method comprises the following steps:
splitting a real part and an imaginary part in the low-resolution image data and performing modulus value processing to obtain total real data, and performing grouping processing to the total real data to obtain multiple groups of real data;
normalizing the multiple groups of real number data according to the maximum value and the minimum value in the multiple groups of real number data to obtain multiple groups of normalized real number data;
inputting the multiple groups of normalized real number data into a residual error attention unit for feature extraction to obtain multiple groups of shallow layer features, wherein the residual error attention unit comprises two convolution layers, an activation layer and an attention module;
splicing the multiple groups of shallow features in a channel dimension to obtain the image shallow features;
processing the up-sampling feature according to the post-processing module to obtain high-resolution reconstructed image data, wherein the processing comprises the following steps:
the post-processing module performs inverse normalization processing on the upsampling characteristics according to the maximum value and the minimum value in the multiple groups of real number data in the grouped residual convolution module to obtain inverse normalization data;
and performing data recombination on the reverse normalized data to obtain the high-resolution reconstructed image data.
2. The method of claim 1, further comprising:
establishing a polarized radar observation model; wherein the polarized radar observation model comprises two or more basic scattering structures;
and performing data simulation on the observation target according to the basic scattering structure observation target in the polarization radar observation model and electromagnetic simulation software to obtain echo data of the observation target under different observation angles and with different bandwidths.
3. The method of claim 1, wherein imaging the echo data of the observation target to obtain the polarization radar image training data set comprises:
imaging the echo data according to the central imaging angle, the imaging aperture and the bandwidth of the echo data to obtain multiple groups of data pairs comprising high-resolution image data and low-resolution image data, and forming the polarization radar image training data set from the multiple groups of data pairs.
4. The method of claim 3, wherein imaging the echo data according to the central imaging angle, the imaging aperture and the bandwidth of the echo data to obtain multiple groups of data pairs comprising high-resolution image data and low-resolution image data comprises:
calculating the imaging aperture required by the echo data according to the azimuth resolution, and determining the coherent accumulation angle required by the echo data according to the imaging aperture and the central imaging angle;
calculating the bandwidth value required by the echo data according to the range resolution;
and imaging the echo data according to the angle interval of the coherent accumulation angle and the bandwidth value to obtain multiple groups of data pairs comprising high-resolution image data and low-resolution image data, wherein the angle interval comprises an angle interval corresponding to the high-resolution image data and an angle interval corresponding to the low-resolution image data, and the high-resolution image data and the low-resolution image data in the same data pair share the same central imaging angle.
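The aperture and bandwidth calculations in claim 4 follow standard radar resolution relations. Below is a minimal numerical sketch assuming the usual approximations ρ_a ≈ λ/(2Δθ) for azimuth resolution and ρ_r = c/(2B) for range resolution; the function names and the example numbers are illustrative only.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def coherent_accumulation_angle(wavelength_m: float, azimuth_resolution_m: float) -> float:
    """Coherent accumulation angle (rad) for a given azimuth resolution,
    using rho_a ~= lambda / (2 * delta_theta)."""
    return wavelength_m / (2.0 * azimuth_resolution_m)

def required_bandwidth(range_resolution_m: float) -> float:
    """Signal bandwidth (Hz) for a given range resolution, using rho_r = c / (2 * B)."""
    return C / (2.0 * range_resolution_m)

# Example: X-band (lambda ~= 3 cm), 0.3 m x 0.3 m high-resolution cell
print(math.degrees(coherent_accumulation_angle(0.03, 0.3)))  # ~2.86 degrees
print(required_bandwidth(0.3) / 1e6)                         # ~499.65 MHz
```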
5. The method of claim 1, wherein the residual attention backbone network module comprises a plurality of residual attention units;
wherein performing, by the residual attention backbone network module, deep feature extraction, spatial attention weighting and channel attention weighting on the shallow image features to obtain the weighted features comprises:
sequentially performing deep feature extraction, spatial attention weighting and channel attention weighting on the shallow image features through the residual attention units to obtain weighted deep image features, and adding the weighted deep image features to the shallow image features to obtain the weighted features.
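A minimal PyTorch-style sketch of one residual attention unit as claim 5 describes it (convolution layers, an activation layer and an attention module applying channel and spatial weighting, wrapped by a residual connection). Channel counts, kernel sizes and the exact attention design are assumptions for illustration.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(self.pool(x))            # per-channel weighting

class SpatialAttention(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)           # channel-wise mean map
        mx, _ = x.max(dim=1, keepdim=True)          # channel-wise max map
        mask = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * mask                             # per-pixel weighting

class ResidualAttentionUnit(nn.Module):
    """Two conv layers + activation + attention module, with a residual shortcut."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()

    def forward(self, x):
        out = self.sa(self.ca(self.body(x)))        # deep features, then attention
        return x + out                              # residual connection
```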
6. The method of claim 1, wherein the up-sampling module comprises two convolution layers and a sub-pixel rearrangement layer;
wherein up-sampling, by the up-sampling module, the weighted features to obtain the up-sampled features comprises:
sequentially performing channel expansion, channel reduction and feature dimension reduction on the weighted features through the first convolution layer, the sub-pixel rearrangement layer and the second convolution layer, respectively, to obtain the up-sampled features.
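A minimal PyTorch-style sketch of such an up-sampling head: a convolution layer expands the channels, a sub-pixel rearrangement (pixel-shuffle) layer trades channels for spatial size, and a second convolution layer reduces the feature dimension. The channel counts and scale factor are assumptions.

```python
import torch.nn as nn

class UpsamplingModule(nn.Module):
    """Conv (channel expansion) -> PixelShuffle (sub-pixel rearrangement) -> Conv."""
    def __init__(self, channels: int = 64, out_channels: int = 6, scale: int = 2):
        super().__init__()
        self.expand = nn.Conv2d(channels, channels * scale ** 2, 3, padding=1)
        self.shuffle = nn.PixelShuffle(scale)   # channels / scale^2, spatial * scale
        self.reduce = nn.Conv2d(channels, out_channels, 3, padding=1)

    def forward(self, x):
        return self.reduce(self.shuffle(self.expand(x)))
```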
7. The method of claim 1, wherein inputting the high-resolution reconstructed image data and the high-resolution image data into the pre-constructed loss function, and training the polarization radar image super-resolution reconstruction network model according to the loss function to obtain the trained polarization radar image super-resolution reconstruction network model comprises:
inputting the high-resolution reconstructed image data and the high-resolution image data into the pre-constructed loss function, and calculating single-channel loss functions for the channels of the polarization radar image training data set according to the loss function;
performing weighted summation of the single-channel loss functions to obtain a multi-channel weighted loss function;
and training the polarization radar image super-resolution reconstruction network model according to the multi-channel weighted loss function to obtain the trained polarization radar image super-resolution reconstruction network model.
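A minimal sketch of a multi-channel weighted loss in this spirit: one loss per channel, then a weighted sum. The choice of an L1 per-channel loss and the equal default weights are assumptions for illustration.

```python
import torch

def multichannel_weighted_loss(pred: torch.Tensor,
                               target: torch.Tensor,
                               weights=None) -> torch.Tensor:
    """pred/target: (B, C, H, W). Compute a loss per channel, then a weighted sum."""
    num_channels = pred.shape[1]
    if weights is None:
        weights = torch.full((num_channels,), 1.0 / num_channels, device=pred.device)
    per_channel = torch.stack(
        [torch.mean(torch.abs(pred[:, c] - target[:, c])) for c in range(num_channels)]
    )
    return torch.sum(weights * per_channel)
```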
8. A polarization radar image super-resolution reconstruction device based on a residual network, characterized by comprising:
a training set construction module, configured to perform imaging processing on echo data of an observation target to obtain a polarization radar image training data set, wherein the polarization radar image training data set comprises multiple groups of data pairs consisting of high-resolution image data and low-resolution image data obtained by imaging, and the high-resolution image data and the low-resolution image data are complex data;
a model construction module, configured to input the low-resolution image data into a pre-constructed polarization radar image super-resolution reconstruction network model based on a residual network and output high-resolution reconstructed image data, wherein the polarization radar image super-resolution reconstruction network model comprises a grouped residual convolution module, a residual attention backbone network module, an up-sampling module and a post-processing module; the grouped residual convolution module performs data recombination on the real part and the imaginary part of the low-resolution image data to obtain total real data, and performs feature extraction on the total real data to obtain shallow image features; the residual attention backbone network module performs deep feature extraction, spatial attention weighting and channel attention weighting on the shallow image features to obtain weighted features; the up-sampling module up-samples the weighted features to obtain up-sampled features; and the post-processing module processes the up-sampled features to obtain the high-resolution reconstructed image data;
a model training module, configured to input the high-resolution reconstructed image data and the high-resolution image data into a pre-constructed loss function, and train the polarization radar image super-resolution reconstruction network model according to the loss function to obtain a trained polarization radar image super-resolution reconstruction network model;
a test module, configured to input a low-resolution polarization radar image to be reconstructed into the trained polarization radar image super-resolution reconstruction network model for super-resolution reconstruction to obtain a high-resolution polarization radar image;
wherein performing, by the grouped residual convolution module, data recombination on the real part and the imaginary part of the low-resolution image data to obtain total real data, and performing feature extraction on the total real data to obtain shallow image features comprises:
splitting the real part and the imaginary part of the low-resolution image data and performing modulus processing to obtain the total real data, and grouping the total real data to obtain multiple groups of real data;
normalizing the multiple groups of real data according to the maximum value and the minimum value in the multiple groups of real data to obtain multiple groups of normalized real data;
inputting the multiple groups of normalized real data into a residual attention unit for feature extraction to obtain multiple groups of shallow features, wherein the residual attention unit comprises two convolution layers, an activation layer and an attention module;
concatenating the multiple groups of shallow features along the channel dimension to obtain the shallow image features;
wherein processing, by the post-processing module, the up-sampled features to obtain the high-resolution reconstructed image data comprises:
performing, by the post-processing module, inverse normalization on the up-sampled features according to the maximum value and the minimum value in the multiple groups of real data in the grouped residual convolution module to obtain inverse-normalized data;
and performing data recombination on the inverse-normalized data to obtain the high-resolution reconstructed image data.
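As a companion to the sketch given after claim 1, here is a minimal illustration of the post-processing step: inverse min-max normalization using the statistics recorded by the grouped residual convolution module, followed by recombination of the real and imaginary groups into complex high-resolution image data. The group layout and all names are assumptions.

```python
import numpy as np

def postprocess(upsampled: np.ndarray, stats, channels: int) -> np.ndarray:
    """Inverse-normalize the network output and recombine it into complex
    high-resolution image data of shape (channels, H, W).

    `upsampled` is assumed to hold the real-part and imaginary-part groups
    stacked along the channel axis, in the same order and with the same
    min/max statistics recorded during the forward reorganization.
    """
    out = upsampled.copy()
    for i, (g_min, g_max) in enumerate(stats[:2]):           # real, imag groups
        sl = slice(i * channels, (i + 1) * channels)
        out[sl] = out[sl] * (g_max - g_min) + g_min          # undo min-max scaling
    return out[:channels] + 1j * out[channels:2 * channels]  # rebuild complex data
```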
CN202210900724.6A 2022-07-28 2022-07-28 Polarization radar image super-resolution reconstruction method and device based on residual error network Active CN114972041B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210900724.6A CN114972041B (en) 2022-07-28 2022-07-28 Polarization radar image super-resolution reconstruction method and device based on residual error network

Publications (2)

Publication Number Publication Date
CN114972041A CN114972041A (en) 2022-08-30
CN114972041B true CN114972041B (en) 2022-10-21

Family

ID=82969899

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210900724.6A Active CN114972041B (en) 2022-07-28 2022-07-28 Polarization radar image super-resolution reconstruction method and device based on residual error network

Country Status (1)

Country Link
CN (1) CN114972041B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116128727B (en) * 2023-02-02 2023-06-20 中国人民解放军国防科技大学 Super-resolution method, system, equipment and medium for polarized radar image

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8184043B2 (en) * 2010-03-12 2012-05-22 The Boeing Company Super-resolution imaging radar
US11899132B2 (en) * 2020-01-03 2024-02-13 Qualcomm Incorporated Super-resolution enhancement techniques for radar
WO2022057837A1 (en) * 2020-09-16 2022-03-24 广州虎牙科技有限公司 Image processing method and apparatus, portrait super-resolution reconstruction method and apparatus, and portrait super-resolution reconstruction model training method and apparatus, electronic device, and storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104122549A (en) * 2014-07-21 2014-10-29 电子科技大学 Deconvolution based radar angle super-resolution imaging method
CN112419155A (en) * 2020-11-26 2021-02-26 武汉大学 Super-resolution reconstruction method for fully-polarized synthetic aperture radar image
CN113096017A (en) * 2021-04-14 2021-07-09 南京林业大学 Image super-resolution reconstruction method based on depth coordinate attention network model
CN114429422A (en) * 2021-12-22 2022-05-03 山东师范大学 Image super-resolution reconstruction method and system based on residual channel attention network

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
A residual convolutional neural network for polarimetric SAR image super-resolution; H. Shen, L. Lin, J. Li, Q. Yuan, L. Zhao; ISPRS Journal of Photogrammetry and Remote Sensing; 2020-03-31; full text *
Research on image super-resolution reconstruction algorithms for wall-probing radar; Yang Yalin; China Master's Theses Full-text Database, Information Science and Technology; 2021-01-15; full text *
Research on super-resolution reconstruction algorithms based on attention-mechanism neural networks; Bai Furui; China Master's Theses Full-text Database, Information Science and Technology; 2021-05-15; full text *
Research on image super-resolution algorithms based on deep learning; Wang Yongjin; China Master's Theses Full-text Database, Information Science and Technology; 2022-05-15; full text *
Review of high-resolution imaging techniques for wideband inverse synthetic aperture radar; Tian Biao et al.; Journal of Radars; 2020-10-31; Vol. 9, No. 5; full text *
Polarimetric SAR ship detection combining polarimetric rotation domain features and superpixel technique; Cui Xingchao, Su Yi, Chen Siwei; Journal of Radars; 2021-02-28; Vol. 10, No. 1; full text *

Also Published As

Publication number Publication date
CN114972041A (en) 2022-08-30

Similar Documents

Publication Publication Date Title
CN111369440B (en) Model training and image super-resolution processing method, device, terminal and storage medium
CN112488924B (en) Image super-resolution model training method, image super-resolution model reconstruction method and image super-resolution model reconstruction device
CN111476717A (en) Face image super-resolution reconstruction method based on self-attention generation countermeasure network
CN107274462B (en) Classified multi-dictionary learning magnetic resonance image reconstruction method based on entropy and geometric direction
CN111951344B (en) Magnetic resonance image reconstruction method based on cascade parallel convolution network
CN111353947A (en) Magnetic resonance parallel imaging method and related equipment
Yeganeh et al. Objective quality assessment for image super-resolution: A natural scene statistics approach
CN111784581A (en) SAR image super-resolution reconstruction method based on self-normalization generation countermeasure network
CN114972041B (en) Polarization radar image super-resolution reconstruction method and device based on residual error network
CN111754598B (en) Local space neighborhood parallel magnetic resonance imaging reconstruction method based on transformation learning
WO2023124971A1 (en) Magnetic resonance imaging down-sampling and reconstruction method based on cross-domain network
CN111640067B (en) Single image super-resolution reconstruction method based on three-channel convolutional neural network
CN113538246A (en) Remote sensing image super-resolution reconstruction method based on unsupervised multi-stage fusion network
CN109991602A (en) ISAR image resolution enhancement method based on depth residual error network
CN114140442A (en) Deep learning sparse angle CT reconstruction method based on frequency domain and image domain degradation perception
CN111667407A (en) Image super-resolution method guided by depth information
Wang et al. Group shuffle and spectral-spatial fusion for hyperspectral image super-resolution
CN105931184B (en) SAR image super-resolution method based on combined optimization
Wang et al. Local conditional neural fields for versatile and generalizable large-scale reconstructions in computational imaging
CN114565511B (en) Lightweight image registration method, system and device based on global homography estimation
Chilukuri et al. Analysing Of Image Quality Computation Models Through Convolutional Neural Network
CN112634385B (en) Rapid magnetic resonance imaging method based on deep Laplace network
Zhang et al. A novel super-resolution method of PolSAR images based on target decomposition and polarimetric spatial correlation
CN114693547A (en) Radio frequency image enhancement method and radio frequency image identification method based on image super-resolution
Xiang et al. Pseudo light field image and 4D Wavelet-transform-based reduced-reference light field image quality assessment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant