CN114331922A - Multi-scale self-calibration method and system for restoring turbulence degraded image by aerodynamic optical effect - Google Patents


Info

Publication number
CN114331922A
Authority
CN
China
Prior art keywords
feature map
image
optical effect
calibration
scale
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210229047.XA
Other languages
Chinese (zh)
Other versions
CN114331922B (en)
Inventor
洪汉玉
罗心怡
马雷
张天序
张耀宗
熊伦
桑农
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Institute of Technology
Original Assignee
Wuhan Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Institute of Technology
Priority to CN202210229047.XA
Publication of CN114331922A
Application granted
Publication of CN114331922B
Legal status: Active
Anticipated expiration

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses a multi-scale self-calibration method for restoring aero-optical-effect turbulence-degraded images, which comprises the following steps: S1, extracting a feature map of the original aero-optical-effect turbulence-degraded image; S2, calibrating the feature map through a pre-constructed self-calibration network to obtain a local fusion feature map calibrated for the locally blurred regions of the turbulence-degraded image; S3, performing multi-scale convolution restoration on the feature map of the original aero-optical-effect turbulence-degraded image to obtain a global restoration feature map for the global region; and S4, merging the local fusion feature map and the global restoration feature map, and restoring the image from the merged feature map by convolution. The method exploits the latent high-resolution and low-resolution spatial information of the image while also taking its multi-scale information into account, so that the aero-optical-effect turbulence-degraded image can be restored accurately.

Description

Multi-scale self-calibration method and system for restoring turbulence degraded image by aerodynamic optical effect
Technical Field
The invention relates to the field of image processing, in particular to a method and a system for restoring aero-optical-effect turbulence-degraded images based on a multi-scale self-calibration network.
Background
When an aircraft reaches a certain speed, the compression of the surrounding air changes the density of the flow field, which adversely affects the imaging performance of the optical system. Image blur caused by the aero-optical effect can arise from various factors. Unlike everyday degraded images, the restoration of turbulence-degraded images involves more complex factors: 1. changes in the atmospheric refractive index affect the resolution of the image formed by the optical system; 2. image quality is reduced by low-altitude wind shear, instability within the air layer, the influence of the turbulent medium, and imperfections of the equipment during transmission; 3. the degradation of the original image is unknown, so it is difficult to estimate a degradation model from the turbulence-degraded image, and various natural phenomena introduce random interference.
Existing traditional approaches to restoring aero-optical-effect degraded images generally fall into three categories: 1. flow-field control methods; 2. adaptive optics methods; 3. digital image restoration methods. These methods can reduce the influence of the aero-optical effect, but the point spread function estimated by existing algorithms always deviates from the true one.
Disclosure of Invention
The invention mainly aims to provide a method capable of improving the restoration accuracy of aero-optical-effect turbulence-degraded images.
The technical solution adopted by the invention is as follows:
A multi-scale self-calibration method for restoring aero-optical-effect turbulence-degraded images comprises the following steps:
S1, extracting a feature map of the original aero-optical-effect turbulence-degraded image, wherein the size of the feature map is C×H×W, C is the channel dimension, and H and W are respectively the height and width of the turbulence-degraded image;
S2, calibrating the feature map through a pre-constructed self-calibration network; specifically, the feature map is separated along the channel dimension into two sub-feature maps, each with C/2 channels; high-resolution and low-resolution spatial features are extracted from one sub-feature map and weighted-fused to obtain calibration spatial features; original-resolution spatial features are extracted from the other sub-feature map; and the original-resolution spatial features and the calibration spatial features are fused to obtain a local fusion feature map calibrated for the locally blurred regions of the turbulence-degraded image;
S3, performing multi-scale convolution restoration on the feature map of the original aero-optical-effect turbulence-degraded image to obtain a global restoration feature map for the global region;
and S4, merging the local fusion feature map and the global restoration feature map, and restoring the image from the merged feature map by convolution.
In connection with the above technical solution, in step S2 the local fusion feature map is taken as input, the fusion operation is repeated m times, and the m fusion feature maps are concatenated as the final local fusion feature map, where m is a natural number.
In connection with the above technical solution, in step S2 a sigmoid function is used to perform weighted fusion of the high-resolution and low-resolution spatial features to obtain the calibration spatial features.
In connection with the above technical solution, in step S2 the feature map of the turbulence-degraded image is extracted using a convolutional layer.
In connection with the above technical solution, in step S3 a multi-channel filter is used in which the convolution kernels of different channels have different dilation (atrous) rates, thereby forming a multi-scale convolution, where the number of channels of the multi-channel filter is equal to the channel dimension C of the feature map;
and the multi-scale convolution is used to enlarge the receptive field over the feature map of the original aero-optical-effect turbulence-degraded image, the feature map with the global region restored being output after the multi-scale convolution.
The invention also provides an aero-optical-effect turbulence-degraded image restoration system based on a multi-scale self-calibration network, which comprises:
a feature extraction module, configured to extract a feature map of the original aero-optical-effect turbulence-degraded image, wherein the size of the feature map is C×H×W, C is the channel dimension, and H and W are respectively the height and width of the turbulence-degraded image;
a locally blurred region calibration module, configured to calibrate the feature map through a pre-constructed self-calibration network; specifically, the feature map is separated along the channel dimension into two sub-feature maps, each with C/2 channels; high-resolution and low-resolution spatial features are extracted from one sub-feature map and weighted-fused to obtain calibration spatial features; original-resolution spatial features are extracted from the other sub-feature map; and the original-resolution spatial features and the calibration spatial features are fused to obtain a local fusion feature map calibrated for the locally blurred regions of the turbulence-degraded image;
a global region restoration module, configured to perform multi-scale convolution restoration on the feature map of the original aero-optical-effect turbulence-degraded image to obtain a global restoration feature map for the global region;
and an image restoration module, configured to merge the local fusion feature map and the global restoration feature map and to restore the image from the merged feature map by convolution.
In connection with the above technical solution, the locally blurred region calibration module is further configured to take the local fusion feature map as input, repeat the fusion operation m times, and concatenate the m fusion feature maps as the final local fusion feature map, where m is a natural number.
In connection with the above technical solution, the locally blurred region calibration module specifically performs weighted fusion of the high-resolution and low-resolution spatial features using a sigmoid function to obtain the calibration spatial features.
In connection with the above technical solution, the global region restoration module is specifically configured to:
use a multi-channel filter in which the convolution kernels of different channels have different dilation rates, thereby forming a multi-scale convolution, where the number of channels of the multi-channel filter is equal to the channel dimension C of the feature map;
and enlarge the receptive field over the feature map of the original aero-optical-effect turbulence-degraded image using the multi-scale convolution, outputting the feature map with the global region restored after the multi-scale convolution.
The invention also provides a computer storage device in which a computer program executable by a processor is stored, the computer program, when executed, performing the multi-scale self-calibration aero-optical-effect turbulence-degraded image restoration method of the above technical solution.
The invention has the following beneficial effects. The method weights and fuses high-resolution and low-resolution spatial features through a self-calibration network to obtain calibration spatial features, and then fuses the calibration spatial features with the original-resolution spatial features of the aero-optical-effect turbulence-degraded image feature map, so that the local regions of the image are calibrated while the original image features are preserved, yielding the local fusion feature map. After multiple calibrations, the locally blurred regions are restored more accurately.
Further, the multi-scale convolution uses dilated (atrous) convolutions with different dilation rates to enlarge the receptive field over the globally blurred region to different degrees and to integrate features of different scales, so that a larger blur extent and multi-scale information can be learned and detail information beneficial to restoring the image can be extracted. The method and system therefore calibrate the locally blurred regions of the image and effectively restore details over the global region, thereby improving the overall restoration quality of the image.
Drawings
The invention will be further described with reference to the accompanying drawings and examples, in which:
FIG. 1 is a flow chart of a multi-scale self-calibration aero-optical effect turbulence degradation image restoration method according to an embodiment of the invention;
FIG. 2 is a flow chart of a multi-scale self-calibration aero-optical effect turbulence degradation image restoration method according to another embodiment of the invention;
FIG. 3 is a schematic diagram of a multi-scale self-calibration network according to an embodiment of the present invention;
FIG. 4 is a diagram of the self-calibration process of the local blurred region of the image according to the present invention;
FIG. 5 shows the test results of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
As shown in FIG. 1, the multi-scale self-calibration aero-optical-effect turbulence-degraded image restoration method of an embodiment of the invention comprises the following steps:
S101, extracting a feature map of the original aero-optical-effect turbulence-degraded image, wherein the size of the feature map is C×H×W, C is the channel dimension, and H and W are respectively the height and width of the turbulence-degraded image;
S102, calibrating the feature map through a pre-constructed self-calibration network; specifically, the feature map is separated along the channel dimension into two sub-feature maps, each with C/2 channels; high-resolution and low-resolution spatial features are extracted from one sub-feature map and weighted-fused to obtain calibration spatial features; original-resolution spatial features are extracted from the other sub-feature map; and the original-resolution spatial features and the calibration spatial features are fused to obtain a local fusion feature map calibrated for the locally blurred regions of the turbulence-degraded image;
S103, performing multi-scale convolution restoration on the feature map of the original aero-optical-effect turbulence-degraded image to obtain a global restoration feature map for the global region;
and S104, merging the local fusion feature map and the global restoration feature map, and restoring the image from the merged feature map by convolution (a minimal end-to-end sketch of these four steps is given below).
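For readers following the PyTorch implementation mentioned later in the description, the following is a minimal end-to-end sketch of how the four steps S101-S104 might be composed. The module layout, the channel count of 64, the 3-channel input, and the placeholder branches are illustrative assumptions, not the patent's exact architecture; the local and global branches are elaborated by the self-calibration and multi-scale convolution sketches further below.

import torch
import torch.nn as nn

class MultiScaleSelfCalibNet(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        # S101: a single convolutional layer extracts a C x H x W feature map.
        self.extract = nn.Conv2d(3, channels, kernel_size=3, padding=1)
        # S102: placeholder for the self-calibration branch (local calibration).
        self.local_branch = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        # S103: placeholder for the multi-scale dilated convolution (global restoration).
        self.global_branch = nn.Conv2d(channels, channels, kernel_size=3, padding=2, dilation=2)
        # S104: reconstruct the restored image from the merged feature map.
        self.reconstruct = nn.Conv2d(channels, 3, kernel_size=3, padding=1)

    def forward(self, degraded: torch.Tensor) -> torch.Tensor:
        feat = self.extract(degraded)               # S101
        local_fused = self.local_branch(feat)       # S102: local fusion feature map
        global_restored = self.global_branch(feat)  # S103: global restoration feature map
        merged = local_fused + global_restored      # S104: merge the two feature maps
        return self.reconstruct(merged)             # S104: restore the image by convolution

# Example: restored = MultiScaleSelfCalibNet()(torch.randn(1, 3, 256, 256))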
Further, in step S102, the local fusion feature map may be taken as input, the fusion operation repeated m times, and the m fusion feature maps concatenated as the final local fusion feature map, where m is a natural number.
In step S102, a sigmoid function is used to weight and fuse the high-resolution and low-resolution spatial features to obtain the calibration spatial features.
Further, in step S102, the feature map of the turbulence-degraded image may be extracted using a convolutional layer.
In step S103, a multi-channel filter is specifically used in which the convolution kernels of different channels have different dilation rates, thereby forming a multi-scale convolution, where the number of channels of the multi-channel filter is equal to the channel dimension C of the feature map;
the multi-scale convolution is used to enlarge the receptive field over the feature map of the original aero-optical-effect turbulence-degraded image, and the feature map with the global region restored is output after the multi-scale convolution.
The invention constructs the multi-scale convolution from dilated (atrous) convolutions with different dilation rates. The different dilation rates enlarge the receptive field over the globally blurred region to different degrees, so the multi-scale convolution integrates features of different scales (different receptive fields), which helps restore blurred regions with different blur extents and blur degrees.
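As an illustration of this construction, the following PyTorch sketch builds a multi-scale convolution from parallel dilated convolutions. The dilation rates (1, 2, 4), the 1×1 fusion layer, and the channel count are assumptions of the sketch; the patent only specifies that the convolution kernels of different channels use different dilation rates and that the filter has C channels.

import torch
import torch.nn as nn

class MultiScaleDilatedConv(nn.Module):
    def __init__(self, channels: int = 64, dilations=(1, 2, 4)):
        super().__init__()
        # One 3x3 convolution per dilation rate; each enlarges the receptive
        # field over the globally blurred region to a different degree.
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        )
        # Fuse the multi-scale features back to C channels.
        self.fuse = nn.Conv2d(channels * len(dilations), channels, kernel_size=1)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        multi_scale = torch.cat([branch(feat) for branch in self.branches], dim=1)
        return self.fuse(multi_scale)

# Example: globally_restored = MultiScaleDilatedConv()(torch.randn(1, 64, 256, 256))

Larger dilation rates cover a larger blur extent without adding parameters, which matches the design motivation stated above.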
The multi-scale self-calibration-network-based aero-optical-effect turbulence-degraded image restoration method disclosed by the invention can be implemented in Python on a Linux operating system platform and, as shown in FIG. 2, comprises the following steps:
S201, acquiring a database of aero-optical-effect turbulence-degraded image sequences using aero-optical-effect turbulence-degraded image simulation software, dividing the database into a training set and a test set, and adding real-scene aero-optical-effect turbulence-degraded images to the test sample set;
s202, training a pre-constructed self-calibration network through sample data in a training set, wherein the specific training process comprises the following steps:
inputting an aero-optical-effect turbulence-degraded image and extracting its feature map; focusing on the locally blurred regions of the feature map and calibrating the local features of the feature map multiple times to obtain a local fusion feature map calibrated for the locally blurred regions of the turbulence-degraded image; learning a larger blur extent and multi-scale information in the feature map of the aero-optical-effect turbulence-degraded image and restoring image details to obtain a global restoration feature map; and merging the local fusion feature map and the global restoration feature map, and restoring the merged map through convolution.
A feature map of the original aero-optical-effect turbulence-degraded image is extracted, wherein the size of the feature map is C×H×W, C is the channel dimension, and H and W are respectively the height and width of the turbulence-degraded image.
The feature map is calibrated through the pre-constructed self-calibration network; specifically, the feature map is separated along the channel dimension into two sub-feature maps, each with C/2 channels; high-resolution and low-resolution spatial features are extracted from one sub-feature map and weighted-fused to obtain calibration spatial features; original-resolution spatial features are extracted from the other sub-feature map; and the original-resolution spatial features and the calibration spatial features are fused to obtain a local fusion feature map calibrated for the locally blurred regions of the turbulence-degraded image. Taking the local fusion feature map as input, the fusion operation is repeated m times, and the m fusion feature maps are concatenated to obtain the final local fusion feature map, where m is a natural number.
Multi-scale convolution restoration is performed on the feature map of the original aero-optical-effect turbulence-degraded image to obtain a global restoration feature map for the global region.
The local fusion feature map and the global restoration feature map are merged,
and the merged feature map is restored to an image by convolution.
An end-to-end training strategy can be adopted, with the multi-scale self-calibration network continuously optimized on the training set until the optimal weights are obtained, yielding the trained self-calibration network;
S203, testing the trained self-calibration network using the test sample set and evaluating its performance;
and S204, inputting the aero-optical-effect turbulence-degraded image to be restored into the self-calibration network whose evaluation meets the requirements, and outputting the restored image.
Further, as shown in FIG. 3, in step S2 the feature extraction module is used to extract the feature map of the aero-optical-effect turbulence-degraded image. The module uses a single 3×3 convolutional layer to extract a feature map of size C×H×W from the aero-optical-effect turbulence-degraded image, where C denotes the number of channels of the feature map and H and W respectively denote its height and width. The extracted feature map of the aero-optical-effect turbulence-degraded image is expressed as:
S_SF = C_SF(I_SF)
where I_SF denotes the input aero-optical-effect turbulence-degraded image, C_SF(·) denotes the convolution operation, and S_SF denotes the resulting feature map of the aero-optical-effect turbulence-degraded image.
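A minimal sketch of this feature-extraction step in PyTorch follows; the channel count C = 64 and the 3-channel input are illustrative assumptions.

import torch
import torch.nn as nn

C = 64
# C_SF: a single 3x3 convolutional layer.
conv_sf = nn.Conv2d(in_channels=3, out_channels=C, kernel_size=3, padding=1)

I_SF = torch.randn(1, 3, 256, 256)   # input turbulence-degraded image I_SF
S_SF = conv_sf(I_SF)                 # extracted feature map S_SF of size C x H x W
print(S_SF.shape)                    # torch.Size([1, 64, 256, 256])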
In step S3, the feature map S_SF of the aero-optical-effect turbulence-degraded image obtained in the previous step is divided into two sub-feature maps along the channel dimension C, each with C/2 channels.
The self-calibration process of the invention is illustrated in FIG. 4. One sub-feature map is down-sampled and its features are extracted with convolutional layers so that a representation of the low-resolution feature space is learned; at the same time, the same sub-feature map is up-sampled and its features are extracted with a convolutional layer so that a representation of the high-resolution feature space is learned. For an image I of size M×N, s-fold down-sampling yields an image of size (M/s)×(N/s). Down-sampling the sub-feature map means shrinking it, so the reduced image has a lower resolution and the convolutional layer learns a representation of the low-resolution feature space; similarly, up-sampling the sub-feature map means enlarging it, so a higher-resolution image is obtained and the convolutional layer learns a representation of the high-resolution feature space.
A sigmoid function can be used to perform weighted fusion of the high-resolution and low-resolution spatial features to obtain the calibration spatial features.
For the other sub-feature map, a convolutional layer is used to extract the features of the original-resolution space.
The features of the original-resolution space are fused with the calibration features to obtain the output Y_1.
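The following PyTorch sketch shows one plausible realization of this self-calibration step under stated assumptions: average pooling and bilinear interpolation for the s-fold down- and up-sampling (s = 2), sigmoid gating of the high-resolution features by the low-resolution features as the weighted fusion, and concatenation followed by a 3×3 convolution as the final fusion. The patent itself fixes only the channel split, the two resolution branches, the sigmoid weighting, and the fusion with the original-resolution features.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfCalibrationBlock(nn.Module):
    def __init__(self, channels: int = 64, scale: int = 2):
        super().__init__()
        half = channels // 2
        self.scale = scale
        self.conv_low = nn.Conv2d(half, half, 3, padding=1)   # low-resolution branch
        self.conv_high = nn.Conv2d(half, half, 3, padding=1)  # high-resolution branch
        self.conv_orig = nn.Conv2d(half, half, 3, padding=1)  # original-resolution branch
        self.conv_fuse = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, s_sf: torch.Tensor) -> torch.Tensor:
        x1, x2 = torch.chunk(s_sf, 2, dim=1)   # split along the channel dimension
        h, w = x1.shape[-2:]

        # Low-resolution space: s-fold down-sampling, then convolution.
        low = self.conv_low(F.avg_pool2d(x1, self.scale))
        low = F.interpolate(low, size=(h, w), mode='bilinear', align_corners=False)

        # High-resolution space: s-fold up-sampling, then convolution.
        high = self.conv_high(F.interpolate(x1, scale_factor=self.scale,
                                            mode='bilinear', align_corners=False))
        high = F.interpolate(high, size=(h, w), mode='bilinear', align_corners=False)

        # Sigmoid-weighted fusion yields the calibration spatial features (assumed gating form).
        calib = torch.sigmoid(low) * high

        # The other sub-feature map keeps the original resolution.
        orig = self.conv_orig(x2)

        # Fuse original-resolution and calibration features -> Y_1.
        return self.conv_fuse(torch.cat([orig, calib], dim=1))

# Example: y1 = SelfCalibrationBlock()(torch.randn(1, 64, 256, 256))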
The fused output Y_1 is then taken as the input and the operation of step S3 is repeated to obtain the output Y_2; by analogy, the operation is repeated m times.
The m outputs Y_1, Y_2, ..., Y_m are concatenated, and as shown in FIG. 3 the total output Y_M obtained from the m outputs is expressed as:
Y_M = C_sum[Y_1, Y_2, ..., Y_m]
where Y_M denotes the total output obtained by concatenating Y_1, Y_2, ..., Y_m, C_sum denotes the concatenation operation over Y_1, Y_2, ..., Y_m, and Y_m denotes the m-th output.
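A sketch of this repetition and concatenation (the C_sum operation) is given below, assuming PyTorch. The default block is a plain 3×3 convolution so the sketch runs standalone, standing in for the self-calibration block sketched above; channel-wise concatenation for C_sum is an assumption, and the 3×3 convolution W_SC is applied to Y_M in the subsequent merging step.

import torch
import torch.nn as nn

class CascadedCalibration(nn.Module):
    def __init__(self, channels: int = 64, m: int = 4, make_block=None):
        super().__init__()
        if make_block is None:
            make_block = lambda: nn.Conv2d(channels, channels, 3, padding=1)
        self.blocks = nn.ModuleList(make_block() for _ in range(m))

    def forward(self, s_sf: torch.Tensor) -> torch.Tensor:
        outputs, y = [], s_sf
        for block in self.blocks:
            y = block(y)                   # Y_k becomes the input of the next repetition
            outputs.append(y)
        return torch.cat(outputs, dim=1)   # Y_M = C_sum[Y_1, Y_2, ..., Y_m]

# Example: y_m = CascadedCalibration()(torch.randn(1, 64, 64, 64))  # shape (1, 64*m, 64, 64)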
Further, as shown in FIG. 3, in step S4 a multi-channel filter is specifically used in which the convolution kernels of different channels have different dilation rates, thereby forming a multi-scale convolution, where the number of channels of the multi-channel filter is equal to the channel dimension C of the feature map;
the multi-scale convolution is used to enlarge the receptive field over the feature map of the original aero-optical-effect turbulence-degraded image, and the feature map with the global region restored is output after the multi-scale convolution.
Further, in step S5, the feature map S_SF of the aero-optical-effect turbulence-degraded image and the concatenated output Y_M are added pixel by pixel, so as to combine the feature map after the locally-blurred-region calibration with the feature map after the globally-blurred-region restoration and produce the output S_DF:
S_DF = S_SF + W_SC · C_sum[Y_1, Y_2, ..., Y_m] = S_SF + W_SC · Y_M
where W_SC is the weight of the 3×3 convolution applied after Y_M, and S_SF denotes the feature map of the original aero-optical-effect turbulence-degraded image.
The merged feature map is reconstructed into a clear image using a 3×3 convolutional layer: I_CI = C_Rec(S_DF), where C_Rec denotes the convolution operation and I_CI denotes the restored image.
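The merging and reconstruction steps S_DF = S_SF + W_SC·Y_M and I_CI = C_Rec(S_DF) can be sketched as follows in PyTorch; the channel-wise concatenation for Y_M and the channel counts are assumptions of the sketch.

import torch
import torch.nn as nn

C, m = 64, 4
w_sc = nn.Conv2d(C * m, C, kernel_size=3, padding=1)   # W_SC: 3x3 convolution applied after Y_M
c_rec = nn.Conv2d(C, 3, kernel_size=3, padding=1)      # C_Rec: reconstruction convolution

S_SF = torch.randn(1, C, 256, 256)      # original feature map
Y_M = torch.randn(1, C * m, 256, 256)   # concatenated outputs Y_1 ... Y_m
S_DF = S_SF + w_sc(Y_M)                 # pixel-by-pixel addition of the two branches
I_CI = c_rec(S_DF)                      # restored (clear) image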
Further, in step S6, the training sample set is used to construct a loss function, which is minimized with stochastic gradient descent and back-propagation via automatic differentiation:
L(θ) = (1/N) Σ_{i=1}^{N} ‖ H_DRN(I_i^BI) − I_i^CI ‖
where N denotes the total number of images in the data set, H_DRN denotes the entire multi-scale self-calibration network, I_i^BI denotes the i-th aero-optical-effect turbulence-degraded image in the training set, and I_i^CI denotes the i-th original image. The loss function L(θ) is used to optimize the multi-scale self-calibration network and update the network parameters θ, obtaining the weights on the training sample set;
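A sketch of the corresponding training step is given below, assuming PyTorch, an L1 reconstruction loss (the exact norm of the original formula is not specified here), and stochastic gradient descent with the initial learning rate of 10^-4 quoted later in the embodiment; net stands for the whole network H_DRN, and loader yields (I_i^BI, I_i^CI) pairs.

import torch
import torch.nn as nn

def train_one_epoch(net, loader, optimizer, device="cuda"):
    criterion = nn.L1Loss()                      # L(theta), averaged over the batch
    net.train()
    for degraded, original in loader:            # (I_i^BI, I_i^CI) pairs
        degraded, original = degraded.to(device), original.to(device)
        restored = net(degraded)                 # H_DRN(I_i^BI)
        loss = criterion(restored, original)     # || H_DRN(I_i^BI) - I_i^CI ||
        optimizer.zero_grad()
        loss.backward()                          # back-propagation via autodiff
        optimizer.step()                         # update the network parameters theta

# Example: optimizer = torch.optim.SGD(net.parameters(), lr=1e-4)  # initial lr from the embodiment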
Using the weights obtained on the training set, the multi-scale self-calibration network is then tested with the test sample set. Testing uses the trained self-calibration network: a turbulence-degraded image is fed into the network and is directly restored. The restored image is visually clearer, but the test results are typically also evaluated quantitatively to verify the validity of the network. The embodiment of the invention adopts the peak signal-to-noise ratio (PSNR) as the evaluation index for the test results; PSNR is a common index for image quality evaluation, and a larger value indicates better image quality.
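For reference, the PSNR index can be computed as in the following sketch, assuming images scaled to [0, 1]; this is the standard definition rather than code from the patent.

import torch

def psnr(restored: torch.Tensor, reference: torch.Tensor, max_val: float = 1.0) -> float:
    mse = torch.mean((restored - reference) ** 2)
    if mse == 0:
        return float("inf")
    return (10.0 * torch.log10(max_val ** 2 / mse)).item()

# A larger PSNR indicates better restoration quality; in the experiment below the
# degraded inputs average about 20 dB and the restored outputs about 32 dB.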
The invention provides an implementation example. Aero-optical-effect turbulence-degraded image simulation software is used to obtain a database of aero-optical-effect turbulence-degraded image sequences; the database is divided into a training set and a test set, and real-scene aero-optical-effect turbulence-degraded images are added to the test sample set. 1000 original images and 1000 aero-optical-effect turbulence-degraded images are used as the training set, and 50 images are used as the test data set. Each image in the dataset is 256 × 256 pixels.
The test results shown in FIG. 5 indicate that the restored images are of better quality than the original input turbulence-degraded images, which shows that the trained network model is effective for restoring turbulence-degraded images.
When the test samples are used to test the self-calibration network, the peak signal-to-noise ratio (PSNR) evaluation index is specifically used to evaluate the reconstructed results. During training, the initial learning rate is set to 10^-4 and the parameters are set to m = M = 20; the overall model architecture is implemented with PyTorch on a GeForce GTX Titan V, training runs for 800 epochs, and the optimal model parameters are saved after training is completed.
In the testing phase, an additional 50 images from the aero-optical-effect turbulence-degraded image dataset were selected as the test dataset, for which the average PSNR of the degraded images is 20.006 dB. Testing with the optimal model parameters obtained from training yields the restored images, with an average processing time of 1.18 s and an average PSNR of 32.361 dB; the test results are shown in FIG. 5. The figure shows the input turbulence-degraded image with PSNR = 20.006 dB and the image restored by the whole network using the optimal model; the average PSNR of the restored images rises markedly to 32.361 dB, demonstrating that the network is effective in restoring aero-optical-effect turbulence-degraded images.
The present application also provides a computer-readable storage medium, such as a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, a server, an App application mall, etc., on which a computer program is stored, which when executed by a processor implements corresponding functions. The computer readable storage medium of this embodiment is for implementing the multi-scale self-calibrating aero-optical effect turbulence degradation image restoration method of the method embodiment when executed by a processor.
It will be understood that modifications and variations can be made by persons skilled in the art in light of the above teachings and all such modifications and variations are intended to be included within the scope of the invention as defined in the appended claims.

Claims (10)

1. A multi-scale self-calibration aero-optical-effect turbulence-degraded image restoration method, characterized by comprising the following steps:
S1, extracting a feature map of the original aero-optical-effect turbulence-degraded image, wherein the size of the feature map is C×H×W, C is the channel dimension, and H and W are respectively the height and width of the turbulence-degraded image;
S2, calibrating the feature map through a pre-constructed self-calibration network; specifically, the feature map is separated along the channel dimension into two sub-feature maps, each with C/2 channels; high-resolution and low-resolution spatial features are extracted from one sub-feature map and weighted-fused to obtain calibration spatial features; original-resolution spatial features are extracted from the other sub-feature map; and the original-resolution spatial features and the calibration spatial features are fused to obtain a local fusion feature map calibrated for the locally blurred regions of the turbulence-degraded image;
S3, performing multi-scale convolution restoration on the feature map of the original aero-optical-effect turbulence-degraded image to obtain a global restoration feature map for the global region;
and S4, merging the local fusion feature map and the global restoration feature map, and restoring the image from the merged feature map by convolution.
2. The multi-scale self-calibration aero-optical-effect turbulence-degraded image restoration method according to claim 1, characterized in that in step S2 the local fusion feature map is taken as input, the fusion operation is repeated m times, and the m fusion feature maps are concatenated as the final local fusion feature map, where m is a natural number.
3. The multi-scale self-calibration aero-optical-effect turbulence-degraded image restoration method according to claim 1, characterized in that in step S2 a sigmoid function is used to perform weighted fusion of the high-resolution and low-resolution spatial features to obtain the calibration spatial features.
4. The multi-scale self-calibration aero-optical-effect turbulence-degraded image restoration method according to claim 1, characterized in that in step S2 the feature map of the turbulence-degraded image is extracted using a convolutional layer.
5. The multi-scale self-calibration aero-optical-effect turbulence-degraded image restoration method according to claim 1, characterized in that in step S3 a multi-channel filter is used in which the convolution kernels of different channels have different dilation rates, thereby forming a multi-scale convolution, where the number of channels of the multi-channel filter is equal to the channel dimension C of the feature map;
and the multi-scale convolution is used to enlarge the receptive field over the feature map of the original aero-optical-effect turbulence-degraded image, the feature map with the global region restored being output after the multi-scale convolution.
6. An aero-optical-effect turbulence-degraded image restoration system based on a multi-scale self-calibration network, characterized by comprising:
a feature extraction module, configured to extract a feature map of the original aero-optical-effect turbulence-degraded image, wherein the size of the feature map is C×H×W, C is the channel dimension, and H and W are respectively the height and width of the turbulence-degraded image;
a locally blurred region calibration module, configured to calibrate the feature map through a pre-constructed self-calibration network; specifically, the feature map is separated along the channel dimension into two sub-feature maps, each with C/2 channels; high-resolution and low-resolution spatial features are extracted from one sub-feature map and weighted-fused to obtain calibration spatial features; original-resolution spatial features are extracted from the other sub-feature map; and the original-resolution spatial features and the calibration spatial features are fused to obtain a local fusion feature map calibrated for the locally blurred regions of the turbulence-degraded image;
a global region restoration module, configured to perform multi-scale convolution restoration on the feature map of the original aero-optical-effect turbulence-degraded image to obtain a global restoration feature map for the global region;
and an image restoration module, configured to merge the local fusion feature map and the global restoration feature map and to restore the image from the merged feature map by convolution.
7. The aero-optical-effect turbulence-degraded image restoration system based on a multi-scale self-calibration network according to claim 6, characterized in that the locally blurred region calibration module is further configured to take the local fusion feature map as input, repeat the fusion operation m times, and concatenate the m fusion feature maps as the final local fusion feature map, where m is a natural number.
8. The aero-optical-effect turbulence-degraded image restoration system based on a multi-scale self-calibration network according to claim 6, characterized in that the locally blurred region calibration module performs weighted fusion of the high-resolution and low-resolution spatial features using a sigmoid function to obtain the calibration spatial features.
9. The aero-optical-effect turbulence-degraded image restoration system based on a multi-scale self-calibration network according to claim 6, characterized in that the global region restoration module is specifically configured to:
use a multi-channel filter in which the convolution kernels of different channels have different dilation rates, thereby forming a multi-scale convolution, where the number of channels of the multi-channel filter is equal to the channel dimension C of the feature map;
and enlarge the receptive field over the feature map of the original aero-optical-effect turbulence-degraded image using the multi-scale convolution, outputting the feature map with the global region restored after the multi-scale convolution.
10. A computer storage device in which a computer program executable by a processor is stored, the computer program, when executed, performing the multi-scale self-calibration aero-optical-effect turbulence-degraded image restoration method of any one of claims 1-5.
CN202210229047.XA 2022-03-10 2022-03-10 Multi-scale self-calibration method and system for restoring turbulence degraded image by aerodynamic optical effect Active CN114331922B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210229047.XA CN114331922B (en) 2022-03-10 2022-03-10 Multi-scale self-calibration method and system for restoring turbulence degraded image by aerodynamic optical effect

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210229047.XA CN114331922B (en) 2022-03-10 2022-03-10 Multi-scale self-calibration method and system for restoring turbulence degraded image by aerodynamic optical effect

Publications (2)

Publication Number Publication Date
CN114331922A true CN114331922A (en) 2022-04-12
CN114331922B CN114331922B (en) 2022-07-19

Family

ID=81033920

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210229047.XA Active CN114331922B (en) 2022-03-10 2022-03-10 Multi-scale self-calibration method and system for restoring turbulence degraded image by aerodynamic optical effect

Country Status (1)

Country Link
CN (1) CN114331922B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150251745A1 (en) * 2014-03-06 2015-09-10 General Atomics Aeronautical Systems, Inc. Devices, systems and methods for passive control of flow
US20160300326A1 (en) * 2015-04-10 2016-10-13 Realtek Semiconductor Corporation Image processing device and method thereof
CN106157264A (en) * 2016-06-30 2016-11-23 北京大学 Large area image uneven illumination bearing calibration based on empirical mode decomposition
CN109685072A (en) * 2018-12-22 2019-04-26 北京工业大学 A kind of compound degraded image high quality method for reconstructing based on generation confrontation network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LIHUI SUN ET AL.: "Adaptive denoising algorithm of aero-optical degraded image based on edge orientation", 《2012 IEEE 11TH INTERNATIONAL CONFERENCE ON SIGNAL PROCESSING》 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115688637A (en) * 2023-01-03 2023-02-03 中国海洋大学 Turbulent mixing intensity calculation method, system, computer device and storage medium

Also Published As

Publication number Publication date
CN114331922B (en) 2022-07-19

Similar Documents

Publication Publication Date Title
CN113240580B (en) Lightweight image super-resolution reconstruction method based on multi-dimensional knowledge distillation
CN111028177B (en) Edge-based deep learning image motion blur removing method
CN110490082B (en) Road scene semantic segmentation method capable of effectively fusing neural network features
CN111369440B (en) Model training and image super-resolution processing method, device, terminal and storage medium
CN111523546B (en) Image semantic segmentation method, system and computer storage medium
CN112184554B (en) Remote sensing image fusion method based on residual mixed expansion convolution
CN112365514A (en) Semantic segmentation method based on improved PSPNet
CN114549913A (en) Semantic segmentation method and device, computer equipment and storage medium
Chen et al. U-net like deep autoencoders for deblurring atmospheric turbulence
CN114331922B (en) Multi-scale self-calibration method and system for restoring turbulence degraded image by aerodynamic optical effect
CN110570375B (en) Image processing method, device, electronic device and storage medium
CN116757930A (en) Remote sensing image super-resolution method, system and medium based on residual separation attention mechanism
CN114092803A (en) Cloud detection method and device based on remote sensing image, electronic device and medium
CN116071279A (en) Image processing method, device, computer equipment and storage medium
CN112766099B (en) Hyperspectral image classification method for extracting context information from local to global
CN110070541B (en) Image quality evaluation method suitable for small sample data
CN116228576A (en) Image defogging method based on attention mechanism and feature enhancement
CN116091893A (en) Method and system for deconvolution of seismic image based on U-net network
CN115689918A (en) Parallel single image rain removing method based on residual error prior attention mechanism
CN116091792A (en) Method, system, terminal and medium for constructing visual attention prediction model
CN115496654A (en) Image super-resolution reconstruction method, device and medium based on self-attention mechanism
CN114494065A (en) Image deblurring method, device and equipment and readable storage medium
CN113536971A (en) Target detection method based on incremental learning
CN114331931A (en) High dynamic range multi-exposure image fusion model and method based on attention mechanism
CN113628139A (en) Fuzzy image restoration method and system based on generation countermeasure network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant