CN114331922B - Multi-scale self-calibration method and system for restoring turbulence degraded image by aerodynamic optical effect

Multi-scale self-calibration method and system for restoring turbulence degraded image by aerodynamic optical effect

Info

Publication number
CN114331922B
Authority
CN
China
Prior art keywords
feature map
image
optical effect
calibration
scale
Prior art date
Legal status
Active
Application number
CN202210229047.XA
Other languages
Chinese (zh)
Other versions
CN114331922A (en)
Inventor
洪汉玉
罗心怡
马雷
张天序
张耀宗
熊伦
桑农
Current Assignee
Wuhan Institute of Technology
Original Assignee
Wuhan Institute of Technology
Priority date
Filing date
Publication date
Application filed by Wuhan Institute of Technology
Priority to CN202210229047.XA
Publication of CN114331922A
Application granted
Publication of CN114331922B

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses a multi-scale self-calibration method for restoring turbulence-degraded images caused by the aero-optical effect, which comprises the following steps: S1, extracting a feature map of the original aero-optical-effect turbulence-degraded image; S2, calibrating the feature map through a pre-constructed self-calibration network to obtain a local fusion feature map calibrated for the locally blurred regions of the turbulence-degraded image; S3, performing multi-scale convolution restoration on the feature map of the original aero-optical-effect turbulence-degraded image to obtain a global restoration feature map for the global region; and S4, merging the local fusion feature map and the global restoration feature map, and restoring the image from the merged feature map by convolution. The method exploits the latent high-resolution and low-resolution spatial information of the image while also taking its multi-scale information into account, so that the aero-optical-effect turbulence-degraded image can be restored accurately.

Description

Multi-scale self-calibration method and system for restoring turbulence degraded image by aerodynamic optical effect
Technical Field
The invention relates to the field of image processing, and in particular to a method and a system for restoring aero-optical-effect turbulence-degraded images based on a multi-scale self-calibration network.
Background
When an aircraft reaches a certain speed, the compression of the surrounding air changes the density of the flow field, which adversely affects the imaging performance of the optical system. Image blur caused by the aero-optical effect can be influenced by many factors, and unlike ordinary degraded images, the restoration of turbulence-degraded images involves more complex factors: 1. changes in the atmospheric refractive index affect the image resolution during imaging by the optical system; 2. image quality is reduced by low-altitude wind shear, instability within the air layer, the influence of the turbulent medium, and imperfections of the equipment during transmission; 3. the degradation of the original image is unknown, it is difficult to estimate a degradation model from the turbulence-degraded image, and various natural phenomena introduce random interference.
Existing traditional methods for restoring images degraded by the aero-optical effect generally take three approaches: 1. flow-field control methods; 2. adaptive-optics methods; 3. digital image restoration methods. These methods can reduce the influence of the aero-optical effect, but the point spread function estimated by existing algorithms always deviates from the true one.
Disclosure of Invention
The main object of the invention is to provide a method capable of improving the restoration accuracy of aero-optical-effect turbulence-degraded images.
The technical scheme adopted by the invention is as follows:
the method for restoring the turbulence degradation image of the multi-scale self-calibration aerodynamic optical effect comprises the following steps:
s1, extracting a characteristic diagram of the original aero-optical effect turbulence degradation image, wherein the size of the characteristic diagram isC×H×WWherein the channel dimension isCHWRespectively the height and width of the turbulence degradation image;
s2, calibrating the characteristic diagram through a pre-constructed self-calibration network, specifically, separating the characteristic diagram into two sub-characteristic diagrams along the channel dimension, wherein the channel number of each sub-characteristic diagram isC/2, extracting high-resolution and low-resolution spatial features of one of the sub-feature maps, and performing weighted fusion to obtain calibration spatial features; extracting the original resolution spatial features of the other sub-feature map; fusing the original resolution spatial features and the calibration spatial features to obtain a local fusion feature map calibrated for a local fuzzy region of the turbulence degradation image;
s3, carrying out multi-scale convolution recovery on the characteristic diagram of the original aero-optical effect turbulence degradation image to obtain a global recovery characteristic diagram for a global area;
and S4, merging the local fusion feature map and the global restoration feature map, and restoring the image of the merged feature map by convolution.
In connection with the above technical solution, in step S2, the local fusion feature map is used as input, the fusion operation is repeated m times, and the m fusion feature maps are concatenated as the final local fusion feature map, where m is a natural number.
In connection with the above technical solution, in step S2, a sigmoid function is used to fuse the high-resolution and low-resolution spatial features with weighting, thereby obtaining the calibration spatial features.
In connection with the above technical solution, in step S2, a convolutional layer is used to extract the feature map of the turbulence-degraded image.
In connection with the above technical solution, in step S3, a multi-channel filter is used in which the convolution kernels of different channels have different dilation (void) rates, thereby forming a multi-scale convolution; the number of channels of the multi-channel filter is equal to the channel dimension C of the feature map;
the multi-scale convolution enlarges the receptive field over the feature map of the original aero-optical-effect turbulence-degraded image, and the feature map with the global region restored is output after the multi-scale convolution.
The invention also provides an aero-optical-effect turbulence-degraded image restoration system based on the multi-scale self-calibration network, which comprises:
a feature-map extraction module, configured to extract a feature map of the original aero-optical-effect turbulence-degraded image, the size of the feature map being C×H×W, where C is the channel dimension and H and W are respectively the height and width of the turbulence-degraded image;
a locally-blurred-region calibration module, configured to calibrate the feature map through a pre-constructed self-calibration network; specifically, the feature map is split along the channel dimension into two sub-feature maps, each with C/2 channels; high-resolution and low-resolution spatial features are extracted from one sub-feature map and fused with weighting to obtain calibration spatial features; original-resolution spatial features are extracted from the other sub-feature map; the original-resolution spatial features and the calibration spatial features are fused to obtain a local fusion feature map calibrated for the locally blurred regions of the turbulence-degraded image;
a global-region restoration module, configured to perform multi-scale convolution restoration on the feature map of the original aero-optical-effect turbulence-degraded image to obtain a global restoration feature map for the global region;
and an image restoration module, configured to merge the local fusion feature map and the global restoration feature map and to restore the image from the merged feature map by convolution.
In connection with the above technical solution, the locally-blurred-region calibration module is further configured to take the local fusion feature map as input, repeat the fusion operation m times, and concatenate the m fusion feature maps as the final local fusion feature map, where m is a natural number.
In connection with the above technical solution, the locally-blurred-region calibration module specifically uses a sigmoid function to fuse the high-resolution and low-resolution spatial features with weighting, thereby obtaining the calibration spatial features.
In connection with the above technical solution, the global-region restoration module is specifically configured to:
use a multi-channel filter in which the convolution kernels of different channels have different dilation (void) rates, thereby forming a multi-scale convolution, the number of channels of the multi-channel filter being equal to the channel dimension C of the feature map;
and enlarge the receptive field over the feature map of the original aero-optical-effect turbulence-degraded image by means of the multi-scale convolution, outputting the feature map with the global region restored after the multi-scale convolution.
The invention also provides a computer storage device, in which a computer program executable by a processor is stored, and the computer program executes the multi-scale self-calibration aero-optical effect turbulence degradation image restoration method according to the technical scheme.
The invention has the following beneficial effects: the method fuses the high-resolution and low-resolution spatial features with weighting through a self-calibration network to obtain calibration spatial features, and then fuses the calibration spatial features with the original-resolution spatial features of the feature map of the aero-optical-effect turbulence-degraded image, so that the local regions of the image are calibrated while the original image features are retained, yielding the local fusion feature map. After multiple calibrations, the locally blurred regions are restored more accurately.
Further, the multi-scale convolution uses dilated (hole) convolutions with different dilation rates to enlarge the receptive field over the globally blurred region to different extents and integrates features of different scales, so that a larger blur range and multi-scale information can be learned and the detail information useful for restoring the image can be extracted. The method therefore calibrates the locally blurred regions of the image and effectively restores the details of its global region, thereby improving the quality of the overall restoration.
Drawings
The invention will be further described with reference to the accompanying drawings and examples, in which:
FIG. 1 is a flow chart of a multi-scale self-calibration aero-optical effect turbulence degradation image restoration method according to an embodiment of the invention;
FIG. 2 is a flow chart of a multi-scale self-calibration aero-optical effect turbulence degradation image restoration method according to another embodiment of the invention;
FIG. 3 is a schematic diagram of a multi-scale self-calibration network according to an embodiment of the present invention;
FIG. 4 is a diagram of the self-calibration process of the local blurred area of the image according to the present invention;
FIG. 5 shows the test results of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
As shown in FIG. 1, the multi-scale self-calibration method for restoring aero-optical-effect turbulence-degraded images according to an embodiment of the invention comprises the following steps:
S101, extracting a feature map of the original aero-optical-effect turbulence-degraded image, the size of the feature map being C×H×W, where C is the channel dimension and H and W are respectively the height and width of the turbulence-degraded image;
S102, calibrating the feature map through a pre-constructed self-calibration network; specifically, the feature map is split along the channel dimension into two sub-feature maps, each with C/2 channels; high-resolution and low-resolution spatial features are extracted from one sub-feature map and fused with weighting to obtain calibration spatial features; original-resolution spatial features are extracted from the other sub-feature map; the original-resolution spatial features and the calibration spatial features are fused to obtain a local fusion feature map calibrated for the locally blurred regions of the turbulence-degraded image;
S103, performing multi-scale convolution restoration on the feature map of the original aero-optical-effect turbulence-degraded image to obtain a global restoration feature map for the global region;
and S104, merging the local fusion feature map and the global restoration feature map, and restoring the image from the merged feature map by convolution.
Further, in step S102, the local fusion feature map may be used as input, the fusion operation repeated m times, and the m fusion feature maps concatenated as the final local fusion feature map, where m is a natural number.
In step S102, a sigmoid function is used to fuse the high-resolution and low-resolution spatial features with weighting, thereby obtaining the calibration spatial features.
Further, in step S102, a convolutional layer may be used to extract the feature map of the turbulence-degraded image.
In step S103, a multi-channel filter is specifically used in which the convolution kernels of different channels have different dilation (void) rates, thereby forming a multi-scale convolution; the number of channels of the multi-channel filter is equal to the channel dimension C of the feature map;
the multi-scale convolution enlarges the receptive field over the feature map of the original aero-optical-effect turbulence-degraded image, and the feature map with the global region restored is output after the multi-scale convolution.
The invention constructs the multi-scale convolution from dilated (void) convolutions with different dilation rates. The different dilation rates enlarge the receptive field over the globally blurred region to different extents, so the multi-scale convolution helps to integrate features of different scales (different receptive fields) and thus to restore blurred regions with different blur ranges and degrees of blur.
The method for restoring aero-optical-effect turbulence-degraded images based on the multi-scale self-calibration network disclosed by the invention can be written in the Python language on a Linux operating system platform and, as shown in FIG. 2, comprises the following steps:
S201, acquiring a database of aero-optical-effect turbulence-degraded image sequences using aero-optical-effect turbulence-degradation image simulation software, dividing the database into a training set and a test set, and adding real-scene aero-optical-effect turbulence-degraded images to the test sample set;
S202, training the pre-constructed self-calibration network with the sample data in the training set, the specific training process comprising:
inputting an aero-optical-effect turbulence-degraded image and extracting its feature map; focusing on the locally blurred regions of the feature map of the aero-optical-effect turbulence-degraded image and calibrating its local features multiple times to obtain a local fusion feature map calibrated for the locally blurred regions of the turbulence-degraded image; learning the larger blur range and the multi-scale information in the feature map and restoring the image details to obtain a global restoration feature map; and merging the local fusion feature map and the global restoration feature map and restoring the merged map by convolution.
A feature map of the original aero-optical-effect turbulence-degraded image is extracted, the size of the feature map being C×H×W, where C is the channel dimension and H and W are respectively the height and width of the turbulence-degraded image;
the feature map is calibrated through the pre-constructed self-calibration network; specifically, the feature map is split along the channel dimension into two sub-feature maps, each with C/2 channels; high-resolution and low-resolution spatial features are extracted from one sub-feature map and fused with weighting to obtain calibration spatial features; original-resolution spatial features are extracted from the other sub-feature map; the original-resolution spatial features and the calibration spatial features are fused to obtain a local fusion feature map calibrated for the locally blurred regions of the turbulence-degraded image; with the local fusion feature map as input the fusion operation is repeated m times, and the m fusion feature maps are concatenated as the final local fusion feature map, where m is a natural number.
Multi-scale convolution restoration is performed on the feature map of the original aero-optical-effect turbulence-degraded image to obtain a global restoration feature map for the global region;
the local fusion feature map and the global restoration feature map are merged;
and the merged feature map is restored into an image by convolution.
An end-to-end training strategy can be adopted, continuously optimizing the multi-scale self-calibration network on the training set until the optimal weights are obtained, which yields the trained self-calibration network;
s203, testing the trained self-calibration network by using the test sample set, and evaluating the performance of the self-calibration network;
s204, inputting the aero-optical effect turbulence degradation image to be restored to a self-calibration network which is evaluated to meet the requirements, and outputting the restored image.
Further, as shown in FIG. 3, in step S2 the feature extraction module is used to extract the feature map of the aero-optical-effect turbulence-degraded image. The module uses a single 3×3 convolutional layer to extract from the aero-optical-effect turbulence-degraded image a feature map of size C×H×W, where C denotes the number of channels of the feature map and H and W denote respectively its height and width. The extracted feature map of the aero-optical-effect turbulence-degraded image is expressed as:
S_SF = C_SF(I_SF)
where I_SF denotes the input aero-optical-effect turbulence-degraded image, C_SF(·) denotes the convolution operation, and S_SF denotes the resulting feature map of the aero-optical-effect turbulence-degraded image.
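A rough PyTorch sketch of this feature-extraction step is given below; the 3 input channels and the feature channel count C = 64 are illustrative assumptions, and only the single 3×3 convolutional layer comes from the description above.

```python
import torch
import torch.nn as nn

# S_SF = C_SF(I_SF): a single 3x3 convolutional layer maps the degraded image
# to a C x H x W feature map (C = 64 and the 3 input channels are assumptions).
C_SF = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3, padding=1)

I_SF = torch.randn(1, 3, 256, 256)   # input turbulence-degraded image (B, 3, H, W)
S_SF = C_SF(I_SF)                    # feature map of size C x H x W, here 64 x 256 x 256
```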
In step S3, the feature map S_SF of the aero-optical-effect turbulence-degraded image obtained in the previous step is split along the channel dimension C into two sub-feature maps, each with C/2 channels.
The self-calibration process of the invention is illustrated in FIG. 4. One sub-feature map is down-sampled and its features are extracted with a convolutional layer to learn a representation of the low-resolution feature space; at the same time, the sub-feature map is up-sampled and its features are extracted with a convolutional layer to learn a representation of the high-resolution feature space. For an image I of size M×N, s-fold down-sampling produces an image of resolution (M/s)×(N/s). Because down-sampling shrinks the sub-feature map, the reduced image has low resolution, so the convolutional layer learns a representation of the low-resolution feature space when extracting features from it. Similarly, up-sampling enlarges the sub-feature map and yields a higher-resolution image, so the convolutional layer learns a representation of the high-resolution feature space when extracting features from it.
A sigmoid function can be used to fuse the high-resolution and low-resolution spatial features with weighting, thereby obtaining the calibration spatial features.
For the other sub-feature map, a convolutional layer is used to extract the features of the original-resolution space.
The features of the original-resolution space and the calibration features are fused to obtain the output Y_1.
The fused output Y_1 is then taken as input and the operation of step S3 is repeated to obtain an output Y_2; by analogy, the operation is repeated m times.
The m outputs Y_1, Y_2, …, Y_m are concatenated; as shown in FIG. 3, the total output obtained from the m outputs is
Y_M = C_sum[Y_1, Y_2, …, Y_m]
where Y_M denotes the total output formed by concatenating Y_1, Y_2, …, Y_m, C_sum denotes the concatenation operation on Y_1, Y_2, …, Y_m, and Y_m denotes the m-th output.
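A rough PyTorch sketch of one self-calibration step and of the cascading of the m outputs is given below. The split into two C/2-channel sub-feature maps, the low-, high- and original-resolution branches, the sigmoid weighting and the concatenation Y_M = C_sum[Y_1, …, Y_m] follow the description above; the layer shapes, the 2× sampling factor, the use of average pooling and bilinear interpolation for down-/up-sampling, and the exact form of the weighted fusion are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfCalibrationStep(nn.Module):
    """One self-calibration step: split the C-channel feature map into two C/2
    halves, calibrate one half from its low- and high-resolution spatial
    features (sigmoid-weighted fusion), process the other half at the original
    resolution, then fuse both halves into a local fusion feature map Y_i."""
    def __init__(self, channels, scale=2):
        super().__init__()
        half = channels // 2
        self.scale = scale
        self.conv_low = nn.Conv2d(half, half, 3, padding=1)    # low-resolution branch
        self.conv_high = nn.Conv2d(half, half, 3, padding=1)   # high-resolution branch
        self.conv_gate = nn.Conv2d(half, half, 3, padding=1)   # produces the sigmoid weights
        self.conv_orig = nn.Conv2d(half, half, 3, padding=1)   # original-resolution branch
        self.conv_fuse = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x):
        x1, x2 = torch.chunk(x, 2, dim=1)                      # two C/2-channel sub-feature maps
        size = x1.shape[-2:]
        # low-resolution spatial features: s-fold down-sampling, convolution, resize back
        low = F.interpolate(self.conv_low(F.avg_pool2d(x1, self.scale)),
                            size=size, mode='bilinear', align_corners=False)
        # high-resolution spatial features: s-fold up-sampling, convolution, resize back
        up = F.interpolate(x1, scale_factor=self.scale, mode='bilinear', align_corners=False)
        high = F.interpolate(self.conv_high(up), size=size,
                             mode='bilinear', align_corners=False)
        # sigmoid-weighted fusion of the high- and low-resolution features
        w = torch.sigmoid(self.conv_gate(x1))
        calib = w * high + (1.0 - w) * low                     # calibration spatial features
        orig = self.conv_orig(x2)                              # original-resolution spatial features
        return self.conv_fuse(torch.cat([calib, orig], dim=1)) # local fusion feature map Y_i

# cascading: repeat the step m times and concatenate the m outputs (m = 3 here)
step = SelfCalibrationStep(64)
y, outputs = torch.randn(1, 64, 256, 256), []
for _ in range(3):
    y = step(y)
    outputs.append(y)
Y_M = torch.cat(outputs, dim=1)                                # Y_M = C_sum[Y_1, ..., Y_m]
```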
Further, as shown in FIG. 3, in step S4 a multi-channel filter is specifically used in which the convolution kernels of different channels have different dilation (void) rates, thereby forming a multi-scale convolution; the number of channels of the multi-channel filter is equal to the channel dimension C of the feature map;
the multi-scale convolution enlarges the receptive field over the feature map of the original aero-optical-effect turbulence-degraded image, and the feature map with the global region restored is output after the multi-scale convolution.
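The multi-scale convolution can be sketched roughly as follows: the C feature channels are divided into groups and each group is convolved with a different dilation (void) rate, so the filter keeps C channels overall while mixing several receptive-field sizes. The particular rates (1, 2, 4, 8) and the equal group split are assumptions for illustration; the description only specifies that different channels use different void rates and that the filter has C channels.

```python
import torch
import torch.nn as nn

class MultiScaleConv(nn.Module):
    """Multi-channel filter whose convolution kernels use different dilation
    (void) rates on different channel groups, forming a multi-scale convolution
    whose output keeps the C channels of the input feature map."""
    def __init__(self, channels, rates=(1, 2, 4, 8)):
        super().__init__()
        assert channels % len(rates) == 0
        group = channels // len(rates)
        self.branches = nn.ModuleList(
            [nn.Conv2d(group, group, 3, padding=r, dilation=r) for r in rates])

    def forward(self, x):
        chunks = torch.chunk(x, len(self.branches), dim=1)
        # each channel group sees a different receptive field; recombine to C channels
        return torch.cat([branch(c) for branch, c in zip(self.branches, chunks)], dim=1)

# usage: restore the global region of a C x H x W feature map (C = 64 assumed)
ms_conv = MultiScaleConv(64)
global_restored = ms_conv(torch.randn(1, 64, 256, 256))
```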
Further, in step S5, the feature map S_SF of the aero-optical-effect turbulence-degraded image and the concatenated output Y_M are added pixel by pixel, so that the feature map after calibration of the locally blurred regions and the feature map after restoration of the globally blurred region are combined into the output S_DF:
S_DF = S_SF + W_SC · C_sum[Y_1, Y_2, …, Y_m] = S_SF + W_SC · Y_M
where W_SC is the weight of the 3×3 convolution placed after Y_M and S_SF denotes the feature map of the original aero-optical-effect turbulence-degraded image.
A 3×3 convolutional layer is then used to reconstruct the combined feature map into a clear image: I_CI = C_Rec(S_DF), where C_Rec denotes the convolution operation and I_CI denotes the restored image.
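The pixel-wise merge and the final reconstruction can be sketched in PyTorch as follows. The pixel-wise addition S_DF = S_SF + W_SC·Y_M and the 3×3 convolutions W_SC and C_Rec follow the formulas above, while the channel count C = 64, the number of cascaded outputs m = 3 and the 3 output channels are illustrative assumptions.

```python
import torch
import torch.nn as nn

C, m = 64, 3
S_SF = torch.randn(1, C, 256, 256)        # feature map of the degraded image
Y_M = torch.randn(1, C * m, 256, 256)     # concatenated self-calibration outputs C_sum[Y_1..Y_m]

W_SC = nn.Conv2d(C * m, C, 3, padding=1)  # 3x3 convolution applied after Y_M
C_Rec = nn.Conv2d(C, 3, 3, padding=1)     # 3x3 reconstruction convolution

S_DF = S_SF + W_SC(Y_M)                   # S_DF = S_SF + W_SC * Y_M (pixel-wise addition)
I_CI = C_Rec(S_DF)                        # restored clear image I_CI = C_Rec(S_DF)
```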
Further, step S6 specifically uses the training sample set to construct a loss function L(θ) by automatic differentiation, with stochastic gradient descent and the back-propagation algorithm:
L(θ) = (1/N) Σ_{i=1}^{N} ℓ(H_DRN(I_i^BI; θ), I_i^CI)
where N denotes the total number of image pairs in the data set, H_DRN denotes the entire multi-scale self-calibration network, I_i^BI denotes the i-th aero-optical-effect turbulence-degraded image in the training set, I_i^CI denotes the i-th original image, and ℓ measures the pixel-wise difference between the network output and the original image. The loss function L(θ) is used to optimize the multi-scale self-calibration network and update the network parameters θ, yielding the weights of the training sample set;
Using the weights obtained on the training set, the multi-scale self-calibration network is tested with the test sample set. Testing uses the trained self-calibration network: a turbulence-degraded image is fed into the network and a restored image is obtained directly. The restored image is visually clearer, but the test results are also evaluated quantitatively to verify the validity of the network. The embodiment of the invention uses the peak signal-to-noise ratio (PSNR) to evaluate the test results; PSNR is a common index for image quality evaluation, and a larger value indicates better image quality.
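A short sketch of the training loss and the PSNR evaluation is given below. The mean-squared-error form of the per-pair term is an assumption; the text only states that L(θ) compares the network output H_DRN(I_i^BI) with the original image I_i^CI over the N training pairs and that PSNR (larger is better) is used to evaluate the restored results.

```python
import torch
import torch.nn.functional as F

def loss_fn(restored_batch, original_batch):
    """L(theta): average pixel-wise difference over a batch of training pairs
    (MSE is assumed here as the per-pair distance)."""
    return F.mse_loss(restored_batch, original_batch)

def psnr(restored, original, max_val=1.0):
    """Peak signal-to-noise ratio in dB; a larger value means better quality."""
    mse = F.mse_loss(restored, original)
    return 10.0 * torch.log10(max_val ** 2 / mse)
```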
The invention provides an implementation example. A database of aero-optical-effect turbulence-degraded image sequences is obtained with aero-optical-effect turbulence-degradation image simulation software, the database is divided into a training set and a test set, and real-scene aero-optical-effect turbulence-degraded images are added to the test sample set; 1000 original images and 1000 aero-optical-effect turbulence-degraded images are used as the training set, and 50 images are used as the test data set. Each image in the data set is 256×256 pixels.
The test results shown in FIG. 5 indicate that the restored images are of better quality than the original input turbulence-degraded images, which shows that the trained network model is effective for restoring turbulence-degraded images.
When the test samples are used to test the self-calibration network, the peak signal-to-noise ratio (PSNR) can specifically be used to evaluate the reconstructed results. During training, the initial learning rate is set to 10^-4 and the parameters are set to m = M = 20; the overall model is implemented in PyTorch on a GeForce GTX Titan V, training runs for 800 epochs, and the optimal model parameters are saved when training is complete.
In the testing phase, an additional 50 images from the aero-optical-effect turbulence-degraded image data set are selected as the test data set, for which the average PSNR of the degraded images is 20.006 dB. Testing with the optimal model parameters obtained from training yields the restored images, with an average time of 1.18 s and an average PSNR of 32.361 dB; the test results are shown in FIG. 5. It can be seen that the input turbulence-degraded images (average PSNR = 20.006 dB), after restoration by the whole network with the optimal model, reach an average PSNR of 32.361 dB, a clear increase, which proves that the network is effective in restoring aero-optical-effect turbulence-degraded images.
The present application also provides a computer-readable storage medium, such as a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, a server, an App application mall, etc., having stored thereon a computer program that, when executed by a processor, performs corresponding functions. The computer readable storage medium of this embodiment is for implementing the multi-scale self-calibrating aero-optical effect turbulence degradation image restoration method of the method embodiment when executed by a processor.
It will be understood that modifications and variations can be made by persons skilled in the art in light of the above teachings and all such modifications and variations are intended to be included within the scope of the invention as defined in the appended claims.

Claims (10)

1. A multi-scale self-calibration aero-optical effect turbulence degradation image restoration method is characterized by comprising the following steps:
S1, extracting a feature map of the original aero-optical-effect turbulence-degraded image, the size of the feature map being C×H×W, where C is the channel dimension and H and W are respectively the height and width of the turbulence-degraded image;
S2, calibrating the feature map through a pre-constructed self-calibration network; specifically, the feature map is separated along the channel dimension into two sub-feature maps, each with C/2 channels; high-resolution and low-resolution spatial features are extracted from one sub-feature map and fused with weighting to obtain calibration spatial features; original-resolution spatial features are extracted from the other sub-feature map; the original-resolution spatial features and the calibration spatial features are fused to obtain a local fusion feature map calibrated for the locally blurred regions of the turbulence-degraded image;
S3, performing multi-scale convolution restoration on the feature map of the original aero-optical-effect turbulence-degraded image to obtain a global restoration feature map for the global region;
and S4, merging the local fusion feature map and the global restoration feature map, and restoring the image from the merged feature map by convolution.
2. The multi-scale self-calibration aero-optical effect turbulence degradation image restoration method according to claim 1, wherein in step S2, the local fusion feature map is used as an input, the fusion operation is repeated m times, and m fusion feature maps are cascaded to be used as a final local fusion feature map, wherein m is a natural number.
3. The multi-scale self-calibration aero-optical effect turbulence degradation image restoration method according to claim 1, wherein in step S2, the sigmoid function is used to perform weighted fusion on the high-resolution spatial features and the low-resolution spatial features to obtain calibrated spatial features.
4. The multi-scale self-calibration aero-optical effect turbulence degradation image restoration method according to claim 1, wherein in step S2, a convolutional layer is specifically used to extract the feature map of the turbulence-degraded image.
5. The multi-scale self-calibration aero-optical effect turbulence degradation image restoration method according to claim 1, wherein in step S3, a multi-channel filter is used in which the convolution kernels of different channels have different dilation (void) rates, thereby forming a multi-scale convolution, the number of channels of the multi-channel filter being equal to the channel dimension C of the feature map;
and the multi-scale convolution is used to enlarge the receptive field over the feature map of the original aero-optical-effect turbulence-degraded image, the feature map with the global region restored being output after the multi-scale convolution.
6. An aero-optical effect turbulence degradation image restoration system based on a multi-scale self-calibration network, characterized by comprising:
a feature-map extraction module, configured to extract a feature map of the original aero-optical-effect turbulence-degraded image, the size of the feature map being C×H×W, where C is the channel dimension and H and W are respectively the height and width of the turbulence-degraded image;
a locally-blurred-region calibration module, configured to calibrate the feature map through a pre-constructed self-calibration network; specifically, the feature map is separated along the channel dimension into two sub-feature maps, each with C/2 channels; high-resolution and low-resolution spatial features are extracted from one sub-feature map and fused with weighting to obtain calibration spatial features; original-resolution spatial features are extracted from the other sub-feature map; the original-resolution spatial features and the calibration spatial features are fused to obtain a local fusion feature map calibrated for the locally blurred regions of the turbulence-degraded image;
a global-region restoration module, configured to perform multi-scale convolution restoration on the feature map of the original aero-optical-effect turbulence-degraded image to obtain a global restoration feature map for the global region;
and an image restoration module, configured to merge the local fusion feature map and the global restoration feature map and to restore the image from the merged feature map by convolution.
7. The aero-optical effect turbulence degradation image restoration system based on the multi-scale self-calibration network according to claim 6, wherein the locally-blurred-region calibration module is further configured to take the local fusion feature map as input, repeat the fusion operation m times, and concatenate the m fusion feature maps as the final local fusion feature map, where m is a natural number.
8. The aero-optical effect turbulence degradation image restoration system based on the multi-scale self-calibration network according to claim 6, wherein the locally-blurred-region calibration module uses a sigmoid function to fuse the high-resolution and low-resolution spatial features with weighting, thereby obtaining the calibration spatial features.
9. The aero-optical effect turbulence degradation image restoration system based on the multi-scale self-calibration network according to claim 6, wherein the global-region restoration module is specifically configured to:
use a multi-channel filter in which the convolution kernels of different channels have different dilation (void) rates, thereby forming a multi-scale convolution, the number of channels of the multi-channel filter being equal to the channel dimension C of the feature map;
and enlarge the receptive field over the feature map of the original aero-optical-effect turbulence-degraded image by means of the multi-scale convolution, outputting the feature map with the global region restored after the multi-scale convolution.
10. A computer storage device having stored therein a computer program executable by a processor to perform the multi-scale self-calibration aero-optical effect turbulence degradation image restoration method of any one of claims 1-5.
CN202210229047.XA 2022-03-10 2022-03-10 Multi-scale self-calibration method and system for restoring turbulence degraded image by aerodynamic optical effect Active CN114331922B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210229047.XA CN114331922B (en) 2022-03-10 2022-03-10 Multi-scale self-calibration method and system for restoring turbulence degraded image by aerodynamic optical effect

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210229047.XA CN114331922B (en) 2022-03-10 2022-03-10 Multi-scale self-calibration method and system for restoring turbulence degraded image by aerodynamic optical effect

Publications (2)

Publication Number Publication Date
CN114331922A CN114331922A (en) 2022-04-12
CN114331922B true CN114331922B (en) 2022-07-19

Family

ID=81033920

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210229047.XA Active CN114331922B (en) 2022-03-10 2022-03-10 Multi-scale self-calibration method and system for restoring turbulence degraded image by aerodynamic optical effect

Country Status (1)

Country Link
CN (1) CN114331922B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115688637B (en) * 2023-01-03 2023-05-16 中国海洋大学 Turbulent mixing intensity calculation method, turbulent mixing intensity calculation system, computer device and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106157264A (en) * 2016-06-30 2016-11-23 北京大学 Large area image uneven illumination bearing calibration based on empirical mode decomposition
CN109685072A (en) * 2018-12-22 2019-04-26 北京工业大学 A kind of compound degraded image high quality method for reconstructing based on generation confrontation network

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9567056B2 (en) * 2014-03-06 2017-02-14 General Atomics Aeronautical Systems, Inc. Devices, systems and methods for passive control of flow
TWI546769B (en) * 2015-04-10 2016-08-21 瑞昱半導體股份有限公司 Image processing device and method thereof

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106157264A (en) * 2016-06-30 2016-11-23 北京大学 Large area image uneven illumination bearing calibration based on empirical mode decomposition
CN109685072A (en) * 2018-12-22 2019-04-26 北京工业大学 A kind of compound degraded image high quality method for reconstructing based on generation confrontation network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Adaptive denoising algorithm of aero-optical degraded image based on edge orientation; Lihui Sun et al.; 2012 IEEE 11th International Conference on Signal Processing; 2013-04-04; pp. 1001-1005 *

Also Published As

Publication number Publication date
CN114331922A (en) 2022-04-12


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant