CN116071229A - Image super-resolution reconstruction method for wearable helmet - Google Patents

Image super-resolution reconstruction method for wearable helmet

Info

Publication number
CN116071229A
Authority
CN
China
Prior art keywords
image
neural network
convolutional neural
depth separable
super
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211021098.XA
Other languages
Chinese (zh)
Inventor
程德强
王培杰
寇旗旗
刘海
徐飞翔
王晓艺
王希
李雷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China University of Mining and Technology CUMT
Original Assignee
China University of Mining and Technology CUMT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China University of Mining and Technology CUMT
Priority to CN202211021098.XA
Publication of CN116071229A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00: Geometric image transformation in the plane of the image
    • G06T3/40: Scaling the whole image or part thereof
    • G06T3/4053: Super resolution, i.e. output image resolution higher than sensor resolution
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00: Geometric image transformation in the plane of the image
    • G06T3/40: Scaling the whole image or part thereof
    • G06T3/4046: Scaling the whole image or part thereof using neural networks
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems

Abstract

The invention discloses an image super-resolution reconstruction method for a wearable helmet, comprising the following steps: constructing a degradation model and obtaining high-/low-resolution image pairs through it; constructing a depthwise separable convolutional neural network model and generating super-resolution images with it; training the model by computing the loss between the super-resolution image and the input high-resolution image and optimizing the model; testing the reconstructed super-resolution images; preprocessing the test images; and verifying the depthwise separable convolutional neural network model. Using the degradation model, the invention can effectively handle images collected in low-illumination environments and reconstruct high-resolution images; the depthwise separable convolutional neural network model retains more of the image's texture detail and reconstructs fine images.

Description

Image super-resolution reconstruction method for wearable helmet
Technical Field
The invention belongs to the technical field of image enhancement, and particularly relates to an image super-resolution reconstruction method for a wearable helmet.
Background
With social development and technological progress, industrial mineral dressing and smelting operations are steadily moving toward intelligent automation, and the wearable intelligent helmet is now widely used in industry. For example, while preserving its safety-protection function, the underground wearable helmet can carry various intelligent auxiliary detection tasks such as coal sorting detection, pulp-froth feature detection, and foreign-object inspection. It can replace high-intensity, high-risk, repetitive manual labor, reducing labor intensity and accident risk, with notable ecological, social, and economic benefits. However, because of the low specification of the wearable helmet camera and the low illumination underground in coal mines, the images it collects are often blurred and of low resolution, which greatly hinders intelligent detection on the helmet and may even make the images unusable in subsequent processing.
At present, the most studied and most effective image super-resolution reconstruction methods are based on deep learning. Although current deep-learning-based reconstruction achieves excellent results on subjective evaluation indices, the reconstructed images still suffer from loss of high-frequency detail, over-smoothing, and artifacts, so super-resolution reconstruction techniques still need further improvement.
Chinese patent application No. 202111250771.2 (publication No. CN114187174A) discloses an image super-resolution reconstruction method based on multi-scale residual feature fusion, comprising the following steps. S1: acquire images of different resolutions and preprocess them to obtain high-/low-resolution image pairs. S2: construct a multi-scale feature extraction module based on depthwise separable convolution, extract features from the preprocessed image pairs, and output feature maps. S3: construct a residual feature fusion module and apply residual feature fusion to the output feature maps. S4: construct an enhanced attention module and process the fused feature maps. S5: upsample the feature maps output by step S4 with an adaptive upsampling module to generate a super-resolution image. S6: construct a loss function module based on the Charbonnier loss and process the super-resolution image. S7: construct a super-resolution reconstruction model based on multi-scale residual feature fusion and train it with the output of step S6. S8: input the image to be processed into the trained model to obtain its super-resolution reconstruction.
The multi-scale residual feature fusion method of that patent has the following shortcomings: scale information is underused, the channel-interaction capability of its depthwise separable convolutions is weak, the algorithm for generating the super-resolution image is complicated, and the resulting super-resolution image quality is poor.
Disclosure of Invention
To address these shortcomings, the invention provides an image super-resolution reconstruction method for a wearable helmet.
The aim of the invention is achieved as follows: an image super-resolution reconstruction method for a wearable helmet, comprising the following steps:
step 1: constructing a degradation model, and obtaining high-/low-resolution image pairs through the degradation model;
step 2: constructing a depthwise separable convolutional neural network model, and generating a super-resolution image with the network;
step 3: training the depthwise separable convolutional neural network model, computing the loss between the super-resolution image and the input high-resolution image, and optimizing the model;
step 4: testing the depthwise separable convolutional neural network model and the reconstructed super-resolution image;
step 5: preprocessing the test images and inputting them into the depthwise separable convolutional neural network model for verification;
step 6: verifying and evaluating the depthwise separable convolutional neural network model, and evaluating the quality of the reconstructed super-resolution image against standard metrics.
Preferably, constructing the degradation model comprises the following steps:
step 1-1: constructing a data set composed of real underground mine images;
step 1-2: converting the real mine images from RGB to HSV color space and randomly reducing the saturation (S) and brightness (V) channel values to simulate low-illumination images;
step 1-3: applying the degradation operation to the low-illumination images, downsampling the high-resolution images in the data set by scale factors of 2, 3, and 4 using bicubic interpolation to obtain the corresponding low-resolution images;
the degradation operation is represented by the following formula:
$$x = D(y) = (y_{\mathrm{HSV}} \times s)\downarrow_r$$
wherein $x$ denotes the degraded image, $y$ the original image, $D$ the degradation function, $s$ the reduction multiple, and $r$ the downscaling factor.
Preferably, constructing the depthwise separable convolutional neural network model and generating a super-resolution image with it comprises the following steps:
step 2-1: constructing the shallow feature extraction module of the model, which performs preliminary feature extraction on the input image to obtain a shallow feature map;
step 2-2: constructing the feature fusion residual group module of the model, which performs deep feature extraction on the shallow feature map to obtain a deep feature map;
step 2-3: constructing the image reconstruction module of the model, which upsamples the deep feature map and generates the super-resolution image through a 3×3 standard convolution.
Preferably, in step 3 the depthwise separable convolutional neural network model is trained on a constructed training set of real underground mine images; the high-resolution images HR of the data set are downsampled by scale factors of 2, 3, and 4 using bicubic interpolation to obtain the corresponding low-resolution images LR, and the loss between the reconstructed super-resolution images and the input high-resolution images is computed.
Preferably, in step 4 the depthwise separable convolutional neural network model is tested on a constructed test set, which comprises low-resolution images LR obtained by bicubic downsampling with scale factors ×2, ×3, and ×4 and the corresponding high-resolution images HR.
Preferably, in step 5 the test images are preprocessed: the patch size is set, the high-/low-resolution image pairs in the training set are cropped at random offsets starting from the image center, and the training set is saved in .npy format as the input of the depthwise separable convolutional neural network model.
Preferably, in step 6 the depthwise separable convolutional neural network model is verified on a constructed verification set;
the super-resolution reconstruction quality is evaluated on the Y channel after conversion to YCbCr space, using the peak signal-to-noise ratio (PSNR) and the structural similarity (SSIM):
$$\mathrm{PSNR} = 10\log_{10}\!\left(\frac{255^2}{\mathrm{MSE}}\right), \qquad \mathrm{SSIM}(x, y) = \frac{(2\mu_x\mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)}$$
the higher the two evaluation indexes PSNR and SSIM are, the closer the reconstruction result is to the real original image.
Preferably, step 2-2, constructing the feature fusion residual group module of the depthwise separable convolutional neural network model and performing deep feature extraction on the shallow feature map to obtain a deep feature map, comprises the following steps:
step 2-2-1: constructing the depthwise separable convolution channel attention residual module of the network: the shallow feature map is input to a depthwise convolution layer with 96 channels, a 7×7 kernel, stride 1, and padding 3, so as to capture the spatial feature relations of the image through a large receptive field;
step 2-2-2: constructing the channel attention module of the network: average pooling is applied over the spatial dimensions, channel attention is learned through two fully connected layers, and a Sigmoid produces the channel attention feature vector; this vector is multiplied element-wise with the features output by the depthwise convolution to obtain weighted depthwise convolution features, which then undergo one layer normalization;
the channel-attention-weighted feature map then undergoes a pointwise 1×1 convolution that raises the dimension to 384 channels and, after a GELU activation function, a pointwise 1×1 convolution that reduces the dimension back to 96 channels, improving the channel-interaction capability of the depthwise convolution; the input feature map and the residual feature map are added through a short connection to obtain the output feature map;
step 2-2-3: constructing the feature fusion module of the network, formed by connecting six depthwise separable convolution channel attention residual modules in series; the feature maps output by the six residual modules are fused along the channel dimension, features are recombined by a 3×3 convolution with 96 output channels, and a long connection adds the input feature map to the residual feature map to form the feature fusion module; the feature fusion module exploits the complementarity between features to combine their strengths, eliminates redundant information arising from correlation between different features, and improves model performance;
step 2-2-4: constructing the depthwise separable convolution channel attention fusion residual group module, formed by connecting four feature fusion modules in series; a long connection adds the input feature map to the residual feature map to form the group module; the group uses residual groups as basic modules of a deeper network, so information can flow within each residual group, more useful information is obtained, richer features are attended to, and the model achieves better performance through residual learning.
Preferably, the loss between the reconstructed super-resolution image and the input high-resolution image is computed with the following formula:
$$L_1 = \frac{1}{N}\sum_{i=1}^{N}\left\| x_{HR}^{(i)} - y_{SR}^{(i)} \right\|_1$$
wherein $L_1$ is the average loss, $x_{HR}$ the high-resolution original image, and $y_{SR}$ the super-resolution reconstructed image.
The beneficial effects of the invention are: 1. Using the degradation model and the feature mapping from low-illumination to normal-illumination pictures, images collected in low-illumination environments can be processed effectively and high-resolution images reconstructed.
2. Constructing the depthwise separable convolutional neural network model helps capture multi-scale context information and reconstruct fine images; the feature fusion module deeply fuses the features learned by each convolution, learns richer context information, and retains more texture detail.
3. In the channel attention module of the depthwise separable convolutional neural network, a Sigmoid produces the channel attention feature vector and layer normalization is applied; the layer normalization reduces training time and accelerates network training.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
FIG. 2 is a block diagram of the depthwise separable convolution channel attention residual module.
FIG. 3 is a block diagram of the feature fusion module.
FIG. 4 is a block diagram of the depthwise separable convolution channel attention fusion residual group.
Detailed Description
The invention is further described below with reference to the drawings.
As shown in FIG. 1, an image super-resolution reconstruction method for a wearable helmet comprises the following steps:
Step 1: construct a degradation model and obtain high-/low-resolution image pairs through it.
Constructing the degradation model includes:
step 1-1: constructing a data set composed of real underground mine images;
step 1-2: converting the real mine images from RGB to HSV color space and randomly reducing the saturation (S) and brightness (V) channel values to simulate low-illumination images;
step 1-3: applying the degradation operation to the low-illumination images, downsampling the high-resolution images in the data set by scale factors of 2, 3, and 4 using bicubic interpolation to obtain the corresponding low-resolution images;
the degradation operation is represented by the following formula:
$$x = D(y) = (y_{\mathrm{HSV}} \times s)\downarrow_r$$
wherein $x$ denotes the degraded image, $y$ the original image, $D$ the degradation function, $s$ the reduction multiple, and $r$ the downscaling factor.
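As a concrete illustration of steps 1-2 and 1-3, the degradation pipeline can be sketched with OpenCV and NumPy as below. This is a minimal sketch under assumptions the patent does not fix: the range of the random reduction factor and the use of a single factor s for both the S and V channels are choices made here for illustration only.

```python
import cv2
import numpy as np

def degrade(y_bgr: np.ndarray, r: int, rng: np.random.Generator) -> np.ndarray:
    """Simulate x = D(y) = (y_HSV * s) downsampled by r for one BGR image."""
    hsv = cv2.cvtColor(y_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    # Randomly reduce saturation (S) and brightness (V); the [0.2, 0.8] range
    # is an assumption, the patent only says the values are randomly reduced.
    s = rng.uniform(0.2, 0.8)
    hsv[..., 1] *= s  # S channel
    hsv[..., 2] *= s  # V channel
    low_light = cv2.cvtColor(hsv.clip(0, 255).astype(np.uint8),
                             cv2.COLOR_HSV2BGR)
    h, w = low_light.shape[:2]
    # Bicubic downsampling by the scale factor r (2, 3, or 4)
    return cv2.resize(low_light, (w // r, h // r),
                      interpolation=cv2.INTER_CUBIC)

# Hypothetical usage: build a high-/low-resolution training pair
# hr = cv2.imread("mine_0001.png")
# lr = degrade(hr, r=4, rng=np.random.default_rng(0))
```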
Step 2: construct the depthwise separable convolutional neural network model and generate a super-resolution image with it.
Step 2-1: construct the shallow feature extraction module of the model, which performs preliminary feature extraction on the input image to obtain a shallow feature map.
Step 2-2: construct the feature fusion residual group module of the model, which performs deep feature extraction on the shallow feature map to obtain a deep feature map.
As shown in FIG. 2, constructing the feature fusion residual group module and extracting deep features from the shallow feature map comprises the following steps:
Step 2-2-1: construct the depthwise separable convolution channel attention residual module of the network: the shallow feature map is input to a depthwise convolution layer with 96 channels, a 7×7 kernel, stride 1, and padding 3, so as to capture the spatial feature relations of the image through a large receptive field.
Step 2-2-2: construct the channel attention module of the network: average pooling is applied over the spatial dimensions, channel attention is learned through two fully connected layers, and a Sigmoid produces the channel attention feature vector; this vector is multiplied element-wise with the features output by the depthwise convolution to obtain weighted depthwise convolution features; the mean and variance of these features are computed and the layer is normalized, which reduces training time and accelerates network training. The above can be expressed by the following formulas:
$$A_1 = \mathrm{DW}_{7\times 7}(F_0)$$
$$A_1 = [A_{1,1}, A_{1,2}, \ldots, A_{1,n}]$$
$$B_1 = \mathrm{CA}(F_0)$$
$$C_1 = A_1 \cdot B_1$$
$$\mu = \frac{1}{n}\sum_{i=1}^{n} C_{1,i}, \qquad \sigma^2 = \frac{1}{n}\sum_{i=1}^{n}\left(C_{1,i} - \mu\right)^2$$
$$\hat{C}_1 = \gamma \odot \frac{C_1 - \mu}{\sqrt{\sigma^2 + \varepsilon}} + \beta$$
wherein $F_0$ is the input feature map, $A$ is the feature vector after the depthwise convolution, $n$ denotes the $n$-th neuron, $\mathrm{CA}(\cdot)$ is the channel attention module function, $B$ is the channel attention feature vector, $C$ is the channel-attention-weighted depthwise convolution feature, and $\gamma$ and $\beta$ are the scaling and translation parameter vectors, with the same dimension as $C$;
the feature map with weighted channel attention is obtained, point convolution dimension-lifting operation with one convolution kernel of 384 multiplied by 1 is carried out, a GELU activation function is used, randomness is introduced for a sigma activation function, the training process is more robust, point convolution with one convolution kernel of 96 multiplied by 1 is used for dimension-reducing operation, and the channel interaction capacity of depth convolution is improved; short-connection adding the input feature map and the residual extraction feature map to obtain an output feature map;
the above can be expressed by the following formula:
$$D_1 = \mathrm{PW}_{96 \to 384}(\hat{C}_1)$$
$$E_1 = \mathrm{GELU}(D_1)$$
$$F_1 = \mathrm{PW}_{384 \to 96}(E_1)$$
$$F_{\mathrm{out}} = F_0 + F_1$$
wherein $D_1$ and $F_1$ are pointwise convolution outputs and $E_1$ is the output of the GELU activation function.
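A rough PyTorch sketch of the residual block described in steps 2-2-1 and 2-2-2 follows. The 96-channel width, the 7×7 depthwise kernel with stride 1 and padding 3, the 96-to-384-to-96 pointwise convolutions with GELU, and the short connection come from the description; the reduction ratio of the attention branch and the use of GroupNorm(1, C) as a stand-in for layer normalization are assumptions.

```python
import torch
import torch.nn as nn

class DSCAResidualBlock(nn.Module):
    """Depthwise separable convolution channel attention residual block
    (a sketch; attention reduction ratio 16 is an assumption)."""
    def __init__(self, channels: int = 96, expansion: int = 4, reduction: int = 16):
        super().__init__()
        # 7x7 depthwise convolution, stride 1, padding 3 (large receptive field)
        self.dwconv = nn.Conv2d(channels, channels, 7, stride=1, padding=3,
                                groups=channels)
        # Channel attention: spatial average pooling, two FC layers, Sigmoid gate
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        # GroupNorm(1, C) normalizes over all channels and spatial positions,
        # a common convolutional stand-in for layer normalization
        self.norm = nn.GroupNorm(1, channels)
        # Pointwise 1x1 convolutions: expand 96 -> 384, GELU, reduce 384 -> 96
        self.pwconv1 = nn.Conv2d(channels, channels * expansion, 1)
        self.act = nn.GELU()
        self.pwconv2 = nn.Conv2d(channels * expansion, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        a = self.dwconv(x)            # A: depthwise convolution features
        b = self.attn(a)              # B: channel attention vector
        c = self.norm(a * b)          # C: weighted features, layer-normalized
        f = self.pwconv2(self.act(self.pwconv1(c)))
        return x + f                  # short (residual) connection
```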
As shown in FIG. 3, step 2-2-3: construct the feature fusion module of the depthwise separable convolutional neural network, formed by connecting six depthwise separable convolution channel attention residual modules in series. The feature maps output by the six residual modules are fused along the channel dimension, features are recombined by a 3×3 convolution with 96 output channels, and a long connection adds the input feature map to the residual feature map to form the feature fusion module. The purpose of the feature fusion module is to exploit the complementarity between features to combine their strengths, eliminate redundant information arising from correlation between different features, and improve model performance.
As shown in FIG. 4, step 2-2-4: construct the depthwise separable convolution channel attention fusion residual group module, formed by connecting four feature fusion modules in series; a long connection adds the input feature map to the residual feature map to form the group module. Its purpose: using residual groups as the basic modules of a deeper network, information can flow within each residual group so that more useful information is obtained and richer features are attended to, and the model achieves better performance through residual learning. A sketch of both modules follows.
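Steps 2-2-3 and 2-2-4 can be sketched on top of the block above. Concatenating every block's output along the channel dimension before the 3×3 recombination is one plausible reading of "fusing the feature maps output by the six residual modules"; treat it as an assumption.

```python
import torch
import torch.nn as nn

class FeatureFusionModule(nn.Module):
    """Six DSCA residual blocks in series; every block's output is kept,
    concatenated along the channel dimension, recombined by a 3x3
    convolution, and added to the input through a long connection."""
    def __init__(self, channels: int = 96, n_blocks: int = 6):
        super().__init__()
        self.blocks = nn.ModuleList(
            [DSCAResidualBlock(channels) for _ in range(n_blocks)])
        self.fuse = nn.Conv2d(channels * n_blocks, channels, 3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats, h = [], x
        for block in self.blocks:
            h = block(h)
            feats.append(h)                     # keep each block's output
        return x + self.fuse(torch.cat(feats, dim=1))  # long connection

class FusionResidualGroup(nn.Module):
    """Four feature fusion modules in series with a group-level long skip."""
    def __init__(self, channels: int = 96, n_modules: int = 4):
        super().__init__()
        self.body = nn.Sequential(
            *[FeatureFusionModule(channels) for _ in range(n_modules)])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.body(x)                 # residual learning over the group
```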
Step 2-3: construct the image reconstruction module of the depthwise separable convolutional neural network model, which upsamples the deep feature map and generates the super-resolution image through a 3×3 standard convolution.
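A minimal end-to-end assembly of steps 2-1 to 2-3 might look as follows; the pixel-shuffle upsampler and the 3-channel RGB input and output are assumptions, since the patent states only that the deep feature map is upsampled and passed through a 3×3 standard convolution.

```python
import torch
import torch.nn as nn

class HelmetSRNet(nn.Module):
    """Shallow feature extraction, deep feature extraction by the fusion
    residual group, then upsampling and a 3x3 reconstruction convolution.
    The pixel-shuffle upsampler and RGB input/output are assumptions."""
    def __init__(self, scale: int = 4, channels: int = 96):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 3, padding=1)   # shallow extraction
        self.body = FusionResidualGroup(channels)          # deep extraction
        self.upsample = nn.Sequential(
            nn.Conv2d(channels, channels * scale ** 2, 3, padding=1),
            nn.PixelShuffle(scale),                        # sub-pixel upsampling
        )
        self.tail = nn.Conv2d(channels, 3, 3, padding=1)   # 3x3 reconstruction

    def forward(self, lr: torch.Tensor) -> torch.Tensor:
        deep = self.body(self.head(lr))
        return self.tail(self.upsample(deep))              # super-resolution image
```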
Step 3: train the depthwise separable convolutional neural network model, compute the loss between the super-resolution image and the input high-resolution image, and optimize the model.
A training set is constructed from real underground mine images numbered 1-810, of which images 1-800 form the training set; the high-resolution images HR of the data set are downsampled by scale factors of 2, 3, and 4 using bicubic interpolation to obtain the corresponding low-resolution images LR, and the loss between the reconstructed super-resolution images and the input high-resolution images is computed.
The Adam optimizer is used with a learning rate of 0.0002 and a batch size of 8; the learning rate is halved every 100 training epochs, for a total of 500 epochs.
The loss between the reconstructed super-resolution image and the input high-resolution image is computed, network parameters are updated with the back-propagation algorithm, and the learning capability of the model is improved by minimizing the loss function. The average loss function $L_1$ is given by:
$$L_1 = \frac{1}{N}\sum_{i=1}^{N}\left\| x_{HR}^{(i)} - y_{SR}^{(i)} \right\|_1$$
wherein $L_1$ is the average loss, $x_{HR}$ the high-resolution original image, and $y_{SR}$ the super-resolution reconstructed image.
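A training-loop sketch with the stated hyper-parameters (Adam, learning rate 0.0002, batch size 8, learning rate halved every 100 epochs, 500 epochs) and the L1 loss, continuing the PyTorch sketches above; `train_set` is a hypothetical Dataset yielding aligned (LR, HR) patch pairs.

```python
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader

model = HelmetSRNet(scale=4)
optimizer = torch.optim.Adam(model.parameters(), lr=2e-4)
# Halve the learning rate every 100 epochs, as stated in the description
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=100, gamma=0.5)
loader = DataLoader(train_set, batch_size=8, shuffle=True)  # hypothetical dataset

for epoch in range(500):
    for lr_img, hr_img in loader:
        sr_img = model(lr_img)
        loss = F.l1_loss(sr_img, hr_img)  # mean absolute (L1) loss
        optimizer.zero_grad()
        loss.backward()                   # back-propagation
        optimizer.step()
    scheduler.step()
```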
Step 4: test the depthwise separable convolutional neural network model and the reconstructed super-resolution image. According to the performance of the trained super-resolution reconstruction network on the verification set, the set of network model parameters with the best verification metrics is selected; the network is then run on the test set to evaluate the resulting model.
Step 5: preprocess the test images and input them into the depthwise separable convolutional neural network model for verification. The patch size is set, the high-/low-resolution image pairs in the training set are cropped at random offsets starting from the image center, and the training set is saved in .npy format as the input of the depthwise separable convolutional neural network model.
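The patch preparation of step 5 could be sketched as below; the offset range around the image center and the output file names are assumptions, since the patent specifies only center-anchored random-offset crops saved in .npy format.

```python
import numpy as np

def make_patches(hr: np.ndarray, lr: np.ndarray, patch: int, scale: int,
                 n: int, rng: np.random.Generator) -> None:
    """Crop aligned HR/LR patch pairs at random offsets around the image
    center and save them in .npy format (a sketch)."""
    ch, cw = lr.shape[0] // 2, lr.shape[1] // 2   # LR image center
    hr_patches, lr_patches = [], []
    for _ in range(n):
        # Random offset from the center; the +/- quarter-image range is assumed
        dy = int(rng.integers(-ch // 2, ch // 2))
        dx = int(rng.integers(-cw // 2, cw // 2))
        y0 = int(np.clip(ch + dy, 0, lr.shape[0] - patch))
        x0 = int(np.clip(cw + dx, 0, lr.shape[1] - patch))
        lr_patches.append(lr[y0:y0 + patch, x0:x0 + patch])
        # The HR crop is the LR crop scaled by the super-resolution factor
        hr_patches.append(hr[y0 * scale:(y0 + patch) * scale,
                             x0 * scale:(x0 + patch) * scale])
    np.save("train_lr.npy", np.stack(lr_patches))  # hypothetical file names
    np.save("train_hr.npy", np.stack(hr_patches))
```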
Step 6: verify and evaluate the depthwise separable convolutional neural network model, and evaluate the quality of the reconstructed super-resolution image against standard metrics.
A verification set is constructed from images numbered 801-810. The super-resolution reconstruction quality is evaluated on the Y channel after conversion to YCbCr space, using the peak signal-to-noise ratio (PSNR) and the structural similarity (SSIM):
$$\mathrm{PSNR} = 10\log_{10}\!\left(\frac{255^2}{\mathrm{MSE}}\right), \qquad \mathrm{SSIM}(x, y) = \frac{(2\mu_x\mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)}$$
the higher the two evaluation indexes PSNR and SSIM are, the closer the reconstruction result is to the real original image.
The foregoing description is only illustrative of the invention and is not to be construed as limiting the invention. Various modifications and variations of the present invention will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, or the like, which is within the spirit and principles of the present invention, should be included in the scope of the claims of the present invention.

Claims (9)

1. An image super-resolution reconstruction method for a wearable helmet, characterized by comprising the following steps:
step 1: constructing a degradation model, and obtaining high-/low-resolution image pairs through the degradation model;
step 2: constructing a depthwise separable convolutional neural network model, and generating a super-resolution image with the network;
step 3: training the depthwise separable convolutional neural network model, computing the loss between the super-resolution image and the input high-resolution image, and optimizing the model;
step 4: testing the depthwise separable convolutional neural network model and the reconstructed super-resolution image;
step 5: preprocessing the images and inputting them into the depthwise separable convolutional neural network model for verification;
step 6: verifying and evaluating the depthwise separable convolutional neural network model, and evaluating the quality of the reconstructed super-resolution image against standard metrics.
2. The image super-resolution reconstruction method for a wearable helmet according to claim 1, wherein constructing the degradation model comprises the following steps:
step 1-1: constructing a data set composed of real underground mine images;
step 1-2: converting the real mine images from RGB to HSV color space and randomly reducing the saturation (S) and brightness (V) channel values to simulate low-illumination images;
step 1-3: applying the degradation operation to the low-illumination images, downsampling the high-resolution images in the data set by scale factors of 2, 3, and 4 using bicubic interpolation to obtain the corresponding low-resolution images;
the degradation operation is represented by the following formula:
$$x = D(y) = (y_{\mathrm{HSV}} \times s)\downarrow_r$$
wherein $x$ denotes the degraded image, $y$ the original image, $D$ the degradation function, $s$ the reduction multiple, and $r$ the downscaling factor.
3. The image super-resolution reconstruction method for a wearable helmet according to claim 1, wherein constructing the depthwise separable convolutional neural network model and generating a super-resolution image with it comprises the following steps:
step 2-1: constructing the shallow feature extraction module of the model, which performs preliminary feature extraction on the input image to obtain a shallow feature map;
step 2-2: constructing the feature fusion residual group module of the model, which performs deep feature extraction on the shallow feature map to obtain a deep feature map;
step 2-3: constructing the image reconstruction module of the model, which upsamples the deep feature map and generates the super-resolution image through a 3×3 standard convolution.
4. The image super-resolution reconstruction method for a wearable helmet according to claim 1, wherein in step 3 the depthwise separable convolutional neural network model is trained on a constructed training set of real underground mine images; the high-resolution images HR of the data set are downsampled by scale factors of 2, 3, and 4 using bicubic interpolation to obtain the corresponding low-resolution images LR, and the loss between the reconstructed super-resolution images and the input high-resolution images is computed.
5. The image super-resolution reconstruction method for a wearable helmet according to claim 1, wherein in step 4 the depthwise separable convolutional neural network model is tested on a constructed test set comprising low-resolution images LR obtained by bicubic downsampling with scale factors ×2, ×3, and ×4 and the corresponding high-resolution images HR.
6. The image super-resolution reconstruction method for a wearable helmet according to claim 1, wherein in step 5 the test images are preprocessed: the patch size is set, the high-/low-resolution image pairs in the training set are cropped at random offsets starting from the image center, and the training set is saved in .npy format as the input of the depthwise separable convolutional neural network model.
7. The image super-resolution reconstruction method for a wearable helmet according to claim 1, wherein in step 6 the depthwise separable convolutional neural network model is verified on a constructed verification set;
the super-resolution reconstruction quality is evaluated on the Y channel after conversion to YCbCr space, using the peak signal-to-noise ratio (PSNR) and the structural similarity (SSIM):
$$\mathrm{PSNR} = 10\log_{10}\!\left(\frac{255^2}{\mathrm{MSE}}\right), \qquad \mathrm{SSIM}(x, y) = \frac{(2\mu_x\mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)}$$
the higher the two evaluation indexes PSNR and SSIM are, the closer the reconstruction result is to the real original image.
8. The image super-resolution reconstruction method for a wearable helmet according to claim 3, wherein step 2-2, constructing the feature fusion residual group module of the depthwise separable convolutional neural network model and performing deep feature extraction on the shallow feature map to obtain a deep feature map, comprises the following steps:
step 2-2-1: constructing the depthwise separable convolution channel attention residual module of the network: the shallow feature map is input to a depthwise convolution layer with 96 channels, a 7×7 kernel, stride 1, and padding 3, so as to capture the spatial feature relations of the image through a large receptive field;
step 2-2-2: constructing the channel attention module of the network: average pooling is applied over the spatial dimensions, channel attention is learned through two fully connected layers, and a Sigmoid produces the channel attention feature vector; this vector is multiplied element-wise with the features output by the depthwise convolution to obtain weighted depthwise convolution features, which then undergo one layer normalization;
the channel-attention-weighted feature map then undergoes a pointwise 1×1 convolution that raises the dimension to 384 channels and, after a GELU activation function, a pointwise 1×1 convolution that reduces the dimension back to 96 channels, improving the channel-interaction capability of the depthwise convolution; the input feature map and the residual feature map are added through a short connection to obtain the output feature map;
step 2-2-3: constructing the feature fusion module of the network, formed by connecting six depthwise separable convolution channel attention residual modules in series; the feature maps output by the six residual modules are fused along the channel dimension, features are recombined by a 3×3 convolution with 96 output channels, and a long connection adds the input feature map to the residual feature map to form the feature fusion module; the feature fusion module exploits the complementarity between features to combine their strengths and eliminates redundant information arising from correlation between different features;
step 2-2-4: constructing the depthwise separable convolution channel attention fusion residual group module, formed by connecting four feature fusion modules in series; a long connection adds the input feature map to the residual feature map to form the group module; the group uses residual groups as basic modules of a deeper network, so information can flow within each residual group, more useful information is obtained, richer features are attended to, and the model achieves better performance through residual learning.
9. The image super-resolution reconstruction method for a wearable helmet according to claim 4, wherein the loss between the reconstructed super-resolution image and the input high-resolution image is computed with the following formula:
$$L_1 = \frac{1}{N}\sum_{i=1}^{N}\left\| x_{HR}^{(i)} - y_{SR}^{(i)} \right\|_1$$
wherein $L_1$ is the average loss, $x_{HR}$ the high-resolution original image, and $y_{SR}$ the super-resolution reconstructed image.
CN202211021098.XA 2022-08-24 2022-08-24 Image super-resolution reconstruction method for wearable helmet Pending CN116071229A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211021098.XA CN116071229A (en) 2022-08-24 2022-08-24 Image super-resolution reconstruction method for wearable helmet

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211021098.XA CN116071229A (en) 2022-08-24 2022-08-24 Image super-resolution reconstruction method for wearable helmet

Publications (1)

Publication Number Publication Date
CN116071229A true CN116071229A (en) 2023-05-05

Family

ID=86177600

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211021098.XA Pending CN116071229A (en) 2022-08-24 2022-08-24 Image super-resolution reconstruction method for wearable helmet

Country Status (1)

Country Link
CN (1) CN116071229A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116403213A (en) * 2023-06-08 2023-07-07 杭州华得森生物技术有限公司 Circulating tumor cell detector based on artificial intelligence and method thereof



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination