CN116229081A - Unmanned aerial vehicle panoramic image denoising method based on attention mechanism - Google Patents

Unmanned aerial vehicle panoramic image denoising method based on attention mechanism Download PDF

Info

Publication number
CN116229081A
CN116229081A · CN202310221981.1A
Authority
CN
China
Prior art keywords
image
panoramic image
convolution
denoising
feature map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310221981.1A
Other languages
Chinese (zh)
Inventor
邹建新
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Sanjie Feihang Uav Technology Co ltd
Original Assignee
Suzhou Sanjie Feihang Uav Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Sanjie Feihang Uav Technology Co ltd filed Critical Suzhou Sanjie Feihang Uav Technology Co ltd
Priority to CN202310221981.1A priority Critical patent/CN116229081A/en
Publication of CN116229081A publication Critical patent/CN116229081A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/30Noise filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/34Smoothing or thinning of the pattern; Morphological operations; Skeletonisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/17Terrestrial scenes taken from planes or by drones
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Abstract

The invention discloses a method for denoising unmanned aerial vehicle panoramic images based on an attention mechanism, comprising the following components: a convolutional neural network encoder-decoder structure that encodes and decodes the input image through depth separable convolution, activation, pooling, deconvolution, and related operations; depth separable convolution, which realizes the convolution of the feature map through a depthwise convolution followed by a pointwise convolution and reduces the number of parameters of the denoising network; a channel attention module, which assigns channel-wise weights to the encoded feature map to improve denoising performance; and a residual attention module with multi-scale skip connections, which further improves denoising performance by attending to multi-scale information of the feature map at the skip connections and weighting the feature map channel-wise. The method denoises panoramic images shot by an unmanned aerial vehicle in complex illumination environments, and addresses the problems of existing denoising methods: the noise level must be estimated, the process is time-consuming, and the denoising effect in complex illumination environments is poor.

Description

Unmanned aerial vehicle panoramic image denoising method based on attention mechanism
Technical Field
The invention relates to an image denoising method, in particular to an unmanned aerial vehicle panoramic image denoising method based on an attention mechanism.
Background
Panoramic imaging has the characteristics of a large field of view, low distortion, and high resolution, and can serve as the eyes of an unmanned aerial vehicle (UAV), enabling real-time perception of a 360-degree scene. Panoramic images captured by the panoramic camera on a UAV in complex illumination environments suffer from partial overexposure and partial underexposure, which degrade the images and seriously impair the UAV's perception. Noise, especially under low illumination, can approach the signal intensity, overwhelming the useful detail of the image and severely degrading image quality. Amplifying the signal alone further amplifies the noise and cannot improve the camera's signal-to-noise ratio, so denoising UAV panoramic images in low-light environments is particularly important.
Much research on image denoising has been carried out in recent years, but most of it targets ordinary images from mobile-phone and digital cameras; denoising of UAV panoramic images has received little attention. The noise distribution of images taken in complex illumination environments depends on factors such as illumination intensity, so a single noise model can hardly cover all image noise. Existing denoising methods must estimate the noise level, are time-consuming, and perform poorly in complex illumination environments, which limits the application of panoramic imaging systems in the UAV field.
Disclosure of Invention
In view of this situation, and to overcome the defects of the prior art, the invention provides an unmanned aerial vehicle panoramic image denoising method based on an attention mechanism.
The invention solves the technical problem through the following technical solution: an unmanned aerial vehicle panoramic image denoising method based on an attention mechanism, characterized by comprising the following components:
the convolutional neural network encoder-decoder structure comprises an encoder and a decoder; the encoder comprises five convolution units that perform convolution and activation on the image to extract image features, with pooling operations reducing the number of parameters; the decoder comprises four convolution units that upsample the features by deconvolution to gradually restore the image size, then repeat convolution and activation to gradually recover the denoised image;
the depth separable convolution comprises a depthwise convolution and a pointwise convolution: the depthwise convolution convolves each channel of the input feature map with its own kernel and then stacks all the results; the pointwise convolution adjusts the channel number of the feature map according to the number of kernels and better fuses the relations between channels. Depth separable convolution greatly reduces the number of network parameters and accelerates model inference;
the channel attention module comprises average pooling, maximum pooling, a shared MLP (multi-layer perceptron), feature-map summation, activation, and feature-map multiplication: maximum pooling and average pooling are computed on the input feature map, both results pass through the shared MLP, the two weighted outputs are summed and activated, and finally the weighted map is multiplied with the original feature map, assigning channel-wise weights to the feature map;
the residual attention module with multi-scale skip connections comprises an ordinary convolution, a residual connection, and a channel attention module: the input feature map passes through one ordinary convolution and then the channel attention module, the result is residually connected with the input feature map, and finally the result is stacked channel-wise at multiple scales with the feature maps of the decoder.
The invention provides an unmanned aerial vehicle panoramic image denoising method based on an attention mechanism, characterized by comprising the following steps:
step one, adjusting parameters of a panoramic camera on the unmanned aerial vehicle to proper values, and collecting and storing an original panoramic image;
step two, extracting noise from the collected panoramic images, adding it to other clean data sets to form noisy images, and pairing each noisy image with its clean reference image to construct a training image data set for UAV panoramic image denoising;
step three, carrying out histogram equalization and gamma correction on the original panoramic image, and adjusting the brightness and contrast of the original image to obtain a panoramic image after brightness and contrast correction;
step four, carrying out color correction on the panoramic image with corrected brightness and contrast to obtain an image with corrected color, and completing the pretreatment work of all panoramic images;
step five, constructing an attention-based convolutional neural encoder-decoder network and a loss function, importing training images into the denoising network in batches, optimizing the loss function, and performing supervised training;
step six, the denoising test flow is to input the preprocessed panoramic image into a trained denoising network and output the denoised panoramic image.
Preferably, the encoder and decoder use depth separable convolution for the convolution operations, reducing the number of parameters required and accelerating model inference. The encoder performs the forward-propagation computation; the decoder restores the image size through deconvolution, and the residual attention module with multi-scale skip connections stacks feature maps of different encoder and decoder output scales on the channel axis.
Preferably, the encoder comprises five convolution units and the decoder comprises four convolution units.
Preferably, the channel attention module is located after the encoder and before the decoder; the residual attention module is between the convolution units of the encoder and decoder.
Preferably, the complex illumination environment comprises very weak visible light and overexposed visible light.
Preferably, the CMOS camera uses back-illumination, sensitivity enhancement, micro optical wedge, and similar techniques, so that it can image in more extreme complex illumination environments than a conventional CMOS camera.
Preferably, noise extraction selects a smooth region in the image and subtracts the region's mean to obtain a noise block. The training data set is generated by this noise extraction and is input in batches.
Preferably, the image preprocessing comprises image enhancement and color correction: image enhancement corrects over-dark or over-exposed images through histogram equalization and gamma correction, and color correction adopts the gray-world method.
Preferably, training comprises forward-propagation and back-propagation operations: forward propagation computes the feature map of each layer according to the network configuration, and back propagation computes gradients from the final feature map and the loss with respect to the target image and updates the parameters of the preceding convolution layers.
Preferably, the denoising test takes the preprocessed image to be denoised as input and, by loading the trained model and its parameter configuration file, outputs the denoised image directly on the GPU.
The positive effects of the invention are as follows: an attention-based convolutional neural encoder-decoder network is constructed as the denoising network, the convolutions are replaced by depth separable convolutions, a channel attention module and a residual attention module with multi-scale skip connections are added, a denoising network model is trained, and the image to be denoised is fed into the trained network, which outputs the denoised image. According to the requirements of the application scene, the method denoises pictures taken by the panoramic camera on a UAV in complex illumination environments, and solves the problems of existing denoising methods: the noise level must be estimated, the process is time-consuming, and the denoising effect in dark environments is poor.
Drawings
Fig. 1 is a flowchart of an unmanned aerial vehicle panoramic image denoising method based on an attention mechanism.
Fig. 2 is a block diagram of the overall structure of the attention-based codec denoising network according to the present invention.
Fig. 3 is a schematic diagram of a depth separable convolution module of the present invention.
Fig. 4 is a schematic diagram of a channel attention module of the present invention.
Fig. 5 is a schematic diagram of a residual attention module with multi-scale jump connection of the present invention.
Detailed Description
The following describes the embodiments of the present invention clearly and completely with reference to the accompanying drawings; the described embodiments are only some, not all, of the embodiments of the invention.
Traditional image denoising methods generally focus on removing Gaussian noise or ISO noise. The present method uses noise extraction to generate the training data set; because the features learned by deep learning are specific to the training data, the method meets the denoising requirements of UAV panoramic images in real complex illumination environments. Deep learning also generalizes well, and reading data in batches makes full use of the GPU and accelerates image processing. The method thus solves the problems that existing image denoising methods must estimate the noise level, are time-consuming, and perform poorly in dark environments.
As shown in fig. 1, the unmanned aerial vehicle panoramic image denoising method based on the attention mechanism comprises the following steps:
Step one, shooting panoramic images with the panoramic camera on an unmanned aerial vehicle in a complex illumination environment, adjusting camera parameters such as aperture, focal length, and exposure time to proper values, and collecting and storing the original panoramic images;
step two, extracting noise from the collected panoramic images, adding it to other clean data sets to form noisy panoramic images, and pairing each noisy panoramic image with its clean reference panoramic image to construct a training image data set for UAV panoramic image denoising under complex illumination;
step three, performing histogram equalization and gamma correction on the original panoramic image and adjusting its brightness and contrast to obtain a brightness- and contrast-corrected panoramic image;
step four, performing color correction on the brightness- and contrast-corrected panoramic image to obtain a color-corrected panoramic image, completing the preprocessing of all panoramic images;
step five, constructing an attention-based convolutional neural encoder-decoder network and a loss function, importing the training images into the denoising network in batches, optimizing the loss function, and performing supervised training;
step six, the denoising test flow is to input the preprocessed panoramic image into a trained denoising network and output the denoised panoramic image.
To image effectively in complex illumination environments, the CMOS camera used adopts back-illumination, sensitivity enhancement, micro optical wedge, and similar techniques, which enlarge the light-capturing area and improve the photosensitivity of the CMOS sensor. For imaging in even more extreme low-light environments, additional invisible near-infrared illumination may also be used. A noise model of the CMOS camera is obtained from the collected original images by noise extraction: a noise block is obtained by selecting a smooth region in the image and subtracting the region's mean. A smooth block is determined as follows:
λ_low · Mean(P^C) ≤ Mean(p_{i,j}^C) ≤ λ_high · Mean(P^C),  C ∈ {R, G, B} ... (1)

λ_low · Hist(P^C) ≤ Hist(p_{i,j}^C) ≤ λ_high · Hist(P^C),  C ∈ {R, G, B} ... (2)
wherein P is the selected image block, s_1 and s_2 are the length and width of the image block, p is the smaller test block, h_1 and h_2 are the length and width of the test block, C indexes the three color channels, and i and j are the row and column indices of the test blocks. Mean(·) denotes the mean, Hist(·) denotes the histogram, and λ_low and λ_high are the thresholds that determine whether the block is smooth. The noise block is then expressed as:
N = P_k − Mean(P_k) ... (3)
wherein P_k is the selected smooth image block and N is the noise block. The training data set is generated using this noise extraction.
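The smooth-block test and the noise-block formula N = P_k − Mean(P_k) can be sketched in NumPy as follows. This is an illustrative implementation, not the patent's exact procedure: it checks only the per-channel mean criterion (a histogram comparison can be added the same way), and the sub-block count and threshold values are assumed for demonstration.

```python
import numpy as np

def is_smooth(P, sub=2, lam=0.1):
    """Smooth-block test sketch: split the block P (s1 x s2 x 3 channels)
    into sub x sub smaller test blocks p and require every sub-block's
    per-channel mean to stay within lam of the block mean.
    `sub` and `lam` are illustrative values, not taken from the patent."""
    s1, s2, _ = P.shape
    h1, h2 = s1 // sub, s2 // sub
    block_mean = P.mean(axis=(0, 1))        # per-channel mean, shape (3,)
    for i in range(sub):
        for j in range(sub):
            p = P[i * h1:(i + 1) * h1, j * h2:(j + 1) * h2]
            if np.any(np.abs(p.mean(axis=(0, 1)) - block_mean)
                      > lam * (block_mean + 1e-8)):
                return False
    return True

def noise_block(P):
    """N = P - Mean(P): subtract the per-channel mean of a smooth block."""
    return P - P.mean(axis=(0, 1))
```

A uniform block passes the test and yields an all-zero noise block; a block split between dark and bright halves fails it, so only genuinely flat regions contribute noise samples.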
Image preprocessing is then performed on the original image: image enhancement corrects over-dark or over-exposed images through histogram equalization and gamma correction, and image color is corrected by the gray-world method.
As shown in fig. 2, the overall structure of the attention mechanism-based codec denoising network of the present invention includes:
the size of the input image is 128 x 128, and the convolutional neural network codec structure includes an encoder and a decoder. The encoder performs forward propagation by convolving and activating the image to complete extraction of image features and reduce the amount of computation. The encoder comprises five layers of convolution units:
the first layer convolution unit includes three 3x3 size, 2 step size, "same" fill-wise depth separable convolutions and ReLU activations followed by 2 x 2 size max pooled downsampling.
The second layer convolution unit includes three 3x3 size, 2 step size, "sarne" fill-style depth separable convolutions and ReLU activations followed by 2 x 2 size maximum pooled downsampling.
The third layer of convolution units consists of three 3x3 size, 2 step size, "sarne" fill-wise depth separable convolutions and ReLU activations followed by 2 x 2 size max pooled downsampling.
The fourth layer convolution unit includes three 3x3 size, 2 step size, "sarne" fill-style depth separable convolutions and ReLU activations followed by 2 x 2 size maximum pooled downsampling.
The fifth convolution unit includes a 3x3 size twice, a step size of 2, a depth separable convolution of the "sanne" fill mode, and a ReLU activation.
The encoded feature map is fed into the channel attention module and then into the decoder. The decoder gradually restores the image size through deconvolution, convolution, and activation: deconvolution upsamples the features, and convolution and activation gradually recover the denoised image, which is finally output. The decoder comprises four convolution units:
the first layer of convolution units includes 2 x 2 deconvolution upsampling followed by two layers of 3x3 size, step size 2, "same" fill-wise deep separable convolution and ReLU activation.
The second layer convolution unit includes 2 x 2 deconvolution upsampling followed by two layers of 3x3 size, step size 2, "same" fill-wise depth separable convolution and ReLU activation.
The third layer of convolution units includes 2 x 2 deconvolution upsampling followed by two layers of 3x3 size, step size 2, "same" fill-wise depth separable convolution and ReLU activation.
The fourth layer convolution unit includes 2 x 2 deconvolution upsampling followed by two layers of 3x3 size, step size 2, "same" fill-wise depth separable convolution and ReLU activation.
A residual attention module with multi-scale skip connections is added between the convolution units of the encoder and decoder to connect feature maps at multiple scales; the final output image size is 128×128.
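The units above list stride 2 for the separable convolutions, but for the stated 128×128 → 128×128 round trip to hold, only the pooling and deconvolution steps can change the resolution. The sketch below therefore assumes the convolutions preserve size ("same" padding, effectively stride 1) and traces the spatial size through the four pooling stages (the fifth encoder unit has no pooling) and the four upsampling stages:

```python
import numpy as np

def max_pool2(x):
    """2x2 max-pooling downsample of an (H, W, C) feature map."""
    h, w, c = x.shape
    return x.reshape(h // 2, 2, w // 2, 2, c).max(axis=(1, 3))

def upsample2(x):
    """2x2 nearest-neighbour upsampling, standing in for deconvolution."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def encoder_decoder_sizes(size=128, stages=4):
    """Trace the spatial size: four halvings in the encoder, then four
    doublings in the decoder, returning every intermediate size."""
    sizes = [size]
    for _ in range(stages):
        sizes.append(sizes[-1] // 2)   # encoder: max-pool halves the map
    for _ in range(stages):
        sizes.append(sizes[-1] * 2)    # decoder: deconvolution doubles it
    return sizes
```

Under this assumption the trace is 128 → 64 → 32 → 16 → 8 at the bottleneck, then back up to 128, matching the input and output sizes given in the description.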
As shown in fig. 3, the calculation method of the depth separable convolution module of the present invention includes:
the method comprises the steps of deep convolution and point-by-point convolution, wherein the deep convolution uses a convolution kernel to convolve each channel of an input feature map, and then channel stacking is carried out on the convolved results. If the input feature map has M channels in total, the depth convolution contains M3×3 convolution kernels in total, each convolution kernel convolves the feature map of only one channel, and then stacks all the convolved results. The point-to-point convolution adjusts the channel number of the feature map according to the number of the convolution kernels, and fuses the links among the channels. If the channel of the output feature map is N, N convolution kernels of 1×1 are used for point-by-point convolution, each convolution kernel is convolved with all channels of the input feature map, and the results are added to obtain one channel of the output feature map, so that N channels are completed in total. The depth separable convolution can greatly reduce the network parameter quantity and accelerate the model operation speed;
as shown in fig. 4, the channel attention module of the present invention includes:
the method comprises the steps of carrying out maximum pooling and average pooling calculation on an input feature map, inputting a shared MLP to obtain two feature maps with weights, setting the reduction ratio of the shared MLP to be 16, and finally multiplying the feature map with the weights by an original feature map to give weights to the feature map on a channel.
As shown in fig. 5, the residual attention module with multi-scale jump connection of the present invention comprises:
the device comprises a common convolution module, a residual connection module and a channel attention module, wherein the common convolution module is used for carrying out common convolution on a feature image, the residual connection is carried out on the feature image and an input feature image, the output result is subjected to downsampling of different multiplying powers, and the downsampled feature image and the feature image of a decoder are subjected to multi-scale channel stacking.
In the UAV panoramic image denoising method, the training data set is input in batches to the attention-based encoder-decoder denoising network for supervised training, using the L_1 loss as the loss function, given in formula (4):
L_1 = |I_GT − I_pre| ... (4)
wherein I_GT is the reference image and I_pre is the predicted image. Gradients are computed from the final feature map and the loss with respect to the target image and back-propagated, updating the parameters of the preceding convolutions. The batch size is set to 4, the ratio of training set to validation set is 8:2, 30 epochs are trained in total, and the learning rate is set to 0.0001. The Adam optimizer is used, with parameters β_1 = 0.9, β_2 = 0.999, and ε = 1×10⁻⁸.
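The L_1 loss and one Adam update with the stated hyperparameters can be sketched as follows; this is a generic, self-contained NumPy version of the standard Adam rule, not the network's actual training loop.

```python
import numpy as np

def l1_loss(pred, ref):
    """L_1 = |I_GT - I_pre|, averaged over all pixels."""
    return np.abs(ref - pred).mean()

def init_adam(shape):
    """Fresh Adam state: step counter and first/second moment estimates."""
    return {"t": 0, "m": np.zeros(shape), "v": np.zeros(shape)}

def adam_step(param, grad, state, lr=1e-4, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update with the hyperparameters given in the description."""
    state["t"] += 1
    state["m"] = beta1 * state["m"] + (1 - beta1) * grad        # 1st moment
    state["v"] = beta2 * state["v"] + (1 - beta2) * grad ** 2   # 2nd moment
    m_hat = state["m"] / (1 - beta1 ** state["t"])              # bias correction
    v_hat = state["v"] / (1 - beta2 ** state["t"])
    return param - lr * m_hat / (np.sqrt(v_hat) + eps)
```

On the first step the bias-corrected update reduces to roughly lr · sign(grad), so with the stated learning rate of 0.0001 each parameter moves by about 10⁻⁴ regardless of gradient magnitude.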
In the denoising test, the preprocessed image to be denoised is input into the trained denoising network, and the denoised image is output.
According to the invention, a training data set is constructed from UAV panoramic images, and an attention-based convolutional neural encoder-decoder network is used as the denoising network. According to the requirements of the application scene, panoramic pictures shot by the panoramic camera on a UAV under complex illumination are denoised, solving the problems that existing denoising algorithms must estimate the noise level, are time-consuming, and perform poorly in complex illumination environments.
The invention can be combined with subsequent computer-vision methods, such as target detection and semantic segmentation, according to actual demand, further improving the accuracy of high-level computer-vision recognition algorithms. The above embodiments merely illustrate the invention and are not intended to limit its scope; those skilled in the art can make various modifications and variations without departing from the spirit of the invention, and such modifications and variations fall within the scope defined by the appended claims.

Claims (10)

1. An unmanned aerial vehicle panoramic image denoising method based on an attention mechanism is characterized by comprising the following steps:
the convolutional neural network encoder-decoder structure comprises an encoder and a decoder; the encoder comprises five convolution units that perform convolution and activation on the image to extract image features, with pooling operations reducing the number of parameters; the decoder comprises four convolution units that upsample the features by deconvolution to gradually restore the image size, then repeat convolution and activation to gradually recover the denoised image;
the depth separable convolution comprises a depthwise convolution and a pointwise convolution: the depthwise convolution convolves each channel of the input feature map with its own kernel and then stacks all the results; the pointwise convolution adjusts the channel number of the feature map according to the number of kernels and better fuses the relations between channels;
the channel attention module comprises average pooling, maximum pooling, a shared MLP (multi-layer perceptron), feature-map summation, activation, and feature-map multiplication: maximum pooling and average pooling are computed on the input feature map, both results pass through the shared MLP, the two weighted outputs are summed and activated, and finally the weighted map is multiplied with the original feature map, assigning channel-wise weights to the feature map;
the residual attention module with multi-scale skip connections comprises an ordinary convolution, a residual connection, and a channel attention module: the input feature map passes through one ordinary convolution and then the channel attention module, the result is residually connected with the input feature map, and finally the result is stacked channel-wise at multiple scales with the feature maps of the decoder.
2. An unmanned aerial vehicle panoramic image denoising method based on the attention mechanism, characterized by comprising the following steps:
step one, adjusting parameters of a panoramic camera on an unmanned aerial vehicle to proper values, and collecting and storing an original panoramic image;
step two, extracting noise from the collected panoramic images, adding it to other clean data sets to form noisy images, and pairing each noisy image with its clean reference image to construct a training data set for panoramic image denoising;
step three, performing histogram equalization and gamma correction on the original panoramic image and adjusting its brightness and contrast to obtain a brightness- and contrast-corrected panoramic image;
step four, performing color correction on the brightness- and contrast-corrected panoramic image to obtain a color-corrected image, completing the preprocessing of all panoramic images;
step five, constructing an attention-based convolutional neural encoder-decoder network and a loss function, importing the training panoramic images into the denoising network in batches, optimizing the loss function, and performing supervised training;
step six, the denoising test flow is to input the preprocessed panoramic image into a trained denoising network and output the denoised panoramic image.
3. The unmanned aerial vehicle panoramic image denoising method based on the attention mechanism of claim 1, wherein the encoder and decoder use depth separable convolution for the convolution operations, reducing the number of parameters required and accelerating model inference; the channel attention module is located after the encoder and before the decoder; the residual attention module connects the feature maps of the encoder and decoder, its result is downsampled at different magnifications, and feature maps of different scales are connected on the channel axis.
4. The unmanned aerial vehicle panoramic image denoising method based on the attention mechanism of claim 2, wherein the complex illumination environment comprises weak visible light and overexposed visible light.
5. The unmanned aerial vehicle panoramic image denoising method based on the attention mechanism of claim 2, wherein the noise extraction is used to generate a training data set, which is input in a batch manner.
6. The unmanned aerial vehicle panoramic image denoising method based on the attention mechanism of claim 2, wherein the image preprocessing operation comprises image enhancement and color correction processing.
7. The unmanned aerial vehicle panoramic image denoising method based on the attention mechanism of claim 2, wherein the training comprises forward-propagation and back-propagation operations: the forward pass computes the feature map of each layer according to the network configuration, and the backward pass computes gradients from the loss between the final feature map and the target image and updates the parameters of the preceding convolution layers accordingly.
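The forward/backward cycle described in claim 7 can be illustrated with a single linear layer trained by gradient descent on a mean-squared-error loss. This is a toy stand-in for the patent's convolutional network, not its actual training code; all sizes and the learning rate are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 4))           # batch of inputs
target = rng.normal(size=(8, 2))      # targets (stand-in for the clean image)
w = rng.normal(size=(4, 2)) * 0.1     # parameters of one linear "layer"

losses = []
for step in range(100):
    pred = x @ w                                     # forward pass: layer output
    loss = np.mean((pred - target) ** 2)             # loss against the target
    losses.append(loss)
    grad_w = 2 * x.T @ (pred - target) / pred.size   # backward pass: dLoss/dw
    w -= 0.1 * grad_w                                # update the layer's parameters
```

In the real network the same pattern repeats layer by layer, with the chain rule carrying gradients from the loss back through each convolution.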
8. The unmanned aerial vehicle panoramic image denoising method based on the attention mechanism of claim 2, wherein the denoising test flow preprocesses the input panoramic image to be denoised, loads the trained model and its parameter configuration file, and computes the denoised panoramic image directly in parallel on the GPU.
9. The unmanned aerial vehicle panoramic image denoising method based on the attention mechanism of claim 6, wherein the noise extraction obtains a noise block by selecting a smooth region in the image and subtracting the mean value of that region.
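The noise-block extraction of claim 9 amounts to mean-subtraction over a smooth patch. A minimal sketch, assuming the smooth region has already been located (region selection criteria are not specified in the claim):

```python
import numpy as np

def extract_noise_block(image, top, left, size):
    """Extract a zero-mean noise block from a smooth square region.

    In a smooth region the underlying signal is approximately constant,
    so subtracting the region mean leaves (mostly) the noise component.
    """
    region = image[top:top + size, left:left + size].astype(np.float64)
    return region - region.mean()
```

Such blocks can then be added to clean patches to synthesize noisy/clean training pairs, as claim 5 describes.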
10. The unmanned aerial vehicle panoramic image denoising method based on the attention mechanism of claim 7, wherein the image enhancement corrects images that are too dark or overexposed through histogram equalization and gamma correction, and the color correction adopts the gray-world method.
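The gray-world method named in claim 10 assumes the average color of a scene is neutral gray, so each channel is rescaled until its mean matches the overall mean. An illustrative NumPy sketch (the patent gives no implementation details):

```python
import numpy as np

def gray_world_correct(img):
    """Gray-world white balance on an H x W x 3 uint8 image.

    Scales each channel by (global mean / channel mean) so all three
    channel means coincide, removing a global color cast.
    """
    img = img.astype(np.float64)
    channel_means = img.reshape(-1, 3).mean(axis=0)
    gray = channel_means.mean()
    corrected = img * (gray / channel_means)   # broadcast per-channel gain
    return np.round(np.clip(corrected, 0, 255)).astype(np.uint8)
```

Clipping can slightly bias bright channels, which is why gray-world correction is usually applied before, not after, strong contrast stretching.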
CN202310221981.1A 2023-03-09 2023-03-09 Unmanned aerial vehicle panoramic image denoising method based on attention mechanism Pending CN116229081A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310221981.1A CN116229081A (en) 2023-03-09 2023-03-09 Unmanned aerial vehicle panoramic image denoising method based on attention mechanism

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310221981.1A CN116229081A (en) 2023-03-09 2023-03-09 Unmanned aerial vehicle panoramic image denoising method based on attention mechanism

Publications (1)

Publication Number Publication Date
CN116229081A true CN116229081A (en) 2023-06-06

Family

ID=86580365

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310221981.1A Pending CN116229081A (en) 2023-03-09 2023-03-09 Unmanned aerial vehicle panoramic image denoising method based on attention mechanism

Country Status (1)

Country Link
CN (1) CN116229081A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116681618A (en) * 2023-06-13 2023-09-01 强联智创(北京)科技有限公司 Image denoising method, electronic device and storage medium

Similar Documents

Publication Publication Date Title
CN115442515B (en) Image processing method and apparatus
CN111968044B (en) Low-illumination image enhancement method based on Retinex and deep learning
CN112233038B (en) True image denoising method based on multi-scale fusion and edge enhancement
CN111311629B (en) Image processing method, image processing device and equipment
CN109410127B (en) Image denoising method based on deep learning and multi-scale image enhancement
CN111402146B (en) Image processing method and image processing apparatus
CN112541877B (en) Defuzzification method, system, equipment and medium for generating countermeasure network based on condition
KR20210114856A (en) Systems and methods for image denoising using deep convolutional neural networks
CN111079764B (en) Low-illumination license plate image recognition method and device based on deep learning
CN112164011B (en) Motion image deblurring method based on self-adaptive residual error and recursive cross attention
CN112348747A (en) Image enhancement method, device and storage medium
CN113450290B (en) Low-illumination image enhancement method and system based on image inpainting technology
Wang et al. MAGAN: Unsupervised low-light image enhancement guided by mixed-attention
CN110428389B (en) Low-light-level image enhancement method based on MSR theory and exposure fusion
WO2022133194A1 (en) Deep perceptual image enhancement
CN113129236B (en) Single low-light image enhancement method and system based on Retinex and convolutional neural network
Moghimi et al. Real-time underwater image resolution enhancement using super-resolution with deep convolutional neural networks
CN116229081A (en) Unmanned aerial vehicle panoramic image denoising method based on attention mechanism
CN113379861B (en) Color low-light-level image reconstruction method based on color recovery block
CN115035011A (en) Low-illumination image enhancement method for self-adaptive RetinexNet under fusion strategy
CN117351216B (en) Image self-adaptive denoising method based on supervised deep learning
Saleem et al. A non-reference evaluation of underwater image enhancement methods using a new underwater image dataset
CN113096023A (en) Neural network training method, image processing method and device, and storage medium
Song et al. Dual-model: Revised imaging network and visual perception correction for underwater image enhancement
CN117392036A (en) Low-light image enhancement method based on illumination amplitude

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination