CN113240589A - Image defogging method and system based on multi-scale feature fusion - Google Patents

Image defogging method and system based on multi-scale feature fusion Download PDF

Info

Publication number
CN113240589A
CN113240589A CN202110356145.5A
Authority
CN
China
Prior art keywords
convolutional
module
output
network
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110356145.5A
Other languages
Chinese (zh)
Inventor
彭德光
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Megalight Technology Co ltd
Original Assignee
Chongqing Megalight Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Megalight Technology Co ltd filed Critical Chongqing Megalight Technology Co ltd
Priority to CN202110356145.5A priority Critical patent/CN113240589A/en
Publication of CN113240589A publication Critical patent/CN113240589A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • G06T5/73
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features

Abstract

The invention provides an image defogging method and system based on multi-scale feature fusion, comprising the following steps: constructing a convolutional coding network, wherein the convolutional coding network is provided with a plurality of convolutional coding modules connected in series in sequence, the final output of each convolutional coding module is the union of the inputs of all preceding convolutional coding modules and the output of the current convolutional coding module, and the multi-scale features of the input foggy image are obtained through the convolutional coding network; constructing a convolutional decoding network, wherein the convolutional decoding network is provided with a plurality of convolutional decoding modules connected in series in sequence, the output of the convolutional coding module with the same output resolution is added into the output sequence of each convolutional decoding module, and the final output of each convolutional decoding module is the union of the output sequence of the previous convolutional decoding module and the output sequence of the current convolutional decoding module; and receiving the multi-scale feature input through the convolutional decoding network to obtain a reconstructed image. The invention ensures both defogging performance and generality.

Description

Image defogging method and system based on multi-scale feature fusion
Technical Field
The invention relates to the field of image processing, in particular to a multi-scale feature fusion image defogging method and system.
Background
At present, image defogging methods fall into two main categories: traditional methods and deep learning methods. Traditional methods are further divided into defogging algorithms based on image enhancement and defogging algorithms based on image restoration. Image-enhancement-based defogging algorithms remove image noise as far as possible and improve the image contrast, thereby recovering a fog-free, clear image. Image-restoration-based defogging algorithms generally perform the corresponding defogging processing on the foggy image based on an atmospheric degradation model. Deep-learning-based methods use deep neural networks, including convolutional neural networks and recurrent neural networks, for defogging. The prior art has the following disadvantages: 1. Image-enhancement-based defogging algorithms have poor generality and a poor defogging effect, and over-enhancement easily makes the image look unnatural; image-restoration-based defogging algorithms rely heavily on prior knowledge to estimate the unknown parameters of the atmospheric model, which is very difficult and generally cannot guarantee the defogging effect. 2. The traditional convolutional neural network structures used in deep learning methods cannot complete an image-to-image conversion process.
Disclosure of Invention
In view of the problems in the prior art, the invention provides an image defogging method and system based on multi-scale feature fusion, which mainly solve the problem that existing methods have a poor defogging effect.
In order to achieve the above and other objects, the present invention adopts the following technical solutions.
A multi-scale feature fusion image defogging method comprises the following steps:
constructing a convolutional coding network, wherein the convolutional coding network is provided with a plurality of convolutional coding modules connected in series in sequence, the final output of each convolutional coding module is the union of the inputs of all preceding convolutional coding modules and the output of the current convolutional coding module, and the multi-scale features of the input foggy image are obtained through the convolutional coding network;
constructing a convolutional decoding network, wherein the convolutional decoding network is provided with a plurality of convolutional decoding modules which are sequentially connected in series, the output of the convolutional coding module is added into the output sequence of the convolutional decoding module with the same output resolution, and the final output of each convolutional decoding module is the union of the output sequence of the previous convolutional decoding module and the output sequence of the current convolutional decoding module; and receiving the multi-scale characteristic input through the convolutional decoding network to obtain a reconstructed image.
Optionally, the feature map obtained by each convolutional coding module is down-sampled to obtain the output of the convolutional coding module.
Optionally, each of the convolutional coding modules includes a plurality of residual modules, and the final output of each residual module is the sum of the output of the previous residual module and the output of the current residual module.
Optionally, the input of each residual module is processed by a convolution layer, a batch normalization layer and an activation function to obtain the output of the current residual module.
Optionally, the output sequence of each of the convolutional decoding modules is up-sampled and then used as the input of the next convolutional decoding module.
Optionally, in a training stage, a loss function of the convolutional decoding network is constructed according to the multi-dimensional loss difference between the reconstructed image and the original image, and parameters of the convolutional coding network and the convolutional decoding network are updated through back propagation.
Optionally, the multi-dimensional loss difference comprises: mean absolute error loss, perceptual loss and mean square error loss;
the loss function is expressed as:
loss = λ1·MAE(x, out) + λ2·MSE(x, out) + λ3·PL(x, out)
where MAE is the mean absolute error loss, MSE is the mean square error loss, PL is the perceptual error, and λ1, λ2, λ3 are the weights of the respective loss terms.
Optionally, the perceptual error is expressed as:

PL(x, out) = (1/(C′jH′jW′j))·‖φj(x) − φj(out)‖²

where φj denotes the feature extraction performed by the jth convolutional coding module, and C′j, H′j, W′j denote the dimensions of the feature map extracted from the jth convolutional coding module of the convolutional coding network.
Optionally, before the foggy image is input into the convolutional coding network, the foggy image is normalized as follows:

xnorm = (x − mean)/std

where x is the input pixel value matrix of the foggy image and mean is the pixel value mean matrix of the image; std is calculated as follows:

std = max(σ, 1/√N)

where σ denotes the standard deviation of the pixel values, and N denotes the number of pixels of the input foggy image x.
A multi-scale feature fused image defogging system comprising:
the system comprises a coding module for constructing a convolutional coding network, wherein the convolutional coding network is provided with a plurality of convolutional coding modules connected in series in sequence, the final output of each convolutional coding module is the union of the inputs of all preceding convolutional coding modules and the output of the current convolutional coding module, and the multi-scale features of the input foggy image are obtained through the convolutional coding network;
the decoding module is used for constructing a convolutional decoding network, the convolutional decoding network is provided with a plurality of convolutional decoding modules which are sequentially connected in series, the output of the convolutional coding module is added into the output sequence of the convolutional decoding module with the same output resolution, and the final output of each convolutional decoding module is the union of the output sequence of the previous convolutional decoding module and the output sequence of the current convolutional decoding module; and receiving the multi-scale characteristic input through the convolutional decoding network to obtain a reconstructed image.
As described above, the image defogging method and system based on multi-scale feature fusion of the invention have the following advantages.
Through the image-to-image conversion process, the defogging effect is ensured while generality is maintained.
Drawings
Fig. 1 is a schematic flow chart of an image defogging method based on multi-scale feature fusion according to an embodiment of the invention.
Fig. 2 is a schematic structural diagram of a convolutional encoding network and a convolutional decoding network in an embodiment of the present invention.
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict.
It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present invention, and the components related to the present invention are only shown in the drawings rather than drawn according to the number, shape and size of the components in actual implementation, and the type, quantity and proportion of the components in actual implementation may be changed freely, and the layout of the components may be more complicated.
The invention provides an image defogging method based on multi-scale feature fusion.
The method specifically comprises the following steps:
referring to fig. 1, in step 01, an input image may be collected in advance; specifically, the original input image may be captured by any image capturing device, and the format of the input image may be a 24-bit-depth ". JPG" or ". PNG" file.
The formats of images acquired by different camera devices are different, and when the image formats do not meet the requirements, the images can be converted into the formats meeting the requirements through related software and application programs.
In step 02, the acquired foggy image is normalized to balance the distribution of the pixel values of each color component in the foggy image and to ensure that the convolutional neural network converges correctly during the training stage. The normalization is calculated as follows:

xnorm = (x − mean)/std    (1)

where x is the input pixel value matrix of the foggy image, and mean is the pixel value mean matrix of the foggy image.

std is calculated as follows:

std = max(σ, 1/√N)    (2)

where σ denotes the standard deviation of the pixel values and N denotes the number of pixels of the input foggy image x.
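The normalization above can be sketched in NumPy. This is a minimal sketch, not the patent's implementation; in particular the form std = max(σ, 1/√N) is an assumption inferred from the mention of σ and N (the common "adjusted" standard deviation, which avoids division by near-zero values on flat images):

```python
import numpy as np

def normalize(x: np.ndarray) -> np.ndarray:
    """Standardize a foggy image per color component (step 02, sketch)."""
    mean = x.mean(axis=(0, 1), keepdims=True)   # per-channel pixel-value mean matrix
    sigma = x.std(axis=(0, 1), keepdims=True)   # per-channel standard deviation
    n = x.shape[0] * x.shape[1]                 # number of pixels N
    std = np.maximum(sigma, 1.0 / np.sqrt(n))   # assumed adjusted std: max(sigma, 1/sqrt(N))
    return (x - mean) / std
```

After this step each color component has approximately zero mean and unit variance, which stabilizes convergence during training.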
In step 03, multi-scale feature extraction is performed on the normalized image using a convolutional coding network.
In one embodiment, a convolutional coding network is constructed, wherein the convolutional coding network is provided with a plurality of convolutional coding modules connected in series in sequence, the final output of each convolutional coding module is the union of the inputs of all preceding convolutional coding modules and the output of the current convolutional coding module, and the multi-scale features of the input foggy image are obtained through the convolutional coding network.
Specifically, after the normalization of the image is completed, feature extraction is performed using the convolutional coding network. As shown in fig. 2, the convolutional coding network is composed of a plurality of convolutional coding modules. The number, size, stride, padding mode and padding size of the convolution kernels can be set as needed.
In step 04, the feature map obtained by each convolutional coding module is down-sampled to obtain the output of the convolutional coding module.
Specifically, let x_l denote the output of the l-th convolutional coding module and H_l denote the l-th convolutional coding module, where DOWN uses a strided convolution on the feature map to reduce its resolution and increase its number of channels. The output of the l-th layer, x_l, is then:

x_l = DOWN(H_l([x_0, x_1, …, x_{l−1}]))    (3)
Each convolutional coding module H_l comprises a plurality of residual modules, and the final output of each residual module is the sum of the output of the previous residual module and the output of the current residual module, calculated as follows:

r_{i+1} = r_i + F(r_i, W_i)    (4)

where r_i denotes the output of the i-th residual module, W_i is the weight matrix of the convolution kernel, and F applies convolution, batch normalization and activation to the input feature, calculated as follows:
F(r_i, W_i)^n = ACT(BN(Σ_m r_i^m · W_i^{m,n} + b^n))    (5)

where r_i^m denotes the m-th input feature map of the i-th residual module, b^n denotes the bias of the convolution kernel, m is the index of the local receptive field, n indexes the different feature maps required in the same layer, BN denotes batch normalization, and ACT is the activation function.
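To make equations (3)–(5) concrete, here is a shape-level sketch in NumPy. The convolution, batch normalization and activation inside each residual module are stubbed out (a ReLU stands in for ACT(BN(conv(·)))), and DOWN is stubbed as subsampling plus channel doubling; these stubs are assumptions for illustration, not the patent's actual layers:

```python
import numpy as np

def residual_module(r: np.ndarray) -> np.ndarray:
    # eq. (4): r_{i+1} = r_i + F(r_i, W_i); F is stubbed as a ReLU here
    f = np.maximum(r, 0.0)                  # stand-in for ACT(BN(conv(r)))
    return r + f

def down(x: np.ndarray) -> np.ndarray:
    # DOWN stub: halve the resolution, double the channel count
    pooled = x[:, ::2, ::2]
    return np.concatenate([pooled, pooled], axis=0)

def encoding_module(inputs: list) -> np.ndarray:
    """eq. (3): x_l = DOWN(H_l([x_0, ..., x_{l-1}])).
    All inputs must already share one resolution (channel-wise union)."""
    h = np.concatenate(inputs, axis=0)      # the "union" of previous outputs
    for _ in range(2):                      # H_l: a few residual modules
        h = residual_module(h)
    return down(h)

x0 = np.zeros((3, 32, 32))                  # normalized foggy image, (C, H, W)
x1 = encoding_module([x0])                  # -> (6, 16, 16)
x2 = encoding_module([down(x0), x1])        # dense link from x0, resized by DOWN
```

Because each module receives the union of all earlier outputs, fine low-level features remain available deep in the network; the sketch resizes earlier outputs with DOWN so that the channel-wise concatenation is well defined.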
In step 05, the multi-scale features are reconstructed using a convolutional decoding network.
in one embodiment, a convolutional decoding network is constructed, the convolutional decoding network is provided with a plurality of convolutional decoding modules which are sequentially connected in series, the output of the convolutional coding module is added into the output sequence of the convolutional decoding module with the same output resolution, and the final output of each convolutional decoding module is the union of the output sequence of the previous convolutional decoding module and the output sequence of the current convolutional decoding module; and receiving the multi-scale characteristic input through the convolutional decoding network to obtain a reconstructed image.
Specifically, the convolutional decoding network likewise comprises a plurality of convolutional decoding modules, each of which can be expressed as:

y_l = UP(H_l([y_0, y_1, …, y_{l−1}, x_{l′}]))    (6)

where y_l denotes the output of the l-th convolutional decoding module, x_{l′} denotes the feature map of the same resolution from the l′-th convolutional coding module, and UP combines the channels of the feature map and upsamples it.
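Equation (6) can be sketched the same way. UP is stubbed here as pairwise channel merging plus nearest-neighbour upsampling (an assumption for illustration), and the same-resolution encoder feature map x_{l′} enters as a skip connection:

```python
import numpy as np

def up(x: np.ndarray) -> np.ndarray:
    # UP stub: merge channel pairs, then double the spatial resolution
    merged = x[0::2] + x[1::2]              # requires an even channel count
    return merged.repeat(2, axis=1).repeat(2, axis=2)

def decoding_module(prev_outputs: list, skip: np.ndarray) -> np.ndarray:
    """eq. (6): y_l = UP(H_l([y_0, ..., y_{l-1}, x_{l'}]))."""
    h = np.concatenate(list(prev_outputs) + [skip], axis=0)
    h = h + np.maximum(h, 0.0)              # H_l stub (one residual module)
    return up(h)

skip_deep = np.zeros((8, 8, 8))             # encoder feature map at 8x8
y1 = decoding_module([], skip_deep)         # -> (4, 16, 16)
skip_mid = np.zeros((4, 16, 16))            # encoder feature map at 16x16
y2 = decoding_module([y1], skip_mid)        # -> (4, 32, 32)
```

Feeding every earlier decoder output plus the matching encoder feature map into each decoding module is what lets the decoder reuse the multi-scale information captured during encoding.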
In step 06, the convolutional decoding network will output a fog-free image through the foregoing steps.
In one embodiment, in a training stage, a loss function of a convolutional decoding network is constructed according to multi-dimensional loss difference between a reconstructed fog-free image and an original image, and parameters of the convolutional coding network and the convolutional decoding network are updated through back propagation;
specifically, in order for the deep neural network to correctly predict the result, the multidimensional loss difference may include a mean square error loss, a mean absolute error loss, and a perceptual loss. Calculating the difference between the reconstructed image and the original clear image through the multi-dimensional loss difference, and performing back propagation to update each parameter in the network
The output of the network is denoted as out, the loss function of the network can be expressed as follows:
loss = λ1·MAE(x, out) + λ2·MSE(x, out) + λ3·PL(x, out)    (7)

where MAE is the mean absolute error loss, MSE is the mean square error loss, PL is the perceptual error, and λ1, λ2, λ3 are the weights of the respective loss terms.
The calculation process of PL is as follows:

PL(x, out) = (1/(C′jH′jW′j))·‖φj(x) − φj(out)‖²    (8)

where φj denotes the feature extraction performed by the jth convolutional coding module, and C′j, H′j, W′j denote the dimensions of the feature map extracted from the jth convolutional coding module of the convolutional coding network.
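A minimal NumPy sketch of the combined loss of equations (7)–(8). The feature extractor φj is stubbed as a 2×2 average pooling so the example stays self-contained (an assumption; in the patent it is the j-th convolutional coding module):

```python
import numpy as np

def phi(img: np.ndarray) -> np.ndarray:
    # stand-in for phi_j: 2x2 average pooling instead of encoder features
    return 0.25 * (img[:, ::2, ::2] + img[:, 1::2, ::2]
                   + img[:, ::2, 1::2] + img[:, 1::2, 1::2])

def total_loss(x, out, lam=(1.0, 1.0, 0.1)):
    """eq. (7): loss = l1*MAE + l2*MSE + l3*PL, with PL per eq. (8)."""
    mae = np.abs(x - out).mean()                  # mean absolute error
    mse = ((x - out) ** 2).mean()                 # mean square error
    fx, fo = phi(x), phi(out)
    pl = ((fx - fo) ** 2).sum() / fx.size         # 1/(C'H'W') * ||.||^2
    return lam[0] * mae + lam[1] * mse + lam[2] * pl
```

During training the gradient of this scalar is back-propagated to update the parameters of both the convolutional coding network and the convolutional decoding network.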
The foggy image is input to the trained convolutional coding network and convolutional decoding network, and forward propagation through the neural network yields the defogged image.
The embodiment also provides a multi-scale feature fusion image defogging system, which is used for executing the multi-scale feature fusion image defogging method in the method embodiment. Since the technical principle of the system embodiment is similar to that of the method embodiment, repeated description of the same technical details is omitted.
In one embodiment, the image defogging system based on multi-scale feature fusion comprises a coding module for constructing a convolutional coding network, wherein the convolutional coding network is provided with a plurality of convolutional coding modules connected in series in sequence, the final output of each convolutional coding module is the union of the inputs of all preceding convolutional coding modules and the output of the current convolutional coding module, and the multi-scale features of the input foggy image are obtained through the convolutional coding network;
and a decoding module for constructing a convolutional decoding network, wherein the convolutional decoding network is provided with a plurality of convolutional decoding modules connected in series in sequence, the output of the convolutional coding module with the same output resolution is added into the output sequence of each convolutional decoding module, and the final output of each convolutional decoding module is the union of the output sequence of the previous convolutional decoding module and the output sequence of the current convolutional decoding module; the multi-scale feature input is received through the convolutional decoding network to obtain a reconstructed image.
In summary, the image defogging method and system based on multi-scale feature fusion of the invention provide an end-to-end image defogging process that needs neither complicated preprocessing steps nor prior estimation of parameters; in the convolutional decoding process, the multi-scale feature information from the encoding process is used to reconstruct the image, which greatly improves the expression capability of the convolutional decoding network and thus the defogging performance; and the defogging network, trained on massive data, does not depend on any prior knowledge, so the model has high generality. Therefore, the invention effectively overcomes various defects in the prior art and has high industrial utilization value.
The foregoing embodiments are merely illustrative of the principles and utilities of the present invention and are not intended to limit the invention. Any person skilled in the art can modify or change the above-mentioned embodiments without departing from the spirit and scope of the present invention. Accordingly, it is intended that all equivalent modifications or changes which can be made by those skilled in the art without departing from the spirit and technical spirit of the present invention be covered by the claims of the present invention.

Claims (10)

1. A multi-scale feature fusion image defogging method is characterized by comprising the following steps:
constructing a convolutional coding network, wherein the convolutional coding network is provided with a plurality of convolutional coding modules connected in series in sequence, the final output of each convolutional coding module is the union of the inputs of all preceding convolutional coding modules and the output of the current convolutional coding module, and the multi-scale features of the input foggy image are obtained through the convolutional coding network;
constructing a convolutional decoding network, wherein the convolutional decoding network is provided with a plurality of convolutional decoding modules which are sequentially connected in series, the output of the convolutional coding module is added into the output sequence of the convolutional decoding module with the same output resolution, and the final output of each convolutional decoding module is the union of the output sequence of the previous convolutional decoding module and the output sequence of the current convolutional decoding module; and receiving the multi-scale characteristic input through the convolutional decoding network to obtain a reconstructed image.
2. The method according to claim 1, wherein the feature map obtained by each convolutional coding module is down-sampled to obtain the output of the convolutional coding module.
3. The method according to claim 2, wherein each of the convolutional coding modules comprises a plurality of residual modules, and the final output of each residual module is the sum of the output of the previous residual module and the output of the current residual module.
4. The method according to claim 3, wherein the input of each residual module is processed by a convolutional layer, a batch normalization layer and an activation function to obtain the output of the current residual module.
5. The multi-scale feature fused image defogging method according to claim 1, wherein the output sequence of each convolution decoding module is up-sampled and then used as the input of the next convolution decoding module.
6. The method according to claim 1, wherein in a training phase, a loss function of the convolutional decoding network is constructed according to the multi-dimensional loss difference between the reconstructed image and the original image, and parameters of the convolutional coding network and the convolutional decoding network are updated through back propagation.
7. The multi-scale feature fused image defogging method according to claim 6, wherein the multi-dimensional loss difference comprises: mean absolute error loss, perceptual loss and mean square error loss;
the loss function is expressed as:
loss = λ1·MAE(x, out) + λ2·MSE(x, out) + λ3·PL(x, out)
where MAE is the mean absolute error loss, MSE is the mean square error loss, PL is the perceptual error, and λ1, λ2, λ3 are the weights of the respective loss terms.
8. The multi-scale feature fused image defogging method according to claim 7, wherein the perception error is expressed as:
PL(x, out) = (1/(C′jH′jW′j))·‖φj(x) − φj(out)‖²

where φj denotes the feature extraction performed by the jth convolutional coding module, and C′j, H′j, W′j denote the dimensions of the feature map extracted from the jth convolutional coding module of the convolutional coding network.
9. The method of claim 1, wherein normalizing the foggy image before inputting it into the convolutional coding network comprises:

xnorm = (x − mean)/std

where x is the input pixel value matrix of the foggy image, and mean is the pixel value mean matrix of the foggy image; std is calculated as follows:

std = max(σ, 1/√N)

where σ denotes the standard deviation of the pixel values, and N denotes the number of pixels of the input foggy image x.
10. A multi-scale feature fused image defogging system comprising:
the system comprises a coding module, a convolution coding network and a control module, wherein the coding module is used for constructing the convolution coding network, the convolution coding network is provided with a plurality of convolution coding modules which are sequentially connected in series, the output of each convolution coding module is the union set of the input of each previous convolution coding module and the output of the current convolution coding module, and the multi-scale characteristics of the input foggy image are obtained through the convolution coding network;
the decoding module is used for constructing a convolutional decoding network, the convolutional decoding network is provided with a plurality of convolutional decoding modules which are sequentially connected in series, the output of the convolutional coding module is added into the output sequence of the convolutional decoding module with the same output resolution, and the final output of each convolutional decoding module is the union of the output sequence of the previous convolutional decoding module and the output sequence of the current convolutional decoding module; and receiving the multi-scale characteristic input through the convolutional decoding network to obtain a reconstructed image.
CN202110356145.5A 2021-04-01 2021-04-01 Image defogging method and system based on multi-scale feature fusion Pending CN113240589A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110356145.5A CN113240589A (en) 2021-04-01 2021-04-01 Image defogging method and system based on multi-scale feature fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110356145.5A CN113240589A (en) 2021-04-01 2021-04-01 Image defogging method and system based on multi-scale feature fusion

Publications (1)

Publication Number Publication Date
CN113240589A true CN113240589A (en) 2021-08-10

Family

ID=77130925

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110356145.5A Pending CN113240589A (en) 2021-04-01 2021-04-01 Image defogging method and system based on multi-scale feature fusion

Country Status (1)

Country Link
CN (1) CN113240589A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114286113A (en) * 2021-12-24 2022-04-05 国网陕西省电力有限公司西咸新区供电公司 Image compression recovery method and system based on multi-head heterogeneous convolution self-encoder

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110378848A (en) * 2019-07-08 2019-10-25 中南大学 A kind of image defogging method based on derivative figure convergence strategy
CN110544213A (en) * 2019-08-06 2019-12-06 天津大学 Image defogging method based on global and local feature fusion
CN110570371A (en) * 2019-08-28 2019-12-13 天津大学 image defogging method based on multi-scale residual error learning
CN110675330A (en) * 2019-08-12 2020-01-10 广东石油化工学院 Image rain removing method of encoding-decoding network based on channel level attention mechanism
CN110866879A (en) * 2019-11-13 2020-03-06 江西师范大学 Image rain removing method based on multi-density rain print perception
CN110880165A (en) * 2019-10-15 2020-03-13 杭州电子科技大学 Image defogging method based on contour and color feature fusion coding
CN111079683A (en) * 2019-12-24 2020-04-28 天津大学 Remote sensing image cloud and snow detection method based on convolutional neural network
CN111539886A (en) * 2020-04-21 2020-08-14 西安交通大学 Defogging method based on multi-scale feature fusion
CN111738942A (en) * 2020-06-10 2020-10-02 南京邮电大学 Generation countermeasure network image defogging method fusing feature pyramid
CN112037139A (en) * 2020-08-03 2020-12-04 哈尔滨工业大学(威海) Image defogging method based on RBW-cycleGAN network

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110378848A (en) * 2019-07-08 2019-10-25 中南大学 A kind of image defogging method based on derivative figure convergence strategy
CN110544213A (en) * 2019-08-06 2019-12-06 天津大学 Image defogging method based on global and local feature fusion
CN110675330A (en) * 2019-08-12 2020-01-10 广东石油化工学院 Image rain removing method of encoding-decoding network based on channel level attention mechanism
CN110570371A (en) * 2019-08-28 2019-12-13 天津大学 image defogging method based on multi-scale residual error learning
CN110880165A (en) * 2019-10-15 2020-03-13 杭州电子科技大学 Image defogging method based on contour and color feature fusion coding
CN110866879A (en) * 2019-11-13 2020-03-06 江西师范大学 Image rain removing method based on multi-density rain print perception
CN111079683A (en) * 2019-12-24 2020-04-28 天津大学 Remote sensing image cloud and snow detection method based on convolutional neural network
CN111539886A (en) * 2020-04-21 2020-08-14 西安交通大学 Defogging method based on multi-scale feature fusion
CN111738942A (en) * 2020-06-10 2020-10-02 南京邮电大学 Generation countermeasure network image defogging method fusing feature pyramid
CN112037139A (en) * 2020-08-03 2020-12-04 哈尔滨工业大学(威海) Image defogging method based on RBW-cycleGAN network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WU JIAWEI et al.: "A Two-Stage Image Dehazing Network Based on Deep Learning", Computer Applications and Software, pages 197 - 202 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114286113A (en) * 2021-12-24 2022-04-05 国网陕西省电力有限公司西咸新区供电公司 Image compression recovery method and system based on multi-head heterogeneous convolution self-encoder
CN114286113B (en) * 2021-12-24 2023-05-30 国网陕西省电力有限公司西咸新区供电公司 Image compression recovery method and system based on multi-head heterogeneous convolution self-encoder

Similar Documents

Publication Publication Date Title
CN113362223B (en) Image super-resolution reconstruction method based on attention mechanism and two-channel network
CN110136062B (en) Super-resolution reconstruction method combining semantic segmentation
CN110163801B (en) Image super-resolution and coloring method, system and electronic equipment
CN113096017B (en) Image super-resolution reconstruction method based on depth coordinate attention network model
CN112215755B (en) Image super-resolution reconstruction method based on back projection attention network
CN112396645B (en) Monocular image depth estimation method and system based on convolution residual learning
CN113177882B (en) Single-frame image super-resolution processing method based on diffusion model
CN112435191B (en) Low-illumination image enhancement method based on fusion of multiple neural network structures
CN110349087B (en) RGB-D image high-quality grid generation method based on adaptive convolution
CN111582483A (en) Unsupervised learning optical flow estimation method based on space and channel combined attention mechanism
CN111861884B (en) Satellite cloud image super-resolution reconstruction method based on deep learning
CN112699844B (en) Image super-resolution method based on multi-scale residual hierarchy close-coupled network
CN112862689A (en) Image super-resolution reconstruction method and system
CN112581370A (en) Training and reconstruction method of super-resolution reconstruction model of face image
CN112767283A (en) Non-uniform image defogging method based on multi-image block division
CN113449691A (en) Human shape recognition system and method based on non-local attention mechanism
CN112509106A (en) Document picture flattening method, device and equipment
CN114565539B (en) Image defogging method based on online knowledge distillation
CN115511708A (en) Depth map super-resolution method and system based on uncertainty perception feature transmission
CN115272438A (en) High-precision monocular depth estimation system and method for three-dimensional scene reconstruction
CN111444923A (en) Image semantic segmentation method and device under natural scene
CN113902658B (en) RGB image-to-hyperspectral image reconstruction method based on dense multiscale network
CN113240589A (en) Image defogging method and system based on multi-scale feature fusion
CN116342385A (en) Training method and device for text image super-resolution network and storage medium
CN111553961B (en) Method and device for acquiring line manuscript corresponding color map, storage medium and electronic device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 400000 6-1, 6-2, 6-3, 6-4, building 7, No. 50, Shuangxing Avenue, Biquan street, Bishan District, Chongqing

Applicant after: CHONGQING ZHAOGUANG TECHNOLOGY CO.,LTD.

Address before: 400000 2-2-1, 109 Fengtian Avenue, tianxingqiao, Shapingba District, Chongqing

Applicant before: CHONGQING ZHAOGUANG TECHNOLOGY CO.,LTD.

CB02 Change of applicant information