CN114841872A - Digital halftone processing method based on multi-agent deep reinforcement learning - Google Patents


Info

Publication number
CN114841872A
CN114841872A
Authority
CN
China
Prior art keywords
agent
reinforcement learning
image
halftone
cnn
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210377696.4A
Other languages
Chinese (zh)
Inventor
黄凯
江海天
熊东亮
蒋小文
Current Assignee
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN202210377696.4A priority Critical patent/CN114841872A/en
Publication of CN114841872A publication Critical patent/CN114841872A/en
Pending legal-status Critical Current

Classifications

    • G06T 5/90 - Image enhancement or restoration: dynamic range modification of images or parts thereof
    • G06N 3/045 - Neural network architectures: combinations of networks
    • G06N 3/08 - Neural networks: learning methods
    • G06T 7/90 - Image analysis: determination of colour characteristics
    • G06T 2207/10024 - Image acquisition modality: color image
    • G06T 2207/20081 - Special algorithmic details: training; learning
    • G06T 2207/20084 - Special algorithmic details: artificial neural networks [ANN]


Abstract

The invention belongs to the technical field of image processing and discloses a digital halftone processing method based on multi-agent deep reinforcement learning, comprising the following steps: step 1: forward computation of the neural network; step 2: multi-agent reinforcement learning policy gradient estimation; step 3: anisotropy suppression loss function calculation. The invention provides a digital halftone processing method based on multi-agent deep reinforcement learning that can quickly generate high-quality halftone images.

Description

Digital halftone processing method based on multi-agent deep reinforcement learning
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a digital halftone processing method based on multi-agent deep reinforcement learning.
Background
On printers, ink screens, and similar devices, the number of bits available to display an image is limited and small. An image whose pixel values are only 0 and 1 is called a Halftone Image; correspondingly, an image whose pixel values have a higher bit depth (e.g., 8 bits) is called a Continuous-tone Image. Digital Halftoning (Halftoning for short) is a technique for converting a continuous-tone image into a halftone image. By properly arranging the limited pixel values, the resulting halftone image can give a visual impression nearly identical to that of the original continuous-tone image. A high-quality halftone image shows no noticeable artifacts and has blue-noise characteristics: its power spectrum has few low-frequency components, many medium- and high-frequency components, and low anisotropy.
Conventional halftoning techniques can generally be classified into three types: ordered dithering, error diffusion, and search-based methods. (1) Ordered dithering designs a fixed threshold matrix in advance. During processing, the continuous-tone image is compared with the threshold matrix pixel by pixel; if the pixel value is larger than the threshold, the corresponding halftone pixel is set to 1, otherwise to 0. (2) Error diffusion is based on the idea of keeping the local tone unchanged: continuous-tone pixels are compared with a threshold one by one in a specific order to determine the corresponding halftone pixel values, and the quantization error caused by binarization is diffused to adjacent, not-yet-processed continuous-tone pixels. (3) Search-based methods treat halftoning as a mathematical optimization problem: under the constraint that each pixel value has a finite number of bits, a search algorithm such as simulated annealing produces a near-optimal solution that, after filtering by a Human Visual System (HVS) model, is closest to the original continuous-tone image.
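As an illustration only, the first two classical families above can be sketched in a few lines of Python/NumPy; the 4x4 Bayer matrix and the Floyd-Steinberg diffusion weights (7/16, 3/16, 5/16, 1/16) are standard textbook choices, not taken from the patent:

```python
import numpy as np

# 4x4 Bayer threshold matrix, normalized so thresholds are
# (index + 0.5) / 16, i.e. spread evenly over (0, 1).
BAYER4 = (np.array([[ 0,  8,  2, 10],
                    [12,  4, 14,  6],
                    [ 3, 11,  1,  9],
                    [15,  7, 13,  5]]) + 0.5) / 16.0

def ordered_dither(c):
    """Ordered dithering: tile the threshold matrix over the image
    and compare pixel by pixel.  c is a float image in [0, 1]."""
    h, w = c.shape
    t = np.tile(BAYER4, (h // 4 + 1, w // 4 + 1))[:h, :w]
    return (c > t).astype(np.uint8)

def floyd_steinberg(c):
    """Error diffusion: binarize in raster order and diffuse the
    quantization error onto not-yet-processed neighbours."""
    img = c.astype(np.float64).copy()
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = int(new)
            err = old - new
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out
```

On a constant 50% gray input, both methods preserve the mean tone, which is the "local tone unchanged" property that error diffusion is built on.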
In recent years, machine learning techniques typified by deep learning have achieved remarkable success in many fields. In the field of halftoning, existing methods have generated halftone images with a generic convolutional neural network by adding Gaussian noise to the input layer.
Each of the above halftoning methods trades off processing efficiency against image quality. Ordered dithering is highly parallel and computationally simple, but the processing ignores image content, so details are easily lost and the generated images often show periodic patterns. Error diffusion produces acceptable image quality with relatively simple computation, but easily introduces worm-like artifacts. Search-based (iterative optimization) methods generate high-quality halftone images, but the search algorithms they depend on are inefficient and poorly parallelizable. Existing halftoning algorithms based on deep neural networks cannot yet generate high-quality halftone images and require complex training schemes. Practical printing devices need real-time, high-quality halftoning, so existing solutions still leave room for improvement.
Disclosure of Invention
The invention aims to provide a digital halftone processing method based on multi-agent deep reinforcement learning, so as to solve the technical problems.
In order to solve the technical problems, the specific technical scheme of the digital halftone processing method based on multi-agent deep reinforcement learning of the invention is as follows:
a digital halftone processing method based on multi-agent deep reinforcement learning comprises the following steps:
Step 1: forward computation of the neural network;
Step 2: multi-agent reinforcement learning policy gradient estimation;
Step 3: anisotropy suppression loss function calculation.
Further, the step 1 comprises the following specific steps:
inputting, stacked along the channel dimension, a continuous-tone image c and a white-noise image z obtained by standard Gaussian sampling;
the neural network main body is a multilayer convolutional neural network CNN with parameters θ;
the last layer of the neural network is a Sigmoid function, constraining the output to the range (0, 1);
the neural network model outputs a probability prediction value of each pixel value of the halftone image, namely:
Pr(h|c,z)=π(h|c,z;θ)
where h is the output discrete halftone image; π refers to the joint policy of the agents parameterized by the CNN.
Further, the step 2 comprises the following specific steps:
with the objective evaluation index of the halftone image as the optimization target, halftoning is first modeled as a reinforcement learning problem; a CNN-parameterized halftoning policy π is learned, and halftoning is abstracted as a multi-agent reinforcement learning problem MARL: each pixel of the output halftone image is regarded as a virtual agent at the corresponding position that receives the input environment state; the agents communicate with each other during the forward propagation of the network and finally output decision actions; the number of virtual agents equals the total number of output image pixels, and all agents share the same set of parameters, namely the convolutional layer parameters θ of the CNN;
defining the objective quality evaluation index E(h, c) of the halftone image as the visual error between the halftone image and the original continuous-tone image after both pass through a low-pass filter G(·); the reinforcement learning reward function is defined as the negative of the visual error.
Further, the weight of the filter G (-) is defined by the HVS model.
Further, the step 2 includes a policy gradient calculation method:
with the actions of the other agents sampled, the policy gradient of the current agent is computed by traversing the payoff of the agent taking each of the actions 0/1:
∇_θ J = Σ_a Σ_{h_a ∈ {0,1}} R((h_a, h′_{−a}), c) · ∇_θ π(h_a | c, z; θ)
where a is the agent index; h_{−a} refers to the joint action of all agents other than agent a.
Further, the step 3 comprises the following specific steps:
the anisotropy is suppressed in the frequency domain by computing the radial power spectrum of the output probability map, so that the output halftone image has blue-noise characteristics;
the power spectrum is defined as:
P̂(f) = |DFT(p)(f)|² / N
where DFT is the discrete Fourier transform, N is the number of pixels, and p is the probability map of taking action 1, i.e., the output of the CNN;
the radial power spectrum is defined as:
P(f_ρ) = (1 / n(r(f_ρ))) · Σ_{f ∈ r(f_ρ)} P̂(f)
where r(f_ρ) denotes the set of all frequency components whose discrete distance from f_ρ is within 1, and n(·) denotes the number of discrete frequencies in that set;
anisotropy is defined as:
A(f_ρ) = (1 / n(r(f_ρ))) · Σ_{f ∈ r(f_ρ)} (P̂(f) − P(f_ρ))² / P(f_ρ)²
Minimizing the loss function:
L_A = Σ_{f_ρ} A(f_ρ)
a batch of solid-color continuous-tone images is randomly sampled as additional input data during training, and anisotropy suppression is applied only to this batch; the anisotropy of the probability map is suppressed directly.
Further, a training phase is included, in which the following training steps are repeated until convergence:
1) preparing data:
giving a batch of continuous-tone images c, a batch of solid-color continuous-tone images c_g obtained by uniform sampling, and a batch of white-noise images z obtained by standard normal sampling;
2) and (3) forward calculation of the neural network:
p = CNN_θ(c, z)
p_g = CNN_θ(c_g, z)
3) multi-agent reinforcement learning strategy gradient estimation:
sampling h′ ∼ Bernoulli(p)
Calculating a policy gradient
g_J = Σ_a Σ_{h_a ∈ {0,1}} R((h_a, h′_{−a}), c) · ∇_θ π(h_a | c, z; θ)
4) calculating the anisotropy suppression loss function:
P̂_g(f) = |DFT(p_g)(f)|² / N
A_g(f_ρ) = (1 / n(r(f_ρ))) · Σ_{f ∈ r(f_ρ)} (P̂_g(f) − P_g(f_ρ))² / P_g(f_ρ)², with P_g(f_ρ) = (1 / n(r(f_ρ))) · Σ_{f ∈ r(f_ρ)} P̂_g(f)
L_A = Σ_{f_ρ} A_g(f_ρ)
5) updating parameters:
θ ← θ + η · (g_J − ω_a · ∇_θ L_A), where η is the learning rate
where ω_a is an adjustable hyper-parameter.
Further, the method comprises an inference phase, wherein the inference phase comprises the following specific steps:
1) giving a continuous tone image c to be processed and a Gaussian white noise image z obtained through standard normal sampling;
2) the neural network calculates forward and quantizes to obtain a halftone image,
h=argmax(CNN θ (c,z))。
Beneficial effects: the invention provides a digital halftone processing method based on multi-agent deep reinforcement learning that can quickly generate high-quality halftone images.
Drawings
FIG. 1 is a schematic diagram of a training process of the present invention;
FIG. 2 is a schematic diagram of the inference process of the present invention;
FIG. 3 is a PSNR growth curve during training in one embodiment of the present invention;
FIG. 4 is an SSIM growth curve during training in one embodiment of the present invention;
FIG. 5 is a sample diagram of a halftone image generated in one embodiment of the invention.
Detailed Description
In order to better understand the purpose, structure and function of the present invention, the following describes a digital halftone processing method based on multi-agent deep reinforcement learning in detail with reference to the attached drawings.
The method provided by the invention mainly comprises forward computation of the neural network, multi-agent reinforcement learning policy gradient estimation, and anisotropy suppression loss function calculation.
1. Forward computation of the neural network:
a continuous-tone image c and a white-noise image z obtained by standard Gaussian sampling are input, stacked along the channel dimension;
the Neural Network main body is a multilayer Convolutional Neural Network (CNN), and a parameter of the Neural Network main body is θ. The CNN is not limited by a specific model structure, and general designs such as ResNet and U-Net can be adopted.
The last layer of the neural network is a Sigmoid function, and the output is restricted to be in a range of (0, 1).
The neural network model outputs a probability prediction value of each pixel value of the halftone image, namely:
Pr(h|c,z)=π(h|c,z;θ)
wherein h is the output discrete halftone image; π refers to the joint policy of the agents parameterized by the CNN.
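The forward pass described above can be illustrated with a toy stand-in for CNN_θ (a single random convolution layer; the kernel `theta`, image sizes, and seed are illustrative placeholders, not the patent's architecture). It shows the channel stacking of c and z and the Sigmoid constraint of the output to (0, 1):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def conv2d_same(x, kernel):
    """Naive 'same' 2-D convolution of a (C, H, W) input with a
    (C, k, k) kernel, summing over channels to an (H, W) map."""
    C, H, W = x.shape
    k = kernel.shape[-1]
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(xp[:, i:i + k, j:j + k] * kernel)
    return out

# Stack the continuous-tone image c and the white-noise image z on the
# channel dimension, apply one conv layer, and squash with a Sigmoid
# so each pixel gets a probability strictly inside (0, 1).
c = rng.random((16, 16))            # continuous-tone image in [0, 1]
z = rng.standard_normal((16, 16))   # standard-Gaussian white noise
x = np.stack([c, z])                # channel stack, shape (2, 16, 16)
theta = rng.standard_normal((2, 3, 3)) * 0.1
p = sigmoid(conv2d_same(x, theta))  # per-pixel Bernoulli probabilities
```

Each entry of `p` is the probability Pr(h = 1 | c, z) for the agent at that pixel.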
2. Multi-agent reinforcement learning policy gradient estimation:
Taking the objective evaluation index of the halftone image as the optimization target, the invention first models halftoning as a reinforcement learning problem. Unlike search methods, which optimize each individual image from scratch, the method learns a CNN-parameterized halftoning policy π that can quickly compute a near-optimal halftone image for a continuous-tone image. The invention further abstracts halftoning as a Multi-Agent Reinforcement Learning problem (MARL): each pixel of the output halftone image is regarded as a virtual agent at the corresponding position that receives the input environment state (the continuous-tone image and the noise image); the agents communicate with each other during the forward propagation of the network and finally output decision actions. The number of virtual agents equals the total number of output image pixels, and all agents share the same set of parameters, namely the convolutional layer parameters θ of the CNN.
The objective quality evaluation index E(h, c) of the halftone image is defined as the visual error between the halftone image and the original continuous-tone image after both pass through a low-pass filter G, whose weights are defined by an HVS model (e.g., a Gaussian low-pass filter); the visual error may be the MSE (Mean Squared Error) or be based on the SSIM (Structural Similarity Index Measure). The reinforcement learning reward function is defined as the negative of the visual error.
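A minimal sketch of this evaluation index, assuming a Gaussian kernel as the HVS low-pass filter G and MSE as the visual error (the patent also allows SSIM-based errors; kernel size and σ are illustrative):

```python
import numpy as np

def gaussian_kernel(size=11, sigma=2.0):
    """Normalized 2-D Gaussian kernel (illustrative HVS stand-in)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def lowpass(img, kernel):
    """'Same' convolution with reflect padding: the filter G."""
    ks = kernel.shape[0]
    pad = ks // 2
    ip = np.pad(img, pad, mode="reflect")
    h, w = img.shape
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(ip[i:i + ks, j:j + ks] * kernel)
    return out

def visual_error(h, c, kernel):
    """E(h, c): MSE between the filtered halftone and the filtered
    continuous-tone image; the RL reward is -E(h, c)."""
    return float(np.mean((lowpass(h, kernel) - lowpass(c, kernel)) ** 2))
```

Under this index, a binary checkerboard scores a much smaller error against a 50% gray image than an all-black image does, because after low-pass filtering the checkerboard looks like uniform gray.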
Existing multi-agent reinforcement learning algorithms generally target multi-step Markov decision processes, whereas in the present modeling the generation takes only one step, so the cumulative reward brought by an action can be computed directly.
Under the above modeling, the invention proposes a policy gradient calculation method: with the actions of the other agents sampled, the policy gradient of the current agent is computed by traversing the payoff of the agent taking each of the actions 0/1:
∇_θ J = Σ_a Σ_{h_a ∈ {0,1}} R((h_a, h′_{−a}), c) · ∇_θ π(h_a | c, z; θ)
where a is the agent index; h_{−a} refers to the joint action of all agents other than agent a.
Compared with existing methods, the estimated policy gradient has smaller variance, and the learning process is simpler and more stable.
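The per-agent traversal can be sketched as follows. A 3x3 box blur stands in for the HVS filter, and the brute-force per-pixel loop is for clarity only; since π(1) = p_a and π(0) = 1 − p_a, the per-pixel sum over both actions collapses to a reward difference weighting the gradient of p_a:

```python
import numpy as np

def reward(h, c):
    """Reward = negative visual error; a 3x3 box blur stands in for
    the HVS low-pass filter G."""
    def blur(img):
        ip = np.pad(img, 1, mode="edge")
        H, W = img.shape
        return sum(ip[dy:dy + H, dx:dx + W]
                   for dy in range(3) for dx in range(3)) / 9.0
    return -float(np.mean((blur(h) - blur(c)) ** 2))

def pixelwise_reward_gap(hs, c):
    """For each agent (pixel) a, traverse its two actions 0/1 while
    the other agents keep their sampled joint action hs_{-a}.  The
    returned map R(1, hs_{-a}) - R(0, hs_{-a}) is the scalar that
    weights the gradient of p_a, because per pixel
    sum_{h_a} R * grad pi(h_a) = (R_1 - R_0) * grad p_a."""
    H, W = hs.shape
    gap = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            h1 = hs.copy(); h1[i, j] = 1.0
            h0 = hs.copy(); h0[i, j] = 0.0
            gap[i, j] = reward(h1, c) - reward(h0, c)
    return gap
```

On an all-white target image, the reward gap is positive at every pixel: turning any pixel on always moves the blurred halftone closer to the target.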
3. Anisotropy suppression loss function calculation:
In order to further improve the quality of the halftone image generated by the convolutional neural network, the invention provides an anisotropy suppression loss function calculation module. The module suppresses anisotropy in the frequency domain by computing the radial power spectrum of the output probability map, thereby giving the output halftone image blue-noise characteristics.
The power spectrum is defined as:
P̂(f) = |DFT(p)(f)|² / N
where DFT is the Discrete Fourier Transform, N is the number of pixels, and p is the probability map of taking action 1 (the output of the CNN).
The radial power spectrum is defined as:
P(f_ρ) = (1 / n(r(f_ρ))) · Σ_{f ∈ r(f_ρ)} P̂(f)
where r(f_ρ) denotes the set of all frequency components whose discrete distance from f_ρ is within 1, and n(·) denotes the number of discrete frequencies in that set.
Anisotropy is defined as:
A(f_ρ) = (1 / n(r(f_ρ))) · Σ_{f ∈ r(f_ρ)} (P̂(f) − P(f_ρ))² / P(f_ρ)²
to suppress this anisotropy, the module minimizes a loss function:
L_A = Σ_{f_ρ} A(f_ρ)
Considering that the low-anisotropy property only matters for the halftone images corresponding to solid-color continuous-tone images, the method randomly samples a batch of solid-color continuous-tone images as additional input data during training and applies anisotropy suppression only to this batch. In addition, since the probability map output by the network already reflects the anisotropy of the output halftone image, the method directly suppresses the anisotropy of the probability map instead of the quantized halftone image.
This step gives the generated halftone image excellent blue-noise characteristics.
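The frequency-domain quantities above can be sketched in NumPy (the integer ring width and the DC handling are implementation choices; the patent's exact discretization of r(f_ρ) may differ):

```python
import numpy as np

def power_spectrum(p):
    """P_hat(f) = |DFT(p)(f)|^2 / N, with the mean (DC) removed so a
    constant offset does not dominate the spectrum."""
    N = p.size
    F = np.fft.fft2(p - p.mean())
    return np.abs(F) ** 2 / N

def radial_stats(P):
    """Group frequencies into integer radial rings r(f_rho) and return
    the per-ring mean power and the anisotropy (normalized variance
    of the power within each ring)."""
    H, W = P.shape
    fy = np.fft.fftfreq(H) * H
    fx = np.fft.fftfreq(W) * W
    rho = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    rings = np.rint(rho).astype(int)
    Pr = np.zeros(rings.max() + 1)
    A = np.zeros(rings.max() + 1)
    for k in range(1, rings.max() + 1):   # skip the DC ring
        vals = P[rings == k]
        Pr[k] = vals.mean()
        if Pr[k] > 0 and vals.size > 1:
            A[k] = np.mean((vals - Pr[k]) ** 2) / Pr[k] ** 2
    return Pr, A
```

White noise is roughly isotropic under this estimator, while a pure grating concentrates its power in a few frequencies of one ring and scores a large anisotropy there, which is exactly what the loss penalizes.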
The invention is based on reinforcement learning and is therefore divided into a training phase and an inference phase.
A training stage:
fig. 1 is a schematic diagram of a training phase.
The following training steps are completed multiple times until convergence.
1) Preparing data
Giving a batch of continuous-tone images c, a batch of solid-color continuous-tone images c_g obtained by uniform sampling, and a batch of white-noise images z obtained by standard normal sampling;
2) Neural network forward computation
p = CNN_θ(c, z)
p_g = CNN_θ(c_g, z)
3) Multi-agent reinforcement learning policy gradient estimation
Sampling h′ ∼ Bernoulli(p)
Calculating a policy gradient
g_J = Σ_a Σ_{h_a ∈ {0,1}} R((h_a, h′_{−a}), c) · ∇_θ π(h_a | c, z; θ)
4) Calculating the anisotropy suppression loss function
P̂_g(f) = |DFT(p_g)(f)|² / N
A_g(f_ρ) = (1 / n(r(f_ρ))) · Σ_{f ∈ r(f_ρ)} (P̂_g(f) − P_g(f_ρ))² / P_g(f_ρ)², with P_g(f_ρ) = (1 / n(r(f_ρ))) · Σ_{f ∈ r(f_ρ)} P̂_g(f)
L_A = Σ_{f_ρ} A_g(f_ρ)
5) Parameter updating
θ ← θ + η · (g_J − ω_a · ∇_θ L_A), where η is the learning rate
Where ω_a is an adjustable hyper-parameter.
Inference phase:
Fig. 2 is a schematic diagram of the inference phase.
1) Giving a continuous-tone image c to be processed and a Gaussian white-noise image z obtained by standard normal sampling;
2) the neural network computes forward and quantizes to obtain the halftone image,
h=argmax(CNN θ (c,z))
example (b):
the public data set VOC2012 was used to train and test the halftone image generation capability of the present method. The data set comprises 17125 pictures in total. 13758 pictures are randomly selected for training, and the rest 3367 pictures are used for testing;
during training, brightness jitter is applied to the input images as data augmentation, using the ColorJitter(brightness=0.9) class from the open-source torchvision toolkit;
G adopts the Näsänen human visual system model with the scale parameter set to 2000;
the error is designed as E(h, c) = MSE(G(h), G(c)) − 0.006 · SSIM(h, c);
the hyper-parameter ω_a is set to 0.002;
a ResNet convolutional neural network is used as the model; it comprises 16 ResBlocks with 32 channels each.
The model is trained for 200000 iterations with the Adam optimizer. The learning rate is decayed from 3e-4 to 1e-5 using a cosine annealing schedule.
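The learning-rate decay described above can be written as a standard cosine annealing schedule (a sketch; any warm-up or restart details of the embodiment are not specified in the text):

```python
import math

def cosine_lr(step, total=200000, lr_max=3e-4, lr_min=1e-5):
    """Cosine-annealed learning rate: starts at lr_max at step 0 and
    decays monotonically to lr_min at step `total`."""
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * step / total))
```

In practice the same curve is provided by PyTorch's CosineAnnealingLR scheduler; the closed form above just makes the endpoints 3e-4 and 1e-5 explicit.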
FIG. 3 is a PSNR increase curve during training; FIG. 4 is an SSIM growth curve during training; FIG. 5 is a sample output result.
It is to be understood that the present invention has been described with reference to certain embodiments, and that various changes in the features and embodiments, or equivalent substitutions may be made therein by those skilled in the art without departing from the spirit and scope of the invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the invention without departing from the essential scope thereof. Therefore, it is intended that the invention not be limited to the particular embodiment disclosed, but that the invention will include all embodiments falling within the scope of the appended claims.

Claims (8)

1. A digital halftone processing method based on multi-agent deep reinforcement learning is characterized by comprising the following steps:
step 1: forward computation of the neural network;
step 2: multi-agent reinforcement learning policy gradient estimation;
step 3: anisotropy suppression loss function calculation.
2. The multi-agent deep reinforcement learning-based digital halftone processing method according to claim 1, wherein the step 1 comprises the following specific steps:
inputting, stacked along the channel dimension, a continuous-tone image c and a white-noise image z obtained by standard Gaussian sampling;
the neural network main body is a multilayer convolutional neural network CNN with parameters θ;
the last layer of the neural network is a Sigmoid function, constraining the output to the range (0, 1);
the neural network model outputs a probability prediction value of each pixel value of the halftone image, namely:
Pr(h|c,z)=π(h|c,z;θ)
where h is the output discrete halftone image; π refers to the joint policy of the agents parameterized by the CNN.
3. The multi-agent deep reinforcement learning-based digital halftone processing method according to claim 1, wherein said step 2 comprises the following specific steps:
with the objective evaluation index of the halftone image as the optimization target, halftoning is first modeled as a reinforcement learning problem; a CNN-parameterized halftoning policy π is learned, and halftoning is abstracted as a multi-agent reinforcement learning problem MARL: each pixel of the output halftone image is regarded as a virtual agent at the corresponding position that receives the input environment state; the agents communicate with each other during the forward propagation of the network and finally output decision actions; the number of virtual agents equals the total number of output image pixels, and all agents share the same set of parameters, namely the convolutional layer parameters θ of the CNN;
defining the objective quality evaluation index E(h, c) of the halftone image as the visual error between the halftone image and the original continuous-tone image after both pass through a low-pass filter G(·); the reinforcement learning reward function is defined as the negative of the visual error.
4. The multi-agent deep reinforcement learning-based digital halftoning method of claim 3, wherein the weight of the filter G (-) is defined by an HVS model.
5. The multi-agent deep reinforcement learning-based digital halftoning method of claim 3, wherein the step 2 comprises a policy gradient calculation method:
with the actions of the other agents sampled, the policy gradient of the current agent is computed by traversing the payoff of the agent taking each of the actions 0/1:
∇_θ J = Σ_a Σ_{h_a ∈ {0,1}} R((h_a, h′_{−a}), c) · ∇_θ π(h_a | c, z; θ)
where a is the agent index; h_{−a} refers to the joint action of all agents other than agent a.
6. The multi-agent deep reinforcement learning-based digital halftone processing method according to claim 1, wherein said step 3 comprises the following specific steps:
the anisotropy is suppressed in the frequency domain by computing the radial power spectrum of the output probability map, so that the output halftone image has blue-noise characteristics;
The power spectrum is defined as:
P̂(f) = |DFT(p)(f)|² / N
where DFT is the discrete Fourier transform, N is the number of pixels, and p is the probability map of taking action 1, i.e., the output of the CNN;
the radial power spectrum is defined as:
P(f_ρ) = (1 / n(r(f_ρ))) · Σ_{f ∈ r(f_ρ)} P̂(f)
where r(f_ρ) denotes the set of all frequency components whose discrete distance from f_ρ is within 1, and n(·) denotes the number of discrete frequencies in that set;
anisotropy is defined as:
A(f_ρ) = (1 / n(r(f_ρ))) · Σ_{f ∈ r(f_ρ)} (P̂(f) − P(f_ρ))² / P(f_ρ)²
minimization of the loss function:
L_A = Σ_{f_ρ} A(f_ρ)
a batch of solid-color continuous-tone images is randomly sampled as additional input data during training, and anisotropy suppression is applied only to this batch; the anisotropy of the probability map is suppressed directly.
7. The multi-agent deep reinforcement learning-based digital halftoning method of claim 1, comprising a training phase that completes the following training steps multiple times until convergence:
1) preparing data:
giving a batch of continuous-tone images c, a batch of solid-color continuous-tone images c_g obtained by uniform sampling, and a batch of white-noise images z obtained by standard normal sampling;
2) and (3) forward calculation of the neural network:
p = CNN_θ(c, z)
p_g = CNN_θ(c_g, z)
3) multi-agent reinforcement learning strategy gradient estimation:
sampling h′ ∼ Bernoulli(p)
Calculating a policy gradient
g_J = Σ_a Σ_{h_a ∈ {0,1}} R((h_a, h′_{−a}), c) · ∇_θ π(h_a | c, z; θ)
4) calculating the anisotropy suppression loss function:
P̂_g(f) = |DFT(p_g)(f)|² / N
A_g(f_ρ) = (1 / n(r(f_ρ))) · Σ_{f ∈ r(f_ρ)} (P̂_g(f) − P_g(f_ρ))² / P_g(f_ρ)², with P_g(f_ρ) = (1 / n(r(f_ρ))) · Σ_{f ∈ r(f_ρ)} P̂_g(f)
L_A = Σ_{f_ρ} A_g(f_ρ)
5) updating parameters:
θ ← θ + η · (g_J − ω_a · ∇_θ L_A), where η is the learning rate
Wherein ω_a is an adjustable hyper-parameter.
8. The multi-agent deep reinforcement learning-based digital halftone processing method according to claim 1, characterized by comprising an inference phase, wherein the inference phase comprises the following specific steps:
1) giving a continuous tone image c to be processed and a Gaussian white noise image z obtained through standard normal sampling;
2) the neural network calculates forward and quantizes to obtain a halftone image,
h=argmax(CNN θ (c,z))。
CN202210377696.4A 2022-04-12 2022-04-12 Digital halftone processing method based on multi-agent deep reinforcement learning Pending CN114841872A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210377696.4A CN114841872A (en) 2022-04-12 2022-04-12 Digital halftone processing method based on multi-agent deep reinforcement learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210377696.4A CN114841872A (en) 2022-04-12 2022-04-12 Digital halftone processing method based on multi-agent deep reinforcement learning

Publications (1)

Publication Number Publication Date
CN114841872A true CN114841872A (en) 2022-08-02

Family

ID=82564708

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210377696.4A Pending CN114841872A (en) 2022-04-12 2022-04-12 Digital halftone processing method based on multi-agent deep reinforcement learning

Country Status (1)

Country Link
CN (1) CN114841872A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024077742A1 (en) * 2022-10-13 2024-04-18 北京大学 Inverse halftoning method and apparatus based on conditional diffusion network
CN116934618A (en) * 2023-07-13 2023-10-24 江南大学 Image halftone method, system and medium based on improved residual error network
CN117114078A (en) * 2023-10-23 2023-11-24 中国科学技术大学 Method, equipment and storage medium for improving training efficiency of continuous control robot
CN117114078B (en) * 2023-10-23 2024-02-23 中国科学技术大学 Method, equipment and storage medium for improving training efficiency of continuous control robot


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination