CN111586298A - Real-time moire removing method based on multi-scale network and dynamic feature coding - Google Patents

Real-time moire removing method based on multi-scale network and dynamic feature coding

Info

Publication number
CN111586298A
CN111586298A
Authority
CN
China
Prior art keywords
branch
scale
feature
moire
coding
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010385721.4A
Other languages
Chinese (zh)
Inventor
Inventor not disclosed
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Huayan Mutual Entertainment Technology Co ltd
Original Assignee
Beijing Huayan Mutual Entertainment Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Huayan Mutual Entertainment Technology Co ltd filed Critical Beijing Huayan Mutual Entertainment Technology Co ltd
Priority to CN202010385721.4A priority Critical patent/CN111586298A/en
Publication of CN111586298A publication Critical patent/CN111586298A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 Camera processing pipelines; Components thereof
    • H04N23/81 Camera processing pipelines; Components thereof for suppressing or minimising disturbance in the image signal generation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The embodiment of the invention discloses a real-time moire removing method based on a multi-scale network and dynamic feature coding. The method samples an input moire image through a multi-scale residual network with several resolution branches to obtain image features at different resolutions, adds dynamic feature coding to each sampling branch to encode the features at each resolution, introduces a branch scaling module into each sampling branch that automatically learns the importance of each branch through back-propagation and assigns it a weight, and finally sums the outputs of the different resolutions to obtain the output image. The invention removes moire fringes in different frequency bands of the image through the multi-scale residual network structure while retaining more image detail; the dynamic feature coding introduced on the branches of the multi-scale residual network encodes the moire dynamically, so that the model copes better with the variability of moire patterns. This further improves the moire removal effect and speed, allows moire to be removed in real time, and supports key application scenarios such as shooting LED screens.

Description

Real-time moire removing method based on multi-scale network and dynamic feature coding
Technical Field
The invention relates to the field of image processing technology and LED wall photography, in particular to a Moire pattern removal method based on a multi-scale network and dynamic feature coding.
Background
The moire phenomenon is a visual phenomenon in which rainbow-colored stripes appear on objects during actual photographing. It occurs when the spatial frequency of the camera sensor's pixels is close to the spatial frequency of stripes in the photographed scene, producing a new wavy interference pattern that degrades the overall quality of the picture. With the wide application of large-size LED walls, especially in television studios and film shooting scenes, the moire problem caused by shooting LED walls has become a key obstacle to their further adoption.
In the prior art, there are methods that remove moire fringes with a neural network, but they lack a strong network structure, have weak feature-expression capability, and cannot fully extract features from the input image. Moire fringes can also be removed with a deep convolutional network, but such a method uses only a low-resolution scale and can only describe moire in a single frequency band, whereas the frequency distribution of moire fringes is wide, so the fringes cannot be removed well. In addition, none of the existing methods can meet the real-time requirement of live shooting, so they cannot be applied to an LED wall shooting system.
Disclosure of Invention
An object of an embodiment of the present invention is to provide a real-time moire removing method based on a multi-scale network and dynamic feature coding, so as to solve the technical problem in the prior art that moire cannot be removed well.
In order to solve the above problems, an embodiment of the present invention provides a real-time moire removing method based on a multi-scale network and dynamic feature coding, including the following steps:
S1, sampling the input moire image through a multi-scale residual network with a plurality of resolution branches to obtain image features at different resolutions;
S2, adding dynamic feature coding to each sampling branch and encoding the image features at the different resolutions;
S3, introducing a branch scaling module for each sampling branch, automatically learning the importance of each branch through back-propagation, and assigning each branch a weight;
and S4, summing the images at the different resolutions to obtain the output image.
Preferably, the specific method for sampling the moire image in step S1 is as follows:
S11, each branch of the multi-scale residual network learns a nonlinear mapping at the original resolution or at the 2X, 4X, 8X, 16X and 32X down-sampled resolutions;
S12, up-sampling the feature back to the original resolution using sub-pixel convolution at the end of each branch.
Preferably, the step S2 comprises the following sub-steps:
S31, designing a bypass branch for each main-scale branch;
S32, encoding the image features at different spatial resolutions;
S33, embedding the additional bypass branch into each main-scale branch through adaptive instance normalization (AdaIN) to dynamically adjust the parameters of the main-scale branch.
Preferably, the specific method for adjusting the main-scale branch parameters in step S33 is as follows:
A1, calculating the mean and variance of the feature maps of the coding branch corresponding to each residual block of the backbone network;
A2, transmitting the mean and variance of step A1 to the adaptive instance normalization (AdaIN) of the trunk network, and calculating the moire statistic to adjust the parameters of the trunk branch.
Preferably, the mean of the feature map in step A1 is calculated as:
μ_enc^i = (1 / (H × W)) Σ_{h=1}^{H} Σ_{w=1}^{W} x_enc^i(h, w)
and the variance of the feature map is calculated as:
(σ_enc^i)² = (1 / (H × W)) Σ_{h=1}^{H} Σ_{w=1}^{W} (x_enc^i(h, w) − μ_enc^i)²
where H and W represent the height and width of the feature map, μ_enc^i is the mean of the feature x_enc^i from the ith coding layer in the dynamic feature coding branch, and σ_enc^i is its standard deviation.
Preferably, the moire statistic in step A2 is calculated through adaptive instance normalization as:
AdaIN(x_i) = σ_enc^i · (x_i − μ(x_i)) / σ(x_i) + μ_enc^i
where μ(x_i) and σ(x_i) represent the statistical information from the trunk branch, and x_i represents the feature map of the ith residual block in the trunk branch.
Compared with the prior art, the invention has the following beneficial effects: moire fringes in different frequency bands of the image are removed through the multi-scale residual network structure while more image detail is retained; dynamic feature coding is introduced on the branches of the multi-scale residual network structure so that the moire can be encoded dynamically, allowing the model to better cope with the variability of moire patterns and further improving the moire removal effect.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and other drawings can be obtained from them by those skilled in the art without creative effort.
FIG. 1 is a flowchart of a method for removing moire patterns in real time based on multi-scale network and dynamic feature coding according to an embodiment of the present invention;
FIG. 2 is a multi-scale network structure diagram of a real-time moire removal method based on a multi-scale network and dynamic feature coding according to an embodiment of the present invention;
FIG. 3 is a diagram of a dynamic feature code residual block (CDR) structure of a multi-scale network and dynamic feature code based real-time Moire removal method according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a visual comparison of results of the real-time moire removal method based on a multi-scale network and dynamic feature coding according to an embodiment of the present invention with other methods.
Detailed Description
The following detailed description of the present invention is given for the purpose of better understanding technical solutions of the present invention by those skilled in the art, and the present description is only exemplary and explanatory and should not be construed as limiting the scope of the present invention in any way.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
It is to be understood that orientation or position terms such as "center," "upper," "lower," "left," "right," "vertical," "horizontal," "inner," and "outer" are based on the orientations or positions shown in the drawings; they are used only for convenience and simplification of the description and do not indicate or imply that the referred device or element must have a particular orientation or be constructed and operated in a particular orientation, and therefore should not be construed as limiting the invention. Furthermore, the terms "first," "second," "third," and the like are used solely to distinguish one description from another and are not to be construed as indicating or implying relative importance.
Furthermore, the terms "horizontal", "vertical", "overhang" and the like do not imply that the components are required to be absolutely horizontal or overhang, but may be slightly inclined. For example, "horizontal" merely means that the direction is more horizontal than "vertical" and does not mean that the structure must be perfectly horizontal, but may be slightly inclined.
In the description of the present invention, it should also be noted that, unless otherwise explicitly specified or limited, the terms "disposed," "mounted," "connected," and "coupled" are to be construed broadly and may, for example, denote a fixed connection, a detachable connection, or an integral connection; a mechanical or an electrical connection; a direct connection, an indirect connection through an intermediate medium, or an internal communication between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to the specific circumstances.
The embodiment of the invention provides a real-time moire removing method based on a multi-scale network and dynamic feature coding, which comprises the following steps:
S1, sampling the input moire image through a multi-scale residual network with a plurality of resolution branches to obtain image features at different resolutions;
S2, adding dynamic feature coding to each sampling branch and encoding the image features at the different resolutions;
S3, introducing a branch scaling module for each sampling branch, automatically learning the importance of each branch through back-propagation, and assigning each branch a weight;
and S4, summing the images at the different resolutions to obtain the output image.
The specific method for sampling the input moire image comprises the following steps:
S11, each branch of the multi-scale residual network learns a nonlinear mapping at the original resolution or at the 2X, 4X, 8X, 16X and 32X down-sampled resolutions;
S12, up-sampling the feature back to the original resolution using sub-pixel convolution at the end of each branch (a sketch of such a down-sampling/up-sampling pair follows this list).
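As a rough illustration of steps S11 and S12, the following sketch pairs a strided-convolution down-sampling block with a sub-pixel-convolution up-sampling block. PyTorch, the module names DownBlock/UpBlock and the channel width are assumptions made for illustration only; the patent does not specify an implementation.

```python
import torch
import torch.nn as nn

class DownBlock(nn.Module):
    """Halve the spatial resolution with a strided convolution (hypothetical Down(.) block)."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, stride=2, padding=1)
        self.act = nn.PReLU()

    def forward(self, x):
        return self.act(self.conv(x))

class UpBlock(nn.Module):
    """Upsample by `scale` with sub-pixel convolution (conv + PixelShuffle)."""
    def __init__(self, channels, scale=2):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels * scale * scale, kernel_size=3, padding=1)
        self.shuffle = nn.PixelShuffle(scale)

    def forward(self, x):
        return self.shuffle(self.conv(x))

# usage: a branch working at 1/2 resolution and returning to the original resolution
if __name__ == "__main__":
    x = torch.randn(1, 64, 128, 128)
    down, up = DownBlock(64), UpBlock(64, scale=2)
    y = up(down(x))
    print(y.shape)  # torch.Size([1, 64, 128, 128])
```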
Moire is distributed over both the low-frequency and high-frequency parts of the image. In order to remove moire at different resolutions and combine the results into the reconstructed image while preserving more image detail, a multi-scale network model is built, as shown in fig. 2, in which the high-resolution branches have fewer convolutional layers and the low-resolution branches have more convolutional layers and more complex structures.
In the model, branch 0 keeps the original resolution and uses only a 3-layer convolutional structure. The calculation process of branch 0 is:
F0 = σp(W3(σp(W2(σp(W1(I))))))   (1)
where F0 is the output feature generated by branch 0, I represents the input moire image, σp denotes the PReLU activation function, and W1, W2 and W3 denote the weights of the convolutional layers in the branch.
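A minimal sketch of branch 0 as expressed by equation (1): three convolutional layers, each followed by a PReLU activation, at the original resolution. The framework (PyTorch), the 64-channel width and the 3×3 kernels are assumptions, since the patent does not state them.

```python
import torch.nn as nn

class Branch0(nn.Module):
    """Full-resolution branch: F0 = σp(W3(σp(W2(σp(W1(I)))))), eq. (1).
    The 64-channel width and 3x3 kernels are assumed, not taken from the patent."""
    def __init__(self, in_channels=3, width=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_channels, width, 3, padding=1), nn.PReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.PReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.PReLU(),
        )

    def forward(self, moire_image):
        return self.body(moire_image)
```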
The resolution of branch 1 is half of the original resolution; it stacks CDR residual blocks and adds channel attention. The calculation process of branch 1 can be expressed as:
F1 = Up(CDR3(CDR2(CDR1(Down(F0)))) + Down(F0))   (2)
where F1 represents the output feature of this branch, Down(.) denotes the down-sampling block, Up(.) denotes the up-sampling block, and CDR is the abbreviation of the channel-attention dynamic feature coding residual block.
Referring to fig. 3, the structure of the CDR block is shown. The channel attention operation can be divided into a squeeze part and an excitation part, defined in formula (3) and formula (4), respectively:
S(X_C) = (1 / (H × W)) Σ_{h=1}^{H} Σ_{w=1}^{W} X_C(h, w)   (3)
where S(.) represents the squeeze process using a global average pool, H and W represent the height and width of the feature map, and X_C represents channel C of the input feature map X.
A_c(x) = σ_s(W_u(σ_p(W_d(S(x))))) * x   (4)
where A_c(.) represents the channel attention function, σ_p represents the PReLU function, W_u and W_d represent two 1×1 convolutional layers, and σ_s is a sigmoid function that maps the attention weights into the range 0 to 1.
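Equations (3) and (4) describe a squeeze-and-excitation style channel attention; the sketch below is one possible rendering (PyTorch assumed), with the channel-reduction ratio r as a hypothetical hyper-parameter.

```python
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Channel attention A_c(x) = σs(Wu · σp(Wd · S(x))) * x, eqs. (3)-(4)."""
    def __init__(self, channels, r=16):
        super().__init__()
        self.squeeze = nn.AdaptiveAvgPool2d(1)           # S(.): global average pool, eq. (3)
        self.excite = nn.Sequential(
            nn.Conv2d(channels, channels // r, 1),       # Wd: 1x1 conv, channel reduction
            nn.PReLU(),                                  # σp
            nn.Conv2d(channels // r, channels, 1),       # Wu: 1x1 conv, channel expansion
            nn.Sigmoid(),                                # σs: map weights into (0, 1)
        )

    def forward(self, x):
        return self.excite(self.squeeze(x)) * x          # re-weight the channels of x
```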
The structures of branch 2 through branch 5 are similar to branch 1, except that the number of residual blocks increases; they correspond to spatial resolutions of 1/4, 1/8, 1/16 and 1/32 of the original, respectively.
Fi+1 = Up(NL(CDRk(…(CDR1(Down(Fi)))…) + Down(Fi)))   (5)
where Fi represents the feature from the adjacent higher-resolution branch, Fi+1 represents the feature of the current branch, CDRk denotes the kth CDR block in the branch, and NL(.) denotes the region-level non-local operation at the end of the branch.
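The region-level non-local operation NL(.) is not detailed in this description. As a stand-in only, the sketch below (PyTorch assumed) implements a standard global embedded-Gaussian non-local block; it illustrates the general mechanism but differs from the region-level variant referred to above.

```python
import torch
import torch.nn as nn

class NonLocalBlock(nn.Module):
    """Global embedded-Gaussian non-local block, used here only as a stand-in
    for the region-level non-local operation NL(.)."""
    def __init__(self, channels):
        super().__init__()
        inter = max(channels // 2, 1)
        self.theta = nn.Conv2d(channels, inter, 1)
        self.phi = nn.Conv2d(channels, inter, 1)
        self.g = nn.Conv2d(channels, inter, 1)
        self.out = nn.Conv2d(inter, channels, 1)

    def forward(self, x):
        n, c, h, w = x.shape
        theta = self.theta(x).flatten(2).transpose(1, 2)      # (N, HW, C')
        phi = self.phi(x).flatten(2)                          # (N, C', HW)
        g = self.g(x).flatten(2).transpose(1, 2)              # (N, HW, C')
        attn = torch.softmax(theta @ phi, dim=-1)             # similarity over all positions
        y = (attn @ g).transpose(1, 2).reshape(n, -1, h, w)   # aggregate features
        return x + self.out(y)                                # residual connection
```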
The dynamic feature coding is mainly used to enhance the ability of the model to cope with dynamic textures, and it is used in every resolution branch of the multi-scale residual network. The specific operation steps are as follows:
S31, designing a bypass branch for each main-scale branch;
S32, encoding the image features at different spatial resolutions;
S33, embedding the additional bypass branch into each main-scale branch through adaptive instance normalization (AdaIN) to dynamically adjust the parameters of the main-scale branch.
The specific method for adjusting the main-scale branch parameters is as follows:
A1, calculating the mean and variance of the feature maps of the coding branch corresponding to each residual block of the backbone network;
A2, transmitting the mean and variance of step A1 to the adaptive instance normalization (AdaIN) of the trunk network, and calculating the moire statistic to adjust the parameters of the trunk branch.
The mean of the feature map is calculated as:
μ_enc^i = (1 / (H × W)) Σ_{h=1}^{H} Σ_{w=1}^{W} x_enc^i(h, w)   (6)
and the variance of the feature map is calculated as:
(σ_enc^i)² = (1 / (H × W)) Σ_{h=1}^{H} Σ_{w=1}^{W} (x_enc^i(h, w) − μ_enc^i)²   (7)
where H and W represent the height and width of the feature map, μ_enc^i is the mean of the feature x_enc^i from the ith coding layer in the dynamic feature coding branch, and σ_enc^i is its standard deviation.
The moire statistic is then obtained through adaptive instance normalization:
AdaIN(x_i) = σ_enc^i · (x_i − μ(x_i)) / σ(x_i) + μ_enc^i   (8)
where μ(x_i) and σ(x_i) represent the statistical information from the trunk branch, and x_i represents the feature map of the ith residual block in the trunk branch.
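The statistics of equations (6)-(7) and the adaptive instance normalization of equation (8) can be sketched as follows (PyTorch assumed; the small eps added for numerical stability is not part of the formulas above).

```python
import torch

def channel_stats(x, eps=1e-5):
    """Per-sample, per-channel mean and standard deviation over H and W (eqs. (6)-(7))."""
    mu = x.mean(dim=(2, 3), keepdim=True)
    var = ((x - mu) ** 2).mean(dim=(2, 3), keepdim=True)
    return mu, torch.sqrt(var + eps)

def adain(x_trunk, x_enc):
    """AdaIN(x_i) = σ_enc · (x_i − μ(x_i)) / σ(x_i) + μ_enc, eq. (8).

    x_trunk: feature map of the i-th residual block in the trunk branch, shape (N, C, H, W)
    x_enc:   feature map from the i-th coding layer of the dynamic feature coding branch
    """
    mu_x, sigma_x = channel_stats(x_trunk)   # statistics of the trunk feature
    mu_e, sigma_e = channel_stats(x_enc)     # statistics of the coding-branch feature
    # normalize the trunk feature and re-style it with the coding-branch statistics
    return sigma_e * (x_trunk - mu_x) / sigma_x + mu_e
```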
Since the distributions of moire patterns and of image details differ across the resolution branches, a branch scaling module is introduced at each sampling branch; the importance of each branch is learned automatically through back-propagation and each branch is assigned a weight. The final image reconstruction process is:
Î = Σ_{i=0}^{5} S_i(F_i)   (9)
where Î represents the reconstructed clean image, S(.) is the scaling function, and Fi represents the output feature of each branch.
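A minimal sketch of the branch scaling of equation (9), realized here as one learnable scalar per branch trained by back-propagation; the exact form of S(.) is not stated above, so the scalar weighting and its initialization are assumptions.

```python
import torch
import torch.nn as nn

class BranchScaling(nn.Module):
    """Weight each branch output with a learnable scalar and sum them, eq. (9)."""
    def __init__(self, num_branches=6):
        super().__init__()
        # one learnable weight per branch; initializing to 1.0 is an assumption
        self.weights = nn.Parameter(torch.ones(num_branches))

    def forward(self, branch_features):
        # branch_features: list of tensors F_i, all upsampled to the original resolution
        return sum(w * f for w, f in zip(self.weights, branch_features))
```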
Since directly optimizing the mean square error (MSE) tends to over-smooth and thus blur the image, a Charbonnier loss is minimized instead. Specifically, the loss function is defined in equation (10):
L = (1/N) Σ_{n=1}^{N} sqrt(||Î_n − I_n||² + ε²)   (10)
where N represents the batch size, Î and I denote the image generated by the reconstruction network and the moire-free ground-truth image, respectively, and ε is the parameter of the Charbonnier penalty, set to 0.001 in this example.
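A minimal sketch of the Charbonnier loss of equation (10) (PyTorch assumed), with ε = 0.001 as stated above.

```python
import torch

def charbonnier_loss(pred, target, eps=1e-3):
    """L = (1/N) Σ_n sqrt(||pred_n − target_n||² + ε²), eq. (10); N is the batch size."""
    sq_err = ((pred - target) ** 2).flatten(start_dim=1).sum(dim=1)  # ||·||² per image
    return torch.sqrt(sq_err + eps ** 2).mean()                      # average over the batch
```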
To further verify the effectiveness of this embodiment, peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) were used as performance evaluation indicators in the examples. As shown in Table 1 below, the results of the proposed multi-scale residual network (MDDM) are clearly superior to the methods disclosed in other papers.
[Table 1: quantitative PSNR/SSIM comparison of the proposed MDDM with other methods; the table is provided as an image in the original publication and its values are not reproduced here.]
Next, referring to fig. 4, the multi-scale residual network proposed in this scheme is compared with several other deep-learning-based methods. From left to right, fig. 4 shows the moire image, the results of DnCNN, MSFE and Sun, the result of the proposed multi-scale residual network (MDDM), and the reference original image. The peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) are displayed below each image. The visual results show that the proposed MDDM method is clearly superior to the other methods: it removes moire patterns more cleanly, while the other methods introduce more artifacts or do not remove the moire completely.
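The PSNR and SSIM figures reported above can be computed, for example, with scikit-image; the sketch below is an assumption about tooling (the patent does not name an implementation), expects float images in [0, 1], and uses channel_axis, which requires scikit-image 0.19 or newer.

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(demoired, ground_truth):
    """Compute PSNR and SSIM between a demoired image and the moire-free reference.
    Both inputs are float arrays in [0, 1] with shape (H, W, 3)."""
    psnr = peak_signal_noise_ratio(ground_truth, demoired, data_range=1.0)
    ssim = structural_similarity(ground_truth, demoired, data_range=1.0, channel_axis=-1)
    return psnr, ssim
```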
Therefore, the method removes moire fringes in different frequency bands of the image through the multi-scale residual network structure while retaining more image detail, and introduces dynamic feature coding on the branches of the multi-scale residual network structure so that the moire can be encoded dynamically, allowing the model to better cope with the variability of moire patterns and further improving the moire removal effect.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
The principles and embodiments of the present invention are explained herein using specific examples, which are presented only to assist in understanding the method and its core concepts. It should be noted that, owing to the limits of textual expression, not every possible structure is described; those skilled in the art may make various improvements, modifications or changes without departing from the principle of the invention, or may combine the above technical features in a suitable manner. Such modifications, variations, combinations, or adaptations made using the spirit of the invention may be directed to other uses and embodiments, all of which fall within the scope defined by the claims.

Claims (6)

1. A real-time moire removing method based on a multi-scale network and dynamic feature coding is characterized by comprising the following steps:
S1, sampling the input moire image through a multi-scale residual network with a plurality of resolution branches to obtain image features at different resolutions;
S2, adding dynamic feature coding to each sampling branch and encoding the image features at the different resolutions;
S3, introducing a branch scaling module for each sampling branch, automatically learning the importance of each branch through back-propagation, and assigning each branch a weight;
and S4, summing the images at the different resolutions to obtain the output image.
2. The method for removing moire fringes in real time based on a multi-scale network and dynamic feature coding according to claim 1, wherein the specific method for sampling the moire image in step S1 is as follows:
S11, each branch of the multi-scale residual network learns a nonlinear mapping at the original resolution or at the 2X, 4X, 8X, 16X and 32X down-sampled resolutions;
S12, up-sampling the feature back to the original resolution using sub-pixel convolution at the end of each branch.
3. The method for removing moire fringes in real time based on a multi-scale network and dynamic feature coding according to claim 1, wherein said step S2 comprises the following sub-steps:
S31, designing a bypass branch for each main-scale branch;
S32, encoding the image features at different spatial resolutions;
S33, embedding the additional bypass branch into each main-scale branch through adaptive instance normalization (AdaIN) to dynamically adjust the parameters of the main-scale branch.
4. The method for removing moire fringes in real time based on a multi-scale network and dynamic feature coding according to claim 3, wherein the specific method for adjusting the main-scale branch parameters in step S33 is as follows:
A1, calculating the mean and variance of the feature maps of the coding branch corresponding to each residual block of the backbone network;
A2, transmitting the mean and variance of step A1 to the adaptive instance normalization (AdaIN) of the trunk network, and calculating the moire statistic to adjust the parameters of the trunk branch.
5. The method for removing moire fringes in real time based on a multi-scale network and dynamic feature coding according to claim 4, wherein the mean of the feature map in step A1 is calculated as:
μ_enc^i = (1 / (H × W)) Σ_{h=1}^{H} Σ_{w=1}^{W} x_enc^i(h, w)
and the variance of the feature map is calculated as:
(σ_enc^i)² = (1 / (H × W)) Σ_{h=1}^{H} Σ_{w=1}^{W} (x_enc^i(h, w) − μ_enc^i)²
where H and W represent the height and width of the feature map, μ_enc^i is the mean of the feature x_enc^i from the ith coding layer in the dynamic feature coding branch, and σ_enc^i is its standard deviation.
6. The method for removing moire fringes in real time based on a multi-scale network and dynamic feature coding according to claim 4, wherein the moire statistic in step A2 is calculated through adaptive instance normalization as:
AdaIN(x_i) = σ_enc^i · (x_i − μ(x_i)) / σ(x_i) + μ_enc^i
where μ(x_i) and σ(x_i) represent the statistical information from the trunk branch, and x_i represents the feature map of the ith residual block in the trunk branch.
CN202010385721.4A 2020-05-09 2020-05-09 Real-time moire removing method based on multi-scale network and dynamic feature coding Pending CN111586298A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010385721.4A CN111586298A (en) 2020-05-09 2020-05-09 Real-time moire removing method based on multi-scale network and dynamic feature coding


Publications (1)

Publication Number Publication Date
CN111586298A (zh) 2020-08-25

Family

ID=72112098

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010385721.4A Pending CN111586298A (en) 2020-05-09 2020-05-09 Real-time moire removing method based on multi-scale network and dynamic feature coding

Country Status (1)

Country Link
CN (1) CN111586298A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111161166A (en) * 2019-12-16 2020-05-15 西安交通大学 Image moire eliminating method based on depth multi-resolution network
CN112184591A (en) * 2020-09-30 2021-01-05 佛山市南海区广工大数控装备协同创新研究院 Image restoration method based on deep learning image Moire elimination
CN114596479A (en) * 2022-01-29 2022-06-07 大连理工大学 Image moire removing method and device suitable for intelligent terminal and storage medium
CN115272131A (en) * 2022-08-22 2022-11-01 苏州大学 Image Moire pattern removing system and method based on self-adaptive multi-spectral coding


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5125045A (en) * 1987-11-20 1992-06-23 Hitachi, Ltd. Image processing system
US20110273728A1 (en) * 2010-05-06 2011-11-10 Canon Kabushiki Kaisha Image forming apparatus, image forming method and program
CN106131391A (en) * 2016-08-31 2016-11-16 宇龙计算机通信科技(深圳)有限公司 A kind of image pickup method and device
CN108573477A (en) * 2018-03-14 2018-09-25 深圳怡化电脑股份有限公司 Eliminate method, system and the terminal device of image moire fringes
CN110287969A (en) * 2019-06-14 2019-09-27 大连理工大学 Mole text image binaryzation system based on figure residual error attention network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XI CHENG et al.: "Multi-scale Dynamic Feature Encoding Network for Image Demoireing", 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW) *


Similar Documents

Publication Publication Date Title
CN111586298A (en) Real-time moire removing method based on multi-scale network and dynamic feature coding
Lim et al. DSLR: Deep stacked Laplacian restorer for low-light image enhancement
CN109064396B (en) Single image super-resolution reconstruction method based on deep component learning network
Chang et al. Effective use of spatial and spectral correlations for color filter array demosaicking
CN107123089B (en) Remote sensing image super-resolution reconstruction method and system based on depth convolution network
CN111598778B (en) Super-resolution reconstruction method for insulator image
Hwang et al. Adaptive image interpolation based on local gradient features
CN111709895A (en) Image blind deblurring method and system based on attention mechanism
CN111260580B (en) Image denoising method, computer device and computer readable storage medium
CN112102163B (en) Continuous multi-frame image super-resolution reconstruction method based on multi-scale motion compensation framework and recursive learning
JP2003018398A (en) Method for generating a super-resolution image from pixel image
JP4328424B2 (en) Image conversion method
CN111091503A (en) Image out-of-focus blur removing method based on deep learning
CN113850741B (en) Image noise reduction method and device, electronic equipment and storage medium
CN114881888A (en) Video Moire removing method based on linear sparse attention transducer
CN111696033A (en) Real image super-resolution model and method for learning cascaded hourglass network structure based on angular point guide
CN113538246A (en) Remote sensing image super-resolution reconstruction method based on unsupervised multi-stage fusion network
CN108717690B (en) Method for synthesizing high dynamic range picture
Yue et al. Image noise estimation and removal considering the bayer pattern of noise variance
CN114022356A (en) River course flow water level remote sensing image super-resolution method and system based on wavelet domain
Feng et al. Real-world non-homogeneous haze removal by sliding self-attention wavelet network
CN111353982B (en) Depth camera image sequence screening method and device
CN113298740A (en) Image enhancement method and device, terminal equipment and storage medium
Din et al. RETRACTED ARTICLE: Lightweight deep dense Demosaicking and Denoising using convolutional neural networks
CN115272131B (en) Image mole pattern removing system and method based on self-adaptive multispectral coding

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200825