CN113592742A - Method for removing image moire - Google Patents

Method for removing image moire

Info

Publication number
CN113592742A
CN113592742A (application CN202110907877.9A)
Authority
CN
China
Prior art keywords
network
domain
moire
loss
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110907877.9A
Other languages
Chinese (zh)
Inventor
郭晓杰
田巧雨
汪海铃
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN202110907877.9A priority Critical patent/CN113592742A/en
Publication of CN113592742A publication Critical patent/CN113592742A/en
Pending legal-status Critical Current

Classifications

    • G06T5/80
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20048Transform domain processing
    • G06T2207/20064Wavelet transform [DWT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning

Abstract

The invention discloses a method for removing image moiré based on a dual-domain, dual-branch network structure spanning the spatial and frequency domains. The structure comprises a student network with a dual-branch architecture for removing image moiré and two teacher sub-networks for supervising the training of the student network; the two student branches consist of a spatial-domain module and a frequency-domain module, and the teacher networks consist of a spatial-domain teacher network and a frequency-domain teacher network. By combining the knowledge representations of the spatial and frequency domains so that they complement each other, the invention improves the restoration of moiré images and better preserves the pattern detail and color of the image.

Description

Method for removing image moire
Technical Field
The invention belongs to the field of removal of interference pixels in an image, and particularly relates to a method for removing image moire.
Background
Existing methods for removing image moiré are based on deep neural networks. One line of work proposes a multi-resolution convolutional neural network that removes moiré by exploiting the fact that moiré patterns span multiple frequency bands. Subsequent studies use multi-scale network structures to remove moiré fringes, and some prior art introduces additional labeled data based on shape, color, and frequency characteristics to remove moiré patterns more accurately. However, performing moiré removal in the pixel domain alone yields unsatisfactory results. Other work uses frequency priors to better distinguish moiré patterns from native image patterns. While the above methods can produce effective results, they rarely consider prior knowledge of the spatial and frequency domains simultaneously.
Because the sub-pixel layouts of a camera sensor array and a liquid crystal display differ slightly in spatial frequency, moiré fringes appear in pictures taken of a screen, forming pattern artifacts of various shapes and colors and seriously degrading the quality of the captured image. Removing moiré from screen-shot images is challenging, and manual retouching takes considerable time, so automatic moiré removal is crucial. The method provided by the invention can be applied to moiré-contaminated pictures taken by a mobile phone or camera to restore the image.
Disclosure of Invention
In order to solve the above technical problems in the prior art, the invention provides a method for removing image moiré that exploits the different emphases and capabilities of a spatial-domain module and a frequency-domain module for restoring an image: the image restored by the spatial-domain module better preserves the main texture-structure information of the image, while the frequency-domain module better restores the high-frequency detail of the image.
The invention adopts the following technical scheme:
a method for removing image moire is a dual-domain and dual-branch network structure based on a space domain and a frequency domain, wherein the dual-domain and dual-branch network structure comprises a student network with a dual-branch structure for removing image moire and two teacher sub-networks for supervising the training of the student network; the student network double branch is composed of a space domain module and a frequency domain module, and the teacher network is composed of a space domain teacher network and a frequency domain teacher network; wherein: the method comprises the following steps of:
in the first stage, the two teacher networks are trained to reconstruct clear images in the spatial domain and the frequency domain, extracting rich feature representations from clear images; the L1 distance between the reconstruction result and the ground truth is taken as the reconstruction loss, defined as follows:

$$\mathcal{L}_{rec} = \left\| f_{rec}(I_{gt}; \theta_{rec}) - I_{gt} \right\|_1$$

where $f_{rec}(I_{gt}; \theta_{rec})$ is the reconstruction result and $I_{gt}$ is the corresponding ground truth;
in the second stage, the student network is trained with a combination of the moiré removal loss, the feature simulation loss, and the perceptual loss, and the demoiréd image is output; the overall objective function is defined as follows:

$$\mathcal{L} = \lambda_1 \mathcal{L}_{dem} + \lambda_2 \mathcal{L}_{fs} + \lambda_3 \mathcal{L}_{per}$$

where $\lambda_1, \lambda_2, \lambda_3$ are coefficients balancing the losses.
Furthermore, the two teacher networks each consist of a down-sampling module, a backbone module, and an up-sampling module, where the backbone module contains 6 residual blocks. The two teacher networks reconstruct clear, moiré-free images in their respective domains and provide intermediate feature representations of clear images to the student network; in the frequency-domain teacher network, the discrete wavelet transform and its inverse replace the down-sampling and up-sampling modules.
Further, the spatial-domain module has a three-scale structure; the spatial branch of the student network is formed by stacking 6 spatial blocks of 64 channels, and each feature map is generated by down-sampling. The top branch of a spatial block processes moiré at the original scale, while the other two branches process moiré at coarser scales: two down-sampling modules reduce the original input to one half and one quarter, and the three scales are then fed into three convolutional layers to capture the output feature map of each branch. The two coarser outputs are up-sampled to match the original size of the highest scale; finally, the feature maps of the branches are combined as the output of the spatial block.
Further, the feature map generated by each frequency-domain module comprises 48 channels; 4 residual blocks with ReLU are placed at the front of the frequency-domain module, and a channel attention module is then applied to the output features of that part. The attention module applies global average pooling, followed by two fully connected layers and a Sigmoid function, to learn a weight for each channel; finally, each channel of the feature map is multiplied by its weight, so the channels most useful for moiré removal are selected automatically through learning.

Further, the moiré removal loss represents the distance between the demoiréd result and the ground truth, given by:

$$\mathcal{L}_{dem} = \left\| f_{dem}(I_{moire}; \theta_{dem}) - I_{gt} \right\|_1$$

where $f_{dem}(I_{moire}; \theta_{dem})$ is the demoiréing result estimated by the student network.
Further, the feature simulation loss consists of a spatial-domain part and a frequency-domain part; each part is defined by the L1 distance between the feature maps at several candidate stages of the student network and those of its corresponding domain teacher network. The formula for the feature simulation loss is:

$$\mathcal{L}_{fs} = \sum_{(m, n, \mu) \in \Omega} \mu \left\| S_m(I_{moire}) - T_n(I_{gt}) \right\|_1$$

where $m$, $n$, and $\mu$ denote the student layer index, the teacher layer index, and the weight of the feature simulation loss at each stage, respectively, and $\Omega$ is the set of candidate triplets. The total feature simulation loss consists of two parts and can be written as:

$$\mathcal{L}_{fs} = \mathcal{L}_{fs}^{s} + \mathcal{L}_{fs}^{f}$$
further, the perceptual loss is used for measuring the similarity of high semantic features, and the description formula of the perceptual loss is as follows:
Figure BDA0003202397450000034
wherein phil(z) is the l-th layer feature map of z in the pre-trained high semantic feature extractor network phi,
Figure BDA0003202397450000035
representing the sum of the selected layers.
Advantageous effects
1. The present invention can accurately remove moiré from a single image through priors in the spatial and frequency domains of a dual-domain network. Dual-domain prior: training directly on RGB images versus converting the input to the frequency domain before training leads the network to learn knowledge with different expressions and emphases. The invention considers the spatial and frequency domains together so that the knowledge representations of the two domains complement each other, improving the restoration of moiré images and better preserving the pattern detail and color of the image.
2. The invention introduces a process-guided learning strategy to guide the moiré removal process, designs a process-guided loss to measure the feature similarity between the teacher and student networks, and realizes end-to-end moiré removal. Dual-domain process guidance mechanism: a dual-domain process guidance mechanism is introduced into the training framework. The network structure is divided into several stages from shallow to deep, and the learning process is supervised by minimizing the difference between the feature maps at the candidate stages of the student sub-network and of the two teacher sub-networks, so that the student sub-network can better learn information helpful for image restoration from the clear-image feature expressions provided by the teacher sub-networks.
3. Generalization refers to a model's performance on fresh samples not seen during learning; generalization experiments show that the method generalizes well, i.e., the model performs well even on unseen images whose distribution differs considerably from the training data.
4. For some application scenarios, the efficiency of a model is also important, provided its effect is guaranteed. Although the network structure of the invention comprises three sub-networks, only the student sub-network is retained in the testing stage, so the test-time model is only 16.0 MB; the model is lightweight and fast, demonstrating the efficiency and strong practical value of the invention.
5. The effectiveness of the invention in removing image moiré has been verified by extensive experiments.
Drawings
FIG. 1 is a schematic diagram of a network structure of a method for removing moire in an image according to the present invention.
FIG. 2 is a schematic diagram of a spatial domain module structure in a method for removing moire in an image according to the present invention.
FIG. 3 is a schematic diagram of a frequency domain module structure in a method for removing moire in an image according to the present invention.
Fig. 4 is a comparison of the visual effect of the present invention with that of the prior art.
Fig. 5 is a visual comparison of ablation experiments of the present invention for different structural modules.
Fig. 6 is a visual comparison of a sample with a lower index in the recovery results of the present invention with a true sharp image.
Detailed Description
In order to make the objects, technical solutions, and advantages of the present invention clearer, the invention is discussed in detail below with reference to the accompanying drawings and examples, which are illustrative only, not limiting, and do not limit the scope of the invention.
As shown in FIG. 1, the present invention provides a method for removing moiré in an image based on a dual-domain, dual-branch moiré-removal distillation network in the spatial and frequency domains. The neural network framework of the method is trained end-to-end and comprises three sub-networks: one student sub-network with a dual-branch structure for image moiré removal, and two teacher sub-networks for supervising the training of the student sub-network. The invention adopts a dual-domain process guidance mechanism and then designs the network layout, the loss functions, and the specific experiments. Wherein:
the dual domain procedure boot mechanism: the invention constructs two teacher networks (a frequency domain teacher network and a space domain teacher network) to promote the training of the Moire pattern removing network on different domains. As shown in fig. 1, both teacher networks consist of a down-sampling module 04, a backbone module and an up-sampling module 05. The backbone module contains 6 residual blocks 01. In the frequency teacher network, Discrete Wavelet Transform (DWT)06 and inverse transform (IDWT)07 are employed instead of the down-sampling and up-sampling modules performing discrete wavelet transform, respectively. Two teacher networks perform image reconstruction on a specific domain to provide intermediate feature representation of a clear image for student networks, i.e., moir é removal networks.
Except for the backbone module and the fusion module, the student network is similar in architecture to the teacher networks. As shown in fig. 1, the student network is composed of two modules: a spatial-domain module 02 and a frequency-domain module 03. At several candidate stages of each module, its features are mapped toward the clear feature representation of the corresponding teacher network. To control the students' learning process naturally, the invention relaxes the constraint at shallow stages and strengthens it at deep stages.
Network structure: the dual-domain, dual-branch network takes advantage of two different domains that complement each other. The invention adopts two modules for the two different domains and captures their outputs after processing. Finally, the two outputs are concatenated and fused by a simple 1 × 1 convolution to form the final demoiréing result of the proposed framework. The student network includes:
a spatial domain module: since moire patterns span multiple scales in the spatial domain, the moire pattern can be processed by establishing each spatial module as a multi-scale architecture. The arrangement of the spatial domain modules is a three-dimensional structure as shown in fig. 2. The spatial branch of the student network is formed by stacking 6 such 64-channel spatial blocks, each signature graph being generated by downsampling. The top branch of the space block handles moir é at the original scale, and the remaining two branches handle moir é at a coarser scale. Two downsampling modules adjust the original input to half and quarter, respectively, and then input three different scales into three sets of convolutional layers to capture the output profile of each branch. The two coarser outputs are upsampled to fit their sizes to the original size of the highest scale. Finally, the feature maps of the branches are combined together as the output of the space block.
Frequency-domain module: the invention exploits the fact that moiré is apparent in particular wavelet sub-bands and is therefore easier to remove after a wavelet transform. In order to remove moiré effectively and restore image detail, the invention adds a frequency-domain module to the moiré removal network. The feature maps are generated by the wavelet transform, and each frequency-domain block contains 48 channels. As shown in fig. 3, there are 4 residual blocks with ReLU at the front of the frequency-domain module, and a channel attention module is then applied to the output features of this part. The attention module employs a global average pooling operation followed by two fully connected layers and a Sigmoid function to learn a weight for each channel. Finally, each channel of the feature map is multiplied by its weight, which means the block can automatically select, through learning, the channels most useful for moiré removal.
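The channel attention step just described can be sketched as follows. This is a hedged NumPy illustration of the mechanism only (pooling, two fully connected layers, Sigmoid, channel-wise rescaling); the layer widths, the ReLU between the two FC layers, and the weight values are assumptions, since the patent specifies none of them.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(feat, w1, b1, w2, b2):
    """Channel attention as described: global average pooling over each of
    the C channels, two fully connected layers, and a Sigmoid producing
    per-channel weights in (0, 1), used to rescale the (C, H, W) features."""
    pooled = feat.mean(axis=(1, 2))             # (C,) global average pooling
    hidden = np.maximum(0.0, pooled @ w1 + b1)  # first FC + assumed ReLU
    weights = sigmoid(hidden @ w2 + b2)         # (C,) per-channel weights
    return feat * weights[:, None, None]        # rescale each channel
```

Because the Sigmoid bounds every weight in (0, 1), the module can only attenuate channels, never amplify them, which is how it suppresses wavelet sub-bands that are unhelpful for moiré removal.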
Reconstruction loss: in this architecture, the training of the student network relies on two pre-trained teacher networks. Therefore, in the first stage, the two teacher networks are trained to reconstruct clear images in the spatial and frequency domains, extracting rich feature representations from clear images. The L1 distance between the reconstruction result and the ground truth is taken as the reconstruction loss, defined as follows:

$$\mathcal{L}_{rec} = \left\| f_{rec}(I_{gt}; \theta_{rec}) - I_{gt} \right\|_1$$

where $f_{rec}(I_{gt}; \theta_{rec})$ is the reconstruction result and $I_{gt}$ is the corresponding ground truth. Once the teacher network models are trained, their weights are frozen; after that, training of the student network begins.
Moiré removal loss: the moiré removal loss represents the distance between the demoiréd result and the ground truth, given by:

$$\mathcal{L}_{dem} = \left\| f_{dem}(I_{moire}; \theta_{dem}) - I_{gt} \right\|_1$$

where $f_{dem}(I_{moire}; \theta_{dem})$ is the demoiréing result estimated by the student network.
Feature simulation loss: the feature simulation loss consists of a spatial-domain part and a frequency-domain part; the invention uses this loss to guide the learning process of the student network in feature space. Each part is defined by the L1 distance between the feature maps at several candidate stages of the student network and those of its corresponding domain teacher network. To simplify the formula, the invention lets $S_m(I_{moire})$ be the $m$-th layer feature map of the moiré input in the student network and $T_n(I_{gt})$ the $n$-th layer feature map of the corresponding clear image in the teacher network. The formula for the feature simulation loss is:

$$\mathcal{L}_{fs} = \sum_{(m, n, \mu) \in \Omega} \mu \left\| S_m(I_{moire}) - T_n(I_{gt}) \right\|_1$$

where $m$, $n$, and $\mu$ denote the student layer index, the teacher layer index, and the weight of the feature simulation loss at each stage, respectively, and $\Omega$ is the set of candidate triplets. The total feature simulation loss consists of two parts and can be written as:

$$\mathcal{L}_{fs} = \mathcal{L}_{fs}^{s} + \mathcal{L}_{fs}^{f}$$
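The per-branch loss above can be sketched directly from its definition. In this hedged NumPy illustration the feature maps are plain arrays, the triplet set $\Omega$ is an explicit list of `(m, n, mu)` tuples, and a mean (rather than sum) L1 distance is one assumed reduction.

```python
import numpy as np

def l1(a, b):
    """Mean L1 distance between two feature maps (reduction is an assumption)."""
    return np.abs(a - b).mean()

def feature_simulation_loss(student_feats, teacher_feats, triplets):
    """Feature simulation loss for one domain branch: for each candidate
    triplet (m, n, mu), the L1 distance between the student's m-th feature
    map and the teacher's n-th feature map, weighted by mu, summed over
    the triplet set Omega."""
    return sum(mu * l1(student_feats[m], teacher_feats[n])
               for m, n, mu in triplets)
```

The total loss would then be `feature_simulation_loss(...)` evaluated once for the spatial branch and once for the frequency branch, and added.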
loss of perception: the perception loss is adopted to measure the similarity of high semantic features, and a satisfactory effect is achieved. The characteristic loss is described by the following formula:
Figure BDA0003202397450000075
wherein phil(z) is the l-th layer feature map of z in the pre-trained high semantic feature extractor network phi,
Figure BDA0003202397450000076
representing the aggregate of the selected layers. The invention adopts VGG-19[9 ]]As a feature extractor, the extractor is obtained by performing image classification training on ImageNet.
Overall loss: in training the student network, the overall objective function is a combination of the moiré removal loss, the feature simulation loss, and the perceptual loss, defined as follows:

$$\mathcal{L} = \lambda_1 \mathcal{L}_{dem} + \lambda_2 \mathcal{L}_{fs} + \lambda_3 \mathcal{L}_{per}$$

where $\lambda_1, \lambda_2, \lambda_3$ are coefficients balancing the losses.
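The combination of the three terms can be sketched as below. This is a hedged NumPy illustration of the bookkeeping only: the feature simulation and perceptual terms are passed in pre-computed, and the default $\lambda$ values are illustrative, since the patent does not fix the coefficients.

```python
import numpy as np

def l1(a, b):
    return np.abs(a - b).mean()  # mean L1 distance

def overall_loss(output, gt, fs_loss, per_loss, lambdas=(1.0, 0.1, 0.01)):
    """Overall objective: moiré removal loss (L1 to ground truth) plus
    pre-computed feature simulation and perceptual terms, balanced by
    coefficients lambda_1..lambda_3 (illustrative default values)."""
    lam1, lam2, lam3 = lambdas
    dem = l1(output, gt)  # moiré removal loss
    return lam1 * dem + lam2 * fs_loss + lam3 * per_loss
```

In practice the coefficients trade off pixel fidelity against feature-level guidance and would be tuned on a validation set.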
According to experimental observation, the spatial-domain module and the frequency-domain module have different emphases and different capabilities for restoring an image: the image restored by the spatial-domain module better preserves the main texture-structure information, while the frequency-domain module better restores the high-frequency detail. Therefore, the invention considers the spatial and frequency domains simultaneously, aiming to make information from the two domains complement each other and thus obtain better restoration results.
The method trains the two teacher sub-networks to reconstruct moiré-free images in the spatial domain and the frequency domain, respectively, so that they learn the feature expression of moiré-free images in each domain. The invention proposes a feature simulation loss that minimizes the difference between the feature maps at different stages of the two branches of the student sub-network and of the corresponding domain teacher sub-networks, guiding the learning of the student sub-network; this training strategy is called the dual-domain process guidance mechanism. The structures of the two teacher sub-networks are very similar; they differ in that the frequency-domain teacher sub-network replaces the down-sampling and up-sampling operations of the spatial-domain teacher sub-network with the wavelet transform (DWT) and inverse wavelet transform (IDWT), and in the network blocks stacked in the backbone.
The method proposed by the present invention is compared with several other moiré restoration methods: DnCNN, DMCNN, MSFE, and MBCNN. The present invention selects the widely used LCDMoire dataset to compare the proposed method with the above methods quantitatively and qualitatively. To evaluate restoration quality, peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) are used as quantitative indicators; the index comparison of the methods is shown in Table 1, and the visual comparison is shown in fig. 4.
TABLE 1. Quantitative comparison of the proposed method with other methods (table presented as an image in the original document)
DnCNN is designed for the task of image denoising and does not consider the characteristics of moiré images. Table 1 shows that DnCNN has limited restoration capability and the lowest indices compared with methods specifically designed for moiré image restoration. DMCNN and MSFE, designed as multi-scale structures, have better restoration capability than DnCNN, but neither considers the frequency-domain information that is very important for moiré image restoration, so their indices are much lower than those of the present invention and MBCNN. Both MBCNN and the method of the present invention consider spatial- and frequency-domain information. Although MBCNN attains higher PSNR and SSIM than the present invention, the invention shows little visual difference from MBCNN in most test results, and on some difficult cases it restores better: for the test image in the first row of fig. 4, the invention removes moiré more thoroughly, while MBCNN does not. In addition, the model of the present invention is 16.0 MB, smaller and lighter than the 54.5 MB of MBCNN.
To verify the generalization capability of the network architecture of the present invention, we randomly selected 100 pairs of moiré-degraded and clear images from the TIP2018 dataset to form a test set. All compared methods were trained on the LCDMoire dataset and then tested on this set. Quantitative results of the generalization test are shown in Table 2; the method attains the highest PSNR, 0.24 dB higher than MBCNN, demonstrating that the model generalizes more strongly.
TABLE 2. Generalization comparison of the proposed method with other methods (table presented as an image in the original document)
The effectiveness of each module in the model is also discussed through ablation experiments on the spatial-domain and frequency-domain branches of the teacher and student sub-networks and on the attention module.
TABLE 3. Quantitative comparison for ablation experiments on different structural modules (table presented as an image in the original document)

Teacher sub-networks: the teacher sub-networks guide the training process of the student network by minimizing, through the feature simulation loss, the differences between the clear-image feature expressions and the intermediate feature maps of the student sub-network. To verify the effectiveness of the teacher sub-networks and the feature simulation loss, the invention removes them for comparison. As shown in Table 3 and FIG. 5, w/o PGM denotes removing the teacher sub-networks and the feature simulation loss; its PSNR is 1.53 dB lower than that of the complete model, and much moiré remains unremoved, demonstrating the effectiveness of the teacher sub-networks and the feature simulation loss.
Spatial-domain module and frequency-domain module: since the student sub-network has a dual-branch spatial/frequency-domain structure, it is necessary to verify the effectiveness of each branch. We removed the spatial-domain and frequency-domain branches separately to test the model's single-branch ability. In Table 3 and FIG. 5, w/o $B_s$ and w/o $B_f$ denote dropping the spatial-domain branch and the frequency-domain branch, respectively. Compared with the complete model, PSNR drops by 6.24 dB without the spatial-domain branch and by 3.79 dB without the frequency-domain branch, with visibly more moiré residue. These results effectively prove the effectiveness of the dual-domain, dual-branch strategy.
Attention module: in order to verify the effect of the channel attention (CA) module, the invention removes the attention module for training and testing. As shown in Table 3, the model without channel attention suffers a PSNR reduction of about 3.23 dB compared with the full model. The reason is that the channel attention module can adaptively select the channels most favorable for moiré image restoration, i.e., the corresponding wavelet sub-bands, giving the network stronger restoration capability.
The average index of the method is lower than that of MBCNN, but in most test samples the restoration result of the proposed method is visually almost indistinguishable from that of MBCNN. In fig. 6, we give several test examples with lower indices: the upper row shows the true sharp images, and below are the restoration results of the invention with the corresponding PSNR and SSIM indices. Even though these examples have low indices, it is difficult to observe a visual difference between them and the corresponding sharp images.
The present invention is not limited to the above-described embodiments. The foregoing description of the embodiments is intended to describe and illustrate the invention and is provided for the purpose of illustration only and not for the purpose of limitation. Those skilled in the art can make many changes and modifications to the invention without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (7)

1. A method of removing moiré from an image, characterized in that: the method is based on a dual-domain, dual-branch network structure spanning the spatial and frequency domains, wherein the structure comprises a student network with a dual-branch architecture for removing image moiré and two teacher sub-networks for supervising the training of the student network; the two student branches consist of a spatial-domain module and a frequency-domain module, and the teacher networks consist of a spatial-domain teacher network and a frequency-domain teacher network; the method comprises the following steps:
in the first stage, the two teacher networks are trained to reconstruct clear images in the spatial domain and the frequency domain, extracting rich feature representations from clear images; the L1 distance between the reconstruction result and the ground truth is taken as the reconstruction loss, defined as follows:

$$\mathcal{L}_{rec} = \left\| f_{rec}(I_{gt}; \theta_{rec}) - I_{gt} \right\|_1$$

where $f_{rec}(I_{gt}; \theta_{rec})$ is the reconstruction result and $I_{gt}$ is the corresponding ground truth;
in the second stage, the student network is trained with a combination of the moiré removal loss, the feature simulation loss, and the perceptual loss, and the demoiréd image is output; the overall objective function is defined as follows:

$$\mathcal{L} = \lambda_1 \mathcal{L}_{dem} + \lambda_2 \mathcal{L}_{fs} + \lambda_3 \mathcal{L}_{per}$$

where $\lambda_1, \lambda_2, \lambda_3$ are coefficients balancing the losses.
2. A method of removing moiré from an image as defined in claim 1, wherein: the two teacher networks each consist of a down-sampling module, a backbone module, and an up-sampling module, the backbone module containing 6 residual blocks; the two teacher networks reconstruct clear, moiré-free images in their respective domains and provide intermediate feature representations of clear images to the student network; in the frequency-domain teacher network, the discrete wavelet transform and its inverse replace the down-sampling and up-sampling modules.
3. A method of removing moiré from an image as defined in claim 1, wherein: the spatial-domain module has a three-scale structure; the spatial branch of the student network is formed by stacking 6 spatial blocks of 64 channels, and each feature map is generated by down-sampling; the top branch of a spatial block processes moiré at the original scale, while the other two branches process moiré at coarser scales; two down-sampling modules reduce the original input to one half and one quarter, respectively, and the three scales are then fed into three convolutional layers to capture the output feature map of each branch; the two coarser outputs are up-sampled to match the original size of the highest scale; finally, the feature maps of the branches are combined as the output of the spatial block.
4. A method of removing moire from an image as defined in claim 1, wherein: the feature map generated by each frequency-domain module comprises 48 channels; 4 residual blocks with ReLU form the front part of the frequency-domain module, and a channel attention module is then applied to that part's output features; wherein: the attention module applies a global average pooling operation, followed by two fully connected layers and a Sigmoid function, to learn a weight for each channel; finally, each channel of the feature map is multiplied by its weight, so that the channels most useful for moire removal are selected automatically through learning.
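The channel attention described in claim 4 (global average pooling, two fully connected layers, Sigmoid, per-channel reweighting) follows the familiar squeeze-and-excitation pattern. A minimal sketch, where the reduction ratio of the hidden layer is an assumption the claim does not specify:

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Channel attention sketch for the 48-channel frequency-domain module."""

    def __init__(self, channels: int = 48, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, _, _ = x.shape
        w = x.mean(dim=(2, 3))           # global average pooling -> (N, C)
        w = self.fc(w).view(n, c, 1, 1)  # learned per-channel weights in (0, 1)
        return x * w                     # reweight each channel of the feature map
```

Channels that the Sigmoid drives toward 1 are kept; those driven toward 0 are suppressed, which realizes the "automatic selection" in the claim.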
5. A method of removing moire from an image as defined in claim 1, wherein: the demoireing loss measures the distance between the demoired result and the ground truth, with the formula:

$$L_{dem} = \| f_{dem}(I_{moire}; \theta_{dem}) - I_{gt} \|_1$$

where $f_{dem}(I_{moire}; \theta_{dem})$ is the demoired result estimated by the student network.
6. A method of removing moire from an image as defined in claim 1, wherein: the feature-mimicking loss consists of a spatial-domain part and a frequency-domain part; each part is defined by the L1 distances between the feature maps at several candidate stages of the student network and those of the corresponding domain teacher network; the feature-mimicking loss for one domain $d$ is formulated as:

$$L_{fm}^{d} = \sum_{(m, n, \mu) \in \Omega_d} \mu \, \| F_S^{m} - F_T^{n} \|_1$$

where $m$, $n$, and $\mu$ denote the $m$-th student stage, the $n$-th teacher stage, and the weight of that stage's mimicking loss, and $\Omega_d$ is a selectable triplet set; the total feature-mimicking loss comprises the two parts and can be written as:

$$L_{fm} = L_{fm}^{spa} + L_{fm}^{fre}$$
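The per-domain mimicking loss in claim 6 iterates over triplets pairing a student stage with a teacher stage under a weight. A minimal sketch, where the triplet-set format (lists of feature maps indexed by stage) and the mean reduction are assumptions consistent with the claim:

```python
import torch

def feature_mimic_loss(student_feats, teacher_feats, triplets):
    """Feature-mimicking loss for one domain (claim 6 sketch).

    Each triplet (m, n, mu) pairs student stage m with teacher stage n under
    weight mu; the loss is the weighted sum of L1 distances between the maps.
    """
    loss = torch.zeros(())
    for m, n, mu in triplets:
        loss = loss + mu * torch.abs(student_feats[m] - teacher_feats[n]).mean()
    return loss
```

The total mimicking loss would call this once with the spatial-domain features and once with the frequency-domain features, then sum the two results.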
7. A method of removing moire from an image as defined in claim 1, wherein: the perceptual loss measures the similarity of high-level semantic features, and is described by the formula:

$$L_{per} = \sum_{l \in \mathbb{L}} \| \phi_l(f_{dem}(I_{moire}; \theta_{dem})) - \phi_l(I_{gt}) \|_1$$

where $\phi_l(z)$ is the $l$-th layer feature map of $z$ in the pre-trained high-level semantic feature extractor network $\phi$, and $\mathbb{L}$ denotes the set of selected layers.
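The perceptual loss in claim 7 compares deep features of the demoired result and the ground truth. A minimal sketch that treats the extractor as any sequence of layer modules (in practice it would be a pre-trained network such as a VGG; the L1 distance and mean reduction are assumptions consistent with the other losses here):

```python
import torch
import torch.nn as nn

def perceptual_loss(phi_layers, demoired, gt, selected=(0, 1)):
    """Perceptual loss sketch: sum of L1 distances between feature maps of
    the demoired result and the ground truth at the selected layers of a
    feature extractor given as a list of layer modules."""
    loss, f_d, f_g = torch.zeros(()), demoired, gt
    for l, layer in enumerate(phi_layers):
        f_d, f_g = layer(f_d), layer(f_g)  # run both inputs through layer l
        if l in selected:                  # l is in the selected-layer set
            loss = loss + torch.abs(f_d - f_g).mean()
    return loss
```

Because the extractor is pre-trained and frozen, this term pushes the student toward outputs that are semantically, not just pixel-wise, close to the ground truth.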
CN202110907877.9A 2021-08-09 2021-08-09 Method for removing image moire Pending CN113592742A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110907877.9A CN113592742A (en) 2021-08-09 2021-08-09 Method for removing image moire

Publications (1)

Publication Number Publication Date
CN113592742A true CN113592742A (en) 2021-11-02

Family

ID=78256349

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110907877.9A Pending CN113592742A (en) 2021-08-09 2021-08-09 Method for removing image moire

Country Status (1)

Country Link
CN (1) CN113592742A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114596479A (en) * 2022-01-29 2022-06-07 Dalian University of Technology Image moire removing method and device suitable for intelligent terminal and storage medium
WO2023151511A1 (en) * 2022-02-08 2023-08-17 Vivo Mobile Communication Co., Ltd. Model training method and apparatus, image moire removal method and apparatus, and electronic device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108830813A (en) * 2018-06-12 2018-11-16 Fujian Imperial Vision Information Technology Co., Ltd. Knowledge-distillation-based image super-resolution enhancement method
CN111489300A (en) * 2020-03-11 2020-08-04 Tianjin University Screen image moire removing method based on unsupervised learning
CN111681178A (en) * 2020-05-22 2020-09-18 Xiamen University Knowledge-distillation-based image defogging method
CN112766087A (en) * 2021-01-04 2021-05-07 Wuhan University Optical remote sensing image ship detection method based on knowledge distillation
CN112766411A (en) * 2021-02-02 2021-05-07 Tianjin University Target detection knowledge distillation method with adaptive regional refinement
CN113066025A (en) * 2021-03-23 2021-07-02 Henan Polytechnic University Image defogging method based on incremental learning and feature and attention transfer
CN113160086A (en) * 2021-04-28 2021-07-23 Southeast University Image moire removing method based on deep learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Hailing Wang: "Image Demoiréing with a Dual-Domain Distilling Network", IEEE *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20211102