CN110738609B - Method and device for removing image moire - Google Patents

Method and device for removing image moire

Info

Publication number
CN110738609B
Authority
CN
China
Prior art keywords
network
moire
image
characteristic diagram
feature map
Prior art date
Legal status
Active
Application number
CN201910860840.8A
Other languages
Chinese (zh)
Other versions
CN110738609A (en)
Inventor
段凌宇
何斌
王策
施柏鑫
Current Assignee
Peking University
Original Assignee
Peking University
Priority date
Filing date
Publication date
Application filed by Peking University filed Critical Peking University
Priority to CN201910860840.8A priority Critical patent/CN110738609B/en
Publication of CN110738609A publication Critical patent/CN110738609A/en
Application granted granted Critical
Publication of CN110738609B publication Critical patent/CN110738609B/en

Classifications

    • G06T5/70
    • G06F18/24 Classification techniques
    • G06F18/25 Fusion techniques
    • G06T7/13 Edge detection
    • G06T7/90 Determination of colour characteristics
    • G06V10/56 Extraction of image or video features relating to colour
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/10024 Color image
    • G06T2207/20221 Image fusion; Image merging

Abstract

The invention discloses a method for removing image moire, which comprises the following steps: inputting an original image into a model; obtaining a target edge feature map from the original image through an edge detection network in the model and outputting it to a multi-scale fusion network in the model; extracting features of the original image through a feature extraction layer in the model to obtain an original feature map and outputting it to the multi-scale fusion network and a synthesis network in the model; obtaining a multi-scale fusion feature map from the original feature map and the target edge feature map through the multi-scale fusion network and outputting it to the synthesis network in the model; obtaining a category feature map from the original image through a classification network in the model and outputting it to the synthesis network in the model; and generating a moire-free image from the original feature map, the category feature map and the multi-scale fusion feature map through the synthesis network. The method has stronger generalization ability and robustness in removing different types of moire, and can better retain the content details of the original picture while effectively removing the moire.

Description

Method and device for removing image moire
Technical Field
The invention relates to the technical field of image processing, in particular to a method and a device for removing image moire.
Background
People often use the photographing function of a mobile phone or camera to record moments of everyday life. However, because of differences in the spatial positions of the camera and the photographed object, the captured picture is often contaminated by moire of various shapes and colors, which degrades picture quality and gives users a poor experience.
In the prior art, neural network methods are used to remove moire from pictures, but existing networks insufficiently explore the unique physical properties of moire (such as its frequency-domain and color-channel distributions), and residual moire remains when low-frequency, multicolored moire is removed, so the generalization ability of existing networks to different types of moire is poor.
Disclosure of Invention
In view of the above defects of the prior art, the present invention provides a method and a device for removing moire in an image.
A first aspect of the invention provides a method of removing moire from an image, the method comprising:
inputting an original image into a trained moire removing model, obtaining a target edge characteristic image without moire by using the original image through an edge detection network in the moire removing model, and outputting the target edge characteristic image to a multi-scale fusion network in the moire removing model;
extracting the characteristics of the original image through a characteristic extraction layer in the moire removal model to obtain an original characteristic diagram, and outputting the original characteristic diagram to a multi-scale fusion network and a synthesis network in the moire removal model;
acquiring a multi-scale fusion characteristic diagram by using the original characteristic diagram and the target edge characteristic diagram through the multi-scale fusion network, and outputting the multi-scale fusion characteristic diagram to a synthesis network in the moire removal model;
obtaining a class characteristic diagram by utilizing an original image through a classification network in the moire removing model, and outputting the class characteristic diagram to a synthesis network in the moire removing model;
and generating and outputting an image without moire patterns by using the original feature map, the class feature map and the multi-scale fusion feature map through the synthesis network.
A second aspect of the present invention provides an apparatus for removing moire in an image, the apparatus comprising:
the input module is used for inputting the original image into the trained moire removing model;
the model processing module is used for obtaining a target edge characteristic diagram without moire by utilizing an original image through an edge detection network in the moire removing model and outputting the target edge characteristic diagram to a multi-scale fusion network in the moire removing model; extracting the characteristics of the original image through a characteristic extraction layer in the moire removal model to obtain an original characteristic diagram, and outputting the original characteristic diagram to a multi-scale fusion network and a synthesis network in the moire removal model; acquiring a multi-scale fusion characteristic diagram by using the original characteristic diagram and the target edge characteristic diagram through the multi-scale fusion network, and outputting the multi-scale fusion characteristic diagram to a synthesis network in the moire removing model; obtaining a class characteristic diagram by utilizing an original image through a classification network in the moire removing model, and outputting the class characteristic diagram to a synthesis network in the moire removing model; and generating and outputting an image without moire patterns by using the original feature map, the class feature map and the multi-scale fusion feature map through the synthesis network.
In the embodiments of the present application, after the original image is input into the trained moire removal model, a moire-free target edge feature map is obtained from the original image through the edge detection network and output to the multi-scale fusion network; features of the original image are extracted through the feature extraction layer to obtain an original feature map, which is output to both the multi-scale fusion network and the synthesis network; a multi-scale fusion feature map is obtained from the original feature map and the target edge feature map through the multi-scale fusion network and output to the synthesis network; a category feature map is obtained from the original image through the classification network and output to the synthesis network; finally, a moire-free image is generated by the synthesis network from the original feature map, the category feature map and the multi-scale fusion feature map.
Based on the above, the multi-scale fusion network addresses the complex spectral distribution of moire in the frequency domain, the edge detection network addresses the uneven intensity distribution of moire across color channels, and the classification network enhances the generalization ability for removing different types of moire. Compared with the prior art, the method therefore has stronger generalization ability and robustness in removing different types of moire. In addition, because the synthesis network generates the moire-free image partly from the original feature map, the content details of the original image are better retained.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the invention and not to limit the invention. In the drawings:
FIG. 1 shows images of different types of moire patterns according to an exemplary embodiment of the present invention;
FIG. 2 is a schematic diagram illustrating a Moire pattern removal model according to an exemplary embodiment of the present invention;
FIG. 3A is a flowchart illustrating an embodiment of a method of removing moire in an image according to an exemplary embodiment of the invention;
FIG. 3B is a schematic diagram of an edge detection network according to the embodiment shown in FIG. 3A;
FIG. 3C is a schematic diagram illustrating a multi-scale converged network structure according to the embodiment shown in FIG. 3A;
FIG. 3D is a diagram illustrating a classification network according to the embodiment shown in FIG. 3A;
FIG. 3E is a schematic diagram of a composite network structure according to the embodiment shown in FIG. 3A;
FIG. 4 is a diagram illustrating a hardware configuration of an electronic device according to an exemplary embodiment of the present application;
FIG. 5 is a block diagram illustrating an embodiment of an apparatus for removing moire in an image according to an exemplary embodiment of the present invention.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present invention. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the invention, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, the information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may be referred to as first information, without departing from the scope of the present invention. Depending on the context, the word "if" as used herein may be interpreted as "upon", "when", or "in response to determining".
Moire is a phenomenon produced when two patterns with similar distributions are superimposed and interfere with each other, as shown in fig. 1: diagram (a) shows moire produced by fringe interference, diagram (b) shows moire produced by a camera photographing a screen, and diagram (c) shows moire produced when a camera photographs a densely textured object. It can be seen that moire appears as striped or wavy, colored or monochromatic patterns that degrade picture quality.
Generally, within one picture, the frequency of the moire varies from region to region, being high in some areas and low in others, and its shape and distribution vary as well, which makes moire removal challenging.
With the development of deep learning research, neural network methods have been applied to the moire removal task. However, existing models insufficiently exploit the unique physical properties of moire (such as its frequency-domain and color-channel distributions), so residual moire remains after removal and the removal effect is not ideal.
In order to solve the above technical problem, the present invention provides a moire pattern removal model, as shown in fig. 2, the moire pattern removal model includes a feature extraction layer, an edge detection network, a multi-scale fusion network, a classification network, and a synthesis network.
After the original image is input into the trained moire removal model, a moire-free target edge feature map is obtained from the original image through the edge detection network and output to the multi-scale fusion network; features of the original image are extracted through the feature extraction layer to obtain an original feature map, which is output to both the multi-scale fusion network and the synthesis network; a multi-scale fusion feature map is obtained from the original feature map and the target edge feature map through the multi-scale fusion network and output to the synthesis network; a category feature map is obtained from the original image through the classification network and output to the synthesis network; finally, a moire-free image is generated by the synthesis network from the original feature map, the category feature map and the multi-scale fusion feature map.
Based on the above, the multi-scale fusion network addresses the complex spectral distribution of moire in the frequency domain, the edge detection network addresses the uneven intensity distribution of moire across color channels, and the classification network enhances the generalization ability for removing different types of moire. Compared with the prior art, the method therefore has stronger generalization ability and robustness in removing different types of moire. In addition, because the synthesis network generates the moire-free image partly from the original feature map, the content details of the original image are better retained.
The technical solution of the present invention for removing image moire is described in detail below with specific embodiments.
Fig. 3A is a flowchart illustrating an embodiment of a method for removing moire according to an exemplary embodiment of the present invention. The method can be applied to an electronic device, which may be any terminal device such as a mobile phone, a camera or a PC. In combination with the moire removal model shown in fig. 2, as shown in fig. 3A, the method for removing moire in an image includes the following steps:
step 301: the original image is input to a trained moir e removal model.
The original image may be any image that contains moire.
Step 302: and obtaining a target edge image without moire by using the original image through an edge detection network in the moire removing model, and outputting the target edge image to a multi-scale fusion network in the moire removing model.
In an embodiment, as shown in fig. 3B, an edge extraction layer in the edge detection network extracts an edge map for each color channel from the original image and outputs the edge maps to a first merging layer in the edge detection network. The first merging layer merges the original image with the edge map of each color channel to obtain a first merged map and outputs it to an edge prediction network in the edge detection network. The edge prediction network performs edge prediction on each color channel of the first merged map to obtain a target edge map for each color channel and outputs these maps to a feature extraction layer in the edge detection network. Finally, the feature extraction layer extracts features from the target edge map of each color channel to obtain the target edge feature map of each color channel.
In this way, in view of the different distributions of moire intensity across color channels, an edge map is extracted per color channel and edge prediction is performed on each of them, so that moire at the edges of each color channel is removed and a target edge map is obtained for each color channel.
Illustratively, the edge extraction layer may extract the edge map of each color channel from the original image with a Sobel operator. The extraction principle for each color channel is: for each pixel, compute the edge gradient strength in the horizontal and vertical directions of the image separately, then add the two to obtain the edge gradient strength of that pixel.
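A minimal sketch of this per-channel extraction in PyTorch follows; the function name, the standard 3 × 3 Sobel kernels and the absolute-value sum are illustrative assumptions consistent with the description above:

```python
import torch
import torch.nn.functional as F

def sobel_edges(image: torch.Tensor) -> torch.Tensor:
    """image: (B, 3, H, W) float RGB tensor -> (B, 3, H, W) per-channel edge strength."""
    kx = torch.tensor([[-1., 0., 1.],
                       [-2., 0., 2.],
                       [-1., 0., 1.]]).view(1, 1, 3, 3).repeat(3, 1, 1, 1)
    ky = kx.transpose(2, 3)  # vertical-direction kernel
    # Depthwise convolution (groups=3) keeps the three color channels separate.
    gx = F.conv2d(image, kx, padding=1, groups=3)
    gy = F.conv2d(image, ky, padding=1, groups=3)
    # As described above: sum the horizontal and vertical gradient strengths.
    return gx.abs() + gy.abs()
```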
Illustratively, the edge prediction network may include convolutional layers, pooling layers, downsampling layers, upsampling layers and other computational layers. For a given feature map, a non-local module computes the correlation coefficient between one position and all other positions, so the response at that position can be updated by taking all other positions into account through these coefficients. To better predict the moire-free target edge map, a non-local module can therefore be added to the edge prediction network: edge features with weak responses are reinforced by correlated strong edges elsewhere, so the finally predicted moire-free target edge map has stronger and more continuous edge responses.
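A minimal sketch of such a non-local module is given below; the 1 × 1 embedding convolutions and the channel reduction follow the common non-local block design and are assumptions, since only the correlation-and-update behavior is specified above:

```python
import torch
import torch.nn as nn

class NonLocalBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        inter = max(channels // 2, 1)
        self.theta = nn.Conv2d(channels, inter, 1)
        self.phi = nn.Conv2d(channels, inter, 1)
        self.g = nn.Conv2d(channels, inter, 1)
        self.out = nn.Conv2d(inter, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)  # (B, HW, C')
        k = self.phi(x).flatten(2)                    # (B, C', HW)
        v = self.g(x).flatten(2).transpose(1, 2)      # (B, HW, C')
        # Correlation of each position with all other positions.
        attn = torch.softmax(q @ k, dim=-1)           # (B, HW, HW)
        y = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)
        # Residual connection: weak edge responses are reinforced by
        # correlated strong edges elsewhere in the map.
        return x + self.out(y)
```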
For example, so that the target edge map can be combined with the original feature map (i.e., the shallow features) in the multi-scale fusion network, the feature extraction layer may map the target edge map into the same feature space as the shallow features with one convolution layer (e.g., kernel size 3 × 3, stride 1).
Step 303: and extracting the characteristics of the original image through a characteristic extraction layer in the moire removal model to obtain an original characteristic diagram, and respectively outputting the original characteristic diagram to a multi-scale fusion network and a synthesis network in the moire removal model.
The original feature map produced by the feature extraction layer can be combined with other feature maps in the multi-scale fusion network and the synthesis network for further processing. The feature extraction layer may be implemented as a convolution layer (e.g., kernel size 3 × 3, stride 1).
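As a sketch, this layer can be a single convolution of the stated shape; the 64-channel output is an assumption:

```python
import torch.nn as nn

# 3 x 3 convolution with stride 1 that maps the 3-channel input to a
# 64-channel original feature map (the channel count is an assumption).
feature_extraction = nn.Conv2d(in_channels=3, out_channels=64,
                               kernel_size=3, stride=1, padding=1)
```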
Step 304: and acquiring a multi-scale fusion characteristic diagram by using the original characteristic diagram and the target edge diagram through a multi-scale fusion network, and outputting the multi-scale fusion characteristic diagram to a synthesis network in the moire removing model.
In an embodiment, as shown in fig. 3C, the original feature map and the target edge map of each color channel may be merged by a second merging layer in the multi-scale fusion network to obtain a second merged map, and the second merged map is output to a multi-scale feature extraction network in the multi-scale fusion network, the multi-scale feature extraction network extracts a plurality of feature maps of different scales from the second merged map and outputs the feature maps to the fusion network in the multi-scale fusion network, and finally the fusion network fuses the feature maps of different scales to obtain the multi-scale fusion feature map. Wherein the sizes and the channel numbers of the feature maps with different scales are consistent.
In this way, because moire of different frequencies is captured by feature maps at different scales, extracting several feature maps of different scales and then fusing them reduces the spread of the moire spectrum distribution, which facilitates the subsequent processing by the synthesis network.
Illustratively, the multi-scale feature extraction network may include SE (squeeze-and-excitation) modules, upsampling layers (e.g., bilinear interpolation), convolutional layers, pooling layers, and the like. By combining different numbers of convolution, pooling and upsampling layers, feature maps of different scales but consistent size and channel count can be obtained.
Illustratively, the fusion network can include a merging layer and an SE module. Feature maps at different scales focus on moire of different frequencies, and within one picture the regions occupied by moire of different frequencies vary in extent. To pay more attention to the moire that is more widely distributed and degrades picture quality more, the merged feature maps are further processed by the SE module to re-distribute their channel weights, so that the neural network dynamically attends more to widely distributed moire and removes it better.
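A minimal squeeze-and-excitation block of the kind referred to above can be sketched as follows; the reduction ratio of 16 is an assumption:

```python
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: one value per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        # Excitation: channels that capture widely distributed moire
        # receive larger weights.
        return x * w
```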
Step 305: and obtaining a class characteristic diagram by utilizing the original image through a classification network in the moire removing model, and outputting the class characteristic diagram to a synthesis network in the moire removing model.
In an embodiment, as shown in fig. 3D, a first task classification network in the classification network extracts the moire frequency attribute and the moire shape attribute of the original image, obtains a construction matrix for each of the two attributes, and outputs them to a third merging layer in the classification network; meanwhile, a second task classification network in the classification network extracts the moire color attribute of the original image, obtains its construction matrix, and outputs it to the third merging layer. Finally, the third merging layer merges the three construction matrices to obtain the category feature map.
The moire frequency attribute has two classes, high frequency and low frequency; the moire shape attribute has two classes, curved and straight; and the moire color attribute has two classes, multicolored and monochromatic. Each attribute is therefore a binary classification problem, so 0 and 1 can serve as the scalar values distinguishing the two classes of each attribute.
In this way, the three appearance attributes of the moire in the original image (color, shape and frequency) are extracted, and the category feature map of the original image is derived from them, so that the category feature map can subsequently strengthen the synthesis network's generalization to removing different kinds of moire.
Illustratively, since the moire frequency and shape attributes are both attributes at the geometric level of the moire, they can be extracted by the first task classification network, which may include an edge extraction layer, convolutional layers, pooling layers and a fully connected layer.
For the moire color attribute, only the color difference matters and the geometric information can be ignored, so this attribute can be extracted by the second task classification network. To reduce the difficulty of what this classification network must learn, the input original image is Gaussian filtered before entering it, so that the network focuses on color discrimination rather than on geometric or content information.
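A sketch of this pre-filtering step, assuming torchvision's GaussianBlur; the kernel size and sigma are illustrative:

```python
import torch
import torchvision.transforms as T

blur = T.GaussianBlur(kernel_size=11, sigma=5.0)  # strength is an assumption
original_image = torch.rand(1, 3, 256, 256)       # stand-in input
# Suppress geometry and content detail so the color-attribute classifier
# sees mostly the color distribution.
color_input = blur(original_image)
```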
It should be noted that, in order to apply the three extracted attributes in the subsequent synthesis network and thereby enhance its generalization to different moire, the two task classification networks construct, from the extracted scalar (0 or 1) of each attribute, a matrix with the same length and width as the multi-scale fusion feature map, and the third merging layer merges the matrices constructed for the three attributes into the category feature map.
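The construction can be as simple as broadcasting each binary scalar into a constant plane with the spatial size of the multi-scale fusion feature map and concatenating the planes along the channel dimension, as in this sketch (all names are illustrative):

```python
import torch

def build_class_feature_map(freq: int, shape: int, color: int,
                            height: int, width: int) -> torch.Tensor:
    # One constant plane per binary attribute scalar (0 or 1).
    planes = [torch.full((1, 1, height, width), float(v))
              for v in (freq, shape, color)]
    return torch.cat(planes, dim=1)  # (1, 3, H, W) category feature map

# e.g. high-frequency (1), straight (0), multicolored (1) moire:
class_map = build_class_feature_map(freq=1, shape=0, color=1,
                                    height=256, width=256)
```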
In one example, the first task classification network and the second task classification network may each be implemented using a VGG-16 model structure.
Step 306: and generating and outputting an image without moire by using the original feature map, the category feature map and the multi-scale fusion feature map through a synthesis network.
In an embodiment, as shown in fig. 3E, a fourth merging layer in the synthesis network merges the category feature map and the multi-scale fusion feature map into a third merged map and outputs it to an optimization processing network in the synthesis network. The optimization processing network optimizes the third merged map and outputs the processed map to a fifth merging layer in the synthesis network, which merges the original feature map with the processed third merged map to obtain a moire-free feature map and outputs it to a conversion layer in the synthesis network. Finally, the conversion layer converts the moire-free feature map into an image that can be displayed.
In this way, the synthesis network generates the moire-free image by comprehensively considering the category feature map, the multi-scale fusion feature map and the original feature map, so that the content details of the original image are better retained while the moire is effectively removed.
Illustratively, the optimization processing network may include an SE module and convolution layers: it first re-distributes the channel weights through the SE module and then applies two convolutions, as sketched below.
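A minimal sketch of this synthesis path, reusing the SEBlock sketch from step 304; all channel counts are assumptions:

```python
import torch
import torch.nn as nn

class SynthesisNet(nn.Module):
    def __init__(self, fused_ch: int = 64, class_ch: int = 3, orig_ch: int = 64):
        super().__init__()
        mid = fused_ch + class_ch
        self.optimize = nn.Sequential(          # optimization processing network
            SEBlock(mid),                       # re-weight, then two convolutions
            nn.Conv2d(mid, mid, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(mid, mid, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.to_rgb = nn.Conv2d(mid + orig_ch, 3, 3, padding=1)  # conversion layer

    def forward(self, orig_feat, class_map, fused_feat):
        x = torch.cat([class_map, fused_feat], dim=1)  # fourth merging layer
        x = self.optimize(x)
        x = torch.cat([orig_feat, x], dim=1)           # fifth merging layer
        return self.to_rgb(x)                          # displayable moire-free image
```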
It should be noted that the first through fifth merging layers involved in steps 301 to 306 all merge along the channel dimension. Taking the first merging layer, which merges the original image with the edge maps, as an example: each pixel of the original image consists of the pixel values of three color channels, and each pixel of the edge map of a color channel consists of that channel's gradient strength. After channel-wise merging, each pixel of the first merged map therefore consists of three color values and three gradient strength values, i.e., the first merged map has 6 channels. The three color channels are red, green and blue.
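A two-line illustration of this channel-direction merging:

```python
import torch

image = torch.rand(1, 3, 256, 256)  # RGB pixel values
edges = torch.rand(1, 3, 256, 256)  # per-channel gradient strengths
first_merged = torch.cat([image, edges], dim=1)  # shape (1, 6, 256, 256)
```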
The training process of the moire removal model is as follows:
In the first stage, the classification network is trained end to end on the training samples for a preset number of epochs to ensure its convergence. Each training sample is a picture containing moire.
In the second stage, the moire attributes of each training sample are first labeled, and a corresponding category feature map is derived for each sample from its labels; the edge detection network, the multi-scale fusion network and the synthesis network are then trained end to end with the training samples and the per-sample category feature maps for a preset number of epochs, so that the networks cooperate well with one another.
For the two-stage training process described above, an ADAM optimization algorithm can be used as the optimizer for the entire training process, and the optimizer weight decay is set to 0.0001 while the optimizer momentum is set to 0.9.
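A sketch of this optimizer setup; mapping the stated momentum of 0.9 to Adam's first-moment coefficient beta1, and the learning rate value, are assumptions:

```python
import torch
import torch.nn as nn

model = nn.Conv2d(3, 3, 3, padding=1)  # stand-in for the assembled networks
optimizer = torch.optim.Adam(model.parameters(),
                             lr=1e-4,             # value is an assumption
                             betas=(0.9, 0.999),  # beta1 = stated momentum 0.9
                             weight_decay=0.0001)
```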
It is worth noting that the loss function for training the classification network in the first stage may be the sum of the cross-entropy losses of the three moire attributes. The loss function for training the edge detection network, the multi-scale fusion network and the synthesis network in the second stage is:

$$L = L_{E,o} + \lambda L_{E,e} + L_F$$

where $L_{E,e}$ denotes the loss function of the edge detection network, $L_{E,o}$ denotes the loss between the moire-free image that the synthesis network outputs for a sample and the true moire-free image of that sample, and $\lambda$ is the weight controlling the edge detection loss, a constant that can be set to 0.1. $L_F$ is the feature-based perceptual loss:

$$L_F = \frac{1}{CWH}\,\lVert \phi(\hat{y}) - \phi(y) \rVert_2^2$$

where $\phi(\hat{y})$ denotes the features of the moire-free image output by the synthesis network, $\phi(y)$ denotes the features of the true (supervision) moire-free image, and $C$, $W$ and $H$ denote the number of channels, the width and the length of the features, respectively.
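A sketch of the second-stage loss following the formulas above; taking L1 for the image and edge terms and a fixed feature extractor for phi are assumptions where the text is silent:

```python
import torch
import torch.nn.functional as F

def second_stage_loss(pred, gt, pred_edge, gt_edge,
                      feat_pred, feat_gt, lam: float = 0.1):
    l_eo = F.l1_loss(pred, gt)            # synthesis output vs. ground truth
    l_ee = F.l1_loss(pred_edge, gt_edge)  # edge detection network loss L_{E,e}
    c, h, w = feat_pred.shape[1:]
    l_f = torch.sum((feat_pred - feat_gt) ** 2) / (c * h * w)  # perceptual L_F
    return l_eo + lam * l_ee + l_f
```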
In the embodiments of the present application, after the original image is input into the trained moire removal model, a moire-free target edge feature map is obtained from the original image through the edge detection network and output to the multi-scale fusion network; features of the original image are extracted through the feature extraction layer to obtain an original feature map, which is output to both the multi-scale fusion network and the synthesis network; a multi-scale fusion feature map is obtained from the original feature map and the target edge feature map through the multi-scale fusion network and output to the synthesis network; a category feature map is obtained from the original image through the classification network and output to the synthesis network; finally, a moire-free image is generated by the synthesis network from the original feature map, the category feature map and the multi-scale fusion feature map.
Based on the above, the multi-scale fusion network addresses the complex spectral distribution of moire in the frequency domain, the edge detection network addresses the uneven intensity distribution of moire across color channels, and the classification network enhances the generalization ability for removing different types of moire. Compared with the prior art, the method therefore has stronger generalization ability and robustness in removing different types of moire. In addition, because the synthesis network generates the moire-free image partly from the original feature map, the content details of the original image are better retained.
Fig. 4 is a hardware block diagram of an electronic device according to an exemplary embodiment of the present application, where the electronic device includes: a communication interface 401, a processor 402, a machine-readable storage medium 403, and a bus 404; wherein the communication interface 401, the processor 402 and the machine-readable storage medium 403 communicate with each other via a bus 404. The processor 402 may execute the above-described method for removing image moire by reading and executing machine executable instructions in the machine readable storage medium 403 corresponding to the control logic of the method for removing image moire, and the specific content of the method is described in the above embodiments and will not be described herein again.
The machine-readable storage medium 403 referred to herein may be any electronic, magnetic, optical or other physical storage device that can contain or store information such as executable instructions and data. For example, the machine-readable storage medium may be a volatile memory, a non-volatile memory or a similar storage medium. Specifically, the machine-readable storage medium 403 may be RAM (Random Access Memory), flash memory, a storage drive (e.g., a hard disk drive), any type of storage disk (e.g., an optical disk or a DVD), a similar storage medium, or a combination thereof.
Fig. 5 is a block diagram illustrating an embodiment of an apparatus for removing moire in an image according to an exemplary embodiment of the present invention. The apparatus can be applied to an electronic device and includes:
an input module 510, configured to input an original image to the trained moire removal model;
the model processing module 520 is configured to obtain a target edge feature map without moire through an edge detection network in the moire removal model by using an original image, and output the target edge feature map to a multi-scale fusion network in the moire removal model; extracting the characteristics of the original image through a characteristic extraction layer in the moire removing model to obtain an original characteristic diagram, and outputting the original characteristic diagram to a multi-scale fusion network and a synthesis network in the moire removing model; acquiring a multi-scale fusion characteristic diagram by using the original characteristic diagram and the target edge characteristic diagram through the multi-scale fusion network, and outputting the multi-scale fusion characteristic diagram to a synthesis network in the moire removal model; obtaining a class characteristic diagram by utilizing an original image through a classification network in the moire removing model, and outputting the class characteristic diagram to a synthesis network in the moire removing model; and generating and outputting an image without moire patterns by using the original feature map, the class feature map and the multi-scale fusion feature map through the synthesis network.
In an optional implementation manner, the model processing module 520 is specifically configured to, in a process that the edge detection network obtains a target edge feature map without moire patterns by using an original image, extract an edge map of each color channel from the original image through an edge extraction layer in the edge detection network, and output the edge map to a first merging layer in the edge detection network; the first merging layer merges the original image and the edge image of each color channel to obtain a first merging image, and outputs the first merging image to an edge prediction network in the edge detection network; the edge prediction network carries out edge prediction processing on each color channel of the first combined graph to obtain a target edge graph of each color channel and outputs the target edge graph to a feature extraction layer in the edge detection network; and the characteristic extraction layer extracts the characteristics of the target edge image of each color channel to obtain the target edge characteristic image of each color channel.
In an optional implementation manner, the model processing module 520 is specifically configured to, in the process that the multi-scale fusion network obtains the multi-scale fusion feature map by using the original feature map and the target edge feature map, merge the original feature map and the target edge feature map of each color channel through a second merging layer in the multi-scale fusion network to obtain a second merged map, and output the second merged map to the multi-scale feature extraction network in the multi-scale fusion network; the multi-scale feature extraction network extracts a plurality of feature maps with different scales from the second merged map and outputs the feature maps to a fusion network in the multi-scale fusion network; the fusion network fuses the feature maps with different scales to obtain a multi-scale fusion feature map; and the sizes and the channel numbers of the feature maps with different scales are consistent.
In an optional implementation manner, the model processing module 520 is specifically configured to, in a process that the classification network obtains the category feature map by using an original image, extract a moire frequency attribute and a moire shape attribute of the original image through a first task classification network in the classification network, obtain a structural matrix of the moire frequency attribute and a structural matrix of the moire shape attribute respectively, and output the structural matrices to a third merging layer in the classification network; extracting the moire color attribute of the original image through a second task classification network in the classification network, obtaining a construction matrix of the moire color attribute and outputting the construction matrix to a third merging layer in the classification network; and the third merging layer merges the three construction matrixes to obtain a category characteristic diagram.
In an optional implementation manner, the model processing module 520 is specifically configured to, in a process that the synthesis network generates an image without moire patterns by using the original feature map, the category feature map, and the multi-scale fusion feature map, merge the category feature map and the multi-scale fusion feature map through a fourth merging layer in the synthesis network to obtain a third merging map, and output the third merging map to an optimization processing network in the synthesis network; the optimization processing network is used for optimizing the third merged graph and outputting the processed third merged graph to a fifth merging layer in the synthesis network; the fifth merging layer merges the original characteristic diagram and the processed third merging diagram to obtain a characteristic diagram without Moire fringes and outputs the characteristic diagram to a conversion layer in the synthetic network; the conversion layer converts the moire-free feature map into an image which can be used for displaying.
The implementation process of the functions and actions of each unit in the above device is specifically described in the implementation process of the corresponding step in the above method, and is not described herein again.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the scheme of the invention. One of ordinary skill in the art can understand and implement it without inventive effort.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This invention is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (10)

1. A method of removing moire from an image, the method comprising:
inputting an original image into a trained moire removing model, obtaining a target edge characteristic image without moire by using the original image through an edge detection network in the moire removing model, and outputting the target edge characteristic image to a multi-scale fusion network in the moire removing model;
extracting the characteristics of the original image through a characteristic extraction layer in the moire removal model to obtain an original characteristic diagram, and outputting the original characteristic diagram to a multi-scale fusion network and a synthesis network in the moire removal model;
acquiring a multi-scale fusion characteristic diagram by using the original characteristic diagram and the target edge characteristic diagram through the multi-scale fusion network, and outputting the multi-scale fusion characteristic diagram to a synthesis network in the moire removal model;
obtaining a class characteristic diagram by utilizing an original image through a classification network in the moire removing model, and outputting the class characteristic diagram to a synthesis network in the moire removing model;
and generating and outputting an image without moire patterns by using the original feature map, the class feature map and the multi-scale fusion feature map through the synthesis network.
2. The method of claim 1, wherein the edge detection network obtains a moire-free target edge feature map using the original image, comprising:
extracting an edge image of each color channel from the original image through an edge extraction layer in the edge detection network, and outputting the edge image to a first merging layer in the edge detection network;
the first merging layer merges the original image and the edge image of each color channel to obtain a first merging image, and outputs the first merging image to an edge prediction network in the edge detection network;
the edge prediction network carries out edge prediction processing on each color channel of the first combined graph to obtain a target edge graph of each color channel and outputs the target edge graph to a feature extraction layer in the edge detection network;
and the characteristic extraction layer extracts the characteristics of the target edge image of each color channel to obtain the target edge characteristic image of each color channel.
3. The method according to claim 1, wherein the multi-scale fusion network acquires a multi-scale fusion feature map by using the original feature map and the target edge feature map, and comprises:
merging the original feature map and the target edge feature map of each color channel through a second merging layer in the multi-scale fusion network to obtain a second merging map, and outputting the second merging map to a multi-scale feature extraction network in the multi-scale fusion network;
the multi-scale feature extraction network extracts a plurality of feature maps with different scales from the second merged map and outputs the feature maps to a fusion network in the multi-scale fusion network;
the fusion network fuses the feature maps with different scales to obtain a multi-scale fusion feature map;
and the sizes and the channel numbers of the feature maps with different scales are consistent.
4. The method of claim 1, wherein the classification network obtains a class feature map using the raw image, comprising:
extracting moire frequency attributes and moire shape attributes of the original image through a first task classification network in the classification network, respectively obtaining a construction matrix of the moire frequency attributes and a construction matrix of the moire shape attributes, and outputting the construction matrices to a third merging layer in the classification network;
extracting the moire color attribute of the original image through a second task classification network in the classification network, obtaining a construction matrix of the moire color attribute and outputting the construction matrix to a third merging layer in the classification network;
and the third merging layer merges the three construction matrixes to obtain a category characteristic diagram.
5. The method of claim 1, wherein the synthesis network generates a moire-removed image using the raw feature map, the class feature map, and the multi-scale fused feature map, comprising:
merging the category characteristic diagram and the multi-scale fusion characteristic diagram through a fourth merging layer in the synthesis network to obtain a third merging diagram, and outputting the third merging diagram to an optimization processing network in the synthesis network;
the optimization processing network is used for optimizing the third merged graph and outputting the processed third merged graph to a fifth merging layer in the synthesis network;
the fifth merging layer merges the original characteristic diagram and the processed third merging diagram to obtain a characteristic diagram without Moire fringes and outputs the characteristic diagram to a conversion layer in the synthetic network;
the conversion layer converts the moire-free feature map into an image which can be used for displaying.
6. An apparatus for removing moire from an image, the apparatus comprising:
the input module is used for inputting the original image into the trained moire removing model;
the model processing module is used for obtaining a target edge characteristic diagram without moire by utilizing an original image through an edge detection network in the moire removing model and outputting the target edge characteristic diagram to a multi-scale fusion network in the moire removing model; extracting the characteristics of the original image through a characteristic extraction layer in the moire removal model to obtain an original characteristic diagram, and outputting the original characteristic diagram to a multi-scale fusion network and a synthesis network in the moire removal model; acquiring a multi-scale fusion characteristic diagram by using the original characteristic diagram and the target edge characteristic diagram through the multi-scale fusion network, and outputting the multi-scale fusion characteristic diagram to a synthesis network in the moire removal model; obtaining a class characteristic diagram by utilizing an original image through a classification network in the moire removing model, and outputting the class characteristic diagram to a synthesis network in the moire removing model; and generating and outputting an image without moire patterns by using the original feature map, the class feature map and the multi-scale fusion feature map through the synthesis network.
7. The apparatus according to claim 6, wherein the model processing module is specifically configured to, in the process that the edge detection network obtains the target edge feature map without moire patterns by using an original image, extract an edge map of each color channel from the original image through an edge extraction layer in the edge detection network, and output the edge map to a first merging layer in the edge detection network; the first merging layer merges the original image and the edge image of each color channel to obtain a first merging image, and outputs the first merging image to an edge prediction network in the edge detection network; the edge prediction network carries out edge prediction processing on each color channel of the first merged graph to obtain a target edge graph of each color channel and outputs the target edge graph to a feature extraction layer in the edge detection network; and the characteristic extraction layer extracts the characteristics of the target edge image of each color channel to obtain the target edge characteristic image of each color channel.
8. The apparatus according to claim 6, wherein the model processing module is specifically configured to, in the process that the multi-scale fusion network acquires the multi-scale fusion feature map by using the original feature map and the target edge feature map, merge the original feature map and the target edge feature map of each color channel through a second merging layer in the multi-scale fusion network to obtain a second merged map, and output the second merged map to the multi-scale feature extraction network in the multi-scale fusion network; the multi-scale feature extraction network extracts a plurality of feature maps with different scales from the second merged map and outputs the feature maps to a fusion network in the multi-scale fusion network; the fusion network fuses the feature maps with different scales to obtain a multi-scale fusion feature map; and the sizes and the channel numbers of the feature maps with different scales are consistent.
9. The apparatus according to claim 6, wherein the model processing module is specifically configured to, in the process that the classification network obtains the class feature map by using the original image, extract a moire frequency attribute and a moire shape attribute of the original image through a first task classification network in the classification network, obtain a structural matrix of the moire frequency attribute and a structural matrix of the moire shape attribute, respectively, and output the structural matrices to a third merging layer in the classification network; extracting the moire color attribute of the original image through a second task classification network in the classification network, obtaining a construction matrix of the moire color attribute and outputting the construction matrix to a third merging layer in the classification network; and the third merging layer merges the three construction matrixes to obtain a category characteristic diagram.
10. The apparatus according to claim 6, wherein the model processing module is specifically configured to, in the process that the synthesis network generates the moire-removed image by using the original feature map, the class feature map, and the multi-scale fusion feature map, obtain a third fusion map by merging the class feature map and the multi-scale fusion feature map through a fourth merging layer in the synthesis network, and output the third fusion map to the optimization processing network in the synthesis network; the optimization processing network is used for optimizing the third merged graph and outputting the processed third merged graph to a fifth merging layer in the synthesis network; the fifth merging layer merges the original characteristic diagram and the processed third merging diagram to obtain a characteristic diagram without moire fringes and outputs the characteristic diagram to a conversion layer in the synthetic network; the conversion layer converts the moire-free feature map into an image which can be used for displaying.
CN201910860840.8A 2019-09-11 2019-09-11 Method and device for removing image moire Active CN110738609B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910860840.8A CN110738609B (en) 2019-09-11 2019-09-11 Method and device for removing image moire


Publications (2)

Publication Number Publication Date
CN110738609A CN110738609A (en) 2020-01-31
CN110738609B (en) 2022-05-06

Family

ID=69267628

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910860840.8A Active CN110738609B (en) 2019-09-11 2019-09-11 Method and device for removing image moire

Country Status (1)

Country Link
CN (1) CN110738609B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111489300B (en) * 2020-03-11 2022-07-08 天津大学 Screen image Moire removing method based on unsupervised learning
CN111429374B (en) * 2020-03-27 2023-09-22 中国工商银行股份有限公司 Method and device for eliminating moire in image
CN111583129A (en) * 2020-04-09 2020-08-25 天津大学 Screen shot image moire removing method based on convolutional neural network AMNet
CN111476737B (en) 2020-04-15 2022-02-11 腾讯科技(深圳)有限公司 Image processing method, intelligent device and computer readable storage medium
CN113706392A (en) * 2020-05-20 2021-11-26 Tcl科技集团股份有限公司 Moire pattern processing method, computer-readable storage medium and terminal device
CN112184591A (en) * 2020-09-30 2021-01-05 佛山市南海区广工大数控装备协同创新研究院 Image restoration method based on deep learning image Moire elimination
CN113066027B (en) * 2021-03-31 2022-06-28 天津大学 Screen shot image moire removing method facing to Raw domain
CN114596479A (en) * 2022-01-29 2022-06-07 大连理工大学 Image moire removing method and device suitable for intelligent terminal and storage medium
WO2023221119A1 (en) * 2022-05-20 2023-11-23 北京小米移动软件有限公司 Image processing method and apparatus and storage medium
CN114972947B (en) * 2022-07-26 2022-12-06 之江实验室 Depth scene text detection method and device based on fuzzy semantic modeling


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101699919B1 (en) * 2011-07-28 2017-01-26 삼성전자주식회사 High dynamic range image creation apparatus of removaling ghost blur by using multi exposure fusion and method of the same
CN107784654B (en) * 2016-08-26 2020-09-25 杭州海康威视数字技术股份有限公司 Image segmentation method and device and full convolution network system

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107424123A (en) * 2017-03-29 2017-12-01 北京粉笔未来科技有限公司 A kind of moire fringes minimizing technology and device
CN108427969A (en) * 2018-03-27 2018-08-21 陕西科技大学 A kind of paper sheet defect sorting technique of Multiscale Morphological combination convolutional neural networks
CN108921799A (en) * 2018-06-22 2018-11-30 西北工业大学 Thin cloud in remote sensing image minimizing technology based on multiple dimensioned Cooperative Study convolutional neural networks

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Moiré pattern removal from texture images via low-rank and sparse matrix decomposition; Fanglei Liu et al.; 2015 Visual Communications and Image Processing (VCIP); 2015-12-16; pp. 1-4 *
Face anti-spoofing algorithm using color texture features; Bao Xiao'an et al.; Computer Science; 2019-08-12; vol. 46, no. 10, pp. 180-185 *

Also Published As

Publication number Publication date
CN110738609A (en) 2020-01-31

Similar Documents

Publication Publication Date Title
CN110738609B (en) Method and device for removing image moire
CN108389224B (en) Image processing method and device, electronic equipment and storage medium
CN101329760B (en) Signal processing device and signal processing method
CN103247036B (en) Many exposure images fusion method and device
KR20210139450A (en) Image display method and device
Roeder et al. A computational image analysis glossary for biologists
Cheng et al. Zero-shot image super-resolution with depth guided internal degradation learning
CN110059728B (en) RGB-D image visual saliency detection method based on attention model
CN110705634B (en) Heel model identification method and device and storage medium
CN108377374A (en) Method and system for generating depth information related to an image
CN111476213A (en) Method and device for filling covering area of shelter based on road image
Calatroni et al. Unveiling the invisible: mathematical methods for restoring and interpreting illuminated manuscripts
KR101795952B1 (en) Method and device for generating depth image of 2d image
KR101833943B1 (en) Method and system for extracting and searching highlight image
CN113744142B (en) Image restoration method, electronic device and storage medium
Khadidos et al. Bayer image demosaicking and denoising based on specialized networks using deep learning
CN112132753B (en) Infrared image super-resolution method and system for multi-scale structure guide image
Zhang et al. Deep joint neural model for single image haze removal and color correction
JP2023508641A (en) Data augmentation-based matter analysis model learning device and method
CN112365451A (en) Method, device and equipment for determining image quality grade and computer readable medium
CN116798041A (en) Image recognition method and device and electronic equipment
CN115496654A (en) Image super-resolution reconstruction method, device and medium based on self-attention mechanism
JP7362924B2 (en) Data augmentation-based spatial analysis model learning device and method
CN112052863B (en) Image detection method and device, computer storage medium and electronic equipment
RU2635883C1 (en) Image processing method and system for forming superhigh-resolution images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant