CN113554739A - Relighting image generation method and device and electronic equipment - Google Patents

Relighting image generation method and device and electronic equipment

Info

Publication number: CN113554739A
Application number: CN202110729940.4A
Authority: CN (China)
Prior art keywords: image, relighting, wavelet transform, processed, inputting
Legal status: Pending (assumed; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Inventors: 李甫 (Li Fu), 邓瑞峰 (Deng Ruifeng)
Current assignee: Beijing Baidu Netcom Science and Technology Co Ltd
Original assignee: Beijing Baidu Netcom Science and Technology Co Ltd
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority: CN202110729940.4A (publication CN113554739A)
Related PCT application: PCT/CN2022/074900 (WO2023273340A1)


Classifications

    • G06T 15/506 Illumination models (under G PHYSICS → G06 Computing → G06T Image data processing or generation → G06T 15/00 3D [Three Dimensional] image rendering → G06T 15/50 Lighting effects)
    • G06N 3/045 Combinations of networks (under G06N Computing arrangements based on specific computational models → G06N 3/00 Computing arrangements based on biological models → G06N 3/02 Neural networks → G06N 3/04 Architecture, e.g. interconnection topology)
    • G06N 3/08 Learning methods (under G06N 3/00 → G06N 3/02 Neural networks)
    • G06T 15/60 Shadow generation (under G06T 15/00 3D [Three Dimensional] image rendering → G06T 15/50 Lighting effects)

Abstract

The disclosure provides a relighting image generation method and apparatus and an electronic device, relating to the field of artificial intelligence, in particular to computer vision and deep learning technology, and applicable to image processing scenarios. The scheme is as follows: acquire an image to be processed; input the image to be processed into a relighting image generation system, perform relighting rendering with the N wavelet transform models in the system, and output the target relighting image corresponding to the image to be processed. The method therefore depends neither on manual design nor on a convolutional neural network model obtained by neural-network training: the image to be processed is rendered by a relighting image generation system composed of at least one wavelet transform model, so that the resulting relighting image preserves the scene content structure at low frequencies and detailed shadow information at high frequencies, yielding a relighting image with a more accurate and reliable rendering effect.

Description

Relighting image generation method and device and electronic equipment
Technical Field
Embodiments of the present disclosure relate generally to the field of image processing technology, in particular to artificial intelligence, and specifically to computer vision and deep learning techniques applicable in image processing scenarios.
Background
With the rapid development of mobile terminal and Image Processing technology, various applications (APPs) offering lighting-based special effects have emerged, and users increasingly demand functions such as adding filters to images and changing the shadow effect on human faces.
In the related art, relighting rendering of an image to be processed is generally performed in one of two ways: manual rendering, or a model obtained through neural-network learning and training.
However, manual rendering incurs extremely high labor cost, low relighting image generation efficiency, and poor reliability, while networks obtained through neural-network learning and training often produce artifacts and fail to learn shadow changes.
Therefore, improving the effectiveness and reliability of the relighting image generation process has become one of the important research directions.
Disclosure of Invention
The disclosure provides a generation method and device of a relighting image and electronic equipment.
According to a first aspect, there is provided a generation method of a relighting image, comprising:
acquiring an image to be processed;
inputting the image to be processed into a relighting image generation system, performing relighting rendering by N wavelet transformation models in the relighting image generation system, and outputting a target relighting image corresponding to the image to be processed, wherein N is an integer greater than or equal to 1.
According to a second aspect, there is provided a generation apparatus of a relighting image, comprising:
the acquisition module is used for acquiring an image to be processed;
the first output module is used for inputting the image to be processed into a relighting image generation system, performing relighting rendering by using N wavelet transformation models in the relighting image generation system, and outputting a target relighting image corresponding to the image to be processed, wherein N is an integer greater than or equal to 1.
According to a third aspect, there is provided an electronic device comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of generating a relighting image according to the first aspect of the disclosure.
According to a fourth aspect, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of generating a relighting image according to the first aspect of the present disclosure.
According to a fifth aspect, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method of generating a relighting image according to the first aspect of the disclosure.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a schematic diagram according to a first embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a relighting image generation process;
FIG. 3 is a schematic illustration of an image to be processed;
FIG. 4 is a schematic diagram of different directional components in a relighting image generation process;
FIG. 5 is a schematic diagram according to a second embodiment of the present disclosure;
FIG. 6 is a schematic diagram according to a third embodiment of the present disclosure;
FIG. 7 is a schematic diagram according to a fourth embodiment of the present disclosure;
FIG. 8 is a schematic diagram of another relighting image generation process;
FIG. 9 is a schematic diagram of another relighting image generation process;
FIG. 10 is a schematic diagram of another relighting image generation process;
FIG. 11 is a schematic diagram of another relighting image generation process;
fig. 12 is a block diagram of a generation apparatus of a relighting image used to implement the generation method of a relighting image of the embodiment of the present disclosure;
fig. 13 is a block diagram of a generation apparatus of a relighting image used to implement the generation method of a relighting image of the embodiment of the present disclosure;
fig. 14 is a block diagram of an electronic device used to implement the relighting image generation method of an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
The following briefly describes the technical field to which the disclosed solution relates:
image Processing (Image Processing) techniques that analyze an Image with a computer to achieve a desired result. Also known as image processing. Image processing generally refers to digital image processing. Digital images are large two-dimensional arrays of elements called pixels and values called gray-scale values, which are captured by industrial cameras, video cameras, scanners, etc. Image processing techniques generally include image compression, enhancement and restoration, matching, description and identification of 3 parts.
AI (Artificial Intelligence) is the discipline of making computers simulate certain human thinking processes and intelligent behaviors (such as learning, reasoning, thinking, and planning), and spans both hardware-level and software-level technologies. Artificial intelligence software technologies mainly include computer vision, speech recognition, natural language processing, machine learning/deep learning, big data processing, and knowledge graph technologies.
Computer Vision is the science of making machines "see": using cameras and computers in place of human eyes to identify, track, and measure targets, and performing further image processing so that the result is an image more suitable for human observation or for transmission to an instrument for detection. As a scientific discipline, computer vision studies theories and techniques for building artificial intelligence systems that can acquire "information" from images or multidimensional data. The information referred to here is information in Shannon's sense, which can be used to help make a "decision". Since perception can be viewed as extracting information from sensory signals, computer vision can also be viewed as the science of how to make an artificial system "perceive" from images or multidimensional data.
DL (Deep Learning) is a new research direction in the field of machine learning (ML), introduced to bring machine learning closer to its original goal, artificial intelligence. Deep learning learns the intrinsic laws and representation hierarchies of sample data, and the information obtained in the learning process greatly helps the interpretation of data such as text, images, and sound. Its ultimate aim is to give machines human-like abilities of analysis and learning, so that they can recognize data such as text, images, and sound. Deep learning is a complex machine learning approach whose results in speech and image recognition far exceed those of prior related techniques.
A generation method, an apparatus, and an electronic device of a relighting image according to an embodiment of the present disclosure are described below with reference to the drawings.
Fig. 1 is a schematic diagram according to a first embodiment of the present disclosure. It should be noted that the execution subject of the method for generating a relighting image in this embodiment is a relighting image generation device, and the relighting image generation device may specifically be a hardware device, or software in a hardware device, or the like. The hardware devices are, for example, terminal devices, servers, and the like. As shown in fig. 1, the generation method of a relighting image proposed by this embodiment includes the following steps:
and S101, acquiring an image to be processed.
The image to be processed may be any image input by a user; for example, any video, such as a teaching video, a movie, or a TV drama, may be decoded and frame-extracted to obtain one frame as the image to be processed.
It should be noted that, when attempting to acquire the image to be processed, an image pre-stored in a local or remote storage area may be acquired as the image to be processed, or an image may be captured directly as the image to be processed.
Optionally, a stored image or video may be acquired from at least one of a local or remote image library and video library; alternatively, an image may be directly captured. The present disclosure does not limit the manner of acquiring the image to be processed, which may be selected according to actual conditions.
S102, inputting the image to be processed into a relighting image generation system, performing relighting rendering by N wavelet transformation models in the relighting image generation system, and outputting a target relighting image corresponding to the image to be processed, wherein N is an integer greater than or equal to 1.
Here, the relighting technique (Relighting) refers to changing the illumination direction and color temperature of a given image so as to generate another image with a different illumination direction and color temperature.
For example, as shown in fig. 2, fig. 2(a) is a scene image with a 2500K light source in the east, and fig. 2(b) is a scene image with a 6500K light source in the west. When the color temperature is low, the image takes on a yellowish, warm tone; when the color temperature is high, the image appears whiter and cooler. Meanwhile, different light source positions produce different shadows. In summary, the goal of relighting rendering here is to render fig. 2(a) into fig. 2(b): the scene content remains consistent, and only the color temperature and shadow direction change.
The relighting image generation system comprises N wavelet transformation models, wherein N is an integer greater than or equal to 1. For example, a relighting image generation system includes 1 wavelet transform model; for another example, the relighting image generation system includes 3 structurally identical wavelet transform models, and in this case, the 3 wavelet transform models are connected in a Cascade (Cascade) manner.
It should be noted that, in the related art, when performing relighting processing on an image to be processed, rendering is usually performed manually, or a model for relighting rendering is obtained through neural-network learning and training, for example, a trained Convolutional Neural Network (CNN) model.
However, manual rendering incurs extremely high labor cost, low relighting image generation efficiency, and poor reliability, while a network obtained through neural-network training generally operates only in the spatial domain, that is, directly on the RGB (Red-Green-Blue) image.
Therefore, the relighting image generation method provided by the present disclosure trains a wavelet transform model to perform relighting rendering on the image to be processed, so that a higher-quality relighting image is generated by operating on the frequency-domain representation of the image.
It should be noted that the present disclosure is not limited to the kind of wavelet transform, and may be selected according to actual situations. Optionally, a discrete wavelet transform model may be selected for relighting rendering of the image to be processed.
According to the generation method of the relighting image, the relighting image generation system formed by at least one wavelet transformation model is used for rendering the image to be processed, so that the obtained relighting image keeps a scene content structure at a low frequency and detail shadow information at a high frequency, and the relighting image with more accurate and reliable rendering effect is obtained.
The following briefly describes the processing procedure of the wavelet transform model to which the disclosed solution relates:
the frequency of the image is an index for representing the intensity of the change of the gray level in the image, and is the gradient of the gray level on the plane space.
For example, in a large-area desert image, the gray level changes slowly, and the corresponding frequency values are very low; in edge regions where surface properties change drastically, such as mountain ridges and peaks, the gray level changes sharply and the corresponding frequency values are higher.
Thus, in terms of physical effects, the wavelet transform can convert an image from a spatial domain to a frequency domain, that is, a gray distribution function of the image can be transformed into a frequency distribution function of the image, and a frequency distribution function of the image can be transformed into a gray distribution function by inverse transformation.
Taking the processing of an image by a two-dimensional discrete wavelet transform model as an example: for the image to be processed shown in fig. 3, a one-dimensional Discrete Wavelet Transform (DWT) may optionally be performed on each row of pixels to obtain the low-frequency component L and high-frequency component H of the original image (the image to be processed) in the horizontal direction. Further, a one-dimensional DWT may then be performed on each column of the transformed data, yielding the four results shown in FIG. 4.
Wherein, from the obtained low frequency component in the horizontal direction and the low frequency component in the vertical direction, i.e., LL, an image as shown in fig. 4(a) can be obtained; from the low-frequency component in the horizontal direction and the high-frequency component in the vertical direction, i.e., LH, an image as shown in fig. 4(b) can be obtained; an image as shown in fig. 4(c) can be obtained from the high-frequency component in the horizontal direction and the low-frequency component in the vertical direction, i.e., HL; from the high-frequency component in the horizontal direction and the high-frequency component in the vertical direction, i.e., HH, an image as shown in fig. 4(d) can be obtained.
In this case, as shown in fig. 4(a), an image that can represent the placement of the object in the image to be processed, that is, an approximate image of the image to be processed can be obtained for the image to be processed shown in fig. 3. The image shown in fig. 4(a) corresponds to the low frequency part of the image to be processed, while the three images shown in fig. 4(b) to (d) correspond to the contour of the image to be processed, in turn, the detail images in three directions, horizontal, vertical and diagonal, and correspond to the high frequency part of the image to be processed.
In the embodiment of the present disclosure, if the input image to be processed has a width and height of 1024 and 3 channels, its size may be represented as 1024 × 1024 × 3. Optionally, after DWT processing by the discrete wavelet transform network in the discrete wavelet transform model, each subband image has a size of 512 × 512 × 3.
Further, by concatenating the four images of fig. 4(a) to (d) along the channel dimension, an image of size 512 × 512 × 12 is obtained. In this case, after the DWT, the width and height of the image are both halved and the number of channels is quadrupled, which is also called a Spatial-to-Depth (Spatial2Depth) conversion.
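As a concrete illustration of the row/column DWT and the spatial-to-depth conversion described above, the following NumPy sketch applies a single-level 2-D Haar DWT to a 1024 × 1024 × 3 array and concatenates the four subbands along the channel dimension. The patent does not fix a particular wavelet, so the Haar wavelet here is an illustrative assumption:

```python
import numpy as np

def haar_dwt2(img):
    # Single-level 2-D Haar DWT applied per channel (illustrative choice;
    # the patent only requires some discrete wavelet transform).
    a = img[0::2, 0::2].astype(float)  # even rows, even cols
    b = img[0::2, 1::2].astype(float)  # even rows, odd cols
    c = img[1::2, 0::2].astype(float)  # odd rows,  even cols
    d = img[1::2, 1::2].astype(float)  # odd rows,  odd cols
    ll = (a + b + c + d) / 2.0  # LL: approximate image (fig. 4(a))
    lh = (a + b - c - d) / 2.0  # LH: low horizontal / high vertical (fig. 4(b))
    hl = (a - b + c - d) / 2.0  # HL: high horizontal / low vertical (fig. 4(c))
    hh = (a - b - c + d) / 2.0  # HH: high in both directions (fig. 4(d))
    return ll, lh, hl, hh

x = np.random.rand(1024, 1024, 3)      # H x W x C image to be processed
subbands = haar_dwt2(x)                # four 512 x 512 x 3 subbands
y = np.concatenate(subbands, axis=-1)  # spatial-to-depth: 512 x 512 x 12
print(y.shape)  # (512, 512, 12)
```

The orthonormal normalization (division by 2) keeps the total energy of the four subbands equal to that of the input, which is one reason both the low-frequency scene structure and the high-frequency shadow detail survive the transform.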
In this way, the wavelet transform operation replaces the max pooling or average pooling operations commonly used in CNNs: the entire image to be processed is transformed by the DWT rather than only local neighborhoods, giving a larger receptive field and a wider processing area, and hence a more accurate processing result.
Further, after processing by the wavelet transform network in the wavelet transform model, IDWT processing may optionally be performed by the inverse discrete wavelet transform network in the discrete wavelet transform model. The Inverse Discrete Wavelet Transform (IDWT) process is similar to the DWT and is not described again here.
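A minimal sketch of the corresponding inverse transform, again under the assumption of the Haar wavelet: solving the four forward equations for the four pixel positions and interleaving them recovers the original image exactly, which is the perfect-reconstruction property the IDWT relies on.

```python
import numpy as np

def haar_dwt2(img):
    # Forward single-level 2-D Haar DWT (per channel); the Haar wavelet
    # is an assumption, the patent only requires some discrete wavelet.
    a, b = img[0::2, 0::2], img[0::2, 1::2]
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    return ((a + b + c + d) / 2, (a + b - c - d) / 2,
            (a - b + c - d) / 2, (a - b - c + d) / 2)

def haar_idwt2(ll, lh, hl, hh):
    # Inverse transform: solve the forward equations for the four pixel
    # positions, then interleave them back to full resolution.
    a = (ll + lh + hl + hh) / 2
    b = (ll + lh - hl - hh) / 2
    c = (ll - lh + hl - hh) / 2
    d = (ll - lh - hl + hh) / 2
    h, w = ll.shape[:2]
    out = np.empty((2 * h, 2 * w) + ll.shape[2:])
    out[0::2, 0::2], out[0::2, 1::2] = a, b
    out[1::2, 0::2], out[1::2, 1::2] = c, d
    return out

x = np.random.rand(512, 512, 3)
rec = haar_idwt2(*haar_dwt2(x))
print(np.allclose(rec, x))  # True: the IDWT exactly undoes the DWT
```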
It should be noted that, in the present disclosure, in order to further improve the rendering effect and reliability of the relighting image, a relighting image generation system in which at least two wavelet transform models are cascaded may be employed.
As a possible implementation manner, as shown in fig. 5, the generation method of a relighting image proposed by the present disclosure specifically includes, on the basis of the foregoing embodiment, the following steps:
s501, acquiring an image to be processed.
The step S501 is the same as the step S101 in the previous embodiment, and is not described herein again.
The step S102 in the previous embodiment may specifically include the following steps S502 to S505.
S502, aiming at the first wavelet transformation model, the image to be processed is input into the first wavelet transformation model for relighting rendering, and an intermediate relighting image is output.
In the embodiment of the present disclosure, a multi-stage rendering strategy may be adopted, that is, for a first wavelet transformation model, an image to be processed is input into the first wavelet transformation model for relighting rendering, an intermediate relighting image is output, and a mapping relationship from the image to be processed to the output intermediate relighting image is learned.
It should be noted that, in the model training stage, after the image to be processed is input into the first wavelet transform model for relighting rendering and the intermediate relighting image is output, the first wavelet transform model is fixed; the training set (a preset number of sample images to be processed) is then processed by this fixed model, and the intermediate relighting images of the training set under the first wavelet transform model are output.
S503, from the second wavelet transform model onward, inputting the intermediate relighting image output by the previous wavelet transform model into the next wavelet transform model for relighting rendering, and outputting the intermediate relighting image corresponding to that model.
In the embodiment of the present disclosure, from the second wavelet transform model, the intermediate relighting image output by the previous wavelet transform model may be input into the next wavelet transform model for relighting rendering, and the intermediate relighting image corresponding to the next wavelet transform model is output. Meanwhile, aiming at the model training stage, the training difficulty of the next-level wavelet transform model can be greatly reduced.
S504, when a wavelet transform model outputs a corresponding intermediate relighting image that satisfies the optimization stop condition, stopping transmission of that intermediate relighting image to the next wavelet transform model and taking it as the target relighting image.
The optimization stop condition may be set according to an actual situation, and the disclosure is not limited.
Alternatively, the optimization stop condition may be set as the number of models that process the image; alternatively, the optimization stop condition may be set to the rendering effect of the intermediate relighting image.
For example, if the optimization stop condition is that the image has been processed by 2 models, then the intermediate relighting image output by the second-level wavelet transform model satisfies the condition; transmission of that intermediate relighting image to the next-level wavelet transform model stops, and it is taken as the target relighting image.
S505, if the corresponding intermediate relighting image does not satisfy the optimization stop condition, continuing to transmit it to the next-level wavelet transform model, which continues the relighting rendering, until an intermediate relighting image output by some level satisfies the optimization stop condition and is taken as the target relighting image.
For example, if the optimization stop condition is that the image has been processed by 3 models, then the intermediate relighting image output by the second-level wavelet transform model does not satisfy the condition; it is transmitted to the third-level wavelet transform model, which continues the relighting rendering, and the intermediate relighting image output by the third-level model is taken as the target relighting image.
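The cascaded rendering with an optimization stop condition described in steps S502 to S505 can be sketched as a simple loop. The names `models` and `stop_condition` are placeholders for the patent's trained wavelet transform models and stop criterion, not part of the original disclosure:

```python
def cascade_relight(image, models, stop_condition):
    # Multi-stage rendering sketch: each wavelet-transform model refines the
    # intermediate relighting image of the previous stage; transmission to
    # the next stage stops as soon as the stop condition is met.
    x = image
    for stage, model in enumerate(models, start=1):
        x = model(x)                  # intermediate relighting image
        if stop_condition(x, stage):  # e.g. "processed by 2 models"
            break
    return x                          # target relighting image

# Toy usage: three "models", stop after the second stage.
models = [lambda v: v + 1, lambda v: v * 2, lambda v: v - 100]
result = cascade_relight(10, models, lambda img, stage: stage == 2)
print(result)  # (10 + 1) * 2 = 22; the third model is never reached
```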
According to this relighting image generation method, relighting rendering of the image to be processed does not depend on a single wavelet transform model, which avoids the problem that a single model cannot learn the complete mapping; by forming the relighting image generation system from multiple cascaded models, models at different levels can learn different mapping dimensions, further improving the rendering effect and reliability of the output relighting image.
It should be noted that, in the present disclosure, a residual network (ResBlock) and cross-layer connections (Skip Connections) are added between the downsampling and upsampling stages to improve the rendering effect of the generated relighting image.
As a possible implementation manner, as shown in fig. 6, the generation method of a relighting image proposed by the present disclosure, on the basis of the foregoing embodiment, a process of performing relighting rendering on an image by using a wavelet transform model at any level specifically includes the following steps:
s601, inputting the image into a wavelet transformation network of a wavelet transformation model, performing down-sampling processing on the image by the wavelet transformation network, and outputting a characteristic image corresponding to the image, wherein the image comprises an image to be processed and an intermediate relighting image.
S602, inputting the characteristic image into a residual network of the wavelet transform model, reconstructing the characteristic image by the residual network, and outputting a reconstructed characteristic image.
S603, inputting the reconstructed characteristic image into an inverse wavelet transform network of the wavelet transform model, performing up-sampling processing on the reconstructed characteristic image by the inverse wavelet transform network, and outputting a relighting image.
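The three steps S601 to S603 can be sketched as the following control flow. All callables are placeholders for the patent's networks, and for clarity the toy "image" is a plain number rather than a tensor (a real DWT would also change the spatial resolution):

```python
def relight_stage(image, dwt_levels, resblocks, idwt_levels):
    # One wavelet-transform model stage per S601-S603: wavelet downsampling,
    # residual reconstruction, inverse-wavelet upsampling with cross-layer
    # (skip) connections. All callables are hypothetical placeholders.
    skips = []
    x = image
    for down in dwt_levels:   # S601: DWT-based downsampling path
        x = down(x)
        skips.append(x)       # saved for the matching skip connection
    for block in resblocks:   # S602: residual reconstruction
        x = x + block(x)      # ResBlock: output = input + F(input)
    for up, skip in zip(idwt_levels, reversed(skips)):
        x = up(x + skip)      # S603: IDWT upsampling + skip connection
    return x

# Toy usage with one level each, on a scalar "image".
down = [lambda v: v * 2]
res = [lambda v: v]           # F(x) = x, so the ResBlock doubles x
up = [lambda v: v / 2]
print(relight_stage(3, down, res, up))  # ((3*2)*2 + 3*2) / 2 = 9.0
```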
In the embodiment of the disclosure, the image may be downsampled to obtain its corresponding feature image, and the reconstructed feature image produced by the residual network is then upsampled to obtain the relighting image. The number and factor of the downsampling steps are the same as those of the upsampling steps, and both can be set according to the actual situation.
For example, the image may be downsampled 4 times stepwise, by a factor of 2 each time (16× overall), to obtain the feature image. Further, the reconstructed feature image is upsampled 4 times stepwise, by a factor of 2 each time (16× overall), to obtain the relighting image. Because the downsampling and upsampling mirror each other, the final output has the same size as the input image.
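The shape bookkeeping in this example can be checked directly, assuming each DWT step halves the height and width and quadruples the channels as described earlier (the starting size 1024 × 1024 × 3 is the earlier example, not a requirement of the method):

```python
# Walk a 1024 x 1024 x 3 image through 4 stepwise 2x downsamplings
# (16x overall) and the mirrored 4 upsamplings.
h, w, c = 1024, 1024, 3
shapes = [(h, w, c)]
for _ in range(4):                  # downsampling path
    h, w, c = h // 2, w // 2, c * 4
    shapes.append((h, w, c))
for _ in range(4):                  # upsampling path mirrors it exactly
    h, w, c = h * 2, w * 2, c // 4
shapes.append((h, w, c))
print(shapes[4], shapes[-1])  # (64, 64, 768) (1024, 1024, 3)
```

The bottleneck feature image is 64 × 64 spatially (1024 / 16) with 768 channels, and the mirrored upsampling restores the original 1024 × 1024 × 3 size.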
According to this relighting image generation method, a residual network and cross-layer connections are added to the wavelet transform model, so that each upsampling input combines the previous layer's upsampling output with the corresponding downsampling output. This supervises the relighting rendering process, prevents learning errors, and further improves the rendering effect and reliability of the output relighting image.
In the present disclosure, a local convolution-normalization-nonlinearity network (Conv-IN-ReLU) is added to the relighting image generation system to further process the obtained feature images.
Alternatively, only the images acquired by downsampling may be preprocessed; alternatively, only the image obtained by up-sampling may be preprocessed; alternatively, the pre-processing may be performed separately for the images acquired by the down-sampling and the up-sampling.
As a possible implementation, as shown in fig. 7, on the basis of the foregoing embodiment, taking as an example the case where the images obtained by downsampling and by upsampling are each preprocessed, the method specifically includes the following steps:
S701, inputting the feature image obtained by downsampling into a first convolution network of the wavelet transform model, preprocessing the feature image by the first convolution network, and inputting the preprocessed feature image output by the first convolution network into the residual network.
S702, inputting the upsampled feature image obtained after the upsampling processing into a second convolution network of the wavelet transform model, and preprocessing the upsampled feature image by the second convolution network.
The preprocessing of the feature image mainly consists of convolution, normalization, and activation operations. The preprocessed feature image integrates the local information of the original feature image, and nonlinearity is added.
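A minimal NumPy sketch of one such Conv-IN-ReLU preprocessing block follows. The shapes, the plain 3x3 "same" convolution, and the parameter layout are illustrative assumptions, not the disclosed network; it shows how convolution gathers local information, instance normalization standardizes each channel, and ReLU adds the nonlinearity:

```python
import numpy as np

def conv_in_relu(x, w, b, eps=1e-5):
    """Conv (3x3, 'same') -> instance norm -> ReLU on x of shape (C_in, H, W)."""
    c_in, h, wd = x.shape
    c_out = w.shape[0]
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))      # zero padding keeps H x W
    y = np.zeros((c_out, h, wd))
    for o in range(c_out):                        # convolution: local information
        for i in range(c_in):
            for di in range(3):
                for dj in range(3):
                    y[o] += w[o, i, di, dj] * xp[i, di:di + h, dj:dj + wd]
        y[o] += b[o]
    mu = y.mean(axis=(1, 2), keepdims=True)       # instance normalization
    var = y.var(axis=(1, 2), keepdims=True)
    y = (y - mu) / np.sqrt(var + eps)
    return np.maximum(y, 0.0)                     # ReLU adds the nonlinearity

rng = np.random.default_rng(0)
feat = rng.standard_normal((3, 8, 8))
out = conv_in_relu(feat, rng.standard_normal((16, 3, 3, 3)),
                   rng.standard_normal(16))
```

In a trained model the weights `w` and biases `b` would be learned; here they are random placeholders so the block is runnable.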
According to the relighting image generation method of the present disclosure, preprocessing the images deepens the network and enhances the learning and fitting capabilities of the wavelet transform model, further improving the rendering effect and reliability of the output relighting image.
It should be noted that the relighting image generation method provided by the present disclosure can be applied to various image processing scenarios.
In the application scenario of adding filters to pictures of ordinary scenes, as shown in fig. 8-9, different filter effects can be created by changing the color temperature. A user therefore only needs to take one picture to obtain multiple results with different hues, which is convenient for subsequent editing, improves the user experience, and attracts user interest.
As shown in fig. 8, the image to be processed shown in fig. 8(a) is relighted to obtain the relighting image shown in fig. 8(b). The color temperature of the relighting image in fig. 8(b) is changed, and the black shadow area on the left side of the image to be processed in fig. 8(a) is eliminated.
As shown in fig. 9, the image to be processed shown in fig. 9(a) is relighted to obtain the relighting image shown in fig. 9(b). The shadows in fig. 9(b) are changed, a new shadow area is generated on the right side of the stump in fig. 9(b), and the overall tone of the image becomes a cool tone.
In the application scenario of adding special effects to portrait pictures, as shown in fig. 10, multiple effects can be produced by changing the degree and position of shadows, adding new ways to play and attracting users to the product.
In summary, as shown in fig. 11, the relighting image generation method provided by the present disclosure uses the discrete wavelet transform in the downsampling stage to reduce the image resolution and increase the number of image channels. Unlike the local convolution operations in the related art, the wavelet transform directly processes the global information of the whole image, so its receptive field is larger. Similarly, in the upsampling stage, the inverse discrete wavelet transform is used to increase the image resolution and reduce the number of image channels.
Further, after the downsampling and the upsampling, a local convolution-normalization-nonlinearity block is added to preprocess and further process the obtained feature images. In addition, a residual network and cross-layer connections are added between the downsampling and the upsampling, improving the rendering effect of the generated relighting image.
It should be noted that, in the technical solution of the present disclosure, the acquisition, storage, and application of the personal information of the users involved all comply with the relevant laws and regulations and do not violate public order and good customs. It is the intention of the present disclosure that personal information data be managed and processed in a manner that minimizes the risk of inadvertent or unauthorized access or use. Risks are minimized by limiting data collection and deleting data when it is no longer needed. All personal information in the present disclosure is collected with the knowledge and consent of the person concerned.
Corresponding to the relighting image generation methods provided in the foregoing embodiments, an embodiment of the present disclosure further provides a relighting image generation apparatus. Since the apparatus corresponds to the methods provided in the foregoing embodiments, the implementations of the methods are also applicable to the apparatus and will not be described in detail in this embodiment.
Fig. 12 is a schematic structural diagram of a generation apparatus of a relighting image according to an embodiment of the present disclosure.
As shown in fig. 12, the relighting image generation apparatus 1200 includes: an acquisition module 1210 and a first output module 1220. Wherein:
an obtaining module 1210 for obtaining an image to be processed;
the first output module 1220 is configured to input the image to be processed into a relighting image generation system, perform relighting rendering by using N wavelet transform models in the relighting image generation system, and output a target relighting image corresponding to the image to be processed, where N is an integer greater than or equal to 1.
Fig. 13 is a schematic structural diagram of a generation apparatus of a relighting image according to another embodiment of the present disclosure.
As shown in fig. 13, the relighting image generation apparatus 1300 includes: an acquisition module 1310 and a first output module 1320.
Wherein: n is an integer greater than 1, wherein the first output module 1310 includes:
the first output submodule 13201 is configured to, for a first wavelet transform model, input the image to be processed into the first wavelet transform model for relighting rendering, and output an intermediate relighting image;
a second output sub-module 13202, configured to, starting from the second wavelet transform model, input the intermediate relighting image output by the previous-level wavelet transform model into the next-level wavelet transform model for relighting rendering, and output an intermediate relighting image corresponding to the next-level wavelet transform model;
a first determining submodule 13203, configured to, each time a wavelet transform model at one level outputs a corresponding intermediate relighting image, stop transmitting that image to the next-level wavelet transform model and take it as the target relighting image when it is determined that the image meets the optimization stop condition;
a second determining submodule 13204, configured to, when the corresponding intermediate relighting image does not meet the optimization stop condition, transmit it to the next-level wavelet transform model, which continues the relighting rendering, until the intermediate relighting image output by the wavelet transform model at some level meets the optimization stop condition, and then take that image as the target relighting image.
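The cooperation of the first and second determining submodules amounts to a loop over the cascaded wavelet transform models with an early-exit condition. A hypothetical sketch (the model and stop-condition callables are stand-ins, not the disclosed networks):

```python
from typing import Callable, List

def generate_target_relighting(image, models: List[Callable],
                               meets_stop: Callable):
    """Pass the image through cascaded models; stop as soon as an
    intermediate relighting image satisfies the stop condition."""
    out = image
    for model in models:
        out = model(out)             # relighting rendering at this level
        if meets_stop(out):          # first determining submodule's check
            break                    # stop transmitting to the next level
    return out                       # the target relighting image

# toy stand-ins: each "model" improves a quality score by 1
models = [lambda q: q + 1 for _ in range(5)]
result = generate_target_relighting(0, models, meets_stop=lambda q: q >= 3)
# result == 3: rendering stopped after the third model
```

In the real system `meets_stop` would evaluate the optimization stop condition on the intermediate relighting image; here it is a trivial threshold so the control flow is visible.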
The generation apparatus 1300 of the relighting image further includes:
a second output module 1330, configured to input an image into a wavelet transform network of the wavelet transform model, perform downsampling on the image by the wavelet transform network, and output a feature image corresponding to the image, where the image includes the image to be processed and the intermediate relighting image;
a third output module 1340, configured to input the feature image into a residual error network of the wavelet transform model, reconstruct the feature image by the residual error network, and output a reconstructed feature image;
a fourth output module 1350, configured to input the reconstructed feature image into an inverse wavelet transform network of the wavelet transform model, perform upsampling processing on the reconstructed feature image by the inverse wavelet transform network, and output the relighting image;
a sampling module 1360, configured to down-sample and up-sample the image according to a preset frequency and a preset multiple;
the preprocessing module 1370 is configured to input the upsampled feature image obtained after the upsampling processing to a second convolution network of the wavelet transform model, and the second convolution network preprocesses the upsampled feature image.
The third output module 1340 includes:
a third output sub-module 13401, configured to input the feature image obtained by downsampling into a first convolution network of the wavelet transform model, pre-process the feature image by the first convolution network, and input the pre-processed feature image output by the first convolution network into the residual error network.
It should be noted that the obtaining module 1210 has the same function and structure as the obtaining module 1310.
According to the relighting image generation apparatus of the present disclosure, the image to be processed is rendered by a relighting image generation system composed of at least one wavelet transform model, so that the obtained relighting image preserves the scene content structure at low frequencies and the detailed shadow information at high frequencies, yielding a more accurate and reliable rendering effect.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
FIG. 14 shows a schematic block diagram of an example electronic device 1400 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 14, the device 1400 includes a computing unit 1401 that can perform various appropriate actions and processes in accordance with a computer program stored in a Read Only Memory (ROM) 1402 or a computer program loaded from a storage unit 1408 into a Random Access Memory (RAM) 1403. The RAM 1403 can also store various programs and data required for the operation of the device 1400. The computing unit 1401, the ROM 1402, and the RAM 1403 are connected to each other via a bus 1404. An input/output (I/O) interface 1405 is also connected to the bus 1404.
Various components in device 1400 connect to I/O interface 1405, including: an input unit 1406 such as a keyboard, a mouse, or the like; an output unit 1407 such as various types of displays, speakers, and the like; a storage unit 1408 such as a magnetic disk, optical disk, or the like; and a communication unit 1409 such as a network card, a modem, a wireless communication transceiver, and the like. The communication unit 1409 allows the device 1400 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 1401 may be any of various general purpose and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 1401 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, or microcontroller. The computing unit 1401 performs the methods and processes described above, such as the relighting image generation method. For example, in some embodiments, the relighting image generation method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 1408. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 1400 via the ROM 1402 and/or the communication unit 1409. When the computer program is loaded into the RAM 1403 and executed by the computing unit 1401, one or more steps of the relighting image generation method described above may be performed. Alternatively, in other embodiments, the computing unit 1401 may be configured by any other suitable means (e.g., by firmware) to perform the relighting image generation method.
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include being implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special purpose or general purpose, and which receives data and instructions from, and transmits data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowcharts and/or block diagrams to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), the internet, and blockchain networks.
The computer system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also called a cloud computing server or cloud host, which is a host product in a cloud computing service system that overcomes the defects of difficult management and weak service scalability in traditional physical hosts and VPS (Virtual Private Server) services. The server may also be a server of a distributed system, or a server combined with a blockchain.
The present disclosure also provides a computer program product comprising a computer program which, when executed by a processor, implements the generation method of a relighted image as described above.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved, and the present disclosure is not limited herein.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (17)

1. A generation method of a relighting image, comprising:
acquiring an image to be processed;
inputting the image to be processed into a relighting image generation system, performing relighting rendering by N wavelet transformation models in the relighting image generation system, and outputting a target relighting image corresponding to the image to be processed, wherein N is an integer greater than or equal to 1.
2. The generation method according to claim 1, wherein N is an integer greater than 1, wherein the inputting the image to be processed into a relighting image generation system, performing relighting rendering by N wavelet transform models in the relighting image generation system, and outputting a target relighting image corresponding to the image to be processed comprises:
aiming at a first wavelet transformation model, inputting the image to be processed into the first wavelet transformation model for relighting rendering, and outputting an intermediate relighting image;
from the second wavelet transform model, inputting the intermediate relighting image output by the previous wavelet transform model into the next wavelet transform model for relighting rendering, and outputting the intermediate relighting image corresponding to the next wavelet transform model;
and when one of the wavelet transform models outputs a corresponding intermediate relighting image and the corresponding intermediate relighting image meets the optimization stopping condition, stopping transmitting the corresponding intermediate relighting image to the next wavelet transform model, and taking the corresponding intermediate relighting image as the target relighting image.
3. The generation method according to claim 2, further comprising:
and if the corresponding intermediate relighting image does not meet the optimization stopping condition, the intermediate relighting image is continuously transmitted to a next-level wavelet transform model, and the next-level wavelet transform model continuously performs relighting rendering on the corresponding intermediate relighting image until the intermediate relighting image output by the one-level wavelet transform model meets the optimization stopping condition, and the intermediate relighting image meeting the optimization stopping condition is taken as the target relighting image.
4. The generation method according to any one of claims 1 to 3, wherein the process of performing relighting rendering on the image by using the wavelet transform model at any level comprises the following steps:
inputting an image into a wavelet transform network of the wavelet transform model, performing downsampling processing on the image by the wavelet transform network, and outputting a characteristic image corresponding to the image, wherein the image comprises the image to be processed and the intermediate relighting image;
inputting the characteristic image into a residual error network of the wavelet transformation model, reconstructing the characteristic image by the residual error network, and outputting a reconstructed characteristic image;
and inputting the reconstruction characteristic image into a wavelet inverse transformation network of the wavelet transformation model, performing up-sampling processing on the reconstruction characteristic image by the wavelet inverse transformation network, and outputting the relighting image.
5. The generation method of claim 4, further comprising:
and performing down-sampling and up-sampling on the image according to a preset frequency and a preset multiple.
6. The generation method according to claim 4, wherein the inputting the feature image into a residual network of the wavelet transform model further comprises:
inputting the feature image obtained by down-sampling into a first convolution network of the wavelet transform model, preprocessing the feature image by the first convolution network, and inputting the preprocessed feature image output by the first convolution network into the residual error network.
7. The generation method of claim 4, further comprising:
and inputting the up-sampling characteristic image obtained after the up-sampling treatment into a second convolution network of the wavelet transformation model, and preprocessing the up-sampling characteristic image by the second convolution network.
8. A generation apparatus of a relighting image, comprising:
the acquisition module is used for acquiring an image to be processed;
the first output module is used for inputting the image to be processed into a relighting image generation system, performing relighting rendering by using N wavelet transformation models in the relighting image generation system, and outputting a target relighting image corresponding to the image to be processed, wherein N is an integer greater than or equal to 1.
9. The generation apparatus of claim 8, wherein N is an integer greater than 1, wherein the first output module comprises:
the first output sub-module is used for inputting the image to be processed into a first wavelet transformation model for relighting rendering aiming at the first wavelet transformation model and outputting an intermediate relighting image;
the second output sub-module is used for inputting the intermediate relighting image output by the previous-level wavelet transform model into the next-level wavelet transform model from the second wavelet transform model for relighting rendering and outputting the intermediate relighting image corresponding to the next-level wavelet transform model;
and the first determining sub-module is used for stopping transmitting the corresponding intermediate relighting image to the next-level wavelet transform model and taking the corresponding intermediate relighting image as the target relighting image when the corresponding intermediate relighting image is determined to meet the optimization stopping condition every time one of the wavelet transform models outputs the corresponding intermediate relighting image.
10. The generation apparatus of claim 9, further comprising:
and the second determining sub-module is used for determining that the corresponding intermediate relighting image does not meet the optimization stopping condition, transmitting the intermediate relighting image to a next-level wavelet transform model, and performing relighting rendering on the corresponding intermediate relighting image by the next-level wavelet transform model until the intermediate relighting image output by the one-level wavelet transform model meets the optimization stopping condition, and taking the intermediate relighting image meeting the optimization stopping condition as the target relighting image.
11. The generation apparatus according to any one of claims 8 to 10, further comprising:
the second output module is used for inputting an image into a wavelet transform network of the wavelet transform model, performing downsampling processing on the image by the wavelet transform network, and outputting a characteristic image corresponding to the image, wherein the image comprises the image to be processed and the intermediate relighting image;
the third output module is used for inputting the characteristic image into a residual error network of the wavelet transform model, reconstructing the characteristic image by the residual error network and outputting a reconstructed characteristic image;
and the fourth output module is used for inputting the reconstructed characteristic image into a wavelet inverse transformation network of the wavelet transformation model, performing up-sampling processing on the reconstructed characteristic image by the wavelet inverse transformation network, and outputting the relighting image.
12. The generation apparatus of claim 11, further comprising:
and the sampling module is used for carrying out down-sampling and up-sampling on the image according to a preset frequency and a preset multiple.
13. The generation apparatus of claim 11, wherein the third output module comprises:
and the third output sub-module is used for inputting the feature image acquired by down-sampling into a first convolution network of the wavelet transform model, preprocessing the feature image by the first convolution network, and inputting the preprocessed feature image output by the first convolution network into the residual error network.
14. The generation apparatus of claim 11, further comprising:
and the preprocessing module is used for inputting the up-sampling characteristic image obtained after the up-sampling processing into a second convolution network of the wavelet transform model, and preprocessing the up-sampling characteristic image by the second convolution network.
15. An electronic device comprising a processor and a memory;
wherein the processor runs a program corresponding to the executable program code by reading the executable program code stored in the memory for implementing the method according to any one of claims 1 to 7.
16. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-7.
17. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-7.
CN202110729940.4A 2021-06-29 2021-06-29 Relighting image generation method and device and electronic equipment Pending CN113554739A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110729940.4A CN113554739A (en) 2021-06-29 2021-06-29 Relighting image generation method and device and electronic equipment
PCT/CN2022/074900 WO2023273340A1 (en) 2021-06-29 2022-01-29 Method and apparatus for generating relighting image, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110729940.4A CN113554739A (en) 2021-06-29 2021-06-29 Relighting image generation method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN113554739A true CN113554739A (en) 2021-10-26

Family

ID=78102491

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110729940.4A Pending CN113554739A (en) 2021-06-29 2021-06-29 Relighting image generation method and device and electronic equipment

Country Status (2)

Country Link
CN (1) CN113554739A (en)
WO (1) WO2023273340A1 (en)


Citations (3)

Publication number Priority date Publication date Assignee Title
US20060214931A1 (en) * 2005-03-22 2006-09-28 Microsoft Corporation Local, deformable precomputed radiance transfer
CN1889128A (en) * 2006-07-17 2007-01-03 北京航空航天大学 Method for precalculating radiancy transfer full-frequency shadow based on GPU
US9001226B1 (en) * 2012-12-04 2015-04-07 Lytro, Inc. Capturing and relighting images using multiple devices

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
US8090160B2 (en) * 2007-10-12 2012-01-03 The University Of Houston System Automated method for human face modeling and relighting with application to face recognition
CN113554739A (en) * 2021-06-29 2021-10-26 北京百度网讯科技有限公司 Relighting image generation method and device and electronic equipment
CN113592998A (en) * 2021-06-29 2021-11-02 北京百度网讯科技有限公司 Relighting image generation method and device and electronic equipment


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MAJED EL HELOU et al.: "NTIRE 2021 Depth Guided Image Relighting Challenge", Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, pages 566-577 *
PUTHUSSERY, DENSEN et al.: "WDRN: A Wavelet Decomposed RelightNet for Image Relighting", HTTPS://ARXIV.53YU.COM/PDF/2009.06678, 31 January 2021 (2021-01-31), pages 519-534 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023273340A1 (en) * 2021-06-29 2023-01-05 北京百度网讯科技有限公司 Method and apparatus for generating relighting image, and electronic device
CN115546041A (en) * 2022-02-28 2022-12-30 荣耀终端有限公司 Training method of light supplement model, image processing method and related equipment
CN115546041B (en) * 2022-02-28 2023-10-20 荣耀终端有限公司 Training method of light supplementing model, image processing method and related equipment thereof
CN115546010A (en) * 2022-09-21 2022-12-30 荣耀终端有限公司 Image processing method and electronic device
CN115546010B (en) * 2022-09-21 2023-09-12 荣耀终端有限公司 Image processing method and electronic equipment

Also Published As

Publication number Publication date
WO2023273340A1 (en) 2023-01-05

Similar Documents

Publication Publication Date Title
Lu et al. Multi-scale adversarial network for underwater image restoration
EP4109392A1 (en) Image processing method and image processing device
CN113554739A (en) Relighting image generation method and device and electronic equipment
CN110675336A (en) Low-illumination image enhancement method and device
CN111079764B (en) Low-illumination license plate image recognition method and device based on deep learning
CN113592998A (en) Relighting image generation method and device and electronic equipment
CN108364270B (en) Color reduction method and device for color cast image
CN111353955A (en) Image processing method, device, equipment and storage medium
CN114627034A (en) Image enhancement method, training method of image enhancement model and related equipment
Wang et al. Single Underwater Image Enhancement Based on $L_P$-Norm Decomposition
Chen et al. Weighted sparse representation multi-scale transform fusion algorithm for high dynamic range imaging with a low-light dual-channel camera
CN113628143A (en) Weighted fusion image defogging method and device based on multi-scale convolution
CN113129236A (en) Single low-light image enhancement method and system based on Retinex and convolutional neural network
US20240054605A1 (en) Methods and systems for wavelet domain-based normalizing flow super-resolution image reconstruction
Si et al. A novel method for single nighttime image haze removal based on gray space
Moghimi et al. A joint adaptive evolutionary model towards optical image contrast enhancement and geometrical reconstruction approach in underwater remote sensing
Barai et al. Human visual system inspired saliency guided edge preserving tone-mapping for high dynamic range imaging
WO2023215371A1 (en) System and method for perceptually optimized image denoising and restoration
CN114648467B (en) Image defogging method and device, terminal equipment and computer readable storage medium
Wu et al. Learning to joint remosaic and denoise in quad bayer cfa via universal multi-scale channel attention network
Yu et al. Decolorization algorithm based on contrast pyramid transform fusion
CN110766153A (en) Neural network model training method and device and terminal equipment
CN115375909A (en) Image processing method and device
Hsu et al. Structure-transferring edge-enhanced grid dehazing network
CN114240794A (en) Image processing method, system, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20211026