WO2023273340A1 - Method, apparatus, and electronic device for generating a re-illuminated image (重光照图像的生成方法、装置及电子设备) - Google Patents

Method, apparatus, and electronic device for generating a re-illuminated image

Info

Publication number
WO2023273340A1
WO2023273340A1 (PCT/CN2022/074900; CN2022074900W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
wavelet transform
illuminated
transform model
output
Prior art date
Application number
PCT/CN2022/074900
Other languages
English (en)
French (fr)
Inventor
李甫 (Li Fu)
邓瑞峰 (Deng Ruifeng)
Original Assignee
北京百度网讯科技有限公司 (Beijing Baidu Netcom Science and Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京百度网讯科技有限公司 (Beijing Baidu Netcom Science and Technology Co., Ltd.)
Publication of WO2023273340A1 publication Critical patent/WO2023273340A1/zh

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/50: Lighting effects
    • G06T 15/506: Illumination models
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/50: Lighting effects
    • G06T 15/60: Shadow generation

Definitions

  • Embodiments of the present disclosure generally relate to the technical field of image processing, and more specifically relate to the field of artificial intelligence, particularly computer vision and deep learning technologies, which can be applied in image processing scenarios.
  • In the related art, the following two methods are usually used to generate re-illuminated images: a method based on manual rendering, and a method based on neural network learning and training to obtain a model for re-illumination rendering of the image to be processed.
  • The present disclosure provides a method, an apparatus, and an electronic device for generating a re-illuminated image.
  • a method for generating a re-illuminated image including:
  • Input the image to be processed into the re-illuminated image generation system, perform re-illumination rendering with the N wavelet transform models in the re-illuminated image generation system, and output the target re-illuminated image corresponding to the image to be processed, where N is an integer greater than or equal to 1.
  • a device for generating a re-illuminated image including:
  • An acquisition module, used for obtaining the image to be processed;
  • a first output module, configured to input the image to be processed into the re-illuminated image generation system, perform re-illumination rendering with the N wavelet transform models in the re-illuminated image generation system, and output the target re-illuminated image corresponding to the image to be processed, where N is an integer greater than or equal to 1.
  • An electronic device comprising: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can execute the method for generating a re-illuminated image according to the first aspect of the present disclosure.
  • a non-transitory computer-readable storage medium storing computer instructions, the computer instructions are used to make the computer execute the method for generating a re-illuminated image according to the first aspect of the present disclosure.
  • a computer program product including a computer program, wherein when the computer program is executed by a processor, the method for generating a re-illuminated image according to the first aspect of the present disclosure is implemented.
  • FIG. 1 is a schematic diagram according to a first embodiment of the present disclosure
  • Fig. 2 is a schematic diagram of a re-illuminated image generation process.
  • Fig. 3 is a schematic diagram of an image to be processed
  • Fig. 4 is a schematic diagram corresponding to different direction components in the process of re-illumination image generation
  • FIG. 5 is a schematic diagram according to a second embodiment of the present disclosure.
  • Fig. 6 is a schematic diagram according to a third embodiment of the present disclosure.
  • FIG. 7 is a schematic diagram according to a fourth embodiment of the present disclosure.
  • Fig. 8 is a schematic diagram of another re-illumination image generation process
  • Fig. 9 is a schematic diagram of another re-illumination image generation process.
  • Fig. 10 is a schematic diagram of another re-illumination image generation process
  • Fig. 11 is a schematic diagram of another re-illumination image generation process
  • Fig. 12 is a block diagram of a device for generating a re-illuminated image for implementing the method for generating a re-illuminated image according to an embodiment of the present disclosure
  • Fig. 13 is a block diagram of a device for generating a re-illuminated image for implementing the method for generating a re-illuminated image according to an embodiment of the present disclosure
  • FIG. 14 is a block diagram of an electronic device for realizing the generation of a re-illuminated image or the generation of a re-illuminated image according to an embodiment of the present disclosure.
  • Image processing is a technology that uses a computer to analyze an image to achieve a desired result, and generally refers to digital image processing. A digital image is a large two-dimensional array obtained by shooting with industrial cameras, video cameras, scanners, and other equipment; the elements of this array are called pixels, and their values are called grayscale values. Image processing technology generally includes three parts: image compression; enhancement and restoration; and matching, description, and recognition.
  • AI (Artificial Intelligence) is a discipline that studies how to make computers simulate certain human thinking processes and intelligent behaviors (such as learning, reasoning, thinking, and planning). It includes both hardware-level and software-level technologies.
  • Artificial intelligence software technology mainly includes computer vision, speech recognition, natural language processing, machine learning/deep learning, big data processing, and knowledge graph technologies.
  • Computer vision is a science that studies how to make machines "see": it uses cameras and computers instead of human eyes to identify, track, and measure targets, and performs further graphics processing so that the result becomes an image more suitable for human observation or for transmission to an instrument for detection.
  • Computer vision studies related theories and technologies, trying to build artificial intelligence systems that can obtain "information" from images or multidimensional data.
  • The information referred to here is information, in Shannon's sense, that can be used to help make a "decision". Because perception can be thought of as extracting information from sensory signals, computer vision can also be regarded as the science of making artificial systems "perceive" from images or multidimensional data.
  • Deep learning is a new research direction in the field of machine learning (ML), introduced to bring machine learning closer to its original goal: artificial intelligence.
  • Deep learning learns the internal laws and representation levels of sample data; the information obtained during learning is of great help in interpreting data such as text, images, and sounds. Its ultimate goal is to enable machines to analyze and learn like humans, and to recognize data such as text, images, and sounds.
  • Deep learning is a complex machine learning algorithm that has achieved results in speech and image recognition far exceeding earlier related technologies.
  • FIG. 1 is a schematic diagram according to a first embodiment of the present disclosure.
  • the re-illuminated image generating method of this embodiment is executed by a re-illuminated image generating device, and the re-illuminated image generating device may specifically be a hardware device or software in a hardware device.
  • the hardware device is, for example, a terminal device, a server, and the like.
  • The method for generating a re-illuminated image proposed in this embodiment includes the following steps:
  • The image to be processed can be any image input by the user. For example, any video, such as a teaching video or a film or television work, can be decoded and split into frames, with one frame taken as the image to be processed.
  • Alternatively, a stored image or video may be obtained from at least one of a local or remote image library and video library to obtain the image to be processed; optionally, an image may be directly captured as the image to be processed.
  • the embodiment of the present disclosure does not limit the manner of acquiring the image to be processed, which may be selected according to actual conditions.
  • Relighting refers to changing the illumination direction and color temperature of a given image to generate another image with different illumination direction and color temperature.
  • Figure 2(a) is a scene image when the color temperature is 2500K and the light source is in the east
  • Figure 2(b) is a scene image when the color temperature is 6500K and the light source is in the west.
  • When the color temperature value is low, the image color is yellowish, which is a warm tone; when the color temperature value is high, the image color is whitish, which is a cool tone. The shadows produced under the two light sources also differ.
  • The purpose of re-illumination rendering is to render Figure 2(a) so as to generate Figure 2(b): the scene content remains consistent, and only the color temperature and shadow direction change.
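  • As an illustration of the color-temperature change described above, the following numpy sketch tints a neutral image warm or cool by rescaling the red and blue channels. This is a toy illustration only; the function name, the channel-scaling scheme, and the `strength` parameter are assumptions, not the patent's rendering method.

```python
import numpy as np

def apply_color_temperature(img, warm=True, strength=0.15):
    """Toy color-temperature shift on an RGB image in [0, 1].

    A warm shift (low color temperature, e.g. ~2500K) boosts red and cuts
    blue; a cool shift (high color temperature, e.g. ~6500K) does the
    opposite. Simple channel scaling, not the patent's pipeline.
    """
    out = img.astype(np.float64).copy()
    if warm:
        out[..., 0] *= 1.0 + strength   # red up   -> yellowish, warm tone
        out[..., 2] *= 1.0 - strength   # blue down
    else:
        out[..., 0] *= 1.0 - strength   # red down
        out[..., 2] *= 1.0 + strength   # blue up  -> whitish, cool tone
    return np.clip(out, 0.0, 1.0)

gray = np.full((4, 4, 3), 0.5)          # neutral gray test image
warm_img = apply_color_temperature(gray, warm=True)
cool_img = apply_color_temperature(gray, warm=False)
print(warm_img[0, 0], cool_img[0, 0])
```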
  • the re-illuminated image generation system includes N wavelet transform models, where N is an integer greater than or equal to 1.
  • For example, the re-illuminated image generation system may include one wavelet transform model; as another example, it may include three wavelet transform models with the same structure, in which case the three wavelet transform models are connected in cascade.
  • A network obtained through neural network learning and training generally operates only in the spatial domain, that is, on RGB (Red Green Blue) values.
  • the method for generating a relit image proposed in this disclosure can obtain a wavelet transform model through training to perform relit rendering on the image to be processed, so as to generate a higher quality relit image by operating on the frequency domain image.
  • the present disclosure does not limit the type of wavelet transform, which can be selected according to actual conditions.
  • the discrete wavelet transform model can be selected to perform re-illumination rendering on the image to be processed.
  • The method for generating a re-illuminated image neither relies on manual design nor on a convolutional neural network model obtained through neural network learning and training. Instead, it uses a re-illuminated image generation system composed of at least one wavelet transform model to render the image to be processed, so that the resulting re-illuminated image retains the scene content structure at low frequencies and the detailed shadow information at high frequencies, yielding a re-illuminated image with a more accurate and reliable rendering effect.
  • the frequency of the image is an index that characterizes the intensity of the gray level change in the image, and it is the gradient of the gray level in the plane space.
  • the wavelet transform can convert the image from the spatial domain to the frequency domain, that is, the gray distribution function of the image can be transformed into the frequency distribution function of the image, and the frequency distribution function of the image can be transformed by inverse transformation is the gray distribution function.
  • A one-dimensional discrete wavelet transform (Discrete Wavelet Transform, DWT for short) can be performed on each row of pixels of the image to be processed to obtain the low-frequency component L and the high-frequency component H of the original image (the image to be processed) in the horizontal direction.
  • one-dimensional DWT can be performed on each column of pixels of the transformed data, so as to obtain four results as shown in FIG. 4 .
  • According to the low-frequency components in both the horizontal and vertical directions, that is LL, the image shown in Figure 4(a) can be obtained; according to the low-frequency component in the horizontal direction and the high-frequency component in the vertical direction, that is LH, the image shown in Figure 4(b) can be obtained; according to the high-frequency component in the horizontal direction and the low-frequency component in the vertical direction, that is HL, the image shown in Figure 4(c) can be obtained; and according to the high-frequency components in both the horizontal and vertical directions, that is HH, the image shown in Figure 4(d) can be obtained.
  • an image as shown in FIG. 4( a ) that can reflect the arrangement of objects in the image to be processed can be obtained, that is, an approximate image of the image to be processed.
  • the image shown in Figure 4(a) corresponds to the low-frequency part of the image to be processed
  • The three images shown in Figure 4(b)-(d) correspond to the contours of the image to be processed: detail images in three directions (horizontal, vertical, and diagonal), corresponding to the high-frequency part of the image to be processed.
  • the size of the image to be processed can be expressed as 1024*1024*3.
  • After one level of DWT, the size of each sub-band image becomes 512*512*3.
  • IDWT processing can be performed through the inverse discrete wavelet transform (Inverse Discrete Wavelet Transform, IDWT for short) network in the discrete wavelet transform model; the IDWT process is similar to DWT and will not be repeated here.
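  • The row-then-column decomposition described above can be sketched as a one-level Haar DWT in numpy, together with its inverse. The Haar filters and the averaging normalization here are one common choice; the patent does not fix a particular wavelet, so this is an illustrative sketch rather than the disclosed model.

```python
import numpy as np

def haar_dwt2(img):
    """One level of 2D Haar DWT on a single-channel image with even dims.

    Returns the four sub-bands (LL, LH, HL, HH), each half the input size,
    mirroring the horizontal-then-vertical filtering described above.
    """
    # Horizontal pass: low/high frequency along each row.
    lo = (img[:, 0::2] + img[:, 1::2]) / 2.0
    hi = (img[:, 0::2] - img[:, 1::2]) / 2.0
    # Vertical pass on each intermediate result.
    LL = (lo[0::2, :] + lo[1::2, :]) / 2.0
    LH = (lo[0::2, :] - lo[1::2, :]) / 2.0
    HL = (hi[0::2, :] + hi[1::2, :]) / 2.0
    HH = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return LL, LH, HL, HH

def haar_idwt2(LL, LH, HL, HH):
    """Inverse of haar_dwt2: perfect reconstruction of the original image."""
    h, w = LL.shape
    lo = np.empty((2 * h, w)); hi = np.empty((2 * h, w))
    lo[0::2, :] = LL + LH; lo[1::2, :] = LL - LH
    hi[0::2, :] = HL + HH; hi[1::2, :] = HL - HH
    img = np.empty((2 * h, 2 * w))
    img[:, 0::2] = lo + hi; img[:, 1::2] = lo - hi
    return img

rng = np.random.default_rng(0)
x = rng.random((8, 8))
LL, LH, HL, HH = haar_dwt2(x)
print(LL.shape)                                     # (4, 4): resolution halves
print(np.allclose(haar_idwt2(LL, LH, HL, HH), x))   # True
```

For a 1024*1024 input this yields four 512*512 sub-bands, matching the size change noted above.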
  • a re-illuminated image generation system in which at least two wavelet transform models are cascaded may be used.
  • the method for generating a re-illuminated image proposed in the present disclosure specifically includes the following steps on the basis of the above-mentioned embodiments:
  • This step S501 is the same as the step S101 in the previous embodiment, and will not be repeated here.
  • Step S102 in the previous embodiment may specifically include the following steps S502-S504.
  • A multi-stage rendering strategy can be adopted: for the first wavelet transform model, the image to be processed is input into the first wavelet transform model for re-illumination rendering, and the model learns the mapping from the image to be processed in order to output an intermediate re-illuminated image.
  • After the image to be processed is input into the first wavelet transform model for re-illumination rendering and an intermediate re-illuminated image is output, the first wavelet transform model can be fixed, the training set (a preset number of sample images to be processed) is processed with the fixed model, and the intermediate re-illuminated images of the training set under the first wavelet transform model are output.
  • The intermediate re-illuminated image output by the upper-level wavelet transform model can then be input into the next-level wavelet transform model for re-illumination rendering, and the intermediate re-illuminated image corresponding to the next-level model is output. Because the upper-level wavelet transform model has already learned most of the mapping relationship, the intermediate re-illuminated image corresponding to the next-level model is closer to the accurate result (ground truth) than that of the upper-level model.
  • the difficulty of training the next-level wavelet transform model will be greatly reduced.
  • The optimization stop condition may be set according to the actual situation, which is not limited in the present disclosure.
  • the optimization stop condition can be set as the number of models that process the image; optionally, the optimization stop condition can be set as the rendering effect of the intermediate re-illuminated image.
  • For example, suppose the optimization stop condition is that the number of models that have processed the image is 2. In this case, when the current intermediate re-illuminated image is the image obtained after processing by the second wavelet transform model, the intermediate re-illuminated image satisfies the optimization stop condition; transmission of the image to the next-level wavelet transform model is stopped, and it is taken as the target re-illuminated image.
  • If instead the optimization stop condition requires three models, the intermediate re-illuminated image obtained after the second wavelet transform model does not yet satisfy the condition, so it continues to be passed to the third-level wavelet transform model, which continues to perform re-illumination rendering on it; the intermediate re-illuminated image corresponding to the third-level wavelet transform model is then taken as the target re-illuminated image.
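  • The multi-stage cascade with an optimization stop condition can be sketched as follows. The function name and the toy stage models are hypothetical placeholders for trained wavelet transform models; the stop condition shown is the "number of models" variant from the example above.

```python
def cascaded_relight(image, models, max_stages=2):
    """Sketch of the multi-stage strategy: each cascaded model refines the
    previous stage's intermediate re-illuminated image, and propagation
    stops once the configured number of models has processed the image.
    All names here are illustrative, not the patent's API.
    """
    intermediate = image
    for stage, model in enumerate(models, start=1):
        intermediate = model(intermediate)      # re-illumination rendering
        if stage >= max_stages:                 # optimization stop condition
            break
    return intermediate                         # target re-illuminated image

# Toy "models": each stage nudges a scalar "image" halfway toward a
# ground truth of 1.0, so later stages are closer to the accurate result.
toy_models = [lambda x: x + 0.5 * (1.0 - x)] * 3
print(cascaded_relight(0.0, toy_models, max_stages=2))  # 0.75 after two stages
```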
  • The method for generating a re-illuminated image does not rely on a single wavelet transform model for re-illumination rendering of the image to be processed, avoiding the problem that a single model cannot learn the complete mapping relationship. By cascading multiple models into a re-illuminated image generation system, models at different levels learn different dimensions of the mapping, further improving the rendering effect and reliability of the output re-illuminated image.
  • a residual network (Res Block) and a cross-layer connection (Skip Connection) are added in the process of downsampling and upsampling to improve the rendering effect of the generated re-illuminated image.
  • The process of re-illuminating an image with a wavelet transform model at any level specifically includes the following steps:
  • the image may be down-sampled to obtain a feature image corresponding to the image.
  • The reconstructed feature image obtained through residual network reconstruction is then up-sampled to obtain a re-illuminated image, where the number of down-sampling steps and the down-sampling factor are the same as those of the up-sampling.
  • the frequency and multiple of up-sampling and down-sampling can be set according to actual conditions.
  • For example, the image may be down-sampled 4 times step by step, by a factor of 2 each time, for a total factor of 16, to obtain the feature image corresponding to the image. Correspondingly, the reconstructed feature image is up-sampled 4 times step by step, by a factor of 2 each time, for a total factor of 16, to obtain the re-illuminated image. It should be noted that during sampling, the size of the final output is kept consistent with that of the input image.
  • The up-sampled input is combined with the output of the corresponding down-sampling step, which plays a supervisory role in the re-illumination rendering process, prevents learning errors, and further improves the rendering effect and reliability of the output re-illuminated image.
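  • The symmetric four-step down-sampling/up-sampling scheme with cross-layer connections can be sketched in numpy as follows. Average pooling, nearest-neighbour up-sampling, and additive skip connections are stand-ins for the model's learned wavelet and convolution layers, chosen only to make the shape bookkeeping concrete.

```python
import numpy as np

def downsample2(x):
    """2x average-pool down-sampling of a single-channel image."""
    return (x[0::2, 0::2] + x[0::2, 1::2] + x[1::2, 0::2] + x[1::2, 1::2]) / 4.0

def upsample2(x):
    """2x nearest-neighbour up-sampling."""
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def encode_decode(img, steps=4):
    """Sketch of the symmetric scheme: 'steps' 2x down-samplings (16x total
    for steps=4), then the same number of 2x up-samplings, with a skip
    connection adding each encoder output to the matching decoder input.
    """
    skips = []
    x = img
    for _ in range(steps):
        skips.append(x)                  # keep output for the skip connection
        x = downsample2(x)
    # (a residual network would reconstruct the bottleneck features here)
    for _ in range(steps):
        x = upsample2(x) + skips.pop()   # cross-layer (skip) connection
    return x

img = np.ones((64, 64))
out = encode_decode(img)
print(out.shape)   # (64, 64): output matches the input size
```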
  • A local convolution-normalization-nonlinearity network (Conv-IN-ReLU) is added to the re-illuminated image generation system to further process the obtained feature images.
  • Optionally, preprocessing can be performed only on images obtained by down-sampling, only on images obtained by up-sampling, or on both.
  • Taking preprocessing of images obtained by both down-sampling and up-sampling as an example, the process specifically includes the following steps:
  • the process of preprocessing the feature image mainly includes convolution, normalization, activation and other operations on the image.
  • The preprocessed feature image integrates the local information of the original feature image and adds nonlinearity. The network is thereby deepened, the learning and fitting abilities of the wavelet transform model are enhanced, and the rendering effect and reliability of the output re-illuminated image are further improved.
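  • A minimal numpy sketch of one Conv-IN-ReLU preprocessing step is shown below. The 3x3 averaging kernel stands in for learned convolution weights, and the normalization is computed over a single feature map; this is an illustrative assumption, not the disclosed network.

```python
import numpy as np

def conv_in_relu(x, kernel):
    """Sketch of a Conv-IN-ReLU preprocessing step on one feature map:
    a small convolution gathers local information, instance normalization
    rescales the map, and ReLU adds nonlinearity.
    """
    h, w = x.shape
    pad = np.pad(x, 1)
    # 3x3 "same" convolution (cross-correlation) with the given kernel.
    conv = np.zeros_like(x)
    for i in range(3):
        for j in range(3):
            conv += kernel[i, j] * pad[i:i + h, j:j + w]
    # Instance normalization: zero mean, unit variance per feature map.
    norm = (conv - conv.mean()) / (conv.std() + 1e-5)
    return np.maximum(norm, 0.0)                 # ReLU nonlinearity

rng = np.random.default_rng(1)
feat = rng.standard_normal((8, 8))
out = conv_in_relu(feat, np.full((3, 3), 1.0 / 9.0))
print(out.min() >= 0.0, out.shape)   # True (8, 8)
```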
  • The image to be processed shown in Figure 8(a) is re-illuminated and rendered to obtain the re-illuminated image shown in Figure 8(b); in the re-illuminated image, the color temperature has changed, and the black shadow area on the left side of the image to be processed in Figure 8(a) has also been eliminated.
  • The image to be processed shown in Figure 9(a) is re-illuminated and rendered to obtain the re-illuminated image shown in Figure 9(b); in the re-illuminated image, the shadow has changed: a new shadow area is generated on the right side of the tree stump, and the tone of the overall image has become cool.
  • discrete wavelet transform is used to reduce image resolution while increasing the number of image channels.
  • the wavelet transform directly processes the global information of the whole image, so the receptive field area is larger.
  • inverse discrete wavelet transform is used to improve the image resolution while reducing the number of image channels.
  • A local convolution-normalization-nonlinearity network is added to preprocess the feature images, further processing the obtained feature maps.
  • a residual network and a cross-layer connection are added between downsampling and upsampling to improve the rendering effect of the generated re-illuminated image.
  • an embodiment of the present disclosure also provides a device for generating a re-illuminated image.
  • The device for generating a re-illuminated image provided in this embodiment corresponds to the method for generating a re-illuminated image provided in the first embodiment, so the implementation of the method is also applicable to the device provided in this embodiment and will not be described in detail here.
  • Fig. 12 is a schematic structural diagram of an apparatus for generating a re-illuminated image according to an embodiment of the present disclosure.
  • The apparatus 1200 for generating a re-illuminated image includes: an acquisition module 1210 and a first output module 1220, wherein:
  • An acquisition module 1210 configured to acquire an image to be processed
  • the first output module 1220 is configured to input the image to be processed into the re-illuminated image generation system, perform re-illumination rendering with the N wavelet transform models in the re-illuminated image generation system, and output the target re-illuminated image corresponding to the image to be processed, where N is an integer greater than or equal to 1.
  • Fig. 13 is a schematic structural diagram of an apparatus for generating a re-illuminated image according to another embodiment of the present disclosure.
  • the apparatus 1300 for generating a re-illuminated image includes: an acquisition module 1310 and a first output module 1320 .
  • When N is an integer greater than 1, the first output module 1320 includes:
  • the first output sub-module 13201 is configured to input the image to be processed into the first wavelet transform model for re-illumination rendering for the first wavelet transform model, and output an intermediate re-illumination image;
  • the second output sub-module 13202, used, starting from the second wavelet transform model, to input the intermediate re-illuminated image output by the upper-level wavelet transform model into the next-level wavelet transform model for re-illumination rendering, and to output the intermediate re-illuminated image corresponding to the next-level wavelet transform model;
  • the first determination sub-module 13203 is used to stop transferring the intermediate re-illuminated image to the next-level wavelet transform model whenever it is determined that the corresponding intermediate re-illuminated image satisfies the optimization stop condition.
  • the second determination sub-module 13204, used, when it is determined that the corresponding intermediate re-illuminated image does not satisfy the optimization stop condition, to continue passing the intermediate re-illuminated image to the next-level wavelet transform model, which continues to perform re-illumination rendering on it, until the intermediate re-illuminated image output by a wavelet transform model at some level satisfies the optimization stop condition; the intermediate re-illuminated image that satisfies the optimization stop condition is then taken as the target re-illuminated image.
  • the generating device 1300 of the re-illuminated image also includes:
  • the second output module 1330, configured to input an image into the wavelet transform network of the wavelet transform model, where the wavelet transform network performs down-sampling processing on the image and outputs a feature image corresponding to the image, and where the image includes the image to be processed and the intermediate re-illuminated image;
  • the third output module 1340 is configured to input the feature image into the residual network of the wavelet transform model, reconstruct the feature image by the residual network, and output the reconstructed feature image;
  • the fourth output module 1350, configured to input the reconstructed feature image into the inverse wavelet transform network of the wavelet transform model, where the inverse wavelet transform network performs up-sampling processing on the reconstructed feature image and outputs the re-illuminated image;
  • a sampling module 1360 configured to down-sample and up-sample the image according to a preset frequency and a preset multiple;
  • the preprocessing module 1370, configured to input the up-sampled feature image obtained after the up-sampling processing into the second convolutional network of the wavelet transform model, where the second convolutional network preprocesses the up-sampled feature image.
  • the third output module 1340 includes:
  • the third output sub-module 13401 is configured to input the feature image acquired by downsampling into the first convolutional network of the wavelet transform model, and perform preprocessing on the feature image by the first convolutional network , and input the preprocessed feature image output by the first convolutional network into the residual network.
  • the obtaining module 1210 has the same function and structure as the obtaining module 1310 .
  • The device for generating a re-illuminated image neither relies on manual design nor on a convolutional neural network model obtained through neural network learning and training. Instead, it uses a re-illuminated image generation system composed of at least one wavelet transform model to render the image to be processed, so that the resulting re-illuminated image retains the scene content structure at low frequencies and the detailed shadow information at high frequencies, yielding a re-illuminated image with a more accurate and reliable rendering effect.
  • the present disclosure also provides an electronic device, a readable storage medium, and a computer program product.
  • FIG. 14 shows a schematic block diagram of an example electronic device 1400 that may be used to implement embodiments of the present disclosure.
  • Electronic device is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other suitable computers.
  • Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smart phones, wearable devices, and other similar computing devices.
  • the components shown herein, their connections and relationships, and their functions, are by way of example only, and are not intended to limit implementations of the disclosure described and/or claimed herein.
  • The device 1400 includes a computing unit 1401, which can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 1402 or loaded from a storage unit 1408 into a random-access memory (RAM) 1403. The RAM 1403 can also store various programs and data necessary for the operation of the device 1400.
  • the computing unit 1401, ROM 1402, and RAM 1403 are connected to each other through a bus 1404.
  • An input/output (I/O) interface 1405 is also connected to the bus 1404 .
  • Connected to the I/O interface 1405 are: an input unit 1406, such as a keyboard or a mouse; an output unit 1407, such as various types of displays or speakers; a storage unit 1408, such as a magnetic disk or an optical disk; and a communication unit 1409, such as a network card, a modem, or a wireless communication transceiver.
  • the communication unit 1409 allows the device 1400 to exchange information/data with other devices through a computer network such as the Internet and/or various telecommunication networks.
  • The computing unit 1401 may be any of various general-purpose and/or special-purpose processing components with processing and computing capabilities. Some examples of the computing unit 1401 include, but are not limited to, central processing units (CPUs), graphics processing units (GPUs), various dedicated artificial intelligence (AI) computing chips, various computing units that run machine learning model algorithms, digital signal processors (DSPs), and any suitable processors, controllers, microcontrollers, and the like.
  • the computing unit 1401 executes various methods and processes described above, such as a method for generating a re-illuminated image. For example, in some embodiments, the method of generating a relit image may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as storage unit 1408 .
  • part or all of the computer program may be loaded and/or installed on the device 1400 via the ROM 1402 and/or the communication unit 1409 .
  • the computer program When the computer program is loaded into the RAM 1403 and executed by the computing unit 1401, one or more steps of the method for generating a re-illuminated image described above can be performed.
  • the computing unit 1401 may be configured in any other appropriate way (for example, by means of firmware) to execute the method for generating a re-illuminated image.
  • Various implementations of the systems and techniques described above herein can be implemented in digital electronic circuit systems, integrated circuit systems, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chips Implemented in a system of systems (SOC), load programmable logic device (CPLD), computer hardware, firmware, software, and/or combinations thereof.
  • These various implementations may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from a storage system, at least one input device, and at least one output device, and transmit data and instructions to the storage system, the at least one input device, and the at least one output device.
  • Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general-purpose computer, a special-purpose computer, or another programmable apparatus, so that when the program code is executed by the processor or controller, the functions/operations specified in the flowcharts and/or block diagrams are implemented.
  • The program code may execute entirely on a machine, partly on a machine, as a stand-alone software package partly on a machine and partly on a remote machine, or entirely on a remote machine or server.
  • A machine-readable medium may be a tangible medium that may contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
  • A machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • A machine-readable medium may include, but is not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any suitable combination of the foregoing.
  • More specific examples of machine-readable storage media include electrical connections based on one or more wires, portable computer disks, hard disks, random-access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fibers, compact disc read-only memory (CD-ROM), optical storage devices, magnetic storage devices, or any suitable combination of the foregoing.
  • To provide interaction with a user, the systems and techniques described herein can be implemented on a computer having: a display device (e.g., a CRT (cathode-ray tube) or LCD (liquid-crystal display) monitor) for displaying information to the user; and a keyboard and a pointing device (e.g., a mouse or a trackball) through which the user can provide input to the computer.
  • Other kinds of devices can also be used to provide interaction with the user; for example, the feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback), and input from the user can be received in any form (including acoustic input, speech input, or tactile input).
  • The systems and techniques described herein can be implemented in a computing system that includes a back-end component (e.g., as a data server), or a computing system that includes a middleware component (e.g., an application server), or a computing system that includes a front-end component (e.g., a user computer having a graphical user interface or web browser through which a user can interact with implementations of the systems and techniques described herein), or a computing system that includes any combination of such back-end, middleware, or front-end components.
  • The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include local area networks (LANs), wide area networks (WANs), the Internet, and blockchain networks.
  • A computer system may include clients and servers.
  • Clients and servers are generally remote from each other and typically interact through a communication network.
  • The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
  • The server can be a cloud server, also known as a cloud computing server or cloud host, which is a host product in the cloud computing service system that overcomes the defects of difficult management and weak business scalability found in traditional physical hosts and VPS ("Virtual Private Server") services.
  • The server can also be a server of a distributed system, or a server combined with a blockchain.
  • The present disclosure also provides a computer program product including a computer program which, when executed by a processor, implements the above-described method for generating a re-illuminated image.
  • It should be understood that steps may be reordered, added, or deleted using the various forms of flow shown above.
  • For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in a different order, as long as the desired result of the technical solution disclosed in the present disclosure can be achieved; no limitation is imposed herein.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Graphics (AREA)
  • Image Processing (AREA)

Abstract

A method and apparatus for generating a re-illuminated image, and an electronic device. The method includes: acquiring an image to be processed; and inputting the image to be processed into a re-illuminated image generation system, performing relighting rendering by N wavelet transform models in the re-illuminated image generation system, and outputting a target re-illuminated image corresponding to the image to be processed, where N is an integer greater than or equal to 1.

Description

Method and Apparatus for Generating Re-illuminated Image, and Electronic Device
Cross-Reference to Related Applications
The present disclosure claims priority to Chinese Patent Application No. 202110729940.4, entitled "Method and Apparatus for Generating Re-illuminated Image, and Electronic Device", filed by Beijing Baidu Netcom Science and Technology Co., Ltd. on June 29, 2021.
Technical Field
Embodiments of the present disclosure relate generally to the technical field of image processing, and more specifically to the field of artificial intelligence, in particular computer vision and deep learning technologies, and are applicable to image processing scenarios.
Background
With the rapid development of mobile terminal technology and image processing technology, various applications (APPs) with special-effect functions based on relighting technology have emerged, and users' demands for functions such as adding filters to images and changing facial shadow effects are growing.
In the related art, relighting rendering is usually performed in either of two ways: manual rendering, or training a neural network to obtain a model for performing relighting rendering on the image to be processed.
However, manual rendering involves extremely high labor costs, low efficiency of re-illuminated image generation, and poor reliability; networks obtained through neural network training tend to produce artifacts and fail to learn shadow changes.
Therefore, how to improve the effectiveness and reliability of the re-illuminated image generation process has become one of the important research directions.
Summary
The present disclosure provides a method and apparatus for generating a re-illuminated image, and an electronic device.
According to a first aspect, a method for generating a re-illuminated image is provided, including:
acquiring an image to be processed; and
inputting the image to be processed into a re-illuminated image generation system, performing relighting rendering by N wavelet transform models in the re-illuminated image generation system, and outputting a target re-illuminated image corresponding to the image to be processed, where N is an integer greater than or equal to 1.
According to a second aspect, an apparatus for generating a re-illuminated image is provided, including:
an acquisition module configured to acquire an image to be processed; and
a first output module configured to input the image to be processed into a re-illuminated image generation system, perform relighting rendering by N wavelet transform models in the re-illuminated image generation system, and output a target re-illuminated image corresponding to the image to be processed, where N is an integer greater than or equal to 1.
According to a third aspect, an electronic device is provided, including: at least one processor; and a memory communicatively connected to the at least one processor; where the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the method for generating a re-illuminated image according to the first aspect of the present disclosure.
According to a fourth aspect, a non-transitory computer-readable storage medium storing computer instructions is provided, where the computer instructions are used to cause a computer to perform the method for generating a re-illuminated image according to the first aspect of the present disclosure.
According to a fifth aspect, a computer program product is provided, including a computer program which, when executed by a processor, implements the method for generating a re-illuminated image according to the first aspect of the present disclosure.
It should be understood that the content described in this section is not intended to identify key or important features of the embodiments of the present disclosure, nor is it intended to limit the scope of the present disclosure. Other features of the present disclosure will become easy to understand from the following description.
Brief Description of the Drawings
The accompanying drawings are used for a better understanding of the solution and do not constitute a limitation of the present disclosure, in which:
FIG. 1 is a schematic diagram according to a first embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a re-illuminated image generation process;
FIG. 3 is a schematic diagram of an image to be processed;
FIG. 4 is a schematic diagram of the components in different directions in a re-illuminated image generation process;
FIG. 5 is a schematic diagram according to a second embodiment of the present disclosure;
FIG. 6 is a schematic diagram according to a third embodiment of the present disclosure;
FIG. 7 is a schematic diagram according to a fourth embodiment of the present disclosure;
FIG. 8 is a schematic diagram of another re-illuminated image generation process;
FIG. 9 is a schematic diagram of another re-illuminated image generation process;
FIG. 10 is a schematic diagram of another re-illuminated image generation process;
FIG. 11 is a schematic diagram of another re-illuminated image generation process;
FIG. 12 is a block diagram of an apparatus for generating a re-illuminated image used to implement the method for generating a re-illuminated image of an embodiment of the present disclosure;
FIG. 13 is a block diagram of an apparatus for generating a re-illuminated image used to implement the method for generating a re-illuminated image of an embodiment of the present disclosure;
FIG. 14 is a block diagram of an electronic device used to implement the generation of a re-illuminated image of an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the present disclosure are included to facilitate understanding; they should be considered merely exemplary. Therefore, those of ordinary skill in the art should recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the present disclosure. Likewise, for clarity and conciseness, descriptions of well-known functions and structures are omitted from the following description.
The technical fields involved in the solution of the present disclosure are briefly described below:
Image processing is the technology of analyzing images with a computer to achieve a desired result, also known as picture processing. Image processing generally refers to digital image processing. A digital image is a large two-dimensional array obtained by capture with devices such as industrial cameras, video cameras, and scanners; the elements of the array are called pixels, and their values are called gray values. Image processing technology generally includes three parts: image compression; enhancement and restoration; and matching, description, and recognition.
Artificial intelligence (AI) is the discipline that studies making computers simulate certain human thought processes and intelligent behaviors (such as learning, reasoning, thinking, and planning), covering both hardware-level and software-level technologies. Artificial intelligence technologies generally include several major areas such as computer vision technology, speech recognition technology, natural language processing technology, machine learning/deep learning, big data processing technology, and knowledge graph technology.
Computer vision is a science that studies how to make machines "see"; more specifically, it refers to machine vision in which cameras and computers replace human eyes to recognize, track, and measure targets, with further graphics processing so that the computer produces images more suitable for human observation or for transmission to instruments for inspection. As a scientific discipline, computer vision studies related theories and technologies, attempting to build artificial intelligence systems capable of obtaining "information" from images or multi-dimensional data. The information referred to here is, in Shannon's sense, information that can be used to help make a "decision". Since perception can be regarded as extracting information from sensory signals, computer vision can also be regarded as the science of making artificial systems "perceive" from images or multi-dimensional data.
Deep learning (DL) is a new research direction in the field of machine learning (ML), introduced to bring machine learning closer to its original goal, artificial intelligence. Deep learning learns the inherent laws and representation levels of sample data, and the information obtained in the learning process is of great help in interpreting data such as text, images, and sound. Its ultimate goal is to enable machines to have analytical learning abilities like humans, recognizing data such as text, images, and sound. Deep learning is a complex machine learning algorithm whose results in speech and image recognition far surpass the prior related art.
A method and apparatus for generating a re-illuminated image and an electronic device according to embodiments of the present disclosure are described below with reference to the accompanying drawings.
FIG. 1 is a schematic diagram according to a first embodiment of the present disclosure. It should be noted that the execution subject of the method for generating a re-illuminated image of this embodiment is an apparatus for generating a re-illuminated image, which may specifically be a hardware device, or software in a hardware device, etc. The hardware device is, for example, a terminal device or a server. As shown in FIG. 1, the method for generating a re-illuminated image proposed in this embodiment includes the following steps:
S101: Acquire an image to be processed.
The image to be processed may be any image input by a user; alternatively, any video, such as a teaching video or a film or television work, may be decoded and frames extracted, and a frame image obtained therefrom may be used as the image to be processed.
It should be noted that, when attempting to acquire the image to be processed, an image pre-stored in a local or remote storage area may be acquired as the image to be processed, or an image may be captured directly as the image to be processed.
Optionally, a stored image or video may be acquired from at least one of a local or remote image library or video library to obtain the image to be processed; optionally, an image may also be captured directly as the image to be processed. The embodiments of the present disclosure do not limit the manner of acquiring the image to be processed, which may be selected according to the actual situation.
S102: Input the image to be processed into a re-illuminated image generation system, perform relighting rendering by N wavelet transform models in the re-illuminated image generation system, and output a target re-illuminated image corresponding to the image to be processed, where N is an integer greater than or equal to 1.
Relighting refers to changing the illumination direction and color temperature of a given image to generate another image with a different illumination direction and color temperature.
For example, as shown in FIG. 2, FIG. 2(a) is a scene image at a color temperature of 2500 K with the light source in the east, and FIG. 2(b) is a scene image at a color temperature of 6500 K with the light source in the west. It can be seen that when the color temperature is low, the image appears yellowish and warm-toned; when the color temperature is high, the image appears whitish and cool-toned. Meanwhile, different light source positions produce different shadows. In summary, the purpose of relighting rendering is to render FIG. 2(a) to generate FIG. 2(b), keeping the scene content consistent and changing only the color temperature and shadow direction.
The re-illuminated image generation system includes N wavelet transform models, where N is an integer greater than or equal to 1. For example, the re-illuminated image generation system may include one wavelet transform model; as another example, it may include three structurally identical wavelet transform models, in which case the three wavelet transform models are connected in a cascade.
It should be noted that, in the related art, when performing relighting processing on an image to be processed, rendering is usually performed manually, or a model for performing relighting rendering on the image to be processed is obtained through neural network training, for example a convolutional neural network (CNN) model.
However, manual rendering involves extremely high labor costs, low efficiency of re-illuminated image generation, and poor reliability. A network obtained through neural network training generally operates only in the spatial domain, that is, directly on the RGB (Red Green Blue) image; in this case, due to defects in the network design, the resulting re-illuminated images tend to exhibit artifacts and fail to learn shadow changes.
Therefore, the method for generating a re-illuminated image proposed in the present disclosure can train a wavelet transform model to perform relighting rendering on the image to be processed, so as to generate a higher-quality re-illuminated image by operating on the frequency-domain image.
It should be noted that the present disclosure does not limit the kind of wavelet transform, which may be selected according to the actual situation. Optionally, a discrete wavelet transform model may be selected to perform relighting rendering on the image to be processed.
The method for generating a re-illuminated image according to the embodiments of the present disclosure relies neither on manual design nor on a convolutional neural network model obtained through neural network training; instead, it uses a re-illuminated image generation system composed of at least one wavelet transform model to render the image to be processed, so that the resulting re-illuminated image preserves the scene content structure at low frequencies and the detailed shadow information at high frequencies, thereby yielding a re-illuminated image with a more accurate and reliable rendering effect.
The processing procedure of the wavelet transform model involved in the solution of the present disclosure is briefly described below:
The frequency of an image is an index characterizing how sharply the gray levels in the image change; it is the gradient of the gray level over planar space.
For example, given an image of a large desert, the result is a region where the gray level changes slowly, corresponding to a low frequency value; whereas an edge region where surface properties change drastically, such as range upon range of high mountains, is a region of the image where the gray level changes sharply, corresponding to a high frequency value.
Thus, in terms of physical effect, the wavelet transform can convert an image from the spatial domain to the frequency domain, that is, it can transform the gray-level distribution function of the image into its frequency distribution function, and the inverse transform can convert the frequency distribution function back into the gray-level distribution function.
Taking the processing procedure of a two-dimensional discrete wavelet transform model on the image to be processed as an example, for the image to be processed shown in FIG. 3, optionally, a one-dimensional discrete wavelet transform (DWT) may be performed on each row of pixels of the image to be processed, to obtain the low-frequency component L and the high-frequency component H of the original image (the image to be processed) in the horizontal direction. Further, a one-dimensional DWT may then be performed on each column of pixels of the transformed data, thereby obtaining the four results shown in FIG. 4.
From the low-frequency component in the horizontal direction and the low-frequency component in the vertical direction, i.e., LL, the image shown in FIG. 4(a) can be obtained; from the low-frequency component in the horizontal direction and the high-frequency component in the vertical direction, i.e., LH, the image shown in FIG. 4(b) can be obtained; from the high-frequency component in the horizontal direction and the low-frequency component in the vertical direction, i.e., HL, the image shown in FIG. 4(c) can be obtained; and from the high-frequency component in the horizontal direction and the high-frequency component in the vertical direction, i.e., HH, the image shown in FIG. 4(d) can be obtained.
In this case, for the image to be processed shown in FIG. 3, the image shown in FIG. 4(a), which reflects the placement of objects in the image to be processed, can be obtained, i.e., an approximation of the image to be processed. The image shown in FIG. 4(a) corresponds to the low-frequency part of the image to be processed, while the three images shown in FIGS. 4(b)-(d) correspond to its contours: detail images in three directions, namely horizontal, vertical, and diagonal, corresponding to the high-frequency part of the image to be processed.
In the embodiments of the present disclosure, suppose the input image to be processed has a width and height of 1024 and 3 channels; in this case, the size of the image to be processed can be expressed as 1024*1024*3. Optionally, after DWT processing by the discrete wavelet transform network in the discrete wavelet transform model, the size of the image becomes 512*512*3.
Further, by concatenating the four images in FIGS. 4(a)-(d) along the channel dimension, an image of size 512*512*12 is obtained. In this case, after the DWT, the width and height of the image are each halved, while the number of channels is quadrupled; this process is also called a space-to-depth (Spatial2Depth) conversion.
Thus, replacing the max pooling or average pooling operations commonly used in CNNs with the above wavelet transform operation means the conversion is no longer performed only locally; instead, the DWT converts the entire image to be processed, offering a larger receptive field and a wider processing region, so the processing results are more accurate.
Further, after processing by the wavelet transform network in the wavelet transform model, optionally, IDWT processing may be performed by the inverse discrete wavelet transform network in the discrete wavelet transform model. The process of the inverse discrete wavelet transform (IDWT) is similar to that of the DWT and is not repeated here.
It should be noted that, in the present disclosure, to further improve the rendering effect and reliability of the re-illuminated image, a re-illuminated image generation system in which at least two wavelet transform models are cascaded may be used.
As a possible implementation, as shown in FIG. 5, the method for generating a re-illuminated image proposed in the present disclosure, on the basis of the above embodiment, specifically includes the following steps:
S501: Acquire an image to be processed.
This step S501 is the same as step S101 in the previous embodiment and is not repeated here.
Step S102 in the previous embodiment may specifically include the following steps S502 to S504.
S502: For the first wavelet transform model, input the image to be processed into the first wavelet transform model for relighting rendering, and output an intermediate re-illuminated image.
In the embodiments of the present disclosure, a multi-stage rendering strategy may be adopted: for the first wavelet transform model, the image to be processed is input into the first wavelet transform model for relighting rendering, an intermediate re-illuminated image is output, and the mapping relationship from the image to be processed to the output intermediate re-illuminated image is learned.
It should be noted that, in the model training stage, after the image to be processed is input into the first wavelet transform model for relighting rendering and the intermediate re-illuminated image is output, the first wavelet transform model can be fixed, the training set (a preset number of sample images to be processed) can be processed by this model, and the intermediate re-illuminated images of the training set under the first wavelet transform model can be output.
S503: From the second wavelet transform model onward, input the intermediate re-illuminated image output by the previous-stage wavelet transform model into the next-stage wavelet transform model for relighting rendering, and output the intermediate re-illuminated image corresponding to the next-stage wavelet transform model.
In the embodiments of the present disclosure, from the second wavelet transform model onward, the intermediate re-illuminated image output by the previous-stage wavelet transform model may be input into the next-stage wavelet transform model for relighting rendering, and the intermediate re-illuminated image corresponding to the next-stage model is output. In this case, since the previous-stage model has already learned most of the mapping relationship, the intermediate re-illuminated image of the next-stage model is closer to the ground truth than that of the previous-stage model. Meanwhile, in the model training stage, the training difficulty of the next-stage model is also greatly reduced.
S504: Whenever one stage of wavelet transform model outputs a corresponding intermediate re-illuminated image, if it is determined that the corresponding intermediate re-illuminated image satisfies an optimization stop condition, stop passing the corresponding intermediate re-illuminated image to the next-stage wavelet transform model, and use the corresponding intermediate re-illuminated image as the target re-illuminated image.
The optimization stop condition may be set according to the actual situation and is not limited in the present disclosure.
Optionally, the optimization stop condition may be set as the number of models that have processed the image; optionally, it may be set as the rendering effect of the intermediate re-illuminated image.
For example, if the optimization stop condition is that the number of models that have processed the image is 2, then when the intermediate re-illuminated image output by one stage of wavelet transform model is the image obtained after processing by the second wavelet transform model, the corresponding intermediate re-illuminated image satisfies the optimization stop condition; passing of the corresponding intermediate re-illuminated image to the next-stage wavelet transform model is stopped, and the corresponding intermediate re-illuminated image is used as the target re-illuminated image.
S505: If it is determined that the corresponding intermediate re-illuminated image does not satisfy the optimization stop condition, continue passing the intermediate re-illuminated image to the next-stage wavelet transform model, which continues to perform relighting rendering on the corresponding intermediate re-illuminated image, until the intermediate re-illuminated image output by one stage of wavelet transform model satisfies the optimization stop condition, whereupon the intermediate re-illuminated image satisfying the optimization stop condition is used as the target re-illuminated image.
For example, if the optimization stop condition is that the number of models that have processed the image is 3, then when the intermediate re-illuminated image output by one stage of wavelet transform model is the image obtained after processing by the second wavelet transform model, the corresponding intermediate re-illuminated image does not satisfy the optimization stop condition; the intermediate re-illuminated image continues to be passed to the third-stage wavelet transform model, which continues to perform relighting rendering on it, and the intermediate re-illuminated image corresponding to the third-stage wavelet transform model is used as the target re-illuminated image.
The method for generating a re-illuminated image according to the embodiments of the present disclosure does not rely on a single wavelet transform model to perform relighting rendering on the image to be processed, avoiding the problem that a single model cannot learn the complete mapping relationship; instead, multiple models are cascaded to form the re-illuminated image generation system, allowing models at different stages to learn different mapping dimensions, further improving the rendering effect and reliability of the output re-illuminated image.
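The multi-stage strategy of steps S502 to S505 can be sketched as a simple loop (a minimal sketch; the stage-count stop condition follows the examples above, and the toy stage functions are assumptions standing in for trained wavelet transform models):

```python
from typing import Callable, List

def cascade_relight(image, models: List[Callable], max_stages: int):
    """Stage-wise relighting: each model refines the previous stage's output;
    stop once the optimization-stop condition is met (here, a stage-count
    criterion, as in the worked example in the text)."""
    out = image
    for stage, model in enumerate(models, start=1):
        out = model(out)          # intermediate re-illuminated image
        if stage >= max_stages:   # stop condition satisfied
            break                 # do not pass to the next stage
    return out                    # target re-illuminated image

# Toy stand-ins for trained wavelet transform models (numbers instead of images).
stage_a = lambda x: x + 1
stage_b = lambda x: x * 2
stage_c = lambda x: x - 3

print(cascade_relight(10, [stage_a, stage_b, stage_c], max_stages=2))  # -> 22
```

With `max_stages=2`, the loop stops after the second model, matching the example where the stop condition is "processed by 2 models"; with `max_stages=3`, the third stage also runs.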
It should be noted that, in the present disclosure, a residual network (Res Block) and skip connections are added between the downsampling and upsampling processes to improve the rendering effect of the generated re-illuminated image.
As a possible implementation, as shown in FIG. 6, in the method for generating a re-illuminated image proposed in the present disclosure, on the basis of the above embodiments, the process by which a wavelet transform model at any stage performs relighting rendering on an image specifically includes the following steps:
S601: Input the image into the wavelet transform network of the wavelet transform model, perform downsampling on the image by the wavelet transform network, and output a feature image corresponding to the image, where the image includes the image to be processed and the intermediate re-illuminated image.
S602: Input the feature image into the residual network of the wavelet transform model, reconstruct the feature image by the residual network, and output a reconstructed feature image.
S603: Input the reconstructed feature image into the inverse wavelet transform network of the wavelet transform model, perform upsampling on the reconstructed feature image by the inverse wavelet transform network, and output the re-illuminated image.
In the embodiments of the present disclosure, the image may be downsampled to obtain its corresponding feature image, and the reconstructed feature image obtained by the residual network may then be upsampled to obtain the re-illuminated image, where the frequency and factor of downsampling are the same as those of upsampling. The frequency and factor of upsampling and downsampling may be set according to the actual situation.
For example, the image may be downsampled 4 times stage by stage, by a factor of 2 each time, for a total factor of 16, to obtain the feature image corresponding to the image. Further, the reconstructed feature image may be upsampled 4 times stage by stage, by a factor of 2 each time, for a total factor of 16, to obtain the re-illuminated image. It should be noted that during sampling, the obtained feature images are kept consistent in size with the image.
According to the method for generating a re-illuminated image of the embodiments of the present disclosure, by adding a residual network and skip connections to the wavelet transform model, the upsampling input combines the output of the corresponding downsampling stage with the output of the previous upsampling layer, which supervises the relighting rendering process, prevents learning errors, and further improves the rendering effect and reliability of the output re-illuminated image.
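Steps S601 to S603 can be sketched as a one-level forward pass in NumPy, again assuming a Haar basis (the `correction_fn` stands in for the learned residual blocks; a real model would stack several DWT/IDWT levels with a skip connection at each):

```python
import numpy as np

def dwt_down(x):
    """One Haar DWT level: (H, W, C) -> (H/2, W/2, 4C) feature image (step S601)."""
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    return np.concatenate([(a + b + c + d) / 4, (a + b - c - d) / 4,
                           (a - b + c - d) / 4, (a - b - c + d) / 4], axis=-1)

def idwt_up(x):
    """Inverse Haar level: (H/2, W/2, 4C) -> (H, W, C) (step S603)."""
    ll, lh, hl, hh = np.split(x, 4, axis=-1)
    h2, w2, c = ll.shape
    out = np.empty((h2 * 2, w2 * 2, c))
    out[0::2, 0::2] = ll + lh + hl + hh
    out[0::2, 1::2] = ll + lh - hl - hh
    out[1::2, 0::2] = ll - lh + hl - hh
    out[1::2, 1::2] = ll - lh - hl + hh
    return out

def relight_forward(img, correction_fn):
    feat = dwt_down(img)         # S601: DWT downsampling
    corr = correction_fn(feat)   # S602: residual blocks predict a correction
    return idwt_up(feat + corr)  # skip connection adds encoder features, then S603: IDWT

x = np.random.rand(64, 64, 3)
zero_correction = lambda f: np.zeros_like(f)
print(np.allclose(relight_forward(x, zero_correction), x))  # -> True
```

Because the DWT/IDWT pair is exactly invertible, a zero correction reproduces the input, so unlike pooling-based down/upsampling no information is lost; the network only has to learn the change in illumination rather than the whole image.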
It should be noted that, in the present disclosure, a local convolution-normalization-nonlinearity network (Conv-IN-ReLU) is added to the re-illuminated image generation system to further process the obtained feature images.
Optionally, preprocessing may be performed only on the images obtained by downsampling; optionally, only on the images obtained by upsampling; optionally, on the images obtained by downsampling and upsampling respectively.
As a possible implementation, as shown in FIG. 7, on the basis of the above embodiments, taking preprocessing the images obtained by downsampling and upsampling respectively as an example, the method specifically includes the following steps:
S701: Input the feature image obtained by downsampling into a first convolutional network of the wavelet transform model, preprocess the feature image by the first convolutional network, and input the preprocessed feature image output by the first convolutional network into the residual network.
S702: Input the upsampled feature image obtained after upsampling into a second convolutional network of the wavelet transform model, and preprocess the upsampled feature image by the second convolutional network.
The process of preprocessing the feature image mainly includes operations such as convolution, normalization, and activation; the preprocessed feature image integrates the local information of the original feature image and adds non-linearity.
According to the method for generating a re-illuminated image of the embodiments of the present disclosure, preprocessing the images deepens the network, enhances the learning and fitting abilities of the wavelet transform model, and further improves the rendering effect and reliability of the output re-illuminated image.
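The Conv-IN-ReLU preprocessing block (convolution, then instance normalization, then ReLU) can be sketched in NumPy; a 1x1 convolution is used here to keep the example short (an assumption for illustration — the patent does not specify the kernel size):

```python
import numpy as np

def conv_in_relu(x, w, b, eps=1e-5):
    """Local convolution -> instance normalization -> ReLU.

    x: (H, W, Cin) feature image; w: (Cin, Cout) weights of a 1x1 convolution
    (per-pixel channel mixing); b: (Cout,) bias. A real block would typically
    use a spatial kernel such as 3x3."""
    y = x @ w + b                              # 1x1 convolution
    mean = y.mean(axis=(0, 1), keepdims=True)  # instance norm: per-channel stats
    std = y.std(axis=(0, 1), keepdims=True)
    y = (y - mean) / (std + eps)               # normalize each channel
    return np.maximum(y, 0.0)                  # ReLU adds the non-linearity

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 8, 12))   # e.g. a feature image after one DWT level
w = rng.normal(size=(12, 16))
b = np.zeros(16)
out = conv_in_relu(x, w, b)
print(out.shape)  # -> (8, 8, 16)
```

The ordering matters: normalizing after the convolution keeps the per-channel statistics stable, and the ReLU at the end is what adds the non-linearity mentioned in the text.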
It should be noted that the method for generating a re-illuminated image proposed in the present disclosure can be applied in a variety of image processing scenarios.
For the application scenario of adding filters to ordinary scene pictures, as shown in FIGS. 8 and 9, different filter effects can be created by changing the color temperature, so that a user only needs to take one picture to obtain multiple results with different tones, which is convenient for subsequent editing, improves the user experience, and attracts user interest.
As shown in FIG. 8, relighting rendering is performed on the image to be processed shown in FIG. 8(a) to obtain the re-illuminated image shown in FIG. 8(b); the color temperature of the re-illuminated image in FIG. 8(b) has changed, and the black shadow region on the left side of the image to be processed in FIG. 8(a) has been eliminated.
As shown in FIG. 9, relighting rendering is performed on the image to be processed shown in FIG. 9(a) to obtain the re-illuminated image shown in FIG. 9(b); the shadows of the re-illuminated image in FIG. 9(b) have changed, a new shadow region has been generated on the right side of the tree stump in FIG. 9(b), and the overall tone of the image has become cool.
For the application scenario of adding special effects to portrait pictures, as shown in FIG. 10, various effects can be generated by changing the degree and position of the shadows, adding new ways to play and attracting users to the product.
In summary, as shown in FIG. 11, the method for generating a re-illuminated image provided by the present disclosure uses the discrete wavelet transform in the downsampling stage to reduce the image resolution while increasing the number of image channels. Unlike the local convolution operations used in the related art, the wavelet transform directly processes the global information of the whole image, so the receptive field is larger. Likewise, in the upsampling stage, the inverse discrete wavelet transform is used to increase the image resolution while reducing the number of image channels.
Further, after downsampling and upsampling, a local convolution-normalization-nonlinearity network is added to preprocess the feature images, further processing the obtained feature maps. In addition, a residual network and skip connections are added between downsampling and upsampling, improving the rendering effect of the generated re-illuminated image.
It should be noted that the acquisition, storage, and application of users' personal information involved in the technical solution of the present disclosure comply with relevant laws and regulations and do not violate public order and morals. The intention of the present disclosure is that personal information data should be managed and processed in a way that minimizes the risk of unintentional or unauthorized access and use. Risk is minimized by limiting data collection and deleting data when it is no longer needed. It should be noted that all person-related information in the present disclosure is collected with the knowledge and consent of the persons concerned.
Corresponding to the methods for generating a re-illuminated image provided in the above embodiments, an embodiment of the present disclosure further provides an apparatus for generating a re-illuminated image. Since the apparatus provided in the embodiment of the present disclosure corresponds to the methods provided in the above embodiments, the implementations of the method for generating a re-illuminated image also apply to the apparatus provided in this embodiment and are not described in detail here.
FIG. 12 is a schematic structural diagram of an apparatus for generating a re-illuminated image according to an embodiment of the present disclosure.
As shown in FIG. 12, the apparatus 1200 for generating a re-illuminated image includes an acquisition module 1210 and a first output module 1220, where:
the acquisition module 1210 is configured to acquire an image to be processed; and
the first output module 1220 is configured to input the image to be processed into a re-illuminated image generation system, perform relighting rendering by N wavelet transform models in the re-illuminated image generation system, and output a target re-illuminated image corresponding to the image to be processed, where N is an integer greater than or equal to 1.
FIG. 13 is a schematic structural diagram of an apparatus for generating a re-illuminated image according to another embodiment of the present disclosure.
As shown in FIG. 13, the apparatus 1300 for generating a re-illuminated image includes an acquisition module 1310 and a first output module 1320.
N is an integer greater than 1, and the first output module 1320 includes:
a first output submodule 13201 configured to, for the first wavelet transform model, input the image to be processed into the first wavelet transform model for relighting rendering, and output an intermediate re-illuminated image;
a second output submodule 13202 configured to, from the second wavelet transform model onward, input the intermediate re-illuminated image output by the previous-stage wavelet transform model into the next-stage wavelet transform model for relighting rendering, and output the intermediate re-illuminated image corresponding to the next-stage wavelet transform model;
a first determination submodule 13203 configured to, whenever one stage of wavelet transform model outputs a corresponding intermediate re-illuminated image and it is determined that the corresponding intermediate re-illuminated image satisfies an optimization stop condition, stop passing the corresponding intermediate re-illuminated image to the next-stage wavelet transform model, and use the corresponding intermediate re-illuminated image as the target re-illuminated image; and
a second determination submodule 13204 configured to, upon determining that the corresponding intermediate re-illuminated image does not satisfy the optimization stop condition, continue passing the intermediate re-illuminated image to the next-stage wavelet transform model, which continues to perform relighting rendering on the corresponding intermediate re-illuminated image, until the intermediate re-illuminated image output by one stage of wavelet transform model satisfies the optimization stop condition, whereupon the intermediate re-illuminated image satisfying the optimization stop condition is used as the target re-illuminated image.
The apparatus 1300 for generating a re-illuminated image further includes:
a second output module 1330 configured to input the image into the wavelet transform network of the wavelet transform model, perform downsampling on the image by the wavelet transform network, and output a feature image corresponding to the image, where the image includes the image to be processed and the intermediate re-illuminated image;
a third output module 1340 configured to input the feature image into the residual network of the wavelet transform model, reconstruct the feature image by the residual network, and output a reconstructed feature image;
a fourth output module 1350 configured to input the reconstructed feature image into the inverse wavelet transform network of the wavelet transform model, perform upsampling on the reconstructed feature image by the inverse wavelet transform network, and output the re-illuminated image;
a sampling module 1360 configured to downsample and upsample the image according to a preset frequency and a preset factor; and
a preprocessing module 1370 configured to input the upsampled feature image obtained after upsampling into a second convolutional network of the wavelet transform model, and preprocess the upsampled feature image by the second convolutional network.
The third output module 1340 includes:
a third output submodule 13401 configured to input the feature image obtained by downsampling into a first convolutional network of the wavelet transform model, preprocess the feature image by the first convolutional network, and input the preprocessed feature image output by the first convolutional network into the residual network.
It should be noted that the acquisition module 1210 and the acquisition module 1310 have the same function and structure.
The apparatus for generating a re-illuminated image according to the embodiments of the present disclosure relies neither on manual design nor on a convolutional neural network model obtained through neural network training; it uses a re-illuminated image generation system composed of at least one wavelet transform model to render the image to be processed, so that the resulting re-illuminated image preserves the scene content structure at low frequencies and the detailed shadow information at high frequencies, thereby yielding a re-illuminated image with a more accurate and reliable rendering effect.
According to embodiments of the present disclosure, the present disclosure further provides an electronic device, a readable storage medium, and a computer program product.
FIG. 14 shows a schematic block diagram of an example electronic device 1400 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptop computers, desktop computers, workstations, personal digital assistants, servers, blade servers, mainframe computers, and other suitable computers. Electronic devices may also represent various forms of mobile apparatuses, such as personal digital assistants, cellular phones, smartphones, wearable devices, and other similar computing apparatuses. The components shown herein, their connections and relationships, and their functions are merely examples and are not intended to limit the implementations of the present disclosure described and/or claimed herein.
As shown in FIG. 14, the device 1400 includes a computing unit 1401, which can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 1402 or loaded from a storage unit 1408 into a random-access memory (RAM) 1403. Various programs and data necessary for the operation of the device 1400 can also be stored in the RAM 1403. The computing unit 1401, the ROM 1402, and the RAM 1403 are connected to each other through a bus 1404. An input/output (I/O) interface 1405 is also connected to the bus 1404.
Multiple components of the device 1400 are connected to the I/O interface 1405, including: an input unit 1406, such as a keyboard or a mouse; an output unit 1407, such as various types of displays and speakers; a storage unit 1408, such as a magnetic disk or an optical disk; and a communication unit 1409, such as a network card, a modem, or a wireless communication transceiver. The communication unit 1409 allows the device 1400 to exchange information/data with other devices through a computer network such as the Internet and/or various telecommunication networks.
The computing unit 1401 may be any of various general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of the computing unit 1401 include, but are not limited to, central processing units (CPUs), graphics processing units (GPUs), various dedicated artificial intelligence (AI) computing chips, various computing units that run machine learning model algorithms, digital signal processors (DSPs), and any suitable processors, controllers, microcontrollers, and the like. The computing unit 1401 executes the various methods and processes described above, such as the method for generating a re-illuminated image. For example, in some embodiments, the method for generating a re-illuminated image may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 1408. In some embodiments, part or all of the computer program may be loaded and/or installed on the device 1400 via the ROM 1402 and/or the communication unit 1409. When the computer program is loaded into the RAM 1403 and executed by the computing unit 1401, one or more steps of the method for generating a re-illuminated image described above can be performed. Alternatively, in other embodiments, the computing unit 1401 may be configured in any other appropriate way (for example, by means of firmware) to execute the method for generating a re-illuminated image.
Various implementations of the systems and techniques described above can be implemented in digital electronic circuit systems, integrated circuit systems, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various implementations may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from a storage system, at least one input device, and at least one output device, and transmit data and instructions to the storage system, the at least one input device, and the at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general-purpose computer, a special-purpose computer, or another programmable apparatus, so that when the program code is executed by the processor or controller, the functions/operations specified in the flowcharts and/or block diagrams are implemented. The program code may execute entirely on a machine, partly on a machine, as a stand-alone software package partly on a machine and partly on a remote machine, or entirely on a remote machine or server.
In the context of the present disclosure, a machine-readable medium may be a tangible medium that may contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. A machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any suitable combination of the foregoing. More specific examples of machine-readable storage media include electrical connections based on one or more wires, portable computer disks, hard disks, random-access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fibers, compact disc read-only memory (CD-ROM), optical storage devices, magnetic storage devices, or any suitable combination of the foregoing.
To provide interaction with a user, the systems and techniques described herein can be implemented on a computer having: a display device (e.g., a CRT (cathode-ray tube) or LCD (liquid-crystal display) monitor) for displaying information to the user; and a keyboard and a pointing device (e.g., a mouse or a trackball) through which the user can provide input to the computer. Other kinds of devices can also be used to provide interaction with the user; for example, the feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback), and input from the user can be received in any form (including acoustic input, speech input, or tactile input).
The systems and techniques described herein can be implemented in a computing system that includes a back-end component (e.g., as a data server), or a computing system that includes a middleware component (e.g., an application server), or a computing system that includes a front-end component (e.g., a user computer having a graphical user interface or web browser through which a user can interact with implementations of the systems and techniques described herein), or a computing system that includes any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include local area networks (LANs), wide area networks (WANs), the Internet, and blockchain networks.
A computer system may include clients and servers. Clients and servers are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also known as a cloud computing server or cloud host, which is a host product in the cloud computing service system that overcomes the defects of difficult management and weak business scalability found in traditional physical hosts and VPS ("Virtual Private Server") services. The server can also be a server of a distributed system, or a server combined with a blockchain.
The present disclosure further provides a computer program product including a computer program which, when executed by a processor, implements the method for generating a re-illuminated image described above.
It should be understood that steps may be reordered, added, or deleted using the various forms of flow shown above. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in a different order, as long as the desired result of the technical solution disclosed in the present disclosure can be achieved; no limitation is imposed herein.
The above specific implementations do not constitute a limitation on the protection scope of the present disclosure. Those skilled in the art should understand that various modifications, combinations, sub-combinations, and substitutions can be made according to design requirements and other factors. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present disclosure shall be included in the protection scope of the present disclosure.

Claims (17)

  1. A method for generating a re-illuminated image, comprising:
    acquiring an image to be processed; and
    inputting the image to be processed into a re-illuminated image generation system, performing relighting rendering by N wavelet transform models in the re-illuminated image generation system, and outputting a target re-illuminated image corresponding to the image to be processed, wherein N is an integer greater than or equal to 1.
  2. The generation method according to claim 1, wherein N is an integer greater than 1, and inputting the image to be processed into the re-illuminated image generation system, performing relighting rendering by the N wavelet transform models in the re-illuminated image generation system, and outputting the target re-illuminated image corresponding to the image to be processed comprises:
    for the first wavelet transform model, inputting the image to be processed into the first wavelet transform model for relighting rendering, and outputting an intermediate re-illuminated image;
    from the second wavelet transform model onward, inputting the intermediate re-illuminated image output by the previous-stage wavelet transform model into the next-stage wavelet transform model for relighting rendering, and outputting the intermediate re-illuminated image corresponding to the next-stage wavelet transform model; and
    whenever one stage of wavelet transform model outputs a corresponding intermediate re-illuminated image, upon determining that the corresponding intermediate re-illuminated image satisfies an optimization stop condition, stopping passing the corresponding intermediate re-illuminated image to the next-stage wavelet transform model, and using the corresponding intermediate re-illuminated image as the target re-illuminated image.
  3. The generation method according to claim 2, further comprising:
    upon determining that the corresponding intermediate re-illuminated image does not satisfy the optimization stop condition, continuing to pass the intermediate re-illuminated image to the next-stage wavelet transform model, which continues to perform relighting rendering on the corresponding intermediate re-illuminated image, until the intermediate re-illuminated image output by one stage of wavelet transform model satisfies the optimization stop condition, and using the intermediate re-illuminated image satisfying the optimization stop condition as the target re-illuminated image.
  4. The generation method according to any one of claims 1-3, wherein the process by which a wavelet transform model at any stage performs relighting rendering on an image comprises:
    inputting the image into a wavelet transform network of the wavelet transform model, performing downsampling on the image by the wavelet transform network, and outputting a feature image corresponding to the image, wherein the image comprises the image to be processed and the intermediate re-illuminated image;
    inputting the feature image into a residual network of the wavelet transform model, reconstructing the feature image by the residual network, and outputting a reconstructed feature image; and
    inputting the reconstructed feature image into an inverse wavelet transform network of the wavelet transform model, performing upsampling on the reconstructed feature image by the inverse wavelet transform network, and outputting the re-illuminated image.
  5. The generation method according to claim 4, further comprising:
    downsampling and upsampling the image according to a preset frequency and a preset factor.
  6. The generation method according to claim 4, wherein inputting the feature image into the residual network of the wavelet transform model further comprises:
    inputting the feature image obtained by downsampling into a first convolutional network of the wavelet transform model, preprocessing the feature image by the first convolutional network, and inputting the preprocessed feature image output by the first convolutional network into the residual network.
  7. The generation method according to claim 4, further comprising:
    inputting the upsampled feature image obtained after upsampling into a second convolutional network of the wavelet transform model, and preprocessing the upsampled feature image by the second convolutional network.
  8. An apparatus for generating a re-illuminated image, comprising:
    an acquisition module configured to acquire an image to be processed; and
    a first output module configured to input the image to be processed into a re-illuminated image generation system, perform relighting rendering by N wavelet transform models in the re-illuminated image generation system, and output a target re-illuminated image corresponding to the image to be processed, wherein N is an integer greater than or equal to 1.
  9. The generation apparatus according to claim 8, wherein N is an integer greater than 1, and the first output module comprises:
    a first output submodule configured to, for the first wavelet transform model, input the image to be processed into the first wavelet transform model for relighting rendering, and output an intermediate re-illuminated image;
    a second output submodule configured to, from the second wavelet transform model onward, input the intermediate re-illuminated image output by the previous-stage wavelet transform model into the next-stage wavelet transform model for relighting rendering, and output the intermediate re-illuminated image corresponding to the next-stage wavelet transform model; and
    a first determination submodule configured to, whenever one stage of wavelet transform model outputs a corresponding intermediate re-illuminated image and it is determined that the corresponding intermediate re-illuminated image satisfies an optimization stop condition, stop passing the corresponding intermediate re-illuminated image to the next-stage wavelet transform model, and use the corresponding intermediate re-illuminated image as the target re-illuminated image.
  10. The generation apparatus according to claim 9, further comprising:
    a second determination submodule configured to, upon determining that the corresponding intermediate re-illuminated image does not satisfy the optimization stop condition, continue passing the intermediate re-illuminated image to the next-stage wavelet transform model, which continues to perform relighting rendering on the corresponding intermediate re-illuminated image, until the intermediate re-illuminated image output by one stage of wavelet transform model satisfies the optimization stop condition, and use the intermediate re-illuminated image satisfying the optimization stop condition as the target re-illuminated image.
  11. The generation apparatus according to any one of claims 8-10, further comprising:
    a second output module configured to input the image into a wavelet transform network of the wavelet transform model, perform downsampling on the image by the wavelet transform network, and output a feature image corresponding to the image, wherein the image comprises the image to be processed and the intermediate re-illuminated image;
    a third output module configured to input the feature image into a residual network of the wavelet transform model, reconstruct the feature image by the residual network, and output a reconstructed feature image; and
    a fourth output module configured to input the reconstructed feature image into an inverse wavelet transform network of the wavelet transform model, perform upsampling on the reconstructed feature image by the inverse wavelet transform network, and output the re-illuminated image.
  12. The generation apparatus according to claim 11, further comprising:
    a sampling module configured to downsample and upsample the image according to a preset frequency and a preset factor.
  13. The generation apparatus according to claim 11, wherein the third output module comprises:
    a third output submodule configured to input the feature image obtained by downsampling into a first convolutional network of the wavelet transform model, preprocess the feature image by the first convolutional network, and input the preprocessed feature image output by the first convolutional network into the residual network.
  14. The generation apparatus according to claim 11, further comprising:
    a preprocessing module configured to input the upsampled feature image obtained after upsampling into a second convolutional network of the wavelet transform model, and preprocess the upsampled feature image by the second convolutional network.
  15. An electronic device, comprising a processor and a memory;
    wherein the processor runs a program corresponding to executable program code stored in the memory by reading the executable program code, so as to implement the method according to any one of claims 1-7.
  16. A computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the method according to any one of claims 1-7.
  17. A computer program product, comprising a computer program, wherein the computer program, when executed by a processor, implements the method according to any one of claims 1-7.
PCT/CN2022/074900 2021-06-29 2022-01-29 Method and apparatus for generating re-illuminated image, and electronic device WO2023273340A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110729940.4A CN113554739A (zh) 2021-06-29 2021-06-29 Method and apparatus for generating re-illuminated image, and electronic device
CN202110729940.4 2021-06-29

Publications (1)

Publication Number Publication Date
WO2023273340A1 true WO2023273340A1 (zh) 2023-01-05

Family

ID=78102491

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/074900 WO2023273340A1 (zh) 2021-06-29 2022-01-29 Method and apparatus for generating re-illuminated image, and electronic device

Country Status (2)

Country Link
CN (1) CN113554739A (zh)
WO (1) WO2023273340A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113554739A (zh) * 2021-06-29 2021-10-26 Beijing Baidu Netcom Science and Technology Co., Ltd. Method and apparatus for generating re-illuminated image, and electronic device
CN115546041B (zh) * 2022-02-28 2023-10-20 Honor Device Co., Ltd. Training method for light supplement model, image processing method, and related devices
CN115546010B (zh) * 2022-09-21 2023-09-12 Honor Device Co., Ltd. Image processing method and electronic device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1889128A (zh) * 2006-07-17 2007-01-03 北京航空航天大学 基于gpu的预计算辐射度传递全频阴影的方法
US20090310828A1 (en) * 2007-10-12 2009-12-17 The University Of Houston System An automated method for human face modeling and relighting with application to face recognition
US9001226B1 (en) * 2012-12-04 2015-04-07 Lytro, Inc. Capturing and relighting images using multiple devices
CN113554739A (zh) * 2021-06-29 2021-10-26 Beijing Baidu Netcom Science and Technology Co., Ltd. Method and apparatus for generating re-illuminated image, and electronic device
CN113592998A (zh) * 2021-06-29 2021-11-02 Beijing Baidu Netcom Science and Technology Co., Ltd. Method and apparatus for generating re-illuminated image, and electronic device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7633503B2 (en) * 2005-03-22 2009-12-15 Microsoft Corporation Local, deformable precomputed radiance transfer

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1889128A (zh) * 2006-07-17 2007-01-03 北京航空航天大学 基于gpu的预计算辐射度传递全频阴影的方法
US20090310828A1 (en) * 2007-10-12 2009-12-17 The University Of Houston System An automated method for human face modeling and relighting with application to face recognition
US9001226B1 (en) * 2012-12-04 2015-04-07 Lytro, Inc. Capturing and relighting images using multiple devices
CN113554739A (zh) * 2021-06-29 2021-10-26 Beijing Baidu Netcom Science and Technology Co., Ltd. Method and apparatus for generating re-illuminated image, and electronic device
CN113592998A (zh) * 2021-06-29 2021-11-02 Beijing Baidu Netcom Science and Technology Co., Ltd. Method and apparatus for generating re-illuminated image, and electronic device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"16th European Conference - Computer Vision – ECCV 2020", vol. 31, 1 January 1900, CORNELL UNIVERSITY LIBRARY,, 201 Olin Library Cornell University Ithaca, NY 14853, article PUTHUSSERY DENSEN; PANIKKASSERIL SETHUMADHAVAN HRISHIKESH; KURIAKOSE MELVIN; CHARANGATT VICTOR JIJI: "WDRN: A Wavelet Decomposed RelightNet for Image Relighting", pages: 519 - 534, XP047575775, DOI: 10.1007/978-3-030-67070-2_31 *
MAJED EL HELOU; RUOFAN ZHOU; SABINE SUSSTRUNK; RADU TIMOFTE: "NTIRE 2021 Depth Guided Image Relighting Challenge", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 27 April 2021 (2021-04-27), 201 Olin Library Cornell University Ithaca, NY 14853 , XP081944752 *

Also Published As

Publication number Publication date
CN113554739A (zh) 2021-10-26

Similar Documents

Publication Publication Date Title
WO2023273340A1 (zh) Method and apparatus for generating re-illuminated image, and electronic device
WO2023273536A1 (zh) Method and apparatus for generating re-illuminated image, and electronic device
WO2023092813A1 (zh) Channel-attention-based Swin-Transformer image denoising method and system
US11537873B2 (en) Processing method and system for convolutional neural network, and storage medium
CN107123089B (zh) Remote sensing image super-resolution reconstruction method and system based on a deep convolutional network
CN112419151B (zh) Image degradation processing method and apparatus, storage medium, and electronic device
CN108364270B (zh) Color restoration method and apparatus for color cast images
CN112837224A (zh) Super-resolution image reconstruction method based on a convolutional neural network
Chen et al. MICU: Image super-resolution via multi-level information compensation and U-net
CN117597703A (zh) 用于图像分析的多尺度变换器
Wang et al. No-reference stereoscopic image quality assessment using quaternion wavelet transform and heterogeneous ensemble learning
Chen et al. Weighted sparse representation multi-scale transform fusion algorithm for high dynamic range imaging with a low-light dual-channel camera
Wu et al. A novel perceptual loss function for single image super-resolution
Kumar et al. Dynamic stochastic resonance and image fusion based model for quality enhancement of dark and hazy images
Han Texture Image Compression Algorithm Based on Self‐Organizing Neural Network
Barai et al. Human visual system inspired saliency guided edge preserving tone-mapping for high dynamic range imaging
Rafid Hashim et al. Single image dehazing by dark channel prior and luminance adjustment
Ding et al. Learning-based underwater image enhancement: An efficient two-stream approach
Ma et al. An efficient framework for deep learning‐based light‐defect image enhancement
Yang et al. Unsupervised Low Illumination Enhancement Model Based on YCbCr Color Space
CN111062886A (zh) Super-resolution method and system for hotel pictures, electronic product, and medium
Gao et al. [Retracted] Application of Multimedia Semantic Extraction Method in Fast Image Enhancement Control
Chen et al. Hcsam-Net: Multistage Network with a Hybrid of Convolution and Self-Attention Mechanism for Low-Light Image Enhancement
Mehmood Deep learning based super resolution of aerial and satellite imagery
WO2021232708A1 (zh) Image processing method and electronic device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22831174

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22831174

Country of ref document: EP

Kind code of ref document: A1