WO2023010751A1 - Information compensation method, apparatus, device and storage medium for image highlight regions - Google Patents

Information compensation method, apparatus, device and storage medium for image highlight regions

Info

Publication number
WO2023010751A1
WO2023010751A1 · PCT/CN2021/138081 · CN2021138081W
Authority
WO
WIPO (PCT)
Prior art keywords
image
initial
overexposure
information
processed
Prior art date
Application number
PCT/CN2021/138081
Other languages
English (en)
French (fr)
Inventor
章政文
陈翔宇
董超
乔宇
Original Assignee
中国科学院深圳先进技术研究院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中国科学院深圳先进技术研究院
Publication of WO2023010751A1 publication Critical patent/WO2023010751A1/zh

Classifications

    • GPHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T5/77
    • G06T5/92
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; image sequence
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; image merging

Definitions

  • the present application relates to the technical field of image processing, and in particular to an information compensation method, device, equipment and storage medium for image highlight regions.
  • Image optimization processing tasks generally include image editing, image retouching and color correction, image coloring, SDR video conversion to HDR video, etc.
  • the optimized image has higher contrast and richer colors, and can better reflect the visual information in the real environment.
  • When the exposure of the original image is too high, information in some highlight regions is difficult to extract. If an overexposed original image is processed with an optimization method intended for normally exposed images, content information in some highlight regions is lost in the optimized image, causing color deviations in the optimized image and a poor optimization result.
  • Embodiments of the present application provide an information compensation method, apparatus, device, and storage medium for image highlight regions, which can solve the problem of content loss in the highlight regions of optimized images in image optimization tasks.
  • an embodiment of the present application provides a method for compensating information of a highlight region of an image, the method comprising: acquiring an overexposure mask image and an initial optimized image of the image to be processed, where the overexposure mask image indicates the highlight region of the image to be processed; obtaining global exposure information from the initial optimized image; determining the overexposure information of the highlight region according to the overexposure mask image and the global exposure information; and compensating the highlight region of the initial optimized image with the overexposure information to obtain a compensated image of the initial optimized image.
  • the highlight area of the image to be processed can be identified according to the overexposure mask image, and accordingly, the highlight area of the initial optimized image of the image to be processed can also be identified.
  • Global exposure information is extracted from the initial optimized image, the overexposure information of its highlight region is determined from the overexposure mask image, and the overexposure information is fused with the initial optimized image, so that the information of the highlight region of the initial optimized image is compensated and the content information missing from its highlighted part is made up.
  • the highlight region of the compensated image therefore carries more feature information than the highlight region of the initial optimized image, which solves the problem of content loss in the highlight region of the optimized image and in turn improves the quality of the optimized image.
  • obtaining the global exposure information according to the initial optimized image includes: inputting the initial optimized image into a trained generator for processing to obtain the global exposure information.
  • the training method of the generator includes: constructing a generative adversarial network, which includes an initial model of the generator and a discriminator; and performing adversarial training on the network with a preset loss function and a training set to obtain the generator,
  • the training set includes initial optimized image samples, overexposure mask image samples and compensation image samples corresponding to a plurality of image samples to be processed;
  • the loss function describes the combined loss value of the absolute error loss between the compensated image sample and the predicted image, the perceptual loss between the compensated image sample and the predicted image, and the discriminator loss of the predicted image;
  • the predicted image is obtained by processing the initial optimized image sample with the initial model, multiplying the result by the overexposure mask image sample, and superimposing the product on the initial optimized image sample.
  • the loss function is expressed as: L = α·L1 + β·Lp + γ·L_GAN, where L1 is the absolute error loss, Lp the perceptual loss, and L_GAN the generative adversarial loss based on the discriminator output;
  • L represents the loss function
  • I_GT represents the compensated image sample
  • I_H represents the predicted image
  • D(·) represents the output of the discriminator
  • α, β, and γ are all hyperparameters.
  • the method for determining the pixel value of a pixel in the overexposure mask image includes:
  • I_mask(x, y) represents the pixel value of the overexposure mask image at (x, y)
  • I_S(x, y) represents the pixel value of the image to be processed at (x, y)
  • the threshold symbol represents the preset overexposure threshold.
  • the image to be processed is an SDR video frame obtained by extracting frames from the SDR video
  • the initial optimized image is an HDR video frame obtained by HDR converting the SDR video frame.
  • an embodiment of the present application provides an information compensation device for an image highlight region; the device includes:
  • the acquisition unit is used to acquire the overexposure mask image and the initial optimized image of the image to be processed, where the overexposure mask image indicates the highlight region of the image to be processed;
  • the processing unit obtains the global exposure information from the initial optimized image, determines the overexposure information of the highlight region according to the overexposure mask image and the global exposure information, and uses the overexposure information to compensate the highlight region of the initial optimized image to obtain a compensated image of the initial optimized image.
  • obtaining the global exposure information according to the initial optimized image includes: inputting the initial optimized image into a trained generator for processing to obtain the global exposure information.
  • an embodiment of the present application provides a terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor.
  • when the processor executes the computer program, the method of any one of the above first aspects is implemented.
  • an embodiment of the present application provides a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, and when the computer program is executed by a processor, the method according to any one of the above-mentioned first aspects is implemented.
  • an embodiment of the present application provides a computer program product, which, when the computer program product is run on a terminal device, causes the terminal device to execute the method in any one of the foregoing first aspects.
  • FIG. 1 is a flow chart of an information compensation method for an image highlight area provided by an embodiment of the present application
  • Fig. 2 is a network structure diagram of a generator provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a range of HDR and SDR color gamuts provided by an embodiment of the present application
  • Fig. 4 is a training flowchart of a generator provided by an embodiment of the present application.
  • Fig. 5 is a schematic flow chart of converting an HDR video to an SDR video provided by an embodiment of the present application
  • Fig. 6 is a schematic structural diagram of an information compensation device for an image highlight area provided by an embodiment of the present application.
  • Fig. 7 is a schematic structural diagram of a terminal device provided by an embodiment of the present application.
  • Embodiments of the present application provide an information compensation method, apparatus, device, and storage medium for image highlight regions. First, the highlight region of the image to be processed is identified from its overexposure mask image. Then global exposure information is extracted from the initial optimized image of the image to be processed, the overexposure information of the highlight region of the initial optimized image is determined from the overexposure mask image, and the overexposure information is fused with the initial optimized image, realizing information compensation of the highlight region of the initial optimized image and making up for the content information missing in its highlighted part.
  • the highlight region of the compensated image carries more feature information than that of the initial optimized image, which solves the problem of content loss in the highlighted part of the optimized image in image optimization tasks, thereby improving the quality of the optimized image.
  • the method for compensating the information of the image highlight region provided by the present application includes: acquiring an overexposure mask image and an initial optimized image of the image to be processed, where the overexposure mask image indicates the highlight region of the image to be processed; obtaining global exposure information from the initial optimized image; and determining the overexposure information of the highlight region according to the overexposure mask image and the global exposure information, with which the highlight region of the initial optimized image is compensated to obtain a compensated image of the initial optimized image.
  • the image to be processed can be optimized by using a color lookup table, a trained deep learning method, or a traditional digital image processing method to obtain an initial optimized image of the image to be processed.
  • the initial optimized image has higher color richness, but in the process of optimizing the image to be processed, the information of the highlighted part of the initially optimized image may be lost. Therefore, in order to ensure the quality of the initial optimized image, it is necessary to perform further processing on the initial optimized image to compensate for the content information missing in the highlighted area of the initial optimized image.
  • the pixel value of each pixel in the overexposure mask image can be obtained by formula (1), in which:
  • I_mask(x, y) represents the pixel value of the overexposure mask image I_mask at (x, y);
  • I_S(x, y) represents the pixel value of the image to be processed I_S at (x, y);
  • the threshold symbol is a preset overexposure threshold, which controls the overexposure degree of the image to be processed and can be set according to actual needs.
  • the highlight area in the image to be processed can be determined according to the pixel values of the pixels in the overexposure mask image.
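Since formula (1) itself is not reproduced in this text, the following is a minimal sketch of one plausible per-pixel thresholding form of the mask. The threshold `tau`, its default value, and the normalized [0, 1] pixel range are all assumptions for illustration only:

```python
import numpy as np

def overexposure_mask(image, tau=0.95):
    """Binary overexposure mask: 1 where a pixel exceeds the threshold.

    `tau` is a hypothetical preset overexposure threshold; pixel values
    are assumed normalized to [0, 1]. For an RGB image the maximum over
    channels is compared against the threshold.
    """
    image = np.asarray(image, dtype=np.float32)
    intensity = image.max(axis=-1) if image.ndim == 3 else image
    return (intensity > tau).astype(np.float32)

# A 2x2 grayscale example: only the top-left pixel is overexposed.
frame = np.array([[0.99, 0.50],
                  [0.10, 0.94]])
mask = overexposure_mask(frame, tau=0.95)
```

The resulting mask marks the highlight region of the frame, matching the role the overexposure mask image plays in the method above.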
  • a deep learning method may be used to identify overexposure information by using a trained neural network model.
  • the embodiment of the present application provides a generator based on which the overexposure information in the initial optimized image corresponding to the image to be processed can be identified.
  • the structure of the generator (Generator) is shown in FIG. 2; the generator includes multiple down-sampling modules connected in sequence and multiple up-sampling modules that correspond one-to-one to the down-sampling modules.
  • the down-sampling module includes a convolution layer and a down-sampling layer (DownSample)
  • the up-sampling module includes an up-sampling layer (UpSample) and a convolution layer.
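The paired down/up-sampling structure described above can be illustrated with a toy encoder-decoder. The convolution layers and learned weights are omitted, and the mean-pooling/nearest-neighbour choices and the skip connections are assumptions for illustration, not the patented network:

```python
import numpy as np

def downsample(x):
    """2x2 mean pooling: stands in for one DownSample step of the encoder path."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(x):
    """Nearest-neighbour 2x upsampling: stands in for one UpSample step."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def toy_generator(x, depth=2):
    """Symmetric down/up path, U-Net style, showing that the one-to-one
    pairing of modules restores the input resolution."""
    skips = []
    for _ in range(depth):
        skips.append(x)
        x = downsample(x)
    for _ in range(depth):
        x = upsample(x) + skips.pop()  # skip from the matching encoder level
    return x

out = toy_generator(np.ones((8, 8)), depth=2)
```

Because each up-sampling module mirrors a down-sampling module, the output has the same spatial size as the input, which is what lets the generator's output be fused pixel-wise with the initial optimized image.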
  • the initial optimized image is input into the trained generator, and the global exposure information can be obtained from the initial optimized image.
  • the overexposure information of the highlighted region may be determined according to the overexposure mask image and global exposure information, and the overexposure information may be used to compensate the highlighted region of the initial optimized image to obtain a compensated image of the initially optimized image.
  • the specific implementation is as follows: multiply the global exposure information by the overexposure mask image pixel by pixel to obtain the overexposure information of the highlight region; then add the overexposure information to the initial optimized image to obtain the compensated image of the initial optimized image. This process can also be expressed as formula (2):
  • I_H = I_mask ⊙ G(I_coarse) + I_coarse    (2)
  • I_H represents the compensated image
  • I_mask represents the overexposure mask image
  • I_coarse represents the initial optimized image
  • G(I_coarse) represents the global exposure information obtained by the generator from the initial optimized image I_coarse.
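Formula (2) can be sketched directly in a few lines; here `g_out` stands in for the trained generator's output G(I_coarse), and all the concrete values are hypothetical:

```python
import numpy as np

# Sketch of formula (2): I_H = I_mask ⊙ G(I_coarse) + I_coarse.
def compensate(i_coarse, i_mask, g_out):
    overexposure_info = i_mask * g_out   # keep G's output only in highlight pixels
    return i_coarse + overexposure_info  # superimpose on the initial optimized image

i_coarse = np.array([[0.2, 0.8],
                     [0.4, 1.0]])
i_mask   = np.array([[0.0, 0.0],
                     [0.0, 1.0]])        # only the bottom-right pixel is a highlight
g_out    = np.full((2, 2), -0.3)         # hypothetical generator output
i_h = compensate(i_coarse, i_mask, g_out)
```

Outside the highlight region the mask is zero, so the compensated image I_H is identical to I_coarse there; only the masked pixels receive the correction.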
  • the information compensation method for the image highlight region identifies the highlight region of the image to be processed through its overexposure mask image; at the same time, global exposure information is extracted from the initial optimized image, and the overexposure information of the highlight region of the initial optimized image is determined from the overexposure mask image. After the overexposure information is fused with the initial optimized image, the content information missing from the highlighted part of the initial optimized image is made up, and the highlight region of the resulting compensated image carries more feature information than that of the initial optimized image, thereby solving the problem of information loss in the highlighted part of the optimized image.
  • the information compensation method for the highlighted area provided by this application is universal. It can be applied to any task that requires color optimization or color conversion of the image to be processed, such as image editing, image retouching and toning, image coloring, SDR (Standard Dynamic Range) video to HDR (High Dynamic Range) video, etc.
  • the overexposed image to be processed can be optimized first to obtain an initial optimized image, and then the content information of the highlighted area of the initial optimized image can be compensated by using the information compensation method of the highlighted area.
  • FIG. 3 is a schematic diagram showing ranges of HDR and SDR color gamuts.
  • BT.709 and BT.2020 are TV parameter standards issued by ITU (International Telecommunication Union)
  • DCI-P3 is a color gamut standard formulated by the American film industry for digital cinema. It can be seen from Figure 3 that, among DCI-P3, BT.709 and BT.2020, BT.2020 has the largest color gamut, followed by DCI-P3, and BT.709 has the smallest.
  • SDR video uses the BT.709 color gamut
  • HDR video uses the wider BT.2020 color gamut or DCI-P3 color gamut.
  • the HDR video can show higher contrast and richer colors than the SDR video.
  • the information compensation method for the highlight region provided by the present application can extract the information of the highlight region in each video frame of the SDR video and fuse it with the initial optimized image corresponding to that frame to obtain a compensated image, so that the loss of highlight region information in the HDR video can be avoided.
  • the initial model of the generator can be trained by designing corresponding training sets and loss functions, so as to obtain generators suitable for different tasks.
  • the generator can be trained by building a generative adversarial network.
  • the generative adversarial network includes an initial model of the generator and a discriminator; adversarial training is performed on the network with a preset loss function and a training set to obtain the generator.
  • the training set includes initial optimization image samples, overexposure mask image samples and compensation image samples corresponding to a plurality of image samples to be processed.
  • Step 1: Obtain the training set.
  • the training set includes a plurality of training samples, and each training sample includes an initial optimized image sample, an overexposure mask image sample, and an HDR video frame sample corresponding to the SDR video frame sample.
  • an SDR video sample and its corresponding HDR video sample are acquired first.
  • SDR video samples and corresponding HDR video samples can be obtained from public video websites. It is also possible to perform SDR and HDR processing on videos in the same RAW data format, respectively, to obtain SDR video samples and corresponding HDR video samples. It is also possible to use the SDR camera and the HDR camera respectively to shoot corresponding SDR video samples and HDR video samples in the same scene.
  • the SDR video sample and its corresponding HDR video sample are respectively subjected to frame extraction to obtain multiple SDR video frame samples (equivalent to the image samples to be processed) and HDR video frame samples (equivalent to the compensated image samples) that correspond one-to-one in time sequence and space.
  • the SDR video frame sample may be subjected to HDR conversion through a color lookup table, a trained deep learning method, or a traditional digital image processing method to obtain a corresponding initial optimized image sample.
  • the overexposure mask image samples corresponding to the SDR video frame samples can be obtained by using the above formula (1).
  • Step 2: After the initial optimized image samples in the training set are processed by the generator's initial model, the result is multiplied by the overexposure mask image samples and then superimposed on the initial optimized image samples to obtain a predicted image.
  • the initial optimized image sample is input into the initial model of the generator for processing to obtain global exposure information. After multiplying the global exposure information and the overexposure mask image sample pixel by pixel, the overexposure information of the highlighted area is obtained. The overexposure information is fused with the initial optimized image samples to obtain the predicted image.
  • Step 3: Input the predicted image and the corresponding HDR video frame samples of the training set into the discriminator for iterative training to obtain the trained generator.
  • the predicted image and the corresponding HDR video frame sample are input into the discriminator for processing to obtain the discriminant result of the training sample.
  • Adversarial training is performed according to the discrimination results of each training sample and the preset loss function to obtain a trained generator.
  • the preset loss function L provided by the embodiment of the present application can be expressed as formula (3): L = α·L1 + β·Lp + γ·L_GAN    (3)
  • L1 represents the absolute error loss
  • Lp represents the perceptual loss
  • L_GAN represents the generative adversarial loss
  • I GT represents the compensated image sample
  • I H represents the predicted image
  • ⁇ , ⁇ , and ⁇ are all hyperparameters.
  • the initial model of the generator can be trained using the gradient descent method.
  • when the preset loss function meets certain requirements, the model has converged; that is, the training of the initial model is complete and a trained generator is obtained.
  • the trained generator can be applied to the task of converting SDR video to HDR video.
  • frame extraction processing is performed on the acquired SDR video to be processed to obtain a plurality of SDR video frames.
  • HDR conversion is performed on the SDR video frame to obtain the HDR video frame, and an overexposure mask image corresponding to the SDR video frame is obtained.
  • Input the HDR video frame into the trained generator to get the global exposure information.
  • the overexposure information of the highlight area is determined according to the overexposure mask image and the global exposure information, and the overexposure information is used to compensate the highlight area of the HDR video frame to obtain a compensated image of the HDR video frame.
  • the HDR video corresponding to the SDR video to be processed is obtained by combining frames.
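The SDR-to-HDR steps above can be sketched as one flow. Every callable here (`hdr_convert`, `generator`, `mask_fn`) is a hypothetical stand-in for the corresponding component, not the patented implementation, and the frame values are toy data:

```python
import numpy as np

def sdr_to_hdr(sdr_frames, hdr_convert, generator, mask_fn):
    """Per-frame pipeline: HDR-convert, mask, generate, compensate.
    Frame merging back into a video container is omitted."""
    hdr_frames = []
    for sdr in sdr_frames:
        coarse = hdr_convert(sdr)                 # initial optimized (HDR) frame
        mask = mask_fn(sdr)                       # overexposure mask of the SDR frame
        g_out = generator(coarse)                 # global exposure information
        hdr_frames.append(coarse + mask * g_out)  # compensated HDR frame, formula (2)
    return hdr_frames

# Toy stand-ins to exercise the flow: a bright frame and a dark frame.
frames = [np.full((2, 2), 0.9), np.full((2, 2), 0.3)]
out = sdr_to_hdr(
    frames,
    hdr_convert=lambda f: f * 1.1,                       # hypothetical HDR conversion
    generator=lambda f: np.full_like(f, 0.05),           # hypothetical generator output
    mask_fn=lambda f: (f > 0.8).astype(np.float32),      # hypothetical threshold mask
)
```

Only the bright frame exceeds the mask threshold, so only it receives compensation; the dark frame passes through the HDR conversion unchanged.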
  • an embodiment of the present application provides an information compensation device 100 for a highlight region of an image.
  • the device 100 includes:
  • the acquiring unit 101 is configured to acquire the overexposure mask image and the initial optimized image of the image to be processed, where the overexposure mask image indicates the highlight region of the image to be processed.
  • the processing unit 102 obtains the global exposure information from the initial optimized image, determines the overexposure information of the highlight region according to the overexposure mask image and the global exposure information, and uses the overexposure information to compensate the highlight region of the initial optimized image to obtain a compensated image of the initial optimized image.
  • obtaining the global exposure information according to the initial optimized image includes: inputting the initial optimized image into a trained generator for processing to obtain the global exposure information.
  • the training method of the generator includes: constructing a generative adversarial network, which includes an initial model of the generator and a discriminator; and performing adversarial training on the network with a preset loss function and a training set to obtain the generator,
  • the training set includes initial optimized image samples, overexposure mask image samples and compensated image samples corresponding to a plurality of image samples to be processed;
  • the loss function describes the combined loss value of the absolute error loss between the compensated image sample and the predicted image, the perceptual loss between the compensated image sample and the predicted image, and the discriminator loss of the predicted image;
  • the predicted image is obtained by processing the initial optimized image sample with the initial model, multiplying the result by the overexposure mask image sample, and superimposing the product on the initial optimized image sample.
  • the loss function is expressed as: L = α·L1 + β·Lp + γ·L_GAN, where L1 is the absolute error loss, Lp the perceptual loss, and L_GAN the generative adversarial loss based on the discriminator output;
  • L represents the loss function
  • I_GT represents the compensated image sample
  • I_H represents the predicted image
  • D(·) represents the output of the discriminator
  • α, β, and γ are all hyperparameters.
  • the method for determining the pixel value of a pixel in the overexposure mask image includes:
  • I_mask(x, y) represents the pixel value of the overexposure mask image at (x, y)
  • I_S(x, y) represents the pixel value of the image to be processed at (x, y)
  • the threshold symbol represents the preset overexposure threshold.
  • the image to be processed is an SDR video frame obtained by extracting frames from the SDR video
  • the initial optimized image is an HDR video frame obtained by HDR converting the SDR video frame.
  • a terminal device 200 in this embodiment includes: a processor 201 , a memory 202 , and a computer program 204 stored in the memory 202 and operable on the processor 201 .
  • the computer program 204 can be run by the processor 201 to generate instructions 203, and the processor 201 can implement the steps in the above embodiments of the information compensation method according to the instructions 203.
  • the processor 201 executes the computer program 204, the functions of the modules/units in the above-mentioned device embodiments are realized, for example, the functions of the unit 101 and the unit 102 shown in FIG. 6 .
  • the computer program 204 can be divided into one or more modules/units, and one or more modules/units are stored in the memory 202 and executed by the processor 201 to complete the present application.
  • One or more modules/units may be a series of computer program instruction segments capable of accomplishing specific functions, and the instruction segments are used to describe the execution process of the computer program 204 in the terminal device 200 .
  • FIG. 7 is only an example of the terminal device 200 and does not constitute a limitation on it; the terminal device may include more or fewer components than shown, combine certain components, or use different components; for example, the terminal device 200 may also include input and output devices, a network access device, a bus, and the like.
  • the processor 201 can be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
  • a general-purpose processor may be a microprocessor, or the processor may be any conventional processor, and the like.
  • the memory 202 may be an internal storage unit of the terminal device 200, such as a hard disk or memory of the terminal device 200.
  • the memory 202 can also be an external storage device of the terminal device 200, such as a plug-in hard disk equipped on the terminal device 200, a smart memory card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, a flash memory card (Flash Card) and so on.
  • the memory 202 may also include both an internal storage unit of the terminal device 200 and an external storage device.
  • the memory 202 is used to store computer programs and other programs and data required by the terminal device 200 .
  • the memory 202 can also be used to temporarily store data that has been output or will be output.
  • the terminal device provided in this embodiment can execute the foregoing method embodiment, and its implementation principle and technical effect are similar, and details are not repeated here.
  • the embodiment of the present application also provides a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the method described in the foregoing method embodiment is implemented.
  • the embodiment of the present application further provides a computer program product, which, when the computer program product runs on a terminal device, enables the terminal device to implement the method described in the foregoing method embodiments when executed.
  • if the above integrated units are implemented in the form of software functional units and sold or used as independent products, they can be stored in a computer-readable storage medium. Based on this understanding, all or part of the procedures in the methods of the above embodiments of the present application can be completed by instructing the related hardware through a computer program, and the computer program can be stored in a computer-readable storage medium.
  • when the computer program is executed by a processor, the steps in the above method embodiments can be realized.
  • the computer program includes computer program code, and the computer program code may be in the form of source code, object code, an executable file, or some intermediate form.
  • the computer-readable storage medium may include at least: any entity or device capable of carrying the computer program code to a photographing device/terminal device, a recording medium, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, and a software distribution medium, for example a USB flash drive, a removable hard disk, a magnetic disk, or an optical disk.
  • references to "one embodiment" or "some embodiments" and the like in this application mean that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application.
  • appearances of the phrases "in one embodiment", "in some embodiments", "in other embodiments", etc. in various places in this specification do not necessarily all refer to the same embodiment, but mean "one or more but not all embodiments" unless specifically stated otherwise.
  • the terms "including", "comprising", "having" and their variations mean "including but not limited to", unless specifically stated otherwise.
  • the terms "first" and "second" are used for description purposes only and cannot be interpreted as indicating or implying relative importance or implicitly indicating the quantity of the indicated technical features.
  • the features defined as "first" and "second" may explicitly or implicitly include at least one of these features.
  • the terms "connection" and "connected" should be understood in a broad sense: the connection can be mechanical or electrical, direct or indirect through an intermediate medium, and can be internal communication between two elements or an interaction relationship between two elements. Unless otherwise clearly defined, those of ordinary skill in the art can understand the specific meaning of the above terms in this application according to the specific situation.

Abstract

The present application provides an information compensation method, apparatus, device, and storage medium for image highlight regions, relating to the technical field of image processing. The information compensation method for image highlight regions includes: obtaining an overexposure mask image and an initial optimized image of an image to be processed, the overexposure mask image indicating the highlight regions of the image to be processed; obtaining global exposure information according to the initial optimized image; determining overexposure information of the highlight regions according to the overexposure mask image and the global exposure information; and compensating the highlight regions of the initial optimized image with the overexposure information to obtain a compensated image of the initial optimized image. The information compensation method for image highlight regions provided by the present application can solve the problem of content loss in the highlight regions of an image in image optimization tasks.

Description

Information Compensation Method, Apparatus, Device and Storage Medium for Image Highlight Regions
Technical Field
The present application relates to the technical field of image processing, and in particular to an information compensation method, apparatus, device, and storage medium for image highlight regions.
Background Art
Image optimization tasks generally include image editing, image retouching and color grading, image colorization, SDR-to-HDR video conversion, and the like. Compared with the corresponding original image, an optimized image has higher contrast and richer colors, and can better reflect the visual information of the real environment. When an original image is optimized, if its exposure is too high, the information in some highlight regions is difficult to extract. If an overexposed original image is processed in the same way as a normally exposed image, part of the content information in the highlight regions will be lost in the optimized image, causing color deviations in the optimized image and a poor optimization result.
Summary of the Invention
Embodiments of the present application provide an information compensation method, apparatus, device, and storage medium for image highlight regions, which can solve the problem of content loss in the highlight regions of the optimized image in image optimization tasks.
In a first aspect, an embodiment of the present application provides an information compensation method for image highlight regions. The method includes: obtaining an overexposure mask image and an initial optimized image of an image to be processed, where the overexposure mask image is used to indicate the highlight regions of the image to be processed; obtaining global exposure information according to the initial optimized image; determining overexposure information of the highlight regions according to the overexposure mask image and the global exposure information; and compensating the highlight regions of the initial optimized image with the overexposure information to obtain a compensated image of the initial optimized image.
Based on the information compensation method for image highlight regions provided by the present application, the highlight regions of the image to be processed can be identified from the overexposure mask image, and accordingly the highlight regions of the initial optimized image of the image to be processed can also be identified. Global exposure information is extracted from the initial optimized image, the overexposure information of the highlight regions of the initial optimized image is determined according to the overexposure mask image, and the overexposure information is fused with the initial optimized image, so that the information of the highlight regions of the initial optimized image is compensated. This makes up for the content information missing from the highlight parts of the initial optimized image; the highlight regions of the resulting compensated image carry more feature information than those of the initial optimized image, which solves the problem of content loss in the highlight regions of the optimized image and thereby improves the quality of the optimized image.
Optionally, obtaining the global exposure information according to the initial optimized image includes: inputting the initial optimized image into a trained generator for processing to obtain the global exposure information.
Optionally, the training method of the generator includes: constructing a generative adversarial network, where the generative adversarial network includes an initial model of the generator and a discriminator; and performing adversarial training on the generative adversarial network with a preset loss function and a training set to obtain the generator, where the training set includes initial optimized image samples, overexposure mask image samples, and compensated image samples corresponding to a plurality of image samples to be processed.
The loss function describes the combined loss value of the absolute error loss value between the compensated image sample and a predicted image, the perceptual loss value between the compensated image sample and the predicted image, and the discriminator loss value of the predicted image. The predicted image is the image obtained by processing the initial optimized image sample with the initial model, multiplying the result by the overexposure mask image sample, and then adding the product to the initial optimized image sample.
Optionally, the loss function is expressed as:
L = α·||I_GT − I_H||_1 + β·L_p(I_GT, I_H) − γ·log D(I_H)
where L denotes the loss function, I_GT denotes the compensated image sample, I_H denotes the predicted image, L_p denotes the perceptual loss, D(·) denotes the output of the discriminator, and α, β and γ are all hyperparameters.
Optionally, the method for determining the pixel values of the pixels in the overexposure mask image includes:
determining the pixel values of the pixels in the overexposure mask image according to the formula
I_mask(x, y) = 1, if I_S(x, y) > λ; I_mask(x, y) = 0, otherwise;
where I_mask(x, y) denotes the pixel value of the pixel of the overexposure mask image at (x, y), I_S(x, y) denotes the pixel value of the pixel of the image to be processed at (x, y), and λ denotes a preset overexposure threshold.
Optionally, the image to be processed is an SDR video frame extracted from an SDR video, and the initial optimized image is an HDR video frame obtained from the SDR video frame through HDR conversion.
In a second aspect, an embodiment of the present application provides an information compensation apparatus for image highlight regions. The apparatus includes:
an obtaining unit configured to obtain an overexposure mask image and an initial optimized image of an image to be processed, where the overexposure mask image is used to indicate the highlight regions of the image to be processed; and a processing unit configured to obtain global exposure information according to the initial optimized image, determine overexposure information of the highlight regions according to the overexposure mask image and the global exposure information, and compensate the highlight regions of the initial optimized image with the overexposure information to obtain a compensated image of the initial optimized image.
Optionally, obtaining the global exposure information according to the initial optimized image includes: inputting the initial optimized image into a trained generator for processing to obtain the global exposure information.
In a third aspect, an embodiment of the present application provides a terminal device including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the method of any one of the implementations of the first aspect when executing the computer program.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program, where the computer program, when executed by a processor, implements the method of any one of the implementations of the first aspect.
In a fifth aspect, an embodiment of the present application provides a computer program product which, when run on a terminal device, causes the terminal device to perform the method of any one of the implementations of the first aspect.
It can be understood that, for the beneficial effects of the second to fifth aspects above, reference may be made to the relevant description of the beneficial effects of the first aspect and its possible implementations, which will not be repeated here.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of the present application more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of an information compensation method for image highlight regions provided by an embodiment of the present application;
Fig. 2 is a network structure diagram of a generator provided by an embodiment of the present application;
Fig. 3 is a schematic diagram of the HDR and SDR color gamut ranges provided by an embodiment of the present application;
Fig. 4 is a training flowchart of a generator provided by an embodiment of the present application;
Fig. 5 is a schematic flowchart of converting an SDR video into an HDR video provided by an embodiment of the present application;
Fig. 6 is a schematic structural diagram of an information compensation apparatus for image highlight regions provided by an embodiment of the present application;
Fig. 7 is a schematic structural diagram of a terminal device provided by an embodiment of the present application.
Detailed Description of Embodiments
In the following description, specific details such as particular system structures and techniques are set forth for the purpose of illustration rather than limitation, so as to provide a thorough understanding of the embodiments of the present application. However, it should be clear to those skilled in the art that the present application can also be implemented in other embodiments without these specific details. In other cases, detailed descriptions of well-known systems, apparatuses, circuits, and methods are omitted so that unnecessary details do not obscure the description of the present application.
To solve the problem of information loss in the highlight parts of an image, embodiments of the present application provide an information compensation method, apparatus, device, and storage medium for image highlight regions. First, the highlight regions of the image to be processed are identified according to the overexposure mask image of the image to be processed. Then, global exposure information is extracted from the initial optimized image of the image to be processed, the overexposure information of the highlight regions of the initial optimized image is determined according to the overexposure mask image, and the overexposure information is fused with the initial optimized image, thereby compensating the information of the highlight regions of the initial optimized image. This makes up for the content information missing from the highlight parts of the initial optimized image; the highlight regions of the resulting compensated image carry more feature information than those of the initial optimized image, which solves the problem of content information loss in the highlight parts of the optimized image in image optimization tasks and thereby improves the quality of the optimized image.
The technical solutions of the present application are described in detail below with reference to the drawings. The embodiments described below with reference to the drawings are exemplary and are intended to explain the present application, and should not be construed as limiting it.
The information compensation method for image highlight regions provided by an embodiment of the present application is described by way of example with reference to Fig. 1. In a possible implementation, the method includes: obtaining an overexposure mask image and an initial optimized image of an image to be processed, where the overexposure mask image is used to indicate the highlight regions of the image to be processed; obtaining global exposure information according to the initial optimized image; determining overexposure information of the highlight regions according to the overexposure mask image and the global exposure information; and compensating the highlight regions of the initial optimized image with the overexposure information to obtain a compensated image of the initial optimized image.
In one embodiment, the image to be processed can be optimized with a color lookup table, a trained deep learning method, or a conventional digital image processing method to obtain the initial optimized image of the image to be processed. Compared with the image to be processed, the initial optimized image has richer colors, but during the optimization of the image to be processed, the information in the highlight parts of the initial optimized image may be lost. Therefore, to ensure the quality of the initial optimized image, it needs to be processed further to compensate for the content information missing from its highlight regions.
In one embodiment, the pixel value of each pixel in the overexposure mask image can be obtained by formula (1):
I_mask(x, y) = 1, if I_S(x, y) > λ; I_mask(x, y) = 0, otherwise        (1)
In formula (1), I_mask(x, y) denotes the pixel value of the pixel of the overexposure mask image I_mask at (x, y); I_S(x, y) denotes the pixel value of the pixel of the image to be processed I_S at (x, y); and λ is a preset overexposure threshold used to control the degree of overexposure of the image to be processed, which can be set to a suitable value according to actual needs. The highlight regions of the image to be processed can be determined from the pixel values of the pixels in the overexposure mask image.
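As an illustration, formula (1) can be sketched in a few lines of code. The normalized [0, 1] value range, the per-channel maximum for RGB inputs, and the default threshold λ = 0.95 below are illustrative assumptions, not values fixed by the present application:

```python
import numpy as np

def overexposure_mask(img, threshold=0.95):
    """Binary overexposure mask: 1 where a pixel exceeds the threshold.

    `img` is assumed to be a float array normalized to [0, 1]; for an
    RGB image the maximum over the channel axis is compared against the
    threshold, so a pixel counts as overexposed if any channel clips.
    The channel-wise reduction is an illustrative choice.
    """
    intensity = img.max(axis=-1) if img.ndim == 3 else img
    return (intensity > threshold).astype(np.float32)

# A 2x2 grayscale example: only the bottom-right pixel is overexposed.
frame = np.array([[0.2, 0.5],
                  [0.9, 0.99]], dtype=np.float32)
mask = overexposure_mask(frame, threshold=0.95)
```

For an 8-bit image the same function applies after dividing by 255; raising or lowering λ controls how much of the image is treated as overexposed.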
In a possible implementation, a deep learning approach can be adopted, using a trained neural network model to identify the overexposure information. For example, an embodiment of the present application provides a generator with which the overexposure information in the initial optimized image corresponding to the image to be processed can be identified. The structure of the generator is shown in Fig. 2: the generator includes a plurality of sequentially connected downsampling modules and a plurality of upsampling modules in one-to-one correspondence with the downsampling modules. Each downsampling module includes a convolutional layer and a downsampling layer (DownSample), and each upsampling module includes an upsampling layer (UpSample) and a convolutional layer. In the present application, the initial optimized image is input into the trained generator, and the global exposure information can be obtained from the initial optimized image.
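The encoder-decoder structure of Fig. 2 can be illustrated at the shape level with a minimal sketch. Average pooling, nearest-neighbour upsampling, and random 1×1 channel mixing below stand in for the real trained layers; the depth, channel width, and absence of skip connections are assumptions made only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1x1(x, w):
    """Per-pixel channel mixing; stands in for a learned conv layer."""
    return np.tensordot(x, w, axes=([-1], [0]))

def downsample(x):
    """2x2 average pooling over the spatial axes."""
    h, w, c = x.shape
    return x.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

def upsample(x):
    """Nearest-neighbour upsampling by a factor of 2."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def generator(img, depth=2, channels=8):
    """Shape-level sketch of Fig. 2: `depth` downsampling modules
    (conv + DownSample) followed by matching upsampling modules
    (UpSample + conv). Weights are random, so this only illustrates
    the data flow, not a trained model."""
    c_in = img.shape[-1]
    x = conv1x1(img, rng.standard_normal((c_in, channels)) * 0.1)
    for _ in range(depth):                      # downsampling path
        x = downsample(x)
        x = conv1x1(x, rng.standard_normal((channels, channels)) * 0.1)
    for _ in range(depth):                      # upsampling path
        x = upsample(x)
        x = conv1x1(x, rng.standard_normal((channels, channels)) * 0.1)
    # Project back to image channels: the "global exposure information".
    return conv1x1(x, rng.standard_normal((channels, c_in)) * 0.1)

exposure_info = generator(np.zeros((16, 16, 3)))
```

The symmetric upsampling path restores the input's spatial size, so the output can later be multiplied by the overexposure mask and added to the initial optimized image.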
In an embodiment of the present application, the overexposure information of the highlight regions can be determined according to the overexposure mask image and the global exposure information, and the highlight regions of the initial optimized image can be compensated with the overexposure information to obtain the compensated image of the initial optimized image. A specific implementation is: multiplying the global exposure information by the overexposure mask image pixel by pixel to obtain the overexposure information of the highlight regions; and adding the overexposure information to the initial optimized image to obtain the compensated image of the initial optimized image. This process can also be expressed as formula (2):
I_H = I_mask × G(I_coarse) + I_coarse        (2)
In formula (2), I_H denotes the compensated image; I_mask denotes the overexposure mask image; I_coarse denotes the initial optimized image; and G(I_coarse) denotes the global exposure information obtained after the generator processes the initial optimized image I_coarse.
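Formula (2) amounts to a masked residual addition, which can be sketched as follows; the constant arrays stand in for a real initial optimized image and generator output:

```python
import numpy as np

def compensate(coarse, mask, exposure_info):
    """Formula (2): I_H = I_mask * G(I_coarse) + I_coarse.

    `mask` is broadcast over the channel axis, so the generator output
    only contributes inside the highlight regions.
    """
    return mask[..., None] * exposure_info + coarse

coarse = np.full((2, 2, 3), 0.5, dtype=np.float32)   # initial optimized image
mask = np.array([[0.0, 1.0],
                 [0.0, 0.0]], dtype=np.float32)      # one overexposed pixel
info = np.full((2, 2, 3), 0.2, dtype=np.float32)     # stand-in for G(I_coarse)
out = compensate(coarse, mask, info)
```

Outside the mask the compensated image equals the initial optimized image; inside it, the generator's exposure information is added on top.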
The information compensation method for image highlight regions provided by the present application identifies the highlight regions of the image to be processed through its overexposure mask image; at the same time, global exposure information is extracted from the initial optimized image, and the overexposure information of the highlight regions of the initial optimized image is determined according to the overexposure mask image. After the overexposure information is fused with the initial optimized image, the content information missing from the highlight parts of the initial optimized image is made up for; the highlight regions of the resulting compensated image carry more feature information than those of the initial optimized image, which solves the problem of information loss in the highlight parts of the optimized image.
The information compensation method for highlight regions provided by the present application is general purpose. It can be applied in any task that requires color optimization or color conversion of an image to be processed, such as image editing, image retouching and color grading, image colorization, and SDR (Standard Dynamic Range) to HDR (High Dynamic Range) video conversion. Specifically, the overexposed image to be processed can first be optimized to obtain the initial optimized image, and then the content information of the highlight regions of the initial optimized image is compensated with the information compensation method for highlight regions.
Taking SDR-to-HDR video conversion as an example: due to the limitations of capture devices, existing HDR video resources are scarce, and the large number of existing SDR videos need to be converted into HDR videos to meet user demand. Fig. 3 is a schematic diagram of the HDR and SDR color gamut ranges, where BT.709 and BT.2020 are both television parameter standards published by the ITU (International Telecommunication Union), and DCI-P3 is the color gamut standard formulated by the American film industry for digital cinema. As can be seen from Fig. 3, among DCI-P3, BT.709, and BT.2020, the gamut of BT.2020 is the largest, that of DCI-P3 is the next largest, and that of BT.709 is the smallest. At present, SDR video uses the BT.709 gamut, while HDR video uses the wider BT.2020 or DCI-P3 gamut. For the same video, whether the HDR video uses the BT.2020 gamut or the DCI-P3 gamut, it can present higher contrast and richer colors than the SDR video.
In the prior art, most common methods for converting SDR video into HDR video convert SDR data into HDR data through image coding techniques, so that the HDR data can be played on HDR terminal devices. In addition, super-resolution conversion methods are needed to convert low-resolution SDR video content into high-resolution HDR video content that meets the HDR video standard. Existing video conversion methods have a high computational cost, and part of the content information of the highlight regions is lost in the converted HDR video, which affects video quality. The information compensation method for highlight regions provided by the present application can extract the information of the highlight regions in each video frame of the SDR video and fuse the information of the highlight regions with the initial optimized image corresponding to the video frame to obtain a compensated image, which avoids the loss of highlight region information in the HDR video.
It can be understood that, for different tasks, the initial model of the generator can be trained with a correspondingly designed training set and loss function, so as to obtain generators suitable for different tasks.
In the present application, the generator can be trained by constructing a generative adversarial network. The generative adversarial network includes the initial model of the generator and a discriminator. Adversarial training is performed on the generative adversarial network with a preset loss function and a training set to obtain the generator, where the training set includes initial optimized image samples, overexposure mask image samples, and compensated image samples corresponding to a plurality of image samples to be processed.
Taking the SDR-to-HDR video conversion task as an example, the training process and application of the generator provided by the present application are described below by way of example with reference to Fig. 4.
Step 1: obtain a training set.
For the SDR-to-HDR video conversion task, the training set includes a plurality of training samples, and each training sample includes an initial optimized image sample, an overexposure mask image sample, and an HDR video frame sample corresponding to an SDR video frame sample.
Specifically, SDR video samples and their corresponding HDR video samples are obtained first. For example, SDR video samples and corresponding HDR video samples can be obtained from public video websites. Alternatively, a video in the same RAW data format can be processed separately into SDR and HDR versions to obtain an SDR video sample and its corresponding HDR video sample. It is also possible to shoot corresponding SDR and HDR video samples of the same scene with an SDR camera and an HDR camera, respectively. After the SDR video samples and their corresponding HDR video samples are obtained, frames are extracted from both to obtain a plurality of SDR video frame samples (equivalent to the image samples to be processed), as well as HDR video frame samples (equivalent to the compensated image samples) in one-to-one temporal and spatial correspondence with the plurality of SDR video frame samples.
In one example, for each SDR video frame sample, the SDR video frame sample can be HDR-converted with a color lookup table, a trained deep learning method, or a conventional digital image processing method to obtain the corresponding initial optimized image sample. The overexposure mask image sample corresponding to the SDR video frame sample can be obtained with the above formula (1).
Step 2: process the initial optimized image samples in the training set with the initial model of the generator, multiply the result by the overexposure mask image samples, and then add the product to the initial optimized image samples to obtain predicted images.
Specifically, for each training sample in the training set, the initial optimized image sample is input into the initial model of the generator for processing to obtain global exposure information. The global exposure information is multiplied pixel by pixel by the overexposure mask image sample to obtain the overexposure information of the highlight regions. The overexposure information is fused with the initial optimized image sample to obtain the predicted image.
Step 3: input the predicted images and the corresponding HDR video frame samples in the training set into the discriminator for iterative training to obtain the trained generator.
In one embodiment, for each training sample in the training set, the predicted image and the corresponding HDR video frame sample are input into the discriminator for processing to obtain a discrimination result for the training sample. Adversarial training is performed according to the discrimination result of each training sample and the preset loss function to obtain the trained generator.
In an embodiment of the present application, the loss function describes the combined value of three losses: the absolute error loss value between the compensated image sample and the predicted image,
L_1 = ||I_GT − I_H||_1,
the perceptual loss value between the compensated image sample and the predicted image,
L_p = ||φ(I_GT) − φ(I_H)||_1, where φ(·) denotes the features extracted by a pretrained feature extraction network,
and the discriminator loss value of the predicted image, L_GAN = −log D(I_H). The preset loss function L provided by an embodiment of the present application can be expressed as formula (3):
L = α·L_1 + β·L_p + γ·L_GAN        (3)
where L_1 denotes the absolute error loss; L_p denotes the perceptual loss; L_GAN denotes the generative adversarial loss; I_GT denotes the compensated image sample; I_H denotes the predicted image; and α, β and γ are all hyperparameters.
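Formula (3) can be sketched as below. The mean reduction for the two distance terms, the identity-like stand-in for the feature extractor φ, and the hyperparameter values are illustrative assumptions; in practice φ would be a pretrained network and D(I_H) the discriminator's output for the predicted image:

```python
import numpy as np

def combined_loss(i_gt, i_h, d_of_ih, phi, alpha=1.0, beta=1.0, gamma=0.01):
    """L = alpha*L1 + beta*Lp + gamma*L_GAN, as in formula (3).

    `phi` is a feature extractor for the perceptual loss (any callable
    here; a pretrained network in practice), and `d_of_ih` is the
    discriminator output D(I_H) in (0, 1). The mean reduction and the
    default hyperparameters are placeholder choices.
    """
    l1 = np.abs(i_gt - i_h).mean()            # absolute error loss
    lp = np.abs(phi(i_gt) - phi(i_h)).mean()  # perceptual loss
    lgan = -np.log(d_of_ih)                   # adversarial loss -log D(I_H)
    return alpha * l1 + beta * lp + gamma * lgan

gt = np.zeros((4, 4, 3))
pred = np.full((4, 4, 3), 0.5)
loss = combined_loss(gt, pred, d_of_ih=0.5, phi=lambda x: x * 2.0)
```

During training, the generator is updated to minimize L while the discriminator is updated to distinguish predicted images from the compensated image samples.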
For example, the initial model of the generator can be trained by gradient descent. When the preset loss function meets a certain requirement, the model has converged, that is, the initial model has finished training and the trained generator is obtained.
As shown in Fig. 5, the trained generator can be applied to the task of converting an SDR video into an HDR video. For example, frames are extracted from the SDR video to be processed to obtain a plurality of SDR video frames. For each SDR video frame, HDR conversion is performed on the SDR video frame to obtain an HDR video frame, and the overexposure mask image corresponding to the SDR video frame is obtained. The HDR video frame is input into the trained generator to obtain global exposure information. The overexposure information of the highlight regions is determined according to the overexposure mask image and the global exposure information, and the highlight regions of the HDR video frame are compensated with the overexposure information to obtain a compensated image of the HDR video frame. Finally, the HDR video corresponding to the SDR video to be processed is obtained by reassembling the frames.
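The per-frame pipeline of Fig. 5 can be summarized with stub components; every function below is an illustrative placeholder rather than the actual HDR conversion, mask computation, or trained generator of the present application:

```python
import numpy as np

# Stubs standing in for the real components; names and bodies are illustrative.
def hdr_convert(sdr_frame):
    """Placeholder for the colour LUT / learned SDR-to-HDR mapping."""
    return np.clip(sdr_frame * 1.2, 0.0, 1.0)

def generator(hdr_frame):
    """Placeholder for the trained generator G(.)."""
    return np.zeros_like(hdr_frame)

def overexposure_mask(frame, threshold=0.95):
    """Binary mask of overexposed pixels (formula (1) style)."""
    return (frame.max(axis=-1, keepdims=True) > threshold).astype(frame.dtype)

def sdr_video_to_hdr(frames):
    """Per-frame pipeline of Fig. 5: HDR-convert each SDR frame, build its
    overexposure mask, run the generator, fuse via formula (2), then
    return the compensated frames for reassembly into a video."""
    out = []
    for sdr in frames:
        hdr = hdr_convert(sdr)
        mask = overexposure_mask(sdr)
        info = generator(hdr)
        out.append(mask * info + hdr)   # compensated HDR frame
    return out

frames = [np.full((4, 4, 3), 0.5, dtype=np.float32) for _ in range(3)]
hdr_frames = sdr_video_to_hdr(frames)
```

In a real system the frame list would come from decoding the SDR video, and the compensated frames would be re-encoded into the output HDR video.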
Based on the same inventive concept, as shown in Fig. 6, an embodiment of the present application provides an information compensation apparatus 100 for image highlight regions. The apparatus 100 includes:
an obtaining unit 101 configured to obtain an overexposure mask image and an initial optimized image of an image to be processed, where the overexposure mask image is used to indicate the highlight regions of the image to be processed; and
a processing unit 102 configured to obtain global exposure information according to the initial optimized image, determine the overexposure information of the highlight regions according to the overexposure mask image and the global exposure information, and compensate the highlight regions of the initial optimized image with the overexposure information to obtain a compensated image of the initial optimized image.
Optionally, obtaining the global exposure information according to the initial optimized image includes: inputting the initial optimized image into a trained generator for processing to obtain the global exposure information.
Optionally, the training method of the generator includes: constructing a generative adversarial network, where the generative adversarial network includes an initial model of the generator and a discriminator; and performing adversarial training on the generative adversarial network with a preset loss function and a training set to obtain the generator, where the training set includes initial optimized image samples, overexposure mask image samples, and compensated image samples corresponding to a plurality of image samples to be processed.
The loss function describes the combined loss value of the absolute error loss value between the compensated image sample and a predicted image, the perceptual loss value between the compensated image sample and the predicted image, and the discriminator loss value of the predicted image. The predicted image is the image obtained by processing the initial optimized image sample with the initial model, multiplying the result by the overexposure mask image sample, and then adding the product to the initial optimized image sample.
Optionally, the loss function is expressed as:
L = α·||I_GT − I_H||_1 + β·L_p(I_GT, I_H) − γ·log D(I_H)
where L denotes the loss function, I_GT denotes the compensated image sample, I_H denotes the predicted image, L_p denotes the perceptual loss, D(·) denotes the output of the discriminator, and α, β and γ are all hyperparameters.
Optionally, the method for determining the pixel values of the pixels in the overexposure mask image includes:
determining the pixel values of the pixels in the overexposure mask image according to the formula
I_mask(x, y) = 1, if I_S(x, y) > λ; I_mask(x, y) = 0, otherwise;
where I_mask(x, y) denotes the pixel value of the pixel of the overexposure mask image at (x, y), I_S(x, y) denotes the pixel value of the pixel of the image to be processed at (x, y), and λ denotes a preset overexposure threshold.
Optionally, the image to be processed is an SDR video frame extracted from an SDR video, and the initial optimized image is an HDR video frame obtained from the SDR video frame through HDR conversion.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the division into the above functional units and modules is only used as an example. In practical applications, the above functions can be allocated to different functional units and modules as needed; that is, the internal structure of the apparatus can be divided into different functional units or modules to complete all or part of the functions described above. The functional units and modules in the embodiments can be integrated into one processing unit, or each unit can exist physically alone, or two or more units can be integrated into one unit; the integrated unit can be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are only for the convenience of distinguishing them from each other, and are not used to limit the protection scope of the present application. For the specific working process of the units and modules in the above system, reference may be made to the corresponding process in the foregoing method embodiments, which will not be repeated here.
Based on the same inventive concept, an embodiment of the present application further provides a terminal device. As shown in Fig. 7, the terminal device 200 of this embodiment includes: a processor 201, a memory 202, and a computer program 204 stored in the memory 202 and executable on the processor 201. The computer program 204 can be run by the processor 201 to generate instructions 203, and the processor 201 can implement the steps in each of the above method embodiments according to the instructions 203. Alternatively, when the processor 201 executes the computer program 204, the functions of the modules/units in each of the above apparatus embodiments are implemented, for example the functions of the unit 101 and the unit 102 shown in Fig. 6.
For example, the computer program 204 can be divided into one or more modules/units, and the one or more modules/units are stored in the memory 202 and executed by the processor 201 to complete the present application. The one or more modules/units can be a series of computer program instruction segments capable of completing specific functions, and the instruction segments are used to describe the execution process of the computer program 204 in the terminal device 200.
Those skilled in the art can understand that Fig. 7 is only an example of the terminal device 200 and does not constitute a limitation on the terminal device 200, which may include more or fewer components than shown, or combine certain components, or use different components; for example, the terminal device 200 may also include input and output devices, network access devices, buses, and the like.
The processor 201 can be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor can be a microprocessor, or the processor can be any conventional processor or the like.
The memory 202 can be an internal storage unit of the terminal device 200, such as a hard disk or memory of the terminal device 200. The memory 202 can also be an external storage device of the terminal device 200, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card provided on the terminal device 200. Further, the memory 202 can include both an internal storage unit and an external storage device of the terminal device 200. The memory 202 is used to store the computer program and other programs and data required by the terminal device 200. The memory 202 can also be used to temporarily store data that has been output or will be output.
The terminal device provided in this embodiment can perform the above method embodiments; its implementation principle and technical effects are similar and will not be repeated here.
An embodiment of the present application further provides a computer-readable storage medium on which a computer program is stored, and the computer program, when executed by a processor, implements the method described in the above method embodiments.
An embodiment of the present application further provides a computer program product which, when run on a terminal device, causes the terminal device to perform the method described in the above method embodiments.
If the above integrated units are implemented in the form of software functional units and sold or used as independent products, they can be stored in a computer-readable storage medium. Based on this understanding, all or part of the procedures in the methods of the above embodiments of the present application can be completed by instructing related hardware through a computer program; the computer program can be stored in a computer-readable storage medium, and when executed by a processor, the steps of each of the above method embodiments can be implemented. The computer program includes computer program code, and the computer program code can be in the form of source code, object code, an executable file, or some intermediate form. The computer-readable storage medium can at least include: any entity or apparatus capable of carrying the computer program code to a photographing apparatus/terminal device, a recording medium, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, and a software distribution medium, for example a USB flash drive, a removable hard disk, a magnetic disk, or an optical disk.
Reference in this application to "one embodiment" or "some embodiments" and the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, the phrases "in one embodiment", "in some embodiments", "in some other embodiments", "in still other embodiments", and the like appearing in different places in this specification do not necessarily all refer to the same embodiment, but mean "one or more but not all embodiments", unless specifically emphasized otherwise. The terms "including", "comprising", "having", and their variations all mean "including but not limited to", unless specifically emphasized otherwise.
In the description of the present application, it should be understood that the terms "first" and "second" are used for description purposes only and cannot be interpreted as indicating or implying relative importance or implicitly indicating the quantity of the indicated technical features. Thus, a feature defined with "first" or "second" can explicitly or implicitly include at least one such feature.
In addition, in the present application, unless otherwise expressly specified and limited, the terms "connection", "connected", and the like should be understood in a broad sense: for example, a connection can be a mechanical connection or an electrical connection; it can be a direct connection or an indirect connection through an intermediate medium; and it can be internal communication between two elements or an interaction relationship between two elements. Unless otherwise clearly defined, those of ordinary skill in the art can understand the specific meanings of the above terms in the present application according to the specific situation.
The above embodiments are only used to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they can still modify the technical solutions recorded in the foregoing embodiments, or equivalently replace some or all of the technical features therein; and such modifications or replacements do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (10)

  1. An information compensation method for image highlight regions, characterized in that the method comprises:
    obtaining an overexposure mask image and an initial optimized image of an image to be processed, wherein the overexposure mask image is used to indicate highlight regions of the image to be processed;
    obtaining global exposure information according to the initial optimized image;
    determining overexposure information of the highlight regions according to the overexposure mask image and the global exposure information; and
    compensating the highlight regions of the initial optimized image with the overexposure information to obtain a compensated image of the initial optimized image.
  2. The method according to claim 1, characterized in that obtaining the global exposure information according to the initial optimized image comprises:
    inputting the initial optimized image into a trained generator for processing to obtain the global exposure information.
  3. The method according to claim 2, characterized in that the training method of the generator comprises:
    constructing a generative adversarial network, the generative adversarial network comprising an initial model of the generator and a discriminator; and
    performing adversarial training on the generative adversarial network with a preset loss function and a training set to obtain the generator, wherein the training set comprises initial optimized image samples, overexposure mask image samples, and compensated image samples corresponding to a plurality of image samples to be processed;
    wherein the loss function describes a combined loss value of an absolute error loss value between the compensated image sample and a predicted image, a perceptual loss value between the compensated image sample and the predicted image, and a discriminator loss value of the predicted image; and the predicted image is an image obtained by processing the initial optimized image sample with the initial model, multiplying the result by the overexposure mask image sample, and then adding the product to the initial optimized image sample.
  4. The method according to claim 3, characterized in that the loss function is expressed as:
    L = α·||I_GT − I_H||_1 + β·L_p(I_GT, I_H) − γ·log D(I_H)
    wherein L denotes the loss function, I_GT denotes the compensated image sample, I_H denotes the predicted image, L_p denotes the perceptual loss, D(·) denotes the output of the discriminator, and α, β and γ are all hyperparameters.
  5. The method according to claim 1, characterized in that the method for determining pixel values of pixels in the overexposure mask image comprises:
    determining the pixel values of the pixels in the overexposure mask image according to the formula
    I_mask(x, y) = 1, if I_S(x, y) > λ; I_mask(x, y) = 0, otherwise;
    wherein I_mask(x, y) denotes the pixel value of the pixel of the overexposure mask image at (x, y), I_S(x, y) denotes the pixel value of the pixel of the image to be processed at (x, y), and λ denotes a preset overexposure threshold.
  6. The method according to any one of claims 1 to 5, characterized in that the image to be processed is an SDR video frame extracted from an SDR video, and the initial optimized image is an HDR video frame obtained from the SDR video frame through HDR conversion.
  7. An information compensation apparatus for image highlight regions, characterized by comprising:
    an obtaining unit configured to obtain an overexposure mask image and an initial optimized image of an image to be processed, wherein the overexposure mask image is used to indicate highlight regions of the image to be processed; and
    a processing unit configured to obtain global exposure information according to the initial optimized image, determine overexposure information of the highlight regions according to the overexposure mask image and the global exposure information, and compensate the highlight regions of the initial optimized image with the overexposure information to obtain a compensated image of the initial optimized image.
  8. The apparatus according to claim 7, characterized in that obtaining the global exposure information according to the initial optimized image comprises:
    inputting the initial optimized image into a trained generator for processing to obtain the global exposure information.
  9. A terminal device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method according to any one of claims 1 to 6 when executing the computer program.
  10. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the method according to any one of claims 1 to 6.
PCT/CN2021/138081 2021-08-02 2021-12-14 Information compensation method, apparatus, device and storage medium for image highlight regions WO2023010751A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110883140.8A CN113781321B (zh) 2021-08-02 2021-08-02 Information compensation method, apparatus, device and storage medium for image highlight regions
CN202110883140.8 2021-08-02

Publications (1)

Publication Number Publication Date
WO2023010751A1 true WO2023010751A1 (zh) 2023-02-09

Family

ID=78836583

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/138081 WO2023010751A1 (zh) 2021-08-02 2021-12-14 图像高亮区域的信息补偿方法、装置、设备及存储介质

Country Status (2)

Country Link
CN (1) CN113781321B (zh)
WO (1) WO2023010751A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113781321B (zh) * 2021-08-02 2024-03-12 中国科学院深圳先进技术研究院 Information compensation method, apparatus, device and storage medium for image highlight regions
CN115082358B (zh) * 2022-07-21 2022-12-09 深圳思谋信息科技有限公司 Image enhancement method and apparatus, computer device, and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102289360A (zh) * 2011-08-25 2011-12-21 浙江大学 Adaptive projection color compensation method
CN104994364A (zh) * 2015-04-30 2015-10-21 西安电子科技大学 Image processing method and apparatus
CN105208281A (zh) * 2015-10-09 2015-12-30 广东欧珀移动通信有限公司 Night scene photographing method and apparatus
CN105335980A (zh) * 2015-11-02 2016-02-17 吉林大学 Method for converting a color image into a luminance image suitable for SIFT feature matching
CN106791471A (zh) * 2016-12-29 2017-05-31 宇龙计算机通信科技(深圳)有限公司 Image optimization method, image optimization apparatus, and terminal
CN112070682A (zh) * 2019-06-10 2020-12-11 杭州海康慧影科技有限公司 Method and apparatus for image brightness compensation
CN113038026A (zh) * 2021-03-01 2021-06-25 维沃移动通信有限公司 Image processing method and electronic device
CN113781321A (zh) * 2021-08-02 2021-12-10 中国科学院深圳先进技术研究院 Information compensation method, apparatus, device and storage medium for image highlight regions

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105100637A (zh) * 2015-08-31 2015-11-25 联想(北京)有限公司 Image processing method and electronic device
CN107635102B (zh) * 2017-10-30 2020-02-14 Oppo广东移动通信有限公司 Method and apparatus for obtaining exposure compensation values of high dynamic range images
US10764496B2 (en) * 2018-03-16 2020-09-01 Arcsoft Corporation Limited Fast scan-type panoramic image synthesis method and device
CN110062160B (zh) * 2019-04-09 2021-07-02 Oppo广东移动通信有限公司 Image processing method and apparatus
CN110210514B (zh) * 2019-04-24 2021-05-28 北京林业大学 Generative adversarial network training method, image completion method, device and storage medium


Also Published As

Publication number Publication date
CN113781321A (zh) 2021-12-10
CN113781321B (zh) 2024-03-12

Similar Documents

Publication Publication Date Title
WO2023010754A1 (zh) Image processing method and apparatus, terminal device, and storage medium
US10861133B1 (en) Super-resolution video reconstruction method, device, apparatus and computer-readable storage medium
WO2023010751A1 (zh) Information compensation method, apparatus, device and storage medium for image highlight regions
WO2023010750A1 (zh) Image color mapping method and apparatus, terminal device, and storage medium
JP7359521B2 (ja) Image processing method and apparatus
WO2023010749A1 (zh) HDR video conversion method, apparatus, device, and computer storage medium
CN111353948A (zh) Image noise reduction method, apparatus, and device
US20220261961A1 (en) Method and device, electronic equipment, and storage medium
US20170150041A1 (en) Double-exposure photographing method and apparatus of electronic device
CN108665415B (zh) Deep-learning-based image quality improvement method and apparatus
CN108550106B (zh) Color correction method and apparatus for panoramic images, and electronic device
WO2020215180A1 (zh) Image processing method and apparatus, and electronic device
US20210398247A1 (en) Image processing apparatus, image processing method, and storage medium
WO2021213336A1 (zh) Image quality enhancement apparatus and related method
CN113962859A (zh) Panorama generation method, apparatus, device, and medium
CN112686810A (zh) Image processing method and apparatus
US20180197282A1 (en) Method and device for producing a digital image
WO2023010755A1 (zh) HDR video conversion method, apparatus, device, and computer storage medium
US20160164961A1 (en) Method and apparatus for converting content using cloud
CN113112422A (zh) Image processing method and apparatus, electronic device, and computer-readable medium
CN111860363A (zh) Video image processing method and apparatus, electronic device, and storage medium
WO2023010753A1 (zh) Color gamut mapping method and apparatus, terminal device, and storage medium
WO2019179230A1 (zh) RAW-image-based panorama stitching method and electronic device
CN113132562A (zh) Lens shading correction method and apparatus, and electronic device
WO2017072011A1 (en) Method and device for selecting a process to be applied on video data from a set of candidate processes driven by a common set of information data

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21952614

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE