WO2021082088A1 - A tone mapping method, device, and electronic device - Google Patents

A tone mapping method, device, and electronic device

Info

Publication number
WO2021082088A1
WO2021082088A1 · PCT/CN2019/118585 · CN2019118585W
Authority
WO
WIPO (PCT)
Prior art keywords
component
dynamic range
range image
high dynamic
network
Prior art date
Application number
PCT/CN2019/118585
Other languages
English (en)
French (fr)
Inventor
王荣刚
张宁
高文
Original Assignee
北京大学深圳研究生院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京大学深圳研究生院
Publication of WO2021082088A1 (zh)
Priority to US17/725,334, published as US20220245775A1 (en)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/90 Dynamic range modification of images or parts thereof
    • G06T 5/92 Dynamic range modification of images or parts thereof based on global image properties
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/04 Context-preserving transformations, e.g. by using an importance map
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/60 Image enhancement or restoration using machine learning, e.g. neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/088 Non-supervised learning, e.g. competitive learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20172 Image enhancement details
    • G06T 2207/20208 High dynamic range [HDR] image processing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging

Definitions

  • This specification relates to the field of digital image processing, and in particular to a tone mapping method, device, and electronic device.
  • With the rapid development of High Dynamic Range (HDR) technology, high dynamic range videos, images, and other content are increasingly common. Compared with ordinary dynamic range images, high dynamic range images provide a greater dynamic range and more image detail, and can therefore better reproduce the visual impression of a real scene. However, most multimedia devices still display images with a limited dynamic range (i.e., low dynamic range), so high dynamic range images cannot be displayed properly on such devices. Displaying high dynamic range images properly on such devices, namely tone mapping, has therefore become an important technology in the field of digital image processing.
  • Because tone mapping is limited by the bit depth of multimedia devices and other constraints, a high dynamic range image cannot be reproduced completely on a multimedia device. How to compress the dynamic range while retaining as much local detail as possible, that is, restoring the high dynamic range image as faithfully as possible, has therefore become a research focus.
  • In a typical existing approach, a high dynamic range image is divided into a base layer and a detail layer by a filter. The base layer contains low-frequency information such as the brightness of the image, and the detail layer contains high-frequency information such as image edges. The base layer is compressed, the detail layer is enhanced, and the two are finally merged into a low dynamic range image.
  • However, the filtering process introduces noise such as halos and artifacts. This noise seriously affects the tone mapping result, easily causes color differences, and reduces the naturalness of the image. Existing tone mapping methods therefore cannot robustly convert a high dynamic range image into a low dynamic range image.
  • The purpose of the present invention is to provide a tone mapping method, device, and electronic device to solve the prior-art problems that tone mapping produces color differences and that the conversion is not robust enough.
  • An embodiment of the present specification provides a tone mapping method, the method includes:
  • when it is determined that the storage form of the high dynamic range image is a predetermined storage form, a decomposition operation is performed on the high dynamic range image to obtain the first component, the second component, and the third component of the high dynamic range image;
  • the first component and the second component are input into a predetermined deep neural network, and the deep neural network is used to map them respectively to obtain the mapped first component and second component;
  • the mapped first component and second component are fused with the third component to obtain a fused low dynamic range image corresponding to the high dynamic range image, so as to complete tone mapping.
  • Optionally, before the decomposition operation is performed on the high dynamic range image, the method further includes: when it is determined that the storage form of the high dynamic range image is a non-predetermined storage form, performing a conversion operation on the high dynamic range image to convert it into a high dynamic range image in the predetermined storage form, and performing the decomposition operation on the converted high dynamic range image.
  • the predetermined storage form includes an HSV color space
  • the performing a decomposition operation on the high dynamic range image to obtain the first component, the second component, and the third component of the high dynamic range image includes:
  • the components in the HSV color space corresponding to the high dynamic range image are extracted to obtain the first component, the second component, and the third component, wherein the first component includes saturation information, the second component includes brightness information, and the third component includes hue information.
  • the predetermined deep neural network is a generative adversarial network
  • the generative adversarial network includes a generative network and a discriminant network, wherein:
  • the generation network is established based on the U-Net network; the generation network includes an encoder and a decoder, the encoder includes at least one convolution block and a plurality of residual blocks, and the decoder includes a plurality of deconvolution blocks;
  • the discriminant network includes a plurality of convolutional blocks, and each convolutional block includes a convolutional layer, a normalization layer, and an activation layer arranged in sequence.
  • the generative adversarial network is obtained by training with a predetermined loss function, and the loss function includes one or more of a generative adversarial loss function, a mean square error function, and a multi-scale structural similarity loss function.
  • the fusing the mapped first component and the second component with the third component to obtain the fused low dynamic range image corresponding to the high dynamic range image includes:
  • the mapped first component and second component are superimposed with the third component to obtain a low dynamic range image conforming to the predetermined storage form.
  • the method further includes:
  • a conversion operation is performed on the low dynamic range image, so as to convert it into a low dynamic range image corresponding to the RGB color space.
  • An embodiment of the present specification provides a tone mapping device, the device includes:
  • the acquisition module is used to acquire one or more high dynamic range images and judge the storage form of the high dynamic range images
  • the decomposition module is used to perform a decomposition operation on the high dynamic range image when it is determined that the storage form of the high dynamic range image is a predetermined storage form, to obtain the first component, the second component, and the third component of the high dynamic range image;
  • the mapping module is used to input the first component and the second component into a predetermined deep neural network, and use the deep neural network to map the first component and the second component respectively to obtain the mapped first component and second component;
  • a fusion module for fusing the mapped first component and second component with the third component to obtain a fused low dynamic range image corresponding to the high dynamic range image, so as to complete tone mapping.
  • the device further includes:
  • the first conversion module is configured to perform a conversion operation on the high dynamic range image, before the decomposition operation is performed, when it is determined that the storage form of the high dynamic range image is a non-predetermined storage form, so as to convert it into a high dynamic range image in the predetermined storage form and perform the decomposition operation on the converted high dynamic range image.
  • the predetermined storage form includes an HSV color space
  • the decomposition module is specifically configured to:
  • the components in the HSV color space corresponding to the high dynamic range image are extracted to obtain the first component, the second component, and the third component, wherein the first component includes saturation information, the second component includes brightness information, and the third component includes hue information.
  • the fusion module is specifically used for:
  • the mapped first component and second component are superimposed with the third component to obtain a low dynamic range image conforming to the predetermined storage form.
  • the device further includes:
  • the second conversion module is configured to perform a conversion operation on the low dynamic range image after the low dynamic range image conforming to the predetermined storage format is obtained, so as to convert it into a low dynamic range image corresponding to the RGB color space.
  • An electronic device provided by an embodiment of this specification includes a memory, a processor, and a computer program stored in the memory and executable on the processor; the processor implements the above-mentioned tone mapping method when executing the program.
  • The present invention acquires one or more high dynamic range images and determines the storage form of each high dynamic range image. When the storage form of the high dynamic range image is a predetermined storage form, the high dynamic range image is decomposed into a first component, a second component, and a third component. The first component and the second component are input into a predetermined deep neural network, which maps them respectively to obtain the mapped first and second components. The mapped first and second components are then fused with the third component to obtain the fused low dynamic range image corresponding to the high dynamic range image, completing tone mapping.
  • FIG. 1 is a schematic flowchart of a tone mapping method provided by an embodiment of this specification;
  • FIG. 2 is a schematic flowchart of tone mapping using a generative adversarial network in a specific application scenario provided by an embodiment of this specification;
  • FIG. 3 is a schematic structural diagram of a tone mapping device provided by an embodiment of this specification.
  • As one of the important branches of image processing technology, high dynamic range (HDR) technology has risen rapidly, and high dynamic range videos and images are increasingly common. High dynamic range images provide a greater dynamic range and more detail than ordinary dynamic range images, and can therefore better reproduce the visual impression of a real scene. Dynamic range is the ratio of the highest luminance to the lowest luminance in a scene; in practical applications, an image whose dynamic range exceeds 10^5 may be considered a high dynamic range image.
  • Tone mapping refers to a computer graphics technology that approximately displays high dynamic range images on a limited dynamic range medium.
  • The limited dynamic range media include LCD display devices, projection devices, and so on. Because tone mapping is an ill-posed problem, limited by the bit depth of multimedia devices and other constraints, a high dynamic range image cannot be reproduced completely on a multimedia device. How to compress the dynamic range while retaining as much local detail as possible, that is, restoring the high dynamic range image as faithfully as possible, has therefore become a research focus.
  • In an existing approach, a high dynamic range image is divided into a base layer and a detail layer by a filter. The base layer contains low-frequency information such as the brightness of the image, and the detail layer contains high-frequency information such as image edges. The base layer is compressed, the detail layer is enhanced, and the two are finally merged into a low dynamic range image.
  • However, this existing processing method has many drawbacks. For example, the filtering process introduces noise such as halos and artifacts. This noise is difficult to eliminate; it seriously affects the tone mapping result, easily causes chromatic aberration, and degrades the naturalness of the image.
  • Existing deep learning methods perform tone mapping directly in the RGB color space, so the color difference problem remains unavoidable. In addition, existing deep learning methods still use the tone-mapped images produced by traditional filtering methods as the labels for training, but the low dynamic range images obtained by traditional filtering have relatively large color differences, so the quality of the image labels used for training is generally poor and it is difficult to learn high-quality tone mapping.
  • The following embodiments of this specification take high dynamic range images as the processing object. The embodiments of this specification do not limit the storage form of the high dynamic range images. For example, the processing object may be a high dynamic range image in the RGB color space; a high dynamic range image in the RGB color space is only one embodiment in a practical application scenario of this specification and does not limit the scope of application of the embodiments of this specification.
  • FIG. 1 is a schematic flowchart of a tone mapping method provided by an embodiment of this specification. The method may specifically include the following steps:
  • In step S110, one or more high dynamic range images are acquired, and the storage form of the high dynamic range images is judged.
  • The high dynamic range image can be regarded as the object of the tone mapping processing. Therefore, acquiring one or more high dynamic range images can be regarded as acquiring one or more original processing objects or target images.
  • the original processing object in the embodiment of this specification can be a high dynamic range image stored in any storage form.
  • The storage form of a high dynamic range image includes, but is not limited to, the RGB, HSV, CMY, CMYK, YIQ, Lab, and other color spaces. Different color spaces use different matrices and color variables, so the storage form of a high dynamic range image can be judged by analyzing its matrix structure or color variables.
  • Taking the HSV color space as an example, its spatial model is a hexagonal cone, and the color of an image is described by hue, saturation, and brightness.
  • In step S120, when it is determined that the storage form of the high dynamic range image is a predetermined storage form, a decomposition operation is performed on the high dynamic range image to obtain the first component, the second component, and the third component of the high dynamic range image.
  • the next step is determined according to the judgment result, which may specifically include the following situations:
  • Case 1: When it is determined that the storage form of the high dynamic range image is the predetermined storage form, the decomposition operation is performed on the high dynamic range image to obtain the first component, the second component, and the third component of the high dynamic range image.
  • the predetermined storage format may be HSV color space.
  • If the target image (that is, the high dynamic range image) is already stored in the HSV color space, the decomposition operation can be performed on the target image directly to obtain its first component, second component, and third component.
  • Case 2: When it is judged that the storage form of the high dynamic range image is not the predetermined storage form, that is, when the target image is not stored in the HSV color space (for example, when it is determined that the target image is stored in the RGB color space), a conversion operation must first be performed on the high dynamic range image to convert it into a high dynamic range image in the predetermined storage form (i.e., the HSV color space), and the decomposition operation is then performed on the converted high dynamic range image.
  • Specifically, the high dynamic range image can be converted from the RGB color space to the HSV color space based on computer vision processing technology such as OpenCV. By converting the storage form of the high dynamic range image, a high dynamic range image conforming to the predetermined storage form is obtained, so that the original processing object becomes an image that can be directly decomposed.
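  • As a dependency-free illustration of the conversion step above, the per-pixel RGB-to-HSV transform can be sketched with Python's standard `colorsys` module. (The specification itself suggests OpenCV, e.g. `cv2.cvtColor`; the function name below is ours, not from the patent.)

```python
import colorsys

# Convert one RGB pixel (values in [0, 1]) to HSV, as the preprocessing
# step does for every pixel of an RGB-stored image. colorsys is used
# here only to keep the sketch dependency-free; real code would batch
# this with an image library such as OpenCV.
def rgb_pixel_to_hsv(r, g, b):
    return colorsys.rgb_to_hsv(r, g, b)

h, s, v = rgb_pixel_to_hsv(0.5, 0.25, 0.25)
# V is the maximum channel; S = (max - min) / max; hue 0.0 is pure red.
assert (h, s, v) == (0.0, 0.5, 0.5)
```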
  • The following method can be used to perform the decomposition operation on the high dynamic range image to obtain the first component, the second component, and the third component, which may specifically include the following:
  • the HSV color space uses Hue, Saturation, and Value to describe the color of an image
  • The HSV color space contains a hue component (H channel), a saturation component (S channel), and a brightness component (V channel), so these three components can be extracted directly from the HSV color space and denoted as the first component, the second component, and the third component.
  • The first component can be used to represent the saturation information, the second component the brightness information, and the third component the hue information.
  • The terms “first”, “second”, and “third” in the above components are used only to distinguish different components and do not limit the specific names or contents of the components.
  • The reason the embodiments of this specification convert the original processing object into the HSV color space and decompose the components there is that tone mapping mainly compresses the dynamic range, while hue problems are generally handled by color gamut mapping. Therefore, the high dynamic range image is converted from the RGB color space to the HSV color space and decomposed into the H channel, S channel, and V channel, where the H channel contains the hue information, the S channel contains the saturation information, and the V channel contains the brightness information. The saturation component and the brightness component are learned and mapped, while the hue component is temporarily left unprocessed and retained; the components are then merged into a low dynamic range image. Because the hue component is retained, the impact on colors is reduced and the color difference of the tone-mapped image is smaller.
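  • The decomposition into H, S, and V channels described above amounts to simple channel slicing once the image is stored as an array; a minimal sketch, assuming NumPy and a channels-last (H, S, V) layout, which is our assumption rather than something the patent specifies:

```python
import numpy as np

# Channel split sketch for step S120: an H*W*3 HSV array is separated
# into the hue map (kept untouched), the saturation map, and the
# brightness map that the network later maps.
hsv = np.arange(48, dtype=float).reshape(4, 4, 3)
h_chan, s_chan, v_chan = hsv[..., 0], hsv[..., 1], hsv[..., 2]

# Each component is a 2-D map; restacking them recovers the original
# image, which is what the later fusion step relies on.
assert h_chan.shape == (4, 4)
assert np.array_equal(np.stack([h_chan, s_chan, v_chan], axis=-1), hsv)
```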
  • In step S130, the first component and the second component are input into a predetermined deep neural network, and the deep neural network is used to map the first component and the second component respectively to obtain the mapped first component and second component.
  • the predetermined deep neural network is a generative adversarial network.
  • The generative adversarial network can include a generative network and a discriminant network, whose structures are further described below:
  • the generation network is established based on the U-Net network.
  • the generation network includes an encoder and a decoder.
  • the encoder contains at least one convolution block and multiple residual blocks, and the decoder contains multiple deconvolution blocks;
  • The generation network can also be called a generator and is established based on the U-Net network structure. The encoder contains one convolution block and four residual blocks arranged in sequence, where the convolution block contains a convolutional layer and an activation layer; the convolution kernel of the convolutional layer is 3*3, the stride is 2, the padding is 1, and the number of channels is 64. Each residual block contains a convolutional layer, an activation layer, a convolutional layer, and an activation layer arranged in sequence, and before the second activation layer the input of the current residual block is added to the output of the second convolutional layer. The convolution kernel of the convolutional layers in the residual blocks is 3*3, the stride is 2, and the number of channels doubles from 64 with each successive residual block. The activation layers in the encoder use the ReLU activation function, and mirror-symmetric edge padding is used to keep the size of the feature map unchanged. After the last residual block of the encoder there is also a convolutional layer with 512 channels and a 1*1 convolution kernel for feature transformation.
  • The decoder contains five deconvolution blocks arranged in sequence for upsampling. The convolution kernel of the deconvolution layer (transposed convolutional layer) in each deconvolution block is 3*3, the stride is 2, and the number of channels is halved successively. A skip connection is added between convolution blocks of the same resolution in the encoder and the decoder to recover the spatial structure information lost when the resolution is halved. Two convolution blocks are connected after the decoder for fine adjustment; the convolution kernel of their convolutional layers is 3*3, the stride is 1, and the numbers of channels are 64 and 2, respectively. The ReLU activation function is used for all activation layers except the last, which uses the Sigmoid activation function.
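  • The channel widths stated above can be sketched as simple bookkeeping: the encoder's four residual blocks double the channel count starting from 64, and the five decoder deconvolution blocks halve it. The decoder's starting width of 512 is our assumption (matching the encoder's 1*1 feature-transformation layer); the text itself only says the counts double and then halve.

```python
# Channel schedule sketch for the generator described in the text.
def encoder_channels(start=64, n_blocks=4):
    chans, c = [], start
    for _ in range(n_blocks):
        chans.append(c)
        c *= 2  # each residual block doubles the channel count
    return chans

def decoder_channels(start=512, n_blocks=5):
    chans, c = [], start
    for _ in range(n_blocks):
        chans.append(c)
        c //= 2  # each deconvolution block halves the channel count
    return chans

assert encoder_channels() == [64, 128, 256, 512]
assert decoder_channels() == [512, 256, 128, 64, 32]
```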
  • The discriminant network includes multiple convolution blocks, each containing a convolutional layer, a normalization layer, and an activation layer arranged in sequence. Further, in the embodiments of this specification the discriminant network can also be called a discriminator. The discriminant network is composed of four convolution blocks; the convolution kernel of the convolutional layer in each convolution block is 3*3 and the stride is 2. The normalization layers in the discriminant network adopt layer normalization, and the activation layers adopt the ReLU activation function.
  • The generative adversarial network can be trained with a predetermined loss function, and the loss function includes one or more of a generative adversarial loss function, a mean square error function, and a multi-scale structural similarity loss function.
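  • A hedged sketch of how such a combined objective might be assembled as a weighted sum: the weights and the stand-in adversarial and MS-SSIM terms below are placeholders of our own, since the text only says the loss "includes one or more" of these terms.

```python
import numpy as np

# Combined training objective sketch: adversarial + MSE + MS-SSIM terms.
# adv_term and msssim_term would come from the discriminator and an
# MS-SSIM implementation respectively; here they are plain inputs.
def total_loss(pred, target, adv_term, msssim_term,
               w_adv=1.0, w_mse=10.0, w_msssim=5.0):
    mse = np.mean((pred - target) ** 2)
    return w_adv * adv_term + w_mse * mse + w_msssim * msssim_term

pred = np.array([0.5, 0.5])
target = np.array([0.0, 1.0])
# MSE = (0.25 + 0.25) / 2 = 0.25, so with zero adv/MS-SSIM terms the
# total is 10 * 0.25 = 2.5.
assert abs(total_loss(pred, target, 0.0, 0.0) - 2.5) < 1e-9
```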
  • In step S140, the mapped first component and second component are fused with the third component to obtain a fused low dynamic range image corresponding to the high dynamic range image, so as to complete tone mapping.
  • Specifically, the brightness component and the saturation component are input into the generative adversarial network to learn the mapping, the mapped brightness component and saturation component are output, and the mapped brightness and saturation components are fused with the hue component. The fused low dynamic range image corresponding to the original processing object (the high dynamic range image) is then obtained, that is, tone mapping is completed.
  • the above-mentioned components may be merged in the following manner to obtain a low dynamic range image, specifically:
  • The mapped first component and second component are superimposed with the third component to obtain a low dynamic range image conforming to the predetermined storage form.
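  • The superposition and back-conversion of step S140 can be illustrated per pixel with the standard-library `colorsys` module; the "mapped" S and V values below are stand-ins for the trained generator's output, and the function name is ours:

```python
import colorsys

# Fusion sketch: recombine the mapped S and V values with the untouched
# H value, then convert the HSV result back to RGB for display.
def fuse_and_convert(h, s_mapped, v_mapped):
    return colorsys.hsv_to_rgb(h, s_mapped, v_mapped)

r, g, b = fuse_and_convert(0.0, 0.5, 0.5)  # hue 0.0 = red
assert (r, g, b) == (0.5, 0.25, 0.25)
```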
  • Since the mapped S channel and V channel are superimposed with the retained H channel, the result is still a low dynamic range image in the HSV color space. After the low dynamic range image conforming to the predetermined storage form is obtained, a conversion operation may also be performed on it to convert it into a low dynamic range image in the RGB color space. Of course, the embodiments of this specification place no specific restrictions on the color space of the original processing object (the high dynamic range image), so the color space to which the low dynamic range image is converted can be determined according to actual needs.
  • FIG. 2 shows a schematic flowchart of tone mapping using a generative adversarial network in a specific application scenario provided by an embodiment of this specification.
  • In the embodiments of this specification, sufficient multi-scale information is learned by using the U-Net network structure in the generator. Since tone mapping mainly maps brightness, and information such as the structure of objects should not change, residual blocks are introduced into the encoder to reduce the difficulty of network learning while maintaining structural integrity and avoiding information loss. Because tone mapping often produces unrealistic results, using a generative adversarial network and introducing an adversarial loss to learn at the perceptual level improves the naturalness of the mapped image.
  • The saturation component and the brightness component of the high dynamic range image are input into the generative adversarial network simultaneously to learn the mapping, the original hue component is retained, and the components are finally merged into a low dynamic range image. Since the present invention fuses the brightness and saturation components learned and mapped by the generative adversarial network with the original hue component, the result is not only highly consistent in structure with the original high dynamic range image but also highly natural, avoiding problems such as color difference while learning the brightness and saturation mapping.
  • Using the images obtained by tone mapping in the embodiments of this specification as the data set for training the generative adversarial network can improve the learning effect of the neural network, and a high-quality tone mapping label data set can also be obtained by adjusting the parameters.
  • FIG. 3 shows a tone mapping device provided by an embodiment of this specification.
  • the device 300 mainly includes:
  • the acquiring module 301 is configured to acquire one or more high dynamic range images, and judge the storage form of the high dynamic range images;
  • the decomposition module 302 is configured to perform a decomposition operation on the high dynamic range image when it is determined that the storage form of the high dynamic range image is a predetermined storage form, to obtain the first component, the second component, and the third component of the high dynamic range image;
  • the mapping module 303 is configured to input the first component and the second component into a predetermined deep neural network, and use the deep neural network to map the first component and the second component respectively to obtain the mapped first component and second component;
  • the fusion module 304 is configured to fuse the mapped first component and second component with the third component to obtain a fused low dynamic range image corresponding to the high dynamic range image, so as to complete tone mapping.
  • the device further includes:
  • the first conversion module 305 is configured to perform a conversion operation on the high dynamic range image, before the decomposition operation is performed, when it is determined that the storage form of the high dynamic range image is a non-predetermined storage form, so as to convert it into a high dynamic range image in the predetermined storage form and perform the decomposition operation on the converted high dynamic range image.
  • the predetermined storage form includes an HSV color space
  • the decomposition module 302 is specifically configured to:
  • the components in the HSV color space corresponding to the high dynamic range image are extracted to obtain the first component, the second component, and the third component; wherein the first component includes saturation information, the second component includes brightness information, and the third component includes hue information.
  • the fusion module 304 is specifically configured to:
  • the first component and the second component after the mapping are superimposed with the third component to obtain a low dynamic range image conforming to a predetermined storage format.
  • the device further includes:
  • the second conversion module 306 is configured to perform a conversion operation on the low dynamic range image after the low dynamic range image conforming to the predetermined storage format is obtained, so as to convert it into a low dynamic range image corresponding to the RGB color space.
  • the embodiments of the present specification also provide an electronic device including a memory, a processor, and a computer program stored in the memory and executable on the processor; the processor implements the above tone mapping method when executing the program.
  • the device, electronic device, and method provided in the embodiments of this specification correspond to one another; the device and electronic device therefore have beneficial technical effects similar to those of the corresponding method. Since the beneficial technical effects of the method have been described in detail above, those of the corresponding device and electronic device are not repeated here.
  • program modules include routines, programs, objects, components, data structures, etc. that perform specific tasks or implement specific abstract data types.
  • the instructions can also be practiced in distributed computing environments where tasks are performed by remote processing devices connected through a communication network.
  • program modules can be located in local and remote computer storage media including storage devices.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Processing (AREA)

Abstract

The embodiments of this specification provide a tone mapping method, device, and electronic equipment. The method includes: acquiring one or more high dynamic range images and determining the storage form of the high dynamic range images; when the storage form of a high dynamic range image is a predetermined storage form, decomposing the high dynamic range image into a first component, a second component, and a third component; inputting the first component and the second component into a predetermined deep neural network and mapping each of them with the deep neural network to obtain the mapped first component and second component; and fusing the mapped first component and second component with the third component to obtain a fused low dynamic range image corresponding to the high dynamic range image, thereby completing the tone mapping. The technical solution of this application reduces the color difference of tone-mapped images and completes tone mapping more robustly.

Description

Tone mapping method and device, and electronic equipment
This application claims priority to Chinese patent application CN201911057461.1, entitled "Tone mapping method and device, and electronic equipment", filed on October 31, 2019, the entire contents of which are incorporated herein by reference.
Technical Field
This specification relates to the field of digital image processing, and in particular to a tone mapping method, device, and electronic equipment.
Background
With the rapid development of high dynamic range (HDR) technology, HDR video and image content is increasingly common. Compared with ordinary dynamic range images, high dynamic range images provide a wider dynamic range and more image detail, and therefore better reproduce the visual experience of real environments. However, most multimedia devices still display images of limited (i.e., low) dynamic range, on which high dynamic range images cannot be displayed directly. How to display high dynamic range images on such devices, namely tone mapping, has therefore become an important technology in digital image processing. Because tone mapping is constrained by conditions such as the bit depth of the multimedia device, a high dynamic range image cannot be reproduced on it exactly; research has therefore focused on preserving as much local detail as possible, that is, restoring the high dynamic range image as faithfully as possible, while compressing the dynamic range.
In the prior art, a filter divides a high dynamic range image into a base layer and a detail layer: the base layer contains low-frequency information such as luminance, and the detail layer contains high-frequency information such as edges. The base layer is compressed, the detail layer is enhanced, and the two are finally fused into a low dynamic range image. The filtering process, however, introduces noise such as halos and artifacts that severely affects the tone mapping result, easily causing color difference and reducing the naturalness of the image; existing tone mapping methods therefore cannot robustly convert high dynamic range images into low dynamic range images.
Based on the prior art, a tone mapping solution is needed that avoids the influence of noise, reduces image color difference, and robustly converts high dynamic range images into low dynamic range images.
Summary of the Invention
In view of this, the purpose of the present invention is to provide a tone mapping method, device, and electronic equipment, so as to solve the prior-art problems that tone mapping produces color difference and that the conversion is not robust.
To solve the above technical problems, the embodiments of this specification are implemented as follows:
An embodiment of this specification provides a tone mapping method, the method including:
acquiring one or more high dynamic range images, and determining the storage form of the high dynamic range images;
when the storage form of the high dynamic range image is determined to be a predetermined storage form, performing a decomposition operation on the high dynamic range image to obtain a first component, a second component, and a third component of the high dynamic range image;
inputting the first component and the second component into a predetermined deep neural network, and mapping each of them with the deep neural network to obtain the mapped first component and second component;
fusing the mapped first component and second component with the third component to obtain a fused low dynamic range image corresponding to the high dynamic range image, thereby completing the tone mapping.
Optionally, before the decomposition operation is performed on the high dynamic range image, the method further includes:
when the storage form of the high dynamic range image is determined to be a non-predetermined storage form, performing a conversion operation on the high dynamic range image to convert it into a high dynamic range image in the predetermined storage form, and performing the decomposition operation on the converted high dynamic range image.
Optionally, the predetermined storage form includes the HSV color space, and performing the decomposition operation on the high dynamic range image to obtain its first, second, and third components includes:
extracting the components in the HSV color space corresponding to the high dynamic range image to obtain the first, second, and third components; wherein the first component includes saturation information, the second component includes brightness information, and the third component includes hue information.
Optionally, the predetermined deep neural network is a generative adversarial network comprising a generator network and a discriminator network, wherein:
the generator network is built on the U-Net network and includes an encoder and a decoder; the encoder contains at least one convolution block and a plurality of residual blocks, and the decoder contains a plurality of deconvolution blocks;
the discriminator network includes a plurality of convolution blocks, each containing, in sequence, a convolution layer, a normalization layer, and an activation layer.
Optionally, the generative adversarial network is trained with a predetermined loss function comprising one or more of a generative adversarial loss function, a mean squared error function, and a multi-scale structural similarity loss function.
Optionally, fusing the mapped first and second components with the third component to obtain the fused low dynamic range image corresponding to the high dynamic range image includes:
superimposing the mapped first and second components with the third component to obtain a low dynamic range image in the predetermined storage form.
Optionally, after the low dynamic range image in the predetermined storage form is obtained, the method further includes:
performing a conversion operation on the low dynamic range image to convert it into a low dynamic range image corresponding to the RGB color space.
An embodiment of this specification provides a tone mapping device, the device including:
an acquiring module, configured to acquire one or more high dynamic range images and determine the storage form of the high dynamic range images;
a decomposition module, configured to, when the storage form of the high dynamic range image is determined to be a predetermined storage form, perform a decomposition operation on the high dynamic range image to obtain its first component, second component, and third component;
a mapping module, configured to input the first component and the second component into a predetermined deep neural network and map each of them with the deep neural network to obtain the mapped first component and second component;
a fusion module, configured to fuse the mapped first component and second component with the third component to obtain a fused low dynamic range image corresponding to the high dynamic range image, thereby completing the tone mapping.
Optionally, the device further includes:
a first conversion module, configured to, before the decomposition operation is performed on the high dynamic range image and when its storage form is determined to be a non-predetermined storage form, perform a conversion operation on the high dynamic range image to convert it into a high dynamic range image in the predetermined storage form, the decomposition operation then being performed on the converted image.
Optionally, the predetermined storage form includes the HSV color space, and the decomposition module is specifically configured to:
extract the components in the HSV color space corresponding to the high dynamic range image to obtain the first, second, and third components; wherein the first component includes saturation information, the second component includes brightness information, and the third component includes hue information.
Optionally, the fusion module is specifically configured to:
superimpose the mapped first and second components with the third component to obtain a low dynamic range image in the predetermined storage form.
Optionally, the device further includes:
a second conversion module, configured to, after the low dynamic range image in the predetermined storage form is obtained, perform a conversion operation on the low dynamic range image to convert it into a low dynamic range image corresponding to the RGB color space.
An embodiment of this specification provides an electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor; the processor implements the above tone mapping method when executing the program.
At least one of the above technical solutions adopted in the embodiments of this specification can achieve the following beneficial effects:
The present invention acquires one or more high dynamic range images and determines their storage form; when the storage form is a predetermined storage form, decomposes the high dynamic range image into a first component, a second component, and a third component; inputs the first and second components into a predetermined deep neural network and maps each of them with the network to obtain the mapped first and second components; and fuses the mapped first and second components with the third component to obtain a fused low dynamic range image corresponding to the high dynamic range image, completing the tone mapping. This technical solution avoids the influence of noise, reduces the color difference of the tone-mapped low dynamic range image, and converts high dynamic range images into low dynamic range images more robustly.
Brief Description of the Drawings
To explain the embodiments of the present invention or the technical solutions of the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings described below are obviously only embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a tone mapping method provided by an embodiment of this specification;
Fig. 2 is a schematic flowchart of tone mapping with a generative adversarial network in a specific application scenario, provided by an embodiment of this specification;
Fig. 3 is a schematic structural diagram of a tone mapping device provided by an embodiment of this specification.
Detailed Description
To enable those skilled in the art to better understand the technical solutions in this specification, the technical solutions in the embodiments are described clearly and completely below with reference to the drawings. The described embodiments are obviously only some, not all, of the embodiments of this application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of this specification without creative effort shall fall within the protection scope of this application.
With the development of digital image processing, high dynamic range (HDR) technology, one of its important branches, has risen accordingly, and HDR video and image content is increasingly common. A high dynamic range image can be regarded as an image that provides a wider dynamic range and more detail than an ordinary image, and therefore better reproduces the visual experience of real environments. Dynamic range is the ratio of the highest to the lowest luminance in a scene; in practice, an image whose dynamic range exceeds 10^5 can be regarded as a high dynamic range image. However, most multimedia devices still display images of limited (low) dynamic range, on which high dynamic range images cannot be displayed directly, so tone mapping has become an important technique in digital image processing.
Tone mapping is a computer graphics technique for approximately displaying high dynamic range images on media of limited dynamic range, such as LCD displays and projection devices. Because tone mapping is an ill-posed problem, constrained by conditions such as the bit depth of the multimedia device, a high dynamic range image cannot be reproduced on it exactly; research has therefore focused on preserving as much local detail as possible, that is, restoring the high dynamic range image as faithfully as possible, while compressing the dynamic range.
In the prior art, a filter divides a high dynamic range image into a base layer and a detail layer: the base layer contains low-frequency information such as luminance, and the detail layer contains high-frequency information such as edges. The base layer is compressed, the detail layer is enhanced, and the two are finally fused into a low dynamic range image. This existing approach, however, has many drawbacks: the filtering process introduces noise such as halos and artifacts that is difficult to remove, and this noise severely affects the tone mapping result, easily causing color difference and reducing the naturalness of the image.
Further, although deep learning methods for tone mapping have been proposed in the prior art, they perform the mapping directly in the RGB color space and thus still cannot avoid the color difference problem. Moreover, these methods still use tone-mapped images produced by traditional filtering methods as training labels; since the low dynamic range images obtained by traditional filtering themselves exhibit large color differences, the overall quality of the training labels is poor, making it difficult to learn high-quality tone-mapped images.
Therefore, for high dynamic range images, a tone mapping solution is needed that avoids the influence of noise, reduces the color difference of tone-mapped images, and more robustly converts high dynamic range images into low dynamic range images. It should be noted that the following embodiments of this specification take high dynamic range images as the processing object and place no restriction on their storage form. For example, a high dynamic range image stored in the RGB color space may be taken as the processing object; it is only one embodiment of a practical application scenario and does not limit the scope of application of the embodiments.
Fig. 1 is a schematic flowchart of a tone mapping method provided by an embodiment of this specification. The method may specifically include the following steps:
In step S110, one or more high dynamic range images are acquired, and the storage form of the high dynamic range images is determined.
In one or more embodiments of this specification, the high dynamic range image is the object of the tone mapping process, so acquiring one or more high dynamic range images can be regarded as acquiring one or more original processing objects, or target images. As described above, the original processing object in the embodiments may be a high dynamic range image stored in any form; in practice, storage forms include, but are not limited to, color spaces such as RGB, HSV, CMY, CMYK, YIQ, and Lab.
Further, since images are stored in a computer as matrices, different color-space storage forms can be regarded as using different matrices and color variables, so the storage form of a high dynamic range image can be determined by analyzing its matrix structure or colors. The HSV color space, for example, uses a hexcone model and describes the color of an image by hue, saturation, and value.
In step S120, when the storage form of the high dynamic range image is determined to be a predetermined storage form, a decomposition operation is performed on the high dynamic range image to obtain its first component, second component, and third component.
In one or more embodiments of this specification, based on the above determination of the storage form (that is, of the color space), the next operation is chosen according to the result, which may include the following cases:
Case 1: when the storage form of the high dynamic range image is determined to be the predetermined storage form, the decomposition operation is performed on the high dynamic range image to obtain its first, second, and third components.
Further, in the embodiments of this specification, the predetermined storage form may be the HSV color space. When the storage form of the high dynamic range image is determined to be the HSV color space, the decomposition operation can be performed directly on the target image (the high dynamic range image) to obtain its first, second, and third components.
Case 2: when the storage form of the high dynamic range image is determined to be a non-predetermined storage form, that is, when the target image is not stored in the HSV color space (for example, it is determined to be stored in the RGB color space), a conversion operation must be performed on the high dynamic range image before the decomposition operation, so as to convert it into the predetermined storage form (the HSV color space); the decomposition operation is then performed on the converted image.
Further, taking as an example a target image (the original processing object) that is a high dynamic range image in the RGB color space, the image can be converted from the RGB color space to the HSV color space using the computer vision functions of OpenCV. By converting the storage form, a high dynamic range image in the predetermined storage form is obtained, turning the original processing object into an image ready for decomposition.
In a specific embodiment of this specification, once the high dynamic range image in the HSV color space has been obtained, the decomposition operation can be performed as follows to obtain the first, second, and third components:
the components in the HSV color space corresponding to the high dynamic range image are extracted to obtain the first, second, and third components, wherein the first component includes saturation information, the second component includes brightness information, and the third component includes hue information.
Since the HSV color space describes the color of an image by hue, saturation, and value, it contains a hue component (the H channel), a saturation component (the S channel), and a value component (the V channel). These three components can be extracted directly from the HSV color space and denoted the first, second, and third components, where the first component represents the saturation information, the second the brightness information, and the third the hue information. The words "first", "second", and "third" merely distinguish the components and do not limit their specific names or contents.
It is worth noting that the embodiments convert the original processing object to the HSV color space and decompose the high dynamic range image there because tone mapping is mainly concerned with compressing the dynamic range, while hue issues are generally handled by gamut mapping. The high dynamic range image is therefore converted from the RGB color space to the HSV color space and decomposed into the H, S, and V channels, where the H channel carries the hue information, the S channel the saturation information, and the V channel the brightness information. The mapping is learned for the saturation and value components, while the hue component is left unprocessed and preserved; the channels are then fused into the low dynamic range image. Because the hue component is preserved, the impact on color is reduced, which lowers the color difference of the tone-mapped image.
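The decomposition just described (RGB to HSV, then splitting into S, V, and H planes) can be sketched with Python's standard `colorsys` module. The function name and the nested-list image representation are illustrative assumptions, not part of the patent; in practice a library call such as OpenCV's `cv2.cvtColor` would be used on a real image array:

```python
import colorsys

def decompose_to_hsv_planes(rgb_image):
    """Split an RGB image (nested lists of (r, g, b) floats in [0, 1]) into
    the three planes the text calls the first (S), second (V), and third (H)
    component.  colorsys keeps this sketch dependency-free."""
    s_plane, v_plane, h_plane = [], [], []
    for row in rgb_image:
        s_row, v_row, h_row = [], [], []
        for r, g, b in row:
            h, s, v = colorsys.rgb_to_hsv(r, g, b)
            s_row.append(s)
            v_row.append(v)
            h_row.append(h)
        s_plane.append(s_row)
        v_plane.append(v_row)
        h_plane.append(h_row)
    return s_plane, v_plane, h_plane

# a single pure-red pixel: full saturation, full value, hue 0
s, v, h = decompose_to_hsv_planes([[(1.0, 0.0, 0.0)]])
```

The S and V planes would then be fed to the network, while the H plane is simply kept aside for the final fusion.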
In step S130, the first component and the second component are input into a predetermined deep neural network, which maps each of them to obtain the mapped first component and second component.
In one or more embodiments of this specification, the predetermined deep neural network is a generative adversarial network comprising a generator network and a discriminator network, whose structures are described further below:
The generator network is built on the U-Net network and includes an encoder and a decoder; the encoder contains at least one convolution block and a plurality of residual blocks, and the decoder contains a plurality of deconvolution blocks.
Further, in the embodiments of this specification, the generator network (also called the generator) is built on the U-Net structure. The encoder contains, in sequence, one convolution block and four residual blocks. The convolution block contains a convolution layer and an activation layer; its convolution kernel is 3*3 with stride 2, padding 1, and 64 channels. Each residual block contains, in sequence, a convolution layer, an activation layer, a convolution layer, and an activation layer, with the block's input added to the output of the second convolution layer before the second activation layer. The convolution kernels in the residual blocks are 3*3 with stride 2, and the channel count of the residual blocks doubles starting from 64. The activation layers in the encoder use the ReLU activation function, and mirror-symmetric padding is used to keep the feature-map size unchanged. After the last residual block of the encoder, a 1*1 convolution layer with 512 channels performs a feature transformation.
The decoder contains five deconvolution (transposed convolution) blocks in sequence for upsampling; their kernels are 3*3 with stride 2, and the channel count halves at each block. Skip connections are added between encoder and decoder convolution blocks of the same resolution to recover the spatial structure information lost when the resolution is halved. After the decoder, two convolution blocks perform fine-tuning, with 3*3 kernels, stride 1, and 64 and 2 channels respectively. In the decoder and these two blocks, all activation layers use the ReLU activation function except the last, which uses the Sigmoid activation function.
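As a sanity check on the encoder just described, the feature-map shapes can be traced with simple arithmetic. This is a bookkeeping sketch, not an implementation of the network; it assumes the first residual block keeps 64 channels (the text's "doubling from 64" is ambiguous on whether the first residual block has 64 or 128 channels):

```python
def encoder_shapes(height, width):
    """Trace (channels, h, w) after each encoder stage: one stride-2
    convolution block with 64 channels, four stride-2 residual blocks with
    channels doubling from 64, then a 1x1 convolution to 512 channels."""
    shapes = []
    ch, h, w = 64, height // 2, width // 2   # initial conv block, stride 2
    shapes.append((ch, h, w))
    res_ch = 64
    for _ in range(4):                        # residual blocks: 64, 128, 256, 512
        h, w = h // 2, w // 2
        shapes.append((res_ch, h, w))
        res_ch *= 2
    shapes.append((512, h, w))                # final 1x1 feature transform
    return shapes

# a 256x256 input is downsampled five times, ending at 8x8 with 512 channels,
# which the five stride-2 deconvolution blocks then upsample back to 256x256
stages = encoder_shapes(256, 256)
```

The five downsampling stages being mirrored by five deconvolution blocks is what allows the skip connections to join encoder and decoder features of equal resolution.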
The discriminator network (also called the discriminator) includes a plurality of convolution blocks, each containing, in sequence, a convolution layer, a normalization layer, and an activation layer. Further, in the embodiments of this specification, the discriminator consists of four convolution blocks; their convolution kernels are 3*3 with stride 2, the normalization layers use layer normalization, and the activation layers use the ReLU activation function.
In practice, the generative adversarial network can be trained with a predetermined loss function comprising one or more of a generative adversarial loss function, a mean squared error function, and a multi-scale structural similarity loss function.
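The combined training objective can be written as a weighted sum of the three terms named above. The weights below are illustrative assumptions (the text does not specify them), and since MS-SSIM is a similarity score in [0, 1] where higher means more similar, it enters the loss as `1 - ms_ssim`:

```python
def generator_loss(adv_term, mse_term, ms_ssim,
                   w_adv=1.0, w_mse=10.0, w_ssim=5.0):
    """Weighted sum of the adversarial loss, the mean squared error, and a
    multi-scale structural-similarity penalty.  A perfect structural match
    (ms_ssim == 1.0) makes the similarity penalty vanish."""
    return w_adv * adv_term + w_mse * mse_term + w_ssim * (1.0 - ms_ssim)

# with a perfect structural match, only the adversarial and MSE terms remain
loss = generator_loss(adv_term=0.5, mse_term=0.1, ms_ssim=1.0)
```

In a real training loop each term would be computed from the generator output and the label image; here they are scalar placeholders.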
In step S140, the mapped first component and second component are fused with the third component to obtain a fused low dynamic range image corresponding to the high dynamic range image, completing the tone mapping.
In one or more embodiments of this specification, continuing the above embodiments, the value component and the saturation component are input into the generative adversarial network to learn the mapping, the mapped value and saturation components are output, and they are fused with the hue component to obtain the fused low dynamic range image corresponding to the original processing object (the high dynamic range image), completing the tone mapping.
Further, in the embodiments of this specification, the components can be fused as follows to obtain the low dynamic range image, specifically:
the mapped first and second components are superimposed with the third component to obtain a low dynamic range image in the predetermined storage form.
In a specific implementation scenario of this specification, since the first, second, and third components correspond to the S, V, and H channels of the HSV color space, fusing the mapped S and V channels with the original H channel still yields a low dynamic range image in the HSV color space. To restore the low dynamic range image to the color space of the original processing object (such as the RGB color space), after the low dynamic range image in the predetermined storage form is obtained, the method may further include: performing a conversion operation on the low dynamic range image to convert it into the low dynamic range image corresponding to the RGB color space. Of course, it is easy to understand that the embodiments place no specific restriction on the color space of the original processing object, so the color space to which the low dynamic range image is converted can be chosen according to actual needs.
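The fusion and back-conversion steps can be sketched symmetrically to the decomposition, again with the standard `colorsys` module (the function name and the nested-list image layout are illustrative assumptions):

```python
import colorsys

def fuse_to_rgb(s_mapped, v_mapped, h_original):
    """Superimpose the mapped S and V planes with the preserved H plane and
    convert each pixel back to RGB, yielding the final low dynamic range
    image.  All planes are nested lists of floats in [0, 1]."""
    rgb_image = []
    for s_row, v_row, h_row in zip(s_mapped, v_mapped, h_original):
        rgb_image.append([colorsys.hsv_to_rgb(h, s, v)
                          for s, v, h in zip(s_row, v_row, h_row)])
    return rgb_image

# hue 0 (red) whose value the network compressed to 0.5 -> half-intensity red
out = fuse_to_rgb([[1.0]], [[0.5]], [[0.0]])
```

Because the H plane is passed through untouched, the hue of each output pixel is identical to that of the input, which is exactly the property the method relies on to limit color difference.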
The process of tone mapping with a generative adversarial network is described below with a specific embodiment. Fig. 2 is a schematic flowchart of tone mapping with a generative adversarial network in a specific application scenario, provided by an embodiment of this specification. According to the above embodiments and Fig. 2, based on the network structure disclosed in the embodiments of this specification, the generator uses the U-Net structure to learn sufficient multi-scale information. Since tone mapping is mainly a mapping of brightness and information such as object structure does not change, residual blocks are introduced in the encoder, which preserves structural integrity and avoids information loss while reducing the difficulty of learning. In addition, since tone mapping often produces unrealistic results, the generative adversarial network introduces an adversarial loss to learn at the perceptual level and improve the naturalness of the mapped image.
In the embodiments of this specification, the saturation component and the value component of the high dynamic range image are input into the generative adversarial network together to learn the mapping, the original hue component is preserved, and the components are finally fused into the low dynamic range image. Because the generative adversarial loss and the structural similarity loss are introduced in the training stage, the image obtained by fusing the mapped value and saturation components with the original hue component is not only structurally highly consistent with the original high dynamic range image but also highly natural, learning the brightness and saturation mapping while avoiding problems such as color difference.
Using the images obtained by the tone mapping of the embodiments of this specification as the data set for training the generative adversarial network can improve the learning effect of the neural network, and a high-quality tone mapping label data set can also be obtained by adjusting the parameters.
Based on the same idea, an embodiment of this specification also provides a tone mapping device, shown in Fig. 3. The device 300 mainly includes:
an acquiring module 301, configured to acquire one or more high dynamic range images and determine the storage form of the high dynamic range images;
a decomposition module 302, configured to, when the storage form of the high dynamic range image is determined to be a predetermined storage form, perform a decomposition operation on the high dynamic range image to obtain its first component, second component, and third component;
a mapping module 303, configured to input the first component and the second component into a predetermined deep neural network and map each of them with the deep neural network to obtain the mapped first component and second component;
a fusion module 304, configured to fuse the mapped first component and second component with the third component to obtain a fused low dynamic range image corresponding to the high dynamic range image, thereby completing the tone mapping.
According to an embodiment of this application, the device further includes:
a first conversion module 305, configured to, before the decomposition operation is performed on the high dynamic range image and when its storage form is determined to be a non-predetermined storage form, perform a conversion operation on the high dynamic range image to convert it into a high dynamic range image in the predetermined storage form, the decomposition operation then being performed on the converted image.
According to an embodiment of this application, in the device, the predetermined storage form includes the HSV color space, and the decomposition module 302 is specifically configured to:
extract the components in the HSV color space corresponding to the high dynamic range image to obtain the first, second, and third components; wherein the first component includes saturation information, the second component includes brightness information, and the third component includes hue information.
According to an embodiment of this application, in the device, the fusion module 304 is specifically configured to:
superimpose the mapped first and second components with the third component to obtain a low dynamic range image in the predetermined storage form.
According to an embodiment of this application, the device further includes:
a second conversion module 306, configured to, after the low dynamic range image in the predetermined storage form is obtained, perform a conversion operation on the low dynamic range image to convert it into a low dynamic range image corresponding to the RGB color space.
An embodiment of this specification also provides an electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor; the processor implements the above tone mapping method when executing the program.
Specific embodiments of this specification are described above. Other embodiments fall within the scope of the appended claims. In some cases, the actions or steps recited in the claims can be performed in an order different from that in the embodiments and still achieve the desired results. In addition, the processes depicted in the drawings do not necessarily require the particular order shown, or sequential order, to achieve the desired results. In certain implementations, multitasking and parallel processing are also possible or may be advantageous.
The embodiments in this specification are described progressively; for identical or similar parts, the embodiments may refer to one another, and each embodiment focuses on its differences from the others. In particular, the device and electronic device embodiments are described relatively simply because they are substantially similar to the method embodiments; for relevant details, refer to the description of the method embodiments.
The device, electronic device, and method provided in the embodiments of this specification correspond to one another; the device and electronic device therefore have beneficial technical effects similar to those of the corresponding method. Since the beneficial technical effects of the method have been described in detail above, those of the corresponding device and electronic device are not repeated here.
This specification is described with reference to flowcharts and/or block diagrams of the method, device (system), and computer program product according to the embodiments of this specification. It should be understood that every flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions can be provided to the processor of a general-purpose computer, special-purpose computer, embedded processor, or other programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce a device for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
It should also be noted that the terms "include", "comprise", and any variants thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the presence of additional identical elements in the process, method, article, or device that includes it.
This specification can be described in the general context of computer-executable instructions executed by a computer, such as program modules. Generally, program modules include routines, programs, objects, components, data structures, and the like that perform specific tasks or implement specific abstract data types. The specification can also be practiced in distributed computing environments, where tasks are performed by remote processing devices connected through a communication network; in such environments, program modules can be located in local and remote computer storage media, including storage devices.
The above description of the disclosed embodiments enables those skilled in the art to make or use the present invention. Various modifications to these embodiments will be apparent to those skilled in the art, and the general principles defined herein can be implemented in other embodiments without departing from the spirit or scope of the present invention. The present invention is therefore not limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (13)

  1. A tone mapping method, the method comprising:
    acquiring one or more high dynamic range images, and determining the storage form of the high dynamic range images;
    when the storage form of the high dynamic range image is determined to be a predetermined storage form, performing a decomposition operation on the high dynamic range image to obtain a first component, a second component, and a third component of the high dynamic range image;
    inputting the first component and the second component into a predetermined deep neural network, and mapping each of them with the deep neural network to obtain the mapped first component and second component;
    fusing the mapped first component and second component with the third component to obtain a fused low dynamic range image corresponding to the high dynamic range image, thereby completing the tone mapping.
  2. The method of claim 1, further comprising, before the decomposition operation is performed on the high dynamic range image:
    when the storage form of the high dynamic range image is determined to be a non-predetermined storage form, performing a conversion operation on the high dynamic range image to convert it into a high dynamic range image in the predetermined storage form, and performing the decomposition operation on the converted high dynamic range image.
  3. The method of claim 1, wherein the predetermined storage form comprises an HSV color space, and performing the decomposition operation on the high dynamic range image to obtain the first, second, and third components comprises:
    extracting the components in the HSV color space corresponding to the high dynamic range image to obtain the first, second, and third components; wherein the first component comprises saturation information, the second component comprises brightness information, and the third component comprises hue information.
  4. The method of claim 1, wherein the predetermined deep neural network is a generative adversarial network comprising a generator network and a discriminator network, wherein:
    the generator network is built on a U-Net network and comprises an encoder and a decoder, the encoder containing at least one convolution block and a plurality of residual blocks, and the decoder containing a plurality of deconvolution blocks;
    the discriminator network comprises a plurality of convolution blocks, each containing, in sequence, a convolution layer, a normalization layer, and an activation layer.
  5. The method of claim 4, wherein the generative adversarial network is trained with a predetermined loss function comprising one or more of a generative adversarial loss function, a mean squared error function, and a multi-scale structural similarity loss function.
  6. The method of claim 1, wherein fusing the mapped first and second components with the third component to obtain the fused low dynamic range image corresponding to the high dynamic range image comprises:
    superimposing the mapped first and second components with the third component to obtain a low dynamic range image in the predetermined storage form.
  7. The method of claim 6, further comprising, after the low dynamic range image in the predetermined storage form is obtained:
    performing a conversion operation on the low dynamic range image to convert it into a low dynamic range image corresponding to the RGB color space.
  8. A tone mapping device, the device comprising:
    an acquiring module, configured to acquire one or more high dynamic range images and determine the storage form of the high dynamic range images;
    a decomposition module, configured to, when the storage form of the high dynamic range image is determined to be a predetermined storage form, perform a decomposition operation on the high dynamic range image to obtain its first component, second component, and third component;
    a mapping module, configured to input the first component and the second component into a predetermined deep neural network and map each of them with the deep neural network to obtain the mapped first component and second component;
    a fusion module, configured to fuse the mapped first component and second component with the third component to obtain a fused low dynamic range image corresponding to the high dynamic range image, thereby completing the tone mapping.
  9. The device of claim 8, further comprising:
    a first conversion module, configured to, before the decomposition operation is performed on the high dynamic range image and when its storage form is determined to be a non-predetermined storage form, perform a conversion operation on the high dynamic range image to convert it into a high dynamic range image in the predetermined storage form, the decomposition operation then being performed on the converted image.
  10. The device of claim 8, wherein the predetermined storage form comprises an HSV color space, and the decomposition module is specifically configured to:
    extract the components in the HSV color space corresponding to the high dynamic range image to obtain the first, second, and third components; wherein the first component comprises saturation information, the second component comprises brightness information, and the third component comprises hue information.
  11. The device of claim 8, wherein the fusion module is specifically configured to:
    superimpose the mapped first and second components with the third component to obtain a low dynamic range image in the predetermined storage form.
  12. The device of claim 11, further comprising:
    a second conversion module, configured to, after the low dynamic range image in the predetermined storage form is obtained, perform a conversion operation on the low dynamic range image to convert it into a low dynamic range image corresponding to the RGB color space.
  13. An electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the method of any one of claims 1 to 7 when executing the program.
PCT/CN2019/118585 2019-10-31 2019-11-14 Tone mapping method and device, and electronic equipment WO2021082088A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/725,334 US20220245775A1 (en) 2019-10-31 2022-04-20 Tone mapping method and electronic device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911057461.1A CN110796595B (zh) 2019-10-31 2019-10-31 一种色调映射方法、装置及电子设备
CN201911057461.1 2019-10-31

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/725,334 Continuation-In-Part US20220245775A1 (en) 2019-10-31 2022-04-20 Tone mapping method and electronic device

Publications (1)

Publication Number Publication Date
WO2021082088A1 true WO2021082088A1 (zh) 2021-05-06

Family

ID=69440621

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/118585 WO2021082088A1 (zh) 2019-10-31 2019-11-14 一种色调映射方法、装置及电子设备

Country Status (3)

Country Link
US (1) US20220245775A1 (zh)
CN (1) CN110796595B (zh)
WO (1) WO2021082088A1 (zh)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11908109B1 (en) * 2019-11-04 2024-02-20 madVR Holdings LLC Enhanced video processor
US11893482B2 (en) * 2019-11-14 2024-02-06 Microsoft Technology Licensing, Llc Image restoration for through-display imaging
CN111667430B (zh) * 2020-06-09 2022-11-22 展讯通信(上海)有限公司 图像的处理方法、装置、设备以及存储介质
CN111784598B (zh) * 2020-06-18 2023-06-02 Oppo(重庆)智能科技有限公司 色调映射模型的训练方法、色调映射方法及电子设备
CN113066019A (zh) * 2021-02-27 2021-07-02 华为技术有限公司 一种图像增强方法及相关装置
US20240054622A1 (en) * 2021-04-27 2024-02-15 Boe Technology Group Co., Ltd. Image processing method and image processing apparatus
CN116029914B (zh) * 2022-07-27 2023-10-20 荣耀终端有限公司 图像处理方法与电子设备
CN115205157B (zh) * 2022-07-29 2024-04-26 如你所视(北京)科技有限公司 图像处理方法和系统、电子设备和存储介质
CN115631428B (zh) * 2022-11-01 2023-08-11 西南交通大学 一种基于结构纹理分解的无监督图像融合方法和系统
CN117474816B (zh) * 2023-12-26 2024-03-12 中国科学院宁波材料技术与工程研究所 高动态范围图像色调映射方法、系统及可读存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8766999B2 (en) * 2010-05-20 2014-07-01 Aptina Imaging Corporation Systems and methods for local tone mapping of high dynamic range images
CN108885782A (zh) * 2017-08-09 2018-11-23 深圳市大疆创新科技有限公司 图像处理方法、设备及计算机可读存储介质
CN110197463A (zh) * 2019-04-25 2019-09-03 深圳大学 基于深度学习的高动态范围图像色调映射方法及其系统
CN110232669A (zh) * 2019-06-19 2019-09-13 湖北工业大学 一种高动态范围图像的色调映射方法及系统

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8237813B2 (en) * 2009-04-23 2012-08-07 Csr Technology Inc. Multiple exposure high dynamic range image capture
CN110383802B (zh) * 2017-03-03 2021-05-25 杜比实验室特许公司 利用逼近函数的彩色图像修改方法
CN107657594A (zh) * 2017-09-22 2018-02-02 武汉大学 一种高质量的快速色调映射方法和系统
CN108010024B (zh) * 2017-12-11 2021-12-07 宁波大学 一种盲参考色调映射图像质量评价方法
CN108024104B (zh) * 2017-12-12 2020-02-28 上海顺久电子科技有限公司 一种对输入的高动态范围图像进行处理的方法和显示设备
CN108805836A (zh) * 2018-05-31 2018-11-13 大连理工大学 基于深度往复式hdr变换的图像校正方法
CN110223256A (zh) * 2019-06-10 2019-09-10 北京大学深圳研究生院 一种逆色调映射方法、装置及电子设备


Also Published As

Publication number Publication date
CN110796595A (zh) 2020-02-14
US20220245775A1 (en) 2022-08-04
CN110796595B (zh) 2022-03-01

Similar Documents

Publication Publication Date Title
WO2021082088A1 (zh) 一种色调映射方法、装置及电子设备
US10853925B2 (en) Methods, systems, and media for image processing
CN110717868B (zh) 视频高动态范围反色调映射模型构建、映射方法及装置
CN105850114A (zh) 用于图像的逆色调映射的方法
JP6432214B2 (ja) 画像処理装置、画像処理方法、記憶媒体及びプログラム
CN106780417A (zh) 一种光照不均图像的增强方法及系统
CN111145290B (zh) 一种图像彩色化方法、系统和计算机可读存储介质
CN109886906B (zh) 一种细节敏感的实时弱光视频增强方法和系统
CN112802137B (zh) 一种基于卷积自编码器的颜色恒常性方法
CN113284064A (zh) 一种基于注意力机制跨尺度上下文低照度图像增强方法
CN112508812A (zh) 图像色偏校正方法、模型训练方法、装置及设备
CN111105359A (zh) 一种高动态范围图像的色调映射方法
CN111226256A (zh) 用于图像动态范围调整的系统和方法
CN115393227A (zh) 基于深度学习的微光全彩视频图像自适应增强方法及系统
CN114463207B (zh) 基于全局动态范围压缩与局部亮度估计的色阶映射方法
CN111161189A (zh) 一种基于细节弥补网络的单幅图像再增强方法
CN114638764B (zh) 基于人工智能的多曝光图像融合方法及系统
CN111147924A (zh) 一种视频增强处理方法及系统
CN116468636A (zh) 低照度增强方法、装置、电子设备和可读存储介质
Buzzelli et al. Consensus-driven illuminant estimation with GANs
US11647298B2 (en) Image processing apparatus, image capturing apparatus, image processing method, and storage medium
CN114549386A (zh) 一种基于自适应光照一致性的多曝光图像融合方法
CN114862707A (zh) 一种多尺度特征恢复图像增强方法、装置及存储介质
Kim et al. Efficient-HDRTV: Efficient SDR to HDR Conversion for HDR TV
Kar et al. Statistical approach for color image detection

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19950262

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19950262

Country of ref document: EP

Kind code of ref document: A1


32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 14.02.2023)