CN118474553A - Image processing method and electronic device - Google Patents
- Publication number
- CN118474553A (application CN202311655933.XA)
- Authority
- CN
- China
- Prior art keywords
- image
- global
- features
- local
- color
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
- H04N23/84—Camera processing pipelines; Components thereof for processing colour signals
- H04N23/85—Camera processing pipelines; Components thereof for processing colour signals for matrixing
- H04N23/88—Camera processing pipelines; Components thereof for processing colour signals for colour balance, e.g. white-balance circuits or colour temperature control
Description
Technical Field
The embodiments of this application relate to the field of image processing, and in particular to an image processing method and an electronic device.
Background Art
With the development of electronic technology, the photographing capabilities of electronic devices such as mobile phones have advanced rapidly. To improve image quality, some mobile phones integrate a High Dynamic Range (HDR) function. However, an HDR image captured with this function has a very wide brightness dynamic range, meaning the difference between the brightest and darkest parts of the image is extremely large, typically on the order of 0.001 cd/m² to 100,000 cd/m². In other words, bright regions may be overexposed while dark regions are too dark, so a faithful, usable image cannot be obtained. So that the image presented by an electronic device can truly reflect the scene seen by the human eye, the device generally converts the HDR image into a Low Dynamic Range (LDR) image through multiple neural networks connected in series before presenting it to the user.
However, the resulting LDR image suffers from severe color cast and information loss, and cannot be guaranteed to truly reflect the scene seen by the human eye.
Summary of the Invention
Accordingly, this application provides an image processing method and an electronic device that solve the problems of texture-detail loss and color deviation when converting a high dynamic range image into a low dynamic range image, thereby improving the accuracy of the obtained low dynamic range image.
To achieve the above objectives, the embodiments of this application adopt the following technical solutions:
In a first aspect, this application provides an image processing method. In this method, an electronic device obtains a first image with a high dynamic range, and then obtains the global and local mapping relationships, in both brightness and color, from the first image to the corresponding low dynamic range image, namely a global mapping matrix and a local mapping matrix. After determining these two matrices, the device performs brightness correction on the first image based on the global mapping matrix to obtain a second image, and performs color correction on the first image based on the local mapping matrix to obtain a third image, the low dynamic range image corresponding to the first image.
The above image processing method is implemented with a single neural network; that is, one neural network performs the tone mapping, converting a high dynamic range image into a low dynamic range image. This avoids error accumulation and ensures the color accuracy of the final low dynamic range output. In addition, the neural network contains two independent channels (for example, an upper channel that corrects the brightness of the first image based on the global mapping matrix, and a lower channel that corrects the color of the first image based on the local mapping matrix), which restore brightness and color respectively. These two independent channels achieve a partial decoupling of brightness and color, avoiding the loss of fine texture in the restored low dynamic range image. In summary, the accuracy of the obtained low dynamic range image is improved.
In addition, because the color correction is performed inside the neural network, negative-value truncation and high-bit clipping of the image are avoided, further ensuring accurate restoration of texture details.
In one implementation of the first aspect, after the first image is obtained, it may also be downsampled. That is, the obtained first image is compressed in size, reducing the number of pixel values in the image and hence the amount of computation.
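As a rough illustration of this downsampling step (the patent does not specify a method; simple block averaging is assumed here purely for the sketch), the first image can be reduced before any feature extraction:

```python
import numpy as np

def downsample(img: np.ndarray, factor: int = 2) -> np.ndarray:
    """Reduce image size by averaging non-overlapping factor x factor blocks.

    img is assumed to be an (H, W, C) array with H and W divisible by factor;
    the averaging scheme itself is an assumption, not taken from the patent.
    """
    h, w, c = img.shape
    return img.reshape(h // factor, factor, w // factor, factor, c).mean(axis=(1, 3))

# A 4x4 single-channel image shrinks to 2x2, cutting the pixel count by 4x.
hdr = np.arange(16, dtype=np.float32).reshape(4, 4, 1)
small = downsample(hdr, 2)
```

Any resampling method (strided convolution, bilinear interpolation, etc.) would serve the same purpose of reducing the computation in the later mapping-matrix stages.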
In one implementation of the first aspect, obtaining the global and local mapping matrices of the first image may include: obtaining image features of the first image, determining global features and local features of the first image from those image features, and then determining the corresponding global and local mapping matrices from the global and local features combined with different weight coefficients. For example, the global mapping matrix is determined based on the global features, the local features, and a first weight coefficient, while the local mapping matrix is determined based on the global features, the local features, and a second weight coefficient.
The brightness and the color are restored separately through the global mapping matrix and the local mapping matrix of the first image. Because both matrices contain not only global features but also local features, the restoration of brightness and color takes both kinds of features into account; that is, a parameter-sharing mechanism lets brightness and color interact, further avoiding the loss of texture details in the restored low dynamic range image. In addition, because the global features mainly reflect the brightness effect, the weight assigned to the global features in the first weight coefficient is larger than that assigned to the local features, ensuring that brightness is the primary target of that branch. Likewise, because the local features mainly reflect the color effect, the weight assigned to the local features in the second weight coefficient is larger than that assigned to the global features, ensuring that color is the primary target of the other branch. This further reduces the probability of texture-detail loss and color deviation, ensuring that the details and colors of the final third image, from global structure down to local detail, are clearer and more accurate.
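The weighted sharing of features between the two branches can be sketched as follows. The concrete values and the linear combination form are assumptions for illustration; the patent only states that in the first weight coefficient the global-feature weight dominates, and in the second the local-feature weight dominates:

```python
import numpy as np

# Hypothetical weight coefficients: the first pair favors global features
# (brightness branch), the second favors local features (color branch).
W1_GLOBAL, W1_LOCAL = 0.7, 0.3   # first weight coefficient -> global mapping matrix
W2_GLOBAL, W2_LOCAL = 0.3, 0.7   # second weight coefficient -> local mapping matrix

def fuse(global_feat: np.ndarray, local_feat: np.ndarray,
         w_global: float, w_local: float) -> np.ndarray:
    """Both branches see both feature sets, weighted differently (parameter sharing)."""
    return w_global * global_feat + w_local * local_feat

g = np.ones((8, 8))          # stand-in global feature map
l = np.full((8, 8), 2.0)     # stand-in local feature map
global_mapping = fuse(g, l, W1_GLOBAL, W1_LOCAL)   # global features dominate
local_mapping = fuse(g, l, W2_GLOBAL, W2_LOCAL)    # local features dominate
```

Because each branch still receives the other branch's features (with a smaller weight), brightness and color are only partially decoupled rather than fully independent, matching the description above.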
In one implementation of the first aspect, correcting the brightness of the first image may include: first obtaining texture information of the first image, interpolating the global mapping matrix of the first image according to that texture information to obtain a processed mapping relationship map, and then performing brightness correction on the first image based on the mapping relationship map to obtain the second image. For example, the second image may be obtained by the following formula.
Contrast(R,G,B)(x,y) = input(R,G,B)(x,y) * Gain_map(x,y)
where Contrast(R,G,B)(x,y) is the second image, input(R,G,B)(x,y) is the first image, and Gain_map(x,y) is the mapping relationship map.
By first determining the mapping relationship map and then correcting the brightness of the first image based on it, the corrected second image is obtained, which guarantees the effect of brightness alignment.
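The brightness-correction formula above is a per-pixel multiply of the input by the gain map. A minimal sketch, assuming the gain map has already been interpolated from the global mapping matrix to the full image resolution (the interpolation itself is not shown):

```python
import numpy as np

def apply_gain_map(first_image: np.ndarray, gain_map: np.ndarray) -> np.ndarray:
    """Contrast(R,G,B)(x,y) = input(R,G,B)(x,y) * Gain_map(x,y).

    first_image: (H, W, 3) HDR input; gain_map: (H, W) per-pixel gains.
    The same gain is applied to the R, G, and B channels of each pixel.
    """
    return first_image * gain_map[..., np.newaxis]

img = np.full((2, 2, 3), 100.0)        # flat HDR patch
gain = np.array([[0.5, 1.0],
                 [1.5, 2.0]])          # darken some regions, brighten others
second_image = apply_gain_map(img, gain)
```

Because every channel of a pixel is scaled by the same factor, this step changes brightness while leaving the ratios between R, G, and B (the hue) unchanged, which is what allows color to be handled separately in the other channel.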
In one implementation of the first aspect, correcting the color of the first image may include: first determining a color correction matrix based on the second image and the local mapping matrix, for example by interpolating the local mapping matrix using the second image to obtain the color correction matrix; and then correcting the color of the first image based on the color correction matrix, for example by applying it to the RGB channels of the second image to determine the corrected image, that is, the third image. In other words, the third image is the image after both brightness and color have been corrected, namely the low dynamic range image. For example, the third image may be obtained by the following formulas.
R'(x,y) = a0(x,y)*R(x,y) + a1(x,y)*G(x,y) + a2(x,y)*B(x,y)
G'(x,y) = b0(x,y)*R(x,y) + b1(x,y)*G(x,y) + b2(x,y)*B(x,y)
B'(x,y) = c0(x,y)*R(x,y) + c1(x,y)*G(x,y) + c2(x,y)*B(x,y)
where a0, a1, a2, b0, b1, b2, c0, c1, c2 are the coefficients of the color correction matrix; R(x,y), G(x,y), B(x,y) are the R, G, and B channels of the second image; and R'(x,y), G'(x,y), B'(x,y) are the R, G, and B channels after processing by the color correction matrix.
Correcting brightness and color separately guarantees their partial decoupling, while sharing parameters between the two guarantees their interaction. This effectively ensures that the final third image has accurate colors and clear details, improving the user experience.
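The three formulas above are a per-pixel 3x3 matrix applied to the RGB vector of each pixel. The sketch below uses a single matrix shared by all pixels purely for brevity; in the patent the coefficients a0(x,y) through c2(x,y) vary per pixel:

```python
import numpy as np

def apply_ccm(second_image: np.ndarray, ccm: np.ndarray) -> np.ndarray:
    """Apply a color correction matrix to every pixel.

    second_image: (H, W, 3); ccm: (3, 3) with rows [a0 a1 a2], [b0 b1 b2], [c0 c1 c2].
    Each output channel is a weighted mix of all three input channels.
    """
    return np.einsum('ij,hwj->hwi', ccm, second_image)

# An identity CCM leaves colors unchanged; a non-trivial matrix mixes channels.
identity = np.eye(3)
swap_rb = np.array([[0.0, 0.0, 1.0],   # R' = B
                    [0.0, 1.0, 0.0],   # G' = G
                    [1.0, 0.0, 0.0]])  # B' = R
pixel = np.array([[[10.0, 20.0, 30.0]]])  # one RGB pixel
third_image = apply_ccm(pixel, swap_rb)
```

Unlike the gain map, the CCM lets each output channel depend on all three input channels, which is what allows color casts to be corrected independently of overall brightness.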
In one implementation of the first aspect, the above neural network can be trained with a brightness constraint and a color constraint as constraints on the image output, ensuring that the images produced by the network satisfy the corresponding constraints: the image output by the upper channel satisfies the brightness constraint, and the image output by the lower channel satisfies the color constraint. This guarantees the completeness of the conversion from the high dynamic range image to the low dynamic range image.
In a second aspect, this application provides an image processing apparatus that has the function of implementing the behavior of the electronic device in the method of the first aspect. The function may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the above function, for example, an input unit or module, a display unit or module, and a processing unit or module.

In a third aspect, an electronic device is provided, comprising: a processor; a memory; a camera module; and a computer program stored in the memory which, when executed by the processor, causes the electronic device to perform the image processing method of the first aspect or any implementation thereof.

In a fourth aspect, a computer-readable storage medium is provided, comprising a computer program which, when run on an electronic device, causes the electronic device to perform any of the methods described in the first aspect.

In a fifth aspect, a computer program product containing instructions is provided which, when run on an electronic device, causes the electronic device to perform the image processing method of the first aspect or any implementation thereof.

In a sixth aspect, an embodiment of this application provides a chip comprising a processor, the processor being configured to call a computer program in a memory to perform the image processing method of the first aspect or any implementation thereof.

It can be understood that, for the beneficial effects achievable by the apparatus of the second aspect, the electronic device of the third aspect, the computer-readable storage medium of the fourth aspect, the computer program product of the fifth aspect, and the chip of the sixth aspect, reference may be made to the beneficial effects of the first aspect and any of its possible implementations, which are not repeated here.
Brief Description of the Drawings
FIG. 1A is a schematic diagram of an image processing method provided in the related art;
FIG. 1B is an image with color deviation provided in the related art;
FIG. 1C is an image with lost texture details provided in the related art;
FIG. 2 is a schematic structural diagram of an electronic device provided in an embodiment of this application;
FIG. 3 is a schematic flowchart of an image processing method provided in an embodiment of this application;
FIG. 4 is a schematic flowchart of another image processing method provided in an embodiment of this application;
FIG. 5 is a schematic structural diagram of a chip system provided in an embodiment of this application.
Detailed Description
In the description of this application, unless otherwise specified, "and/or" merely describes an association between associated objects, indicating that three relationships may exist. For example, A and/or B may represent three cases: A exists alone, both A and B exist, and B exists alone, where A and B may be singular or plural.

In the description of this application, unless otherwise specified, "plurality" means two or more. "At least one of the following" or similar expressions refer to any combination of these items, including any combination of single or plural items. For example, at least one of a, b, or c may represent: a, b, c, a-b, a-c, b-c, or a-b-c, where a, b, and c may each be single or multiple.

To clearly describe the technical solutions of the embodiments of this application, words such as "first" and "second" are used to distinguish identical or similar items with substantially the same functions and effects. Those skilled in the art will understand that these words do not limit quantity or execution order, nor do they imply that the items are necessarily different.

In the embodiments of this application, words such as "exemplary" or "for example" indicate an example, illustration, or explanation. Any embodiment or design described as "exemplary" or "for example" should not be interpreted as preferred over or more advantageous than other embodiments or designs; rather, such words are intended to present related concepts in a concrete way for easier understanding.
The technical solutions in the embodiments of this application are described below with reference to the accompanying drawings.
A scene viewed under natural light with the human eye often looks more vivid than the same scene viewed through an electronic device. This is because the brightness relationships in the scene as perceived by the human eye are nonlinear, whereas those recorded by an electronic device are linear, so the dynamic range the human eye perceives is greater than what the device can record. Dynamic Range (DR) is one of the most important parameters of the image sensor in an electronic device: it determines the range of intensities the sensor can receive, from the darkest part (the shadows) to the brightest part (the highlights), and thus determines the detail and tonal gradation of the images the device captures. An HDR image is an image with a very high dynamic range, meaning the difference between its brightest and darkest parts is very large. In other words, the goal of high dynamic range imaging is to correctly represent the full range of real-world brightness, from direct sunlight down to the darkest shadows.
At present, however, images captured by electronic devices are affected by natural lighting and by the devices themselves, so the brightness of the resulting HDR images is very uneven: bright parts are overexposed and dark parts are too dark. Moreover, the dynamic range of most output devices such as monitors and printers is much smaller than a typical high dynamic range, so the output image cannot properly reflect the real scene. Tone mapping technology arose to solve this problem. Tone mapping is the process of converting an HDR image into an LDR image so that the image presented to the user reflects the real scene seen by the human eye. In other words, tone mapping is a key step in reconstructing natural images.
For example, take the electronic device to be a mobile phone, and let the real scene normally seen by the human eye be the one shown at 11 in FIG. 1A. When the user uses the camera module of the phone (such as the rear camera 10 in FIG. 1A) to photograph the scene 11, factors such as lighting may cause the captured image to span a large brightness range. The image captured by the rear camera 10 is shown as image 12 in FIG. 1A: part of the picture loses detail due to overexposure, so the final output image fails to reflect the real scene. The captured image, such as image 12 in FIG. 1A, can be converted into an LDR image through tone mapping before being presented to the user, so that the output image better reflects the real scene.
In the related art, tone mapping is mainly implemented through two or more neural networks connected in series. Such a pipeline generally includes a Dynamic Range Compression (DRC) network and a Local Contrast Enhancement (LCE) network. The DRC network is mainly responsible for processing image brightness, and the LCE network for processing image color. For example, the HDR image may first pass through the DRC network for brightness enhancement, and then through the LCE network for color-detail adjustment, finally outputting the LDR image.
However, the shortcomings of this technique are mainly reflected in the following aspects:
On the one hand, when two or more neural networks are connected in series, errors from an earlier network propagate to the later one, so errors tend to accumulate. In the technique above, the output of the DRC network is the input of the LCE network; in this architecture, any error the DRC network makes during processing is passed on to the LCE network and accumulates, degrading the final LDR image, for example by introducing color deviation. FIG. 1B shows such an image with color deviation produced by the related art.
On the other hand, the LCE network mainly uses a Color Correction Matrix (CCM) for color-detail processing. Because the LCE network is a separate network, the image must undergo negative-value truncation and high-bit clipping before the CCM is applied to the whole image. These operations destroy texture details in the highly saturated regions of the image, so the final LDR image loses texture detail, as shown in FIG. 1C. Image pixels nominally take values in the range 0-255, but the actual pixel values may lie outside this range, which preserves the richness of the image (for example, an image whose pixel values range from 0 to 625). Negative truncation means that pixels of the image fed to the LCE network with values below 0 must be removed; high-bit clipping means that pixels with values above 255 must be removed.
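The loss caused by this truncation can be illustrated with a small numeric sketch. The 0-625 range mirrors the example given in the text; the specific values are invented for illustration:

```python
import numpy as np

# A highlight region whose true values exceed 255: the texture (the value
# differences between neighboring pixels) is real, but clipping to the
# displayable 0-255 range before a separate color-correction network
# flattens it completely.
highlight = np.array([300.0, 450.0, 625.0])

clipped = np.clip(highlight, 0.0, 255.0)  # negative truncation + high-bit clipping

# Before clipping the region has 325 units of variation; after clipping,
# all three pixels are identical and the texture is gone. Correcting color
# inside the same network, before any truncation, avoids this loss.
texture_before = float(highlight.max() - highlight.min())
texture_after = float(clipped.max() - clipped.min())
```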
Furthermore, the above technique processes image color and brightness separately with two independent neural networks, yet in practice color and brightness cannot be completely decoupled. That is, processing the color often also changes the brightness, causing the image's colors to deviate; processing the brightness likewise affects the color, causing information loss. As a result, the image cannot be guaranteed to faithfully restore the actual scene.
In summary, the related technique described above causes loss of texture details and color cast.
To solve the above problems, an embodiment of this application provides an image processing method that processes image color and brightness through two independent paths, achieving a partial decoupling of brightness and color, while a parameter-sharing mechanism enables brightness and color to interact. This solves the color deviation and texture-detail loss that occur when converting an HDR image into an LDR image, preserves the richness of the output image, and improves the user experience.
示例性的,本申请实施例中进行图像处理的设备可以是电子设备,如手机、平板电脑、智能手表、桌面型、膝上型、手持计算机、笔记本电脑、超级移动个人计算机(ultra-mobile personal computer,UMPC)、上网本,以及蜂窝电话、个人数字助理(personal digital assistant,PDA)、增强现实(augmented reality,AR)/虚拟现实(virtual reality,VR)设备等,本申请实施例对该电子设备的具体形态不作特殊限制。Exemplarily, the device performing image processing in the embodiments of the present application may be an electronic device, such as a mobile phone, a tablet computer, a smart watch, a desktop, a laptop, a handheld computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, as well as a cellular phone, a personal digital assistant (PDA), an augmented reality (AR)/virtual reality (VR) device, etc. The embodiments of the present application do not impose any special restrictions on the specific form of the electronic device.
另外,需要说明的是,本实施例提供的图像处理方法,可以应用于对拍摄得到的HDR图像进行色调映射处理以获得LDR图像的过程中。也可以应用于目标(如,物体)检测的预处理,人像识别的预处理等其他需要图像增强的场景中,本申请在此对本申请方案的应用场景不进行唯一限定。In addition, it should be noted that the image processing method provided in this embodiment can be applied to the process of tone mapping the captured HDR image to obtain an LDR image. It can also be applied to the preprocessing of target (e.g., object) detection, the preprocessing of portrait recognition, and other scenes requiring image enhancement. The present application does not limit the application scenario of the solution of this application.
示例性的,以手机200作为上述电子设备的举例,图2示出了手机200的结构示意图。如图2所示,手机200可以包括处理器210,外部存储器接口220,内部存储器221,移动通信模块230,无线通信模块240,充电管理模块250,电源管理模块260,电池270,天线1,天线2,音频模块280,扬声器280A,受话器280B,麦克风280C,耳机接口280D,摄像头290,显示屏291以及传感器模块292等。其中传感器模块292可以包括距离传感器292A和环境光传感器292B等。Exemplarily, taking a mobile phone 200 as an example of the above electronic device, FIG2 shows a schematic diagram of the structure of the mobile phone 200. As shown in FIG2, the mobile phone 200 may include a processor 210, an external memory interface 220, an internal memory 221, a mobile communication module 230, a wireless communication module 240, a charging management module 250, a power management module 260, a battery 270, an antenna 1, an antenna 2, an audio module 280, a speaker 280A, a receiver 280B, a microphone 280C, an earphone interface 280D, a camera 290, a display screen 291, and a sensor module 292, etc. The sensor module 292 may include a distance sensor 292A and an ambient light sensor 292B, etc.
可以理解的是,本发明实施例示意的结构并不构成对手机200的具体限定。在本申请另一些实施例中,手机200可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。例如,手机200还可以包括:用户标识模块(subscriber identification module,SIM)卡接口等。图示的部件可以以硬件,软件或软件和硬件的组合实现。It is to be understood that the structure illustrated in the embodiment of the present invention does not constitute a specific limitation on the mobile phone 200. In other embodiments of the present application, the mobile phone 200 may include more or fewer components than those shown in the figure, or combine certain components, or separate certain components, or arrange the components differently. For example, the mobile phone 200 may also include: a subscriber identification module (SIM) card interface, etc. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
处理器210可以包括一个或多个处理单元,例如:处理器210可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。The processor 210 may include one or more processing units, for example, the processor 210 may include an application processor (AP), a modem processor, a graphics processor (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. Different processing units may be independent devices or integrated into one or more processors.
控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。The controller can generate operation control signals according to the instruction operation code and timing signal to complete the control of instruction fetching and execution.
处理器210中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器210中的存储器为高速缓冲存储器。该存储器可以保存处理器210刚用过或循环使用的指令或数据。如果处理器210需要再次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了处理器210的等待时间,因而提高了系统的效率。在一些实施例中,处理器210可以包括一个或多个接口。A memory may also be provided in the processor 210 for storing instructions and data. In some embodiments, the memory in the processor 210 is a cache memory. The memory may store instructions or data that the processor 210 has just used or circulated. If the processor 210 needs to use the instruction or data again, it may be directly called from the memory. This avoids repeated access, reduces the waiting time of the processor 210, and thus improves the efficiency of the system. In some embodiments, the processor 210 may include one or more interfaces.
充电管理模块250用于从充电器接收充电输入。其中,充电器可以是无线充电器,也可以是有线充电器。充电管理模块250为电池270充电的同时,还可以通过电源管理模块260为手机200供电。The charging management module 250 is used to receive charging input from a charger. The charger can be a wireless charger or a wired charger. While the charging management module 250 is charging the battery 270, the power management module 260 can also be used to power the mobile phone 200.
电源管理模块260用于连接电池270,充电管理模块250与处理器210。电源管理模块260接收电池270和/或充电管理模块250的输入,为处理器210,内部存储器221,显示屏291,摄像头290,和无线通信模块240等供电。电源管理模块260还可以用于监测电池容量,电池循环次数,电池健康状态(漏电,阻抗)等参数。在其他一些实施例中,电源管理模块260也可以设置于处理器210中。在另一些实施例中,电源管理模块260和充电管理模块250也可以设置于同一个器件中。The power management module 260 is used to connect the battery 270, the charging management module 250 and the processor 210. The power management module 260 receives input from the battery 270 and/or the charging management module 250, and supplies power to the processor 210, the internal memory 221, the display screen 291, the camera 290, and the wireless communication module 240. The power management module 260 can also be used to monitor parameters such as battery capacity, battery cycle number, battery health status (leakage, impedance), etc. In some other embodiments, the power management module 260 can also be set in the processor 210. In other embodiments, the power management module 260 and the charging management module 250 can also be set in the same device.
手机200的无线通信功能可以通过天线1,天线2,移动通信模块230,无线通信模块240,调制解调处理器以及基带处理器等实现。The wireless communication function of the mobile phone 200 can be realized through the antenna 1, the antenna 2, the mobile communication module 230, the wireless communication module 240, the modem processor and the baseband processor.
天线1和天线2用于发射和接收电磁波信号。手机200中的每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将天线1复用为无线局域网的分集天线。在另外一些实施例中,天线可以和调谐开关结合使用。Antenna 1 and antenna 2 are used to transmit and receive electromagnetic wave signals. Each antenna in mobile phone 200 can be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve the utilization rate of the antennas. For example, antenna 1 can be reused as a diversity antenna for a wireless local area network. In some other embodiments, the antenna can be used in combination with a tuning switch.
移动通信模块230可以提供应用在手机200上的包括2G/3G/4G/5G等无线通信的解决方案。移动通信模块230可以包括至少一个滤波器,开关,功率放大器,低噪声放大器(low noise amplifier,LNA)等。移动通信模块230可以由天线1接收电磁波,并对接收的电磁波进行滤波,放大等处理,传送至调制解调处理器进行解调。The mobile communication module 230 can provide solutions for wireless communications including 2G/3G/4G/5G, etc., applied to the mobile phone 200. The mobile communication module 230 may include at least one filter, a switch, a power amplifier, a low noise amplifier (LNA), etc. The mobile communication module 230 may receive electromagnetic waves from the antenna 1, and perform filtering, amplification, etc. on the received electromagnetic waves, and transmit them to the modulation and demodulation processor for demodulation.
无线通信模块240可以提供应用在手机200上的包括无线局域网(wireless local area networks,WLAN)(如无线保真(wireless fidelity,Wi-Fi)网络),蓝牙(blue tooth,BT),全球导航卫星系统(global navigation satellite system,GNSS),调频(frequency modulation,FM),近距离无线通信技术(near field communication,NFC),红外技术(infrared,IR)等无线通信的解决方案。The wireless communication module 240 can provide wireless communication solutions for application on the mobile phone 200, including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), etc.
在一些实施例中,手机200的天线1和移动通信模块230耦合,天线2和无线通信模块240耦合,使得手机200可以通过无线通信技术与网络以及其他设备通信。In some embodiments, the antenna 1 of the mobile phone 200 is coupled to the mobile communication module 230, and the antenna 2 is coupled to the wireless communication module 240, so that the mobile phone 200 can communicate with the network and other devices through wireless communication technology.
手机200通过GPU,显示屏291,以及应用处理器等实现显示功能。GPU为图像处理的微处理器,连接显示屏291和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器210可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。The mobile phone 200 implements the display function through a GPU, a display screen 291, and an application processor. The GPU is a microprocessor for image processing, which connects the display screen 291 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 210 may include one or more GPUs that execute program instructions to generate or change display information.
显示屏291用于显示图像,视频等。显示屏291包括显示面板。例如,显示屏291可以是触摸屏。在本申请一些实施例中,在用户打开相机应用后,显示屏291可以用于显示摄像头290采集到的预览流。The display screen 291 is used to display images, videos, etc. The display screen 291 includes a display panel. For example, the display screen 291 can be a touch screen. In some embodiments of the present application, after the user opens the camera application, the display screen 291 can be used to display the preview stream collected by the camera 290.
摄像头290用于捕获静态图像或视频。物体通过镜头生成光学图像投射到感光元件。感光元件可以是电荷耦合器件(charge coupled device,CCD)或互补金属氧化物半导体(complementary metal-oxide-semiconductor,CMOS)光电晶体管。感光元件把光信号转换成电信号,之后将电信号传递给ISP转换成数字图像信号。ISP将数字图像信号输出到DSP加工处理。DSP将数字图像信号转换成标准的RGB,YUV等格式的图像信号。在一些实施例中,手机200可以包括1个或N个摄像头290,N为大于1的正整数。在本申请一些实施例中,在手机200接收到用户打开相机应用的操作后,手机200可控制摄像头290开启。在摄像头290开启后,摄像头290可以用于采集当前场景的预览图像。在接收到用户点击拍照的操作后,手机200可以利用摄像头290拍摄图像。在该图像是HDR图像的情况下,可以采用本申请的方案进行处理,以获得对应的LDR图像。手机200可以将获得的LDR图像保存到图库中。The camera 290 is used to capture static images or videos. The object generates an optical image through the lens and projects it onto the photosensitive element. The photosensitive element can be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the ISP for conversion into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV or other format. In some embodiments, the mobile phone 200 may include 1 or N cameras 290, where N is a positive integer greater than 1. In some embodiments of the present application, after the mobile phone 200 receives an operation from the user to open the camera application, the mobile phone 200 may control the camera 290 to turn on. After the camera 290 is turned on, the camera 290 can be used to collect a preview image of the current scene. After receiving an operation from the user to click to take a photo, the mobile phone 200 can use the camera 290 to capture an image. In the case where the image is an HDR image, the solution of the present application can be used for processing to obtain a corresponding LDR image. The mobile phone 200 may save the obtained LDR image into a gallery.
内部存储器221可以用于存储计算机可执行程序代码,所述可执行程序代码包括指令。内部存储器221可以包括存储程序区和存储数据区。其中,存储程序区可存储操作系统,至少一个功能所需的应用程序(比如声音播放功能,图像播放功能等)等。存储数据区可存储手机200使用过程中所创建的数据(比如音频数据,电话本等)等。此外,内部存储器221可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件,闪存器件,通用闪存存储器(universal flash storage,UFS)等。处理器210通过运行存储在内部存储器221的指令,和/或存储在设置于处理器210中的存储器的指令,执行手机200的各种功能以及数据处理。音频模块280用于将数字音频信息转换成模拟音频信号输出,也用于将模拟音频输入转换为数字音频信号。音频模块280还可以用于对音频信号编码和解码。在一些实施例中,音频模块280可以设置于处理器210中,或将音频模块280的部分功能模块设置于处理器210中。The internal memory 221 can be used to store computer executable program codes, which include instructions. The internal memory 221 may include a program storage area and a data storage area. Among them, the program storage area may store an operating system, an application required for at least one function (such as a sound playback function, an image playback function, etc.), etc. The data storage area may store data created during the use of the mobile phone 200 (such as audio data, a phone book, etc.), etc. In addition, the internal memory 221 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one disk storage device, a flash memory device, a universal flash storage (UFS), etc. The processor 210 executes various functions and data processing of the mobile phone 200 by running instructions stored in the internal memory 221, and/or instructions stored in a memory provided in the processor 210. The audio module 280 is used to convert digital audio information into an analog audio signal output, and is also used to convert analog audio input into a digital audio signal. The audio module 280 can also be used to encode and decode audio signals. In some embodiments, the audio module 280 may be disposed in the processor 210 , or some functional modules of the audio module 280 may be disposed in the processor 210 .
扬声器280A,也称“喇叭”,用于将音频电信号转换为声音信号。手机200可以通过扬声器280A收听音乐,或收听免提通话。The speaker 280A, also called a "speaker", is used to convert an audio electrical signal into a sound signal. The mobile phone 200 can listen to music or listen to a hands-free call through the speaker 280A.
受话器280B,也称“听筒”,用于将音频电信号转换成声音信号。当手机200接听电话或语音信息时,可以通过将受话器280B靠近人耳接听语音。The receiver 280B, also called a "handset", is used to convert audio electrical signals into sound signals. When the mobile phone 200 receives a call or voice message, the voice can be received by placing the receiver 280B close to the human ear.
麦克风280C,也称“话筒”,“传声器”,用于将声音信号转换为电信号。当拨打电话或发送语音信息时,用户可以通过人嘴靠近麦克风280C发声,将声音信号输入到麦克风280C。手机200可以设置至少一个麦克风280C。在另一些实施例中,手机200可以设置两个麦克风280C,除了采集声音信号,还可以实现降噪功能。在另一些实施例中,手机200还可以设置三个,四个或更多麦克风280C,实现采集声音信号,降噪,还可以识别声音来源,实现定向录音功能等。Microphone 280C, also called "microphone" or "microphone", is used to convert sound signals into electrical signals. When making a call or sending a voice message, the user can speak by putting their mouth close to the microphone 280C to input the sound signal into the microphone 280C. The mobile phone 200 can be provided with at least one microphone 280C. In other embodiments, the mobile phone 200 can be provided with two microphones 280C, which can not only collect sound signals but also realize noise reduction function. In other embodiments, the mobile phone 200 can also be provided with three, four or more microphones 280C to realize the collection of sound signals, noise reduction, identification of sound sources, and directional recording functions, etc.
耳机接口280D用于连接有线耳机。耳机接口280D可以是USB接口210,也可以是3.5mm的开放移动电子设备平台(open mobile terminal platform,OMTP)标准接口,美国蜂窝电信工业协会(cellular telecommunications industry association of the USA,CTIA)标准接口。The earphone interface 280D is used to connect a wired earphone and can be a USB interface 210, or a 3.5 mm open mobile terminal platform (OMTP) standard interface or a cellular telecommunications industry association of the USA (CTIA) standard interface.
距离传感器292A,用于测量距离。手机200可以通过红外或激光测量距离。在一些实施例中,拍摄场景,手机200可以利用距离传感器292A测距以实现快速对焦。The distance sensor 292A is used to measure the distance. The mobile phone 200 can measure the distance by infrared or laser. In some embodiments, when shooting a scene, the mobile phone 200 can use the distance sensor 292A to measure the distance to achieve fast focusing.
环境光传感器292B用于感知环境光亮度。手机200可以根据感知的环境光亮度自适应调节显示屏291亮度。环境光传感器292B也可用于拍照时自动调节白平衡。环境光传感器292B还可以与接近光传感器292B配合,检测手机200是否在口袋里,以防误触。The ambient light sensor 292B is used to sense the ambient light brightness. The mobile phone 200 can adaptively adjust the brightness of the display screen 291 according to the perceived ambient light brightness. The ambient light sensor 292B can also be used to automatically adjust the white balance when taking pictures. The ambient light sensor 292B can also cooperate with the proximity light sensor 292B to detect whether the mobile phone 200 is in a pocket to prevent accidental touches.
以下结合图3和图4介绍电子设备实现色调映射的具体流程。如图4所示,图4为本申请实施例提供一种图像处理方法的流程示意图,该方法包括S401-S406。The specific process of implementing tone mapping in an electronic device is described below in conjunction with Figure 3 and Figure 4. As shown in Figure 4, Figure 4 is a schematic flow chart of an image processing method provided by an embodiment of the present application, and the method includes S401-S406.
S401.获取第一图像;第一图像为高动态范围图像。S401. Acquire a first image; the first image is a high dynamic range image.
其中,上述第一图像可以是指需要进行色调映射的高动态范围图像。换句话说,第一图像可以指的是由于动态范围大导致亮暗差别比较大,或者说存在亮的部分过曝或暗的部分过暗的图像。即第一图像所展示内容存在不清楚的情况。The first image may refer to a high dynamic range image that needs tone mapping. In other words, the first image may refer to an image with a large difference in brightness due to a large dynamic range, or an image in which the bright part is overexposed or the dark part is too dark. That is, the content displayed by the first image is unclear.
一些示例中,第一图像的亮暗差别比较大,可能是由于拍摄时自然光线的角度原因而造成的。比如逆光拍摄得到的第一图像,主体比较暗,背景比较亮,从而导致第一图像中主体的细节看不清楚,只有阴影,颜色偏差。另一些示例中,也可能是因为电子设备自身的动态范围较为狭窄的原因造成的,也就是人们常说的电子设备降低画质的情况。比如肉眼看见的落日景色很震撼,但是通过电子设备记录的落日景色就没有肉眼看到的震撼。In some examples, the brightness difference of the first image is relatively large, which may be caused by the angle of natural light during shooting. For example, in the first image obtained by backlighting, the subject is relatively dark and the background is relatively bright, which makes the details of the subject in the first image unclear, and there are only shadows and color deviations. In other examples, it may also be caused by the narrow dynamic range of the electronic device itself, which is what people often call the situation where electronic devices reduce image quality. For example, the sunset scenery seen by the naked eye is very shocking, but the sunset scenery recorded by electronic equipment is not as shocking as that seen by the naked eye.
在一些实施例中,上述获取第一图像,具体的可以是电子设备的摄像头拍摄到的图像。如,电子设备在接收到用户开启相机应用的操作后,可以打开相机应用。之后,响应于用户的拍摄操作,电子设备可以通过电子设备的摄像头采集图像,即可获得第一图像。又如,需要进行人像识别的场景中,电子设备可以响应用户的操作,采用摄像头采集用户的图像,即可获得第一图像。在其他一些实施例中,第一图像也可以是从其他设备处接收的,或者从网络下载的,或者从电子设备的存储器中读取的。In some embodiments, the first image obtained above may specifically be an image captured by a camera of an electronic device. For example, after receiving an operation from a user to open a camera application, the electronic device may open a camera application. Afterwards, in response to the user's shooting operation, the electronic device may capture an image through the camera of the electronic device to obtain the first image. For another example, in a scene where portrait recognition is required, the electronic device may respond to the user's operation and use a camera to capture an image of the user to obtain the first image. In some other embodiments, the first image may also be received from other devices, downloaded from a network, or read from a memory of the electronic device.
考虑到对于一张图像而言,无法做到将其颜色和亮度完全解耦,因此,对两者的处理会存在互相影响的情况。即,在对图像的亮度进行处理时,可能能够保证了图像亮度的还原,但却影响了颜色;同理,在对图像的颜色进行处理时,可能能够保证对颜色的还原,但却相应的改变了图像的亮度。因此,在本申请实施例中,通过两个通路,如称为上通路和下通路分别对颜色和亮度进行还原处理。在通过两个通路分别对颜色和亮度进行还原处理之前,可以获取用于进行颜色和亮度还原处理的参数,如包括全局映射矩阵和局部映射矩阵。也就是说,在获取到第一图像之后,可以获取第一图像在亮度和颜色上到对应低动态范围图像的全局映射关系和局部映射关系,即获取第一图像的全局映射矩阵和局部映射矩阵,用于后续对颜色和亮度的处理。其中,获取第一图像的全局映射矩阵和局部映射矩阵的过程具体可以包括:S402-S406。Considering that it is impossible to completely decouple the color and brightness of an image, the processing of the two may affect each other. That is, when processing the brightness of the image, it may be possible to ensure the restoration of the brightness of the image, but it affects the color; similarly, when processing the color of the image, it may be possible to ensure the restoration of the color, but the brightness of the image is changed accordingly. Therefore, in an embodiment of the present application, two paths, such as the upper path and the lower path, are used to restore the color and brightness respectively. Before the color and brightness are restored respectively through the two paths, parameters for color and brightness restoration processing can be obtained, such as a global mapping matrix and a local mapping matrix. That is, after obtaining the first image, the global mapping relationship and local mapping relationship of the first image to the corresponding low dynamic range image in brightness and color can be obtained, that is, the global mapping matrix and the local mapping matrix of the first image are obtained for subsequent processing of color and brightness. Among them, the process of obtaining the global mapping matrix and the local mapping matrix of the first image can specifically include: S402-S406.
S402.对第一图像进行下采样处理。S402: Perform downsampling processing on the first image.
其中,下采样是指对于一个样值序列间隔几个样值取样一次,这样得到的新序列就是对原序列进行下采样后的结果。对图像进行下采样可以简单地理解为将图像缩小。在本申请实施例中,可以对第一图像进行下采样处理,以获得下采样后的第一图像。如,可以对第一图像中的像素值进行抽取处理,以获得下采样后的第一图像。例如,结合图3,以输入图像(full input)表示第一图像,可以将full input进行下采样处理,以获得下采样后的第一图像,如图3所示,以低输入预览图像(low input_ccm preview)表示。通过对图像进行下采样处理,可以降低后续图像处理过程中的计算量,从而提高处理效率。Among them, downsampling refers to sampling a sample value sequence once at intervals of several sample values, so that the new sequence obtained is the result of downsampling the original sequence. Downsampling an image can be simply understood as reducing the image. In an embodiment of the present application, the first image can be downsampled to obtain a downsampled first image. For example, the pixel values in the first image can be extracted to obtain the downsampled first image. For example, in conjunction with Figure 3, the first image is represented by an input image (full input), and the full input can be downsampled to obtain the downsampled first image, represented in Figure 3 by a low input preview image (low input_ccm preview). By downsampling the image, the amount of calculation in the subsequent image processing process can be reduced, thereby improving the processing efficiency.
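Step S402 can be sketched as stride-based decimation, matching the "sample once every few values" description above. This is only an illustrative sketch; a production pipeline would more likely use area averaging or bilinear resizing, and the image here is a synthetic stand-in.

```python
import numpy as np

# Stand-in for the first image (full input): a 1024x1024 3-channel HDR frame.
full_input = np.zeros((1024, 1024, 3), dtype=np.float32)

# Downsample by keeping every 2nd pixel in each spatial dimension
# ("sampling once at intervals of several sample values").
low_input = full_input[::2, ::2, :]

print(low_input.shape)  # half the resolution in each dimension
```

This reproduces the size reduction in the example that follows: a 1024px*1024px input becomes a 512px*512px downsampled image.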
示例性的,假设第一图像是一个1024像素(Pixel,px)*1024px大小的图像,下采样处理后得到的第一图像可以是一个512px*512px大小的图像。应注意,上述实施例中的下采样处理后的第一图像是以512px*512px大小进行举例的,也可以是64px*64px大小,具体不作唯一限定。For example, assuming that the first image is an image of 1024 pixels (Pixel, px)*1024px, the first image obtained after downsampling processing may be an image of 512px*512px. It should be noted that the first image after downsampling processing in the above embodiment is exemplified by a size of 512px*512px, and may also be 64px*64px, which is not specifically limited.
在一些实施例中,S402为可选步骤。In some embodiments, S402 is an optional step.
S403.获取第一图像的图像特征,图像特征包括颜色特征、纹理特征、形状特征和空间关系特征中的一个或多个。S403. Obtain image features of the first image, where the image features include one or more of color features, texture features, shape features, and spatial relationship features.
在本申请一些实施例中,可以利用神经网络中的处理单元提取第一图像的图像特征。神经网络是由大量处理单元互联组成的非线性、自适应信息处理系统。作为一种示例,神经网络可以包括特征处理单元。例如,继续结合图3,在获取到第一图像(如,full input),并对第一图像进行下采样处理(得到low input_ccm preview)后,可以将下采样处理后的第一图像输入神经网络的特征处理单元,以便其完成对下采样后的第一图像的特征提取,输出第一图像的图像特征,以(low level feature)表示。其中,第一图像的图像特征是一个二维的数据,可以简单理解为一个平面的数据,在本申请实施例中可以将图像特征理解成为第一图像的特征图。图像特征可以包括颜色特征、纹理特征、形状特征、空间关系特征中的一个或多个。In some embodiments of the present application, the processing unit in the neural network can be used to extract the image features of the first image. A neural network is a nonlinear, adaptive information processing system composed of a large number of interconnected processing units. As an example, the neural network may include a feature processing unit. For example, continuing with Figure 3, after obtaining the first image (such as full input) and downsampling it (to obtain low input_ccm preview), the downsampled first image can be input into the feature processing unit of the neural network so that it can complete the feature extraction of the downsampled first image and output the image features of the first image, represented by (low level feature). Among them, the image feature of the first image is two-dimensional data, which can be simply understood as planar data. In the embodiment of the present application, the image feature can be understood as a feature map of the first image. Image features may include one or more of color features, texture features, shape features, and spatial relationship features.
其中,颜色特征用于描述图像或图像区域所对应的景物的表面性质。纹理特征用于描述图像或图像区域所对应景物的表面性质。与颜色特征不同,纹理特征不是基于像素值的特征,它需要在包含多个像素值的区域中进行统计计算。形状特征有两类表示方法,一类是轮廓特征,另一类是区域特征。图像的轮廓特征主要针对物体的外边界,而图像的区域特征则关系到整个形状区域。空间关系特征用于描述图像中分割出来的多个目标之间的相互的空间位置或相对方向关系,这些关系也可分为连接/邻接关系、交叠/重叠关系和包含/包容关系等。Among them, color features are used to describe the surface properties of the scene corresponding to an image or an image region. Texture features are used to describe the surface properties of the scene corresponding to an image or an image region. Unlike color features, texture features are not based on pixel values, and they need to be statistically calculated in a region containing multiple pixel values. There are two types of representation methods for shape features, one is contour features and the other is regional features. The contour features of an image are mainly aimed at the outer boundaries of an object, while the regional features of an image are related to the entire shape region. Spatial relationship features are used to describe the spatial position or relative direction relationship between multiple targets segmented from an image. These relationships can also be divided into connection/adjacency relationships, overlap/overlapping relationships, and inclusion/inclusion relationships.
需要说明的是,在通过神经网络的特征处理单元对第一图像处理时,是通过多层卷积实现特征提取的。在这个过程中,得到的图像特征同样进行了压缩处理。换句话说,获得的第一图像的图像特征是一个大小比第一图像(或者说比下采样处理后的第一图像)更小的图像特征图。接着上述的示例来看,第一图像是一个1024px*1024px大小的图像,下采样处理后得到的第一图像是一个512px*512px大小的图像,那经过特征处理单元处理后得到的第一图像的图像特征可以是多个64px*64px大小的图像。It should be noted that when the first image is processed by the feature processing unit of the neural network, feature extraction is achieved through multi-layer convolution. In this process, the obtained image features are also compressed. In other words, the image feature obtained for the first image is an image feature map that is smaller than the first image (or than the first image after downsampling). Continuing with the above example, the first image is an image of size 1024px*1024px, and the first image obtained after downsampling is an image of size 512px*512px. Then the image features of the first image obtained after processing by the feature processing unit can be multiple images of size 64px*64px.
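The spatial compression described above (512px*512px features shrinking to 64px*64px) can be sketched numerically. The patent does not specify the convolution layers, so a non-overlapping 2x2 mean pool stands in here for each strided stage; only the size arithmetic is the point.

```python
import numpy as np

def mean_pool2x2(x):
    """Average non-overlapping 2x2 blocks, halving height and width."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# Stand-in for one channel of the downsampled first image.
x = np.random.rand(512, 512).astype(np.float32)

# Three halving stages: 512 -> 256 -> 128 -> 64, matching the example sizes.
for _ in range(3):
    x = mean_pool2x2(x)

print(x.shape)  # the 64x64 low-level feature map of the example
```

In a real network each stage would be a learned strided convolution producing many channels, but the resolution bookkeeping is the same.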
S404.基于图像特征确定第一图像的全局特征和局部特征。S404. Determine global features and local features of the first image based on the image features.
具体的,在获取到第一图像的图像特征之后,电子设备可以基于图像特征确定第一图像对应的全局特征和局部特征。Specifically, after acquiring the image features of the first image, the electronic device may determine the global features and local features corresponding to the first image based on the image features.
其中,全局特征用于表征高动态范围图像在颜色和亮度上到低动态范围图像的全局的映射关系。所谓的全局的映射关系也就是指高动态范围图像的全局区域和低动态范围图像的全局区域中的亮度和颜色的值是一一对应关系。也就是说,高动态范围图像的全局区域的像素值的数量和低动态范围图像的全局区域的像素值的数量是等同的。如高动态范围图像的像素值的一般取值范围是[0-255],低动态范围图像的像素值的一般取值范围也是[0-255]。局部特征用于表征高动态范围图像在颜色和亮度上到低动态范围图像的局部的映射关系。所谓的局部的映射关系也就是指高动态范围图像的部分区域和低动态范围图像对应的局部区域中的亮度和颜色的像素值是一一对应关系。也就是说,在局部区域高动态范围图像的像素值的数量和低动态范围图像的像素值的数量是等同的。如高动态范围图像是[0-10],低动态范围图像也是[0-10],亦或是高动态范围图像是[0-10],低动态范围图像是[5-15]。简单来说,局部特征能够反映第一图像局部区域的特征,如,第一图像中特别亮或特别暗的区域的特征,方便后续进行针对性的处理,如后续可针对图像中暗的区域进行适应性调亮,针对图像中亮的地方进行适应性调暗。在本实施例中,第一图像的局部特征可以包括多个。如可以简单理解为将第一图像的图像特征划分为多个小区域,比如4个3*3的区域图。Among them, the global feature is used to characterize the global mapping relationship of the high dynamic range image to the low dynamic range image in terms of color and brightness. The so-called global mapping relationship means that the brightness and color values in the global area of the high dynamic range image and the global area of the low dynamic range image are in a one-to-one correspondence. That is to say, the number of pixel values in the global area of the high dynamic range image and the number of pixel values in the global area of the low dynamic range image are the same. For example, the general value range of the pixel value of the high dynamic range image is [0-255], and the general value range of the pixel value of the low dynamic range image is also [0-255]. The local feature is used to characterize the local mapping relationship of the high dynamic range image to the low dynamic range image in terms of color and brightness. The so-called local mapping relationship means that the brightness and color pixel values in the partial area of the high dynamic range image and the corresponding local area of the low dynamic range image are in a one-to-one correspondence. That is to say, in the local area, the number of pixel values of the high dynamic range image and the number of pixel values of the low dynamic range image are the same. For example, if the high dynamic range image is [0-10], the low dynamic range image is also [0-10]; or if the high dynamic range image is [0-10], the low dynamic range image is [5-15]. In simple terms, the local features can reflect the features of the local area of the first image, such as the features of the particularly bright or dark areas in the first image, so as to facilitate subsequent targeted processing, such as adaptively brightening the dark areas in the image and adaptively dimming the bright areas in the image. In this embodiment, the local features of the first image may include multiple ones. For example, it can be simply understood that the image features of the first image are divided into multiple small areas, such as 4 3*3 area maps.
在本申请一些实施例中,可以利用神经网络中的处理单元提取第一图像的全局特征和局部特征。作为一种示例,神经网络还可以包括全局特征处理单元和局部特征处理单元。例如,继续结合图3,在获取到第一图像的图像特征后,或者说特征处理单元输出lowlevel feature后,可以将该low level feature输入全局特征处理单元和局部特征处理单元。之后,全局特征处理单元可以基于第一图像的图像特征进行第一图像的全局特征的提取,以获得第一图像的全局特征,如图3中用全局特征(Global_Feature)表示。局部特征处理单元可以基于第一图像的图像特征进行第一图像的局部特征的提取,以获得第一图像的局部特征,如图3中用局部特征(Local_Feature)表示。In some embodiments of the present application, the processing unit in the neural network can be used to extract the global features and local features of the first image. As an example, the neural network may also include a global feature processing unit and a local feature processing unit. For example, continuing with Figure 3, after the image features of the first image are acquired, or after the feature processing unit outputs the low-level feature, the low-level feature may be input into the global feature processing unit and the local feature processing unit. Afterwards, the global feature processing unit may extract the global features of the first image based on the image features of the first image to obtain the global features of the first image, as represented by the global feature (Global_Feature) in Figure 3. The local feature processing unit may extract the local features of the first image based on the image features of the first image to obtain the local features of the first image, as represented by the local feature (Local_Feature) in Figure 3.
其中,作为一种示例,第一图像的全局特征的提取可以是通过全连接层实现的。第一图像的局部特征的提取可以是通过卷积和下采样实现的。As an example, the extraction of the global features of the first image may be implemented through a fully connected layer, and the extraction of the local features of the first image may be implemented through convolution and downsampling.
可以理解,全局特征处理单元和局部特征处理单元均是预先训练好的,其处理单元的核也就是固定的。例如,将第一图像的图像特征输入到全局特征处理单元得到的输出可以是一个64px*64px大小的全局特征。这是因为,全局特征处理单元的核是64px*64px,所以最后输出的全局特征是一个64px*64px大小的图像。针对局部特征,将第一图像的图像特征输入到局部特征处理单元得到的输出是多个局部区域大小的局部特征。其中,多个局部区域大小是指将第一图像的图像特征拆分为N*M个局部区域,如,N和M都设置为3,那局部区域大小为3px*3px。假设局部特征处理单元的卷积核是3px*3px,那么经过局部特征处理单元的处理便将第一图像的图像特征划分为多个3px*3px大小的图像,如最后输出的局部特征是四个3px*3px大小的[0-10]的图像。It can be understood that both the global feature processing unit and the local feature processing unit are pre-trained, and the core of the processing unit is fixed. For example, the output obtained by inputting the image feature of the first image into the global feature processing unit can be a global feature of 64px*64px size. This is because the core of the global feature processing unit is 64px*64px, so the global feature outputted in the end is an image of 64px*64px size. For local features, the output obtained by inputting the image feature of the first image into the local feature processing unit is a local feature of multiple local area sizes. Among them, multiple local area sizes refer to splitting the image feature of the first image into N*M local areas, such as, N and M are both set to 3, then the local area size is 3px*3px. Assuming that the convolution kernel of the local feature processing unit is 3px*3px, then after being processed by the local feature processing unit, the image feature of the first image is divided into multiple images of 3px*3px size, such as the local features outputted in the end are four 3px*3px size [0-10] images.
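The splitting of a feature map into N*M local regions described above can be sketched with plain array slicing. The sizes below are illustrative stand-ins, not the trained unit's actual kernel sizes.

```python
import numpy as np

def split_regions(feat, n, m):
    """Split a (H, W) feature map into a row-major list of n*m tiles."""
    h, w = feat.shape
    return [feat[i * h // n:(i + 1) * h // n, j * w // m:(j + 1) * w // m]
            for i in range(n) for j in range(m)]

# A small 6x6 stand-in feature map, split into a 3x3 grid of local regions.
feat = np.arange(36, dtype=np.float32).reshape(6, 6)
tiles = split_regions(feat, 3, 3)

print(len(tiles), tiles[0].shape)  # nine 2x2 local regions
```

The local feature processing unit would additionally apply learned convolutions per region; the slicing only shows how the N*M partition is laid out.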
It should be noted that, for both the global and local feature processing units, the kernel size is determined during training according to actual needs and is not limited to the sizes in the above example; other kernel sizes are possible, e.g. a 512px*512px kernel for the global feature processing unit and a 24px*24px kernel for the local feature processing unit.
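As a minimal sketch (not the patent's actual trained network), the split between one global descriptor and N*M per-region local descriptors can be illustrated in plain Python, with per-region averages standing in for the trained fully-connected and convolutional heads:

```python
# Sketch: derive one global descriptor and N*M local descriptors from a
# feature map. The trained FC / conv layers are replaced by simple means;
# the 3x3 region grid mirrors the N = M = 3 example above (illustrative only).

def global_feature(fmap):
    """Mean over the whole feature map (stand-in for the FC global head)."""
    h, w = len(fmap), len(fmap[0])
    return sum(sum(row) for row in fmap) / (h * w)

def local_features(fmap, n=3, m=3):
    """Per-region means over an n*m grid (stand-in for conv + downsampling)."""
    h, w = len(fmap), len(fmap[0])
    rh, rw = h // n, w // m
    feats = []
    for i in range(n):
        for j in range(m):
            block = [fmap[r][c]
                     for r in range(i * rh, (i + 1) * rh)
                     for c in range(j * rw, (j + 1) * rw)]
            feats.append(sum(block) / len(block))
    return feats

fmap = [[float(r * 6 + c) for c in range(6)] for r in range(6)]
g = global_feature(fmap)      # one scalar summarising the whole map
locs = local_features(fmap)   # 9 scalars, one per 2x2 region
```

The point of the sketch is only the shape of the split: the global head collapses the entire feature map into one summary, while the local head keeps one summary per region.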
Brightness and color manifest differently in image features: brightness depends mainly on global features, while color depends mainly on local features. After the global and local features of the first image are obtained, in some embodiments the brightness of the first image may be restored based on the global features, and the color based on the local features. In other embodiments, since brightness and color cannot be fully decoupled, that is, the two influence each other, the brightness restoration may take into account not only the global features but also the local features of the image; similarly, the color restoration may take into account not only the local features but also the global features. Specifically, a global mapping matrix corresponding to the global features and a local mapping matrix corresponding to the local features may be obtained and used for the subsequent restoration of brightness and color respectively, i.e. the following S405 and S406 are executed.
S405. Determine a global mapping matrix based on the global features, the local features and a first weight coefficient.
Here, the global and local features may be multiplied by different weight coefficients to obtain the global mapping matrix.
In some embodiments of the present application, the global mapping matrix may be obtained through a gating mechanism (GFM). For example, in conjunction with Figure 4, after the global feature (Global_Feature) and local feature (Local_Feature) of the first image are obtained, they may be input into the GFM. The GFM then outputs the global mapping matrix corresponding to the global features, denoted coeff1 in Figure 4, based on the input global features, local features and the first weight coefficient. In other words, the global mapping matrix may be determined from the global and local features together with the first weight coefficient in the GFM, e.g. by multiplying the global and local features by different weight values. Since the global mapping matrix is mainly used for the subsequent brightness restoration, i.e. it emphasizes global feature processing, the first weight coefficient should favor the global features.
As an example, the first weight coefficient may include multiple weight values covering both the global features and the local features. Since there may be multiple local features, there are correspondingly multiple weight values for the local features, in one-to-one correspondence with them. Among these weight values, the value for the global features differs from those for the local features; for example, the weight value for the global features is greater than those for the local features. The weight values for different local features may be the same or different.
For example, let the global feature be G and the two local features be L1 and L2. The first weight coefficient may then include three weight values: one for the global feature G, say A, and one for each of the two local features L1 and L2, say B and C respectively. After G, L1 and L2 are input into the GFM, the output global mapping matrix coeff1 can be written simply as coeff1 = G*A + L1*B + L2*C, where A > B and A > C.
S406. Determine a local mapping matrix based on the global features, the local features and a second weight coefficient.
Similar to S405, the global and local features may be multiplied by different weight coefficients to obtain the local mapping matrix.
In some embodiments, the local mapping matrix may also be obtained through the GFM. For example, in conjunction with Figure 3, the global feature (Global_Feature) and local feature (Local_Feature) may be input into the GFM, which then outputs the local mapping matrix corresponding to the local features, denoted coeff2 in Figure 3, based on the input global features, local features and the second weight coefficient. In other words, the local mapping matrix may be determined from the global and local features together with the second weight coefficient in the GFM. Since the local mapping matrix is mainly used for the subsequent color restoration, i.e. it emphasizes local feature processing, the second weight coefficient should favor the local features.
Similarly, the second weight coefficient may include multiple weight values covering both the global and local features. Among them, the weight value for the global features differs from those for the local features; for example, the weight values for the local features are greater than that for the global features.
For example, continuing with the global feature G and the two local features L1 and L2, the second weight coefficient may include three weight values: one for the global feature G, say X, and one for each of the two local features L1 and L2, say Y and Z respectively. After G, L1 and L2 are input into the GFM, it can also output the local mapping matrix coeff2, which can be written simply as coeff2 = G*X + L1*Y and G*X + L2*Z, where Y > X and Z > X.
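The gating step of S405 and S406 can be sketched as follows. The concrete weight values are illustrative assumptions, not from the patent; the only property preserved is that coeff1 is global-heavy (A > B, A > C) and coeff2 is local-heavy (Y > X, Z > X):

```python
# Sketch of the GFM gating: the same global/local features are blended with
# two different weight sets to yield coeff1 (brightness-oriented, favoring
# the global feature) and coeff2 (color-oriented, favoring local features).

def gate(global_feat, local_feats, w_global, w_locals):
    """Element-wise weighted sum: G*wg + sum(Li*wi), over flat feature lists."""
    out = [v * w_global for v in global_feat]
    for feat, w in zip(local_feats, w_locals):
        out = [o + v * w for o, v in zip(out, feat)]
    return out

G = [1.0, 2.0]                      # toy global feature
L1, L2 = [0.5, 0.5], [0.2, 0.8]     # toy local features

# coeff1: the global weight dominates (A > B, A > C)
coeff1 = gate(G, [L1, L2], 0.8, [0.1, 0.1])
# coeff2: the local weights dominate (Y > X, Z > X)
coeff2 = gate(G, [L1, L2], 0.2, [0.4, 0.4])
```

With these toy weights, coeff1 stays close to the global feature G while coeff2 is pulled toward the local features, mirroring the intended roles of the two matrices.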
It should be noted that this embodiment does not limit the execution order of S405 and S406: S405 may be executed before S406, S406 before S405, or the two may be executed at the same time.
Based on the above, it can be understood that in this embodiment both the global mapping matrix and the global features indicate the global mapping, in brightness and color, from the first image to the corresponding low-dynamic-range image; the difference is that the global mapping matrix takes the local features into account in addition to the global features. Likewise, both the local mapping matrix and the local features indicate the local mapping, in brightness and color, from the first image to the corresponding low-dynamic-range image; the difference is that the local mapping matrix takes the global features into account in addition to the local features.
After the global and local mapping matrices are obtained, brightness and color restoration can be performed separately through different channels (referred to as the upper channel and the lower channel) based on the two matrices. For example, the brightness of the first image may be corrected based on the global mapping matrix to obtain a second image; the color may then be corrected based on the local mapping matrix and the second image to obtain a third image; finally, the third image, which is the low-dynamic-range image corresponding to the first image, may be output. Specifically, this may include the following steps S407-S411.
S407. Obtain a texture image of the first image.
In some embodiments of the present application, a processing unit in the neural network may be used to extract the texture image of the first image. As an example, the neural network may further include a guidance processing unit. For example, continuing with Figure 3, the first image may be input into the guidance processing unit of the neural network so that it outputs the texture image of the first image (also called the Guidance_map). Obtaining the texture image of the first image avoids, to a certain extent, the loss of detail in the first image.
S408. Determine a mapping relationship map of the first image based on the texture image and the global mapping matrix.
Specifically, after obtaining the texture image of the first image, the electronic device may determine the mapping relationship map of the first image through the upper channel based on the texture image and the global mapping matrix.
In some embodiments of the present application, a processing unit in the neural network may determine the mapping relationship map of the first image from the texture image and the global mapping matrix. As an example, the neural network may further include a decoding processing unit 1. For example, continuing with Figure 3, after the texture image of the first image is obtained, that is, after the guidance processing unit outputs the Guidance_map, the Guidance_map and coeff1 may be input into decoding processing unit 1 (also called Decoder_DRC). Decoding processing unit 1 then performs decoding based on the Guidance_map and coeff1 to obtain the mapping relationship map of the first image (called the Gain_map).
Specifically, decoding processing unit 1 performs guided interpolation on coeff1 using the Guidance_map to obtain the mapping relationship map of the first image. For example, the texture image and the global mapping matrix are input into Decoder_DRC, which outputs the Gain_map.
Interpolation, sometimes called resampling, is a method of increasing the pixel dimensions of an image without capturing new pixels: the colors of the missing pixels are computed mathematically from the colors of the surrounding pixels, artificially increasing the image's resolution.
It should be understood that the mapping relationship map obtained in S408 is determined mainly by interpolating the global mapping matrix, and interpolation increases the size of the image. In this embodiment of the present application, the amount of interpolation may be determined by the size of the first image; that is, this step can be understood as restoring the original size of the first image to obtain its mapping relationship map. Compressing the image size first (e.g. through downsampling and feature extraction) and restoring it afterwards, as in the above embodiments, reduces the amount of computation in the neural network and the probability of errors. For example, decoding processing unit 1 outputs a 1024px*1024px global full-resolution mapping image, i.e. the above mapping relationship map.
Based on the above, it can be understood that in this embodiment both the mapping relationship map and the global mapping matrix indicate the global mapping, in brightness and color, from the first image to the corresponding low-dynamic-range image; the difference is that the mapping relationship map is the image obtained by upsampling the global mapping matrix, i.e. after the image size is restored.
In one example, across the above steps S401-S407: the first image is 1024px*1024px; downsampling it yields a 512px*512px first image; obtaining its image features yields multiple 64px*64px feature maps; the global mapping matrix determined from the global features is 64px*64px; and the mapping relationship map of the first image, determined from the texture image and the global mapping matrix, is 1024px*1024px.
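The size-restoring step above can be sketched as follows. The patent's guided interpolation through the Guidance_map is replaced here, purely as a simplifying assumption, by nearest-neighbour upscaling, which is enough to show how a low-resolution coefficient grid is expanded back to the input resolution:

```python
# Sketch: expand a low-resolution coefficient grid (e.g. the 64x64 global
# mapping matrix in the example above) back up to the input resolution.
# NOTE: plain nearest-neighbour upscaling is an assumption for brevity; the
# text describes interpolation guided by the texture image (Guidance_map).

def upscale_nearest(grid, factor):
    """Nearest-neighbour upscale of a 2-D grid by an integer factor."""
    out = []
    for row in grid:
        wide = [v for v in row for _ in range(factor)]
        out.extend([list(wide) for _ in range(factor)])
    return out

coeff = [[1.0, 2.0],
         [3.0, 4.0]]                    # toy 2x2 "coeff1"
gain_map = upscale_nearest(coeff, 2)    # 2x2 -> 4x4 "Gain_map"
```

In the example above the factor would be 1024/64 = 16, taking the 64px*64px matrix to the 1024px*1024px mapping relationship map; real guided interpolation would additionally smooth the result along edges of the Guidance_map.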
S409. Correct the brightness of the first image based on the mapping relationship map to obtain a second image.
Specifically, after determining the mapping relationship map, the electronic device may perform brightness restoration on the first image using the mapping relationship map to obtain the brightness-restored second image.
In some embodiments of the present application, an intermediate image may be obtained by performing brightness restoration on the three channels (R, G and B) of the first image based on the mapping relationship map obtained in the upper channel. As an example, continuing with Figure 3, the intermediate image may be determined from the mapping relationship map and the first image, e.g. by computing their product, and is denoted image 1 (image1, img1) in Figure 3. This intermediate image may be the second image of the present application.
For example, the second image may be determined by the following formula:
Contrast(R,G,B)(x,y) = input(R,G,B)(x,y) * Gain_map(x,y)
where Contrast(R,G,B) is the second image, i.e. img1; input(R,G,B)(x,y) is the first image, i.e. the full input; and Gain_map(x,y) is the mapping relationship map.
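The per-pixel application of this formula can be sketched directly; pixel and gain values below are illustrative:

```python
# Per-pixel application of the Gain_map, following
#   Contrast(R,G,B)(x,y) = input(R,G,B)(x,y) * Gain_map(x,y)
# on a tiny 2x2 image with pixels stored as (R, G, B) tuples.

def apply_gain(image, gain_map):
    """Scale each pixel's R, G and B by the gain at that position."""
    return [[tuple(c * gain_map[y][x] for c in image[y][x])
             for x in range(len(image[0]))]
            for y in range(len(image))]

first_image = [[(10, 20, 30), (40, 50, 60)],
               [(70, 80, 90), (100, 110, 120)]]
gain_map = [[1.0, 0.5],
            [2.0, 1.0]]
second_image = apply_gain(first_image, gain_map)   # img1
```

Note that the same scalar gain is applied to all three channels of a pixel, which is what lets this step adjust brightness while leaving the channel ratios (and hence most of the color) to the lower channel.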
S407-S409 above constitute the brightness restoration performed in the upper channel. The color restoration performed in the lower channel is described next, comprising S410-S411.
S410. Determine a color correction matrix for the second image based on the second image and the local mapping matrix.
Specifically, after determining the second image, the electronic device may determine the color correction matrix of the second image through the lower channel based on the second image and the local mapping matrix. The color correction matrix may be a 3*3 matrix, i.e. it may include 9 coefficients, which are used to perform color correction on the second image.
In some embodiments of the present application, a processing unit in the neural network may determine the color correction matrix of the second image. As an example, the neural network may further include a decoding processing unit 2. For example, continuing with Figure 3, after img1 is obtained, img1 and coeff2 may be decoded to obtain the color correction matrix of the second image, called CC(x,y).
Similarly, decoding processing unit 2 interpolates coeff2 under the guidance of img1 to obtain the color correction matrix of the second image. For example, the second image and the local mapping matrix are input into Decoder_CC, which outputs CC(x,y); that is, decoding processing unit 2 outputs a 1024px*1024px global full-resolution mapping image.
In one example, across the above steps S401-S407: the first image is 1024px*1024px; downsampling it yields a 512px*512px first image; obtaining its image features yields multiple 64px*64px feature maps; the local mapping matrix determined from the local features is 64px*64px; and the color correction matrix, determined from the second image and the local mapping matrix, is 1024px*1024px.
S411. Correct the color of the second image based on the color correction matrix to obtain a third image.
Specifically, after determining the color correction matrix, the electronic device may perform color restoration on the second image using the color correction matrix to obtain the color-restored third image.
In some embodiments of the present application, the RGB channels of the second image may be color-corrected based on the color correction matrix obtained in the lower channel to obtain the corrected third image, denoted img2 in Figure 3. For example, color correction is performed using the product of CC(x,y) and the RGB channels of the second image (img1), and the third image is output.
For example, the third image may be determined by the following formulas:
R'(x,y) = a0(x,y)*R(x,y) + a1(x,y)*G(x,y) + a2(x,y)*B(x,y)
G'(x,y) = b0(x,y)*R(x,y) + b1(x,y)*G(x,y) + b2(x,y)*B(x,y)
B'(x,y) = c0(x,y)*R(x,y) + c1(x,y)*G(x,y) + c2(x,y)*B(x,y)
where a0, a1, a2, b0, b1, b2, c0, c1 and c2 are the coefficients of the color correction matrix; R(x,y), G(x,y) and B(x,y) are the R, G and B channels of the second image; and R'(x,y), G'(x,y) and B'(x,y) are the R, G and B channels after processing by the color correction matrix.
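The three formulas above amount to a 3x3 matrix applied to each pixel's (R, G, B) vector. As a sketch, a single spatially-constant matrix is used below; in the text the nine coefficients vary with (x, y). The "warmer" matrix values are illustrative only:

```python
# Per-pixel application of a 3x3 color correction matrix, following the
# R'/G'/B' formulas above. The matrix rows are (a0, a1, a2), (b0, b1, b2)
# and (c0, c1, c2).

def apply_ccm(pixel, ccm):
    """Multiply one (R, G, B) pixel by a 3x3 matrix given as lists of rows."""
    r, g, b = pixel
    return tuple(row[0] * r + row[1] * g + row[2] * b for row in ccm)

identity = [[1, 0, 0],
            [0, 1, 0],
            [0, 0, 1]]
warmer = [[1.1, 0.0, 0.0],    # illustrative coefficients only
          [0.0, 1.0, 0.0],
          [0.0, 0.1, 0.9]]

px = (100, 120, 140)
unchanged = apply_ccm(px, identity)   # identity matrix leaves the pixel as-is
corrected = apply_ccm(px, warmer)     # boosts red, pulls blue toward green
```

Unlike the gain step, each output channel here is a mix of all three input channels, which is what makes this stage a color (rather than brightness) correction.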
From the above it can be understood that, in the embodiments of the present application, a single neural network implements tone mapping, i.e. converts a high-dynamic-range image into a low-dynamic-range image, which avoids error accumulation and ensures accurate color in the final low-dynamic-range output. The neural network contains two independent channels (the upper channel and the lower channel) that perform the brightness and color restoration respectively, achieving a partial decoupling of brightness and color. Moreover, both restorations consider not only the global features but also the local features of the image, i.e. brightness and color interact through a parameter-sharing mechanism, which prevents loss of texture detail in the restored low-dynamic-range image. Finally, performing the color correction inside the neural network avoids negative truncation and high-bit clipping of the image, further ensuring accurate restoration of texture detail.
It should be noted that, before the neural network provided in the present application is applied to tone mapping, it must be trained. During training, once the image output by the upper channel (e.g. img1) and the image output by the lower channel (e.g. img2) both satisfy the network's constraints, the iterative training stops, yielding the neural network that can finally be used for tone mapping.
For example, img1 output from the upper channel must satisfy the brightness constraint of the neural network, and img2 output from the lower channel must satisfy its color constraint.
As an example, the brightness constraint function may be determined from the average of the differences between the brightness values of img1 and those of the full input. For instance, the brightness constraint function may be the following formula:
Light_LOSS = Avg(F(Img1) - F(Label))
where Light_LOSS denotes the brightness constraint function, Avg is the mean, Label is the first image, and F is the formula for computing brightness.
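A minimal sketch of this constraint follows. The text does not spell out the brightness formula F, so Rec.709 luma is substituted here purely as an assumption:

```python
# Sketch of the brightness constraint Light_LOSS = Avg(F(Img1) - F(Label)).
# ASSUMPTION: F is taken to be Rec.709 luma; the patent does not specify it.

def luma(pixel):
    """Rec.709 luma of an (R, G, B) pixel (stand-in for F)."""
    r, g, b = pixel
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def light_loss(img1, label):
    """Mean per-pixel difference of the two images' brightness."""
    diffs = [luma(p) - luma(q)
             for row1, row2 in zip(img1, label)
             for p, q in zip(row1, row2)]
    return sum(diffs) / len(diffs)

same = [[(10, 20, 30), (40, 50, 60)]]
loss_zero = light_loss(same, same)          # identical images -> zero loss
brighter = [[(20, 30, 40), (50, 60, 70)]]
loss_pos = light_loss(brighter, same)       # uniformly brighter output
```

A signed mean, as written in the formula, rewards matching overall brightness; a practical training loss would more likely use an absolute or squared difference, but the sketch stays faithful to the formula as given.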
As another example, the color constraint function is determined from the difference between the RGB image and the LAB image; for instance, the color constraint function may be given by the following formulas:
O_L, O_A, O_B = rgb2lab(Img2)
L_L, L_A, L_B = rgb2lab(Label)
SC = 1 + 0.0225*(C1 + C2)
SH = 1 + 0.0075*(C1 + C2)
delta_a = O_A - L_A
delta_b = O_B - L_B
delta_C = C1 - C2
delta_L = O_L - L_L
delta_H = delta_a**2 + delta_b**2 - delta_C**2
where Color_LOSS denotes the color constraint function and the Label image is the reference (standard) image; O_L, O_A and O_B are the lightness, red/green component and yellow/blue component of the img2 image in LAB space, and L_L, L_A and L_B are the corresponding values of the Label image; C1 is the overall hue-and-chroma value of the img2 image and C2 that of the Label image; SC and SH are the weighting factors between the img2 and Label images under the 0.0225 and 0.0075 coefficients respectively; delta_a is the red/green color difference between the img2 and Label images, delta_b their yellow/blue color difference, delta_C their chroma difference, delta_L their lightness difference, and delta_H their color (hue) loss term. It should be noted that the coefficients 0.0225 and 0.0075 are preferred under the current constraints; they could also be 0.045 and 0.015, or other values, and are not uniquely limited here.
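The listed terms mirror the CIE94 color-difference formula, but the text stops short of the final Color_LOSS combination. The sketch below is therefore a hedged reconstruction under explicit assumptions: the inputs are taken to be already in LAB (rgb2lab is omitted), C1 and C2 are assumed to be the chroma values sqrt(a^2 + b^2) of img2 and Label, and the final combination is assumed to follow CIE94:

```python
import math

# Hedged reconstruction of Color_LOSS from the terms above.
# ASSUMPTIONS (not stated in the text): inputs are already LAB triples;
# C1/C2 are chroma values sqrt(a^2 + b^2); the final combination follows
# the CIE94 color-difference formula, which the SC/SH/delta terms mirror.

def color_loss(lab_out, lab_label):
    o_l, o_a, o_b = lab_out      # img2 in LAB
    l_l, l_a, l_b = lab_label    # Label in LAB
    c1 = math.hypot(o_a, o_b)    # assumed chroma of img2
    c2 = math.hypot(l_a, l_b)    # assumed chroma of Label
    sc = 1 + 0.0225 * (c1 + c2)
    sh = 1 + 0.0075 * (c1 + c2)
    delta_a = o_a - l_a
    delta_b = o_b - l_b
    delta_c = c1 - c2
    delta_l = o_l - l_l
    delta_h = delta_a ** 2 + delta_b ** 2 - delta_c ** 2  # already squared
    # CIE94-style combination (assumed); max() guards rounding below zero
    return math.sqrt(max(delta_l ** 2
                         + (delta_c / sc) ** 2
                         + delta_h / sh ** 2, 0.0))

loss_same = color_loss((50.0, 10.0, -10.0), (50.0, 10.0, -10.0))
loss_diff = color_loss((55.0, 12.0, -10.0), (50.0, 10.0, -10.0))
```

The key behavior to note is that SC and SH grow with chroma, so the same a/b difference costs less in saturated regions than in near-neutral ones, matching the intent of the 0.0225 and 0.0075 coefficients above.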
Other embodiments of the present application provide an electronic device, which may include: the above display screen, a camera, a memory and one or more processors, with the display screen, camera, memory and processor coupled to one another. The memory stores computer program code comprising computer instructions. When the processor executes the computer instructions, the electronic device can perform the functions or steps performed by the mobile phone in the above method embodiments. For the structure of the electronic device, reference may be made to the structure of the mobile phone shown in Figure 2.
An embodiment of the present application further provides a chip system. As shown in Figure 5, the chip system 500 includes at least one processor 501 and at least one interface circuit 502, which may be interconnected by lines. For example, the interface circuit 502 may be used to receive signals from other apparatuses (e.g. the memory of an electronic device); as another example, it may be used to send signals to other apparatuses (e.g. the processor 501). Illustratively, the interface circuit 502 may read instructions stored in the memory and send them to the processor 501; when the instructions are executed by the processor 501, the electronic device can perform the steps of the above embodiments. Of course, the chip system may also include other discrete devices, which are not specifically limited in the embodiments of the present application.
An embodiment of the present application further provides a computer storage medium comprising computer instructions which, when run on the above electronic device, cause the electronic device to perform the functions or steps performed by the mobile phone in the above method embodiments.
An embodiment of the present application further provides a computer program product which, when run on a computer, causes the computer to perform the functions or steps performed by the mobile phone in the above method embodiments.
From the description of the above implementations, those skilled in the art will clearly understand that, for convenience and brevity of description, only the division into the above functional modules is used as an example; in practical applications, the above functions may be allocated to different functional modules as needed, i.e. the internal structure of the apparatus may be divided into different functional modules to perform all or part of the functions described above.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division into modules or units is only a division by logical function, and other divisions are possible in actual implementation; for instance, multiple units or components may be combined or integrated into another apparatus, or some features may be omitted or not executed. Furthermore, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, apparatuses or units, and may be electrical, mechanical or of other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may be one physical unit or multiple physical units, i.e. located in one place or distributed across multiple places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, or each unit may exist physically on its own, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a readable storage medium. Based on this understanding, the technical solutions of the embodiments of this application essentially, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions to enable a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to perform all or part of the steps of the methods described in the embodiments of this application. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above content describes only specific implementations of this application, but the protection scope of this application is not limited thereto. Any variation or replacement within the technical scope disclosed in this application shall fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311655933.XA CN118474553A (en) | 2023-12-01 | 2023-12-01 | Image processing method and electronic device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN118474553A (en) | 2024-08-09 |
Family
ID=92162605
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311655933.XA Pending CN118474553A (en) | 2023-12-01 | 2023-12-01 | Image processing method and electronic device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN118474553A (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20110048811A * | 2009-11-03 | 2011-05-12 | Chung-Ang University Industry-Academic Cooperation Foundation | Method and apparatus for converting dynamic range of input image |
CN106920221A * | 2017-03-10 | 2017-07-04 | 重庆邮电大学 | Exposure fusion method that accounts for both luminance distribution and detail presentation |
CN107203974A (en) * | 2016-03-16 | 2017-09-26 | 汤姆逊许可公司 | Method, apparatus and system for extended high dynamic range HDR to HDR tone mapping |
CN108182672A * | 2014-05-28 | 2018-06-19 | 皇家飞利浦有限公司 | Methods and apparatuses for encoding an HDR image and for using such an encoded image |
CN109410126A * | 2017-08-30 | 2019-03-01 | 中山大学 | Tone mapping method for high dynamic range images with detail enhancement and brightness adaptivity |
CN116523777A (en) * | 2023-04-19 | 2023-08-01 | 哈尔滨理工大学 | Tone mapping method based on global tone reconstruction and local detail enhancement |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20220207680A1 (en) | Image Processing Method and Apparatus | |
US10916036B2 (en) | Method and system of generating multi-exposure camera statistics for image processing | |
US11430209B2 (en) | Image signal processing method, apparatus, and device | |
CN111179282B (en) | Image processing method, image processing device, storage medium and electronic apparatus | |
CN113810600B (en) | Terminal image processing method and device and terminal equipment | |
CN116438804A (en) | Frame processing and/or capturing instruction systems and techniques | |
CN110213502A (en) | Image processing method, device, storage medium and electronic equipment | |
CN115550570B (en) | Image processing method and electronic equipment | |
CN110266954A (en) | Image processing method, device, storage medium and electronic equipment | |
EP4195679B1 (en) | Image processing method and electronic device | |
EP2446414B1 (en) | Lens roll-off correction operation using values corrected based on brightness information | |
CN115550575A (en) | Image processing method and related device | |
US11521305B2 (en) | Image processing method and device, mobile terminal, and storage medium | |
CN116668838B (en) | Image processing method and electronic equipment | |
EP4156168A1 (en) | Image processing method and electronic device | |
CN114945087B (en) | Image processing method, device, equipment and storage medium based on face characteristics | |
CN113810622B (en) | Image processing method and device | |
CN118474553A (en) | Image processing method and electronic device | |
CN115529411B (en) | Video blurring method and device | |
CN117519555A (en) | Image processing method, electronic equipment and system | |
CN116095509B (en) | Method, device, electronic equipment and storage medium for generating video frame | |
CN116452437B (en) | High dynamic range image processing method and electronic equipment | |
CN116029914B (en) | Image processing method and electronic equipment | |
CN115706766B (en) | Video processing methods, devices, electronic equipment and storage media | |
CN118338120A (en) | Image preview method and terminal equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||