WO2023123601A1 - Image color processing method and apparatus, and electronic device (图像色彩处理方法、装置和电子设备) - Google Patents

Image color processing method and apparatus, and electronic device

Info

Publication number
WO2023123601A1
WO2023123601A1 (PCT/CN2022/074396)
Authority
WO
WIPO (PCT)
Prior art keywords
algorithm
color
image
scene
color control
Prior art date
Application number
PCT/CN2022/074396
Other languages
English (en)
French (fr)
Inventor
吴佩媛
熊佳
何佳伟
张威
Original Assignee
展讯通信(上海)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 展讯通信(上海)有限公司
Publication of WO2023123601A1 publication Critical patent/WO2023123601A1/zh

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/61: Control of cameras or camera modules based on recognised objects
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80: Camera processing pipelines; Components thereof
    • H04N23/84: Camera processing pipelines; Components thereof for processing colour signals
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80: Camera processing pipelines; Components thereof
    • H04N23/84: Camera processing pipelines; Components thereof for processing colour signals
    • H04N23/88: Camera processing pipelines; Components thereof for processing colour signals for colour balance, e.g. white-balance circuits or colour temperature control
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00: Details of colour television systems
    • H04N9/64: Circuits for processing colour signals
    • H04N9/73: Colour balance circuits, e.g. white balance circuits or colour temperature control
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00: Details of colour television systems
    • H04N9/77: Circuits for processing the brightness signal and the chrominance signal relative to each other, e.g. adjusting the phase of the brightness signal relative to the colour signal, correcting differential gain or differential phase
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02B: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B20/00: Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
    • Y02B20/40: Control techniques providing energy savings, e.g. smart controller or presence detection

Definitions

  • the invention relates to the technical field of image optimization, in particular to an image color processing method, device and electronic equipment.
  • Embodiments of the present invention provide an image color processing method, device, and electronic equipment that use a scene recognition algorithm to identify the current scene and execute the corresponding color control algorithm for that scene, which helps invoke color control algorithms effectively.
  • an embodiment of the present invention provides an image color processing method, including:
  • the image data captured by the camera module is input into a scene recognition algorithm, and the scene recognition algorithm is used to output image scene information of the image data;
  • according to the image scene information, determining a target color control algorithm from several color control algorithms included in a color algorithm library; and
  • Color processing of the image data is performed according to the target color control algorithm.
  • In one possible implementation, before the image data captured by the camera module is input into the scene recognition algorithm, the method further includes:
  • identifying a scene object from the image data by using an image feature recognition algorithm; and, according to the scene object, determining a target scene recognition algorithm from several scene recognition algorithms included in a scene recognition algorithm library, where the target scene recognition algorithm is used to output the image scene information.
  • the target scene recognition algorithm is used to output the image scene information, including:
  • the target scene recognition algorithm is used to determine the image scene information according to the scene object and the color information of the image data.
  • the target color control algorithm is determined from several color control algorithms included in the color algorithm library, including:
  • on the basis of the image scene information, one or more of ambient light brightness, device parameters of the camera device, and usage mode are also used to determine the target color control algorithm from the several color control algorithms.
  • the target color control algorithm is determined from several color control algorithms included in the color algorithm library, including:
  • according to the image scene information, determining a plurality of target color control algorithms from the several color control algorithms, and configuring effective conditions and effective ratios for the plurality of target color control algorithms;
  • wherein the color processing of the image data is performed according to the effective conditions and effective ratios configured for the plurality of target color control algorithms.
  • performing color processing on the image data according to the target color control algorithm includes:
  • acquiring input information of the target color control algorithm, where the input information includes RGB values of the image data and ambient light statistical information; and inputting the RGB values of the image data and the ambient light statistical information into the target color control algorithm to implement the color processing of the image data.
  • the color control algorithm includes one or more of the following: lens shading correction LSC algorithm, automatic white balance AWB algorithm, color correction matrix CCM algorithm, color correction proof algorithm and post-processing color algorithm.
  • an embodiment of the present invention provides an image color processing device, including:
  • the input module is used to input the image data taken by the camera module into the scene recognition algorithm, and the scene recognition algorithm is used to output the image scene information of the image data;
  • a determination module configured to determine a target color control algorithm from several color control algorithms included in the color algorithm library according to the image scene information
  • An execution module configured to execute color processing on the image data according to the target color control algorithm.
  • an identification determination module configured to identify scene objects from the image data by using an image feature identification algorithm
  • a target scene recognition algorithm is determined from several scene recognition algorithms included in the scene recognition algorithm library, and the target scene recognition algorithm is used to output the image scene information.
  • the input module is specifically used for the target scene recognition algorithm to determine the image scene information according to the scene object and color information of the image data.
  • the determining module is specifically configured to determine the target color control algorithm from the several color control algorithms on the basis of the image scene information and one or more of ambient light brightness, device parameters of the camera device, and usage mode.
  • the determination module is further specifically configured to determine a plurality of target color control algorithms from the several color control algorithms according to the image scene information, and to configure effective conditions and effective ratios for the plurality of target color control algorithms;
  • the color processing of the image data is performed according to the validation conditions and validation ratios configured for the plurality of target color control algorithms.
  • the execution module is specifically configured to acquire input information of the target color control algorithm, where the input information includes RGB values of the image data and ambient light statistical information, and to input this information into the target color control algorithm to implement the color processing of the image data.
  • an embodiment of the present invention provides an electronic device, including:
  • at least one processor; and at least one memory communicatively connected to the processor, wherein:
  • the memory stores program instructions executable by the processor, and the processor can execute the method provided by the first aspect by invoking the program instructions.
  • an embodiment of the present invention provides a computer-readable storage medium, where the computer-readable storage medium stores computer instructions, and the computer instructions cause the computer to execute the method provided in the first aspect.
  • FIG. 1 is a flowchart of an image color processing method provided by an embodiment of the present invention
  • FIG. 2 is a schematic structural diagram of an image color processing device provided by an embodiment of the present invention.
  • FIG. 3 is a schematic structural diagram of an embodiment of the electronic device of the present invention.
  • FIG. 1 is a flowchart of an image color processing method provided by an embodiment of the present invention. As shown in Figure 1, the image color processing method includes:
  • Step 101 input the image data captured by the camera module into a scene recognition algorithm, and the scene recognition algorithm is used to output image scene information of the image data.
  • the multi-frame image data captured by the camera module can be input into the scene recognition algorithm frame by frame, the image scene information is obtained by recognizing each frame of image data, and the image scene information of all frames is combined to determine in what kind of scene the current camera device is being used. For example, when the image scene information of every frame shows that the current scene contains blue sky and grass, it can be determined that the current camera device is being used in an outdoor scene.
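The patent does not specify how per-frame results are combined, so the following is only a minimal sketch of such an aggregation step; the function name, label names, and the agreement threshold are assumptions introduced for illustration.

```python
from collections import Counter

def aggregate_scene(per_frame_labels, min_ratio=0.8):
    """Decide the overall scene from per-frame scene labels.

    per_frame_labels: list of label sets, e.g. [{"blue_sky", "grass"}, ...]
    Returns a coarse scene tag when enough frames agree on the same labels.
    """
    n_frames = len(per_frame_labels)
    counts = Counter(label for labels in per_frame_labels for label in labels)
    # A label is considered stable if it appears in most frames.
    stable = {label for label, c in counts.items() if c / n_frames >= min_ratio}
    if {"blue_sky", "grass"} & stable:
        return "outdoor"
    return "unknown"

# Example: every frame reports blue sky and grass -> outdoor scene.
frames = [{"blue_sky", "grass"}] * 5
print(aggregate_scene(frames))  # outdoor
```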
  • Optionally, before inputting the image data captured by the camera module into the scene recognition algorithm, the method further includes: identifying scene objects from the image data by using an image feature recognition algorithm; and determining a target scene recognition algorithm from the several scene recognition algorithms included in the scene recognition algorithm library according to the scene objects, where the target scene recognition algorithm is used to output the image scene information.
  • Specifically, the image data captured by the camera module may contain a variety of scene objects, and each kind of scene object has a corresponding scene recognition algorithm. Therefore, when the scene recognition algorithms are used to process the scene objects in the image data, the image feature recognition algorithm should first be used to identify each scene object, and different scene recognition algorithms are then selected for different scene objects. For example, the image feature recognition algorithm identifies the relevant features of a scene object in the image data; if those features indicate that the scene object is a person, a face recognition algorithm can be invoked from among the several scene recognition algorithms.
  • the target scene recognition algorithm is used to output the image scene information, including:
  • the target scene recognition algorithm is used to determine the image scene information according to the scene object and the color information of the image data.
  • Specifically, the scene recognition algorithm contains a neural network model, and the neural network model includes an input layer, a hidden layer, and a fully connected layer.
  • The scene object is input into the neural network model, data is output through the fully connected layer, and classification is then performed with the softmax function to determine the specific category features of the scene object.
  • The image scene information is then obtained by combining the color information in the image data. For example, if the scene object is scenery and the scene recognition algorithm determines it to be the sky, the current color information is used to derive the current sky color and the image scene information indicating whether the current scene is cloudy or sunny.
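The patent gives no network weights or class list, so this is only a minimal sketch of the classification step described above, assuming a pre-computed feature vector and hypothetical class names.

```python
import numpy as np

def softmax(logits):
    # Subtract the max for numerical stability before exponentiating.
    z = np.exp(logits - np.max(logits))
    return z / z.sum()

def classify_scene_object(features, weights, bias, class_names):
    """Apply the fully connected layer, then softmax, and return the top class."""
    logits = weights @ features + bias          # fully connected layer
    probs = softmax(logits)                     # classification function softmax
    return class_names[int(np.argmax(probs))], probs

# Hypothetical example: 4-dim feature vector, 3 scene-object classes.
rng = np.random.default_rng(0)
weights, bias = rng.normal(size=(3, 4)), np.zeros(3)
label, probs = classify_scene_object(rng.normal(size=4), weights, bias,
                                     ["sky", "grass", "person"])
print(label, probs)
```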
  • Step 102 according to the image scene information, determine a target color control algorithm from several color control algorithms included in the color algorithm library.
  • the image scene information mainly displays relevant scenes in the current image data, and it can be determined which color control algorithms are used to perform color optimization processing on the image data by analyzing the current scene.
  • Optionally, on the basis of the image scene information, one or more of ambient light brightness, device parameters of the camera device, and usage mode are also used to determine the target color control algorithm from the several color control algorithms.
  • the ambient light brightness can be calculated by an automatic exposure (Auto Exposure, AE) algorithm.
  • the more common AE algorithms include the average brightness method, the weighted mean method, and the brightness histogram method, of which the average brightness method is the most common.
  • The average brightness method averages the brightness of all pixels in the image and reaches the target ambient light brightness by continuously adjusting the exposure parameters.
  • The weighted mean method assigns different weights to different regions of the image when calculating the ambient light brightness; for example, selecting among the camera's metering modes changes the weights of different regions.
  • The brightness histogram method calculates the ambient light brightness by assigning different weights to the peaks of the histogram.
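As an illustration of the average brightness method only (not the patent's exact implementation), the sketch below assumes an 8-bit grayscale frame and a simple proportional exposure update; the target value and gain step are invented for the example.

```python
import numpy as np

def average_brightness(frame):
    """Mean luminance of all pixels in an 8-bit frame."""
    return float(np.mean(frame))

def update_exposure(exposure, frame, target=118.0, step=0.05):
    """Nudge the exposure parameter toward the target mean brightness."""
    measured = average_brightness(frame)
    error = (target - measured) / target
    return exposure * (1.0 + step * np.clip(error, -1.0, 1.0))

# Example: a dark frame drives the exposure slightly upward.
frame = np.full((480, 640), 60, dtype=np.uint8)
print(update_exposure(exposure=10.0, frame=frame))
```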
  • the image scene information can be combined with one or more of the ambient light brightness, the camera device parameters, and the usage mode of the camera device to decide which color control algorithm should be used to optimize the color of the image data; the decision can also take related information of the color control algorithms into account. For example, when the image scene information produced by the scene recognition algorithm indicates a blue-sky scene, after the blue sky is recognized the ambient light brightness must also reach a certain level, and the decision is made jointly with information such as the color coordinate range.
  • Optionally, according to the image scene information, a plurality of target color control algorithms are determined from the several color control algorithms, and effective conditions and effective ratios are configured for them; the color processing of the image data is then performed according to the configured effective conditions and effective ratios.
  • Specifically, the image scene information is combined with one or more of ambient light brightness, camera device parameters, and the usage mode of the camera device to judge whether the current scene needs color optimization and which color control algorithms should be used.
  • Whether a color control algorithm needs to optimize the color of the image data can be determined from the effective conditions and effective ratios configured for it, and the effective ratios can be set per scene. For example, if a blue-sky scene is determined and the current camera device is determined to be shooting outdoors, the effective conditions configured for the automatic white balance algorithm among the color control algorithms can be that the ambient light brightness is greater than 150 cd/m² and the color temperature in the color coordinates is less than 5000 K, with effective ratios of 30% and 70%, respectively. When the effective conditions and effective ratios are met, the automatic white balance algorithm needs to be used.
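A minimal sketch of how such effective conditions and effective ratios could be evaluated; the rule structure, thresholds, and weighting scheme below are illustrative assumptions, since the patent only states the blue-sky example.

```python
def awb_should_run(ambient_cd_m2, color_temp_k,
                   brightness_thresh=150.0, temp_thresh=5000.0,
                   ratios=(0.3, 0.7), min_score=1.0):
    """Evaluate configured effective conditions with their effective ratios.

    Each satisfied condition contributes its ratio; the algorithm takes
    effect when the weighted score reaches min_score (here, all conditions).
    """
    checks = (ambient_cd_m2 > brightness_thresh, color_temp_k < temp_thresh)
    score = sum(r for ok, r in zip(checks, ratios) if ok)
    return score >= min_score

# Example: bright outdoor light and a cool color temperature -> run AWB.
print(awb_should_run(ambient_cd_m2=200.0, color_temp_k=4500.0))  # True
print(awb_should_run(ambient_cd_m2=120.0, color_temp_k=4500.0))  # False
```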
  • Step 103 perform color processing on the image data according to the target color control algorithm.
  • the color control algorithm includes lens shading correction (Lens Shading Correction, LSC) algorithm, automatic white balance (Auto White Balance, AWB) algorithm, color correction matrix (Color Correction Matrix, CCM) algorithm, color correction proof algorithm and post-processing color algorithms.
  • the output results calculated by one or more color control algorithms can be used to optimize the color of the image data.
  • Optionally, performing color processing on the image data according to the target color control algorithm includes: acquiring input information of the target color control algorithm, the input information including RGB values of the image data and ambient light statistical information, and inputting this information into the target color control algorithm to implement the color processing of the image data.
  • some color control algorithms require input information during processing, while some algorithms do not require input information.
  • the output results can be obtained after calculation of these color control algorithms, and the image data can be optimized through the output results.
  • the LSC algorithm generally includes two methods: the concentric circle method and the grid method.
  • The procedure of the concentric circle method is: first find the centers of the three RGB channels, which are generally chosen as the same point, and then multiply the three channels by different gains arranged as concentric circles from the center of the frame to its edge.
  • Because the curvature of the shading gradient increases gradually from the center to the edges, the equal-gain curves are sparse in the center and dense at the edges.
  • In general, the lens shading gain should not exceed a factor of 2, because larger gains introduce noise.
  • In the grid method, the gain is the same within each cell of the grid, and the grid is likewise sparse in the center and dense at the four corners.
  • The output of the LSC algorithm is a gain table for the RGB channels, which is mainly used to ensure brightness uniformity and color uniformity between the center and the four corners of the camera module.
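The patent stops at describing the gain table, so the following is only a sketch of applying a per-cell grid gain table to one color channel; the nearest-cell lookup and the example grid values are assumptions, not taken from the patent.

```python
import numpy as np

def apply_lsc_channel(channel, gain_grid):
    """Apply a lens-shading gain grid to a single color channel.

    channel:   H x W array for one of the R, G, B planes.
    gain_grid: h x w array of per-cell gains (larger toward the corners).
    Each pixel is multiplied by the gain of the grid cell it falls into.
    """
    H, W = channel.shape
    h, w = gain_grid.shape
    rows = np.minimum((np.arange(H) * h) // H, h - 1)
    cols = np.minimum((np.arange(W) * w) // W, w - 1)
    gains = gain_grid[np.ix_(rows, cols)]
    return np.clip(channel.astype(np.float32) * gains, 0, 255).astype(np.uint8)

# Example: a 3x3 grid that brightens the corners more than the center.
grid = np.array([[1.8, 1.3, 1.8],
                 [1.3, 1.0, 1.3],
                 [1.8, 1.3, 1.8]])
frame_r = np.full((480, 640), 100, dtype=np.uint8)
out = apply_lsc_channel(frame_r, grid)
print(out[0, 0], out[240, 320])  # corner brightened, center unchanged
```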
  • There are various AWB algorithms, including the gray world algorithm, the perfect reflector algorithm, the dynamic threshold algorithm, the color temperature estimation algorithm, and so on.
  • The AWB algorithm outputs a white balance gain compensation, which is used to correct the overall color accuracy and prevent an unexpected overall color cast in the camera output.
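As one concrete instance of the listed options, here is a minimal gray world AWB sketch; the patent names the algorithm family but does not spell out this implementation, so treat it as an assumption.

```python
import numpy as np

def gray_world_gains(rgb):
    """Gray world AWB: choose per-channel gains so the channel means match.

    rgb: H x W x 3 float array of linear sensor values.
    Returns (r_gain, g_gain, b_gain), normalized so the green gain is 1.0.
    """
    means = rgb.reshape(-1, 3).mean(axis=0)
    gains = means[1] / means            # scale R and B toward the green mean
    return tuple(gains)

def apply_awb(rgb, gains):
    return rgb * np.asarray(gains)

# Example: a bluish frame gets its blue channel pulled down.
frame = np.dstack([np.full((4, 4), 80.0),
                   np.full((4, 4), 100.0),
                   np.full((4, 4), 140.0)])
print(gray_world_gains(frame))  # (1.25, 1.0, ~0.714)
```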
  • the CCM algorithm is mainly completed by transforming the sensor RGB space through the matrices M2 and M1 together with gamma correction.
  • The sensor RGB space is called the "source color space", and the nonlinear sRGB space is called the "target color space".
  • Since M1 and gamma are known, it is only necessary to apply inverse gamma correction to the image in the nonlinear sRGB space and then convert it to the XYZ space.
  • The result can then be combined with the sensor RGB values to obtain the matrix M2, and hence the matrix M.
  • Two typical CCM algorithms are polynomial fitting and the three-dimensional look-up table (3D-LUT) method; the output is a color correction matrix that addresses the camera's color rendering and keeps its colors close to human visual perception in every environment.
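A minimal sketch of applying a 3x3 color correction matrix to linear RGB pixels; the matrix coefficients below are placeholders, not values from the patent.

```python
import numpy as np

def apply_ccm(rgb_linear, ccm):
    """Apply a 3x3 color correction matrix to an H x W x 3 linear RGB image."""
    corrected = rgb_linear.reshape(-1, 3) @ ccm.T   # per-pixel matrix multiply
    return np.clip(corrected, 0.0, 1.0).reshape(rgb_linear.shape)

# Placeholder matrix: each row sums to 1 so neutral gray stays neutral.
ccm = np.array([[ 1.60, -0.40, -0.20],
                [-0.30,  1.50, -0.20],
                [-0.10, -0.50,  1.60]])
pixel = np.array([[[0.5, 0.5, 0.5]]])
print(apply_ccm(pixel, ccm))  # gray in, gray out
```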
  • Color correction proof algorithms include, but are not limited to, the various Gamma, HSV, and 3D-LUT algorithms common in the industry.
  • As an example of the Gamma algorithm, suppose there is a pixel in the image with a value of 200; this pixel is corrected as follows: first normalization, which converts the pixel value into a real number between 0 and 1.
  • The formula is (i + 0.5) / 256, which involves one division and one addition; for pixel A the corresponding normalized value is 0.783203.
  • Then pre-compensation: the normalized value is raised to the power 1/gamma, which involves one exponentiation. With gamma = 2.2, 1/gamma is 0.454545, and pre-compensating the normalized value of A gives 0.783203^0.454545 = 0.894872.
  • Finally, de-normalization converts the pre-compensated real value back into an integer between 0 and 255 using f * 256 - 0.5; substituting 0.894872 gives 228, which is the output value for pixel A.
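The worked example above maps directly to code; this sketch simply reproduces those three steps for a single 8-bit value.

```python
def gamma_correct(pixel, gamma=2.2):
    """Normalize, pre-compensate with exponent 1/gamma, then de-normalize."""
    normalized = (pixel + 0.5) / 256.0          # 200 -> 0.783203
    compensated = normalized ** (1.0 / gamma)   # -> 0.894872 for gamma 2.2
    return int(compensated * 256.0 - 0.5)       # -> 228

print(gamma_correct(200))  # 228
```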
  • the HSV algorithm describes color changes with hue H, saturation S, and value (brightness) V.
  • H ranges from 0° to 360° and is measured counterclockwise starting from red: red is 0°, green is 120°, and blue is 240°.
  • The higher the saturation S, the deeper and more vivid the color; a spectral color has a white-light component of 0 and the highest saturation.
  • S usually ranges from 0% to 100%, and the larger the value, the more saturated the color.
  • V indicates how bright the color is.
  • For a light-source color, the value is related to the luminance of the emitter; for an object color, it is related to the transmittance or reflectance of the object, and it usually ranges from 0% (black) to 100% (white).
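For reference, a small RGB-to-HSV conversion consistent with the ranges above (H in degrees, S and V as percentages); this is the standard conversion, not something specific to the patent.

```python
import colorsys

def rgb_to_hsv_degrees(r, g, b):
    """Convert 8-bit RGB to (H in degrees, S in %, V in %)."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return h * 360.0, s * 100.0, v * 100.0

print(rgb_to_hsv_degrees(255, 0, 0))  # (0.0, 100.0, 100.0)   red
print(rgb_to_hsv_degrees(0, 0, 255))  # (240.0, 100.0, 100.0) blue
```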
  • the 3D-LUT algorithm readjusts the tones of the image by building a color mapping table; it is essentially a three-dimensional color mapping algorithm.
  • The output of these three color correction proof algorithms is a mapping table, which is used to map the color channels so that the behavior of specific color modules can be controlled precisely.
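A sketch of looking up a color in a 3D LUT using nearest-node indexing; real pipelines typically interpolate trilinearly, and the identity table here is used only for illustration.

```python
import numpy as np

def build_identity_lut(size=17):
    """Identity 3D LUT: lut[r, g, b] returns the same normalized RGB."""
    axis = np.linspace(0.0, 1.0, size)
    r, g, b = np.meshgrid(axis, axis, axis, indexing="ij")
    return np.stack([r, g, b], axis=-1)          # shape (size, size, size, 3)

def apply_3d_lut(rgb, lut):
    """Map each pixel through the LUT using its nearest grid node."""
    size = lut.shape[0]
    idx = np.clip(np.rint(rgb * (size - 1)).astype(int), 0, size - 1)
    return lut[idx[..., 0], idx[..., 1], idx[..., 2]]

pixel = np.array([[[0.25, 0.50, 0.75]]])
print(apply_3d_lut(pixel, build_identity_lut()))  # ~[0.25, 0.5, 0.75]
```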
  • Post-processing color algorithms include, but are not limited to, post-processing algorithms common in the industry for color shifting and rendering in the YUV domain. These algorithms implement post-processing of different color styles for different devices and scenes.
  • Corresponding results can be output through one or more of the above algorithms to optimize the color of the image data.
  • FIG. 2 is a schematic structural diagram of an image color processing device provided by an embodiment of the present invention.
  • the image color processing apparatus 200 includes: an input module 201 , a determination module 202 and an execution module 203 .
  • the input module 201 is used to input the image data captured by the camera module into the scene recognition algorithm, and the scene recognition algorithm is used to output the image scene information of the image data;
  • The determination module 202 is configured to determine a target color control algorithm from several color control algorithms included in the color algorithm library according to the image scene information.
  • The execution module 203 is configured to perform color processing on the image data according to the target color control algorithm.
  • a recognition determination module configured to use an image feature recognition algorithm to recognize scene objects from the image data; and
  • to determine a target scene recognition algorithm from several scene recognition algorithms included in the scene recognition algorithm library according to the scene objects, where the target scene recognition algorithm is used to output the image scene information.
  • the input module 201 is specifically used for the target scene recognition algorithm to determine the image scene information according to the scene object and the color information of the image data.
  • the determining module 202 is specifically configured to determine the target color control algorithm from the several color control algorithms based on the image scene information together with one or more of ambient light brightness, device parameters of the camera device, and usage mode.
  • the determination module 202 is further specifically configured to determine a plurality of target color control algorithms from the several color control algorithms according to the image scene information, and to configure effective conditions and effective ratios for the plurality of target color control algorithms; the plurality of target color control algorithms then perform the color processing of the image data according to the configured effective conditions and effective ratios.
  • an acquisition module configured to acquire input information of the target color control algorithm, where the input information includes RGB values of the image data and ambient light statistical information.
  • FIG. 3 is a schematic structural diagram of an embodiment of the electronic device of the present invention.
  • the above-mentioned electronic device may include at least one processor and at least one memory communicatively connected to the processor, where the memory stores program instructions executable by the processor, and the processor can execute the image color processing method provided by the embodiment shown in FIG. 1 of this specification by invoking the program instructions.
  • the above-mentioned electronic device may be a device capable of performing gesture recognition with a user, such as a cloud server, and the embodiment of this specification does not limit the specific form of the above-mentioned electronic device. It can be understood that the electronic device here is the machine mentioned in the method embodiment.
  • FIG. 3 shows a block diagram of an exemplary electronic device suitable for use in implementing embodiments of the present invention.
  • the electronic device shown in FIG. 3 is only an example, and should not limit the functions and scope of use of this embodiment of the present invention.
  • the electronic device takes the form of a general-purpose computing device.
  • the components of the electronic device may include, but are not limited to: one or more processors 410, a communication interface 420, a memory 430, and a communication bus 440 connecting different system components (including the memory 430 and the processing unit 410).
  • Communication bus 440 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus structures.
  • these architectures include but are not limited to Industry Standard Architecture (Industry Standard Architecture; hereinafter referred to as: ISA) bus, Micro Channel Architecture (Micro Channel Architecture; hereinafter referred to as: MAC) bus, enhanced ISA bus, video electronics Standards Association (Video Electronics Standards Association; hereinafter referred to as: VESA) local bus and Peripheral Component Interconnection (hereinafter referred to as: PCI) bus.
  • Electronic devices typically include a variety of computer system readable media. These media can be any available media that can be accessed by the electronic device and include both volatile and nonvolatile media, removable and non-removable media.
  • the memory 430 may include a computer system-readable medium in the form of a volatile memory, such as a random access memory (Random Access Memory; RAM for short) and/or a cache memory.
  • the electronic device may further include other removable/non-removable, volatile/nonvolatile computer system storage media.
  • Memory 430 may include at least one program product having a set (eg, at least one) of program modules configured to perform the functions of various embodiments of the present invention.
  • a program/utility having a set (at least one) of program modules may be stored in the memory 430; such program modules include, but are not limited to, an operating system, one or more application programs, other program modules, and program data, and each of these examples, or some combination of them, may include an implementation of a network environment.
  • the program modules generally perform the functions and/or methodologies of the described embodiments of the invention.
  • the processor 410 executes various functional applications and data processing by running the programs stored in the memory 430, for example, realizing the image color processing method provided by the embodiment shown in FIG. 1 of the present invention.
  • An embodiment of the present invention provides a computer-readable storage medium, the computer-readable storage medium stores computer instructions, and the computer instructions cause the computer to execute the image color processing method provided in the embodiment shown in FIG. 1 of this specification.
  • the computer readable medium may be a computer readable signal medium or a computer readable storage medium.
  • a computer readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, device, or device, or any combination thereof.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer readable signal medium may include a data signal carrying computer readable program code in baseband or as part of a carrier wave. Such propagated data signals may take many forms, including - but not limited to - electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • a computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, which can send, propagate, or transmit a program for use by or in conjunction with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including - but not limited to - wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
  • Computer program code for carrying out the operations described herein can be written in one or more programming languages, or combinations thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as "C" or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or it may connect to an external computer (for example, via the Internet using an Internet Service Provider).
  • first and second are used for descriptive purposes only, and cannot be interpreted as indicating or implying relative importance or implicitly specifying the quantity of indicated technical features.
  • the features defined as “first” and “second” may explicitly or implicitly include at least one of these features.
  • “plurality” means at least two, such as two, three, etc., unless otherwise specifically defined.
  • depending on the context, the word "if" as used herein may be interpreted as "when" or "upon" or "in response to determining" or "in response to detecting".
  • Similarly, the phrases "if it is determined" or "if (a stated condition or event) is detected" may be interpreted as "when it is determined" or "in response to determining" or "when (a stated condition or event) is detected" or "in response to detecting (a stated condition or event)".
  • terminals involved in the embodiments of this specification may include, but are not limited to, personal computers (Personal Computer; hereinafter referred to as: PC), personal digital assistants (Personal Digital Assistant; hereinafter referred to as: PDA), wireless handheld devices, tablet Computer (Tablet Computer), mobile phone, MP3 player, MP4 player, etc.
  • the disclosed systems, devices and methods can be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the units is only a division by logical function; in actual implementation there may be other ways of dividing them.
  • For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be through some interfaces, and the indirect coupling or communication connection of devices or units may be in electrical, mechanical or other forms.
  • each functional unit in each embodiment of this specification may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units can be implemented in the form of hardware, or in the form of hardware plus software functional units.
  • the above-mentioned integrated units implemented in the form of software functional units may be stored in a computer-readable storage medium.
  • the above-mentioned software functional units are stored in a storage medium and include several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute some of the steps of the methods described in the various embodiments of this specification.
  • the aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (Read-Only Memory; hereinafter: ROM), a random access memory (Random Access Memory; hereinafter: RAM), a magnetic disk, an optical disc, and other media that can store program code.

Abstract

This application relates to the field of Internet technology, and in particular to an image color processing method and apparatus, and an electronic device. The image color processing method includes: inputting image data captured by a camera module into a scene recognition algorithm, where the scene recognition algorithm is used to output image scene information of the image data; determining a target color control algorithm from several color control algorithms included in a color algorithm library according to the image scene information; and performing color processing on the image data according to the target color control algorithm. In this application, a scene recognition algorithm is used to identify the current scene and the corresponding color control algorithm is executed for that scene, which helps invoke color control algorithms effectively.

Description

Image color processing method and apparatus, and electronic device
This application claims priority to the Chinese patent application filed with the Chinese Patent Office on December 27, 2021, with application number 202111611798.X and entitled "Image color processing method and apparatus, and electronic device", the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to the technical field of image optimization, and in particular to an image color processing method and apparatus, and an electronic device.
Background
With the development of technology, cameras are used more and more widely. Mobile devices, automotive products, smart home devices, and security equipment are all fitted with camera devices, and many electronic products are also equipped with cameras. When a camera is used to capture a picture, the image colors deviate to some extent in different scenes, and color optimization is needed to bring the picture colors back to a normal level. However, the image colors that need to be adjusted are not exactly the same in different scenes.
How to execute different image color processing schemes in different scenes therefore becomes an urgent problem to be solved.
Summary of the Invention
Embodiments of the present invention provide an image color processing method and apparatus, and an electronic device, which use a scene recognition algorithm to identify the current scene and execute the corresponding color control algorithm in that scene, helping to invoke color control algorithms effectively.
In a first aspect, an embodiment of the present invention provides an image color processing method, including:
inputting image data captured by a camera module into a scene recognition algorithm, where the scene recognition algorithm is used to output image scene information of the image data;
determining a target color control algorithm from several color control algorithms included in a color algorithm library according to the image scene information; and
performing color processing on the image data according to the target color control algorithm.
In one possible implementation, before the image data captured by the camera module is input into the scene recognition algorithm, the method further includes:
identifying a scene object from the image data by using an image feature recognition algorithm; and
determining a target scene recognition algorithm from several scene recognition algorithms included in a scene recognition algorithm library according to the scene object, where the target scene recognition algorithm is used to output the image scene information.
In one possible implementation, the target scene recognition algorithm being used to output the image scene information includes:
the target scene recognition algorithm being used to determine the image scene information according to the scene object and color information of the image data.
In one possible implementation, determining a target color control algorithm from several color control algorithms included in the color algorithm library according to the image scene information includes:
on the basis of the image scene information, further using one or more of ambient light brightness, device parameters of the camera device, and usage mode to determine the target color control algorithm from the several color control algorithms.
In one possible implementation, determining a target color control algorithm from several color control algorithms included in the color algorithm library according to the image scene information includes:
determining a plurality of target color control algorithms from the several color control algorithms according to the image scene information, and configuring effective conditions and effective ratios for the plurality of target color control algorithms;
wherein the color processing of the image data is performed according to the effective conditions and effective ratios configured for the plurality of target color control algorithms.
In one possible implementation, performing color processing on the image data according to the target color control algorithm includes:
acquiring input information of the target color control algorithm, where the input information includes RGB values of the image data and ambient light statistical information; and
inputting the RGB values of the image data and the ambient light statistical information into the target color control algorithm to implement the color processing of the image data.
In one possible implementation, the color control algorithms include one or more of the following: a lens shading correction (LSC) algorithm, an automatic white balance (AWB) algorithm, a color correction matrix (CCM) algorithm, a color correction proof algorithm, and a post-processing color algorithm.
In a second aspect, an embodiment of the present invention provides an image color processing apparatus, including:
an input module configured to input image data captured by a camera module into a scene recognition algorithm, where the scene recognition algorithm is used to output image scene information of the image data;
a determination module configured to determine a target color control algorithm from several color control algorithms included in a color algorithm library according to the image scene information; and
an execution module configured to perform color processing on the image data according to the target color control algorithm.
In one possible implementation, the apparatus further includes: a recognition determination module configured to identify a scene object from the image data by using an image feature recognition algorithm; and
to determine a target scene recognition algorithm from several scene recognition algorithms included in a scene recognition algorithm library according to the scene object, where the target scene recognition algorithm is used to output the image scene information.
In one possible implementation, the input module is specifically configured such that the target scene recognition algorithm determines the image scene information according to the scene object and color information of the image data.
In one possible implementation, the determination module is specifically configured to determine the target color control algorithm from the several color control algorithms on the basis of the image scene information and one or more of ambient light brightness, device parameters of the camera device, and usage mode.
In one possible implementation, the determination module is further specifically configured to determine a plurality of target color control algorithms from the several color control algorithms according to the image scene information, and to configure effective conditions and effective ratios for the plurality of target color control algorithms;
wherein the color processing of the image data is performed according to the effective conditions and effective ratios configured for the plurality of target color control algorithms.
In one possible implementation, the execution module is specifically configured to acquire input information of the target color control algorithm, where the input information includes RGB values of the image data and ambient light statistical information; and
to input the RGB values of the image data and the ambient light statistical information into the target color control algorithm to implement the color processing of the image data.
In a third aspect, an embodiment of the present invention provides an electronic device, including:
at least one processor; and
at least one memory communicatively connected to the processor, wherein:
the memory stores program instructions executable by the processor, and the processor can execute the method provided in the first aspect by invoking the program instructions.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, where the computer-readable storage medium stores computer instructions, and the computer instructions cause the computer to execute the method provided in the first aspect.
It should be understood that the second to fourth aspects of this specification are consistent with the technical solution of the first aspect of this specification; the beneficial effects obtained by each aspect and the corresponding feasible implementations are similar and are not repeated here.
Brief Description of the Drawings
To describe the technical solutions of the embodiments of this application more clearly, the drawings required by the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of this application, and those of ordinary skill in the art can derive other drawings from these drawings without creative effort.
FIG. 1 is a flowchart of an image color processing method provided by an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of an image color processing apparatus provided by an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of an embodiment of the electronic device of the present invention.
Detailed Description
For a better understanding of the technical solutions of the present invention, the embodiments of the present invention are described in detail below with reference to the drawings.
It should be clear that the described embodiments are only a part of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the scope of protection of this specification.
The terms used in the embodiments of the present invention are only for the purpose of describing particular embodiments and are not intended to limit this specification. The singular forms "a", "said", and "the" used in the embodiments of the present invention and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise.
FIG. 1 is a flowchart of an image color processing method provided by an embodiment of the present invention. As shown in FIG. 1, the image color processing method includes:
Step 101: input image data captured by a camera module into a scene recognition algorithm, where the scene recognition algorithm is used to output image scene information of the image data.
In some embodiments, multiple frames of image data captured by the camera module can be input into the scene recognition algorithm frame by frame, the image scene information is obtained by recognizing each frame of image data, and the image scene information of all frames is combined to determine in what kind of scene the current camera device is being used. For example, when the image scene information of every frame shows that the current scene contains blue sky and grass, it can be determined that the current camera device is being used in an outdoor scene.
Optionally, before the image data captured by the camera module is input into the scene recognition algorithm, the method further includes: identifying a scene object from the image data by using an image feature recognition algorithm; and determining a target scene recognition algorithm from several scene recognition algorithms included in a scene recognition algorithm library according to the scene object, where the target scene recognition algorithm is used to output the image scene information.
Specifically, the image data captured by the camera module may contain a variety of scene objects, and each kind of scene object has a corresponding scene recognition algorithm. Therefore, when the scene recognition algorithms are used to process the scene objects in the image data, the image feature recognition algorithm should first be used to identify each scene object, and different scene recognition algorithms are then selected for different scene objects. For example, the image feature recognition algorithm identifies the relevant features of a scene object in the image data; if those features indicate that the scene object is a person, a face recognition algorithm can be invoked from among the several scene recognition algorithms.
Optionally, the target scene recognition algorithm being used to output the image scene information includes:
the target scene recognition algorithm being used to determine the image scene information according to the scene object and color information of the image data.
Specifically, the scene recognition algorithm contains a neural network model, and the neural network model includes an input layer, a hidden layer, and a fully connected layer. The scene object is input into the neural network model, data is output through the fully connected layer, and classification is then performed with the softmax function to determine the specific category features of the scene object. The image scene information is then obtained by combining the color information in the image data. For example, if the scene object is scenery and the scenery recognition algorithm determines it to be the sky, the current color information is used to derive the current sky color and the image scene information indicating whether the current scene is cloudy or sunny.
Step 102: determine a target color control algorithm from several color control algorithms included in a color algorithm library according to the image scene information.
In some embodiments, the image scene information mainly reflects the relevant scene in the current image data, and analyzing the current scene makes it possible to decide which color control algorithms should be used to perform color optimization on the image data.
Optionally, on the basis of the image scene information, one or more of ambient light brightness, device parameters of the camera device, and usage mode are also used to determine the target color control algorithm from the several color control algorithms.
The ambient light brightness can be calculated by an automatic exposure (Auto Exposure, AE) algorithm. The more common AE algorithms at present include the average brightness method, the weighted mean method, and the brightness histogram method, of which the average brightness method is the most common. The average brightness method averages the brightness of all pixels in the image and reaches the target ambient light brightness by continuously adjusting the exposure parameters. The weighted mean method assigns different weights to different regions of the image when calculating the ambient light brightness; for example, selecting among the camera's metering modes changes the weights of different regions. The brightness histogram method calculates the ambient light brightness by assigning different weights to the peaks of the histogram.
The image scene information can be combined with one or more of the ambient light brightness, the camera device parameters, and the usage mode of the camera device to decide which color control algorithm should be used to optimize the color of the image data, and related information of the color control algorithms can also be taken into account. For example, when the image scene information produced by the scene recognition algorithm indicates a blue-sky scene, after the blue sky is recognized the ambient light brightness must also reach a certain level, and the decision is made jointly with information such as the color coordinate range.
When determining the target color control algorithm, various module-consistency (One Time Programmable, OTP) algorithms can also be used to resolve the corresponding differences between different camera modules, improving the generalization of the algorithms across modules and compensating for individual differences between modules.
Optionally, a plurality of target color control algorithms are determined from the several color control algorithms according to the image scene information, and effective conditions and effective ratios are configured for the plurality of target color control algorithms; the color processing of the image data is performed according to the effective conditions and effective ratios configured for the plurality of target color control algorithms.
Specifically, the image scene information is combined with one or more of ambient light brightness, camera device parameters, and the usage mode of the camera device to judge whether the current scene needs color optimization and which color control algorithms should be used. Whether a color control algorithm needs to optimize the color of the image data can be determined from the effective conditions and effective ratios configured for it, and the effective ratios can be set for different scenes. For example, if a blue-sky scene is determined and the current camera device is determined to be shooting outdoors, the effective conditions configured for the automatic white balance algorithm among the color control algorithms can be that the ambient light brightness is greater than 150 cd/m² and the color temperature in the color coordinates is less than 5000 K, with effective ratios of 30% and 70%, respectively. When the effective conditions and effective ratios are met, the automatic white balance algorithm needs to be used.
Step 103: perform color processing on the image data according to the target color control algorithm.
In some embodiments, the color control algorithms include a lens shading correction (Lens Shading Correction, LSC) algorithm, an automatic white balance (Auto White Balance, AWB) algorithm, a color correction matrix (Color Correction Matrix, CCM) algorithm, a color correction proof algorithm, and post-processing color algorithms. The output results computed by one or more of these color control algorithms can be used to perform color optimization on the image data.
Optionally, performing color processing on the image data according to the target color control algorithm includes: acquiring input information of the target color control algorithm, where the input information includes RGB values of the image data and ambient light statistical information; and
inputting the RGB values of the image data and the ambient light statistical information into the target color control algorithm to implement the color processing of the image data.
Specifically, some color control algorithms require input information during processing, while others do not. After these color control algorithms perform their calculations, output results are obtained, and the image data can be optimized using the output results.
The LSC algorithm generally includes two methods: the concentric circle method and the grid method. The procedure of the concentric circle method is: first find the centers of the three RGB channels, which are generally chosen as the same point, and multiply the three channels by different gains arranged as concentric circles from the center of the frame to its edge. Because the curvature of the shading gradient increases gradually from the center to the edges, the equal-gain curves are sparse in the center and dense at the edges. In general, the lens shading gain should not exceed a factor of 2, because larger gains introduce noise. In the grid method, the gain is the same within each cell of the grid, and the grid is likewise sparse in the center and dense at the four corners. The output of the LSC algorithm is a gain table for the RGB channels, which is mainly used to ensure the brightness uniformity and color uniformity between the center and the four corners of the camera module.
There are various AWB algorithms, including the gray world algorithm, the perfect reflector algorithm, the dynamic threshold algorithm, the color temperature estimation algorithm, and so on. The AWB algorithm outputs a white balance gain compensation, which is used to correct the overall color accuracy and prevent an unexpected overall color cast in the camera output.
The CCM algorithm is mainly completed by transforming the sensor RGB space through the matrices M2 and M1 together with gamma correction. The sensor RGB space is called the "source color space", and the nonlinear sRGB space is called the "target color space". At present, the 24 color patches corresponding to the "unsaturated chart" in the source color space and the 24 color patches corresponding to the "saturated chart" in the nonlinear sRGB space are both available, and the values of M1 and gamma are known; it is therefore only necessary to apply inverse gamma correction to the image in the nonlinear sRGB space and then convert it to the XYZ space, after which it can be combined with the sensor RGB values to solve for the matrix M2 and hence the matrix M. Two typical CCM algorithms are polynomial fitting and the three-dimensional look-up table (3D-LUT) method; the output of the algorithm is a color correction matrix, which addresses the camera's color rendering and ensures that the camera's colors in every environment are close to human visual perception.
The color correction proof algorithms include, but are not limited to, the various Gamma, HSV, and 3D-LUT algorithms common in the industry. As an example of the Gamma algorithm, suppose there is a pixel in the image with a value of 200; this pixel is corrected as follows. First, normalization: the pixel value is converted into a real number between 0 and 1 using the formula (i + 0.5) / 256, which involves one division and one addition; for pixel A the corresponding normalized value is 0.783203. Then pre-compensation: the normalized pixel value is raised to the power 1/gamma, which involves one exponentiation. If gamma is 2.2, then 1/gamma is 0.454545, and pre-compensating the normalized value of A gives 0.783203^0.454545 = 0.894872. Finally, de-normalization: the pre-compensated real value is converted back into an integer between 0 and 255 using the formula f * 256 - 0.5, which involves one multiplication and one subtraction. Continuing the example, substituting A's pre-compensation result 0.894872 into this formula gives a corrected pixel value of 228, which is the final output.
The HSV algorithm describes color changes with hue H, saturation S, and value (brightness) V. H ranges from 0° to 360° and is measured counterclockwise starting from red: red is 0°, green is 120°, and blue is 240°. The higher the saturation S, the deeper and more vivid the color; a spectral color has a white-light component of 0 and the highest saturation. S usually ranges from 0% to 100%, and the larger the value, the more saturated the color. V indicates how bright the color is: for a light-source color, the value is related to the luminance of the emitter; for an object color, it is related to the transmittance or reflectance of the object, and it usually ranges from 0% (black) to 100% (white).
The 3D-LUT algorithm readjusts the tones of the image by building a color mapping table; it is essentially a three-dimensional color mapping algorithm.
The output of these three color correction proof algorithms is a mapping table, which is used to map the color channels so that the behavior of specific color modules can be controlled precisely.
The post-processing color algorithms include, but are not limited to, post-processing algorithms common in the industry for color shifting and rendering in the YUV domain. These algorithms implement post-processing of different color styles for different devices and scenes.
Corresponding results can be output through one or more of the above algorithms to perform color optimization of the image data.
FIG. 2 is a schematic structural diagram of an image color processing apparatus provided by an embodiment of the present invention. As shown in FIG. 2, the image color processing apparatus 200 includes an input module 201, a determination module 202, and an execution module 203. The input module 201 is configured to input image data captured by a camera module into a scene recognition algorithm, where the scene recognition algorithm is used to output image scene information of the image data; the determination module 202 is configured to determine a target color control algorithm from several color control algorithms included in a color algorithm library according to the image scene information; and the execution module 203 is configured to perform color processing on the image data according to the target color control algorithm.
In the above embodiment of the present invention, optionally, the apparatus further includes a recognition determination module configured to identify a scene object from the image data by using an image feature recognition algorithm, and to determine a target scene recognition algorithm from several scene recognition algorithms included in a scene recognition algorithm library according to the scene object, where the target scene recognition algorithm is used to output the image scene information.
In the above embodiment of the present invention, optionally, the input module 201 is specifically configured such that the target scene recognition algorithm determines the image scene information according to the scene object and color information of the image data.
In the above embodiment of the present invention, optionally, the determination module 202 is specifically configured to determine the target color control algorithm from the several color control algorithms not only according to the image scene information but also according to one or more of ambient light brightness, device parameters of the camera device, and usage mode.
In the above embodiment of the present invention, optionally, the determination module 202 is further specifically configured to determine a plurality of target color control algorithms from the several color control algorithms according to the image scene information, and to configure effective conditions and effective ratios for the plurality of target color control algorithms; the plurality of target color control algorithms perform the color processing of the image data according to the configured effective conditions and effective ratios.
In the above embodiment of the present invention, optionally, the apparatus further includes an acquisition module configured to acquire input information of the target color control algorithm, where the input information includes RGB values of the image data and ambient light statistical information.
FIG. 3 is a schematic structural diagram of an embodiment of the electronic device of the present invention.
As shown in FIG. 3, the above-mentioned electronic device may include at least one processor and at least one memory communicatively connected to the processor, where the memory stores program instructions executable by the processor, and the processor can execute the image color processing method provided by the embodiment shown in FIG. 1 of this specification by invoking the program instructions.
The above-mentioned electronic device may be a device capable of performing gesture recognition with a user, such as a cloud server; the embodiments of this specification do not limit the specific form of the electronic device. It can be understood that the electronic device here is the machine mentioned in the method embodiments.
FIG. 3 shows a block diagram of an exemplary electronic device suitable for implementing embodiments of the present invention. The electronic device shown in FIG. 3 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present invention.
As shown in FIG. 3, the electronic device takes the form of a general-purpose computing device. The components of the electronic device may include, but are not limited to, one or more processors 410, a communication interface 420, a memory 430, and a communication bus 440 connecting the different system components (including the memory 430 and the processing unit 410).
The communication bus 440 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures. By way of example, these architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MAC) bus, the enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
The electronic device typically includes a variety of computer-system-readable media. These media may be any available media that can be accessed by the electronic device, including volatile and non-volatile media and removable and non-removable media.
The memory 430 may include computer-system-readable media in the form of volatile memory, such as random access memory (RAM) and/or cache memory. The electronic device may further include other removable/non-removable, volatile/non-volatile computer-system storage media. The memory 430 may include at least one program product having a set (for example, at least one) of program modules configured to perform the functions of the embodiments of the present invention.
A program/utility having a set (at least one) of program modules may be stored in the memory 430; such program modules include, but are not limited to, an operating system, one or more application programs, other program modules, and program data, and each of these examples, or some combination of them, may include an implementation of a network environment. The program modules generally perform the functions and/or methods of the embodiments described in the present invention.
The processor 410 executes various functional applications and data processing by running programs stored in the memory 430, for example implementing the image color processing method provided by the embodiment shown in FIG. 1 of the present invention.
An embodiment of the present invention provides a computer-readable storage medium, where the computer-readable storage medium stores computer instructions, and the computer instructions cause the computer to execute the image color processing method provided by the embodiment shown in FIG. 1 of this specification.
The above computer-readable storage medium may use any combination of one or more computer-readable media. A computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium. A computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples (a non-exhaustive list) of computer-readable storage media include: an electrical connection with one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (Read Only Memory; hereinafter: ROM), an erasable programmable read-only memory (Erasable Programmable Read Only Memory; hereinafter: EPROM) or flash memory, an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In this document, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, which carries computer-readable program code. Such a propagated data signal may take many forms, including, but not limited to, an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, which can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code contained on a computer-readable medium may be transmitted using any appropriate medium, including, but not limited to, wireless, wireline, optical cable, RF, and so on, or any suitable combination of the above.
Computer program code for carrying out the operations of this specification may be written in one or more programming languages or combinations thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. Where a remote computer is involved, the remote computer may be connected to the user's computer through any kind of network, including a local area network (Local Area Network; hereinafter: LAN) or a wide area network (Wide Area Network; hereinafter: WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The foregoing describes specific embodiments of this specification. Other embodiments fall within the scope of the appended claims. In some cases, the actions or steps recited in the claims may be performed in an order different from that in the embodiments and still achieve the desired results. In addition, the processes depicted in the drawings do not necessarily require the specific order or sequential order shown in order to achieve the desired results. In some implementations, multitasking and parallel processing are also possible or may be advantageous.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "an example", "a specific example", or "some examples" means that a specific feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of this specification. In this specification, the schematic expressions of the above terms do not necessarily refer to the same embodiment or example. Moreover, the described specific features, structures, materials, or characteristics may be combined in a suitable manner in any one or more embodiments or examples. In addition, those skilled in the art may combine the different embodiments or examples described in this specification, and the features of different embodiments or examples, provided they do not contradict each other.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and cannot be understood as indicating or implying relative importance or implicitly specifying the number of the indicated technical features. Thus, features defined as "first" and "second" may explicitly or implicitly include at least one of those features. In the description of this specification, "plurality" means at least two, such as two or three, unless otherwise specifically defined.
Any process or method description in the flowcharts or otherwise described herein may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing custom logical functions or steps of the process, and the scope of the preferred embodiments of this specification includes additional implementations in which functions may be performed out of the order shown or discussed, including substantially concurrently or in the reverse order depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of this specification belong.
Depending on the context, the word "if" as used herein may be interpreted as "when" or "upon" or "in response to determining" or "in response to detecting". Similarly, depending on the context, the phrases "if it is determined" or "if (a stated condition or event) is detected" may be interpreted as "when it is determined" or "in response to determining" or "when (a stated condition or event) is detected" or "in response to detecting (a stated condition or event)".
It should be noted that the terminals involved in the embodiments of this specification may include, but are not limited to, personal computers (Personal Computer; hereinafter: PC), personal digital assistants (Personal Digital Assistant; hereinafter: PDA), wireless handheld devices, tablet computers, mobile phones, MP3 players, MP4 players, and so on.
In the several embodiments provided in this specification, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. For example, the apparatus embodiments described above are only illustrative; for example, the division of the units is only a division by logical function, and in actual implementation there may be other ways of dividing them; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
In addition, the functional units in the embodiments of this specification may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
The above integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. The above software functional unit is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute some of the steps of the methods described in the embodiments of this specification. The aforementioned storage media include various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (Read-Only Memory; hereinafter: ROM), a random access memory (Random Access Memory; hereinafter: RAM), a magnetic disk, or an optical disc.
The above is only a preferred embodiment of this specification and is not intended to limit this specification. Any modification, equivalent replacement, improvement, or the like made within the spirit and principles of this specification shall be included within the scope of protection of this specification.

Claims (10)

  1. An image color processing method, characterized in that the method comprises:
    inputting image data captured by a camera module into a scene recognition algorithm, wherein the scene recognition algorithm is used to output image scene information of the image data;
    determining a target color control algorithm from several color control algorithms comprised in a color algorithm library according to the image scene information; and
    performing color processing on the image data according to the target color control algorithm.
  2. The method according to claim 1, characterized in that, before the image data captured by the camera module is input into the scene recognition algorithm, the method further comprises:
    identifying a scene object from the image data by using an image feature recognition algorithm; and
    determining a target scene recognition algorithm from several scene recognition algorithms comprised in a scene recognition algorithm library according to the scene object, wherein the target scene recognition algorithm is used to output the image scene information.
  3. The method according to claim 2, characterized in that the target scene recognition algorithm being used to output the image scene information comprises:
    the target scene recognition algorithm being used to determine the image scene information according to the scene object and color information of the image data.
  4. The method according to claim 1, characterized in that determining a target color control algorithm from several color control algorithms comprised in the color algorithm library according to the image scene information comprises:
    on the basis of the image scene information, further using one or more of ambient light brightness, device parameters of the camera device, and usage mode to determine the target color control algorithm from the several color control algorithms.
  5. The method according to claim 1 or 4, characterized in that determining a target color control algorithm from several color control algorithms comprised in the color algorithm library according to the image scene information comprises:
    determining a plurality of target color control algorithms from the several color control algorithms according to the image scene information, and configuring effective conditions and effective ratios for the plurality of target color control algorithms;
    wherein the color processing of the image data is performed according to the effective conditions and effective ratios configured for the plurality of target color control algorithms.
  6. The method according to claim 1, characterized in that performing color processing on the image data according to the target color control algorithm comprises:
    acquiring input information of the target color control algorithm, wherein the input information comprises RGB values of the image data and ambient light statistical information; and
    inputting the RGB values of the image data and the ambient light statistical information into the target color control algorithm to implement the color processing of the image data.
  7. The method according to claim 1, characterized in that the several color control algorithms comprise one or more of the following: a lens shading correction (LSC) algorithm, an automatic white balance (AWB) algorithm, a color correction matrix (CCM) algorithm, a color correction proof algorithm, and a post-processing color algorithm.
  8. An image color processing apparatus, characterized by comprising:
    an input module configured to input image data captured by a camera module into a scene recognition algorithm, wherein the scene recognition algorithm is used to output image scene information of the image data;
    a determination module configured to determine a target color control algorithm from several color control algorithms comprised in a color algorithm library according to the image scene information; and
    an execution module configured to perform color processing on the image data according to the target color control algorithm.
  9. An electronic device, characterized by comprising: at least one processor; and
    at least one memory communicatively connected to the processor, wherein:
    the memory stores program instructions executable by the processor, and the processor can execute the method according to any one of claims 1 to 7 by invoking the program instructions.
  10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores computer instructions, and the computer instructions cause the computer to execute the method according to any one of claims 1 to 7.
PCT/CN2022/074396 2021-12-27 2022-01-27 Image color processing method and apparatus, and electronic device WO2023123601A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111611798.XA CN114286000B (zh) 2021-12-27 2021-12-27 Image color processing method and apparatus, and electronic device
CN202111611798.X 2021-12-27

Publications (1)

Publication Number Publication Date
WO2023123601A1 true WO2023123601A1 (zh) 2023-07-06

Family

ID=80876263

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/074396 WO2023123601A1 (zh) 2021-12-27 2022-01-27 Image color processing method and apparatus, and electronic device

Country Status (2)

Country Link
CN (1) CN114286000B (zh)
WO (1) WO2023123601A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106101547A (zh) * 2016-07-06 2016-11-09 北京奇虎科技有限公司 一种图像数据的处理方法、装置和移动终端
CN108600630A (zh) * 2018-05-10 2018-09-28 Oppo广东移动通信有限公司 拍照方法、装置及终端设备
WO2020238775A1 (zh) * 2019-05-28 2020-12-03 华为技术有限公司 一种场景识别方法、一种场景识别装置及一种电子设备
CN112819703A (zh) * 2019-11-18 2021-05-18 Oppo广东移动通信有限公司 信息处理方法和装置、及存储介质
US20210216807A1 (en) * 2020-01-09 2021-07-15 International Business Machines Corporation Cognitive motion picture analysis

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103942523B (zh) * 2013-01-18 2017-11-03 华为终端有限公司 一种日照场景识别方法及装置
US10567721B2 (en) * 2017-08-23 2020-02-18 Motorola Mobility Llc Using a light color sensor to improve a representation of colors in captured image data
CN109525782A (zh) * 2018-12-25 2019-03-26 努比亚技术有限公司 一种拍摄方法、终端及计算机可读存储介质
CN112562019A (zh) * 2020-12-24 2021-03-26 Oppo广东移动通信有限公司 图像色彩调整方法及装置、计算机可读介质和电子设备


Also Published As

Publication number Publication date
CN114286000B (zh) 2023-06-16
CN114286000A (zh) 2022-04-05

Similar Documents

Publication Publication Date Title
US10791310B2 (en) Method and system of deep learning-based automatic white balancing
US10949958B2 (en) Fast fourier color constancy
TWI538522B (zh) Using a noise optimization criterion to calculate the scene white point
CN109274985B (zh) Video transcoding method and apparatus, computer device, and storage medium
CN112565636B (zh) Image processing method, apparatus, device, and storage medium
CN107274351B (zh) Image processing device, image processing system, and image processing method
WO2017084255A1 (zh) Real-time video enhancement method, terminal, and non-volatile computer-readable storage medium
WO2021218603A1 (zh) Image processing method and projection system
WO2023098251A1 (zh) Image processing method, device, and readable storage medium
WO2022257396A1 (zh) Method and apparatus for determining color-fringed pixels in an image, and computer device
CN113132695B (zh) Lens shading correction method and apparatus, and electronic device
WO2022121893A1 (zh) Image processing method and apparatus, computer device, and storage medium
CN109102484B (zh) Method and apparatus for processing images
US8565523B2 (en) Image content-based color balancing
CN107592517B (zh) Skin color processing method and apparatus
WO2020119454A1 (zh) Method and apparatus for color restoration of an image
CN109348207B (zh) Color temperature adjustment method, image processing method and apparatus, medium, and electronic device
CN110225331B (zh) Selectively applying color to an image
WO2020224459A1 (zh) Image processing method and apparatus, terminal, and storage medium
WO2023123601A1 (zh) Image color processing method and apparatus, and electronic device
CN115660997B (zh) Image data processing method and apparatus, and electronic device
CN113079362B (zh) Video signal processing method and apparatus, and electronic device
CN113473101B (zh) Color correction method and apparatus, electronic device, and storage medium
CN112243118B (zh) White balance correction method, apparatus, device, and storage medium
CN115514947B (zh) AI automatic white balance algorithm and electronic device