CN116152360B - Image color processing method and device

Image color processing method and device

Info

Publication number
CN116152360B
CN116152360B
Authority
CN
China
Prior art keywords
image
light source
depth
point cloud
information
Prior art date
Legal status
Active
Application number
CN202111387638.1A
Other languages
Chinese (zh)
Other versions
CN116152360A (en)
Inventor
钱彦霖
Current Assignee
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Application filed by Honor Device Co Ltd
Priority to PCT/CN2022/117564 (WO2023082811A1)
Publication of CN116152360A
Application granted
Publication of CN116152360B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G06T7/557 Depth or shape recovery from multiple images from light fields, e.g. from plenoptic cameras
    • G06T7/90 Determination of colour characteristics
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The application provides an image color processing method, applied to an electronic device comprising a first camera and a second camera. The method comprises: acquiring a first image and a depth image, wherein the first image is acquired by the first camera, the depth image is acquired by the second camera, and the depth image indicates part or all of the depth information of the first image; determining first light source information of the first image based on the first image and the depth image, the first light source information comprising a white point value of the first image; and processing the first image based on the first light source information. Because the depth information indicated by the depth image is used as a basis for determining the light source information, the fact that white points at different positions influence pixels to different degrees is taken into account when the light source information is determined, which improves the accuracy of the light source information and therefore the effect of the image processing.

Description

Image color processing method and device
The present application claims priority from the Chinese patent application No. 202111350182.1, entitled "Image color processing method and apparatus", filed with the Chinese Patent Office on November 15, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the field of electronic information technologies, and in particular, to a method and an apparatus for processing image colors.
Background
Image processing is a common function of electronic devices and can be used in scenes such as photographing, video and the like. The image processing includes processing of image colors.
The processing effect of the image color is directly related to the feeling of the user, and how to improve the processing effect of the image color is one of the current research directions.
Disclosure of Invention
The application provides a processing method and device for image colors, and aims to solve the problem of how to improve the processing effect of the image colors.
In order to achieve the above object, the present application provides the following technical solutions:
a first aspect of the present application provides a method for processing image colors, applied to an electronic device, where the electronic device includes a first camera and a second camera, and the method includes: acquiring a first image and a depth image, wherein the first image is an image acquired by the first camera, the depth image is an image acquired by the second camera, and the depth image is used for indicating part or all of depth information of the first image. First light source information of the first image is determined based on the first image and the depth image, the first light source information comprising a white point value of the first image, and the first image is processed based on the first light source information. Because the depth information indicated by the depth image is used as the determination basis of the light source information, the influence degree of white points at different positions on pixels is considered to be different when the light source information is determined, and the accuracy of the light source information is improved, so that the effect of image processing is improved.
Optionally, in a first aspect of the present application, the determining, based on the first image and the depth image, first light source information of the first image, and based on the first light source information, an implementation manner of processing the first image includes: determining second light source information of a second image and third light source information of a third image based on the first image and the depth image, the first image including the second image and the third image, the second light source information including a white point value of the second image, and the third light source information including a white point value of the third image. The second image is processed based on the second light source information, and the third image is processed based on the third light source information. The first image is divided into at least a second image and a third image, the light source vectors of the second image and the third image are respectively determined, the second image and the third image are respectively processed according to the respective light source vectors, and the image blocking processing mode is closer to the feeling of human eyes, so that the image processing effect is further improved.
Optionally, in a first aspect of the present application, an implementation manner of determining the first light source information of the first image based on the first image and the depth image includes: and generating a point cloud based on the first image and the depth image, wherein any point in the point cloud has position information and color information. The first light source information of the first image is determined based on the point cloud. The point cloud can well fuse color information and depth information, and lays a foundation for determining light source information based on the color information and the depth information.
Optionally, in the first aspect of the present application, the generating a point cloud based on the first image and the depth image includes: two-dimensional position information of a target point, which is any point in the point cloud, is generated based on coordinates of a corresponding pixel in the first image, a depth value of the corresponding pixel in the depth image, and parameters of the second camera. And taking the two-dimensional position information and the depth value as position information of the target point in the point cloud. And taking the color information of the corresponding pixel in the first image of the target point as the color information of the target point in the point cloud.
Optionally, in the first aspect of the present application, the determining, based on the point cloud, the first light source information of the first image includes: inputting the point cloud into a light source estimation model to obtain a light source estimation vector output by the light source estimation model, wherein the light source estimation model is obtained by training with a sample point cloud and an augmented sample point cloud, and the augmented sample point cloud is obtained by augmenting the sample point cloud in at least one of the dimensions of camera view angle and illumination intensity. The augmented sample point cloud enriches the training data, which helps improve the accuracy of the light source estimation model.
Optionally, in the first aspect of the present application, the point cloud includes: local point clouds whose points correspond one-to-one with part of the pixels in the first image. The determining second light source information of a second image and third light source information of a third image based on the first image and the depth image includes: the second light source information of the second image is determined based on the local point cloud corresponding to the second image, and the third light source information of the third image is determined based on the local point cloud corresponding to the third image. The local point clouds lay a foundation for determining the light source information by dividing the first image into blocks and processing the blocks.
Optionally, in a first aspect of the present application, the first camera and the second camera are the same camera.
Optionally, in a first aspect of the present application, the first image includes: an RGB image.
Optionally, in the first aspect of the present application, the processing the first image includes: performing white balance processing on the first image, or re-lighting the first image. Because the determination of the light source information takes into account the relative positions of the light source pixels and other pixels, the light source information is more accurate and the white balance effect is better. Moreover, experiments show that the processing speed is faster and a smaller image is sufficient. The re-lighting process can preserve the relative relationship between the colors of the pixels.
Optionally, in a first aspect of the present application, the acquiring the first image and the depth image includes: and responding to the triggering operation of automatic white balance or re-lighting of the user interface, acquiring the first image and the depth image, and facilitating the operation of a user.
Optionally, in the first aspect of the present application, the user interface includes: and the real-time service interface is convenient for improving the image quality of the real-time service.
A second aspect of the present application provides a method for processing image colors, applied to an electronic device, where the electronic device includes a first camera and a second camera, and the method includes: and acquiring a first image and depth information, wherein the first image is an image acquired by the first camera, the depth information is acquired by the second camera, and the depth information is the depth information of part or all of the first image. First light source information of the first image is determined based on the first image and the depth information, the first light source information including a white point value of the first image, and the first image is processed based on the first light source information. Because the depth information is used as the determination basis of the light source information, the influence degree of white points at different positions on pixels is considered to be different when the light source information is determined, so that the accuracy of the light source information is improved, and the effect of image processing is improved.
Optionally, in a second aspect of the present application, the determining, based on the first image and the depth information, first light source information of the first image, and based on the first light source information, an implementation manner of processing the first image includes: determining second light source information of a second image and third light source information of a third image based on the first image and the depth information, the first image including the second image and the third image, the second light source information including a white point value of the second image, and the third light source information including a white point value of the third image. The second image is processed based on the second light source information, and the third image is processed based on the third light source information. The first image is divided into at least a second image and a third image, the light source vectors of the second image and the third image are respectively determined, the second image and the third image are respectively processed according to the respective light source vectors, and the image blocking processing mode is closer to the feeling of human eyes, so that the image processing effect is further improved.
Optionally, in a second aspect of the present application, an implementation manner of determining the first light source information of the first image based on the first image and the depth information includes: and generating a point cloud based on the first image and the depth information, wherein any point in the point cloud has position information and color information. The first light source information of the first image is determined based on the point cloud. The point cloud can well fuse color information and depth information, and lays a foundation for determining light source information based on the color information and the depth information.
Optionally, in a second aspect of the present application, the generating a point cloud based on the first image and the depth information includes: two-dimensional position information of a target point, which is any point in the point cloud, is generated based on coordinates of a corresponding pixel in the first image, a depth value, and parameters of the second camera. And taking the two-dimensional position information and the depth value as position information of the target point in the point cloud. And taking the color information of the corresponding pixel in the first image of the target point as the color information of the target point in the point cloud.
Optionally, in the second aspect of the present application, the determining, based on the point cloud, the first light source information of the first image includes: inputting the point cloud into a light source estimation model to obtain a light source estimation vector output by the light source estimation model, wherein the light source estimation model is obtained by training with a sample point cloud and an augmented sample point cloud, and the augmented sample point cloud is obtained by augmenting the sample point cloud in at least one of the dimensions of camera view angle and illumination intensity. The augmented sample point cloud enriches the training data, which helps improve the accuracy of the light source estimation model.
Optionally, in the second aspect of the present application, the point cloud includes: local point clouds whose points correspond one-to-one with part of the pixels in the first image. The determining, based on the first image and the depth information, second light source information of a second image and third light source information of a third image includes: the second light source information of the second image is determined based on the local point cloud corresponding to the second image, and the third light source information of the third image is determined based on the local point cloud corresponding to the third image. The local point clouds lay a foundation for determining the light source information by dividing the first image into blocks and processing the blocks.
Optionally, in a second aspect of the present application, the first camera and the second camera are the same camera.
Optionally, in a second aspect of the present application, the first image includes: an RGB image.
Optionally, in the second aspect of the present application, the processing the first image includes: performing white balance processing on the first image, or re-lighting the first image. Because the determination of the light source information takes into account the relative positions of the light source pixels and other pixels, the light source information is more accurate and the white balance effect is better. Moreover, experiments show that the processing speed is faster and a smaller image is sufficient. The re-lighting process can preserve the relative relationship between the colors of the pixels.
Optionally, in the second aspect of the present application, the acquiring the first image and the depth information includes: and responding to the triggering operation of automatic white balance or re-lighting of the user interface, acquiring the first image and the depth information, and facilitating the operation of a user.
Optionally, in a second aspect of the present application, the user interface includes: and the real-time service interface is convenient for improving the image quality of the real-time service.
A third aspect of the present application provides an electronic device, comprising: a memory for storing an application program, and one or more processors configured to run the application program to implement the image color processing method according to the first aspect or the second aspect of the present application.
A fourth aspect of the present application provides a computer-readable storage medium having a program stored thereon, which when executed by a computer device, implements the image color processing method according to the first or second aspect of the present application.
A fifth aspect of the present application provides a computer program product which, when run on a computer, causes the computer to perform the method of processing image colours according to the first or second aspect of the present application.
Drawings
FIG. 1 is an example diagram of a camera interface including AWB;
FIG. 2 is a diagram showing an example of the effect of the conventional AWB;
fig. 3 is a hardware configuration diagram of an electronic device provided in the present application;
fig. 4 is a software architecture diagram of an electronic device provided in the present application;
FIG. 5 is a flowchart for training a light source estimation model according to an embodiment of the present application;
fig. 6 is a flowchart of a processing method of image color according to an embodiment of the present application;
FIG. 7 is an effect diagram of a global AWB method according to an embodiment of the present application;
fig. 8a is an exemplary diagram of an application scenario of a method for processing image colors according to an embodiment of the present application;
fig. 8b is an exemplary diagram of an application of the image color processing method provided in the embodiment of the present application in a real-time scene;
FIG. 9 is a flowchart of a method for processing image colors according to another embodiment of the present disclosure;
fig. 10a and fig. 10b are each an exemplary diagram of an application scenario of the image color processing method provided in the embodiment of the present application;
FIG. 11a is an effect diagram of a local AWB method according to an embodiment of the present application;
FIG. 11b is a further effect diagram of the local AWB method provided by an embodiment of the present application;
FIG. 12 is a flowchart of a method for processing image colors according to another embodiment of the present disclosure;
FIG. 13a is an exemplary diagram before re-lighting according to an embodiment of the present application;
fig. 13b is an exemplary diagram of the effect of re-lighting according to an embodiment of the present application.
Detailed Description
Color processing of an image includes, but is not limited to, automatic white balance (automatic white balance, AWB) and re-lighting (Relighting).
The purpose of AWB is to reduce the impact of the light sources in the physical imaging environment on the imaged colors of objects. The idea of AWB is: light source information, commonly referred to as the "white point", is determined from the image, and the color of the "white point" is used to adjust the colors of the other pixels in the image, so as to remove the effect of the "white point" color on those pixels.
AWB is a common image processing function provided in electronic devices. Taking the image capturing interface of the camera shown in fig. 1 as an example, assuming that AWB has been turned on, the captured image displayed on the interface is an image after AWB, as shown in fig. 2.
However, the effect of the current AWB is still to be improved, and fig. 2 is taken as an example:
In fig. 2 there are two main light sources. The actual color of area 1 on the near wall is white, and the actual color of area 2 on the far wall is also white, but neither appears white after imaging; one of the purposes of AWB is to restore the color of the walls to white.
After AWB, the correlated color temperature (Correlated Color Temperature, CCT) of area 1 in fig. 2 is 7000k, and the CCT of area 2 is 4000k. The same white color thus exhibits a large color temperature difference in fig. 2, indicating a large color difference, so at least one area deviates considerably from white. There is room for improvement in the effectiveness of AWB.
The inventors found during the course of the study that the effect of the existing AWB is poor because it does not consider that "white points" at different positions influence other pixels to different degrees. For example, in fig. 2, the influence of the near lamp on area 1 is actually greater than that of the far lamp, and the influence of the far lamp on area 2 is greater than its influence on area 1. Moreover, the near lamp is closer to the camera, so its influence on the whole image is greater than that of the far lamp. However, the existing AWB treats the influence of the near and far lamps on other pixels as the same, determines the color of the "white point" from the average color of the two light sources, and adjusts the colors of other pixels using that "white point" color.
For the above reasons, the inventors further found that: if the position relation of each pixel in the image in the actual space is used as constraint, the AWB effect is improved.
Moreover, the depth image can express the position relation of each pixel in the image in the actual 3D space, and the configuration of a time-of-flight (TOF) camera on the electronic equipment is more and more common, so that the effect of AWB can be improved by utilizing the depth image acquired by the TOF camera.
In view of the above problems and findings, embodiments of the present application provide a method for processing image colors, which uses an RGB image together with a depth image to obtain the light source information (i.e., the "white point") in the RGB image, and adjusts the colors of the image based on the obtained light source information. The light source information obtained in this way is more accurate than that of the existing AWB, so the processed image is closer to the true colors.
Re-lighting means changing the color of the main light source without changing the irradiation direction of the light, while maintaining the relative color relationships among the pixels in the image. The light source information in the image is also the basis of re-lighting, and the image color processing method disclosed in the embodiments of the present application can realize re-lighting with more accurate light source information.
The meanings of some parameters are clarified below, and then two specific implementations of image color processing are described: AWB and re-lighting.
It should be noted that, the method provided in the embodiment of the present application is applicable to, but not limited to, an image imaged under a plurality of light sources, and may perform image color processing based on at least one of global light source information and local light source information.
The illumination mode of one RGB image is as shown in formula (1):
where ω_i is the incident angle of the light, the light coming from the upper hemisphere Ω+, ω_o is the observation angle of the illumination, represented in the image by the reflection angle of the light entering the RGB camera during shooting, N is the surface normal vector, E denotes the spectral energy distribution, S is the surface reflection coefficient, and R_rgb is the response coefficient of the RGB sensor.
Based on equation (1), in the embodiment of the present application, the global light source information of one RGB image is:
based on equation (2), global light source information can be understood as information of light sources affecting all pixels in an image.
An RGB image is divided into a plurality of image blocks, and E of each image block is called local light source information of one RGB image.
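The bodies of formulas (1) and (2) correspond to figures that are not reproduced in this text. The following is a hedged reconstruction, consistent with the variable descriptions above and with the standard image-formation model used in color constancy work; the exact notation of the original filing may differ:

```latex
% Reconstruction of formulas (1) and (2) from the variable list above; illustrative only.
% (1) Response of the RGB sensor for light arriving from the upper hemisphere \Omega^+ :
\rho_{rgb}(\omega_o) = \int_{\Omega^+} \int_{\lambda}
    E(\lambda, \omega_i)\, S(\lambda, \omega_i, \omega_o)\,
    R_{rgb}(\lambda)\, (N \cdot \omega_i)\, d\lambda\, d\omega_i

% (2) Global light source information: the illumination term weighted by the sensor response,
%     i.e. the part of (1) that does not depend on the surface reflection coefficient S:
E_{rgb} = \int_{\Omega^+} \int_{\lambda}
    E(\lambda, \omega_i)\, R_{rgb}(\lambda)\, d\lambda\, d\omega_i
```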
Note that, the reason why the color of the image is represented by CCT in fig. 2 is as follows:
an object that is neither reflected nor fully projected under the action of radiation, but on which the radiation is totally absorbed, can be referred to as a blackbody or a full radiator. When the black body is continuously heated, the maximum value of the relative spectral power distribution of the black body moves towards the short wave direction, the corresponding light color changes according to the sequence of red, yellow, white and blue, and at different temperatures, the arc-shaped locus formed on a chromaticity diagram by the light color change corresponding to the black body is called a black body locus or a Planckian locus.
CCT refers to the temperature of the black body radiator whose color is closest to that of a stimulus of the same brightness, expressed in kelvin (k), and is used as a measure of the color of light located near the Planckian locus.
Light sources other than thermal radiation sources have line spectra whose radiation characteristics differ significantly from those of black body radiation, so the light color of these sources does not necessarily fall exactly on the black body locus of the chromaticity diagram; for such light sources, CCT is often used to describe the color characteristics of the light source.
The AWB-related parameters include CCT and the chromaticity distance D_uv, but for the images provided in the embodiments of the present application D_uv can be omitted, so it is not described further.
Therefore, in the image examples below, CCT is still used to represent color, and 6500k, one of the industry-recognized white CCTs, is taken as the white CCT.
The image color processing method provided by the embodiment of the application is applied to electronic equipment. In some embodiments, the electronic device may be a cell phone, tablet, desktop, laptop, notebook, ultra mobile personal computer (Ultra-mobile Personal Computer, UMPC), handheld computer, netbook, personal digital assistant (Personal Digital Assistant, PDA), wearable electronic device, smart watch, or the like.
Fig. 3 is an example of a structure of an electronic device, including: an RGB camera 1, a TOF camera 2, a processor 3, a memory 4, and an I/O subsystem 5.
The RGB camera 1 (which may be referred to as the first camera) is used to collect RGB data. The RGB camera 1 may be provided as a front camera or a rear camera. The RGB camera 1 includes, but is not limited to, an RGB sensor 11 and an RGB sensor controller 12.
The TOF camera 2 (which may be referred to as the second camera) is used to acquire TOF data for generating depth images. The TOF camera 2 may be set as a front camera or a rear camera. It will be appreciated that in the electronic device described in this embodiment, since the RGB image and the depth image need to be registered, the RGB camera 1 and the TOF camera 2 are arranged close to each other on the electronic device in order to reduce the computational complexity.
The TOF camera 2 includes, but is not limited to, a TOF sensor 21, a TOF sensor controller 22, a TOF light source 23, and a TOF light source controller 24.
In certain embodiments, TOF light source controller 24 is controlled by TOF sensor controller 22 to effect control of TOF light source 23. The TOF light source 23 emits infrared light or laser light under the control of the TOF light source controller 24. The TOF sensor 21 is used for sensing light reflected by the object by the emitted light to acquire TOF data.
The RGB sensor controller 12, the TOF sensor controller 22, and the TOF light source controller 24 are provided in the I/O subsystem 5, and communicate with the processor 3 through the I/O subsystem 5.
The memory 4 is used for storing computer executable program code. In particular, the memory 4 may include a program memory area and a data memory area. Wherein the program storage area may store program code required for implementing an operating system, a software system, at least one function, etc. The data storage area may store data and the like acquired, generated, and used during use of the electronic device.
The processor 3 may comprise one or more processing units, such as: processor 3 may include an application processor (application processor, AP), a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), etc.
It is to be understood that the configuration illustrated in this embodiment does not constitute a specific limitation on the electronic apparatus. In other embodiments, the electronic device may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware. For example, the first camera and the second camera may be the same camera, i.e. the camera collects both RGB data and depth information.
It is understood that by running the code stored in the memory 4, the operating system implemented by the processor 3 may be an iOS operating system, an Android open source operating system, a Windows operating system, or the like. In the following embodiments, an Android open source operating system will be described as an example.
In some embodiments, as shown in fig. 4, the Android system is divided into four layers, from top to bottom, an application layer, an application framework layer, a hardware abstraction layer, and a kernel layer.
The application layer may include a series of applications. As shown in fig. 4, in the embodiment of the present application, examples of the application related to AWB and re-lighting include a camera and a video call, and the like.
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for application programs of the application layer. The application framework layer includes a number of predefined functions.
The hardware abstraction layer (HAL) and the Android Runtime are responsible for scheduling and managing the Android system; the HAL may be configured with a function or core library for implementing the image processing method based on light source information according to this embodiment.
The kernel layer is a layer between hardware and software. In the embodiment of the present application, the kernel layer at least includes an RGB camera driver, a TOF camera driver, and the like. And each driver is used for processing the acquired data of the hardware and reporting the processing result to a corresponding module of the hardware abstraction layer.
The processing method of the image color applied to the software and hardware frames will be described in detail.
As described above, in the embodiments of the present application, the RGB image and the depth image are used to obtain the light source information in the RGB image, but it is emphasized that the depth image represents the depth information of the pixels in the image, and the accuracy of the light source information cannot be directly improved by only introducing the depth image.
The inventor finds that the information obtained by combining the color information and the depth information has a mapping relation with the light source information in the research process, and the mapping relation can be well expressed through a neural network model.
The model is called a light source estimation model, and the light source estimation model realizes the function of outputting a light source estimation vector according to the point cloud by learning the mapping relation between the sample point cloud and the labels of the sample point cloud.
Note that, the point cloud described in this embodiment has not only position information but also color information. I.e. each point in the point cloud has position information and color information.
FIG. 5 is a training process for a light source estimation model, comprising the steps of:
s51, acquiring registered sample RGB images and sample depth images.
It will be appreciated that the sample RGB image may be obtained using an RGB camera in the electronic device, the sample depth image may be obtained using a TOF camera in the electronic device, and the sample RGB image and the sample depth image may also be from a training image set.
In order to facilitate label acquisition, the sample RGB images in this embodiment each include an imaging region of a color card.
S52, generating a sample point cloud set according to the registered sample RGB image and the sample depth image.
The sample point cloud set is a collection of N point clouds.
The point clouds generated in this embodiment have color information in addition to position information. Any point cloud in the sample point cloud set is represented here as P = {p_i | i ∈ 1, ..., n}, where p_i is any point in the point cloud and can be represented as a vector with six-dimensional information (x, y, z, r, g, b).
In some implementations, the generation rule of any one point vector in the point cloud is:
in the formula (3), u and v are p i Coordinates of corresponding pixels in the RGB image, d being p i Depth value f of corresponding pixel in depth image x ,f y Converting the focal length of the TOF camera to a component in the pixel coordinate system, c x And c y For converting the optical center of TOF camera into components under pixel coordinate system, r is p i In the RGB image, R component, g is p i In the RGB image, G component, b is p i The B component in the RGB image. It can be understood that, since this embodiment is a training process, in this embodiment, the RGB image related to the parameters in the formula (3) is a sample RGB image, the depth image is a sample depth image, and the generated points are points in the sample point cloud set.
It will be appreciated that, because the sample RGB image is registered with the sample depth image, the pixels of the two images correspond one-to-one, and corresponding pixels not only share coordinates in the pixel coordinate system but also represent the same location point on the imaged entity; this matters because the sample RGB image and the sample depth image are subsequently used to estimate the light source vector of the sample RGB image. As can be seen from formula (3), the points in the point cloud correspond one-to-one with the pixels in the sample RGB image and in the sample depth image, so any point in the point cloud represents the same location point on the imaged entity as its corresponding pixel in the sample RGB image or sample depth image.
In summary, compared with the sample RGB image, the sample point cloud is added with position information, and compared with the sample depth image, the sample point cloud obtained in the embodiment is added with color information, so that the sample point cloud obtained in the embodiment combines the information of the sample RGB image and the sample depth image, and has both the position information and the color information.
It will be appreciated that, in order to increase the number of sample point clouds so that the light source estimation model has enough training samples, the sample point clouds may be augmented.
In some implementations, because rotating the camera coordinates within a continuous region does not change the chromaticity of the main light source, rotation matrices R ∈ SO(3) (the special orthogonal group) are set to represent different shooting angles of the camera for the same object, and the position data in the sample point cloud is multiplied by each rotation matrix R (the color data is unchanged), so that sample point clouds of different angles are obtained and the sample point clouds are augmented.
In other implementations, based on the principle that the illumination of an object changes with the relative position of the object and the light source, while a change of illumination intensity under the same light source does not change the color of the light source, an illumination intensity coefficient L is set and the color data in the sample point cloud is multiplied by L (the position data is unchanged), so that the sample point clouds are augmented.
It will be appreciated that the shooting angle augmentation and the illumination intensity augmentation can also both be performed, as shown in formula (4):
where P′ denotes the augmented sample point cloud, P_pos denotes the position data in the sample point cloud before augmentation, and P_rgb denotes the color data in the sample point cloud before augmentation. L is a constant drawn from a one-dimensional Gaussian distribution, and the concatenation symbol in formula (4) denotes concatenation at the dimension level.
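Since formula (4) itself is not reproduced here, the sketch below shows one way to combine the two augmentations described above, applying a random rotation only to the position data and a Gaussian intensity coefficient only to the color data; the Gaussian parameters and the use of scipy for sampling rotations are illustrative assumptions, not details from the filing.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def augment_point_cloud(points, rng=None):
    """Augment a colored point cloud P (n x 6: x, y, z, r, g, b).

    A random rotation R in SO(3) simulates a different shooting angle and is applied
    to the position data only; a random intensity coefficient L drawn from a
    one-dimensional Gaussian scales the color data only. The Gaussian mean/std
    below are illustrative choices.
    """
    rng = rng or np.random.default_rng()
    p_pos, p_rgb = points[:, :3], points[:, 3:]

    r = Rotation.random().as_matrix()          # random camera view angle, R in SO(3)
    l = abs(rng.normal(loc=1.0, scale=0.2))    # illumination intensity coefficient L > 0

    p_pos_aug = p_pos @ r.T                    # rotate positions, colors unchanged
    p_rgb_aug = np.clip(p_rgb * l, 0.0, None)  # scale colors, positions unchanged

    # Concatenate along the feature dimension to form the augmented point cloud P'.
    return np.concatenate([p_pos_aug, p_rgb_aug], axis=1)
```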
S53, inputting the sample point cloud into a light source estimation model to obtain a light source estimation vector output by the light source estimation model.
In some implementations, point-by-point features may be extracted from the sample point cloud using PointNet, and the light source estimation vector is obtained based on weighting information of the feature of each point.
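A minimal sketch of such a model is given below: a PointNet-style shared MLP extracts a feature for every point, a per-point weight is predicted, and the weighted features are pooled into a single light source estimation vector. The layer sizes, the softmax weighting, and the use of PyTorch are illustrative assumptions, not details from the filing.

```python
import torch
import torch.nn as nn

class LightSourceEstimator(nn.Module):
    """Per-point feature extraction followed by weighted pooling.

    Input:  point cloud of shape (batch, n_points, 6) -> (x, y, z, r, g, b).
    Output: light source estimation vector of shape (batch, 3), i.e. the estimated "white point".
    """

    def __init__(self, in_dim=6, feat_dim=64):
        super().__init__()
        # Shared MLP applied to every point independently (PointNet-style).
        self.point_mlp = nn.Sequential(
            nn.Linear(in_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim), nn.ReLU(),
        )
        self.weight_head = nn.Linear(feat_dim, 1)   # per-point contribution weight
        self.out_head = nn.Linear(feat_dim, 3)      # pooled feature -> RGB light vector

    def forward(self, points):
        feats = self.point_mlp(points)                            # (B, N, F)
        weights = torch.softmax(self.weight_head(feats), dim=1)   # (B, N, 1), sums to 1 over points
        pooled = (weights * feats).sum(dim=1)                     # weighted pooling over points
        v = self.out_head(pooled)                                 # (B, 3)
        return v / v.norm(dim=1, keepdim=True)                    # unit-norm light source vector
```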
It will be appreciated that the global illuminant estimation vector in the RGB sample image is obtained in this step, i.e. one illuminant estimation vector is output for one RGB image as illuminant information of the one RGB image. The light source estimation vector can be understood as the color of the light source, and can be understood as the "white point" value of the RGB image.
It will be appreciated that in order to adapt to the requirements of the illuminant estimation model for the format of the input data, the format of the sample point cloud may also be converted, e.g. the dimensions of the sample point cloud are reconstructed, before performing S53.
S54, adjusting parameters of the light source estimation model by using the labels of the sample point clouds, the light source estimation vectors and the loss function output by the light source estimation model.
The labels of the sample point clouds are denoted here as ε = {E_1, E_2, ..., E_M}. For any one sample point cloud, the label may be acquired as follows: identify the color card region in the sample RGB image, and determine the label of the sample point cloud generated from that sample RGB image according to the color value of a pre-designated reference patch in the color card region; for example, the color value of the reference patch is used as the label of the sample point cloud.
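A minimal sketch of this label extraction, assuming the color card has already been located and a boolean mask of the pre-designated reference patch is available (the normalization to unit length is an illustrative choice, not taken from the filing), is:

```python
import numpy as np

def point_cloud_label(sample_rgb, reference_patch_mask):
    """Compute the light source label E for a sample point cloud.

    sample_rgb:           H x W x 3 sample RGB image containing the color card.
    reference_patch_mask: H x W boolean mask of the pre-designated reference patch.
    The label is taken as the mean color of the reference patch, normalized to unit
    length, i.e. the chromaticity of the light falling on the card.
    """
    patch_pixels = sample_rgb[reference_patch_mask].astype(np.float64)  # (k, 3)
    mean_rgb = patch_pixels.mean(axis=0)
    return mean_rgb / np.linalg.norm(mean_rgb)
```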
The training method shown in fig. 5 enables the light source estimation model to learn the mapping relationship between point clouds carrying both position and color information and the light source information, laying a foundation for improving the effect of the image processing method based on light source information. Because position information is added on top of color information, the model can learn to determine the light source distribution in space from the mutual positional relationships and colors of the points, and further determine the global light source, thereby improving the accuracy of the light source information. Furthermore, the sample point cloud augmentation described above provides enough sample point clouds to obtain a better training effect.
Based on the light source estimation model obtained by training, a flow of the image color processing method provided by the embodiment of the application is shown in fig. 6, and the flow focuses on the description of AWB. Fig. 6 includes the following steps:
s61, acquiring an RGB image through an RGB camera and acquiring a depth image through a TOF camera.
It will be appreciated that S61 may be triggered to be performed by a user operation or the like, for example, the user clicks a photographing button on the interface shown in fig. 1.
It will be appreciated that in some implementations, the RGB image and the depth image are registered by parameters of the RGB camera and the TOF camera pre-configured in the electronic device, and a conversion matrix between a world coordinate system, a camera coordinate system, and a pixel coordinate system, which are not described here. The RGB images and depth images described in the following steps are registered RGB images and depth images.
S62, generating a global point cloud by using the RGB image and the depth image.
The global point cloud is a point cloud corresponding to all pixel points in the RGB image.
The generation rule of the point cloud can be referred to as formula (3), and will not be described here again.
It can be appreciated that in this embodiment, the points in the point cloud are in one-to-one correspondence with the pixels in the RGB image and one-to-one correspondence with the pixels in the depth image. Pixels in the RGB image correspond one-to-one with pixels in the depth image. The corresponding specific meanings are as described above.
S63, inputting the global point cloud into the light source estimation model to obtain a global light source estimation vector output by the light source estimation model.
It can be appreciated that since the global point cloud is input, the global light source estimation vector of the RGB image acquired in S61 is obtained.
It is understood that the light source vector comprises the white point value of the RGB image. As previously mentioned, the white point value may be understood as the color value of the light source.
S64, according to the global light source estimation vector, RGB values of other pixels in the RGB image are adjusted.
It will be appreciated that light sources at different positions influence pixels to different degrees, and the depth information can represent the positional relationship between the light sources and the other pixels in the image, or between the light sources and the objects in the image. Therefore, introducing depth information into the flow shown in fig. 6 helps determine the "white point" (i.e. the light source estimation vector) in the image according to the degree of influence of each light source, thereby obtaining a more accurate "white point".
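A minimal sketch of the adjustment in S64, assuming the light source estimation vector is the estimated RGB color of the "white point" and applying simple von Kries-style per-channel gains normalized to the green channel (an illustrative choice, not specified in the filing), might look like this:

```python
import numpy as np

def apply_awb(rgb, light_vector):
    """Adjust RGB values according to the global light source estimation vector.

    rgb:          H x W x 3 image (assumed 8-bit here).
    light_vector: length-3 estimated color of the "white point" (R, G, B).
    Each channel is divided by the corresponding component of the white point,
    normalized so that the green-channel gain is 1, removing the color cast of
    the estimated light source.
    """
    light = np.asarray(light_vector, dtype=np.float64)
    gains = light[1] / light                      # von Kries-style per-channel gains
    corrected = rgb.astype(np.float64) * gains    # broadcast over the color axis
    return np.clip(corrected, 0.0, 255.0).astype(rgb.dtype)
```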
An example of the processed image resulting from the flow shown in fig. 6 is shown in fig. 7:
In fig. 7, the CCT of area 1 is 6500k and the CCT of area 2 is close to 3500k. Comparing fig. 2 and fig. 7, it can be seen that both area 1 and area 2 in fig. 7 are closer to white, indicating that the light source with the larger global influence was selected.
As can be seen from the positions of the two light sources in fig. 2 and fig. 7, the near light is closer to the lens, so it has a greater effect on the colors of the pixels in the image, while the far light has a smaller effect. The flow shown in fig. 6 introduces this difference in influence into the calculation of the "white point", i.e. the near light is given a greater weight in the calculation of the "white point", so that in the image adjusted with this "white point" the wall color is closer to white than in fig. 2.
Table 1 lists the flow shown in fig. 6 and the effect comparison data performed on the same data set with other existing white balance adjustment algorithms:
TABLE 1
In table 1, existing algorithms 1-5 represent existing AWB algorithms. The present invention represents the AWB algorithm shown in fig. 6. Where Points refers to the number of pixels that the RGB image has.
The Angular error refers to the cosine value of the angle between the light source vector estimated by the algorithm and the sample light source vector, and is used for measuring the difference between the light source vector estimated by the algorithm and the sample light source vector. Taking the existing algorithm 1 as an example, angular error refers to the cosine value of the angle between the light source vector estimated by the existing algorithm 1 and the sample light source vector. For any RGB image, the sample light source vector refers to a label of the light source vector in the RGB image, that is, a label of the sample point cloud, and the method for obtaining the sample light source vector may refer to the method for obtaining the label of the sample point cloud.
The Mean represents the Mean value of the angle errors obtained by performing the algorithm on a plurality of (e.g., 500) RGB images, and, taking the existing algorithm 1 as an example, the existing algorithm 1 is performed on a plurality of RGB images to obtain a plurality of angle errors, and Mean represents the Mean value of the plurality of angle errors.
The Median value Median represents a Median value of angle errors obtained by performing the algorithm on the plurality of RGB images, and the three-mean value tri represents three-mean values of angle errors obtained by performing the algorithm on the plurality of RGB images, the optimal B25% represents a mean value of angle errors of 25% of which the numerical value is the smallest (best) among angle errors obtained by performing the algorithm on the plurality of RGB images, and the worst W25% represents a mean value of angle errors of 25% of which the numerical value is the largest (worst) among angle errors obtained by performing the algorithm on the plurality of RGB images.
The training length refers to the length of time required for training the illuminant estimation model by the method of the present invention.
From the various parameters of the angle error, the flow described in this embodiment is superior to the existing algorithm. In addition, the present invention (256 points, w/o depth) refers to parameters obtained by an algorithm that does not consider depth information, and it can be seen that the effect is significantly deteriorated without considering depth information.
Moreover, as can be seen from the several rows of data for the present invention, the effect improves as the number of pixels increases, yet even the 16-point case performs better than the existing algorithms; the method of this embodiment is therefore suitable for small images while still outperforming the existing algorithms.
The process described in this embodiment may be triggered and executed by an AWB control on the user interaction interface, as shown in fig. 8a, and after the AWB control is triggered by the user to make the AWB be in an on state, the electronic device performs the process shown in fig. 6 on the image shot by the camera, so as to obtain the image shown in fig. 7.
The test duration is the duration of processing one image after the actual deployment algorithm, and has more practical significance. The training duration is the model training duration offline at the server before algorithm deployment.
It can also be seen from table 1 that the flow shown in fig. 6 has a faster processing speed in addition to a better AWB effect.
The inventors also found during the course of the study that the existing AWB algorithms, because they require larger images, are limited by the hardware platform and can hardly adjust the white balance in real time, whereas the flow shown in fig. 6, being suitable for smaller images and faster, can adjust the white balance in real time.
Therefore, it can be understood that the flow shown in fig. 6 can be applied to a real-time scene in addition to the scene shown in fig. 1, and taking the video call scene shown in fig. 8b as an example, in the video call process, a user can start the AWB function at the call interface to realize the function of adjusting the white balance of the call video, so as to provide better call image quality for the user.
In a real-time scenario, in some implementations, the electronic device performs S61 after acquiring the AWB instruction. It will be appreciated that the electronic device may already have been acquiring RGB images with the RGB camera before acquiring the AWB instruction; as shown in fig. 8b, the RGB images of both parties of the call are displayed during the video call (the RGB image of the other party is captured by the peer device and transmitted to the local device), but the TOF camera may not yet be in use. After the electronic device acquires the AWB instruction, for example when the user clicks the AWB key in the interface of fig. 8b, the electronic device acquires the depth image using the TOF camera while still acquiring the RGB image using the RGB camera. The steps following S61 are as shown in fig. 6 and are not repeated here.
Further, the size of the processed image is smaller, and besides the application to real-time scenes, the limitation on the resolution of the TOF camera can be reduced, for example, the TOF image obtained by the TOF camera with the resolution of 8×8 can meet the requirement.
The above embodiment focuses on the processing of image colors based on global light source information, and in the embodiment of the present application, the processing flow based on global light source information shown in fig. 6 is referred to as global AWB. It will be appreciated that in the case of multiple light sources, it is also possible to obtain the local light source information of the RGB image, i.e. the light source information of each image block, based on the model obtained by training, and for each image block, use the light source information of that image block to perform AWB, and this way of performing AWB in blocks is referred to as local AWB.
As shown in fig. 9, a further image color processing method disclosed in the embodiment of the present application is executed after the instruction of the local AWB is acquired.
The instruction for local AWB may be obtained through a user interaction interface. As shown in fig. 10a and fig. 10b, options for global AWB (abbreviated GAWB) and local AWB (abbreviated LAWB) are displayed respectively, and an operation on the interface determines whether the global AWB shown in fig. 6 or the local AWB shown in fig. 9 is executed.
Fig. 9 includes the following steps:
s91, acquiring an RGB image and a depth image.
See S61 for a specific implementation.
S92, acquiring a local point cloud by using the RGB image and the depth image.
The local point cloud refers to a point cloud block divided by the global point cloud. It will be appreciated that points in the global point cloud correspond to all pixels in the RGB image and points in the local point cloud correspond to part of the pixels in the RGB image.
Because the RGB image is subjected to local AWB, a local illuminant estimation vector is obtained, and based on the function of the illuminant estimation model, the point cloud block can be input into the illuminant estimation model to obtain the illuminant estimation vector of the RGB image block corresponding to the point cloud block.
In some implementations, the RGB image and the depth image are segmented to obtain registered RGB image blocks and depth image blocks, and then the registered RGB image blocks and depth image blocks are used to generate the local point cloud blocks. It will be appreciated that parameters of each RGB image block and depth image block registered are sequentially input according to equation (3), to obtain each point in each point cloud block.
In other implementations, the registered RGB image and the depth image are used to generate a global point cloud, and then the global point cloud is divided into local point clouds according to the division mode of the RGB image blocks and the position information of each point of the global point cloud.
It is understood that the partitioning of the RGB image, the depth image, or the global point cloud may be implemented according to the depth values of the pixels in the depth image.
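As a concrete illustration of the second implementation above, the sketch below divides a global point cloud into local point clouds according to a regular grid of image blocks; the grid-based division and the row-major point ordering are assumptions for illustration (division according to the depth values, as just noted, is equally possible).

```python
import numpy as np

def split_into_local_point_clouds(global_points, h, w, blocks_y, blocks_x):
    """Divide a global point cloud into local point clouds by image-block grid.

    global_points: (h*w) x 6 array produced pixel-by-pixel in row-major order,
                   so point index i corresponds to pixel (i // w, i % w).
    Returns a dict mapping (block_row, block_col) -> local point cloud array.
    """
    rows = np.arange(h * w) // w
    cols = np.arange(h * w) % w
    block_row = rows * blocks_y // h        # block index of each point, 0..blocks_y-1
    block_col = cols * blocks_x // w        # block index of each point, 0..blocks_x-1

    local_clouds = {}
    for by in range(blocks_y):
        for bx in range(blocks_x):
            mask = (block_row == by) & (block_col == bx)
            local_clouds[(by, bx)] = global_points[mask]
    return local_clouds
```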
S93, sequentially inputting the local point clouds into the light source estimation model to obtain light source estimation vectors of all the local point clouds output by the light source estimation model.
It will be appreciated that for any one local point cloud, the illuminant estimation model outputs a global illuminant estimation vector for that local point cloud, but for the whole RGB image or global point cloud, the illuminant estimation model outputs a local illuminant estimation vector.
S94, according to the light source estimation vector of each local point cloud, RGB components of pixels are adjusted for the RGB image blocks corresponding to the local point cloud.
That is, the RGB components of the pixels of each RGB image block are adjusted separately, and for any one RGB image block, the RGB components of the pixels in that image block are adjusted using the light source estimation vector obtained for that image block.
Unlike the global AWB approach, the local AWB approach determines the "white point" in blocks and adjusts the color of the pixel in blocks, which is a way of adjusting in regions, closer to the function of the human eye.
Fig. 11a and fig. 11b are each an example of the effect of local AWB: fig. 11a is the local AWB result of dividing the RGB image and the depth image into 5×5 image blocks, and fig. 11b is the local AWB result of dividing the RGB image and the depth image into 100×100 image blocks.
Since each image block estimates its own "white point" and adjusts the colors of its other pixels according to that "white point", in fig. 11a, compared with fig. 7, the CCT of region 1 is 6500 K and the CCT of region 2 is 6300 K; the two values are closer to each other, and region 2 is closer to white. It can be seen that local AWB processing based on local light source information is more effective than global AWB processing.
In fig. 11b, compared with fig. 11a, the CCT of region 1 is 6500 K and the CCT of region 2 is also 6500 K. It can be seen that the greater the number of blocks on the image or point cloud, the better the effect of local AWB.
Further, from the perspective of visually perceived color, for the near wall, region 3 is closer to the nearby light source and region 4 is farther from the light source than region 3, so region 3 is closer to the color of the light source and closer to white. That is, the near wall in fig. 11b shows more color gradation with distance from the light source than the near wall in fig. 11a, and is therefore closer to the actual scene and to the effect perceived by the human eye.
Therefore, the greater the number of blocks on the image or point cloud, the better the effect of local AWB, but also the more computing resources are occupied and the more time is consumed, as shown in Table 1. In practice, the number of blocks can therefore be selected according to the requirements on effect, time consumption and resource occupation.
The inventors have further found that, in addition to local AWB, other image processing functions may also be provided using the local light source information to enhance the user experience.
In some implementations, the local light source estimation vectors may be used for re-lighting (Relighting). As described above, re-lighting means changing the color of the main light source while keeping the irradiation direction of the light unchanged and maintaining the relative color relationship between the pixels in the image.
Fig. 12 shows a further image color processing method according to an embodiment of the present application, which is executed after the re-lighting instruction is acquired. The re-lighting function may be packaged as an application program, with the re-lighting instruction triggered by a user's operation.
Taking fig. 13 as an example, the re-lighting function is packaged as a filter and displayed in an interface related to image processing. In fig. 13, the display interface of the photographing application displays buttons for various filters, including a button for the re-lighting filter. After the user taps the button of the re-lighting filter, the re-lighting function is started, and after the user taps the shooting button, the re-lighting instruction is issued.
Fig. 12 includes the following steps:
S121, acquiring an RGB image and a depth image.
See S61 for a specific implementation.
S122, acquiring a local point cloud by using the RGB image and the depth image.
See S92 for specific implementations.
S123, sequentially inputting the local point clouds into the light source estimation model to obtain light source estimation vectors of all the local point clouds output by the light source estimation model.
See S93 for specific implementations.
S124, re-lighting the RGB image according to the light source estimation vector of each local point cloud and the obtained color data.
Color data refers to data representing the color of the light source selected by the user, such as a CCT value. It can be understood that the color data may be obtained through a user interface.
For example, the interface shown in fig. 13a displays an image taken in the morning of a sunny day, together with various filter controls such as studio light. After the user selects the re-lighting control, the interface shown in fig. 13b is displayed. Assuming here that the re-lighting effects include sunrise and overcast day, after the user selects the re-lighting filter and further selects the overcast day, the image shown in fig. 13a is processed into the effect shown in fig. 13b.
It will be appreciated that, in addition to selecting the re-lighting effect through a control, a color chart (not shown in fig. 13a and 13b) may be displayed so that the user selects a color value of the light source by tapping a color in the color chart, or an input interface for the color value (not shown in fig. 13a and 13b) may be displayed, and so on.
It should be noted that, although re-lighting is presented to the user as one of the filters in fig. 13a and 13b, it can be seen from the flow shown in fig. 12 that re-lighting adjusts the RGB values of the pixels in the image, rather than implementing the filter by adding a "mask" with a lighting effect to the image, as other existing filters such as theatre light do; adding such a "mask" changes the relative color relationship between the pixels.
Therefore, the re-lighting described in this embodiment is a pixel-level transformation that preserves the relative relationship of pixel colors, so when subsequent processing needs to use the relative relationship between pixels of the original image, the re-lit image can be used directly in that processing.
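To make the pixel-level nature of re-lighting concrete, the sketch below shows one possible realization of S124: each block is divided by its estimated light source color and multiplied by a target light source color derived from the user-selected CCT, so the main light color changes while the relative colors of pixels within a block are preserved. The simplified CCT-to-RGB conversion, the helper names, and the reuse of illuminant_model and block_slices from the sketches above are assumptions of this illustration; a product implementation would normally use a calibrated CCT lookup, and this is not presented as the patented algorithm itself.

```python
import numpy as np

def cct_to_rgb(cct_kelvin):
    """Very rough CCT -> RGB approximation (an assumption of this sketch;
    a real pipeline would use a calibrated lookup table)."""
    t = cct_kelvin / 100.0
    r = 255.0 if t <= 66 else 329.7 * (t - 60) ** -0.1332
    g = 99.47 * np.log(t) - 161.1 if t <= 66 else 288.1 * (t - 60) ** -0.0755
    b = 255.0 if t >= 66 else (0.0 if t <= 19 else 138.5 * np.log(t - 10) - 305.0)
    rgb = np.clip([r, g, b], 1.0, 255.0).astype(np.float32)
    return rgb / rgb[1]                            # normalize to the G channel

def relight(rgb, local_point_clouds, block_slices, illuminant_model, target_cct):
    """S124 sketch: per block, divide out the estimated light source color and
    multiply by the target light source color chosen by the user, so the main
    light color changes while relative pixel colors inside a block are kept."""
    target = cct_to_rgb(target_cct)
    out = rgb.astype(np.float32)
    for cloud, (ys, xs) in zip(local_point_clouds, block_slices):
        light = np.asarray(illuminant_model(cloud), dtype=np.float32)
        light = np.maximum(light / light[1], 1e-6)   # avoid division by zero
        out[ys, xs] = out[ys, xs] / light * target   # swap estimated light for target light
    return np.clip(out, 0, 255).astype(np.uint8)
```

Because the correction is a per-block gain rather than an overlaid "mask", the relative relationship between pixels inside each block is kept, which matches the property discussed above.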
It can be understood that the above illustrations take enabling the AWB function through an interface as an example. In addition, the global AWB or local AWB described in the embodiments of the present application may be enabled or disabled in a settings interface of the electronic device; that is, the manner of enabling or disabling global AWB or local AWB is not limited in the embodiments of the present application.
The above embodiments take an RGB image as an example, but any image that carries color information, that is, any color image, can provide color information to the point cloud. Therefore, the embodiments of the present application are not limited to RGB images; a YUV image or the like may also be used.
It is understood that embodiments of the present application are not limited to color images, and that RGB images may be replaced with grayscale images (including black and white images).
Based on the principle of the TOF camera, the depth data acquired by the TOF camera may be used directly in the embodiments of the present application; the depth data represents the distance between the pixels of the RGB image or grayscale image and the camera.
It can be appreciated that, in addition to the structure of the electronic device described in the foregoing embodiments, an embodiment of the present application further provides an electronic device including a memory and one or more processors. The memory is configured to store an application program, and the one or more processors are configured to run the application program to implement the image color processing method described in the foregoing embodiments.
An embodiment of the present application also discloses a computer-readable storage medium on which a program is stored, and the image color processing method described in the foregoing embodiments is implemented when the program is run by a computer device.
The embodiment of the application also discloses a computer program product, which when running on a computer, causes the computer to execute the image color processing method described in the embodiment.

Claims (12)

1. An image color processing method, applied to an electronic device, the electronic device comprising a first camera and a second camera, characterized in that the method comprises:
acquiring a first image and a depth image, wherein the first image is an image acquired by the first camera, the depth image is an image acquired by the second camera, and the depth image is used for indicating the depth information of part or all of the first image;
determining second light source information of a second image and third light source information of a third image based on the first image and the depth image, the first image including the second image and the third image, the second light source information including a white point value of the second image, the third light source information including a white point value of the third image;
the second image is processed based on the second light source information, and the third image is processed based on the third light source information.
2. The method of claim 1, wherein the determining second light source information for a second image and third light source information for a third image based on the first image and the depth image comprises:
generating a point cloud based on the first image and the depth image, wherein any point in the point cloud has position information and color information; the point cloud comprises a local point cloud, and the local point cloud is a point cloud comprising points in one-to-one correspondence with part of the pixels in the first image;
determining the second light source information of the second image based on a local point cloud corresponding to the second image;
the third light source information of the third image is determined based on a local point cloud corresponding to the third image.
3. The method of claim 2, wherein the generating a point cloud based on the first image and the depth image comprises:
generating two-dimensional position information of a target point based on coordinates of a corresponding pixel in the first image, a depth value of the corresponding pixel in the depth image, and parameters of the second camera; the target point is any point in the point cloud;
taking the two-dimensional position information and the depth value as position information of the target point in the point cloud;
and taking the color information of the corresponding pixel of the target point in the first image as the color information of the target point in the point cloud.
4. The method of claim 2, wherein the determining the second light source information for the second image based on the local point cloud corresponding to the second image comprises:
inputting the local point cloud corresponding to the second image into a light source estimation model to obtain a light source estimation vector output by the light source estimation model;
the light source estimation model is obtained by training a sample point cloud and an amplified sample point cloud, and the amplified sample point cloud is obtained by amplifying the sample point cloud from at least one dimension of a camera view angle and illumination intensity.
5. The method of any one of claims 1-4, wherein the first camera and the second camera are the same camera.
6. The method of any one of claims 1-4, wherein the first image comprises:
an RGB image.
7. The method of any of claims 1-4, wherein the processing the second image comprises:
performing white balance processing on the second image, or re-lighting the second image.
8. The method of any of claims 1-4, wherein the acquiring the first image and the depth image comprises:
acquiring the first image and the depth image in response to a triggering operation of automatic white balance or re-lighting on a user interface.
9. The method of claim 8, wherein the user interface comprises:
and (5) a real-time service interface.
10. An image color processing method, applied to an electronic device, the electronic device comprising a first camera and a second camera, characterized in that the method comprises:
acquiring a first image and depth information, wherein the first image is an image acquired by the first camera, the depth information is acquired by the second camera, and the depth information is the depth information of part or all of the first image;
determining second light source information of a second image and third light source information of a third image based on the first image and the depth information, the first image including the second image and the third image, the second light source information including a white point value of the second image, the third light source information including a white point value of the third image;
the second image is processed based on the second light source information, and the third image is processed based on the third light source information.
11. An electronic device, comprising:
a memory for storing an application program;
one or more processors configured to run the application program to implement the method of processing image colors of any of claims 1-10.
12. A computer-readable storage medium, on which a program is stored, characterized in that the image color processing method according to any one of claims 1-10 is implemented when the program is run by a computer device.
CN202111387638.1A 2021-11-15 2021-11-22 Image color processing method and device Active CN116152360B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/117564 WO2023082811A1 (en) 2021-11-15 2022-09-07 Image color processing method and apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111350182 2021-11-15
CN2021113501821 2021-11-15

Publications (2)

Publication Number Publication Date
CN116152360A CN116152360A (en) 2023-05-23
CN116152360B (en) 2024-04-12

Family

ID=86351208

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111387638.1A Active CN116152360B (en) 2021-11-15 2021-11-22 Image color processing method and device

Country Status (1)

Country Link
CN (1) CN116152360B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117197004B (en) * 2023-11-08 2024-01-26 广东鸿威国际会展集团有限公司 Low-illumination image optimization enhancement method and system

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107911682A (en) * 2017-11-28 2018-04-13 广东欧珀移动通信有限公司 Image white balancing treatment method, device, storage medium and electronic equipment
CN112866667A (en) * 2021-04-21 2021-05-28 贝壳找房(北京)科技有限公司 Image white balance processing method and device, electronic equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170171523A1 (en) * 2015-12-10 2017-06-15 Motorola Mobility Llc Assisted Auto White Balance

Also Published As

Publication number Publication date
CN116152360A (en) 2023-05-23


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant