CN113676715A - Image processing method and device - Google Patents

Image processing method and device

Info

Publication number
CN113676715A
CN113676715A (application CN202110971031.1A)
Authority
CN
China
Prior art keywords
color value
image
color
value
processed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110971031.1A
Other languages
Chinese (zh)
Other versions
CN113676715B (en)
Inventor
刘新宇
郭逸汀
熊佳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Spreadtrum Semiconductor Nanjing Co Ltd
Original Assignee
Spreadtrum Semiconductor Nanjing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Spreadtrum Semiconductor Nanjing Co Ltd filed Critical Spreadtrum Semiconductor Nanjing Co Ltd
Priority to CN202110971031.1A priority Critical patent/CN113676715B/en
Publication of CN113676715A publication Critical patent/CN113676715A/en
Application granted granted Critical
Publication of CN113676715B publication Critical patent/CN113676715B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • H04N23/84Camera processing pipelines; Components thereof for processing colour signals
    • H04N23/88Camera processing pipelines; Components thereof for processing colour signals for colour balance, e.g. white-balance circuits or colour temperature control

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)

Abstract

An embodiment of the application provides an image processing method and device. The method includes: acquiring an image to be processed, where the image to be processed includes a face image; acquiring a color value and a conversion coefficient of the face image in a first coordinate system; converting the color value of the face image through the conversion coefficient to obtain a reference color value, where the reference color value indicates the color value, in the first coordinate system, of the ambient light corresponding to the face image; and determining a target color value according to the image information of the image to be processed and the reference color value, and performing white balance processing on the image to be processed using the target color value, where the target color value indicates the color value, in the first coordinate system, of the ambient light corresponding to the image to be processed. Because the reference color value is determined through a conversion coefficient expressing a first-order linear relation between the reference color value and the color value of the face image, the white balance effect for portraits can be effectively improved.

Description

Image processing method and device
Technical Field
The present disclosure relates to image processing technologies, and in particular, to an image processing method and apparatus.
Background
An Automatic White Balance (AWB) algorithm may compensate for a color cast phenomenon occurring in an image when photographed under a specific light source by enhancing a corresponding complementary color, so as to implement white balance processing.
The current FACE-AWB algorithm has errors in the process of mapping a reference skin color point to a neutral color point, so that the human FACE is prone to color cast in a large-area mixed color scene and a large-area non-neutral color pure color scene.
Therefore, the current FACE-AWB algorithm has a problem that the white balance processing effect of the portrait is not good.
Disclosure of Invention
The embodiment of the application provides an image processing method and device, which are used for overcoming the problem of poor white balance processing effect of a portrait.
In a first aspect, an embodiment of the present application provides an image processing method, including:
acquiring an image to be processed, wherein the image to be processed comprises a face image;
acquiring a color value and a conversion coefficient of the face image in a first coordinate system;
converting the color value of the face image through the conversion coefficient to obtain a reference color value, wherein the reference color value is used for indicating the color value of the ambient light corresponding to the face image in the first coordinate system;
and determining a target color value according to the image information of the image to be processed and the reference color value, and performing white balance processing on the image to be processed through the target color value, wherein the target color value is used for indicating a color value of ambient light corresponding to the image to be processed in the first coordinate system.
In one possible design, determining a target color value according to the image information of the image to be processed and the reference color value includes:
determining a weighting coefficient according to the image information of the image to be processed;
acquiring a preset adjusting color value, wherein the preset adjusting color value is a color value in the first coordinate system obtained by automatic white balance processing;
and determining the target color value according to the reference color value, the weighting coefficient and the preset adjusting color value.
In one possible design, the image information includes at least one image parameter; determining a weighting coefficient according to the image information of the image to be processed, including:
determining a weighting coefficient corresponding to each image parameter according to the parameter range of each image parameter;
determining an average value of the weighting coefficients corresponding to the at least one image parameter as the weighting coefficient.
In one possible design, the color values include a color X value and a color Y value, and the target color value, the weighting coefficient and the preset adjustment color value satisfy the following Formula 1:
Finalx = facewhite_x × prop + AWBx × (1 - prop)
Finaly = facewhite_y × prop + AWBy × (1 - prop)
where (Finalx, Finaly) is the target color value, (facewhite_x, facewhite_y) is the reference color value, prop is the weighting coefficient, and (AWBx, AWBy) is the preset adjustment color value.
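Formula 1 is a straightforward linear blend. As an illustrative sketch (the function and variable names below are our own, not from the patent):

```python
def blend_target_color(face_white, awb, prop):
    """Formula 1: blend the face-derived reference color value with the
    basic AWB result in the first (xy) coordinate system.
    prop is the weighting coefficient in [0, 1]."""
    fx, fy = face_white
    ax, ay = awb
    return (fx * prop + ax * (1.0 - prop),
            fy * prop + ay * (1.0 - prop))

# prop = 1 fully trusts the face-based estimate; prop = 0 falls back to
# the basic AWB result.
target = blend_target_color((0.33, 0.35), (0.31, 0.33), 0.5)
```

With prop determined from the image information as described above, the blend degrades gracefully to the plain AWB result when the face-based estimate is unreliable.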
In one possible design, the reference color value, the color value of the face image, and the conversion coefficient satisfy the following formula two:
facewhite_x = kx × Currentx + bx
facewhite_y = ky × Currenty + by
where (facewhite_x, facewhite_y) is the reference color value, (Currentx, Currenty) is the color value of the face image, and the conversion coefficient includes kx, bx, ky and by.
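Applying Formula 2 is a single first-order evaluation per coordinate; a minimal sketch (names are illustrative, not from the patent):

```python
def to_reference_color(current_x, current_y, kx, bx, ky, by):
    """Formula 2: map the face's skin chromaticity (Currentx, Currenty)
    to the reference (ambient light) chromaticity (facewhite_x, facewhite_y)."""
    return kx * current_x + bx, ky * current_y + by

# identity coefficients leave the color value unchanged
ref = to_reference_color(0.40, 0.38, 1.0, 0.0, 1.0, 0.0)
```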
In one possible design, obtaining the conversion coefficients includes:
acquiring a plurality of first reference images, where the plurality of first reference images include M × N images obtained by photographing N skin color cards under M light sources, M and N each being an integer greater than 1;
acquiring a plurality of second reference images, where the plurality of second reference images include M images obtained by photographing a preset gray card under the M light sources;
and generating the conversion coefficient according to the color values of the first reference images and the color values corresponding to the second reference images.
In one possible design, generating the conversion coefficient according to the color values of the first reference images and the color values of the second reference images includes:
determining, among the plurality of first reference images, the N first reference images corresponding to each light source, where the N first reference images are images captured under that light source;
determining the color value corresponding to each light source according to the color values of the N first reference images corresponding to that light source;
and generating the conversion coefficient according to the color value corresponding to each light source and the color values corresponding to the plurality of second reference images.
In one possible design, the color values include color X values and color Y values, and the conversion coefficients include a conversion coefficient for the color X value and a conversion coefficient for the color Y value; generating the conversion coefficient according to the color value corresponding to each light source and the color values corresponding to the plurality of second reference images includes:
generating kx and bx according to the color X value corresponding to each light source and the color X values corresponding to the plurality of second reference images;
and generating ky and by according to the color Y value corresponding to each light source and the color Y values corresponding to the plurality of second reference images.
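The calibration steps above amount to a pair of line fits over the M light sources. The sketch below assumes the per-light-source skin color values have already been averaged over the N skin color cards, and uses NumPy's least-squares `polyfit` as a stand-in for whatever fitting procedure the patent actually uses:

```python
import numpy as np

def fit_conversion_coeffs(skin_xy, gray_xy):
    """skin_xy: (M, 2) array, mean skin-card color value (X, Y) per light source.
    gray_xy: (M, 2) array, gray-card color value (X, Y) per light source.
    Fits gray = k * skin + b separately for the X and Y coordinates and
    returns (kx, bx, ky, by)."""
    kx, bx = np.polyfit(skin_xy[:, 0], gray_xy[:, 0], 1)
    ky, by = np.polyfit(skin_xy[:, 1], gray_xy[:, 1], 1)
    return kx, bx, ky, by
```

Fitting both coordinates independently matches the structure of Formula 2, where the X and Y conversions each have their own k and b.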
In a second aspect, an embodiment of the present application provides an image processing apparatus, including:
the apparatus comprises an acquisition module, a conversion module and a processing module, wherein the acquisition module is used for acquiring an image to be processed, and the image to be processed comprises a face image;
the acquisition module is further used for acquiring a color value and a conversion coefficient of the face image in a first coordinate system;
the conversion module is used for converting the color value of the face image through the conversion coefficient to obtain a reference color value, and the reference color value is used for indicating the color value of the ambient light corresponding to the face image in the first coordinate system;
and the processing module is used for determining a target color value according to the image information of the image to be processed and the reference color value, and performing white balance processing on the image to be processed through the target color value, wherein the target color value is used for indicating a color value of ambient light corresponding to the image to be processed in the first coordinate system.
In one possible design, the processing module is specifically configured to:
determining a weighting coefficient according to the image information of the image to be processed;
acquiring a preset adjusting color value, wherein the preset adjusting color value is a color value in the first coordinate system obtained by automatic white balance processing;
and determining the target color value according to the reference color value, the weighting coefficient and the preset adjusting color value.
In one possible design, the processing module is specifically configured to:
determining a weighting coefficient corresponding to each image parameter according to the parameter range of each image parameter;
determining an average value of the weighting coefficients corresponding to the at least one image parameter as the weighting coefficient.
In one possible design, the color values include a color X value and a color Y value, and the target color value, the weighting coefficient and the preset adjustment color value satisfy the following Formula 1:
Finalx = facewhite_x × prop + AWBx × (1 - prop)
Finaly = facewhite_y × prop + AWBy × (1 - prop)
where (Finalx, Finaly) is the target color value, (facewhite_x, facewhite_y) is the reference color value, prop is the weighting coefficient, and (AWBx, AWBy) is the preset adjustment color value.
In one possible design, the reference color value, the color value of the face image, and the conversion coefficient satisfy the following formula two:
facewhite_x = kx × Currentx + bx
facewhite_y = ky × Currenty + by
where (facewhite_x, facewhite_y) is the reference color value, (Currentx, Currenty) is the color value of the face image, and the conversion coefficient includes kx, bx, ky and by.
In one possible design, the obtaining module is specifically configured to:
acquiring a plurality of first reference images, where the plurality of first reference images include M × N images obtained by photographing N skin color cards under M light sources, M and N each being an integer greater than 1;
acquiring a plurality of second reference images, where the plurality of second reference images include M images obtained by photographing a preset gray card under the M light sources;
and generating the conversion coefficient according to the color values of the first reference images and the color values corresponding to the second reference images.
In one possible design, the obtaining module is specifically configured to:
determining, among the plurality of first reference images, the N first reference images corresponding to each light source, where the N first reference images are images captured under that light source;
determining the color value corresponding to each light source according to the color values of the N first reference images corresponding to that light source;
and generating the conversion coefficient according to the color value corresponding to each light source and the color values corresponding to the plurality of second reference images.
In one possible design, the color values include color X values and color Y values, and the conversion coefficients include conversion coefficients for color X values and conversion coefficients for color Y values; the acquisition module is specifically configured to:
generating kx and bx according to the color X value corresponding to each light source and the color X values corresponding to the plurality of second reference images;
and generating ky and by according to the color Y value corresponding to each light source and the color Y values corresponding to the plurality of second reference images.
In a third aspect, an embodiment of the present application provides an image processing apparatus, including:
a memory for storing a program;
a processor for executing the program stored by the memory, the processor being adapted to perform the method as described above in the first aspect and any one of the various possible designs of the first aspect when the program is executed.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium, comprising instructions which, when executed on a computer, cause the computer to perform the method as described above in the first aspect and any one of the various possible designs of the first aspect.
In a fifth aspect, the present application provides a computer program product, including a computer program, wherein the computer program is configured to, when executed by a processor, implement the method according to the first aspect and any one of various possible designs of the first aspect.
An embodiment of the application provides an image processing method and device. The method includes: acquiring an image to be processed, where the image to be processed includes a face image; acquiring a color value and a conversion coefficient of the face image in a first coordinate system; converting the color value of the face image through the conversion coefficient to obtain a reference color value, where the reference color value indicates the color value, in the first coordinate system, of the ambient light corresponding to the face image; and determining a target color value according to the image information of the image to be processed and the reference color value, and performing white balance processing on the image to be processed using the target color value, where the target color value indicates the color value, in the first coordinate system, of the ambient light corresponding to the image to be processed. The color value of the face image in the first coordinate system is acquired, and the reference color value indicating the light source color on the face is determined through the conversion coefficient of the first-order linear relation between the reference color value and the color value of the face image. Because both dark and light skin colors satisfy this first-order linear relation in the xy coordinate system, obtaining the reference color value of the reference skin color on the basis of the first-order linear relation avoids introducing errors; the target color value is then determined from the reference color value to perform white balance processing, so color cast on the face can be effectively avoided and the white balance effect for portraits is improved.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to these drawings without inventive exercise.
Fig. 1 is a schematic structural diagram of a processing apparatus provided in an embodiment of the present application;
fig. 2 is a flowchart of an image processing method according to an embodiment of the present application;
fig. 3 is a second flowchart of an image processing method according to an embodiment of the present application;
FIG. 4 is a schematic diagram of an image to be processed according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of acquiring a reference image according to an embodiment of the present disclosure;
fig. 6 is a schematic diagram illustrating processing unit division of an image processing method according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
fig. 8 is a schematic diagram of a hardware structure of an image processing apparatus according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In order to better understand the technical solution of the present application, the following further detailed description is provided for the background art related to the present application.
First, white balance is introduced. The basic concept of white balance is that "a white object should be reduced to white regardless of the light source": a color cast phenomenon occurring when shooting under a specific light source is compensated by enhancing the corresponding complementary color.
For example, images taken under indoor tungsten light tend to be yellowish, while images taken in shadow under sunlight tend to be bluish; white balance is set precisely to restore the normal color of the images in these scenes.
The AWB algorithm automatically performs white balance processing on a picture according to the ambient illumination, and color temperature is involved in the AWB algorithm, so the concept of color temperature is introduced here. Color temperature derives from Planck's black-body thermal radiation and is measured in kelvin (K); it is used to characterize the color of a light source: the kelvin temperature at which a heated black-body radiator matches the color of a light source is that light source's color temperature. It can be understood that the color temperature is lower when a color is reddish and higher when it is bluish: candlelight is about 1800 K, an overcast day about 5000 K, a sunny day about 6500 K, and a blue sky above 10000 K.
What the AWB algorithm must accomplish is to make a white object appear white under any color-temperature condition. It is worth noting that the human visual system performs AWB so quickly and accurately that it is barely perceived: a piece of white paper is perceived as white in any environment, and only at the moment of a large, rapid switch in light-source color temperature (for example, turning a light on or off) is the paper briefly perceived to change color before immediately turning white again.
However, when the image capturing apparatus captures an image, the sensor of the image capturing apparatus is greatly affected by the color temperature, and because the sensitivity of the sensor itself to three components of red (R), green (G) and blue (B) is different, the original picture output by the sensor is greatly different from that seen by human eyes. The AWB algorithm is to overcome the inconsistency between the characteristics of the sensor and human eyes and to solve the influence of color temperature on the color of an image, so as to restore the original color of an object in the image.
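To make the basic AWB idea concrete, a minimal "gray world" correction is sketched below. This is one of the simplest classic AWB heuristics, shown only for illustration; it is not the algorithm claimed by this patent:

```python
import numpy as np

def gray_world_awb(img):
    """Gray-world assumption: the average scene color is neutral, so scale
    the R and B channels so their means match the G mean.
    img: float RGB array of shape (H, W, 3), values in [0, 1]."""
    means = img.reshape(-1, 3).mean(axis=0)
    gains = means[1] / means          # per-channel gain; G's gain is 1.0
    return np.clip(img * gains, 0.0, 1.0)
```

This is exactly the kind of heuristic that fails on large-area pure-color scenes, which motivates using the face as an additional reference in FACE-AWB.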
Based on the above introduction, the AWB algorithm simulates the color constancy of the human visual system to restore the real color of an object. A FACE-AWB algorithm has also been proposed on this basis: building on the processing result of the basic AWB algorithm, it performs special AWB processing on scenes containing a portrait, with the goal of satisfying aesthetic preferences for portrait skin color while preserving the accuracy of the basic AWB.
Human skin generally has 4 basic skin color bases: white, black, red and yellow; various human skin colors are weighted mixtures of these 4 bases. For example, reddish-brown skin results from the combined action of the red, yellow and black bases. However, the conventional aesthetic preference is fair, ruddy and undistorted skin, so FACE-AWB faces the challenge of balancing color cast between the background and the portrait in high and low color temperature scenes, large-area pure-color scenes, scenes mixing multiple color temperatures and light sources, and complex mixed scenes.
The main problem of the conventional FACE-AWB algorithm is that in large-area mixed color scenes and large-area non-neutral pure-color scenes, the human face is prone to color cast (for example, the face turns yellow or red), and the color cast of the face and of the background cannot be unified. Here, a mixed color scene is an image containing a relatively large number of colors; the neutral colors are black, white, and the gray series of various shades between black and white, which may also be referred to as the achromatic series.
The reason why the color cast problem introduced above is easily caused in the FACE-AWB algorithm is that the mapping has an error due to an improper fitting manner in the process of mapping from the reference skin color point to the neutral color point; in addition, in the design of the FACE-AWB algorithm, no corresponding confidence judgment mechanism is designed for different debugging scenes, so that partial special scenes have color cast.
The following briefly introduces the implementation process of the FACE-AWB algorithm, and the following four steps are required in the process of processing the FACE-AWB algorithm at present:
1) skin color calibration; 2) selecting a reference white point; 3) adjusting white points; 4) and compensating the skin color.
Steps 1) to 3) are generally carried out in the r/g, b/g coordinate system, which is introduced first: one coordinate axis is the ratio of the r value to the g value of an rgb triple (r/g), and the other axis is the ratio of the b value to the g value (b/g).
The implementation schemes of the steps 1) to 3) are as follows:
acquiring the r/g and b/g values of the reference skin color points of a skin color card and of the gray card points of a gray card under different light sources; then establishing a first-order linear relationship y = kx + b between the skin color card and the gray card under the same light source, and substituting the r/g (and b/g) values of the reference skin color point and the gray card point under each light source as y and x into y = kx + b, so that the multiple light sources yield sets of k and b values, denoted {k1, k2, k3 … kn} and {b1, b2, b3 … bn}.
Then, taking the set {k} as the vertical coordinate and the color temperature values of the light sources of the gray card and skin color card as the horizontal coordinate, a first-order linear fit is performed to obtain the linear relation between parameter k and color temperature across the light sources, so that the value of k at a specific color temperature can be determined from this relation; likewise, a first-order linear fit taking the set {b} as the vertical coordinate and the color temperature values as the horizontal coordinate gives the linear relation between parameter b and color temperature, from which the value of b at a specific color temperature is determined.
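The two per-parameter fits described above can be sketched with NumPy; the data and function names below are illustrative, not from the patent:

```python
import numpy as np

def fit_param_vs_cct(ccts, values):
    """First-order linear fit of a calibration parameter (the set {k} or {b})
    against light-source color temperature. Returns (slope, intercept) so
    the parameter at any color temperature T is slope * T + intercept."""
    return np.polyfit(ccts, values, 1)

def param_at(cct, fit):
    slope, intercept = fit
    return slope * cct + intercept
```

Running the same fit once for {k} and once for {b} gives the two relations used to evaluate the mapping at an arbitrary color temperature.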
Based on the above introduction, assume the relationship between the predicted gray card and the actual skin color at the current color temperature is: r(predicted gray card)/g(predicted gray card) = k′ × r(actual skin color)/g(actual skin color) + b′. The intersection of this equation with the gray card characteristic curve is then solved; that intersection is the reference white point used in step 2), where the gray card characteristic curve is the curve obtained by fitting the r/g and b/g points of the gray card under each light source.
However, the problem with this approach is that:
if the selected reference skin color point is a light skin color point, the b value of the skin color point and that of the gray card point do not satisfy a linear relation, while a dark skin color point does satisfy it; this is determined by the characteristics of dark and light skin color points in the r/g, b/g coordinate system. The fitting rule described above is therefore not applicable to all skin color cards.
Because light skin color points do not satisfy the linear relation, a non-linear fit could be considered; however, if the light skin color reference points are fitted non-linearly, the higher-order error grows as the power of the equation increases, so a high-order fit merely trades one error for another.
Meanwhile, even if a dark skin color point is selected as the reference skin color point, an error is introduced when solving for the optimal reference white point. The gray card characteristic curve is a cubic or quartic equation, while under a given light source the gray card point and the skin color point are in a linear relation; theoretically, the optimal reference white point is the intersection of that line with the curve, or the point on the line closest to the curve. In practice this constrained problem cannot be solved directly, and reducing the dimension of the equation introduces higher-order error; the method is therefore difficult to solve.
The above covers steps 1) to 3); step 4) is skin color compensation. Most of the prior art directly weights the obtained reference white point against the calculation result of the basic AWB: with weight prop, reference white point a_white and basic AWB result a_base, the direct weighting is a_white × prop + a_base × (1 - prop). Because the weight prop depends on parameter tuning, the prior-art skin color compensation method is difficult to adapt to the color cast requirements of different scenes.
In summary, when the FACE-AWB algorithm is implemented in the related art, a large number of errors are introduced in the process of fitting the reference skin color point to the gray card point, so that the human face is prone to color cast in large-area mixed color scenes and large-area non-neutral pure-color scenes. The current FACE-AWB algorithm therefore suffers from a poor white balance effect for portraits.
In view of the problems in the prior art, the present application proposes the following technical concept: in the xy coordinate system, both dark and light skin colors satisfy a first-order linear relation, so the calibration and fitting processes can all be carried out in the xy coordinate system, ensuring that the fitting relation holds for both dark and light skin colors. Moreover, because the white balance process is executed using a first-order linear relation, the introduction of errors can be avoided, preventing color cast on the face and correspondingly improving the white balance effect for portraits.
Based on the above description, the image processing method provided by the present application is described below with reference to specific embodiments. First, the execution subject of the embodiments is introduced: any device with a camera, such as a mobile phone, a tablet, or a camera (including industrial, vehicle-mounted and consumer devices), can serve as the execution subject, and this embodiment does not limit its specific implementation. In one possible implementation, the structure of the processing device serving as the execution subject may be, for example, the one shown in fig. 1; fig. 1 is a schematic structural diagram of the processing device provided by an embodiment of the present application.
As shown in fig. 1, the sub-modules of the processing device in the embodiments of the present application include, but are not limited to, a processor, a power supply, an application operating system, an interface module, a camera sensor, a radio frequency unit, a network module, an audio data unit, a graphics processor, a microphone, a control panel, other input devices, a display panel, and the like, where the camera sensor is responsible for receiving image information and transmitting the image information to the processor for processing by the image processing method provided in the present application, and displaying the portrait effect processed by the image processing method on the display panel in real time.
In an actual implementation process, any device having the sub-modules described above may be used as the execution subject in each embodiment of the present application, and the specific execution subject in the present application is not limited herein.
Based on the above description, the following describes an image processing method provided in an embodiment of the present application with reference to fig. 2, and fig. 2 is a flowchart of the image processing method provided in the embodiment of the present application.
As shown in fig. 2, the method includes:
S201, obtaining an image to be processed, wherein the image to be processed comprises a face image.
In this embodiment, for example, the object may be photographed by the above-described camera sensor, so as to obtain an image to be processed, where the photographed object may be any object, such as a person, a building, a plant, an animal, and the like, which is not limited in this embodiment.
In a possible implementation manner, the image to be processed includes a face image: for example, a face may be photographed during shooting, so as to obtain an image to be processed that includes a face image. It can be understood that the image to be processed may include, for example, one face or a plurality of faces, which is not limited in this embodiment; the specific form of the face image in the image to be processed depends on the actual shooting scene and shooting object, which is likewise not limited in this embodiment.
S202, obtaining a color value and a conversion coefficient of the face image in a first coordinate system.
After acquiring an image to be processed including a face image, a color value of the face image in a first coordinate system may be acquired, where the first coordinate system may be an xy coordinate system, for example, and the color value of the face image in the xy coordinate system may be understood as a color value of a skin color of the face image, for example. In a possible implementation manner, the skin color in the face image corresponds to a plurality of pixel points, for example, an average value of color values of the pixel points corresponding to the skin colors in the first coordinate system may be determined as the color value of the face image. And when a plurality of faces are included, for example, an average value of color values of the plurality of faces may be determined as the color value of the face image.
Meanwhile, a conversion coefficient may also be obtained in this embodiment, where the conversion coefficient may be, for example, a correlation coefficient in a linear relationship between a reference color value of a reference skin color and a color value of a face image, and by obtaining the conversion coefficient, a linear relationship between the reference skin color and the face skin color in the face image may be determined, so that a reference color value may be determined according to the linear relationship.
S203, converting the color value of the face image through the conversion coefficient to obtain a reference color value, wherein the reference color value is a color value used for indicating the ambient light corresponding to the face image in the first coordinate system.
Based on the above description, it may be determined that after the conversion coefficient is determined, a linear relationship between a reference color value of a reference skin color and a color value of the face image may be determined, so that the color value in the face image may be converted by the conversion coefficient to obtain the reference color value, and in a possible implementation, for example, the color value of the face image may be input into the linear relationship including the conversion coefficient to obtain the reference color value.
The reference color value in this embodiment is a color value used for indicating, in the first coordinate system, the ambient light corresponding to the face image. It can be understood that, when the sensor captures the image, a certain light source exists in the environment, for example, the yellow light emitted by a tungsten filament lamp or the orange light of the sun at sunset; the light source in the environment presents a corresponding color on the face, so the currently acquired reference color value is the color value of the ambient light corresponding to the face image in the first coordinate system. It can further be understood that the color of the real light source in the environment is actually fixed, but it may present different color values on the face and in the background; therefore, the ambient light corresponding to the face image mentioned in this embodiment is actually the light source as presented on the face by the ambient light of the real shooting environment.
S204, determining a target color value according to the image information and the reference color value of the image to be processed, and performing white balance processing on the face image through the target color value, wherein the target color value is used for indicating the color value of the ambient light corresponding to the image to be processed in the first coordinate system.
In this embodiment, after the reference color value is determined, the color value of the light source on the human face is known. White balance processing consists of determining the color of the light source in the image and then performing compensation processing to eliminate the effect of the ambient light on the image and restore the original color of the photographic subject. Therefore, the target color value may be determined according to the determined reference color value and the image information of the image to be processed, where the target color value in this embodiment is used to indicate the color value of the ambient light corresponding to the image to be processed in the first coordinate system.
It is understood that the ambient light corresponding to the image to be processed is similar to the ambient light corresponding to the above-described face image, that is, the light source of the ambient light in the real shooting environment appearing in the image to be processed.
In a possible implementation manner, the image information of the image to be processed in this embodiment may include, for example, a color temperature of the image to be processed, a color of a neutral color point in the image to be processed, a proportion of a face image in the image to be processed, an image brightness, and the like.
In a possible implementation manner of determining the target color value according to the image information and the reference color value of the image to be processed, for example, the image information and the reference color value of the image to be processed may be processed by a preset model, for example, the image information and the reference color value of the image to be processed may be input into the preset model, so that the preset model outputs the target color value, or the weight value may also be determined according to the image information of the image to be processed, and then the target color value is determined according to the weight value and the reference color value through a preset functional relationship.
After the target color value is obtained, the white balance processing can be carried out on the image to be processed according to the target color value, so that the effect of the ambient light source indicated by the target color value in the image to be processed is counteracted, the reduction of the color of the shot object in the image to be processed is realized, and the white balance processing of the image is realized.
The image processing method provided by the embodiment of the application comprises the following steps: acquiring an image to be processed, wherein the image to be processed comprises a face image; acquiring a color value and a conversion coefficient of the face image in a first coordinate system; converting the color value of the face image through the conversion coefficient to obtain a reference color value, wherein the reference color value is used for indicating the color value of the ambient light corresponding to the face image in the first coordinate system; and determining a target color value according to the image information and the reference color value of the image to be processed, and performing white balance processing on the image to be processed through the target color value, wherein the target color value is used for indicating the color value of the ambient light corresponding to the image to be processed in the first coordinate system. By acquiring the color value of the face image in the first coordinate system, together with the conversion coefficient of the linear relationship between the reference color value indicating the reference skin color and the color value of the face image, the reference color value indicating the light source color on the face is determined. Because both dark and light skin colors satisfy a first-order linear relationship in the xy coordinate system, obtaining the reference color value of the reference skin color based on this first-order linear relationship avoids introducing errors. The target color value is then determined according to the reference color value to carry out white balance processing, so that color cast on the face can be effectively avoided, and the white balance processing effect for portraits is improved.
Based on the above embodiments, the following describes in further detail the image processing method provided in the embodiment of the present application with reference to fig. 3 to 5, fig. 3 is a second flowchart of the image processing method provided in the embodiment of the present application, fig. 4 is a schematic diagram of an image to be processed provided in the embodiment of the present application, and fig. 5 is a schematic diagram of acquiring a reference image provided in the embodiment of the present application.
As shown in fig. 3, the method includes:
S301, obtaining an image to be processed, wherein the image to be processed comprises a face image.
The implementation manner of S301 is similar to that of S201, and is not described herein again. In this embodiment, a possible implementation manner of acquiring an image to be processed is further described.
In a possible implementation manner, for example, image data may be acquired at a camera, a to-be-processed image including a face image is obtained, and then the to-be-processed image is displayed on a preview interface of a display panel of the processing device.
Then, for example, it may be determined, through a face detection algorithm, whether the number of faces in the to-be-processed image displayed in the preview interface is greater than or equal to 1, if so, it may be further determined whether the confidence level for each detected face region is greater than or equal to a preset confidence level, if so, it may be determined that the current to-be-processed image includes a face image, and then, subsequent processing may be performed.
For example, referring to fig. 4, it is assumed that 401 in fig. 4 is an image to be processed, and then whether a human face exists in the image to be processed 401 is detected through a human face detection algorithm, and if 402 in fig. 4 is currently detected to be a human face region, it may be further determined whether a confidence level of the human face region is greater than or equal to a preset confidence level, and if it is determined that the confidence level is greater than or equal to the preset confidence level, it may be determined that a human face image exists in the current image to be processed 401, that is, an image indicated by 402.
Or, if it is determined that the face is not included in the image to be processed, or the confidence of the determined face region is smaller than the preset confidence, it may be determined that the face image is not included in the current image to be processed, and then subsequent processing is not required.
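The detection-and-confidence gate described above can be sketched as follows; the detector output format and the 0.8 confidence threshold are illustrative assumptions, not values given by this application:

```python
def has_face_image(detections, preset_confidence=0.8):
    """Keep only detected face regions whose confidence reaches the preset
    confidence; the image contains a face image if at least one remains.

    detections: list of (bbox, confidence) pairs from any face detection
    algorithm (both the detector and the 0.8 threshold are illustrative).
    """
    faces = [(bbox, conf) for bbox, conf in detections
             if conf >= preset_confidence]
    return len(faces) >= 1, faces

# One confident face and one low-confidence candidate region.
ok, faces = has_face_image([((10, 20, 80, 80), 0.93), ((0, 0, 5, 5), 0.41)])
```

When no region passes the gate, subsequent FACE-AWB processing is simply skipped, as the paragraph above describes.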
S302, obtaining a color value of the face image in the first coordinate system.
The implementation manner of S302 is similar to the implementation manner of S202 described above, and in this embodiment, a possible implementation manner of obtaining the color value of the face image is further described.
In this embodiment, when determining that the image to be processed includes a face image, for example, coordinates of a region of interest (roi) of the face may be obtained, so as to determine the face image in the image to be processed, for example, as can be understood with reference to 402 in fig. 4, 402 in fig. 4 is a determined roi, and the roi is a region where the determined face image is located.
In a possible implementation manner, for example, the x-axis of the obtained roi may be reduced by 10% in an equal proportion, and then skin color detection is performed on the reduced roi, so as to obtain a color value of the face image in the first coordinate system.
In this embodiment, the color value includes a color X value and a color Y value. When the face image includes one face, skin color detection is performed on the roi of the face, and the xy value of the face's skin color in the xy coordinate system is determined; that is, the color value of the face image in the first coordinate system is obtained and may be represented by (Currentx, Currenty).
Optionally, when a plurality of faces exist in the face image, for example, skin color detection may be performed on respective roi areas of the plurality of faces, so as to obtain an xy value of the skin color of each face in an xy coordinate system, and then the xy values of the skin color values of each face are averaged, so as to obtain a color value of the face image in the first coordinate system, which may be represented by (Currentx, Currenty).
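The averaging described above can be sketched as follows; the per-face-then-overall averaging order is one reading of the description, and all variable names are illustrative:

```python
import numpy as np

def face_color_value(skin_xy_per_face):
    """Average the xy chromaticities of detected skin pixels.

    skin_xy_per_face: list with one (K_i, 2) array of skin-pixel xy values
    per detected face. Each face is averaged first, then the per-face means
    are averaged, giving (Currentx, Currenty).
    """
    per_face = [np.asarray(xy, dtype=float).mean(axis=0)
                for xy in skin_xy_per_face]
    return np.mean(per_face, axis=0)

# Two faces with illustrative skin-pixel chromaticities.
face1 = [[0.38, 0.36], [0.40, 0.34]]
face2 = [[0.36, 0.38]]
current = face_color_value([face1, face2])
```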
S303, acquiring a plurality of first reference images, wherein the plurality of first reference images comprise M x N images obtained by respectively photographing N skin color cards under M light rays, and M and N are integers larger than 1.
In this embodiment, a conversion coefficient of a linear relationship between a reference color value of a reference skin color and a color value of a face image may also be obtained, and a possible implementation manner for obtaining the conversion coefficient is described below.
In this embodiment, for example, N kinds of skin color cards may be selected, and then the N kinds of skin color cards are photographed under M different light rays, so as to obtain M × N images, so as to obtain a plurality of first reference images.
For example, as can be understood with reference to fig. 5, as shown in fig. 5, for example, the skin color card 502 may be placed on the console 501, then a specific light source is placed in the shooting environment, so as to generate corresponding light, and then the skin color card 502 is shot by the camera sensor in the specific light, so as to obtain the reference image, wherein the camera sensor may be, for example, the camera sensor in the processing device described above.
In the present embodiment, the M lights include, but are not limited to, the lights of standard light sources such as D75/D65/D55/TL84/CWF/A/H. The color values of the N skin color points corresponding to the N skin color cards in this embodiment may be selected, for example, according to aesthetic preferences: if the current face image is expected to show a fair and ruddy skin color, the corresponding N skin color point values may be selected; or, if the current face image is expected to show a healthy wheat-colored skin tone, the corresponding N skin color point values may be selected. This embodiment does not limit the specific choice of the skin color point values, which can be selected according to actual requirements.
In one possible implementation, the plurality of first reference images currently acquired may be in a raw format, for example, where a raw image contains the information obtained, almost without processing, directly from a Charge-Coupled Device (CCD) or Complementary Metal-Oxide-Semiconductor (CMOS) sensor.
For example, the rgb value of the skin color card under the current light can be extracted from the collected raw image, and the rgb values can then be converted into the xy coordinate system through a conversion matrix M from the rgb coordinate system to the xy coordinate system, so as to obtain the x color value and y color value of the skin color card in each first reference image under the corresponding light.
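The rgb-to-xy conversion can be sketched as below. The application's sensor-specific conversion matrix M is not given, so the standard linear-sRGB-to-XYZ matrix stands in for it here as an assumption:

```python
import numpy as np

# Standard linear-sRGB -> XYZ matrix; a sensor-calibrated matrix M would
# replace this illustrative one in a real pipeline.
M = np.array([[0.4124, 0.3576, 0.1805],
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])

def rgb_to_xy(rgb):
    """Convert a linear rgb triple to xy chromaticity coordinates."""
    X, Y, Z = M @ np.asarray(rgb, dtype=float)
    s = X + Y + Z
    return X / s, Y / s

# White (1, 1, 1) maps to this matrix's white point (D65 for sRGB).
x, y = rgb_to_xy([1.0, 1.0, 1.0])
```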
S304, acquiring a plurality of second reference images, wherein the plurality of second reference images comprise M images obtained by photographing the preset gray card under M light rays.
In this embodiment, a preset gray card can be made by selecting a color value of a preset gray, and then the preset gray card is photographed under M different light rays, so that M images are obtained, and a plurality of second reference images are obtained.
The present implementation can also be understood with reference to fig. 5, as shown in fig. 5, for example, a preset gray card 502 can be placed on an operation table 501, then a specific light source is placed in the shooting environment, so as to generate corresponding light, and then the preset gray card 502 is shot by a camera sensor under the specific light, so as to obtain a reference image, wherein the camera sensor can be, for example, the camera sensor in the processing device described above.
In the present embodiment, the M lights include, but are not limited to, the lights of standard light sources such as D75/D65/D55/TL84/CWF/A/H. The specific color value of the preset gray corresponding to the preset gray card may be selected according to actual requirements, for example, a gray level of 128 or a gray level of 129, which is not limited in this embodiment.
Similarly, in a possible implementation manner, the plurality of second reference images currently obtained may be in a raw format. The rgb value of the preset gray card under the current light may be extracted from the collected raw image and then converted into the xy coordinate system through the conversion matrix M from the rgb coordinate system to the xy coordinate system, so as to obtain the x color value and y color value of the preset gray card in each second reference image under the corresponding light.
S305, determining N first reference images corresponding to each light ray in the plurality of first reference images, wherein the N first reference images are images obtained by shooting under the light ray.
In this embodiment, the plurality of first reference images include M × N images obtained by shooting N skin color cards under M types of light, and the respective images of the N skin color cards are determined with the same light as a dimension, so that N first reference images corresponding to each light can be determined in the plurality of first reference images, where the N first reference images are images obtained by shooting under the light.
For example, if a skin color card 1, a skin color card 2, a skin color card 3 currently exist, and light 1 and light 2 currently exist, 6 first reference images can be obtained by shooting each skin color card under each light, and then 3 first reference images corresponding to the light 1 and 3 first reference images corresponding to the light 2 can be determined in the 6 first reference images.
S306, according to the color values of the N first reference images corresponding to each ray, the color value corresponding to each ray is determined.
In this embodiment, the N first reference images corresponding to each ray respectively correspond to respective color values, and then, for example, the color value corresponding to each ray may be determined according to the color values of the N first reference images corresponding to each ray.
For example, 3 skin color cards currently exist, namely skin color card 1, skin color card 2, and skin color card 3. Assume that, for any one light source, the color values of the first reference images corresponding to the 3 skin color cards under that light source are respectively (skin1x, skin1y), (skin2x, skin2y), and (skin3x, skin3y). The color value of each first reference image under the light source may be subjected to weighted fitting according to the following formula three, so as to obtain the color value of the target skin color corresponding to the light source, where the color value corresponding to the light source may also be understood as the target skin color (Targetx, Targety). The formula three may be, for example:
Targetx = ratio1 × skin1x + ratio2 × skin2x + ratio3 × skin3x
Targety = ratio1 × skin1y + ratio2 × skin2y + ratio3 × skin3y
wherein ratio1 + ratio2 + ratio3 = 1. The number of ratios in an actual implementation is determined by the number of skin color cards: if N skin color cards exist, N ratios may be determined. The specific value of each ratio may be selected according to actual needs, as long as the sum of the N ratios is 1.
The above description takes any one light as an example; performing the same process for each light determines the color value corresponding to each light, that is, the target skin color (Targetx, Targety) described above.
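The weighted fitting of formula three amounts to a convex combination of the N skin-card chromaticities; a minimal sketch, with illustrative card values and ratios:

```python
import numpy as np

def target_skin_color(skin_xy, ratios):
    """Weighted fit of N skin-card chromaticities for one light source.

    skin_xy: (N, 2) array of (skinNx, skinNy) values under this light.
    ratios:  N weights summing to 1, as required by formula three.
    """
    skin_xy = np.asarray(skin_xy, dtype=float)
    ratios = np.asarray(ratios, dtype=float)
    assert np.isclose(ratios.sum(), 1.0), "ratios must sum to 1"
    return ratios @ skin_xy  # (Targetx, Targety)

# Three illustrative skin cards under one light, with illustrative ratios.
target = target_skin_color([[0.40, 0.36], [0.42, 0.35], [0.44, 0.34]],
                           [0.5, 0.3, 0.2])
```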
And S307, generating a conversion coefficient according to the color value corresponding to each ray and the color values corresponding to the plurality of second reference images.
After determining the color value corresponding to each ray, the conversion coefficient may be generated according to the color value corresponding to each ray and the color values corresponding to the plurality of second reference images, for example.
It can be understood that M lights currently exist, and the color value corresponding to each light is the color value of the target skin color under that light. Meanwhile, the plurality of second reference images in this embodiment are the reference images obtained by photographing the preset gray card under the M lights, so the color values corresponding to the plurality of second reference images are the color values of the preset gray card under each light.
Under the M different lights, the color value of the preset gray card in the first coordinate system may be represented as (whitex, whitey). Based on the above description, the color value of the target skin color and the color value of the preset gray card can be determined for each light.
For any one of the lights, for example, a difference value (Diffx, Diffy) may be determined according to the color value (whitex, whitey) of the preset gray card under the light and the color value (Targetx, Targety) of the target skin color under the light. On this basis, a mapping relationship between the preset gray card and the skin color card may be established, for example, to satisfy the following formula four:
whitex = kx × Targetx + bx
Diffy = ky × Targety + by
in this embodiment, the color value includes a color X value and a color Y value, and the conversion coefficient includes a conversion coefficient of the color X value and a conversion coefficient of the color Y value, wherein when generating the conversion coefficient, k may be generated according to the color X value corresponding to each light ray and the color X values corresponding to the plurality of second reference images, for examplex、bx(ii) a According to the color Y value corresponding to each light ray and the color Y values corresponding to the multiple second reference images, ky and b are generatedy. That is, the conversion coefficient in the present embodiment may include k in the above formula fourx、bx、ky、by
In a possible implementation manner, each light corresponds to its respective Targetx and whitex; then kx and bx may be obtained, for example, by least squares fitting based on the mapping relationship described in the above formula four and the Targetx and whitex corresponding to each light.
Meanwhile, each light corresponds to its own Targety and Diffy; then ky and by may be obtained, for example, by least squares fitting based on the mapping relationship described in the above formula four and the Targety and Diffy corresponding to each light.
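The per-axis least squares fits described above can be sketched with degree-1 `numpy.polyfit`; whitex is fitted against Targetx and Diffy against Targety, following the pairings described in the text. The calibration values below are synthetic, illustrative data:

```python
import numpy as np

def fit_conversion_coefficients(target_x, white_x, target_y, diff_y):
    """Least-squares fit of the first-order relations over M light sources:
    whitex ~ kx*Targetx + bx and Diffy ~ ky*Targety + by."""
    kx, bx = np.polyfit(target_x, white_x, 1)  # returns (slope, intercept)
    ky, by = np.polyfit(target_y, diff_y, 1)
    return kx, bx, ky, by

# Synthetic data for three lights, generated from kx=0.8, bx=0.01 and
# ky=-0.5, by=0.2 so the fit can be checked.
tx, wx = [0.40, 0.42, 0.44], [0.330, 0.346, 0.362]
ty, dy = [0.34, 0.35, 0.36], [0.030, 0.025, 0.020]
kx, bx, ky, by = fit_conversion_coefficients(tx, wx, ty, dy)
```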
Based on the above description, the first-order linear relationship established in this embodiment yields a mapping relationship between the skin color card and the gray card at different color temperatures, thereby providing an accurate skin color reference point for the FACE-AWB calculation.
And S308, converting the color value of the face image through the conversion coefficient to obtain a reference color value, wherein the reference color value is used for indicating the color value of the ambient light corresponding to the face image in the first coordinate system.
Based on the above description, the color value (Currentx, Currenty) of the face image in the first coordinate system has been determined; then, for example, the conversion coefficients kx, bx, ky, and by may be determined, and the color value of the face image is converted according to the mapping relationship corresponding to the conversion coefficients, thereby obtaining the reference color value of the skin color reference point.
In a possible implementation manner, the reference color value, the color value of the face image, and the conversion coefficient may satisfy the following formula two, for example:
facewhite_x = kx × Currentx + bx
facewhite_y = Currenty + (ky × Currenty + by)
wherein (facewhite_x, facewhite_y) is the reference color value, (Currentx, Currenty) is the color value of the face image, and the conversion coefficient includes kx, bx, ky, and by.
Based on the above-described process, the reference color value of the skin color reference point can be determined, so as to determine the color value of the ambient light corresponding to the face image in the first coordinate system.
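Applying the fitted coefficients to the face skin color can be sketched as follows. Per the fitting pairings of S307, the x relation maps the skin x directly toward the white point, while the y relation predicts the gray-minus-skin difference Diffy, which is then added back to the skin y; this composition is an inference from those pairings, not an explicit statement of the application:

```python
def reference_color_value(current_x, current_y, kx, bx, ky, by):
    """Convert the face skin chromaticity (Currentx, Currenty) into the
    skin color reference point (facewhite_x, facewhite_y)."""
    facewhite_x = kx * current_x + bx
    diff_y = ky * current_y + by      # predicted gray-minus-skin y offset
    facewhite_y = current_y + diff_y
    return facewhite_x, facewhite_y

# Illustrative coefficients and skin chromaticity.
fx, fy = reference_color_value(0.414, 0.353, kx=0.8, bx=0.01, ky=-0.5, by=0.2)
```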
S309, determining a weighting coefficient corresponding to each image parameter according to the parameter range of each image parameter.
Then, the target color value may be determined according to the image information of the image to be processed and the reference color value. In a possible implementation manner, the image information in this embodiment includes at least one image parameter, which may include, but is not limited to: the proportion of the area of the face roi in the whole image to be processed, the ambient color temperature estimated from the xy value of the current environment, the brightness value provided by the automatic exposure module, and the scene information input by the AI module. In an actual implementation process, the specific image parameters included in the image information may be selected according to actual requirements, which is not limited in this embodiment.
In this embodiment, for example, a respective corresponding segmentation function may be set for each image parameter, where the segmentation function is used to indicate a weighting coefficient determination function corresponding to each parameter range of the image parameter, and then, for example, based on the segmentation function, a weighting coefficient corresponding to each image parameter may be determined according to the parameter range in which each image parameter is located.
In one possible implementation, the implementation of determining the weights for the respective image parameters may be, for example, the implementation described as follows:
for example, in a low color temperature scene, the weighting coefficient may be determined in a decreasing piecewise function manner, where the lower the color temperature, the lower the FACE-AWB weight;
the weighting coefficients may also be determined, for example, in a high color temperature scenario, in an incremental piecewise function, where the higher the color temperature, the higher the FACE-AWB weight.
The weighting coefficients may also be determined in a decreasing piecewise function manner, for example, when there are more neutral points (greater than or equal to a preset threshold) in the current scene, where the higher the number of neutral points, the lower the FACE-AWB weight.
For example, when the proportion of the face in the whole picture is greater than or equal to a preset proportion, the weighting coefficient may be determined in an increasing piecewise function manner, where the higher the face proportion, the higher the FACE-AWB weight.
For example, when the calculated distance from the skin color reference point (facewhite_x, facewhite_y) to the gray area is less than or equal to a preset distance, the weighting coefficient may be determined in an increasing piecewise function manner, where the closer the Euclidean distance, the higher the FACE-AWB weight.
For example, the Euclidean distance between the xy point of the basic AWB result and (facewhite_x, facewhite_y) may be calculated, and the weighting coefficient may be determined in a decreasing piecewise function manner, where the larger the distance, the smaller the FACE-AWB weight.
The weighting coefficients may also be determined, for example, in an incremental piecewise function with respect to luminance, where the higher the luminance, the higher the FACE-AWB weight.
The incremental piecewise function may be, for example:
y = 0, when x ≤ xLO
y = (x − xLO) / (xHI − xLO), when xLO < x < xHI
y = 1, when x ≥ xHI
the decreasing piecewise function may be, for example:
y = 1, when x ≤ xLO
y = (xHI − x) / (xHI − xLO), when xLO < x < xHI
y = 0, when x ≥ xHI
wherein x is an image parameter, y is the weighting coefficient corresponding to the image parameter, xLO is a low threshold, and xHI is a high threshold. It can be understood that the low threshold xLO and the high threshold xHI set for different image parameters in this embodiment may be different; the setting of the thresholds is not limited in this embodiment and may be selected according to actual requirements.
It is also understood that, for example, at least one image parameter may be determined, and then, when the image parameter satisfies the corresponding condition described above, the corresponding weighting coefficients determined according to the corresponding decreasing piecewise function or increasing piecewise function may be determined, so as to determine the respective weighting coefficients corresponding to the at least one image parameter.
By determining the corresponding weighting coefficients according to the corresponding piecewise functions for different image information, different weighting coefficients can be calculated for FACE-AWB for different scenes.
S310, an average value of the weighting coefficients corresponding to the at least one image parameter is determined as the weighting coefficient.
After the weighting coefficient corresponding to each of the at least one image parameter is determined, the average value of these weighting coefficients may be determined, for example, so as to obtain the weighting coefficient currently determined for the reference color value.
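Steps S309 and S310 can be sketched as follows, assuming the incremental and decreasing piecewise functions are linear ramps between the two thresholds (the published text fixes only the thresholds and the monotonic trend, so the ramp shape and all helper names here are assumptions):

```python
def incremental_weight(x, x_lo, x_hi):
    # 0 below the low threshold, 1 above the high threshold, linear in between
    if x <= x_lo:
        return 0.0
    if x >= x_hi:
        return 1.0
    return (x - x_lo) / (x_hi - x_lo)


def decreasing_weight(x, x_lo, x_hi):
    # mirror image: 1 below x_lo, 0 above x_hi
    return 1.0 - incremental_weight(x, x_lo, x_hi)


def face_awb_weight(params):
    """S310: average the per-parameter weights into one coefficient.

    `params` is a list of (value, x_lo, x_hi, incremental_flag) tuples,
    one per image parameter (e.g. luminance, distance to the gray area).
    """
    weights = [incremental_weight(v, lo, hi) if inc else decreasing_weight(v, lo, hi)
               for v, lo, hi, inc in params]
    return sum(weights) / len(weights)
```

For example, luminance could use the incremental ramp while the Euclidean distance to the basic-AWB point uses the decreasing ramp; the per-parameter weights are then averaged as described in S310.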
S311, obtaining a preset adjusting color value, wherein the preset adjusting color value is a color value obtained by automatic white balance processing in a first coordinate system.
In this embodiment, the color value in the first coordinate system obtained by the automatic white balance processing, that is, the preset adjustment color value, may also be obtained. For example, it may be understood as the xy value calculated by the basic AWB; assume that the preset adjustment color value is expressed as (AWBx, AWBy).
And S312, determining a target color value according to the reference color value, the weighting coefficient and the preset adjusting color value.
Then, a target color value is determined according to the reference color value, the weighting coefficient and the preset adjusting color value. In a possible implementation manner, the target color value, the weighting coefficient and the preset adjusting color value satisfy the following formula one:
Finalx = prop * facewhite_x + (1 - prop) * AWBx
Finaly = prop * facewhite_y + (1 - prop) * AWBy
wherein (Finalx, Finaly) is the target color value, (facewhite_x, facewhite_y) is the reference color value, prop is the weighting coefficient, and (AWBx, AWBy) is the preset adjustment color value.
It can be understood that the target color value in this embodiment is actually the xy value of the finally estimated light source point, that is, the target color value is used to indicate the color value of the ambient light corresponding to the image to be processed in the first coordinate system.
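Assuming formula one is the convex combination implied by the gain-unit description (the FACE-AWB reference point weighted against the basic AWB result by prop; the published formula itself is an image not reproduced here, and the function name is invented), S312 can be sketched as:

```python
def fuse_light_source(face_xy, awb_xy, prop):
    # Convex blend of the skin-color reference point and the basic-AWB
    # estimate in the xy chromaticity plane; prop is the FACE-AWB weight.
    fx, fy = face_xy
    ax, ay = awb_xy
    final_x = prop * fx + (1.0 - prop) * ax
    final_y = prop * fy + (1.0 - prop) * ay
    return final_x, final_y
```

With prop = 1 the result is the FACE-AWB reference point; with prop = 0 it falls back to the basic AWB estimate.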
And S313, performing white balance processing on the image to be processed through the target color value.
After the target color value is determined, the light source color in the image to be processed is considered to have been obtained, and white balance processing is then performed on the image to be processed through the target color value. For example, (Finalx, Finaly) can be converted into a gain (Gain) value to obtain the final white balance gain value of the system. The determined gain value can then be applied to the image to be processed, so that the influence of the ambient light source in the image is eliminated and the white balance processing of the image to be processed is effectively realized.
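The xy-to-gain conversion itself is not spelled out in the text. One common way to realize it (the XYZ-to-linear-sRGB matrix and the green-normalized gain convention are assumptions of this sketch, not the patent's method) is:

```python
def xy_to_rgb_gains(x, y):
    """Convert an estimated light-source chromaticity (x, y) into per-channel
    white-balance gains, normalized to the green channel.

    The sRGB matrix and the green-normalized convention are assumptions made
    for this sketch; the patent only states that (Finalx, Finaly) is
    converted into a gain value.
    """
    # xyY -> XYZ, fixing luminance Y = 1
    X = x / y
    Z = (1.0 - x - y) / y
    # XYZ -> linear sRGB (D65-referenced matrix)
    r = 3.2406 * X - 1.5372 * 1.0 - 0.4986 * Z
    g = -0.9689 * X + 1.8758 * 1.0 + 0.0415 * Z
    b = 0.0557 * X - 0.2040 * 1.0 + 1.0570 * Z
    # Gains that pull the estimated light source back to neutral
    return g / r, 1.0, g / b
```

For a neutral D65 source the gains come out near (1, 1, 1); for a warm, low-color-temperature source the red gain drops below 1 and the blue gain rises, which is the expected correction direction.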
In the image processing method provided by the embodiment of the present application, a mapping relationship between the gray card and the skin color cards is established, and the mapping relationship is fitted by the least square method based on the target skin color value and the preset gray card color value corresponding to each light, so as to obtain the conversion coefficients of the mapping relationship. The color value of the current face skin color in the face image is then processed according to the corresponding mapping relationship based on the conversion coefficients to obtain the color value of a skin color reference point, so that an accurate skin color reference point is obtained by fitting the mapping relationship between the skin color cards and the gray card across different chromatograms. The image parameters are then matched against the various preset conditions to determine the weighting coefficient corresponding to each of at least one image parameter, and the average of these weighting coefficients is taken as the weighting coefficient of the skin color reference point obtained by the FACE-AWB processing; this provides a decision mechanism for the confidence of the FACE-AWB scene and enhances the portrait effect of FACE-AWB in different scenes. Then, a target color value is determined based on the skin color reference point obtained by the FACE-AWB processing and the processing result of the basic AWB, combined with the obtained weighting coefficient, so that the color value of the light source in the current image to be processed can be accurately determined; a gain value is then determined based on the target color value and applied accordingly, thereby effectively realizing white balance processing.
On the basis of the foregoing embodiments, the processing procedure of the image processing method provided in the present application is described as a system below with reference to fig. 6; fig. 6 is a schematic diagram illustrating the processing unit division of the image processing method provided in the present application.
As shown in fig. 6, the image processing method provided in the embodiment of the present application operates in an xy color coordinate system. The main flow shown in fig. 6 may be divided into four parts: a data acquisition and parameter preparation unit, a calibration calculation unit, a FACE-AWB calculation and processing unit, and a gain value calculation unit.
the data acquisition and parameter preparation unit mainly has the functions of acquiring xy coordinates of a gray card and a skin color card shot under different light sources in a laboratory environment, and operating parameters, such as ratio and the like, required by the algorithm.
The calibration calculation unit is used for establishing a mapping relation between the xy value of the skin color card and the xy value of the gray card under different light sources and outputting a mapping equation between the gray card and the skin color card after fitting.
The FACE-AWB calculation and processing unit is configured to obtain the xy value of the current skin color after the face is identified, and to calculate, according to the mapping equation obtained by the calibration calculation unit, the xy value of the reference white point (that is, the reference color value introduced in the above embodiments) corresponding to the skin color at the current brightness and color temperature. It also calculates the FACE-AWB weight value in different scenes according to the brightness, color temperature and scene information input by other modules.
The gain value calculation unit is used for weighting the xy value of the skin color reference white point against the xy value of the basic AWB, using the weight value calculated by the FACE-AWB, to obtain the final processed xy value, and then converting this xy value into a gain value so as to correct the color cast of the image.
In summary, the image processing method provided in the embodiment of the present application provides a skin color mapping method. The mapping method is suitable for the xy coordinate system described herein: both the calibration and the fitting process are performed in the xy coordinate system, and both dark and light skin colors satisfy the fitting relationship. Moreover, the fitting process uses a linear function, so the traditional under-constrained problem is optimized into a constrained one, and no high-order power error is introduced. Experiments demonstrate that this fitting minimizes the error between facewhite_x and white_x under each light source. Therefore, the error of portrait white balance processing can be effectively reduced and its effect improved. Meanwhile, a decision mechanism for the FACE-AWB weighting coefficient under different scenes is provided; the decision method is applicable not only to the scenes described herein but also to the sub-scenes they contain. For example, the low color temperature scene described herein, with a color temperature less than or equal to 3500K, includes sunrise and sunset, part of indoor warm light, and the like. Based on the image processing method provided by the present application, the color cast of human skin in some scenes can be effectively avoided, and the FACE-AWB processing result is more accurate.
Fig. 7 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application. As shown in fig. 7, the apparatus 70 includes: an obtaining module 701, a converting module 702 and a processing module 703.
An obtaining module 701, configured to obtain an image to be processed, where the image to be processed includes a face image;
the obtaining module 701 is further configured to obtain a color value and a conversion coefficient of the face image in a first coordinate system;
a conversion module 702, configured to perform conversion processing on a color value of the face image through the conversion coefficient to obtain a reference color value, where the reference color value is used to indicate a color value of ambient light corresponding to the face image in the first coordinate system;
the processing module 703 is configured to determine a target color value according to the image information of the image to be processed and the reference color value, and perform white balance processing on the image to be processed through the target color value, where the target color value is used to indicate a color value of ambient light corresponding to the image to be processed in the first coordinate system.
In one possible design, the processing module 703 is specifically configured to:
determining a weighting coefficient according to the image information of the image to be processed;
acquiring a preset adjusting color value, wherein the preset adjusting color value is a color value in the first coordinate system obtained by automatic white balance processing;
and determining the target color value according to the reference color value, the weighting coefficient and the preset adjusting color value.
In one possible design, the processing module 703 is specifically configured to:
determining a weighting coefficient corresponding to each image parameter according to the parameter range of each image parameter;
determining an average value of the weighting coefficients corresponding to the at least one image parameter as the weighting coefficient.
In one possible design, the color values include a color X value and a color Y value; the target color value, the weighting coefficient and the preset adjusting color value meet the following formula I:
Finalx = prop * facewhite_x + (1 - prop) * AWBx
Finaly = prop * facewhite_y + (1 - prop) * AWBy
wherein (Finalx, Finaly) is the target color value, (facewhite_x, facewhite_y) is the reference color value, prop is the weighting coefficient, and (AWBx, AWBy) is the preset adjustment color value.
In one possible design, the reference color value, the color value of the face image, and the conversion coefficient satisfy the following formula two:
facewhite_x = kx * Currentx + bx
facewhite_y = ky * Currenty + by
wherein (facewhite_x, facewhite_y) is the reference color value, (Currentx, Currenty) is the color value of the face image, and the conversion coefficients include kx, bx, ky and by.
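A minimal sketch of applying formula two (the function name is invented for illustration; the coefficients are the calibrated kx, bx, ky, by):

```python
def skin_to_reference_white(current_xy, kx, bx, ky, by):
    # Map the measured face skin-color xy (Currentx, Currenty) onto the
    # reference white point (facewhite_x, facewhite_y) via the per-axis
    # linear coefficients obtained from calibration.
    cx, cy = current_xy
    return kx * cx + bx, ky * cy + by
```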
In one possible design, the obtaining module 701 is specifically configured to:
acquiring a plurality of first reference images, wherein the plurality of first reference images comprise M x N images obtained by respectively photographing N skin color cards under M kinds of light, and M and N are integers which are larger than 1 respectively;
acquiring a plurality of second reference images, wherein the plurality of second reference images comprise M images obtained by photographing a preset gray card under the M kinds of light;
and generating the conversion coefficient according to the color values of the first reference images and the color values corresponding to the second reference images.
In one possible design, the obtaining module 701 is specifically configured to:
determining N first reference images corresponding to each light ray in the plurality of first reference images, wherein the N first reference images are images obtained by shooting under the light ray;
determining a color value corresponding to each ray according to the color values of the N first reference images corresponding to each ray;
and generating the conversion coefficient according to the color value corresponding to each ray and the color values corresponding to the plurality of second reference images.
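The three steps above can be sketched as a per-axis one-dimensional least-squares fit, assuming the N skin-card xy values under each light have already been averaged into a single point per light (helper names are illustrative):

```python
def fit_conversion_coeffs(skin_xy_per_light, gray_xy_per_light):
    """Calibration sketch: fit facewhite = k * skin + b per axis by
    closed-form 1-D least squares over the M lights.

    Returns (kx, bx, ky, by).
    """
    def fit_axis(xs, ys):
        m = len(xs)
        sx, sy = sum(xs), sum(ys)
        sxx = sum(v * v for v in xs)
        sxy = sum(a * b for a, b in zip(xs, ys))
        k = (m * sxy - sx * sy) / (m * sxx - sx * sx)
        b = (sy - k * sx) / m
        return k, b

    kx, bx = fit_axis([p[0] for p in skin_xy_per_light],
                      [p[0] for p in gray_xy_per_light])
    ky, by = fit_axis([p[1] for p in skin_xy_per_light],
                      [p[1] for p in gray_xy_per_light])
    return kx, bx, ky, by
```

Fitting a plain linear function per axis matches the statement elsewhere in the text that the fitting process uses a linear function and introduces no high-order power error.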
In one possible design, the color values include color X values and color Y values, and the conversion coefficients include conversion coefficients for color X values and conversion coefficients for color Y values; the obtaining module 701 is specifically configured to:
generating kx and bx according to the color X value corresponding to each light ray and the color X values corresponding to the plurality of second reference images;
generating ky and by according to the color Y value corresponding to each light ray and the color Y values corresponding to the plurality of second reference images.
The apparatus provided in this embodiment may be used to implement the technical solutions of the above method embodiments, and the implementation principles and technical effects are similar, which are not described herein again.
Fig. 8 is a schematic diagram of a hardware structure of an image processing apparatus according to an embodiment of the present application, and as shown in fig. 8, an image processing apparatus 80 according to the present embodiment includes: a processor 801 and a memory 802; wherein
A memory 802 for storing computer-executable instructions;
the processor 801 is configured to execute the computer-executable instructions stored in the memory to implement the steps performed by the image processing method in the above embodiments. Reference may be made in particular to the description relating to the method embodiments described above.
Alternatively, the memory 802 may be separate or integrated with the processor 801.
When the memory 802 is provided separately, the image processing apparatus further includes a bus 803 for connecting the memory 802 and the processor 801.
An embodiment of the present application further provides a computer-readable storage medium, in which computer-executable instructions are stored, and when a processor executes the computer-executable instructions, the image processing method performed by the above image processing apparatus is implemented.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, and for example, the division of the modules is only one logical division, and other divisions may be realized in practice, for example, a plurality of modules may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or modules, and may be in an electrical, mechanical or other form.
The integrated module implemented in the form of a software functional module may be stored in a computer-readable storage medium. The software functional module is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) or a processor (processor) to execute some steps of the methods according to the embodiments of the present application.
It should be understood that the Processor may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of a method disclosed in connection with the present invention may be embodied directly in a hardware processor, or in a combination of the hardware and software modules within the processor.
The memory may comprise a high-speed RAM memory, and may further comprise a non-volatile storage NVM, such as at least one disk memory, and may also be a usb disk, a removable hard disk, a read-only memory, a magnetic or optical disk, etc.
The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended ISA (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, the buses in the figures of the present application are not limited to only one bus or one type of bus.
The storage medium may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks. A storage media may be any available media that can be accessed by a general purpose or special purpose computer.
Those of ordinary skill in the art will understand that: all or a portion of the steps of implementing the above-described method embodiments may be performed by hardware associated with program instructions. The program may be stored in a computer-readable storage medium. When executed, the program performs steps comprising the method embodiments described above; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (12)

1. An image processing method, comprising:
acquiring an image to be processed, wherein the image to be processed comprises a face image;
acquiring a color value and a conversion coefficient of the face image in a first coordinate system;
converting the color value of the face image through the conversion coefficient to obtain a reference color value, wherein the reference color value is used for indicating the color value of the ambient light corresponding to the face image in the first coordinate system;
and determining a target color value according to the image information of the image to be processed and the reference color value, and performing white balance processing on the image to be processed through the target color value, wherein the target color value is used for indicating a color value of ambient light corresponding to the image to be processed in the first coordinate system.
2. The method of claim 1, wherein determining a target color value according to the image information of the image to be processed and the reference color value comprises:
determining a weighting coefficient according to the image information of the image to be processed;
acquiring a preset adjusting color value, wherein the preset adjusting color value is a color value in the first coordinate system obtained by automatic white balance processing;
and determining the target color value according to the reference color value, the weighting coefficient and the preset adjusting color value.
3. The method of claim 2, wherein the image information includes at least one image parameter; determining a weighting coefficient according to the image information of the image to be processed, including:
determining a weighting coefficient corresponding to each image parameter according to the parameter range of each image parameter;
determining an average value of the weighting coefficients corresponding to the at least one image parameter as the weighting coefficient.
4. A method according to claim 2 or 3, wherein the colour values comprise colour X values and colour Y values; the target color value, the weighting coefficient and the preset adjusting color value meet the following formula I:
Finalx = prop * facewhite_x + (1 - prop) * AWBx
Finaly = prop * facewhite_y + (1 - prop) * AWBy
wherein (Finalx, Finaly) is the target color value, (facewhite_x, facewhite_y) is the reference color value, prop is the weighting coefficient, and (AWBx, AWBy) is the preset adjustment color value.
5. The method according to any one of claims 1-4, wherein the reference color value, the color value of the face image and the conversion coefficient satisfy the following formula two:
facewhite_x = kx * Currentx + bx
facewhite_y = ky * Currenty + by
wherein (facewhite_x, facewhite_y) is the reference color value, (Currentx, Currenty) is the color value of the face image, and the conversion coefficients include kx, bx, ky and by.
6. The method of claim 5, wherein obtaining the conversion coefficients comprises:
acquiring a plurality of first reference images, wherein the plurality of first reference images comprise M x N images obtained by respectively photographing N skin color cards under M kinds of light, and M and N are integers which are larger than 1 respectively;
acquiring a plurality of second reference images, wherein the plurality of second reference images comprise M images obtained by photographing a preset gray card under the M kinds of light;
and generating the conversion coefficient according to the color values of the first reference images and the color values corresponding to the second reference images.
7. The method of claim 6, wherein generating the transform coefficients according to the color values of the first reference images and the color values of the second reference images comprises:
determining N first reference images corresponding to each light ray in the plurality of first reference images, wherein the N first reference images are images obtained by shooting under the light ray;
determining a color value corresponding to each ray according to the color values of the N first reference images corresponding to each ray;
and generating the conversion coefficient according to the color value corresponding to each ray and the color values corresponding to the plurality of second reference images.
8. The method of claim 7, wherein the color values comprise color X values and color Y values, and the transform coefficients comprise transform coefficients for color X values and transform coefficients for color Y values; generating the conversion coefficient according to the color value corresponding to each ray and the color values corresponding to the plurality of second reference images, including:
generating kx and bx according to the color X value corresponding to each light ray and the color X values corresponding to the plurality of second reference images;
generating ky and by according to the color Y value corresponding to each light ray and the color Y values corresponding to the plurality of second reference images.
9. An image processing apparatus characterized by comprising:
the system comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring an image to be processed, and the image to be processed comprises a face image;
the acquisition module is further used for acquiring a color value and a conversion coefficient of the face image in a first coordinate system;
the conversion module is used for converting the color value of the face image through the conversion coefficient to obtain a reference color value, and the reference color value is used for indicating the color value of the ambient light corresponding to the face image in the first coordinate system;
and the processing module is used for determining a target color value according to the image information of the image to be processed and the reference color value, and performing white balance processing on the image to be processed through the target color value, wherein the target color value is used for indicating a color value of ambient light corresponding to the image to be processed in the first coordinate system.
10. An image processing apparatus characterized by comprising:
a memory for storing a program;
a processor for executing the program stored by the memory, the processor being configured to perform the method of any of claims 1 to 8 when the program is executed.
11. A computer-readable storage medium comprising instructions which, when executed on a computer, cause the computer to perform the method of any one of claims 1 to 8.
12. A computer program product comprising a computer program, characterized in that the computer program realizes the method of any of claims 1 to 8 when executed by a processor.
CN202110971031.1A 2021-08-23 2021-08-23 Image processing method and device Active CN113676715B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110971031.1A CN113676715B (en) 2021-08-23 2021-08-23 Image processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110971031.1A CN113676715B (en) 2021-08-23 2021-08-23 Image processing method and device

Publications (2)

Publication Number Publication Date
CN113676715A true CN113676715A (en) 2021-11-19
CN113676715B CN113676715B (en) 2022-12-06

Family

ID=78545382

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110971031.1A Active CN113676715B (en) 2021-08-23 2021-08-23 Image processing method and device

Country Status (1)

Country Link
CN (1) CN113676715B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114630095A (en) * 2022-03-15 2022-06-14 锐迪科创微电子(北京)有限公司 Automatic white balance method and device for target scene image and terminal
CN115460391A (en) * 2022-09-13 2022-12-09 浙江大华技术股份有限公司 Image simulation method, image simulation device, storage medium and electronic device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006129263A (en) * 2004-10-29 2006-05-18 Fuji Photo Film Co Ltd Matrix coefficient determining method and image input device
CN1977542A (en) * 2004-06-30 2007-06-06 皇家飞利浦电子股份有限公司 Dominant color extraction using perceptual rules to produce ambient light derived from video content
CN105894458A (en) * 2015-12-08 2016-08-24 乐视移动智能信息技术(北京)有限公司 Processing method and device of image with human face
CN106464816A (en) * 2014-06-18 2017-02-22 佳能株式会社 Image processing apparatus and image processing method thereof
CN108024055A (en) * 2017-11-03 2018-05-11 广东欧珀移动通信有限公司 Method, apparatus, mobile terminal and the storage medium of white balance processing
CN112887582A (en) * 2019-11-29 2021-06-01 深圳市海思半导体有限公司 Image color processing method and device and related equipment


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114630095A (en) * 2022-03-15 2022-06-14 锐迪科创微电子(北京)有限公司 Automatic white balance method and device for target scene image and terminal
CN114630095B (en) * 2022-03-15 2024-02-09 锐迪科创微电子(北京)有限公司 Automatic white balance method and device for target scene image and terminal
CN115460391A (en) * 2022-09-13 2022-12-09 浙江大华技术股份有限公司 Image simulation method, image simulation device, storage medium and electronic device
CN115460391B (en) * 2022-09-13 2024-04-16 浙江大华技术股份有限公司 Image simulation method and device, storage medium and electronic device

Also Published As

Publication number Publication date
CN113676715B (en) 2022-12-06

Similar Documents

Publication Publication Date Title
US10542243B2 (en) Method and system of light source estimation for image processing
CN108111772B (en) Shooting method and terminal
CN108024055B (en) Method, apparatus, mobile terminal and the storage medium of white balance processing
EP3149931B1 (en) Image processing system, imaging apparatus, image processing method, and computer-readable storage medium
CN110447051B (en) Perceptually preserving contrast and chroma of a reference scene
JP5971207B2 (en) Image adjustment apparatus, image adjustment method, and program
EP3039864B1 (en) Automatic white balancing with skin tone correction for image processing
JP5427247B2 (en) Automatic white balance (AWB) adjustment
CN109151426B (en) White balance adjusting method and device, camera and medium
CN104883504B (en) Open the method and device of high dynamic range HDR functions on intelligent terminal
CN109688396B (en) Image white balance processing method and device and terminal equipment
JP6508890B2 (en) Image processing apparatus, control method therefor, and control program
CN113676715B (en) Image processing method and device
TWI660633B (en) White balance calibration method based on skin color data and image processing apparatus thereof
CN107872663B (en) Image processing method and device, computer readable storage medium and computer equipment
CN107317967B (en) Image processing method, image processing device, mobile terminal and computer readable storage medium
CN107682611B (en) Focusing method and device, computer readable storage medium and electronic equipment
KR20150128168A (en) White balancing device and white balancing method thereof
Pierson et al. Luminance maps from High Dynamic Range imaging: calibrations and adjustments for visual comfort assessment
CN109300186B (en) Image processing method and device, storage medium and electronic equipment
CN111918047A (en) Photographing control method and device, storage medium and electronic equipment
US11805326B2 (en) Image processing apparatus, control method thereof, and storage medium
CN110460783A (en) Array camera module and its image processing system, image processing method and electronic equipment
WO2022032666A1 (en) Image processing method and related apparatus
CN109076199A (en) White balance adjustment device and its working method and working procedure

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant