CN113676715B - Image processing method and device - Google Patents
- Publication number
- CN113676715B (application number CN202110971031.1A)
- Authority
- CN
- China
- Prior art keywords
- color value
- color
- image
- value
- processed
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
- H04N23/84—Camera processing pipelines; Components thereof for processing colour signals
- H04N23/88—Camera processing pipelines; Components thereof for processing colour signals for colour balance, e.g. white-balance circuits or colour temperature control
Abstract
The embodiments of the application provide an image processing method and device. The method includes: acquiring an image to be processed, the image to be processed including a face image; acquiring a color value of the face image in a first coordinate system and a conversion coefficient; converting the color value of the face image with the conversion coefficient to obtain a reference color value, the reference color value indicating the color value, in the first coordinate system, of the ambient light corresponding to the face image; and determining a target color value according to image information of the image to be processed and the reference color value, and performing white balance processing on the image to be processed with the target color value, the target color value indicating the color value, in the first coordinate system, of the ambient light corresponding to the image to be processed. Because the reference color value is determined through conversion coefficients of a first-order linear relation between the reference color value and the color value of the face image, the white balance processing effect for portraits can be effectively improved.
Description
Technical Field
The present disclosure relates to image processing technologies, and in particular, to an image processing method and apparatus.
Background
An Automatic White Balance (AWB) algorithm can compensate for the color cast that occurs in an image photographed under a specific light source by enhancing the corresponding complementary color, thereby implementing white balance processing.
On the basis of the AWB algorithm, a portrait automatic white balance (FACE-AWB) algorithm has also been proposed: it obtains a reference skin color point from the face in the image and maps that reference skin color point to a neutral color point.
Therefore, the current FACE-AWB algorithm has a problem that the white balance processing effect of the portrait is not good.
Disclosure of Invention
The embodiment of the application provides an image processing method and device, which are used for solving the problem of poor white balance processing effect of a portrait.
In a first aspect, an embodiment of the present application provides an image processing method, including:
acquiring an image to be processed, wherein the image to be processed comprises a face image;
acquiring a color value and a conversion coefficient of the face image in a first coordinate system;
converting the color value of the face image through the conversion coefficient to obtain a reference color value, wherein the reference color value is used for indicating the color value of the ambient light corresponding to the face image in the first coordinate system;
and determining a target color value according to the image information of the image to be processed and the reference color value, and performing white balance processing on the image to be processed through the target color value, wherein the target color value is used for indicating a color value of ambient light corresponding to the image to be processed in the first coordinate system.
In one possible design, determining the target color value according to the image information of the image to be processed and the reference color value includes:
determining a weighting coefficient according to the image information of the image to be processed;
acquiring a preset adjusting color value, wherein the preset adjusting color value is a color value in the first coordinate system obtained by automatic white balance processing;
and determining the target color value according to the reference color value, the weighting coefficient and the preset adjusting color value.
In one possible design, the image information includes at least one image parameter; determining a weighting coefficient according to the image information of the image to be processed, including:
determining a weighting coefficient corresponding to each image parameter according to the parameter range of each image parameter;
determining an average value of the weighting coefficients corresponding to the at least one image parameter as the weighting coefficient.
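This design can be sketched as follows; the linear ramp over each parameter's range is an illustrative assumption, since the embodiment only states that each parameter's range determines a coefficient and that the coefficients are averaged.

```python
def per_parameter_weight(value, lo, hi):
    """Map one image parameter into [0, 1] over its parameter range (lo, hi).
    The linear ramp with clamping is an assumption, not specified by the text."""
    if hi <= lo:
        raise ValueError("empty parameter range")
    return min(1.0, max(0.0, (value - lo) / (hi - lo)))

def weighting_coefficient(params):
    """Final weighting coefficient: the average of the per-parameter
    coefficients. `params` holds (value, lo, hi) triples."""
    weights = [per_parameter_weight(v, lo, hi) for v, lo, hi in params]
    return sum(weights) / len(weights)
```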
In one possible design, the color values include a color X value and a color Y value; the target color value, the reference color value, the weighting coefficient and the preset adjustment color value satisfy the following formula one:

Finalx = facewhite_x × prop + AWBx × (1 − prop)
Finaly = facewhite_y × prop + AWBy × (1 − prop)

wherein (Finalx, Finaly) is the target color value, (facewhite_x, facewhite_y) is the reference color value, prop is the weighting coefficient, and (AWBx, AWBy) is the preset adjustment color value.
In one possible design, the reference color value, the color value of the face image and the conversion coefficients satisfy the following formula two:

facewhite_x = k_x × Currentx + b_x
facewhite_y = k_y × Currenty + b_y

wherein (facewhite_x, facewhite_y) is the reference color value, (Currentx, Currenty) is the color value of the face image, and the conversion coefficients include k_x, b_x, k_y and b_y.
In one possible design, obtaining the conversion coefficients includes:
acquiring a plurality of first reference images, wherein the plurality of first reference images comprise M × N images obtained by respectively photographing N skin color cards under M kinds of light, and M and N are each integers greater than 1;
acquiring a plurality of second reference images, wherein the plurality of second reference images comprise M images obtained by photographing a preset gray card under the M kinds of light;
and generating the conversion coefficient according to the color values of the first reference images and the color values corresponding to the second reference images.
In one possible design, generating the conversion coefficient according to the color values of the first reference images and the color values of the second reference images includes:
determining N first reference images corresponding to each light ray in the plurality of first reference images, wherein the N first reference images are images obtained by shooting under the light ray;
determining a color value corresponding to each ray according to the color values of the N first reference images corresponding to each ray;
and generating the conversion coefficient according to the color value corresponding to each ray and the color values corresponding to the plurality of second reference images.
In one possible design, the color values include color X values and color Y values, and the conversion coefficients include conversion coefficients for color X values and conversion coefficients for color Y values; generating the conversion coefficient according to the color value corresponding to each ray and the color values corresponding to the plurality of second reference images, including:
generating k_x and b_x according to the color X value corresponding to each light ray and the color X values corresponding to the plurality of second reference images;
generating k_y and b_y according to the color Y value corresponding to each light ray and the color Y values corresponding to the plurality of second reference images.
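A sketch of this calibration: average the N skin-card color values per light to get one skin color value per light, then fit the gray-card values against them. Ordinary least squares is an assumption here; the embodiment does not fix a particular fitting method, and the function names are illustrative.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit ys ≈ k * xs + b (an assumed fitting method)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    k = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return k, my - k * mx

def conversion_coefficients(skin_x_per_light, gray_x_per_light):
    """skin_x_per_light[i]: the N skin-card color X values under light i;
    gray_x_per_light[i]: the gray-card color X value under light i.
    Returns (k_x, b_x); k_y and b_y are obtained the same way from Y values."""
    avg_skin = [sum(vals) / len(vals) for vals in skin_x_per_light]
    return fit_line(avg_skin, gray_x_per_light)
```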
In a second aspect, an embodiment of the present application provides an image processing apparatus, including:
the system comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring an image to be processed, and the image to be processed comprises a face image;
the acquisition module is further used for acquiring a color value and a conversion coefficient of the face image in a first coordinate system;
the conversion module is used for converting the color value of the face image through the conversion coefficient to obtain a reference color value, and the reference color value is used for indicating the color value of the ambient light corresponding to the face image in the first coordinate system;
and the processing module is used for determining a target color value according to the image information of the image to be processed and the reference color value, and performing white balance processing on the image to be processed through the target color value, wherein the target color value is used for indicating a color value of ambient light corresponding to the image to be processed under the first coordinate system.
In one possible design, the processing module is specifically configured to:
determining a weighting coefficient according to the image information of the image to be processed;
acquiring a preset adjusting color value, wherein the preset adjusting color value is a color value in the first coordinate system obtained by automatic white balance processing;
and determining the target color value according to the reference color value, the weighting coefficient and the preset adjusting color value.
In one possible design, the processing module is specifically configured to:
determining a weighting coefficient corresponding to each image parameter according to the parameter range of each image parameter;
determining an average value of the weighting coefficients corresponding to the at least one image parameter as the weighting coefficient.
In one possible design, the color values include a color X value and a color Y value; the target color value, the reference color value, the weighting coefficient and the preset adjustment color value satisfy the following formula one:

Finalx = facewhite_x × prop + AWBx × (1 − prop)
Finaly = facewhite_y × prop + AWBy × (1 − prop)

wherein (Finalx, Finaly) is the target color value, (facewhite_x, facewhite_y) is the reference color value, prop is the weighting coefficient, and (AWBx, AWBy) is the preset adjustment color value.
In one possible design, the reference color value, the color value of the face image and the conversion coefficients satisfy the following formula two:

facewhite_x = k_x × Currentx + b_x
facewhite_y = k_y × Currenty + b_y

wherein (facewhite_x, facewhite_y) is the reference color value, (Currentx, Currenty) is the color value of the face image, and the conversion coefficients include k_x, b_x, k_y and b_y.
In one possible design, the obtaining module is specifically configured to:
acquiring a plurality of first reference images, wherein the plurality of first reference images comprise M × N images obtained by respectively photographing N skin color cards under M kinds of light, and M and N are each integers greater than 1;
acquiring a plurality of second reference images, wherein the plurality of second reference images comprise M images obtained by photographing a preset gray card under the M kinds of light;
and generating the conversion coefficient according to the color values of the first reference images and the color values corresponding to the second reference images.
In one possible design, the obtaining module is specifically configured to:
determining N first reference images corresponding to each light ray in the plurality of first reference images, wherein the N first reference images are images obtained by shooting under the light ray;
determining a color value corresponding to each ray according to the color values of the N first reference images corresponding to each ray;
and generating the conversion coefficient according to the color value corresponding to each ray and the color values corresponding to the plurality of second reference images.
In one possible design, the color values include color X values and color Y values, and the conversion coefficients include conversion coefficients for color X values and conversion coefficients for color Y values; the acquisition module is specifically configured to:
generating k_x and b_x according to the color X value corresponding to each light ray and the color X values corresponding to the plurality of second reference images;
generating k_y and b_y according to the color Y value corresponding to each light ray and the color Y values corresponding to the plurality of second reference images.
In a third aspect, an embodiment of the present application provides an image processing apparatus, including:
a memory for storing a program;
a processor for executing the program stored by the memory, the processor being adapted to perform the method as described above in the first aspect and any one of the various possible designs of the first aspect when the program is executed.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium, comprising instructions which, when executed on a computer, cause the computer to perform the method as described above in the first aspect and any one of the various possible designs of the first aspect.
In a fifth aspect, the present application provides a computer program product, including a computer program, wherein the computer program is configured to, when executed by a processor, implement the method according to the first aspect as well as any one of various possible designs of the first aspect.
The embodiments of the application provide an image processing method and device. The method includes: acquiring an image to be processed, the image to be processed including a face image; acquiring a color value of the face image in a first coordinate system and a conversion coefficient; converting the color value of the face image with the conversion coefficient to obtain a reference color value, the reference color value indicating the color value, in the first coordinate system, of the ambient light corresponding to the face image; and determining a target color value according to image information of the image to be processed and the reference color value, and performing white balance processing on the image to be processed with the target color value, the target color value indicating the color value, in the first coordinate system, of the ambient light corresponding to the image to be processed. The color value of the face image in the first coordinate system is converted with conversion coefficients of a first-order linear relation between the reference color value, which indicates the reference skin color, and the color value of the face image. Because both dark and light skin colors satisfy a first-order linear relation in the xy coordinate system, obtaining the reference color value from this first-order linear relation avoids introducing errors; the target color value is then determined from the reference color value for white balance processing, which effectively avoids color cast on the face and improves the white balance processing effect for portraits.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present application, and those skilled in the art can obtain other drawings without inventive labor.
Fig. 1 is a schematic structural diagram of a processing apparatus provided in an embodiment of the present application;
fig. 2 is a flowchart of an image processing method according to an embodiment of the present application;
fig. 3 is a second flowchart of an image processing method according to an embodiment of the present application;
FIG. 4 is a schematic diagram of an image to be processed according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of acquiring a reference image according to an embodiment of the present application;
fig. 6 is a schematic diagram illustrating processing unit division of an image processing method according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
fig. 8 is a schematic hardware structure diagram of an image processing apparatus according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In order to better understand the technical solution of the present application, the following further detailed description is provided for the background art related to the present application.
First, white balance is introduced. The basic concept of white balance is that "a white object should be restored to white under any light source": the color cast that occurs when a picture is taken under a specific light source is compensated by enhancing the corresponding complementary color.
For example, images taken under indoor tungsten light tend to be yellow, while images taken under sunlight shadows tend to be blue, because white balance is set to restore the normal color of the images in these scenes.
The AWB algorithm automatically performs white balance processing on a picture according to the ambient illumination. Since color temperature is involved in the AWB algorithm, the concept is introduced here: color temperature derives from Planck's definition of black-body thermal radiation and is measured in kelvin (K); the color temperature of a light source is the temperature at which a black-body radiator matches the color of that light source. Redder light corresponds to a lower color temperature and bluer light to a higher one: candlelight is about 1800 K, an overcast sky about 5000 K, direct sunlight about 6500 K, and a clear blue sky above 10000 K.
What the AWB algorithm must do is make white objects appear white under any color temperature. Notably, the human visual system performs white balance so quickly and accurately that it is rarely noticed: a piece of white paper is perceived as white in any environment, and only at the moment of a large, rapid switch in light-source color temperature (e.g., turning a light on or off) does the paper appear tinted before immediately looking white again.
However, when an image capturing apparatus captures an image, its sensor is strongly affected by color temperature; because the sensor's sensitivity to the red (R), green (G) and blue (B) components differs, the raw picture output by the sensor differs greatly from what human eyes see. The AWB algorithm overcomes this inconsistency between the sensor's characteristics and human vision and counteracts the influence of color temperature on image color, so as to restore the original colors of objects in the image.
Based on the above, the AWB algorithm simulates the color constancy of the human visual system to restore the real colors of objects. On this basis, the FACE-AWB algorithm has also been proposed: starting from the processing result of the basic AWB, it applies special AWB processing to scenes containing a portrait, aiming to satisfy people's aesthetic preferences for portrait skin color while preserving the accuracy of the basic AWB.
Human skin generally has four basic skin color bases (white, black, red and yellow), and weighted mixtures of these four bases form the various human skin colors; reddish-brown skin, for example, results from the combined action of red, yellow and black. The conventional aesthetic preference, however, is skin that is fair, ruddy and undistorted, so FACE-AWB faces the challenge of balancing color cast between background and portrait in scenes with high or low color temperature, large-area pure color, mixed light sources of multiple color temperatures, and complex mixtures.
The main problems of the present FACE-AWB algorithm are that, in large-area mixed-color scenes and large-area non-neutral pure-color scenes, the face is prone to color cast (for example, the face turns yellow or red) and the colors of the face and the background cannot be unified. A mixed-color scene is one containing a relatively large number of colors in the image; the neutral colors are black, white, and the gray series of various shades blended from black and white, which may also be called the achromatic series.
The color cast problem introduced above arises easily in the FACE-AWB algorithm for two reasons: the mapping from the reference skin color point to the neutral color point uses an improper fitting manner and therefore carries an error; in addition, the FACE-AWB algorithm designs no corresponding confidence judgment mechanism for different tuning scenes, so some special scenes show color cast.
The implementation process of the FACE-AWB algorithm is briefly introduced below; at present, its processing requires the following four steps:
1) Skin color calibration; 2) Reference white point selection; 3) White point adjustment; 4) Skin color compensation.
Steps 1) to 3) are generally carried out in the r/g, b/g coordinate system, which is introduced first: it has two coordinate axes, one being the ratio of the r value to the g value of an rgb triple, and the other being the ratio of the b value to the g value.
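As a small illustration of these coordinates (the helper's name is an assumption):

```python
def rg_bg(r, g, b):
    """Convert an rgb triple to the (r/g, b/g) chromaticity coordinates
    used in steps 1) to 3)."""
    if g == 0:
        raise ValueError("g must be non-zero")
    return r / g, b / g
```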
The implementation schemes of the steps 1) to 3) are as follows:
First, r/g and b/g of the reference skin color points of a skin color card and of the gray card points of a gray card are acquired under different light sources. Then a first-order linear relation y = kx + b between the skin color card and the gray card under the same light source is established, and the r/g and b/g values of the reference skin color point and the gray card point under each light source are substituted as y and x into y = kx + b; from the multiple light sources, sets of k and b values are obtained, denoted {k1, k2, k3, …, kn} and {b1, b2, b3, …, bn}.
Then, taking the set of k values as the ordinate and the color temperature values of the light sources used for the gray card and the skin color card as the abscissa, a linear fit is performed to obtain the linear relation between the parameter k and color temperature across the different light sources, from which the value of k at a specific color temperature can be determined; likewise, taking the set of b values as the ordinate and the same color temperature values as the abscissa, a linear fit yields the relation between the parameter b and color temperature, from which the value of b at a specific color temperature can be determined.
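The two fits described here can be sketched as follows. Least squares is assumed, and `lsq_line` and `k_b_at` are illustrative names, not part of the prior-art description.

```python
def lsq_line(xs, ys):
    """Least-squares line ys ≈ a * xs + c."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def k_b_at(cct, ccts, ks, bs):
    """Fit the per-light-source sets {k_i} and {b_i} against the light
    sources' color temperatures, then evaluate both lines at `cct`."""
    ak, ck = lsq_line(ccts, ks)
    ab, cb = lsq_line(ccts, bs)
    return ak * cct + ck, ab * cct + cb
```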
Based on the above introduction, the relation between the predicted gray card and the actual skin color at the current color temperature is assumed to be: r_(predicted gray card) / g_(predicted gray card) = k′ × r_(actual skin color) / g_(actual skin color) + b′. The intersection of this equation with the gray card characteristic curve is then solved; that intersection is the reference white point used in step 2), where the gray card characteristic curve is the curve fitted through the (r/g, b/g) points of the gray card under each light source.
However, the problem with this approach is that:
if the selected reference skin color point is a light skin color point, the skin color point and the gray card point do not satisfy the linear relation, whereas if a dark skin color point is selected the linear relation is satisfied; this is determined by the characteristics of dark and light skin color points in the r/g, b/g coordinate system. The fitting rule described above therefore does not apply to all skin color cards.
Because the light skin color points do not satisfy the linear relation, one might consider fitting them with a nonlinear fit; however, if the light skin color reference points are fitted nonlinearly, a higher-order error is introduced that grows as the order of the equation increases.
Meanwhile, even if a dark skin color point is selected as the reference skin color point, an error is introduced when solving for the optimal reference white point. The characteristic curve of a common gray card is a cubic or quartic equation, while under a given light source the gray card point and the skin color point are in a linear relation; theoretically, the optimal reference white point is obtained as the intersection of the line with the curve, or as the point on the line closest to the curve. In practice this problem does not satisfy the constraints, and reducing the order of the equation again introduces higher-order error, so the method is difficult to solve.
Steps 1) to 3) were introduced above; step 4) is skin color compensation. Most of the prior art directly weights the obtained reference white point with the calculation result of the basic AWB: with weight prop, reference white point a_white, and basic AWB result a_base, the weighting is a_white × prop + a_base × (1 − prop). Since the weight prop depends on manual parameter tuning, this prior-art skin color compensation method can hardly meet the color cast requirements of different scenes.
In summary, when the FACE-AWB algorithm is implemented in the related art, a large number of errors are introduced in the process of fitting the reference skin color points to the gray card points, so the human face is prone to color cast in large-area mixed-color scenes and large-area non-neutral pure-color scenes. Therefore, the current FACE-AWB algorithm suffers from a poor white balance processing effect for portraits.
In view of the problems in the prior art, the present application provides the following technical concept: in the xy coordinate system, both dark and light skin colors satisfy a first-order linear relationship, so calibration and fitting can both be performed in the xy coordinate system, which ensures that the fitting relationship holds for both dark and light skin colors. Meanwhile, because the white balance process is executed using a first-order linear relationship, the introduction of errors can be avoided, thereby avoiding color cast on the human face and correspondingly improving the white balance processing effect for portraits.
Based on the above description, the image processing method provided by the present application is described below with reference to specific embodiments. First, the execution subject of each embodiment of the present application is described: any device with a camera, such as a mobile phone, a tablet, or a camera (including industrial, vehicle-mounted, and civil devices), may serve as the execution subject of each embodiment; this embodiment does not limit the specific implementation of the execution subject. In one possible implementation manner, the structure of the processing device serving as the execution subject in each embodiment of the present application may be, for example, the implementation described in fig. 1, and fig. 1 is a schematic structural diagram of the processing device provided by an embodiment of the present application.
As shown in fig. 1, the sub-modules of the processing device in the embodiments of the present application include, but are not limited to, a processor, a power supply, an application operating system, an interface module, a camera sensor, a radio frequency unit, a network module, an audio data unit, a graphics processor, a microphone, a control panel, other input devices, a display panel, and the like, where the camera sensor is responsible for receiving image information and transmitting the image information to the processor for processing according to the image processing method provided in the present application, and displaying the portrait effect processed according to the image processing method on the display panel in real time.
In an actual implementation process, all devices having the sub-modules described above may be used as the execution main body in each embodiment of the present application, and a specific execution main body in the present application is not limited herein.
Based on the above description, the following describes an image processing method provided in an embodiment of the present application with reference to fig. 2, and fig. 2 is a flowchart of the image processing method provided in the embodiment of the present application.
As shown in fig. 2, the method includes:
S201, obtaining an image to be processed, wherein the image to be processed comprises a face image.
In this embodiment, for example, the object may be photographed by the above-described camera sensor, so as to obtain an image to be processed, where the photographed object may be any object, such as a person, a building, a plant, an animal, and the like, which is not limited in this embodiment.
Therefore, in a possible implementation manner, the image to be processed includes a face image; for example, a face may be photographed when the image to be processed is captured, thereby obtaining an image to be processed that includes a face image. It can be understood that the image to be processed may include, for example, one face or a plurality of faces, which is not limited in this embodiment; the specific implementation of the face image in the image to be processed may depend on the actual shooting scene and shooting object, which is likewise not limited in this embodiment.
S202, obtaining a color value and a conversion coefficient of the face image in a first coordinate system.
After acquiring an image to be processed including a face image, a color value of the face image in a first coordinate system may be acquired, where the first coordinate system may be an xy coordinate system, for example, and the color value of the face image in the xy coordinate system may be understood as a color value of a skin color of the face image, for example. In a possible implementation manner, the skin color in the face image corresponds to a plurality of pixel points, for example, an average value of color values of the pixel points corresponding to each skin color in the first coordinate system may be determined as the color value of the face image. And when a plurality of faces are included, for example, an average value of color values of the plurality of faces may be determined as the color value of the face image.
Meanwhile, a conversion coefficient may also be obtained in this embodiment, where the conversion coefficient may be, for example, a correlation coefficient in a linear relationship between a reference color value of a reference skin color and a color value of a face image, and by obtaining the conversion coefficient, a linear relationship between the reference skin color and the face skin color in the face image may be determined, so that a reference color value may be determined according to the linear relationship.
And S203, converting the color value of the face image through the conversion coefficient to obtain a reference color value, wherein the reference color value is used for indicating the color value of the ambient light corresponding to the face image in the first coordinate system.
Based on the above description, it may be determined that after the conversion coefficient is determined, a linear relationship between a reference color value of a reference skin color and a color value of the face image may be determined, so that the color value in the face image may be converted by the conversion coefficient to obtain the reference color value, and in a possible implementation, for example, the color value of the face image may be input into the linear relationship including the conversion coefficient to obtain the reference color value.
The reference color value in this embodiment is used to indicate the color value, in the first coordinate system, of the ambient light corresponding to the face image. It can be understood that when the sensor captures the image, a certain light source exists in the environment, for example, the yellow light emitted by a tungsten filament lamp or the orange light of the sun at sunset; the light source in the environment will present a corresponding color on the face, and therefore the currently acquired reference color value is the color value of the ambient light corresponding to the face image in the first coordinate system. It can also be understood that the color of the real light source in the environment is actually fixed but may present different color values on the face and on the background; therefore, the ambient light corresponding to the face image mentioned in this embodiment is actually the light source presented by the ambient light of the real capturing environment on the face.
S204, determining a target color value according to the image information and the reference color value of the image to be processed, and performing white balance processing on the face image through the target color value, wherein the target color value is used for indicating the color value of the ambient light corresponding to the image to be processed in the first coordinate system.
In this embodiment, after the reference color value is determined, the color value of the light source on the human face may be determined, the white balance processing is to determine the color of the light source in the image, and then perform compensation processing to eliminate the effect of the ambient light on the image and restore the original color of the photographic subject, so that the target color value may be determined according to the determined reference color value and the image information of the image to be processed, where the target color value in this embodiment is used to indicate the color value of the ambient light corresponding to the image to be processed in the first coordinate system.
It is understood that the ambient light corresponding to the image to be processed is similar to the ambient light corresponding to the above-described face image, that is, the light source of the ambient light in the real shooting environment appearing in the image to be processed.
In a possible implementation manner, the image information of the image to be processed in this embodiment may include, for example, a color temperature of the image to be processed, a color of a neutral color point in the image to be processed, a proportion of a face image in the image to be processed, an image brightness, and the like.
In a possible implementation manner of determining the target color value according to the image information and the reference color value of the image to be processed, for example, the image information and the reference color value of the image to be processed may be processed by a preset model, for example, the image information and the reference color value of the image to be processed may be input into the preset model, so that the preset model outputs the target color value, or the weight value may also be determined according to the image information of the image to be processed, and then the target color value is determined according to the weight value and the reference color value through a preset functional relationship.
After the target color value is obtained, white balance processing can be performed on the image to be processed according to the target color value, so that the effect of an ambient light source indicated by the target color value in the image to be processed is counteracted, and then the reduction of the color of a shooting object in the image to be processed is realized, and the white balance processing of the image is realized.
The image processing method provided by the embodiment of the application comprises the following steps: acquiring an image to be processed, wherein the image to be processed comprises a face image; acquiring a color value and a conversion coefficient of the face image in a first coordinate system; converting the color value of the face image through the conversion coefficient to obtain a reference color value, wherein the reference color value is used for indicating the color value of the ambient light corresponding to the face image in the first coordinate system; and determining a target color value according to the image information and the reference color value of the image to be processed, and performing white balance processing on the image to be processed through the target color value, wherein the target color value is used for indicating the color value of the ambient light corresponding to the image to be processed in the first coordinate system. The color value of the face image in the first coordinate system is obtained, and the reference color value used for indicating the light source color on the face is determined through the conversion coefficient of the first-order linear relationship between the reference color value of the reference skin color and the color value of the face image. Because both dark and light skin colors satisfy the first-order linear relationship in the xy coordinate system, obtaining the reference color value of the reference skin color based on this first-order linear relationship avoids introducing errors; the target color value is then determined according to the reference color value to perform white balance processing, so the occurrence of color cast on the face can be effectively avoided, and the white balance processing effect for portraits is improved.
Based on the above embodiments, the following describes in further detail the image processing method provided in the embodiment of the present application with reference to fig. 3 to 5, fig. 3 is a second flowchart of the image processing method provided in the embodiment of the present application, fig. 4 is a schematic diagram of an image to be processed provided in the embodiment of the present application, and fig. 5 is a schematic diagram of acquiring a reference image provided in the embodiment of the present application.
As shown in fig. 3, the method includes:
S301, obtaining an image to be processed, wherein the image to be processed comprises a face image.
The implementation manner of S301 is similar to that of S201, and is not described herein again. In this embodiment, a possible implementation manner of acquiring an image to be processed is further described.
In a possible implementation manner, for example, image data may be acquired by a camera to obtain a to-be-processed image including a face image, and the to-be-processed image is then displayed on a preview interface of a display panel of the processing device.
Then, for example, it may be determined, through a face detection algorithm, whether the number of faces in the to-be-processed image displayed in the preview interface is greater than or equal to 1, if so, it may be further determined whether the confidence level for each detected face region is greater than or equal to a preset confidence level, if so, it may be determined that the current to-be-processed image includes a face image, and then, subsequent processing may be performed.
For example, referring to fig. 4, it is assumed that 401 in fig. 4 is an image to be processed, and then whether a human face exists in the image to be processed 401 is detected through a human face detection algorithm, and it is assumed that 402 in fig. 4 is currently detected as a human face region, it may be further determined whether a confidence level of the human face region is greater than or equal to a preset confidence level, and if it is determined that the confidence level is greater than or equal to the preset confidence level, it may be determined that a human face image exists in the current image to be processed 401, that is, an image indicated by 402.
Or if it is determined that the face is not included in the image to be processed, or the confidence of the determined face region is smaller than the preset confidence, it may be determined that the face image is not included in the current image to be processed, and then subsequent processing is not required.
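The gating logic of the two paragraphs above can be sketched as follows (the list-of-detections input format is a hypothetical simplification of a real face detection algorithm's output):

```python
def contains_face_image(detections, preset_confidence):
    """detections: list of (bbox, confidence) pairs produced by a face
    detection algorithm. Returns True if at least one detected face
    region meets the preset confidence, i.e. subsequent FACE-AWB
    processing is warranted; otherwise no subsequent processing is needed."""
    return any(conf >= preset_confidence for _, conf in detections)
```

If the list is empty (no face detected) or every region falls below the preset confidence, the function returns False, matching the two bail-out cases described above.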
S302, obtaining a color value of the face image in the first coordinate system.
The implementation manner of S302 is similar to the implementation manner of S202 described above, and in this embodiment, a possible implementation manner of obtaining the color value of the face image is further described.
In this embodiment, when determining that the image to be processed includes a face image, for example, coordinates of a region of interest (roi) of the face may be obtained, so as to determine the face image in the image to be processed, for example, as can be understood with reference to 402 in fig. 4, 402 in fig. 4 is a determined roi, and the roi is a region where the determined face image is located.
In a possible implementation manner, for example, the x-axis of the obtained roi may be scaled down by 10% and then skin color detection may be performed on the scaled-down roi, so as to obtain a color value of the face image in the first coordinate system.
In this embodiment, the color value includes a color X value and a color Y value, where when the face image includes a face, then skin color detection is performed on a roi of the face, and an xy value of the skin color value of the face in an xy coordinate system is determined, that is, a color value of the face image in the first coordinate system may be obtained and may be represented by (Currentx, currenty).
Alternatively, when a plurality of faces exist in the face image, for example, skin color detection may be performed on respective roi areas of the plurality of faces, so as to obtain an xy value of the skin color of each face in an xy coordinate system, and then the xy values of the skin color values of each face are averaged, so as to obtain a color value of the face image in the first coordinate system, which may be represented by (Currentx, currenty).
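The single-face and multi-face cases above can be sketched together as follows (a minimal illustration; the skin color detection step that produces the per-face xy values is assumed to have already run):

```python
def face_color_value(per_face_xy):
    """per_face_xy: list of (x, y) skin color values, one per detected face.
    Returns (Currentx, Currenty): the value itself when there is one face,
    the average of the per-face values when there are several."""
    n = len(per_face_xy)
    current_x = sum(x for x, _ in per_face_xy) / n
    current_y = sum(y for _, y in per_face_xy) / n
    return current_x, current_y
```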
And S303, acquiring a plurality of first reference images, wherein the plurality of first reference images comprise M x N images obtained by respectively photographing N skin color cards under M light rays, and M and N are integers greater than 1.
In this embodiment, a conversion coefficient of a linear relationship between a reference color value of a reference skin color and a color value of a face image may also be obtained, and a possible implementation manner for obtaining the conversion coefficient is described below.
In this embodiment, for example, N kinds of skin color cards may be selected, and then the N kinds of skin color cards are photographed under M different light rays, so as to obtain M × N images, so as to obtain a plurality of first reference images.
For example, as can be understood with reference to fig. 5, as shown in fig. 5, for example, the skin color card 502 may be placed on the console 501, then a specific light source is placed in the shooting environment, so as to generate corresponding light, and then the skin color card 502 is shot by the camera sensor in the specific light, so as to obtain the reference image, wherein the camera sensor may be, for example, the camera sensor in the processing device described above.
In this embodiment, the M kinds of light include, but are not limited to, light under standard light sources such as D75, D65, D55, TL84, CWF, A, and H. The color values of the N skin color points corresponding to the N skin color cards in this embodiment may be selected, for example, according to aesthetic preferences: for example, if it is desired that the current face image shows a fair and ruddy skin color, the corresponding color values of the N skin color points may be selected; or, if the current face image is expected to show a healthy wheat-colored skin tone, the corresponding color values of the N skin color points may be selected. This embodiment does not limit the specific implementation of the selected skin color point values, which may be chosen according to actual requirements.
In one possible implementation, the plurality of first reference images currently acquired may be in, for example, raw format, where a raw image contains the information obtained directly from a Charge-Coupled Device (CCD) or Complementary Metal Oxide Semiconductor (CMOS) sensor with almost no processing.
For example, the rgb values of the skin color card under the current light may be extracted based on the collected raw image, and then converted from the rgb coordinate system into the xy coordinate system through the conversion matrix M from the rgb coordinate system to the xy coordinate system, so as to obtain the x color value and the y color value of the skin color card in each first reference image under the corresponding light.
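A sketch of the rgb-to-xy conversion described above. The conversion matrix M is device-specific and obtained by calibration; the standard sRGB-to-XYZ (D65) matrix is used here purely as a stand-in:

```python
# Stand-in for the device-specific conversion matrix M (sRGB -> XYZ, D65).
M = (
    (0.4124, 0.3576, 0.1805),
    (0.2126, 0.7152, 0.0722),
    (0.0193, 0.1192, 0.9505),
)

def rgb_to_xy(r, g, b):
    """Convert linear rgb values extracted from the raw image into
    xy chromaticity coordinates in the first coordinate system."""
    X, Y, Z = (m0 * r + m1 * g + m2 * b for m0, m1, m2 in M)
    total = X + Y + Z
    return X / total, Y / total
```

With the sRGB stand-in matrix, equal rgb values map close to the D65 white point (x ≈ 0.3127, y ≈ 0.3290), which is a quick sanity check on the conversion.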
S304, acquiring a plurality of second reference images, wherein the plurality of second reference images comprise M images obtained by photographing the preset gray card under M light rays.
In this embodiment, a preset gray card can be made by selecting a color value of a preset gray, and then the preset gray card is photographed under M different light rays, so that M images are obtained, and a plurality of second reference images are obtained.
The present implementation can also be understood with reference to fig. 5, as shown in fig. 5, for example, a preset gray card 502 can be placed on an operation table 501, then a specific light source is placed in the shooting environment, so as to generate corresponding light, and then the preset gray card 502 is shot by a camera sensor under the specific light, so as to obtain a reference image, wherein the camera sensor can be, for example, the camera sensor in the processing device described above.
In the present embodiment, the M kinds of light include, but are not limited to, light under standard light sources such as D75, D65, D55, TL84, CWF, A, and H. The specific color value of the preset gray corresponding to the preset gray card in this embodiment may be selected according to actual requirements, for example, a gray level of 128 or 129, which is not limited in this embodiment.
Similarly, in a possible implementation manner, the plurality of second reference images obtained currently may be in, for example, raw format; the rgb values of the preset gray card under the current light may then be extracted based on the collected raw images and converted from the rgb coordinate system into the xy coordinate system through the conversion matrix M, so as to obtain the x color value and the y color value of the preset gray card in each second reference image under the corresponding light.
S305, determining N first reference images corresponding to each light ray in the plurality of first reference images, wherein the N first reference images are images obtained by shooting under the light ray.
In this embodiment, the plurality of first reference images include M × N images captured for N skin color cards under M types of light, and the same light is used as a dimension to determine respective images of the N skin color cards, so that N first reference images corresponding to each light can be determined in the plurality of first reference images, where the N first reference images are images captured under the light.
For example, if a skin color card 1, a skin color card 2, a skin color card 3, and a light 1 and a light 2 exist at present, 6 first reference images may be obtained by shooting each skin color card under each light, and then 3 first reference images corresponding to the light 1 and 3 first reference images corresponding to the light 2 may be determined in the 6 first reference images.
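The grouping step above can be sketched as follows (the (light, card, image) tuple format is a hypothetical representation of the calibration captures):

```python
from collections import defaultdict

def group_by_light(first_reference_images):
    """first_reference_images: list of (light_id, card_id, image) tuples for
    the M x N captures. Returns, for each light, the N first reference
    images shot under that light."""
    groups = defaultdict(list)
    for light_id, card_id, image in first_reference_images:
        groups[light_id].append((card_id, image))
    return dict(groups)
```

For the example above (3 skin color cards, 2 lights, 6 images), this yields 3 first reference images for light 1 and 3 for light 2.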
S306, according to the color values of the N first reference images corresponding to each ray, the color value corresponding to each ray is determined.
In this embodiment, the N first reference images corresponding to each ray respectively correspond to respective color values, and then, for example, the color value corresponding to each ray may be determined according to the color value of the N first reference images corresponding to each ray.
For example, suppose 3 skin color cards currently exist, namely skin color card 1, skin color card 2, and skin color card 3, and assume that for any one light source, the color values of the first reference images corresponding to the 3 skin color cards under that light source are respectively (skin1x, skin1y), (skin2x, skin2y), and (skin3x, skin3y). The color value of each first reference image under the light source may then be subjected to weighted fitting according to the following formula three, so as to obtain the color value of the target skin color corresponding to the light source, where this color value may also be understood as the target skin color (Targetx, Targety). The formula three may be, for example:

Targetx = skin1x × ratio1 + skin2x × ratio2 + skin3x × ratio3
Targety = skin1y × ratio1 + skin2y × ratio2 + skin3y × ratio3

wherein ratio1 + ratio2 + ratio3 = 1. The number of ratios may be determined according to the number of skin color cards in an actual implementation process: if N skin color cards exist, N ratios may be determined, and the specific value of each ratio may be selected according to actual requirements, as long as the sum of the N ratios is guaranteed to be 1.
The above description takes any one of the rays as an example; by performing the above process for each ray, the color value corresponding to each ray, that is, the target skin color (Targetx, Targety) described above, can be determined.
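The weighted fitting of formula three, generalized to N skin color cards, can be sketched as follows (a minimal illustration under the stated constraint that the ratios sum to 1):

```python
def target_skin_color(skin_xy, ratios):
    """skin_xy: [(skin1x, skin1y), ..., (skinNx, skinNy)] for one light;
    ratios: [ratio1, ..., ratioN] summing to 1.
    Returns the target skin color (Targetx, Targety) for that light."""
    assert abs(sum(ratios) - 1.0) < 1e-9, "ratios must sum to 1"
    target_x = sum(x * r for (x, _), r in zip(skin_xy, ratios))
    target_y = sum(y * r for (_, y), r in zip(skin_xy, ratios))
    return target_x, target_y
```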
And S307, generating a conversion coefficient according to the color value corresponding to each ray and the color values corresponding to the plurality of second reference images.
After determining the color value corresponding to each ray, the conversion coefficient may be generated according to the color value corresponding to each ray and the color values corresponding to the plurality of second reference images, for example.
It can be understood that there are M kinds of light rays at present, and the color value corresponding to each kind of light is the color value of the target skin color under that light. Meanwhile, the plurality of second reference images in this embodiment are reference images obtained by photographing the preset gray card under the M kinds of light, so the color values corresponding to the plurality of second reference images are the color values of the preset gray card under each kind of light respectively.
Under the M different light rays, the color value of the preset gray card in the first coordinate system may be represented as (whitex, whitey); it may then be determined, based on the above description, that under each kind of light both the color value of the target skin color and the color value of the preset gray card under that light can be determined.

For any one of the rays, for example, a difference value may be determined according to the color value (whitex, whitey) of the preset gray card under the ray and the color value (Targetx, Targety) of the target skin color under the ray, so as to obtain the difference value (Diffx, Diffy); on this basis, for example, a mapping relationship between the preset gray card and the skin color card may be established, for example, to satisfy the following formula four:

Diffx = kx × Targetx + bx
Diffy = ky × Targety + by

In this embodiment, the color value includes a color X value and a color Y value, and the conversion coefficient includes a conversion coefficient of the color X value and a conversion coefficient of the color Y value. When generating the conversion coefficients, kx and bx may be generated, for example, according to the color X value corresponding to each ray and the color X values corresponding to the plurality of second reference images, and ky and by may be generated according to the color Y value corresponding to each ray and the color Y values corresponding to the plurality of second reference images. That is, the conversion coefficients in this embodiment may include kx, bx, ky, and by in the above formula four.

In a possible implementation manner, it may be determined based on the above description that each ray corresponds to its own Targetx and Diffx, and then kx and bx may be obtained, for example, through least square fitting based on the mapping relationship described by the above formula four and the Targetx and Diffx of each ray.

Meanwhile, it can be determined based on the above description that each ray corresponds to its own Targety and Diffy, and then ky and by can be obtained through least square fitting based on the mapping relationship described by the above formula four and the Targety and Diffy of each ray.
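A sketch of the least-square fitting step: fitting k and b of the first-order relationship between the per-light target skin color values and the corresponding difference values. The closed-form ordinary-least-squares solution is used here as an illustrative assumption:

```python
def fit_kb(targets, diffs):
    """Least-squares fit of diff = k * target + b over the M lights.
    targets: per-light Targetx (or Targety) values;
    diffs:   per-light Diffx  (or Diffy)  values."""
    n = len(targets)
    mean_t = sum(targets) / n
    mean_d = sum(diffs) / n
    num = sum((t - mean_t) * (d - mean_d) for t, d in zip(targets, diffs))
    den = sum((t - mean_t) ** 2 for t in targets)
    k = num / den
    b = mean_d - k * mean_t
    return k, b
```

Calling fit_kb once with the x-axis pairs and once with the y-axis pairs yields the four conversion coefficients kx, bx, ky, by.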
Based on the above description, it can be determined that a first-order linear relationship is established in this embodiment, so that the mapping relationship between the skin color card and the gray card under different color temperatures can be established, thereby providing an accurate skin color reference point for the FACE-AWB calculation.
And S308, converting the color value of the face image through the conversion coefficient to obtain a reference color value, wherein the reference color value is used for indicating the color value of the ambient light corresponding to the face image in the first coordinate system.
Based on the above description, after the color value (Currentx, Currenty) of the face image in the first coordinate system and the conversion coefficients kx, bx, ky, and by are determined, the color value of the face image may be converted according to the mapping relationship corresponding to the conversion coefficients, so as to obtain the reference color value of the skin color reference point.
In a possible implementation manner, the reference color value, the color value of the face image, and the conversion coefficient may satisfy, for example, the following equation two:

facewhite_x = Currentx + kx × Currentx + bx
facewhite_y = Currenty + ky × Currenty + by

wherein (facewhite_x, facewhite_y) is the reference color value, (Currentx, Currenty) is the color value of the face image, and the conversion coefficient includes kx, bx, ky, and by.
Based on the above-described process, the reference color value of the skin color reference point may be determined, so as to determine the color value of the ambient light corresponding to the face image in the first coordinate system.
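The conversion step can be sketched as follows. It follows one reading of the relationships above, in which the skin-to-gray difference predicted by the first-order mapping is added back to the face skin color value; this is an illustrative assumption rather than the patent's verbatim formula:

```python
def reference_color_value(current_x, current_y, kx, bx, ky, by):
    """Convert the face color value (Currentx, Currenty) into the
    reference color value (facewhite_x, facewhite_y) using the fitted
    conversion coefficients: the predicted difference k * current + b
    is added back to the skin color value."""
    facewhite_x = current_x + kx * current_x + bx
    facewhite_y = current_y + ky * current_y + by
    return facewhite_x, facewhite_y
```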
S309, determining a weighting coefficient corresponding to each image parameter according to the parameter range of each image parameter.
Then, the target color value may be determined according to the image information of the image to be processed and the reference color value, in a possible implementation manner, the image information in this embodiment includes at least one image parameter, which may include, but is not limited to: the area proportion of the face roi area occupying the whole image to be processed, the environmental color temperature estimated from the xy value of the current environment, the brightness value provided by the automatic exposure module, and the scene information input by the AI module, in the actual implementation process, the specific implementation mode of the image parameters included in the image information may be selected according to the actual requirements, which is not limited in this embodiment.
In this embodiment, for example, a respective corresponding segmentation function may be set for each image parameter, where the segmentation function is used to indicate a weighting coefficient determination function corresponding to each parameter range of the image parameter, and then, for example, based on the segmentation function, a weighting coefficient corresponding to each image parameter may be determined according to the parameter range in which each image parameter is located.
In one possible implementation, the implementation of determining the weights for the respective image parameters may be, for example, the implementation described as follows:
for example, in a low color temperature scene, the weighting coefficient may be determined with a decreasing piecewise function, where the lower the color temperature, the lower the FACE-AWB weight;
for example, in a high color temperature scene, the weighting coefficient may be determined with an increasing piecewise function, where the higher the color temperature, the higher the FACE-AWB weight.
The weighting coefficient may also be determined with a decreasing piecewise function, for example, when there are many neutral color points in the current scene (a number greater than or equal to a preset threshold), where the larger the number of neutral color points, the lower the FACE-AWB weight.
For example, when the proportion of the face in the whole picture is greater than or equal to a preset proportion, the weighting coefficient may be determined with an increasing piecewise function, where the higher the face proportion, the higher the FACE-AWB weight.
For example, when the calculated distance from the skin color reference point (facewhite_x, facewhite_y) to the gray area is less than or equal to a preset distance, the weighting coefficient may be determined with an increasing piecewise function, where the smaller the Euclidean distance, the higher the FACE-AWB weight.
For example, the Euclidean distance between the xy point of the basic AWB result and (facewhite_x, facewhite_y) may be calculated, and the weighting coefficient determined with a decreasing piecewise function, where the larger the distance, the smaller the FACE-AWB weight.
The weighting coefficient may also be determined, for example, with an increasing piecewise function with respect to luminance, where the higher the luminance, the higher the FACE-AWB weight.
The incremental piecewise function may be, for example:
the decreasing piecewise function may be, for example:
wherein x is the image parameter, y is the weighting coefficient corresponding to the image parameter, x_LO is the low threshold, and x_HI is the high threshold. It can be understood that the low threshold x_LO and the high threshold x_HI set for different image parameters in this embodiment may differ; the manner of setting the thresholds may be selected according to actual requirements and is not limited in this embodiment.
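The drawings for the increasing and decreasing piecewise functions are not reproduced in this text. A minimal sketch is given below under the assumption that each is a linear ramp clamped to [0, 1] between x_LO and x_HI, which is consistent with the thresholds just described; the function names, thresholds, and sample parameter values are illustrative, not taken from the patent.

```python
def increasing_ramp(x, x_lo, x_hi):
    """Increasing piecewise weight: 0 below x_lo, 1 above x_hi, linear in between."""
    if x <= x_lo:
        return 0.0
    if x >= x_hi:
        return 1.0
    return (x - x_lo) / (x_hi - x_lo)


def decreasing_ramp(x, x_lo, x_hi):
    """Decreasing piecewise weight: 1 below x_lo, 0 above x_hi, linear in between."""
    return 1.0 - increasing_ramp(x, x_lo, x_hi)


# S309/S310 sketch: one weight per image parameter, then the average.
# Thresholds and parameter values here are illustrative only.
weights = [
    increasing_ramp(5200, 3500, 6500),   # color temperature (K): higher -> more weight
    decreasing_ramp(40, 20, 100),        # neutral color points: more -> less weight
    increasing_ramp(0.25, 0.05, 0.40),   # face area ratio: larger -> more weight
]
prop = sum(weights) / len(weights)       # the FACE-AWB weighting coefficient
```

The averaged value then plays the role of the FACE-AWB weighting coefficient determined in S310.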
It can also be understood that, for example, at least one image parameter may first be determined; then, for each image parameter that satisfies one of the conditions described above, the corresponding weighting coefficient is determined according to the corresponding decreasing or increasing piecewise function, so as to obtain the weighting coefficient corresponding to each of the at least one image parameter.
By determining the corresponding weighting coefficient for each kind of image information according to its piecewise function, a different FACE-AWB weighting coefficient can be calculated for each scene.
S310, an average value of the weighting coefficients corresponding to the at least one image parameter is determined as the weighting coefficient.
After the weighting coefficient corresponding to each of the at least one image parameter is determined, the average value of these weighting coefficients may be taken, for example, to obtain the weighting coefficient currently determined for the reference color value.
S311, obtaining a preset adjusting color value, wherein the preset adjusting color value is a color value obtained by automatic white balance processing in a first coordinate system.
In addition, in this embodiment, the color value in the first coordinate system obtained by automatic white balance processing, that is, the preset adjusting color value, may also be obtained. It may be understood, for example, as the xy value calculated by the basic AWB, and the preset adjusting color value may be represented as (AWBx, AWBy).
And S312, determining a target color value according to the reference color value, the weighting coefficient and the preset adjusting color value.
And then, the target color value is determined according to the reference color value, the weighting coefficient, and the preset adjusting color value. In a possible implementation manner, the target color value, the weighting coefficient, and the preset adjusting color value satisfy the following formula one:
wherein (Finalx, Finaly) is the target color value, (facewhite_x, facewhite_y) is the reference color value, prop is the weighting coefficient, and (AWBx, AWBy) is the preset adjusting color value.
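The image of formula one itself is not reproduced in this text. The surrounding description (weighting the skin color reference point against the basic AWB result by the coefficient prop) is consistent with a convex combination per coordinate, sketched below as an assumption rather than the patent's verbatim formula:

```python
def blend_target_xy(face_xy, awb_xy, prop):
    """Assumed form of formula one: Final = prop * facewhite + (1 - prop) * AWB,
    applied independently to the x and y coordinates."""
    fx, fy = face_xy   # (facewhite_x, facewhite_y), the skin color reference point
    ax, ay = awb_xy    # (AWBx, AWBy), the basic AWB result
    return (prop * fx + (1.0 - prop) * ax,
            prop * fy + (1.0 - prop) * ay)


# prop = 1 trusts FACE-AWB fully; prop = 0 falls back to the basic AWB.
final_x, final_y = blend_target_xy((0.33, 0.34), (0.31, 0.32), prop=0.6)
```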
It can be understood that the target color value in this embodiment is actually an xy value of the finally estimated light source point, that is, the target color value is used to indicate a color value of the ambient light corresponding to the image to be processed in the first coordinate system.
And S313, performing white balance processing on the image to be processed through the target color value.
After the target color value is determined, the light source color in the image to be processed can be considered known, and white balance processing is then performed on the image to be processed through the target color value. For example, (Finalx, Finaly) can be converted into a gain value, yielding the final white balance gain value of the system. The determined gain value may then be applied to the image to be processed, so that the influence of the ambient light source in the image is eliminated and the white balance processing is effectively implemented.
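The text does not spell out how the final xy value becomes a gain value. One common route, sketched here purely as an assumption, is xy to XYZ (with Y = 1), then linear sRGB via the standard XYZ-to-sRGB matrix, then per-channel gains normalized to the green channel:

```python
def xy_to_rgb_gains(x, y):
    """Estimate per-channel white-balance gains from a light-source chromaticity (x, y).

    This route (xy -> XYZ with Y = 1 -> linear sRGB -> gains) is one common
    convention, assumed here; the patent's actual conversion is not reproduced.
    """
    # Chromaticity to tristimulus values with unit luminance.
    X, Y, Z = x / y, 1.0, (1.0 - x - y) / y
    # Standard XYZ -> linear sRGB matrix (D65 reference white).
    r = 3.2406 * X - 1.5372 * Y - 0.4986 * Z
    g = -0.9689 * X + 1.8758 * Y + 0.0415 * Z
    b = 0.0557 * X - 0.2040 * Y + 1.0570 * Z
    # Gains scale each channel so the light source renders neutral,
    # normalized so the green gain is 1.
    return g / r, 1.0, g / b


r_gain, g_gain, b_gain = xy_to_rgb_gains(0.3127, 0.3290)  # D65 white point
```

For the D65 white point all three gains come out close to 1, as expected for a neutral light source.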
According to the image processing method provided by the embodiment of the application, a mapping relationship between a gray card and a skin color card is established, and the conversion coefficients in the mapping relationship are obtained by least-squares fitting based on the target skin color value corresponding to each light source and the color value of a preset gray card. The current face skin color value in the face image is then processed according to this mapping relationship based on the conversion coefficients, so as to obtain the color value of the skin color reference point; in this way, an accurate skin color reference point is obtained by fitting the mapping between the skin color card and the gray card under different light sources. Matching is then performed based on the image parameters and multiple preset conditions to determine a weighting coefficient corresponding to each of at least one image parameter, and the average of these weighting coefficients is used as the weighting coefficient of the skin color reference point obtained by the FACE-AWB processing, thereby providing a confidence decision mechanism for FACE-AWB scenes and enhancing the portrait effect of FACE-AWB in different scenes. Finally, a target color value is determined based on the skin color reference point obtained by the FACE-AWB processing, the processing result of the basic AWB, and the obtained weighting coefficient, so that the color value of the light source in the current image to be processed can be accurately determined; a gain value is then determined based on the target color value and applied, effectively implementing the white balance processing.
Based on the foregoing embodiment, a system description is provided below with reference to fig. 6 for a processing procedure of the image processing method provided in this application, and fig. 6 is a schematic diagram illustrating a processing unit division of the image processing method provided in this application.
As shown in fig. 6, the image processing method provided in the embodiment of the present application operates in the xy color coordinate system. The main flow may be divided into four parts, for example: a data acquisition and parameter preparation unit, a calibration calculation unit, a FACE-AWB calculation and processing unit, and a gain value calculation unit.
the data acquisition and parameter preparation unit mainly has the functions of acquiring xy coordinates of a gray card and a skin color card shot in a laboratory environment under different light sources, and the method is used for operating parameters required by the operation of the algorithm, such as the ratio and the like.
The calibration calculation unit is used for establishing a mapping relation between a skin color card xy value and a gray card xy value under different light sources and outputting a mapping equation between the gray card and the skin color card after fitting.
The FACE-AWB calculation and processing unit is configured to obtain an xy value of a current skin color after the FACE is identified, and calculate an xy value of a reference white point (that is, a reference color value introduced in the above embodiment) corresponding to the skin color at the current brightness and color temperature according to a mapping equation obtained by the calibration calculation module. And the weight values of FACE-AWB under different scenes can be calculated according to the brightness, color temperature and scene information input by other modules.
The gain value calculation unit is used to weight the xy value of the skin color reference white point against the xy value of the basic AWB using the weight value calculated by FACE-AWB, obtain the finally processed xy value, and then convert it into a gain value to correct the color cast of the image.
In summary, the image processing method provided in the embodiment of the present application provides a skin color mapping method suitable for the xy coordinate system described herein; both the calibration and the fitting are performed in the xy coordinate system, and both dark and light skin colors satisfy the fitting relationship. The fitting uses a linear function, turning the traditional under-constrained problem into a constrained one without introducing higher-order error. Experiments show that this fitting minimizes the error between facewhite_x and white_x under each light source. Therefore, the error of portrait white balance processing can be effectively reduced and its effect improved. Meanwhile, a decision mechanism for the FACE-AWB weighting coefficient under different scenes is provided; this decision mode applies not only to the scenes described herein but also to the sub-scenes they contain. For example, a low color temperature scene as described herein includes sunrise and sunset, some indoor warm lights, and so on, with a color temperature of 3500 K or less. Based on the image processing method provided by the application, the color cast of human skin in some scenes can be effectively avoided, and the FACE-AWB processing result is more accurate.
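The linear least-squares fit described above (calibrating the k and b of formula two per coordinate, so that the gray-card value is predicted from the skin-card value under each light source) can be sketched as follows; the calibration numbers are hypothetical, and only the shape of the computation follows the text:

```python
import numpy as np

# Hypothetical calibration data, one pair per laboratory light source:
# the averaged skin-color-card x value and the gray-card x value.
# The same procedure is repeated for the y coordinate (k_y, b_y).
skin_x = np.array([0.360, 0.372, 0.385, 0.401, 0.418])   # Currentx per light source
gray_x = np.array([0.305, 0.318, 0.330, 0.345, 0.361])   # white_x per light source

# Least-squares fit of the linear model: white_x ~ k_x * skin_x + b_x.
A = np.stack([skin_x, np.ones_like(skin_x)], axis=1)
(k_x, b_x), *_ = np.linalg.lstsq(A, gray_x, rcond=None)

# At run time, applying the fitted mapping to a newly detected face skin
# color gives the skin color reference point facewhite_x (formula two).
facewhite_x = k_x * 0.390 + b_x
```

Because the model is linear, the fit stays well constrained with only a handful of light sources and introduces no higher-order error, matching the design choice described in the text.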
Fig. 7 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application. As shown in fig. 7, the apparatus 70 includes: an obtaining module 701, a converting module 702 and a processing module 703.
An obtaining module 701, configured to obtain an image to be processed, where the image to be processed includes a face image;
the obtaining module 701 is further configured to obtain a color value and a conversion coefficient of the face image in a first coordinate system;
a conversion module 702, configured to perform conversion processing on a color value of the face image through the conversion coefficient to obtain a reference color value, where the reference color value is used to indicate a color value of ambient light corresponding to the face image in the first coordinate system;
the processing module 703 is configured to determine a target color value according to the image information of the image to be processed and the reference color value, and perform white balance processing on the image to be processed through the target color value, where the target color value is used to indicate a color value of ambient light corresponding to the image to be processed in the first coordinate system.
In one possible design, the processing module 703 is specifically configured to:
determining a weighting coefficient according to the image information of the image to be processed;
acquiring a preset adjusting color value, wherein the preset adjusting color value is a color value in the first coordinate system obtained by automatic white balance processing;
and determining the target color value according to the reference color value, the weighting coefficient and the preset adjusting color value.
In one possible design, the processing module 703 is specifically configured to:
determining a weighting coefficient corresponding to each image parameter according to the parameter range of each image parameter;
determining an average value of the weighting coefficients corresponding to the at least one image parameter as the weighting coefficient.
In one possible design, the color values include a color X value and a color Y value, and the target color value, the weighting coefficient, and the preset adjusting color value satisfy the following formula one:
wherein (Finalx, Finaly) is the target color value, (facewhite_x, facewhite_y) is the reference color value, prop is the weighting coefficient, and (AWBx, AWBy) is the preset adjusting color value.
In one possible design, the reference color value, the color value of the face image, and the conversion coefficient satisfy the following formula two:
wherein (facewhite_x, facewhite_y) is the reference color value, (Currentx, Currenty) is the color value of the face image, and the conversion coefficients include the k_x, the b_x, the k_y, and the b_y.
In one possible design, the obtaining module 701 is specifically configured to:
acquiring a plurality of first reference images, wherein the plurality of first reference images comprise M x N images obtained by respectively photographing N skin color cards under M light rays, and M and N are integers greater than 1;
acquiring a plurality of second reference images, wherein the plurality of second reference images comprise M images obtained by photographing a preset gray card under the M light rays;
and generating the conversion coefficient according to the color values of the first reference images and the color values corresponding to the second reference images.
In one possible design, the obtaining module 701 is specifically configured to:
determining N first reference images corresponding to each light ray in the plurality of first reference images, wherein the N first reference images are images obtained by shooting under the light ray;
determining a color value corresponding to each ray according to the color values of the N first reference images corresponding to each ray;
and generating the conversion coefficient according to the color value corresponding to each ray and the color values corresponding to the plurality of second reference images.
In one possible design, the color values include color X values and color Y values, and the conversion coefficients include conversion coefficients for color X values and conversion coefficients for color Y values; the obtaining module 701 is specifically configured to:
generating the k_x and the b_x according to the color X value corresponding to each light ray and the color X values corresponding to the plurality of second reference images;
generating the k_y and the b_y according to the color Y value corresponding to each light ray and the color Y values corresponding to the plurality of second reference images.
The apparatus provided in this embodiment may be configured to implement the technical solutions of the method embodiments, and the implementation principles and technical effects are similar, which are not described herein again.
Fig. 8 is a schematic diagram of a hardware structure of an image processing apparatus according to an embodiment of the present application, and as shown in fig. 8, an image processing apparatus 80 according to the present embodiment includes: a processor 801 and a memory 802; wherein
A memory 802 for storing computer-executable instructions;
the processor 801 is configured to execute computer-executable instructions stored in the memory to implement the steps performed by the image processing method in the foregoing embodiments. Reference may be made in particular to the description relating to the method embodiments described above.
Alternatively, the memory 802 may be separate or integrated with the processor 801.
When the memory 802 is provided separately, the image processing apparatus further includes a bus 803 for connecting the memory 802 and the processor 801.
An embodiment of the present application further provides a computer-readable storage medium, in which computer-executable instructions are stored, and when a processor executes the computer-executable instructions, the image processing method performed by the above image processing apparatus is implemented.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, and for example, the division of the modules is only one logical division, and other divisions may be realized in practice, for example, a plurality of modules may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed coupling or direct coupling or communication connection between each other may be through some interfaces, indirect coupling or communication connection between devices or modules, and may be in an electrical, mechanical or other form.
The integrated module implemented in the form of a software functional module may be stored in a computer-readable storage medium. The software functional module is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) or a processor (processor) to execute some steps of the methods according to the embodiments of the present application.
It should be understood that the Processor may be a Central Processing Unit (CPU), other general purpose processors, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of a method disclosed in connection with the present invention may be embodied directly in a hardware processor, or in a combination of the hardware and software modules within the processor.
The memory may comprise a high-speed RAM memory, and may further comprise a non-volatile storage NVM, such as at least one disk memory, and may also be a usb disk, a removable hard disk, a read-only memory, a magnetic or optical disk, etc.
The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended ISA (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, the buses in the figures of the present application are not limited to only one bus or one type of bus.
The storage medium may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks. A storage media may be any available media that can be accessed by a general purpose or special purpose computer.
Those of ordinary skill in the art will understand that: all or a portion of the steps of implementing the above-described method embodiments may be performed by hardware associated with program instructions. The program may be stored in a computer-readable storage medium. When executed, the program performs steps comprising the method embodiments described above; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.
Claims (10)
1. An image processing method, comprising:
acquiring an image to be processed, wherein the image to be processed comprises a face image;
acquiring a color value and a conversion coefficient of the face image in a first coordinate system;
converting the color value of the face image through the conversion coefficient to obtain a reference color value, wherein the reference color value is used for indicating the color value of the ambient light corresponding to the face image in the first coordinate system;
determining a target color value according to the image information of the image to be processed and the reference color value, and performing white balance processing on the image to be processed through the target color value, wherein the target color value is used for indicating a color value of ambient light corresponding to the image to be processed in the first coordinate system;
determining a target color value according to the image information of the image to be processed and the reference color value, wherein the determining comprises the following steps:
determining a weighting coefficient according to the image information of the image to be processed;
acquiring a preset adjusting color value, wherein the preset adjusting color value is a color value in the first coordinate system obtained by automatic white balance processing;
and determining the target color value according to the reference color value, the weighting coefficient and the preset adjusting color value.
2. The method of claim 1, wherein the image information comprises at least one image parameter; determining a weighting coefficient according to the image information of the image to be processed, including:
determining a weighting coefficient corresponding to each image parameter according to the parameter range of each image parameter;
determining an average value of the weighting coefficients corresponding to the at least one image parameter as the weighting coefficient.
3. The method of claim 2, wherein the color values comprise a color X value and a color Y value; and the target color value, the weighting coefficient, and the preset adjusting color value satisfy the following formula one:
4. The method according to any one of claims 1-3, wherein the reference color value, the color value of the face image and the conversion coefficient satisfy the following formula two:
5. The method of claim 4, wherein obtaining the conversion coefficients comprises:
acquiring a plurality of first reference images, wherein the plurality of first reference images comprise M x N images obtained by respectively photographing N skin color cards under M kinds of light, and M and N are integers which are larger than 1 respectively;
acquiring a plurality of second reference images, wherein the plurality of second reference images comprise M images obtained by photographing a preset gray card under the M light rays;
and generating the conversion coefficient according to the color values of the first reference images and the color values corresponding to the second reference images.
6. The method of claim 5, wherein generating the transform coefficients according to the color values of the first reference images and the color values of the second reference images comprises:
determining N first reference images corresponding to each light ray in the plurality of first reference images, wherein the N first reference images are images obtained by shooting under the light ray;
determining a color value corresponding to each ray according to the color values of the N first reference images corresponding to each ray;
and generating the conversion coefficient according to the color value corresponding to each ray and the color values corresponding to the plurality of second reference images.
7. The method of claim 6, wherein the color values comprise color X values and color Y values, and the transform coefficients comprise transform coefficients for color X values and transform coefficients for color Y values; generating the conversion coefficient according to the color value corresponding to each ray and the color values corresponding to the plurality of second reference images, including:
generating the k_x and the b_x according to the color X value corresponding to each light ray and the color X values corresponding to the plurality of second reference images; and
generating the k_y and the b_y according to the color Y value corresponding to each light ray and the color Y values corresponding to the plurality of second reference images.
8. An image processing apparatus characterized by comprising:
the system comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring an image to be processed, and the image to be processed comprises a face image;
the acquisition module is further used for acquiring a color value and a conversion coefficient of the face image in a first coordinate system;
the conversion module is used for converting the color value of the face image through the conversion coefficient to obtain a reference color value, and the reference color value is used for indicating the color value of the ambient light corresponding to the face image in the first coordinate system;
the processing module is configured to determine a target color value according to the image information of the image to be processed and the reference color value, and perform white balance processing on the image to be processed through the target color value, where the target color value is used to indicate a color value of ambient light corresponding to the image to be processed in the first coordinate system;
wherein the processing module is specifically configured to:
determining a weighting coefficient according to the image information of the image to be processed;
acquiring a preset adjusting color value, wherein the preset adjusting color value is a color value obtained by automatic white balance processing in the first coordinate system;
and determining the target color value according to the reference color value, the weighting coefficient and the preset adjusting color value.
9. An image processing apparatus characterized by comprising:
a memory for storing a program;
a processor for executing the program stored by the memory, the processor being configured to perform the method of any of claims 1 to 7 when the program is executed.
10. A computer-readable storage medium comprising instructions which, when executed on a computer, cause the computer to perform the method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110971031.1A CN113676715B (en) | 2021-08-23 | 2021-08-23 | Image processing method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113676715A CN113676715A (en) | 2021-11-19 |
CN113676715B true CN113676715B (en) | 2022-12-06 |
Family
ID=78545382
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110971031.1A Active CN113676715B (en) | 2021-08-23 | 2021-08-23 | Image processing method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113676715B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114630095B (en) * | 2022-03-15 | 2024-02-09 | 锐迪科创微电子(北京)有限公司 | Automatic white balance method and device for target scene image and terminal |
CN115460391B (en) * | 2022-09-13 | 2024-04-16 | 浙江大华技术股份有限公司 | Image simulation method and device, storage medium and electronic device |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1977542B (en) * | 2004-06-30 | 2010-09-29 | 皇家飞利浦电子股份有限公司 | Dominant color extraction using perceptual rules to produce ambient light derived from video content |
JP4501634B2 (en) * | 2004-10-29 | 2010-07-14 | 富士フイルム株式会社 | Matrix coefficient determination method and image input apparatus |
JP6423625B2 (en) * | 2014-06-18 | 2018-11-14 | キヤノン株式会社 | Image processing apparatus and image processing method |
CN105894458A (en) * | 2015-12-08 | 2016-08-24 | 乐视移动智能信息技术(北京)有限公司 | Processing method and device of image with human face |
CN108024055B (en) * | 2017-11-03 | 2019-09-17 | Oppo广东移动通信有限公司 | Method, apparatus, mobile terminal and the storage medium of white balance processing |
CN112887582A (en) * | 2019-11-29 | 2021-06-01 | 深圳市海思半导体有限公司 | Image color processing method and device and related equipment |
2021-08-23: CN CN202110971031.1A patent CN113676715B (active)
Also Published As
Publication number | Publication date |
---|---|
CN113676715A (en) | 2021-11-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10542243B2 (en) | Method and system of light source estimation for image processing | |
CN108024055B (en) | Method, apparatus, mobile terminal and the storage medium of white balance processing | |
CN108111772B (en) | Shooting method and terminal | |
EP3149931B1 (en) | Image processing system, imaging apparatus, image processing method, and computer-readable storage medium | |
EP3039864B1 (en) | Automatic white balancing with skin tone correction for image processing | |
JP5971207B2 (en) | Image adjustment apparatus, image adjustment method, and program | |
EP3648459B1 (en) | White balance adjustment method and apparatus, camera and medium | |
CN104883504B (en) | Open the method and device of high dynamic range HDR functions on intelligent terminal | |
CN113676715B (en) | Image processing method and device | |
CN109844804B (en) | Image detection method, device and terminal | |
CN109688396B (en) | Image white balance processing method and device and terminal equipment | |
CN107317967B (en) | Image processing method, image processing device, mobile terminal and computer readable storage medium | |
TW201944774A (en) | White balance calibration method based on skin color data and image processing apparatus thereof | |
CN107682611B (en) | Focusing method and device, computer readable storage medium and electronic equipment | |
KR20150128168A (en) | White balancing device and white balancing method thereof | |
CN109300186B (en) | Image processing method and device, storage medium and electronic equipment | |
US11805326B2 (en) | Image processing apparatus, control method thereof, and storage medium | |
CN109345602A (en) | Image processing method and device, storage medium, electronic equipment | |
CN110460783A (en) | Array camera module and its image processing system, image processing method and electronic equipment | |
CN114945087A (en) | Image processing method, device and equipment based on human face features and storage medium | |
CN109447925B (en) | Image processing method and device, storage medium and electronic equipment | |
JP6725105B2 (en) | Imaging device and image processing method | |
US12052516B2 (en) | Flexible region of interest color processing for cameras | |
CN116668838B (en) | Image processing method and electronic equipment | |
CN115866413A (en) | Image white balance adjusting method and device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||