CN115118947B - Image processing method and device, electronic equipment and storage medium - Google Patents


Publication number
CN115118947B
Authority
CN
China
Prior art keywords: color, rectangle, weight, determining, statistic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110310827.2A
Other languages
Chinese (zh)
Other versions
CN115118947A (en)
Inventor
林威丞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd filed Critical Beijing Xiaomi Mobile Software Co Ltd
Priority to CN202110310827.2A priority Critical patent/CN115118947B/en
Publication of CN115118947A publication Critical patent/CN115118947A/en
Application granted granted Critical
Publication of CN115118947B publication Critical patent/CN115118947B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure relates to an image processing method and apparatus, an electronic device, and a storage medium. The method includes: preprocessing an original image acquired by an image acquisition device to generate a corresponding color statistics chart, where the color statistics chart includes a plurality of color statistic points and a color value corresponding to each color statistic point; when the original image contains faces, determining the positions of a plurality of weight rectangles corresponding to each face frame contained in the original image and the reference weight corresponding to each weight rectangle; determining a first weight value corresponding to each color statistic point according to the positional relationship between each color statistic point and each weight rectangle and the reference weight corresponding to each weight rectangle; determining a white balance gain value corresponding to the original image according to the first weight value and the color value corresponding to each color statistic point; and performing white balance processing on the original image based on the white balance gain value. The accuracy of white balance processing can thereby be improved.

Description

Image processing method and device, electronic equipment and storage medium
Technical Field
The disclosure relates to the field of computer technology, and in particular, to an image processing method, an image processing device, an electronic device and a storage medium.
Background
White balance is a central concept in the imaging field: it addresses a series of color reproduction and hue processing problems, and it is one of the important indexes for evaluating color. White balance itself is an abstract concept; the process of adjusting it is called white balance adjustment.
In the related art, an automatic white balance (Automatic White Balance, AWB) method is generally used for white balance adjustment. In actual use, however, AWB's judgment of the light source color temperature is easily affected by the tone of human faces, so AWB processing may produce large errors and low accuracy, which degrades the user experience.
Disclosure of Invention
The present disclosure aims to solve, at least to some extent, one of the technical problems in the related art.
An embodiment of a first aspect of the present disclosure provides an image processing method, including:
preprocessing an original image acquired by an image acquisition device to generate a corresponding color statistical image, wherein the color statistical image comprises a plurality of color statistical points and color values corresponding to each color statistical point;
under the condition that the original image contains faces, determining positions of a plurality of weight rectangles corresponding to each face frame contained in the original image and reference weights corresponding to the weight rectangles, wherein the center point and the aspect ratio of each weight rectangle are respectively the same as the center point and the aspect ratio of the corresponding face frame;
determining a first weight value corresponding to each color statistic point according to the positional relationship between each color statistic point and each weight rectangle and the reference weight corresponding to each weight rectangle;
determining a white balance gain value corresponding to the original image according to the first weight value corresponding to each color statistic point and the color value corresponding to each color statistic point;
and performing white balance processing on the original image based on the white balance gain value.
An embodiment of a second aspect of the present disclosure provides an image processing apparatus, including:
the generation module is used for preprocessing the original image acquired by the image acquisition device to generate a corresponding color statistical image, wherein the color statistical image comprises a plurality of color statistical points and color values corresponding to each color statistical point;
the first determining module is used for determining, when the original image contains faces, the positions of a plurality of weight rectangles corresponding to each face frame in the original image and the reference weight corresponding to each weight rectangle, wherein the center point and the aspect ratio of each weight rectangle are respectively the same as the center point and the aspect ratio of the corresponding face frame;
the second determining module is used for determining a first weight value corresponding to each color statistic point according to the positional relationship between each color statistic point and each weight rectangle and the reference weight corresponding to each weight rectangle;
the third determining module is used for determining a white balance gain value corresponding to the original image according to the first weight value corresponding to each color statistic point and the color value corresponding to each color statistic point;
and the processing module is used for carrying out white balance processing on the original image based on the white balance gain value.
An embodiment of a third aspect of the present disclosure provides an electronic device, including: a processor; a memory for storing executable instructions of the processor; wherein the processor is configured to invoke and execute the executable instructions stored in the memory to implement the image processing method according to the embodiment of the first aspect of the present disclosure.
An embodiment of a fourth aspect of the present disclosure provides a non-transitory computer-readable storage medium storing instructions which, when executed by a processor of an electronic device, enable the electronic device to perform the image processing method provided by the embodiment of the first aspect of the present disclosure.
An embodiment of a fifth aspect of the present disclosure provides a computer program product which, when executed by a processor of an electronic device, enables the electronic device to perform the image processing method provided by the embodiment of the first aspect of the present disclosure.
According to the image processing method and apparatus, electronic device, and storage medium provided by the disclosure, an original image acquired by an image acquisition device is first preprocessed to generate a corresponding color statistics chart. When the original image contains faces, the positions of a plurality of weight rectangles corresponding to each face frame in the original image and the reference weight corresponding to each weight rectangle are determined. A first weight value corresponding to each color statistic point is then determined according to the positional relationship between that point and each weight rectangle and the reference weight of each weight rectangle, and a white balance gain value corresponding to the original image is determined according to the first weight value and the color value of each color statistic point, so that white balance processing is performed on the original image based on the white balance gain value. When white balance processing is performed on an original image containing faces, the first weight value of each color statistic point is determined according to the position of that point relative to the face frames before the white balance gain is computed. The influence of face skin tone on the white balance gain can thus be effectively reduced, the accuracy of white balance processing is improved, and a good user experience can be provided.
Additional aspects and advantages of the disclosure will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the disclosure.
Drawings
FIG. 1 is a flow chart of a method of processing an image according to an embodiment of the present disclosure;
FIG. 2 is a flow chart of a method of processing an image according to an embodiment of the present disclosure;
FIG. 3 is a flow chart of a method of processing an image according to an embodiment of the present disclosure;
FIG. 4 is a flow chart of a method of processing an image according to an embodiment of the present disclosure;
fig. 5 is a schematic structural view of an image processing apparatus according to an embodiment of the present disclosure;
fig. 6 is a schematic structural view of an electronic device according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are exemplary and intended for the purpose of explaining the present disclosure and are not to be construed as limiting the present disclosure.
Generally, ambient light sources can be broadly classified into high, medium, and low color temperature light sources, and an image acquired under each type of light source may be affected by that light source. For example, an image collected under a high color temperature light source is biased toward blue, an image collected under a medium color temperature light source is biased toward white, and an image collected under a low color temperature light source is biased toward yellow.
The three optical primary colors are red (Red, R), green (Green, G), and blue (Blue, B). AWB can generate appropriate RGB gain values according to the type of light source at the time of shooting and adjust the three primary color components of the captured picture so that a white object appears white in the picture, as seen by human eyes.
Generally speaking, face skin tone is biased toward yellow, and when a large area of face skin appears in an image, AWB's calculation of the light source color temperature is easily influenced by that skin tone. For example, under a high color temperature light source, AWB may misinterpret face skin color as a medium or low color temperature light source reflected by a white object, mistakenly conclude that it is in a medium or low color temperature environment, and consequently generate a larger B gain value, causing the image to appear bluish.
The image processing method provided by the disclosure is used for reducing the influence of the human face complexion on the white balance gain and improving the accuracy of white balance processing, so that good experience can be given to users.
The following describes an image processing method, apparatus, electronic device, and storage medium of the embodiments of the present disclosure with reference to the accompanying drawings.
The image processing method according to the embodiment of the present disclosure may be performed by the image processing apparatus provided by the embodiment of the present disclosure, and the apparatus may be configured in an electronic device.
Fig. 1 is a flowchart illustrating an image processing method according to an embodiment of the present disclosure.
As shown in fig. 1, the image processing method may include the steps of:
step 101, preprocessing an original image acquired by an image acquisition device to generate a corresponding color statistical chart, wherein the color statistical chart comprises a plurality of color statistical points and color values corresponding to each color statistical point.
The image capturing device may be any device with a photographing function, such as a camera, a video camera, a scanner, a mobile phone, a tablet computer, or the like, or may be any device that captures an image into a computer through a video capturing card or the like, which is not limited in this disclosure.
It is understood that the pixel values of adjacent pixels in the original image may differ only slightly. If each pixel were processed individually, the results for pixels at adjacent positions would likewise differ only slightly; therefore, to reduce the amount of data processing and improve efficiency, the original image may be uniformly divided into m×n equal-area blocks. The values of m and n may be preset, or may be adjusted according to the pixel size of the original image, which is not limited in the present disclosure.
In addition, by preprocessing the original image acquired by the image acquisition device, the m×n blocks can be converted into a color space, yielding the color statistic points and the corresponding color values in the color statistics chart.
It is understood that there may be a plurality of color spaces, for example, a YUV color space, an RgBg color space, or a color difference component space, such as YCbCr, YPbPr, and the like, and accordingly, the color statistics may also have a plurality of representations, which is not limited in this disclosure.
For ease of illustration, the RgBg color statistics are presented in this disclosure as an example.
For example, the original image may be divided into m×n blocks, and for each block the average value Ravg of the R component, the average value Gavg of the G component, and the average value Bavg of the B component are determined. Converting the m×n blocks into the color space then yields m×n color statistic points in the RgBg color statistics chart and the color value (Rg, Bg) corresponding to each point, where Rg and Bg satisfy: Rg = Ravg/Gavg, Bg = Bavg/Gavg.
The color statistics, color statistics points, color values, and the like are merely illustrative, and are not intended to limit the determination, representation, and the like of color values in the embodiments of the present disclosure.
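As an illustration of the block-averaging preprocessing described above, the following sketch in plain Python (function name and data layout are illustrative assumptions; the disclosure does not prescribe an implementation) divides an image into m×n equal blocks and derives the (Rg, Bg) color value of each color statistic point:

```python
def color_statistics(image, m, n):
    """image: H x W list of (R, G, B) pixel tuples. Divide the image into
    m x n equal-area blocks and return an m x n grid of (Rg, Bg) color
    values, one per color statistic point, with Rg = Ravg/Gavg and
    Bg = Bavg/Gavg as in the example above."""
    h, w = len(image), len(image[0])
    bh, bw = h // m, w // n  # block height and width (edges truncated for simplicity)
    stats = []
    for i in range(m):
        row = []
        for j in range(n):
            pixels = [image[y][x]
                      for y in range(i * bh, (i + 1) * bh)
                      for x in range(j * bw, (j + 1) * bw)]
            n_px = len(pixels)
            r_avg = sum(p[0] for p in pixels) / n_px
            g_avg = sum(p[1] for p in pixels) / n_px
            b_avg = sum(p[2] for p in pixels) / n_px
            row.append((r_avg / g_avg, b_avg / g_avg))
        stats.append(row)
    return stats
```

For a uniform image with pixels (100, 200, 50), every color statistic point gets the color value (0.5, 0.25).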
Step 102, determining the positions of a plurality of weight rectangles corresponding to each face frame and the reference weight corresponding to each weight rectangle in the original image when the face is contained in the original image.
The center point and the aspect ratio of each weight rectangle are respectively the same as the center point and the aspect ratio of the corresponding face frame.
In addition, the same original image may contain a plurality of faces. When the original image is detected, the number of weight rectangles corresponding to each face frame, and the reference weight corresponding to each weight rectangle, should take the same values across all faces.
For example, when an original image is detected and two face frames appear simultaneously, the first face frame corresponds to 3 weight rectangles whose reference weights are 1, 5, and 15, and the second face frame likewise corresponds to 3 weight rectangles with reference weights 1, 5, and 15. It is understood that the aspect ratio of each weight rectangle is the same as that of the corresponding face frame, so the dimensions of the weight rectangles may differ with the size of the face frame, which is not limited in this disclosure.
In addition, there may be various ways to determine the positions of the plurality of weight rectangles corresponding to the face frames included in the original image.
Optionally, the ambient brightness corresponding to the original image may be determined first; the number of weight rectangles corresponding to the original image and the weight value corresponding to each weight rectangle are then determined according to the ambient brightness; and based on that number, the positions of the weight rectangles corresponding to each face frame in the original image are determined.
For example, the relation between the ambient brightness on the one hand and the number of weight rectangles and the weight value of each rectangle on the other can be set in advance, so that, given the ambient brightness of the original image, the number of weight rectangles and their weight values can be determined quickly.
For example, according to the ambient brightness, it is determined that the number of weight rectangles corresponding to the original image is 4 and the weight values corresponding to the weight rectangles are 20, 15, 10, and 1 respectively. Then, for each face frame in the original image, the center points of its 4 weight rectangles are made to coincide with the center point of the face frame, and the position of each weight rectangle in the original image can be determined from its size.
Alternatively, the illuminance corresponding to the original image may be determined first; the number of weight rectangles corresponding to each face frame and the weight value corresponding to each weight rectangle are then determined according to the illuminance; and after the weight rectangles are centered on the face frame, their positions in the original image can be determined from their sizes.
It should be noted that the foregoing examples are only illustrative, and are not intended to limit the number, size, position, etc. of the weight rectangles corresponding to each face frame in the embodiments of the present disclosure.
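Constructing weight rectangles that share the face frame's center point and aspect ratio can be sketched as follows. The center-format box representation and the per-rectangle scale factors are assumptions made for illustration; the disclosure only fixes the center point and aspect ratio, not the rectangle sizes:

```python
def weight_rectangles(face_box, scales):
    """face_box: (cx, cy, w, h) face frame in center format.
    scales: hypothetical per-rectangle scale factors, e.g. (0.5, 0.75, 1.0).
    Returns rectangles as (x0, y0, x1, y1) tuples; each shares the face
    frame's center point, and rw/rh == w/h preserves its aspect ratio."""
    cx, cy, w, h = face_box
    rects = []
    for s in scales:
        rw, rh = w * s, h * s
        rects.append((cx - rw / 2, cy - rh / 2, cx + rw / 2, cy + rh / 2))
    return rects
```

With scales (0.5, 1.0) and a 40×20 face frame centered at (100, 100), this yields rectangles (90, 95, 110, 105) and (80, 90, 120, 110), both with the face frame's 2:1 aspect ratio.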
Step 103, determining a first weight value corresponding to each color statistic point according to the position relation between each color statistic point and each weight rectangle and the reference weight corresponding to each weight rectangle.
The positional relationship between a color statistic point and the weight rectangles can take various forms: a point may be located inside all of the weight rectangles, inside only some of them, and so on, which is not limited in this disclosure.
For example, if a color statistic point is located inside only one weight rectangle, the reference weight of that rectangle is the first weight value of the point.
Alternatively, when a color statistic point is located in at least two weight rectangles, the largest of the reference weights of those rectangles may be determined as the first weight value of the point.
For example, if color statistic point A is located in weight rectangle 1 and weight rectangle 2, whose reference weights are 15 and 10 respectively, the first weight value of point A can be determined to be 15.
Alternatively, when a color statistic point is located in a plurality of weight rectangles, the average of the reference weights of those rectangles may be determined as the first weight value of the point.
For example, if color statistic point A is located in weight rectangles 1, 2, and 3, whose reference weights are 15, 10, and 5 respectively, then the average of these reference weights, 10, is the first weight value of point A.
It should be noted that the foregoing examples are only illustrative, and should not be taken as limiting the determination of the first weight value corresponding to any color statistic point in the embodiments of the present disclosure.
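The "largest reference weight among containing rectangles" rule above can be sketched as follows; the point-in-rectangle test and the default weight for points outside every rectangle are illustrative assumptions, not values fixed by the disclosure:

```python
def first_weight(point, rects, ref_weights, default=1):
    """point: (x, y) position of a color statistic point in the image plane.
    rects: list of (x0, y0, x1, y1) weight rectangles.
    ref_weights: reference weight of each rectangle, same order as rects.
    Returns the largest reference weight among the rectangles containing
    the point; `default` is a hypothetical weight for points that lie
    outside every weight rectangle."""
    x, y = point
    hits = [w for (x0, y0, x1, y1), w in zip(rects, ref_weights)
            if x0 <= x <= x1 and y0 <= y <= y1]
    return max(hits) if hits else default
```

For the example above, a point inside both rectangle 1 (weight 15) and rectangle 2 (weight 10) receives the first weight value 15.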
In the embodiment of the disclosure, by utilizing the reference weights corresponding to the weight rectangles, the corresponding first weight values can be assigned to the color statistic points in the weight rectangles, so that the color statistic points in different weight rectangles in the image have the corresponding weight values, and the condition is provided for subsequent processing.
Step 104, determining a white balance gain value corresponding to the original image according to the first weight value corresponding to each color statistic point and the color value corresponding to each color statistic point.
The color value corresponding to each color statistic point can be updated according to the first weight value corresponding to each color statistic point, and then the white balance gain value corresponding to the original image is determined according to the updated color value.
For example, in the RgBg color statistics chart, the first weight value corresponding to color statistic point A is 10 and its color value is (Rg1, Bg1), the first weight value corresponding to color statistic point B is 5 and its color value is (Rg2, Bg2), and the first weight value corresponding to color statistic point C is 15 and its color value is (Rg3, Bg3). The updated color values over the color statistic points A, B, C are thereby determined:
Rg_avg = (10*Rg1 + 5*Rg2 + 15*Rg3) / (10 + 5 + 15)
Bg_avg = (10*Bg1 + 5*Bg2 + 15*Bg3) / (10 + 5 + 15)
The corresponding white balance gains can then be determined: R_gain = 1/Rg_avg, G_gain = 1.0, B_gain = 1/Bg_avg.
It can be understood that, since the human eye is most sensitive to the green wavelengths in the spectrum (480 nm–600 nm) and the Bayer array collects the largest number of green pixels, the gain value of the green component is usually fixed, and the gain values of the red component and the blue component are adjusted relative to it.
It should be noted that the foregoing examples are only illustrative, and should not be taken as limiting the color statistics, color statistics points, color values, etc. in the embodiments of the present disclosure.
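A minimal sketch of the weighted-average gain computation in the worked example above (the function name and data layout are assumptions for illustration):

```python
def white_balance_gains(weights, colors):
    """weights: first weight value of each color statistic point.
    colors: (Rg, Bg) color value of each color statistic point.
    Returns (R_gain, G_gain, B_gain): the green gain is fixed at 1.0,
    and the red/blue gains are the reciprocals of the weighted averages,
    as in the worked example above."""
    total = sum(weights)
    rg_avg = sum(w * rg for w, (rg, _) in zip(weights, colors)) / total
    bg_avg = sum(w * bg for w, (_, bg) in zip(weights, colors)) / total
    return 1.0 / rg_avg, 1.0, 1.0 / bg_avg
```

For instance, with weights (10, 5, 15) and every color value equal to (0.5, 0.25), the weighted averages are 0.5 and 0.25, giving gains (2.0, 1.0, 4.0).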
In the embodiment of the disclosure, the first weight value corresponding to each color statistic point refers to the reference weight of each weight rectangle, so that the determined first weight value corresponding to each color statistic point is more reasonable and accurate, and the white balance gain value determined by using the first weight value is more accurate and reliable.
Step 105, performing white balance processing on the original image based on the white balance gain value.
Alternatively, each gain value may be multiplied by each color component corresponding to the original image, so as to implement white balance processing on the original image.
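Applying each gain value to the corresponding color component, as described above, can be sketched as follows (8-bit RGB pixels and the clipping/rounding policy are illustrative assumptions):

```python
def apply_white_balance(image, gains):
    """image: H x W list of (R, G, B) pixel tuples in the 0-255 range.
    gains: (R_gain, G_gain, B_gain). Multiplies each color component of
    every pixel by its gain, clipping the result to the 8-bit range."""
    r_gain, g_gain, b_gain = gains

    def clip(v):
        return max(0, min(255, round(v)))

    return [[(clip(r * r_gain), clip(g * g_gain), clip(b * b_gain))
             for (r, g, b) in row] for row in image]
```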
According to the embodiment of the disclosure, an original image acquired by an image acquisition device is preprocessed to generate a corresponding color statistics chart. When the original image contains faces, the positions of a plurality of weight rectangles corresponding to each face frame in the original image and the reference weight corresponding to each weight rectangle are determined; a first weight value corresponding to each color statistic point is then determined according to the positional relationship between that point and each weight rectangle and the reference weight of each weight rectangle; and a white balance gain value corresponding to the original image is determined according to the first weight value and the color value of each color statistic point, so that white balance processing is performed on the original image based on the white balance gain value. When white balance processing is performed on an original image containing faces, the first weight value of each color statistic point is determined according to the position of that point relative to the face frames before the white balance gain is computed. The influence of face skin tone on the white balance gain can thus be effectively reduced, the accuracy of white balance processing is improved, and a good user experience can be provided.
In the above embodiment, the original image is preprocessed to generate the corresponding color statistics chart, and when the original image contains faces, the first weight value of each color statistic point can be adjusted according to the plurality of weight rectangles corresponding to each face frame, so that the influence of face skin tone and the like on the white balance gain is effectively reduced and the accuracy of white balance processing is improved. In one possible implementation, a first segmentation granularity and a second segmentation granularity may be determined according to the performance parameters of the image acquisition device under different light sources and the current ambient brightness, and the color statistics chart may then be divided into a plurality of color statistic rectangles. This is further described below with reference to fig. 2.
Fig. 2 is a flowchart illustrating an image processing method according to an embodiment of the disclosure. As shown in fig. 2, the image processing method may include the steps of:
step 201, preprocessing an original image acquired by an image acquisition device to generate a corresponding color statistical chart, wherein the color statistical chart comprises a plurality of color statistical points and color values corresponding to each color statistical point.
Step 202, determining the positions of a plurality of weight rectangles corresponding to each face frame and the reference weight corresponding to each weight rectangle in the original image when the original image contains the face.
The center point and the aspect ratio of each weight rectangle are respectively the same as the center point and the aspect ratio of the corresponding face frame.
Step 203, determining a first weight value corresponding to each color statistic point according to the positional relationship between each color statistic point and each weight rectangle, and the reference weight corresponding to each weight rectangle.
The specific implementation manner of the steps 201 to 203 may refer to the descriptions of other embodiments of the present disclosure, and will not be repeated here.
Step 204, determining a first segmentation granularity and a second segmentation granularity according to performance parameters of the image acquisition device under different light sources and current environment brightness, wherein the performance parameters are used for representing color values of the image acquired by the image acquisition device in different dimensions.
For different types of light sources, the performance parameters of the images acquired by the image acquisition device may be the same or different, which is not limited by the present disclosure.
Optionally, the correspondence between the light source type and ambient brightness on the one hand and each segmentation granularity on the other can be set in advance, so that the first segmentation granularity and the second segmentation granularity can be determined by looking up this correspondence according to the light source type and the ambient brightness.
Alternatively, the first color value, in a first dimension, and the second color value, in a second dimension, of an image collected by the image acquisition device under a first specified light source may be determined first; then the third color value, in the first dimension, and the fourth color value, in the second dimension, of an image collected under a second specified light source may be determined. A reference distance value is determined from the first, second, third, and fourth color values; a first coefficient and a second coefficient are determined according to the current ambient brightness; the first segmentation granularity is then determined based on the reference distance value and the first coefficient, and the second segmentation granularity based on the reference distance value and the second coefficient.
The first designated light source and the second designated light source may be any light source, which is not limited in this disclosure.
In addition, the types of the color values corresponding to the collected images may also be different for different color spaces, which is not limited by the disclosure.
In addition, there may be various ways to determine the reference distance value, for example, euclidean distance formula, manhattan distance formula, or the like may be used, which is not limited by the present disclosure.
It can be understood that the corresponding relation between the ambient brightness and the first coefficient and the second coefficient can be set in advance, so that the corresponding first coefficient and second coefficient can be determined by searching the corresponding relation according to the current ambient brightness. Alternatively, the first coefficient, the second coefficient, and the like may be determined according to the ambient brightness using a set formula or the like, which is not limited in the present disclosure.
In addition, when the first division granularity and the second division granularity are determined according to the reference distance, the first coefficient and the second coefficient, various modes are available.
For example, a product value obtained by multiplying the reference distance value by the first coefficient may be determined as the first division granularity, and a product value obtained by multiplying the reference distance value by the second coefficient may be determined as the second division granularity. Alternatively, a quotient obtained by dividing the reference distance value by the first coefficient may be determined as the first division granularity, and correspondingly, a quotient obtained by dividing the reference distance value by the second coefficient may be determined as the second division granularity, which is not limited in the present disclosure.
For example, suppose the image capturing device captures an image 1 under the first specified light source, and the average values of the R component, G component and B component of image 1 are Ravg1, Gavg1 and Bavg1, respectively. The first color value in the first dimension may then be expressed as rg1 = Ravg1/Gavg1, and the second color value in the second dimension as bg1 = Bavg1/Gavg1. Likewise, the image capturing device captures an image 2 under the second specified light source, and the average values of the R component, G component and B component of image 2 are Ravg2, Gavg2 and Bavg2, respectively, so the third color value in the first dimension may be expressed as rg2 = Ravg2/Gavg2, and the fourth color value in the second dimension as bg2 = Bavg2/Gavg2. Taking the Euclidean distance as an example, the reference distance value may be expressed as: d = sqrt((rg1 - rg2)^2 + (bg1 - bg2)^2). Then, by looking up the correspondence between the ambient brightness and the coefficients, the first coefficient and the second coefficient can be determined, so that the first segmentation granularity can be expressed as d × first coefficient, and the second segmentation granularity as d × second coefficient, thereby determining the first segmentation granularity and the second segmentation granularity.
The foregoing examples are merely illustrative, and are not intended to limit the manner in which the first division granularity and the second division granularity are determined in the embodiments of the present disclosure.
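Purely as an illustrative sketch of the example above (the function and parameter names are assumptions, not part of the disclosure), the granularity computation can be written as:

```python
import math

def segmentation_granularities(ravg1, gavg1, bavg1,
                               ravg2, gavg2, bavg2,
                               first_coeff, second_coeff):
    """Derive the two segmentation granularities from the mean R/G/B values
    of images captured under the first and second specified light sources."""
    rg1, bg1 = ravg1 / gavg1, bavg1 / gavg1  # first and second color values
    rg2, bg2 = ravg2 / gavg2, bavg2 / gavg2  # third and fourth color values
    # Reference distance value: Euclidean distance in the (R/G, B/G) plane
    d = math.sqrt((rg1 - rg2) ** 2 + (bg1 - bg2) ** 2)
    # One of the options described above: multiply the reference distance
    # by the brightness-dependent coefficients
    return d * first_coeff, d * second_coeff
```

Here the multiplication variant is shown; the division variant mentioned above would instead return d / first_coeff and d / second_coeff.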
In step 205, the color statistics graph is partitioned based on the first partition granularity and the second partition granularity to determine a plurality of color statistics rectangles included in the color statistics graph.
The size of each color statistics rectangle may be the first segmentation granularity by the second segmentation granularity, and the color statistics map may be uniformly divided into a plurality of color statistics rectangles of this size.
Step 206, determining a second weight value corresponding to each color statistics rectangle according to the first weight value corresponding to each color statistics point and the positional relationship between each color statistics point and the color statistics rectangle.
The second weight value corresponding to each color statistics rectangle can be determined in various manners.
Optionally, the color statistics points contained in each color statistics rectangle may be determined according to the positional relationship between each color statistics point and the color statistics rectangle, and then the sum of the first weight values corresponding to each color statistics point contained in each color statistics rectangle is determined as the second weight value corresponding to each color statistics rectangle.
For example, if the color statistic points A, B and C are all located inside the color statistics rectangle 1, and the first weight values corresponding to A, B and C are 10, 20 and 30, respectively, the second weight value corresponding to the color statistics rectangle 1 may be the sum of the first weight values corresponding to A, B and C, namely 60.
It should be noted that the foregoing examples are only illustrative, and are not intended to limit the positional relationship between the color statistic points and the color statistics rectangles, the second weight values corresponding to the color statistics rectangles, and the like in the embodiments of the present disclosure.
Alternatively, the color statistics points included in each color statistics rectangle may be determined first, and then the largest first weight value corresponding to each color statistics point included in each color statistics rectangle is determined as the second weight value corresponding to each color statistics rectangle.
For example, the color statistics points A, B, C are all located inside the color statistics rectangle 1, and the first weight values corresponding to the color statistics points A, B, C are respectively 50, 30, and 10, and the second weight values corresponding to the color statistics rectangle 1 may be: 50.
or, first weight values corresponding to the color statistics points contained in each color statistics rectangle can be determined first, and then second weight values corresponding to the color statistics rectangles can be determined according to the sizes of the first weight values.
For example, it may be set in advance that, when more than half of the color statistic points in a color statistics rectangle have a first weight value greater than 100, the second weight value corresponding to that color statistics rectangle is determined to be 120, and when more than half have a first weight value less than 100, the second weight value is determined to be 80. For instance, if the color statistics rectangle 1 contains 10 color statistic points, 7 of which have a first weight value greater than 100, the second weight value corresponding to the color statistics rectangle 1 can be determined to be 120.
It should be noted that the foregoing examples are only illustrative, and are not intended to limit the positional relationship between the color statistics points and the color statistics rectangles, the first weight values corresponding to the color statistics points, the second weight values corresponding to the color statistics rectangles, and the like in the embodiments of the present disclosure.
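The three aggregation options above (sum, maximum, and majority vote) can be sketched as follows; the values 120 and 80 and the threshold of 100 are taken from the example, and all names are illustrative assumptions:

```python
def second_weight(first_weights, mode="sum"):
    """Aggregate the first weight values of the color statistic points inside
    one color statistics rectangle into its second weight value."""
    if mode == "sum":
        return sum(first_weights)
    if mode == "max":
        return max(first_weights)
    if mode == "majority":
        # More than half of the points above 100 -> 120, otherwise 80
        above = sum(1 for w in first_weights if w > 100)
        return 120 if above > len(first_weights) / 2 else 80
    raise ValueError(f"unknown mode: {mode}")
```

For instance, second_weight([10, 20, 30]) returns 60, matching the first example above.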
Step 207, updating the first weight value corresponding to each color statistics point based on the second weight value corresponding to each color statistics rectangle, so as to obtain an updated first weight value corresponding to each color statistics point.
In order to reduce errors possibly caused by a single color statistic point, for each color statistic point located in the color statistic rectangle, a second weight value corresponding to the color statistic rectangle can be determined as a first weight value updated by each color statistic point in the color statistic rectangle.
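This broadcast of each rectangle's second weight value back to the points it contains can be sketched as follows (the point-to-rectangle index mapping is an assumed data layout, not specified by the disclosure):

```python
def update_first_weights(point_rect, rect_weights):
    """point_rect[i] is the index of the color statistics rectangle that the
    i-th color statistic point falls in; every point inherits the second
    weight value of its rectangle as its updated first weight value."""
    return [rect_weights[r] for r in point_rect]
```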
Step 208, determining a white balance gain value corresponding to the original image according to the updated first weight value corresponding to each color statistic point and the color value corresponding to each color statistic point.
Step 209, performing white balance processing on the original image based on the white balance gain value.
It should be noted that, the specific content and implementation manner of step 208 and step 209 may refer to other embodiments of the present disclosure, and will not be described herein.
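Although the disclosure defers the details of steps 208 and 209 to other embodiments, a common weighted gray-world formulation is sketched below purely for illustration; it is an assumption, not necessarily the gain computation used by this disclosure:

```python
def white_balance_gains(points, weights):
    """points: (R, G, B) mean color value per color statistic point.
    Returns per-channel gains that pull the weighted mean color toward gray,
    using the G channel as the reference (a common convention)."""
    total = sum(weights)
    r = sum(w * p[0] for p, w in zip(points, weights)) / total
    g = sum(w * p[1] for p, w in zip(points, weights)) / total
    b = sum(w * p[2] for p, w in zip(points, weights)) / total
    return g / r, 1.0, g / b  # R gain, G gain (reference), B gain
```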
According to the embodiment of the present disclosure, an original image acquired by an image acquisition device can be preprocessed to generate a corresponding color statistics map. In the case that the original image contains a human face, a first weight value corresponding to each color statistic point is determined according to the positions of the plurality of weight rectangles corresponding to each face frame, the reference weight corresponding to each weight rectangle, and the positional relationship between each color statistic point and each weight rectangle. Then the first segmentation granularity and the second segmentation granularity are determined according to the performance parameters of the image acquisition device under different light sources and the current ambient brightness, the color statistics map is segmented to obtain a plurality of color statistics rectangles, the second weight value corresponding to each color statistics rectangle is determined, the first weight value corresponding to each color statistic point is updated, and then the white balance gain value corresponding to the original image is determined and white balance processing is performed on the original image. When white balance processing is performed on an original image containing a face, the first weight value corresponding to each color statistic point is determined according to the positions of the different color statistic points relative to the face frame, the first weight values are then updated using the second weight values corresponding to the color statistics rectangles, and the white balance gain corresponding to the original image is determined. In this way, the influence of the face skin color on the white balance gain can be effectively reduced, the accuracy of the white balance processing is improved, and a good experience can be given to users.
It will be appreciated that for different color statistics rectangles, their corresponding second weight values may or may not be the same. Therefore, based on the second weight value corresponding to each color statistics rectangle, when updating the first weight value corresponding to each color statistics point to obtain the updated first weight value corresponding to each color statistics point, multiple situations may exist. In one possible implementation, as shown in fig. 3, the step 207 may further include the following steps:
in step 301, a first color statistics rectangle set and a second color statistics rectangle set included in the plurality of color statistics rectangles are determined.
The second weight value corresponding to each color statistics rectangle in the first color statistics rectangle group is zero, and the second weight value corresponding to each color statistics rectangle in the second color statistics rectangle group is non-zero.
Step 302, updating the first weight value corresponding to each first color statistic point in each first color statistic rectangle based on the second weight value corresponding to each first color statistic rectangle in the first color statistic rectangle group, so as to obtain an updated first weight value corresponding to each first color statistic point.
For each first color statistic point in each first color statistic rectangle in the first color statistic rectangle group, the updated first weight value of each first color statistic point may be a second weight value corresponding to the first color statistic rectangle where each first color statistic point is located.
Step 303, determining a first attenuation intensity and a second attenuation intensity corresponding to the current ambient brightness according to the corresponding relation between the ambient brightness and the attenuation intensity, wherein the first attenuation intensity is greater than the second attenuation intensity.
Wherein, different environmental brightness may correspond to different attenuation intensities, and the corresponding relation between the environmental brightness and each attenuation intensity can be determined in advance. Therefore, according to the current ambient brightness, the first attenuation intensity and the second attenuation intensity corresponding to the current ambient brightness can be determined by searching the corresponding relation.
Step 304, determining that the first attenuation intensity is the attenuation intensity corresponding to each second color statistics rectangle under the condition that the maximum value and the minimum value of the second weight values corresponding to each second color statistics rectangle in the second color statistics rectangle group are the same.
In the second color statistics rectangle group, when the maximum value and the minimum value of the second weight values corresponding to the second color statistics rectangles are the same, the second weight values of all the second color statistics rectangles are identical, that is, the sum of the first weight values within each second color statistics rectangle is the same; at this point, the color statistic points contained in each second color statistics rectangle may accurately represent the skin color of the face. In order to reduce the influence of the face skin color on the white balance gain, the attenuation intensity corresponding to each second color statistics rectangle can be determined as the first attenuation intensity, so that each second color statistics rectangle has a higher attenuation intensity.
Step 305, updating the second weight value corresponding to each second color statistics rectangle based on the attenuation intensity corresponding to each second color statistics rectangle, so as to determine an updated second weight corresponding to each second color statistics rectangle.
When updating the second weight value corresponding to each second color statistics rectangle, there may be multiple manners. For example, if it is determined that the attenuation intensity corresponding to each second color statistics rectangle is 70% and the second weight value corresponding to each second color statistics rectangle is 100, the updated second weight value corresponding to each second color statistics rectangle may be 30.
It should be noted that the foregoing examples are only illustrative, and should not be taken as limiting the attenuation intensity, the second weight, etc. corresponding to the second color statistics rectangle in the embodiments of the present disclosure.
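The update in the example above amounts to scaling the second weight value by one minus the attenuation intensity, assuming that is how the attenuation is applied (a 70% attenuation turns 100 into 30); the function name is illustrative:

```python
def attenuate(second_weight, attenuation_intensity):
    """Reduce a second weight value by the given attenuation intensity
    (e.g. 0.70 for 70%)."""
    return second_weight * (1.0 - attenuation_intensity)
```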
Step 306, updating the first weight value corresponding to each second color statistics point in each second color statistics rectangle based on the updated second weight corresponding to each second color statistics rectangle, so as to obtain the updated first weight value corresponding to each second color statistics point.
Wherein, the updated second weight corresponding to each second color statistics rectangle may be determined as the updated first weight value corresponding to each second color statistics point in each second color statistics rectangle.
In the embodiment of the disclosure, the second color statistics rectangles are endowed with higher attenuation intensity so as to reduce the second weight value of each second color statistics rectangle, and correspondingly, the first weight value corresponding to each second color statistics point in each second color statistics rectangle is also adjusted, so that the influence of the human face complexion on the white balance gain can be reduced as much as possible, and the accuracy of the white balance processing is improved.
According to the embodiment of the disclosure, the first weight value corresponding to each color statistic point in the different color statistic rectangular groups is updated, so that the weight corresponding to each color statistic point can be effectively reduced, then the white balance gain corresponding to the original image is determined, the influence of the human face complexion on the white balance gain can be effectively reduced, the accuracy of white balance processing is improved, and therefore good experience can be given to users.
In one possible implementation, as shown in fig. 4, the step 207 may further include the following steps:
Step 401, in the case that the maximum value and the minimum value among the second weight values corresponding to the second color statistics rectangles in the second color statistics rectangle group are different, determining a first difference between the maximum value and the minimum value, and a second difference between the second weight value corresponding to each second color statistics rectangle and the minimum value.
For convenience of explanation, the first difference between the maximum value and the minimum value among the second weight values corresponding to the second color statistics rectangles in the second color statistics rectangle group may be denoted as D, and the second difference between the second weight value corresponding to the i-th second color statistics rectangle and the minimum value may be denoted as d_i. The second color statistics rectangle group may include y second color statistics rectangles, where y may be any positive integer and i ranges from 1 to y.
Step 402, determining the attenuation rate according to the first attenuation intensity and the second attenuation intensity.
Alternatively, for convenience of explanation, the first attenuation intensity may be denoted as Max_Str and the second attenuation intensity as Min_Str, and the attenuation rate may be the difference between the two, which may be expressed as: Max_Str - Min_Str.
Step 403, determining an attenuation change value of each second color statistic rectangle relative to the second attenuation intensity according to the attenuation rate, the first difference value and the second difference value.
Wherein, the attenuation change value of the i-th second color statistics rectangle relative to the second attenuation intensity can be expressed as: (Max_Str - Min_Str) × d_i / D.
Step 404, determining the attenuation intensity corresponding to each second color statistic rectangle according to the second attenuation intensity and the attenuation variation value corresponding to each second color statistic rectangle.
Wherein, the attenuation intensity corresponding to the i-th second color statistics rectangle can be expressed as: Min_Str + (Max_Str - Min_Str) × d_i / D.
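The interpolation of steps 401 to 404 can be sketched as follows (names are illustrative): each rectangle's attenuation intensity moves linearly from Min_Str to Max_Str as its second weight value moves from the group minimum to the group maximum.

```python
def attenuation_intensity(weight, w_min, w_max, min_str, max_str):
    """Linear interpolation of steps 401-404; assumes w_max != w_min,
    i.e. the case handled by this branch of the method."""
    D = w_max - w_min      # first difference
    d_i = weight - w_min   # second difference for this rectangle
    return min_str + (max_str - min_str) * d_i / D
```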
Step 405, updating the second weight value corresponding to each second color statistics rectangle based on the attenuation intensity corresponding to each second color statistics rectangle, so as to determine an updated second weight corresponding to each second color statistics rectangle.
For example, if it has been determined that the attenuation intensity corresponding to the ith second color statistics rectangle is 70%, and the second weight corresponding to the second color statistics rectangle is 100, the updated second weight corresponding to the second color statistics rectangle may be: 30.
It should be noted that the foregoing examples are only illustrative, and should not be taken as limiting the attenuation intensity, the second weight, etc. corresponding to the second color statistics rectangles in the embodiments of the present disclosure.
Step 406, updating the first weight value corresponding to each second color statistics point in each second color statistics rectangle based on the updated second weight corresponding to each second color statistics rectangle, so as to obtain the updated first weight value corresponding to each second color statistics point.
Wherein, the updated second weight corresponding to each second color statistics rectangle may be determined as the updated first weight value corresponding to each second color statistics point in each second color statistics rectangle.
According to the embodiment of the disclosure, when the maximum value and the minimum value in the second weight values corresponding to the second color statistics rectangles in the second color statistics rectangle group are different, the attenuation rate, the attenuation intensity and the like can be utilized to determine the updated second weight values corresponding to the second color statistics rectangles, then the corresponding first weight values are updated to the second color statistics points at different positions, the weight corresponding to the color statistics points can be effectively reduced, then the white balance gain corresponding to the original image is determined, so that the influence of the face complexion on the white balance gain can be effectively reduced, the accuracy of white balance processing is improved, and good experience can be given to users.
The embodiment of the disclosure also provides an image processing device, and fig. 5 is a schematic structural diagram of the image processing device according to the embodiment of the disclosure.
As shown in fig. 5, the image processing apparatus 100 includes: the generation module 110, the first determination module 120, the second determination module 130, the third determination module 140, and the processing module 150.
The generating module 110 is configured to pre-process an original image acquired by the image acquisition device to generate a corresponding color statistics chart, where the color statistics chart includes a plurality of color statistics points and color values corresponding to each of the color statistics points.
The first determining module 120 is configured to determine, when the original image includes a face, a position of a plurality of weight rectangles corresponding to each face frame included in the original image and a reference weight corresponding to each weight rectangle, where a center point and an aspect ratio of each weight rectangle are the same as a center point and an aspect ratio of the corresponding face frame, respectively.
The second determining module 130 is configured to determine a first weight value corresponding to each color statistics point according to a positional relationship between each color statistics point and each weight rectangle, and a reference weight corresponding to each weight rectangle.
The third determining module 140 is configured to determine a white balance gain value corresponding to the original image according to the first weight value corresponding to each color statistic point and the color value corresponding to each color statistic point.
And the processing module 150 is configured to perform white balance processing on the original image based on the white balance gain value.
As a possible implementation manner, the first determining module 120 is specifically configured to: determining the corresponding ambient brightness of the original image; according to the ambient brightness, determining the number of weight rectangles corresponding to the original image and the weight value corresponding to each weight rectangle; and determining the positions of a plurality of weight rectangles corresponding to each face frame in the original image based on the number of the weight rectangles.
As a possible implementation manner, the second determining module 130 is specifically configured to determine, when any color statistic point is located within at least two weight rectangles, a larger value in the reference weights corresponding to the at least two weight rectangles as a first weight value corresponding to the any color statistic point.
As one possible implementation manner, the third determining module 140 includes:
the first determining unit is used for determining a first segmentation granularity and a second segmentation granularity according to performance parameters of the image acquisition device under different light sources and current environment brightness, wherein the performance parameters are used for representing color values of the image acquired by the image acquisition device in different dimensions;
a dividing unit, configured to divide the color statistics graph based on the first division granularity and the second division granularity, so as to determine a plurality of color statistics rectangles included in the color statistics graph;
the second determining unit is used for determining a second weight value corresponding to each color statistics rectangle according to the first weight value corresponding to each color statistics point and the position relation between each color statistics point and the color statistics rectangle;
The acquisition unit is used for updating the first weight value corresponding to each color statistic point based on the second weight value corresponding to each color statistic rectangle so as to acquire the updated first weight value corresponding to each color statistic point;
and the third determining unit is used for determining a white balance gain value corresponding to the original image according to the updated first weight value corresponding to each color statistic point and the color value corresponding to each color statistic point.
As a possible implementation manner, the first determining unit is specifically configured to:
determining a first color value, in a first dimension, of an image acquired by the image acquisition device under a first designated light source, and a second color value, in a second dimension, of the image acquired under the first designated light source;
determining a third color value of the image acquired by the image acquisition device under the second designated light source in the first dimension and a fourth color value of the image acquired by the image acquisition device under the second designated light source in the second dimension;
determining a reference distance value according to the first color value, the second color value, the third color value and the fourth color value;
determining a first coefficient and a second coefficient according to the current ambient brightness;
determining the first segmentation granularity based on the reference distance value and the first coefficient;
The second division granularity is determined based on the reference distance value and the second coefficient.
As a possible implementation manner, the second determining unit is specifically configured to:
determining color statistical points contained in each color statistical rectangle according to the position relation between each color statistical point and the color statistical rectangle;
and determining the sum of the first weight values corresponding to each color statistic point contained in each color statistic rectangle as a second weight value corresponding to each color statistic rectangle.
As a possible implementation manner, the acquiring unit is specifically configured to:
determining a first color statistics rectangle group and a second color statistics rectangle group contained in the plurality of color statistics rectangles, wherein a second weight value corresponding to each color statistics rectangle in the first color statistics rectangle group is zero, and a second weight value corresponding to each color statistics rectangle in the second color statistics rectangle group is non-zero;
updating the first weight value corresponding to each first color statistic point in each first color statistic rectangle based on the second weight value corresponding to each first color statistic rectangle in the first color statistic rectangle group so as to acquire the updated first weight value corresponding to each first color statistic point;
Determining a first attenuation intensity and a second attenuation intensity corresponding to the current ambient brightness according to the corresponding relation between the ambient brightness and the attenuation intensity, wherein the first attenuation intensity is larger than the second attenuation intensity;
determining the first attenuation intensity as the attenuation intensity corresponding to each second color statistic rectangle under the condition that the maximum value and the minimum value of the second weight values corresponding to each second color statistic rectangle in the second color statistic rectangle group are the same;
updating the second weight value corresponding to each second color statistics rectangle based on the attenuation intensity corresponding to each second color statistics rectangle so as to determine an updated second weight corresponding to each second color statistics rectangle;
and updating the first weight value corresponding to each second color statistic point in each second color statistic rectangle based on the updated second weight corresponding to each second color statistic rectangle so as to acquire the updated first weight value corresponding to each second color statistic point.
As a possible implementation manner, the obtaining unit is further specifically configured to:
determining a first difference value between the maximum value and the minimum value and a second difference value between the second weight value corresponding to each second color statistics rectangle and the minimum value under the condition that the maximum value and the minimum value in the second weight values corresponding to each second color statistics rectangle in the second color statistics rectangle group are different;
Determining an attenuation rate according to the first attenuation intensity and the second attenuation intensity;
determining an attenuation change value of each second color statistical rectangle relative to the second attenuation intensity according to the attenuation rate, the first difference value and the second difference value;
determining the attenuation intensity corresponding to each second color statistic rectangle according to the second attenuation intensity and the attenuation change value corresponding to each second color statistic rectangle;
updating the second weight value corresponding to each second color statistics rectangle based on the attenuation intensity corresponding to each second color statistics rectangle so as to determine an updated second weight corresponding to each second color statistics rectangle;
and updating the first weight value corresponding to each second color statistic point in each second color statistic rectangle based on the updated second weight corresponding to each second color statistic rectangle so as to acquire the updated first weight value corresponding to each second color statistic point.
The functions and specific implementation principles of the foregoing modules in the embodiments of the present disclosure may refer to the foregoing method embodiments, and are not repeated herein.
The image processing device in the embodiment of the disclosure first performs preprocessing on an original image acquired by an image acquisition device to generate a corresponding color statistics image, determines positions of a plurality of weight rectangles corresponding to each face frame and reference weights corresponding to each weight rectangle in the original image when the original image contains the face, and then determines a first weight value corresponding to each color statistics point according to a positional relationship between each color statistics point and each weight rectangle and the reference weights corresponding to each weight rectangle, and then determines a white balance gain value corresponding to the original image according to the first weight value corresponding to each color statistics point and the color value corresponding to each color statistics point, so as to perform white balance processing on the original image based on the white balance gain value. When the white balance processing is carried out on the original image containing the human face, the first weight value corresponding to each color statistic point is determined according to the positions of different color statistic points in the human face frame, and then the white balance gain corresponding to the original image is determined, so that the influence of the human face complexion on the white balance gain can be effectively reduced, the accuracy of the white balance processing is improved, and good experience can be given to users.
Fig. 6 is a block diagram of an electronic device according to an embodiment of the present disclosure.
As shown in fig. 6, the electronic device 200 includes a memory 210, a processor 220, and a bus 230 connecting the different components (including the memory 210 and the processor 220).
Wherein the memory 210 is used to store executable instructions of the processor 220; the processor 220 is configured to call and execute the executable instructions stored in the memory 210 to implement the image processing method proposed by the above-described embodiments of the present disclosure.
Bus 230 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA (EISA) bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
Electronic device 200 typically includes a variety of electronic device readable media. Such media can be any available media that is accessible by electronic device 200 and includes both volatile and nonvolatile media, removable and non-removable media.
Memory 210 may also include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 240 and/or cache memory 250. The electronic device 200 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 260 may be used to read from or write to non-removable, nonvolatile magnetic media (not shown in FIG. 6, commonly referred to as a "hard disk drive"). Although not shown in fig. 6, a magnetic disk drive for reading from and writing to a removable non-volatile magnetic disk (e.g., a "floppy disk"), and an optical disk drive for reading from or writing to a removable non-volatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In such cases, each drive may be coupled to bus 230 via one or more data medium interfaces. Memory 210 may include at least one program product having a set (e.g., at least one) of program modules configured to carry out the functions of the various embodiments of the disclosure.
Program/utility 280 having a set (at least one) of program modules 270 may be stored in, for example, memory 210, such program modules 270 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment. Program modules 270 generally perform the functions and/or methods in the embodiments described in this disclosure.
The electronic device 200 may also communicate with one or more external devices 290 (e.g., keyboard, pointing device, display 291, etc.), one or more devices that enable a user to interact with the electronic device 200, and/or any device (e.g., network card, modem, etc.) that enables the electronic device 200 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 292. Also, electronic device 200 may communicate with one or more networks such as a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet, through network adapter 293. As shown, network adapter 293 communicates with other modules of electronic device 200 over bus 230. It should be appreciated that although not shown, other hardware and/or software modules may be used in connection with electronic device 200, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
The processor 220 executes various functional applications and data processing by running programs stored in the memory 210.
It should be noted that, for the implementation process of the electronic device in the embodiments of the present disclosure, reference is made to the foregoing explanation of the image processing method; it will not be repeated herein.
The electronic device in the embodiments of the present disclosure first preprocesses an original image acquired by an image acquisition device to generate a corresponding color statistical image; when the original image contains a face, it determines the positions of a plurality of weight rectangles corresponding to each face frame in the original image and the reference weight corresponding to each weight rectangle, then determines a first weight value corresponding to each color statistic point according to the positional relationship between each color statistic point and each weight rectangle and the reference weight corresponding to each weight rectangle, and finally determines a white balance gain value corresponding to the original image according to the first weight value and the color value corresponding to each color statistic point, so as to perform white balance processing on the original image based on the white balance gain value. When white balance processing is performed on an original image containing a face, the first weight value of each color statistic point is determined according to the position of that point relative to the face frame, and the white balance gain corresponding to the original image is then derived from these weights; the influence of facial skin color on the white balance gain can thereby be effectively reduced, the accuracy of the white balance processing improved, and a good user experience provided.
In order to implement the above-described embodiments, the present disclosure also proposes a non-transitory computer-readable storage medium, instructions in which, when executed by a processor of an electronic device, enable the electronic device to perform the image processing method as described above.
To achieve the above embodiments, the embodiments of the present disclosure also provide a computer program product which, when executed by a processor of an electronic device, enables the electronic device to perform the image processing method as described above.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following the general principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (18)

1. A method of processing an image, comprising:
preprocessing an original image acquired by an image acquisition device to generate a corresponding color statistical image, wherein the color statistical image comprises a plurality of color statistical points and color values corresponding to each color statistical point;
under the condition that the original image contains faces, determining positions of a plurality of weight rectangles corresponding to each face frame contained in the original image and reference weights corresponding to the weight rectangles, wherein the center point and the length-width ratio of each weight rectangle are respectively the same as the center point and the length-width ratio of the corresponding face frame;
determining a first weight value corresponding to each color statistic point according to the position relation between each color statistic point and each weight rectangle and the reference weight corresponding to each weight rectangle;
determining a white balance gain value corresponding to the original image according to the first weight value corresponding to each color statistic point and the color value corresponding to each color statistic point;
and performing white balance processing on the original image based on the white balance gain value.
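The overall flow of claim 1 can be sketched roughly as follows. This is a minimal illustration only, not the patented implementation: the function names, the RGB representation of color values, and the gray-world-style gain formula (scaling R and B so their weighted means match G) are all assumptions.

```python
# Hypothetical sketch of claim 1: weight each color statistic point,
# then derive R/B white balance gains from the weighted channel averages.
# The gray-world-style gain formula is an assumption, not from the patent.

def white_balance_gains(points):
    """points: list of (first_weight, (r, g, b)) color statistic entries."""
    total = sum(w for w, _ in points)
    if total == 0:
        return 1.0, 1.0  # no usable statistics: leave the image unchanged
    avg_r = sum(w * rgb[0] for w, rgb in points) / total
    avg_g = sum(w * rgb[1] for w, rgb in points) / total
    avg_b = sum(w * rgb[2] for w, rgb in points) / total
    # Gains scale the R and B channels so their weighted means match G.
    return avg_g / avg_r, avg_g / avg_b

def apply_gains(pixel, r_gain, b_gain):
    """White balance a single pixel with the computed gains."""
    r, g, b = pixel
    return r * r_gain, g, b * b_gain
```

Lowering the first weight of skin-colored statistic points keeps the face region from dragging the weighted averages, which is the effect the claim attributes to the weight rectangles.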
2. The method of claim 1, wherein determining the positions of the plurality of weight rectangles corresponding to each face frame included in the original image includes:
determining the ambient brightness corresponding to the original image;
according to the ambient brightness, determining the number of weight rectangles corresponding to the original image and the reference weight corresponding to each weight rectangle;
and determining the positions of a plurality of weight rectangles corresponding to each face frame in the original image based on the number of the weight rectangles.
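Claim 2 ties the rectangle count and weights to ambient brightness. A table lookup is one plausible realization; the brightness brackets, lux units, and weight values below are invented for illustration and are not taken from the patent.

```python
# Hypothetical sketch of claim 2: look up the number of weight rectangles
# and their reference weights from the ambient brightness.

BRIGHTNESS_TABLE = [
    # (brightness upper bound in lux, number of rectangles, reference weights)
    (50,           2, [4.0, 2.0]),            # dark scene: fewer rectangles
    (500,          3, [4.0, 2.0, 1.0]),
    (float("inf"), 4, [4.0, 3.0, 2.0, 1.0]),  # bright scene: finer gradation
]

def rectangles_for_brightness(lux):
    """Return (rectangle_count, reference_weights) for a brightness value."""
    for upper, count, weights in BRIGHTNESS_TABLE:
        if lux <= upper:
            return count, weights
    raise ValueError("unreachable: table ends with an open upper bound")
```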
3. The method of claim 1, wherein determining the first weight value corresponding to each color statistic point according to the positional relationship between each color statistic point and each weight rectangle, and the reference weight corresponding to each weight rectangle comprises:
and under the condition that any color statistic point is positioned in at least two weight rectangles, determining a larger value in the reference weights respectively corresponding to the at least two weight rectangles as a first weight value corresponding to the any color statistic point.
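Claim 3's rule — a point covered by several weight rectangles takes the largest of their reference weights — can be sketched as follows. The rectangle representation and a zero weight outside all rectangles are assumptions for illustration.

```python
# Hypothetical sketch of claim 3: a color statistic point inside several
# weight rectangles takes the largest reference weight among them.

def point_in_rect(point, rect):
    """rect: (x0, y0, x1, y1) with inclusive edges; point: (x, y)."""
    x, y = point
    x0, y0, x1, y1 = rect
    return x0 <= x <= x1 and y0 <= y <= y1

def first_weight(point, rects):
    """rects: list of (rect, reference_weight).

    Returns 0.0 for a point outside every rectangle (an assumption; the
    claim only specifies the multi-rectangle case).
    """
    hits = [w for rect, w in rects if point_in_rect(point, rect)]
    return max(hits) if hits else 0.0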
4. The method according to any one of claims 1 to 3, wherein said determining the white balance gain value corresponding to the original image according to the first weight value corresponding to each color statistic point and the color value corresponding to each color statistic point comprises:
determining a first segmentation granularity and a second segmentation granularity according to performance parameters of the image acquisition device under different light sources and current environment brightness, wherein the performance parameters are used for representing color values of acquired images of the image acquisition device in different dimensions;
segmenting the color statistical image based on the first segmentation granularity and the second segmentation granularity to determine a plurality of color statistical rectangles contained in the color statistical image;
determining a second weight value corresponding to each color statistics rectangle according to the first weight value corresponding to each color statistics point and the position relation between each color statistics point and the color statistics rectangle;
updating the first weight value corresponding to each color statistics point based on the second weight value corresponding to each color statistics rectangle so as to obtain the updated first weight value corresponding to each color statistics point;
and determining a white balance gain value corresponding to the original image according to the updated first weight value corresponding to each color statistic point and the color value corresponding to each color statistic point.
5. The method of claim 4, wherein determining the first segmentation granularity and the second segmentation granularity based on the performance parameters of the image acquisition device under different light sources and the current ambient brightness comprises:
determining a first color value in a first dimension and a second color value in a second dimension of an image acquired by the image acquisition device under a first designated light source;
determining a third color value in the first dimension and a fourth color value in the second dimension of an image acquired by the image acquisition device under a second designated light source;
determining a reference distance value according to the first color value, the second color value, the third color value and the fourth color value;
determining a first coefficient and a second coefficient according to the current ambient brightness;
determining the first segmentation granularity based on the reference distance value and the first coefficient;
the second division granularity is determined based on the reference distance value and the second coefficient.
6. The method of claim 4, wherein determining the second weight value corresponding to each color statistics rectangle according to the first weight value corresponding to each color statistics point and the positional relationship between each color statistics point and color statistics rectangle comprises:
determining color statistical points contained in each color statistical rectangle according to the position relation between each color statistical point and the color statistical rectangle;
and determining the sum of the first weight values corresponding to each color statistic point contained in each color statistic rectangle as a second weight value corresponding to each color statistic rectangle.
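Claim 6 defines each color statistic rectangle's second weight as the sum of the first weights of the statistic points falling inside it; a direct sketch (rectangle and point representations assumed):

```python
# Hypothetical sketch of claim 6: the second weight of each color statistic
# rectangle is the sum of the first weights of the points it contains.

def second_weights(points, rects):
    """points: list of ((x, y), first_weight); rects: list of (x0, y0, x1, y1)."""
    out = []
    for x0, y0, x1, y1 in rects:
        total = sum(w for (x, y), w in points
                    if x0 <= x <= x1 and y0 <= y <= y1)
        out.append(total)
    return out
```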
7. The method according to claim 5 or 6, wherein updating the first weight value corresponding to each color statistics point based on the second weight value corresponding to each color statistics rectangle to obtain the updated first weight value corresponding to each color statistics point comprises:
determining a first color statistics rectangle group and a second color statistics rectangle group contained in the plurality of color statistics rectangles, wherein a second weight value corresponding to each color statistics rectangle in the first color statistics rectangle group is zero, and a second weight value corresponding to each color statistics rectangle in the second color statistics rectangle group is non-zero;
updating the first weight value corresponding to each first color statistic point in each first color statistic rectangle based on the second weight value corresponding to each first color statistic rectangle in the first color statistic rectangle group so as to acquire the updated first weight value corresponding to each first color statistic point;
determining a first attenuation intensity and a second attenuation intensity corresponding to the current ambient brightness according to the corresponding relation between the ambient brightness and the attenuation intensity, wherein the first attenuation intensity is larger than the second attenuation intensity;
determining the first attenuation intensity as the attenuation intensity corresponding to each second color statistic rectangle under the condition that the maximum value and the minimum value of the second weight values corresponding to each second color statistic rectangle in the second color statistic rectangle group are the same;
updating the second weight value corresponding to each second color statistics rectangle based on the attenuation intensity corresponding to each second color statistics rectangle so as to determine an updated second weight corresponding to each second color statistics rectangle;
and updating the first weight value corresponding to each second color statistic point in each second color statistic rectangle based on the updated second weight corresponding to each second color statistic rectangle so as to acquire the updated first weight value corresponding to each second color statistic point.
8. The method of claim 7, further comprising, after said determining a first attenuation intensity and a second attenuation intensity corresponding to said current ambient brightness:
determining a first difference value between the maximum value and the minimum value and a second difference value between the second weight value corresponding to each second color statistics rectangle and the minimum value under the condition that the maximum value and the minimum value in the second weight values corresponding to each second color statistics rectangle in the second color statistics rectangle group are different;
determining an attenuation rate according to the first attenuation intensity and the second attenuation intensity;
determining an attenuation change value of each second color statistical rectangle relative to the second attenuation intensity according to the attenuation rate, the first difference value and the second difference value;
determining the attenuation intensity corresponding to each second color statistic rectangle according to the second attenuation intensity and the attenuation change value corresponding to each second color statistic rectangle;
updating the second weight value corresponding to each second color statistics rectangle based on the attenuation intensity corresponding to each second color statistics rectangle so as to determine an updated second weight corresponding to each second color statistics rectangle;
and updating the first weight value corresponding to each second color statistic point in each second color statistic rectangle based on the updated second weight corresponding to each second color statistic rectangle so as to acquire the updated first weight value corresponding to each second color statistic point.
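The non-uniform case of claim 8 can be read as a linear interpolation of each rectangle's attenuation intensity between the two intensities, driven by where its second weight sits between the group's minimum and maximum. The exact formula is not spelled out in the claim; the sketch below assumes a low second weight maps to the stronger first intensity (consistent with claim 7, where a fully uniform group receives the first intensity) and a maximal weight maps to the second intensity.

```python
# Hypothetical sketch of claim 8: interpolate each rectangle's attenuation
# intensity between the first (stronger) and second (weaker) intensities
# according to its second weight. The interpolation direction is an
# assumption, not stated explicitly in the claim text.

def attenuation_intensities(weights, i1, i2):
    """weights: non-uniform second weights; i1 > i2 per claim 7."""
    w_max, w_min = max(weights), min(weights)
    first_diff = w_max - w_min   # non-zero in the claim 8 (non-uniform) case
    rate = i1 - i2               # attenuation rate (assumed form)
    out = []
    for w in weights:
        second_diff = w - w_min
        change = rate * (first_diff - second_diff) / first_diff
        out.append(i2 + change)  # change taken relative to the second intensity
    return out
```

Under this reading, a rectangle at the minimum weight gets intensity i1 and one at the maximum gets i2, with the others spread linearly between them.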
9. An image processing apparatus, comprising:
the generation module is used for preprocessing the original image acquired by the image acquisition device to generate a corresponding color statistical image, wherein the color statistical image comprises a plurality of color statistical points and color values corresponding to each color statistical point;
the first determining module is used for determining, under the condition that the original image contains a face, the positions of a plurality of weight rectangles corresponding to each face frame contained in the original image and the reference weight corresponding to each weight rectangle, wherein the center point and the length-width ratio of each weight rectangle are respectively the same as the center point and the length-width ratio of the corresponding face frame;
the second determining module is used for determining a first weight value corresponding to each color statistic point according to the position relation between each color statistic point and each weight rectangle and the reference weight corresponding to each weight rectangle;
the third determining module is used for determining a white balance gain value corresponding to the original image according to the first weight value corresponding to each color statistic point and the color value corresponding to each color statistic point;
and the processing module is used for carrying out white balance processing on the original image based on the white balance gain value.
10. The apparatus of claim 9, wherein the first determining module is specifically configured to:
determining the corresponding ambient brightness of the original image;
according to the ambient brightness, determining the number of weight rectangles corresponding to the original image and the reference weight corresponding to each weight rectangle;
and determining the positions of a plurality of weight rectangles corresponding to each face frame in the original image based on the number of the weight rectangles.
11. The apparatus of claim 9, wherein the second determining module is specifically configured to:
and under the condition that any color statistic point is positioned in at least two weight rectangles, determining a larger value in the reference weights respectively corresponding to the at least two weight rectangles as a first weight value corresponding to the any color statistic point.
12. The apparatus according to any of claims 9-11, wherein the third determining module comprises:
the first determining unit is used for determining a first segmentation granularity and a second segmentation granularity according to performance parameters of the image acquisition device under different light sources and current environment brightness, wherein the performance parameters are used for representing color values of the image acquired by the image acquisition device in different dimensions;
a dividing unit, configured to divide the color statistics graph based on the first division granularity and the second division granularity, so as to determine a plurality of color statistics rectangles included in the color statistics graph;
the second determining unit is used for determining a second weight value corresponding to each color statistics rectangle according to the first weight value corresponding to each color statistics point and the position relation between each color statistics point and the color statistics rectangle;
the acquisition unit is used for updating the first weight value corresponding to each color statistic point based on the second weight value corresponding to each color statistic rectangle so as to acquire the updated first weight value corresponding to each color statistic point;
and the third determining unit is used for determining a white balance gain value corresponding to the original image according to the updated first weight value corresponding to each color statistic point and the color value corresponding to each color statistic point.
13. The apparatus according to claim 12, wherein the first determining unit is specifically configured to:
determining a first color value in a first dimension and a second color value in a second dimension of an image acquired by the image acquisition device under a first designated light source;
determining a third color value in the first dimension and a fourth color value in the second dimension of an image acquired by the image acquisition device under a second designated light source;
determining a reference distance value according to the first color value, the second color value, the third color value and the fourth color value;
determining a first coefficient and a second coefficient according to the current ambient brightness;
determining the first segmentation granularity based on the reference distance value and the first coefficient;
and determining the second segmentation granularity based on the reference distance value and the second coefficient.
14. The apparatus according to claim 12, wherein the second determining unit is specifically configured to:
determining color statistical points contained in each color statistical rectangle according to the position relation between each color statistical point and the color statistical rectangle;
and determining the sum of the first weight values corresponding to each color statistic point contained in each color statistic rectangle as a second weight value corresponding to each color statistic rectangle.
15. The apparatus according to claim 13 or 14, wherein the acquisition unit is specifically configured to:
determining a first color statistics rectangle group and a second color statistics rectangle group contained in the plurality of color statistics rectangles, wherein a second weight value corresponding to each color statistics rectangle in the first color statistics rectangle group is zero, and a second weight value corresponding to each color statistics rectangle in the second color statistics rectangle group is non-zero;
updating the first weight value corresponding to each first color statistic point in each first color statistic rectangle based on the second weight value corresponding to each first color statistic rectangle in the first color statistic rectangle group so as to acquire the updated first weight value corresponding to each first color statistic point;
determining a first attenuation intensity and a second attenuation intensity corresponding to the current ambient brightness according to the corresponding relation between the ambient brightness and the attenuation intensity, wherein the first attenuation intensity is larger than the second attenuation intensity;
determining the first attenuation intensity as the attenuation intensity corresponding to each second color statistic rectangle under the condition that the maximum value and the minimum value of the second weight values corresponding to each second color statistic rectangle in the second color statistic rectangle group are the same;
updating the second weight value corresponding to each second color statistics rectangle based on the attenuation intensity corresponding to each second color statistics rectangle so as to determine an updated second weight corresponding to each second color statistics rectangle;
and updating the first weight value corresponding to each second color statistic point in each second color statistic rectangle based on the updated second weight corresponding to each second color statistic rectangle so as to acquire the updated first weight value corresponding to each second color statistic point.
16. The apparatus of claim 15, wherein the acquisition unit is further specifically configured to:
determining a first difference value between the maximum value and the minimum value and a second difference value between the second weight value corresponding to each second color statistics rectangle and the minimum value under the condition that the maximum value and the minimum value in the second weight values corresponding to each second color statistics rectangle in the second color statistics rectangle group are different;
determining an attenuation rate according to the first attenuation intensity and the second attenuation intensity;
determining an attenuation change value of each second color statistical rectangle relative to the second attenuation intensity according to the attenuation rate, the first difference value and the second difference value;
determining the attenuation intensity corresponding to each second color statistic rectangle according to the second attenuation intensity and the attenuation change value corresponding to each second color statistic rectangle;
updating the second weight value corresponding to each second color statistics rectangle based on the attenuation intensity corresponding to each second color statistics rectangle so as to determine an updated second weight corresponding to each second color statistics rectangle;
and updating the first weight value corresponding to each second color statistic point in each second color statistic rectangle based on the updated second weight corresponding to each second color statistic rectangle so as to acquire the updated first weight value corresponding to each second color statistic point.
17. An electronic device, comprising:
a processor;
a memory for storing executable instructions of the processor;
wherein the processor is configured to invoke and execute the executable instructions stored in the memory to implement the method of processing an image according to any of claims 1-8.
18. A non-transitory computer-readable storage medium having stored thereon instructions which, when executed by a processor of an electronic device, cause the electronic device to perform the image processing method as claimed in any one of claims 1-8.
Application CN202110310827.2A, filed 2021-03-23: Image processing method and device, electronic equipment and storage medium — granted as CN115118947B (Active).

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110310827.2A CN115118947B (en) 2021-03-23 2021-03-23 Image processing method and device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN115118947A CN115118947A (en) 2022-09-27
CN115118947B (en) 2023-11-24


Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009004966A (en) * 2007-06-20 2009-01-08 Panasonic Corp Imaging apparatus
CN105187810A (en) * 2014-11-11 2015-12-23 怀效宁 Automatic white balance method based on face color features and electronic media device
WO2017096865A1 (en) * 2015-12-08 2017-06-15 乐视控股(北京)有限公司 Method and device for processing human face-containing image
CN106878695A (en) * 2017-02-13 2017-06-20 广东欧珀移动通信有限公司 Method, device and computer equipment that white balance is processed
CN107343189A (en) * 2017-07-10 2017-11-10 广东欧珀移动通信有限公司 White balancing treatment method and device
CN108024055A (en) * 2017-11-03 2018-05-11 广东欧珀移动通信有限公司 Method, apparatus, mobile terminal and the storage medium of white balance processing
CN109151428A (en) * 2018-08-30 2019-01-04 Oppo广东移动通信有限公司 automatic white balance processing method, device and computer storage medium
CN110381303A (en) * 2019-05-31 2019-10-25 成都品果科技有限公司 Portrait automatic exposure white balance correction method and system based on skin color statistics
CN110971813A (en) * 2018-09-30 2020-04-07 北京微播视界科技有限公司 Focusing method and device, electronic equipment and storage medium

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
US20070031060A1 (en) * 2005-08-04 2007-02-08 Canon Kabushiki Kaisha Image processing apparatus, method for calculating white balance evaluation value, program including program code for realizing the method for calculating white balance evaluation value, and storage medium for storing the program
JP5066398B2 (en) * 2007-06-29 2012-11-07 富士フイルム株式会社 Image processing apparatus and method, and program
JP6234191B2 (en) * 2013-11-28 2017-11-22 オリンパス株式会社 Multi-area white balance control device, multi-area white balance control method, multi-area white balance control program, computer recording multi-area white balance control program, multi-area white balance image processing device, multi-area white balance image processing method, multi-area White balance image processing program, computer recording multi-area white balance image processing program, and imaging apparatus provided with multi-area white balance image processing device
CN107454345B (en) * 2017-07-12 2019-10-22 Oppo广东移动通信有限公司 White balancing treatment method, device and the terminal device of image
US20200036888A1 (en) * 2018-07-26 2020-01-30 Qualcomm Incorporated Calibration of Automatic White Balancing using Facial Images

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009004966A (en) * 2007-06-20 2009-01-08 Panasonic Corp Imaging apparatus
CN105187810A (en) * 2014-11-11 2015-12-23 怀效宁 Automatic white balance method based on face color features and electronic media device
WO2017096865A1 (en) * 2015-12-08 2017-06-15 乐视控股(北京)有限公司 Method and device for processing human face-containing image
CN106878695A (en) * 2017-02-13 2017-06-20 广东欧珀移动通信有限公司 Method, device and computer equipment that white balance is processed
CN107343189A (en) * 2017-07-10 2017-11-10 广东欧珀移动通信有限公司 White balancing treatment method and device
CN108024055A (en) * 2017-11-03 2018-05-11 广东欧珀移动通信有限公司 Method, apparatus, mobile terminal and the storage medium of white balance processing
CN109151428A (en) * 2018-08-30 2019-01-04 Oppo广东移动通信有限公司 Automatic white balance processing method, device, and computer storage medium
CN110971813A (en) * 2018-09-30 2020-04-07 北京微播视界科技有限公司 Focusing method and device, electronic equipment and storage medium
CN110381303A (en) * 2019-05-31 2019-10-25 成都品果科技有限公司 Portrait automatic exposure white balance correction method and system based on skin color statistics

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
A DOA Estimation Method in TD-SCDMA System; Chunyuan Zheng; 2015 2nd International Conference on Information Science and Control Engineering; full text *
Design and Implementation of FPGA-Based Bayer Color Automatic White Balance; Cheng Benfei; Dai Ming; Sun Lina; Application of Electronic Technique (Issue 08); full text *
Face Detection and Recognition Based on the iOS Platform for Video Surveillance; Huang Mu; China Masters' Theses Electronic Journal Database; full text *

Also Published As

Publication number Publication date
CN115118947A (en) 2022-09-27

Similar Documents

Publication Publication Date Title
JP6395810B2 (en) Reference image selection for motion ghost filtering
US11064174B2 (en) White balance processing method and apparatus
US20200137369A1 (en) White balance processing method and apparatus
CN111327824B (en) Shooting parameter selection method and device, storage medium and electronic equipment
US10455207B2 (en) Method, computing device and nonvolatile computer readable storage medium for processing white balance
CN113132695B (en) Lens shading correction method and device and electronic equipment
CN112351195B (en) Image processing method, device and electronic system
CN110266954A (en) Image processing method, device, storage medium and electronic equipment
CN114866809B (en) Video conversion method, apparatus, device, storage medium, and program product
CN113132696B (en) Image tone mapping method, image tone mapping device, electronic equipment and storage medium
CN111526351A (en) White balance synchronization method, white balance synchronization system, electronic device, medium, and digital imaging device
WO2019128539A1 (en) Image definition obtaining method and apparatus, storage medium, and electronic device
US20060115172A1 (en) Face enhancement in a digital video
KR20120015980A (en) Method, system and computer program product for object color correction
JP2002281327A (en) Device, method and program for image processing
US20090324127A1 (en) Method and System for Automatic Red-Eye Correction
CN115118947B (en) Image processing method and device, electronic equipment and storage medium
CN107767350B (en) Video image restoration method and device
WO2022183876A1 (en) Photography method and apparatus, and computer-readable storage medium and electronic device
WO2022111269A1 (en) Method and device for enhancing video details, mobile terminal, and storage medium
WO2021147316A1 (en) Object recognition method and device
WO2021078276A1 (en) Method for obtaining continuously photographed photos, smart terminal, and storage medium
CN113762058A (en) Video synthesis method and device, computer equipment and storage medium
CN113132562A (en) Lens shadow correction method and device and electronic equipment
CN114945087B (en) Image processing method, device, equipment and storage medium based on face characteristics

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant