CN113487497A - Image processing method and device and electronic equipment - Google Patents


Info

Publication number
CN113487497A
Authority
CN
China
Prior art keywords
image
correction
sub
skin area
skin
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110680600.7A
Other languages
Chinese (zh)
Inventor
杨丹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202110680600.7A priority Critical patent/CN113487497A/en
Publication of CN113487497A publication Critical patent/CN113487497A/en
Priority to PCT/CN2022/099439 priority patent/WO2022262848A1/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/90 Dynamic range modification of images or parts thereof
    • G06T5/94 Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30088 Skin; Dermal

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The application discloses an image processing method, an image processing device and electronic equipment, and belongs to the technical field of image processing. The image processing method comprises the following steps: identifying whether the sub-image of the skin area in the target image has color cast; under the condition that the skin area sub-image has color cast, generating a correction target image according to the skin area sub-image, and performing hue mixing processing on the correction target image and the skin area sub-image to obtain a first correction image; and carrying out fusion processing on the first correction image and the skin area sub-image to obtain a second correction image.

Description

Image processing method and device and electronic equipment
Technical Field
The application belongs to the technical field of image processing, and particularly relates to an image processing method and device and electronic equipment.
Background
With the development of image acquisition and processing technologies, electronic devices with image acquisition functions, such as smart phones and cameras, are widely used.
When a user triggers the electronic device to acquire an image under harsh ambient light, the ambient light strongly affects the user's face, so that facial colors (i.e., skin colors) in the photo transition unnaturally. To cope with this, users generally have to find a suitable light source before taking pictures, or the electronic device has to provide additional supplementary lighting for photographing. Both schemes degrade the user's photographing experience.
To improve the photographing experience under harsh ambient light, the related art usually presets a number of color temperature and hue parameters and skin color effect parameters on the electronic device, so that the image acquired under harsh ambient light can be adjusted and the distorted or poorly rendered skin color in the photo corrected. However, because ambient light is complicated and changeable, it is difficult for such preset color temperature, hue, and skin color effect parameters to adapt to it, and therefore difficult to accurately correct the user's skin color.
In conclusion, how to accurately and reasonably correct the user's skin color during image acquisition has become a technical problem to be urgently solved by those skilled in the art.
Disclosure of Invention
The embodiment of the application aims to provide an image processing method, an image processing device and electronic equipment, and the problem of how to accurately and reasonably correct the skin color of a user during image acquisition can be solved.
In a first aspect, an embodiment of the present application provides an image processing method, including:
identifying whether the sub-image of the skin area in the target image has color cast;
under the condition that the skin area sub-image has color cast, generating a correction target image according to the skin area sub-image, and performing hue mixing processing on the correction target image and the skin area sub-image to obtain a first correction image;
and carrying out fusion processing on the first correction image and the skin area sub-image to obtain a second correction image.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including:
the identification module is used for identifying whether the sub-image of the skin area in the target image has color cast or not;
the correction module is used for generating a correction target image according to the skin area sub-image under the condition that the identification module identifies the color cast of the skin area sub-image, and performing hue mixing processing on the correction target image and the skin area sub-image to obtain a first correction image;
and carrying out fusion processing on the first correction image and the skin area sub-image to obtain a second correction image.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, and when executed by the processor, the program or instructions implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiment of the present application, by identifying whether the skin area sub-image in the target image has color cast, the color cast condition in the target image and the area requiring color cast correction (i.e., the area occupied by the user's skin in the image) can be determined. Further, in the case that the skin area sub-image has color cast, a correction target image is generated according to the skin area sub-image, and hue mixing processing is performed on the correction target image and the skin area sub-image to obtain a first correction image. Fusion processing is then performed on the first correction image and the skin area sub-image to obtain a second correction image. In this way, color cast correction adapted to the skin area sub-image can be performed according to the color cast recognition result for that sub-image; and because the correction operates only on the skin area sub-image, it does not alter the colors of other areas of the target image. This avoids the influence of ambient light on the user's face during image acquisition and ensures that the user's skin color in the acquired image is real and natural.
Drawings
FIG. 1 is a flow chart of steps of an image processing method of an embodiment of the present application;
FIG. 2 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
FIG. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 4 is a second schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present disclosure.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish between similar elements and not necessarily to describe a particular sequence or chronological order. It will be appreciated that data so termed may be interchanged under appropriate circumstances, so that embodiments of the application may be practiced in sequences other than those illustrated or described herein. The terms "first", "second", and the like do not limit the number of objects; for example, the first object can be one or more than one. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates that the associated objects before and after it are in an "or" relationship.
The following describes in detail an image processing method, an image processing apparatus, and an electronic device provided in the embodiments of the present application with reference to the accompanying drawings.
As shown in fig. 1, an embodiment of the present application provides an image processing method, including the following S101 to S103:
S101, the image processing device identifies whether the skin area sub-image in the target image has color cast.
In the embodiment of the application, the target image is an image acquired by an image acquisition device or by an electronic device equipped with one, or an image that such an electronic device receives from another electronic device. The target image may be an image of only a person or a face, or an image containing both a person or a face and other background (e.g., an object or a scene).
It will be appreciated that the target image is an image that needs or may need to be corrected. In other words, the target image is an image that needs to be subjected to the color cast correction process due to the presence or possibility of a color cast problem.
In the embodiment of the present application, the number of target images may be one or more. For example, when the target object is subjected to continuous image acquisition (i.e., continuous shooting), the number of target images is plural. In the above case, the image processing method provided by the embodiment of the present application may be performed separately for each target image.
It is understood that the skin region sub-image is the region of the target image where the skin of the person is located.
Optionally, in this embodiment of the application, the skin area sub-image may be an area where the face of the person in the target image is located, an area where the face and the neck of the person in the target image are located, or an area where all the skins (e.g., the face, the neck, the hands, the arms, etc.) of the person in the target image are located.
For example, in the case where a person is included in an image, the face area is generally relatively large in proportion and concentrated in the image. Therefore, an image of the region where the face of the person is located in the target image can be selected as the skin region sub-image.
It will be appreciated that, within the target image, the skin region sub-image has color attribute characteristics different from those of the other regions. It is therefore possible to distinguish the skin area sub-image from the non-skin area sub-image, locate the skin area sub-image in the target image, and identify whether it has color cast.
It is understood that color cast refers to the case where the acquired image deviates from normal human skin color, generally under the influence of ambient light. For example, color cast may occur when the ambient light is too bright or too dark, or when the color of the illumination light source is abnormal.
It should be noted that the above steps only determine whether the sub-image of the skin area has color cast. Specifically, when the target image includes a skin area sub-image and a non-skin area sub-image (i.e., a background), the image processing method of the embodiment of the present application first identifies the skin area sub-image, and then determines whether the identified skin area sub-image is color-shifted.
It can be understood that, if the identification result is that the sub-image of the skin area has color cast, step S102 is executed; if the identification result indicates that the sub-image of the skin area has no color cast, step S102 is not required, i.e., the sub-image of the skin area does not need to be corrected for color cast. In this case, the target image may be directly output (e.g., displayed or transmitted) or stored.
It will be appreciated that, since the skin region sub-image has color attribute characteristics different from those of the other regions, whether color cast has occurred can be identified according to the logical relationship between the optical three-primary-color pixel values (hereinafter referred to as RGB pixel values) of the skin region sub-image.
Illustratively, taking the skin color attributes of the yellow race as an example: since the RGB pixel values of skin color in a normal portrait satisfy R value > G value > B value, it may be determined that the skin area sub-image has color cast when its RGB pixel values do not satisfy this relationship.
Optionally, in this embodiment of the present application, S101 includes the following S101a and S101b:
S101a, the image processing device obtains the pixel mean values of the red, green, and blue color channels in the skin region sub-image, respectively.
It can be understood that the red, green and blue pixel mean values are respectively a pixel mean value of a red color channel, a pixel mean value of a green color channel and a pixel mean value of a blue color channel in the skin region sub-image in sequence.
S101b, the image processing device identifies whether the sub-image of the skin area has color cast according to the logic relation among the red, green and blue pixel mean values.
It will be appreciated that the logical relationship between the red, green and blue pixel means followed for skin area sub-images of different ethnic groups (including yellow ethnic group, black ethnic group, white ethnic group) may be different.
Illustratively, taking the skin color attributes of the yellow race as an example, S101b includes the following S101b1 to S101b4:
S101b1, the image processing apparatus determines that the skin region sub-image has color cast, and that the cast is toward yellow, when the first difference is greater than or equal to the first threshold.
Wherein the first difference is the difference between the red pixel mean (denoted Rm) and the blue pixel mean (denoted Bm).
It can be understood that the value of the first threshold may be determined according to actual use requirements; the embodiment of the present application is not limited thereto.
In Example 1, the first threshold may take a value of 90. Accordingly, if Rm - Bm > 90, it can be determined that the skin region sub-image has color cast and is shifted toward yellow.
S101b2, the image processing apparatus determines that the skin region sub-image has color cast, and that the cast is toward green, when the second difference is less than or equal to the second threshold.
Wherein the second difference is the difference between the red pixel mean (Rm) and the green pixel mean (denoted Gm).
It can be understood that the value of the second threshold may be determined according to actual use requirements; the embodiment of the present application is not limited thereto.
In Example 2, the second threshold may take a value of 25. Accordingly, if Rm - Gm < 25, it can be determined that the skin region sub-image has color cast and is shifted toward green.
S101b3, the image processing apparatus judges that the skin region sub-image has color cast, and that the cast is toward red, in the case where the sum of the first difference and the second difference is greater than or equal to the third threshold.
Wherein, as mentioned above, the first difference is the difference between the red pixel mean (Rm) and the blue pixel mean (Bm), and the second difference is the difference between the red pixel mean (Rm) and the green pixel mean (Gm).
It can be understood that the value of the third threshold may be determined according to actual use requirements; the embodiment of the present application is not limited thereto.
In Example 3, the third threshold may take a value of 150. Accordingly, if 2 × Rm - Gm - Bm > 150, i.e., (Rm - Bm) + (Rm - Gm) > 150, it may be determined that the skin region sub-image has color cast and is shifted toward red.
S101b4, the image processing apparatus determines that the skin region sub-image has color cast, and that the cast is toward blue, when the sum of the first clipped minimum and the second clipped minimum is smaller than the fourth threshold.
Wherein the first clipped minimum is the smaller of the first difference and zero, i.e., min(Rm - Bm, 0); the second clipped minimum is the smaller of the difference between the green and blue pixel means and zero, i.e., min(Gm - Bm, 0).
It can be understood that the value of the fourth threshold may be determined according to actual use requirements; the embodiment of the present application is not limited thereto.
In Example 4, the fourth threshold may take a value of 0. Accordingly, if min(Rm - Bm, 0) + min(Gm - Bm, 0) < 0, it can be determined that the skin region sub-image has color cast and is shifted toward blue.
It is understood that, when none of the cases in S101b1 to S101b4 occurs (i.e., the skin region sub-image is shifted toward neither yellow, green, red, nor blue), it can be determined that the skin region sub-image has no color cast. Conversely, when the skin region sub-image shows any one or more of the yellow, green, red, or blue shifts, the skin region sub-image has color cast.
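The four judgments of S101b1 to S101b4 can be sketched in a few lines. This is a minimal illustration using the thresholds from Examples 1 to 4 (90, 25, 150, 0); `detect_color_cast` is a hypothetical helper name, not the patent's implementation.

```python
import numpy as np

def detect_color_cast(skin_rgb, t1=90, t2=25, t3=150, t4=0):
    """Classify color cast from the channel means of a skin sub-image.

    skin_rgb: H x W x 3 array in RGB order. Thresholds t1..t4 follow the
    example values in the text; a real deployment would tune them.
    Returns the list of detected cast directions (empty list = no cast).
    """
    rm = float(skin_rgb[..., 0].mean())
    gm = float(skin_rgb[..., 1].mean())
    bm = float(skin_rgb[..., 2].mean())
    casts = []
    if rm - bm > t1:                            # S101b1: shifted toward yellow
        casts.append("yellow")
    if rm - gm < t2:                            # S101b2: shifted toward green
        casts.append("green")
    if (rm - bm) + (rm - gm) > t3:              # S101b3: shifted toward red
        casts.append("red")
    if min(rm - bm, 0) + min(gm - bm, 0) < t4:  # S101b4: shifted toward blue
        casts.append("blue")
    return casts

normal_patch = np.full((4, 4, 3), (180.0, 130.0, 110.0))  # plausible skin tone
bluish_patch = np.full((4, 4, 3), (100.0, 110.0, 180.0))  # blue-lit skin
print(detect_color_cast(normal_patch), detect_color_cast(bluish_patch))
```

Note that, consistent with the paragraph above, a single sub-image can trigger more than one shift: the blue-lit patch here reads as both green-shifted and blue-shifted.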
S102, under the condition that the skin area sub-image has color cast, the image processing device generates a correction target image according to the skin area sub-image, and performs hue mixing processing on the correction target image and the skin area sub-image to obtain a first correction image.
It is understood that the correction target image has color parameters that the ideal skin image has.
In this embodiment, by performing hue mixing processing on the correction target image and the skin area sub-image, the skin area sub-image may be corrected for the first time by using the correction target image (i.e., a first correction image is obtained).
S103, carrying out fusion processing on the first correction image and the skin area sub-image to obtain a second correction image.
It should be noted that the above S102 to S103 may perform color cast correction only on the sub-image of the skin region. In other words, in the case that the target image includes the skin area sub-image and the non-skin area sub-image, the image processing method of the embodiment of the application first identifies whether the skin area sub-image has color cast through S101, and then obtains the first corrected image through S102, and finally corrects the first corrected image into the second corrected image through S103. The image processing method according to the embodiment of the present application may not process the non-skin area sub-image, and keep the color attributes (e.g., hue, saturation) unchanged.
It will be appreciated that, since the skin region sub-image has color attribute characteristics different from those of the other regions, color cast correction can be performed on the skin region sub-image according to the logical relationship between its optical three-primary-color pixel values (hereinafter referred to as RGB pixel values).
Exemplarily, taking the skin color attributes of the yellow race as an example: the RGB three-color-channel pixel values of a plurality of normal skin color samples may be collected under different brightness conditions, a mapping relationship between one color channel and the other two color channels in a skin image with normal skin color may be obtained by first-order polynomial (linear) fitting, and color cast correction may then be performed on the skin region sub-image according to this mapping relationship.
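The sample collection and first-order fitting step can be sketched as follows. The "samples" here are synthetic stand-ins generated from the example coefficients given later in S102a, so the fit merely recovers known slopes; real samples would be measured.

```python
import numpy as np

# Synthetic stand-ins for the per-sample channel means of normal-skin images
# collected under different brightness conditions; the "ground truth" slopes
# are simply the example coefficients from the mapping in S102a.
rng = np.random.default_rng(0)
r_means = rng.uniform(120.0, 220.0, size=200)
g_means = 1.118 * r_means - 71.57 + rng.normal(0.0, 1.0, size=200)
b_means = 0.959 * r_means - 64.38 + rng.normal(0.0, 1.0, size=200)

# First-order polynomial (linear) fit, with the red channel as the
# independent variable (for skin, R exceeds both G and B on average).
a1, c1 = np.polyfit(r_means, g_means, deg=1)  # G1 ~= a1 * R1 + c1
a2, c2 = np.polyfit(r_means, b_means, deg=1)  # B1 ~= a2 * R1 + c2
print(round(a1, 3), round(a2, 3))
```

`np.polyfit` returns the slope and the intercept; in the document's notation the intercept is written with its sign folded into the constant (e.g., -71.57).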
Optionally, in this embodiment of the application, generating the correction target image from the skin area sub-image in S102 includes the following S102a and S102b:
S102a, the image processing apparatus obtains the standard pixel mean of the second color channel and the standard pixel mean of the third color channel in the skin region sub-image, according to the first preset mapping relationship and the pixel mean of the first color channel in the skin region sub-image.
It is understood that the first predetermined mapping relationship is a mapping relationship obtained by the above-mentioned sample collection and linear fitting method. The first preset mapping relationship includes a mapping relationship between a pixel mean value of the first color channel and a standard pixel mean value of the second color channel (referred to as a mapping relationship a for short), and also includes a mapping relationship between a pixel mean value of the first color channel and a standard pixel mean value of the third color channel (referred to as a mapping relationship B for short).
It can be understood that the mapping relationship a is a mapping relationship with the pixel mean value of the first color channel as an independent variable and the standard pixel mean value of the second color channel as a dependent variable. The mapping relation B is a mapping relation which takes the pixel mean value of the first color channel as an independent variable and takes the standard pixel mean value of the third color channel as a dependent variable.
It should be noted that, in the embodiment of the present application, the three color channels including the first color channel, the second color channel, and the third color channel refer to three color channels including a red color channel, a green color channel, and a blue color channel. The color channel of each of the first color channel, the second color channel, and the third color channel may be determined according to actual usage requirements, and the embodiment of the present application is not limited.
Optionally, in this embodiment of the application, the first color channel is a red color channel, the second color channel is a green color channel, and the third color channel is a blue color channel.
It should be noted that, although the first color channel, the second color channel, and the third color channel may be any color channel, since the average pixel value of the red color channel is greater than the average pixel value of the green color channel and the average pixel value of the blue color channel for the skin region sub-image, if the first color channel is the red color channel, the standard average pixel values of the other two color channels can be obtained more accurately.
Optionally, in this embodiment of the present application, the first preset mapping is:
G1 = a1 × R1 - b1;
B1 = a2 × R1 - b2;
wherein R1 is the pixel mean of the first color channel, G1 is the standard pixel mean of the second color channel, B1 is the standard pixel mean of the third color channel, and a1, a2, b1, and b2 are each constants.
It can be understood that the specific values of a1, a2, b1, and b2 may be determined according to actual use requirements; the embodiment of the present application is not limited thereto.
Illustratively, the first preset mapping relationship may be:
G1 = 1.118 × R1 - 71.57;
B1 = 0.959 × R1 - 64.38.
S102b, the image processing device randomly perturbs the standard pixel mean of the second color channel and the standard pixel mean of the third color channel to generate the correction target image.
It will be appreciated that the size (e.g., shape, resolution) of the correction target image coincides with that of the skin area sub-image.
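S102a and S102b together can be sketched as follows, using the example mapping coefficients given above (G1 = 1.118 × R1 - 71.57, B1 = 0.959 × R1 - 64.38). The ±2 range for the random perturbation is an assumption, since the text does not specify it; `make_correction_target` is a hypothetical helper name.

```python
import numpy as np

def make_correction_target(skin_rgb, rng=None):
    """Build a correction target image the same size as the skin sub-image.

    Uses the example mapping G1 = 1.118*R1 - 71.57, B1 = 0.959*R1 - 64.38
    from the text. The per-pixel perturbation range (+/-2) is an assumed
    interpretation of the "random disturbance" in S102b.
    """
    if rng is None:
        rng = np.random.default_rng()
    r1 = float(skin_rgb[..., 0].mean())       # pixel mean of the red channel
    g1 = 1.118 * r1 - 71.57                   # standard mean, green channel
    b1 = 0.959 * r1 - 64.38                   # standard mean, blue channel
    h, w = skin_rgb.shape[:2]
    target = np.empty((h, w, 3), dtype=np.float64)
    target[..., 0] = r1
    target[..., 1] = g1 + rng.uniform(-2.0, 2.0, size=(h, w))
    target[..., 2] = b1 + rng.uniform(-2.0, 2.0, size=(h, w))
    return np.clip(target, 0.0, 255.0)

skin = np.full((8, 8, 3), (150.0, 90.0, 120.0))   # a color-cast skin patch
tgt = make_correction_target(skin, np.random.default_rng(1))
print(tgt.shape)
```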
In the embodiment of the present application, the first correction image is the image obtained by hue-mixing the skin area sub-image and the correction target image: it adopts the hue of the correction target image while retaining the brightness and saturation of the original image (i.e., the skin area sub-image).
Thus, after the correction target image is obtained, the correction target image and the skin area sub-image can be mixed to correct the skin area sub-image according to the color cast condition of the skin area sub-image.
Optionally, in this embodiment of the present application, a specific manner of the hue mixing processing is as follows: and performing hue mixing processing on the correction target image and the skin area sub-image by performing loop iteration on the hue value of the correction target image and the brightness value of the skin area sub-image.
It will be appreciated that when the skin area sub-image and the correction target image are mixed, changing the hue alters the brightness of the result, and restoring the brightness in turn alters the hue and saturation.
For this reason, the mixing is performed by loop iteration, applying the hue of the correction target image while maintaining the brightness of the skin region sub-image, thereby achieving the purpose of skin color correction.
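One way to realize this idea is sketched below: the target's hue is represented by its R:G:B ratios, and the base image's brightness is restored by iterative rescaling (the iteration matters when clipping at 255 perturbs the hue again). This is an illustrative stand-in under stated assumptions, not the patent's exact algorithm.

```python
import numpy as np

def luminance(rgb):
    # Rec. 601 luma weights as a stand-in for "brightness"
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

def hue_mix(base_rgb, target_rgb, iters=5):
    """Sketch of hue mixing: start from the correction target (its hue,
    i.e., its R:G:B ratios) and iteratively rescale each pixel so its
    luminance matches the base sub-image. Without clipping a single pass
    suffices; clipping at 255 perturbs the hue, which the loop corrects.
    """
    base = base_rgb.astype(np.float64)
    out = target_rgb.astype(np.float64).copy()
    for _ in range(iters):
        scale = (luminance(base) + 1e-6) / (luminance(out) + 1e-6)
        out = np.clip(out * scale[..., None], 0.0, 255.0)
    return out

base = np.full((4, 4, 3), (150.0, 90.0, 120.0))    # bluish-cast skin patch
target = np.full((4, 4, 3), (150.0, 96.0, 96.0))   # corrected skin tone
mixed = hue_mix(base, target)
```

The result keeps the base patch's brightness while adopting the target's channel ratios.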
Optionally, in this embodiment of the present application, S103 includes the following S103a and S103b:
S103a, the image processing device obtains a first weight for the skin region sub-image and a second weight for the first correction image.
Wherein the sum of the first weight and the second weight is 1.
It will be appreciated that the purpose of assigning the first and second weights is to fuse the skin region sub-image and the first correction image according to those weights.
S103b, the image processing device performs fusion processing on the skin region sub-image and the first correction image according to the first weight and the second weight.
It is understood that the second correction image obtained after the fusion processing is the image whose color cast correction is complete.
Therefore, the problem of over-correction of the skin color image in the area with heavier color cast after the hue mixing processing can be avoided.
Illustratively, the first correction image and the skin region sub-image may be fused as in Example 5 below. For convenience of description, in Example 5 the skin region sub-image is abbreviated as Sorg, the first corrected image as Shue, and the second corrected image as Sfusion.
Example 5, the fusion process was performed using the formula:
Sfusion = α × Sorg + (1 - α) × Shue;
where α is the first weight and 1- α is the second weight.
Optionally, in this embodiment of the application, the first weight is determined according to a distance difference matrix between the skin area sub-image and the first correction image.
Illustratively, the distance difference matrix may be obtained as in Example 6 below; the first weight is determined from the distance difference matrix, the second weight is determined from the first weight, and the skin region sub-image and the first correction image are fused according to the two weights. For convenience of description, in Example 6 the skin region sub-image is likewise abbreviated as Sorg, the first corrected image as Shue, and the second corrected image as Sfusion.
Example 6: to obtain the distance difference matrix, the pixel-wise difference between Sorg and Shue may be normalized (dividing each value by 255) to obtain the distance difference matrix Wd. The larger the pixel difference, the more severe the color cast problem in Sorg. Through Wd, the fusion proportion of areas with heavier color cast can be reduced, thereby avoiding excessive skin color correction in those areas. The fusion process uses the formulas:
Sfusion = αd × Sorg + (1 - αd) × Shue;
Wd = Normal(Sorg - Shue);
αd = α × Wd;
wherein αd is the first weight and 1 - αd is the second weight.
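The Example 6 fusion can be sketched as follows. Taking the absolute pixel difference in Wd is an assumption here (the text writes Normal(Sorg - Shue) without specifying how negative differences are handled), and `fuse` is a hypothetical helper name.

```python
import numpy as np

def fuse(s_org, s_hue, alpha=0.5):
    """Fusion per Example 6: Sfusion = alpha_d * Sorg + (1 - alpha_d) * Shue,
    with alpha_d = alpha * Wd and Wd the pixel-wise difference between Sorg
    and Shue divided by 255. The absolute difference is used so that the
    weights stay in [0, alpha] (an assumed interpretation of Normal(...)).
    """
    s_org = s_org.astype(np.float64)
    s_hue = s_hue.astype(np.float64)
    w_d = np.abs(s_org - s_hue) / 255.0   # distance difference matrix, in [0, 1]
    alpha_d = alpha * w_d                 # per-pixel first weight
    return alpha_d * s_org + (1.0 - alpha_d) * s_hue

same = np.full((2, 2, 3), 120.0)          # already matches the correction
far = np.full((2, 2, 3), 220.0)           # heavy cast: large difference
corrected = np.full((2, 2, 3), 120.0)     # first correction image
print(fuse(same, corrected)[0, 0, 0], fuse(far, corrected)[0, 0, 0])
```

Where the difference is large (heavier color cast), alpha_d grows, so more of the original Sorg is retained and the correction is damped, matching the over-correction safeguard described above.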
In the embodiment of the present application, by identifying whether the skin area sub-image in the target image has color cast, the color cast condition in the target image and the area requiring color cast correction (i.e., the area occupied by the user's skin in the image) can be determined. Further, in the case that the skin area sub-image has color cast, a correction target image is generated according to the skin area sub-image, and hue mixing processing is performed on the correction target image and the skin area sub-image to obtain a first correction image. Fusion processing is then performed on the first correction image and the skin area sub-image to obtain a second correction image. In this way, color cast correction adapted to the skin area sub-image can be performed according to the color cast recognition result for that sub-image; and because the correction operates only on the skin area sub-image, it does not alter the colors of other areas of the target image. This avoids the influence of ambient light on the user's face during image acquisition and ensures that the user's skin color in the acquired image is real and natural.
Optionally, in this embodiment of the application, in the case that the skin area sub-image has color cast, the color cast scene includes at least one of the following: a yellow-biased scene, a green-biased scene, a red-biased scene, and a blue-biased scene.
Optionally, in this embodiment of the application, the color cast scenes may be numbered, and color cast correction may be performed on the skin area sub-image using the color cast correction mode corresponding to the number of the identified color cast scene.
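A minimal sketch of such a numbering scheme follows. The scene numbers, names, and gain parameters are hypothetical; the patent states only that each numbered scene selects a corresponding correction mode.

```python
# Hypothetical mapping from color cast scene number to a correction setting.
# The gain values are invented for illustration and are not from the patent.
CAST_SCENES = {
    0: ("yellow-biased", {"b_gain": 1.10, "r_gain": 1.00}),
    1: ("green-biased",  {"r_gain": 1.08, "b_gain": 1.05}),
    2: ("red-biased",    {"g_gain": 1.06, "b_gain": 1.04}),
    3: ("blue-biased",   {"r_gain": 1.07, "g_gain": 1.03}),
}

def correction_mode(scene_number):
    """Return the (name, parameters) pair for a numbered color cast scene."""
    if scene_number not in CAST_SCENES:
        raise ValueError(f"unknown color cast scene: {scene_number}")
    return CAST_SCENES[scene_number]
```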
Optionally, in this embodiment of the application, in the case that the skin area sub-image has color cast, the above S102 to S103 may be performed one or more times.
Illustratively, high exposure suppression and color cast pre-correction may be performed for the color cast case by executing the above S102 to S103 a first time, and the image thus processed may be further corrected by executing S102 to S103 a second time.
Illustratively, to achieve high exposure suppression and color cast pre-correction, a local color cast region in the skin area sub-image may be identified, and, when one is found, color cast pre-correction is performed on that local region. When S102 to S103 are executed multiple times, the parameters adopted in each execution of the color cast correction steps (for example, the parameters of the preset mapping relationship) may be the same or different.
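The repeated execution of S102 to S103 with per-pass parameters can be sketched as follows. The function names, the stand-in correction (a simple contraction toward the channel mean), and the strength values are assumptions for illustration only.

```python
def correct_color_cast(image, strength):
    """Placeholder for one pass of S102-S103; a contraction of each value
    toward the overall mean stands in for the real correction step."""
    mean = sum(image) / len(image)
    return [round(p + strength * (mean - p), 2) for p in image]

def multi_pass_correction(image, strengths=(0.6, 0.3)):
    """Run the correction pipeline once per entry in `strengths`.

    The first pass models high exposure suppression and color cast
    pre-correction; later passes refine the result. Each pass may use
    different parameters, as the patent allows.
    """
    for s in strengths:
        image = correct_color_cast(image, s)
    return image
```

For a two-value "image" [100.0, 200.0], the first pass pulls the values to [130.0, 170.0] and the second to [136.0, 164.0], illustrating how each pass applies a progressively gentler correction.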
It should be noted that, for the image processing method provided in the embodiment of the present application, the execution subject may be an image processing apparatus, or a control module in the image processing apparatus for executing the image processing method. The image processing apparatus provided in the embodiment of the present application is described below taking an image processing apparatus executing the image processing method as an example.
As shown in fig. 2, an embodiment of the present application further provides an image processing apparatus 200, including:
the identifying module 210 is configured to identify whether the sub-image of the skin area in the target image has color cast.
The correcting module 220 is configured to generate a correction target image according to the skin area sub-image under the condition that the skin area sub-image identified by the identifying module 210 has color cast, and perform hue mixing processing on the correction target image and the skin area sub-image to obtain a first correction image; and carrying out fusion processing on the first correction image and the skin area sub-image to obtain a second correction image.
In the embodiment of the present application, by identifying whether the skin area sub-image in the target image has color cast, the image processing apparatus 200 can determine the color cast condition of the target image and the area needing color cast correction (i.e., the area occupied by the user's skin in the image). Further, in the case that the skin area sub-image has color cast, the apparatus generates a correction target image according to the skin area sub-image, and performs hue mixing processing on the correction target image and the skin area sub-image to obtain a first correction image. It then performs fusion processing on the first correction image and the skin area sub-image to obtain a second correction image. In this way, the image processing apparatus 200 can perform color cast correction adapted to the skin area sub-image according to the color cast recognition result for that sub-image. Because the correction operates only on the skin area sub-image, it does not change the color of other areas of the target image, thereby counteracting the influence of ambient light on the user's face during image capture and keeping the user's skin color real and natural in the captured result.
Optionally, in this embodiment of the application, the correcting module 220 is specifically configured to:
obtaining a standard pixel mean value of a second color channel and a standard pixel mean value of a third color channel in the skin area sub-image according to the first preset mapping relation and the pixel mean value of the first color channel in the skin area sub-image;
and carrying out random disturbance on the standard pixel mean value of the second color channel and the standard pixel mean value of the third color channel to generate a corrected target image.
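A minimal sketch of this step, assuming the first color channel is R and the first preset mapping is linear (the mapping coefficients, the perturbation range, and the function name are assumptions; the patent does not disclose the actual mapping):

```python
import random
import numpy as np

def make_correction_target(skin_rgb, perturb=2.0, seed=None):
    """Build a correction target image from a skin sub-image (H, W, 3 uint8).

    Assumed preset mapping: the standard G and B means are linear functions
    of the R mean (coefficients below are illustrative). A small random
    perturbation is then added to the standard means, as in the patent.
    """
    rng = random.Random(seed)
    r_mean = float(skin_rgb[..., 0].mean())
    g_std = 0.80 * r_mean        # assumed preset mapping for channel G
    b_std = 0.70 * r_mean        # assumed preset mapping for channel B
    h, w, _ = skin_rgb.shape
    target = np.empty((h, w, 3), np.float32)
    target[..., 0] = r_mean
    target[..., 1] = g_std + rng.uniform(-perturb, perturb)
    target[..., 2] = b_std + rng.uniform(-perturb, perturb)
    return np.clip(target, 0, 255).astype(np.uint8)
```

With `perturb=0.0` the target is simply the flat image of the mapped standard means, which makes the mapping itself easy to inspect.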
Optionally, in this embodiment of the application, the correcting module 220 is specifically configured to:
and performing hue mixing processing on the correction target image and the skin area sub-image by performing loop iteration on the hue value of the correction target image and the brightness value of the skin area sub-image.
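One plausible per-pixel reading of this hue mixing, sketched with Python's colorsys. The iterative stepping schedule and the choice to also keep the skin pixel's saturation are assumptions; the patent states only that the correction target's hue value and the skin sub-image's brightness value are combined by loop iteration.

```python
import colorsys

def hue_mix_pixel(target_rgb, skin_rgb, iterations=3, step=0.5):
    """Mix one pixel: move the skin pixel's hue toward the correction
    target's hue over several iterations while keeping the skin pixel's
    lightness and saturation. RGB components are floats in [0, 1].
    """
    th, _, _ = colorsys.rgb_to_hls(*target_rgb)
    h, l, s = colorsys.rgb_to_hls(*skin_rgb)
    for _ in range(iterations):
        h = h + step * (th - h)    # pull the hue toward the target each pass
    return colorsys.hls_to_rgb(h, l, s)
```

Because lightness is taken from the skin pixel, the brightness of the skin area is preserved while its hue converges toward the correction target.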
Optionally, in this embodiment of the application, the correcting module 220 is specifically configured to:
acquiring a first weight of a skin region sub-image and a second weight of a first correction image;
according to the first weight and the second weight, carrying out fusion processing on the first correction image and the skin area sub-image;
wherein the sum of the first weight and the second weight is 1, the first weight being determined from the distance difference matrix between the skin area sub-image and the first correction image.
The image processing apparatus in the embodiment of the present application may be an apparatus, or may be a component, an integrated circuit, or a chip in a terminal. The apparatus may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a Personal Digital Assistant (PDA), and the non-mobile electronic device may be a server, a Network Attached Storage (NAS) device, a Personal Computer (PC), a Television (TV), a teller machine, or a self-service machine; the embodiments of the present application are not specifically limited in this respect.
The image processing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system; the embodiments of the present application are not specifically limited.
The image processing apparatus provided in the embodiment of the present application can implement each process implemented in the method embodiment of fig. 1, and is not described here again to avoid repetition.
Optionally, as shown in fig. 3, an electronic device 300 is further provided in this embodiment of the present application, and includes a processor 301, a memory 302, and a program or an instruction stored in the memory 302 and capable of being executed on the processor 301, where the program or the instruction is executed by the processor 301 to implement each process of the above-mentioned embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, it is not described here again.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 4 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 400 includes, but is not limited to: radio unit 401, network module 402, audio output unit 403, input unit 404, sensor 405, display unit 406, user input unit 407, interface unit 408, memory 409, and processor 410.
Those skilled in the art will appreciate that the electronic device 400 may further include a power source (e.g., a battery) for supplying power to various components, and the power source may be logically connected to the processor 410 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system. The electronic device structure shown in fig. 4 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than those shown, or combine some components, or arrange different components, and thus, the description is omitted here.
It should be understood that, in the embodiment of the present application, the input unit 404 may include a graphics processing unit (GPU) 4041 and a microphone 4042. The graphics processor 4041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode. The display unit 406 may include a display panel 4061, and the display panel 4061 may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 407 includes a touch panel 4071 and other input devices 4072. The touch panel 4071, also referred to as a touch screen, may include two parts: a touch detection device and a touch controller. The other input devices 4072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described in detail here. The memory 409 may be used to store software programs as well as various data, including but not limited to application programs and an operating system. The processor 410 may integrate an application processor, which primarily handles the operating system, user interface, and applications, and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor may alternatively not be integrated into the processor 410.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the embodiment of the image processing method, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order depending on the functions involved; for example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. An image processing method, comprising:
identifying whether the sub-image of the skin area in the target image has color cast;
under the condition that the skin area sub-image has color cast, generating a correction target image according to the skin area sub-image, and carrying out hue mixing processing on the correction target image and the skin area sub-image to obtain a first correction image;
and carrying out fusion processing on the first correction image and the skin area sub-image to obtain a second correction image.
2. The image processing method according to claim 1, wherein said generating a correction target image from the skin region sub-image comprises:
obtaining a standard pixel mean value of a second color channel and a standard pixel mean value of a third color channel in the skin area sub-image according to a first preset mapping relation and a pixel mean value of a first color channel in the skin area sub-image;
and randomly disturbing the standard pixel mean value of the second color channel and the standard pixel mean value of the third color channel to generate the correction target image.
3. The image processing method according to claim 1, wherein the hue-blending the correction target image with the skin area sub-image includes:
and performing hue mixing processing on the correction target image and the skin area sub-image by performing loop iteration on the hue value of the correction target image and the brightness value of the skin area sub-image.
4. The image processing method according to claim 1, wherein said fusing the first correction image with the skin region sub-image comprises:
acquiring a first weight of the skin region sub-image and a second weight of the first correction image;
according to the first weight and the second weight, carrying out fusion processing on the first correction image and the skin area sub-image;
wherein the sum of the first weight and the second weight is 1, and the first weight is determined according to a distance difference matrix between the skin region sub-image and the first correction image.
5. An image processing apparatus characterized by comprising:
the identification module is used for identifying whether the sub-image of the skin area in the target image has color cast or not;
the correction module is used for generating a correction target image according to the skin area sub-image under the condition that the identification module identifies the skin area sub-image to have color cast, and carrying out hue mixing processing on the correction target image and the skin area sub-image to obtain a first correction image; and carrying out fusion processing on the first correction image and the skin area sub-image to obtain a second correction image.
6. The image processing apparatus according to claim 5, wherein the correction module is specifically configured to:
obtaining a standard pixel mean value of a second color channel and a standard pixel mean value of a third color channel in the skin area sub-image according to a first preset mapping relation and a pixel mean value of a first color channel in the skin area sub-image;
and randomly disturbing the standard pixel mean value of the second color channel and the standard pixel mean value of the third color channel to generate the correction target image.
7. The image processing apparatus according to claim 5, wherein the correction module is specifically configured to:
and performing hue mixing processing on the correction target image and the skin area sub-image by performing loop iteration on the hue value of the correction target image and the brightness value of the skin area sub-image.
8. The image processing apparatus according to claim 5, wherein the correction module is specifically configured to:
acquiring a first weight of the skin region sub-image and a second weight of the first correction image;
according to the first weight and the second weight, carrying out fusion processing on the first correction image and the skin area sub-image;
wherein the sum of the first weight and the second weight is 1, and the first weight is determined according to a distance difference matrix between the skin region sub-image and the first correction image.
9. An electronic device comprising a processor, a memory and a program or instructions stored on the memory and executable on the processor, which program or instructions, when executed by the processor, implement the steps of the image processing method according to any one of claims 1 to 4.
10. A readable storage medium, characterized in that it stores thereon a program or instructions which, when executed by a processor, implement the steps of the image processing method according to any one of claims 1 to 4.
CN202110680600.7A 2021-06-18 2021-06-18 Image processing method and device and electronic equipment Pending CN113487497A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110680600.7A CN113487497A (en) 2021-06-18 2021-06-18 Image processing method and device and electronic equipment
PCT/CN2022/099439 WO2022262848A1 (en) 2021-06-18 2022-06-17 Image processing method and apparatus, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110680600.7A CN113487497A (en) 2021-06-18 2021-06-18 Image processing method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN113487497A true CN113487497A (en) 2021-10-08

Family

ID=77935603

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110680600.7A Pending CN113487497A (en) 2021-06-18 2021-06-18 Image processing method and device and electronic equipment

Country Status (2)

Country Link
CN (1) CN113487497A (en)
WO (1) WO2022262848A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022262848A1 (en) * 2021-06-18 2022-12-22 维沃移动通信有限公司 Image processing method and apparatus, and electronic device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103971344A (en) * 2014-05-27 2014-08-06 广州商景网络科技有限公司 Skin color error correction method and system for certificate images
US20170132459A1 (en) * 2015-11-11 2017-05-11 Adobe Systems Incorporated Enhancement of Skin, Including Faces, in Photographs
CN110381303A (en) * 2019-05-31 2019-10-25 成都品果科技有限公司 Portrait automatic exposure white balance correction method and system based on skin color statistics
CN111524076A (en) * 2020-04-07 2020-08-11 咪咕文化科技有限公司 Image processing method, electronic device, and computer-readable storage medium
CN112532855A (en) * 2019-09-17 2021-03-19 华为技术有限公司 Image processing method and device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009059009A (en) * 2007-08-30 2009-03-19 Dainippon Printing Co Ltd Color-corrected image creation method and color-corrected image creation device
CN108038889A (en) * 2017-11-10 2018-05-15 维沃移动通信有限公司 The processing method and mobile terminal of a kind of image color cast
CN111063008A (en) * 2019-12-23 2020-04-24 北京达佳互联信息技术有限公司 Image processing method, device, equipment and storage medium
CN113487497A (en) * 2021-06-18 2021-10-08 维沃移动通信有限公司 Image processing method and device and electronic equipment

Also Published As

Publication number Publication date
WO2022262848A1 (en) 2022-12-22

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination