WO2021253783A1 - Image processing method and apparatus, electronic device, and storage medium - Google Patents

Image processing method and apparatus, electronic device, and storage medium

Info

Publication number
WO2021253783A1
WO2021253783A1 · PCT/CN2020/139133
Authority
WO
WIPO (PCT)
Prior art keywords
color
image
face
target
preset
Prior art date
Application number
PCT/CN2020/139133
Other languages
English (en)
French (fr)
Inventor
刘易周
杨鼎超
Original Assignee
北京达佳互联信息技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京达佳互联信息技术有限公司 (Beijing Dajia Internet Information Technology Co., Ltd.)
Priority to JP2022556185A (published as JP2023518444A)
Publication of WO2021253783A1
Priority to US17/952,619 (published as US20230020937A1)

Links

Images

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 - Feature extraction; Face representation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 - 2D [Two Dimensional] image generation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 - Image enhancement or restoration
    • G06T 5/77 - Retouching; Inpainting; Scratch removal
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 - 2D [Two Dimensional] image generation
    • G06T 11/40 - Filling a planar surface by adding surface attributes, e.g. colour or texture
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 - Geometric image transformations in the plane of the image
    • G06T 3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 - Image enhancement or restoration
    • G06T 5/20 - Image enhancement or restoration using local operators
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 - Image enhancement or restoration
    • G06T 5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 - Image enhancement or restoration
    • G06T 5/90 - Dynamic range modification of images or parts thereof
    • G06T 5/94 - Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/90 - Determination of colour characteristics
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/40 - Extraction of image or video features
    • G06V 10/56 - Extraction of image or video features relating to colour
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10016 - Video; Image sequence
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10024 - Color image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20016 - Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20081 - Training; Learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20212 - Image combination
    • G06T 2207/20221 - Image fusion; Image merging
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30196 - Human being; Person
    • G06T 2207/30201 - Face

Definitions

  • the present disclosure relates to the field of image processing, and in particular to image processing methods, devices, electronic equipment, and storage media.
  • in the related art, processing facial features is a common operation; for example, facial features can be enlarged, displaced, or erased.
  • current approaches to erasing facial features often suffer from a large difference between the color of the replacement facial features and the color of the face, which yields poor rendering results; moreover, because there are usually many pixels near the facial features, the computational load is high, making such approaches difficult to apply on devices with limited computing power.
  • the present disclosure provides an image processing method, device, electronic device, and storage medium to at least solve technical problems in related technologies.
  • the technical solutions of the present disclosure are as follows:
  • an image processing method is provided, which includes: determining a first face mask image of a target image, the mask not containing hair, and obtaining, according to the first face mask image, a first face region in the target image that does not contain hair; filling a color of a preset grayscale outside the first face region to generate an image to be sampled of a preset shape; down-sampling the image to be sampled, and removing from the sampling results those whose color is the preset grayscale to obtain the remaining sampling results; calculating the color average of the remaining sampling results, and performing a weighted summation of a preset standard face color and the average to obtain a target color; and rendering the pixels in the face region of the target image according to the target color.
  • an image processing apparatus is provided, including: a first face determination module configured to determine a first face mask image of a target image, the mask not containing hair, and to obtain, according to the first face mask image, a first face region in the target image that does not contain hair; an image generation module configured to fill a color of a preset grayscale outside the first face region to generate an image to be sampled of a preset shape; a down-sampling module configured to down-sample the image to be sampled and remove from the sampling results those whose color is the preset grayscale to obtain the remaining sampling results; a calculation module configured to calculate the color average of the remaining sampling results and perform a weighted summation of a preset standard face color and the average to obtain a target color; and a rendering module configured to render the pixels in the face region of the target image according to the target color.
  • an electronic device including: a processor; a memory for storing executable instructions of the processor; wherein the processor is configured to execute the instructions to implement The image processing method as described in any of the above embodiments.
  • a storage medium when instructions in the storage medium are executed by a processor of an electronic device, the electronic device can execute the image processing method described in any of the above embodiments.
  • a computer program product is provided, which includes a computer program stored in a readable storage medium; at least one processor of a device reads and executes the computer program from the readable storage medium, so that the device executes the image processing method described in any one of the foregoing embodiments.
  • Fig. 1 is a schematic flowchart showing an image processing method according to an embodiment of the present disclosure.
  • Fig. 2 is a diagram showing a first face mask according to an embodiment of the present disclosure.
  • Fig. 3 shows an image to be sampled according to an embodiment of the present disclosure.
  • Fig. 4 is a schematic diagram showing a sampling result according to an embodiment of the present disclosure.
  • Fig. 5 is a schematic diagram showing a color corresponding to an average value according to an embodiment of the present disclosure.
  • Fig. 6 is a schematic diagram showing a target color according to an embodiment of the present disclosure.
  • Fig. 7 is a schematic flowchart showing another image processing method according to an embodiment of the present disclosure.
  • Fig. 8 is a schematic flowchart showing yet another image processing method according to an embodiment of the present disclosure.
  • Fig. 9 is a schematic flowchart showing yet another image processing method according to an embodiment of the present disclosure.
  • Fig. 10 is a schematic flowchart showing yet another image processing method according to an embodiment of the present disclosure.
  • Fig. 11 is a schematic diagram showing a rendered second face region according to an embodiment of the present disclosure.
  • Fig. 12 is a schematic flowchart showing yet another image processing method according to an embodiment of the present disclosure.
  • Fig. 13 is a schematic block diagram showing an image processing device according to an embodiment of the present disclosure.
  • Fig. 14 is a schematic block diagram showing a computing module according to an embodiment of the present disclosure.
  • Fig. 15 is a schematic block diagram showing another computing module according to an embodiment of the present disclosure.
  • Fig. 16 is a schematic block diagram showing a rendering module according to an embodiment of the present disclosure.
  • Fig. 17 is a schematic block diagram showing another rendering module according to an embodiment of the present disclosure.
  • Fig. 18 is a schematic block diagram showing still another rendering module according to an embodiment of the present disclosure.
  • Fig. 19 is a schematic block diagram showing an electronic device according to an embodiment of the present disclosure.
  • Fig. 1 is a schematic flowchart showing an image processing method according to an embodiment of the present disclosure.
  • the image processing method shown in this embodiment can be applied to terminals, such as mobile phones, tablet computers, wearable devices, personal computers, etc., and can also be applied to servers, such as local servers, cloud servers, and the like.
  • the image processing method may include the following steps:
  • a preset gray-scale color is filled outside the first face area to generate an image to be sampled in a preset shape
  • the pixels in the face area of the target image are rendered according to the target color.
  • the method of determining the first face mask image can be selected as required.
  • in one approach, a mask determination model can be obtained in advance through deep-learning training.
  • the mask determination model is used to determine a mask image that does not contain hair in an image; based on the mask determination model, the first face mask image of the target image, which does not contain hair, can be determined.
  • in another approach, a key point determination model can be obtained in advance through deep-learning training.
  • the key point determination model is used to determine the key points of the face in an image.
  • accordingly, the key points of the face in the target image can be determined.
  • the closed region formed by connecting the key points on the edge of the face is then used as the first face mask image.
  • the first face region that does not contain hair in the target image can be obtained according to the first face mask image, and then the color of the preset grayscale can be filled outside the first face region.
  • the preset grayscale can be selected from 0 to 255 as needed. For example, if the preset grayscale is 0, the preset-grayscale color is black; if the preset grayscale is 255, the color is white.
  • in practice, a grayscale of 0 or of 255 can be chosen, which helps to avoid a sampling result containing face pixels having the same color as the preset grayscale and being rejected during the subsequent sampling step.
  • Fig. 2 is a diagram showing a first face mask according to an embodiment of the present disclosure.
  • Fig. 3 shows an image to be sampled according to an embodiment of the present disclosure.
  • as shown in Fig. 2, the first face mask image can be used to obtain from the target image the first face region that does not contain hair, as shown in Fig. 3.
  • the preset shape formed by filling the preset-grayscale color outside the first face region may be a rectangle, as shown in Fig. 3, or another shape; the present disclosure is not limited in this respect.
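As a rough illustration of this masking-and-filling step, the following pure-Python sketch paints every pixel outside a boolean face mask with the preset-grayscale color. The function name and the nested-list image representation are illustrative assumptions, not taken from the disclosure:

```python
# Illustrative fill step: every pixel outside the boolean face mask is
# replaced by the preset-grayscale color (here gray level 0, i.e. black).
# Images are nested lists of (r, g, b) tuples purely for readability.
def fill_outside_mask(image, mask, gray_level=0):
    """Return a copy of `image` with pixels outside `mask` set to the
    preset-grayscale color."""
    fill = (gray_level, gray_level, gray_level)
    return [[pix if keep else fill for pix, keep in zip(row, mrow)]
            for row, mrow in zip(image, mask)]

# Tiny 2x3 example: only one pixel belongs to the face region.
image = [[(200, 150, 120), (210, 160, 130), (190, 140, 110)],
         [(180, 130, 100), (170, 120, 90), (160, 110, 80)]]
mask = [[False, True, False],
        [False, False, False]]
to_sample = fill_outside_mask(image, mask)
```

A real implementation would operate on GPU textures or array buffers; the nested-list form here is only for clarity.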
  • the image to be sampled can then be down-sampled, and the sampling pattern can be set as required. For example, it can be set to 4×7, i.e., 4 samples in the width direction and 7 in the height direction, yielding 28 sampling results.
  • each sampling result may collect a single pixel, or several pixels near a given position; alternatively, the image to be sampled may be divided evenly into 4×7 regions and the average color of each region's pixels used as the sampling result.
  • Fig. 4 is a schematic diagram showing a sampling result according to an embodiment of the present disclosure.
  • as shown in Fig. 4, each row contains 14 sampling results, for a total of 28. Of the 28 sampling results, 24 have the preset-grayscale color and 4 do not; the 24 preset-grayscale results can be discarded, and the remaining 4 sampling results, whose color is not the preset grayscale, are retained.
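The down-sample-and-reject procedure can be sketched as follows, using the area-averaging variant of sampling. This is a simplified illustration; the 4×7 grid matches the example above, but all names and the nested-list representation are assumptions:

```python
# Illustrative 4x7 down-sampling with area averaging, followed by rejection
# of samples whose color equals the preset-grayscale fill.
def downsample(image, cols=4, rows=7):
    """Split the image into cols x rows regions; return each region's
    average color as a tuple of floats."""
    h, w = len(image), len(image[0])
    results = []
    for j in range(rows):
        for i in range(cols):
            # Pixel bounds of this region.
            y0, y1 = j * h // rows, (j + 1) * h // rows
            x0, x1 = i * w // cols, (i + 1) * w // cols
            pixels = [image[y][x] for y in range(y0, y1) for x in range(x0, x1)]
            results.append(tuple(sum(ch) / len(pixels) for ch in zip(*pixels)))
    return results

def reject_background(samples, gray_level=0):
    """Discard sampling results whose color is exactly the preset fill."""
    fill = (gray_level,) * 3
    return [s for s in samples if s != fill]

# 14x8 test image: black fill everywhere except one face-colored region.
face = (200, 150, 120)
image = [[face if y < 2 and 4 <= x < 6 else (0, 0, 0) for x in range(8)]
         for y in range(14)]
samples = downsample(image)
remaining = reject_background(samples)
```

Note that a region straddling the mask edge averages face and fill pixels, so its color no longer equals the fill exactly; this is the mixed-region darkening discussed below.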
  • there may be one or more remaining sampling results.
  • when there is only one remaining sampling result, its color is taken as the average.
  • when there are several, their colors can be added together and the average calculated.
  • a color can be represented by a gray value from 0 to 255, or the 0-255 value can be normalized to the interval 0 to 1.
  • although the color of a remaining sampling result is not the preset grayscale, its sampling area may still contain both part of the filled preset-grayscale region and part of the face region, which darkens the computed average. Likewise, when the target image was captured in an extreme environment, such as a dark scene, the color of each remaining sample will be darker and the resulting average darker as well.
  • therefore, the preset standard face color and the average can be further weighted and summed to obtain the target color, where the standard face color may be a preset color close to typical human skin color.
  • in this way, the average can be corrected to a certain extent toward the standard face color, preventing a color derived solely from the average from deviating too far from an ordinary face color.
  • Fig. 5 is a schematic diagram showing a color corresponding to an average value according to an embodiment of the present disclosure.
  • Fig. 6 is a schematic diagram showing a target color according to an embodiment of the present disclosure. As shown in Fig. 5, the color corresponding to the average is too dark.
  • performing a weighted summation of the preset standard face color and the average yields the target color shown in Fig. 6, which is closer to the color of facial skin. Thus the average reflects the face color in the target image, while the standard face color ensures that the target color does not deviate too far from an ordinary face color.
  • according to the target color, the pixels in the face region of the target image can be rendered so that all pixels in the face region take the target color, achieving the effect of erasing facial features such as the eyes, eyebrows, nose, and mouth.
  • since the target color is obtained through down-sampling, the amount of color information in the sampling results is small, which makes the method convenient for devices with limited computing power.
  • the target color used for rendering is obtained by a weighted summation of the preset standard face color and the average, where the average reflects the face color in the target image and the standard face color plays a corrective role.
  • the embodiments of the present disclosure can therefore keep the target color consistent with the face color in the target image while avoiding excessive deviation from ordinary face colors.
  • Fig. 7 is a schematic flowchart showing another image processing method according to an embodiment of the present disclosure. As shown in FIG. 7, the calculation of the color average value of the remaining sampling results, and the weighted summation of the preset standard face color and the average value to obtain the target color includes:
  • the color average value corresponding to each color channel is respectively weighted and summed with the color of the corresponding color channel in the standard face color to obtain the target color of each color channel.
  • the remaining sampling results may include the colors of multiple color channels, for example the three channels R (red), G (green), and B (blue); the color of each channel may be represented by a gray value from 0 to 255, or normalized to the interval 0 to 1.
  • the average value of the color values of the same color channel in each remaining sampling result can be calculated to obtain the color average corresponding to each color channel.
  • the standard face color likewise includes the colors of the three color channels. For example, if the three channel colors of the remaining sampling results are all normalized to the range 0 to 1, the three channel colors of the standard face color can also be expressed as values between 0 and 1; for instance, the standard face color can be set to (0.97, 0.81, 0.7).
  • the color average corresponding to each color channel may be weighted and summed with the color of the corresponding color channel in the standard face color to obtain the target color of each color channel.
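A minimal sketch of this per-channel weighted summation, with channel values normalized to 0-1. The weights 0.3/0.7 are assumed for illustration; only the standard color (0.97, 0.81, 0.7) comes from the example above:

```python
# Illustrative per-channel blend of the sampled mean color with a preset
# standard face color; the weights are assumptions, colors are in 0-1.
def blend_with_standard(channel_means, standard=(0.97, 0.81, 0.7),
                        w_standard=0.3, w_mean=0.7):
    """Weighted sum, channel by channel, of the mean color and the
    standard face color; returns the target color per channel."""
    return tuple(w_standard * s + w_mean * m
                 for s, m in zip(standard, channel_means))

# Example: a darkish sampled mean is pulled toward the standard color.
target = blend_with_standard((0.5, 0.4, 0.35))
```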
  • Fig. 8 is a schematic flowchart showing yet another image processing method according to an embodiment of the present disclosure. As shown in FIG. 8, the calculating the color average value of the remaining sampling results, and performing a weighted summation on the preset standard face color and the average value to obtain the target color includes:
  • the weights of the mean value and the standard face color may be preset or adjusted in real time.
  • a color threshold may be preset for comparison with the obtained average, where the preset color threshold may be a color close to ordinary skin color; specifically, the difference between the average and the preset color threshold can be calculated.
  • when the difference is less than a preset difference threshold, the preset standard face color may be weighted by a first preset weight and the average by a second preset weight, and the weighted values summed to obtain the target color. Since the second preset weight is greater than the first, the target color obtained by the weighted summation largely reflects the color of the facial skin in the target image, ensuring that the rendering result stays close to the original skin color in the target image.
  • when the difference is greater than the preset difference threshold, the obtained average differs substantially from ordinary skin color; the face in the target image may be in a relatively extreme environment, producing an abnormal average.
  • in this case, the first preset weight can be increased and/or the second preset weight reduced; the preset standard face color is then weighted by the increased first preset weight and the average by the reduced second preset weight, and the weighted values are summed to obtain the target color.
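Putting the threshold logic together, one possible sketch is below. The color threshold, difference threshold, base weights, and adjustment step are all assumed values; the disclosure does not fix them:

```python
# Illustrative weight selection: if the sampled mean differs too much from a
# preset color threshold, lean harder on the standard face color.
def pick_weights(mean, color_threshold=(0.8, 0.65, 0.55),
                 diff_threshold=0.5, w_standard=0.3, w_mean=0.7, step=0.2):
    """Return (standard_weight, mean_weight) for the blend."""
    diff = sum(abs(m - t) for m, t in zip(mean, color_threshold))
    if diff > diff_threshold:
        # Abnormal mean (e.g. extreme lighting): increase the weight of the
        # standard color and reduce the mean's weight accordingly.
        return w_standard + step, w_mean - step
    return w_standard, w_mean

def target_color(mean, standard=(0.97, 0.81, 0.7)):
    ws, wm = pick_weights(mean)
    return tuple(ws * s + wm * m for s, m in zip(standard, mean))
```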
  • Fig. 9 is a schematic flowchart showing yet another image processing method according to an embodiment of the present disclosure. As shown in FIG. 9, the rendering of pixels in the face region of the target image according to the target color includes:
  • the pixels in the first face area are rendered according to the target color.
  • in this embodiment, the first face region that does not contain hair may be determined in the target image according to the first face mask image, and the pixels in the first face region may then be rendered according to the target color. In this way, the color of all pixels in the face region is set to the target color, erasing facial features such as the eyes, eyebrows, nose, and mouth.
  • Fig. 10 is a schematic flowchart showing yet another image processing method according to an embodiment of the present disclosure. As shown in FIG. 10, the rendering of pixels in the face region of the target image according to the target color includes:
  • the pixels in the second face area are rendered according to the target color.
  • in the embodiment above, the pixels in the first face region are rendered. Since the first face region does not contain hair, a clear dividing line appears at the boundary between the first face region and the hair, which looks unnatural to users.
  • accordingly, the face key points of the target image can be acquired and a second face mask image containing hair determined according to the face key points; a second face region containing hair is then determined in the target image according to the second face mask image. Since the second face region contains hair, there is no obvious boundary with the hair, and rendering the pixels in the second face region according to the target color produces a result that looks natural.
  • Fig. 11 is a schematic diagram showing a rendered second face region according to an embodiment of the present disclosure.
  • as shown in Fig. 11, the second face mask image can be close to an ellipse, covering from the chin to the forehead vertically and from the left edge to the right edge of the face horizontally.
  • because the second face region contains hair, there is no obvious boundary with the hair on the forehead, so rendering the pixels in the second face region according to the target color produces a relatively natural result.
  • the rendering strength at the edge of the second face region can also be gradually reduced, giving the rendered second face region a degree of transparency so that it blends visually into the area outside the face region.
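The gradual reduction of rendering strength at the region edge could be realized as an opacity that fades with distance to the mask boundary, as in this illustrative sketch. The linear falloff and the feather width are assumptions; the disclosure does not specify a falloff function:

```python
# Illustrative edge feathering: the target color is composited over the
# original pixel with an opacity that fades to zero at the region edge.
def edge_alpha(dist_to_edge, feather=10.0):
    """Opacity: 1.0 deep inside the face region, falling linearly to 0.0
    at the boundary (feather = falloff width in pixels, an assumption)."""
    return max(0.0, min(1.0, dist_to_edge / feather))

def composite(pixel, target, alpha):
    """Blend the target color over the original pixel with opacity alpha."""
    return tuple(alpha * t + (1.0 - alpha) * p for p, t in zip(pixel, target))
```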
  • Fig. 12 is a schematic flowchart showing yet another image processing method according to an embodiment of the present disclosure.
  • in this embodiment, the target image is the k-th frame in a sequence of consecutive frames, where k is an integer greater than 1.
  • rendering the pixels in the face region of the target image includes:
  • weighted summation is performed on the target color of the k-th frame image and the target color of the previous frame of the k-th frame image;
  • the pixels in the face area of the target image are rendered according to the color obtained by the weighted summation.
  • the target image may be a single image, or the k-th frame in a sequence of consecutive frames, for example a frame of a video.
  • between consecutive frames, the lighting of the environment can change, altering the apparent color of the facial skin; the angle between the face and the light source can also change, with the same effect.
  • if the pixels in the face region were rendered based only on the target color of the current frame, the rendering results of adjacent frames could differ noticeably, and the user would perceive the color of the erased face region jumping (or flickering) between frames.
  • therefore, this embodiment can follow the steps of the embodiment shown in Fig. 1 to obtain the target color of the frame preceding the k-th frame and store it, and then perform a weighted summation of the target color of the k-th frame and the target color of the preceding frame.
  • the color obtained by the weighted summation combines the facial skin color of the k-th frame with that of the preceding frame; rendering the pixels in the face region of the target image according to this color avoids abrupt changes in the color of the face region relative to earlier frames (such as the preceding frame).
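The inter-frame smoothing above amounts to blending successive per-frame target colors, as in this sketch. The equal 0.5/0.5 weights, and the choice to blend against the already-smoothed previous color, are assumptions:

```python
# Illustrative temporal smoothing of the per-frame target color.
def smooth_target_color(current, previous, w_current=0.5, w_previous=0.5):
    """Blend frame k's target color with frame k-1's to suppress visible
    color jumps in the erased face region."""
    if previous is None:            # first frame: nothing to blend with
        return current
    return tuple(w_current * c + w_previous * p
                 for c, p in zip(current, previous))

# Running over a short sequence of per-frame target colors:
frames = [(0.6, 0.5, 0.4), (0.8, 0.5, 0.4), (0.6, 0.5, 0.4)]
prev = None
rendered = []
for color in frames:
    prev = smooth_target_color(color, prev)
    rendered.append(prev)
```

A brief brightening in frame 2 is halved in the rendered sequence, which is exactly the anti-flicker effect described above.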
  • the present disclosure also proposes an embodiment of an image processing device.
  • Fig. 13 is a schematic block diagram showing an image processing device according to an embodiment of the present disclosure.
  • the image processing device shown in this embodiment can be applied to terminals, such as mobile phones, tablet computers, wearable devices, and personal computers, and can also be applied to servers, such as local servers and cloud servers.
  • the image processing apparatus may include:
  • the first face determination module 101 is configured to determine a first face mask image of the target image, the mask not containing hair, and to obtain, according to the first face mask image, the first face region in the target image that does not contain hair;
  • the image generation module 102 is configured to perform filling of a preset gray-scale color outside the first face area to generate a preset shape of an image to be sampled;
  • the down-sampling module 103 is configured to down-sample the image to be sampled and remove from the sampling results those whose color is the preset grayscale to obtain the remaining sampling results;
  • the calculation module 104 is configured to perform calculation of the color average value of the remaining sampling results, and perform a weighted summation of the preset standard face color and the average value to obtain the target color;
  • the rendering module 105 is configured to perform rendering of pixels in the face region of the target image according to the target color.
  • Fig. 14 is a schematic block diagram showing a computing module according to an embodiment of the present disclosure.
  • the calculation module 104 includes:
  • the first average value sub-module 1041 is configured to perform calculation of the average value of the color value of the same color channel in each remaining sampling result to obtain the color average value corresponding to each of the color channels;
  • the first weighting sub-module 1042 is configured to perform a weighted summation of the color average corresponding to each color channel with the color of the corresponding channel in the standard face color, to obtain the target color of each color channel.
  • Fig. 15 is a schematic block diagram showing another computing module according to an embodiment of the present disclosure.
  • the calculation module 104 includes:
  • the second average sub-module 1043 is configured to perform calculation of the color average of each remaining sampling result
  • the difference calculation sub-module 1044 is configured to perform calculation of the difference between the average value and a preset color threshold
  • the second weighting sub-module 1045 is configured to, based on the difference being less than the preset difference threshold, weight the preset standard face color by a first preset weight and the average by a second preset weight, and sum the weighted values to obtain the target color, wherein the first preset weight is smaller than the second preset weight;
  • the third weighting sub-module 1046 is configured to, based on the difference being greater than the preset difference threshold, increase the first preset weight and/or reduce the second preset weight, weight the preset standard face color by the increased first preset weight and the average by the reduced second preset weight, and sum the weighted values to obtain the target color.
  • Fig. 16 is a schematic block diagram showing a rendering module according to an embodiment of the present disclosure.
  • the rendering module 105 includes:
  • the first region determining sub-module 1051 is configured to perform determining a first face region that does not contain hair in the target image according to the first face mask map;
  • the first rendering submodule 1052 is configured to perform rendering of pixels in the first face region according to the target color.
  • Fig. 17 is a schematic block diagram showing another rendering module according to an embodiment of the present disclosure.
  • the rendering module 105 includes:
  • the mask determination sub-module 1053 is configured to acquire face key points of the target image and determine, according to the face key points, a second face mask map that contains hair;
  • the second region determining sub-module 1054 is configured to determine, according to the second face mask map, a second face region in the target image that contains hair;
  • the second rendering sub-module 1055 is configured to render pixels in the second face region according to the target color.
  • Fig. 18 is a schematic block diagram showing still another rendering module according to an embodiment of the present disclosure.
  • the target image is the k-th frame in a sequence of consecutive frames, where k is an integer greater than 1, and the rendering module 105 includes:
  • the color acquiring sub-module 1056 is configured to acquire the target color of the frame preceding the k-th frame;
  • the weighted sum sub-module 1057 is configured to perform a weighted sum of the target color of the k-th frame and the target color of the frame preceding the k-th frame;
  • the third rendering sub-module 1058 is configured to render pixels in the face region of the target image according to the color obtained by the weighted sum.
  • the embodiment of the present disclosure also proposes an electronic device, including:
  • a processor;
  • a memory for storing instructions executable by the processor;
  • wherein the processor is configured to execute the instructions to implement the image processing method according to any of the foregoing embodiments.
  • the embodiment of the present disclosure also proposes a storage medium. When the instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to execute the image processing method described in any of the foregoing embodiments.
  • the embodiment of the present disclosure also proposes a computer program product; the program product includes a computer program stored in a readable storage medium; at least one processor of a device reads the computer program from the readable storage medium and executes it, enabling the device to execute the image processing method described in any one of the foregoing embodiments.
  • Fig. 19 is a schematic block diagram showing an electronic device according to an embodiment of the present disclosure.
  • the electronic device 1900 may be a mobile phone, a computer, a digital broadcasting terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, etc.
  • the electronic device 1900 may include one or more of the following components: a processing component 1902, a memory 1904, a power supply component 1906, a multimedia component 1908, an audio component 1910, an input/output (I/O) interface 1912, a sensor component 1914, and a communication component 1916.
  • the processing component 1902 generally controls the overall operations of the electronic device 1900, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations.
  • the processing component 1902 may include one or more processors 1920 to execute instructions to complete all or part of the steps of the foregoing image processing method.
  • the processing component 1902 may include one or more modules to facilitate the interaction between the processing component 1902 and other components.
  • the processing component 1902 may include a multimedia module to facilitate the interaction between the multimedia component 1908 and the processing component 1902.
  • the memory 1904 is configured to store various types of data to support operations in the electronic device 1900. Examples of such data include instructions for any application or method operating on the electronic device 1900, contact data, phone book data, messages, pictures, videos, etc.
  • the memory 1904 can be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
  • the power supply component 1906 provides power for various components of the electronic device 1900.
  • the power supply component 1906 may include a power management system, one or more power supplies, and other components associated with the generation, management, and distribution of power for the electronic device 1900.
  • the multimedia component 1908 includes a screen that provides an output interface between the electronic device 1900 and the user.
  • the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user.
  • the touch panel includes one or more touch sensors to sense touch, sliding, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure related to the touch or slide operation.
  • the multimedia component 1908 includes a front camera and/or a rear camera. When the electronic device 1900 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera can be a fixed optical lens system or have focal length and optical zoom capabilities.
  • the audio component 1910 is configured to output and/or input audio signals.
  • the audio component 1910 includes a microphone (MIC).
  • the microphone is configured to receive external audio signals.
  • the received audio signal may be further stored in the memory 1904 or transmitted via the communication component 1916.
  • the audio component 1910 further includes a speaker for outputting audio signals.
  • the I/O interface 1912 provides an interface between the processing component 1902 and the peripheral interface module.
  • the peripheral interface module may be a keyboard, a click wheel, a button, and the like. These buttons may include but are not limited to: home button, volume button, start button, and lock button.
  • the sensor component 1914 includes one or more sensors that provide status assessments of various aspects of the electronic device 1900.
  • the sensor component 1914 can detect the on/off status of the electronic device 1900 and the relative positioning of components, for example the display and keypad of the electronic device 1900; the sensor component 1914 can also detect a change in position of the electronic device 1900 or one of its components, the presence or absence of contact between the user and the electronic device 1900, the orientation or acceleration/deceleration of the electronic device 1900, and temperature changes of the electronic device 1900.
  • the sensor component 1914 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact.
  • the sensor component 1914 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
  • the sensor component 1914 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
  • the communication component 1916 is configured to facilitate wired or wireless communication between the electronic device 1900 and other devices.
  • the electronic device 1900 can access a wireless network based on a communication standard, such as WiFi, an operator network (such as 2G, 3G, 4G, or 5G), or a combination thereof.
  • the communication component 1916 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel.
  • the communication component 1916 also includes a near field communication (NFC) module to facilitate short-range communication.
  • the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
  • the electronic device 1900 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components to perform the above-mentioned image processing method.
  • a non-transitory computer-readable storage medium including instructions is also provided, such as the memory 1904 including instructions, which can be executed by the processor 1920 of the electronic device 1900 to complete the foregoing image processing method.
  • the non-transitory computer-readable storage medium may be ROM, random access memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

An image processing method and apparatus, an electronic device, and a storage medium. The method may include: obtaining a first face region in a target image according to a first face mask map (S101); filling a color of a preset grayscale outside the first face region to generate an image to be sampled (S102); downsampling the image to be sampled and discarding sampling results whose color is the preset grayscale color to obtain remaining sampling results (S103); computing the color mean of the remaining sampling results and performing a weighted sum of a preset standard face color and the mean to obtain a target color (S104); and rendering pixels in the face region of the target image according to the target color (S105).

Description

Image processing method and apparatus, electronic device, and storage medium
Cross-reference to related applications
The present disclosure claims priority to Chinese patent application No. 202010567699.5, entitled "图像处理方法、装置、电子设备和存储介质" ("Image processing method and apparatus, electronic device, and storage medium"), filed with the China National Intellectual Property Administration on June 19, 2020, the entire contents of which are incorporated into the present disclosure by reference.
Technical field
The present disclosure relates to the field of image processing, and in particular to image processing methods and apparatuses, electronic devices, and storage media.
Background
In image processing applications, manipulating the facial features is a relatively common operation; for example, the features may be enlarged, displaced, or erased. However, existing ways of erasing the facial features often yield poor rendering results because the color chosen to replace the features differs noticeably from the color of the face. Moreover, since a large number of pixels usually lie near the facial features, the computation involved is too heavy for devices with limited computing power.
Summary
The present disclosure provides an image processing method and apparatus, an electronic device, and a storage medium to solve at least the technical problems in the related art. The technical solutions of the present disclosure are as follows.
According to a first aspect of the embodiments of the present disclosure, an image processing method is provided, including: determining a first face mask map of a target image that does not contain hair, and obtaining, according to the first face mask map, a first face region in the target image that does not contain hair; filling a color of a preset grayscale outside the first face region to generate an image to be sampled of a preset shape; downsampling the image to be sampled, and discarding sampling results whose color is the preset grayscale color to obtain remaining sampling results; computing the color mean of the remaining sampling results, and performing a weighted sum of a preset standard face color and the mean to obtain a target color; and rendering pixels in the face region of the target image according to the target color.
According to a second aspect of the embodiments of the present disclosure, an image processing apparatus is provided, including: a first face determination module configured to determine a first face mask map of a target image that does not contain hair and obtain, according to the first face mask map, a first face region in the target image that does not contain hair; an image generation module configured to fill a color of a preset grayscale outside the first face region to generate an image to be sampled of a preset shape; a downsampling module configured to downsample the image to be sampled and discard sampling results whose color is the preset grayscale color to obtain remaining sampling results; a computing module configured to compute the color mean of the remaining sampling results and perform a weighted sum of a preset standard face color and the mean to obtain a target color; and a rendering module configured to render pixels in the face region of the target image according to the target color.
According to a third aspect of the embodiments of the present disclosure, an electronic device is provided, including: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to execute the instructions to implement the image processing method according to any of the foregoing embodiments.
According to a fourth aspect of the embodiments of the present disclosure, a storage medium is provided; when the instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to execute the image processing method according to any of the foregoing embodiments.
According to a fifth aspect of the embodiments of the present disclosure, a computer program product is provided; the program product includes a computer program stored in a readable storage medium; at least one processor of a device reads the computer program from the readable storage medium and executes it, causing the device to execute the image processing method according to any of the foregoing embodiments.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory and do not limit the present disclosure.
Brief description of the drawings
The accompanying drawings, which are incorporated into and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the present disclosure; they do not unduly limit the present disclosure.
Fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present disclosure.
Fig. 2 shows a first face mask map according to an embodiment of the present disclosure.
Fig. 3 shows an image to be sampled according to an embodiment of the present disclosure.
Fig. 4 is a schematic diagram of sampling results according to an embodiment of the present disclosure.
Fig. 5 is a schematic diagram of the color corresponding to the mean according to an embodiment of the present disclosure.
Fig. 6 is a schematic diagram of a target color according to an embodiment of the present disclosure.
Fig. 7 is a schematic flowchart of another image processing method according to an embodiment of the present disclosure.
Fig. 8 is a schematic flowchart of still another image processing method according to an embodiment of the present disclosure.
Fig. 9 is a schematic flowchart of still another image processing method according to an embodiment of the present disclosure.
Fig. 10 is a schematic flowchart of still another image processing method according to an embodiment of the present disclosure.
Fig. 11 is a schematic diagram of a rendered second face region according to an embodiment of the present disclosure.
Fig. 12 is a schematic flowchart of still another image processing method according to an embodiment of the present disclosure.
Fig. 13 is a schematic block diagram of an image processing apparatus according to an embodiment of the present disclosure.
Fig. 14 is a schematic block diagram of a computing module according to an embodiment of the present disclosure.
Fig. 15 is a schematic block diagram of another computing module according to an embodiment of the present disclosure.
Fig. 16 is a schematic block diagram of a rendering module according to an embodiment of the present disclosure.
Fig. 17 is a schematic block diagram of another rendering module according to an embodiment of the present disclosure.
Fig. 18 is a schematic block diagram of still another rendering module according to an embodiment of the present disclosure.
Fig. 19 is a schematic block diagram of an electronic device according to an embodiment of the present disclosure.
Detailed description
To help those of ordinary skill in the art better understand the technical solutions of the present disclosure, the technical solutions in the embodiments of the present disclosure are described clearly and completely below with reference to the accompanying drawings.
It should be noted that the terms "first", "second", and the like in the specification, the claims, and the above drawings of the present disclosure are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that data used in this way are interchangeable where appropriate, so that the embodiments of the present disclosure described herein can be implemented in orders other than those illustrated or described herein. The implementations described in the following embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of apparatuses and methods consistent with some aspects of the present disclosure as detailed in the appended claims.
Fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present disclosure. The image processing method shown in this embodiment is applicable to a terminal, such as a mobile phone, a tablet computer, a wearable device, or a personal computer, and is also applicable to a server, such as a local server or a cloud server.
As shown in Fig. 1, the image processing method may include the following steps:
In S101, a first face mask map of a target image that does not contain hair is determined, and a first face region that does not contain hair is obtained in the target image according to the first face mask map;
In S102, a color of a preset grayscale is filled outside the first face region to generate an image to be sampled of a preset shape;
In S103, the image to be sampled is downsampled, and sampling results whose color is the preset grayscale color are discarded to obtain remaining sampling results;
In S104, the color mean of the remaining sampling results is computed, and a weighted sum of a preset standard face color and the mean is performed to obtain a target color;
In S105, pixels in the face region of the target image are rendered according to the target color.
In some embodiments, the way the first face mask map is determined can be chosen as needed. For example, a mask determination model may be trained in advance through deep learning; the mask determination model determines a mask map that does not contain hair for an image, so the first face mask map of the target image, which does not contain hair, can be determined based on the mask determination model. As another example, a key-point determination model may be trained in advance through deep learning; the key-point determination model determines key points on the face in an image, so key points on the face in the target image can be determined based on the key-point determination model, and the closed region formed by connecting the key points on the edge of the face serves as the first face mask map.
After the first face mask map is determined, the first face region that does not contain hair can be obtained in the target image according to the first face mask map, and a color of a preset grayscale is then filled outside the first face region to generate an image to be sampled of a preset shape. The preset grayscale can be chosen between 0 and 255 as needed; for example, if the preset grayscale is 0, the preset grayscale color is black, and if the preset grayscale is 255, the preset grayscale color is white. This embodiment may choose the color with grayscale 0 or the color with grayscale 255, which helps prevent sampling results that contain face pixels from having the same color as the preset grayscale color and being discarded during subsequent sampling.
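The fill step described above can be sketched in a few lines of Python. This is an illustrative sketch only: the helper name `fill_outside_mask`, the toy 2x2 image, and the choice of grayscale 0 are assumptions for the example, not details fixed by the disclosure.

```python
# Fill every pixel outside the binary face mask with the preset grayscale
# color (grayscale 0, i.e. black), producing the image to be sampled.
FILL = (0, 0, 0)  # preset grayscale color

def fill_outside_mask(image, mask, fill=FILL):
    """image: H x W grid of RGB tuples; mask: H x W grid of 0/1 flags."""
    return [
        [pixel if inside else fill for pixel, inside in zip(row, mask_row)]
        for row, mask_row in zip(image, mask)
    ]

image = [[(200, 150, 120), (10, 60, 90)],
         [(195, 148, 118), (12, 58, 88)]]
mask = [[1, 0],   # 1 = inside the first face region
        [1, 0]]   # 0 = outside, to be filled
to_sample = fill_outside_mask(image, mask)
```

Choosing pure black (or pure white) as the fill makes it unlikely that a sampling result containing real skin pixels will later be mistaken for fill and discarded.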
Fig. 2 shows a first face mask map according to an embodiment of the present disclosure. Fig. 3 shows an image to be sampled according to an embodiment of the present disclosure.
Taking a preset grayscale of 0, whose color is black, as an example: as shown in Fig. 2, the first face mask map can be used to obtain, in the target image, the first face region that does not contain hair shown in Fig. 3. The preset shape formed after filling the preset grayscale color outside the first face region may be a rectangle, as shown in Fig. 3, or another shape; the present disclosure places no restriction on this.
The image to be sampled can be downsampled, and the downsampling scheme can be set as needed. For example, it can be set to 4*7, that is, 4 samples along the width and 7 samples along the height, yielding 28 sampling results. Each sample may collect a single pixel or multiple pixels near a location, or the image to be sampled may be evenly divided into 4*7 regions with the mean color of the pixels in each region taken as the sampling result.
Since human skin does not normally contain large areas of the pure preset grayscale color, a sampling result whose color is the preset grayscale color comes entirely from the part filled outside the first face region and contains no skin color to reference. Therefore, sampling results whose color is the preset grayscale color can be discarded, and the remaining sampling results all contain skin color available for reference.
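The region-averaging variant of this downsampling step can be sketched as follows. The 1x2 grid and the tiny test image are assumptions chosen to keep the example small; the description's own example uses a 4*7 grid.

```python
# Split the image into grid_h x grid_w regions and take the mean color of
# each region as one sampling result.
def downsample(image, grid_h, grid_w):
    h, w = len(image), len(image[0])
    bh, bw = h // grid_h, w // grid_w  # block size per region
    samples = []
    for gy in range(grid_h):
        for gx in range(grid_w):
            block = [image[y][x]
                     for y in range(gy * bh, (gy + 1) * bh)
                     for x in range(gx * bw, (gx + 1) * bw)]
            n = len(block)
            samples.append(tuple(sum(p[c] for p in block) // n for c in range(3)))
    return samples

image = [[(10, 10, 10), (30, 30, 30), (0, 0, 0), (0, 0, 0)],
         [(20, 20, 20), (40, 40, 40), (0, 0, 0), (0, 0, 0)]]
samples = downsample(image, 1, 2)  # 1 x 2 grid -> 2 sampling results
```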
Fig. 4 is a schematic diagram of sampling results according to an embodiment of the present disclosure.
As shown in Fig. 4, there are two rows of sampling results with 14 per row, 28 in total. Among the 28 sampling results, 24 have the preset grayscale color and 4 do not; the 24 results with the preset grayscale color can therefore be discarded, keeping the remaining 4 results whose color is not the preset grayscale color.
There may be one or more remaining sampling results. If there is one remaining sampling result, its color is the mean; if there are multiple, the colors of the remaining sampling results can be summed and the mean computed. For example, a color can be represented by a grayscale value from 0 to 255, or the 0-255 value can be converted to the 0-1 range.
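Discarding the fill-colored results and averaging the rest can be sketched as below. The helper name and the concrete sample colors are illustrative; the 24-plus-4 layout mirrors the 4*7 example above.

```python
FILL = (0, 0, 0)  # preset grayscale color used outside the face region

def mean_skin_color(samples, fill=FILL):
    kept = [s for s in samples if s != fill]  # remaining sampling results
    n = len(kept)
    return tuple(sum(s[c] for s in kept) / n for c in range(3))

# 24 fill-colored results and 4 results containing skin color.
samples = [(0, 0, 0)] * 24 + [(200, 150, 120), (198, 148, 118),
                              (202, 152, 122), (200, 150, 120)]
mean = mean_skin_color(samples)
```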
Although the color of a remaining sampling result is not the preset grayscale color, its sampling region may cover both the filled preset-grayscale area and the face region, which makes the resulting mean too dark. Likewise, if the target image was captured in a rather extreme environment, for example in dim light, every remaining sample will be dark, and so will the mean.
With this in mind, after the mean is computed, this embodiment can further perform a weighted sum of a preset standard face color and the mean to obtain the target color, where the standard face color may be a preset color close to the color of facial skin. By performing a weighted sum of the preset standard face color and the color mean of the sampling results, the standard face color corrects the mean to some extent, preventing a color derived from the mean alone from deviating too far from typical face colors.
Fig. 5 is a schematic diagram of the color corresponding to the mean according to an embodiment of the present disclosure. Fig. 6 is a schematic diagram of a target color according to an embodiment of the present disclosure. As shown in Fig. 5, the color corresponding to the mean is dark; the target color obtained by the weighted sum of the preset standard face color and the mean, shown in Fig. 6, is closer to the color of facial skin. The mean thus reflects the color of the face in the target image, while the resulting target color is guaranteed not to differ too much from typical face colors.
Finally, pixels in the face region of the target image can be rendered according to the obtained target color, setting the color of all pixels in the face region to the target color and achieving the effect of erasing the eyes, eyebrows, nose, mouth, and other features in the face region.
According to the embodiments of the present disclosure, because the target color is obtained by downsampling, and the color information in the downsampled results involves a relatively small amount of data, the processing suits devices with limited computing power. Moreover, the target color used for rendering is obtained by a weighted sum of the preset standard face color and the mean: the mean reflects the color of the face in the target image, while the standard face color provides correction.
The embodiments of the present disclosure thus make the target color match the color of the face in the target image while preventing it from deviating too far from typical face colors.
Fig. 7 is a schematic flowchart of another image processing method according to an embodiment of the present disclosure. As shown in Fig. 7, computing the color mean of the remaining sampling results and performing a weighted sum of the preset standard face color and the mean to obtain the target color includes:
In S1041, computing the mean of the color values of the same color channel across the remaining sampling results, to obtain the color mean corresponding to each color channel;
In S1042, performing a weighted sum of the color mean corresponding to each color channel and the color of the corresponding channel in the standard face color, to obtain the target color of each color channel.
In some embodiments, a remaining sampling result may include colors of multiple color channels, for example the three channels R (red), G (green), and B (blue). The color of each channel can be represented by a grayscale value from 0 to 255, or the 0-255 value can be converted to the 0-1 range. For the remaining sampling results, the mean of the color values of the same channel across the results can be computed to obtain the color mean of each channel.
The standard face color also contains colors of the three channels. For example, if the three channel colors of the remaining sampling results are converted to the 0-1 range, the three channel colors of the standard face color can likewise be represented by values between 0 and 1; for instance, the standard face color can be set to (0.97, 0.81, 0.7). The color mean of each channel can then be combined, by weighted sum, with the color of the corresponding channel in the standard face color to obtain the target color of each channel.
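A per-channel weighted sum in the 0-1 range might look like the sketch below. The standard face color (0.97, 0.81, 0.7) comes from the description; the 0.3/0.7 weight split and the sampled mean are assumptions made for illustration.

```python
STANDARD = (0.97, 0.81, 0.70)   # preset standard face color (0-1 range)
W_STANDARD, W_MEAN = 0.3, 0.7   # assumed weights; the disclosure fixes
                                # only their relative order, not the values

def blend(mean, standard=STANDARD, w_std=W_STANDARD, w_mean=W_MEAN):
    # Weighted sum per color channel: target = w_std*standard + w_mean*mean.
    return tuple(round(w_std * s + w_mean * m, 6) for s, m in zip(standard, mean))

target = blend((0.60, 0.45, 0.40))  # a darkish sampled mean
```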
Fig. 8 is a schematic flowchart of still another image processing method according to an embodiment of the present disclosure. As shown in Fig. 8, computing the color mean of the remaining sampling results and performing a weighted sum of the preset standard face color and the mean to obtain the target color includes:
In S1043, computing the color mean of the remaining sampling results;
In S1044, computing the difference between the mean and a preset color threshold;
In S1045, based on the difference being less than a preset difference threshold, weighting the preset standard face color by a first preset weight value and the mean by a second preset weight value, and summing the weighted values to obtain the target color, where the first preset weight value is smaller than the second preset weight value;
In S1046, based on the difference being greater than the preset difference threshold, increasing the first preset weight value and/or decreasing the second preset weight value, weighting the preset standard face color by the increased first preset weight value and the mean by the decreased second preset weight value, and summing the weighted values to obtain the target color.
In some embodiments, in the weighted sum of the preset standard face color and the mean, the weights of the mean and the standard face color may be preset or adjusted in real time.
For example, a color threshold can be set in advance for comparison with the obtained mean, where the preset color threshold may be a color close to typical skin color; specifically, the difference between the mean and the preset color threshold can be computed.
If the difference is less than the preset difference threshold, the obtained mean is close to typical skin color. The preset standard face color can then be weighted by the first preset weight value, the mean weighted by the second preset weight value, and the weighted values summed to obtain the target color. Because the second preset weight value is larger than the first, the target color obtained by the weighted sum largely reflects the color of the facial skin in the target image, ensuring that the rendering result stays close to the original skin color in the target image.
If the difference is greater than the preset difference threshold, the obtained mean is far from typical skin color; the face in the target image may be in a rather extreme environment, making the mean relatively abnormal. The first preset weight value can then be increased and/or the second preset weight value decreased; the preset standard face color is weighted by the increased first preset weight value, the mean is weighted by the decreased second preset weight value, and the weighted values are summed to obtain the target color. By decreasing the second weight value and increasing the first weight value, the influence of the abnormal mean on the target color is reduced and the corrective effect of the standard face color is strengthened.
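The weight selection in S1045/S1046 reduces to a simple branch, sketched below. All the numeric constants here (threshold values, weights, and the boost amount) are assumptions; the disclosure only fixes that the first weight is smaller than the second and is raised when the mean looks abnormal.

```python
COLOR_THRESHOLD = 0.55    # preset color threshold (assumed, 0-1 range)
DIFF_THRESHOLD = 0.20     # preset difference threshold (assumed)
W_STD, W_MEAN = 0.3, 0.7  # first/second preset weights, W_STD < W_MEAN
BOOST = 0.2               # amount shifted toward the standard color (assumed)

def pick_weights(mean_value):
    diff = abs(mean_value - COLOR_THRESHOLD)
    if diff < DIFF_THRESHOLD:
        return W_STD, W_MEAN              # mean looks like normal skin: trust it
    return W_STD + BOOST, W_MEAN - BOOST  # abnormal mean: lean on the standard color

normal = pick_weights(0.50)    # close to the threshold
abnormal = pick_weights(0.10)  # far from it, e.g. a dim-light frame
```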
Fig. 9 is a schematic flowchart of still another image processing method according to an embodiment of the present disclosure. As shown in Fig. 9, rendering pixels in the face region of the target image according to the target color includes:
In S1051, determining, according to the first face mask map, a first face region in the target image that does not contain hair;
In S1052, rendering pixels in the first face region according to the target color.
In some embodiments, the first face region that does not contain hair can be determined in the target image according to the first face mask map, and the pixels in the first face region are then rendered according to the target color, setting the color of all pixels in the face region to the target color and achieving the effect of erasing the eyes, eyebrows, nose, mouth, and other features in the face region.
Fig. 10 is a schematic flowchart of still another image processing method according to an embodiment of the present disclosure. As shown in Fig. 10, rendering pixels in the face region of the target image according to the target color includes:
In S1053, acquiring face key points of the target image, and determining, according to the face key points, a second face mask map that contains hair;
In S1054, determining, according to the second face mask map, a second face region in the target image that contains hair;
In S1055, rendering pixels in the second face region according to the target color.
In the embodiment shown in Fig. 9, pixels in the first face region are rendered; because the first face region does not contain hair, there is a visible dividing line where the first face region meets the hair, which looks unnatural to the user.
In this embodiment, face key points of the target image can be acquired and a second face mask map containing hair determined according to the face key points; a second face region containing hair is then determined in the target image according to the second face mask map. Because the second face region contains hair, it does not form a visible boundary with the hair, so rendering pixels in the second face region according to the target color produces a result that looks comparatively natural.
Fig. 11 is a schematic diagram of a rendered second face region according to an embodiment of the present disclosure.
As shown in Fig. 11, the second face mask map can be roughly elliptical, covering the chin to the forehead from top to bottom and the left edge of the face to the right edge from left to right. The second face region contains hair, that is, there is no visible boundary with the hair at the forehead, so rendering pixels in the second face region according to the target color produces a comparatively natural-looking result.
Further, during rendering, the rendering effect can be gradually reduced toward the edge of the second face region so that the rendered second face region has some transparency, visually blending with the area outside the face region.
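One way to realize the edge fade just described is a per-pixel alpha that falls off near the region boundary. This sketch is a hypothetical realization; the disclosure does not specify the falloff function, so the linear ramp, the `fade` width, and the helper names are all assumptions.

```python
# Blend the target color into the original pixel with an alpha that is 0
# at the region edge and 1 in the interior, giving the rendered region a
# soft, slightly transparent rim instead of a hard boundary.
def edge_alpha(dist_to_edge, fade=4):
    return min(dist_to_edge, fade) / fade

def blend_pixel(original, target, alpha):
    return tuple(round((1 - alpha) * o + alpha * t)
                 for o, t in zip(original, target))

px = blend_pixel((100, 100, 100), (200, 150, 120), edge_alpha(2, fade=4))
```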
Fig. 12 is a schematic flowchart of still another image processing method according to an embodiment of the present disclosure. As shown in Fig. 12, the target image is the k-th frame in a sequence of consecutive frames, where k is an integer greater than 1, and rendering pixels in the face region of the target image according to the target color includes:
In S1056, acquiring the target color of the frame preceding the k-th frame;
In S1057, performing a weighted sum of the target color of the k-th frame and the target color of the frame preceding the k-th frame;
In S1058, rendering pixels in the face region of the target image according to the color obtained by the weighted sum.
In some embodiments, the target image may be a standalone image or the k-th frame in a sequence of consecutive frames, for example one frame of a video.
Across consecutive frames, the lighting of the environment around the face may change, which changes the color of the facial skin; the angle between the face and the light source may also change, which likewise changes the skin color. If pixels in the face region are rendered only according to the target color of the target image's own frame, the rendering results of adjacent frames may differ noticeably, making the color of the face region with erased features appear to jump (or flicker) to the user.
To address this, this embodiment can follow, for example, the steps of the embodiment shown in Fig. 1 to obtain the target color of the frame preceding the k-th frame and store it, and then perform a weighted sum of the target color of the k-th frame and the target color of the preceding frame. The color obtained by the weighted sum combines the facial skin color of the k-th frame with that of the preceding frame; rendering pixels in the face region of the target image according to this color prevents the rendering result from showing a color jump relative to the face region of images before the k-th frame (for example, the preceding frame).
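This inter-frame smoothing can be sketched as a running weighted sum. The equal 0.5/0.5 split is an assumed choice, as the disclosure does not fix the weight values.

```python
# Weighted sum of the k-th frame's target color with the (k-1)-th frame's
# target color, damping frame-to-frame color jumps (flicker).
def smooth(current, previous, w_current=0.5):
    w_prev = 1.0 - w_current
    return tuple(w_current * c + w_prev * p for c, p in zip(current, previous))

color = smooth((0.8, 0.6, 0.5), (0.6, 0.6, 0.7))  # frame k vs. frame k-1
```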
Corresponding to the foregoing embodiments of the image processing method, the present disclosure also provides embodiments of an image processing apparatus.
Fig. 13 is a schematic block diagram of an image processing apparatus according to an embodiment of the present disclosure. The image processing apparatus shown in this embodiment is applicable to a terminal, such as a mobile phone, a tablet computer, a wearable device, or a personal computer, and is also applicable to a server, such as a local server or a cloud server.
As shown in Fig. 13, the image processing apparatus may include:
a first face determination module 101, configured to determine a first face mask map of a target image that does not contain hair, and obtain, according to the first face mask map, a first face region in the target image that does not contain hair;
an image generation module 102, configured to fill a color of a preset grayscale outside the first face region to generate an image to be sampled of a preset shape;
a downsampling module 103, configured to downsample the image to be sampled and discard sampling results whose color is the preset grayscale color to obtain remaining sampling results;
a computing module 104, configured to compute the color mean of the remaining sampling results and perform a weighted sum of a preset standard face color and the mean to obtain a target color;
a rendering module 105, configured to render pixels in the face region of the target image according to the target color.
Fig. 14 is a schematic block diagram of a computing module according to an embodiment of the present disclosure. As shown in Fig. 14, the computing module 104 includes:
a first mean sub-module 1041, configured to compute the mean of the color values of the same color channel across the remaining sampling results, to obtain the color mean corresponding to each color channel;
a first weighting sub-module 1042, configured to perform a weighted sum of the color mean corresponding to each color channel and the color of the corresponding channel in the standard face color, to obtain the target color of each color channel.
Fig. 15 is a schematic block diagram of another computing module according to an embodiment of the present disclosure. As shown in Fig. 15, the computing module 104 includes:
a second mean sub-module 1043, configured to compute the color mean of the remaining sampling results;
a difference calculation sub-module 1044, configured to compute the difference between the mean and a preset color threshold;
a second weighting sub-module 1045, configured to, based on the difference being less than a preset difference threshold, weight the preset standard face color by a first preset weight value and the mean by a second preset weight value, and sum the weighted values to obtain the target color, where the first preset weight value is smaller than the second preset weight value;
a third weighting sub-module 1046, configured to, based on the difference being greater than the preset difference threshold, increase the first preset weight value and/or decrease the second preset weight value, weight the preset standard face color by the increased first preset weight value and the mean by the decreased second preset weight value, and sum the weighted values to obtain the target color.
Fig. 16 is a schematic block diagram of a rendering module according to an embodiment of the present disclosure. As shown in Fig. 16, the rendering module 105 includes:
a first region determination sub-module 1051, configured to determine, according to the first face mask map, a first face region in the target image that does not contain hair;
a first rendering sub-module 1052, configured to render pixels in the first face region according to the target color.
Fig. 17 is a schematic block diagram of another rendering module according to an embodiment of the present disclosure. As shown in Fig. 17, the rendering module 105 includes:
a mask determination sub-module 1053, configured to acquire face key points of the target image and determine, according to the face key points, a second face mask map that contains hair;
a second region determination sub-module 1054, configured to determine, according to the second face mask map, a second face region in the target image that contains hair;
a second rendering sub-module 1055, configured to render pixels in the second face region according to the target color.
Fig. 18 is a schematic block diagram of still another rendering module according to an embodiment of the present disclosure. As shown in Fig. 18, the target image is the k-th frame in a sequence of consecutive frames, where k is an integer greater than 1, and the rendering module 105 includes:
a color acquisition sub-module 1056, configured to acquire the target color of the frame preceding the k-th frame;
a weighted sum sub-module 1057, configured to perform a weighted sum of the target color of the k-th frame and the target color of the frame preceding the k-th frame;
a third rendering sub-module 1058, configured to render pixels in the face region of the target image according to the color obtained by the weighted sum.
An embodiment of the present disclosure further provides an electronic device, including:
a processor;
a memory for storing instructions executable by the processor;
wherein the processor is configured to execute the instructions to implement the image processing method according to any of the foregoing embodiments.
An embodiment of the present disclosure further provides a storage medium; when the instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to execute the image processing method according to any of the foregoing embodiments.
An embodiment of the present disclosure further provides a computer program product; the program product includes a computer program stored in a readable storage medium; at least one processor of a device reads the computer program from the readable storage medium and executes it, causing the device to execute the image processing method according to any of the foregoing embodiments.
Fig. 19 is a schematic block diagram of an electronic device according to an embodiment of the present disclosure. For example, the electronic device 1900 may be a mobile phone, a computer, a digital broadcasting terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or the like.
Referring to Fig. 19, the electronic device 1900 may include one or more of the following components: a processing component 1902, a memory 1904, a power supply component 1906, a multimedia component 1908, an audio component 1910, an input/output (I/O) interface 1912, a sensor component 1914, and a communication component 1916.
The processing component 1902 generally controls the overall operation of the electronic device 1900, such as operations associated with display, telephone calls, data communication, camera operation, and recording. The processing component 1902 may include one or more processors 1920 to execute instructions to complete all or part of the steps of the above image processing method. In addition, the processing component 1902 may include one or more modules that facilitate interaction between the processing component 1902 and other components; for example, it may include a multimedia module to facilitate interaction between the multimedia component 1908 and the processing component 1902.
The memory 1904 is configured to store various types of data to support operation of the electronic device 1900. Examples of such data include instructions for any application or method operating on the electronic device 1900, contact data, phone book data, messages, pictures, videos, and so on. The memory 1904 can be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
The power supply component 1906 provides power for the various components of the electronic device 1900. It may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 1900.
The multimedia component 1908 includes a screen that provides an output interface between the electronic device 1900 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, it may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the panel; the touch sensors may sense not only the boundary of a touch or swipe but also its duration and pressure. In some embodiments, the multimedia component 1908 includes a front camera and/or a rear camera. When the electronic device 1900 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capability.
The audio component 1910 is configured to output and/or input audio signals. For example, the audio component 1910 includes a microphone (MIC); when the electronic device 1900 is in an operation mode such as a call mode, a recording mode, or a speech recognition mode, the microphone is configured to receive external audio signals. The received audio signals may be further stored in the memory 1904 or sent via the communication component 1916. In some embodiments, the audio component 1910 also includes a speaker for outputting audio signals.
The I/O interface 1912 provides an interface between the processing component 1902 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, and the like. These buttons may include but are not limited to a home button, volume buttons, a start button, and a lock button.
The sensor component 1914 includes one or more sensors that provide status assessments of various aspects of the electronic device 1900. For example, the sensor component 1914 can detect the on/off state of the electronic device 1900 and the relative positioning of components, for example the display and keypad of the electronic device 1900; it can also detect a change in position of the electronic device 1900 or one of its components, the presence or absence of contact between the user and the electronic device 1900, the orientation or acceleration/deceleration of the electronic device 1900, and temperature changes of the electronic device 1900. The sensor component 1914 may include a proximity sensor configured to detect nearby objects without any physical contact, and may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 1914 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 1916 is configured to facilitate wired or wireless communication between the electronic device 1900 and other devices. The electronic device 1900 can access a wireless network based on a communication standard, such as WiFi, a carrier network (such as 2G, 3G, 4G, or 5G), or a combination thereof. In some embodiments, the communication component 1916 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In some embodiments, the communication component 1916 also includes a near field communication (NFC) module to facilitate short-range communication; for example, the NFC module can be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an embodiment of the present disclosure, the electronic device 1900 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components to perform the above image processing method.
In an embodiment of the present disclosure, a non-transitory computer-readable storage medium including instructions is also provided, for example the memory 1904 including instructions; the instructions can be executed by the processor 1920 of the electronic device 1900 to complete the above image processing method. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
Those skilled in the art will readily conceive of other embodiments of the present disclosure after considering the specification and practicing the disclosure set forth herein. The present disclosure is intended to cover any variations, uses, or adaptations that follow its general principles and include common knowledge or customary technical means in the art not disclosed herein. The specification and embodiments are to be regarded as exemplary only, with the true scope and spirit of the present disclosure indicated by the following claims.
It should be understood that the present disclosure is not limited to the precise structures described above and shown in the accompanying drawings, and that various modifications and changes can be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.
It should be noted that, herein, relational terms such as first and second are used only to distinguish one entity or operation from another and do not necessarily require or imply any actual relationship or order between these entities or operations. The terms "include", "comprise", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the presence of additional identical elements in the process, method, article, or device that includes the element.
The methods and apparatuses provided by the embodiments of the present disclosure have been described in detail above. Specific examples have been used herein to explain the principles and implementations of the present disclosure, and the descriptions of the above embodiments are only intended to help understand the method of the present disclosure and its core idea. Meanwhile, for those of ordinary skill in the art, there will be changes in the specific implementation and scope of application based on the idea of the present disclosure. In summary, the contents of this specification should not be construed as limiting the present disclosure.

Claims (24)

  1. An image processing method, comprising:
    determining a first face mask map of a target image that does not contain hair, and obtaining, according to the first face mask map, a first face region in the target image that does not contain hair;
    filling a color of a preset grayscale outside the first face region to generate an image to be sampled of a preset shape;
    downsampling the image to be sampled, and discarding sampling results whose color is the preset grayscale color to obtain remaining sampling results;
    computing the color mean of the remaining sampling results, and performing a weighted sum of a preset standard face color and the mean to obtain a target color;
    rendering pixels in the face region of the target image according to the target color.
  2. The method according to claim 1, wherein computing the color mean of the remaining sampling results and performing a weighted sum of a preset standard face color and the mean to obtain a target color comprises:
    computing the mean of the color values of the same color channel across the remaining sampling results, to obtain the color mean corresponding to each color channel;
    performing a weighted sum of the color mean corresponding to each color channel and the color of the corresponding channel in the standard face color, to obtain the target color of each color channel.
  3. The method according to claim 1, wherein computing the color mean of the remaining sampling results and performing a weighted sum of a preset standard face color and the mean to obtain a target color comprises:
    computing the color mean of the remaining sampling results;
    computing the difference between the mean and a preset color threshold;
    based on the difference being less than a preset difference threshold, weighting the preset standard face color by a first preset weight value and the mean by a second preset weight value, and summing the weighted values to obtain the target color, wherein the first preset weight value is smaller than the second preset weight value;
    based on the difference being greater than the preset difference threshold, increasing the first preset weight value and/or decreasing the second preset weight value, weighting the preset standard face color by the increased first preset weight value and the mean by the decreased second preset weight value, and summing the weighted values to obtain the target color.
  4. The method according to claim 1, wherein rendering pixels in the face region of the target image according to the target color comprises:
    determining, according to the first face mask map, a first face region in the target image that does not contain hair;
    rendering pixels in the first face region according to the target color.
  5. The method according to claim 1, wherein rendering pixels in the face region of the target image according to the target color comprises:
    acquiring face key points of the target image, and determining, according to the face key points, a second face mask map that contains hair;
    determining, according to the second face mask map, a second face region in the target image that contains hair;
    rendering pixels in the second face region according to the target color.
  6. The method according to any one of claims 1 to 5, wherein the target image is the k-th frame in a sequence of consecutive frames, k being an integer greater than 1, and rendering pixels in the face region of the target image according to the target color comprises:
    acquiring the target color of the frame preceding the k-th frame;
    performing a weighted sum of the target color of the k-th frame and the target color of the frame preceding the k-th frame;
    rendering pixels in the face region of the target image according to the color obtained by the weighted sum.
  7. An image processing apparatus, comprising:
    a first face determination module, configured to determine a first face mask map of a target image that does not contain hair, and obtain, according to the first face mask map, a first face region in the target image that does not contain hair;
    an image generation module, configured to fill a color of a preset grayscale outside the first face region to generate an image to be sampled of a preset shape;
    a downsampling module, configured to downsample the image to be sampled and discard sampling results whose color is the preset grayscale color to obtain remaining sampling results;
    a computing module, configured to compute the color mean of the remaining sampling results and perform a weighted sum of a preset standard face color and the mean to obtain a target color;
    a rendering module, configured to render pixels in the face region of the target image according to the target color.
  8. The apparatus according to claim 7, wherein the computing module comprises:
    a first mean sub-module, configured to compute the mean of the color values of the same color channel across the remaining sampling results, to obtain the color mean corresponding to each color channel;
    a first weighting sub-module, configured to perform a weighted sum of the color mean corresponding to each color channel and the color of the corresponding channel in the standard face color, to obtain the target color of each color channel.
  9. The apparatus according to claim 7, wherein the computing module comprises:
    a second mean sub-module, configured to compute the color mean of the remaining sampling results;
    a difference calculation sub-module, configured to compute the difference between the mean and a preset color threshold;
    a second weighting sub-module, configured to, based on the difference being less than a preset difference threshold, weight the preset standard face color by a first preset weight value and the mean by a second preset weight value, and sum the weighted values to obtain the target color, wherein the first preset weight value is smaller than the second preset weight value;
    a third weighting sub-module, configured to, based on the difference being greater than the preset difference threshold, increase the first preset weight value and/or decrease the second preset weight value, weight the preset standard face color by the increased first preset weight value and the mean by the decreased second preset weight value, and sum the weighted values to obtain the target color.
  10. The apparatus according to claim 7, wherein the rendering module comprises:
    a first region determination sub-module, configured to determine, according to the first face mask map, a first face region in the target image that does not contain hair;
    a first rendering sub-module, configured to render pixels in the first face region according to the target color.
  11. The apparatus according to claim 7, wherein the rendering module comprises:
    a mask determination sub-module, configured to acquire face key points of the target image and determine, according to the face key points, a second face mask map that contains hair;
    a second region determination sub-module, configured to determine, according to the second face mask map, a second face region in the target image that contains hair;
    a second rendering sub-module, configured to render pixels in the second face region according to the target color.
  12. The apparatus according to any one of claims 7 to 11, wherein the target image is the k-th frame in a sequence of consecutive frames, k being an integer greater than 1, and the rendering module comprises:
    a color acquisition sub-module, configured to acquire the target color of the frame preceding the k-th frame;
    a weighted sum sub-module, configured to perform a weighted sum of the target color of the k-th frame and the target color of the frame preceding the k-th frame;
    a third rendering sub-module, configured to render pixels in the face region of the target image according to the color obtained by the weighted sum.
  13. An electronic device, comprising:
    a processor;
    a memory for storing instructions executable by the processor;
    wherein the processor is configured to execute the instructions to perform the following operations:
    determining a first face mask map of a target image that does not contain hair, and obtaining, according to the first face mask map, a first face region in the target image that does not contain hair;
    filling a color of a preset grayscale outside the first face region to generate an image to be sampled of a preset shape;
    downsampling the image to be sampled, and discarding sampling results whose color is the preset grayscale color to obtain remaining sampling results;
    computing the color mean of the remaining sampling results, and performing a weighted sum of a preset standard face color and the mean to obtain a target color;
    rendering pixels in the face region of the target image according to the target color.
  14. The electronic device according to claim 13, wherein the processor is configured to execute the instructions to perform the following operations:
    computing the mean of the color values of the same color channel across the remaining sampling results, to obtain the color mean corresponding to each color channel;
    performing a weighted sum of the color mean corresponding to each color channel and the color of the corresponding channel in the standard face color, to obtain the target color of each color channel.
  15. The electronic device according to claim 13, wherein the processor is configured to execute the instructions to perform the following operations:
    computing the color mean of the remaining sampling results;
    computing the difference between the mean and a preset color threshold;
    based on the difference being less than a preset difference threshold, weighting the preset standard face color by a first preset weight value and the mean by a second preset weight value, and summing the weighted values to obtain the target color, wherein the first preset weight value is smaller than the second preset weight value;
    based on the difference being greater than the preset difference threshold, increasing the first preset weight value and/or decreasing the second preset weight value, weighting the preset standard face color by the increased first preset weight value and the mean by the decreased second preset weight value, and summing the weighted values to obtain the target color.
  16. The electronic device according to claim 13, wherein the processor is configured to execute the instructions to perform the following operations:
    determining, according to the first face mask map, a first face region in the target image that does not contain hair;
    rendering pixels in the first face region according to the target color.
  17. The electronic device according to claim 13, wherein the processor is configured to execute the instructions to perform the following operations:
    acquiring face key points of the target image, and determining, according to the face key points, a second face mask map that contains hair;
    determining, according to the second face mask map, a second face region in the target image that contains hair;
    rendering pixels in the second face region according to the target color.
  18. The electronic device according to any one of claims 13 to 17, wherein the processor is configured to execute the instructions to perform the following operations:
    acquiring the target color of the frame preceding the k-th frame;
    performing a weighted sum of the target color of the k-th frame and the target color of the frame preceding the k-th frame;
    rendering pixels in the face region of the target image according to the color obtained by the weighted sum.
  19. A storage medium, wherein when the instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to perform the following operations:
    determining a first face mask map of a target image that does not contain hair, and obtaining, according to the first face mask map, a first face region in the target image that does not contain hair;
    filling a color of a preset grayscale outside the first face region to generate an image to be sampled of a preset shape;
    downsampling the image to be sampled, and discarding sampling results whose color is the preset grayscale color to obtain remaining sampling results;
    computing the color mean of the remaining sampling results, and performing a weighted sum of a preset standard face color and the mean to obtain a target color;
    rendering pixels in the face region of the target image according to the target color.
  20. The storage medium according to claim 19, wherein when the instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to perform the following operations:
    computing the mean of the color values of the same color channel across the remaining sampling results, to obtain the color mean corresponding to each color channel;
    performing a weighted sum of the color mean corresponding to each color channel and the color of the corresponding channel in the standard face color, to obtain the target color of each color channel.
  21. The storage medium according to claim 19, wherein when the instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to perform the following operations:
    computing the color mean of the remaining sampling results;
    computing the difference between the mean and a preset color threshold;
    based on the difference being less than a preset difference threshold, weighting the preset standard face color by a first preset weight value and the mean by a second preset weight value, and summing the weighted values to obtain the target color, wherein the first preset weight value is smaller than the second preset weight value;
    based on the difference being greater than the preset difference threshold, increasing the first preset weight value and/or decreasing the second preset weight value, weighting the preset standard face color by the increased first preset weight value and the mean by the decreased second preset weight value, and summing the weighted values to obtain the target color.
  22. The storage medium according to claim 19, wherein when the instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to perform the following operations:
    determining, according to the first face mask map, a first face region in the target image that does not contain hair;
    rendering pixels in the first face region according to the target color.
  23. The storage medium according to claim 19, wherein when the instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to perform the following operations:
    acquiring face key points of the target image, and determining, according to the face key points, a second face mask map that contains hair;
    determining, according to the second face mask map, a second face region in the target image that contains hair;
    rendering pixels in the second face region according to the target color.
  24. The storage medium according to any one of claims 19 to 23, wherein when the instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to perform the following operations:
    acquiring the target color of the frame preceding the k-th frame;
    performing a weighted sum of the target color of the k-th frame and the target color of the frame preceding the k-th frame;
    rendering pixels in the face region of the target image according to the color obtained by the weighted sum.
PCT/CN2020/139133 2020-06-19 2020-12-24 图像处理方法、装置、电子设备和存储介质 WO2021253783A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2022556185A JP2023518444A (ja) 2020-06-19 2020-12-24 画像処理方法、装置、電子機器及び記憶媒体
US17/952,619 US20230020937A1 (en) 2020-06-19 2022-09-26 Image processing method, electronic device, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010567699.5A CN113822806B (zh) 2020-06-19 2020-06-19 图像处理方法、装置、电子设备和存储介质
CN202010567699.5 2020-06-19

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/952,619 Continuation US20230020937A1 (en) 2020-06-19 2022-09-26 Image processing method, electronic device, and storage medium

Publications (1)

Publication Number Publication Date
WO2021253783A1 true WO2021253783A1 (zh) 2021-12-23

Family

ID=78912035

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/139133 WO2021253783A1 (zh) 2020-06-19 2020-12-24 图像处理方法、装置、电子设备和存储介质

Country Status (4)

Country Link
US (1) US20230020937A1 (zh)
JP (1) JP2023518444A (zh)
CN (1) CN113822806B (zh)
WO (1) WO2021253783A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090169099A1 (en) * 2007-12-05 2009-07-02 Vestel Elektronik Sanayi Ve Ticaret A.S. Method of and apparatus for detecting and adjusting colour values of skin tone pixels
CN104156915A (zh) * 2014-07-23 2014-11-19 小米科技有限责任公司 肤色调整方法和装置
CN105359162A (zh) * 2013-05-14 2016-02-24 谷歌公司 用于图像中的与脸部有关的选择和处理的图像掩模
CN108875534A (zh) * 2018-02-05 2018-11-23 北京旷视科技有限公司 人脸识别的方法、装置、系统及计算机存储介质
CN108986019A (zh) * 2018-07-13 2018-12-11 北京小米智能科技有限公司 肤色调整方法及装置、电子设备、机器可读存储介质

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9928601B2 (en) * 2014-12-01 2018-03-27 Modiface Inc. Automatic segmentation of hair in images
US9811734B2 (en) * 2015-05-11 2017-11-07 Google Inc. Crowd-sourced creation and updating of area description file for mobile device localization
GB2548087B (en) * 2016-03-02 2022-05-11 Holition Ltd Locating and augmenting object features in images
WO2017181332A1 (zh) * 2016-04-19 2017-10-26 浙江大学 一种基于单幅图像的全自动三维头发建模方法
US10628700B2 (en) * 2016-05-23 2020-04-21 Intel Corporation Fast and robust face detection, region extraction, and tracking for improved video coding
US10491895B2 (en) * 2016-05-23 2019-11-26 Intel Corporation Fast and robust human skin tone region detection for improved video coding
US11012694B2 (en) * 2018-05-01 2021-05-18 Nvidia Corporation Dynamically shifting video rendering tasks between a server and a client
CN109903257A (zh) * 2019-03-08 2019-06-18 上海大学 一种基于图像语义分割的虚拟头发染色方法
CN111275648B (zh) * 2020-01-21 2024-02-09 腾讯科技(深圳)有限公司 人脸图像处理方法、装置、设备及计算机可读存储介质

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090169099A1 (en) * 2007-12-05 2009-07-02 Vestel Elektronik Sanayi Ve Ticaret A.S. Method of and apparatus for detecting and adjusting colour values of skin tone pixels
CN105359162A (zh) * 2013-05-14 2016-02-24 谷歌公司 用于图像中的与脸部有关的选择和处理的图像掩模
CN104156915A (zh) * 2014-07-23 2014-11-19 小米科技有限责任公司 肤色调整方法和装置
CN108875534A (zh) * 2018-02-05 2018-11-23 北京旷视科技有限公司 人脸识别的方法、装置、系统及计算机存储介质
CN108986019A (zh) * 2018-07-13 2018-12-11 北京小米智能科技有限公司 肤色调整方法及装置、电子设备、机器可读存储介质

Also Published As

Publication number Publication date
US20230020937A1 (en) 2023-01-19
CN113822806A (zh) 2021-12-21
CN113822806B (zh) 2023-10-03
JP2023518444A (ja) 2023-05-01

Similar Documents

Publication Publication Date Title
CN110675310B (zh) 视频处理方法、装置、电子设备及存储介质
US10032076B2 (en) Method and device for displaying image
WO2016011747A1 (zh) 肤色调整方法和装置
EP3582187B1 (en) Face image processing method and apparatus
CN109345485B (zh) 一种图像增强方法、装置、电子设备及存储介质
WO2017092289A1 (zh) 图像处理方法及装置
US11030733B2 (en) Method, electronic device and storage medium for processing image
CN107798654B (zh) 图像磨皮方法及装置、存储介质
CN107730448B (zh) 基于图像处理的美颜方法及装置
US20230162332A1 (en) Image Transformation Method and Apparatus
US11847769B2 (en) Photographing method, terminal, and storage medium
US20220327749A1 (en) Method and electronic device for processing images
US11250547B2 (en) Facial image enhancement method, device and electronic device
CN112508773A (zh) 图像处理方法及装置、电子设备、存储介质
WO2020114097A1 (zh) 一种边界框确定方法、装置、电子设备及存储介质
US9665925B2 (en) Method and terminal device for retargeting images
CN115526774A (zh) 图像插值方法、装置、存储介质及电子设备
WO2021253783A1 (zh) 图像处理方法、装置、电子设备和存储介质
EP3273437A1 (en) Method and device for enhancing readability of a display
CN113706430A (zh) 一种图像处理方法、装置和用于图像处理的装置
CN109413232B (zh) 屏幕显示方法及装置
CN113610723A (zh) 图像处理方法及相关装置
US11527219B2 (en) Method and apparatus for processing brightness of display screen
CN115619879A (zh) 图像处理方法、装置、存储介质及电子设备
CN115375555A (zh) 图像处理方法及装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20940617

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022556185

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 06/06/2023)

122 Ep: pct application non-entry in european phase

Ref document number: 20940617

Country of ref document: EP

Kind code of ref document: A1