WO2021253783A1 - Image processing method and apparatus, electronic device, and storage medium - Google Patents
Image processing method and apparatus, electronic device, and storage medium
- Publication number
- WO2021253783A1 (PCT/CN2020/139133)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- color
- image
- face
- target
- preset
- Prior art date
Classifications
- G06V40/168—Feature extraction; Face representation
- G06T11/00—2D [Two Dimensional] image generation
- G06T5/77—Retouching; Inpainting; Scratch removal
- G06T11/40—Filling a planar surface by adding surface attributes, e.g. colour or texture
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T5/20—Image enhancement or restoration using local operators
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T5/94—Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
- G06T7/90—Determination of colour characteristics
- G06V10/56—Extraction of image or video features relating to colour
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06T2207/10016—Video; Image sequence
- G06T2207/10024—Color image
- G06T2207/20016—Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
- G06T2207/20081—Training; Learning
- G06T2207/20221—Image fusion; Image merging
- G06T2207/30201—Face
Definitions
- The present disclosure relates to the field of image processing, and in particular to an image processing method and apparatus, an electronic device, and a storage medium.
- Processing facial features is a common operation; for example, the facial features may be enlarged, displaced, or erased.
- Existing approaches to erasing facial features often produce poor rendering results because the replacement color differs greatly from the face color; moreover, because there are usually a large number of pixels near the facial features, the amount of computation is too heavy to run on devices with limited computing power.
- The present disclosure provides an image processing method and apparatus, an electronic device, and a storage medium to address at least the above problems in the related art.
- The technical solutions of the present disclosure are as follows:
- An image processing method includes: determining a first face mask image of a target image, the first face mask image not containing hair, and obtaining, according to the first face mask image, a first face region in the target image that does not contain hair; filling a color of a preset grayscale outside the first face region to generate an image to be sampled of a preset shape; down-sampling the image to be sampled, and removing from the sampling results those whose color is the color of the preset grayscale to obtain the remaining sampling results; calculating the color average of the remaining sampling results, and performing a weighted summation of a preset standard face color and the average to obtain a target color; and rendering the pixels in the face region of the target image according to the target color.
- An image processing apparatus includes: a first face determination module configured to determine a first face mask image of a target image that does not contain hair, and to obtain, according to the first face mask image, a first face region in the target image that does not contain hair; an image generation module configured to fill a color of a preset grayscale outside the first face region to generate an image to be sampled of a preset shape; a down-sampling module configured to down-sample the image to be sampled and remove from the sampling results those whose color is the color of the preset grayscale to obtain the remaining sampling results; a calculation module configured to calculate the color average of the remaining sampling results and perform a weighted summation of a preset standard face color and the average to obtain a target color; and a rendering module configured to render the pixels in the face region of the target image according to the target color.
- An electronic device includes: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to execute the instructions to implement the image processing method described in any of the above embodiments.
- A storage medium, wherein when instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to execute the image processing method described in any of the above embodiments.
- A computer program product includes a computer program stored in a readable storage medium; at least one processor of a device reads and executes the computer program from the readable storage medium, so that the device executes the image processing method described in any of the above embodiments.
- Fig. 1 is a schematic flowchart showing an image processing method according to an embodiment of the present disclosure.
- Fig. 2 is a diagram showing a first face mask according to an embodiment of the present disclosure.
- Fig. 3 shows an image to be sampled according to an embodiment of the present disclosure.
- Fig. 4 is a schematic diagram showing a sampling result according to an embodiment of the present disclosure.
- Fig. 5 is a schematic diagram showing a color corresponding to an average value according to an embodiment of the present disclosure.
- Fig. 6 is a schematic diagram showing a target color according to an embodiment of the present disclosure.
- Fig. 7 is a schematic flowchart showing another image processing method according to an embodiment of the present disclosure.
- Fig. 8 is a schematic flowchart showing yet another image processing method according to an embodiment of the present disclosure.
- Fig. 9 is a schematic flowchart showing yet another image processing method according to an embodiment of the present disclosure.
- Fig. 10 is a schematic flowchart showing yet another image processing method according to an embodiment of the present disclosure.
- Fig. 11 is a schematic diagram showing a rendered second face region according to an embodiment of the present disclosure.
- Fig. 12 is a schematic flowchart showing yet another image processing method according to an embodiment of the present disclosure.
- Fig. 13 is a schematic block diagram showing an image processing device according to an embodiment of the present disclosure.
- Fig. 14 is a schematic block diagram showing a computing module according to an embodiment of the present disclosure.
- Fig. 15 is a schematic block diagram showing another computing module according to an embodiment of the present disclosure.
- Fig. 16 is a schematic block diagram showing a rendering module according to an embodiment of the present disclosure.
- Fig. 17 is a schematic block diagram showing another rendering module according to an embodiment of the present disclosure.
- Fig. 18 is a schematic block diagram showing still another rendering module according to an embodiment of the present disclosure.
- Fig. 19 is a schematic block diagram showing an electronic device according to an embodiment of the present disclosure.
- Fig. 1 is a schematic flowchart showing an image processing method according to an embodiment of the present disclosure.
- The image processing method shown in this embodiment can be applied to terminals such as mobile phones, tablet computers, wearable devices, and personal computers, and can also be applied to servers such as local servers and cloud servers.
- The image processing method may include the following steps:
- A first face mask image of a target image that does not contain hair is determined, and a first face region that does not contain hair is obtained in the target image according to the first face mask image.
- A color of a preset grayscale is filled outside the first face region to generate an image to be sampled of a preset shape.
- The image to be sampled is down-sampled, and the sampling results whose color is the color of the preset grayscale are removed to obtain the remaining sampling results.
- The color average of the remaining sampling results is calculated, and a weighted summation is performed on a preset standard face color and the average to obtain a target color.
- The pixels in the face region of the target image are rendered according to the target color.
- the method of determining the first face mask image can be selected as required.
- In one embodiment, a mask determination model can be obtained in advance through deep learning training.
- The mask determination model is used to determine a mask image that does not contain hair in an image; based on this model, the first face mask image of the target image that does not contain hair can be determined.
- In another embodiment, a key point determination model can be obtained in advance through deep learning training.
- The key point determination model is used to determine the key points of the face in an image.
- Accordingly, the key points of the face in the target image can be determined.
- The closed region formed by connecting the key points on the edge of the face is used as the first face mask image.
- A first face region that does not contain hair can be obtained in the target image according to the first face mask image, and a color of a preset grayscale can then be filled outside the first face region.
- The preset grayscale can be selected from 0 to 255 as required. For example, if the preset grayscale is 0, the color of the preset grayscale is black; if the preset grayscale is 255, the color is white.
- Choosing a grayscale of 0 or 255 helps ensure that, during the subsequent sampling process, sampling results containing face pixels are not mistakenly rejected for having the same color as the preset grayscale.
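This filling step can be sketched as follows, assuming the face mask is available as a boolean array; the function name and array layout are illustrative, not from the patent:

```python
import numpy as np

def make_image_to_sample(image, mask, preset_gray=0):
    """image: HxWx3 uint8 array; mask: HxW bool array, True inside the
    first face region. Everything outside the mask is filled with the
    preset grayscale color (0 = black, 255 = white)."""
    out = np.full_like(image, preset_gray)
    out[mask] = image[mask]
    return out

# Tiny 2x2 example: only the masked pixel keeps its original color.
img = np.array([[[200, 150, 120], [10, 20, 30]],
                [[40, 50, 60], [70, 80, 90]]], dtype=np.uint8)
mask = np.array([[True, False], [False, False]])
to_sample = make_image_to_sample(img, mask, preset_gray=0)
```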
- Fig. 2 is a diagram showing a first face mask according to an embodiment of the present disclosure.
- Fig. 3 shows an image to be sampled according to an embodiment of the present disclosure.
- Based on the first face mask image shown in Fig. 2, the first face region that does not contain hair can be obtained in the target image, as shown in Fig. 3.
- The preset shape formed by filling the preset grayscale color outside the first face region may be a rectangle as shown in Fig. 3, or another shape; the present disclosure does not limit this.
- The image to be sampled can be down-sampled, and the down-sampling scheme can be set as required. For example, it can be set to 4*7, that is, 4 samples in the width direction and 7 in the height direction, yielding 28 sampling results.
- Each sampling result may collect a single pixel, or multiple pixels near a certain position; alternatively, the image to be sampled may be divided evenly into 4*7 regions, with the average color of the pixels in each region used as a sampling result.
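The 4*7 area-averaging variant, together with the removal of preset-grayscale samples, can be sketched as below; the grid size and the equality test against the fill color come from the text, while the names and array layout are assumptions:

```python
import numpy as np

def downsample_and_filter(img, grid_w=4, grid_h=7, preset_gray=0):
    """Divide img (HxWx3) evenly into grid_w * grid_h cells, take the
    average color of each cell as one sampling result, then drop the
    results whose color equals the preset grayscale fill."""
    h, w, _ = img.shape
    samples = []
    for i in range(grid_h):
        for j in range(grid_w):
            cell = img[i * h // grid_h:(i + 1) * h // grid_h,
                       j * w // grid_w:(j + 1) * w // grid_w]
            samples.append(cell.reshape(-1, 3).mean(axis=0))
    samples = np.array(samples)
    keep = ~np.all(samples == preset_gray, axis=1)  # reject pure-fill cells
    return samples[keep]

# 14x8 image, all fill color except one 2x2 cell: 27 of the 28 sampling
# results are rejected and a single face-colored sample remains.
img = np.zeros((14, 8, 3), dtype=float)
img[0:2, 0:2] = (200.0, 150.0, 120.0)
remaining = downsample_and_filter(img)
```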
- Fig. 4 is a schematic diagram showing a sampling result according to an embodiment of the present disclosure.
- As shown in Fig. 4, each row contains 14 sampling results, for a total of 28. Of these, 24 have the color of the preset grayscale and 4 do not.
- The 24 sampling results whose color is the preset grayscale can therefore be eliminated, and the remaining 4 sampling results, whose color is not the preset grayscale, are retained.
- There may be one or more remaining sampling results.
- When there is a single remaining sampling result, its color is the average.
- When there are several, the colors of the remaining sampling results can be added and then averaged.
- The color can be represented by a gray value from 0 to 255, or the 0-255 gray value can be converted to the interval 0 to 1.
- Even when the color of a remaining sampling result is not the color of the preset grayscale, its sampling area may contain both the filled grayscale area and the face region, which makes the resulting average dark. Likewise, when the target image was captured in an extreme environment, such as a dark scene, the color of each remaining sample will be darker, and so will the average.
- The preset standard face color and the average can therefore be weighted and summed to obtain the target color, where the standard face color may be a preset color close to the skin color of a human face.
- In this way, the average can be corrected to a certain extent by the standard face color, preventing the color obtained from the average alone from differing too much from a typical face color.
- Fig. 5 is a schematic diagram showing a color corresponding to an average value according to an embodiment of the present disclosure.
- Fig. 6 is a schematic diagram showing a target color according to an embodiment of the present disclosure. As shown in Fig. 5, the color corresponding to the average is too dark.
- The target color shown in Fig. 6, obtained by the weighted summation of the preset standard face color and the average, is closer to the color of face skin. Thus the average still reflects the face color in the target image, while the resulting target color is guaranteed not to differ too much from a normal face color.
- The pixels in the face region of the target image can then be rendered so that the color of all pixels in the face region is set to the target color, achieving the effect of erasing facial features such as the eyes, eyebrows, nose, and mouth in the face region.
- Since the target color is obtained through down-sampling, the amount of color information in the sampling results is relatively small, which makes the method suitable for devices with limited computing power.
- The target color used for rendering is obtained by a weighted summation of the preset standard face color and the average, where the average reflects the face color in the target image and the standard face color plays a corrective role.
- The embodiments of the present disclosure can therefore keep the target color consistent with the face color in the target image while avoiding an excessive difference from normal face colors.
- Fig. 7 is a schematic flowchart showing another image processing method according to an embodiment of the present disclosure. As shown in Fig. 7, calculating the color average of the remaining sampling results and performing a weighted summation of the preset standard face color and the average to obtain the target color includes:
- calculating the average of the color values of the same color channel in each remaining sampling result to obtain the color average corresponding to each color channel; and performing a weighted summation of the color average corresponding to each color channel with the color of the corresponding channel in the standard face color, to obtain the target color of each color channel.
- The remaining sampling results may include the colors of multiple color channels, for example the three channels R (red), G (green), and B (blue), and the color of each channel may be represented by a gray value from 0 to 255, or converted to the interval 0 to 1.
- The average of the color values of the same channel in each remaining sampling result can be calculated to obtain the color average corresponding to each channel.
- The standard face color also includes the colors of the three channels. For example, if the three channel colors of the remaining sampling results are all converted to the range 0 to 1, the three channel colors of the standard face color can likewise be represented by values between 0 and 1; for example, the standard face color can be set to (0.97, 0.81, 0.7).
- the color average corresponding to each color channel may be weighted and summed with the color of the corresponding color channel in the standard face color to obtain the target color of each color channel.
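The per-channel computation can be sketched as follows; the standard face color (0.97, 0.81, 0.7) comes from the text, while the 0.3/0.7 weights are illustrative assumptions, since the embodiment does not fix specific weight values:

```python
import numpy as np

def target_color(samples, standard=(0.97, 0.81, 0.7),
                 w_standard=0.3, w_mean=0.7):
    """samples: Nx3 array of remaining sample colors in [0, 1].
    Returns the per-channel weighted sum of the standard face color
    and the per-channel color average."""
    mean = samples.mean(axis=0)  # color average of each channel
    return w_standard * np.asarray(standard) + w_mean * mean

samples = np.array([[0.5, 0.5, 0.5],
                    [0.7, 0.3, 0.5]])
color = target_color(samples)
```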
- Fig. 8 is a schematic flowchart showing yet another image processing method according to an embodiment of the present disclosure. As shown in FIG. 8, the calculating the color average value of the remaining sampling results, and performing a weighted summation on the preset standard face color and the average value to obtain the target color includes:
- the weights of the mean value and the standard face color may be preset or adjusted in real time.
- A color threshold may be preset for comparison with the obtained average, where the preset color threshold may be a color close to normal skin color; specifically, the difference between the average and the preset color threshold can be calculated.
- Based on the difference being less than a preset difference threshold, the preset standard face color may be weighted by a first preset weight,
- the average may be weighted by a second preset weight,
- and the weighted values may be summed to obtain the target color. Since the second preset weight is greater than the first, the target color obtained by the weighted summation mainly reflects the face skin color in the target image, ensuring that the rendering result stays close to the original face skin color.
- Based on the difference being greater than the preset difference threshold, the obtained average differs greatly from normal skin color, and the face in the target image may be in a more extreme environment, yielding a relatively abnormal average.
- In this case, the first preset weight can be increased and/or the second preset weight reduced; the preset standard face color is then weighted by the increased first preset weight, the average is weighted by the reduced second preset weight, and the weighted values are summed to obtain the target color.
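The adaptive weighting above can be sketched as below; the threshold values, the step size, and the scalar difference measure are all assumptions for illustration, as the embodiment leaves them unspecified:

```python
import numpy as np

def adaptive_target_color(mean, standard=(0.97, 0.81, 0.7),
                          color_threshold=0.8, diff_threshold=0.2,
                          w_standard=0.3, w_mean=0.7, step=0.2):
    """Weighted sum of the standard face color and the measured mean,
    with weights adjusted when the mean looks abnormal."""
    mean = np.asarray(mean, dtype=float)
    standard = np.asarray(standard, dtype=float)
    diff = np.abs(mean - color_threshold).max()
    if diff > diff_threshold:
        # The mean differs greatly from normal skin color (e.g. extreme
        # lighting): increase the weight of the standard face color and
        # reduce the weight of the mean.
        w_standard += step
        w_mean -= step
    return w_standard * standard + w_mean * mean

# A very dark mean triggers the adjusted 0.5/0.5 weights.
color = adaptive_target_color([0.1, 0.1, 0.1])
```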
- Fig. 9 is a schematic flowchart showing yet another image processing method according to an embodiment of the present disclosure. As shown in FIG. 9, the rendering of pixels in the face region of the target image according to the target color includes:
- the pixels in the first face area are rendered according to the target color.
- A first face region that does not contain hair may be determined in the target image according to the first face mask image, and the pixels in the first face region may then be rendered according to the target color. In this way, the color of all pixels in the face region is set to the target color, erasing facial features such as the eyes, eyebrows, nose, and mouth.
- Fig. 10 is a schematic flowchart showing yet another image processing method according to an embodiment of the present disclosure. As shown in FIG. 10, the rendering of pixels in the face region of the target image according to the target color includes:
- the pixels in the second face area are rendered according to the target color.
- If the pixels in the first face region are rendered, then because the first face region does not contain hair, a clear dividing line appears at the boundary between the first face region and the hair, which looks unnatural to users.
- Therefore, the face key points of the target image can be acquired, a second face mask image containing hair can be determined according to the face key points, and a second face region containing hair can then be determined in the target image according to the second face mask image. Since the second face region contains hair, there is no obvious boundary with the hair, so rendering the pixels in the second face region according to the target color produces a relatively natural result.
- Fig. 11 is a schematic diagram showing a rendered second face region according to an embodiment of the present disclosure.
- As shown in Fig. 11, the second face mask image can be close to an ellipse, covering from the chin to the forehead vertically, and from the left edge to the right edge of the face horizontally.
- The second face region contains hair, so there is no obvious boundary with the hair on the forehead; the pixels in the second face region are rendered according to the target color, and the rendering result is relatively natural.
- In addition, the rendering effect can be gradually weakened toward the edge of the second face region, giving the rendered region a certain transparency so that it blends visually with the area outside the face region.
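One way to realize this gradual edge fade, sketched with a simple 4-neighbour erosion; the fade mechanism and all names here are assumptions, since the text only states that the rendering effect decreases toward the edge:

```python
import numpy as np

def erode(mask):
    """4-neighbour binary erosion via array shifts; pixels on the image
    border are treated as adjacent to the outside and eroded away."""
    m = mask.copy()
    m[:-1] &= mask[1:]
    m[1:] &= mask[:-1]
    m[:, :-1] &= mask[:, 1:]
    m[:, 1:] &= mask[:, :-1]
    m[0, :] = m[-1, :] = False
    m[:, 0] = m[:, -1] = False
    return m

def render_with_edge_fade(image, mask, target_color, fade_steps=2):
    """Blend the target color into the masked region with an alpha that
    is 1 deep inside the mask and decreases toward its edge."""
    alpha = mask.astype(float)
    m = mask
    for _ in range(fade_steps):
        m = erode(m)
        alpha += m
    alpha = (alpha / (fade_steps + 1))[..., None]
    target = np.broadcast_to(np.asarray(target_color, dtype=float),
                             image.shape)
    return alpha * target + (1 - alpha) * image

# 5x5 all-True mask over a black image: the center is fully rendered,
# while a corner pixel keeps most of its original (black) color.
image = np.zeros((5, 5, 3))
mask = np.ones((5, 5), dtype=bool)
out = render_with_edge_fade(image, mask, (0.9, 0.8, 0.7))
```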
- Fig. 12 is a schematic flowchart showing yet another image processing method according to an embodiment of the present disclosure.
- In some embodiments, the target image is the k-th frame image in consecutive multi-frame images, where k is an integer greater than 1.
- Rendering the pixels in the face region of the target image includes:
- performing a weighted summation on the target color of the k-th frame image and the target color of the frame preceding the k-th frame image; and
- rendering the pixels in the face region of the target image according to the color obtained by the weighted summation.
- the target image may be a single image, or it may be the k-th frame image in a continuous multi-frame image, for example, it belongs to a frame image in a certain video.
- In consecutive frames, the lighting of the environment where the face is located can change, which changes the apparent color of the face skin; the angle between the face and the light source can also change, with the same effect.
- If the pixels in the face region are rendered based only on the target color of the current target image, the rendering results of adjacent frames may differ greatly, causing the user to perceive a color jump (or flicker) in the face region after the facial features are erased.
- In this embodiment, the target color of the frame preceding the k-th frame can be obtained by the steps in the embodiment shown in Fig. 1 and stored.
- A weighted summation is then performed on the target color of the k-th frame and the target color of the preceding frame; the resulting color combines the face skin color of the k-th frame with that of the preceding frame.
- Rendering the pixels in the face region of the target image with this color avoids an abrupt change in the color of the face region relative to the frames before the k-th frame (such as the preceding frame).
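A minimal sketch of this inter-frame smoothing; the 0.5/0.5 blend weights are an assumption, since the embodiment only specifies a weighted summation of the two frames' target colors:

```python
import numpy as np

def smooth_target_color(curr, prev, w_curr=0.5, w_prev=0.5):
    """Blend the k-th frame's target color with the previous frame's
    stored target color to avoid visible color jumps (flicker)."""
    return (w_curr * np.asarray(curr, dtype=float)
            + w_prev * np.asarray(prev, dtype=float))

color = smooth_target_color((0.8, 0.6, 0.5), (0.6, 0.6, 0.7))
```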
- the present disclosure also proposes an embodiment of an image processing device.
- Fig. 13 is a schematic block diagram showing an image processing device according to an embodiment of the present disclosure.
- The image processing apparatus shown in this embodiment can be applied to terminals such as mobile phones, tablet computers, wearable devices, and personal computers, and can also be applied to servers such as local servers and cloud servers.
- the image processing apparatus may include:
- the first face determination module 101 is configured to determine a first face mask image of the target image that does not contain hair, and to obtain, according to the first face mask image, a first face region in the target image that does not contain hair;
- the image generation module 102 is configured to perform filling of a preset gray-scale color outside the first face area to generate a preset shape of an image to be sampled;
- the down-sampling module 103 is configured to down-sample the image to be sampled, and to remove from the sampling results those whose color is the color of the preset grayscale to obtain the remaining sampling results;
- the calculation module 104 is configured to perform calculation of the color average value of the remaining sampling results, and perform a weighted summation of the preset standard face color and the average value to obtain the target color;
- the rendering module 105 is configured to perform rendering of pixels in the face region of the target image according to the target color.
- Fig. 14 is a schematic block diagram showing a computing module according to an embodiment of the present disclosure.
- the calculation module 104 includes:
- the first average value sub-module 1041 is configured to perform calculation of the average value of the color value of the same color channel in each remaining sampling result to obtain the color average value corresponding to each of the color channels;
- the first weighting sub-module 1042 is configured to perform a weighted summation of the color average corresponding to each color channel with the color of the corresponding channel in the standard face color, to obtain the target color of each color channel.
- Fig. 15 is a schematic block diagram showing another computing module according to an embodiment of the present disclosure.
- the calculation module 104 includes:
- the second average sub-module 1043 is configured to calculate the color average of each remaining sampling result;
- the difference calculation sub-module 1044 is configured to calculate the difference between the average and a preset color threshold;
- the second weighting sub-module 1045 is configured to, based on the difference being less than a preset difference threshold, weight the preset standard face color by a first preset weight and weight the average by a second preset weight, and calculate the sum to obtain the target color, wherein the first preset weight is smaller than the second preset weight;
- the third weighting sub-module 1046 is configured to, based on the difference being greater than the preset difference threshold, increase the first preset weight and/or reduce the second preset weight, weight the preset standard face color by the increased first preset weight and weight the average by the reduced second preset weight, and calculate the sum to obtain the target color.
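The adaptive weighting of sub-modules 1043 to 1046 can be sketched as below. The concrete weights, step size, and thresholds are assumptions for illustration; the disclosure only constrains the first preset weight to be smaller than the second:

```python
def compute_target_color(mean_color, standard_color,
                         color_threshold, diff_threshold,
                         w_standard=0.3, w_mean=0.7, step=0.2):
    """Weighted sum of the preset standard face color and the sampled mean.

    While the sampled mean stays close to the preset color threshold, the
    mean dominates (w_standard < w_mean). When the difference exceeds the
    difference threshold (an implausible skin tone), the standard color's
    weight is raised and the mean's weight lowered accordingly.
    """
    diff = abs(mean_color - color_threshold)
    if diff > diff_threshold:
        w_standard = min(w_standard + step, 1.0)  # trust the standard color more
        w_mean = max(w_mean - step, 0.0)          # trust the outlier mean less
    return w_standard * standard_color + w_mean * mean_color

# mean close to the threshold: the sampled mean dominates (0.3*200 + 0.7*150)
near = compute_target_color(150.0, 200.0, color_threshold=160.0, diff_threshold=30.0)
# mean far from the threshold: weights shift to 0.5/0.5 (0.5*200 + 0.5*100)
far = compute_target_color(100.0, 200.0, color_threshold=160.0, diff_threshold=30.0)
```

Shifting weight toward the standard color when the sample deviates strongly prevents an abnormal sampled skin tone from dominating the rendered result.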
- Fig. 16 is a schematic block diagram showing a rendering module according to an embodiment of the present disclosure.
- the rendering module 105 includes:
- the first region determining sub-module 1051 is configured to determine, in the target image according to the first face mask map, a first face region that does not contain hair;
- the first rendering sub-module 1052 is configured to render pixels in the first face region according to the target color.
- Fig. 17 is a schematic block diagram showing another rendering module according to an embodiment of the present disclosure.
- the rendering module 105 includes:
- the mask determining sub-module 1053 is configured to obtain face key points of the target image, and determine a second face mask map containing hair according to the face key points;
- the second region determining sub-module 1054 is configured to determine, in the target image according to the second face mask map, a second face region containing hair;
- the second rendering sub-module 1055 is configured to render pixels in the second face region according to the target color.
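Rendering pixels inside a mask region, as the region-determining and rendering sub-modules above do, can be sketched with a simple per-pixel alpha blend; the blend strength and the binary mask here are illustrative assumptions:

```python
import numpy as np

def render_face_region(image, mask, target_color, alpha=0.6):
    """Blend a target color into the pixels selected by a face mask map.

    image: (H, W, 3) float array
    mask:  (H, W) array in [0, 1]; 1 marks face pixels (with or without
           hair, depending on which mask map is supplied)
    alpha: blend strength of the target color inside the masked region
    """
    mask3 = mask[..., None]  # broadcast the mask over the color channels
    tinted = (1.0 - alpha) * image + alpha * np.asarray(target_color, dtype=float)
    return mask3 * tinted + (1.0 - mask3) * image  # pixels outside the mask are untouched

img = np.zeros((2, 2, 3))
mask = np.array([[1.0, 0.0],
                 [0.0, 0.0]])
out = render_face_region(img, mask, [100.0, 100.0, 100.0], alpha=0.5)
# only the masked pixel is tinted: out[0, 0] becomes [50, 50, 50]
```

A soft (fractional) mask can be passed in unchanged, which feathers the boundary between the rendered face region and the rest of the image.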
- Fig. 18 is a schematic block diagram showing still another rendering module according to an embodiment of the present disclosure.
- when the target image is the k-th frame image among continuous multi-frame images, k being an integer greater than 1, the rendering module 105 includes:
- the color obtaining sub-module 1056 is configured to obtain the target color of the frame image preceding the k-th frame image;
- the weighted summation sub-module 1057 is configured to perform a weighted summation of the target color of the k-th frame image and the target color of the frame image preceding the k-th frame image;
- the third rendering sub-module 1058 is configured to render pixels in the face region of the target image according to the color obtained by the weighted summation.
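The inter-frame weighted summation of sub-modules 1056 to 1058 behaves like an exponential moving average over per-frame target colors; the history weight below is an assumed value, not one specified in the disclosure:

```python
def smooth_target_color(current, previous, history_weight=0.8):
    """Weighted sum of the current frame's target color with the previous
    frame's, so the rendered skin tone does not flicker between frames."""
    return history_weight * previous + (1.0 - history_weight) * current

# per-frame target colors for one channel across four consecutive frames
frame_colors = [100.0, 120.0, 90.0, 110.0]
smoothed = frame_colors[0]
for color in frame_colors[1:]:
    smoothed = smooth_target_color(color, smoothed)
# each frame moves only 20% toward its own color, damping sudden jumps
```

A larger history weight gives a steadier tone at the cost of a slower response to real lighting changes.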
- an embodiment of the present disclosure further provides an electronic device, including:
- a processor, and a memory for storing instructions executable by the processor;
- wherein the processor is configured to execute the instructions to implement the image processing method according to any of the foregoing embodiments.
- an embodiment of the present disclosure further provides a storage medium. When the instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to execute the image processing method described in any of the foregoing embodiments.
- an embodiment of the present disclosure further provides a computer program product. The program product includes a computer program stored in a readable storage medium; at least one processor of a device reads and executes the computer program from the readable storage medium, enabling the device to execute the image processing method described in any one of the foregoing embodiments.
- Fig. 19 is a schematic block diagram showing an electronic device according to an embodiment of the present disclosure.
- the electronic device 1900 may be a mobile phone, a computer, a digital broadcasting terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, etc.
- the electronic device 1900 may include one or more of the following components: a processing component 1902, a memory 1904, a power supply component 1906, a multimedia component 1908, an audio component 1910, an input/output (I/O) interface 1912, a sensor component 1914, and a communication component 1916.
- the processing component 1902 generally controls the overall operations of the electronic device 1900, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations.
- the processing component 1902 may include one or more processors 1920 to execute instructions to complete all or part of the steps of the foregoing image processing method.
- the processing component 1902 may include one or more modules to facilitate the interaction between the processing component 1902 and other components.
- the processing component 1902 may include a multimedia module to facilitate the interaction between the multimedia component 1908 and the processing component 1902.
- the memory 1904 is configured to store various types of data to support operations in the electronic device 1900. Examples of such data include instructions for any application or method operating on the electronic device 1900, contact data, phone book data, messages, pictures, videos, etc.
- the memory 1904 can be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk.
- the power supply component 1906 provides power for various components of the electronic device 1900.
- the power supply component 1906 may include a power management system, one or more power supplies, and other components associated with the generation, management, and distribution of power for the electronic device 1900.
- the multimedia component 1908 includes a screen that provides an output interface between the electronic device 1900 and the user.
- the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user.
- the touch panel includes one or more touch sensors to sense touch, sliding, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure related to the touch or slide operation.
- the multimedia component 1908 includes a front camera and/or a rear camera. When the electronic device 1900 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each of the front camera and the rear camera may be a fixed optical lens system or have focal length and optical zoom capability.
- the audio component 1910 is configured to output and/or input audio signals.
- the audio component 1910 includes a microphone (MIC).
- the microphone is configured to receive external audio signals.
- the received audio signal may be further stored in the memory 1904 or transmitted via the communication component 1916.
- the audio component 1910 further includes a speaker for outputting audio signals.
- the I/O interface 1912 provides an interface between the processing component 1902 and the peripheral interface module.
- the peripheral interface module may be a keyboard, a click wheel, a button, and the like. These buttons may include but are not limited to: home button, volume button, start button, and lock button.
- the sensor component 1914 includes one or more sensors for providing state evaluations of various aspects of the electronic device 1900.
- the sensor component 1914 can detect the on/off status of the electronic device 1900 and the relative positioning of components, for example, the display and the keypad of the electronic device 1900.
- the sensor component 1914 can also detect a change in position of the electronic device 1900 or a component of the electronic device 1900, the presence or absence of contact between the user and the electronic device 1900, the orientation or acceleration/deceleration of the electronic device 1900, and a change in the temperature of the electronic device 1900.
- the sensor assembly 1914 may include a proximity sensor configured to detect the presence of nearby objects when there is no physical contact.
- the sensor component 1914 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
- the sensor component 1914 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
- the communication component 1916 is configured to facilitate wired or wireless communication between the electronic device 1900 and other devices.
- the electronic device 1900 can access a wireless network based on a communication standard, such as WiFi, an operator network (such as 2G, 3G, 4G, or 5G), or a combination thereof.
- the communication component 1916 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel.
- the communication component 1916 also includes a near field communication (NFC) module to facilitate short-range communication.
- the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
- the electronic device 1900 may be implemented by one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, to perform the above-mentioned image processing method.
- a non-transitory computer-readable storage medium including instructions is also provided, such as the memory 1904 including instructions, which can be executed by the processor 1920 of the electronic device 1900 to complete the foregoing image processing method.
- the non-transitory computer-readable storage medium may be ROM, random access memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, etc.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
Description
Claims (24)
- An image processing method, including: determining a first face mask map of a target image that does not contain hair, and obtaining, according to the first face mask map, a first face region in the target image that does not contain hair; filling a color of a preset gray scale outside the first face region to generate an image to be sampled of a preset shape; down-sampling the image to be sampled, and removing, from the sampling results, sampling results whose color is the color of the preset gray scale, to obtain remaining sampling results; calculating a color average of the remaining sampling results, and performing a weighted summation of a preset standard face color and the average to obtain a target color; and rendering pixels in the face region of the target image according to the target color.
- The method according to claim 1, wherein calculating the color average of the remaining sampling results and performing the weighted summation of the preset standard face color and the average to obtain the target color includes: calculating an average of the color values of the same color channel in each remaining sampling result, to obtain a color average corresponding to each color channel; and performing a weighted summation of the color average corresponding to each color channel with the color of the corresponding color channel in the standard face color, to obtain a target color of each color channel.
- The method according to claim 1, wherein calculating the color average of the remaining sampling results and performing the weighted summation of the preset standard face color and the average to obtain the target color includes: calculating a color average of each remaining sampling result; calculating a difference between the average and a preset color threshold; based on the difference being less than a preset difference threshold, weighting the preset standard face color by a first preset weight and weighting the average by a second preset weight, and calculating the sum to obtain the target color, wherein the first preset weight is smaller than the second preset weight; and based on the difference being greater than the preset difference threshold, increasing the first preset weight and/or reducing the second preset weight, weighting the preset standard face color by the increased first preset weight and weighting the average by the reduced second preset weight, and calculating the sum to obtain the target color.
- The method according to claim 1, wherein rendering pixels in the face region of the target image according to the target color includes: determining, in the target image according to the first face mask map, a first face region that does not contain hair; and rendering pixels in the first face region according to the target color.
- The method according to claim 1, wherein rendering pixels in the face region of the target image according to the target color includes: obtaining face key points of the target image, and determining a second face mask map containing hair according to the face key points; determining, in the target image according to the second face mask map, a second face region containing hair; and rendering pixels in the second face region according to the target color.
- The method according to any one of claims 1 to 5, wherein the target image is a k-th frame image among continuous multi-frame images, k being an integer greater than 1, and rendering pixels in the face region of the target image according to the target color includes: obtaining a target color of the frame image preceding the k-th frame image; performing a weighted summation of the target color of the k-th frame image and the target color of the frame image preceding the k-th frame image; and rendering pixels in the face region of the target image according to the color obtained by the weighted summation.
- An image processing device, including: a first face determining module configured to determine a first face mask map of a target image that does not contain hair, and obtain, according to the first face mask map, a first face region in the target image that does not contain hair; an image generating module configured to fill a color of a preset gray scale outside the first face region to generate an image to be sampled of a preset shape; a down-sampling module configured to down-sample the image to be sampled, and remove, from the sampling results, sampling results whose color is the color of the preset gray scale, to obtain remaining sampling results; a calculation module configured to calculate a color average of the remaining sampling results, and perform a weighted summation of a preset standard face color and the average to obtain a target color; and a rendering module configured to render pixels in the face region of the target image according to the target color.
- The device according to claim 7, wherein the calculation module includes: a first average sub-module configured to calculate an average of the color values of the same color channel in each remaining sampling result, to obtain a color average corresponding to each color channel; and a first weighting sub-module configured to perform a weighted summation of the color average corresponding to each color channel with the color of the corresponding color channel in the standard face color, to obtain a target color of each color channel.
- The device according to claim 7, wherein the calculation module includes: a second average sub-module configured to calculate a color average of each remaining sampling result; a difference calculation sub-module configured to calculate a difference between the average and a preset color threshold; a second weighting sub-module configured to, based on the difference being less than a preset difference threshold, weight the preset standard face color by a first preset weight and weight the average by a second preset weight, and calculate the sum to obtain the target color, wherein the first preset weight is smaller than the second preset weight; and a third weighting sub-module configured to, based on the difference being greater than the preset difference threshold, increase the first preset weight and/or reduce the second preset weight, weight the preset standard face color by the increased first preset weight and weight the average by the reduced second preset weight, and calculate the sum to obtain the target color.
- The device according to claim 7, wherein the rendering module includes: a first region determining sub-module configured to determine, in the target image according to the first face mask map, a first face region that does not contain hair; and a first rendering sub-module configured to render pixels in the first face region according to the target color.
- The device according to claim 7, wherein the rendering module includes: a mask determining sub-module configured to obtain face key points of the target image, and determine a second face mask map containing hair according to the face key points; a second region determining sub-module configured to determine, in the target image according to the second face mask map, a second face region containing hair; and a second rendering sub-module configured to render pixels in the second face region according to the target color.
- The device according to any one of claims 7 to 12, wherein the target image is a k-th frame image among continuous multi-frame images, k being an integer greater than 1, and the rendering module includes: a color obtaining sub-module configured to obtain a target color of the frame image preceding the k-th frame image; a weighted summation sub-module configured to perform a weighted summation of the target color of the k-th frame image and the target color of the frame image preceding the k-th frame image; and a third rendering sub-module configured to render pixels in the face region of the target image according to the color obtained by the weighted summation.
- An electronic device, including: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to execute the instructions to implement the following operations: determining a first face mask map of a target image that does not contain hair, and obtaining, according to the first face mask map, a first face region in the target image that does not contain hair; filling a color of a preset gray scale outside the first face region to generate an image to be sampled of a preset shape; down-sampling the image to be sampled, and removing, from the sampling results, sampling results whose color is the color of the preset gray scale, to obtain remaining sampling results; calculating a color average of the remaining sampling results, and performing a weighted summation of a preset standard face color and the average to obtain a target color; and rendering pixels in the face region of the target image according to the target color.
- The electronic device according to claim 13, wherein the processor is configured to execute the instructions to implement the following operations: calculating an average of the color values of the same color channel in each remaining sampling result, to obtain a color average corresponding to each color channel; and performing a weighted summation of the color average corresponding to each color channel with the color of the corresponding color channel in the standard face color, to obtain a target color of each color channel.
- The electronic device according to claim 13, wherein the processor is configured to execute the instructions to implement the following operations: calculating a color average of each remaining sampling result; calculating a difference between the average and a preset color threshold; based on the difference being less than a preset difference threshold, weighting the preset standard face color by a first preset weight and weighting the average by a second preset weight, and calculating the sum to obtain the target color, wherein the first preset weight is smaller than the second preset weight; and based on the difference being greater than the preset difference threshold, increasing the first preset weight and/or reducing the second preset weight, weighting the preset standard face color by the increased first preset weight and weighting the average by the reduced second preset weight, and calculating the sum to obtain the target color.
- The electronic device according to claim 13, wherein the processor is configured to execute the instructions to implement the following operations: determining, in the target image according to the first face mask map, a first face region that does not contain hair; and rendering pixels in the first face region according to the target color.
- The electronic device according to claim 13, wherein the processor is configured to execute the instructions to implement the following operations: obtaining face key points of the target image, and determining a second face mask map containing hair according to the face key points; determining, in the target image according to the second face mask map, a second face region containing hair; and rendering pixels in the second face region according to the target color.
- The electronic device according to any one of claims 13 to 17, wherein the processor is configured to execute the instructions to implement the following operations: obtaining a target color of the frame image preceding the k-th frame image; performing a weighted summation of the target color of the k-th frame image and the target color of the frame image preceding the k-th frame image; and rendering pixels in the face region of the target image according to the color obtained by the weighted summation.
- A storage medium, wherein when instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to perform the following operations: determining a first face mask map of a target image that does not contain hair, and obtaining, according to the first face mask map, a first face region in the target image that does not contain hair; filling a color of a preset gray scale outside the first face region to generate an image to be sampled of a preset shape; down-sampling the image to be sampled, and removing, from the sampling results, sampling results whose color is the color of the preset gray scale, to obtain remaining sampling results; calculating a color average of the remaining sampling results, and performing a weighted summation of a preset standard face color and the average to obtain a target color; and rendering pixels in the face region of the target image according to the target color.
- The storage medium according to claim 19, wherein when the instructions in the storage medium are executed by the processor of the electronic device, the electronic device is enabled to perform the following operations: calculating an average of the color values of the same color channel in each remaining sampling result, to obtain a color average corresponding to each color channel; and performing a weighted summation of the color average corresponding to each color channel with the color of the corresponding color channel in the standard face color, to obtain a target color of each color channel.
- The storage medium according to claim 19, wherein when the instructions in the storage medium are executed by the processor of the electronic device, the electronic device is enabled to perform the following operations: calculating a color average of each remaining sampling result; calculating a difference between the average and a preset color threshold; based on the difference being less than a preset difference threshold, weighting the preset standard face color by a first preset weight and weighting the average by a second preset weight, and calculating the sum to obtain the target color, wherein the first preset weight is smaller than the second preset weight; and based on the difference being greater than the preset difference threshold, increasing the first preset weight and/or reducing the second preset weight, weighting the preset standard face color by the increased first preset weight and weighting the average by the reduced second preset weight, and calculating the sum to obtain the target color.
- The storage medium according to claim 19, wherein when the instructions in the storage medium are executed by the processor of the electronic device, the electronic device is enabled to perform the following operations: determining, in the target image according to the first face mask map, a first face region that does not contain hair; and rendering pixels in the first face region according to the target color.
- The storage medium according to claim 19, wherein when the instructions in the storage medium are executed by the processor of the electronic device, the electronic device is enabled to perform the following operations: obtaining face key points of the target image, and determining a second face mask map containing hair according to the face key points; determining, in the target image according to the second face mask map, a second face region containing hair; and rendering pixels in the second face region according to the target color.
- The storage medium according to any one of claims 19 to 23, wherein when the instructions in the storage medium are executed by the processor of the electronic device, the electronic device is enabled to perform the following operations: obtaining a target color of the frame image preceding the k-th frame image; performing a weighted summation of the target color of the k-th frame image and the target color of the frame image preceding the k-th frame image; and rendering pixels in the face region of the target image according to the color obtained by the weighted summation.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2022556185A JP2023518444A (ja) | 2020-06-19 | 2020-12-24 | Image processing method, device, electronic apparatus, and storage medium |
US17/952,619 US20230020937A1 (en) | 2020-06-19 | 2022-09-26 | Image processing method, electronic device, and storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010567699.5A CN113822806B (zh) | 2020-06-19 | 2020-06-19 | Image processing method and apparatus, electronic device and storage medium |
CN202010567699.5 | 2020-06-19 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/952,619 Continuation US20230020937A1 (en) | 2020-06-19 | 2022-09-26 | Image processing method, electronic device, and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021253783A1 true WO2021253783A1 (zh) | 2021-12-23 |
Family
ID=78912035
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/139133 WO2021253783A1 (zh) | 2020-06-19 | 2020-12-24 | Image processing method and apparatus, electronic device and storage medium |
Country Status (4)
Country | Link |
---|---|
US (1) | US20230020937A1 (zh) |
JP (1) | JP2023518444A (zh) |
CN (1) | CN113822806B (zh) |
WO (1) | WO2021253783A1 (zh) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090169099A1 (en) * | 2007-12-05 | 2009-07-02 | Vestel Elektronik Sanayi Ve Ticaret A.S. | Method of and apparatus for detecting and adjusting colour values of skin tone pixels |
- CN104156915A (zh) * | 2014-07-23 | 2014-11-19 | Xiaomi Inc. | Skin color adjustment method and device |
- CN105359162A (zh) * | 2013-05-14 | 2016-02-24 | Google Inc. | Image masks for face-related selection and processing in images |
- CN108875534A (zh) * | 2018-02-05 | 2018-11-23 | Beijing Kuangshi Technology Co., Ltd. | Face recognition method, apparatus and system, and computer storage medium |
- CN108986019A (zh) * | 2018-07-13 | 2018-12-11 | Beijing Xiaomi Intelligent Technology Co., Ltd. | Skin color adjustment method and apparatus, electronic device, and machine-readable storage medium |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9928601B2 (en) * | 2014-12-01 | 2018-03-27 | Modiface Inc. | Automatic segmentation of hair in images |
US9811734B2 (en) * | 2015-05-11 | 2017-11-07 | Google Inc. | Crowd-sourced creation and updating of area description file for mobile device localization |
GB2548087B (en) * | 2016-03-02 | 2022-05-11 | Holition Ltd | Locating and augmenting object features in images |
- WO2017181332A1 (zh) | 2016-04-19 | 2017-10-26 | Zhejiang University | Fully automatic three-dimensional hair modeling method based on a single image |
US10628700B2 (en) * | 2016-05-23 | 2020-04-21 | Intel Corporation | Fast and robust face detection, region extraction, and tracking for improved video coding |
US10491895B2 (en) * | 2016-05-23 | 2019-11-26 | Intel Corporation | Fast and robust human skin tone region detection for improved video coding |
US11012694B2 (en) * | 2018-05-01 | 2021-05-18 | Nvidia Corporation | Dynamically shifting video rendering tasks between a server and a client |
- CN109903257A (zh) | 2019-03-08 | 2019-06-18 | Shanghai University | Virtual hair dyeing method based on image semantic segmentation |
- CN111275648B (zh) * | 2020-01-21 | 2024-02-09 | Tencent Technology (Shenzhen) Co., Ltd. | Face image processing method, apparatus, device, and computer-readable storage medium |
-
2020
- 2020-06-19 CN CN202010567699.5A patent/CN113822806B/zh active Active
- 2020-12-24 JP JP2022556185A patent/JP2023518444A/ja active Pending
- 2020-12-24 WO PCT/CN2020/139133 patent/WO2021253783A1/zh active Application Filing
-
2022
- 2022-09-26 US US17/952,619 patent/US20230020937A1/en active Pending
Also Published As
Publication number | Publication date |
---|---|
US20230020937A1 (en) | 2023-01-19 |
CN113822806A (zh) | 2021-12-21 |
CN113822806B (zh) | 2023-10-03 |
JP2023518444A (ja) | 2023-05-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
- CN110675310B (zh) | Video processing method and apparatus, electronic device and storage medium | |
US10032076B2 (en) | Method and device for displaying image | |
- WO2016011747A1 (zh) | Skin color adjustment method and device | |
EP3582187B1 (en) | Face image processing method and apparatus | |
- CN109345485B (zh) | Image enhancement method and apparatus, electronic device and storage medium | |
- WO2017092289A1 (zh) | Image processing method and device | |
US11030733B2 (en) | Method, electronic device and storage medium for processing image | |
- CN107798654B (zh) | Image skin-smoothing method and device, and storage medium | |
- CN107730448B (zh) | Image-processing-based beautification method and device | |
US20230162332A1 (en) | Image Transformation Method and Apparatus | |
US11847769B2 (en) | Photographing method, terminal, and storage medium | |
US20220327749A1 (en) | Method and electronic device for processing images | |
US11250547B2 (en) | Facial image enhancement method, device and electronic device | |
- CN112508773A (zh) | Image processing method and apparatus, electronic device, and storage medium | |
- WO2020114097A1 (zh) | Bounding box determination method and apparatus, electronic device and storage medium | |
US9665925B2 (en) | Method and terminal device for retargeting images | |
- CN115526774A (zh) | Image interpolation method and apparatus, storage medium and electronic device | |
- WO2021253783A1 (zh) | Image processing method and apparatus, electronic device and storage medium | |
EP3273437A1 (en) | Method and device for enhancing readability of a display | |
- CN113706430A (zh) | Image processing method and apparatus, and apparatus for image processing | |
- CN109413232B (zh) | Screen display method and device | |
- CN113610723A (zh) | Image processing method and related device | |
US11527219B2 (en) | Method and apparatus for processing brightness of display screen | |
- CN115619879A (zh) | Image processing method and apparatus, storage medium and electronic device | |
- CN115375555A (zh) | Image processing method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20940617 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2022556185 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 06/06/2023) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20940617 Country of ref document: EP Kind code of ref document: A1 |