CN115482157A - Image processing method and device and computer equipment - Google Patents

Image processing method and device and computer equipment

Info

Publication number
CN115482157A
CN115482157A
Authority
CN
China
Prior art keywords
image
skin
processing
inpainting
skin area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110604268.6A
Other languages
Chinese (zh)
Inventor
祝琳
郭宇轩
林斯贤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Oneplus Technology Shenzhen Co Ltd
Original Assignee
Oneplus Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oneplus Technology Shenzhen Co Ltd filed Critical Oneplus Technology Shenzhen Co Ltd
Priority to CN202110604268.6A priority Critical patent/CN115482157A/en
Publication of CN115482157A publication Critical patent/CN115482157A/en
Pending legal-status Critical Current

Classifications

    • G06T5/77
    • G06T5/94
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection
    • G06T7/11: Region-based segmentation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30196: Human being; Person
    • G06T2207/30201: Face

Abstract

An embodiment of the application provides an image processing method, an image processing apparatus, and a computer device. The method includes: performing blemish removal on an original image to obtain a blemish-removed image; obtaining a skin region division result corresponding to the original image; and performing, according to the skin region division result, preset beautification of corresponding categories on different skin regions in the blemish-removed image to obtain an output image corresponding to the original image. When an image containing a face is optimized (for example, beautified), blemish removal and skin region division are first performed separately to obtain a blemish-removed image and a skin region division result; preset beautification is then applied to the different skin regions of the blemish-removed image according to the division result. Beautification can thus be targeted at the organ parts, feature moles, and other distinct skin regions in the image, improving the attractiveness and realism of the result and avoiding the artificial look caused by whole-face beautification.

Description

Image processing method and device and computer equipment
Technical Field
The present application relates to the field of image processing, and in particular, to an image processing method and apparatus, and a computer device.
Background
Existing beautification schemes mainly locate high-frequency noise and facial blemishes through operations such as filtering, and then smooth the whole face by skin buffing. Texture details of the face, including feature moles, are therefore poorly retained, and the whole face looks artificial.
Existing beautification schemes thus suffer from the technical problem that whole-face processing produces an unrealistic result.
Disclosure of Invention
Embodiments of the present application provide an image processing method, an image processing apparatus, and a computer device that can apply targeted beautification to the different skin regions in a picture, improving the attractiveness and realism of the result and avoiding the artificial look caused by whole-face beautification.
In a first aspect, an embodiment of the present application provides an image processing method, including:
performing blemish removal on an original image to obtain a blemish-removed image;
obtaining a skin region division result corresponding to the original image; and
performing, according to the skin region division result, preset beautification of corresponding categories on different skin regions in the blemish-removed image to obtain an output image corresponding to the original image.
According to a specific embodiment of the present disclosure, the step of performing blemish removal on the original image to obtain a blemish-removed image includes:
performing face key point recognition on the original image to obtain a face image; and
inputting the face image into a pre-trained blemish-removal model to obtain a blemish-removed image, wherein the blemishes include skin feature blocks to be removed from the face image.
According to a specific embodiment of the present disclosure, the step of inputting the face image into the pre-trained blemish-removal model to obtain the blemish-removed image includes:
inputting the face image into the pre-trained blemish-removal model to obtain an intermediate image on which blemish removal has been performed; and
calculating a weighted sum of the intermediate image and the original image as the blemish-removed image.
According to a specific embodiment of the present disclosure, the blemish-removal model is obtained by:
acquiring multiple groups of blemish-removal sample images, wherein each group includes a first-type sample image on which no blemish removal has been performed and a second-type sample image obtained by performing blemish removal on the first-type sample image; and
inputting the multiple groups of blemish-removal sample images into a basic neural network for training to obtain the blemish-removal model.
According to a specific embodiment of the present disclosure, the step of obtaining the skin region division result corresponding to the original image includes:
inputting the original image into a pre-trained skin region division model to obtain the skin region division result corresponding to the original image, wherein the skin region division result includes a facial skin region, a body skin region, and a facial organ region.
According to a specific embodiment of the present disclosure, the skin region division model is obtained by:
acquiring multiple groups of skin region division sample images, wherein each group includes a third-type sample image that has not been divided and a fourth-type sample image obtained by performing skin region division and labeling on the third-type sample image; and
inputting the multiple groups of skin region division sample images into a basic neural network for training to obtain the skin region division model.
According to a specific embodiment of the present disclosure, the step of performing, according to the skin region division result, preset beautification of corresponding categories on different skin regions in the blemish-removed image to obtain an output image corresponding to the original image includes:
mapping the skin region division result onto the blemish-removed image to obtain a skin region division result of the blemish-removed image;
performing skin-tone unification on the target skin regions in the blemish-removed image according to the skin region division result of the blemish-removed image to obtain a uniform-skin-tone image; and
performing basic skin-beautification on the uniform-skin-tone image to obtain the output image.
According to a specific embodiment of the present disclosure, the step of performing skin-tone unification on the target skin regions in the blemish-removed image according to its skin region division result includes:
extracting the actual hue values of each target skin region in the blemish-removed image, wherein the target skin regions include a face region and a neck region;
calculating a regional cumulative histogram from the actual hue values of each target skin region;
calculating a target hue value for each target skin region from the regional cumulative histogram; and
adjusting the actual hue values of each target skin region to the corresponding target hue value to obtain the uniform-skin-tone image.
According to a specific embodiment of the present disclosure, the step of performing basic skin-beautification on the uniform-skin-tone image to obtain the output image includes:
enhancing the luminance of the pixels in each target skin region of the uniform-skin-tone image; and
sharpening preset skin regions in the uniform-skin-tone image to obtain the output image.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including:
a blemish-removal module, configured to perform blemish removal on an original image to obtain a blemish-removed image;
a region division module, configured to obtain a skin region division result corresponding to the original image; and
a beautification module, configured to perform, according to the skin region division result, preset beautification of corresponding categories on different skin regions in the blemish-removed image to obtain an output image corresponding to the original image.
According to a specific embodiment of the present disclosure, the blemish-removal module is configured to:
perform face key point recognition on the original image to obtain a face image; and
input the face image into a pre-trained blemish-removal model to obtain a blemish-removed image, wherein the blemishes include skin feature blocks to be removed from the face image.
According to a specific embodiment of the present disclosure, the blemish-removal module is specifically configured to:
input the face image into the pre-trained blemish-removal model to obtain an intermediate image on which blemish removal has been performed; and
calculate a weighted sum of the intermediate image and the original image as the blemish-removed image.
According to a specific embodiment of the present disclosure, the blemish-removal model is obtained by:
acquiring multiple groups of blemish-removal sample images, wherein each group includes a first-type sample image on which no blemish removal has been performed and a second-type sample image obtained by performing blemish removal on the first-type sample image; and
inputting the multiple groups of blemish-removal sample images into a basic neural network for training to obtain the blemish-removal model.
According to a specific embodiment of the present disclosure, the region division module is configured to:
input the original image into a pre-trained skin region division model to obtain the skin region division result corresponding to the original image, wherein the skin region division result includes a facial skin region, a body skin region, and a facial organ region.
According to a specific embodiment of the present disclosure, the skin region division model is obtained by:
acquiring multiple groups of skin region division sample images, wherein each group includes a third-type sample image that has not been divided and a fourth-type sample image obtained by performing skin region division and labeling on the third-type sample image; and
inputting the multiple groups of skin region division sample images into a basic neural network for training to obtain the skin region division model.
According to a specific embodiment of the present disclosure, the beautification module is specifically configured to:
map the skin region division result onto the blemish-removed image to obtain a skin region division result of the blemish-removed image;
perform skin-tone unification on the target skin regions in the blemish-removed image according to the skin region division result of the blemish-removed image to obtain a uniform-skin-tone image; and
perform basic skin-beautification on the uniform-skin-tone image to obtain the output image.
According to a specific embodiment of the present disclosure, the beautification module is configured to:
extract the actual hue values of each target skin region in the blemish-removed image, wherein the target skin regions include a face region and a neck region;
calculate a regional cumulative histogram from the actual hue values of each target skin region;
calculate a target hue value for each target skin region from the regional cumulative histogram; and
adjust the actual hue values of each target skin region to the corresponding target hue value to obtain the uniform-skin-tone image.
According to a specific embodiment of the present disclosure, the beautification module is specifically configured to:
enhance the luminance of the pixels in each target skin region of the uniform-skin-tone image; and
sharpen preset skin regions in the uniform-skin-tone image to obtain the output image.
In a third aspect, an embodiment of the present application provides a computer device that includes a memory and a processor, the memory storing a computer program that, when run by the processor, performs the image processing method of any one of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program that, when run on a processor, performs the image processing method of any one of the first aspect.
With the image processing method, apparatus, and computer device provided by the embodiments of the present application, when an image containing a face is optimized (for example, beautified), blemish removal and skin region division are first performed separately to obtain a blemish-removed image and a skin region division result, and preset beautification is then applied to the different skin regions of the blemish-removed image according to the division result. Beautification can thus be targeted at organ parts, feature moles, and other distinct skin regions in the image, improving the attractiveness and realism of the result and avoiding the artificial look caused by whole-face beautification.
Drawings
To more clearly illustrate the technical solutions of the present application, the drawings required for use in the embodiments will be briefly described below, and it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope of the present application. Like components are numbered similarly in the various figures.
Fig. 1 is a schematic flowchart illustrating an image processing method according to an embodiment of the present application;
FIG. 2 is a process diagram illustrating an image processing method provided by an embodiment of the present application;
FIG. 3 is a schematic diagram illustrating the principle of the blemish-removal model involved in the image processing method provided by an embodiment of the present application;
fig. 4 is a schematic diagram illustrating the principle of the skin segmentation model involved in the image processing method provided by an embodiment of the present application;
fig. 5 shows an image after the skin region division of the image processing method provided by an embodiment of the present application;
fig. 6 shows a block diagram of an image processing apparatus according to an embodiment of the present application;
fig. 7 shows a hardware structure diagram of a computer device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only some embodiments of the present application, and not all embodiments.
The components of the embodiments of the present application, as generally described and illustrated in the figures herein, could be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
Hereinafter, the terms "including", "having", and their derivatives, as used in the various embodiments of the present application, are intended to indicate specific features, numbers, steps, operations, elements, components, or combinations thereof, and should not be construed as excluding the existence or addition of one or more other features, numbers, steps, operations, elements, components, or combinations thereof.
Furthermore, the terms "first," "second," "third," and the like are used solely to distinguish one from another and are not to be construed as indicating or implying relative importance.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which the various embodiments of the present application belong. The terms (such as terms defined in a commonly used dictionary) will be construed to have the same meaning as the contextual meaning in the related art and will not be construed to have an idealized or overly formal meaning unless expressly so defined in various embodiments of the present application.
Example 1
Referring to fig. 1, a schematic flow chart of an image processing method according to an embodiment of the present application is shown. As shown in fig. 1, the image processing method mainly includes the following steps:
s101, performing flaw removal processing on an original image to obtain a flaw-removed image;
the image processing method provided by the embodiment is applied to optimization processing such as beautifying of a face region and a face-related region contained in an image, and particularly for an image containing a human face, an image to be processed is defined as an original image. The original image may be an image containing a human face directly acquired by a computer device, or an image acquired by the computer device from another device end through a network, without limitation.
The image processing process provided in this embodiment includes two processes performed separately, namely, a blemish removal process for content and a skin area division process for an area, that is, a blemish removal process is performed using an original image to obtain a blemish removed image, and a skin area division process is performed using the original image to obtain a skin area division result.
The blemish removing treatment refers to the blemish removing treatment of the original image, namely the blemish image such as color spots, acne marks, stains, wrinkles and the like of the face are removed. In order to avoid false facial feeling in image processing, it is necessary to keep the areas of facial inherent features such as nevus, beard, silkworm, etc. during the blemish removal processing, so as to avoid false facial feeling caused by over-beautification.
The inpainting processing mode has various modes, and the inpainting processing can be performed on an original image by using a conventional inpainting processing algorithm, and the inpainting processing based on Artificial Intelligence (AI for short) can also be performed by using a neural network.
According to an embodiment of the present disclosure, the step of performing blemish removal on the original image to obtain a blemish-removed image may specifically include:
performing face key point recognition on the original image to obtain a face image; and
inputting the face image into a pre-trained blemish-removal model to obtain a blemish-removed image, wherein the blemishes include skin feature blocks to be removed from the face image.
In image processing, a facial image can be divided into a number of skin feature blocks, such as wrinkles, wounds, dark patches, moles, and birthmarks. Wrinkles, wounds, and dark patches are usually the skin feature blocks to be removed and are defined as blemishes, while moles and birthmarks are usually basic items to be retained. Blemish removal may therefore include a removal action and a basic-item retention action: the blemishes include at least one of wrinkles, wounds, and dark patches, and the basic items include at least one of moles, birthmarks, and the under-eye "lying silkworm". The removal action is performed by the computer; retention means either leaving an existing skin feature block untouched or explicitly marking it to be preserved.
This embodiment defines a scheme for AI blemish removal with a blemish-removal model. Specifically, as shown in fig. 2, after the original image is obtained, the key points of the face are located by face key point recognition, and the face image is then segmented from the original image by a key-point-based image segmentation scheme. The face image here may contain only the face, or may also contain related parts such as the neck and ears.
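By way of illustration only (this sketch is not part of the original disclosure), the key-point-based cropping step might look as follows in Python. It assumes that a separate landmark detector has already produced an (N, 2) array of face key-point coordinates; the margin parameter and the bounding-box construction are assumptions, since the patent does not specify how the face region is cut out:

```python
import numpy as np

def crop_face_region(original: np.ndarray, keypoints: np.ndarray, margin: float = 0.2):
    """Crop the face from the bounding box of the landmarks, with a margin so
    related parts (neck, ears) can be included, as the description allows."""
    h, w = original.shape[:2]
    x0, y0 = keypoints.min(axis=0)
    x1, y1 = keypoints.max(axis=0)
    # Expand the landmark bounding box by the given margin on each side.
    dx, dy = (x1 - x0) * margin, (y1 - y0) * margin
    x0, y0 = max(int(x0 - dx), 0), max(int(y0 - dy), 0)
    x1, y1 = min(int(x1 + dx), w), min(int(y1 + dy), h)
    # Return the crop and its location, so results can be pasted back later.
    return original[y0:y1, x0:x1], (x0, y0, x1, y1)
```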
A basic neural network is trained with images from before and after blemish removal as sample images to obtain a blemish-removal model, which is then loaded onto the computer device. As shown in fig. 3, during image processing the acquired face image is input into the blemish-removal model, and a first image on which blemish removal has been performed is obtained through the model's feature extraction, shape fitting, and processing. Blemish removal here may include removing wrinkles, wounds, dark patches, and the like, while retaining inherent facial basic items such as moles, birthmarks, and the under-eye "lying silkworm".
On the basis of the foregoing embodiments, the blemish-removal model may specifically be obtained as follows:
acquiring multiple groups of blemish-removal sample images, wherein each group includes a first-type sample image on which no blemish removal has been performed and a second-type sample image obtained by performing blemish removal on the first-type sample image; and
inputting the multiple groups of blemish-removal sample images into a basic neural network for training to obtain the blemish-removal model.
For training, multiple groups of blemish-removal sample images are used, each group containing two sample images: a first-type sample image without blemish removal and a second-type sample image with blemish removal. The samples can be obtained either by taking a first-type sample image and removing its blemishes manually or with an existing algorithm to produce the corresponding second-type sample image, or by taking a blemish-free image as the second-type sample image and adding blemishes to it to produce the corresponding first-type sample image. Note that all sample images and original images here refer to images containing human faces (or the faces of other animals).
The sample images are input into a basic neural network for training to obtain the blemish-removal model. Various basic neural networks can be used, such as a convolutional neural network (CNN) or a recurrent neural network (RNN); this is not limited. The overall procedure is as follows:
the face range of the input image is obtained from the face key points, and the face region is fed into the AI blemish-removal model, whose overall architecture is shown in fig. 3. The network has a roughly U-shaped structure: the input picture is down-sampled to extract features and then restored to its original size by up-sampling. The model is trained on paired pictures of the same person before blemish removal and after careful retouching, and its main purpose is to learn to output a picture close to the retouched one, achieving the blemish-removal goal. The loss function combines the pixel-value difference and the structural similarity between the two pictures; the larger the difference between them, the larger the penalty on the network, which drives the model to output better blemish-removed pictures. AI blemish removal can accurately remove facial blemishes while retaining feature moles; the skin detail is kept intact, with high definition and a strong sense of depth.
Further, the step of inputting the face image into the pre-trained blemish-removal model to obtain the blemish-removed image may include:
inputting the face image into the pre-trained blemish-removal model to obtain an intermediate image on which blemish removal has been performed; and
calculating a weighted sum of the intermediate image and the original image as the blemish-removed image.
Because different users have different beautification requirements, this embodiment provides blemish-removal schemes of different strengths. Specifically, the image obtained by feeding the face image corresponding to the original image into the pre-trained blemish-removal model is defined as the intermediate image. As shown in fig. 3, once the intermediate image is obtained, the intermediate image and the original image are superimposed with different weights; the weight determines how much of each image's content is retained, so the weights can be adjusted according to the user's beautification requirement to obtain a blemish-removed image that satisfies it.
Specifically, the weight of the intermediate image is defined as the first weight and the weight of the original image as the second weight, and the two weights sum to 1. The beautification requirement can be divided, from high to low, into levels 1, 2, and 3, with the first weight of the intermediate image decreasing and the second weight of the original image increasing accordingly, so that the blemish-removal strength of the output first image decreases in turn.
To match the several levels offered by mobile-terminal beautification, the model is extended from a single output to outputs at multiple levels; to widen the difference between the three levels, the final outputs are fused with the original image in different proportions, preserving different blemish-removal strengths.
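A minimal sketch of this level-dependent fusion follows. The patent states only that the first weight decreases from level 1 to level 3 and that the two weights sum to 1; the numeric weights here are hypothetical:

```python
import numpy as np

# Hypothetical first weights (intermediate image) per beauty level.
LEVEL_WEIGHTS = {1: 0.9, 2: 0.6, 3: 0.3}

def blend_by_level(intermediate: np.ndarray, original: np.ndarray, level: int) -> np.ndarray:
    """Weighted sum of the model output and the original image."""
    w1 = LEVEL_WEIGHTS[level]  # first weight (intermediate image)
    w2 = 1.0 - w1              # second weight (original image)
    out = w1 * intermediate.astype(np.float32) + w2 * original.astype(np.float32)
    return out.clip(0, 255).astype(np.uint8)
```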
S102, obtaining a skin region division result corresponding to the original image.
The skin tone of different parts of the face can differ considerably. A one-tap skin-tone unification scheme usually just sums the hue values of all pixels and takes their average as the target hue for every pixel, which tends to make the skin tone monotonous and flattens the natural layering of the face.
As shown in fig. 2, this embodiment therefore performs skin region division on the original image as part of the beautification process, distinguishing, for example, the face region, forehead region, beard region, neck region, clothes region, and background region. The resulting skin region division result may include the region masks and parameters such as the pixel hue values of each skin region.
S103, performing, according to the skin region division result, preset beautification of corresponding categories on different skin regions in the blemish-removed image to obtain an output image corresponding to the original image.
As shown in fig. 2, once the blemish-removed image and the skin region division result are available, the division result is applied to the blemish-removed image, which has already received the initial blemish-removal processing; preset beautification is then performed on its different skin regions to obtain the final beautified image, which serves as the output image corresponding to the original image.
The preset beautification can consist of conventional operations such as skin buffing, skin-tone unification, facial reshaping, and makeup enhancement, which can be handled by traditional algorithms; this is not limited. When performing the preset beautification, a scheme appropriate to each skin region is chosen: for example, the beard region is sharpened, the facial skin region is brightened and buffed, and highlights are removed from the forehead region.
With the image processing method provided by this embodiment of the application, when an image containing a face is optimized (for example, beautified), blemish removal and skin region division are first performed separately to obtain a blemish-removed image and a skin region division result, and preset beautification is then applied to the different skin regions of the blemish-removed image according to the division result. Beautification can thus be targeted at organ parts, feature moles, and other distinct skin regions, improving the attractiveness and realism of the result and avoiding the artificial look caused by whole-face beautification.
On the basis of the foregoing embodiment, according to a specific implementation of the present disclosure, the step of obtaining the skin region division result corresponding to the original image in S102 may include:
inputting the original image into a pre-trained skin region division model to obtain the skin region division result corresponding to the original image, wherein the skin region division result includes a facial skin region, a body skin region, and a facial organ region.
This embodiment defines a scheme for AI skin segmentation with a skin segmentation model. Specifically, after the original image is obtained, it is input into the pre-trained skin segmentation model, and the skin region division result is obtained through the model's feature extraction, shape fitting, and processing. The result mainly comprises the feature-point regions and pixel parameters of specific skin feature blocks such as the face, neck, beard, eyebrows, hair, clothes, and background.
Specifically, the skin region division model may be obtained as follows:
acquiring multiple groups of skin region division sample images, wherein each group includes a third-type sample image that has not been divided and a fourth-type sample image obtained by performing skin region division and labeling on the third-type sample image; and
inputting the multiple groups of skin region division sample images into a basic neural network for training to obtain the skin region division model.
For training, multiple groups of skin region division sample images are used, each group containing two sample images: an undivided third-type sample image and a fourth-type sample image on which skin region division has been performed. The samples can be obtained in various ways, for example by taking a third-type sample image without skin region division and dividing and labeling its skin regions to produce the corresponding fourth-type sample image.
The sample images are input into a basic neural network for training to obtain the skin region division model. AI body segmentation can accurately predict the skin regions of the body, is little affected by external factors such as colored illumination, and can greatly reduce the false-detection rate compared with traditional skin segmentation algorithms. Various basic neural networks can be used, such as a convolutional neural network (CNN) or a recurrent neural network (RNN); this is not limited.
As shown in fig. 2, the whole image is also fed into the AI skin segmentation model to obtain the skin of different body areas, including the facial skin and the rest of the body. The main structure of the model is shown in fig. 4; it, too, is a roughly U-shaped network. The input to the model is a picture of a person together with its corresponding body-segmentation mask picture, as shown in fig. 5, in which the different body areas are distinguished and labeled with different colors (rendered there as different shades). The main purpose of the model is to learn to divide the regions of the various body parts, achieving skin region division; in addition, the model predicts the beard region so that beard sharpening can be implemented. The loss function is mainly a cross-entropy loss that measures the difference between the predicted body segmentation and the labeled segmentation regions, continually correcting the model toward more accurate predictions.
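A hedged sketch of one training step under the cross-entropy loss named above follows. The model is assumed to be any U-shaped segmentation network returning per-pixel class logits; the class inventory in the comment is an assumption inferred from the regions the description names:

```python
import torch.nn as nn

def train_step(model, optimizer, image, mask):
    """One training step for the skin-segmentation network.

    image: (N, 3, H, W) float tensor; mask: (N, H, W) long tensor whose values
    index body-part classes (e.g. facial skin, neck, beard, eyebrow, hair,
    clothes, background; the exact class list is not given in the patent)."""
    criterion = nn.CrossEntropyLoss()  # the loss named in the description
    optimizer.zero_grad()
    logits = model(image)              # (N, num_classes, H, W) from the U-shaped net
    loss = criterion(logits, mask)     # difference vs. the labeled segmentation
    loss.backward()
    optimizer.step()
    return loss.item()
```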
On the other hand, according to a specific embodiment of the present disclosure, the step in S103 of performing, according to the skin region division result, preset beautification of corresponding categories on different skin regions in the blemish-removed image to obtain the output image may specifically include:
mapping the skin region division result onto the blemish-removed image to obtain a skin region division result of the blemish-removed image;
performing skin-tone unification on the target skin regions in the blemish-removed image according to the skin region division result of the blemish-removed image to obtain a uniform-skin-tone image; and
performing basic skin-beautification on the uniform-skin-tone image to obtain the output image.
As shown in fig. 2, in the subsequent beautification flow, the skin region division result is first mapped onto the blemish-removed image, and skin-tone unification is then applied to its different skin regions; the image obtained here is defined as the uniform-skin-tone image. Basic skin-beautification such as whitening and face slimming is then applied to the uniform-skin-tone image to obtain the final output image.
Specifically, the step of performing skin-tone unification on the target skin regions in the blemish-removed image according to its skin region division result may include:
extracting the actual hue values of each target skin region in the blemish-removed image, wherein the target skin regions include a face region and a neck region;
calculating a regional cumulative histogram from the actual hue values of each target skin region;
calculating a target hue value for each target skin region from the regional cumulative histogram; and
adjusting the actual hue values of each target skin region to the corresponding target hue value to obtain the uniform-skin-tone image.
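The patent does not fully specify how the target hue is derived from the cumulative histogram or how the adjustment is applied. The sketch below therefore makes two labeled assumptions: a single pooled target hue (the 50% point of the cumulative histogram over all target regions) is used, and each region's median hue is shifted to it. OpenCV's HSV representation with a 0-179 hue range is likewise an implementation assumption:

```python
import cv2
import numpy as np

def unify_skin_tone(img_bgr, region_masks):
    """Shift each target region's hues toward a common target hue taken from
    a cumulative histogram (assumed interpretation; see lead-in above)."""
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV).astype(np.int32)
    hue = hsv[..., 0]  # OpenCV hue range: 0..179
    # Pooled target hue: the 50% point of the cumulative histogram over all regions.
    all_vals = np.concatenate([hue[m > 0] for m in region_masks])
    cum = np.cumsum(np.bincount(all_vals, minlength=180))
    target = int(np.searchsorted(cum, cum[-1] / 2))
    for mask in region_masks:  # e.g. face mask, neck mask
        vals = hue[mask > 0]
        if vals.size:
            # Move this region's median hue onto the target hue.
            hue[mask > 0] += target - int(np.median(vals))
    hsv[..., 0] = np.clip(hue, 0, 179)
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)
```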
In addition, the step of performing basic skin-beautification on the uniform-skin-tone image to obtain the output image includes:
enhancing the luminance of the pixels in each target skin region of the uniform-skin-tone image; and
sharpening preset skin regions in the uniform-skin-tone image to obtain the output image.
After the picture has passed through the two AI models, the final output is further refined with conventional algorithms, mainly:
1. for the segmented face region, computing the hue statistics of the region and its regional cumulative histogram to determine the target skin-tone range, and applying complementary-color correction to the other segmented body regions so as to unify the skin tone;
2. protecting the segmented background and whitening only the human-body regions;
3. sharpening the beard region produced by the segmentation model. This step may specifically include sharpening the eyebrow, beard, and hair regions and removing highlights from the cheek and forehead regions.
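As an illustrative sketch only, steps 2 and 3 above (region-restricted brightening and sharpening) could be combined as follows; the luminance gain, unsharp-mask amount, and Gaussian sigma are hypothetical values, not taken from the patent:

```python
import cv2
import numpy as np

def basic_beautify(img_bgr, skin_mask, sharpen_mask, gain=1.15, amount=0.8):
    """Brighten the target skin regions, then unsharp-mask the preset regions
    (e.g. eyebrow/beard/hair masks from the segmentation model)."""
    # 1) enhance the luminance of pixels inside the target skin areas only
    ycrcb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    y = ycrcb[..., 0]
    ycrcb[..., 0] = np.where(skin_mask > 0, np.clip(y * gain, 0, 255), y)
    out = cv2.cvtColor(ycrcb.astype(np.uint8), cv2.COLOR_YCrCb2BGR).astype(np.float32)
    # 2) sharpen the preset areas with an unsharp mask
    blur = cv2.GaussianBlur(out, (0, 0), sigmaX=2.0)
    sharp = np.clip(out + amount * (out - blur), 0, 255)
    m = (sharpen_mask > 0)[..., None]  # broadcast the 2-D mask over channels
    return np.where(m, sharp, out).astype(np.uint8)
```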
In summary, the image processing method provided by this application builds a complete beautification pipeline with deep learning, including AI blemish removal and AI skin region division. It can accurately remove facial blemishes while keeping feature moles; the skin detail is fully preserved, the skin tone is uniform, and the result has high definition and a strong sense of depth. The added AI body segmentation model further enables operations such as skin-tone unification and eyebrow and beard sharpening on the blemish-removed image, making the final beautification more natural. The whole scheme uses a lightweight network, so the running time is greatly reduced compared with traditional beautification algorithms.
Example 2
Referring to fig. 6, a block diagram of an image processing apparatus 600 according to an embodiment of the present disclosure is provided. As shown in fig. 6, the image processing apparatus 600 includes:
a blemish-removal module 601, configured to perform blemish removal on an original image to obtain a blemish-removed image;
a region division module 602, configured to obtain a skin region division result corresponding to the original image; and
a beautification module 603, configured to perform, according to the skin region division result, preset beautification of corresponding categories on different skin regions in the blemish-removed image to obtain an output image corresponding to the original image.
According to a specific embodiment of the present disclosure, the blemish-removal module 601 is configured to:
perform face key point recognition on the original image to obtain a face image; and
input the face image into a pre-trained blemish-removal model to obtain a blemish-removed image, wherein the blemishes include skin feature blocks to be removed from the face image.
According to a specific embodiment of the present disclosure, the blemish-removal module 601 is specifically configured to:
input the face image into the pre-trained blemish-removal model to obtain an intermediate image on which blemish removal has been performed; and
calculate a weighted sum of the intermediate image and the original image as the blemish-removed image.
According to a specific embodiment of the present disclosure, the blemish-removal model is obtained by:
acquiring multiple groups of blemish-removal sample images, wherein each group includes a first-type sample image on which no blemish removal has been performed and a second-type sample image obtained by performing blemish removal on the first-type sample image; and
inputting the multiple groups of blemish-removal sample images into a basic neural network for training to obtain the blemish-removal model.
According to a specific embodiment of the present disclosure, the region division module 602 is configured to:
input the original image into a pre-trained skin region division model to obtain the skin region division result corresponding to the original image, wherein the skin region division result includes a facial skin region, a body skin region, and a facial organ region.
According to a specific embodiment of the present disclosure, the skin region division model is obtained by:
acquiring multiple groups of skin region division sample images, wherein each group includes a third-type sample image that has not been divided and a fourth-type sample image obtained by performing skin region division and labeling on the third-type sample image; and
inputting the multiple groups of skin region division sample images into a basic neural network for training to obtain the skin region division model.
According to a specific embodiment of the present disclosure, the beautification module 603 is specifically configured to:
map the skin region division result onto the blemish-removed image to obtain a skin region division result of the blemish-removed image;
perform skin-tone unification on the target skin regions in the blemish-removed image according to the skin region division result of the blemish-removed image to obtain a uniform-skin-tone image; and
perform basic skin-beautification on the uniform-skin-tone image to obtain the output image.
According to a specific embodiment of the present disclosure, the beautification module 603 is configured to:
extract the actual hue values of each target skin region in the blemish-removed image, wherein the target skin regions include a face region and a neck region;
calculate a regional cumulative histogram from the actual hue values of each target skin region;
calculate a target hue value for each target skin region from the regional cumulative histogram; and
adjust the actual hue values of each target skin region to the corresponding target hue value to obtain the uniform-skin-tone image.
According to a specific embodiment of the present disclosure, the beautification module 603 is specifically configured to:
enhance the luminance of the pixels in each target skin region of the uniform-skin-tone image; and
sharpen preset skin regions in the uniform-skin-tone image to obtain the output image.
Furthermore, an embodiment of the present application provides a computer device that includes a memory and a processor, the memory storing a computer program that, when run by the processor, performs the image processing method described above.
An embodiment of the present application also provides a computer-readable storage medium storing a computer program that, when run on a processor, performs the image processing method described above.
With the image processing apparatus, computer device, and computer-readable storage medium provided by this application, when an image containing a face is optimized (for example, beautified), blemish removal and skin region division are first performed separately to obtain a blemish-removed image and a skin region division result, and preset beautification is then applied to the different skin regions of the blemish-removed image according to the division result. Beautification can thus be targeted at organ parts, feature moles, and other distinct skin regions, improving the attractiveness and realism of the result and avoiding the artificial look caused by whole-face beautification. For the specific implementation of the apparatus, the computer device, and the storage medium, reference may be made to the implementation of the image processing method in the embodiment shown in fig. 1; the details are not repeated here.
Specifically, as shown in fig. 7, to implement a computer device according to various embodiments of the present application, the computer device 700 includes, but is not limited to: a radio frequency unit 701, a network module 702, an audio output unit 703, an input unit 704, a sensor 705, a display unit 706, a user input unit 707, an interface unit 708, a memory 709, a processor 710, a power supply 711, and the like. Those skilled in the art will appreciate that the computer device architecture illustrated in FIG. 7 is not intended to be limiting of computer devices, which may include more or fewer components than those illustrated, or some of the components may be combined, or a different arrangement of components. In the embodiment of the present application, the computer device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
It should be understood that, in the embodiment of the present application, the radio frequency unit 701 may be used for receiving and sending signals during a message sending and receiving process or a call process, and specifically, receives downlink data from a base station and then processes the received downlink data to the processor 710; in addition, the uplink data is transmitted to the base station. Generally, the radio frequency unit 701 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 701 may also communicate with a network and other devices through a wireless communication system.
The computer device provides wireless broadband internet access to the user via the network module 702, such as to assist the user in sending and receiving e-mails, browsing web pages, and accessing streaming media.
The audio output unit 703 may convert audio data received by the radio frequency unit 701 or the network module 702 or stored in the memory 709 into an audio signal and output as sound. Also, the audio output unit 703 may also provide audio output related to a specific function performed by the computer apparatus 700 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 703 includes a speaker, a buzzer, a receiver, and the like.
The input unit 704 is used to receive audio or video signals. The input unit 704 may include a graphics processing unit (GPU) 7041 and a microphone 7042; the graphics processor 7041 processes image data of still pictures or video captured by an image-capturing device (such as a camera) in video-capture or image-capture mode. The processed image frames may be displayed on the display unit 706, stored in the memory 709 (or another storage medium), or transmitted via the radio frequency unit 701 or the network module 702. The microphone 7042 can receive sound and process it into audio data; in phone-call mode, the processed audio data can be converted into a format transmittable to a mobile communication base station and output via the radio frequency unit 701.
The computer device 700 further includes at least one sensor 705, such as a light sensor, a motion sensor, or another sensor. Specifically, the light sensor includes an ambient light sensor, which can adjust the brightness of the display panel 7061 according to the ambient light, and a proximity sensor, which can turn off the display panel 7061 and/or the backlight when the computer device 700 is moved to the ear. As a motion sensor, an accelerometer can detect the magnitude of acceleration in each direction (generally three axes) and the magnitude and direction of gravity when stationary, and can be used to recognize the posture of the computer device (such as landscape/portrait switching, related games, and magnetometer posture calibration) and for vibration-recognition functions (such as a pedometer or tap detection); the sensor 705 may also include a fingerprint sensor, pressure sensor, iris sensor, molecular sensor, gyroscope, barometer, hygrometer, thermometer, infrared sensor, and the like, which are not described in detail here.
The display unit 706 is used to display information input by the user or provided to the user. The display unit 706 may include a display panel 7061, which may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like.
The user input unit 707 may be used to receive input numeric or character information and to generate key-signal inputs related to user settings and function control of the computer device. Specifically, the user input unit 707 includes a touch panel 7071 and other input devices 7072. The touch panel 7071, also called a touch screen, can collect the user's touch operations on or near it (for example, operations with a finger, a stylus, or any other suitable object or attachment). The touch panel 7071 may include two parts: a touch detection apparatus and a touch controller. The touch detection apparatus detects the position of the user's touch, detects the signal produced by the touch operation, and passes the signal to the touch controller; the touch controller receives the touch information from the touch detection apparatus, converts it into touch-point coordinates, and sends the coordinates to the processor 710, and it also receives and executes commands sent by the processor 710. The touch panel 7071 can be implemented as a resistive, capacitive, infrared, or surface-acoustic-wave panel, among other types. Besides the touch panel 7071, the user input unit 707 may include other input devices 7072, which may include but are not limited to a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick; these are not described further here.
Further, the touch panel 7071 may be overlaid on the display panel 7061. When the touch panel 7071 detects a touch operation on or near it, the operation is passed to the processor 710 to determine the type of the touch event, and the processor 710 then provides the corresponding visual output on the display panel 7061 according to that type. Although in fig. 7 the touch panel 7071 and the display panel 7061 are shown as two separate components implementing the input and output functions of the computer device, in some embodiments they may be integrated to implement those functions; this is not limited here.
The interface unit 708 is an interface for connecting an external computer device to the computer device 700. For example, the external computer device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a computer device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 708 may be used to receive input (e.g., data information, power, etc.) from an external computer device and transmit the received input to one or more elements within the computer device 700 or may be used to transmit data between the computer device 700 and an external computer device.
The memory 709 may be used to store software programs as well as various data. The memory 709 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, etc. Further, the memory 709 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 710 is a control center of the computer device, connects various parts of the entire computer device using various interfaces and lines, and performs various functions of the computer device and processes data by operating or executing software programs and/or modules stored in the memory 709 and calling data stored in the memory 709, thereby monitoring the computer device as a whole. Processor 710 may include one or more processing units; preferably, the processor 710 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 710.
The computer device 700 may further include a power supply 711 (e.g., a battery) for providing power to the various components, and preferably, the power supply 711 may be logically connected to the processor 710 via a power management system, such that functions of managing charging, discharging, and power consumption may be performed via the power management system.
In addition, the computer device 700 includes some functional modules that are not shown, and are not described in detail herein.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative and, for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, each functional module or unit in each embodiment of the present application may be integrated together to form an independent part, or each module may exist alone, or two or more modules may be integrated to form an independent part.
Based on such understanding, the technical solution of the present application, or the portion thereof that contributes to the prior art, may be embodied in the form of a software product. The software product is stored in a storage medium and includes instructions for causing a computer device (which may be a smart phone, a personal computer, a server, or a network device) to execute all or part of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes any medium capable of storing program code, such as a USB flash drive, a portable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above description covers only specific embodiments of the present application, but the scope of the present application is not limited thereto. Any changes or substitutions that a person skilled in the art could readily conceive within the technical scope disclosed in the present application shall be covered by the scope of the present application.

Claims (12)

1. An image processing method, comprising:
performing inpainting processing on an original image to obtain an inpainted image;
acquiring a skin area division result corresponding to the original image;
and performing, according to the skin area division result, preset beautifying processing of corresponding categories on different skin areas in the inpainted image respectively, to obtain an output image corresponding to the original image.
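As an illustration only (not part of the claim language), the three-step pipeline of claim 1 can be sketched in Python as follows; the helpers inpaint_flaws, segment_skin, and beautify_region are hypothetical stand-ins for the trained models and per-category filters described in the dependent claims:

    import numpy as np

    def process_image(original: np.ndarray, inpaint_flaws,
                      segment_skin, beautify_region) -> np.ndarray:
        """Claim-1 pipeline: inpaint first, divide skin areas, then beautify."""
        inpainted = inpaint_flaws(original)      # step 1: inpainting processing
        areas = segment_skin(original)           # step 2: {area_name: boolean mask}
        output = inpainted.copy()
        for name, mask in areas.items():         # step 3: per-category beautifying
            output[mask] = beautify_region(name, inpainted)[mask]
        return output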
2. The method of claim 1, wherein the step of performing inpainting processing on the original image to obtain the inpainted image comprises:
carrying out face key point identification on the original image to obtain a face image;
inputting the face image into a pre-trained inpainting model to obtain an inpainted image subjected to inpainting processing, wherein the flaws removed by the inpainting comprise skin feature blocks to be removed from the face image.
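A sketch of the cropping step of claim 2, assuming a hypothetical detect_face_keypoints callable (the claim names no particular key point algorithm) that returns (x, y) points; the 20-pixel margin is likewise an assumption:

    import numpy as np

    def crop_face(original: np.ndarray, detect_face_keypoints) -> np.ndarray:
        """Crop the face image from the bounding box of detected key points."""
        pts = np.asarray(detect_face_keypoints(original))   # expected shape (N, 2)
        x0, y0 = pts.min(axis=0).astype(int)
        x1, y1 = pts.max(axis=0).astype(int)
        h, w = original.shape[:2]
        m = 20                                              # illustrative margin
        return original[max(y0 - m, 0):min(y1 + m, h),
                        max(x0 - m, 0):min(x1 + m, w)]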
3. The method of claim 2, wherein the step of inputting the face image into the pre-trained inpainting model to obtain the inpainted image subjected to inpainting processing comprises:
inputting the face image into the pre-trained inpainting model to obtain an intermediate image subjected to inpainting processing;
calculating a weighted sum of the intermediate image and the original image as the inpainted image.
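The weighted sum of claim 3 can be written directly as a per-pixel blend; the weight alpha = 0.8 below is illustrative, as the claim does not fix a value:

    import numpy as np

    def blend_inpainted(intermediate: np.ndarray, original: np.ndarray,
                        alpha: float = 0.8) -> np.ndarray:
        """Weighted sum of the inpainted intermediate image and the original."""
        out = (alpha * intermediate.astype(np.float32)
               + (1.0 - alpha) * original.astype(np.float32))
        return np.clip(out, 0, 255).astype(np.uint8)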
4. The method of claim 3, wherein the inpainting model is obtained by:
acquiring a plurality of groups of inpainting sample images, wherein each group of inpainting sample images comprises a first type of sample image which has not been inpainted and a second type of sample image obtained by inpainting the first type of sample image;
and inputting the plurality of groups of inpainting sample images into a basic neural network for training, to obtain the inpainting model.
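One plausible reading of the training step of claim 4, sketched with PyTorch; the L1 reconstruction loss, the Adam optimizer, and all hyperparameters are assumptions, since the claim specifies only paired sample images fed to a basic neural network:

    import torch
    import torch.nn as nn

    def train_inpainting_model(net: nn.Module, loader, epochs: int = 10,
                               lr: float = 1e-4, device: str = "cpu") -> nn.Module:
        """Train on (first type, second type) sample-image pairs of claim 4."""
        net.to(device).train()
        opt = torch.optim.Adam(net.parameters(), lr=lr)
        loss_fn = nn.L1Loss()                      # pixel-wise reconstruction loss
        for _ in range(epochs):
            for flawed, clean in loader:           # un-inpainted / inpainted pair
                flawed, clean = flawed.to(device), clean.to(device)
                opt.zero_grad()
                loss = loss_fn(net(flawed), clean)
                loss.backward()
                opt.step()
        return net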
5. The method according to claim 4, wherein the step of acquiring the skin area division result corresponding to the original image comprises:
inputting the original image into a pre-trained skin area division model to obtain a skin area division result corresponding to the original image, wherein the skin area division result comprises a face skin area, a body skin area, and a facial organ area.
6. The method according to claim 5, wherein the skin area division model is obtained by:
acquiring a plurality of groups of skin area division sample images, wherein each group of skin area division sample images comprises a third type of sample image which has not been divided and a fourth type of sample image obtained by performing skin area division and labeling processing on the third type of sample image;
and inputting the plurality of groups of skin area division sample images into a basic neural network for training, to obtain the skin area division model.
7. The method according to any one of claims 1 to 6, wherein the step of performing preset beautifying processing of corresponding categories on different skin areas in the inpainted image according to the skin area division result, to obtain an output image corresponding to the original image, comprises:
mapping the skin area division result to the inpainted image to obtain a skin area division result of the inpainted image;
according to the skin area division result of the inpainted image, performing uniform skin color processing on target skin areas in the inpainted image to obtain a uniform skin color image;
and carrying out basic skin beautifying processing on the uniform skin color image to obtain the output image.
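The mapping step of claim 7 matters when the division result and the inpainted image differ in size; a sketch using nearest-neighbour mask resizing (the interpolation choice is an assumption, as the claim says only "mapping"):

    import cv2
    import numpy as np

    def map_masks_to_inpainted(masks: dict, inpainted: np.ndarray) -> dict:
        """Resize each boolean area mask onto the inpainted image's pixel grid."""
        h, w = inpainted.shape[:2]
        return {name: cv2.resize(m.astype(np.uint8), (w, h),
                                 interpolation=cv2.INTER_NEAREST).astype(bool)
                for name, m in masks.items()}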
8. The method according to claim 7, wherein the step of performing uniform skin color processing on the target skin areas in the inpainted image according to the skin area division result of the inpainted image, to obtain the uniform skin color image, comprises:
respectively extracting actual hue values of the target skin areas in the inpainted image, wherein the target skin areas comprise a face area and a neck area;
calculating an area cumulative histogram according to the actual hue values of each target skin area;
calculating a target hue value of each target skin area according to the area cumulative histogram;
and adjusting the actual hue value of each target skin area to the corresponding target hue value to obtain the uniform skin color image.
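A hedged reading of the hue adjustment in claim 8: the claim does not say how the target hue is derived from the cumulative histogram, so this sketch assumes the median (50th-percentile) hue over all target areas as a shared target, and a uniform per-area shift toward it:

    import cv2
    import numpy as np

    def equalize_area_hue(image_bgr: np.ndarray, masks: dict) -> np.ndarray:
        """Pull each target area's hue toward a shared cumulative-histogram target."""
        hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV).astype(np.int32)
        hue = hsv[..., 0]
        all_mask = np.zeros(hue.shape, dtype=bool)
        for m in masks.values():                   # e.g. face and neck masks
            all_mask |= m
        hist = np.bincount(hue[all_mask], minlength=180)   # OpenCV hue: 0..179
        target = int(np.searchsorted(np.cumsum(hist), hist.sum() // 2))  # median
        for m in masks.values():                   # shift each area's hues
            shift = target - int(np.median(hue[m]))
            hue[m] = np.clip(hue[m] + shift, 0, 179)
        hsv[..., 0] = hue
        return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)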
9. The method of claim 8, wherein the step of performing basic skin beautifying processing on the uniform skin color image to obtain the output image comprises:
enhancing the brightness values of pixel points in each target skin area in the uniform skin color image;
and sharpening a preset skin area in the uniform skin color image to obtain the output image.
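A sketch of the two operations in claim 9, using a simple channel gain for the brightness enhancement and unsharp masking for the sharpening; the gain, unsharp amount, and blur sigma are all assumptions:

    import cv2
    import numpy as np

    def basic_skin_beautify(image_bgr: np.ndarray, target_masks: dict,
                            sharpen_mask: np.ndarray,
                            gain: float = 1.08, amount: float = 0.5) -> np.ndarray:
        """Brighten each target skin area, then unsharp-mask the preset area."""
        out = image_bgr.astype(np.float32)
        for mask in target_masks.values():         # brightness enhancement
            out[mask] *= gain
        blurred = cv2.GaussianBlur(out, (0, 0), 3) # sigma = 3, assumed
        sharp = out + amount * (out - blurred)     # unsharp masking
        out[sharpen_mask] = sharp[sharpen_mask]    # only the preset skin area
        return np.clip(out, 0, 255).astype(np.uint8)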
10. An image processing apparatus characterized by comprising:
an inpainting module, used for performing inpainting processing on an original image to obtain an inpainted image;
a region division module, used for acquiring a skin area division result corresponding to the original image;
and a beautifying module, used for respectively performing preset beautifying processing of corresponding categories on different skin areas in the inpainted image according to the skin area division result, to obtain an output image corresponding to the original image.
11. A computer device, comprising a memory for storing a computer program and a processor, wherein the computer program, when executed by the processor, performs the image processing method of any one of claims 1 to 9.
12. A computer-readable storage medium, characterized in that it stores a computer program which, when run on a processor, performs the image processing method of any one of claims 1 to 9.
CN202110604268.6A 2021-05-31 2021-05-31 Image processing method and device and computer equipment Pending CN115482157A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110604268.6A CN115482157A (en) 2021-05-31 2021-05-31 Image processing method and device and computer equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110604268.6A CN115482157A (en) 2021-05-31 2021-05-31 Image processing method and device and computer equipment

Publications (1)

Publication Number Publication Date
CN115482157A (en) 2022-12-16

Family

ID=84419983

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110604268.6A Pending CN115482157A (en) 2021-05-31 2021-05-31 Image processing method and device and computer equipment

Country Status (1)

Country Link
CN (1) CN115482157A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116245754A (en) * 2022-12-29 2023-06-09 北京百度网讯科技有限公司 Image processing method, device, electronic equipment and storage medium
CN116245754B (en) * 2022-12-29 2024-01-09 北京百度网讯科技有限公司 Image processing method, device, electronic equipment and storage medium
CN117442895A (en) * 2023-12-26 2024-01-26 广州中科医疗美容仪器有限公司 Ultrasonic automatic control method and system based on machine learning
CN117442895B (en) * 2023-12-26 2024-03-05 广州中科医疗美容仪器有限公司 Ultrasonic automatic control method and system based on machine learning

Similar Documents

Publication Title
CN110706179B (en) Image processing method and electronic equipment
CN111260665B (en) Image segmentation model training method and device
CN108076290B (en) Image processing method and mobile terminal
CN110443769B (en) Image processing method, image processing device and terminal equipment
CN108234882B (en) Image blurring method and mobile terminal
CN110781899B (en) Image processing method and electronic device
CN109272473B (en) Image processing method and mobile terminal
CN111080747B (en) Face image processing method and electronic equipment
CN115482157A (en) Image processing method and device and computer equipment
CN111047511A (en) Image processing method and electronic equipment
CN110765924A (en) Living body detection method and device and computer-readable storage medium
CN109671034B (en) Image processing method and terminal equipment
CN109727212B (en) Image processing method and mobile terminal
CN109272466A (en) A kind of tooth beautification method and device
CN108550117A (en) A kind of image processing method, device and terminal device
CN110602424A (en) Video processing method and electronic equipment
CN113255396A (en) Training method and device of image processing model, and image processing method and device
CN109840476B (en) Face shape detection method and terminal equipment
CN109639981B (en) Image shooting method and mobile terminal
CN109451235B (en) Image processing method and mobile terminal
CN113192537B (en) Awakening degree recognition model training method and voice awakening degree acquisition method
CN107563353B (en) Image processing method and device and mobile terminal
CN110991325A (en) Model training method, image recognition method and related device
CN110944112A (en) Image processing method and electronic equipment
CN111553854A (en) Image processing method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination