WO2023103813A1 - Image processing method, apparatus, device, storage medium and program product

Image processing method, apparatus, device, storage medium and program product (图像处理方法、装置、设备、存储介质及程序产品)

Info

Publication number
WO2023103813A1
Authority
WO
WIPO (PCT)
Prior art keywords: pixel, stage, image, result, skin color
Application number
PCT/CN2022/134464
Other languages: English (en), French (fr)
Inventor: 陈莉莉 (Chen Lili)
Original Assignee: 百果园技术(新加坡)有限公司 (Baiguoyuan Technology (Singapore) Co., Ltd.), 陈莉莉 (Chen Lili)
Application filed by 百果园技术(新加坡)有限公司 (Baiguoyuan Technology (Singapore) Co., Ltd.) and 陈莉莉 (Chen Lili)
Publication of WO2023103813A1


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/77 - Retouching; Inpainting; Scratch removal
    • G06T5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T7/00 - Image analysis
    • G06T7/90 - Determination of colour characteristics
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10024 - Color image
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30196 - Human being; Person
    • G06T2207/30201 - Face

Definitions

  • The present application relates to the field of computer technology, and in particular to an image processing method, apparatus, device, storage medium and program product.
  • The traditional image processing method based on the edge-preserving filter algorithm focuses on removing blemishes such as acne and spots in facial images, but it also loses a lot of facial detail, resulting in poor skin texture preservation.
  • a machine learning algorithm is used to train a neural network model, and the original facial image is processed through the neural network model to output a beautified facial image.
  • This end-to-end approach based on a neural network model requires a relatively complex network structure in order to give the model both good blemish removal and good skin texture preservation, which makes the processing time-consuming and unable to meet application scenarios with high real-time requirements.
  • Embodiments of the present application provide an image processing method, apparatus, device, storage medium, and program product.
  • the technical solution is as follows:
  • an image processing method is provided, the method is executed by a computer device, and the method includes:
  • acquiring an original image to be processed;
  • performing skin color detection and face detection on the original image to obtain a first detection result map and a second detection result map; wherein the first detection result map is used to characterize the skin color area in the original image, and the second detection result map is used to characterize the face area in the original image;
  • performing first-stage processing on the original image based on the first detection result map and the second detection result map to obtain a first-stage result map; wherein the first-stage processing is used to remove areas in the face area that differ from the overall skin tone;
  • performing second-stage processing on the first-stage result map to obtain a second-stage result map; wherein the second-stage processing is used to improve the uniformity of skin color at different positions in the face area;
  • generating a final result map based on the original image and the second-stage result map.
  • an image processing device includes:
  • the original image acquisition module is configured to acquire the original image to be processed
  • the original image detection module is configured to perform skin color detection and face detection on the original image to obtain a first detection result map and a second detection result map; wherein the first detection result map is used to characterize the skin color area in the original image, and the second detection result map is used to characterize the face area in the original image;
  • the first processing module is configured to perform first-stage processing on the original image based on the first detection result map and the second detection result map to obtain a first-stage result map; wherein the first-stage processing is used to remove areas in the face area that differ from the overall skin tone;
  • the second processing module is configured to perform second-stage processing on the first-stage result map to obtain a second-stage result map; wherein the second-stage processing is used to improve the uniformity of skin color at different positions in the face area;
  • the result generating module is configured to generate a final result map based on the original image and the second-stage result map.
  • A computer device is provided; the computer device includes a processor and a memory, a computer program is stored in the memory, and the computer program is loaded and executed by the processor to implement the above image processing method.
  • a computer-readable storage medium is provided, and a computer program is stored in the storage medium, and the computer program is loaded and executed by a processor to implement the above image processing method.
  • A computer program product is provided; the computer program product includes computer instructions stored in a computer-readable storage medium, and a processor reads the computer instructions from the computer-readable storage medium and executes them to implement the above image processing method.
  • By performing skin color detection and face detection on the original image, the skin color area and the face area in the original image are selected. Through the first-stage processing, the areas in the face area that differ greatly from the overall skin color are weakened, reducing the difference between the skin color of those areas and the overall skin color of the face area; through the second-stage processing, the uniformity of the skin color at different positions in the face area is improved, yielding the final image processing result.
  • On the one hand, the beautification focuses on the facial skin color area obtained above and is not applied to the background area or the edges of the facial features, which reduces the loss of detail in the background and at the edges of the facial features during beautification.
  • On the other hand, this solution can complete the image beautification process through simple calculations alone, without using a neural network for a large number of calculations. This optimizes the calculation process of the beautification processing and shortens the time it takes, so the solution can meet application scenarios with high real-time requirements.
  • For example, the above beautification process can be performed while shooting, with high real-time performance. Therefore, the present application provides an image beautification solution that combines blemish removal ability, skin texture preservation ability, and high real-time performance.
  • Fig. 1 is a schematic diagram of a scheme implementation environment provided by an embodiment of the present application.
  • FIG. 2 is a flowchart of an image processing method provided by an embodiment of the present application.
  • Fig. 3 is a schematic diagram before and after image processing provided by an embodiment of the present application.
  • Fig. 4 is a flowchart of an image processing method provided by another embodiment of the present application.
  • Fig. 5 is a schematic diagram of an image processing method provided by an embodiment of the present application.
  • Fig. 6 is a block diagram of an image processing device provided by an embodiment of the present application.
  • Fig. 7 is a block diagram of an image processing device provided by another embodiment of the present application.
  • FIG. 1 shows a schematic diagram of a solution implementation environment provided by an embodiment of the present application.
  • the solution implementation environment may include: a terminal 10 and a server 20 .
  • The terminal 10 can be an electronic device such as a mobile phone, a tablet computer, a PC (Personal Computer), a wearable device, a vehicle-mounted terminal device, a VR (Virtual Reality) device, or an AR (Augmented Reality) device, which is not limited in this application.
  • a client running a target application program can be installed in the terminal 10 .
  • the target application program may be an image processing application program or other application programs with image processing functions.
  • the target application program is an application program with image beautification function, such as a shooting application program, a live video application program, a social application program, a video editing application program, a short video application program, etc., which is not limited in this application.
  • In some embodiments, the target application is an application with an image beautification function, and the client of the target application has the function of beautifying face images, so that a face image can meet the user's needs after processing.
  • the server 20 may be an independent physical server, a server cluster or a distributed system composed of multiple physical servers, or a cloud server providing cloud computing services.
  • the server 20 may be the background server of the above-mentioned target application, and is used to provide background services for the client of the target application.
  • Communication between the terminal 10 and the server 20 may be performed through a network, for example, the network may be a wired network or a wireless network.
  • The execution subject of each step may be the server 20 in the solution implementation environment shown in FIG. 1, that is, the server 20 executes all the steps of the method embodiments of the present application; or the terminal 10 (such as the client of the target application program), that is, the terminal 10 executes all the steps of the method embodiments; or the server 20 and the terminal 10 may interact and cooperate, that is, the server 20 executes some steps of the method embodiments and the terminal 10 executes the other steps.
  • In one example, the above target application is used to process already-created pictures and perform beautification processing on the faces in the pictures.
  • In another example, the above target application is used in live streaming and short video scenarios, performing real-time beautification processing on the faces in the frame while live streaming or shooting video.
  • In such scenarios the real-time requirements are high: the terminal needs to be able to beautify the faces in the shooting frame while shooting.
  • FIG. 2 shows a flowchart of an image processing method provided by an embodiment of the present application.
  • the method may include at least one of the following steps (210-250):
  • Step 210 acquire the original image to be processed.
  • the original image contains the face area of the target object.
  • the original image may be a face image, or an image containing a face.
  • the original image may be a front view of a face, a side view of a face, or an image containing multiple face regions, etc., which is not limited in the present application.
  • In the embodiment of the present application, face beautification processing is performed on the original image to be processed to obtain the processed image, where the strength of the face beautification processing can be adjusted according to the user's needs.
  • Step 220 perform skin color detection and face detection on the original image to obtain a first detection result map and a second detection result map.
  • Wherein, the first detection result map is used to characterize the skin color area in the original image, and the second detection result map is used to characterize the face area in the original image.
  • the skin color area refers to the area formed by the pixels whose color value matches the skin color in the original image.
  • The skin-colored area and the non-skin-colored area in the original image can be distinguished based on the first detection result map.
  • the skin color area may include a face area, and may also include skin areas such as necks and arms, or areas where objects or backgrounds whose color values match the skin color in the original image are located.
  • In some embodiments, a skin color probability is set for each pixel in the original image, and the first detection result map includes the skin color probability corresponding to each pixel in the original image.
  • the skin color probability corresponding to a certain pixel is used to indicate whether the pixel belongs to the skin color area.
  • In some embodiments, the original image is detected by the RGB color values of its pixels: the RGB color value of each pixel in the original image is extracted, an RGB color value interval corresponding to the skin color area is set, and the area formed by all pixels whose RGB color values lie within that interval is determined as the skin color area.
  • For example, set the R interval of the RGB color value interval corresponding to the skin color area to (206-254), the G interval to (123-234), and the B interval to (100-230), and detect each pixel in the original image. If the RGB color value of a pixel is (240, 180, 150), the pixel belongs to the skin color area; if the RGB color value of another pixel is (200, 180, 150), the pixel does not belong to the skin color area, because its R value of 200 falls outside (206-254).
  • the skin color probability corresponding to a certain pixel is used to indicate the degree of association (or closeness) between the pixel and the skin color area.
  • In some embodiments, the skin color probability is set according to the sum, over the channels, of the minimum distances between the pixel's RGB color values and the upper or lower limits of the RGB color value interval corresponding to the skin color area: the smaller this sum of distances, the greater the skin color probability.
  • In some embodiments, the maximum skin color probability is 1, meaning the pixel lies in the skin color area; the minimum skin color probability is 0, meaning none of the pixel's RGB channel values lies within the corresponding interval; and when some but not all of the pixel's RGB channel values lie within the corresponding intervals, the skin color probability of that point is between 0 and 1.
  • For example, with the R interval of the RGB color value interval corresponding to the skin color area set to (206-254), the G interval to (123-234), and the B interval to (100-230), each pixel is detected. When the RGB color value of pixel E is (240, 180, 150), pixel E belongs to the skin color area and its skin color probability is 1. When the RGB color value of pixel F is (200, 60, 255), pixel F does not belong to the skin color area and none of its channel values lies within the corresponding interval, so its skin color probability is 0. When the RGB color value of pixel G is (200, 180, 150), pixel G does not belong to the skin color area, but its G and B values lie within their intervals and its R value is only 6 below the lower limit of 206; the distance sum is therefore small, and the skin color probability of pixel G is between 0 and 1.
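  • To make the two variants above concrete, the following is a minimal Python/NumPy sketch of both the hard interval test and a distance-based soft skin color probability. The interval bounds, the linear decay, and its scale constant are illustrative assumptions drawn from the examples above, not values prescribed by this application.

```python
import numpy as np

# Illustrative per-channel skin color intervals from the example above.
LO = np.array([206, 123, 100], dtype=np.float32)  # R, G, B lower bounds
HI = np.array([254, 234, 230], dtype=np.float32)  # R, G, B upper bounds

def skin_mask(img):
    """Hard test: 1 where every channel lies inside its interval, else 0.
    img is an (H, W, 3) float array of RGB values."""
    inside = (img >= LO) & (img <= HI)
    return inside.all(axis=-1).astype(np.float32)

def skin_probability(img, scale=100.0):
    """Soft score: 1 inside all intervals, decaying toward 0 as the summed
    distance of each channel to its nearest interval bound grows. The
    linear decay and `scale` are assumptions; the text only requires that
    a smaller distance sum yield a larger probability."""
    below = np.clip(LO - img, 0, None)   # distance below each lower bound
    above = np.clip(img - HI, 0, None)   # distance above each upper bound
    dist = (below + above).sum(axis=-1)  # 0 for pixels inside all intervals
    return np.clip(1.0 - dist / scale, 0.0, 1.0)
```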
  • In some embodiments, the key points of the face in the original image are located by a neural network, and the face area is then determined from these key points to obtain the face area in the original image.
  • In some embodiments, the key points of the face are used to assist face detection, so that more accurate face areas can be obtained.
  • In some embodiments, the key points of the face include the key points of facial parts such as the corners of the mouth, the corners of the eyes, and the corners of the eyebrows.
  • Based on the key points of the face, the face area in the original image is determined.
  • the face area only includes skin-colored areas of the face, excluding non-skin-colored areas such as eyes and eyebrows.
  • In some embodiments, a face probability is set for each pixel in the original image, and the second detection result map includes the face probability of each pixel in the original image.
  • In some embodiments, the face probability of a pixel is used to indicate whether the pixel belongs to the face area.
  • For example, if a spot 35 is located in the face area 33, the face probability of the pixels corresponding to the spot 35 is 1; the corner of the eye 34 does not belong to the face area 33, so the face probability of the pixels corresponding to the corner of the eye 34 is 0. If a pixel is located in the background of the original image and does not belong to the face area 33, the face probability of that pixel is 0.
  • the face probability of a certain pixel is used to indicate the degree of association between the pixel and the face area.
  • In some embodiments, the degree of association is set according to whether the pixel can be used to help determine the face area: the more a pixel helps determine the face area, the greater its face probability.
  • In some embodiments, the maximum face probability is 1, meaning the pixel lies in the face area; the minimum face probability is 0, meaning the pixel neither belongs to the face area nor can be used to help determine it; and when a pixel does not belong to the face area but can be used to help determine it, the face probability of that point is between 0 and 1.
  • For example, if a spot 35 is located in the face area 33, the face probability of the pixels corresponding to the spot 35 is 1. The corner of the eye 34 does not belong to the face area 33, but it is used to help determine the face area 33, so the face probability of the pixels corresponding to the corner of the eye 34 is between 0 and 1. If a pixel is located in the background, it does not belong to the face area 33 and cannot be used to help determine it, so the face probability of that pixel is 0.
  • Step 230: based on the first detection result map and the second detection result map, perform first-stage processing on the original image to obtain the first-stage result map.
  • Wherein, the first-stage processing is used to remove areas in the face area that differ from the overall skin tone.
  • After the first-stage processing is performed on the original image, the first-stage result map is obtained. Compared with the original image, the first-stage result map weakens the areas in the face area that differ greatly from the overall skin color, reducing the difference between the skin color of those areas and the overall skin color of the face area.
  • In some embodiments, the first-stage processing only processes the face area in the original image, mainly the areas that differ considerably from the overall skin color, such as acne, spots, and scars with large color differences.
  • The first-stage processing is used to weaken the above areas with large color differences, such as acne, spots, and scars, and to reduce the difference between the skin color of those areas and the overall skin color of the face area, so that the overall color of the face area tends to be as consistent as possible.
  • For example, the first-stage processing processes acne, spots 35, scars, and other areas with color differences in the face area 33 of the original image 31.
  • Step 240 performing second-stage processing on the first-stage result map to obtain the second-stage result map.
  • Wherein, the second-stage processing is used to improve the uniformity of skin color at different positions in the face area.
  • After the first-stage processing, the areas with large color differences in the face area are weakened and the difference between their skin color and the overall skin color of the face area is reduced, but a certain difference still remains. From the perspective of the overall visual effect of the entire face area, slight color differences may exist between the freckle-removed areas and the areas that were not processed, resulting in discontinuity and unevenness between them.
  • Therefore, the second-stage processing is performed to improve the uniformity of the skin color in the face area, and the second-stage result map is obtained.
  • Step 250 based on the original image and the second-stage result image, a final result image is generated.
  • After the second-stage processing, the second-stage result map is obtained; the second-stage result map is the result the user obtains when the beautification intensity value is at its maximum.
  • Wherein, the beautification intensity value is used to adjust the respective proportions of the original image and the second-stage result map when they are mixed; this value can be controlled by the user.
  • The original image and the second-stage result map are each assigned a corresponding weight according to the beautification intensity value and then fused to obtain the final result map.
  • For example, when the beautification intensity value set by the user is the minimum threshold, the weight of the original image is 1 and the weight of the second-stage result map is 0, so the final result map is the original image. When the beautification intensity value set by the user is between the minimum threshold and the maximum threshold, the original image and the second-stage result map are assigned corresponding weights according to the beautification intensity value and then fused to obtain the final result map. When the beautification intensity value set by the user is the maximum threshold, the weight of the original image is 0 and the weight of the second-stage result map is 1, so the final result map is the second-stage result map.
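  • A minimal sketch of this intensity-controlled blend, assuming the beautification intensity value is normalized to [0, 1] so that 0 and 1 correspond to the minimum and maximum thresholds above:

```python
def blend_by_intensity(original, stage2, intensity):
    """Linear blend of two images held as NumPy float arrays:
    intensity = 0 returns the original image, intensity = 1 returns the
    second-stage result map, and values in between interpolate linearly."""
    return (1.0 - intensity) * original + intensity * stage2
```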
  • This application selects the skin color area and the face area in the original image by performing skin color detection and face detection on the original image. Through the first-stage processing, the areas in the face area that differ greatly from the overall skin color are weakened, reducing the difference between the skin color of those areas and the overall skin color of the face area; through the second-stage processing, the uniformity of the skin color in the face area is improved to obtain the final image processing result.
  • On the one hand, the beautification focuses on the facial skin color area obtained above and is not applied to the background area or the edges of the facial features, which reduces the loss of detail in the background and at the edges of the facial features during beautification.
  • On the other hand, this solution can complete the image beautification process through simple calculations alone, without using a neural network for a large number of calculations. This optimizes the calculation process of the beautification processing and shortens the time it takes, so the solution can meet application scenarios with high real-time requirements.
  • For example, the above beautification process can be performed while shooting, with high real-time performance. Therefore, the present application provides an image beautification solution that combines blemish removal ability, skin texture preservation ability, and high real-time performance.
  • FIG. 4 shows a flow chart of an image processing method provided in another embodiment of the present application.
  • the method may include at least one of the following steps (410-490):
  • Step 410 acquire the original image to be processed.
  • Step 420 performing skin color detection and face detection on the original image to obtain a first detection result image and a second detection result image.
  • Wherein, the first detection result map is used to characterize the skin color area in the original image, and the second detection result map is used to characterize the face area in the original image.
  • For details of step 410 and step 420, refer to the above embodiment; they are not repeated here.
  • Step 430 based on the first detection result map and the second detection result map, perform a first filtering process on the original image to obtain a first filtered image.
  • Wherein, the first filtering process is used to filter the face area while retaining the edges in the original image.
  • In some embodiments, the filtering process may be performed only on the face area.
  • Through the first filtering process, the pixel values of the pixels in the face area become closer to each other, and the difference between the face area and other areas becomes more obvious.
  • step 430 includes steps 431-434:
  • Step 431 based on the second detection result image, determine the face area in the original image.
  • The second detection result map is used to characterize the face area in the original image; based on the second detection result map, the face area in the original image is determined, where the face area in the original image may include only the skin color area, excluding non-skin areas such as the eyes and eyebrows.
  • Step 432: for a target pixel in the face area, determine the first filtering weight corresponding to each surrounding pixel according to the pixel value difference between the target pixel and that surrounding pixel and the skin color probability corresponding to that surrounding pixel; wherein the skin color probability is obtained based on the first detection result map.
  • the surrounding pixels are pixels adjacent to the target pixel.
  • the surrounding pixels may also be pixels separated by 1 pixel from the target pixel.
  • the definition of the surrounding pixels is not limited in this application.
  • In some embodiments, the first filtering weight corresponding to each surrounding pixel is calculated from the pixel value difference between the target pixel and that surrounding pixel and the skin color probability corresponding to that surrounding pixel.
  • the pixel value may be the RGB color value mentioned in the previous embodiment.
  • In some embodiments, the pixel value difference between the target pixel and each surrounding pixel is obtained through calculation. For example, if the pixel value of the target pixel is A and the pixel value of one of the surrounding pixels is B, the pixel value difference between them may be |A - B|. In some embodiments, the pixel value difference between the target pixel and a surrounding pixel may also be (A - B)²; the present application does not limit the calculation method of the pixel value difference between the target pixel and the surrounding pixels.
  • The skin color probability corresponding to each surrounding pixel is obtained from the first detection result map.
  • the probability of skin color refer to the content in the previous embodiment, which will not be repeated here.
  • Combining the pixel value difference and the skin color probability, the first filtering weight corresponding to each surrounding pixel is obtained.
  • For example, if the pixel value of the target pixel is A, the pixel value of one surrounding pixel is B, and the skin color probability corresponding to that surrounding pixel is X, the first filtering weight corresponding to that surrounding pixel may be X·(A - B)²; this application does not limit the calculation method of the first filtering weight.
  • Step 433 Determine the first filtered pixel value corresponding to the target pixel according to the pixel values corresponding to each surrounding pixel and the first filtering weight.
  • a first filtered pixel value corresponding to the target pixel is determined according to the pixel values of each surrounding pixel and the first filtering weight.
  • the pixel value of each surrounding pixel can be obtained from the original image, and the first filtering weight is obtained according to the above calculation process.
  • In some embodiments, the target pixel itself participates in the calculation of its first filtered pixel value, and the weight value of the target pixel may be set to 1.
  • In some embodiments, the target pixel does not participate in the calculation of its first filtered pixel value.
  • This application does not limit the calculation process of the first filtered pixel value corresponding to the target pixel.
  • the number of pixels around the target pixel can be arbitrary.
  • the area composed of the target pixel and its surrounding pixels can be a 3*3 area, then according to the pixel value of each surrounding pixel in the 3*3 area and the first filter weight , to determine the first filtered pixel value corresponding to the target pixel.
  • the area composed of the target pixel and its surrounding pixels can also be a 5*5 area, then according to the pixel values of each surrounding pixel in the 5*5 area and the first filter weight, determine the first filter corresponding to the target pixel Post pixel value.
  • the application does not limit the size of the area formed by the target pixel and the corresponding surrounding pixels.
  • In one example, the target pixel participates in the calculation of its first filtered pixel value, its weight value is set to 1, and the area formed by the target pixel and its surrounding pixels is a 3*3 area. For each surrounding pixel in the 3*3 area, the product of its pixel value and its first filtering weight is computed, and for the target pixel the product of its own pixel value and its weight is computed. The average of these nine products is taken as the first filtered pixel value of the target pixel.
  • In another example, the target pixel does not participate in the calculation of its first filtered pixel value, and the area formed by the target pixel and its surrounding pixels is a 5*5 area. For each surrounding pixel in the 5*5 area, the product of its pixel value and its first filtering weight is computed, and the average of these 24 products is taken as the first filtered pixel value of the target pixel.
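  • The embodiment deliberately leaves the exact weight formula open. The sketch below is one plausible reading, not a prescribed implementation: it uses a bilateral-style weight that decreases with the squared pixel difference and scales with the neighbor's skin color probability, over a 3*3 window with the center pixel included at weight 1, and it normalizes by the weight sum (a standard choice; the example above instead averages the products directly).

```python
import numpy as np

def first_filter(img, skin_prob, face_mask, sigma=20.0):
    """Edge-preserving filtering restricted to the face area.
    For brevity this sketch is single-channel: img and skin_prob are
    (H, W) float arrays and face_mask is an (H, W) boolean array."""
    H, W = img.shape
    out = img.copy()
    for y in range(1, H - 1):
        for x in range(1, W - 1):
            if not face_mask[y, x]:
                continue                    # only the face area is filtered
            acc, wsum = img[y, x], 1.0      # center pixel at weight 1
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    if dy == 0 and dx == 0:
                        continue
                    v = img[y + dy, x + dx]
                    d2 = (img[y, x] - v) ** 2
                    w = skin_prob[y + dy, x + dx] * np.exp(-d2 / (2.0 * sigma ** 2))
                    acc += w * v
                    wsum += w
            out[y, x] = acc / wsum          # weighted average over the window
    return out
```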
  • Step 434 Obtain a first filtered image according to the first filtered pixel values corresponding to each pixel in the face area.
  • the first filtered pixel values corresponding to all the pixels in the face area are calculated according to step 433 to obtain the first filtered image.
  • the first filtered pixel values respectively corresponding to all the pixels in the face area are obtained by using the same calculation method.
  • Through the above steps, the first filtered image is obtained: the face area in the original image is filtered with the first filtering weights, and the pixel values of the face area pixels in the original image are reconstructed, making the determination of blemish locations in the following steps more accurate.
  • Step 440 based on the first filtered image and the original image, generate a blemish detection result map.
  • The first filtered image and the original image are processed to obtain the blemish detection result map, where the blemish detection result map is used to represent the positions of blemishes in the face area; blemishes refer to the above-mentioned acne, spots, scars, and other areas with large color differences.
  • step 440 includes steps 441-443:
  • Step 441 Obtain a first difference image based on the difference between the first filtered image and the pixel value of the corresponding position in the original image.
  • The pixel value of the corresponding pixel in the original image is subtracted from the pixel value of each pixel in the first filtered image to obtain the first difference image.
  • the first difference image displays the difference between the pixel values of corresponding positions in the first filtered image and the original image.
  • For pixels outside the face area, the first filtering process does not change the pixel values, so the difference value in the first difference image is 0 there; non-zero difference values appear only in the face area.
  • Step 442: set the pixel value of each first pixel in the first difference image to a first value to obtain the processed first difference image; wherein a first pixel is a pixel whose pixel value in the first difference image meets a first condition.
  • The first difference image contains the difference value of each pixel, that is, the difference between the pixel values at corresponding positions in the first filtered image and the original image. This difference represents the relative brightness of the pixels at corresponding positions in the two images: when the difference is greater than 0, the pixel in the first filtered image is brighter than the corresponding pixel in the original image; when the difference equals 0, the brightness of the two pixels is the same; when the difference is less than 0, the pixel in the first filtered image is darker than the corresponding pixel in the original image.
  • Step 443 Perform difference truncation and smooth remapping according to the pixel values of each pixel in the processed first difference image to generate a blemish detection result map.
  • difference truncation and smooth remapping are performed to generate a blemish detection result map.
  • Difference truncation refers to setting a maximum value and a minimum value (both of which may be set manually): difference values in the first difference image greater than the maximum are set to the maximum, difference values smaller than the minimum are set to the minimum, and difference values between the minimum and maximum remain unchanged. After difference truncation, the difference value of every pixel lies within the range between the minimum and maximum values.
  • Smooth remapping maps the truncated difference value of each pixel smoothly (or proportionally) onto a set interval (which may also be set manually), for example [0, 1].
  • For example, suppose the difference values of three pixels after difference truncation, with a minimum of 50 and a maximum of 250, are 50, 100, and 250. Mapping these onto the interval [0, 1] by smooth remapping, the pixel with difference 50 maps to 0, the pixel with difference 100 maps to 0.25, and the pixel with difference 250 maps to 1. Thus, for the three pixels with difference values 0, 100, and 300 in the first difference image, the values obtained after difference truncation and smooth remapping are 0, 0.25, and 1.
  • The image composed of the difference values of each pixel after the above difference truncation and smooth remapping is called the blemish detection result map.
  • The positions of blemishes in the blemish detection result map are the positions of the pixels whose difference values are greater than 0.
  • In the embodiment of the present application, the blemish detection result map generated through difference truncation and smooth remapping clearly reflects the positions of blemishes in the face area, making the blemish removal in the following steps, that is, the processing that produces the first-stage result map, more accurate.
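  • A minimal sketch of difference truncation followed by linear (smooth) remapping onto [0, 1], using the bounds 50 and 250 from the worked example above as the assumed truncation limits:

```python
import numpy as np

def truncate_and_remap(diff, lo=50.0, hi=250.0):
    """Clip every difference value into [lo, hi], then map that interval
    linearly onto [0, 1], so lo -> 0 and hi -> 1 (e.g. 100 -> 0.25)."""
    clipped = np.clip(diff, lo, hi)
    return (clipped - lo) / (hi - lo)

# Reproduces the worked example: 0, 100 and 300 map to 0, 0.25 and 1.
print(truncate_and_remap(np.array([0.0, 100.0, 300.0])))  # [0.   0.25 1.  ]
```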
  • Step 450 based on the blemish detection result map and the second detection result map, the original image and the first filtered image are mixed to generate a first-stage result map.
  • the original image and the first filtered image are mixed to generate the first stage result map.
  • step 450 includes steps 451-452:
  • Step 451 Determine a first weight matrix based on the product of the pixel difference in the blemish detection result map and the face probability at the corresponding position in the second detection result map.
  • Wherein, every weight value in the first weight matrix lies between 0 and 1.
  • Step 452 Mix the original image and the first filtered image based on the first weight matrix to generate a first-stage result image.
  • the original image and the first-filtered image are mixed to generate the first-stage result map.
  • In some embodiments, the first-stage result map = first filtered image × first weight matrix + original image × (1 − first weight matrix).
  • For example, when the weight value of a pixel in the first weight matrix is 0, the pixel value of that pixel in the first-stage result map is the corresponding pixel value of the original image. When a pixel lies in the face area and its value in the first weight matrix is M, with M greater than 0 and less than or equal to 1, the pixel value of that pixel in the first-stage result map is: first filtered pixel value × M + original pixel value × (1 − M); in particular, when M is 1, the pixel value of that pixel in the first-stage result map is its pixel value in the first filtered image.
  • Through the above steps, the blemish areas in the original image that differ in color from the overall skin color are weakened and reduced, so that the preliminarily beautified image no longer shows differences such as acne, spots, and scars.
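  • A sketch of the per-pixel mix of step 452, with the first weight matrix formed as the element-wise product of the blemish detection result map and the face probability map (both assumed to hold values in [0, 1]):

```python
def first_stage_blend(original, filtered, blemish_map, face_prob):
    """stage1 = filtered * W + original * (1 - W), where the first weight
    matrix W = blemish_map * face_prob; all inputs are NumPy float arrays."""
    w = blemish_map * face_prob        # per-pixel weights in [0, 1]
    return filtered * w + original * (1.0 - w)
```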
  • Step 460: based on the first-stage result map, generate a blur result map and an edge result map.
  • step 460 includes steps 461-462:
  • Step 461: perform a second filtering process on the first-stage result map to obtain a blur result map.
  • A second filtering process is performed on the first-stage result map to obtain the blur result map.
  • Wherein, the selected pixel is any pixel in the first-stage result map, and the second filtering process adopts mean filtering, that is, the weights of the selected pixel and each of its surrounding pixels are set to the same value, for example all set to 1.
  • In some embodiments, the selected pixel participates in the calculation of its second filtered pixel value, and the weight value of the selected pixel is set to 1.
  • the selected pixel does not participate in the calculation process of the second filtered pixel value corresponding to the selected pixel.
  • the application does not limit whether the selected pixel participates in the calculation process of the second filtered pixel value of the selected pixel.
  • the number of pixels around the selected pixel can be arbitrary.
  • In some embodiments, the area composed of the selected pixel and its surrounding pixels can be a 3*3 area; the second filtered pixel value corresponding to the selected pixel is then determined according to the pixel value of each surrounding pixel in the 3*3 area and the same weight.
  • the area formed by the selected pixel and its surrounding pixels may be a 5*5 area, then according to the pixel values of each surrounding pixel in the 5*5 area and the same weight, determine the second filter corresponding to the selected pixel Post pixel value.
  • the application does not limit the size of the area formed by the selected pixel and the corresponding surrounding pixels.
  • In one example, the selected pixel participates in the calculation of its second filtered pixel value, its weight value is set to 1, and the area formed by the selected pixel and its surrounding pixels is a 3*3 area. For each surrounding pixel in the 3*3 area, the product of its pixel value and the common weight is computed, and for the selected pixel the product of its own pixel value and its weight is computed. The average of these nine products is taken as the second filtered pixel value of the selected pixel.
  • In another example, the selected pixel does not participate in the calculation of its second filtered pixel value, and the area formed by the selected pixel and its surrounding pixels is a 5*5 area. For each surrounding pixel in the 5*5 area, the product of its pixel value and the common weight is computed, and the average of these 24 products is taken as the second filtered pixel value of the selected pixel.
  • Step 462: obtain an edge result map based on the difference between the pixel values at corresponding positions in the first-stage result map and the blur result map and on a second value; wherein the second value is a preset value.
  • mean filtering is performed on the edge result map obtained above to obtain a smoother face area edge.
  • In the embodiment of the present application, the blur result map and the edge result map are generated based on the first-stage result map. If the original image were used to generate the edge result map, part of the spot areas would be treated as edges, so the final result could not completely remove the spots.
  • Generating the edge result map from the first-stage result map effectively prevents spot areas from being treated as edges.
  • The edge information in the image is well reflected by the edge result map, so that the beautification result achieves the effect of removing freckles and acne and improving skin color uniformity without losing the edges of the facial features.
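  • A sketch of steps 461-462: a 3*3 mean (box) filter over the first-stage result map, followed by an edge map taken as the difference between the first-stage result and its blur. The 3*3 window, edge-replicating padding, and the absolute value are assumptions; the text allows other window sizes and leaves the use of the preset second value open.

```python
import numpy as np

def box_blur_3x3(img):
    """Mean filtering: each pixel becomes the average of its 3x3 window
    (all weights equal), computed by summing shifted copies of the image."""
    padded = np.pad(img, 1, mode="edge")
    acc = np.zeros(img.shape, dtype=np.float32)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            acc += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return acc / 9.0

def edge_map(stage1):
    """Edge result map: difference between the first-stage result map and
    its blur; large values mark edges."""
    return np.abs(stage1 - box_blur_3x3(stage1))
```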
  • Step 470: generate an uneven skin color result map according to the blur result map and the first-stage result map.
  • the Uneven Skin Tone result map is used to represent areas of the face where there is uneven skin tone.
  • step 470 includes steps 471-472:
  • Step 471: obtain an initial uneven skin color result map based on the difference between the pixel values at corresponding positions in the first-stage result map and the blur result map.
  • The pixel value of the corresponding pixel in the blur result map is subtracted from the pixel value of each pixel in the first-stage result map to obtain the difference value of each pixel; each difference value is then added to a third value to obtain the uneven skin color value, yielding the initial uneven skin color result map.
  • the third value is any preset value, and in actual use, the third value may be set to 0.5.
  • the uneven skin color value is used to indicate whether the pixel value of the pixel in the result image of the first stage is dark, bright or uniform.
  • Uniform pixel values for a pixel mean that the pixel values for that pixel are neither too dark nor too bright.
  • Step 472 performing a third filtering process on the initial uneven skin color result map to obtain an uneven skin color result map.
  • The pixels in the face area of the initial uneven skin color result map are subjected to the third filtering process to obtain the uneven skin color result map, where the third filtering process adopts a 3*3 Gaussian filter: the closer a pixel is to the center of the filter, the larger its weight, and the farther from the center point, the lower the weight.
  • In some embodiments, the following steps can also be performed directly on the initial uneven skin color result map.
  • Through the above steps, the initial uneven skin color result map is obtained, which reflects the light and dark areas of the face and prepares for the brightness processing of the face area in the following steps.
  • Gaussian filtering is used to further process the initial uneven skin color result map to obtain the uneven skin color result map, which better preserves the skin texture details.
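  • A sketch of step 470: the initial uneven skin color result map is the signed difference between the first-stage result map and the blur result map shifted by the third value (0.5 here, as suggested above), then smoothed with a standard 3*3 Gaussian kernel whose center weight is largest:

```python
import numpy as np

# 3x3 Gaussian kernel: the weight is largest at the center and falls off
# with distance from the center point, as described above.
GAUSS_3X3 = np.array([[1, 2, 1],
                      [2, 4, 2],
                      [1, 2, 1]], dtype=np.float32) / 16.0

def uneven_skin_map(stage1, blurred, third_value=0.5):
    """Values > 0.5 mark locally bright pixels, < 0.5 locally dark ones,
    and 0.5 marks uniform skin; inputs are (H, W) float arrays."""
    initial = stage1 - blurred + third_value
    padded = np.pad(initial, 1, mode="edge")
    out = np.zeros(initial.shape, dtype=np.float32)
    for dy in range(3):
        for dx in range(3):
            out += GAUSS_3X3[dy, dx] * padded[dy:dy + initial.shape[0],
                                              dx:dx + initial.shape[1]]
    return out
```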
  • Step 480: according to the uneven skin color result map, process the first-stage result map with the inverse contrast enhancement method to obtain the second-stage result map; wherein the inverse contrast enhancement method is used to reduce the brightness differences between pixels.
  • the inverse contrast enhancement method is the opposite method of the contrast enhancement method.
  • With a middle value set, the contrast enhancement method moves values away from the middle value, expanding them proportionally toward both sides of the middle value; the inverse contrast enhancement method, conversely, moves values closer to the middle value, shrinking them proportionally toward it.
  • In the embodiment of the present application, the inverse contrast enhancement method is used to reduce the brightness differences between pixels, so that the brightness tends toward the middle value and the brightness of the face area becomes similar.
  • step 480 includes steps 481-482:
  • Step 481: determine a first pixel set and a second pixel set according to the uneven skin color result map; wherein the first pixel set includes the pixels whose uneven skin color values in the uneven skin color result map belong to a first value range, the second pixel set includes the pixels whose uneven skin color values belong to a second value range, and the uneven skin color values of the pixels in the first pixel set are greater than those of the pixels in the second pixel set.
  • For example, when the uneven skin color value is greater than 0.5, the pixel value of the pixel in the first-stage result map is greater than the pixel value of the corresponding pixel in the blur result map, that is, the pixel lies in a bright part of an uneven skin color area, and the pixel value of the corresponding pixel in the first-stage result map needs to be decreased. When the uneven skin color value is less than 0.5, the pixel value of the pixel in the first-stage result map is smaller than the pixel value of the corresponding pixel in the blur result map, that is, the pixel lies in a dark part of an uneven skin color area, and the pixel value of the corresponding pixel in the first-stage result map needs to be increased. When the uneven skin color value equals 0.5, the skin color area there is uniform, and the corresponding pixel value in the first-stage result map is the middle value used by the inverse contrast enhancement method; the area where such pixels are located is called a uniform area and is not adjusted, while the areas where the other pixels are located are called uneven areas.
  • Step 482 Decrease the pixel values of the pixels belonging to the first pixel set in the result map of the first stage, and increase the pixel values of the pixels belonging to the second pixel set in the result map of the first stage to obtain the result map of the second stage .
  • In some embodiments, when the uneven skin color value is less than 0.5, the corresponding pixel in the first-stage result map is in a locally dark area, and its pixel value needs to be increased; when the uneven skin color value is greater than 0.5, the corresponding pixel is in a locally bright area, and its pixel value needs to be decreased.
  • In some embodiments, the pixel values in the first-stage result map are increased or decreased to different degrees: the greater the difference between a pixel's uneven skin color value and the middle value, the more its pixel value in the first-stage result map is increased or decreased; the smaller the difference, the less its pixel value is changed.
  • Wherein, the uneven skin color value is used to represent the unevenness of the face area in the first-stage result map.
  • The uneven skin color values of the pixels in the uneven skin color result map describe the skin color uniformity of the first-stage result map. According to these values, the first-stage result map is divided into uniform areas, brighter parts of locally uneven areas, and darker parts of locally uneven areas, and the pixel values of the pixels in these three kinds of areas are treated differently as described above.
  • For pixels in uniform areas, the pixel value in the second-stage result map is the same as in the first-stage result map.
  • For the other pixels, the inverse contrast enhancement method is applied to the first-stage result map.
  • When the differences between the uneven skin color values in the uneven skin color result map and the middle value are the same, the degree of unevenness of the corresponding areas in the first-stage result map is the same. For example, with a middle value of 0.5, two uneven skin color values of 0.4 and 0.6 both differ from 0.5 by 0.1, so the corresponding areas in the first-stage result map have the same degree of unevenness.
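  • The text gives no closed-form rule for the inverse contrast enhancement, only that pixel values move toward the middle value by an amount that grows with the distance of the uneven skin color value from that middle value. A minimal linear sketch under that assumption:

```python
def inverse_contrast_enhance(stage1, uneven, middle=0.5, strength=1.0):
    """Pull pixel values toward uniformity: where uneven > middle the pixel
    is locally bright and is darkened, where uneven < middle it is locally
    dark and is brightened, and at uneven == middle it is left unchanged.
    The linear form and the `strength` factor are assumptions."""
    return stage1 - strength * (uneven - middle)
```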
  • Step 490 based on the original image and the second-stage result image, a final result image is generated.
  • Step 490 has been introduced in the previous embodiment and will not be repeated here.
  • step 490 includes steps 491-494:
  • Step 491 Obtain a second difference image based on the difference between the pixel values at the corresponding positions in the second-stage result image and the original image.
  • Step 492 Perform difference truncation and smooth remapping according to the pixel values of each pixel in the second difference image to generate an intermediate result map.
  • The pixel value range of each pixel in the second difference image is truncated and compressed through difference truncation and smooth remapping, so that the pixel differences in the resulting intermediate result map are compressed into the range 0 to 1.
  • Step 493 Generate a second weight matrix based on the first detection result map, the second detection result map, the edge result map corresponding to the original image, the intermediate result map, and the beautification intensity value.
  • Wherein, the beautification intensity value is used to adjust the respective proportions of the original image and the second-stage result map when they are mixed.
  • The second weight matrix is obtained by multiplying, pixel by pixel, the skin color probability from the first detection result map, the face probability from the second detection result map, the difference value from the edge result map corresponding to the original image, the difference value from the intermediate result map, and the beautification intensity value. The first four factors are obtained by the methods described above; the beautification intensity value is adjusted by the user, and the magnitude of the second weight matrix is determined according to the beautification intensity value set by the user.
  • Step 494 based on the second weight matrix, the original image and the second-stage result image are mixed to generate a final result image.
  • Based on the second weight matrix, the original image and the second-stage result map are mixed to generate the final result map.
  • For example, when the second weight matrix value of a pixel is 0, the pixel value of that pixel in the final result map is the corresponding pixel value of the original image.
  • When the user enables beautification and sets a corresponding beautification intensity value, and the calculated second weight matrix value is N, with N greater than 0 and less than or equal to 1, the pixel value of the pixel in the final result map is: second-stage result map pixel value × N + original image pixel value × (1 − N).
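  • A sketch of steps 493-494, forming the second weight matrix as the element-wise product of the five factors named above (all assumed normalized to [0, 1]). The `1 - edge` term assumes the edge result map is large at facial feature edges, so that edges keep the original pixels, consistent with the stated goal; flip it if the edge map is encoded the other way around.

```python
def final_blend(original, stage2, skin_prob, face_prob, edge, inter, intensity):
    """final = stage2 * W + original * (1 - W), where W multiplies the
    skin color probability, face probability, inverted edge map, the
    intermediate result map, and the user-set beautification intensity."""
    w = skin_prob * face_prob * (1.0 - edge) * inter * intensity
    return stage2 * w + original * (1.0 - w)
```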
  • In the embodiment of the present application, the blur result map is first generated from the first-stage result map, and the edge result map is then generated by processing the first-stage result map and the blur result map.
  • the edge information in the image can be well reflected through the edge result map, so that the beautification result can achieve the effect of removing freckles and acne and improving the uniformity of skin color without losing the edge of the facial features.
  • The initial uneven skin color result map is filtered by Gaussian filtering to obtain the uneven skin color result map.
  • Processing based on the uneven skin color result map obtained after Gaussian filtering better improves the uniformity of skin tone and, at the same time, better preserves the details of the skin texture.
  • setting the beautification intensity value enables the user to control the required degree of beautification by adjusting the beautification intensity value.
  • FIG. 5 shows a schematic diagram of overall steps of an image processing method provided by an embodiment of the present application.
  • First, edge-preserving skin color filtering is performed on the original image to obtain the first filtered image; the original image is subtracted from the first filtered image, and difference truncation and smooth remapping yield the large blemish area detection map, that is, the blemish detection result map.
  • Then the mixing weight, that is, the first weight matrix, is obtained, and the original image and the first filtered image are mixed according to the first weight matrix to obtain the freckle and acne removal result map, that is, the first-stage result map.
  • Next, the blur result map is obtained by mean-filtering the freckle and acne removal result map. Based on the difference between pixel values at corresponding positions in the first-stage result map and the blur result map, the edge detection result m3, i.e., the edge result map, is obtained.
  • Then, based on the difference between pixel values at corresponding positions in the mean-filtered blur result map and the freckle and acne removal result map, the uneven-area detection image, i.e., the initial uneven skin color result map, is obtained; this initial map can be Gaussian-filtered to obtain the uneven skin color result map.
  • Based on the uneven-area detection image, inverse contrast enhancement is applied to the freckle and acne removal result map to obtain the uniform skin color result map, i.e., the second-stage result map. Applying difference truncation and smooth remapping to the difference between the uniform skin color result map and the original image yields the map of large facial blemishes and lighter uneven areas m4, i.e., the intermediate result map.
  • Finally, m1, m2, m3 and m4 above are multiplied together with the beautification intensity value set by the user to obtain the blending weight, i.e., the second weight matrix, and the original image and the uniform skin color result map are blended according to the second weight matrix to obtain the face beautification result, i.e., the final result map.
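  • Stitching the stages of FIG. 5 together, the following self-contained sketch runs under simplifying assumptions: grayscale float inputs in [0, 1], a plain box blur standing in for the edge-preserving skin color filter, illustrative truncation bounds throughout, and a linear unevenness degree d (the exact mapping from the unevenness value to d is an assumption, not spelled out here):

```python
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def remap(diff, lo, hi):
    """Difference truncation to [lo, hi] followed by remapping to [0, 1]."""
    return (np.clip(diff, lo, hi) - lo) / (hi - lo)

def beautify(original, m1_face, m2_skin, strength=1.0):
    # Stage 1: blemish removal
    filtered = uniform_filter(original, size=5)         # stand-in for the edge-preserving filter
    blemish = remap(filtered - original, 0.0, 0.2)      # blemish detection result map
    w1 = blemish * m1_face                              # first weight matrix
    stage1 = filtered * w1 + original * (1.0 - w1)      # freckle/acne removal result map

    # Stage 2: skin tone evening
    blur = uniform_filter(stage1, size=3)               # blur result map
    m3 = remap(np.abs(stage1 - blur), 0.0, 0.05)        # edge result map m3
    uneven = gaussian_filter(stage1 - blur + 0.5, 1.0)  # uneven skin color result map
    d = np.abs(uneven - 0.5) / 0.5                      # assumed unevenness degree (0 when uniform)
    stage2 = np.where(uneven > 0.5,
                      stage1 - (1.0 - stage1) * d,      # locally bright: darken
                      stage1 + stage1 * d)              # locally dark: brighten (d = 0 leaves it unchanged)

    # Final blend
    m4 = remap(np.abs(stage2 - original), 0.0, 0.1)     # intermediate result map m4
    w2 = m1_face * m2_skin * m3 * m4 * strength         # second weight matrix
    return stage2 * w2 + original * (1.0 - w2)          # face beautification result
```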
  • FIG. 6 shows a block diagram of an image processing apparatus provided by an embodiment of the present application.
  • The apparatus has the function of implementing the above image processing method; the function may be implemented by hardware, or by hardware executing corresponding software.
  • The apparatus may be a computer device, or may be disposed in a computer device.
  • The apparatus 600 may include: an original image acquisition module 610, an original image detection module 620, a first processing module 630, a second processing module 640 and a result generation module 650.
  • The original image acquisition module 610 is configured to acquire the original image to be processed.
  • The original image detection module 620 is configured to perform skin color detection and face detection on the original image to obtain a first detection result map and a second detection result map; wherein the first detection result map is used to characterize the skin color region in the original image, and the second detection result map is used to characterize the face region in the original image.
  • The first processing module 630 is configured to perform first-stage processing on the original image based on the first detection result map and the second detection result map to obtain a first-stage result map; wherein the first-stage processing is used to remove regions in the face region that differ from the overall skin tone.
  • The second processing module 640 is configured to perform second-stage processing on the first-stage result map to obtain a second-stage result map; wherein the second-stage processing is used to improve the uniformity of skin color at different positions in the face region.
  • The result generation module 650 is configured to generate a final result map based on the original image and the second-stage result map.
  • In some embodiments, the first processing module 630 includes: a first filtering unit 631, a blemish result generation unit 632 and a first result generation unit 633.
  • The first filtering unit 631 is configured to perform first filtering processing on the original image based on the first detection result map and the second detection result map to obtain a first filtered image; wherein the first filtering processing is used to filter the face region while retaining edges in the original image.
  • The blemish result generation unit 632 is configured to generate a blemish detection result map based on the first filtered image and the original image; wherein the blemish detection result map is used to characterize blemish positions in the face region.
  • The first result generation unit 633 is configured to blend the original image and the first filtered image based on the blemish detection result map and the second detection result map to generate the first-stage result map.
  • The first filtering unit 631 is configured to:
  • determine the face region in the original image based on the second detection result map;
  • for a target pixel in the face region, determine the first filter weight corresponding to each surrounding pixel according to the pixel value difference between the target pixel and that surrounding pixel and the skin color probability corresponding to that surrounding pixel, wherein the skin color probability is obtained based on the first detection result map;
  • determine the first filtered pixel value corresponding to the target pixel according to the pixel values and first filter weights corresponding to the surrounding pixels;
  • obtain the first filtered image according to the first filtered pixel values corresponding to the pixels in the face region.
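  • A literal, unoptimized sketch of this computation over a 3x3 window; it uses the absolute-difference form of the pixel value difference and lets the target pixel participate with weight 1, both of which the text names as options rather than requirements:

```python
import numpy as np

def first_filter(image, skin_prob, face_mask):
    """Edge-preserving skin color filtering as described above.

    image:     grayscale float image
    skin_prob: per-pixel skin color probabilities (first detection result map)
    face_mask: boolean mask of the face region (from the second detection result map)
    """
    h, w = image.shape
    out = image.astype(float).copy()
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if not face_mask[y, x]:
                continue                      # only face-region pixels are filtered
            centre = image[y, x]
            acc = centre * 1.0                # target pixel participates with weight 1
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    if dy == 0 and dx == 0:
                        continue
                    v = image[y + dy, x + dx]
                    wgt = skin_prob[y + dy, x + dx] * abs(v - centre)  # first filter weight
                    acc += v * wgt
            out[y, x] = acc / 9.0             # mean of the nine value-weight products
    return out
```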
  • The blemish result generation unit 632 is configured to:
  • obtain a first difference image based on the difference between pixel values at corresponding positions in the first filtered image and the original image;
  • set the pixel value of each first pixel in the first difference image to a first value to obtain a processed first difference image, wherein a first pixel is a pixel in the first difference image whose pixel value meets a first condition;
  • perform difference truncation and smooth remapping according to the pixel values of the processed first difference image to generate the blemish detection result map.
  • The first result generation unit 633 is configured to:
  • determine a first weight matrix based on the product of pixel values at corresponding positions in the blemish detection result map and the second detection result map;
  • blend the original image and the first filtered image based on the first weight matrix to generate the first-stage result map.
  • In some embodiments, the second processing module 640 includes: a first result using unit 641, a skin color result generation unit 642 and a second result generation unit 643.
  • The first result using unit 641 is configured to generate a blur result map and an edge result map based on the first-stage result map.
  • The skin color result generation unit 642 is configured to generate an uneven skin color result map according to the blur result map and the first-stage result map.
  • The second result generation unit 643 is configured to process the first-stage result map using inverse contrast enhancement according to the uneven skin color result map, to obtain the second-stage result map; wherein inverse contrast enhancement is used to narrow the brightness differences between pixels.
  • The first result using unit 641 is configured to:
  • perform second filtering processing on the first-stage result map to obtain the blur result map;
  • obtain the edge result map based on the difference between pixel values at corresponding positions in the first-stage result map and the blur result map.
  • The skin color result generation unit 642 is configured to:
  • obtain an initial uneven skin color result map based on the difference between pixel values at corresponding positions in the first-stage result map and the blur result map;
  • perform third filtering processing on the initial uneven skin color result map to obtain the uneven skin color result map.
  • The second result generation unit 643 is configured to:
  • determine a first pixel set and a second pixel set according to the uneven skin color result map; wherein the first pixel set includes pixels whose unevenness values in the uneven skin color result map fall within a first numerical range, the second pixel set includes pixels whose unevenness values fall within a second numerical range, and the unevenness values of pixels in the first pixel set are greater than those of pixels in the second pixel set;
  • decrease the pixel values of the pixels in the first-stage result map that belong to the first pixel set, and increase the pixel values of the pixels that belong to the second pixel set, to obtain the second-stage result map.
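  • A sketch of this inverse contrast enhancement; the update rules C = A - (1 - A) * D for locally bright pixels and C = A + A * D for locally dark pixels follow the formulas given later in this document, while the linear form D = |B - 0.5| / 0.5 for the unevenness degree is our assumption:

```python
import numpy as np

def inverse_contrast_enhance(first_stage, uneven):
    """Pull pixel brightness toward the mid value; uneven == 0.5 means locally uniform."""
    a, b = first_stage, uneven
    d = np.abs(b - 0.5) / 0.5                 # assumed unevenness degree, 0 when uniform
    bright = b > 0.5                          # first pixel set: locally bright
    dark = b < 0.5                            # second pixel set: locally dark
    c = a.astype(float).copy()
    c[bright] = a[bright] - (1.0 - a[bright]) * d[bright]   # decrease pixel values
    c[dark] = a[dark] + a[dark] * d[dark]                   # increase pixel values
    return c
```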
  • The result generation module 650 is configured to:
  • obtain a second difference image based on the difference between pixel values at corresponding positions in the second-stage result map and the original image;
  • perform difference truncation and smooth remapping according to the pixel values of the second difference image to generate an intermediate result map;
  • generate a second weight matrix based on the first detection result map, the second detection result map, the edge result map corresponding to the original image, the intermediate result map and the beautification intensity value, wherein the beautification intensity value is used to adjust the respective weights of the original image and the second-stage result map during blending;
  • blend the original image and the second-stage result map based on the second weight matrix to generate the final result map.
  • This application performs skin color detection and face detection on the original image to select the skin color region and face region in the original image. The first-stage processing weakens the regions of the face region that differ significantly from the overall skin tone, narrowing the difference between the skin color of those regions and the overall skin color of the face region, and the second-stage processing improves the uniformity of skin color within the face region to obtain the final image processing result.
  • On one hand, by weakening blemish regions in the first stage and evening out skin color in the second stage, the solution strengthens the blemish removal capability while better preserving skin texture.
  • On the other hand, the first detection result map and second detection result map obtained from skin color detection and face detection determine the facial skin color region, so image beautification focuses on that region and is not applied to the background region or the edges of the facial features, reducing degradation of the background and facial feature edges during beautification.
  • Furthermore, this solution completes image beautification with only simple calculations and does not need a neural network for heavy computation, which streamlines the beautification pipeline and shortens the required processing time, meeting the needs of application scenarios with high real-time requirements; for example, the above beautification process can run while shooting, with high real-time performance. Therefore, the present application provides an image beautification solution that combines blemish removal capability, skin texture preservation capability and high real-time performance.
  • It should be noted that, when the apparatus provided by the above embodiments implements its functions, the division into the above functional modules is only an example; in practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above.
  • In addition, the apparatus provided by the above embodiments and the method embodiments belong to the same concept; for the specific implementation process, refer to the method embodiments, which will not be repeated here.
  • In an exemplary embodiment, a computer device is also provided. The computer device comprises a processor and a memory in which a computer program is stored. The computer device may be the terminal 10 or the server 20 described above, and the computer program is loaded and executed by the processor to implement the above image processing method.
  • In an exemplary embodiment, a computer-readable storage medium is also provided, in which a computer program is stored; the computer program is loaded and executed by a processor to implement the above image processing method.
  • In an exemplary embodiment, a computer program product is also provided, comprising computer instructions stored in a computer-readable storage medium; a processor reads the computer instructions from the computer-readable storage medium and executes them to implement the above image processing method.
  • It should be understood that "plurality" mentioned herein refers to two or more.
  • "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate: A exists alone, both A and B exist, or B exists alone.
  • The character "/" generally indicates an "or" relationship between the objects before and after it.
  • The step numbering described herein only shows, by way of example, one possible execution order of the steps; in some other embodiments, the steps may be executed out of numerical order, for example two differently numbered steps may be executed simultaneously, or two differently numbered steps may be executed in the reverse of the illustrated order, which is not limited in the embodiments of the present application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

An image processing method, apparatus, device, storage medium and program product, belonging to the field of computer technology. The method comprises: acquiring an original image to be processed (210); performing skin color detection and face detection on the original image to obtain a first detection result map and a second detection result map (220); performing first-stage processing on the original image based on the first detection result map and the second detection result map to obtain a first-stage result map (230); performing second-stage processing on the first-stage result map to obtain a second-stage result map (240); and generating a final result map based on the original image and the second-stage result map (250). The present application provides an image beautification solution that combines blemish removal capability, skin texture preservation capability and high real-time performance.

Description

图像处理方法、装置、设备、存储介质及程序产品
本申请要求于2021年12月9日提交的申请号为202111501454.3、发明名称为“图像处理方法、装置、设备、存储介质及程序产品”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及计算机技术领域,特别涉及一种图像处理方法、装置、设备、存储介质及程序产品。
背景技术
在诸如拍照类应用、视频直播类应用中,都有对脸部图像进行处理的需求,也即我们通常所说的美颜需求。
传统的基于保边滤波算法的图像处理方法,着重于去除脸部图像中诸如痘痘、斑点等瑕疵区域,但同时也会丢失大量的面部细节,导致肤质纹理保留的效果不佳。在相关技术中,采用机器学习算法训练神经网络模型,通过该神经网络模型对原始的脸部图像进行处理,输出美颜后的脸部图像。这种基于神经网络模型的端到端处理方式,为了使得神经网络模型具有较好的去瑕疵能力以及肤质纹理保留能力,则需要设计较为复杂的神经网络结构,这又导致通过神经网络模型的处理过程耗时较长,无法满足一些实时性要求高的应用场景的需求。
发明内容
本申请实施例提供了一种图像处理方法、装置、设备、存储介质及程序产品。技术方案如下:
根据本申请实施例的一个方面,提供了一种图像处理方法,所述方法由计算机设备执行,所述方法包括:
获取待处理的原图;
对所述原图进行肤色检测和脸部检测,得到第一检测结果图和第二检测结果图;其中,所述第一检测结果图用于表征所述原图中的肤色区域,所述第二检测结果图用于表征所述原图中的脸部区域;
基于所述第一检测结果图和所述第二检测结果图,对所述原图进行第一阶段处理,得到第一阶段结果图;其中,所述第一阶段处理用于去除所述脸部区域中与整体肤色存在差异的区域;
对所述第一阶段结果图进行第二阶段处理,得到第二阶段结果图;其中,所述第二阶段处理用于提升所述脸部区域中不同位置肤色的均匀性;
基于所述原图和所述第二阶段结果图,生成最终结果图。
根据本申请实施例的一个方面,提供了一种图像处理装置,所述装置包括:
原图获取模块,配置为获取待处理的原图;
原图检测模块,配置为对所述原图进行肤色检测和脸部检测,得到第一检测结果图和第二检测结果图;其中,所述第一检测结果图用于表征所述原图中的肤色区域,所述第二检测结果图用于表征所述原图中的脸部区域;
第一处理模块,配置为基于所述第一检测结果图和所述第二检测结果图,对所述原图进行第一阶段处理,得到第一阶段结果图;其中,所述第一阶段处理用于去除所述脸部区域中与整体肤色存在差异的区域;
第二处理模块,配置为对所述第一阶段结果图进行第二阶段处理,得到第二阶段结果图;其中,所述第二阶段处理用于提升所述脸部区域中不同位置肤色的均匀性;
结果生成模块,配置为基于所述原图和所述第二阶段结果图,生成最终结果图。
根据本申请实施例的一个方面,提供了一种计算机设备,所述计算机设备包括处理器和存储器,所述存储器中存储有计算机程序,所述计算机程序由所述处理器加载并执行以实现上述图像处理方法。
根据本申请实施例的一个方面,提供了一种计算机可读存储介质,所述存储介质中存储有计算机程序,所述计算机程序由处理器加载并执行以实现上述图像处理方法。
根据本申请实施例的一个方面,提供了一种计算机程序产品,所述计算机程序产品包括计算机指令,所述计算机指令存储在计算机可读存储介质中,处理器从所述计算机可读存储介质读取并执行所述计算机指令,以实现上述图像处理方法。
本申请实施例提供的技术方案可以带来如下有益效果:
通过对原图进行肤色检测和脸部检测选出原图中的肤色区域和脸部区域,并根据第一阶段处理弱化了脸部区域中与整体肤色存在较大差异的区域,缩小了该区域的肤色与脸部区域整体肤色的差异,通过第二阶段处理提升脸部区域中肤色的均匀性,得到最终的图像处理结果。一方面,通过第一阶段处理弱化了脸部区域中与整体肤色存在较大差异的区域,缩小了该区域的肤色与脸部区域整体肤色的差异,通过第二阶段处理提升脸部区域中肤色的均匀性,加强了去瑕疵能力的同时更好地保留了肤质纹理;另一方面,基于肤色检测和脸部检测得到的第一检测结果图和第二检测结果图,从而确定原图中的脸部肤色区域,重点对上述得到的脸部肤色区域进行图像的美颜处理,不对背景区域和脸部五官边缘进行图像的美颜处理,减少了美颜处理时背景区域和脸部五官边缘的磨损;再一方面,本方案仅通过简单的计算即可完成对图像的美颜处理,不需要通过使用神经网络来进行大量的计算,优化了美颜处理的计算过程,从而缩短了所需耗时,能够满足一些实时性要求高的应用场景的需求,例如可以在进行拍摄的同时进行上述美颜优化过程,实时性高。因此,本申请提供了一种兼顾去瑕疵能力、肤质纹理保留能力以及高实时性的图像美颜方案。
附图说明
图1是本申请一个实施例提供的方案实施环境的示意图;
图2是本申请一个实施例提供的图像处理方法的流程图;
图3是本申请一个实施例提供的图像处理前后的示意图;
图4是本申请另一个实施例提供的图像处理方法的流程图;
图5是本申请一个实施例提供的图像处理方法的示意图;
图6是本申请一个实施例提供的图像处理装置的框图;
图7是本申请另一个实施例提供的图像处理装置的框图。
具体实施方式
为使本申请的目的、技术方案和优点更加清楚,下面将结合附图对本申请实施方式作进一步地详细描述。
请参考图1,其示出了本申请一个实施例提供的方案实施环境的示意图。该方案实施环境可以包括:终端10和服务器20。
终端10可以是诸如手机、平板电脑、PC(Personal Computer,个人计算机)、可穿戴设备、车载终端设备、VR(Virtual Reality,虚拟现实)设备和AR(Augmented Reality,增强现实)设备等电子设备,本申请对此不作限定。终端10中可以安装运行有目标应用程序的客户端。例如,该目标应用程序可以是图像处理应用程序或者其他具有图像处理功能的应用程序。可选地,目标应用程序是具有图像美颜功能的应用程序,如拍摄应用程序、视频直播应用程序、社交应用程序、视频编辑应用程序、短视频应用程序等,本申请对此不作限定。
以下对目标应用程序为具有图像美颜功能的应用程序为例进行介绍,目标应用程序的客户端具有对人脸图像进行美颜的功能,通过对人脸图像进行祛痘、祛疤、美白等处理,使人脸图像经过处理后能够满足用户的需求。
服务器20可以是独立的物理服务器,也可以是多个物理服务器构成的服务器集群或分布式系统,还可以是提供云计算服务的云服务器。服务器20可以是上述目标应用程序的后台服务器,用于为目标应用程序的客户端提供后台服务。
终端10和服务器20之间可以通过网络进行通信,例如该网络可以是有线网络或者无线网络。
本申请实施例提供的图像处理方法,各步骤的执行主体可以是图1所示的方案实施环境中的服务器20,也即由服务器20执行本申请方法实施例的全部步骤;也可以是终端10(如目标应用程序的客户端),也即由终端10执行本申请方法实施例的全部步骤;或者由服务器20和终端10交互配合执行,也即由服务器20执行本申请方法实施例的一部分步骤,并由终端10执行本申请方法实施例的另一部分步骤。
在一可能的应用场景中,上述目标应用程序运用于对已经创建好的图片进行处理,对图片中的人脸进行美颜处理。
在另一可能的应用场景中,上述目标应用程序运用于直播、短视频类应用程序,在直播或拍摄视频的同时,对画面中的人脸进行实时的美颜处理,这一场景对处理的实时性要求较高,需要终端能够在拍摄的同时对拍摄画面中的人脸进行美颜处理。
在下文方法实施例中,为了便于说明,仅以各步骤执行主体是终端10为例,但对此不构成限定。
请参考图2,其示出了本申请一个实施例提供的图像处理方法的流程图。该方法可以包括如下几个步骤(210~250)中的至少一个步骤:
步骤210,获取待处理的原图。
可选地,原图中包含目标对象的脸部区域。例如,该原图可以是一个人脸图像,或者是一个包含人脸的图像。示例性地,原图可以是脸部正面图、脸部侧面图、或者包含多个脸部区域的图像等,本申请对此不作限定。通过本申请实施例,对待处理的原图进行脸部美颜处理,得到脸部美颜处理完成的图像。其中,脸部美颜处理的强度可以根据用户的需要进行调节。
步骤220,对原图进行肤色检测和脸部检测,得到第一检测结果图和第二检测结果图。
第一检测结果图用于表征原图中的肤色区域,第二检测结果图用于表征原图中的脸部区域。
肤色区域是指原图中颜色值与肤色相匹配的像素所构成的区域。基于第一检测结果图可以区分出原图中的肤色区域和非肤色区域。例如,肤色区域可以包括脸部区域,还可以包括脖子、手臂等皮肤区域,或者原图中颜色值与肤色相匹配的物品或背景所在的区域等。
在一些实施例中,对原图中像素设置肤色概率,第一检测结果图包含原图中各个像素分别对应的肤色概率。可选地,某一像素对应的肤色概率用于表示该像素是否属于肤色区域。可选地,以像素的RGB颜色值对原图进行肤色检测,对原图中像素进行RGB颜色值的提取,然后设置肤色区域对应的RGB颜色值区间,将原图中像素的RGB颜色值处于肤色区域对应的RGB颜色值区间的所有像素,所组成的区域确定为肤色区域。例如,设置肤色区域对应的RGB颜色值区间中R值区间为(206-254),G值区间为(123-234),B值区间为(100-230),对原图中的每个像素进行检测,当一个像素的RGB颜色值为(240,180,150)时,则该像素属于肤色区域;当另一个像素的RGB颜色值为(200,180,150)时,则该像素不属于肤色区域。
在另一些实施例中,某一像素对应的肤色概率用于表示该像素与肤色区域的关联度(或者说接近度)。例如,肤色概率是根据像素的RGB颜色值和肤色区域对应的RGB颜色值区间 上限或下限的差距最小值的总和设置的,像素的RGB颜色值和肤色区域对应的RGB颜色区间上限或下限的差距最小值的总和越小,肤色概率也就越大。肤色概率最大为1,即该像素位于肤色区域中;肤色概率最小为0,即该像素的RGB颜色值中没有任意的RGB颜色值位于肤色区域对应的RGB颜色值区间中;当像素的RGB颜色值中存在任意的RGB颜色值位于肤色区域对应的RGB颜色值区间中,则该点的肤色概率处于0到1之间。
在一些实施例中,设置肤色区域对应的RGB颜色值区间中R值区间为(206-254),G值区间为(123-234),B值区间为(100-230),对原图中的每个像素进行检测,当像素E的RGB颜色值为(240,180,150)时,则像素E属于肤色区域,该像素E对应的肤色概率为1;当像素F的RGB颜色值为(200,60,255)时,则像素F不属于肤色区域,且像素F的RGB颜色值中没有任意的RGB颜色值位于肤色区域对应的RGB颜色值区间中,则该像素F对应的肤色概率为0;当像素G的RGB颜色值为(200,180,150)时,则像素G不属于肤色区域,同时计算得到像素G的RGB颜色值和肤色区域对应的RGB颜色值区间的上限或下限的差距最小值的总和为6,具体为206-200=6,则像素G对应的肤色概率位于0~1之间;当像素H的RGB颜色值为(200,100,150)时,则像素H不属于肤色区域,同时计算得到像素H的RGB颜色值和肤色区域对应的RGB颜色值区间的上限或下限的差距最小值的总和为29,具体为(206-200)+(123-100)=29,则像素H对应的肤色概率位于0~1之间,且像素H对应的肤色概率小于像素H对应的肤色概率。
可选地,通过神经网络定位的方式定位原图中的脸部关键点,然后通过脸部关键点确定脸部区域,得到原图中的脸部区域。其中,脸部关键点用于帮助脸部检测,使得到更为正确的脸部区域,例如,脸部关键点包括嘴角、眼角、眉角等脸部部件的关键点。根据上述得到的原图中嘴角、眼角、眉角等脸部部件的关键点的位置,确定原图中的脸部区域。可选地,脸部区域仅包含脸部的肤色区域,不包括眼睛、眉毛等非肤色区域。
在一些实施例中,对原图中像素设置脸部概率,第二检测结果图包含原图中各个像素的脸部概率。某一像素的脸部概率用于表示该像素是否属于脸部区域。在一些实施例中,如图3所示,在原图31中,斑点35位于脸部区域33中,则斑点35对应的像素的脸部概率为1,眼角34不属于脸部区域33中,则眼角34对应的像素的脸部概率为0。一个像素位于原图的背景中,该像素不属于脸部区域33中,则该像素的脸部概率为0。
在另一些实施例中,某一像素的脸部概率用于表示该像素与脸部区域的关联度。可选地,关联度是根据像素是否能用于帮助确定脸部区域设置的,像素越能用于帮助确定脸部区域,则该像素的脸部概率越大。脸部概率最大为1,即该像素位于脸部区域中;脸部概率最小为0,即该像素不属于脸部区域中,且该像素不能用于帮助确定脸部区域;当像素不属于脸部区域中,且该像素能用于帮助确定脸部区域时,则该点的脸部概率处于0到1之间。
在一些实施例中,如图3所示,在原图31中,斑点35位于脸部区域33中,则斑点35对应的像素的脸部概率为1,眼角34不属于脸部区域33中,但眼角34用于帮助确定脸部区域33,则眼角34对应的像素的脸部概率位于0到1之间。一个像素位于背景中,该像素不属于脸部区域33中,且该像素不能用于帮助确定脸部区域33,则该像素的脸部概率为0。
步骤230,基于第一检测结果图和第二检测结果图,对原图进行第一阶段处理,得到第一阶段结果图。
第一阶段处理用于去除脸部区域中与整体肤色存在差异的区域。
通过对原图进行第一阶段处理,基于第一检测结果图和第二检测结果图,得到第一阶段结果图,上述第一阶段结果图中弱化了脸部区域中与整体肤色存在较大差异的区域,缩小了该区域的肤色与脸部区域整体肤色的差异。上述第一阶段处理只对原图中的脸部区域进行处理,主要是处理脸部区域中与整体肤色存在较大差异的区域。例如,脸部区域中的痘痘、斑点、疤痕等存在较大颜色差异的区域。第一阶段处理用于弱化上述痘痘、斑点、疤痕等存在较大颜色差异的区域,缩小了该区域的肤色与脸部区域整体肤色的差异,使脸部区域整体的 颜色尽可能趋于一致。
在一些实施例中,如图3所示,第一阶段处理是用于处理原图31中面部区域33中的痘痘、斑点35、疤痕等存在颜色差异的区域。
步骤240,对第一阶段结果图进行第二阶段处理,得到第二阶段结果图。
第二阶段处理用于提升脸部区域中不同位置肤色的均匀性。
在第一阶段处理后得到的第一阶段结果图中,脸部区域弱化了存在较大颜色差异的区域,缩小了该区域的肤色与脸部区域整体肤色的差异,但该区域的肤色与脸部区域整体肤色还是存在一定的差异,从整个脸部区域的整体视觉效果来看,可能上述去斑的区域与其他未去斑的区域存在一些略微的颜色差异,造成了上述去斑的区域与其他未去斑的区域的不连续和不均匀,通过第二阶段处理,提升脸部区域中肤色的均匀性,得到第二阶段结果图。
步骤250,基于原图和第二阶段结果图,生成最终结果图。
根据上述步骤,得到了第二阶段结果图,第二阶段结果图是美颜强度值取最大时用户得到的结果图。其中,美颜强度值用于调整原图和第二阶段结果图在混合时各自所占的比重,该值可以通过用户自定义控制,根据用户选择的美颜强度值,对原图和第二阶段结果图设置相应的权重后进行融合,得到最终结果图。
在一些实施中,当用户设置的美颜强度值为最小门限值时,则设置原图的权重为1,第二阶段结果图的权重为0,则最终结果图为原图;当用户设置的美颜强度值介于最小门限值和最大门限值之间时,则根据该美颜强度值对原图和第二阶段结果图设置相应的权重后进行融合,得到最终结果图;当用户设置的美颜强度值为最大门限值时,设置原图的权重为0,第二阶段结果图的权重为1,则最终结果图为第二阶段结果图。
本申请通过对原图进行肤色检测和脸部检测选出原图中的肤色区域和脸部区域,并根据第一阶段处理弱化了脸部区域中与整体肤色存在较大差异的区域,缩小了该区域的肤色与脸部区域整体肤色的差异,通过第二阶段处理提升脸部区域中肤色的均匀性,得到最终的图像处理结果。一方面,通过第一阶段处理弱化了脸部区域中与整体肤色存在较大差异的区域,缩小了该区域的肤色与脸部区域整体肤色的差异,通过第二阶段处理提升脸部区域中肤色的均匀性,加强了去瑕疵能力的同时更好地保留了肤质纹理;另一方面,基于肤色检测和脸部检测得到的第一检测结果图和第二检测结果图,从而确定原图中的脸部肤色区域,重点对上述得到的脸部肤色区域进行图像的美颜处理,不对背景区域和脸部五官边缘进行图像的美颜处理,减少了美颜处理时背景区域和脸部五官边缘的磨损;再一方面,本方案仅通过简单的计算即可完成对图像的美颜处理,不需要通过使用神经网络来进行大量的计算,优化了美颜处理的计算过程,从而缩短了所需耗时,能够满足一些实时性要求高的应用场景的需求,例如可以在进行拍摄的同时进行上述美颜优化过程,实时性高。因此,本申请提供了一种兼顾去瑕疵能力、肤质纹理保留能力以及高实时性的图像美颜方案。
请参考图4,其示出了本申请另一个实施例的提供的图像处理方法的流程图,以目标应用程序为图像美颜应用程序为例介绍,该方法可以包括如下几个步骤(410~490)中的至少一个步骤:
步骤410,获取待处理的原图。
步骤420,对原图进行肤色检测和脸部检测,得到第一检测结果图和第二检测结果图。
第一检测结果图用于表征原图中的肤色区域,第二检测结果图用于表征原图中的脸部区域。
有关步骤410和步骤420的介绍说明,请参见上文实施例,此处不再赘述。
步骤430,基于第一检测结果图和第二检测结果图,对原图进行第一滤波处理,得到第一滤波后图像。
第一滤波处理用于在保留原图中边缘的情况下,对脸部区域进行滤波处理。
第一滤波后图像中可以对脸部区域进行滤波处理,通过滤波处理,使脸部区域像素的像素值更加接近的同时,使脸部区域与其他区域的区别更加明显。
可选地,步骤430包括步骤431~434:
步骤431,基于第二检测结果图,确定原图中的脸部区域。
第二检测结果图用于表征原图中的脸部区域,基于第二检测结果图,确定原图中的脸部区域,其中,原图中的脸部区域可以包含肤色区域,不包括眼睛、眉毛等非肤色区域。
步骤432,对于脸部区域中的目标像素,根据目标像素与各个周围像素的像素值差异和各个周围像素分别对应的肤色概率,确定各个周围像素分别对应的第一滤波权重;其中,肤色概率是基于第一检测结果图得到的。
周围像素是与目标像素相邻的像素,可选地,周围像素也可以是目标像素间隔1个像素的像素,本申请对周围像素的定义不作限定。对于脸部区域的目标像素,计算得到目标像素对应的第一滤波权重。第一滤波权重通过目标像素和各个周围像素的像素值差异和各个周围像素的像素分别对应的肤色概率得到。其中,像素值可以是上个实施例中提到的RGB颜色值。
根据原图中目标像素的像素值和周围像素的像素值,通过计算得到目标像素和各个周围像素的像素值差异。例如,目标像素的像素值为A,其中一个周围像素的像素值为B,则目标像素和该周围像素的像素值差异为|A-B|。可选地,目标像素和该周围像素的像素值差异也可以为(A-B)²,本申请对目标像素和周围像素的像素值差异的计算方法不作限定。
各个周围像素的像素值分别对应的肤色概率通过第一检测结果得到。肤色概率的具体介绍见上一个实施例中的内容,在此不再赘述。
将上述各个周围像素的像素值差异和肤色概率进行相乘后,得到各个周围像素分别对应的第一滤波权重。
在一些实施例中,目标像素的像素值为A,其中一个周围像素的像素值为B,周围像素对应的肤色概率为X,则该周围像素对应的第一滤波权重为X|A-B|。可选地,该周围像素对应的第一滤波权重也可以为X(A-B)²,本申请对第一滤波权重的计算方法不作限定。
步骤433,根据各个周围像素分别对应的像素值和第一滤波权重,确定目标像素对应的第一滤波后像素值。
根据各个周围像素的像素值和第一滤波权重,确定目标像素对应的第一滤波后像素值。其中,各个周围像素的像素值可以从原图中获取,第一滤波权重根据上述计算过程得到。可选地,目标像素参与目标像素对应的第一滤波后像素值的计算过程,且目标像素的权重值可以设置为1。可选地,目标像素不参与目标像素对应的第一滤波后像素值的计算过程。本申请对目标像素对应的第一滤波后像素值的计算过程不作限定。
以目标像素为中心,确定目标像素对应的周围像素,根据各个周围像素分别对应的像素值和第一滤波权重,确定目标像素对应的第一滤波后像素值。目标像素周围像素的个数可以是任意的,例如,目标像素及其周围像素组成的区域可以是3*3的区域,则根据3*3的区域中各个周围像素的像素值和第一滤波权重,确定目标像素对应的第一滤波后像素值。可选地,目标像素及其周围像素组成的区域还可以是5*5的区域,则根据5*5的区域中各个周围像素的像素值和第一滤波权重,确定目标像素对应的第一滤波后像素值。本申请对目标像素和对应的周围像素组成的区域大小不作限定。
在一些实施例中,目标像素参与目标像素对应的第一滤波后像素值的计算过程,且设置目标像素的权重值为1,设置目标像素及其周围像素组成的区域是3*3的区域。则根据3*3的区域中各个周围像素的像素值和第一滤波权重,通过将各个周围像素的像素值和第一滤波权重相乘得到各个周围像素对应的乘积结果,同时根据目标像素自身像素值和权重,相乘得到目标像素对应的乘积结果。取上述9个乘积结果的平均值,为目标像素的第一滤波后像素值。
在一些实施例中,目标像素不参与目标像素对应的第一滤波后像素值的计算过程,设置 目标像素及其周围像素组成的区域是5*5的区域。则根据5*5的区域中各个周围像素的像素值和第一滤波权重,通过将各个周围像素的像素值和第一滤波权重相乘得到各个周围像素对应的乘积结果。取上述24个乘积结果的平均值,为目标像素的第一滤波后像素值。
步骤434,根据脸部区域中各个像素分别对应的第一滤波后像素值,得到第一滤波后图像。
根据步骤433计算得到脸部区域中所有像素分别对应的第一滤波后像素值,得到第一滤波后图像。其中,脸部区域中所有像素分别对应的第一滤波后像素值采用相同的计算方法得到。
通过第一阶段处理,得到第一滤波后图像,通过第一滤波权重对原图中的脸部区域进行滤波处理,重构了原图中的脸部区域像素的像素值,使下面步骤中对瑕疵位置的确定更加准确。
步骤440,基于第一滤波后图像和原图,生成瑕疵检测结果图。
对第一滤波后图像和原图进行处理,得到瑕疵检测结果图,其中,瑕疵检测结果图用于表征脸部区域中瑕疵的位置,瑕疵是指上述痘痘、斑点、疤痕等存在较大颜色差异的区域。
可选地,步骤440包括步骤441~443:
步骤441,基于第一滤波后图像和原图中对应位置的像素值的差值,得到第一差值图像。
可选地,将第一滤波后图像每个像素的像素值减去原图中对应位置像素的像素值,得到第一差值图像。第一差值图像中显示第一滤波后图像和原图中对应位置的像素值的差值。其中,脸部区域以外的区域由于没有产生变化,所以脸部区域以外的区域在第一滤波后图像和原图中完全相同,因此在第一差值图像中的差值为0,而脸部区域通过上述滤波处理后,由于瑕疵部分的像素值普遍为暗色,经过滤波处理后,瑕疵部分的明暗度会提高,则得到的第一差值图像中显示像素的差值大于0的部分为瑕疵部分。
步骤442,将第一差值图像中各个第一像素的像素值设为第一数值,得到处理后的第一差值图像;其中,第一像素是指第一差值图像中像素值符合第一条件的像素。
以上述步骤为例,第一差值图像中包含像素的差值,差值为第一滤波后图像和原图中对应位置像素的像素值之差,上述差值表示第一滤波后图像和原图中对应位置像素的像素值的明暗度差距。例如,差值大于0时,表示第一滤波后图像中像素的明暗度大于原图中对应位置像素的明暗度;差值等于0时,表示第一滤波后图像中像素的明暗度和原图中对应位置像素的明暗度相同;差值小于0时,表示第一滤波后图像中像素的明暗度小于原图中对应位置像素的明暗度。
步骤443,根据处理后的第一差值图像中各个像素的像素值,进行差异截断和平滑重映射处理,生成瑕疵检测结果图。
根据得到的第一差值图像中的各个像素的差值,进行差异截断和平滑重映射处理,生成瑕疵检测结果图。
差异截断是指通过设置最大值和最小值(如该最大值和最小值可以是人为设置),根据设置的最大值和最小值,将第一差值图像中各个像素的差值大于最大值的差值设置为该最大值,将第一差值图像中各个像素的差值小于最小值的差值设置为该最小值,第一差值图像中各个像素的差值位于最大值和最小值之间的差值暂不改变。最终使通过差异截断后各个像素的差值位于上述最大值和最小值的区间内。平滑重映射处理是对经过差异截断之后的各个像素的差值,平滑地(或者说等比例地)映射到某一设置的区间之内(如该区间可以是人为设置),例如该区间可以是[0,1]。
例如,第一差值图像中存在3个像素的差值,分别为0,100,300,同时差异截断的最小值为50且最大值为250,则通过差异截断后得到的3个像素的差值分别为50,100,250。然后,通过平滑重映射将上述三个像素的差值映射至区间[0,1]之内,则差值为50的像素被映射为0,差值为100的像素被映射为0.25,差值为250的像素被映射为1。因此,第一差值图像中差 值为0,100,300的三个像素,通过差异截断和平滑重映射处理后得到的差值为0,0.25,1。
将上述得到的差异截断和平滑重映射处理后得到的各个像素的差值组成的图像,称为瑕疵检测结果图。其中,瑕疵检测结果图中的瑕疵所在的位置也就是差值大于0的像素所在的位置。
通过第一滤波后图像和原图,通过差异截断和平滑重映射处理生成瑕疵检测结果图,明确地体现出了脸部区域中瑕疵所在的位置,使下面步骤中对消除瑕疵,也就是对第一阶段结果图的处理更加准确。
步骤450,基于瑕疵检测结果图和第二检测结果图,对原图和第一滤波后图像进行混合,生成第一阶段结果图。
以瑕疵检测结果图和第二检测结果图的乘积为权重,对原图和第一滤波后图像进行混合,生成第一阶段结果图。
可选地,步骤450包括步骤451~452:
步骤451,基于瑕疵检测结果图像素的差值和第二检测结果图中对应位置的脸部概率的乘积,确定第一权重矩阵。
由于瑕疵检测结果图和第二检测结果图中像素的像素值都处于0~1,则第一权重矩阵中任一权重值的取值都为0~1。
步骤452,基于第一权重矩阵对原图和第一滤波后图像进行混合,生成第一阶段结果图。
根据上述得到的第一权重矩阵和第一阶段结果图的计算公式,将原图和第一滤波后图像进行混合,生成第一阶段结果图。
第一阶段结果图的计算公式为:
第一阶段结果图=第一滤波后图像*第一权重矩阵+原图*(1-第一权重矩阵)。
例如,当像素在脸部区域以外的一点时,此时第一权重矩阵中该像素的权重值为0,则第一阶段结果图中该像素的像素值为原图对应的像素值,当像素在脸部区域内的一点是,此时第一权重矩阵中该像素的值设为M,M大于0且小于等于1,则第一阶段结果图中该像素的像素值为:
第一滤波后图像中该像素的像素值*M+原图中该像素的像素值*(1-M)。
其中,当M取1时,第一阶段结果图中该像素的像素值为第一滤波后图像中该像素的像素值。
通过将原图和第一滤波后图像进行混合,弱化并缩小原图中的差异点和整体肤色的颜色差异,使初步美颜后的图像不存在痘痘、斑点、疤痕等差异点。
步骤460,基于第一阶段结果图,生成模糊结果图和边缘结果图。
模糊结果图是对第一阶段结果图进行滤波后生成的图像,用于生成边缘结果图。边缘结果图用于表现出图像背景、脸部五官位置的边缘信息,使美颜结果能够不损失这些边缘信息。可选地,步骤460包括步骤461~462:
步骤461,对第一阶段结果图进行第二滤波处理,得到模糊结果图。
根据选定像素与各个周围像素的像素值和权重,进行第二滤波处理,得到模糊结果图。其中,选定像素是第一阶段结果图中的任意像素,第二滤波处理采用均值滤波,即上述选定像素和各个周围像素的权重设置为相同值,例如都设置为1。可选地,所选像素参与所选像素对应的第二滤波后像素值的计算过程,且目标像素的权重值设置为1。可选地,所选像素不参与所选像素对应的第二滤波后像素值的计算过程。本申请对所选像素是否参加所选像素第二滤波后像素值的计算过程不作限定。
以所选像素为中心,确定所选像素对应的周围像素,根据各个周围像素分别对应的像素值和相同的权重,确定所选像素对应的第二滤波后像素值。所选像素周围像素的个数可以是任意的,例如,目标像素及其周围像素组成的区域可以是3*3的区域,则根据3*3的区域中各个周围像素的像素值和相同的权重,确定所选像素对应的第二滤波后像素值。可选地,所 选像素及其周围像素组成的区域可以是5*5的区域,则根据5*5的区域中各个周围像素的像素值和相同的权重,确定所选像素对应的第二滤波后像素值。本申请对所选像素和对应的周围像素组成的区域大小不作限定。
在一些实施例中,所选像素参与所选像素对应的第一滤波后像素值的计算过程,且设置所选像素的权重值为1,设置所选像素及其周围像素组成的区域是3*3的区域。则根据3*3的区域中各个周围像素的像素值和相同的权重,通过将各个周围像素的像素值和相同的权重相乘得到各个周围像素对应的乘积结果,同时根据所选像素自身像素值和权重,相乘得到所选像素对应的乘积结果。取上述9个乘积结果的平均值,为所选像素的第二滤波后像素值。
在一些实施例中,所选像素不参与所选像素对应的第一滤波后像素值的计算过程,设置所选像素及其周围像素组成的区域是5*5的区域。则根据5*5的区域中各个周围像素的像素值和相同的权重,通过将各个周围像素的像素值和相同的权重相乘得到各个周围像素对应的乘积结果。取上述2个乘积结果的平均值,为所选像素的第二滤波后像素值。
步骤462,基于第一阶段结果图与模糊结果图中对应位置的像素值的差值,得到边缘结果图。
将第一阶段结果图中各像素的像素值减去模糊结果图中对应位置像素的像素值,得到各像素的差值,然后将各像素的差值与第二数值相乘,得到边缘结果图。可选地,第二数值是预先设置好的数值。
可选地,对上述得到的边缘结果图进行均值滤波处理,得到更加平滑的脸部区域边缘。
基于第一阶段结果图,生成模糊结果图和边缘结果图。如果采用原图生成边缘结果图,会将部分斑点区域视为边缘,导致最终的结果无法完全去除斑点,通过采用第一阶段结果图生成边缘结果图,有效的防止了将斑点区域视为边缘,同时,通过边缘结果图能够很好的体现出图像中的边缘信息,从而使美颜结果能够不损失五官边缘的同时,达到祛斑祛痘提升肤色均匀性的效果。
步骤470,根据模糊结果图和第一阶段结果图,生成肤色不均匀结果图。
肤色不均匀结果图用于表示脸部区域存在肤色不均匀的区域。
可选地,步骤470包括步骤471~472:
步骤471,基于第一阶段结果图与模糊结果图中对应位置的像素值的差值,得到初始的肤色不均匀结果图。
在一些实施例中,将第一阶段结果图中各像素的像素值减去模糊结果图中对应位置像素的像素值,得到各像素的差值,然后将各像素的差值加第三数值,得到肤色不均匀值,得到肤色不均匀结果图。可选地,第三数值是预先设定好的任意数值,在实际使用中,第三数值可以设置为0.5。
其中,肤色不均匀值用于表示第一阶段结果图中像素的像素值是否偏暗、偏亮或均匀。像素的像素值均匀表示该像素的像素值既不偏暗,也不偏亮。
步骤472,对初始的肤色不均匀结果图进行第三滤波处理,得到肤色不均匀结果图。
对初始的肤色不均匀结果图中的脸部区域的像素进行第三滤波处理,得到肤色不均匀结果图,其中,第三滤波处理采用3*3的高斯滤波,高斯滤波中越靠近像素的权重越大,越远离中心点的权重越低。可选地,可以不使用上述高斯滤波,直接使用得到的初始的肤色不均匀结果图进行下面的步骤。
通过对模糊结果图和第一阶段结果图进行处理,得到初始的肤色不均匀结果图,体现出了脸部区域的明暗度,为下面步骤中对脸部区域的明暗度处理做准备,同时,采用高斯滤波对初始的肤色不均匀结果图进一步加工,得到肤色不均匀结果图,可以更好的保留肤色纹理细节。
步骤480,根据肤色不均匀结果图,采用逆对比度增强法对第一阶段结果图进行处理,得到第二阶段结果图;其中,逆对比度增强法用于拉近不同像素之间的明暗度。
逆对比度增强法是对比度增强法的相反的方法,对比度增强法是在设置中间值的情况下,将数值远离中间值且使数值向中间值两边等比例扩大。因此,逆对比度增强法是将数值向中间值靠近且使数值向中间值等比例缩小。在本申请中,逆对比度增强法用于拉近不同像素之间的明暗度,使明暗度趋于中间值而使脸部区域的明暗度相近。
可选地,步骤480包括步骤481~482:
步骤481,根据肤色不均匀结果图,确定第一像素集和第二像素集;其中,第一像素集包括肤色不均匀结果图中肤色不均匀值属于第一数值区间的像素,第二像素集包括肤色不均匀结果图中肤色不均匀值属于第二数值区间的像素,第一像素集中像素的肤色不均匀值大于第二像素集中像素的肤色不均匀值。
在一些实施例中,以步骤471的实施例为例,当肤色不均匀值大于0.5时,表示第一阶段结果图中像素的像素值大于模糊结果图中对应位置像素的像素值,即为肤色不均匀区域偏亮部分,需要减小对应的第一阶段结果图中像素的像素值;当肤色不均匀值小于0.5时,表示第一阶段结果图中像素的像素值小于模糊结果图中对应位置像素的像素值,即为肤色不均匀区域偏暗部分,需要增大对应的第一阶段结果图中像素的像素值;当肤色不均匀值等于0.5时,表示肤色均匀区域,此时该肤色不均匀值对应的第一阶段结果图中像素的像素值为使用逆对比度增强法时的中间值,且使用逆对比度增强法时对均匀区域不做处理。
在一些实施例中,以上述实施例为例,肤色不均匀结果图中像素的肤色不均匀值等于0.5时对应的第一阶段结果图中像素的像素值,是逆对比度增强法的中间值,其中,像素值为中间值的像素所处的区域也被称为均匀区域,其他像素所处的区域被称为不均匀区域。
步骤482,将第一阶段结果图中属于第一像素集的像素的像素值减小,以及将第一阶段结果图中属于第二像素集的像素的像素值增大,得到第二阶段结果图。
按上述肤色不均匀结果的计算公式,肤色不均匀值小于0.5时,其对应的第一阶段结果图中的像素为局部偏暗区域,需要增大其像素值;对于肤色不均匀值大于0.5时,其对应的像素为局部偏亮区域,需要减小其像素值。
采用上述逆对比度增强法拉近不同像素之间的明暗度,也就是将第一阶段结果图中属于第一像素集中的像素的像素值减小,以及将第一阶段结果图中属于第二像素集中的像素的像素值增大,对肤色均匀区域不做处理,得到第二阶段结果图。
在一些实施例中,根据像素的肤色不均匀值与中间值的差距大小,对第一阶段结果图中该像素的像素值采取不同程度的增大或减小,其中,该像素的肤色不均匀值与中间值的差距越大,第一阶段结果图中该像素的像素值增大或减小的程度越大;该像素的肤色不均匀值与中间值的差距越小,第一阶段结果图中该像素的像素值增大或减小的程度越小。
例如,设置第一阶段结果图中像素的像素值为A,肤色不均匀结果图中像素的肤色不均匀值为B,第二阶段结果图中像素的像素值为C,不均匀程度值为D,不均匀程度值用于表示第一阶段结果图的脸部区域不均匀程度。
其中,肤色不均匀结果图中像素的肤色不均匀值用于描述第一阶段结果图的肤色均匀性,根据肤色不均匀结果图中像素的肤色不均匀值将第一阶段结果图划分为均匀区域,局部不均匀区域偏亮部分,局部不均匀区域偏暗部分,对上述第一阶段结果图中的三个区域像素的像素值采取不同的处理。
其中,当B>0.5时,即上述第一像素集,也就是第一阶段结果图中的局部不均匀偏亮区域,通过肤色不均匀值计算得到不均匀程度:
D=(B-0.5)/0.5
则第二阶段结果图中像素的像素值为:
C=A-(1-A)D
当B<0.5时,即上述第二像素集,也就是第一阶段结果图中的局部不均匀偏暗区域,通 过肤色不均匀结果图计算得到不均匀程度:
D=(0.5-B)/0.5
则第二阶段结果图中像素的像素值为:
C=A+AD
其中,当B=0.5时,通过肤色不均匀值计算得到不均匀程度D=0,则第二阶段结果图中该像素的像素值和第一阶段结果图中该像素的像素值相同。
根据不均匀程度对第一阶段结果图进行逆对比度增强法,其中,当肤色不均匀结果图中多个肤色不均匀值和中间值的差距相同时,对应的第一阶段结果图中区域的不均匀程度是相同的。例如,当中间值为0.5时,肤色不均匀结果图中两个肤色不均匀值分别为0.4和0.6时,与0.5的差值都为0.1,其对应的第一阶段结果图中区域的不均匀程度是相同的。
通过逆对比度增强法使第一阶段结果图中像素的像素值大小接近,使第二阶段结果图中整体肤色更加和谐一致。
步骤490,基于原图和第二阶段结果图,生成最终结果图。
步骤490已在上个实施例中介绍过,此处不在赘述。
可选地,步骤490,包括步骤491~494:
步骤491,基于第二阶段结果图和原图中对应位置的像素值的差值,得到第二差值图像。
将第二阶段结果图像素的像素值减去原图中对应位置像素的像素值,得到像素的差值和第二差值图像,第二差值图像中像素的像素值为上述方法得到的差值。
步骤492,根据第二差值图像中各个像素的像素值,进行差异截断和平滑重映射处理,生成中间结果图。
同样的,通过差异截断和平滑重映射处理对第二差值图像中各个像素的像素值进行像素值区间的选取和压缩,使得到的中间结果图中像素的差值压缩为0~1的区间。
步骤493,基于第一检测结果图、第二检测结果图、原图对应的边缘结果图、中间结果图和美颜强度值,生成第二权重矩阵。其中,美颜强度值用于调整原图和第二阶段结果图在混合时各自所占的比重。
第二权重矩阵由基于第一检测结果图像素的肤色概率、第二检测结果图像素的脸部概率、原图对应的边缘结果图像素的差值、中间结果图像素的差值和美颜强度值相乘得到,其中,第一检测结果图像素的肤色概率、第二检测结果图像素的脸部概率、原图对应的边缘结果图像素的差值和中间结果图像素的差值通过上述方法计算得到,美颜强度值根据用户调节,根据用户调节的美颜强度值,确定第二权重矩阵的大小。
步骤494,基于第二权重矩阵对原图和第二阶段结果图进行混合,生成最终结果图。
根据上述得到的第二权重矩阵和最终结果图的计算公式,将原图和中间结果图进行混合,生成最终结果图。
最终结果图的计算公式为:
最终结果图=第二阶段结果图*第二权重矩阵+原图*(1-第二权重矩阵)
在一些实施中,当用户不使用美颜时,则设置美颜强度值为1,第二阶段结果图的权重为0,则最终结果图为原图;当用户使用美颜时,设置相应的美颜强度值,计算得到第二权重矩阵为N,N大于0且小于等于1,则最终结果图中像素的像素值为:
最终结果图=第二阶段结果图*N+原图*(1-N)。
本实施例通过先通过第一阶段结果图生成模糊结果图,再通过第一阶段结果图和模糊结果图的处理生成边缘结果图。通过边缘结果图能够很好的体现出图像中的边缘信息,从而使美颜结果能够不损失五官边缘的同时,达到祛斑祛痘和提升肤色均匀性的效果。
同时,通过高斯滤波对初始的肤色不均匀结果图进行滤波,得到肤色不均匀结果图,相比对初始的肤色不均匀结果图进行处理,对高斯滤波后得到的肤色不均匀结果图进行处理,可以更好的提升肤色的均匀性,同时可以更好地保留皮肤纹理细节。
另外,设置美颜强度值,使用户可以通过调节美颜强度值来控制所需要的美颜的程度。
请参考图5,其出示了本申请一个实施例提供的图像处理方法的整体步骤示意图。
首先获取待处理的原图,通过对原图进行肤色检测得到肤色检测结果m2,也就是第一检测结果图,通过对原图进行脸部检测得到人脸检测结果m1,也就是第二检测结果图。对原图进行保边肤色滤波处理得到第一滤波后图像,通过将保边肤色滤波处理得到的第一滤波后图像减去原图并经过差异截断和平滑重映射得到大瑕疵区域检测图片,也就是瑕疵检测结果图。接着通过人脸检测结果m1中脸部概率和大瑕疵区域检测图片中对应位置像素的差值相乘得到混合权重,也就是第一权重矩阵,将原图和大瑕疵区域检测图片根据第一权重矩阵计算和得到祛斑祛痘结果图,也就是第一阶段结果图。
接着通过对祛斑祛痘结果图进行均值滤波处理后得到模糊结果图,再基于第一阶段结果图和模糊结果图进行对应位置像素的像素值的差值计算,得到边缘检测结果m3,也就是边缘结果图。接着基于均值滤波后得到的模糊结果图和祛斑祛痘结果图对应位置像素的像素值的差值,得到不均匀区域检测图像,也就是初始的肤色不均匀结果图,该初始的肤色不均匀结果图可以通过高斯滤波处理后得到肤色不均匀结果图。根据不均匀区域检测图像中确定像素的中间值对应的祛斑祛痘结果图的像素值,对祛斑祛痘结果图进行逆对比度增强得到肤色均匀结果图,也就是第二阶段结果图。将肤色均匀结果图和原图进行差异截断和平滑重映射获得面部大瑕疵以及较浅的不均匀区域m4,也就是中间结果图。最后将上述m1,m2,m3,m4和用户设定的美颜强度值相乘得到混合权重,也就是第二权重矩阵,将原图和肤色均匀结果图根据第二权重矩阵进行混合后得到人脸美颜结果,也就是最终结果图。
下述为本申请装置实施例,可以用于执行本申请方法实施例。对于本申请装置实施例中未披露的细节,请参照本申请方法实施例。
请参考图6,其示出了本申请一个实施例提供的图像处理装置的框图。该装置具有实现上述图像处理方法的功能,所述功能可以由硬件实现,也可以由硬件执行相应的软件实现。该装置可以是计算机设备,也可以设置在计算机设备中。该装置600可以包括:原图获取模块610、原图检测模块620、第一处理模块630、第二处理模块640和结果生成模块650。
原图获取模块610,配置为获取待处理的原图。
原图检测模块620,配置为对所述原图进行肤色检测和脸部检测,得到第一检测结果图和第二检测结果图;其中,所述第一检测结果图用于表征所述原图中的肤色区域,所述第二检测结果图用于表征所述原图中的脸部区域。
第一处理模块630,配置为基于所述第一检测结果图和所述第二检测结果图,对所述原图进行第一阶段处理,得到第一阶段结果图;其中,所述第一阶段处理用于去除所述脸部区域中与整体肤色存在差异的区域。
第二处理模块640,配置为对所述第一阶段结果图进行第二阶段处理,得到第二阶段结果图;其中,所述第二阶段处理用于提升所述脸部区域中不同位置肤色的均匀性。
结果生成模块650,配置为基于所述原图和所述第二阶段结果图,生成最终结果图。
在一些实施例中,第一处理模块630包括:第一滤波单元631、瑕疵结果生成单元632和第一结果生成单元633。
第一滤波单元631,配置为基于所述第一检测结果图和所述第二检测结果图,对所述原图进行第一滤波处理,得到第一滤波后图像;其中,所述第一滤波处理用于在保留所述源图中边缘的情况下,对所述脸部区域进行滤波处理。
瑕疵结果生成单元632,配置为基于所述第一滤波后图像和所述原图,生成瑕疵检测结果图;其中,所述瑕疵检测结果图用于表征所述脸部区域中的瑕疵位置。
第一结果生成单元633,配置为基于所述瑕疵检测结果图和所述第二检测结果图,对所述原图和所述第一滤波后图像进行混合,生成所述第一阶段结果图。
在一些实施例中,第一滤波单元631,配置为:
基于所述第二检测结果图,确定所述原图中的脸部区域;
对于所述脸部区域中的目标像素,根据所述目标像素与各个周围像素的像素值差异和所述各个周围像素分别对应的肤色概率,确定所述各个周围像素分别对应的第一滤波权重;其中,所述肤色概率是基于所述第一检测结果图得到的;
根据所述各个周围像素分别对应的像素值和第一滤波权重,确定所述目标像素对应的第一滤波后像素值;
根据所述脸部区域中各个像素分别对应的第一滤波后像素值,得到所述第一滤波后图像。
在一些实施例中,瑕疵结果生成单元632,配置为:
基于所述第一滤波后图像和所述原图中对应位置的像素值的差值,得到第一差值图像;
将所述第一差值图像中各个第一像素的像素值设为第一数值,得到处理后的第一差值图像;其中,所述第一像素是指所述第一差值图像中像素值符合第一条件的像素;
根据所述处理后的第一差值图像中各个像素的像素值,进行差异截断和平滑重映射处理,生成所述瑕疵检测结果图。
在一些实施例中,第一结果生成单元633,配置为:
基于所述瑕疵检测结果图和所述第二检测结果图中对应位置的像素值的乘积,确定第一权重矩阵;
基于所述第一权重矩阵对所述原图和所述第一滤波后图像进行混合,生成所述第一阶段结果图。
在一些实施例中,第二处理模块640,包括:第一结果使用单元641、肤色结果生成单元642和第二结果生成单元643。
第一结果使用单元641,配置为基于所述第一阶段结果图,生成模糊结果图和边缘结果图。
肤色结果生成单元642,配置为根据所述模糊结果图和所述第一阶段结果图,生成肤色不均匀结果图。
第二结果生成单元643,配置为根据所述肤色不均匀结果图,采用逆对比度增强法对所述第一阶段结果图进行处理,得到所述第二阶段结果图;其中,所述逆对比度增强法用于拉近不同像素之间的明暗度。
在一些实施例中,第一结果使用单元641,配置为:
对所述第一阶段结果图进行第二滤波处理,得到所述模糊结果图;
基于所述第一阶段结果图与所述模糊结果图中对应位置的像素值的差值,得到所述边缘结果图。
在一些实施例中,肤色结果生成单元642,配置为:
基于所述第一阶段结果图与所述模糊结果图中对应位置的像素值的差值,得到初始的肤色不均匀结果图;
对所述初始的肤色不均匀结果图进行第三滤波处理,得到所述肤色不均匀结果图。
在一些实施例中,第二结果生成单元643,配置为:
根据所述肤色不均匀结果图,确定第一像素集和第二像素集;其中,所述第一像素集包括所述肤色不均匀结果图中肤色不均匀值属于第一数值区间的像素,所述第二像素集包括所述肤色不均匀结果图中肤色不均匀值属于第二数值区间的像素,所述第一像素集中像素的肤色不均匀值大于所述第二像素集中像素的肤色不均匀值;
将所述第一阶段结果图中属于所述第一像素集的像素的像素值减小,以及将所述第一阶段结果图中属于所述第二像素集的像素的像素值增大,得到所述第二阶段结果图。
在一些实施例中,结果生成模块650,配置为:
基于所述第二阶段结果图和所述原图中对应位置的像素值的差值,得到第二差值图像;
根据所述第二差值图像中各个像素的像素值,进行差异截断和平滑重映射处理,生成中间结果图;
基于所述第一检测结果图、所述第二检测结果图、所述原图对应的边缘结果图、所述中间结果图和美颜强度值,生成第二权重矩阵;其中,所述美颜强度值用于调整所述原图和所述第二阶段结果图在混合时各自所占的比重;
基于所述第二权重矩阵对所述原图和所述第二阶段结果图进行混合,生成所述最终结果图。
本申请通过对原图进行肤色检测和脸部检测选出原图中的肤色区域和脸部区域,并根据第一阶段处理弱化了脸部区域中与整体肤色存在较大差异的区域,缩小了该区域的肤色与脸部区域整体肤色的差异,通过第二阶段处理提升脸部区域中肤色的均匀性,得到最终的图像处理结果。一方面,通过第一阶段处理弱化了脸部区域中与整体肤色存在较大差异的区域,缩小了该区域的肤色与脸部区域整体肤色的差异,通过第二阶段处理提升脸部区域中肤色的均匀性,加强了去瑕疵能力的同时更好地保留了肤质纹理;另一方面,基于肤色检测和脸部检测得到的第一检测结果图和第二检测结果图,从而确定原图中的脸部肤色区域,重点对上述得到的脸部肤色区域进行图像的美颜处理,不对背景区域和脸部五官边缘进行图像的美颜处理,减少了美颜处理时背景区域和脸部五官边缘的磨损;再一方面,本方案仅通过简单的计算即可完成对图像的美颜处理,不需要通过使用神经网络来进行大量的计算,优化了美颜处理的计算过程,从而缩短了所需耗时,能够满足一些实时性要求高的应用场景的需求,例如可以在进行拍摄的同时进行上述美颜优化过程,实时性高。因此,本申请提供了一种兼顾去瑕疵能力、肤质纹理保留能力以及高实时性的图像美颜方案。
需要说明的是,上述实施例提供的装置,在实现其功能时,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将设备的内容结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。另外,上述实施例提供的装置与方法实施例属于同一构思,其具体实现过程详见方法实施例,这里不再赘述。
在示例性实施例中,还提供了一种计算机设备。该计算机设备包括处理器和存储器,该存储器中存储有计算机程序。该计算机设备可以是上文中介绍的终端10和服务器20,该计算机程序由处理器加载并执行以实现上述图像处理方法。
在示例性实施例中,还提供了一种计算机可读存储介质,所述存储介质中存储有计算机程序,所述计算机程序由处理器加载并执行以实现上述图像处理方法。
在示例性实施例中,还提供一种计算机程序产品,所述计算机程序产品包括计算机指令,所述计算机指令存储在计算机可读存储介质中,处理器从所述计算机可读存储介质读取并执行所述计算机指令,以实现上述图像处理方法。
以上所述仅为本申请的示例性实施例,并不用以限制本申请,凡在本申请的精神和原则之内,所作的任何修改、等同替换、改进等,均应包含在本申请的保护范围之内。
应当理解的是,在本文中提及的“多个”是指两个或两个以上。“和/或”,描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。字符“/”一般表示前后关联对象是一种“或”的关系。另外,本文中描述的步骤编号,仅示例性示出了步骤间的一种可能的执行先后顺序,在一些其它实施例中,上述步骤也可以不按照编号顺序来执行,如两个不同编号的步骤同时执行,或者两个不同编号的步骤按照与图示相反的顺序执行,本申请实施例对此不作限定。

Claims (14)

  1. An image processing method, the method being executed by a computer device, the method comprising:
    acquiring an original image to be processed;
    performing skin color detection and face detection on the original image to obtain a first detection result map and a second detection result map; wherein the first detection result map is used to characterize a skin color region in the original image, and the second detection result map is used to characterize a face region in the original image;
    performing first-stage processing on the original image based on the first detection result map and the second detection result map to obtain a first-stage result map; wherein the first-stage processing is used to remove regions in the face region that differ from the overall skin tone;
    performing second-stage processing on the first-stage result map to obtain a second-stage result map; wherein the second-stage processing is used to improve the uniformity of skin color at different positions in the face region;
    generating a final result map based on the original image and the second-stage result map.
  2. The method according to claim 1, wherein the performing first-stage processing on the original image based on the first detection result map and the second detection result map to obtain a first-stage result map comprises:
    performing first filtering processing on the original image based on the first detection result map and the second detection result map to obtain a first filtered image; wherein the first filtering processing is used to filter the face region while retaining edges in the original image;
    generating a blemish detection result map based on the first filtered image and the original image; wherein the blemish detection result map is used to characterize blemish positions in the face region;
    blending the original image and the first filtered image based on the blemish detection result map and the second detection result map to generate the first-stage result map.
  3. The method according to claim 2, wherein the performing first filtering processing on the original image based on the first detection result map and the second detection result map to obtain a first filtered image comprises:
    determining the face region in the original image based on the second detection result map;
    for a target pixel in the face region, determining a first filter weight corresponding to each surrounding pixel according to the pixel value difference between the target pixel and that surrounding pixel and the skin color probability corresponding to that surrounding pixel; wherein the skin color probability is obtained based on the first detection result map;
    determining a first filtered pixel value corresponding to the target pixel according to the pixel values and first filter weights corresponding to the surrounding pixels;
    obtaining the first filtered image according to the first filtered pixel values corresponding to the pixels in the face region.
  4. The method according to claim 2 or 3, wherein the generating a blemish detection result map based on the first filtered image and the original image comprises:
    obtaining a first difference image based on the difference between pixel values at corresponding positions in the first filtered image and the original image;
    setting the pixel value of each first pixel in the first difference image to a first value to obtain a processed first difference image; wherein a first pixel is a pixel in the first difference image whose pixel value meets a first condition;
    performing difference truncation and smooth remapping according to the pixel values of the processed first difference image to generate the blemish detection result map.
  5. The method according to any one of claims 2 to 4, wherein the blending the original image and the first filtered image based on the blemish detection result map and the second detection result map to generate the first-stage result map comprises:
    determining a first weight matrix based on the product of pixel values at corresponding positions in the blemish detection result map and the second detection result map;
    blending the original image and the first filtered image based on the first weight matrix to generate the first-stage result map.
  6. The method according to any one of claims 1 to 5, wherein the performing second-stage processing on the first-stage result map to obtain a second-stage result map comprises:
    generating a blur result map and an edge result map based on the first-stage result map;
    generating an uneven skin color result map according to the blur result map and the first-stage result map;
    processing the first-stage result map using inverse contrast enhancement according to the uneven skin color result map to obtain the second-stage result map; wherein inverse contrast enhancement is used to narrow the brightness differences between pixels.
  7. The method according to claim 6, wherein the generating a blur result map and an edge result map based on the first-stage result map comprises:
    performing second filtering processing on the first-stage result map to obtain the blur result map;
    obtaining the edge result map based on the difference between pixel values at corresponding positions in the first-stage result map and the blur result map.
  8. The method according to claim 6 or 7, wherein the generating an uneven skin color result map according to the blur result map and the first-stage result map comprises:
    obtaining an initial uneven skin color result map based on the difference between pixel values at corresponding positions in the first-stage result map and the blur result map;
    performing third filtering processing on the initial uneven skin color result map to obtain the uneven skin color result map.
  9. The method according to any one of claims 6 to 8, wherein the processing the first-stage result map using inverse contrast enhancement according to the uneven skin color result map to obtain the second-stage result map comprises:
    determining a first pixel set and a second pixel set according to the uneven skin color result map; wherein the first pixel set includes pixels whose unevenness values in the uneven skin color result map fall within a first numerical range, the second pixel set includes pixels whose unevenness values fall within a second numerical range, and the unevenness values of pixels in the first pixel set are greater than those of pixels in the second pixel set;
    decreasing the pixel values of the pixels in the first-stage result map that belong to the first pixel set, and increasing the pixel values of the pixels in the first-stage result map that belong to the second pixel set, to obtain the second-stage result map.
  10. The method according to any one of claims 1 to 9, wherein the generating a final result map based on the original image and the second-stage result map comprises:
    obtaining a second difference image based on the difference between pixel values at corresponding positions in the second-stage result map and the original image;
    performing difference truncation and smooth remapping according to the pixel values of the second difference image to generate an intermediate result map;
    generating a second weight matrix based on the first detection result map, the second detection result map, the edge result map corresponding to the original image, the intermediate result map and a beautification intensity value; wherein the beautification intensity value is used to adjust the respective weights of the original image and the second-stage result map during blending;
    blending the original image and the second-stage result map based on the second weight matrix to generate the final result map.
  11. An image processing apparatus, the apparatus comprising:
    an original image acquisition module, configured to acquire an original image to be processed;
    an original image detection module, configured to perform skin color detection and face detection on the original image to obtain a first detection result map and a second detection result map; wherein the first detection result map is used to characterize a skin color region in the original image, and the second detection result map is used to characterize a face region in the original image;
    a first processing module, configured to perform first-stage processing on the original image based on the first detection result map and the second detection result map to obtain a first-stage result map; wherein the first-stage processing is used to remove regions in the face region that differ from the overall skin tone;
    a second processing module, configured to perform second-stage processing on the first-stage result map to obtain a second-stage result map; wherein the second-stage processing is used to improve the uniformity of skin color at different positions in the face region;
    a result generation module, configured to generate a final result map based on the original image and the second-stage result map.
  12. A computer device, comprising a processor and a memory, the memory storing a computer program that is loaded and executed by the processor to implement the method according to any one of claims 1 to 10.
  13. A computer-readable storage medium, storing a computer program that is loaded and executed by a processor to implement the method according to any one of claims 1 to 10.
  14. A computer program product, comprising computer instructions stored in a computer-readable storage medium, a processor reading the computer instructions from the computer-readable storage medium and executing them to implement the method according to any one of claims 1 to 10.
PCT/CN2022/134464 2021-12-09 2022-11-25 图像处理方法、装置、设备、存储介质及程序产品 WO2023103813A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111501454.3 2021-12-09
CN202111501454.3A CN114187202A (zh) 2021-12-09 2021-12-09 图像处理方法、装置、设备、存储介质及程序产品

Publications (1)

Publication Number Publication Date
WO2023103813A1 true WO2023103813A1 (zh) 2023-06-15

Family

ID=80604120

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/134464 WO2023103813A1 (zh) 2021-12-09 2022-11-25 图像处理方法、装置、设备、存储介质及程序产品

Country Status (2)

Country Link
CN (1) CN114187202A (zh)
WO (1) WO2023103813A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114187202A (zh) * 2021-12-09 2022-03-15 百果园技术(新加坡)有限公司 图像处理方法、装置、设备、存储介质及程序产品

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8265410B1 (en) * 2009-07-11 2012-09-11 Luxand, Inc. Automatic correction and enhancement of facial images
CN109712095A (zh) * 2018-12-26 2019-05-03 西安工程大学 一种快速边缘保留的人脸美化方法
CN110248242A (zh) * 2019-07-10 2019-09-17 广州虎牙科技有限公司 一种图像处理和直播方法、装置、设备和存储介质
CN110706187A (zh) * 2019-05-31 2020-01-17 成都品果科技有限公司 一种均匀肤色的图像调整方法
CN114187202A (zh) * 2021-12-09 2022-03-15 百果园技术(新加坡)有限公司 图像处理方法、装置、设备、存储介质及程序产品

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8265410B1 (en) * 2009-07-11 2012-09-11 Luxand, Inc. Automatic correction and enhancement of facial images
CN109712095A (zh) * 2018-12-26 2019-05-03 西安工程大学 一种快速边缘保留的人脸美化方法
CN110706187A (zh) * 2019-05-31 2020-01-17 成都品果科技有限公司 一种均匀肤色的图像调整方法
CN110248242A (zh) * 2019-07-10 2019-09-17 广州虎牙科技有限公司 一种图像处理和直播方法、装置、设备和存储介质
CN114187202A (zh) * 2021-12-09 2022-03-15 百果园技术(新加坡)有限公司 图像处理方法、装置、设备、存储介质及程序产品

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ANONYMOUS: "Portrait Beauty Algorithm -Skin Detection", CLOUD TENCENT, 13 November 2020 (2020-11-13), XP093072541, Retrieved from the Internet <URL:https://cloud.tencent.com/developer/article/1747827> [retrieved on 20230810] *

Also Published As

Publication number Publication date
CN114187202A (zh) 2022-03-15

Similar Documents

Publication Publication Date Title
WO2020125631A1 (zh) 视频压缩方法、装置和计算机可读存储介质
CN111127591B (zh) 图像染发处理方法、装置、终端和存储介质
CN108961175B (zh) 人脸亮度调整方法、装置、计算机设备及存储介质
CN111369644A (zh) 人脸图像的试妆处理方法、装置、计算机设备和存储介质
US20140176548A1 (en) Facial image enhancement for video communication
CN109919866B (zh) 图像处理方法、装置、介质及电子设备
Kim et al. Low-light image enhancement based on maximal diffusion values
CN110248242B (zh) 一种图像处理和直播方法、装置、设备和存储介质
US10929982B2 (en) Face pose correction based on depth information
CN112330527A (zh) 图像处理方法、装置、电子设备和介质
WO2023103813A1 (zh) 图像处理方法、装置、设备、存储介质及程序产品
CN112333385B (zh) 电子防抖控制方法及装置
CN110503599B (zh) 图像处理方法和装置
Lei et al. A novel intelligent underwater image enhancement method via color correction and contrast stretching
WO2021128835A1 (zh) 图像处理方法及装置、视频处理方法及装置、电子设备和存储介质
CN112465882A (zh) 图像处理方法、装置、电子设备及存储介质
US20240013358A1 (en) Method and device for processing portrait image, electronic equipment, and storage medium
CN114862729A (zh) 图像处理方法、装置、计算机设备和存储介质
CN112435173A (zh) 一种图像处理和直播方法、装置、设备和存储介质
CN113610723B (zh) 图像处理方法及相关装置
CN113744145B (zh) 提升图像清晰度的方法、存储介质、电子设备及系统
CN113379623B (zh) 图像处理方法、装置、电子设备及存储介质
CN113160099B (zh) 人脸融合方法、装置、电子设备、存储介质及程序产品
CN111583163B (zh) 基于ar的人脸图像处理方法、装置、设备及存储介质
CN114331810A (zh) 图像处理方法及相关装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22903236

Country of ref document: EP

Kind code of ref document: A1