WO2022105270A1 - Image processing method and apparatus - Google Patents

Image processing method and apparatus

Info

Publication number
WO2022105270A1
Authority
WO
WIPO (PCT)
Prior art keywords
attribute information
face
area
region
image
Prior art date
Application number
PCT/CN2021/106299
Other languages
French (fr)
Chinese (zh)
Inventor
李乐
秦文煜
刘晓坤
陶建华
鹿镇
Original Assignee
北京达佳互联信息技术有限公司 (Beijing Dajia Internet Information Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京达佳互联信息技术有限公司 (Beijing Dajia Internet Information Technology Co., Ltd.)
Publication of WO2022105270A1 publication Critical patent/WO2022105270A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/77 Retouching; Inpainting; Scratch removal
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face

Definitions

  • the present disclosure relates to the technical field of image processing, and in particular, to an image processing method and apparatus.
  • in the related art, facial de-gloss technology mainly includes: first, obtaining the brightness value of each pixel in the face image; then, determining the difference between each pixel's brightness value and a preset brightness value; and finally, determining the glossy areas in the face image according to the determined differences and de-glossing those areas.
  • the present disclosure provides an image processing method, an apparatus, an electronic device and a storage medium, so as to improve the effect of removing gloss from the face in a face image.
  • the embodiments of the present disclosure provide an image processing method, the method including: acquiring a face image and determining a face region in the face image; inputting the face image into an attribute information extraction model to obtain the facial attribute information of the face region and the environmental attribute information of the face image; performing region extraction on the face region according to the facial attribute information and the environmental attribute information to obtain the glossy region in the face region; and performing de-gloss processing on the glossy region according to the facial attribute information and the environmental attribute information to obtain a de-glossed face image.
  • the facial attribute information and the environmental attribute information are referred to, so that the glossy region can be determined more accurately and processed appropriately, thereby improving the effect of de-glossing the face in the face image.
  • the facial attribute information includes at least one of skin color, skin texture, brightness or age of the human face in the face region; and the environmental attribute information includes at least ambient lighting information.
  • the skin color, skin texture, brightness and age of the face, and the ambient lighting information directly affect the average brightness value of the pixels in the face, so the glossy region determined based on this information is more accurate.
  • the above-mentioned step of performing region extraction on the face region according to the facial attribute information and the environmental attribute information to obtain the glossy region in the face region includes: obtaining a first adjustment coefficient according to the facial attribute information and the environmental attribute information; increasing the brightness value of each pixel in the face image according to the first adjustment coefficient; and determining the glossy region in the face region according to the face region after the brightness increase.
  • the glossy region in the face region can be determined according to a preset standard, and the facial attribute information and the environmental attribute information directly affect the first adjustment coefficient used to adjust the brightness values. Therefore, after the first adjustment coefficient is determined from the facial attribute information and the environmental attribute information and applied to the face image, the determined glossy region is more accurate.
  • the above-mentioned step of obtaining the first adjustment coefficient according to the facial attribute information and the environmental attribute information includes: inputting the facial attribute information and the environmental attribute information into an adjustment coefficient extraction function to obtain the first adjustment coefficient.
  • the adjustment coefficient extraction function is obtained by linear regression analysis based on the adjustment coefficients, facial attribute information and environmental attribute information marked in multiple sample face images.
  • by performing linear regression analysis on the adjustment coefficients, facial attribute information and environmental attribute information marked in the multiple sample face images, the interdependence among the adjustment coefficient, the facial attribute information and the environmental attribute information can be obtained.
  • the marked adjustment coefficient, facial attribute information and environmental attribute information can be determined from the values used when a sample image is manually processed to achieve the expected effect. Therefore, the first adjustment coefficient calculated by the adjustment coefficient extraction function has a higher probability of achieving the expected effect after adjusting the face image.
  • the above-mentioned step of increasing the brightness value of each pixel in the face image according to the first adjustment coefficient includes: converting the color space of the face image into a target color space, where the target color space includes information representing the brightness values of the pixels in the face image; and increasing the brightness value of each pixel of the face image in the target color space according to the first adjustment coefficient.
  • the above-mentioned step of determining the glossy region in the face region according to the face region after the brightness increase includes: determining the region composed of pixels whose increased brightness value is greater than a first preset threshold as the glossy region in the face region.
  • the above-mentioned step of performing de-gloss processing on the glossy region according to the facial attribute information and the environmental attribute information includes: inputting the facial attribute information and the environmental attribute information into a dimming coefficient calculation function to obtain a dimming coefficient of the face image, the dimming coefficient calculation function being obtained by linear regression based on the dimming coefficients, facial attribute information and environmental attribute information marked in multiple sample face images; and reducing the brightness value of each pixel in the glossy region according to the dimming coefficient of the face image.
  • the method further includes: fusing the face image with the de-glossed face image to obtain the target image.
  • the fused image retains the details of the face image while preserving the de-gloss effect.
  • the above-mentioned step of performing region extraction on the face region according to the facial attribute information and the environmental attribute information to obtain the glossy region includes: performing skin smoothing on the face region; and performing region extraction on the smoothed face region according to the facial attribute information and the environmental attribute information to obtain the glossy region in the face region.
  • skin smoothing of the face region can initially reduce part of the gloss, so that the final de-gloss effect is better.
  • the embodiments of the present disclosure provide an image processing apparatus, including: an acquisition module configured to acquire a face image and determine the face region in the face image; an extraction module configured to input the face image into the attribute information extraction model, obtain the facial attribute information of the face region and the environmental attribute information of the face image, and perform region extraction on the face region according to the facial attribute information and the environmental attribute information to obtain the glossy region in the face region;
  • and a de-gloss module configured to perform de-gloss processing on the glossy region according to the facial attribute information and the environmental attribute information, so as to obtain the de-glossed face image.
  • the facial attribute information includes at least one of skin color, skin texture, brightness, or age of the human face in the face region; and the environmental attribute information includes at least ambient lighting information.
  • the acquisition module is further configured to acquire a first adjustment coefficient according to the facial attribute information and the environmental attribute information; the extraction module is specifically configured to increase the brightness value of each pixel in the face image according to the first adjustment coefficient, and determine the glossy region in the face region according to the face region after the brightness increase.
  • the acquisition module is further configured to input the facial attribute information and the environmental attribute information into an adjustment coefficient extraction function to obtain the first adjustment coefficient; the adjustment coefficient extraction function is obtained by linear regression analysis based on the adjustment coefficients, facial attribute information and environmental attribute information marked in multiple sample face images.
  • the image processing apparatus further includes an augmentation module configured to: convert the color space of the face image into a target color space, where the target color space includes information representing the brightness values of pixels in the face image; and increase the brightness value of each pixel of the face image in the target color space according to the first adjustment coefficient.
  • the extraction module is further configured to: determine an area composed of pixels whose brightness value is greater than the first preset threshold in the face region after the brightness value has been increased as a glossy region in the face region.
  • the exponent coefficient calculation function is obtained by linear regression analysis based on the exponent coefficients, facial attribute information and environmental attribute information marked in multiple sample face images.
  • the de-gloss module is further configured to: input the facial attribute information and the environmental attribute information into a dimming coefficient calculation function to obtain a dimming coefficient of the face image, the dimming coefficient calculation function being obtained by linear regression based on the dimming coefficients, facial attribute information and environmental attribute information marked in multiple sample face images; and reduce the brightness value of each pixel in the glossy region according to the dimming coefficient of the face image.
  • the image processing apparatus further includes a fusion module configured to fuse the face image with the de-glossed face image to obtain the target image.
  • the extraction module is further configured to: perform skin smoothing on the face region; and perform region extraction on the smoothed face region according to the facial attribute information and the environmental attribute information to obtain the glossy region in the face region.
  • an electronic device including: a processor; and a memory for storing instructions executable by the processor.
  • the processor is configured to execute the instructions to implement the image processing method shown in the first aspect and any implementation manner of the first aspect.
  • a computer-readable storage medium; when instructions in the storage medium are executed by a processor of an electronic device, the electronic device can execute the image processing method shown in the first aspect.
  • a computer program product which can be directly loaded into an internal memory of an electronic device and contains software code; after being loaded and executed by the electronic device, the computer program can implement the image processing method of the above-mentioned first aspect.
  • any image processing apparatus, electronic device, computer-readable storage medium or computer program product provided above is used to execute the corresponding method provided above; therefore, when determining the glossy region and de-glossing it, the facial attribute information and the environmental attribute information are referred to, so the glossy region can be processed appropriately, thereby improving the effect of removing gloss from the face in the face image.
  • FIG. 1 is a schematic flowchart of an image processing method according to an exemplary embodiment
  • FIG. 2 is a schematic diagram of a human face key point according to an exemplary embodiment
  • FIG. 3 is a block diagram of an image processing apparatus according to an exemplary embodiment
  • Fig. 4 is a block diagram of an electronic device according to an exemplary embodiment.
  • words such as “exemplary” or “such as” are used to represent examples, illustrations, or explanations. Any embodiment or design described as “exemplary” or “such as” in the embodiments of the present disclosure should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of such words is intended to present the related concepts in a concrete manner.
  • the data involved in this disclosure may be data authorized by the user or fully authorized by all parties.
  • At least one refers to one or more.
  • “Plural” means two or more.
  • a composition includes one or more objects.
  • the image processing method provided by the embodiments of the present disclosure may be applied to an electronic device or a server.
  • electronic devices include but are not limited to mobile phones, tablet computers, notebook computers, handheld computers, vehicle terminals, and the like.
  • the server may be one server, or may also be a server cluster composed of multiple servers, which is not limited in the present disclosure.
  • FIG. 1 is a schematic flowchart of an image processing method according to an exemplary embodiment.
  • the method shown in FIG. 1 can be applied to an electronic device or a server.
  • the method shown in Figure 1 may include the following steps:
  • a face image is acquired, and a face region in the face image is determined.
  • the present disclosure does not limit the manner of acquiring a face image including a human face.
  • in one implementation manner, a face image sent by another electronic device is received; in another implementation manner, a face image uploaded by a user is acquired;
  • in another implementation manner, a local face image is read; and in yet another implementation manner, the face image is captured by a capture device integrated in the electronic device.
  • the execution subject is an electronic device
  • the electronic device is an electronic device including a capture device such as a camera, and the camera of the electronic device captures a face image in the current environment.
  • in this case, the lighting information of the current environment is fixed.
  • determining the face region in the face image includes the following steps:
  • Step 1 Obtain the key points of the face in the face image.
  • the key points of the face are obtained using the trained face key point detection model as shown in Figure 2.
  • the white dots in Figure 2 are the detected face key points.
  • Step 2 Determine the face area in the face image according to the acquired key points.
  • the closed region formed by connecting the acquired key points is the face region in the face image shown in FIG. 2.
  • alternatively, the face region in the face image is determined based on skin color detection.
  • a histogram method is used to determine a binarization threshold, the face image is binarized by using the binarization threshold, and then a face region is determined in the binarized face image.
  • determination of the face region in the face image in the present disclosure may also be other methods in the prior art, which are not limited in the present disclosure.
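  • the key-point approach above can be sketched as follows: treat the detected key points as the vertices of a closed polygon and rasterize it with a ray-casting point-in-polygon test to obtain a binary face-region mask. This is an illustrative sketch, not the patent's implementation; the key-point coordinates are made-up examples, whereas a real pipeline would take them from a face landmark detection model.

```python
import numpy as np

def polygon_mask(height, width, points):
    """Return a boolean mask of pixels inside the closed polygon `points` ((x, y) pairs)."""
    mask = np.zeros((height, width), dtype=bool)
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    ys, xs = np.mgrid[0:height, 0:width]
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        # Does a horizontal ray cast from each pixel cross the edge (p1, p2)?
        cond = ((y1 <= ys) != (y2 <= ys)) & (
            xs < (x2 - x1) * (ys - y1) / (y2 - y1 + 1e-12) + x1
        )
        mask ^= cond  # toggle parity on each crossing
    return mask

# Hypothetical key points outlining a small face contour in a 100x100 image.
keypoints = [(30, 20), (70, 20), (80, 60), (50, 90), (20, 60)]
face_mask = polygon_mask(100, 100, keypoints)
```

In practice a convex hull of the key points (or a library rasterizer) would be used instead of this brute-force loop; the parity test is shown only to make the "closed region formed by connecting key points" idea concrete.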
  • the face image is input into the attribute information extraction model to obtain the face attribute information of the face region and the environmental attribute information of the face image.
  • the attribute information extraction model may be trained in advance based on sample face images; the facial attribute information includes at least one of skin color, skin texture, brightness or age of the face in the face area; the environmental attribute information includes at least ambient lighting information.
  • the skin color, skin texture, brightness and age of the face, and the ambient lighting information directly affect the average brightness value of the pixels in the face, so the glossy region determined based on this information is more accurate.
  • the face region in the face image is subjected to skin smoothing.
  • an edge-preserving filter may be used to perform skin smoothing on the face region in the face image.
  • the edge-preserving filter may be, for example, a bilateral filter, a guided image filter, or a weighted least squares filter.
  • skin smoothing of the face region can initially reduce part of the gloss, so that the final de-gloss effect is better.
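  • as a minimal sketch of the edge-preserving smoothing mentioned above, here is a brute-force bilateral filter on a single-channel image. The parameter values are illustrative assumptions; production code would use an optimized implementation such as OpenCV's bilateralFilter.

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_space=2.0, sigma_range=0.1):
    """Brute-force bilateral filter: smooths flat areas, preserves strong edges."""
    h, w = img.shape
    out = np.empty_like(img, dtype=float)
    # Spatial Gaussian weights for the (2*radius+1)^2 window, computed once.
    dy, dx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(dx**2 + dy**2) / (2 * sigma_space**2))
    padded = np.pad(img.astype(float), radius, mode="edge")
    for y in range(h):
        for x in range(w):
            window = padded[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            # Range weight: penalize intensity differences, which preserves edges.
            rng = np.exp(-(window - img[y, x])**2 / (2 * sigma_range**2))
            weights = spatial * rng
            out[y, x] = (weights * window).sum() / weights.sum()
    return out

# Smooth a noisy two-tone patch; the sharp vertical edge should survive.
img = np.zeros((20, 20))
img[:, 10:] = 1.0
img += np.random.default_rng(0).normal(0, 0.02, img.shape)
smoothed = bilateral_filter(img)
```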
  • region extraction is performed on the face region according to the face attribute information and the environment attribute information, so as to obtain a glossy region in the face region.
  • the oily area in the face area is extracted by the following steps:
  • step 1 the first adjustment coefficient of the face image is obtained according to the face attribute information and the environment attribute information.
  • the facial attribute information and the environmental attribute information are input into the adjustment coefficient extraction function for operation to obtain the first adjustment coefficient.
  • the adjustment coefficient extraction function is obtained by linear regression analysis based on the adjustment coefficients, facial attribute information and environmental attribute information marked in the multiple sample face images.
  • the marked adjustment coefficients, facial attribute information and environmental attribute information can be determined from the values used when a sample image is manually processed to achieve the expected effect. Therefore, the first adjustment coefficient calculated by the adjustment coefficient extraction function has a higher probability of achieving the expected effect after adjusting the face image.
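  • the "adjustment coefficient extraction function" can be sketched as a linear model fitted by least squares to annotated samples. The feature names and all numeric values below are illustrative assumptions, not taken from the patent.

```python
import numpy as np

# Each row: hypothetical [face brightness, normalized face age, ambient light level].
X = np.array([
    [0.8, 0.2, 0.9],
    [0.5, 0.5, 0.4],
    [0.3, 0.7, 0.2],
    [0.6, 0.3, 0.7],
])
# Manually annotated brightness-adjustment coefficients for each sample image.
y = np.array([1.3, 1.1, 1.05, 1.2])

# Fit y ≈ X @ w + b by ordinary least squares (the linear regression analysis).
A = np.hstack([X, np.ones((len(X), 1))])        # append a bias column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def adjustment_coefficient(face_attrs, env_attrs):
    """Predict the first adjustment coefficient from attribute vectors."""
    feats = np.concatenate([face_attrs, env_attrs, [1.0]])
    return float(feats @ coef)

k = adjustment_coefficient([0.7, 0.25], [0.8])
```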
  • step 2 the brightness value of each pixel in the face image is increased according to the first adjustment coefficient.
  • the color space of the face image may be converted into a color space that includes a luminance component.
  • in one example, a face image in the red (R), green (G), blue (B) color space is converted into any one of the YUV, HSI, HSV, or Lab color spaces. Then, the first parameter is used to adjust the brightness value of each pixel in the face image.
  • the first parameter is used to increase the brightness value of each pixel in the face image according to the same algorithm. If a pixel's increased brightness value is greater than 1, its brightness value is set to 1; if the increased brightness value is not greater than 1, the increased value is used as the pixel's brightness value.
  • the glossy region in the face region can be determined according to a preset standard, and the facial attribute information and the environmental attribute information directly affect the first adjustment coefficient used to adjust the brightness values. Therefore, after the first adjustment coefficient is determined from the facial attribute information and the environmental attribute information and applied to the face image, the determined glossy region is more accurate.
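  • step 2 can be sketched as follows: extract a luminance channel from the RGB image (here the BT.601 luma of YUV), scale it by the first adjustment coefficient, and clamp the result to [0, 1] as the text describes. The coefficient value and pixel values are illustrative assumptions.

```python
import numpy as np

def brighten_luminance(rgb, coefficient):
    """rgb: float array in [0, 1], shape (H, W, 3). Returns the boosted Y channel."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b   # BT.601 luma
    y_boosted = y * coefficient
    return np.clip(y_boosted, 0.0, 1.0)     # values above 1 are clamped to 1

rgb = np.array([[[0.9, 0.8, 0.7],    # bright, skin-like pixel
                 [0.2, 0.15, 0.1]]])  # dark pixel
y_up = brighten_luminance(rgb, 1.4)   # 1.4 is a hypothetical first coefficient
```

The bright pixel saturates and is clamped to 1, while the dark pixel is simply scaled, matching the clamping rule in the bullet above.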
  • step three is further included.
  • in step 3, an exponentiation is performed, according to the second parameter, on the brightness value of each pixel in the face image that was increased by the first parameter.
  • the second parameter is obtained by inputting the facial attribute information and environmental attribute information of the face image into an exponent coefficient calculation function; the exponent coefficient calculation function is obtained by linear regression analysis based on the exponent coefficients, facial attribute information and environmental attribute information marked on multiple sample face images.
  • the second parameter is used as an exponent to exponentiate the brightness value of each pixel in the face image adjusted by the first parameter. Assuming the second parameter is 10 and the brightness value of the first pixel after adjustment is 0.5, the new brightness value of the first pixel is obtained by raising 0.5 to the power of 10, i.e. 0.5^10.
  • step four is also included.
  • in step 4, when step 3 is not performed, the region composed of pixels whose brightness value, after the brightness adjustment, is greater than the first preset threshold is determined as the glossy region in the face region.
  • when step 3 is performed, the region composed of pixels whose brightness value, after the exponentiation, is greater than a second preset threshold is determined as the glossy region in the face region.
  • the glossy region in the face region satisfies the following formula: M = pow(Y, s), where:
  • M is the oily area in the face area
  • pow is the exponential function
  • Y is the brightness value of each pixel in the face area after increasing the brightness value
  • s is the coefficient of the exponential function.
  • s can be obtained by inputting the facial attribute information of the face image and the environmental attribute information into the exponential coefficient calculation function.
  • the exponential coefficient calculation function is obtained based on the linear regression analysis of the exponential coefficient, facial attribute information and environmental attribute information annotated by multiple sample face images.
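  • the extraction described by M = pow(Y, s) can be sketched as follows: raise each boosted brightness value Y (already in [0, 1]) to the power s, then threshold. Since Y ≤ 1, a large s pushes everything except near-saturated highlights toward 0, so only the shine survives the threshold. The exponent and threshold values below are illustrative assumptions.

```python
import numpy as np

def glossy_mask(y_boosted, s=10.0, threshold=0.5):
    """Return a boolean mask of glossy pixels from boosted luminance in [0, 1]."""
    m = np.power(y_boosted, s)   # M = pow(Y, s): suppresses all but highlights
    return m > threshold

y = np.array([[0.5, 0.95, 1.0],
              [0.8, 0.99, 0.3]])
mask = glossy_mask(y)
```

With s = 10, a mid-tone of 0.5 maps to about 0.001 and is rejected, while 0.95 maps to about 0.6 and is kept, so the mask isolates only near-saturated shine.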
  • de-gloss processing is performed on the glossy region to obtain a de-glossed face image.
  • the facial attribute information and the environmental attribute information of the face image are input into a dimming coefficient calculation function to obtain a dimming coefficient of the face image, and the dimming coefficient is used to reduce the brightness value of each pixel in the glossy region.
  • the dimming coefficient calculation function is obtained by linear regression analysis based on the dimming coefficients, facial attribute information and environmental attribute information marked on multiple sample face images.
  • the dimming coefficient can be a percentage less than 1: the brightness value of each pixel in the glossy region is multiplied by the dimming coefficient to obtain a new brightness value, which replaces the original brightness value of the pixel to obtain the de-glossed face image.
  • alternatively, the dimming coefficient is subtracted from the brightness value of each pixel in the glossy region to obtain a new brightness value, which replaces the original brightness value of the pixel to obtain the de-glossed face image.
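  • both dimming variants above can be sketched in a few lines: darken only the pixels inside the glossy mask, either by multiplying by a dimming coefficient less than 1 or by subtracting it. The coefficient and pixel values are illustrative assumptions.

```python
import numpy as np

def de_gloss(luma, mask, dim=0.7, mode="multiply"):
    """Darken masked (glossy) pixels; leave the rest of the image untouched."""
    out = luma.astype(float).copy()
    if mode == "multiply":
        out[mask] = out[mask] * dim                  # first variant: scale down
    else:
        out[mask] = np.clip(out[mask] - dim, 0.0, 1.0)  # second variant: subtract
    return out

luma = np.array([[0.4, 0.95],
                 [0.9, 0.3]])
mask = np.array([[False, True],
                 [True, False]])
dimmed = de_gloss(luma, mask, dim=0.7)
sub = de_gloss(luma, mask, dim=0.7, mode="subtract")
```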
  • in this process, the facial attribute information and the environmental attribute information are referred to, so that the glossy region can be processed appropriately, thereby improving the de-gloss effect on the face in the image.
  • the face image after skin smoothing of the face region is fused with the de-glossed face image to obtain the target image.
  • the fusion can be performed based on the alpha channel, or using a feathering operation.
  • the original face image and the degreasing face image can also be fused to obtain the target image.
  • the effect of the target image obtained in this way is worse than that of the target image obtained by fusing the skin-smoothed face image with the de-glossed face image.
  • the obtained target image has both the details of the skin-smoothed face image and the de-gloss effect of the de-glossed face image.
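  • the alpha-channel fusion mentioned above reduces, in its simplest form, to a pixel-wise alpha blend of the (skin-smoothed) face image with the de-glossed face image, so the result keeps detail while retaining the de-gloss effect. The blend weight and pixel values are illustrative assumptions.

```python
import numpy as np

def fuse(smoothed, de_glossed, alpha=0.5):
    """Pixel-wise alpha blend of two same-shaped float images in [0, 1]."""
    return alpha * smoothed + (1.0 - alpha) * de_glossed

smoothed = np.array([[0.8, 0.6]])     # skin-smoothed image (keeps detail)
de_glossed = np.array([[0.4, 0.6]])   # de-glossed image (darkened highlights)
target = fuse(smoothed, de_glossed, alpha=0.25)
```

A feathering operation would replace the constant alpha with a per-pixel weight map that falls off near the glossy-region boundary, avoiding visible seams.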
  • the image processing apparatus can be divided into functional modules according to the above method examples.
  • each functional module can be divided according to each function, or two or more functions can be integrated into one processing module.
  • the above-mentioned integrated modules can be implemented in the form of hardware, and can also be implemented in the form of software function modules. It should be noted that, the division of modules in the embodiments of the present disclosure is schematic, and is only a logical function division, and there may be other division manners in actual implementation.
  • Fig. 3 is a block diagram of an image processing apparatus according to an exemplary embodiment.
  • the image processing apparatus 200 includes an acquisition module 201, an extraction module 202 and a de-gloss module 203.
  • the image processing apparatus 200 further includes an augmentation module 204 and a fusion module 205.
  • the acquisition module 201 is configured to acquire a face image and determine the face region in the face image;
  • the extraction module 202 is configured to input the face image into the attribute information extraction model, obtain the facial attribute information of the face region and the environmental attribute information of the face image, and perform region extraction on the face region according to the facial attribute information and the environmental attribute information to obtain the glossy region in the face region;
  • the de-gloss module 203 is configured to perform de-gloss processing on the glossy region according to the facial attribute information and the environmental attribute information to obtain the de-glossed face image.
  • the acquisition module 201 can be used to execute S100-S101
  • the extraction module 202 can be used to execute S103
  • the de-gloss module 203 can be used to execute S104
  • the fusion module 205 can be used to execute S105.
  • the facial attribute information includes at least one of skin color, skin texture, brightness, or age of the human face in the face region; and the environmental attribute information includes at least ambient lighting information.
  • the obtaining module 201 is further configured to: obtain the first adjustment coefficient according to the facial attribute information and the environmental attribute information; the extracting module 202 is specifically configured to: increase each pixel in the face image according to the first adjustment coefficient The brightness value of ; determine the oily area in the face area according to the face area after increasing the brightness value.
  • the acquisition module 201 is further configured to input the facial attribute information and the environmental attribute information into the adjustment coefficient extraction function to obtain the first adjustment coefficient; the adjustment coefficient extraction function is obtained by linear regression analysis based on the adjustment coefficients, facial attribute information and environmental attribute information marked in multiple sample face images.
  • the image processing apparatus 200 further includes an augmentation module 204 configured to: convert the color space of the face image into a target color space; the target color space includes information representing the brightness values of pixels in the face image ; Increase the brightness value of each pixel in the face image in the target color space according to the first adjustment coefficient.
  • the extraction module 202 is further configured to: determine the area composed of pixels whose brightness value is greater than the first preset threshold in the face area after the brightness value has been increased as an oily area in the face area.
  • the glossy region in the face region satisfies the following formula: M = pow(Y, s), where:
  • M is the oily area in the face area
  • pow is the exponential function
  • Y is the brightness value of each pixel in the face area after increasing the brightness value
  • s is the coefficient of the exponential function
  • s is obtained by inputting the facial attribute information and the environmental attribute information into the exponent coefficient calculation function; the exponent coefficient calculation function is obtained by linear regression analysis based on the exponent coefficients, facial attribute information and environmental attribute information marked on multiple sample face images.
  • the degreasing module 203 is further configured to: input the facial attribute information and the environmental attribute information into a dimming coefficient calculation function to obtain a dimming coefficient of the face image, the dimming coefficient calculation function being obtained by linear regression analysis of the dimming coefficients, facial attribute information and environmental attribute information annotated in multiple sample face images; and reduce the brightness value of each pixel in the oily area according to the dimming coefficient of the face image.
  • the image processing apparatus 200 further includes: a fusion module 205 configured to fuse the face image and the de-glossed face image to obtain the target image.
  • the extraction module 202 is further configured to: perform skin-smoothing (microdermabrasion) processing on the face area; and perform region extraction on the smoothed face area according to the facial attribute information and the environmental attribute information, so as to obtain the oily area in the face area.
  • Fig. 4 is a block diagram of an electronic device according to an exemplary embodiment.
  • the electronic device 40 includes but is not limited to: a processor 401, a memory 402, a display 403, an input unit 404, an interface unit 405, a power supply 406, and the like.
  • the memory 402 stores instructions executable by the processor 401. It can be understood that the processor 401 is configured to execute any step of the embodiment shown in FIG. 1. That is, the block diagram of the electronic device 40 can serve as a hardware configuration diagram of the image processing apparatus 200.
  • the structure of the electronic device shown in FIG. 4 does not constitute a limitation on the electronic device; the electronic device may include more or fewer components than those shown in FIG. 4, may combine some components, or may have a different arrangement of components.
  • the processor 401 is the control center of the electronic device. It connects the various parts of the entire electronic device through various interfaces and lines, and performs the various functions of the electronic device and processes data by running or executing the software programs and/or modules stored in the memory 402 and calling the data stored in the memory 402, thereby monitoring the electronic device as a whole.
  • the processor 401 may include one or more processing units; optionally, the processor 401 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, application programs and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may alternatively not be integrated into the processor 401.
  • Memory 402 may be used to store software programs as well as various data.
  • the memory 402 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and an application program required by at least one functional unit (such as an acquisition unit, a transceiving unit, or a merging unit). Additionally, the memory 402 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
  • the display 403 is used to display information input by the user or information provided to the user.
  • the display 403 may include a display panel, and the display panel may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (Organic Light-Emitting Diode, OLED), or the like.
  • the input unit 404 may include a Graphics Processing Unit (GPU), which processes image data of still pictures or videos obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode.
  • the processed image frames can be displayed on display 403 .
  • the image frames processed by the graphics processor may be stored in memory 402 (or other storage medium).
  • the interface unit 405 is an interface for connecting an external device to the electronic device 400 .
  • external devices may include wired or wireless headset ports, external power (or battery charger) ports, wired or wireless data ports, memory card ports, ports for connecting devices with identification modules, audio input/output (I/O) ports, video I/O ports, headphone ports, and more.
  • the interface unit 405 may be used to receive input (e.g., data information) from an external device and transmit the received input to one or more elements within the electronic device 400, or may be used to transfer data between the electronic device 400 and an external device.
  • the power supply 406 (such as a battery) can be used to supply power to various components.
  • the power supply 406 can be logically connected to the processor 401 through a power management system, so as to realize functions such as charging management, discharging management and power consumption management through the power management system.
  • an embodiment of the present disclosure further provides a storage medium including instructions, such as a memory 402 including instructions, and the above-mentioned instructions can be executed by the processor 401 of the electronic device 400 to complete the above-mentioned method.
  • the storage medium may be a non-transitory computer-readable storage medium; for example, the non-transitory computer-readable storage medium may be a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
  • the receiving function of the above-mentioned acquisition module 201 may be implemented by the interface unit 405 in FIG. 4 .
  • the processing functions of the acquisition module 201 , the extraction module 202 , the degreasing module 203 , the enlargement module 204 and the fusion module 205 can all be implemented by the processor 401 in FIG. 4 calling the computer program stored in the memory 402 .
  • an embodiment of the present disclosure also provides a computer program product including one or more instructions, which can be executed by the processor 401 of the electronic device 400 to accomplish the above method.
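The interplay of the modules summarized above can be condensed into a short end-to-end sketch. All function and variable names, the [0, 1] luma representation, and the fixed fusion weight below are illustrative assumptions, not the patent's reference implementation:

```python
import numpy as np

def degloss_pipeline(face_y, k1, s, dim, threshold=0.8):
    """Illustrative de-glossing pipeline on a luma (brightness) channel in [0, 1].

    face_y    : HxW array of brightness values for the face region
    k1        : first adjustment coefficient (from facial/environment attributes)
    s         : exponent for the oily-area mask M = pow(Y, s)
    dim       : dimming coefficient for the oily area
    threshold : first preset brightness threshold
    """
    # Step 1 (augmentation module): raise brightness by the first adjustment coefficient.
    brightened = np.clip(face_y * k1, 0.0, 1.0)
    # Step 2 (extraction module): soft oily-area mask M = pow(Y, s), gated by the threshold.
    mask = np.power(brightened, s) * (brightened > threshold)
    # Step 3 (degreasing module): reduce brightness inside the oily area.
    deglossed = face_y * (1.0 - dim * mask)
    # Step 4 (fusion module): blend original and de-glossed images to keep detail.
    alpha = 0.5
    return alpha * face_y + (1.0 - alpha) * deglossed
```

Dark pixels never exceed the threshold after brightening, so they pass through unchanged; only bright (glossy) pixels are dimmed and then softened by the final blend.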

Abstract

An image processing method and apparatus, and an electronic device and a storage medium, which are used to improve a facial de-glossing effect in a facial image. The method comprises: acquiring a facial image, and determining a facial region in the facial image (S100); inputting the facial image into an attribute information extraction model so as to obtain facial attribute information of the facial region and environmental attribute information of the facial image (S101); performing region extraction on the facial region according to the facial attribute information and the environmental attribute information so as to obtain a glossy region in the facial region (S103); and performing a de-glossing treatment on the glossy region according to the facial attribute information and the environmental attribute information so as to obtain a de-glossed facial image (S104).

Description

Image processing method and apparatus
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority to Chinese patent application No. 202011287995.6, filed on November 17, 2020, the entire disclosure of which is incorporated herein by reference as a part of this application.
TECHNICAL FIELD
The present disclosure relates to the technical field of image processing, and in particular, to an image processing method and apparatus.
BACKGROUND
In the process of capturing a face image, factors such as lighting easily cause the captured image to include glossy (oily-shine) areas, which degrades the appearance of the person in the image. To obtain a better face image, de-glossing processing needs to be performed on the face in the image.
At present, facial de-glossing technology mainly includes: first, obtaining the brightness value of each pixel in the face image; then, determining the difference between the brightness value of each pixel and a preset brightness value; and finally, determining the glossy area in the face image according to the determined differences and performing de-glossing processing on that area.
SUMMARY
The present disclosure provides an image processing method, an apparatus, an electronic device and a storage medium, so as to improve the facial de-glossing effect in a face image.
The technical solutions of the present disclosure are as follows:
According to a first aspect of the embodiments of the present disclosure, an image processing method is provided, the method including: acquiring a face image and determining a face area in the face image; inputting the face image into an attribute information extraction model to obtain facial attribute information of the face area and environmental attribute information of the face image; performing region extraction on the face area according to the facial attribute information and the environmental attribute information to obtain a glossy area in the face area; and performing de-glossing processing on the glossy area according to the facial attribute information and the environmental attribute information to obtain a de-glossed face image.
In the embodiments of the present disclosure, the facial attribute information and the environmental attribute information are referred to both when determining the glossy area and when de-glossing it, so the glossy area can be determined more accurately and processed to an appropriate degree, thereby improving the facial de-glossing effect in the face image.
In one implementation, the facial attribute information includes at least one of the skin color, skin quality, brightness or age of the face in the face area; and the environmental attribute information includes at least ambient lighting information.
The skin color, skin quality, brightness and age of the face, together with the ambient lighting information, directly affect the average brightness value of the pixels of the face; therefore, a glossy area determined from this information is more accurate.
In another implementation, the step of performing region extraction on the face area according to the facial attribute information and the environmental attribute information to obtain the glossy area in the face area includes: obtaining a first adjustment coefficient according to the facial attribute information and the environmental attribute information; increasing the brightness value of each pixel in the face image according to the first adjustment coefficient; and determining the glossy area in the face area according to the face area after the brightness values have been increased.
In this way, after the brightness values of the pixels of the face image have been raised to a common standard, the glossy area in the face area can be determined according to a preset criterion. Since the facial attribute information and the environmental attribute information directly affect the first adjustment coefficient used to adjust the brightness values, determining the first adjustment coefficient from this information before adjusting the face image makes the determined glossy area more accurate.
In another implementation, the step of obtaining the first adjustment coefficient according to the facial attribute information and the environmental attribute information includes: inputting the facial attribute information and the environmental attribute information into an adjustment coefficient extraction function to obtain the first adjustment coefficient; the adjustment coefficient extraction function is obtained by linear regression analysis of the adjustment coefficients, facial attribute information and environmental attribute information annotated in multiple sample face images.
In this way, performing linear regression analysis on the adjustment coefficients, facial attribute information and environmental attribute information annotated in multiple sample face images yields the interdependence among them. The annotated values can be determined from the settings at which manual processing of the sample images achieved the desired effect. Therefore, the first adjustment coefficient computed by the adjustment coefficient extraction function is also more likely to achieve the desired effect when the face image is adjusted.
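As a hedged illustration of this step, the adjustment coefficient extraction function can be fitted by ordinary least squares over annotated samples. The numeric attribute encodings and sample values below are invented for demonstration only:

```python
import numpy as np

# Annotated training data (hypothetical): each row numerically encodes facial
# attributes (skin color, skin quality, brightness, age) plus an ambient-light
# value; y holds the adjustment coefficient annotated for that sample image.
X = np.array([
    [0.6, 0.7, 0.5, 25, 0.8],
    [0.4, 0.5, 0.3, 40, 0.4],
    [0.7, 0.6, 0.6, 30, 0.9],
    [0.5, 0.4, 0.2, 55, 0.3],
])
y = np.array([1.15, 1.40, 1.10, 1.50])

# Ordinary least squares with a bias term: the fitted weights define the
# adjustment coefficient extraction function.
X1 = np.hstack([X, np.ones((X.shape[0], 1))])
w, *_ = np.linalg.lstsq(X1, y, rcond=None)

def adjustment_coefficient(face_attrs, env_light):
    """Apply the fitted linear function to new attribute values."""
    feats = np.append(np.append(face_attrs, env_light), 1.0)
    return float(feats @ w)
```

With more annotated samples than coefficients, the fit generalizes rather than interpolates; the feature encoding itself is a design choice the patent leaves open.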
In another implementation, the step of increasing the brightness value of each pixel in the face image according to the first adjustment coefficient includes: converting the color space of the face image into a target color space, the target color space including information representing the brightness values of the pixels in the face image; and increasing the brightness value of each pixel of the face image in the target color space according to the first adjustment coefficient.
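A minimal sketch of this step, assuming the target color space is full-range YCbCr (its Y channel carries the brightness values; the conversion constants are the standard JFIF/BT.601 ones, and the choice of YCbCr is an assumption, since the patent only requires a color space with a brightness component):

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert an HxWx3 RGB image in [0, 1] to full-range YCbCr."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 0.5 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 0.5 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def brighten(ycbcr, k1):
    """Increase the brightness (Y) channel by the first adjustment coefficient."""
    out = ycbcr.copy()
    out[..., 0] = np.clip(out[..., 0] * k1, 0.0, 1.0)
    return out
```

Scaling only the Y channel changes perceived brightness while leaving the chroma channels, and hence the skin tone, untouched.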
In another implementation, the step of determining the glossy area in the face area according to the face area after the brightness values have been increased includes: determining the area composed of pixels whose brightness values are greater than a first preset threshold, in the face area after the brightness values have been increased, as the glossy area in the face area.
In another implementation, the glossy area in the face area satisfies the formula M = pow(Y, s), where M is the glossy area in the face area, pow is the power function, Y is the brightness value of each pixel in the face area after the brightness values have been increased, and s is the coefficient of the power function; s is obtained by inputting the facial attribute information and the environmental attribute information into an exponential coefficient calculation function, which is obtained by linear regression analysis of the exponential coefficients, facial attribute information and environmental attribute information annotated in multiple sample face images.
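A hedged sketch of the mask computation, restricting M = pow(Y, s) to pixels above the first preset threshold (combining the formula with the thresholding rule of the preceding paragraph is an assumption):

```python
import numpy as np

def oily_mask(y_brightened, s, threshold):
    """Soft glossy-area mask M = pow(Y, s) over pixels above the preset threshold."""
    m = np.power(y_brightened, s)
    m[y_brightened <= threshold] = 0.0  # pixels at or below the threshold are not glossy
    return m
```

Because Y lies in [0, 1], a larger exponent s suppresses moderately bright pixels and concentrates the mask on the brightest highlights.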
In another implementation, the step of performing de-glossing processing on the glossy area according to the facial attribute information and the environmental attribute information includes: inputting the facial attribute information and the environmental attribute information into a dimming coefficient calculation function to obtain a dimming coefficient of the face image, the dimming coefficient calculation function being obtained by linear regression over the dimming coefficients, facial attribute information and environmental attribute information annotated in multiple face images; and reducing the brightness value of each pixel in the glossy area according to the dimming coefficient of the face image.
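A minimal sketch of the dimming step, assuming the dimming coefficient scales brightness down in proportion to the glossy-area mask (the proportional rule and function name are illustrative):

```python
import numpy as np

def dim_glossy(y, mask, dim_coeff):
    """Reduce brightness inside the glossy area; non-glossy pixels (mask 0) are untouched."""
    return y * (1.0 - dim_coeff * mask)
```

Using the soft mask as a weight makes the dimming fade out smoothly at the edge of the glossy area instead of leaving a hard seam.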
In another implementation, the method further includes: fusing the face image with the de-glossed face image to obtain a target image.
In this way, the fused image preserves both the details of the face image and the de-glossing effect.
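The fusion step can be sketched as a simple alpha blend; the fixed weight is an assumption, as the patent does not specify the fusion rule here:

```python
import numpy as np

def fuse(original, deglossed, alpha=0.5):
    """Blend the original and de-glossed images so fine detail survives."""
    return alpha * original + (1.0 - alpha) * deglossed
```

A smaller alpha favors the de-glossed result; a larger alpha restores more of the original texture and highlights.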
In another implementation, the step of performing region extraction on the face area according to the facial attribute information and the environmental attribute information to obtain the glossy area in the face area includes: performing skin-smoothing (microdermabrasion) processing on the face area; and performing region extraction on the smoothed face area according to the facial attribute information and the environmental attribute information to obtain the glossy area in the face area.
In this way, smoothing the face area before determining the glossy area preliminarily reduces part of the gloss, so that the final de-glossing effect is better.
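As a hedged stand-in for the skin-smoothing step, a plain box blur suffices to show the idea; real beautification pipelines typically use edge-preserving filters (e.g. bilateral filtering), which the patent does not specify:

```python
import numpy as np

def smooth_skin(y, radius=2):
    """Box-blur stand-in for skin smoothing on a 2-D brightness channel."""
    h, w = y.shape
    out = np.zeros_like(y)
    for i in range(h):
        for j in range(w):
            # Average over a (2*radius+1)-square window clipped to the image.
            i0, i1 = max(0, i - radius), min(h, i + radius + 1)
            j0, j1 = max(0, j - radius), min(w, j + radius + 1)
            out[i, j] = y[i0:i1, j0:j1].mean()
    return out
```

Averaging spreads out isolated bright specks, so part of the gloss is already attenuated before the mask is computed.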
According to a second aspect of the embodiments of the present disclosure, an image processing apparatus is provided, including: an acquiring module configured to acquire a face image and determine a face area in the face image; an extraction module configured to input the face image into an attribute information extraction model to obtain facial attribute information of the face area and environmental attribute information of the face image, and to perform region extraction on the face area according to the facial attribute information and the environmental attribute information to obtain a glossy area in the face area; and a de-glossing module configured to perform de-glossing processing on the glossy area according to the facial attribute information and the environmental attribute information to obtain a de-glossed face image.
In some embodiments, the facial attribute information includes at least one of the skin color, skin quality, brightness or age of the face in the face area; the environmental attribute information includes at least ambient lighting information.
In some embodiments, the acquiring module is further configured to obtain a first adjustment coefficient according to the facial attribute information and the environmental attribute information; the extraction module is specifically configured to increase the brightness value of each pixel in the face image according to the first adjustment coefficient, and determine the glossy area in the face area according to the face area after the brightness values have been increased.
In some embodiments, the acquiring module is further configured to input the facial attribute information and the environmental attribute information into an adjustment coefficient extraction function to obtain the first adjustment coefficient; the adjustment coefficient extraction function is obtained by linear regression analysis of the adjustment coefficients, facial attribute information and environmental attribute information annotated in multiple sample face images.
In some embodiments, the image processing apparatus further includes an augmentation module configured to: convert the color space of the face image into a target color space, the target color space including information representing the brightness values of pixels in the face image; and increase the brightness value of each pixel of the face image in the target color space according to the first adjustment coefficient.
In some embodiments, the extraction module is further configured to determine the area composed of pixels whose brightness values are greater than a first preset threshold, in the face area after the brightness values have been increased, as the glossy area in the face area.
In some embodiments, the glossy area in the face area satisfies the formula M = pow(Y, s), where M is the glossy area in the face area, pow is the power function, Y is the brightness value of each pixel in the face area after the brightness values have been increased, and s is the coefficient of the power function; s is obtained by inputting the facial attribute information and the environmental attribute information into an exponential coefficient calculation function, which is obtained by linear regression analysis of the exponential coefficients, facial attribute information and environmental attribute information annotated in multiple sample face images.
In some embodiments, the de-glossing module is further configured to: input the facial attribute information and the environmental attribute information into a dimming coefficient calculation function to obtain a dimming coefficient of the face image, the dimming coefficient calculation function being obtained by linear regression over the dimming coefficients, facial attribute information and environmental attribute information annotated in multiple face images; and reduce the brightness value of each pixel in the glossy area according to the dimming coefficient of the face image.
In some embodiments, the image processing apparatus further includes a fusion module configured to fuse the face image with the de-glossed face image to obtain a target image.
In some embodiments, the extraction module is further configured to: perform skin-smoothing processing on the face area; and perform region extraction on the smoothed face area according to the facial attribute information and the environmental attribute information to obtain the glossy area in the face area.
According to a third aspect of the embodiments of the present disclosure, an electronic device is provided, including: a processor; and a memory for storing instructions executable by the processor. The processor is configured to execute the instructions to implement the image processing method shown in the first aspect or any implementation of the first aspect.
According to a fourth aspect of the embodiments of the present disclosure, a computer-readable storage medium is provided; when instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to execute the image processing method shown in the first aspect.
According to a fifth aspect of the embodiments of the present disclosure, a computer program product is provided, which can be directly loaded into an internal memory of an electronic device and contains software code; after the computer program is loaded and executed by the electronic device, the image processing method shown in the first aspect can be implemented.
Any of the image processing apparatuses, electronic devices, computer-readable storage media or computer program products provided above is used to execute the corresponding method provided above. Therefore, each of them refers to the facial attribute information and the environmental attribute information when determining the glossy area and when de-glossing it, so that the glossy area can be processed to an appropriate degree, thereby improving the facial de-glossing effect in the face image.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the present disclosure.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are incorporated into and constitute a part of this specification, illustrate embodiments consistent with the present disclosure, serve together with the description to explain the principles of the present disclosure, and do not unduly limit the present disclosure.
FIG. 1 is a schematic flowchart of an image processing method according to an exemplary embodiment;
FIG. 2 is a schematic diagram of face keypoints according to an exemplary embodiment;
FIG. 3 is a block diagram of an image processing apparatus according to an exemplary embodiment;
FIG. 4 is a block diagram of an electronic device according to an exemplary embodiment.
DETAILED DESCRIPTION
To enable those of ordinary skill in the art to better understand the technical solutions of the present disclosure, the technical solutions in the embodiments of the present disclosure are described clearly and completely below with reference to the accompanying drawings.
It should be noted that, in the embodiments of the present disclosure, words such as "exemplary" or "for example" are used to indicate an example, illustration or explanation. Any embodiment or design described as "exemplary" or "for example" in the embodiments of the present disclosure should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of such words is intended to present the related concepts in a concrete manner.
It should be noted that the terms "first", "second" and the like in the description and claims of the present disclosure and in the above drawings are used to distinguish similar objects, and are not necessarily used to describe a specific order or sequence. It should be understood that data so used may be interchanged where appropriate, so that the embodiments of the present disclosure described herein can be implemented in orders other than those illustrated or described herein. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of apparatuses and methods consistent with some aspects of the present disclosure as detailed in the appended claims.
The data involved in the present disclosure may be data authorized by the user or fully authorized by all parties.
In the embodiments of the present disclosure, "at least one" means one or more, and "multiple" means two or more.
In the embodiments of the present disclosure, "and/or" merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A alone, both A and B, or B alone. In addition, the character "/" herein generally indicates an "or" relationship between the associated objects.
In the embodiments of the present disclosure, a combination includes one or more objects.
It should be noted that the image processing method provided by the embodiments of the present disclosure may be applied to an electronic device or a server. The electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a handheld computer, a vehicle-mounted terminal, and the like. The server may be a single server or a server cluster composed of multiple servers, which is not limited in the present disclosure.
The technical solutions in the embodiments of the present disclosure are described clearly and completely below with reference to the accompanying drawings of the embodiments. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present disclosure. Based on the embodiments of the present disclosure, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present disclosure.
FIG. 1 is a schematic flowchart of an image processing method according to an exemplary embodiment. The method shown in FIG. 1 may be applied to an electronic device or a server, and may include the following steps:

In S100, a face image is acquired, and a face region in the face image is determined.

The present disclosure does not limit the manner of acquiring a face image that includes a human face. In one implementation, the face image is received from another electronic device; in another implementation, a face image uploaded by a user is acquired; in another implementation, a locally stored face image is read; and in yet another implementation, the face image is captured by a capture device integrated in the electronic device.

In an example, the execution subject is an electronic device that includes a capture device such as a camera, and the camera of the electronic device captures a face image in the current environment, where the lighting information of the current environment is fixed.
In one implementation, determining the face region in the face image includes the following steps:

Step 1: acquire key points of the face in the face image.

In an example, a trained face key-point detection model is used to acquire the key points of the face, as shown in FIG. 2. The white dots in FIG. 2 are the detected face key points.

Step 2: determine the face region in the face image according to the acquired key points.

Based on the example of face key points in FIG. 2, the closed region formed by connecting the acquired key points is the face region in the face image shown in FIG. 2.
In another implementation, the face region in the face image is determined based on skin-color detection.

Specifically, the face region in the face image is determined after the face region and the non-face region of the face image are binarized.

In an example, a histogram method is used to determine a binarization threshold, the face image is binarized using the binarization threshold, and the face region is then determined in the binarized face image.
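The histogram-based binarization described above can be sketched as follows. Otsu's method is used here as one illustrative way to derive a threshold from the histogram (the disclosure does not name a specific histogram method), and the synthetic image stands in for a real face photograph:

```python
import numpy as np

def otsu_threshold(gray: np.ndarray) -> int:
    """Pick a binarization threshold from the image histogram (Otsu's method)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = hist.sum()
    cum = np.cumsum(hist)                            # cumulative pixel counts
    cum_mean = np.cumsum(hist * np.arange(256))      # cumulative intensity sums
    best_t, best_var = 0, -1.0
    for t in range(255):
        w0 = cum[t] / total
        w1 = 1.0 - w0
        if w0 == 0 or w1 == 0:
            continue
        mu0 = cum_mean[t] / cum[t]
        mu1 = (cum_mean[-1] - cum_mean[t]) / (total - cum[t])
        var = w0 * w1 * (mu0 - mu1) ** 2             # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Synthetic "face image": a bright, skin-like square on a dark background.
img = np.zeros((100, 100), dtype=np.uint8)
img[30:70, 30:70] = 200

t = otsu_threshold(img)
binary = (img > t).astype(np.uint8)  # 1 = candidate face region
```

The connected set of foreground pixels in `binary` then gives the candidate face region.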
It should be noted that the face region in the face image may also be determined by other methods in the prior art, which is not limited in the present disclosure.

In S101, the face image is input into an attribute information extraction model to obtain facial attribute information of the face region and environmental attribute information of the face image.

The attribute information extraction model may be trained in advance on sample face images. The facial attribute information includes at least one of the skin color, skin texture, brightness, or age of the face in the face region; the environmental attribute information includes at least ambient lighting information.

The skin color, skin texture, brightness, and age of the face, together with the ambient lighting information, directly affect the average brightness value of each pixel of the face. Therefore, extracting this information from the face image yields a more accurate result when the glossy (oily-shine) region of the face image is determined.
In some embodiments, in S102, skin-smoothing (buffing) processing is performed on the face region in the face image.

In some embodiments, an edge-preserving filter may be used to perform the skin-smoothing processing on the face region in the face image.

The embodiments of the present disclosure do not limit the edge-preserving filter used. Exemplarily, the edge-preserving filter may be any one of a bilateral filter, a guided image filter, a weighted least squares filter, or another edge-preserving filter.
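As an illustration of why an edge-preserving filter suits the skin-smoothing step, the following is a minimal bilateral-filter sketch in plain NumPy: each pixel is averaged with weights that fall off both with spatial distance and with intensity difference, so flat skin areas are smoothed while strong edges (facial contours) survive. The parameter values are illustrative:

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Minimal bilateral (edge-preserving) smoothing for a float image in [0, 1]."""
    h, w = img.shape
    out = np.zeros_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))  # distance weights
    padded = np.pad(img, radius, mode="edge")
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range weights: pixels with similar intensity count more,
            # which is what preserves edges while smoothing flat areas.
            rng = np.exp(-((patch - img[i, j]) ** 2) / (2 * sigma_r ** 2))
            weights = spatial * rng
            out[i, j] = (weights * patch).sum() / weights.sum()
    return out

# A step edge: the filter should smooth each side but keep the edge sharp.
img = np.zeros((10, 10))
img[:, 5:] = 1.0
smoothed = bilateral_filter(img)
```

With a plain Gaussian blur the edge would bleed across; here the range term keeps the two sides separate.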
In this way, performing the skin-smoothing processing on the face region before the glossy region is determined can preliminarily reduce part of the gloss, so that the final de-gloss effect is better.

In S103, region extraction is performed on the face region according to the facial attribute information and the environmental attribute information, so as to obtain the glossy region in the face region.

In some embodiments, the glossy region in the face region is extracted through the following steps:

In step 1, a first adjustment coefficient of the face image is acquired according to the facial attribute information and the environmental attribute information.
In some embodiments, the facial attribute information and the environmental attribute information are input into an adjustment coefficient extraction function to obtain the first adjustment coefficient. The adjustment coefficient extraction function is obtained by linear regression analysis of the adjustment coefficients, facial attribute information, and environmental attribute information annotated on multiple sample face images.

It can be understood that the annotated adjustment coefficients, facial attribute information, and environmental attribute information may be determined from the values used when the sample images were manually processed to achieve the expected effect. Therefore, the first adjustment coefficient calculated by the adjustment coefficient extraction function is also more likely to achieve the expected effect after the face image is adjusted.
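The adjustment coefficient extraction function can be sketched as an ordinary-least-squares fit. All numbers below are hypothetical stand-ins for annotated sample data: each row of X holds a few numeric facial/environmental attributes (face brightness, age, ambient lighting), and y holds the adjustment coefficient chosen manually for that sample:

```python
import numpy as np

# Hypothetical annotated samples: columns are face brightness, age,
# and ambient lighting; y is the manually chosen adjustment coefficient.
X = np.array([
    [0.40, 25, 0.8],
    [0.55, 32, 0.6],
    [0.35, 41, 0.9],
    [0.60, 28, 0.5],
    [0.50, 35, 0.7],
    [0.45, 30, 0.8],
])
y = np.array([1.4, 1.2, 1.5, 1.1, 1.3, 1.35])

# Ordinary least squares with a bias term yields the linear
# "adjustment coefficient extraction function".
A = np.hstack([X, np.ones((len(X), 1))])
w, _, _, _ = np.linalg.lstsq(A, y, rcond=None)

def first_adjustment_coefficient(attrs):
    """Predict the first adjustment coefficient from attribute values."""
    return float(np.append(np.asarray(attrs, dtype=float), 1.0) @ w)
```

At inference time, the attributes produced by the extraction model for a new face image would be fed to `first_adjustment_coefficient`.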
In step 2, the brightness value of each pixel in the face image is increased according to the first adjustment coefficient.

In some embodiments, before the brightness value of each pixel in the face image is adjusted, if the color space of the face image is a color space used for display, the color space of the face image may be converted into a color space that includes a luminance component. In an example, a face image in the red (R), green (G), blue (B) color space is converted into any one of the YUV, HSI, HSV, or Lab color space. Then, the first parameter is used to adjust the brightness value of each pixel in the face image.

In an example, the first parameter is used to increase the brightness value of each pixel in the face image according to the same algorithm. If the increased brightness value of a pixel is greater than 1, the brightness value of the pixel is set to 1; if the increased brightness value of a pixel is less than 1, the increased value is used as the brightness value of the pixel.
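The clamping rule in this example can be sketched as follows. Multiplying the luminance by the first parameter is one plausible reading of "the same algorithm", which the disclosure does not spell out:

```python
import numpy as np

def increase_brightness(luma: np.ndarray, coeff: float) -> np.ndarray:
    """Scale the luminance channel (values in [0, 1]) by the first
    adjustment coefficient, clamping any result above 1.0 back to 1.0.
    Multiplicative scaling is an assumed choice, not fixed by the text."""
    boosted = luma * coeff
    return np.minimum(boosted, 1.0)

luma = np.array([0.2, 0.5, 0.9])
out = increase_brightness(luma, 1.5)  # -> [0.3, 0.75, 1.0] (last value clamped)
```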
In this way, after the brightness value of each pixel in the face image is raised to the same standard, the glossy region in the face region can be determined according to a preset standard, and the facial attribute information and the environmental attribute information directly affect the first adjustment coefficient used to adjust the brightness values. Therefore, when the first adjustment coefficient is determined according to the facial attribute information and the environmental attribute information and the face image is adjusted accordingly, the determined glossy region is more accurate.

In some embodiments, step 3 is further included. In step 3, an exponential operation is performed, according to a second parameter, on the brightness value of each pixel in the face image whose brightness has been increased using the first parameter. The second parameter is obtained by inputting the facial attribute information and the environmental attribute information of the face image into an exponential coefficient calculation function; the exponential coefficient calculation function is obtained by linear regression analysis of the exponential coefficients, facial attribute information, and environmental attribute information annotated on multiple sample face images.

In an example, the second parameter is used as an exponent to perform a power operation on the brightness value of each pixel in the face image adjusted using the first parameter. Assuming that the second parameter is 10 and the brightness value of a first pixel in the adjusted face image is 0.5, raising the brightness value 0.5 of the first pixel to the power of 10 yields a new brightness value that can be expressed as (0.5)^10.
In some embodiments, step 4 is further included. In step 4, if step 3 has not been performed, a region composed of pixels whose brightness values are greater than a first preset threshold in the face region after the brightness adjustment is determined as the glossy region in the face region.

If step 3 has been performed, a region composed of pixels whose brightness values are greater than a second preset threshold in the face image after the exponential operation is determined as the glossy region in the face region.

The glossy region in the face region satisfies the following formula:

M = pow(Y, s)

where M is the glossy region in the face region, pow is the power function, Y is the brightness value of each pixel in the face region after the brightness values are increased, and s is the coefficient (exponent) of the power function. s can be obtained by inputting the facial attribute information and the environmental attribute information of the face image into the exponential coefficient calculation function; the exponential coefficient calculation function is obtained by linear regression analysis of the exponential coefficients, facial attribute information, and environmental attribute information annotated on multiple sample face images.
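Putting steps 2 through 4 together, the glossy-region extraction can be sketched as below. The values of s and of the preset threshold are illustrative assumptions; in the method they would come from the exponential coefficient calculation function and the preset standard, respectively:

```python
import numpy as np

def glossy_region_mask(luma: np.ndarray, s: float, threshold: float) -> np.ndarray:
    """Apply M = pow(Y, s) to the (already brightness-boosted) luminance Y
    and keep pixels whose result exceeds the preset threshold."""
    m = np.power(luma, s)
    return m > threshold

# Boosted luminance values for a tiny 2x2 patch; only near-saturated
# pixels survive raising to a high power.
luma = np.array([[0.30, 0.95],
                 [0.98, 0.60]])
mask = glossy_region_mask(luma, s=10, threshold=0.5)
# 0.95**10 ~= 0.60 and 0.98**10 ~= 0.82 pass; 0.30**10 and 0.60**10 do not.
```

The power operation suppresses mid-range brightness sharply, so only truly shiny highlights remain above the threshold.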
In S104, de-gloss processing is performed on the glossy region according to the facial attribute information and the environmental attribute information, so as to obtain a de-glossed face image.

In some embodiments, the facial attribute information and the environmental attribute information of the face image are input into a dimming coefficient calculation function to obtain a dimming coefficient of the face image, and the brightness value of each pixel in the glossy region is reduced according to the dimming coefficient of the face image. The dimming coefficient calculation function may be obtained by linear regression analysis of the dimming coefficients, facial attribute information, and environmental attribute information annotated on multiple sample face images.

In one example, the dimming coefficient may be a percentage less than 1: the brightness value of each pixel in the glossy region is multiplied by the dimming coefficient to obtain a new brightness value for that pixel, and the original brightness value of the pixel is replaced with the new one, so as to obtain the de-glossed face image. In another example, the dimming coefficient is subtracted from the brightness value of each pixel in the glossy region to obtain the new brightness value, which replaces the original brightness value, so as to obtain the de-glossed face image.
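The multiplicative de-gloss variant above can be sketched as follows; the dimming coefficient of 0.8 is illustrative, standing in for the value produced by the dimming coefficient calculation function:

```python
import numpy as np

def remove_gloss(luma: np.ndarray, mask: np.ndarray, dim_coeff: float = 0.8) -> np.ndarray:
    """Darken only pixels inside the glossy-region mask by multiplying
    their brightness by the dimming coefficient (the subtractive variant,
    new = old - dim_coeff, is the alternative described in the text)."""
    out = luma.copy()
    out[mask] = out[mask] * dim_coeff
    return out

luma = np.array([0.95, 0.40, 0.90])
mask = np.array([True, False, True])   # glossy pixels at indices 0 and 2
deglossed = remove_gloss(luma, mask)   # -> [0.76, 0.40, 0.72]
```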
In the embodiments of the present disclosure, the facial attribute information and the environmental attribute information are referenced both in determining the glossy region and in performing the de-gloss processing on it, which enables the glossy region to be processed to an appropriate degree, thereby improving the de-gloss effect on the face in the face image.

In some embodiments, in S105, the face image whose face region has undergone the skin-smoothing processing is blended with the de-glossed face image to obtain a target image.

In some embodiments, the face image whose face region has undergone the skin-smoothing processing is fused with the de-glossed face image to obtain the target image. The fusion may be performed based on an alpha channel, or by using a feathering operation.
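The alpha-channel fusion of S105 can be sketched as a per-pixel weighted mix; the alpha map (for example, a feathered mask of the glossy region) is an assumed input, not specified by the disclosure:

```python
import numpy as np

def alpha_blend(smoothed: np.ndarray, deglossed: np.ndarray, alpha: np.ndarray) -> np.ndarray:
    """Per-pixel fusion: alpha in [0, 1] weights the de-glossed image
    against the skin-smoothed image, so gloss removal applies strongly
    inside the (feathered) glossy region and fades out at its border."""
    return alpha * deglossed + (1.0 - alpha) * smoothed

smoothed  = np.array([0.8, 0.6])
deglossed = np.array([0.4, 0.6])
alpha     = np.array([0.5, 1.0])
target = alpha_blend(smoothed, deglossed, alpha)  # -> [0.6, 0.6]
```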
It can be understood that the original face image may instead be fused with the de-glossed face image to obtain the target image; however, the target image obtained in this way is inferior in effect to the target image obtained by fusing the skin-smoothed face image with the de-glossed face image.

In this way, the obtained target image has both the details of the skin-smoothed face image and the de-gloss effect of the de-glossed face image.

The foregoing mainly describes the solutions provided by the embodiments of the present disclosure from the perspective of the method. To realize the above functions, corresponding hardware structures and/or software modules for performing each function are included. Those skilled in the art will readily appreciate that, in combination with the method steps of the examples described in the embodiments disclosed herein, the present disclosure can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the specific application and design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present disclosure.

In the embodiments of the present disclosure, the image processing apparatus may be divided into functional modules according to the above method examples. For example, each functional module may be divided corresponding to one function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. It should be noted that the division of modules in the embodiments of the present disclosure is schematic and is merely a division of logical functions; other division manners are possible in actual implementation.
FIG. 3 is a block diagram of an image processing apparatus according to an exemplary embodiment. Referring to FIG. 3, the image processing apparatus 200 includes an acquisition module 201, an extraction module 202, and a de-gloss module 203; optionally, the image processing apparatus 200 further includes an increase module 204 and a fusion module 205. The acquisition module 201 is configured to acquire a face image and determine the face region in the face image. The extraction module 202 is configured to input the face image into the attribute information extraction model to obtain the facial attribute information of the face region and the environmental attribute information of the face image, and to perform region extraction on the face region according to the facial attribute information and the environmental attribute information, so as to obtain the glossy region in the face region. The de-gloss module 203 is configured to perform de-gloss processing on the glossy region according to the facial attribute information and the environmental attribute information, so as to obtain the de-glossed face image. For example, with reference to FIG. 1, the acquisition module 201 may be used to perform S100-S101, the extraction module 202 may be used to perform S103, the de-gloss module 203 may be used to perform S104, and the fusion module 205 may be used to perform S105.
In some embodiments, the facial attribute information includes at least one of the skin color, skin texture, brightness, or age of the face in the face region, and the environmental attribute information includes at least ambient lighting information.

In some embodiments, the acquisition module 201 is further configured to acquire the first adjustment coefficient according to the facial attribute information and the environmental attribute information; the extraction module 202 is specifically configured to increase the brightness value of each pixel in the face image according to the first adjustment coefficient, and to determine the glossy region in the face region according to the face region with the increased brightness values.

In some embodiments, the acquisition module 201 is further configured to input the facial attribute information and the environmental attribute information into the adjustment coefficient extraction function to obtain the first adjustment coefficient; the adjustment coefficient extraction function is obtained by linear regression analysis of the adjustment coefficients, facial attribute information, and environmental attribute information annotated in multiple sample face images.

In some embodiments, the image processing apparatus 200 further includes an increase module 204 configured to: convert the color space of the face image into a target color space, where the target color space includes information representing the brightness values of the pixels in the face image; and increase the brightness value of each pixel of the face image in the target color space according to the first adjustment coefficient.

In some embodiments, the extraction module 202 is further configured to determine, as the glossy region in the face region, a region composed of pixels whose brightness values are greater than the first preset threshold in the face region with the increased brightness values.
Optionally, the glossy region in the face region satisfies the following formula:

M = pow(Y, s)

where M is the glossy region in the face region, pow is the power function, Y is the brightness value of each pixel in the face region after the brightness values are increased, and s is the coefficient of the power function; s is obtained by inputting the facial attribute information and the environmental attribute information into the exponential coefficient calculation function, which is obtained by linear regression analysis of the exponential coefficients, facial attribute information, and environmental attribute information annotated on multiple sample face images.

In some embodiments, the de-gloss module 203 is further configured to: input the facial attribute information and the environmental attribute information into the dimming coefficient calculation function to obtain the dimming coefficient of the face image, where the dimming coefficient calculation function is obtained by linear regression analysis of the dimming coefficients, facial attribute information, and environmental attribute information annotated in multiple sample face images; and reduce the brightness value of each pixel in the glossy region according to the dimming coefficient of the face image.
In some embodiments, the image processing apparatus 200 further includes a fusion module 205 configured to fuse the face image with the de-glossed face image to obtain the target image.

In some embodiments, the extraction module 202 is further configured to perform the skin-smoothing processing on the face region, and to perform region extraction on the skin-smoothed face region according to the facial attribute information and the environmental attribute information, so as to obtain the glossy region in the face region.

With regard to the apparatus in the above embodiments, the specific manner in which each module performs its operations has been described in detail in the embodiments of the method, and is not elaborated here. In addition, for the explanation of any of the image processing apparatuses 200 provided above and the description of their beneficial effects, reference may be made to the corresponding method embodiments, which are not repeated.
FIG. 4 is a block diagram of an electronic device according to an exemplary embodiment. As shown in FIG. 4, the electronic device 40 includes, but is not limited to, a processor 401, a memory 402, a display 403, an input unit 404, an interface unit 405, a power supply 406, and the like.

The memory 402 is used for storing instructions executable by the processor 401, and the processor 401 is configured to execute any step in the embodiment shown in FIG. 1. That is, the block diagram of the electronic device 40 may serve as a hardware structure diagram of the image processing apparatus 200.

It should be noted that those skilled in the art can understand that the structure of the electronic device shown in FIG. 4 does not constitute a limitation on the electronic device; the electronic device may include more or fewer components than shown in FIG. 4, combine certain components, or have a different arrangement of components.

The processor 401 is the control center of the electronic device, and uses various interfaces and lines to connect the parts of the entire electronic device. By running or executing the software programs and/or modules stored in the memory 402 and invoking the data stored in the memory 402, it performs the various functions of the electronic device and processes data, thereby monitoring the electronic device as a whole. The processor 401 may include one or more processing units. Optionally, the processor 401 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, application programs, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 401.
The memory 402 may be used to store software programs and various data. The memory 402 may mainly include a program storage area and a data storage area, where the program storage area may store the operating system and the application programs required by at least one functional unit (such as an acquisition unit, a transceiver unit, or a merging unit). In addition, the memory 402 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.

The display 403 is used to display information input by the user or information provided to the user. The display 403 may include a display panel, which may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like.

The input unit 404 may include a graphics processing unit (GPU) that processes image data of still pictures or videos obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display 403, and the image frames processed by the GPU may be stored in the memory 402 (or another storage medium).

The interface unit 405 is an interface for connecting an external device to the electronic device 400. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 405 may be used to receive input (for example, data information) from an external device and transmit the received input to one or more elements within the electronic device 400, or may be used to transfer data between the electronic device 400 and an external device.

The power supply 406 (such as a battery) may be used to supply power to the components. Optionally, the power supply 406 may be logically connected to the processor 401 through a power management system, so that functions such as charging management, discharging management, and power consumption management are realized through the power management system.
In an exemplary embodiment, an embodiment of the present disclosure further provides a storage medium including instructions, for example, the memory 402 including instructions, where the instructions can be executed by the processor 401 of the electronic device 400 to complete the above method. Optionally, the storage medium may be a non-transitory computer-readable storage medium; for example, the non-transitory computer-readable storage medium may be a read-only memory (ROM), a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.

In an example, referring to FIG. 3, the receiving function of the acquisition module 201 may be implemented by the interface unit 405 in FIG. 4. The processing function of the acquisition module 201, as well as the extraction module 202, the de-gloss module 203, the increase module 204, and the fusion module 205, may all be implemented by the processor 401 in FIG. 4 invoking a computer program stored in the memory 402.

In an exemplary embodiment, an embodiment of the present disclosure further provides a computer program product including one or more instructions, which can be executed by the processor 401 of the electronic device 400 to complete the above method.

It should be noted that, when the instructions in the above storage medium or the one or more instructions in the computer program product are executed by the processor 401, each process of the above method embodiments is realized and the same technical effect can be achieved; to avoid repetition, details are not repeated here.
Each of the embodiments of the present disclosure may be implemented independently or in combination with other embodiments, all of which fall within the protection scope claimed by the present disclosure.
Other embodiments of the present disclosure will readily occur to those skilled in the art upon consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the present disclosure that follow its general principles and include common knowledge or customary technical means in the art not disclosed herein. The specification and examples are to be regarded as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise structures described above and illustrated in the accompanying drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.

Claims (22)

  1. An image processing method, comprising:
    acquiring a face image, and determining a face region in the face image;
    inputting the face image into an attribute information extraction model to obtain facial attribute information of the face region and environmental attribute information of the face image;
    performing region extraction on the face region according to the facial attribute information and the environmental attribute information, to obtain a glossy region in the face region; and
    performing de-glossing processing on the glossy region according to the facial attribute information and the environmental attribute information, to obtain a de-glossed face image.
  2. The method according to claim 1, wherein:
    the facial attribute information comprises at least one of a skin color, a skin texture, a brightness, or an age of the face in the face region; and the environmental attribute information comprises at least ambient lighting information.
  3. The method according to claim 1, wherein the performing region extraction on the face region according to the facial attribute information and the environmental attribute information, to obtain the glossy region in the face region, comprises:
    obtaining a first adjustment coefficient according to the facial attribute information and the environmental attribute information;
    increasing a brightness value of each pixel in the face image according to the first adjustment coefficient; and
    determining the glossy region in the face region according to the face region with the increased brightness values.
  4. The method according to claim 3, wherein the obtaining the first adjustment coefficient according to the facial attribute information and the environmental attribute information comprises:
    inputting the facial attribute information and the environmental attribute information into an adjustment coefficient extraction function to obtain the first adjustment coefficient, wherein the adjustment coefficient extraction function is obtained by linear regression analysis of adjustment coefficients, facial attribute information, and environmental attribute information annotated in a plurality of sample face images.
  5. The method according to claim 3, wherein the increasing the brightness value of each pixel in the face image according to the first adjustment coefficient comprises:
    converting a color space of the face image into a target color space, wherein the target color space includes information representing the brightness values of the pixels in the face image; and
    increasing the brightness value of each pixel in the face image in the target color space according to the first adjustment coefficient.
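The brightness increase of claims 3 and 5 — convert to a color space with an explicit luminance channel, then scale that channel by the first adjustment coefficient — can be sketched as follows. This is a minimal illustration only: YCrCb (via the BT.601 coefficients) is assumed as the target color space, and the adjustment coefficient `k` is passed in directly, whereas the patented method derives it from the facial and environmental attribute information.

```python
import numpy as np

def increase_brightness(face_bgr: np.ndarray, k: float) -> np.ndarray:
    """Scale the luminance channel of a BGR face image by adjustment coefficient k."""
    b = face_bgr[..., 0].astype(np.float32)
    g = face_bgr[..., 1].astype(np.float32)
    r = face_bgr[..., 2].astype(np.float32)
    # BT.601 luma/chroma decomposition; Y carries the per-pixel brightness value.
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cr = 0.713 * (r - y)
    cb = 0.564 * (b - y)
    # Increase the brightness value of each pixel in the target color space.
    y = np.clip(y * k, 0.0, 255.0)
    # Convert back to BGR.
    r2 = y + 1.403 * cr
    g2 = y - 0.714 * cr - 0.344 * cb
    b2 = y + 1.773 * cb
    out = np.stack([b2, g2, r2], axis=-1)
    return np.clip(out, 0.0, 255.0).astype(np.uint8)
```

Any color space that exposes luminance separately from chroma (YUV, HSV's V channel, Lab's L channel) would serve the same role; the claim only requires that the target space include brightness information.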
  6. The method according to claim 3, wherein the determining the glossy region in the face region according to the face region with the increased brightness values comprises:
    determining, as the glossy region in the face region, a region composed of pixels whose brightness values in the face region with the increased brightness values are greater than a first preset threshold.
  7. The method according to claim 3, wherein the glossy region in the face region satisfies the following formula:
    M = pow(Y, s)
    where M is the glossy region in the face region, pow is an exponential function, Y is the brightness value of each pixel in the face region with the increased brightness values, and s is the coefficient of the exponential function; and s is obtained by inputting the facial attribute information and the environmental attribute information into an exponent coefficient calculation function, wherein the exponent coefficient calculation function is obtained by linear regression analysis of exponent coefficients, facial attribute information, and environmental attribute information annotated in a plurality of sample face images.
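The formula M = pow(Y, s) in claim 7 can be illustrated with a short sketch. Two assumptions are made for illustration: Y is normalized to [0, 1], and the exponent s is supplied directly as a number, whereas the patent obtains it from a regression-fitted exponent coefficient calculation function.

```python
import numpy as np

def glossy_mask(y_norm: np.ndarray, s: float) -> np.ndarray:
    """Raise each normalized brightness value to the power s: M = pow(Y, s).

    With s > 1, mid-range brightness is suppressed toward 0 while values
    near 1.0 (highlight pixels) stay close to 1.0, so M acts as a soft
    mask that emphasizes the glossy region.
    """
    return np.power(np.clip(y_norm, 0.0, 1.0), s)
```

For example, with s = 4 a mid-tone pixel at Y = 0.5 maps to 0.0625, while a highlight at Y = 0.95 maps to about 0.81, cleanly separating glossy pixels from normally lit skin.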
  8. The method according to any one of claims 1-7, wherein the performing de-glossing processing on the glossy region according to the facial attribute information and the environmental attribute information comprises:
    inputting the facial attribute information and the environmental attribute information into a dimming coefficient calculation function to obtain a dimming coefficient of the face image, wherein the dimming coefficient calculation function is obtained by linear regression analysis of dimming coefficients, facial attribute information, and environmental attribute information annotated in a plurality of sample face images; and
    reducing the brightness value of each pixel in the glossy region according to the dimming coefficient of the face image.
  9. The method according to any one of claims 1-7, further comprising:
    fusing the face image with the de-glossed face image to obtain a target image.
  10. The method according to any one of claims 1-7, wherein the performing region extraction on the face region according to the facial attribute information and the environmental attribute information, to obtain the glossy region in the face region, comprises:
    performing skin-smoothing processing on the face region; and
    performing region extraction on the skin-smoothed face region according to the facial attribute information and the environmental attribute information, to obtain the glossy region in the face region.
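Taken together, the method claims describe a brighten → threshold → dim → fuse flow. The outline below sketches that flow on a normalized brightness map (an illustration only: the adjustment coefficient `k`, the first preset threshold, the dimming coefficient `dim`, and the fusion weight `alpha` are all stand-in scalars, whereas the patent derives the coefficients by regression over annotated sample face images).

```python
import numpy as np

def remove_gloss(y: np.ndarray, k: float, threshold: float,
                 dim: float, alpha: float) -> np.ndarray:
    """De-gloss a normalized brightness map y with values in [0, 1].

    k:         first adjustment coefficient (brightness increase, claim 3)
    threshold: first preset threshold for the glossy region (claim 6)
    dim:       dimming coefficient applied inside the glossy region (claim 8)
    alpha:     fusion weight between de-glossed and original image (claim 9)
    """
    # Increase brightness, then threshold to locate the glossy region.
    brightened = np.clip(y * k, 0.0, 1.0)
    glossy = brightened > threshold
    # Reduce the brightness of each pixel inside the glossy region.
    deglossed = y.copy()
    deglossed[glossy] = y[glossy] * dim
    # Fuse the de-glossed result with the original image.
    return alpha * deglossed + (1.0 - alpha) * y
```

Fusing with a weight below 1.0 retains some of the original specular highlight, which tends to look more natural than removing it entirely.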
  11. An image processing apparatus, comprising:
    an acquisition module, configured to acquire a face image and determine a face region in the face image;
    an extraction module, configured to input the face image into an attribute information extraction model to obtain facial attribute information of the face region and environmental attribute information of the face image, and to perform region extraction on the face region according to the facial attribute information and the environmental attribute information, to obtain a glossy region in the face region; and
    a de-glossing module, configured to perform de-glossing processing on the glossy region according to the facial attribute information and the environmental attribute information, to obtain a de-glossed face image.
  12. The apparatus according to claim 11, wherein:
    the facial attribute information comprises at least one of a skin color, a skin texture, a brightness, or an age of the face in the face region; and the environmental attribute information comprises at least ambient lighting information.
  13. The apparatus according to claim 11, wherein:
    the acquisition module is further configured to obtain a first adjustment coefficient according to the facial attribute information and the environmental attribute information; and
    the extraction module is configured to increase a brightness value of each pixel in the face image according to the first adjustment coefficient, and to determine the glossy region in the face region according to the face region with the increased brightness values.
  14. The apparatus according to claim 13, wherein the acquisition module is configured to input the facial attribute information and the environmental attribute information into an adjustment coefficient extraction function to obtain the first adjustment coefficient, wherein the adjustment coefficient extraction function is obtained by linear regression analysis of adjustment coefficients, facial attribute information, and environmental attribute information annotated in a plurality of sample face images.
  15. The apparatus according to claim 13, wherein the apparatus further comprises an enlargement module configured to:
    convert a color space of the face image into a target color space, wherein the target color space includes information representing the brightness values of the pixels in the face image; and
    increase the brightness value of each pixel in the face image in the target color space according to the first adjustment coefficient.
  16. The apparatus according to claim 13, wherein the extraction module is configured to:
    determine, as the glossy region in the face region, a region composed of pixels whose brightness values in the face region with the increased brightness values are greater than a first preset threshold.
  17. The apparatus according to claim 13, wherein the glossy region in the face region satisfies the following formula:
    M = pow(Y, s)
    where M is the glossy region in the face region, pow is an exponential function, Y is the brightness value of each pixel in the face region with the increased brightness values, and s is the coefficient of the exponential function; and s is obtained by inputting the facial attribute information and the environmental attribute information into an exponent coefficient calculation function, wherein the exponent coefficient calculation function is obtained by linear regression analysis of exponent coefficients, facial attribute information, and environmental attribute information annotated in a plurality of sample face images.
  18. The apparatus according to any one of claims 11-17, wherein the de-glossing module is configured to:
    input the facial attribute information and the environmental attribute information into a dimming coefficient calculation function to obtain a dimming coefficient of the face image, wherein the dimming coefficient calculation function is obtained by linear regression analysis of dimming coefficients, facial attribute information, and environmental attribute information annotated in a plurality of sample face images; and
    reduce the brightness value of each pixel in the glossy region according to the dimming coefficient of the face image.
  19. The apparatus according to any one of claims 11-17, further comprising:
    a fusion module, configured to fuse the face image with the de-glossed face image to obtain a target image.
  20. The apparatus according to any one of claims 11-17, wherein the extraction module is configured to:
    perform skin-smoothing processing on the face region; and
    perform region extraction on the skin-smoothed face region according to the facial attribute information and the environmental attribute information, to obtain the glossy region in the face region.
  21. An electronic device, comprising:
    a processor and a memory for storing instructions executable by the processor, wherein the processor is configured to execute the executable instructions to implement the following processing:
    acquiring a face image, and determining a face region in the face image;
    inputting the face image into an attribute information extraction model to obtain facial attribute information of the face region and environmental attribute information of the face image;
    performing region extraction on the face region according to the facial attribute information and the environmental attribute information, to obtain a glossy region in the face region; and
    performing de-glossing processing on the glossy region according to the facial attribute information and the environmental attribute information, to obtain a de-glossed face image.
  22. A computer-readable storage medium, wherein when instructions in the computer-readable storage medium are executed by a processor of an electronic device, the electronic device is enabled to perform the following processing:
    acquiring a face image, and determining a face region in the face image;
    inputting the face image into an attribute information extraction model to obtain facial attribute information of the face region and environmental attribute information of the face image;
    performing region extraction on the face region according to the facial attribute information and the environmental attribute information, to obtain a glossy region in the face region; and
    performing de-glossing processing on the glossy region according to the facial attribute information and the environmental attribute information, to obtain a de-glossed face image.
PCT/CN2021/106299 2020-11-17 2021-07-14 Image processing method and apparatus WO2022105270A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011287995.6 2020-11-17
CN202011287995.6A CN112381737B (en) 2020-11-17 2020-11-17 Image processing method, device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
WO2022105270A1 true WO2022105270A1 (en) 2022-05-27

Family

ID=74584908

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/106299 WO2022105270A1 (en) 2020-11-17 2021-07-14 Image processing method and apparatus

Country Status (2)

Country Link
CN (1) CN112381737B (en)
WO (1) WO2022105270A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112381737B (en) * 2020-11-17 2024-07-12 北京达佳互联信息技术有限公司 Image processing method, device, electronic equipment and storage medium
CN115988339B (en) * 2022-11-22 2024-03-26 荣耀终端有限公司 Image processing method, electronic device, storage medium, and program product

Citations (5)

Publication number Priority date Publication date Assignee Title
CN105719234A (en) * 2016-01-26 2016-06-29 厦门美图之家科技有限公司 Automatic gloss removing method and system for face area and shooting terminal
US20170163953A1 (en) * 2015-12-08 2017-06-08 Le Holdings (Beijing) Co., Ltd. Method and electronic device for processing image containing human face
CN107798652A (en) * 2017-10-31 2018-03-13 广东欧珀移动通信有限公司 Image processing method, device, readable storage medium storing program for executing and electronic equipment
CN109146893A (en) * 2018-08-01 2019-01-04 厦门美图之家科技有限公司 Glossy region segmentation method, device and mobile terminal
CN112381737A (en) * 2020-11-17 2021-02-19 北京达佳互联信息技术有限公司 Image processing method, image processing device, electronic equipment and storage medium

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
CN106600578B (en) * 2016-11-22 2017-11-10 武汉大学 Characteristic function space filter value regression model parallel method based on remote sensing image
CN107194374A (en) * 2017-06-16 2017-09-22 广东欧珀移动通信有限公司 Human face region goes glossy method, device and terminal
CN110188640B (en) * 2019-05-20 2022-02-25 北京百度网讯科技有限公司 Face recognition method, face recognition device, server and computer readable medium
CN111626921A (en) * 2020-05-09 2020-09-04 北京字节跳动网络技术有限公司 Picture processing method and device and electronic equipment


Also Published As

Publication number Publication date
CN112381737B (en) 2024-07-12
CN112381737A (en) 2021-02-19


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21893422

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 14.09.2023)

122 Ep: pct application non-entry in european phase

Ref document number: 21893422

Country of ref document: EP

Kind code of ref document: A1