CN111241934A - Method and device for acquiring an oil-light region in a face image - Google Patents

Method and device for acquiring an oil-light region in a face image

Info

Publication number
CN111241934A
CN111241934A
Authority
CN
China
Prior art keywords
image
area
pixel
highlight
skin
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911395835.0A
Other languages
Chinese (zh)
Inventor
梁炜
徐灏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Pinguo Technology Co Ltd
Original Assignee
Chengdu Pinguo Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Pinguo Technology Co Ltd filed Critical Chengdu Pinguo Technology Co Ltd
Priority to CN201911395835.0A priority Critical patent/CN111241934A/en
Publication of CN111241934A publication Critical patent/CN111241934A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; Localisation; Normalisation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/70: Denoising; Smoothing
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The method blurs an acquired first image containing a human face to obtain a second image, identifies a skin-color region of the face in the first image based on pixel parameters of the first image and the second image under an RGB color model, calculates a highlight region within the skin-color region, and confirms the highlight region as the target oil-light region in the first image. The method and device effectively identify the oil-light region in a portrait image without mistaking highlight regions in the background for oil-light; the implementation process is simple and requires no complex calculation, so they have broad application prospects.

Description

Method and device for acquiring an oil-light region in a face image
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a method and a device for acquiring an oil-light region in a face image.
Background
When portrait photos are taken, lighting and other factors readily produce a visible oily sheen, making the subjects look older, unclean, and greasy. For this reason, when wedding photos, parent-child photos, and the like are shot, the sheen is usually removed during later image processing. However, most existing oil-light removal methods can only perform local post-processing, require substantial computing resources, and must be operated by professionals, so their application scenarios are very limited.
Disclosure of Invention
The application provides a method and a device for acquiring an oil-light region in a face image, which solve the problem of effectively identifying the oil-light region in an image containing a human face.
First aspect
The application provides a method for acquiring an oil-light region in a face image, which comprises the following steps: acquiring a first image containing a human face; blurring the first image to obtain a second image; identifying a skin-color region of the human face in the first image based on pixel parameters of the first image and the second image under an RGB color model; and acquiring a highlight region in the skin-color region, and confirming that the highlight region is the target oil-light region in the first image.
Second aspect
The application provides a device for acquiring the oil-light region in a face image, which comprises: an image acquisition module, used for acquiring a first image containing a human face; a blurring module, used for blurring the first image to obtain a second image; a skin-color region identification module, used for identifying a skin-color region of the human face in the first image based on pixel parameters of the first image and the second image under an RGB color model; and a highlight region identification module, used for acquiring the highlight region in the skin-color region and confirming that the highlight region is the target oil-light region in the first image.
Blurring an acquired first image containing a human face yields a second image; a skin-color region of the face in the first image is identified from the pixel parameters of the first and second images under the RGB color model; a highlight region within the skin-color region is then computed and confirmed as the target oil-light region in the first image. This technical scheme effectively identifies the oil-light region in a portrait image without mistaking highlight regions in the background for oil-light; its implementation is simple and requires no complex calculation, and when implemented as a computer program it can be applied in various image-capture devices, image-processing devices, or image-processing software, giving it broad application prospects.
Drawings
Fig. 1 shows a flowchart of an implementation of a method for acquiring an oil-light region in a face image according to the present application.
Fig. 2 shows a flowchart for implementing step S103 in the embodiment shown in fig. 1.
Fig. 3 shows a flowchart for implementing step S104 in the embodiment shown in fig. 1.
Fig. 4 shows a flowchart of an exemplary implementation of reducing the brightness of the target oil-light region according to the present application.
Fig. 5 is a schematic structural diagram illustrating an apparatus for acquiring an oil light region in a face image according to the present application.
Fig. 6 is a schematic structural diagram of the skin color region identification module in the embodiment shown in fig. 5.
Fig. 7 shows a schematic structural diagram of the highlight region identification module in the embodiment shown in fig. 5.
Fig. 8 is a schematic structural diagram of an apparatus for acquiring an oil light region in a face image according to the present application.
Fig. 9 is a schematic structural diagram of the oil light region removing module in the embodiment shown in fig. 8.
Detailed Description
The inventor of the application found through research that the principle of oil-light removal is simple: only the brightness of the affected pixels needs to be changed. Why, then, can captured images not be processed directly in smart terminals or cameras to remove the sheen automatically? Further research revealed the reason: the oil-light region must be located before pixel brightness can be reduced, and if brightness alone is detected, bright parts of the background are easily mistaken for oil-light, degrading image quality. The key is therefore to recognize the oil-light region within the face region. Yet the inventor found through still further research that if the face region were determined first and the oil-light region recognized afterwards, a face-recognition algorithm would be required, and such algorithms occupy substantial computing resources, making the application scenarios hard to generalize. Through continued research and experiments, the inventor therefore proposes a method that identifies the portrait oil-light region using only simple pixel traversal and per-pixel calculation on the image.
Example 1
Referring to fig. 1, a flowchart of an implementation of the method for acquiring the oil-light region in a face image according to the present application is shown. The method can be applied to various intelligent terminals, which may include, for example, electronic devices capable of taking pictures such as mobile phones, cameras, computers, and tablet computers.
As shown in fig. 1, the method for acquiring the oil-light region in the face image includes the following steps:
S101, acquiring a first image containing a human face.
The first image is an original image, that is, a picture taken directly with a camera. Of course, the first image may also be a processed image, as long as it contains a region of facial oil-light.
S102, blurring the first image to obtain a second image.
The blurring of the first image may be implemented with any of various existing image-blurring algorithms; for example, a Gaussian blur algorithm may be applied to the first image to obtain a blurred image.
Specifically, the second image should be understood as the parameters of all pixels after blurring has been performed on the basis of the first image; it may also be an actual blurred image whose content corresponds to that of the first image. The second image and the first image exist simultaneously.
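As a concrete illustration, the blurring step can be sketched in Python. The patent does not prescribe a particular algorithm, so the box blur below is a hypothetical stand-in for the Gaussian blur mentioned above, operating on a single-channel image stored as a list of lists of floats; any blur that smooths local detail yields a usable "second image".

```python
def box_blur(img, radius=1):
    """Naive box blur on a 2D grayscale image (list of lists of floats).

    Each output pixel is the mean of the pixels in a square window of
    the given radius, clipped at the image borders.
    """
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total += img[ny][nx]
                        count += 1
            out[y][x] = total / count
    return out
```

A uniform image is unchanged by the blur, while an isolated bright pixel is spread across its neighborhood, which is exactly the property the later formulas exploit when comparing a pixel against its blurred counterpart.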
S103, identifying a skin color area of the human face in the first image based on pixel parameters of the first image and the second image under an RGB color model.
The RGB color model is the most common representation for current images, so the pixel parameters of the image can be obtained directly for calculation. Other color models could of course be used to represent the pixels, but they usually require conversion and are less efficient than computing directly in RGB. The skin-color region of the face in the first image can be identified through these pixel calculations; compared with a face-recognition algorithm, this step is much simpler. Moreover, because it identifies skin color rather than faces, skin-color regions of other body parts are identified as well, so highlight regions on those parts can also be recognized in the subsequent steps.
S104, acquiring a highlight region in the skin-color region, and confirming that the highlight region is the target oil-light region in the first image.
In this step, highlight recognition is performed on the obtained skin-color region, thereby obtaining the target oil-light region. Since the skin-color region may include body regions other than the face, any highlight region within those body regions is recognized as well. That is, the target oil-light region in this embodiment also covers oil-light on skin-color regions such as the neck, arms, or chest of the person.
With this method, the oil-light region in a portrait can be identified quickly, highlights in non-portrait areas are not mistaken for oil-light, the recognition process is very simple, and, when implemented as a computer program, it does not occupy large computing resources.
Specifically, in an exemplary embodiment, referring to fig. 2, which is a flowchart illustrating an implementation of step S103 in the embodiment shown in fig. 1, as shown in fig. 2, in step S103, identifying a skin color region of a human face in the first image based on pixel parameters of the first image and the second image under an RGB color model includes:
s201, traversing the values of all pixel channels of the first image and the second image under the RGB color model.
Under the RGB color model, the image comprises three channels: red, green, and blue. Since the second image is a blurred version of the first image, a pixel at the same coordinates has different channel values in the first image and in the second image. For example, in the calculations the red channel of the first image may be denoted org.r, the green channel of the first image org.g, the red channel of the second image blur.r, and the green channel of the second image blur.g.
The pixels of the first image and the second image need to be traversed only once; that is, the values of the green and red channels of both images are obtained in a single pass, and the calculations are then performed on the obtained channel values.
S202, determining a first skin parameter of the pixel with the same coordinate based on the values of the red channels of the pixels of the first image and the second image with the same coordinate, and normalizing the first skin parameter to a preset interval value to obtain a first skin parameter normalized value of the pixel with the same coordinate.
The preset interval here is [0, 1]; that is, the parameter is limited to values between 0 and 1.
For example, in an exemplary embodiment, the first skin parameter may be obtained by the following calculation:
skin=(min(org.r,blur.r-0.1)-0.2)*4.0…(1),
where min(X, Y) denotes the smaller of X and Y.
The normalization of the first skin parameter to a preset interval value can be obtained by the following calculation formula:
skin=clamp(skin,0,1)…(2),
where clamp(a, x, y) limits the value a to the range between x and y. Formula (2) limits the current skin value to [0, 1]: if skin is greater than 1, skin becomes 1; if skin is less than 0, skin becomes 0.
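Formulas (1) and (2) can be sketched per pixel as follows. This is an illustrative Python translation of the patent's formulas, assuming channel values normalized to [0, 1]; the constants 0.1, 0.2, and 4.0 come from formula (1), while the helper names clamp and first_skin are hypothetical.

```python
def clamp(a, lo, hi):
    # clamp(a, x, y): limit a to the range [x, y]
    return max(lo, min(hi, a))

def first_skin(org_r, blur_r):
    """First skin parameter per pixel, channels assumed in [0, 1].

    org_r:  red channel of the first (original) image pixel
    blur_r: red channel of the second (blurred) image pixel
    """
    skin = (min(org_r, blur_r - 0.1) - 0.2) * 4.0   # formula (1)
    return clamp(skin, 0.0, 1.0)                    # formula (2)
```

For example, a mid-bright pixel with org_r = blur_r = 0.5 yields 0.8, a dark pixel at 0.1 is clamped to 0, and a bright pixel at 0.9 saturates to 1.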
S203, determining a second skin parameter of the pixel with the same coordinate based on the first skin parameter normalized value and the numerical value of the red channel of the pixel of the first image and the second image with the same coordinate, and normalizing the second skin parameter into a preset interval value to obtain a second skin parameter normalized value of the pixel with the same coordinate.
In combination with the above exemplary embodiment, the second skin parameter can be obtained by the following calculation formula:
skin=max(0,org.r-blur.g)*skin*10…(3),
the probability that the current pixel point represented by skin color is adopted, and max (X, Y) represents the larger value of X and Y.
The normalization of the second skin parameter to a preset interval value may be obtained by the following calculation formula:
skin=clamp(skin,0,1)…(4),
where formula (4) limits the current skin value to [0, 1]: if skin is greater than 1, skin becomes 1; if skin is less than 0, skin becomes 0.
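Formulas (3) and (4) can likewise be sketched per pixel (an illustrative Python translation; the name second_skin is hypothetical, and channel values are again assumed normalized to [0, 1]). Note that formula (3) as written compares org.r against blur.g, the green channel of the blurred image.

```python
def second_skin(skin1, org_r, blur_g):
    """Second skin parameter per pixel.

    skin1:  first skin parameter normalized value from formulas (1)-(2)
    org_r:  red channel of the first (original) image pixel
    blur_g: green channel of the second (blurred) image pixel
    """
    skin = max(0.0, org_r - blur_g) * skin1 * 10.0  # formula (3)
    return max(0.0, min(1.0, skin))                 # formula (4)
```

Skin tones are redder than they are green, so a pixel whose red channel clearly exceeds the blurred green channel is boosted, while pixels without that excess fall to 0 regardless of the first parameter.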
S204, replacing the pixels under all the coordinates in the first image with the corresponding second skin parameter normalization values respectively so as to identify the skin color area of the face in the first image.
By calculating the pixels of the first image and the second image according to formulas (1)-(4) of this embodiment, a skin-color probability mask of the first image is obtained; that is, the skin-color region in the portrait is identified.
Specifically, in an exemplary embodiment, fig. 3 shows a flowchart of an implementation of step S104 in the embodiment shown in fig. 1. As shown in fig. 3, in step S104, acquiring the highlight region in the skin-color region and confirming that the highlight region is the target oil-light region includes:
s301, determining a first highlight parameter of a pixel with the same coordinate in the skin color area based on the red channel of the pixel with the same coordinate in the first image and the attribute red channel of the pixel with the second image.
Continuing with the example in the embodiment shown in fig. 2, the first highlight parameter can be calculated by the following formulas:
glossy=max(org.r-0.9,org.r-blurValue.r)*10*org.r*org.r…(5),
glossy=glossy*glossy*glossy*3…(6);
where glossy represents the highlight value of the current pixel, and blurValue.r represents the red channel of the second-image pixel.
S302, normalizing the first highlight parameter of each pixel in the skin-color region to the preset interval, thereby identifying the highlight region of the skin-color region and confirming that the highlight region is the target oil-light region in the first image.
Continuing with the above example, the normalization of the first highlight parameter to a preset interval value can be implemented by the following formula:
glossy=clamp(glossy,0,1)…(7)。
Formula (7) limits the current glossy value to [0, 1]: if glossy is greater than 1, glossy becomes 1; if glossy is less than 0, glossy becomes 0. Through the operations of formulas (5), (6), and (7), the highlight region can be located progressively and accurately, achieving highlight identification of the skin-color region; this highlight region is the target oil-light region.
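Formulas (5)-(7) can be sketched per pixel as follows (an illustrative Python translation with the hypothetical name highlight_value; channel values are assumed normalized to [0, 1]).

```python
def highlight_value(org_r, blur_r):
    """Highlight (glossy) value per pixel of the skin-color region.

    org_r:  red channel of the first (original) image pixel
    blur_r: red channel of the second (blurred) image pixel (blurValue.r)
    """
    glossy = max(org_r - 0.9, org_r - blur_r) * 10.0 * org_r * org_r  # formula (5)
    glossy = glossy * glossy * glossy * 3.0                           # formula (6)
    return max(0.0, min(1.0, glossy))                                 # formula (7)
```

A bright pixel that clearly exceeds its blurred neighborhood (e.g. org_r = 0.95 against blur_r = 0.7) saturates to 1, while pixels no brighter than their surroundings fall to 0; the cubing in formula (6) suppresses weak responses so that only pronounced highlights survive.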
In an exemplary embodiment, after the step of acquiring the highlight region in the skin-color region and confirming it as the target oil-light region, the method further includes: reducing the brightness of the target oil-light region. Reducing the brightness of the target oil-light region removes the oily sheen.
In the implementation, there are many methods for changing the brightness of the highlight area, and those skilled in the art can select the method according to the needs when implementing the technical solution of the present application.
For example, in an embodiment, fig. 4 shows a flowchart of an exemplary implementation of reducing the brightness of the target oil-light region provided by the present application. As shown in fig. 4, the step of reducing the brightness of the target oil-light region may include the following steps:
s401, acquiring a fuzzy area corresponding to the highlight area in the second image.
As can be seen from the above description, the pixels with the same coordinates in the blurred image (i.e., the second image) can be obtained through the coordinates of the highlight area, so that the blurred area corresponding to the highlight area can be obtained.
S402, the fuzzy area in the second image is covered and filled in the highlight area in the first image, and the covered and filled first image is displayed.
The method provided by the embodiment can realize the human image oil light removal processing of the first image to obtain a new first image.
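The covering-and-filling of S402 can be sketched per pixel. The patent describes directly covering the highlight region with the blurred region; the version below additionally weights the replacement by the glossy mask value from formula (7), a common soft variant. The blending itself is an assumption not stated in the patent, and the name fill_pixel is hypothetical.

```python
def fill_pixel(org, blur, glossy):
    """Replace an original pixel value with its blurred counterpart.

    glossy is the highlight mask value in [0, 1]: 1 inside the
    oil-light region (full replacement, as in S402), 0 outside
    (pixel unchanged). Intermediate values soft-blend the two,
    which avoids a hard seam at the region boundary.
    """
    return org * (1.0 - glossy) + blur * glossy
```

Because the blurred pixel averages in its darker surroundings, replacing the highlight with it lowers local brightness, which is exactly the brightness reduction the embodiment calls for.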
It should be understood that the step numbers do not limit the order in which the steps are executed.
Example 2
Based on the same concept as embodiment 1, this embodiment correspondingly provides an apparatus for acquiring the oil-light region in a face image.
Referring to fig. 5, a schematic structural diagram of the apparatus for acquiring the oil-light region in a face image according to the present application is shown. As shown in fig. 5, the apparatus 500 includes: an image acquisition module 510, configured to acquire a first image containing a human face; a blurring module 520, configured to blur the first image to obtain a second image; a skin-color region identification module 530, configured to identify a skin-color region of the human face in the first image based on pixel parameters of the first image and the second image under an RGB color model; and a highlight region identification module 540, configured to acquire the highlight region in the skin-color region and confirm that the highlight region is the target oil-light region in the first image.
Referring to fig. 6, a schematic structural diagram of the skin color region identification module in the embodiment shown in fig. 5 is shown, and as shown in fig. 6, the skin color region identification module 530 includes: the pixel traversing unit 601 is configured to traverse the values of all pixel channels of the first image and the second image in the RGB color model; a first pixel operation unit 602, configured to determine a first skin parameter of a pixel with the same coordinate based on a value of a pixel red channel of a first image and a pixel red channel of a second image with the same coordinate, and normalize the first skin parameter to a preset interval value, so as to obtain a first skin parameter normalized value of the pixel with the same coordinate; a second pixel operation unit 603, configured to determine a second skin parameter of the pixel with the same coordinate based on the first skin parameter normalized value and the values of the pixel red channels of the first image and the second image with the same coordinate, and normalize the second skin parameter to a preset interval value, so as to obtain a second skin parameter normalized value of the pixel with the same coordinate; a first pixel region determining unit 604, configured to replace pixels in all coordinates in the first image with the corresponding second skin parameter normalization values, respectively, so as to identify a skin color region of a human face in the first image.
Referring to fig. 7, a schematic structural diagram of the highlight region identification module in the embodiment shown in fig. 5 is shown. As shown in fig. 7, the highlight region identification module 540 includes: a highlight pixel identification unit 701, configured to determine a first highlight parameter for a pixel in the skin-color region based on the red channel of the first-image pixel and the red channel of the second-image pixel at the same coordinates; and a second pixel region determining unit 702, configured to normalize the first highlight parameter of each pixel in the skin-color region to the preset interval, identify the highlight region of the skin-color region, and confirm that the highlight region is the target oil-light region in the first image.
Referring to fig. 8, another schematic structural diagram of the apparatus for acquiring the oil-light region in a face image according to the present application is shown. As shown in fig. 8, the apparatus 500 further includes: an oil-light region removing module 810, configured to reduce the brightness of the target oil-light region.
Referring to fig. 9, a schematic structural diagram of the oil-light region removing module in the embodiment shown in fig. 8 is shown. As shown in fig. 9, the oil-light region removing module 810 includes: a blurred region acquiring unit 901, configured to acquire the blurred region corresponding to the highlight region in the second image; and a pixel coverage filling unit 902, configured to fill the blurred region of the second image over the highlight region of the first image and display the resulting first image.

Claims (10)

1. A method of obtaining an oil-light region in a face image, comprising:
acquiring a first image containing a human face;
blurring the first image to obtain a second image;
identifying a skin color area of a human face in the first image based on pixel parameters of the first image and the second image under an RGB color model;
and acquiring a highlight region in the skin color region, and confirming that the highlight region is the target oil-light region in the first image.
2. The method of claim 1, wherein the identifying the skin color region of the face in the first image based on the pixel parameters of the first image and the second image under the RGB color model comprises:
traversing the numerical values of all pixel channels of the first image and the second image under the RGB color model;
determining a first skin parameter of a pixel with the same coordinate based on the numerical value of the pixel red channel of the first image and the second image with the same coordinate, and normalizing the first skin parameter into a preset interval value to obtain a first skin parameter normalization value of the pixel with the same coordinate;
determining a second skin parameter of the pixel with the same coordinate based on the first skin parameter normalized value and the numerical value of the pixel red channel of the first image and the second image with the same coordinate, and normalizing the second skin parameter into a preset interval value to obtain a second skin parameter normalized value of the pixel with the same coordinate;
and replacing pixels under all coordinates in the first image with the corresponding second skin parameter normalization values respectively so as to identify the skin color area of the face in the first image.
3. The method of claim 2, wherein the step of acquiring the highlight region in the skin color region and confirming that the highlight region is the target oil-light region in the first image comprises the steps of:
determining a first highlight parameter of the pixel with the same coordinate in the skin color region based on the red channel of the first image pixel and the red channel of the second image pixel with the same coordinate;
normalizing the first highlight parameter of the pixel with the same coordinate in the skin color region to a preset interval value, identifying the highlight region of the skin color region, and confirming that the highlight region is the target oil-light region in the first image.
4. The method of any one of claims 1-3, further comprising, after the step of acquiring the highlight region in the skin color region and confirming that the highlight region is the target oil-light region in the first image: reducing the brightness of the target oil-light region.
5. The method of claim 4, wherein the step of reducing the brightness of the target oil-light region comprises:
acquiring a blurred region corresponding to the highlight region in the second image;
and filling the blurred region of the second image over the highlight region of the first image in a covering manner, and displaying the first image after the covering and filling.
6. An apparatus for acquiring an oil-light region in a face image, comprising:
the image acquisition module is used for acquiring a first image containing a human face;
the blurring module is used for blurring the first image to obtain a second image;
the skin color region identification module is used for identifying a skin color region of a human face in the first image based on pixel parameters of the first image and the second image under an RGB color model;
and the highlight region identification module is used for acquiring the highlight region in the skin color region and confirming that the highlight region is the target oil-light region in the first image.
7. The apparatus for acquiring the oil-light region in the face image according to claim 6, wherein the skin color region identification module comprises:
the pixel traversing unit is used for traversing the numerical values of all pixel channels of the first image and the second image under the RGB color model;
the first pixel operation unit is used for determining a first skin parameter of a pixel with the same coordinate based on the numerical value of the pixel red channel of the first image and the second image with the same coordinate, and normalizing the first skin parameter into a preset interval value to obtain a first skin parameter normalization value of the pixel with the same coordinate;
the second pixel operation unit is used for determining a second skin parameter of the pixel with the same coordinate based on the first skin parameter normalized value and the numerical value of the pixel red channel of the first image and the second image with the same coordinate, normalizing the second skin parameter into a preset interval value, and obtaining a second skin parameter normalized value of the pixel with the same coordinate;
and the first pixel area determining unit is used for respectively replacing pixels under all coordinates in the first image with the corresponding second skin parameter normalization values so as to identify the skin color area of the face in the first image.
8. The apparatus for acquiring the oil-light region in the face image according to claim 7, wherein the highlight region identification module comprises:
the highlight pixel identification unit is used for determining a first highlight parameter of the pixel with the same coordinate in the skin color region based on the red channel of the first image pixel and the red channel of the second image pixel with the same coordinate;
and the second pixel region determining unit is used for normalizing the first highlight parameter of the pixel with the same coordinate in the skin color region to a preset interval value, identifying the highlight region of the skin color region, and confirming that the highlight region is the target oil-light region in the first image.
9. The apparatus for acquiring the oil-light region in the face image of any of claims 6-8, further comprising:
the oil-light region removing unit is used for reducing the brightness of the target oil-light region.
10. The apparatus for acquiring the oil-light region in the face image according to claim 9, wherein the oil-light region removing unit comprises:
the blurred region acquisition unit is used for acquiring the blurred region corresponding to the highlight region in the second image;
and the pixel coverage filling unit is used for filling the blurred region of the second image over the highlight region of the first image in a covering manner, and displaying the first image after the covering and filling.
CN201911395835.0A 2019-12-30 2019-12-30 Method and device for acquiring oil-light region in face image Pending CN111241934A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911395835.0A CN111241934A (en) 2019-12-30 2019-12-30 Method and device for acquiring oil-light region in face image

Publications (1)

Publication Number Publication Date
CN111241934A true CN111241934A (en) 2020-06-05

Family

ID=70875809

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911395835.0A Pending CN111241934A (en) 2019-12-30 2019-12-30 Method and device for acquiring photophobic region in face image

Country Status (1)

Country Link
CN (1) CN111241934A (en)

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070064979A1 (en) * 2005-09-20 2007-03-22 Brigh Tex Bio-Photonics Llc Systems and methods for automatic skin-based identification of people using digital images
CN104008534A (en) * 2014-06-18 2014-08-27 福建天晴数码有限公司 Intelligent human face beautifying method and device
CN104282002A (en) * 2014-09-22 2015-01-14 厦门美图网科技有限公司 Quick digital image beautifying method
US20160086355A1 (en) * 2014-09-22 2016-03-24 Xiamen Meitu Technology Co., Ltd. Fast face beautifying method for digital images
CN106296617A (en) * 2016-08-22 2017-01-04 腾讯科技(深圳)有限公司 The processing method and processing device of facial image
CN107194374A (en) * 2017-06-16 2017-09-22 广东欧珀移动通信有限公司 Human face region goes glossy method, device and terminal
CN107392858A (en) * 2017-06-16 2017-11-24 广东欧珀移动通信有限公司 Image highlight area processing method, device and terminal device
CN107749077A (en) * 2017-11-08 2018-03-02 米哈游科技(上海)有限公司 A kind of cartoon style shadows and lights method, apparatus, equipment and medium
CN110069974A (en) * 2018-12-21 2019-07-30 北京字节跳动网络技术有限公司 Bloom image processing method, device and electronic equipment
CN110381303A (en) * 2019-05-31 2019-10-25 成都品果科技有限公司 Portrait automatic exposure white balance correction method and system based on skin color statistics
CN110378846A (en) * 2019-06-28 2019-10-25 北京字节跳动网络技术有限公司 A kind of method, apparatus, medium and the electronic equipment of processing image mill skin
CN110544257A (en) * 2019-08-20 2019-12-06 成都品果科技有限公司 Rapid skin color segmentation method

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114219718A (en) * 2020-09-04 2022-03-22 广州虎牙科技有限公司 Skin processing method, live broadcast method, computer equipment and storage medium
CN113221618A (en) * 2021-01-28 2021-08-06 深圳市雄帝科技股份有限公司 Method, system and storage medium for removing highlight of face image
CN113221618B (en) * 2021-01-28 2023-10-17 深圳市雄帝科技股份有限公司 Face image highlight removing method, system and storage medium thereof

Similar Documents

Publication Publication Date Title
KR101662846B1 (en) Apparatus and method for generating bokeh in out-of-focus shooting
US8405780B1 (en) Generating a clean reference image
JP6499188B2 Method for converting a saturated image into a non-saturated image
US20130044243A1 (en) Red-Eye Filter Method and Apparatus
JP2009506688A (en) Image segmentation method and image segmentation system
JP2020530920A (en) Image lighting methods, devices, electronics and storage media
CN110855889B (en) Image processing method, image processing apparatus, image processing device, and storage medium
CN111368819B (en) Light spot detection method and device
US20160323505A1 (en) Photographing processing method, device and computer storage medium
JP2004310475A (en) Image processor, cellular phone for performing image processing, and image processing program
CN111241934A (en) Method and device for acquiring photophobic region in face image
JP2009123081A (en) Face detection method and photographing apparatus
CN109583330B (en) Pore detection method for face photo
JP2004239733A (en) Defect detection method and apparatus of screen
CN111815729B (en) Real-time skin beautifying method, device, equipment and computer storage medium
CN111970501A (en) Pure color scene AE color processing method and device, electronic equipment and storage medium
CN110136085B (en) Image noise reduction method and device
CN109003268B (en) Method for detecting appearance color of ultrathin flexible IC substrate
CN111797694A (en) License plate detection method and device
WO2017101570A1 (en) Photo processing method and processing system
JP2010147660A (en) Image processor, electronic camera and image processing program
CN114663299A (en) Training method and device suitable for image defogging model of underground coal mine
CN113379702A (en) Blood vessel path extraction method and device of microcirculation image
KR20180064064A (en) Method for Detecting Edges on Color Image Based on Fuzzy Theory
JP2006053692A (en) Image processor, image processing method and image processing program in moving image, and recording medium recording the program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200605