CN113095148B - Method and system for detecting occlusion of eyebrow area, photographing device and storage medium - Google Patents

Method and system for detecting occlusion of eyebrow area, photographing device and storage medium

Info

Publication number
CN113095148B
CN113095148B CN202110280891.0A
Authority
CN
China
Prior art keywords
area
eyebrow
face
hair
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110280891.0A
Other languages
Chinese (zh)
Other versions
CN113095148A (en)
Inventor
丁凡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Emperor Technology Co Ltd
Original Assignee
Shenzhen Emperor Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Emperor Technology Co Ltd filed Critical Shenzhen Emperor Technology Co Ltd
Priority to CN202110280891.0A priority Critical patent/CN113095148B/en
Publication of CN113095148A publication Critical patent/CN113095148A/en
Application granted granted Critical
Publication of CN113095148B publication Critical patent/CN113095148B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/166Detection; Localisation; Normalisation using acquisition arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/232Non-hierarchical techniques
    • G06F18/2321Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a method for detecting occlusion of an eyebrow area, which comprises the following steps: acquiring an original face image, converting the original face image into a Lab color space, and acquiring a color space face image; carrying out face detection on the original face image, acquiring face feature points of the original face image, and acquiring a face forehead image of the color space image according to the face feature points; performing face segmentation on the face forehead image by adopting a color clustering method to obtain an eyebrow area and a hair area; judging whether the eyebrow area is connected with the hair area and, if so, judging whether the range of the connection area of the eyebrow area and the hair area meets a preset condition; and if the range of the connection area does not meet the preset condition, judging that the eyebrow area is occluded and sending a rephotograph prompt to the user. The invention also provides a detection system for occlusion of the eyebrow area, a photographing device and a storage medium. The invention can effectively save time and reduce resource waste.

Description

Method and system for detecting occlusion of eyebrow area, photographing device and storage medium
Technical Field
The invention relates to the technical field of image processing, in particular to a method and a system for detecting occlusion of an eyebrow area, photographing equipment and a storage medium.
Background
With the vigorous development of the economy and continuous technological innovation across industries, people's living standard keeps improving. The innovative application of self-service photographing equipment brings a very convenient experience to people applying for certificate photos. However, the relevant national departments have certain requirements on the effect of the shot photos. A person's eyebrows are an important feature for face recognition, and certificate-making departments have explicit requirements on whether the eyebrows are occluded. Because a user does not know whether the eyebrows are occluded after taking a certificate photo, a photo that fails to meet the requirements of the relevant department has to be taken again, so the photographing experience is poor and certificate-photo collection is inefficient.
Disclosure of Invention
In view of the above problems, it is necessary to provide a method and a system for detecting occlusion of an eyebrow region, a photographing apparatus, and a storage medium.
A method for detecting occlusion of an eyebrow region, comprising: acquiring an original face image, converting the original face image into a Lab color space, and acquiring a color space face image; performing face detection on the original face image, acquiring face feature points of the original face image, and acquiring a face forehead image of the color space image according to the face feature points; performing face segmentation on the face forehead image by adopting a color clustering method to obtain an eyebrow region and a hair region; judging whether the eyebrow region is connected with the hair region and, if so, judging whether the range of the connection region of the eyebrow region and the hair region meets a preset condition; and if the range of the connection region does not meet the preset condition, judging that the eyebrow region is occluded, and sending a rephotograph prompt to the user.
A detection system for occlusion of an eyebrow region, comprising: a conversion module, used for acquiring an original face image, converting the original face image into a Lab color space, and acquiring a color space face image; a detection module, used for performing face detection on the original face image, acquiring face feature points of the original face image, and acquiring a face forehead image of the color space image according to the face feature points; a segmentation module, used for performing face segmentation on the face forehead image by adopting a color clustering method to obtain an eyebrow region and a hair region; a judging module, used for judging whether the eyebrow region is connected with the hair region and, if so, judging whether the range of the connection region of the eyebrow region and the hair region meets a preset condition; and a prompting module, used for judging that the eyebrow region is occluded and sending a rephotograph prompt to the user if the range of the connection region does not meet the preset condition.
A photographing apparatus comprising: a processor, a memory and a communication circuit, the processor being coupled to the memory and the communication circuit, the memory having stored therein a computer program, the processor executing the computer program to implement the method as described above.
A storage medium storing a computer program which, when executed by a processor, causes the processor to perform the steps of the method as described above.
By adopting the embodiment of the invention, the following beneficial effects are achieved:
acquiring a face forehead image of the color space image according to the face feature points; performing face segmentation on the face forehead image by adopting a color clustering method to obtain an eyebrow area and a hair area; when a connection between the eyebrow area and the hair area is detected, judging whether the range of the connection area between the eyebrow area and the hair area meets a preset condition; and if the range of the connection area does not meet the preset condition, sending a rephotograph prompt to the user. This effectively improves the validity of the photos the user takes, avoids repeated shooting, saves time, and reduces resource waste.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings can be obtained from them by those skilled in the art without creative effort.
Wherein:
fig. 1 is a schematic flowchart of a method for detecting occlusion of an eyebrow area according to a first embodiment of the present invention;
FIG. 2 is a schematic diagram of a human face feature point provided by the present invention;
FIG. 3 is a schematic flow chart of a first embodiment of obtaining an eyebrow area according to the present invention;
FIG. 4 is a flowchart illustrating an embodiment of a method for obtaining a cluster block according to the present invention;
FIG. 5 is a schematic flow chart illustrating a second embodiment of obtaining an eyebrow area according to the present invention;
FIG. 6 is a flowchart illustrating a method for detecting occlusion of an eyebrow area according to a second embodiment of the present invention;
fig. 7 is a flowchart illustrating an embodiment of a method for obtaining a forehead image of a human face according to the present invention;
FIG. 8 is a schematic structural diagram of an embodiment of an eyebrow area detecting system according to the present invention;
FIG. 9 is a schematic structural diagram of an embodiment of a photographing apparatus provided by the present invention;
fig. 10 is a schematic structural diagram of an embodiment of a storage medium provided in the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a schematic flowchart illustrating a method for detecting occlusion of an eyebrow area according to a first embodiment of the present invention. The method for detecting occlusion of the eyebrow area comprises the following steps:
S101: Acquiring an original face image, converting the original face image into a Lab color space, and acquiring a color space face image.
In one specific implementation scenario, an original face image is acquired. The original face image may be a certificate photo shot by the user on the self-service shooting device, a to-be-processed certificate photo provided by the user, or a face image obtained from the original image through face recognition technology. The original image is an RGB image; it is converted from RGB format into the Lab color space to obtain the color space face image. Lab is a less commonly used color space. It is a device-independent color system based on human physiological characteristics; that is, it describes human visual perception numerically. The L component of the Lab color space represents the luminance of a pixel, with a value range of [0, 100] from pure black to pure white; a represents the range from red to green, with a value range of [127, -128]; and b represents the range from yellow to blue, with a value range of [127, -128]. The RGB color space cannot be converted into the Lab color space directly; it must first be converted into the XYZ color space, and the XYZ color space is then converted into the Lab color space.
Specifically, the conversion is performed according to the following formulas. The RGB values are first converted into the XYZ color space using the standard sRGB/D65 matrix:

X = 0.4124·R + 0.3576·G + 0.1805·B
Y = 0.2126·R + 0.7152·G + 0.0722·B
Z = 0.0193·R + 0.1192·G + 0.9505·B

and the XYZ values are then converted into Lab:

L* = 116·f(Y/Yn) - 16
a* = 500·[f(X/Xn) - f(Y/Yn)]
b* = 200·[f(Y/Yn) - f(Z/Zn)]

where f(t) = t^(1/3) when t > (6/29)^3 and f(t) = (1/3)·(29/6)^2·t + 4/29 otherwise, and Xn, Yn and Zn are the tristimulus values of the reference white point. R, G and B are the R, G and B values of a pixel point, and L*, a* and b* are its L, a and b values.
S102: and carrying out face detection on the original face image to obtain face characteristic points of the original face image, and obtaining a face forehead image of the color space image according to the face characteristic points.
In a specific implementation scenario, a face detection is performed on an original face image to obtain a face feature point of the original face image, please refer to fig. 2 in combination, and fig. 2 is a schematic diagram of the face feature point provided by the present invention. And acquiring the specific position of the human face characteristic point on the original human face image shown in the figure 2, and performing image segmentation on the color space image according to the acquired specific position to acquire the forehead image of the human face. For example, when a face region above the eye position is used as the forehead region, the position of the corresponding face feature point is acquired, and image division is performed according to the acquired position.
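The forehead cropping in S102 might be sketched as follows; the 68-point dlib-style landmark layout (eyes at indices 36-47, eyebrows at 17-26) is an assumption made for illustration, since the patent does not fix a specific landmark scheme:

```python
def forehead_box(landmarks, img_w, img_h):
    """Bounding box of the region above the eyes (illustrative sketch).

    `landmarks` is a list of 68 (x, y) points in a dlib-style layout
    (an assumption, not the patent's own scheme in fig. 2).
    """
    eye_pts = landmarks[36:48]     # both eyes
    brow_pts = landmarks[17:27]    # both eyebrows
    top = 0                                   # forehead extends to the image top
    bottom = min(y for _, y in eye_pts)       # down to the highest eye point
    left = max(0, min(x for x, _ in brow_pts))
    right = min(img_w, max(x for x, _ in brow_pts))
    return left, top, right, bottom
```

The returned box can then be used to crop the color space image into the face forehead image.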
S103: and carrying out face segmentation on the forehead image of the face by adopting a color clustering method to obtain an eyebrow region and a hair region.
In a specific implementation scenario, clustering is an aggregation of data, where similar data is clustered into a class. Clustering is an unsupervised classification approach, which has the advantage that no prior training process is required. Currently, the commonly used clustering methods include K-means, Gaussian Mixture Models (GMM), Mean shift, and the like. In the implementation scene, a K-means algorithm is adopted for color clustering. First an initial cluster center is randomly selected. Each pixel point in the face forehead image is then assigned to the nearest center (based on the euclidean distance of the pixel point to the cluster center). And (4) according to the class gathered in the previous step, recalculating the clustering center (the average value from all pixel points to the clustering center in the previous step). And repeating the steps until the cluster center is not changed any more.
The pixels with the same or similar colors can be divided into a cluster block through a color clustering method, a skin region and a hair region in the face forehead image can be obtained in the implementation scene, then the eyebrow region can be obtained according to the position of the face feature point corresponding to the eyebrow in the face feature point, and the hair region above the face forehead image is a hair region according to common knowledge.
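The K-means procedure just described — random initialization, nearest-center assignment by Euclidean distance, mean update, repeat until the centers stop changing — can be sketched as follows (an illustrative sketch; the sample data, seed and iteration cap are assumptions):

```python
import math
import random


def kmeans(points, k=3, max_iter=100, seed=0):
    """Plain K-means on (a, b) color values, as described in the text."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)          # randomly chosen initial centers
    for _ in range(max_iter):
        # Assignment step: each point goes to its nearest center (Euclidean)
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda j: math.dist(p, centers[j]))
            clusters[i].append(p)
        # Update step: each center becomes the mean of its assigned points
        new_centers = [
            (sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c))
            if c else centers[i]             # keep an empty cluster's old center
            for i, c in enumerate(clusters)
        ]
        if new_centers == centers:           # stop when centers no longer change
            break
        centers = new_centers
    return centers, clusters
```

With K = 3 this yields the three cluster blocks (skin, hair-colored, transition) used in the following steps.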
S104: and judging whether the eyebrow area is connected with the hair area, if so, executing the step S105, and if not, executing the step S107.
In a specific implementation scenario, whether the eyebrow region and the hair region are connected or not is determined according to the obtained eyebrow region and hair region, for example, position information of the eyebrow region and position information of the hair region can be obtained, and whether binary values are connected or not is determined according to the position information. In other implementation scenarios, connected domain analysis can be performed on the eyebrow region and the hair region to determine whether the eyebrow region and the hair region are connected.
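A minimal sketch of the connectivity test, assuming the two regions are given as lists of pixel coordinates and using simple 4-adjacency (the patent also allows a full connected domain analysis instead):

```python
def regions_connected(eyebrow_pixels, hair_pixels):
    """True if any eyebrow pixel touches (or overlaps) a hair pixel."""
    hair = set(hair_pixels)
    for x, y in eyebrow_pixels:
        # 4-neighbors plus the pixel itself (overlap also counts as connected)
        if {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1), (x, y)} & hair:
            return True
    return False
```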
S105: and judging whether the range of the connecting area of the eyebrow area and the hair area meets a preset condition, if not, executing step S106, and if so, executing step S107.
In a specific implementation scenario, when the eyebrow region and the hair region are connected, a range of the connection region between the eyebrow region and the hair region is obtained, and whether the range meets a preset condition is judged. For example, the width of the connecting region may be obtained, and it is determined whether the width is smaller than a preset width threshold, or the overlapping area of the eyebrow region and the hair region may be calculated, and it is determined whether the area is smaller than a preset area threshold.
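The preset condition can be sketched as a pair of threshold tests; the specific width and area thresholds are assumptions, since the patent leaves them as preset values:

```python
def connection_ok(conn_region, width_thresh, area_thresh):
    """True if the eyebrow-hair connection region is small enough.

    `conn_region` is a list of (x, y) pixels in the connection region;
    the thresholds are assumed preset values.
    """
    xs = [x for x, _ in conn_region]
    width = max(xs) - min(xs) + 1   # horizontal extent of the connection
    area = len(conn_region)         # overlapping pixel count
    return width < width_thresh and area < area_thresh
```

If this returns False, the eyebrow region is judged occluded and a rephotograph prompt is sent.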
S106: and judging that the eyebrow area is blocked, and sending a rephotography prompt to the user.
In a specific implementation scenario, if the range of the connection region between the eyebrow region and the hair region does not satisfy the preset condition, a rephoto prompt is sent to the user to remind the user to take a picture again.
S107: and sending a qualified prompt to the user.
In a specific implementation scenario, if the range of the connection region between the eyebrow region and the hair region meets a preset condition, or the eyebrow region and the hair region are not connected, a qualified prompt is sent to the user, and the user may perform the next operation, such as downloading a photo or printing a photo. In other implementation scenarios, step S107 may not be performed, and the default is acceptable.
As can be seen from the above description, in the present embodiment, a face forehead image of a color space image is obtained according to a face feature point; performing face segmentation on the forehead image of the face by adopting a color clustering method to obtain an eyebrow area and a hair area; when the connection between the eyebrow area and the hair area is detected, judging whether the range of the connection area between the eyebrow area and the hair area meets a preset condition or not; if the range of the connection area does not meet the preset condition, a rephotograph prompt is sent to the user, the effectiveness of the user in shooting the photos can be effectively improved, repeated rephotographs are avoided, time is effectively saved, and resource waste is reduced.
Referring to fig. 3, fig. 3 is a schematic flow chart of a first embodiment of obtaining an eyebrow area according to the present invention. The method for acquiring the eyebrow area comprises the following steps:
S201: Acquiring an original face image, converting the original face image into a Lab color space, and acquiring a color space face image.
S202: Performing face detection on the original face image to obtain face feature points of the original face image, and obtaining a face forehead image of the color space image according to the face feature points.
In a specific implementation scenario, steps S201 to S202 are substantially the same as steps S101 to S102 in the first embodiment of the method for detecting occlusion of an eyebrow region provided by the present invention, and are not described herein again.
S203: Acquiring a color value of each pixel point of the face forehead image, and acquiring a data point set according to the color value of each pixel point.
In a specific implementation scenario, the K-means algorithm is used for clustering. The face forehead image contains three regions: a hair region, a skin region, and a transition region between the skin and the hair and eyebrows. K is therefore set to 3.
First, the color value of each pixel point of the face forehead image is acquired. The face forehead image is an image in the Lab color space with two color channels, a and b, so the color value (Va, Vb) of each pixel point is obtained, and a data point set S is generated from the color values of all pixel points, where the color values in the data point set S are arranged according to the position information of the corresponding pixel points.
S204: randomly setting a plurality of clustering centers in the data point set, calculating the clustering distance from each pixel point in the data point set to each clustering center, obtaining the clustering center corresponding to each pixel point according to the length of the clustering distance, and generating a plurality of clustering blocks.
In a specific implementation scenario, K cluster centers are randomly set in the data point set; in this implementation scenario, K is 3. The clustering distance from each pixel point in the data point set to the K cluster centers is calculated; in this implementation scenario, the distance is the Euclidean distance, although in other implementation scenarios other distances, such as the Manhattan distance, may be used. The cluster center corresponding to each pixel point is obtained according to the length of the clustering distance; in this implementation scenario, it is the cluster center with the shortest clustering distance. For example, if the clustering distances from a pixel point to the three cluster centers L, M and N are l, m and n respectively, and l > m > n, then the cluster center corresponding to the pixel point is N. Each cluster center and its corresponding pixel points form one cluster block, so K cluster blocks are generated in this implementation scenario.
S205: and acquiring the brightness value of each clustering block, dividing the clustering block into a transition region, a skin region and a hair region according to the brightness value, and dividing the hair region into an eyebrow region and a hair region.
In a specific implementation scenario, the brightness value of each cluster block is obtained. The L value of each pixel point represents its brightness, and the average of the brightness values of all pixel points in each cluster block can be used as the brightness value of that cluster block. It can be understood that, in most cases, Asian people have dark hair, so the cluster block corresponding to the hair region has the lowest brightness value and the cluster block corresponding to the skin region has the highest. Therefore, the region corresponding to the cluster block with the lowest brightness value is taken as the hair region.
After the hair region is acquired, it is divided according to position information to obtain the eyebrow region and the hair region: for example, the eyebrow region has a small area, the hair region has a large area, the eyebrow region is located below the hair region, and the face feature points provide rough eyebrow positioning information. In other implementation scenarios, connected domain analysis may be performed on the hair region to obtain the eyebrow region and the hair region.
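The brightness-based labelling of the cluster blocks can be sketched as follows, assuming K = 3 as in this implementation scenario (darkest block = hair, brightest = skin, middle = transition):

```python
def label_clusters(blocks):
    """Assign hair/skin/transition labels to exactly three cluster blocks.

    `blocks` maps a cluster id to the list of L (brightness) values of
    its pixel points; the mean L value is used as the block's brightness.
    """
    means = {cid: sum(v) / len(v) for cid, v in blocks.items()}
    order = sorted(means, key=means.get)  # cluster ids by ascending brightness
    # Darkest -> hair, brightest -> skin, middle -> transition (per the text)
    return {order[0]: "hair", order[-1]: "skin", order[1]: "transition"}
```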
In other implementation scenarios, after the cluster blocks are generated, the cluster centers need to be updated iteratively until the position difference between the updated cluster center and the previous cluster center is smaller than a preset difference threshold, and the cluster blocks are obtained based on the cluster centers obtained in the last iteration.
Specifically, please refer to fig. 4, in which fig. 4 is a flowchart illustrating a method for obtaining a cluster block according to an embodiment of the present invention. The method for acquiring the clustering block provided by the invention comprises the following steps:
S301: Acquiring a color value of each pixel point of the face forehead image, and acquiring a data point set according to the color value of each pixel point.
S302: randomly setting a plurality of clustering centers in the data point set, calculating the clustering distance from each pixel point in the data point set to each clustering center, obtaining the clustering center corresponding to each pixel point according to the length of the clustering distance, and generating a plurality of clustering blocks.
In a specific implementation scenario, steps S301 to S302 are substantially the same as steps S203 to S204 of the second embodiment of the method for detecting occlusion of an eyebrow region provided by the present invention, and are not described herein again.
S303: and obtaining the updating center of each clustering block by an averaging method, and judging whether the difference between the updating center and the clustering center is smaller than a preset difference threshold value. If yes, go to step S304. If not, go to step S302.
In a specific implementation scenario, the update center of each cluster block is obtained by an averaging method: for example, the position information of each pixel point of the cluster block can be obtained, the average of the position information is calculated, and the average is used as the update center. Whether the distance between the positions of the update center and the cluster center is smaller than a preset difference threshold is then judged. If it is smaller, the update center has changed little enough; if it is greater than or equal to the threshold, the update center has changed a lot and its position is not yet stable. In the latter case, the update center is taken as the new cluster center, and steps S302 to S303 are repeated until the difference between the position of the last obtained update center and the cluster center is smaller than the preset difference threshold.
S304: Taking the update center as the new clustering center, and acquiring the new clustering block corresponding to the new clustering center.
In a specific implementation scenario, when the difference between the positions of the update center and the cluster center is smaller than the preset difference threshold, the position of the update center is proved to be stable; the update center is then taken as the new cluster center, and the cluster block corresponding to the new cluster center is obtained according to the method steps above.
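The center update and convergence test of steps S303-S304 can be sketched as follows (the threshold `eps` stands in for the preset difference threshold, which the patent does not fix):

```python
import math


def update_center(points):
    """New center = mean of the (x, y) points assigned to the cluster."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)


def converged(old, new, eps=1e-3):
    """Stop when the center moved less than the preset difference threshold."""
    return math.dist(old, new) < eps
```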
As can be seen from the above description, in this embodiment, a color value of each pixel point of the forehead image of the human face is obtained, and a data point set is obtained according to the color value of each pixel point; randomly setting a plurality of clustering centers in a data point set, calculating the clustering distance from each pixel point in the data point set to each clustering center, acquiring the clustering center corresponding to each pixel point according to the length of the clustering distance, and generating a plurality of clustering blocks; the brightness value of each clustering block is obtained, the clustering blocks are divided into transition areas, skin areas and hair areas according to the brightness values, the hair areas are divided into eyebrow areas and hair areas, and the accuracy of the identified hair areas can be effectively improved.
Referring to fig. 5, fig. 5 is a schematic flow chart of a second embodiment of obtaining an eyebrow area according to the present invention. The method for acquiring the eyebrow area provided by the invention comprises the following steps:
S401: Acquiring an original face image, converting the original face image into a Lab color space, and acquiring a color space face image.
S402: Performing face detection on the original face image to obtain face feature points of the original face image, and obtaining a face forehead image of the color space image according to the face feature points.
S403: Acquiring a color value of each pixel point of the face forehead image, and acquiring a data point set according to the color value of each pixel point.
S404: Randomly setting a plurality of clustering centers in the data point set, calculating the clustering distance from each pixel point in the data point set to each clustering center, obtaining the clustering center corresponding to each pixel point according to the length of the clustering distance, and generating a plurality of clustering blocks.
S405: Acquiring the brightness value of each clustering block, dividing the clustering blocks into a transition region, a skin region and a hair region according to the brightness values, and dividing the hair region into an eyebrow region and a hair region.
In a specific implementation scenario, steps S401 to S405 are substantially the same as steps S201 to S205 in the first embodiment of the method for acquiring an eyebrow region provided by the present invention, and are not described herein again.
S406: and dividing the color space face image into left and right according to the face characteristic points to obtain two half-area face images.
In a specific implementation scenario, since the face is not absolutely symmetric left and right, the color space face image is divided into left and right halves according to the face feature points (for example, along the vertical center line through the nose, eyes and mouth among the face feature points), obtaining two half-area face images. In other implementation scenarios, the color space face image can also be divided left and right by other image processing techniques to obtain the two half-area face images.
S407: and performing binary segmentation on each half-area face image according to the hair area to obtain a binary face image of each half-area face image.
In a specific implementation scene, binary segmentation is performed on each half-area face image according to the hair area: according to the position information of the hair area, the pixel value of each pixel point at a position corresponding to the hair area in each half-area face image is set to 255, and the pixel values of the remaining pixel points are set to 0, obtaining a binary face image of each half-area face image.
S408: acquiring a plurality of connected domains of the binary face image, and acquiring attribute information of each connected domain, wherein the attribute information comprises a central position.
In a specific implementation scenario, the binary face image includes a plurality of connected domains. Connected domain analysis is performed on the binary face image in 4-connectivity mode, so as to calculate the length, width, area and center attribute information of each connected domain.
S409: and calculating the pixel distance between the eyebrow marking point of the face characteristic point and each connected domain, taking the connected domain corresponding to the shortest pixel distance as an eyebrow region, and taking the connected domain with the center position in the upper half region of the binary face image as a hair region.
In a specific implementation scenario, the pixel distance between the eyebrow marking point of the face feature points and each connected domain is calculated, and the connected domain with the minimum distance is taken as the eyebrow region. The connected domain whose center position is located in the upper half area of the binary face image is taken as the hair area: because the hair is located at the top of the face, a connected domain in the lower half of the binary face image cannot be hair.
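The connected-domain analysis and region classification of steps S407 to S409 can be sketched as follows. This is a BFS-based 4-connectivity labeling; the function names, the centroid-based distance to the eyebrow mark point, and the strict half-height rule are illustrative choices:

```python
from collections import deque

def connected_components(img):
    """4-connected component labeling of a binary image (rows of 0/255).

    Returns a label map and a dict of component centers {label: (cy, cx)}."""
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    centers, cur = {}, 0
    for y in range(h):
        for x in range(w):
            if img[y][x] == 255 and labels[y][x] == 0:
                cur += 1
                q, pix = deque([(y, x)]), []
                labels[y][x] = cur
                while q:
                    cy, cx = q.popleft()
                    pix.append((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w and img[ny][nx] == 255 and labels[ny][nx] == 0:
                            labels[ny][nx] = cur
                            q.append((ny, nx))
                centers[cur] = (sum(p[0] for p in pix) / len(pix),
                                sum(p[1] for p in pix) / len(pix))
    return labels, centers

def classify(centers, h, eyebrow_pt):
    """S409 sketch: the component nearest to the eyebrow mark point is the
    eyebrow; components centered in the upper half of the image are hair."""
    eyebrow = min(
        centers,
        key=lambda l: (centers[l][0] - eyebrow_pt[0]) ** 2 + (centers[l][1] - eyebrow_pt[1]) ** 2,
    )
    hair = [l for l, (cy, _) in centers.items() if cy < h / 2]
    return eyebrow, hair
```

In the patent's flow these labels also feed step S510, where equal labels on the eyebrow and hair components indicate connection.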
As can be seen from the above description, in this embodiment, the color space face image is divided left and right according to the face feature points to obtain two half-area face images; performing binary segmentation on each half-area face image according to the hair area to obtain a binary face image of each half-area face image; acquiring a plurality of connected domains of a binary face image, and acquiring attribute information of each connected domain, wherein the attribute information comprises a central position; the pixel distance of each connected domain is calculated according to the characteristic points of the face, the connected domain corresponding to the shortest pixel distance is used as an eyebrow region, the connected domain with the center position located in the upper half region of the binary face image is used as a hair region, and the accuracy of recognition can be effectively improved.
Referring to fig. 6, fig. 6 is a schematic flowchart illustrating a second embodiment of a method for detecting an occlusion of an eyebrow region according to the present invention, the method for detecting an occlusion of an eyebrow region according to the present invention includes the following steps:
s501: and acquiring an original face image, converting the original face image into a Lab color space, and acquiring a color space face image.
S502: and carrying out face detection on the original face image to obtain face characteristic points of the original face image, and obtaining a face forehead image of the color space image according to the face characteristic points.
S503: and acquiring a color value of each pixel point of the forehead image of the human face, and acquiring a data point set according to the color value of each pixel point.
S504: randomly setting a plurality of clustering centers in the data point set, calculating the clustering distance from each pixel point in the data point set to each clustering center, obtaining the clustering center corresponding to each pixel point according to the length of the clustering distance, and generating a plurality of clustering blocks.
S505: and acquiring the brightness value of each clustering block, dividing the clustering blocks into a transition region, a skin region and a hair region according to the brightness values, and dividing the hair region into an eyebrow region and a hair region.
S506: and dividing the color space face image into left and right according to the face characteristic points to obtain two half-area face images.
S507: and performing binary segmentation on each half-area face image according to the hair area to obtain a binary face image of each half-area face image.
S508: acquiring a plurality of connected domains of the binary face image, and acquiring attribute information of each connected domain, wherein the attribute information comprises a central position.
S509: and calculating the pixel distance between the eyebrow marking point of the face characteristic point and each connected domain, taking the connected domain corresponding to the shortest pixel distance as an eyebrow region, and taking the connected domain with the center position in the upper half region of the binary face image as a hair region.
In a specific implementation scenario, steps S501 to S509 are substantially the same as steps S401 to S409 in the second embodiment of the method for acquiring an eyebrow region provided by the present invention, and are not described herein again.
S510: and acquiring connected domain labels of the eyebrow area and the hair area, and judging whether the connected domain labels of the eyebrow area and the hair area are equal. If so, go to step S511, otherwise go to step S516.
In a specific implementation scenario, when performing connected domain analysis, a label is added to each connected domain; pixel points with the same label belong to the same connected domain, that is, if the connected domain labels of two regions are the same, the two regions are connected to each other. Therefore, the connected domain labels of the eyebrow area and the hair area are obtained, and whether the two labels are equal is judged.
S511: the eyebrow area and the hair area are connected.
In a specific implementation scenario, the connected component labels of the eyebrow area and the hair area are equal, which indicates that the eyebrow area and the hair area are connected. Therefore, it is necessary to determine whether the range of the connecting region between the eyebrow region and the hair region satisfies a predetermined condition. If the preset condition is not met, the picture needs to be taken again.
S512: and acquiring the length and the width of the eyebrow region according to the characteristic points of the human face, and intercepting the eyebrow image on the binary human face image according to the length and the width.
In one embodiment, please refer to FIG. 2 in combination: the lateral width W_m and the thickness H_m of the eyebrow region are obtained according to the face feature points shown in FIG. 2. The left-right interval of the eyebrow is shifted upward by an offset of 1.5 × H_m, and an eyebrow image of size W_m × H_m is intercepted.
S513: and projecting the eyebrow image downwards to obtain a projected image, and obtaining the number of pixel points with the pixel value larger than or equal to 255 in the projected image.
In a specific implementation scene, the eyebrow image is projected downwards to obtain a projected image, and the number C_m of pixel points with a pixel value larger than or equal to 255 in the projected image is obtained.
S514: and acquiring a connection quantization value according to the number and the width of the pixel points, wherein if the connection quantization value is larger than a preset quantization threshold value, the range of the connection region does not meet the preset condition.
In a specific implementation scenario, the connection quantization value is obtained from the number of pixel points and the width, that is, Ret = C_m / W_m. If the connection quantization value is greater than the preset quantization threshold, the range of the connection region does not satisfy the preset condition, that is, the connection range between the hair region and the eyebrow region is too large. If the connection quantization value is less than or equal to the preset quantization threshold, the connection range between the hair region and the eyebrow region is qualified, and step S517 is executed.
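A minimal sketch of the projection-based quantization of steps S513 and S514, assuming the downward projection marks a column as occupied when any pixel in that column is 255; the default threshold value is a placeholder assumption, as the patent only refers to a preset quantization threshold:

```python
def connection_quantization(eyebrow_img, threshold=0.8):
    """S513-S514 sketch: project the binary eyebrow image downward and
    quantify how much of the eyebrow width is touched by white pixels.

    eyebrow_img: rows of 0/255 values; threshold is an assumed preset.
    Returns (Ret, blocked); blocked=True means the connection range is
    too large and the eyebrow is judged occluded."""
    w_m = len(eyebrow_img[0])
    # downward projection: a column counts toward C_m if any pixel in it is 255
    c_m = sum(1 for x in range(w_m) if any(row[x] == 255 for row in eyebrow_img))
    ret = c_m / w_m                 # Ret = C_m / W_m
    return ret, ret > threshold
```

With this reading, Ret measures the fraction of the eyebrow's width covered by hair-colored pixels in the intercepted strip above the eyebrow.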
S515: and judging that the eyebrow area is blocked, and sending a rephotography prompt to the user.
In a specific implementation scenario, step S515 is substantially the same as step S106 in the first embodiment of the method for detecting occlusion of an eyebrow region provided by the present invention, and details thereof are not repeated here.
S516: the eyebrow area and the hair area are not connected.
In a specific implementation scenario, when the connected domain labels of the eyebrow region and the hair region are not equal, it means that the eyebrow region and the hair region are not connected. Therefore, the original face image at this time meets the photographing requirement.
S517: and sending a qualification prompt to the user.
In a specific implementation scenario, step S517 is substantially the same as step S107 in the first embodiment of the method for detecting occlusion of an eyebrow region provided by the present invention, and details thereof are not repeated here.
As can be seen from the above description, in this embodiment, the connected domain tags of the eyebrow region and the hair region are obtained, and whether the connected domain tags of the eyebrow region and the hair region are equal or not is determined, and if the connected domain tags of the eyebrow region and the hair region are equal, the eyebrow region and the hair region are connected, so that the accuracy of determining whether the eyebrow region and the hair region are connected or not can be effectively improved.
Referring to fig. 7, fig. 7 is a flowchart illustrating a method for obtaining a forehead image of a human face according to an embodiment of the present invention. The method for acquiring the forehead image of the human face comprises the following steps:
s601: and converting the original face image into a chroma brightness face image.
In one specific implementation scenario, the original face image is converted to a chroma brightness face image according to the following formula.
Y = 0.299 × R + 0.587 × G + 0.114 × B
C_b = -0.1687 × R - 0.3313 × G + 0.5 × B + 128
C_r = 0.5 × R - 0.4187 × G - 0.0813 × B + 128

wherein R, G and B are the R, G and B values of a pixel point, and Y, C_b and C_r are respectively the Y value, C_b value and C_r value of the pixel point.
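The RGB to chroma-brightness conversion can be sketched with the standard full-range BT.601 coefficients. The patent's exact coefficients appear only in an unreproduced figure, so these values are an assumption of the common formula:

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range BT.601 RGB -> YCbCr conversion (assumed coefficients;
    the patent's formula figure is not reproduced in the text)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr
```

Under this convention, neutral grays map to (C_b, C_r) = (128, 128), which is why the skin color statistical template below is an ellipse in the (C_r, C_b) plane around the skin tone cluster.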
S602: and acquiring a skin color statistical template, and acquiring a forehead skin highest point according to the skin color statistical template and the chromaticity of each pixel point in the chromaticity lightness face image.
In one specific implementation scenario, a skin color statistical template is obtained. The template is derived from a plurality of labeled face images: for example, the C_r and C_b values of the skin color pixel points of a number of people's photographs are collected into a data set, an ellipse equation is then used to approximately fit the distribution edge of the C_r and C_b values of the skin color pixel points, and this ellipse equation is determined as the skin color statistical template. The chroma value (C_r, C_b) of each pixel point in the chroma brightness face image is then compared with the skin color statistical template to judge whether the pixel point lies inside the template: if so, the pixel point belongs to the skin area; if not, it does not. Pixel points belonging to the skin area are recorded as 1 and the remaining pixel points as 0, generating a skin area template.
The skin area template is traversed line by line; the first line containing a non-zero point gives the highest point y_min of the forehead skin in the y-axis direction. The coordinates of the forehead skin highest point P_top can be obtained according to the following formulas:

P_top.x = M_f.width / 2
P_top.y = y_min
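The row-by-row scan for the forehead top point can be sketched as follows; here skin_mask is the 0/1 skin area template and the function name is illustrative (M_f.width is read as the width of the face image, which the text does not state explicitly):

```python
def forehead_top(skin_mask, face_width):
    """S602 sketch: scan the skin area template row by row; the first row
    containing a nonzero point gives y_min, and P_top.x is taken at the
    horizontal middle of the face image."""
    for y, row in enumerate(skin_mask):
        if any(v != 0 for v in row):
            return (face_width // 2, y)   # (P_top.x, P_top.y)
    return None  # no skin detected
```

Returning None when no skin row is found is a defensive choice added here; the patent does not discuss that case.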
s603: and acquiring a human face forehead image according to the forehead skin highest point and the human face characteristic points.
In a specific implementation scenario, please continue to refer to fig. 2. The face feature points are used to locate the forehead area of the face, above the eyes and below the hairline, including the forehead and the eyebrows, and an irregular polygonal area of the face is selected by the face feature points to segment the forehead image of the face. Specifically, the coordinates P_t_20 and P_t_25 of the face feature points 20 and 25 in fig. 2 are obtained, together with the coordinates P_t_19 and P_t_26 of the feature points 19 and 26 on the left and right sides of the eyebrows. The face forehead region ROI ∈ {x, y, width, height} can then be obtained according to the following formulas.
ROI.x = P_t_19.x
ROI.y = P_top.y
ROI.width = P_t_26.x - P_t_19.x
ROI.height = (P_t_20.y + P_t_25.y) / 2 - P_top.y
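The ROI formulas above can be collected into a small helper. The feature-point numbering follows FIG. 2, and the two y-coordinates being averaged are assumed to be those of points 20 and 25 (one subscript is garbled in the source text); all points are (x, y) tuples:

```python
def forehead_roi(p19, p20, p25, p26, p_top_y):
    """S603 sketch: forehead ROI from eyebrow feature points 19/20/25/26
    and the y-coordinate of the forehead skin highest point P_top."""
    return {
        "x": p19[0],                               # ROI.x = P_t_19.x
        "y": p_top_y,                              # ROI.y = P_top.y
        "width": p26[0] - p19[0],                  # ROI.width = P_t_26.x - P_t_19.x
        "height": (p20[1] + p25[1]) // 2 - p_top_y,  # ROI.height
    }
```

The resulting rectangle spans from the outer eyebrow corners horizontally and from the hairline down to the eyebrow tops vertically, which is the region the clustering of steps S403 to S405 operates on.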
As can be seen from the above description, in this embodiment, the skin color statistical template is obtained, and the highest point of the forehead skin is obtained according to the skin color statistical template and the chromaticity of each pixel point in the chromaticity brightness face image. The human face forehead image is obtained according to the forehead skin highest point and the human face characteristic points, the human face forehead image can be accurately obtained, and the data volume required by subsequent image processing is reduced.
Referring to fig. 8, fig. 8 is a schematic structural diagram of an eyebrow area detecting system according to an embodiment of the present invention. The eyebrow area detecting system 10 comprises a conversion module 11, a detection module 12, a segmentation module 13, a judgment module 14 and a prompt module 15.
The conversion module 11 is configured to obtain an original face image, convert the original face image into a Lab color space, and obtain a color space face image. The detection module 12 is configured to perform face detection on the original face image, obtain a face feature point of the original face image, and obtain a face forehead image of the color space image according to the face feature point. The segmentation module 13 is configured to perform face segmentation on the forehead image of the face by using a color clustering method to obtain an eyebrow region and a hair region. The judging module 14 is configured to judge whether the eyebrow region is connected to the hair region, and if the eyebrow region is connected to the hair region, judge whether a range of a connection region between the eyebrow region and the hair region meets a preset condition. The prompting module 15 is configured to determine that the eyebrow area is blocked if the range of the connection area does not satisfy the preset condition, and send a rephotography prompt to the user.
The segmentation module 13 is further configured to obtain a color value of each pixel point of the forehead image of the human face, and obtain a data point set according to the color value of each pixel point; randomly setting a plurality of clustering centers in a data point set, calculating the clustering distance from each pixel point in the data point set to each clustering center, acquiring the clustering center corresponding to each pixel point according to the length of the clustering distance, and generating a plurality of clustering blocks; and acquiring the brightness value of each clustering block, dividing the clustering blocks into a transition region, a skin region and a hair region according to the brightness values, and dividing the hair region into an eyebrow region and a hair region.
The segmentation module 13 is further configured to obtain an update center of each cluster block by using an averaging method, and determine whether a difference between the update center and the cluster center is smaller than a preset difference threshold; if the difference is smaller than the preset difference threshold value, taking the updating center as a new clustering center, acquiring clustering blocks corresponding to the new distance center, and executing the steps of acquiring the brightness value of each clustering block; if the difference is larger than or equal to the preset difference threshold value, the updating center is used as a new clustering center, the step of calculating the clustering distance from each pixel point in the data point set to each clustering center is repeatedly executed, the clustering center corresponding to each pixel point is obtained according to the length of the clustering distance, and a plurality of clustering blocks are generated until the difference is smaller than the preset difference distance.
The segmentation module 13 is further configured to divide the color space face image into left and right sides according to the face feature points to obtain two half-area face images; performing binary segmentation on each half-area face image according to the hair area to obtain a binary face image of each half-area face image; acquiring a plurality of connected domains of a binary face image, and acquiring attribute information of each connected domain, wherein the attribute information comprises a central position; and calculating the pixel distance of each connected domain according to the characteristic points of the face, taking the connected domain corresponding to the shortest pixel distance as an eyebrow region, and taking the connected domain with the center position positioned in the upper half region of the binary face image as a hair region.
The judging module 14 is further configured to obtain connected domain tags of the eyebrow region and the hair region, and judge whether the connected domain tags of the eyebrow region and the hair region are equal; if the connected domain labels of the eyebrow area and the hair area are equal, connecting the eyebrow area and the hair area; if the connected domain labels of the eyebrow area and the hair area are not equal, the eyebrow area and the hair area are not connected.
The judging module 14 is further configured to obtain the length and the width of the eyebrow region according to the feature points of the human face, intercept the eyebrow image on the binary face image according to the length and the width, project the eyebrow image downwards, obtain a projected image, and obtain the number of pixels with pixel values greater than or equal to 255 in the projected image; and acquiring a connection quantization value according to the number and the width of the pixel points, wherein if the connection quantization value is larger than a preset quantization threshold value, the range of the connection region does not meet the preset condition.
The detection module 12 is further configured to convert the original face image into a chrominance and brightness face image; acquiring a skin color statistical template, and acquiring a forehead skin highest point according to the skin color statistical template and the chromaticity of each pixel point in a chromaticity brightness face image; and acquiring a human face forehead image according to the forehead skin highest point and the human face characteristic points.
As can be seen from the above description, the detection system for blocking the eyebrow area in this embodiment obtains the face forehead image of the color space image according to the face feature point; performing face segmentation on the forehead image of the face by adopting a color clustering method to obtain an eyebrow area and a hair area; when the connection between the eyebrow area and the hair area is detected, judging whether the range of the connection area between the eyebrow area and the hair area meets a preset condition or not; if the range of the connection area does not meet the preset condition, a rephotograph prompt is sent to the user, so that the effectiveness of the user in shooting the photo can be effectively improved, repeated shooting is avoided, the time is effectively saved, and the resource waste is reduced.
Referring to fig. 9, fig. 9 is a schematic structural diagram of a photographing apparatus according to an embodiment of the present invention. The photographing apparatus 20 includes a processor 21 and a memory 22, the processor 21 being coupled to the memory 22. The memory 22 stores a computer program which, when executed by the processor 21, implements the methods shown in fig. 1 and figs. 3-7. The detailed methods can be referred to above and are not described herein again.
As can be seen from the above description, the photographing apparatus in this embodiment can determine whether the range of the connection region between the eyebrow region and the hair region satisfies the preset condition when detecting that the eyebrow region is connected to the hair region; if the range of the connection area does not meet the preset condition, a rephotograph prompt is sent to the user, the effectiveness of the user in shooting the photos can be effectively improved, repeated rephotographs are avoided, time is effectively saved, and resource waste is reduced.
Referring to fig. 10, fig. 10 is a schematic structural diagram of a storage medium according to an embodiment of the present invention. The storage medium 30 stores at least one computer program 31, the computer program 31 being adapted to be executed by a processor to implement the methods shown in fig. 1 and figs. 3-7. The detailed methods can be referred to above and are not described herein again. In one embodiment, the storage medium 30 may be a memory chip in a terminal, a hard disk, a removable hard disk, a flash disk, an optical disk, or another readable and writable storage device, and may also be a server or the like.
As is apparent from the above description, the computer program in the storage medium in the present embodiment may be configured to determine whether a range of a connection region between an eyebrow region and a hair region satisfies a preset condition when connection between the eyebrow region and the hair region is detected; if the range of the connection area does not meet the preset condition, a rephotograph prompt is sent to the user, the effectiveness of the user in shooting the photos can be effectively improved, repeated rephotographs are avoided, time is effectively saved, and resource waste is reduced.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a non-volatile computer-readable storage medium, and can include the processes of the embodiments of the methods described above when the program is executed. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory, among others. Non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDRSDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), Rambus Direct RAM (RDRAM), direct bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM).
All possible combinations of the technical features in the above embodiments may not be described for the sake of brevity, but should be considered as being within the scope of the present disclosure as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the present application. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, and these are all within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (9)

1. A method for detecting occlusion of an eyebrow region, comprising:
acquiring an original face image, converting the original face image into a Lab color space, and acquiring a color space face image;
performing face detection on the original face image, acquiring face characteristic points of the original face image, and acquiring a face forehead image of the color space image according to the face characteristic points;
carrying out face segmentation on the forehead image of the face by adopting a color clustering method, dividing the forehead image into a hair area comprising hair and eyebrows, a skin area and a transition area between the skin and the hair and eyebrows, and dividing the hair area into an eyebrow area and a hair area;
judging whether the eyebrow area is connected with the hair area or not, and if the eyebrow area is connected with the hair area, judging whether the range of the connection area of the eyebrow area and the hair area meets a preset condition or not;
if the range of the connection region does not meet the preset condition, judging that the eyebrow region is shielded, and sending a rephotography prompt to a user;
the dividing of the hair region into an eyebrow region and a hair region includes:
dividing the color space face image left and right according to the face characteristic points to obtain two half-area face images;
performing binary segmentation on each half-area face image according to the hair area to obtain a binary face image of each half-area face image;
acquiring a plurality of connected domains of the binary face image, and acquiring attribute information of each connected domain, wherein the attribute information comprises a central position;
and calculating the pixel distance between the eyebrow labeling point of the face characteristic point and each connected domain, taking the connected domain corresponding to the shortest pixel distance as the eyebrow region, and taking the connected domain with the center position positioned in the upper half region of the binary face image as the hair region.
2. The method for detecting occlusion of an eyebrow region according to claim 1, wherein the step of performing face segmentation on the forehead image by using a color clustering method to divide the forehead image into a hair region including hair and eyebrows, a skin region, and a transition region between skin and hair and eyebrows, and dividing the hair region into an eyebrow region and a hair region comprises:
acquiring a color value of each pixel point of the forehead image of the human face, and acquiring a data point set according to the color value of each pixel point;
randomly setting a plurality of clustering centers in the data point set, calculating the clustering distance from each pixel point in the data point set to each clustering center, acquiring the clustering center corresponding to each pixel point according to the length of the clustering distance, and generating a plurality of clustering blocks;
acquiring the brightness value of each clustering block, dividing the clustering blocks into a transition region, a skin region and a hair region according to the brightness values, and dividing the hair region into the eyebrow region and the hair region.
3. The method according to claim 2, wherein the step of obtaining the cluster center corresponding to each pixel point according to the length of the cluster distance to generate a plurality of cluster blocks comprises:
obtaining an updating center of each clustering block through an averaging method, and judging whether the difference between the updating center and the clustering center is smaller than a preset difference threshold value or not;
if the difference is smaller than the preset difference threshold, taking the updating center as a new clustering center, acquiring clustering blocks corresponding to the new distance center, and executing the acquiring of the brightness value of each clustering block and the subsequent steps;
if the difference is larger than or equal to the preset difference threshold, taking the updating center as a new clustering center, repeatedly executing the step of calculating the clustering distance from each pixel point in the data point set to each clustering center, acquiring the clustering center corresponding to each pixel point according to the length of the clustering distance, and generating a plurality of clustering blocks until the difference is smaller than the preset difference threshold.
4. The method for detecting occlusion of an eyebrow region according to claim 1, wherein the step of determining whether the eyebrow region is connected to the hair region comprises:
acquiring connected domain labels of the eyebrow area and the hair area, and judging whether the connected domain labels of the eyebrow area and the hair area are equal;
if the connected domain labels of the eyebrow region and the hair region are equal, connecting the eyebrow region and the hair region;
and if the connected domain labels of the eyebrow area and the hair area are not equal, the eyebrow area and the hair area are not connected.
5. The method for detecting occlusion of an eyebrow area according to claim 1, wherein the step of determining whether the range of the connecting area satisfies a predetermined condition comprises:
acquiring the length and the width of the eyebrow area according to the face characteristic points, intercepting an eyebrow image on the binary face image according to the length and the width,
projecting the eyebrow image downwards to obtain a projected image, and obtaining the number of pixel points with the pixel value larger than or equal to 255 in the projected image;
and acquiring a connection quantization value according to the number of the pixel points and the width, wherein if the connection quantization value is larger than a preset quantization threshold value, the range of the connection area does not meet the preset condition.
6. The method for detecting occlusion of an eyebrow region according to claim 1, wherein the step of obtaining the forehead image of the face in the color space image according to the face feature points comprises:
converting the original face image into a chrominance-luminance face image;
acquiring a skin color statistical template, and acquiring the highest point of the forehead skin according to the skin color statistical template and the chrominance of each pixel point in the chrominance-luminance face image;
and acquiring the face forehead image according to the highest point of the forehead skin and the face characteristic points.
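One way to read the "highest point of the forehead skin" step is a top-down scan of the chrominance channel against the skin template. The sketch below assumes the template reduces to a chrominance interval `[skin_lo, skin_hi]` — a simplification, since the patent only says "skin color statistical template":

```python
import numpy as np

def forehead_top(chroma_img, skin_lo, skin_hi):
    """Scan a single-channel chrominance image from the top row down and
    return the index of the first row containing skin-coloured pixels,
    taken as the highest point of the forehead skin (None if no skin)."""
    skin = (chroma_img >= skin_lo) & (chroma_img <= skin_hi)
    rows = np.flatnonzero(skin.any(axis=1))  # rows with at least one skin pixel
    return int(rows[0]) if rows.size else None
```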
7. A system for detecting occlusion of an eyebrow region, comprising:
the conversion module is used for acquiring an original face image, converting the original face image into a Lab color space and acquiring a color space face image;
the detection module is used for carrying out face detection on the original face image, acquiring face characteristic points of the original face image and acquiring a face forehead image of the color space image according to the face characteristic points;
the segmentation module is used for performing face segmentation on the face forehead image by a color clustering method, dividing the face forehead image into a hair area comprising both the hair and the eyebrows, a skin area, and a transition area between the skin and the hair and eyebrows, and then dividing that hair area into an eyebrow area and a hair area;
the judging module is used for judging whether the eyebrow area is connected with the hair area, and if so, judging whether the range of the connection area between the eyebrow area and the hair area meets a preset condition;
the prompting module is used for judging that the eyebrow area is occluded and sending a re-photographing prompt to the user if the range of the connection area does not meet the preset condition;
the segmentation module is specifically configured to:
dividing the color space face image into left and right halves according to the face characteristic points to obtain two half-area face images;
performing binary segmentation on each half-area face image according to the hair area to obtain a binary face image of each half-area face image;
acquiring a plurality of connected domains of the binary face image, and acquiring attribute information of each connected domain, wherein the attribute information comprises a center position;
and calculating the pixel distance between the eyebrow marking point of the face characteristic points and each connected domain, taking the connected domain corresponding to the shortest pixel distance as the eyebrow area, and taking the connected domains whose center positions are located in the upper half of the binary face image as the hair area.
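The segmentation module's last step can be sketched with connected-component labelling and centroid distances; the helper name and the use of SciPy are assumptions, not part of the claim:

```python
import numpy as np
from scipy import ndimage

def pick_regions(binary_half_face, eyebrow_point):
    """Label the connected domains of one binary half-face image, take the
    domain whose centroid is nearest the eyebrow landmark as the eyebrow
    area, and domains centred in the upper half as the hair area."""
    labeled, n = ndimage.label(binary_half_face)
    centers = ndimage.center_of_mass(binary_half_face, labeled, range(1, n + 1))
    ey = np.asarray(eyebrow_point, dtype=float)
    # pixel distance from the eyebrow marking point to each domain center
    dists = [np.linalg.norm(np.asarray(c) - ey) for c in centers]
    eyebrow_label = int(np.argmin(dists)) + 1
    half_h = binary_half_face.shape[0] / 2.0
    hair_labels = [k + 1 for k, c in enumerate(centers)
                   if c[0] < half_h and k + 1 != eyebrow_label]
    return eyebrow_label, hair_labels
```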
8. A photographing apparatus, comprising: a processor, a memory and a communication circuit, the processor being coupled to the memory and the communication circuit, wherein the memory stores a computer program, and the processor executes the computer program to implement the method of any one of claims 1-6.
9. A storage medium, characterized in that it stores a computer program which, when executed by a processor, causes the processor to carry out the steps of the method according to any one of claims 1-6.
CN202110280891.0A 2021-03-16 2021-03-16 Method and system for detecting occlusion of eyebrow area, photographing device and storage medium Active CN113095148B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110280891.0A CN113095148B (en) 2021-03-16 2021-03-16 Method and system for detecting occlusion of eyebrow area, photographing device and storage medium


Publications (2)

Publication Number Publication Date
CN113095148A CN113095148A (en) 2021-07-09
CN113095148B true CN113095148B (en) 2022-09-06

Family

ID=76668154

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110280891.0A Active CN113095148B (en) 2021-03-16 2021-03-16 Method and system for detecting occlusion of eyebrow area, photographing device and storage medium

Country Status (1)

Country Link
CN (1) CN113095148B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114399617B (en) * 2021-12-23 2023-08-04 北京百度网讯科技有限公司 Method, device, equipment and medium for identifying shielding pattern
CN115619410B (en) * 2022-10-19 2024-01-26 闫雪 Self-adaptive financial payment platform

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101493887A (en) * 2009-03-06 2009-07-29 北京工业大学 Eyebrow image segmentation method based on semi-supervision learning and Hash index
CN102930259A (en) * 2012-11-19 2013-02-13 山东神思电子技术股份有限公司 Method for extracting eyebrow area
CN105404846A (en) * 2014-09-15 2016-03-16 中国移动通信集团广东有限公司 Image processing method and apparatus
CN106503625A (en) * 2016-09-28 2017-03-15 维沃移动通信有限公司 A kind of method of detection hair distribution situation and mobile terminal

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106570447B (en) * 2015-12-16 2019-07-12 黄开竹 Based on the matched human face photo sunglasses automatic removal method of grey level histogram
EP3502955A1 (en) * 2017-12-20 2019-06-26 Chanel Parfums Beauté Method and system for facial features analysis and delivery of personalized advice


Also Published As

Publication number Publication date
CN113095148A (en) 2021-07-09

Similar Documents

Publication Publication Date Title
EP3477931B1 (en) Image processing method and device, readable storage medium and electronic device
US8983152B2 (en) Image masks for face-related selection and processing in images
CN113095148B (en) Method and system for detecting occlusion of eyebrow area, photographing device and storage medium
EP3454250A1 (en) Facial image processing method and apparatus and storage medium
US8135215B2 (en) Correction of color balance of face images
CN107491755B (en) Method and device for gesture recognition
WO2013143390A1 (en) Face calibration method and system, and computer storage medium
CN105844242A (en) Method for detecting skin color in image
US9378564B2 (en) Methods for color correcting digital images and devices thereof
JP2010003118A (en) Image processing apparatus, image processing method and image processing program
CN110648336B (en) Method and device for dividing tongue texture and tongue coating
CN109255761B (en) Image processing method and device and electronic equipment
JP2006268820A (en) Method, apparatus and program for image identification, image processing method and integrated circuit
JP4496005B2 (en) Image processing method and image processing apparatus
US20150098649A1 (en) Image processing apparatus, image processing method, and computer-readable medium
EP3699865A1 (en) Three-dimensional face shape derivation device, three-dimensional face shape deriving method, and non-transitory computer readable medium
CN113947708A (en) Lighting device lamp efficiency control method, system, device, electronic device and medium
JP2009251634A (en) Image processor, image processing method, and program
WO2017101570A1 (en) Photo processing method and processing system
CN113395407A (en) Image processing apparatus, image processing method, and computer readable medium
KR102135155B1 (en) Display apparatus and control method for the same
WO2015117464A1 (en) Device and method for processing video image
Rahman et al. Real-time face-based auto-focus for digital still and cell-phone cameras
CN112785683B (en) Face image adjusting method and device
WO2022246663A1 (en) Image processing method, device and system, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant