CN113128377A - Black eye recognition method, black eye recognition device and terminal based on image processing


Info

Publication number
CN113128377A
CN113128377A (application CN202110364920.1A)
Authority
CN
China
Prior art keywords: image, pixel value, target area, determining, target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110364920.1A
Other languages
Chinese (zh)
Inventor
乔峤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xi'an Rongzhifu Technology Co ltd
Original Assignee
Xi'an Rongzhifu Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xi'an Rongzhifu Technology Co ltd filed Critical Xi'an Rongzhifu Technology Co ltd
Priority to CN202110364920.1A priority Critical patent/CN113128377A/en
Publication of CN113128377A publication Critical patent/CN113128377A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/20Image enhancement or restoration by the use of local operators
    • G06T5/30Erosion or dilatation, e.g. thinning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/40Image enhancement or restoration by the use of histogram techniques
    • G06T5/70
    • G06T5/90
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20024Filtering details
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20036Morphological image processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face

Abstract

An embodiment of the invention discloses a black eye (dark circle) identification method, a black eye identification device and a terminal based on image processing, used to improve the accuracy with which a terminal device identifies the type of a black eye. The method provided by the embodiment comprises the following steps: acquiring an image to be identified; performing Frangi filtering on the green channel image of the image to be identified to obtain a first image; performing Gabor filtering on the green channel image to obtain a second image; multiplying the pixel values of the first image by a first coefficient and the pixel values of the second image by a second coefficient, then superposing the results to obtain a first target image, wherein the first coefficient and the second coefficient sum to 1; determining a wrinkle area in the first target image; and when the wrinkle area is larger than a preset wrinkle area threshold, determining that the type of the black eye is a structural type.

Description

Black eye recognition method, black eye recognition device and terminal based on image processing
Technical Field
The invention relates to the field of terminal equipment application, in particular to a black eye recognition method, a black eye recognition device and a terminal based on image processing.
Background
As people's quality of life improves, they pay increasing attention to their skin and usually want an accurate evaluation of its condition so that targeted maintenance and treatment measures can be taken.
At present, skin quality evaluation is performed by a cosmetologist or dermatologist, so the evaluation process is inefficient and the result often lacks objectivity. The black eye is an important feature of facial skin; people can detect and judge the type of a black eye with various terminal devices or measuring instruments, but the identification process is error-prone, so the accuracy with which terminal devices or measuring instruments identify the type of a black eye is low.
Disclosure of Invention
The embodiment of the invention provides a black eye identification method, a black eye identification device and a terminal based on image processing, which are used for improving the accuracy of identifying the type of a black eye by terminal equipment.
The first aspect of the embodiments of the present invention provides a black eye recognition method based on image processing, which may include:
acquiring an image to be identified;
performing Frangi filtering on the green channel image of the image to be identified to obtain a first image;
performing Gabor filtering on the green channel image to obtain a second image;
multiplying the pixel values of the first image by a first coefficient and the pixel values of the second image by a second coefficient, then superposing the results to obtain a first target image, wherein the first coefficient and the second coefficient sum to 1;
determining a wrinkle area in the first target image;
and when the wrinkle area is larger than a preset wrinkle area threshold, determining that the type of the black eye is a structural type.
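The weighted-superposition step above can be sketched in a few lines. This is a minimal illustration, not the patent's implementation; the coefficient value 0.7 is an assumed example (the patent only requires that the two coefficients sum to 1).

```python
import numpy as np

def blend_filtered(first_img, second_img, alpha=0.5):
    """Weighted superposition: alpha*first + (1-alpha)*second.
    alpha is the first coefficient; the second coefficient is 1-alpha,
    so the two coefficients sum to 1 as the method requires."""
    first = np.asarray(first_img, dtype=np.float64)
    second = np.asarray(second_img, dtype=np.float64)
    if first.shape != second.shape:
        raise ValueError("both filtered images must share one shape")
    return alpha * first + (1.0 - alpha) * second

# toy 2x2 "filter responses" standing in for the Frangi and Gabor outputs
a = np.array([[0.0, 1.0], [0.2, 0.8]])
b = np.array([[1.0, 0.0], [0.6, 0.4]])
blended = blend_filtered(a, b, alpha=0.7)
```

In practice the two inputs would be the Frangi-filtered and Gabor-filtered green channel images at the same resolution.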
Optionally, the Frangi filtering is performed on the green channel image of the image to be recognized to obtain a first image, and the method includes: determining a first face characteristic point in the image to be recognized through a preset algorithm; determining a first target area image according to the first face characteristic point; and performing Frangi filtering on the green channel image of the first target area image to obtain a first image.
Optionally, performing Frangi filtering on the green channel image of the image to be identified to obtain a first image comprises: performing Frangi filtering on the green channel image of the image to be identified and then morphological processing with a closing operation to obtain the first image. Performing Gabor filtering on the green channel image to obtain a second image comprises: performing Gabor filtering on the green channel image and then morphological processing with an opening operation to obtain the second image. Obtaining a first target image by superposing the pixel values of the first image multiplied by a first coefficient and the pixel values of the second image multiplied by a second coefficient comprises: multiplying the pixel values of the first image by the first coefficient and the pixel values of the second image by the second coefficient, superposing the results, and performing morphological processing with a closing operation to obtain the first target image.
Optionally, multiplying the pixel values of the first image by a first coefficient and the pixel values of the second image by a second coefficient and superposing them to obtain a first target image comprises: multiplying the pixel values of the first image by the first coefficient and the pixel values of the second image by the second coefficient, then superposing the results to obtain a third image; and performing threshold segmentation on the third image to obtain the first target image.
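The patent leaves the threshold-segmentation rule unspecified; Otsu's method is one common choice for picking the threshold automatically, sketched here as an assumption rather than the patent's stated algorithm.

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu threshold of an 8-bit grayscale image: pick the cut that
    maximizes between-class variance of the histogram."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    total = hist.sum()
    cum_w = np.cumsum(hist)                   # class-0 pixel counts
    cum_m = np.cumsum(hist * np.arange(256))  # class-0 intensity mass
    mean_all = cum_m[-1] / total
    best_t, best_var = 0, -1.0
    for t in range(255):
        w0 = cum_w[t] / total
        w1 = 1.0 - w0
        if w0 == 0 or w1 == 0:
            continue
        m0 = cum_m[t] / cum_w[t]
        m1 = (mean_all - w0 * m0) / w1
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# bimodal toy image: dark left half, bright right half
img = np.zeros((8, 8), dtype=np.uint8)
img[:, 4:] = 200
t = otsu_threshold(img)
mask = (img > t).astype(np.uint8)  # segmented first target image
```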
Optionally, the method further includes: determining a first face characteristic point in the image to be recognized through a preset algorithm; determining a first target area image according to the first face characteristic point; carrying out gray level processing on the first target area image to obtain a first gray level image; determining a second face characteristic point and a third face characteristic point in the image to be recognized through the preset algorithm; determining a second target area image according to the second face characteristic point; determining a third target area image according to the third face characteristic point; and determining the type of the black eye according to the first target area image, the first gray scale image, the second target area image and the third target area image.
Optionally, determining the type of the black eye according to the first target area image, the first grayscale image, the second target area image and the third target area image comprises: converting the pixel values of the second target area image into a first pixel value and the pixel values of the rest of the first target area image into a second pixel value to obtain a fourth target area image; converting the pixel values of the third target area image into the first pixel value and the pixel values of the rest of the first target area image into the second pixel value to obtain a fifth target area image; multiplying the pixel values of the fourth target area image by the pixel values of the first grayscale image to obtain a third pixel value; multiplying the pixel values of the fifth target area image by the pixel values of the first grayscale image to obtain a fourth pixel value; calculating a first difference between the third pixel value and the fourth pixel value; when the first difference is larger than or equal to a preset pixel difference threshold, determining that the type of the black eye is a pigment type; and when the first difference is smaller than the preset pixel difference threshold, determining that the type of the black eye is a blood vessel type.
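One plausible reading of the masked-multiplication comparison above is sketched below: taking the first pixel value as 1 and the second as 0, multiplying each mask by the grayscale image and comparing mean intensities. The threshold value of 12 is an assumed example; the patent leaves the preset pixel-difference threshold open.

```python
import numpy as np

def classify_dark_circle(gray, mask_a, mask_b, diff_threshold=12.0):
    """Compare mean gray levels under two region masks (the fourth and
    fifth target-area images of the claim, with first pixel value = 1
    and second pixel value = 0), then classify by the difference."""
    masked_a = gray.astype(np.float64) * mask_a   # fourth target area image
    masked_b = gray.astype(np.float64) * mask_b   # fifth target area image
    mean_a = masked_a.sum() / max(mask_a.sum(), 1)
    mean_b = masked_b.sum() / max(mask_b.sum(), 1)
    first_diff = abs(mean_a - mean_b)
    kind = "pigment" if first_diff >= diff_threshold else "blood vessel"
    return kind, first_diff

gray = np.full((4, 4), 120, dtype=np.uint8)
gray[:2, :] = 80                          # darker upper region
mask_a = np.zeros((4, 4)); mask_a[:2, :] = 1
mask_b = np.zeros((4, 4)); mask_b[2:, :] = 1
kind, diff = classify_dark_circle(gray, mask_a, mask_b)
```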
Optionally, the method further includes: determining a first face characteristic point in the image to be recognized through a preset algorithm; determining a first target area image according to the first face characteristic point; determining the total number of pixel values and the number of black pixel values in the image to be identified after binarization; and determining the area of the black eye according to the total number and the number of the black pixel values.
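The area computation from the binarized image can be read as a black-pixel count and ratio; that reading is an assumption, since the claim only says the area is determined "according to the total number and the number of the black pixel values".

```python
import numpy as np

def dark_circle_area(binary_img):
    """Count black pixels (value 0) in a binarized target-area image and
    return the count, the total pixel number, and their ratio."""
    total = binary_img.size
    black = int(np.count_nonzero(binary_img == 0))
    return black, total, black / total

binary = np.array([[0, 255, 255, 0],
                   [0, 0, 255, 255]], dtype=np.uint8)
black, total, ratio = dark_circle_area(binary)
```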
Optionally, the method further includes: performing grayscale processing on the red pixel values of the first target area image to obtain a second grayscale image, together with a first average pixel value and a second average pixel value of the second grayscale image; calculating a second difference between the first average pixel value and the second average pixel value; and obtaining a target value according to the second difference and the area of the black eye, the target value being used to represent and evaluate the severity of the black eye.
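The patent does not fix how the second difference and the area combine into the target value; a weighted sum of normalized terms is assumed below purely for illustration, with the weights 0.6/0.4 as made-up example values.

```python
def severity_score(second_diff, area_ratio, w_diff=0.6, w_area=0.4):
    """Combine the red-channel gray difference and the dark-circle area
    ratio into one target value in [0, 1]. The combination rule and the
    weights are assumptions, not taken from the patent."""
    diff_norm = min(second_diff / 255.0, 1.0)   # 8-bit difference scaled to [0,1]
    return w_diff * diff_norm + w_area * area_ratio

s = severity_score(second_diff=51.0, area_ratio=0.25)
```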
A second aspect of an embodiment of the present invention provides a black eye recognition apparatus, which may include:
the acquisition module is used for acquiring an image to be identified; performing Frangi filtering on the green channel image of the image to be identified to obtain a first image; performing Gabor filtering on the green channel image to obtain a second image; and multiplying the pixel values of the first image by a first coefficient and the pixel values of the second image by a second coefficient, then superposing the results to obtain a first target image, wherein the first coefficient and the second coefficient sum to 1;
the processing module is used for determining a wrinkle area in the first target image; and when the wrinkle area is larger than a preset wrinkle area threshold, determining that the type of the black eye is a structural type.
Optionally, the processing module is specifically configured to determine a first face feature point in the image to be recognized through a preset algorithm; and determine a first target area image according to the first face feature point;
and the acquisition module is specifically used for performing Frangi filtering on the green channel image of the first target area image to obtain a first image.
Optionally, the obtaining module is specifically configured to perform Frangi filtering on the green channel image of the image to be identified and morphological processing with a closing operation to obtain a first image; and to perform Gabor filtering on the green channel image and morphological processing with an opening operation to obtain a second image;
and the processing module is specifically configured to multiply the pixel values of the first image by a first coefficient and the pixel values of the second image by a second coefficient, superpose the results, and perform morphological processing with a closing operation to obtain a first target image.
Optionally, the processing module is specifically configured to multiply a pixel value of the first image by a first coefficient, and multiply a pixel value of the second image by a second coefficient, and then superimpose the first image and the second image to obtain a third image;
and the acquisition module is specifically used for carrying out threshold segmentation on the third image to obtain a first target image.
Optionally, the processing module is further configured to determine a first face feature point in the image to be recognized through a preset algorithm; and determine a first target area image according to the first face feature point;
the acquisition module is also used for carrying out gray processing on the first target area image to obtain a first gray image;
the processing module is further used for determining a second face characteristic point and a third face characteristic point in the image to be recognized through the preset algorithm; determining a second target area image according to the second face characteristic point; determining a third target area image according to the third face characteristic point; and determining the type of the black eye according to the first target area image, the first gray scale image, the second target area image and the third target area image.
Optionally, the obtaining module is specifically configured to convert the pixel value of the second target area image into a first pixel value, and convert the pixel value of an image, except the second target area image, in the first target area image into a second pixel value, so as to obtain a fourth target area image; converting the pixel value of the third target area image into the first pixel value, and converting the pixel values of the images except the third target area image in the first target area image into the second pixel value to obtain a fifth target area image;
the processing module is specifically configured to multiply the pixel values of the fourth target area image by the pixel values of the first grayscale image to obtain a third pixel value; multiply the pixel values of the fifth target area image by the pixel values of the first grayscale image to obtain a fourth pixel value; calculate a first difference between the third pixel value and the fourth pixel value; when the first difference is larger than or equal to a preset pixel difference threshold, determine that the type of the black eye is a pigment type; and when the first difference is smaller than the preset pixel difference threshold, determine that the type of the black eye is a blood vessel type.
Optionally, the processing module is further configured to determine a first face feature point in the image to be recognized through a preset algorithm; determine a first target area image according to the first face feature point; determine the total number of pixel values and the number of black pixel values in the binarized image to be identified; and determine the area of the black eye according to the total number and the number of black pixel values.
Optionally, the obtaining module is further configured to perform gray processing on a red pixel value of the first target area image to obtain a second gray image, and a first average pixel value and a second average pixel value of the second gray image;
the processing module is further used for calculating to obtain a second difference value of the first average pixel value and the second average pixel value;
and the obtaining module is further used for obtaining a target value according to the second difference and the area of the black eye, and the target value is used for representing and evaluating the severity of the black eye.
A third aspect of an embodiment of the present invention provides a black eye recognition apparatus, which may include:
a memory storing executable program code;
and a processor coupled to the memory;
the processor calls the executable program code stored in the memory, which when executed by the processor causes the processor to implement the method according to the first aspect of an embodiment of the present invention.
A fourth aspect of the embodiments of the present invention provides a terminal device, which may include the black eye recognition apparatus according to the second or third aspect of the embodiments of the present invention.
Yet another aspect of embodiments of the present invention provides a computer-readable storage medium having stored thereon executable program code, which when executed by a processor, implements a method according to the first aspect of embodiments of the present invention.
In another aspect, an embodiment of the present invention discloses a computer program product, which, when running on a computer, causes the computer to execute any one of the methods disclosed in the first aspect of the embodiment of the present invention.
In another aspect, an embodiment of the present invention discloses an application publishing platform, where the application publishing platform is configured to publish a computer program product, where when the computer program product runs on a computer, the computer is caused to execute any one of the methods disclosed in the first aspect of the embodiment of the present invention.
According to the technical scheme, the embodiment of the invention has the following advantages:
in the embodiment of the invention, an image to be identified is obtained; Frangi filtering is performed on the green channel image of the image to be identified to obtain a first image; Gabor filtering is performed on the green channel image to obtain a second image; the pixel values of the first image multiplied by a first coefficient and the pixel values of the second image multiplied by a second coefficient are superposed to obtain a first target image, wherein the first coefficient and the second coefficient sum to 1; a wrinkle area in the first target image is determined; and when the wrinkle area is larger than a preset wrinkle area threshold, the type of the black eye is determined to be a structural type. The terminal device can perform Frangi filtering on the green channel image of the acquired image to be identified to obtain the first image, and Gabor filtering on the green channel image to obtain the second image; it can then determine the wrinkle area from the first image and the second image, and determine the type of the black eye from the wrinkle area: when the wrinkle area is larger than the preset wrinkle area threshold, the terminal device determines that the type of the black eye is a structural type. This improves the accuracy with which the terminal device identifies the type of the black eye.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings used in the description of the embodiments and the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and other drawings can be derived from them.
FIG. 1a is a schematic diagram of an embodiment of a black eye recognition method based on image processing according to an embodiment of the present invention;
FIG. 1b is a schematic diagram of an embodiment of a first target area image in an embodiment of the present invention;
FIG. 1c is a schematic diagram of another embodiment of the first target area image in the embodiment of the present invention;
FIG. 2a is a schematic diagram of another embodiment of a black eye recognition method based on image processing according to an embodiment of the present invention;
FIG. 2b is a schematic diagram of an embodiment of a second target area image in an embodiment of the present invention;
FIG. 2c is a schematic diagram of an embodiment of a third target area image according to the embodiment of the present invention;
FIG. 3 is a schematic diagram of another embodiment of a black eye recognition method based on image processing according to an embodiment of the present invention;
FIG. 4a is a schematic diagram of another embodiment of a black eye recognition method based on image processing according to an embodiment of the present invention;
FIG. 4b is a schematic diagram of an embodiment of a second grayscale image in an embodiment of the invention;
FIG. 4c is a schematic diagram of an embodiment of a binarized image according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of an embodiment of a black eye recognition device according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of another embodiment of a black eye recognition device in an embodiment of the present invention;
fig. 7 is a schematic diagram of an embodiment of a terminal device in the embodiment of the present invention.
Detailed Description
The embodiment of the invention provides a black eye identification method, a black eye identification device and a terminal based on image processing, which are used for improving the accuracy of identifying the type of a black eye by terminal equipment.
In order that those skilled in the art may better understand the technical solutions of the present invention, the technical solutions in the embodiments of the present invention are described below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention; all embodiments based on the present invention shall fall within its protection scope.
It is understood that the terminal device according to the embodiment of the present invention may be a common handheld electronic terminal device, such as a mobile phone, a smart phone, a portable terminal, a Personal Digital Assistant (PDA), a Portable Multimedia Player (PMP), a notebook computer, a wireless broadband (WiBro) terminal, a tablet PC, a smart PC, a point-of-sale (POS) terminal, a car computer, and the like.
The terminal device may also be a wearable device, worn directly on the user or integrated into the user's clothing or accessories as a portable electronic device. A wearable device is more than a piece of hardware: through software support, data interaction and cloud interaction it can provide powerful intelligent functions, such as computation, positioning and alarming, and can connect to mobile phones and various other terminals. Wearable devices include, but are not limited to, wrist-worn products (e.g., watches and wristbands), foot-worn products (e.g., shoes, socks, or other leg-worn products), head-worn products (e.g., glasses, helmets and headbands), and other products such as smart clothing, bags, crutches and accessories.
It should be noted that the execution subject of the embodiment of the present invention may be a black eye recognition device, or may be a terminal device. The technical solution of the present invention is further described below by taking a terminal device as an example.
As shown in fig. 1a, a schematic diagram of an embodiment of a black eye recognition method based on image processing in an embodiment of the present invention may include:
101. and acquiring an image to be identified.
It should be noted that the image to be recognized may be an image of the user's face area, or an image of the face together with other parts (for example, the neck and shoulders); it may be captured by a camera in the terminal device or by another shooting device, which is not specifically limited here.
Optionally, the terminal device acquires the image to be recognized, which may include but is not limited to the following implementation manners:
implementation mode 1: the method comprises the steps that terminal equipment detects the distance between a user and the terminal equipment; and when the distance is within the preset distance range, the terminal equipment acquires the image to be identified.
It should be noted that the preset distance range is an interval constructed by the first distance threshold and the second distance threshold. The distance is within a preset distance range, namely the distance is greater than the first distance threshold and is less than or equal to the second distance threshold.
For example, assuming that the first distance threshold is 10 centimeters (cm), and the second distance threshold is 25cm, the preset distance range is (10cm, 25 cm). The terminal device detects that the distance between a user and the terminal device is 18cm, the 18cm is located within a preset distance setting range (10cm, 25cm), and at the moment, the terminal device acquires an image to be recognized.
Implementation mode 2: the terminal equipment detects the current environment brightness value; and when the current environment brightness value is within the preset brightness range, the terminal equipment acquires an image to be identified.
It should be noted that the preset luminance range is an interval constructed by the first luminance threshold and the second luminance threshold. The current environment brightness value is within a preset brightness range, i.e. the current environment brightness value is greater than the first brightness threshold and less than or equal to the second brightness threshold.
Illustratively, assume that the first luminance threshold is 120 candelas per square meter (cd/m²) and the second luminance threshold is 150 cd/m²; the preset luminance range is then (120 cd/m², 150 cd/m²). The terminal device detects that the current ambient luminance value is 136 cd/m²; since 136 cd/m² lies within the preset luminance range (120 cd/m², 150 cd/m²), the terminal device acquires the image to be identified.
It can be understood that the image to be recognized, which is acquired by the terminal device within the preset distance range or within the preset brightness range, is relatively clear, and subsequent processing of the image to be recognized is facilitated.
102. And performing Frangi filtering on the green channel image of the image to be identified to obtain a first image.
It should be noted that Frangi filtering can be two-dimensional or three-dimensional; here it refers to the terminal device filtering the green pixel values of the image to be identified with the Hessian-based Frangi (vesselness) filter, which responds to elongated line-like structures such as wrinkles.
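A minimal single-scale sketch of the Frangi idea is shown below, using finite-difference Hessians only; this is not the patent's implementation, and a production system would use a multiscale Gaussian-derivative version such as skimage.filters.frangi. The beta and c parameters are standard Frangi sensitivity constants with assumed example values.

```python
import numpy as np

def frangi_like(gray, beta=0.5, c=15.0):
    """Single-scale 2-D Frangi-style vesselness from the eigenvalues of
    a finite-difference Hessian; keeps only dark-line (valley) responses,
    since wrinkles are darker than the surrounding skin."""
    g = gray.astype(np.float64)
    gy, gx = np.gradient(g)
    hxx = np.gradient(gx, axis=1)
    hxy = np.gradient(gx, axis=0)
    hyy = np.gradient(gy, axis=0)
    # eigenvalues of the symmetric 2x2 Hessian at every pixel
    tmp = np.sqrt(((hxx - hyy) / 2.0) ** 2 + hxy ** 2)
    mu = (hxx + hyy) / 2.0
    l1, l2 = mu + tmp, mu - tmp
    swap = np.abs(l1) < np.abs(l2)            # order so |l1| >= |l2|
    l1, l2 = np.where(swap, l2, l1), np.where(swap, l1, l2)
    rb = np.abs(l2) / (np.abs(l1) + 1e-12)    # blobness ratio
    s = np.sqrt(l1 ** 2 + l2 ** 2)            # second-order structureness
    v = np.exp(-rb ** 2 / (2 * beta ** 2)) * (1 - np.exp(-s ** 2 / (2 * c ** 2)))
    v[l1 < 0] = 0.0                           # suppress bright-line responses
    return v

img = np.zeros((16, 16))
img[8, :] = -10.0                 # a dark horizontal line standing in for a wrinkle
resp = frangi_like(img)
```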
Optionally, the terminal device performs Frangi filtering on the green channel image of the image to be recognized to obtain the first image, which may include but is not limited to the following implementation manners:
implementation mode 1: the terminal equipment determines a first face characteristic point in the image to be identified through a preset algorithm; the terminal equipment determines a first target area image according to the first face characteristic point; and the terminal equipment performs Frangi filtering on the green channel image of the first target area image to obtain a first image.
It should be noted that the preset algorithm may be at least one of an open-source computer vision library (such as the OpenCV library), an edge detection algorithm, the Sobel algorithm, and an active contour model. The face feature points may be extracted using the preset algorithm.
Optionally, the terminal device determines the first target area image according to the first face feature point, where the first face feature point includes a first feature point, a second feature point, a third feature point, a fourth feature point, a fifth feature point, a sixth feature point, a seventh feature point, and an eighth feature point.
Illustratively, the first feature point is feature point No. 40, the second feature point is feature point No. 41, the third feature point is feature point No. 36, the fourth feature point is feature point No. 39, the fifth feature point is feature point No. 46, the sixth feature point is feature point No. 47, the seventh feature point is feature point No. 42, and the eighth feature point is feature point No. 45 of the OpenCV function library.
The terminal device may first determine a first ordinate having the larger value between the ordinate of feature point No. 40 and the ordinate of feature point No. 41, use the first ordinate as the upper boundary of a target area image corresponding to a first eye area (e.g., the left eye), add h1 unit lengths to the first ordinate to obtain the lower boundary of the target area image corresponding to the left eye, subtract w1 unit lengths from the abscissa of feature point No. 36 to obtain the left boundary of the target area image corresponding to the left eye, and add w1 unit lengths to the abscissa of feature point No. 39 to obtain the right boundary of the target area image corresponding to the left eye, so that the target area image corresponding to the left eye can be determined. Similarly, the terminal device may determine a second ordinate having the larger value between the ordinate of feature point No. 46 and the ordinate of feature point No. 47, use the second ordinate as the upper boundary of a target area image corresponding to a second eye area (e.g., the right eye), add h1 unit lengths to the second ordinate to obtain the lower boundary of the target area image corresponding to the right eye, subtract w1 unit lengths from the abscissa of feature point No. 42 to obtain the left boundary of the target area image corresponding to the right eye, and add w1 unit lengths to the abscissa of feature point No. 45 to obtain the right boundary of the target area image corresponding to the right eye, so that the target area image corresponding to the right eye can be determined.
Wherein h1 and w1 are positive integers.
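For illustration, the boundary computation above can be sketched as follows. The landmark coordinates, h1 and w1 in the example call are hypothetical values; image coordinates are assumed to grow downward, so the lower-eyelid point with the larger ordinate lies lower in the face.

```python
# Sketch of the target-area computation described above.
# All coordinates, h1 and w1 below are hypothetical example values.

def eye_target_box(lower_lid_a, lower_lid_b, corner_left, corner_right, h1, w1):
    """Return (top, bottom, left, right) of an under-eye target area.

    Points are (x, y) pairs; y grows downward, so the lower-eyelid point
    with the larger ordinate is used as the upper boundary of the region.
    """
    top = max(lower_lid_a[1], lower_lid_b[1])  # first ordinate (larger value)
    bottom = top + h1                          # lower boundary
    left = corner_left[0] - w1                 # left boundary
    right = corner_right[0] + w1               # right boundary
    return top, bottom, left, right

# e.g. left eye: points No. 40/41 as lower-lid points, No. 36/39 as corners
box = eye_target_box((50, 80), (60, 78), (40, 75), (70, 76), h1=40, w1=10)
```

The same function determines the right-eye box when called with feature points No. 46/47 and No. 42/45.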
Optionally, as shown in fig. 1b, which is a schematic diagram of an embodiment of the first target area image in the embodiment of the present invention, the method may include: a target area image 107 for the left eye and a target area image 108 for the right eye.
It should be noted that, in the figure, only the determination process of the target area image 107 corresponding to the left eye is shown, and the determination process of the target area image 108 corresponding to the right eye is similar to the determination process of the target area image 107 corresponding to the left eye, and details are not described here.
Optionally, as shown in fig. 1c, the first target area image is a schematic view of another embodiment of the first target area image in the embodiment of the present invention.
It is understood that the first target area image may be an eye area image of the user or a face area image of the user, and the above description takes the eye area image as an example.
Implementation mode 2: the terminal device performs Frangi filtering on the green channel image of the image to be identified and then performs morphological processing of a closing operation to obtain a first image.
The morphological processing of the closing operation means that the terminal device dilates and then erodes the Frangi-filtered image. This processing can extract the contour corresponding to the Frangi-filtered image, which facilitates the terminal device in identifying the black eye.
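As a sketch of the closing step (dilation followed by erosion), the 3×3 binary morphology below is written directly in numpy rather than with an image-processing library; the Frangi filtering itself is omitted, and the zero border padding is a simplifying assumption.

```python
import numpy as np

def dilate3(img):
    """3x3 binary dilation: each pixel takes the max of its neighborhood."""
    h, w = img.shape
    p = np.pad(img, 1, constant_values=0)
    out = np.zeros_like(img)
    for dy in range(3):
        for dx in range(3):
            out = np.maximum(out, p[dy:dy + h, dx:dx + w])
    return out

def erode3(img):
    """3x3 binary erosion: each pixel takes the min of its neighborhood."""
    h, w = img.shape
    p = np.pad(img, 1, constant_values=0)
    out = np.ones_like(img)
    for dy in range(3):
        for dx in range(3):
            out = np.minimum(out, p[dy:dy + h, dx:dx + w])
    return out

def closing(img):
    """Morphological closing: dilate, then erode; fills small gaps."""
    return erode3(dilate3(img))

# a 5x5 block of ones with a one-pixel hole: closing fills the hole
img = np.zeros((7, 7), dtype=np.uint8)
img[1:6, 1:6] = 1
img[3, 3] = 0
closed = closing(img)
```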
103. And carrying out Gabor filtering on the green channel image to obtain a second image.
It should be noted that Gabor filtering means that the terminal applies an edge-sensitive filter to the green channel image of the image to be identified; it provides good orientation-selection and scale-selection characteristics and, being insensitive to illumination changes, offers good adaptability to varying lighting.
It should be noted that the terminal device may filter the green channel image according to a first formula used in Gabor filtering to obtain the second image.
Wherein, the first formula used in Gabor filtering is as follows:
G(x1, x2) = exp( − ( x1′^2/(2σx^2) + λ^2·x2′^2/(2σy^2) ) ) · cos(2π·f·x1′)
where σx represents a selectable standard deviation on the x-axis; σy represents a selectable standard deviation on the y-axis; f represents the spatial frequency of the harmonic function; and λ represents the spatial aspect ratio, which determines the ellipticity of the shape of the Gabor function.
Illustratively, f = 0.1 and λ = 0.5. The mapping relationship between x1, x2 and x1′, x2′ is given by a second formula:
x1′ = x1·cos α + x2·sin α
x2′ = −x1·sin α + x2·cos α
where α denotes a selectable direction in radians.
Illustratively, α = π/4.
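A numpy sketch of a Gabor kernel built from the parameters named above (σx, σy, f, λ, α). The kernel size and the σ values in the example call are assumptions, and the exact analytic form used by the method may differ in detail.

```python
import numpy as np

def gabor_kernel(size, sigma_x, sigma_y, f, lam, alpha):
    """Real Gabor kernel: a Gaussian envelope times a cosine carrier.

    x1p/x2p are the rotated coordinates x1', x2' (the second formula);
    alpha is the filter direction in radians.
    """
    half = size // 2
    x2, x1 = np.meshgrid(np.arange(-half, half + 1),
                         np.arange(-half, half + 1), indexing="ij")
    x1p = x1 * np.cos(alpha) + x2 * np.sin(alpha)
    x2p = -x1 * np.sin(alpha) + x2 * np.cos(alpha)
    envelope = np.exp(-(x1p**2 / (2 * sigma_x**2)
                        + lam**2 * x2p**2 / (2 * sigma_y**2)))
    return envelope * np.cos(2 * np.pi * f * x1p)

# assumed parameters: 21x21 kernel, sigma_x = sigma_y = 4
k = gabor_kernel(21, 4.0, 4.0, f=0.1, lam=0.5, alpha=np.pi / 4)
```

The kernel would then be convolved with the green channel image (e.g. via a 2-D filtering routine) to obtain the second image.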
Optionally, the terminal device performs Gabor filtering on the green channel image to obtain a second image, which may include: the terminal device performs Gabor filtering on the green channel image and then performs morphological processing of an opening operation to obtain the second image.
It should be noted that the morphological processing of the opening operation means that the terminal device erodes and then dilates the Gabor-filtered image. This processing can separate specific areas on the Gabor-filtered image and eliminate some irrelevant areas, which effectively reduces the influence of the irrelevant areas on the black eye identification process and facilitates the terminal device in identifying the black eye.
It is understood that both the morphological processing of the opening operation and that of the closing operation can remove noise in the image, so that the resulting image is clearer.
104. And multiplying the pixel value of the first image by a first coefficient, and multiplying the pixel value of the second image by a second coefficient, and then superposing to obtain a first target image.
Wherein the sum of the first coefficient and the second coefficient is 1. The first coefficient and the second coefficient may be obtained by the terminal device through experimental data.
For example, the ratio of the first coefficient to the second coefficient may be: 0.01:0.99.
It is understood that the terminal device multiplies the pixel value of the first image by the first coefficient to obtain a first target pixel value; the terminal device multiplies the pixel value of the second image by the second coefficient to obtain a second target pixel value. The terminal device may superimpose the first target pixel value and the second target pixel value to obtain a first target image. And the pixel value of the first target image is obtained by superposing the first target pixel value and the second target pixel value.
Optionally, the terminal device multiplies the pixel value of the first image by a first coefficient, and multiplies the pixel value of the second image by a second coefficient, and then superimposes them, so as to obtain the first target image, which may include but is not limited to the following implementation manners:
Implementation mode 1: multiplying the pixel value of the first image by the first coefficient and the pixel value of the second image by the second coefficient, superposing the results, and then performing morphological processing of a closing operation to obtain the first target image.
Implementation mode 2: multiplying the pixel value of the first image by the first coefficient and the pixel value of the second image by the second coefficient, and superposing the results to obtain a third image; performing threshold segmentation on the third image to obtain the first target image.
It should be noted that, in the threshold segmentation, the terminal device retains pixel values within a preset pixel value range and filters out pixel values outside the preset pixel value range to obtain the first target image.
The preset pixel value range is an interval constructed from the first pixel threshold and the second pixel threshold. A pixel value is within the preset pixel value range if it is greater than the first pixel threshold and less than or equal to the second pixel threshold.
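The fusion and threshold segmentation of implementation mode 2 can be sketched as follows. The thresholds t1 and t2 are hypothetical stand-ins for the first and second pixel thresholds; the coefficient ratio follows the 0.01:0.99 example above.

```python
import numpy as np

def fuse_and_threshold(img1, img2, c1=0.01, c2=0.99, t1=40.0, t2=220.0):
    """Weighted superposition (c1 + c2 = 1) followed by threshold segmentation.

    Pixel values in (t1, t2] are retained; everything else is set to 0.
    t1/t2 stand in for the first and second pixel thresholds.
    """
    fused = c1 * img1 + c2 * img2
    keep = (fused > t1) & (fused <= t2)
    return np.where(keep, fused, 0.0)

a = np.zeros((2, 2))                           # stand-in for the first image
b = np.array([[100.0, 10.0], [230.0, 50.0]])   # stand-in for the second image
out = fuse_and_threshold(a, b)
```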
105. In the first target image, a wrinkle area is determined.
106. And when the wrinkle area is larger than a preset wrinkle area threshold value, determining the type of the black eye ring as a structural type.
The preset wrinkle area threshold may be set before the terminal device leaves a factory, or may be set by the terminal device according to the conditions of different users, and is not specifically limited herein.
In the embodiment of the invention, an image to be identified is obtained; Frangi filtering is performed on the green channel image of the image to be identified to obtain a first image; Gabor filtering is performed on the green channel image to obtain a second image; the pixel value of the first image is multiplied by a first coefficient and the pixel value of the second image by a second coefficient, and the results are superposed to obtain a first target image, wherein the sum of the first coefficient and the second coefficient is 1; a wrinkle area is determined in the first target image; and when the wrinkle area is larger than a preset wrinkle area threshold value, the type of the black eye is determined to be the structural type. The terminal device can perform Frangi filtering on the green channel image of the obtained image to be identified to obtain the first image, and perform Gabor filtering on the green channel image to obtain the second image; the terminal device can determine the wrinkle area according to the first image and the second image; the terminal device determines the type of the black eye according to the wrinkle area, that is, when the wrinkle area is larger than the preset wrinkle area threshold value, the terminal device can determine that the type of the black eye is the structural type. The method can improve the accuracy with which the terminal device identifies the black eye type.
As shown in fig. 2a, a schematic diagram of another embodiment of a black eye recognition method based on image processing in an embodiment of the present invention may include:
201. and acquiring an image to be identified.
202. And performing Frangi filtering on the green channel image of the image to be identified to obtain a first image.
203. And carrying out Gabor filtering on the green channel image to obtain a second image.
204. And multiplying the pixel value of the first image by a first coefficient, and multiplying the pixel value of the second image by a second coefficient, and then superposing to obtain a first target image.
Wherein the sum of the first coefficient and the second coefficient is 1.
205. In the first target image, a wrinkle area is determined.
206. And when the wrinkle area is larger than a preset wrinkle area threshold value, determining the type of the black eye ring as a structural type.
It should be noted that the steps 201-206 are similar to the steps 101-106 shown in fig. 1 in this embodiment, and are not described herein again.
207. And determining a first face characteristic point in the image to be recognized through a preset algorithm.
208. And determining a first target area image according to the first face characteristic point.
It should be noted that steps 207 and 208 are similar to implementation 1 in step 102 shown in fig. 1 in this embodiment, and are not described here again.
209. And carrying out gray level processing on the first target area image to obtain a first gray level image.
It should be noted that the grayscale processing refers to the terminal device processing three color components on the first target area image, where the three color components respectively include: red (Red, R), Green (Green, G) and Blue (Blue, B). The gray scale processing may include, but is not limited to, the following four methods: component, maximum, average, and weighted average.
The component method is that the terminal device determines each of R, G and B on the first target area image as a target color component, for example: determining R as a first target color component, the first target color component having a gray scale of N; determining G as a second target color component, the second target color component having a gray scale of P; and determining B as a third target color component, the third target color component having a gray scale of Q. The maximum value method is that the terminal device determines the color component with the maximum brightness value among R, G and B on the first target area image as the maximum target color component, the gray scale of the maximum target color component being M. The average value method is that the terminal device averages the three brightness values corresponding to R, G and B on the first target area image to obtain a fourth target color component, whose gray value is the average gray value of R, G and B. The weighted average method is that the terminal device performs a weighted average of the three brightness values corresponding to R, G and B on the first target area image according to different weight proportions to obtain a fifth target color component, whose gray value is the weighted average gray value H of R, G and B.
It should be noted that N, P, Q, M and H each represent gray scale values distinct from R, G and B; N, P, Q, M and H may be the same or different, and are not specifically limited herein.
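The four grayscale methods can be sketched as follows. The 0.299/0.587/0.114 weights in the weighted-average branch are the common BT.601 values and are an assumption, since the method does not fix the weight proportions.

```python
import numpy as np

def to_gray(rgb, method="weighted"):
    """Grayscale an H x W x 3 image by one of the four methods above."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    if method == "component":   # component method: pick one channel (here R)
        return r.astype(float)
    if method == "max":         # maximum value method
        return rgb.max(axis=-1).astype(float)
    if method == "mean":        # average value method
        return rgb.mean(axis=-1)
    # weighted average method (assumed BT.601 weights)
    return 0.299 * r + 0.587 * g + 0.114 * b

px = np.array([[[100, 100, 100]]], dtype=np.uint8)
gray = to_gray(px)  # weighted average of an equal-valued pixel stays at 100
```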
210. And determining a second face characteristic point and a third face characteristic point in the image to be recognized through the preset algorithm.
The second face characteristic points may include number 36 characteristic points, number 39 characteristic points, number 40 characteristic points, and number 41 characteristic points, and may further include number 42 characteristic points, number 45 characteristic points, number 46 characteristic points, and number 47 characteristic points; the third face feature points may include feature point number 29 and feature point number 40; the third face feature point may further include a feature point number 47.
Optionally, the determining, by the terminal device, the second face feature point and the third face feature point in the image to be recognized through the preset algorithm may include: and the terminal equipment determines a second face characteristic point and a third face characteristic point in the first target area image through the preset algorithm.
211. And determining a second target area image according to the second face characteristic point.
Illustratively, the terminal device takes the feature point No. 36, the feature point No. 39, the feature point No. 40 and the feature point No. 41 as the upper boundary point of the second target area image (for example, the left-eye strip area); adding 40 unit lengths to the vertical coordinates of the four characteristic points respectively to obtain a lower boundary point of the second target area image; the region surrounded by the upper boundary point and the lower boundary point is used as a left-eye band region mask. Similarly, the terminal device takes the feature point No. 42, the feature point No. 45, the feature point No. 46 and the feature point No. 47 as the upper boundary point of the second target area image (for example, the right-eye strip area); adding 40 unit lengths to the vertical coordinates of the four characteristic points respectively to obtain a lower boundary point of the second target area image; and the area formed by the upper boundary point and the lower boundary point is used as a right-eye strip area mask.
Alternatively, as shown in fig. 2b, which is a schematic diagram of an embodiment of the second target area image in the embodiment of the present invention, a left-eye strip area mask 214 and a right-eye strip area mask 215 may be included.
212. And determining a third target area image according to the third face characteristic point.
For example, the terminal device determines the vertex of a triangular region, namely, the ordinate of the feature point No. 29 is taken as the ordinate of the mask vertex of the triangular region, and the abscissa of the feature point No. 40 is taken as the abscissa of the mask vertex of the triangular region; the region between the vertex and the lower boundary point of the left-eye band-shaped region is used as a third target region image, that is, as a left-eye triangle region mask. Similarly, the terminal equipment determines the vertex of the triangular area, namely the ordinate of the No. 29 characteristic point is taken as the ordinate of the mask vertex of the triangular area, and the abscissa of the No. 47 characteristic point is taken as the abscissa of the mask vertex of the triangular area; the region between the vertex and the lower boundary point of the right-eye strip region is used as a third target region image, that is, as a right-eye triangular region mask.
Optionally, as shown in fig. 2c, which is a schematic diagram of an embodiment of a third target area image in the embodiment of the present invention, a left-eye triangle area mask 216 and a right-eye triangle area mask 217 may be included.
213. And determining the type of the black eye according to the first target area image, the first gray scale image, the second target area image and the third target area image.
Optionally, the terminal device determines the type of the black eye according to the first target area image, the first grayscale image, the second target area image, and the third target area image, which may include but is not limited to the following implementation manners:
implementation mode 1: the terminal equipment converts the pixel value of the second target area image into a first pixel value, and converts the pixel values of the images except the second target area image in the first target area image into second pixel values to obtain a fourth target area image; the terminal equipment converts the pixel value of the third target area image into the first pixel value, and converts the pixel value of the image except the third target area image in the first target area image into the second pixel value to obtain a fifth target area image; the terminal equipment multiplies the pixel value of the fourth target area image by the pixel value of the first gray level image to obtain a third pixel value; the terminal equipment multiplies the pixel value of the fifth target area image by the pixel value of the first gray level image to obtain a fourth pixel value; the terminal device calculates a first difference value between the third pixel value and the fourth pixel value; when the first difference value is larger than or equal to a preset pixel difference threshold value, the terminal equipment determines that the type of the black eye is a pigment type; and when the first difference value is smaller than the preset pixel difference threshold value, the terminal equipment determines that the type of the black eye is a blood vessel type.
The first pixel value may be 0, and the second pixel value may be 255.
Implementation mode 2: the terminal equipment converts the pixel value of the first target area image into a first pixel value, converts the pixel value of the second target area image into a second pixel value to obtain a sixth target area image, and converts the pixel value of the third target area image into the second pixel value to obtain a seventh target area image; the terminal equipment multiplies the pixel value of the sixth target area image by the pixel value of the first gray level image to obtain a fifth pixel value; the terminal equipment multiplies the pixel value of the seventh target area image by the pixel value of the first gray level image to obtain a sixth pixel value; the terminal device calculates a second difference value between the fifth pixel value and the sixth pixel value; when the second difference value is larger than or equal to a preset pixel difference threshold value, the terminal equipment determines that the type of the black eye is a pigment type; and when the second difference value is smaller than the preset pixel difference threshold value, the terminal equipment determines that the type of the black eye is a blood vessel type.
It will be appreciated that the fourth target area image and the sixth target area image are the same and the fifth target area image and the seventh target area image are the same, with the difference that the terminal device acquires the fourth target area image and the terminal device acquires the sixth target area image in a different manner and the terminal device acquires the fifth target area image and the terminal device acquires the seventh target area image in a different manner.
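A simplified sketch of the decision in implementation mode 1: compare statistics of the masked grayscale image over the strip region and over the triangle region against a preset pixel-difference threshold. Reducing each masked product to a mean, and the threshold value itself, are assumptions of this sketch.

```python
import numpy as np

def classify_black_eye(gray, strip_mask, tri_mask, diff_threshold=10.0):
    """Pigment vs. vascular decision from two masked region statistics.

    gray: first grayscale image; strip_mask / tri_mask: binary masks for the
    strip (second target area) and triangle (third target area) regions.
    """
    third_value = (gray * strip_mask).sum() / max(strip_mask.sum(), 1)
    fourth_value = (gray * tri_mask).sum() / max(tri_mask.sum(), 1)
    first_difference = third_value - fourth_value
    return "pigment" if first_difference >= diff_threshold else "vascular"

gray = np.full((4, 4), 100.0)
gray[2:, :] = 60.0                                        # darker lower band
strip = np.zeros((4, 4), dtype=np.uint8); strip[:2, :] = 1
tri = np.zeros((4, 4), dtype=np.uint8);  tri[2:, :] = 1
kind = classify_black_eye(gray, strip, tri)
```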
In the embodiment of the invention, the terminal device can perform Frangi filtering on the green channel image of the acquired image to be identified to obtain a first image, and perform Gabor filtering on the green channel image to obtain a second image; the terminal device can determine the wrinkle area according to the first image and the second image; the terminal device determines the type of the black eye according to the wrinkle area, that is, when the wrinkle area is larger than a preset wrinkle area threshold value, the terminal device can determine that the type of the black eye is the structural type; the terminal device determines a first face characteristic point in the image to be identified through a preset algorithm, and further determines a first target area image; the terminal device performs gray processing on the first target area image to obtain a first gray image; the terminal device determines a second face characteristic point in the image to be recognized through the preset algorithm, further determines a second target area image, determines a third face characteristic point, and further determines a third target area image; and the terminal device determines the type of the black eye according to the first target area image, the first gray level image, the second target area image and the third target area image. The method enables the terminal device to identify whether the type of the black eye is structural, vascular or pigment, and can thus further improve the accuracy with which the terminal device identifies the type of the black eye.
As shown in fig. 3, a schematic diagram of another embodiment of a black eye recognition method based on image processing in an embodiment of the present invention may include:
301. and acquiring an image to be identified.
302. And performing Frangi filtering on the green channel image of the image to be identified to obtain a first image.
303. And carrying out Gabor filtering on the green channel image to obtain a second image.
304. And multiplying the pixel value of the first image by a first coefficient, and multiplying the pixel value of the second image by a second coefficient, and then superposing to obtain a first target image.
Wherein the sum of the first coefficient and the second coefficient is 1.
305. In the first target image, a wrinkle area is determined.
306. And when the wrinkle area is larger than a preset wrinkle area threshold value, determining the type of the black eye ring as a structural type.
307. Determining a first face characteristic point in the image to be recognized through a preset algorithm;
308. determining a first target area image according to the first face characteristic point;
It should be noted that the steps 301-308 are similar to the steps 201-208 shown in fig. 2a in this embodiment, and are not described herein again.
309. And determining the total number of pixel values and the number of black pixel values in the image to be identified after binarization.
Optionally, the determining, by the terminal device, the total number of pixel values and the number of black pixel values in the image to be identified after binarization may include: the terminal device performs threshold segmentation on the image to be identified to obtain a fourth image; the terminal device binarizes the fourth image to obtain a fifth image; and the terminal device determines, on the fifth image, the total number of pixel values and the number of black pixel values.
It can be understood that the image obtained by the terminal device after binarizing the image to be identified is an image only having a pixel value of 0 and a pixel value of 255, that is, only black pixel points and white pixel points are on the binarized image to be identified. And the fifth image is an image obtained after the binarization of the terminal equipment.
310. And determining the area of the black eye according to the total number and the number of the black pixel values.
Optionally, the determining, by the terminal device, the area of the black eye according to the total number and the number of the black pixel values may include: and according to a third formula, determining the area of the black eye.
The third formula is: S = m/M;
wherein S represents the area of the black eye; m represents the number of black pixel values; and M represents the total number of pixel values.
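The third formula can be sketched directly on a binarized image whose only values are 0 (black) and 255 (white):

```python
import numpy as np

def black_eye_area(binary):
    """Third formula S = m/M on a binarized image (values 0 and 255 only).

    m: number of black pixel values; M: total number of pixel values.
    """
    m = int((binary == 0).sum())
    M = binary.size
    return m / M

binary = np.array([[0, 255], [0, 255]], dtype=np.uint8)
S = black_eye_area(binary)  # half the pixels are black
```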
Optionally, after step 310, the method may further include: and outputting first suggestion information according to the area of the black eye.
It is understood that the first suggestion information may be used by the terminal device to give the user, according to the area of the black eye, a suggestion for reducing that area.
In the embodiment of the invention, the terminal device can perform Frangi filtering on the green channel image of the acquired image to be identified to obtain a first image, and perform Gabor filtering on the green channel image to obtain a second image; the terminal device can determine the wrinkle area according to the first image and the second image; the terminal device determines the type of the black eye according to the wrinkle area, that is, when the wrinkle area is larger than a preset wrinkle area threshold value, the terminal device can determine that the type of the black eye is the structural type; the terminal device determines a first face characteristic point in the image to be identified through a preset algorithm, and further determines a first target area image; the terminal device determines the total number of pixel values and the number of black pixel values in the image to be identified after binarization, and determines the area of the black eye according to the total number and the number of black pixel values. The method can not only improve the accuracy with which the terminal device identifies the type of the black eye, but can also give the user, according to the determined area of the black eye, a suggestion for reducing that area, so as to improve the skin quality of the user.
As shown in fig. 4a, a schematic diagram of another embodiment of a black eye recognition method based on image processing in an embodiment of the present invention may include:
401. and acquiring an image to be identified.
402. And performing Frangi filtering on the green channel image of the image to be identified to obtain a first image.
403. And carrying out Gabor filtering on the green channel image to obtain a second image.
404. And multiplying the pixel value of the first image by a first coefficient, and multiplying the pixel value of the second image by a second coefficient, and then superposing to obtain a first target image.
Wherein the sum of the first coefficient and the second coefficient is 1.
405. In the first target image, a wrinkle area is determined.
406. And when the wrinkle area is larger than a preset wrinkle area threshold value, determining the type of the black eye ring as a structural type.
407. Determining a first face characteristic point in the image to be recognized through a preset algorithm;
408. determining a first target area image according to the first face characteristic point;
409. determining the total number of pixel values and the number of black pixel values in the image to be identified after binarization;
410. and determining the area of the black eye according to the total number and the number of the black pixel values.
It should be noted that the steps 401-410 are similar to the steps 301-310 shown in fig. 3 in this embodiment, and are not described herein again.
411. And carrying out gray processing on the red pixel value of the first target area image to obtain a second gray image, and a first average pixel value and a second average pixel value of the second gray image.
Optionally, the performing, by the terminal device, gray processing on the red pixel value of the image to be recognized to obtain a second gray image, and the first average pixel value of the second gray image may include: the terminal equipment performs gray processing on the red pixel value of the image to be identified to obtain a second gray image; the terminal equipment determines the image to be identified after binarization as a binarized image; the terminal equipment determines black pixel points and white pixel points on the binary image, and determines pixel points corresponding to the black pixel points and pixel points corresponding to the white pixel points on the second gray scale image; the terminal equipment obtains a first average pixel value of the second gray image according to the counted number of the pixel points corresponding to the black pixel points and the sum of the pixel values of the pixel points corresponding to the black pixel points; and the terminal equipment obtains a second average pixel value of the second gray image according to the counted number of the pixel points corresponding to the white pixel points and the pixel value sum of the pixel points corresponding to the white pixel points.
Optionally, the terminal device obtains a first average pixel value of the second gray level image according to the counted number of the pixels corresponding to the black pixel and the sum of the pixel values of the pixels corresponding to the black pixel; the terminal device obtains a second average pixel value of the second gray image according to the counted number of the pixel points corresponding to the white pixel points and the pixel value sum of the pixel points corresponding to the white pixel points, and may include: the terminal equipment obtains a first average pixel value of the second gray scale image according to a fourth formula; the terminal device obtains a second average pixel value of the second gray scale image according to a fifth formula.
Wherein, the fourth formula is: ave1 = pixel1/N1;
ave1 represents the first average pixel value; pixel1 represents the sum of the pixel values of the pixel points corresponding to the black pixel points; and N1 represents the number of the pixel points corresponding to the black pixel points.
The fifth formula is: ave2 = pixel2/N2;
ave2 represents the second average pixel value; pixel2 represents the sum of the pixel values of the pixel points corresponding to the white pixel points; and N2 represents the number of the pixel points corresponding to the white pixel points.
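The fourth and fifth formulas can be sketched by masking the second grayscale image with the binarized image:

```python
import numpy as np

def region_means(gray2, binary):
    """ave1/ave2 of the second grayscale image over the black/white pixels
    of the binarized image (fourth and fifth formulas)."""
    black = binary == 0
    white = binary == 255
    ave1 = gray2[black].sum() / black.sum()   # fourth formula
    ave2 = gray2[white].sum() / white.sum()   # fifth formula
    return ave1, ave2

gray2 = np.array([[10.0, 200.0], [30.0, 220.0]])
binary = np.array([[0, 255], [0, 255]], dtype=np.uint8)
ave1, ave2 = region_means(gray2, binary)
dev = ave2 - ave1   # the second difference used in the next step
```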
Exemplarily, as shown in fig. 4b, the second grayscale image is a schematic diagram of an embodiment of the second grayscale image in the embodiment of the present invention; fig. 4c is a schematic diagram of an embodiment of binarizing an image according to an embodiment of the present invention. It should be noted that the second grayscale image shown in fig. 4b may be a grayscale image corresponding to the left eye region, and the binarized image shown in fig. 4c may be a black-and-white image corresponding to the left eye region.
412. And calculating to obtain a second difference value of the first average pixel value and the second average pixel value.
Optionally, the calculating, by the terminal device, a second difference between the first average pixel value and the second average pixel value may include: and the terminal equipment obtains a second difference value according to a sixth formula.
Wherein the sixth formula is: dev = ave2 − ave1; dev represents the second difference.
413. And obtaining a target numerical value according to the second difference value and the area of the black eye.
Wherein the target value is used for characterizing the evaluation of the severity of the dark circles.
Optionally, the terminal device obtains a target value according to the second difference and the area of the black eye, which may include but is not limited to the following implementation manners:
implementation mode 1: and when the area of the black eye is smaller than or equal to the first area threshold, or when the area of the black eye is larger than the first area threshold and smaller than or equal to the second area threshold, and the second difference is smaller than the first difference threshold, the terminal device obtains the first target value according to a sixth formula.
Wherein the sixth formula is G1 = 100 − (10S)/S1;
G1 represents the first target value; S represents the area of the black eye; S1 represents the first area threshold.
Implementation mode 2: and when the area of the black eye is larger than the first area threshold value and smaller than or equal to the third area threshold value, and the second difference value is larger than or equal to the first difference value threshold value and smaller than the second difference value threshold value, the terminal equipment obtains a second target value according to a seventh formula.
Wherein the seventh formula is G2 = 90 - [10(S - S1)]/(S2 - S1), where G2 represents the second target value and S2 represents the second area threshold.
Implementation mode 3: when the area of the black eye is larger than the second area threshold and smaller than or equal to the third area threshold and the second difference is larger than or equal to the second difference threshold, or when the area of the black eye is larger than the third area threshold, the terminal device obtains a third target value according to an eighth formula.
Wherein the second area threshold is located between the first area threshold and the third area threshold.
The eighth formula is G3 = 80 - [10(S - S2)]/(1 - S2), where G3 represents the third target value.
Optionally, S1 = 0.18, S2 = 0.28, S3 = 0.43, dev1 = 28, and dev2 = 31, where S3 represents the third area threshold and dev1 and dev2 represent the first and second difference thresholds, respectively.
It will be understood that S1, S2, S3, dev1, and dev2 may be obtained by the terminal device through experiments.
Illustratively, when S is 0.324 and dev is 31.368, G3 = 80 - [10(0.324 - 0.28)]/(1 - 0.28) ≈ 79.39 points.
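The three implementation modes above can be sketched as a single scoring function. The threshold values are the optional ones given in the text; treating implementation mode 3 as the fall-through case for any remaining combination of area and difference is an assumption, since the stated conditions do not cover every combination.

```python
def dark_circle_score(S, dev, S1=0.18, S2=0.28, S3=0.43, dev1=28, dev2=31):
    """Map the black-eye area S (a ratio in [0, 1]) and the second
    difference dev to a target value, following implementation modes 1-3."""
    # Implementation mode 1: small area, or moderate area with a small difference
    if S <= S1 or (S1 < S <= S2 and dev < dev1):
        return 100 - (10 * S) / S1
    # Implementation mode 2: moderate area with a moderate difference
    if S1 < S <= S3 and dev1 <= dev < dev2:
        return 90 - (10 * (S - S1)) / (S2 - S1)
    # Implementation mode 3 (also used here as the fall-through case,
    # which is an assumption): large area and/or large difference
    return 80 - (10 * (S - S2)) / (1 - S2)
```

With S = 0.324 and dev = 31.368 this reproduces the worked example above, about 79.39 points.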
It is understood that different target values correspond to different degrees of severity of dark circles.
Optionally, after step 413, the method may further include: outputting first prompt information, wherein the first prompt information is used for prompting the area of the black eye of the user and/or the severity of the black eye, together with a corresponding evaluation and suggestion for the severity of the black eye.
In the embodiment of the invention, the terminal device may perform Frangi filtering on the green channel image of the acquired image to be identified to obtain a first image, and perform Gabor filtering on the green channel image to obtain a second image; the terminal device may determine the wrinkle area according to the first image and the second image; the terminal device determines the type of the black eye according to the wrinkle area, that is, when the wrinkle area is larger than a preset wrinkle area threshold, the terminal device may determine that the type of the black eye is a structural type; the terminal device determines a first face characteristic point in the image to be identified through a preset algorithm, and further determines a first target area image; the terminal device determines the total number of pixel values and the number of black pixel values in the binarized image to be identified, and determines the area of the black eye according to the total number and the number of black pixel values; the terminal device performs gray processing on the red pixel values of the first target area image to obtain a second gray image and a first average pixel value and a second average pixel value of the second gray image; and the terminal device calculates a second difference between the first average pixel value and the second average pixel value, and obtains a target value according to the second difference and the area of the black eye. The method can improve the accuracy with which the terminal device identifies the type of the black eye, and can also determine the severity of the black eye and evaluate it correspondingly.
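The weighted-superposition step in this pipeline can be sketched as follows, assuming the Frangi and Gabor responses have already been computed as same-shaped float arrays. The coefficient value 0.6 is an assumption; the text only requires that the two coefficients sum to 1.

```python
import numpy as np

def fuse_filter_responses(first_image, second_image, first_coeff=0.6):
    """Superpose the Frangi-filtered image and the Gabor-filtered image
    with weights that sum to 1, as the text requires."""
    # first_image: Frangi response of the green channel (assumed precomputed)
    # second_image: Gabor response of the green channel (assumed precomputed)
    second_coeff = 1.0 - first_coeff
    return first_coeff * first_image + second_coeff * second_image
```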
As shown in fig. 5, which is a schematic diagram of an embodiment of a black eye recognition device in an embodiment of the present invention, the black eye recognition device may include: an acquisition module 501 and a processing module 502.
An obtaining module 501, configured to obtain an image to be identified; perform Frangi filtering on the green channel image of the image to be identified to obtain a first image; perform Gabor filtering on the green channel image to obtain a second image; and multiply the pixel value of the first image by a first coefficient, multiply the pixel value of the second image by a second coefficient, and then superpose the results to obtain a first target image, wherein the sum of the first coefficient and the second coefficient is 1;
a processing module 502, configured to determine a wrinkle area in the first target image; and when the wrinkle area is larger than a preset wrinkle area threshold, determine that the type of the black eye is a structural type.
Alternatively, in some embodiments of the present invention,
a processing module 502, specifically configured to determine a first face characteristic point in the image to be recognized through a preset algorithm; and determine a first target area image according to the first face characteristic point;
the obtaining module 501 is specifically configured to perform Frangi filtering on the green channel image of the first target area image to obtain a first image.
Alternatively, in some embodiments of the present invention,
an obtaining module 501, configured to perform Frangi filtering on the green channel image of the image to be identified and then morphological processing of a closing operation to obtain a first image; and perform Gabor filtering on the green channel image and then morphological processing of an opening operation to obtain a second image;
the processing module 502 is specifically configured to multiply the pixel value of the first image by a first coefficient, multiply the pixel value of the second image by a second coefficient, and then perform superposition and morphological processing of a closing operation to obtain a first target image.
Alternatively, in some embodiments of the present invention,
a processing module 502, specifically configured to multiply a pixel value of the first image by a first coefficient, and multiply a pixel value of the second image by a second coefficient, and then superimpose them to obtain a third image;
the obtaining module 501 is specifically configured to perform threshold segmentation on the third image to obtain a first target image.
Alternatively, in some embodiments of the present invention,
the processing module 502 is further configured to determine a first face characteristic point in the image to be recognized through a preset algorithm; and determine a first target area image according to the first face characteristic point;
the obtaining module 501 is further configured to perform gray processing on the first target area image to obtain a first gray image;
the processing module 502 is further configured to determine a second face feature point and a third face feature point in the image to be recognized through the preset algorithm; determining a second target area image according to the second face characteristic point; determining a third target area image according to the third face characteristic point; and determining the type of the black eye according to the first target area image, the first gray scale image, the second target area image and the third target area image.
Alternatively, in some embodiments of the present invention,
an obtaining module 501, specifically configured to convert the pixel value of the second target area image into a first pixel value, and convert the pixel values of the image, excluding the second target area image, in the first target area image into a second pixel value, so as to obtain a fourth target area image; and convert the pixel value of the third target area image into the first pixel value, and convert the pixel values of the image except the third target area image in the first target area image into the second pixel value, so as to obtain a fifth target area image;
a processing module 502, specifically configured to multiply the pixel value of the fourth target area image by the pixel value of the first grayscale image to obtain a third pixel value; multiply the pixel value of the fifth target area image by the pixel value of the first grayscale image to obtain a fourth pixel value; calculate a first difference between the third pixel value and the fourth pixel value; when the first difference is larger than or equal to a preset pixel difference threshold, determine that the type of the black eye is a pigment type; and when the first difference is smaller than the preset pixel difference threshold, determine that the type of the black eye is a blood vessel type.
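A minimal sketch of this pigment/vessel decision, using boolean masks in place of the fourth and fifth target-area images and a hypothetical threshold of 10.0 (the patent only requires some preset pixel difference threshold):

```python
import numpy as np

def classify_dark_circle_type(gray, fourth_mask, fifth_mask,
                              pixel_diff_threshold=10.0):
    """Compare the mean intensity of the first grayscale image inside two
    region masks and decide pigment vs. blood vessel type.
    The masks and the threshold value are assumptions for illustration."""
    # Mean intensity inside each masked region (third and fourth pixel values)
    third_pixel_value = gray[fourth_mask].mean()
    fourth_pixel_value = gray[fifth_mask].mean()
    first_difference = third_pixel_value - fourth_pixel_value
    return "pigment" if first_difference >= pixel_diff_threshold else "vessel"
```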
Alternatively, in some embodiments of the present invention,
the processing module 502 is further configured to determine a first face characteristic point in the image to be recognized through a preset algorithm; determine a first target area image according to the first face characteristic point; determine the total number of pixel values and the number of black pixel values in the binarized image to be identified; and determine the area of the black eye according to the total number and the number of black pixel values.
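The area step can be sketched as a black-pixel ratio over the binarized region. Reading "area" as this ratio is an interpretation, but it is consistent with the fractional S values (e.g. 0.18, 0.28) used in the scoring formulas earlier in the text.

```python
import numpy as np

def dark_circle_area(binarized, black_value=0):
    """Ratio of black pixels to the total number of pixels in the
    binarized eye-region image (0 = black, 255 = white assumed)."""
    total = binarized.size                            # total number of pixel values
    black = int((binarized == black_value).sum())     # number of black pixel values
    return black / total
```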
Alternatively, in some embodiments of the present invention,
the obtaining module 501 is further configured to perform gray processing on a red pixel value of the first target area image to obtain a second gray image, and a first average pixel value and a second average pixel value of the second gray image;
the processing module 502 is further configured to calculate a second difference between the first average pixel value and the second average pixel value;
the obtaining module 501 is further configured to obtain a target value according to the second difference and the area of the black eye, where the target value is used to characterize and evaluate the severity of the black eye.
As shown in fig. 6, which is a schematic diagram of another embodiment of a black eye recognition device in an embodiment of the present invention, the black eye recognition device may include: a processor 601 and a memory 602.
In this embodiment, the processor 601 has the following functions:
acquiring an image to be identified;
performing Frangi filtering on the green channel image of the image to be identified to obtain a first image;
performing Gabor filtering on the green channel image to obtain a second image;
multiplying the pixel value of the first image by a first coefficient, and multiplying the pixel value of the second image by a second coefficient, and then superposing to obtain a first target image, wherein the sum of the first coefficient and the second coefficient is 1;
determining a wrinkle area in the first target image;
and when the wrinkle area is larger than a preset wrinkle area threshold, determining that the type of the black eye is a structural type.
In this embodiment, the processor 601 further has the following functions:
the performing Frangi filtering on the green channel image of the image to be identified to obtain a first image includes: determining a first face characteristic point in the image to be recognized through a preset algorithm; determining a first target area image according to the first face characteristic point; and performing Frangi filtering on the green channel image of the first target area image to obtain the first image.
In this embodiment, the processor 601 has the following functions:
performing Frangi filtering on the green channel image of the image to be identified and then morphological processing of a closing operation to obtain a first image; performing Gabor filtering on the green channel image and then morphological processing of an opening operation to obtain a second image; and multiplying the pixel value of the first image by a first coefficient, multiplying the pixel value of the second image by a second coefficient, then superposing, and performing morphological processing of a closing operation to obtain a first target image.
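The closing/opening post-processing described here can be sketched with SciPy's grayscale morphology; the 3x3 structuring-element size is an assumption, since the text does not specify one.

```python
import numpy as np
from scipy import ndimage

def postprocess_filter_responses(frangi_response, gabor_response, size=3):
    """Morphological closing on the Frangi response and morphological
    opening on the Gabor response, as the text describes."""
    first_image = ndimage.grey_closing(frangi_response, size=(size, size))
    second_image = ndimage.grey_opening(gabor_response, size=(size, size))
    return first_image, second_image
```

Closing fills small dark gaps in the vessel-like Frangi response, while opening suppresses isolated bright specks in the Gabor response before the two are superposed.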
In this embodiment, the processor 601 has the following functions:
the obtaining a first target image by superposing the pixel value of the first image multiplied by a first coefficient and the pixel value of the second image multiplied by a second coefficient, includes: multiplying the pixel value of the first image by a first coefficient, and multiplying the pixel value of the second image by a second coefficient, and then overlapping to obtain a third image; and performing threshold segmentation on the third image to obtain a first target image.
In this embodiment, the processor 601 has the following functions:
determining a first face characteristic point in the image to be recognized through a preset algorithm; determining a first target area image according to the first face characteristic point; carrying out gray level processing on the first target area image to obtain a first gray level image; determining a second face characteristic point and a third face characteristic point in the image to be recognized through the preset algorithm; determining a second target area image according to the second face characteristic point; determining a third target area image according to the third face characteristic point; and determining the type of the black eye according to the first target area image, the first gray scale image, the second target area image and the third target area image.
In this embodiment, the processor 601 has the following functions:
the determining the type of the black eye according to the first target area image, the first gray scale image, the second target area image and the third target area image includes: converting the pixel value of the second target area image into a first pixel value, and converting the pixel value of the image except the second target area image in the first target area image into a second pixel value to obtain a fourth target area image; converting the pixel value of the third target area image into the first pixel value, and converting the pixel values of the images except the third target area image in the first target area image into the second pixel value to obtain a fifth target area image; multiplying the pixel value of the fourth target area image by the pixel value of the first gray level image to obtain a third pixel value; multiplying the pixel value of the fifth target area image by the pixel value of the first gray level image to obtain a fourth pixel value; calculating to obtain a first difference value between the third pixel value and the fourth pixel value; when the first difference value is larger than or equal to a preset pixel difference threshold value, determining that the type of the black eye is a pigment type; and when the first difference value is smaller than the preset pixel difference threshold value, determining that the type of the black eye is a blood vessel type.
In this embodiment, the processor 601 has the following functions:
determining a first face characteristic point in the image to be recognized through a preset algorithm; determining a first target area image according to the first face characteristic point; determining the total number of pixel values and the number of black pixel values in the image to be identified after binarization; and determining the area of the black eye according to the total number and the number of the black pixel values.
In this embodiment, the processor 601 has the following functions:
carrying out gray processing on the red pixel value of the first target area image to obtain a second gray image, and a first average pixel value and a second average pixel value of the second gray image; calculating to obtain a second difference value between the first average pixel value and the second average pixel value; and obtaining a target value according to the second difference and the area of the black eye, wherein the target value is used for representing and evaluating the severity of the black eye.
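A sketch of the red-channel grayscale step described above. Taking the first average inside a dark-circle mask and the second average outside it is an assumption, since the text does not pin the two regions down at this point.

```python
import numpy as np

def red_channel_averages(rgb_region, circle_mask):
    """Build the second grayscale image from the red channel of the first
    target area image, then compute the two average pixel values and
    their difference (the second difference)."""
    second_gray = rgb_region[..., 0].astype(float)  # red channel as grayscale
    ave1 = second_gray[circle_mask].mean()          # first average pixel value
    ave2 = second_gray[~circle_mask].mean()         # second average pixel value
    dev = ave2 - ave1                               # the second difference
    return second_gray, ave1, ave2, dev
```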
In the present embodiment, the memory 602 has the following functions:
for storing the processing procedures and processing results of the processor 601.
As shown in fig. 7, which is a schematic diagram of an embodiment of a terminal device in an embodiment of the present invention, the terminal device may include the black eye recognition device shown in fig. 5 or fig. 6.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product.
The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the invention are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center via a wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) connection. The computer-readable storage medium can be any available medium that a computer can access, or a data storage device, such as a server or a data center, that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, hard disk, or magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid state disk (SSD)).
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments provided in the present invention, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A black eye recognition method based on image processing is characterized by comprising the following steps:
acquiring an image to be identified;
performing Frangi filtering on the green channel image of the image to be identified to obtain a first image;
performing Gabor filtering on the green channel image to obtain a second image;
multiplying the pixel value of the first image by a first coefficient, and multiplying the pixel value of the second image by a second coefficient, and then superposing to obtain a first target image, wherein the sum of the first coefficient and the second coefficient is 1;
determining a wrinkle area in the first target image;
and when the wrinkle area is larger than a preset wrinkle area threshold, determining that the type of the black eye is a structural type.
2. The method according to claim 1, wherein the performing Frangi filtering on the green channel image of the image to be recognized to obtain the first image comprises:
determining a first face characteristic point in the image to be recognized through a preset algorithm;
determining a first target area image according to the first face characteristic point;
and performing Frangi filtering on the green channel image of the first target area image to obtain a first image.
3. The method according to claim 1, wherein the performing Frangi filtering on the green channel image of the image to be identified to obtain a first image comprises:
performing Frangi filtering on the green channel image of the image to be identified, and then performing morphological processing of a closing operation, to obtain a first image;
the Garbor filtering is carried out on the green channel image to obtain a second image, and the method comprises the following steps:
carrying out Gabor filtering on the green channel image, and carrying out morphological processing of an opening operation, to obtain a second image;
the obtaining a first target image by superposing the pixel value of the first image multiplied by a first coefficient and the pixel value of the second image multiplied by a second coefficient includes:
and multiplying the pixel value of the first image by a first coefficient, multiplying the pixel value of the second image by a second coefficient, then superposing, and performing morphological processing of a closing operation to obtain a first target image.
4. The method according to claim 1 or 3, wherein the superimposing the pixel value of the first image multiplied by a first coefficient and the pixel value of the second image multiplied by a second coefficient to obtain a first target image comprises:
multiplying the pixel value of the first image by a first coefficient, and multiplying the pixel value of the second image by a second coefficient, and then overlapping to obtain a third image;
and carrying out threshold segmentation on the third image to obtain a first target image.
5. The method of claim 1, further comprising:
determining a first face characteristic point in the image to be recognized through a preset algorithm;
determining a first target area image according to the first face characteristic point;
carrying out gray level processing on the first target area image to obtain a first gray level image;
determining a second face characteristic point and a third face characteristic point in the image to be recognized through the preset algorithm;
determining a second target area image according to the second face characteristic point;
determining a third target area image according to the third face characteristic point;
and determining the type of the black eye according to the first target area image, the first gray scale image, the second target area image and the third target area image.
6. The method of claim 5, wherein determining the type of the black eye based on the first target area image, the first grayscale image, the second target area image, and the third target area image comprises:
converting the pixel value of the second target area image into a first pixel value, and converting the pixel value of the image except the second target area image in the first target area image into a second pixel value to obtain a fourth target area image;
converting the pixel value of the third target area image into the first pixel value, and converting the pixel value of the image except the third target area image in the first target area image into the second pixel value to obtain a fifth target area image;
multiplying the pixel value of the fourth target area image by the pixel value of the first gray level image to obtain a third pixel value;
multiplying the pixel value of the fifth target area image by the pixel value of the first gray level image to obtain a fourth pixel value;
calculating to obtain a first difference value of the third pixel value and the fourth pixel value;
when the first difference value is larger than or equal to a preset pixel difference threshold value, determining that the type of the black eye is a pigment type;
and when the first difference value is smaller than the preset pixel difference threshold value, determining that the type of the black eye is a blood vessel type.
7. The method of claim 1, further comprising:
determining a first face characteristic point in the image to be recognized through a preset algorithm;
determining a first target area image according to the first face characteristic point;
determining the total number of pixel values and the number of black pixel values in the image to be identified after binarization;
and determining the area of the black eye according to the total number and the number of the black pixel values.
8. The method of claim 7, further comprising:
carrying out gray processing on the red pixel value of the first target area image to obtain a second gray image, and a first average pixel value and a second average pixel value of the second gray image;
calculating to obtain a second difference value of the first average pixel value and the second average pixel value;
and obtaining a target value according to the second difference and the area of the black eye, wherein the target value is used for representing and evaluating the severity of the black eye.
9. A black eye recognition device, comprising:
the acquisition module is used for acquiring an image to be identified; performing Frangi filtering on the green channel image of the image to be identified to obtain a first image; performing Gabor filtering on the green channel image to obtain a second image; and multiplying the pixel value of the first image by a first coefficient, multiplying the pixel value of the second image by a second coefficient, and then superposing to obtain a first target image, wherein the sum of the first coefficient and the second coefficient is 1;
a processing module for determining a wrinkle area in the first target image; and when the wrinkle area is larger than a preset wrinkle area threshold value, determining that the type of the black eye is a structural type.
10. A black eye recognition device, comprising:
a memory storing executable program code;
and a processor coupled to the memory;
the processor calls the executable program code stored in the memory, which when executed by the processor causes the processor to implement the method of any one of claims 1-8.
CN202110364920.1A 2021-04-02 2021-04-02 Black eye recognition method, black eye recognition device and terminal based on image processing Pending CN113128377A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110364920.1A CN113128377A (en) 2021-04-02 2021-04-02 Black eye recognition method, black eye recognition device and terminal based on image processing


Publications (1)

Publication Number Publication Date
CN113128377A true CN113128377A (en) 2021-07-16

Family

ID=76774887


Country Status (1)

Country Link
CN (1) CN113128377A (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070086651A1 (en) * 2005-10-04 2007-04-19 Lvmh Recherche Method and apparatus for characterizing the imperfections of skin and method of assessing the anti-aging effect of a cosmetic product
WO2013098512A1 (en) * 2011-12-26 2013-07-04 Chanel Parfums Beaute Method and device for detecting and quantifying cutaneous signs on an area of skin
US20140323873A1 (en) * 2013-04-09 2014-10-30 Elc Management Llc Skin diagnostic and image processing methods
US8879813B1 (en) * 2013-10-22 2014-11-04 Eyenuk, Inc. Systems and methods for automated interest region detection in retinal images
CN105844213A (en) * 2016-02-05 2016-08-10 宁波工程学院 Green fruit recognition method
CN109919029A (en) * 2019-01-31 2019-06-21 深圳和而泰数据资源与云技术有限公司 Black eye kind identification method, device, computer equipment and storage medium
CN109919030A (en) * 2019-01-31 2019-06-21 深圳和而泰数据资源与云技术有限公司 Black eye kind identification method, device, computer equipment and storage medium
CN110533651A (en) * 2019-08-29 2019-12-03 维沃移动通信有限公司 A kind of image processing method and device
CN111241889A (en) * 2018-11-29 2020-06-05 华为技术有限公司 Method and device for detecting and evaluating black eye
CN111814520A (en) * 2019-04-12 2020-10-23 虹软科技股份有限公司 Skin type detection method, skin type grade classification method, and skin type detection device


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
FOURMI: "Image feature extraction (gaussian, gabor, frangi, hessian, Morphology...) and saving images as txt files", pages 1 - 5, Retrieved from the Internet <URL:https://www.cnblogs.com/fourmi/p/8453762.html> *
WENDESON S. OLIVEIRA et al.: "Unsupervised Retinal Vessel Segmentation Using Combined Filters", PLOS ONE, pages 1 - 21 *
YE LONGBAO: "Design and Implementation of an Image Beautification System Based on Face Recognition", China Master's Theses Full-text Database, Information Science and Technology, pages 138 - 1715 *
XU YANBING et al.: "Retinal Vessel Segmentation Based on Superpixel Affinity Propagation Clustering", Acta Optica Sinica, no. 02, pages 60 - 70 *

Similar Documents

Publication Publication Date Title
Ramlakhan et al. A mobile automated skin lesion classification system
CN111414831A (en) Monitoring method and system, electronic device and storage medium
CN111488756A (en) Face recognition-based living body detection method, electronic device, and storage medium
CN111062891A (en) Image processing method, device, terminal and computer readable storage medium
CN111814520A (en) Skin type detection method, skin type grade classification method, and skin type detection device
CN106022209A (en) Distance estimation and processing method based on face detection and device based on face detection
CN104637031B (en) Eye image processing method and apparatus
KR102393298B1 (en) Method and apparatus for iris recognition
CN107172354B (en) Video processing method and device, electronic equipment and storage medium
CN107506708B (en) Unlocking control method and related product
CN113570052B (en) Image processing method, device, electronic equipment and storage medium
CN112884666B (en) Image processing method, device and computer storage medium
CN107392135A (en) Living body detection method and related product
WO2020175878A1 (en) Electronic device for measuring biometric information and method for operating the same
CN109583330B (en) Pore detection method for face photo
US20160345887A1 (en) Moisture feeling evaluation device, moisture feeling evaluation method, and moisture feeling evaluation program
CN113642358B (en) Skin color detection method, device, terminal and storage medium
CN113128377A (en) Black eye recognition method, black eye recognition device and terminal based on image processing
CN108805838A (en) Image processing method, mobile terminal and computer-readable storage medium
CN112257501A (en) Face feature enhancement display method and device, electronic equipment and medium
CN113516602B (en) Image defogging method, image defogging device, electronic equipment and storage medium
CN111444555A (en) Temperature measurement information display method and device and terminal equipment
CN113128373B (en) Image processing-based color spot scoring method, color spot scoring device and terminal equipment
CN113128376A (en) Wrinkle recognition method based on image processing, wrinkle recognition device and terminal equipment
CN115330610A (en) Image processing method, image processing apparatus, electronic device, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination