CN113128372A - Blackhead identification method and device based on image processing and terminal equipment - Google Patents
Blackhead identification method and device based on image processing and terminal equipment Download PDFInfo
- Publication number
- CN113128372A (Application number CN202110362210.5A)
- Authority
- CN
- China
- Prior art keywords
- image
- target
- blackhead
- threshold
- processing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/10—Image enhancement or restoration by non-spatial domain filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/20—Image enhancement or restoration by the use of local operators
- G06T5/30—Erosion or dilatation, e.g. thinning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/40—Image enhancement or restoration by the use of histogram techniques
-
- G06T5/70—
-
- G06T5/73—
-
- G06T5/90—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20036—Morphological image processing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20092—Interactive image processing based on input by user
- G06T2207/20104—Interactive definition of region of interest [ROI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20112—Image segmentation details
- G06T2207/20132—Image cropping
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
Abstract
The embodiment of the invention discloses a blackhead identification method, a blackhead identification device and terminal equipment based on image processing, which are used for improving the accuracy of identifying blackheads by the blackhead identification device. The method provided by the embodiment of the invention comprises the following steps: acquiring an image to be identified, and carrying out gray level processing on the image to be identified to obtain a first image; performing homomorphic filtering and local histogram equalization processing on the first image to obtain a second image; performing threshold segmentation on the second image to obtain an initial gray threshold; adjusting the initial gray threshold to obtain a target gray threshold, and processing the second image according to the target gray threshold to obtain a target image; and identifying and obtaining the total number and the total area of the blackheads from the target image.
Description
Technical Field
The invention relates to the field of terminal equipment application, in particular to a blackhead identification method and device based on image processing and terminal equipment.
Background
At present, facial skin problems are widespread, and blackheads on the face are an especially common skin problem. Moreover, the severity of blackheads varies with factors such as gender, age, and geographic region.
In the field of beauty and make-up, a picture of the face of a user can be photographed by a terminal device, the terminal device can automatically detect the number of blackheads of the user, and provide skin care opinions for the user by combining with other skin features of the face of the user, for example, the terminal device can select a proper skin care product and food for the user to improve the facial skin problem of the user. In the prior art, a method for detecting the number of blackheads generally irradiates the skin with two light rays, namely, ordinary white light and ultraviolet light, so as to extract a highlight target in the ultraviolet light, obtain a target highlight image, generate a blackhead region image according to the target highlight image, and mark a blackhead region in a white light image by using the blackhead region image as a mask, thereby identifying the blackhead.
However, the blackhead identification method in the prior art requires complex equipment and is affected by many interfering factors, so the accuracy with which the terminal device identifies blackheads is low.
Disclosure of Invention
The embodiment of the invention provides a blackhead identification method, a blackhead identification device and terminal equipment based on image processing, which are used for improving the accuracy of identifying blackheads by the blackhead identification device.
The first aspect of the embodiments of the present invention provides a blackhead identification method based on image processing, which may include:
acquiring an image to be identified, and carrying out gray level processing on the image to be identified to obtain a first image;
homomorphic filtering and local histogram equalization processing are carried out on the first image to obtain a second image;
carrying out threshold segmentation on the second image to obtain an initial gray threshold;
adjusting the initial gray threshold to obtain a target gray threshold, and processing the second image according to the target gray threshold to obtain a target image;
from the target image, the total number and the total area of the blackheads are identified.
Optionally, the obtaining an image to be identified and performing gray processing on the image to be identified to obtain a first image includes:
acquiring an image to be identified; determining face characteristic points in the image to be recognized through a preset algorithm; determining a target area image according to the face characteristic points; and carrying out gray level processing on the target area image to obtain a first image.
Optionally, threshold segmentation is performed on the second image to obtain an initial gray threshold; adjusting the initial gray threshold to obtain a target gray threshold, and processing the second image according to the target gray threshold to obtain a target image, including: obtaining an initial gray threshold for the second image by using the Otsu algorithm; obtaining a target gray threshold according to the initial gray threshold and a preset gray-level difference value; and obtaining a target image according to a preset function formula. The preset function formula is bin = Threshold(A), where A = (gray_h, η) and η = thr − Δt; bin denotes the target image; Threshold(A) denotes the threshold function; gray_h denotes the second image; thr denotes the initial gray threshold; Δt denotes the preset gray-level difference value; and η denotes the target gray threshold.
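The Otsu step and the threshold adjustment η = thr − Δt can be sketched as follows. This is a minimal illustration in NumPy, not the patent's reference implementation; the function names, the default Δt value, and the choice to mark pixels darker than η (blackheads being darker than surrounding skin) are assumptions.

```python
import numpy as np

def otsu_threshold(gray: np.ndarray) -> int:
    """Return the Otsu threshold of an 8-bit grayscale image by maximising
    the between-class variance over all 256 candidate thresholds."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    total = hist.sum()
    mu_total = np.dot(np.arange(256), hist) / total
    best_t, best_var = 0, -1.0
    cum_w, cum_mu = 0.0, 0.0
    for t in range(256):
        cum_w += hist[t]
        cum_mu += t * hist[t]
        if cum_w == 0 or cum_w == total:
            continue
        w0 = cum_w / total
        mu0 = cum_mu / cum_w
        mu1 = (mu_total * total - cum_mu) / (total - cum_w)
        var_between = w0 * (1 - w0) * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def segment_blackheads(gray_h: np.ndarray, delta_t: int = 10) -> np.ndarray:
    """bin = Threshold(gray_h, eta) with eta = thr - delta_t, per the
    preset function formula; delta_t = 10 is an assumed preset value."""
    thr = otsu_threshold(gray_h)   # initial gray threshold
    eta = thr - delta_t            # target gray threshold
    # Mark pixels darker than eta as candidate blackhead pixels.
    return (gray_h < eta).astype(np.uint8) * 255
```

Lowering the Otsu threshold by Δt keeps only pixels that are distinctly darker than the skin background, which reduces false positives from mild shading.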
Optionally, obtaining the target image according to a preset function formula includes: obtaining a third image according to a preset function formula; removing noise on the third image to obtain a fourth image; and cutting the fourth image to obtain a target image.
Optionally, the identifying, from the target image, a total number and a total area of blackheads includes: determining the area of each blackhead from the target image; determining the blackheads with the area smaller than a preset area threshold value in each blackhead as first blackheads; determining the number of the first blackheads and the total area of the first blackheads.
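The selection of "first blackheads" (connected dark regions whose area falls below a preset area threshold) can be sketched as a connected-component pass over the binary target image. This is an illustrative flood-fill implementation with 4-connectivity; the function name and the assumption that `max_area` is supplied by the caller are mine, not the patent's.

```python
from collections import deque

import numpy as np

def blackhead_stats(mask: np.ndarray, max_area: int) -> tuple:
    """Count connected foreground regions (4-connectivity) whose area is
    below max_area and return (total_number, total_area), mirroring the
    patent's first-blackhead selection. max_area is an assumed preset."""
    h, w = mask.shape
    seen = np.zeros((h, w), dtype=bool)
    count, area_sum = 0, 0
    for y in range(h):
        for x in range(w):
            if mask[y, x] and not seen[y, x]:
                # Flood-fill one component and measure its area.
                area, queue = 0, deque([(y, x)])
                seen[y, x] = True
                while queue:
                    cy, cx = queue.popleft()
                    area += 1
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            queue.append((ny, nx))
                if area < max_area:  # small spots are counted as blackheads
                    count += 1
                    area_sum += area
    return count, area_sum
```

Discarding components at or above the area threshold filters out large dark regions (shadows, hair, nostrils) that are unlikely to be blackheads.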
Optionally, the method further includes: carrying out normalization processing on the total number of the blackheads to obtain a target total number; carrying out normalization processing on the total area of the blackheads to obtain a target total area; and obtaining a target value according to a first formula, wherein the target value is used for representing the severity of the blackheads. The first formula is B = λC + (1 − λ)D, where B denotes the target value; C denotes the target total number; D denotes the target total area; and λ denotes a severity coefficient.
Optionally, the method further includes: when the target value is smaller than a first preset value, generating and outputting a first skin quality score according to a second formula; when the target value is greater than or equal to the first preset value and smaller than a second preset value, generating and outputting a second skin quality score according to a third formula; and when the target value is greater than or equal to the second preset value, generating and outputting a third skin quality score according to a fourth formula. The second formula is E1 = 100 − 10B/F; the third formula is E2 = 90 − 10(B − F)/(G − F); the fourth formula is E3 = 80 − 10(B − G)/(1 − G); E1 denotes the first skin quality score; F denotes the first preset value; E2 denotes the second skin quality score; G denotes the second preset value; and E3 denotes the third skin quality score.
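The severity value B = λC + (1 − λ)D and the piecewise skin quality score can be written directly from the four formulas. The defaults below for λ and the preset values F and G are illustrative assumptions; the patent does not specify them.

```python
def severity(c: float, d: float, lam: float = 0.5) -> float:
    """First formula: B = lambda*C + (1 - lambda)*D, where C and D are the
    normalized total number and total area of blackheads in [0, 1].
    lam = 0.5 is an assumed severity coefficient."""
    return lam * c + (1 - lam) * d

def skin_score(b: float, f: float = 0.3, g: float = 0.6) -> float:
    """Second to fourth formulas, piecewise on the target value b.
    f and g are assumed preset values with 0 < f < g < 1."""
    if b < f:
        return 100 - 10 * b / f                # E1
    if b < g:
        return 90 - 10 * (b - f) / (g - f)     # E2
    return 80 - 10 * (b - g) / (1 - g)         # E3
```

Note that each branch spans exactly 10 points and the pieces meet at the preset values (E1 = 90 at b = F, E2 = 80 at b = G), so the score is continuous and decreases as severity grows.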
A second aspect of an embodiment of the present invention provides a blackhead identification apparatus, which may include:
the acquisition module is used for acquiring an image to be identified and carrying out gray level processing on the image to be identified to obtain a first image;
the processing module is used for carrying out homomorphic filtering and local histogram equalization processing on the first image to obtain a second image; carrying out threshold segmentation on the second image to obtain an initial gray threshold; adjusting the initial gray threshold to obtain a target gray threshold, and processing the second image according to the target gray threshold to obtain a target image;
and the identification module is used for identifying and obtaining the total number and the total area of the blackheads from the target image.
Alternatively, in some embodiments of the present invention,
the acquisition module is specifically used for acquiring an image to be identified;
the processing module is specifically used for determining human face characteristic points in the image to be recognized through a preset algorithm; determining a target area image according to the face characteristic points;
the acquisition module is further configured to perform gray processing on the target area image to obtain a first image.
Optionally, the processing module is specifically configured to obtain an initial gray threshold for the second image by using the Otsu algorithm; obtain a target gray threshold according to the initial gray threshold and a preset gray-level difference value; and obtain a target image according to a preset function formula. The preset function formula is bin = Threshold(A), where A = (gray_h, η) and η = thr − Δt; bin denotes the target image; Threshold(A) denotes the threshold function; gray_h denotes the second image; thr denotes the initial gray threshold; Δt denotes the preset gray-level difference value; and η denotes the target gray threshold.
Optionally, the processing module is specifically configured to obtain a third image according to a preset function formula; removing noise on the third image to obtain a fourth image; and cutting the fourth image to obtain a target image.
Optionally, the processing module is specifically configured to determine an area of each blackhead from the target image; determining the blackheads with the area smaller than a preset area threshold value in each blackhead as first blackheads;
the identification module is specifically configured to determine the number of the first blackheads and the total area of the first blackheads.
Optionally, the processing module is further configured to perform normalization processing on the total number of the blackheads to obtain a target total number; perform normalization processing on the total area of the blackheads to obtain a target total area; and obtain a target value according to a first formula, wherein the target value is used for representing the severity of the blackheads. The first formula is B = λC + (1 − λ)D, where B denotes the target value; C denotes the target total number; D denotes the target total area; and λ denotes a severity coefficient.
Optionally, the processing module is further configured to generate and output a first skin quality score according to a second formula when the target value is smaller than a first preset value; generate and output a second skin quality score according to a third formula when the target value is greater than or equal to the first preset value and smaller than a second preset value; and generate and output a third skin quality score according to a fourth formula when the target value is greater than or equal to the second preset value. The second formula is E1 = 100 − 10B/F; the third formula is E2 = 90 − 10(B − F)/(G − F); the fourth formula is E3 = 80 − 10(B − G)/(1 − G); E1 denotes the first skin quality score; F denotes the first preset value; E2 denotes the second skin quality score; G denotes the second preset value; and E3 denotes the third skin quality score.
A third aspect of an embodiment of the present invention provides a blackhead recognition apparatus, which may include:
a memory storing executable program code;
and a processor coupled to the memory;
the processor calls the executable program code stored in the memory, which when executed by the processor causes the processor to implement the method according to the first aspect of an embodiment of the present invention.
In another aspect, an embodiment of the present invention provides a terminal device, which may include the blackhead identification apparatus according to the second aspect or the third aspect of the embodiment of the present invention.
Yet another aspect of embodiments of the present invention provides a computer-readable storage medium having stored thereon executable program code, which when executed by a processor, implements a method according to the first aspect of embodiments of the present invention.
In another aspect, an embodiment of the present invention discloses a computer program product, which, when running on a computer, causes the computer to execute any one of the methods disclosed in the first aspect of the embodiment of the present invention.
In another aspect, an embodiment of the present invention discloses an application publishing platform, where the application publishing platform is configured to publish a computer program product, where when the computer program product runs on a computer, the computer is caused to execute any one of the methods disclosed in the first aspect of the embodiment of the present invention.
According to the technical scheme, the embodiment of the invention has the following advantages:
in the embodiment of the invention, an image to be identified is obtained, and gray processing is carried out on the image to be identified to obtain a first image; performing homomorphic filtering and local histogram equalization processing on the first image to obtain a second image; performing threshold segmentation on the second image to obtain an initial gray threshold; adjusting the initial gray threshold to obtain a target gray threshold, and processing the second image according to the target gray threshold to obtain a target image; and identifying and obtaining the total number and the total area of the blackheads from the target image. The blackhead recognition device performs gray processing on an image to be recognized and performs noise point removal processing according to the adjusted initial gray threshold value to obtain a target image; the blackhead recognition device recognizes blackheads in the target image. The method improves the accuracy of the blackhead recognition device in recognizing the blackheads.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required in the description of the embodiments and the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention, and other drawings can be obtained from these drawings by those of ordinary skill in the art without creative effort.
FIG. 1 is a schematic diagram of an embodiment of a blackhead identification method based on image processing according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of another embodiment of a blackhead identification method based on image processing according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an embodiment of a blackhead recognition device for recognizing and scoring blackheads according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of an embodiment of a blackhead identification device for marking and scoring blackheads according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of an embodiment of a blackhead recognition apparatus according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of another embodiment of a blackhead recognition apparatus according to an embodiment of the present invention;
fig. 7 is a schematic diagram of an embodiment of a terminal device in the embodiment of the present invention.
Detailed Description
The embodiment of the invention provides a blackhead identification method, a blackhead identification device and terminal equipment based on image processing, which are used for improving the accuracy of identifying blackheads by the blackhead identification device.
In order to make the technical solutions of the present invention better understood by those skilled in the art, the technical solutions in the embodiments of the present invention will be described below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained based on the embodiments of the present invention shall fall within the protection scope of the present invention.
It should be noted that the terms "first", "second", "third", "fourth", and the like in the description and the claims of the present invention are used for distinguishing different objects, and are not used for describing a specific order. The terms "comprises," "comprising," and any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The execution subject according to the embodiment of the present invention may be a blackhead recognition apparatus or a terminal device. The following embodiment further illustrates the technical solution of the present invention by taking the blackhead recognition device as an example.
As shown in fig. 1, which is a schematic diagram of an embodiment of a blackhead identification method based on image processing in an embodiment of the present invention, the blackhead identification method may include:
101. the method comprises the steps of obtaining an image to be identified, and carrying out gray level processing on the image to be identified to obtain a first image.
It should be noted that the image to be recognized may be an image of the face of the user, or may be an image of a specific part (e.g., a nose part) of the face of the user; the image to be recognized may be obtained by shooting with a camera or other shooting devices, and is not limited specifically here.
The gray processing means that the blackhead recognition device processes the three color components of the image to be recognized: red (R), green (G) and blue (B). The gray processing may include, but is not limited to, the following four methods: the component method, the maximum value method, the average value method, and the weighted average method.
It is understood that the component method means that the blackhead recognition device determines each of R, G and B on the image to be recognized as a target color component, for example: determining R as a first target color component, the first target color component having a grayscale of N; determining G as a second target color component, the second target color component having a grayscale of P; and determining B as a third target color component, the third target color component having a grayscale of Q. The maximum value method means that the blackhead recognition device determines the color component with the maximum brightness value among R, G and B on the image to be recognized as the maximum target color component, the grayscale of the maximum target color component being M. The average value method means that the blackhead recognition device averages the three brightness values corresponding to R, G and B on the image to be recognized to obtain a fourth target color component, the gray value of the fourth target color component being the average gray value of R, G and B. The weighted average method means that the blackhead recognition device performs a weighted average of the three brightness values corresponding to R, G and B on the image to be recognized according to different weight proportions to obtain a fifth target color component, the gray value of the fifth target color component being the weighted average gray value H of R, G and B.
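The four grayscale methods above can be sketched in a single helper. The 0.299/0.587/0.114 weights used for the weighted average are the common ITU-R BT.601 choice, an assumption rather than the patent's stated values, and the component method here arbitrarily picks R.

```python
import numpy as np

def to_gray(img: np.ndarray, method: str = "weighted") -> np.ndarray:
    """Convert an HxWx3 RGB image to 8-bit grayscale using one of the four
    methods listed in the description (component, max, average, weighted)."""
    r = img[..., 0].astype(np.float64)
    g = img[..., 1].astype(np.float64)
    b = img[..., 2].astype(np.float64)
    if method == "component":   # use one channel (R here) as the gray value
        gray = r
    elif method == "max":       # maximum of the three components
        gray = np.maximum(np.maximum(r, g), b)
    elif method == "average":   # plain average of the three components
        gray = (r + g + b) / 3.0
    elif method == "weighted":  # weighted average (assumed BT.601 weights)
        gray = 0.299 * r + 0.587 * g + 0.114 * b
    else:
        raise ValueError(f"unknown method: {method}")
    return np.clip(gray, 0, 255).astype(np.uint8)
```

In practice the weighted average is the usual choice for skin imagery, since it tracks perceived luminance more closely than the other three.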
Wherein N, P, Q, M and H each represent a grayscale value obtained from R, G and B; N, P, Q, M and H may be the same or different, and are not specifically limited here.
Optionally, the blackhead recognition apparatus acquires an image to be recognized, and performs gray processing on the image to be recognized to obtain a first image, which may include but is not limited to the following implementation manners:
implementation mode 1: the blackhead recognition device acquires an image to be recognized; the blackhead recognition device determines face characteristic points in the image to be recognized through a preset algorithm; the blackhead recognition device determines a target area image according to the face characteristic points; the blackhead recognition device performs gray processing on the target area image to obtain a first image.
It should be noted that the preset algorithm may be at least one of the Open Source Computer Vision Library (OpenCV) algorithms, an edge detection algorithm, the Sobel algorithm, and an active contour model. The face feature points may be extracted by the preset algorithm. The target area image may be a specific partial image of the user's face.
Optionally, the determining, by the blackhead recognition device, of the target area image according to the face feature points may include: the face feature points include a first feature point, a second feature point, a third feature point and a fourth feature point. The blackhead identification device takes the ordinate of the first feature point as an upper boundary; takes the ordinate of the second feature point as a lower boundary; takes the abscissa of the third feature point as a left boundary; takes the abscissa of the fourth feature point as a right boundary; and determines the target area image according to the upper boundary, the lower boundary, the left boundary and the right boundary.
Illustratively, the first feature point is feature point number 28 of the OpenCV function library, the second feature point is feature point number 29, the third feature point is feature point number 32, and the fourth feature point is feature point number 34. The blackhead identifying means may determine the target area image based on the ordinate of the feature point No. 28, the ordinate of the feature point No. 29, the abscissa of the feature point No. 32, and the abscissa of the feature point No. 34. Wherein the target area image may be a nose area of the user.
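The boundary construction above can be sketched as a plain array crop. In practice the landmark coordinates would come from a 68-point facial landmark detector; here `landmarks` is a hypothetical dict mapping point numbers to (x, y) pixel coordinates, so the example stays self-contained.

```python
import numpy as np

def crop_target_region(img: np.ndarray, landmarks: dict) -> np.ndarray:
    """Crop the target area image from four face feature points, as in the
    description: upper boundary from point 28's ordinate, lower from point
    29's ordinate, left from point 32's abscissa, right from point 34's
    abscissa. landmarks maps point numbers to (x, y) coordinates."""
    top = landmarks[28][1]     # ordinate of the first feature point
    bottom = landmarks[29][1]  # ordinate of the second feature point
    left = landmarks[32][0]    # abscissa of the third feature point
    right = landmarks[34][0]   # abscissa of the fourth feature point
    return img[top:bottom, left:right]
```

Restricting all later processing to this rectangle (typically the nose) both speeds up the pipeline and removes dark facial features, such as eyebrows, that would otherwise be mistaken for blackheads.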
Implementation mode 2: the blackhead recognition device detects the distance between a user and the blackhead recognition device; and when the distance is within the preset distance range, the blackhead recognition device acquires an image to be recognized and performs gray processing on the image to be recognized to obtain a first image.
It should be noted that the preset distance range is an interval constructed by the first distance threshold and the second distance threshold. The distance is within a preset distance range, namely the distance is greater than the first distance threshold and is less than or equal to the second distance threshold.
Illustratively, assume that the first distance threshold is 10 centimeters (cm), the second distance threshold is 25 cm, and the preset distance range is (10 cm, 25 cm). The blackhead recognition device detects that the distance between the user and the blackhead recognition device is 18 cm; since 18 cm is within the preset distance range (10 cm, 25 cm), the blackhead recognition device acquires an image to be recognized.
Implementation mode 3: the blackhead recognition device detects the current environment brightness value; and when the current environment brightness value is within a preset brightness range, the blackhead recognition device acquires an image to be recognized and performs gray processing on the image to be recognized to obtain a first image.
It should be noted that the preset luminance range is an interval constructed by the first luminance threshold and the second luminance threshold. The current environment brightness value is within a preset brightness range, i.e. the current environment brightness value is greater than the first brightness threshold and less than or equal to the second brightness threshold.
Illustratively, assume that the first luminance threshold is 120 candelas per square meter (cd/m²), the second luminance threshold is 150 cd/m², and the preset luminance range is (120 cd/m², 150 cd/m²). The blackhead identification device detects that the current ambient luminance value is 136 cd/m²; since 136 cd/m² is within the preset luminance range (120 cd/m², 150 cd/m²), the blackhead recognition device acquires an image to be recognized.
It can be understood that the image to be recognized, which is acquired by the blackhead recognition device within the preset distance range or within the preset brightness range, is relatively clear, so that the gray processing of the image to be recognized is facilitated.
102. And performing homomorphic filtering and local histogram equalization processing on the first image to obtain a second image.
It should be noted that homomorphic filtering means that a homomorphic filter can perform contrast enhancement on the first image in the frequency domain while compressing the brightness range of the first image. The homomorphic filter attenuates the low frequencies and boosts the high frequencies, thereby reducing illumination variations in the first image and sharpening its edge details. The principle of homomorphic filtering is that, based on the illumination-reflectance model of imaging during acquisition of the first image, the blackhead recognition device adjusts the gray scale range of the first image, eliminates the problem of non-uniform illumination on the first image, and effectively enhances the gray values of the target area image, where the first image comprises the target area image.
Wherein, the realization process of homomorphic filtering is as follows: the homomorphic filter takes the logarithm of the first image and then performs a Fourier transform to obtain a first target image; the homomorphic filter filters the first target image to obtain a gray scale amplitude range; the homomorphic filter performs an inverse Fourier transform on the gray scale amplitude range and then takes the exponential to obtain the second image.
Optionally, the filtering, by the homomorphic filter, of the first target image to obtain the gray scale amplitude range may include: the homomorphic filter applies the filter function formula to the first target image to obtain the gray scale amplitude range.
Wherein, the filter function formula is H = (γH − γL)[1 − e^(−cX)] + γL; X = Y²/Z²;
H represents the filter function; γH represents a first filtering threshold; γL represents a second filtering threshold; c represents the slope of the transition from low frequency to high frequency; X represents the frequency ratio; Y represents the input frequency; Z represents the cut-off frequency.
In general, γH > 1 (e.g., γH = 2) and γL < 1 (e.g., γL = 0.5).
Exemplarily, c = 4 and Z = 10.
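The filter function above can be evaluated directly. The sketch below is a hypothetical rendering of H = (γH − γL)[1 − e^(−cX)] + γL with the example parameters γH = 2, γL = 0.5, c = 4, Z = 10; at zero frequency H approaches γL and at high frequency it approaches γH, which is exactly what suppresses illumination (low frequency) and boosts detail (high frequency).

```python
import math

def homomorphic_h(y, gamma_h=2.0, gamma_l=0.5, c=4.0, z=10.0):
    """Homomorphic transfer function H for input frequency y and cut-off z.

    H = (gamma_h - gamma_l) * (1 - exp(-c * X)) + gamma_l,  X = y^2 / z^2
    """
    x = (y * y) / (z * z)  # frequency ratio X = Y^2 / Z^2
    return (gamma_h - gamma_l) * (1.0 - math.exp(-c * x)) + gamma_l
```

With the example parameters, `homomorphic_h(0)` returns γL = 0.5 and `homomorphic_h(y)` rises monotonically toward γH = 2 as y grows past the cut-off.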
The local histogram equalization involves both smoothing the first image (image smoothing for short) and sharpening the first image (image sharpening for short). Image smoothing is a low-frequency-enhanced spatial filtering technique. It can blur the first image or remove noise from it. Image smoothing generally adopts a simple averaging method, i.e., the average brightness value over a neighborhood of adjacent pixel points is obtained through calculation. The neighborhood size is directly related to the smoothing effect: the larger the neighborhood, the better the smoothing, but also the greater the loss of edge information in the first image, which blurs the output second image, i.e., degrades the result; therefore, the blackhead recognition device needs to set a suitable neighborhood size to ensure the definition of the second image. Image sharpening, by contrast, is the inverse of image smoothing: a high-frequency-enhanced spatial filtering technique. Image sharpening enhances the high-frequency components to reduce the blurring in the first image, i.e., it enhances the detail edges and contours of the first image while also enhancing the gray contrast, so as to obtain a clearer second image. However, image sharpening increases the noise of the first image while enhancing its detail edges. Therefore, the blackhead identification device combines image smoothing and image sharpening when performing local histogram equalization on the first image to obtain the second image.
Specifically, the blackhead recognition device divides the first image into small regions, called tiles, and then performs histogram equalization on each tile. Because the histogram of each tile is concentrated in a small gray scale range, any noise present in the first image would be amplified by the equalization. The blackhead recognition device therefore uses contrast limiting to avoid amplifying noise present on the first image. For each tile, if the number of pixels of a certain gray value in the histogram exceeds the contrast upper limit, the excess pixels are evenly distributed over the other gray values; after this histogram reconstruction operation, the blackhead identification device performs histogram equalization again, and finally the boundaries of adjacent tiles are stitched using bilinear interpolation. Under normal conditions, the tile size is 8 × 8 pixels and the contrast upper limit is 3.
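The per-tile contrast limiting described above can be sketched as a single clipping-and-redistribution pass over one tile's histogram, followed by the usual equalization lookup table. Real CLAHE implementations may iterate the redistribution and interpolate between tiles; the function names here are illustrative.

```python
def clip_histogram(hist, clip_limit):
    """Clip each histogram bin at clip_limit and spread the excess evenly
    (one CLAHE tile, single pass; any remainder from integer division is
    dropped in this toy version)."""
    excess = sum(max(0, h - clip_limit) for h in hist)
    clipped = [min(h, clip_limit) for h in hist]
    bonus = excess // len(hist)  # even redistribution over all bins
    return [h + bonus for h in clipped]

def equalize_mapping(hist):
    """Histogram-equalization LUT: cumulative distribution scaled to 0..255."""
    total = sum(hist)
    lut, cum = [], 0
    for h in hist:
        cum += h
        lut.append(round(255 * cum / total))
    return lut
```

For example, `clip_histogram([10, 50, 5, 15], 20)` clips the over-full bin to 20 and spreads the 30 excess pixels as +7 per bin, giving `[17, 27, 12, 22]`.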
103. And performing threshold segmentation on the second image to obtain an initial gray threshold.
The threshold segmentation means that the blackhead recognition device classifies the pixels of the second image by setting different gray thresholds.
Optionally, the performing, by the blackhead recognition device, of threshold segmentation on the second image to obtain the initial gray threshold may include: the blackhead identification device obtains the initial gray threshold by applying the Otsu algorithm to the second image.
It should be noted that the Otsu algorithm may be an Otsu function formula.
Wherein, the Otsu function formula is thr = otsu(B), B = gray_h;
thr represents the initial gray threshold; otsu(B) represents the Otsu function; gray_h represents the second image.
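Otsu's method itself is standard: it chooses the threshold that maximizes the between-class variance of the two pixel classes. A pure-Python sketch (not the patented code) over a flat list of 8-bit gray values:

```python
def otsu_threshold(gray):
    """Otsu's method on a flat list of 8-bit gray values: pick the threshold
    t that maximizes the between-class variance w0 * w1 * (m0 - m1)^2."""
    hist = [0] * 256
    for g in gray:
        hist[g] += 1
    total = len(gray)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var, w0, sum0 = 0, -1.0, 0, 0.0
    for t in range(256):
        w0 += hist[t]              # pixels at or below t form class 0
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        m0 = sum0 / w0             # class-0 mean
        m1 = (sum_all - sum0) / w1  # class-1 mean
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

On a clearly bimodal input (e.g. half the pixels at 10 and half at 200) the returned threshold separates the two modes.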
104. And adjusting the initial gray threshold to obtain a target gray threshold, and processing the second image according to the target gray threshold to obtain a target image.
Optionally, the adjusting, by the blackhead recognition device, the initial gray threshold to obtain a target gray threshold, and processing the second image according to the target gray threshold to obtain a target image, which may include: the blackhead recognition device obtains a target gray level threshold value according to the initial gray level threshold value and a preset gray level difference value; the blackhead recognition device obtains a target image according to a preset function formula;
Wherein, the preset function formula is bin = threshold(A), A = (gray_h, η); η = thr − Δt;
bin represents the target image; threshold(A) represents the threshold function; gray_h represents the second image; thr represents the initial gray threshold; Δt represents the preset gray level difference value; η represents the target gray threshold.
It should be noted that the threshold function formula means that the blackhead recognition apparatus binarizes the pixel values of the second image by traversing the second image, so as to obtain the target image, where the target image has only two color components. The preset gray level difference value is used to adjust the initial gray threshold; it may be set by the blackhead recognition device before leaving the factory, or customized by the user according to the user's own needs, and is not specifically limited here.
It can be understood that the preset function formula differs from existing threshold segmentation methods; it is applicable to all images, and the target image obtained by the blackhead identification device according to the preset function formula contains relatively little noise, so that the blackheads on the target image can be effectively identified.
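A minimal sketch of the adjusted-threshold binarization, assuming the common convention that pixels above η = thr − Δt map to 255 and the rest to 0 (the text does not fix which side becomes foreground):

```python
def binarize(image, thr, delta_t):
    """bin = threshold(gray_h, eta) with eta = thr - delta_t: pixels above
    the adjusted threshold become 255, the rest 0 (assumed convention).
    image is a 2-D list of gray values."""
    eta = thr - delta_t  # target gray threshold
    return [[255 if p > eta else 0 for p in row] for row in image]
```

For example, with thr = 90 and Δt = 10 the effective threshold is η = 80, so `binarize([[100, 30], [80, 200]], 90, 10)` yields `[[255, 0], [0, 255]]`.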
Optionally, the obtaining, by the blackhead recognition device, the target image according to a preset function formula may include: the blackhead recognition device obtains a third image according to a preset function formula; the blackhead recognition device removes noise on the third image to obtain a fourth image; and the blackhead recognition device cuts the fourth image to obtain a target image.
Optionally, the blackhead recognition device removes noise on the third image to obtain a fourth image, which may include but is not limited to the following implementation manners:
implementation mode 1: and removing the noise on the third image through low-pass filtering by the blackhead identification device to obtain a fourth image.
It should be noted that low-pass filtering retains the low-frequency portion and removes the high-frequency portion, which filters out the noise in the third image; this increases the smoothness of the third image and blurs its texture details, thereby effectively removing the noise.
Implementation mode 2: the blackhead recognition device removes the noise on the third image through morphological processing to obtain a fourth image.
The morphological processing may include dilation and/or erosion. Erosion followed by dilation is the opening operation, which can separate the target regions on the third image and eliminate some irrelevant regions; dilation followed by erosion is the closing operation, which can obtain the contour corresponding to the target region on the third image. Either the opening operation or the closing operation can remove the noise in the third image, so that the resulting fourth image is clearer.
It will be appreciated that both the morphological processing and the low pass filtering can remove noise on the third image. Since the operation steps of removing noise by morphological processing are more concise, the blackhead recognition apparatus usually removes noise on the third image by morphological processing.
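Opening and closing can be sketched on a small 0/1 grid with 3 × 3 structuring elements. This toy version treats out-of-bounds neighbors as absent, so border behavior differs from padded implementations such as OpenCV's:

```python
def _neighbors3x3(img, r, c):
    """Yield the in-bounds values of the 3x3 neighborhood around (r, c)."""
    h, w = len(img), len(img[0])
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w:
                yield img[rr][cc]

def erode(img):
    """3x3 erosion: a pixel survives only if its whole neighborhood is 1."""
    return [[1 if all(v == 1 for v in _neighbors3x3(img, r, c)) else 0
             for c in range(len(img[0]))] for r in range(len(img))]

def dilate(img):
    """3x3 dilation: a pixel turns 1 if any neighbor is 1."""
    return [[1 if any(v == 1 for v in _neighbors3x3(img, r, c)) else 0
             for c in range(len(img[0]))] for r in range(len(img))]

def opening(img):   # erosion then dilation: removes small speckle noise
    return dilate(erode(img))

def closing(img):   # dilation then erosion: fills small holes in regions
    return erode(dilate(img))
```

An isolated single noise pixel is wiped out by opening, while a one-pixel hole inside a solid region is filled by closing, which is why either operation cleans up the third image.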
Optionally, the cropping the fourth image by the blackhead recognition device to obtain the target image may include: the blackhead recognition device determines the widths to be cut of four edges on the fourth image; and the blackhead recognition device cuts the fourth image according to the width to be cut to obtain a target image.
The target image may include a target region, i.e., a Region Of Interest (ROI).
It should be noted that the widths to be cut on the four sides of the fourth image may be the same or different. The width to be cut may be set by the blackhead recognition device before leaving the factory, or customized by the user according to the user's own habits; it is not specifically limited here. For example, the factory-set width to be cut on each of the four sides is 0.5 cm.
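Cropping a per-edge width from a 2-D pixel grid is a pair of slices. Widths here are in pixels rather than the centimeters of the example, since the pixel-per-centimeter scale is not given in the text:

```python
def crop(image, top, bottom, left, right):
    """Trim the stated number of rows/columns from each edge of a 2-D grid;
    per-edge widths may differ, as the description allows."""
    h, w = len(image), len(image[0])
    return [row[left:w - right] for row in image[top:h - bottom]]
```

Cropping a 4 × 4 grid by one pixel on every side leaves the central 2 × 2 block, which here plays the role of the region of interest.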
105. And identifying and obtaining the total number and the total area of the blackheads from the target image.
Optionally, the identifying the total number and the total area of the blackheads by the blackhead identifying device from the target image may include: the blackhead recognition device determines the area of each blackhead from the target image; the blackhead identification device determines blackheads with the area smaller than a preset area threshold value in each blackhead as first blackheads; the blackhead identification device determines the number of the first blackheads and the total area of the first blackheads.
It should be noted that the area of the first blackhead refers to the number of the first pixel points corresponding to the first blackhead; the preset area threshold refers to a preset pixel number threshold. The area of the first blackhead is smaller than a preset area threshold, that is, the number of the pixels corresponding to the first blackhead is smaller than a preset pixel number threshold.
For example, assume that the preset pixel number threshold is 10. From the target image, the blackhead identification device determines that a first target blackhead has 9 first pixel points (9 < 10); a second target blackhead has 13 second pixel points (13 > 10); a third target blackhead has 7 third pixel points (7 < 10); and a fourth target blackhead has 10 fourth pixel points (10 = 10). At this time, the blackhead recognition device determines the first target blackhead, the third target blackhead, and the fourth target blackhead as first blackheads; that is, the blackhead recognition device determines that three blackheads exist on the target image, and their total area is 26 pixel points.
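Counting the first blackheads amounts to connected-component labeling followed by an area filter. A sketch with 4-connectivity; note that the example above keeps a blob whose area equals the threshold, so the filter here uses "not greater than" rather than strictly "smaller than":

```python
from collections import deque

def blackhead_stats(binary, max_area):
    """Label 4-connected foreground blobs in a 0/1 grid; keep only blobs
    whose pixel count does not exceed max_area (larger blobs are treated
    as 'false blackheads'). Returns (count, total_area)."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    count = total = 0
    for r in range(h):
        for c in range(w):
            if binary[r][c] == 1 and not seen[r][c]:
                area, q = 0, deque([(r, c)])  # flood-fill one blob (BFS)
                seen[r][c] = True
                while q:
                    y, x = q.popleft()
                    area += 1
                    for yy, xx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if (0 <= yy < h and 0 <= xx < w
                                and binary[yy][xx] == 1 and not seen[yy][xx]):
                            seen[yy][xx] = True
                            q.append((yy, xx))
                if area <= max_area:
                    count += 1
                    total += area
    return count, total
```

On a grid with a 2-pixel blob and a 5-pixel blob, `blackhead_stats(grid, 4)` keeps only the small one and reports `(1, 2)`.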
In the embodiment of the invention, an image to be identified is obtained, and gray processing is carried out on the image to be identified to obtain a first image; performing homomorphic filtering and local histogram equalization processing on the first image to obtain a second image; performing threshold segmentation on the second image to obtain an initial gray threshold; adjusting the initial gray threshold to obtain a target gray threshold, and processing the second image according to the target gray threshold to obtain a target image; and identifying and obtaining the total number and the total area of the blackheads from the target image. The blackhead recognition device performs gray processing on an image to be recognized and performs noise point removal processing according to the adjusted initial gray threshold value to obtain a target image; the blackhead recognition device recognizes blackheads in the target image. The method improves the accuracy of the blackhead recognition device in recognizing the blackheads.
As shown in fig. 2, which is a schematic diagram of another embodiment of a blackhead identification method based on image processing in an embodiment of the present invention, the blackhead identification method may include:
201. the method comprises the steps of obtaining an image to be identified, and carrying out gray level processing on the image to be identified to obtain a first image.
202. And performing homomorphic filtering and local histogram equalization processing on the first image to obtain a second image.
203. And performing threshold segmentation on the second image to obtain an initial gray threshold.
204. And adjusting the initial gray threshold to obtain a target gray threshold, and processing the second image according to the target gray threshold to obtain a target image.
205. And identifying and obtaining the total number and the total area of the blackheads from the target image.
It should be noted that steps 201-205 are similar to steps 101-105 described above, and details are not described herein again.
206. And carrying out normalization processing on the total number of the blackheads to obtain the total number of the targets.
In the normalization process, the total number of blackheads is transformed into a scalar, referred to as the target total number.
207. And carrying out the normalization processing on the total area of the blackheads to obtain the total target area.
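The description does not specify which normalization is applied to the totals; a common choice is min-max scaling into [0, 1], sketched here under that assumption, with the calibration bounds lo and hi being hypothetical:

```python
def min_max_normalize(value, lo, hi):
    """Min-max scale value into [0, 1]; lo/hi are assumed calibration
    bounds (the text does not specify the normalization used), and values
    outside the bounds are clamped."""
    if hi == lo:
        return 0.0
    v = (value - lo) / (hi - lo)
    return min(1.0, max(0.0, v))
```

Both the total number of blackheads and their total area would be passed through such a scaling so that the two quantities are comparable before being combined.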
208. And obtaining a target numerical value according to a first formula.
Wherein the target value is used to characterize the severity of the blackhead.
The first formula is B = λC + (1 − λ)D;
B represents the target value; C represents the target total number; D represents the target total area; λ represents a severity coefficient, used to adjust the influence ratio of the target total area of the blackheads on the severity of the blackheads.
As can be seen from the first experimental data, in general, when λ = 0.6, the obtained target value, i.e., the severity of the blackheads, is relatively accurate.
It should be noted that the severity of the blackhead refers to the ratio of the total target area of the blackhead to the target area. The larger the ratio, the more serious the blackhead is, and vice versa, the less serious the blackhead is.
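The first formula combines the two normalized quantities into one severity value; a one-line sketch with λ = 0.6 from the first experimental data:

```python
def severity(c_norm, d_norm, lam=0.6):
    """B = lambda * C + (1 - lambda) * D, with lambda = 0.6 per the first
    experimental data; c_norm and d_norm are the normalized blackhead
    count and total area."""
    return lam * c_norm + (1 - lam) * d_norm
```

With λ = 0.6 the count contributes 60% of the target value and the area 40%, e.g. `severity(1.0, 0.0)` is 0.6.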
Exemplarily, as shown in fig. 3, a schematic diagram of an embodiment of identifying and scoring a blackhead for a blackhead identification apparatus in an embodiment of the present invention may include: an image to be recognized 301, a target area image 302, a first image 303, a second image 304, a third image 305, a fourth image 306, a target image 307, and a score image 308.
It should be noted that the image to be recognized 301 and the target area image 302 are in color; the blackhead recognition device processes the image to be recognized 301 according to a preset algorithm to obtain a target area image 302; the blackhead recognition device performs gray processing on the target area image 302 to obtain a first image 303; the blackhead recognition device performs homomorphic filtering and local histogram equalization processing on the first image 303 to obtain a second image 304; the blackhead recognition device processes the second image 304 by using an Otsu algorithm and a preset function formula to obtain a third image 305; the blackhead recognition device performs morphological processing on the third image 305 to obtain a fourth image 306; the blackhead recognition device cuts the fourth image 306 to obtain a target image 307; the blackhead recognition device recognizes blackheads on the target image 307 and performs score evaluation to obtain a score image 308.
The scoring image 308 may include the positions obtained by the blackhead recognition device labeling the blackheads, and may further include a score. It will be appreciated that the score may be used to characterize the skin quality of the user, for example, a grade of 72 points.
It is to be understood that the process of the blackhead recognition apparatus performing the cropping process on the fourth image 306 may be regarded as the process of the blackhead recognition apparatus extracting the ROI. Optionally, after the blackhead identification device extracts the ROI, "false blackheads" on the ROI may be removed, where the "false blackheads" may be blackheads with the number of pixels greater than or equal to a preset threshold of the number of pixels; the blackhead recognition device gets a target image 307 after eliminating the false blackhead; the blackhead recognition device recognizes blackheads on the target image 307, obtains the number and area of the blackheads, and performs score evaluation of skin quality.
Optionally, after step 208, the method may further include: when the target value is smaller than a first preset value, the blackhead recognition device generates and outputs a first skin quality score according to a second formula; when the target value is greater than or equal to the first preset value and smaller than a second preset value, generating and outputting a second skin quality score according to a third formula; and when the target value is larger than or equal to the second preset value, generating and outputting a third skin quality score according to a fourth formula.
Wherein, the second formula is E1 = 100 − 10B/F; the third formula is E2 = 90 − 10(B − F)/(G − F); the fourth formula is E3 = 80 − 10(B − G)/(1 − G);
E1 represents the first skin quality score; F represents the first preset value; E2 represents the second skin quality score; G represents the second preset value; E3 represents the third skin quality score.
As can be seen from the second experimental data, the skin quality score obtained is generally accurate when F = 0.27 and G = 0.40.
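The three score formulas form a continuous piecewise function of the target value B: at B = F the second and third formulas both give 90, and at B = G the third and fourth both give 80. A sketch with F = 0.27 and G = 0.40 from the second experimental data:

```python
def skin_score(b, f=0.27, g=0.40):
    """Piecewise skin quality score from target value B, using the second,
    third, and fourth formulas with F = 0.27 and G = 0.40."""
    if b < f:
        return 100 - 10 * b / f               # E1 = 100 - 10B/F
    if b < g:
        return 90 - 10 * (b - f) / (g - f)    # E2 = 90 - 10(B-F)/(G-F)
    return 80 - 10 * (b - g) / (1 - g)        # E3 = 80 - 10(B-G)/(1-G)
```

Each branch spans ten points (100→90, 90→80, 80→70 as B runs from 0 to 1), so a lower target value, i.e. milder blackheads, always yields a higher score.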
Optionally, after step 208, the method may further include: when the target value is smaller than a first preset value, the blackhead recognition device marks the blackhead, and generates and outputs a first skin quality score according to a second formula; when the target value is greater than or equal to the first preset value and smaller than a second preset value, the blackhead recognition device marks a blackhead, and generates and outputs a second skin quality score according to a third formula; and when the target numerical value is larger than or equal to the second preset numerical value, the blackhead recognition device marks the blackhead, and generates and outputs a third skin quality score according to a fourth formula.
It can be understood that the blackhead recognition device marks the blackheads, that is, the blackhead recognition device marks the blackheads in the target area, so that the user can know the number and the positions of the blackheads through the target image.
Exemplarily, as shown in fig. 4, a schematic diagram of an embodiment of labeling and scoring a blackhead for a blackhead recognition device in an embodiment of the present invention may include: (1) an original nose tip; (2) and black heads are marked with results and scores.
It is understood that, if the blackhead recognition apparatus takes the original nose tip as the target region, (1) the original nose tip represents the image containing the original nose tip, i.e., the target area image; (2) the blackhead labeling result and score represent the blackhead recognition device's labeling of the number and positions of the blackheads together with the generated skin quality score, where the skin quality score is 74.
Optionally, the skin mass fraction may include: a first skin mass score, a second skin mass score, or a third skin mass score; after the blackhead recognition device generates and outputs the skin quality score, the method may further include: and generating and outputting a skin quality evaluation result and suggestion according to the skin quality score.
The skin quality evaluation result is that the blackhead recognition device evaluates the blackheads of the user according to the skin quality score; the skin quality suggestion refers to that the blackhead identification device provides targeted suggestion on how the user removes blackheads according to the skin quality score and the skin quality evaluation result.
It can be understood that the blackhead recognition device outputs the skin quality evaluation result and the suggestion, so that the user can conveniently make corresponding actions according to the targeted suggestion to improve the skin quality of the user.
In the embodiment of the invention, an image to be identified is obtained, and gray processing is carried out on the image to be identified to obtain a first image; performing homomorphic filtering and local histogram equalization processing on the first image to obtain a second image; performing threshold segmentation on the second image to obtain an initial gray threshold; adjusting the initial gray threshold to obtain a target gray threshold, and processing the second image according to the target gray threshold to obtain a target image; identifying and obtaining the total number and the total area of blackheads from the target image; carrying out normalization processing on the total number of the blackheads to obtain the total number of targets; performing the normalization processing on the total area of the blackheads to obtain a target total area; according to the first formula, a target value is obtained, wherein the target value is used for representing the severity degree of the blackhead.
The blackhead recognition device performs gray processing on an image to be recognized and performs noise point removal processing according to the adjusted initial gray threshold value to obtain a target image; the blackhead recognition device recognizes blackheads in the target image to obtain the total number and the total area of the blackheads, and outputs a target numerical value for representing the severity of the blackheads according to the total number and the total area of the blackheads. The method enables the blackhead recognition device to remove noise for a plurality of times through the image to be recognized, not only improves the accuracy of the blackhead recognition device in recognizing the blackhead, but also facilitates the user to timely master the severity of the blackhead of the user.
As shown in fig. 5, which is a schematic diagram of an embodiment of a blackhead identification apparatus in an embodiment of the present invention, the blackhead identification apparatus may include:
an obtaining module 501, configured to obtain an image to be identified, and perform gray processing on the image to be identified to obtain a first image;
a processing module 502, configured to perform homomorphic filtering and local histogram equalization on the first image to obtain a second image; carrying out threshold segmentation on the second image to obtain an initial gray threshold; adjusting the initial gray threshold to obtain a target gray threshold, and processing the second image according to the target gray threshold to obtain a target image;
and an identifying module 503, configured to identify and obtain the total number and the total area of blackheads from the target image.
Alternatively, in some embodiments of the present invention,
an obtaining module 501, configured to obtain an image to be identified;
a processing module 502, specifically configured to determine a face feature point in the image to be recognized through a preset algorithm; determining a target area image according to the face characteristic points;
the obtaining module 501 is further configured to perform gray processing on the target area image to obtain a first image.
Alternatively, in some embodiments of the present invention,
a processing module 502, specifically configured to obtain an initial gray threshold for the second image by using the Otsu algorithm; obtain a target gray threshold according to the initial gray threshold and a preset gray level difference value; and obtain a target image according to a preset function formula; wherein, the preset function formula is bin = threshold(A), A = (gray_h, η); η = thr − Δt; bin represents the target image; threshold(A) represents the threshold function; gray_h represents the second image; thr represents the initial gray threshold; Δt represents the preset gray level difference value; η represents the target gray threshold.
Alternatively, in some embodiments of the present invention,
a processing module 502, specifically configured to obtain a third image according to a preset function formula; removing noise on the third image to obtain a fourth image; and cutting the fourth image to obtain a target image.
Alternatively, in some embodiments of the present invention,
a processing module 502, specifically configured to determine an area of each blackhead from the target image; determining the blackheads with the area smaller than a preset area threshold value in each blackhead as first blackheads;
the identifying module 503 is specifically configured to determine the number of the first blackheads and the total area of the first blackheads.
Alternatively, in some embodiments of the present invention,
the processing module 502 is further configured to perform normalization processing on the total number of blackheads to obtain a target total number; perform normalization processing on the total area of the blackheads to obtain a target total area; and obtain a target value according to a first formula; wherein, the target value is used to characterize the severity of the blackheads; the first formula is B = λC + (1 − λ)D; B represents the target value; C represents the target total number; D represents the target total area; λ represents a severity coefficient.
Alternatively, in some embodiments of the present invention,
the processing module 502 is further configured to generate and output a first skin quality score according to a second formula when the target value is smaller than a first preset value; generate and output a second skin quality score according to a third formula when the target value is greater than or equal to the first preset value and smaller than a second preset value; and generate and output a third skin quality score according to a fourth formula when the target value is greater than or equal to the second preset value; wherein, the second formula is E1 = 100 − 10B/F; the third formula is E2 = 90 − 10(B − F)/(G − F); the fourth formula is E3 = 80 − 10(B − G)/(1 − G); E1 represents the first skin quality score; F represents the first preset value; E2 represents the second skin quality score; G represents the second preset value; E3 represents the third skin quality score.
As shown in fig. 6, which is a schematic diagram of another embodiment of the blackhead identification apparatus in the embodiment of the present invention, the blackhead identification apparatus may include: a processor 601 and a memory 602;
the processor 601 has the following functions:
acquiring an image to be identified, and carrying out gray level processing on the image to be identified to obtain a first image;
homomorphic filtering and local histogram equalization processing are carried out on the first image to obtain a second image;
carrying out threshold segmentation on the second image to obtain an initial gray threshold;
adjusting the initial gray threshold to obtain a target gray threshold, and processing the second image according to the target gray threshold to obtain a target image;
from the target image, the total number and the total area of the blackheads are identified.
Optionally, the processor 601 further has the following functions:
acquiring an image to be identified; determining face characteristic points in the image to be recognized through a preset algorithm; determining a target area image according to the face characteristic points; and carrying out gray level processing on the target area image to obtain a first image.
Optionally, the processor 601 further has the following functions:
obtaining an initial gray threshold for the second image by using the Otsu algorithm; obtaining a target gray threshold according to the initial gray threshold and a preset gray level difference value; obtaining a target image according to a preset function formula; wherein, the preset function formula is bin = threshold(A), A = (gray_h, η); η = thr − Δt; bin represents the target image; threshold(A) represents the threshold function; gray_h represents the second image; thr represents the initial gray threshold; Δt represents the preset gray level difference value; η represents the target gray threshold.
Optionally, the processor 601 further has the following functions:
obtaining a third image according to a preset function formula; removing noise on the third image to obtain a fourth image; and cutting the fourth image to obtain a target image.
Optionally, the processor 601 further has the following functions:
determining the area of each blackhead from the target image; determining the blackheads with the area smaller than a preset area threshold value in each blackhead as first blackheads; determining the number of the first blackheads and the total area of the first blackheads.
Optionally, the processor 601 further has the following functions:
carrying out normalization processing on the total number of the blackheads to obtain the target total number; carrying out normalization processing on the total area of the blackheads to obtain a target total area; obtaining a target value according to a first formula; wherein, the target value is used to characterize the severity of the blackheads; the first formula is B = λC + (1 − λ)D; B represents the target value; C represents the target total number; D represents the target total area; λ represents a severity coefficient.
Optionally, the processor 601 further has the following functions:
when the target value is smaller than a first preset value, generating and outputting a first skin quality score according to a second formula; when the target value is greater than or equal to the first preset value and smaller than a second preset value, generating and outputting a second skin quality score according to a third formula; when the target value is greater than or equal to the second preset value, generating and outputting a third skin quality score according to a fourth formula; wherein the second formula is E1 = 100 - 10B/F; the third formula is E2 = 90 - 10(B - F)/(G - F); the fourth formula is E3 = 80 - 10(B - G)/(1 - G); E1 represents the first skin quality score; F represents the first preset value; E2 represents the second skin quality score; G represents the second preset value; and E3 represents the third skin quality score.
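The piecewise scoring above can be sketched directly. The preset values `f` and `g` are illustrative assumptions (the patent leaves them unspecified); note that the three branches are continuous at the breakpoints (100 - 10 = 90 at b = f, and 90 - 10 = 80 at b = g).

```python
def skin_score(b, f=0.3, g=0.6):
    """Piecewise skin-quality score from the severity value b in [0, 1];
    f and g are the first and second preset values."""
    if b < f:
        return 100 - 10 * b / f              # second formula: E1
    if b < g:
        return 90 - 10 * (b - f) / (g - f)   # third formula: E2
    return 80 - 10 * (b - g) / (1 - g)       # fourth formula: E3
```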
The memory 602 has the following functions:
the processing procedure and the processing result of the processor 601 are stored.
Fig. 7 is a schematic diagram of another embodiment of the terminal device in the embodiments of the present invention; the terminal device in this embodiment may include the blackhead recognition apparatus shown in fig. 5 or fig. 6.
It is understood that the terminal device in fig. 7 may include a general handheld electronic terminal device with a screen, such as a mobile phone, a smart phone, a portable terminal, a Personal Digital Assistant (PDA), a Portable Multimedia Player (PMP) device, a notebook computer, a note pad, a Wireless Broadband (Wibro) terminal, a tablet personal computer (PC), a smart PC, a Point of Sale (POS) terminal, a car computer, and the like.
The terminal device may also include a wearable device. The wearable device may be worn directly on the user, or may be a portable electronic device integrated into the user's clothing or an accessory. A wearable device is not merely a piece of hardware: through software support, data interaction and cloud interaction it can provide powerful intelligent functions, for example computing, positioning and alarm functions, and it can be connected to mobile phones and various other terminals. Wearable devices may include, but are not limited to, wrist-supported watch types (e.g., watches and other wrist-worn products), foot-supported shoe types (e.g., shoes, socks, or other leg-worn products), head-supported glass types (e.g., glasses, helmets, headbands), and various other product forms such as smart clothing, bags, crutches, and accessories.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized wholly or partially in the form of a computer program product.
The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the present invention are produced, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium that a computer can access, or a data storage device such as a server or a data center integrating one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments provided in the present invention, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Claims (10)
1. A blackhead identification method based on image processing is characterized by comprising the following steps:
acquiring an image to be identified, and carrying out gray level processing on the image to be identified to obtain a first image;
performing homomorphic filtering and local histogram equalization processing on the first image to obtain a second image;
performing threshold segmentation on the second image to obtain an initial gray threshold;
adjusting the initial gray threshold to obtain a target gray threshold, and processing the second image according to the target gray threshold to obtain a target image;
and identifying and obtaining the total number and the total area of the blackheads from the target image.
2. The method according to claim 1, wherein the obtaining of the image to be recognized and the performing of the gray processing on the image to be recognized to obtain the first image comprises:
acquiring an image to be identified;
determining human face characteristic points in the image to be recognized through a preset algorithm;
determining a target area image according to the face characteristic points;
and carrying out gray level processing on the target area image to obtain a first image.
3. The method according to claim 1, wherein the performing threshold segmentation on the second image to obtain an initial gray threshold, adjusting the initial gray threshold to obtain a target gray threshold, and processing the second image according to the target gray threshold to obtain a target image comprises:
obtaining an initial gray threshold value for the second image by using an Otsu algorithm;
obtaining a target gray level threshold value according to the initial gray level threshold value and a preset gray level difference value;
obtaining a target image according to a preset function formula;
wherein the preset function formula is bin = Threshold(A), A = (gray_h, η); η = thr - Δt;
bin represents the target image; Threshold(A) represents a threshold function; gray_h represents the second image; thr represents the initial gray threshold; Δt represents the preset gray level difference value; and η represents the target gray threshold.
4. The method according to claim 3, wherein obtaining the target image according to the preset function formula comprises:
obtaining a third image according to a preset function formula;
removing noise on the third image to obtain a fourth image;
and cutting the fourth image to obtain a target image.
5. The method of claim 1, wherein the identifying the total number and the total area of blackheads from the target image comprises:
determining the area of each blackhead from the target image;
determining the blackheads with the area smaller than a preset area threshold value in each blackhead as first blackheads;
determining the number of the first blackheads and the total area of the first blackheads.
6. The method according to any one of claims 1-5, further comprising:
carrying out normalization processing on the total number of the blackheads to obtain the total number of targets;
performing the normalization processing on the total area of the blackheads to obtain a target total area;
obtaining a target numerical value according to a first formula;
wherein the target value is used to characterize the severity of the blackhead;
the first formula is B = λC + (1 - λ)D;
B represents the target value; C represents the target total number; D represents the target total area; and λ represents a severity coefficient.
7. The method of claim 6, further comprising:
when the target value is smaller than a first preset value, generating and outputting a first skin quality score according to a second formula;
when the target value is greater than or equal to the first preset value and smaller than a second preset value, generating and outputting a second skin quality score according to a third formula;
when the target numerical value is larger than or equal to the second preset numerical value, generating and outputting a third skin quality score according to a fourth formula;
wherein the second formula is E1 = 100 - 10B/F;
the third formula is E2 = 90 - 10(B - F)/(G - F);
the fourth formula is E3 = 80 - 10(B - G)/(1 - G);
E1 represents the first skin quality score; F represents the first preset value; E2 represents the second skin quality score; G represents the second preset value; and E3 represents the third skin quality score.
8. A blackhead recognition apparatus, comprising:
the system comprises an acquisition module, a processing module and a display module, wherein the acquisition module is used for acquiring an image to be identified and carrying out gray level processing on the image to be identified to obtain a first image;
the processing module is used for carrying out homomorphic filtering and local histogram equalization processing on the first image to obtain a second image; performing threshold segmentation on the second image to obtain an initial gray threshold; adjusting the initial gray threshold to obtain a target gray threshold, and processing the second image according to the target gray threshold to obtain a target image;
and the identification module is used for identifying and obtaining the total number and the total area of the blackheads from the target image.
9. A blackhead recognition apparatus, comprising:
a memory storing executable program code;
and a processor coupled to the memory;
the processor calls the executable program code stored in the memory, which when executed by the processor causes the processor to implement the method of any one of claims 1-7.
10. A computer readable storage medium having executable program code stored thereon, wherein the executable program code, when executed by a processor, implements the method of any of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110362210.5A CN113128372A (en) | 2021-04-02 | 2021-04-02 | Blackhead identification method and device based on image processing and terminal equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113128372A true CN113128372A (en) | 2021-07-16 |
Family
ID=76774747
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110362210.5A Pending CN113128372A (en) | 2021-04-02 | 2021-04-02 | Blackhead identification method and device based on image processing and terminal equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113128372A (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105139027A (en) * | 2015-08-05 | 2015-12-09 | 北京天诚盛业科技有限公司 | Capsule head defect detection method and apparatus |
CN106846276A (en) * | 2017-02-06 | 2017-06-13 | 上海兴芯微电子科技有限公司 | A kind of image enchancing method and device |
US20170301095A1 (en) * | 2015-12-31 | 2017-10-19 | Shanghai United Imaging Healthcare Co., Ltd. | Methods and systems for image processing |
CN108550131A (en) * | 2018-04-12 | 2018-09-18 | 浙江理工大学 | Feature based merges the SAR image vehicle checking method of sparse representation model |
CN109033954A (en) * | 2018-06-15 | 2018-12-18 | 西安科技大学 | A kind of aerial hand-written discrimination system and method based on machine vision |
CN109285171A (en) * | 2018-09-21 | 2019-01-29 | 国网甘肃省电力公司电力科学研究院 | A kind of insulator hydrophobicity image segmentation device and method |
CN110084791A (en) * | 2019-04-18 | 2019-08-02 | 天津大学 | A kind of early blight of tomato based on image procossing and late blight automatic testing method |
CN110321896A (en) * | 2019-04-30 | 2019-10-11 | 深圳市四季宏胜科技有限公司 | Blackhead recognition methods, device and computer readable storage medium |
CN110533648A (en) * | 2019-08-28 | 2019-12-03 | 上海复硕正态企业管理咨询有限公司 | A kind of blackhead identifying processing method and system |
- 2021-04-02: CN CN202110362210.5A patent/CN113128372A/en, active, Pending
Non-Patent Citations (3)
Title |
---|
FANG YAN: "Feature extraction and analysis on X-ray image of Xinjiang Kazak Esophageal cancer by using gray-level histograms", 2013 IEEE International Conference on Medical Imaging Physics and Engineering, pages 61-65 *
FANG_YANG: "An Overview of Image Enhancement" (in Chinese), retrieved from the Internet: <URL: https://www.cnblogs.com/fydeblog/p/10734733.html> *
ZHANG Di: "The K-SVD Algorithm for Sparse Image Representation and Its Application in Face Recognition" (in Chinese), China Masters' Theses Full-text Database, Information Science and Technology, no. 2021, pages 138-1393 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
RU2711050C2 (en) | Image and attribute quality, image enhancement and identification of features for identification by vessels and faces and combining information on eye vessels with information on faces and / or parts of faces for biometric systems | |
CN108323204B (en) | Method for detecting face flaw point and intelligent terminal | |
WO2018176938A1 (en) | Method and device for extracting center of infrared light spot, and electronic device | |
CN107958453B (en) | Method and device for detecting lesion region of mammary gland image and computer storage medium | |
Xiong et al. | An enhancement method for color retinal images based on image formation model | |
CN111295664B (en) | Method and device for positioning hairline contour and computer readable storage medium | |
CN107172354B (en) | Video processing method and device, electronic equipment and storage medium | |
JP2007272435A (en) | Face feature extraction device and face feature extraction method | |
WO2019014813A1 (en) | Method and apparatus for quantitatively detecting skin type parameter of human face, and intelligent terminal | |
CN108369644B (en) | Method for quantitatively detecting human face raised line, intelligent terminal and storage medium | |
CN111860369A (en) | Fraud identification method and device and storage medium | |
KR20160115663A (en) | Image processing apparatus and image processing method | |
Firmansyah et al. | Detection melanoma cancer using ABCD rule based on mobile device | |
CN114782984A (en) | Sitting posture identification and shielding judgment method based on TOF camera and intelligent desk lamp | |
JPWO2017061106A1 (en) | Information processing apparatus, image processing system, image processing method, and program | |
CN110473176B (en) | Image processing method and device, fundus image processing method and electronic equipment | |
CN114298985B (en) | Defect detection method, device, equipment and storage medium | |
CN113128376A (en) | Wrinkle recognition method based on image processing, wrinkle recognition device and terminal equipment | |
CN113128373A (en) | Color spot scoring method based on image processing, color spot scoring device and terminal equipment | |
CN114140481A (en) | Edge detection method and device based on infrared image | |
CN113487473A (en) | Method and device for adding image watermark, electronic equipment and storage medium | |
CN113569708A (en) | Living body recognition method, living body recognition device, electronic apparatus, and storage medium | |
Fathy et al. | Benchmarking of pre-processing methods employed in facial image analysis | |
US11354925B2 (en) | Method, apparatus and device for identifying body representation information in image, and computer readable storage medium | |
CN111311610A (en) | Image segmentation method and terminal equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||