CN117392733B - Acne grading detection method and device, electronic equipment and storage medium

Acne grading detection method and device, electronic equipment and storage medium

Info

Publication number
CN117392733B
CN117392733B (application CN202311686743.4A)
Authority
CN
China
Prior art keywords
value
pixel
face image
determining
pixel value
Prior art date
Legal status
Active
Application number
CN202311686743.4A
Other languages
Chinese (zh)
Other versions
CN117392733A (en)
Inventor
王念欧
郦轲
刘文华
万进
Current Assignee
Shenzhen Accompany Technology Co Ltd
Original Assignee
Shenzhen Accompany Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Accompany Technology Co Ltd filed Critical Shenzhen Accompany Technology Co Ltd
Priority to CN202311686743.4A
Publication of CN117392733A
Application granted
Publication of CN117392733B
Status: Active


Classifications

    • G06V40/162 — Human face detection, localisation or normalisation using pixel segmentation or colour matching
    • G06V10/56 — Extraction of image or video features relating to colour
    • G06V10/754 — Organisation of the matching processes involving a deformation of the sample pattern or of the reference pattern; elastic matching
    • G06V10/765 — Recognition using classification, with rules for classification or partitioning the feature space
    • G06V40/169 — Face feature extraction: holistic features and representations, i.e. based on the facial image taken as a whole
    • G06V40/171 — Face feature extraction: local features and components; facial parts; occluding parts, e.g. glasses; geometrical relationships

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an acne grading detection method and device, an electronic device and a storage medium. The method comprises the following steps: acquiring a face image to be processed, and determining a skin reference pixel value based on the face image to be processed; transforming pixel values of pixel points in the face image to be processed according to the skin reference pixel value to obtain transformed pixel values; determining a positive normalization parameter and a negative normalization parameter according to the transformed pixel values; judging, for each transformed pixel value, whether it meets the forward normalization condition, and if so, determining the absolute value of the ratio of the transformed pixel value to the positive normalization parameter as a target pixel value, otherwise determining the absolute value of the ratio of the transformed pixel value to the negative normalization parameter as the target pixel value; determining a target face image based on the target pixel values; and performing acne detection on the target face image to obtain an acne grade. This solves the problem of inaccurate detection in acne grading and improves the accuracy of acne grading.

Description

Acne grading detection method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular to an acne grading detection method and device, an electronic device, and a storage medium.
Background
During puberty, most men and women suffer from acne vulgaris, a common skin problem. More seriously, acne may leave pigmentation and scars on the patient's face, which can cause the patient to lose confidence and even avoid social activity. Automatic diagnosis and treatment of acne is therefore an important research topic. Existing models and methods for detecting acne grades cannot reliably distinguish the key areas of a human face from the background when processing a face image, so recognition accuracy is low and acne cannot be accurately graded.
Disclosure of Invention
The invention provides an acne grading detection method and device, an electronic device and a storage medium, which are used to solve the problem of inaccurate acne grading.
According to an aspect of the present invention, there is provided an acne grading detection method, comprising:
acquiring a face image to be processed, and determining a skin reference pixel value based on the face image to be processed;
transforming the pixel values of the pixel points in the face image to be processed according to the skin reference pixel value to obtain transformed pixel values;
determining a positive normalization parameter and a negative normalization parameter according to each transformed pixel value;
for each transformed pixel value, judging whether the transformed pixel value meets a forward normalization condition; if so, determining the absolute value of the ratio of the transformed pixel value to the positive normalization parameter as a target pixel value; otherwise, determining the absolute value of the ratio of the transformed pixel value to the negative normalization parameter as the target pixel value;
determining a target face image based on each of the target pixel values;
and detecting the acne of the target face image to obtain the acne classification.
According to another aspect of the present invention, there is provided an acne grade detection device comprising:
the skin reference value determining module is used for acquiring a face image to be processed and determining a skin reference pixel value based on the face image to be processed;
the pixel transformation module is used for transforming the pixel values of the pixel points in the face image to be processed according to the skin reference pixel values to obtain transformed pixel values;
the normalization parameter determining module is used for determining a positive normalization parameter and a negative normalization parameter according to each transformed pixel value;
the target pixel value determining module is used for judging whether the transformed pixel value meets a forward normalization condition according to each transformed pixel value, and if so, determining the absolute value of the ratio of the transformed pixel value to the forward normalization parameter as a target pixel value; otherwise, determining the absolute value of the ratio of the transformed pixel value to the negative normalization parameter as a target pixel value;
a target image determining module, configured to determine a target face image based on each of the target pixel values;
and the acne grading module is used for carrying out acne detection on the target face image to obtain acne grading.
According to another aspect of the present invention, there is provided an electronic apparatus including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the acne classification detection method according to any of the embodiments of the present invention.
According to another aspect of the present invention, there is provided a computer readable storage medium storing computer instructions for causing a processor to execute the acne classification detection method according to any of the embodiments of the present invention.
According to the technical scheme, a face image to be processed is acquired, and a skin reference pixel value is determined based on it; the pixel values of the pixel points in the face image to be processed are transformed according to the skin reference pixel value to obtain transformed pixel values; a positive normalization parameter and a negative normalization parameter are determined from the transformed pixel values; for each transformed pixel value it is judged whether the forward normalization condition is met, and the absolute value of the ratio of the transformed pixel value to the positive normalization parameter (if met) or to the negative normalization parameter (otherwise) is determined as the target pixel value; a target face image is determined based on the target pixel values; and acne detection is performed on the target face image to obtain an acne grade. This solves the problem of inaccurate detection in acne grading. Determining the skin reference pixel value from the face image to be processed and transforming each pixel value based on it enhances the difference between the skin and the background area; folding the transformed pixel values based on the forward normalization condition and the positive and negative normalization parameters further enhances that difference; and processing the target pixel values to determine the target face image filters out the useless background, effectively enhances the key information, and accurately separates the face from the background area. Performing acne detection on the information-enhanced target face image therefore yields a more accurate acne grade, improving the accuracy of acne grading.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the invention or to delineate the scope of the invention. Other features of the present invention will become apparent from the description that follows.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a method for classifying acne detection according to a first embodiment of the present invention;
fig. 2 is a flowchart of a method for detecting acne classification according to a second embodiment of the present invention;
fig. 3 is a diagram showing an example of implementation of acne classification detection according to the second embodiment of the present invention;
fig. 4 is a schematic structural diagram of an acne classification detecting device according to a third embodiment of the present invention;
fig. 5 is a schematic structural diagram of an electronic device implementing the acne classification detection method according to an embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without inventive effort shall fall within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
Fig. 1 is a flowchart of an acne classification detection method according to a first embodiment of the present invention, where the method may be performed by an acne classification detection device, and the acne classification detection device may be implemented in hardware and/or software, and the acne classification detection device may be configured in an electronic device. As shown in fig. 1, the method includes:
s101, acquiring a face image to be processed, and determining a skin reference pixel value based on the face image to be processed.
In this embodiment, the face image to be processed is an image on which acne grading detection is to be performed, usually captured from the face by an image acquisition device such as a camera or a video recorder. The skin reference pixel value is a pixel value used to represent the skin of the human face.
The image acquisition device may be fixed or movable as required. For example, it may be installed in a fixed position, such as in large equipment of a hospital or a beauty institution, and a user stands in front of it for image acquisition when acne grading detection is needed; alternatively, it may be part of a movable device, such as a mobile intelligent terminal or a wearable portable device, so that the user can perform acne grading detection anytime and anywhere. The image collected by the device can be used directly as the face image to be processed, or the collected image can be preprocessed to filter out useless or interfering information, with the preprocessed image used as the face image to be processed. The face image to be processed is analysed to determine the position of the facial skin, and the pixel value at that position is taken as the skin reference pixel value: for example, the face position is identified, one or more points within the face are sampled, and a statistic of their pixel values (such as the mean, maximum or minimum) is used as the skin reference pixel value.
S102, transforming the pixel values of the pixel points in the face image to be processed according to the skin reference pixel values to obtain transformed pixel values.
The pixel value of each pixel point in the face image to be processed is determined, and each pixel value is transformed according to the skin reference pixel value. For example, the difference between the pixel value and the skin reference pixel value may be calculated and used directly as the transformed pixel value, or processed further according to the difference, e.g. assigned differently over different ranges. Transforming every pixel value in the face image to be processed yields the transformed pixel values and enhances the distinction between the face and the background.
S103, determining a positive normalization parameter and a negative normalization parameter according to each transformed pixel value.
In this embodiment, the positive normalization parameter is the parameter used for normalization when a pixel value is positive, determined by the brightness range between the brightest pixel and the skin pixels; the negative normalization parameter is the parameter used for normalization when a pixel value is negative, determined by the brightness range between the darkest pixel and the skin pixels. Whether a pixel value counts as positive or negative may be decided, for example, by whether any of its RGB components is negative, by how many of the three components are negative, or by the sign of the sum of the three components.
The three component values of each transformed pixel value are analysed, and the maximum and minimum of each channel are found; the positive normalization parameter is determined from the maxima, and the negative normalization parameter from the minima.
S104, judging whether the transformed pixel values meet the forward normalization condition for each transformed pixel value, if so, executing S105; otherwise, S106 is performed.
In this embodiment, the forward normalization condition is the condition deciding whether data is normalized in the forward direction, i.e. treated as positive. The condition is set in advance: for example, all three components of the pixel value are positive, or the number of positive components among the three exceeds the number of negative ones.
S105, determining the absolute value of the ratio of the transformed pixel value to the forward normalization parameter as a target pixel value.
In this embodiment, the target pixel value is the pixel value after the folding transformation. When a transformed pixel value satisfies the forward normalization condition, it is normalized forward: it is divided by the positive normalization parameter, and because transformed pixel values may contain negative components, the absolute value of the ratio is taken as the target pixel value. The ratio is computed per channel, giving three normalized values for the R, G and B channels; taking their absolute values yields the target pixel value.
S106, determining the absolute value of the ratio of the transformed pixel value to the negative normalization parameter as a target pixel value.
When a transformed pixel value does not satisfy the forward normalization condition, it is normalized negatively: it is divided by the negative normalization parameter, and the absolute value of the ratio is taken as the target pixel value. Again the ratio is computed per channel, giving three normalized values for the R, G and B channels, whose absolute values form the target pixel value.
The transformed pixel values may be positive or negative and cannot be displayed directly. Steps S104-S106 apply a folding transformation to each transformed pixel value, converting all values to positive numbers. A pixel value generally comprises three RGB components, each of which may be positive or negative, and not necessarily all with the same sign.
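As an illustrative example with made-up numbers: suppose the red components of the transformed pixel values range from −80 to 120, so the red positive normalization parameter is |120| = 120 and the red negative normalization parameter is |−80| = 80. A transformed pixel (60, 10, 5) has component sum 75 > 0 and meets the forward normalization condition, so its red target component is |60/120| = 0.5; a transformed pixel (−40, −20, −5) has component sum −65 ≤ 0, so its red target component is |−40/80| = 0.5.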
S107, determining a target face image based on each target pixel value.
In this embodiment, the target face image is the image obtained after information enhancement. Compared with the face image to be processed, invalid information such as the background area has been filtered out, so the key information of the face is effectively retained.
The target face image may be formed directly from the target pixel values, or the target pixel values may be processed further to filter out useless information. In this embodiment of the application, information enhancement is achieved by the transformation and folding transformation of the face image to be processed, yielding the target face image.
S108, detecting the acne of the target face image to obtain the acne classification.
In this embodiment, the acne grades may be mild, moderate, severe and very severe, or primary, secondary, tertiary and so on, with severity increasing or decreasing in order. A neural network model is trained in advance on a large number of training samples, so that the trained model can recognise face images based on the learned knowledge. Acne detection is performed on the target face image by the neural network model to determine the acne grade. Only the grade may be detected, or the type of acne may be detected at the same time.
The acne grading detection method provided by this embodiment can assist users in acne detection. For example, it may be integrated into dedicated detection equipment to ensure the accuracy and objectivity of acne grading; alternatively, the user can run the detection anytime and anywhere without visiting a hospital, saving both the user's time and the detection cost.
The embodiment of the invention provides an acne grading detection method that solves the problem of inaccurate acne grading detection. A skin reference pixel value is determined from the face image to be processed, and each pixel value in the image is transformed based on it, enhancing the difference between the skin and the background area. Positive and negative normalization parameters are determined from the transformed pixel values, which are then folded based on the forward normalization condition and these parameters to obtain target pixel values, further enhancing the skin/background difference. Processing the target pixel values to determine the target face image filters out the useless background, effectively enhances key information, and accurately separates the face from the background area. Acne detection on the information-enhanced target face image therefore yields a more accurate acne grade, improving the accuracy of acne grading.
Example 2
Fig. 2 is a flowchart of a method for detecting acne classification according to a second embodiment of the present invention, where the method is refined based on the foregoing embodiment. As shown in fig. 2, the method includes:
s201, acquiring an original face image.
In this embodiment, the original face image is the raw image collected by the image acquisition device. The user triggers the device to acquire images. When collecting the user's face image, the device may prescribe an optimal acquisition distance and angle and prompt the user to adjust pose accordingly; alternatively, the user stays still and the pose of the acquisition device is adjusted for acquisition. The face images so collected are taken as original face images.
S202, performing face detection and data filtering on the original face image to obtain a face image to be processed.
A face detection model, for example DBFace, is trained in advance. Face detection is performed on the original face image by this model, the non-face background areas are filtered out, and the face image to be processed is obtained. For example, after the face area is determined, its coordinate range is computed and the original face image is cropped accordingly. Alternatively, after the original face image is captured, the user manually selects a cropping area or ratio, the face position is determined from it, and the original face image is cropped to filter out the background area.
Because the background occupies a large proportion of the original face image, it strongly disturbs model recognition, so meaningless areas are filtered out by face detection. Face detection removes most of the background, but some invalid information still remains around the face, so further filtering is required.
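As an illustrative sketch only: the embodiment names DBFace as the detector, but any face detector that outputs a bounding box fits this step. Here OpenCV's Haar cascade stands in for DBFace, and the margin ratio is a hypothetical parameter:

```python
import cv2

def crop_face(original_bgr, margin: float = 0.1):
    """Detect the largest face in the original image and crop to it,
    filtering out most of the background."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(original_bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return original_bgr                      # no face found: keep the full image
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest detection
    my, mx = int(h * margin), int(w * margin)    # small margin around the face box
    H, W = original_bgr.shape[:2]
    return original_bgr[max(0, y - my):min(H, y + h + my),
                        max(0, x - mx):min(W, x + w + mx)]
```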
S203, determining the coordinates of the central point of the face image to be processed.
In this embodiment, the center point coordinates are the position coordinates of the center point of the image. The center point coordinates of the face image to be processed are determined from its resolution or size.
S204, determining at least one candidate pixel point based on the center point coordinates.
In this embodiment, a candidate pixel point is a pixel point used to calculate the skin reference pixel value. The number of candidate points can be fixed or variable as required. For example, with a fixed number, that many pixel points near the center point coordinates are taken as candidates; alternatively, an area is set, e.g. a square of side length n centred on the center point, and all points inside it are candidates. The side length may be a fixed value or derived from the image size, e.g. one quarter of the image side.
S205, calculating the average value of the pixel values of each candidate pixel point to obtain a skin reference pixel value.
After the candidate pixel points are obtained, the pixel value of each is determined and the pixel values of all candidates are averaged; the mean is taken as the skin reference pixel value. The mean may be computed per RGB component, giving a mean for each of the three channels.
S206, for each pixel point in the face image to be processed, taking the difference of the pixel value of the pixel point minus the skin reference pixel value as the transformed pixel value.
For each pixel point in the face image to be processed, the skin reference pixel value is subtracted from the pixel value of the pixel point, and the resulting difference is the transformed pixel value. This step centres the pixel values of the face image to be processed on the skin reference pixel value: skin pixels approach 0, darker pixels become less than 0, and brighter pixels become greater than 0.
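A minimal NumPy sketch of S203-S206 under stated assumptions: the image is an (H, W, 3) RGB array, the candidate region is a centre square whose side is one quarter of the shorter image side (following the example above), and the sign convention is pixel value minus reference so that skin pixels land near 0:

```python
import numpy as np

def center_and_transform(face_rgb: np.ndarray):
    """Compute the skin reference pixel value from a centre square of the
    face image and centre every pixel on it (S203-S206)."""
    H, W = face_rgb.shape[:2]
    cy, cx = H // 2, W // 2                    # centre point coordinates (S203)
    half = min(H, W) // 8                      # half of the assumed square side
    region = face_rgb[cy - half:cy + half, cx - half:cx + half]   # candidate pixels (S204)
    skin_ref = region.reshape(-1, 3).mean(axis=0)                 # per-channel mean (S205)
    transformed = face_rgb.astype(np.float64) - skin_ref          # centring transform (S206)
    return skin_ref, transformed
```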
S207, determining a positive normalization parameter and a negative normalization parameter according to each transformed pixel value.
As an optional implementation of this embodiment, the step of determining a positive normalization parameter and a negative normalization parameter according to each transformed pixel value is further refined as:
A1, comparing the values of the red, green and blue channels of the transformed pixel values, and determining the maximum and minimum of the red channel, the maximum and minimum of the green channel, and the maximum and minimum of the blue channel.
Each pixel value has red, green and blue (RGB) components. The RGB values of each transformed pixel value are determined, and for each channel the component values are compared to obtain that channel's maximum and minimum. At this point the maxima and minima may be positive or negative.
A2, determining a channel maximum operator based on the maximum value of the red channel, the maximum value of the green channel and the maximum value of the blue channel, and determining the absolute value of the channel maximum operator as a forward normalization parameter.
In this embodiment, the channel maximum operator is the per-channel maximum of the pixel values. The maxima of the red, green and blue channels are taken as the RGB components of the channel maximum operator; since these values may be negative, the absolute values of the three components are determined as the positive normalization parameter, i.e. the positive normalization parameter comprises a normalization parameter for each of the RGB channels.
A3, determining a channel minimum operator based on the minimum value of the red channel, the minimum value of the green channel and the minimum value of the blue channel, and determining the absolute value of the channel minimum operator as a negative normalization parameter.
In this embodiment, the channel minimum operator is the per-channel minimum of the pixel values. The minima of the red, green and blue channels are taken as the RGB components of the channel minimum operator; since these values may be negative, the absolute values of the three components are determined as the negative normalization parameter, i.e. the negative normalization parameter comprises a normalization parameter for each of the RGB channels.
S208, judging whether the transformed pixel values meet the forward normalization condition for each transformed pixel value, if so, executing S209; otherwise, S210 is performed.
Optionally, the forward normalization condition is: the sum of the values of the red, green and blue three channels of the transformed pixel value is greater than 0.
The forward normalization condition is preferably set such that the sum of the RGB three components of the pixel value is greater than 0.
S209, determining the absolute value of the ratio of the transformed pixel value to the forward normalization parameter as a target pixel value.
S210, determining the absolute value of the ratio of the transformed pixel value to the negative normalization parameter as a target pixel value.
When the sum of the red, green and blue values of a transformed pixel value is greater than 0, the pixel is brighter than the skin, so it is normalized with the positive normalization parameter. A sum of no more than 0 indicates the pixel is darker than the skin, so it is normalized with the negative normalization parameter. Taking the absolute value folds negative pixel values onto the positive side. Together, normalization and folding map all pixel values into the range [0, 1], bringing skin pixels close to 0 and other pixels closer to 1.
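A minimal NumPy sketch of this normalize-and-fold step, assuming the (H, W, 3) float layout and the sign convention above; the division-by-zero guard is an added safety assumption, not part of the described method:

```python
import numpy as np

def fold_normalize(transformed: np.ndarray) -> np.ndarray:
    """Fold reference-centred pixel values towards the range [0, 1]."""
    flat = transformed.reshape(-1, 3)
    pos_param = np.abs(flat.max(axis=0))    # channel maximum operator -> positive parameter
    neg_param = np.abs(flat.min(axis=0))    # channel minimum operator -> negative parameter
    pos_param = np.where(pos_param == 0, 1.0, pos_param)  # guard degenerate images
    neg_param = np.where(neg_param == 0, 1.0, neg_param)
    # Forward normalization condition: sum of the RGB components > 0.
    forward = transformed.sum(axis=2, keepdims=True) > 0
    return np.where(forward,
                    np.abs(transformed / pos_param),
                    np.abs(transformed / neg_param))
```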
S211, clustering the target pixel values, and determining the pixel points with the type of skin and the pixel points with the type of background area.
The obtained target pixel values are clustered; the clustering algorithm can be chosen as required, e.g. K-means, to separate the pixel points whose type is skin from those whose type is background. Taking K-means as an example, all target pixel values are clustered with K=2 from selected initial cluster centres; the pixel points belonging to the cluster whose centre is closer to 0 are classified as skin, and the remaining pixel points as background, realising the segmentation of the key area.
S212, carrying out pixel value assignment on the pixel points with the type of the background area to form a background pixel value.
In this embodiment, the background pixel value is the pixel value assigned to the background area. The pixel values of all background pixel points are assigned a fixed value that is clearly distinguishable from skin; for example, assigning (0, 0, 0) to the pixel points of the background area yields the background pixel value.
S213, determining a target face image based on each background pixel value and each target pixel value of each type of pixel point of the skin.
The background pixel values and the target pixel values of the skin pixel points form the target face image according to their corresponding pixel coordinates. All non-face areas are blackened and the face area is retained, strengthening the key information.
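A hedged sketch of the key-region segmentation (S211-S213) using scikit-learn's KMeans as the concrete clustering algorithm; identifying the skin cluster as the one whose centre lies nearer 0, and blackening the background with (0, 0, 0), follow the description above:

```python
import numpy as np
from sklearn.cluster import KMeans

def segment_key_region(target: np.ndarray) -> np.ndarray:
    """Cluster target pixel values into skin and background, then blacken the background."""
    h, w, _ = target.shape
    flat = target.reshape(-1, 3)
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(flat)
    # Folding pushed skin pixels towards 0, so the cluster whose centre has
    # the smaller norm is taken to be skin; the other cluster is background.
    skin_label = int(np.argmin(np.linalg.norm(km.cluster_centers_, axis=1)))
    out = flat.copy()
    out[km.labels_ != skin_label] = 0.0      # background pixel value assignment (S212)
    return out.reshape(h, w, 3)              # target face image (S213)
```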
S214, global feature extraction is carried out on the target face image, and global feature information is obtained.
In this embodiment, the global feature information may be specifically understood as feature information obtained by extracting global features of the target face image, and may represent global information of the target face image, where the global feature information may be a feature vector.
A neural network model for global feature extraction, e.g. a convolutional neural network, is constructed and trained in advance. Global feature extraction is performed on the target face image by the pre-trained model to obtain the global feature information. Several advanced classification models were compared experimentally; the experiments show that ShuffleNet is the most effective, balancing accuracy and speed, and is suitable for deployment on mobile terminals and edge devices.
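A sketch of the global branch. The text names ShuffleNet without fixing the variant, so torchvision's shufflenet_v2_x1_0 with ImageNet weights is an assumption here; the classifier head is replaced by an identity so the forward pass returns the pooled global feature vector:

```python
import torch
import torch.nn as nn
from torchvision import models

# ShuffleNetV2 backbone pre-trained on ImageNet; replacing the final fc layer
# with an identity makes the network output its globally pooled features.
backbone = models.shufflenet_v2_x1_0(
    weights=models.ShuffleNet_V2_X1_0_Weights.IMAGENET1K_V1)
feat_dim = backbone.fc.in_features   # 1024 for shufflenet_v2_x1_0
backbone.fc = nn.Identity()

with torch.no_grad():
    x = torch.randn(1, 3, 224, 224)  # a resized, normalized target face image
    global_feat = backbone(x)        # shape (1, 1024)
```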
S215, extracting local features of at least one dimension of the face image to be processed to obtain at least one piece of local feature information.
In this embodiment, the local feature information may be specifically understood as feature information obtained after extracting local features of the target face image, and may represent rough local information of the face image to be processed, where the local feature information may be a feature vector.
Local features of one or more dimensions are extracted from the face image to be processed by an algorithm, a model or other means, yielding at least one piece of local feature information. The local features complement the global features.
As an optional implementation of this embodiment, the step of extracting local features of at least one dimension from the face image to be processed to obtain at least one piece of local feature information is further refined as:
b1, carrying out center cutting on the face image to be processed, and determining a target recognition area.
In this embodiment, the target recognition area is the region of the face image to be processed from which local features are extracted. The centre point coordinates of the image are determined and, taking them as the reference point, a region of preset size is selected as the target recognition area; for example, with the centre point as the centre of the area, a square of side length s is cropped out as the target recognition area.
And B2, determining a maximum pixel value, a minimum pixel value, a pixel mean value and a pixel median value according to the pixel values of all the pixel points in the target identification area.
The pixel value of each pixel point in the target recognition area is determined; the values are compared to find the maximum pixel value, the minimum pixel value and the pixel median, and the mean of all pixel values is calculated as the pixel mean.
When calculating the maximum, minimum, mean and median pixel values, each channel may be handled separately: for example, the R-channel values of all pixels are compared to find the R channel's maximum, minimum, mean and median, and the G and B channels are handled in the same way; the final maximum, minimum, mean and median pixel values are then assembled from the statistics of the three RGB channels.
And B3, taking the maximum pixel value, the minimum pixel value, the pixel mean value and the pixel median value as local characteristic information respectively.
The method represents the local features by these four static statistics, which roughly characterise the skin and thereby realise skin observation.
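A sketch of B1-B3, computing the four per-channel statistics over a centre crop; the crop side s is a hypothetical parameter, since the text leaves it unspecified:

```python
import numpy as np

def local_features(face_rgb: np.ndarray, s: int = 64) -> np.ndarray:
    """Centre-crop the face image and return per-channel max, min, mean and median."""
    H, W = face_rgb.shape[:2]
    cy, cx = H // 2, W // 2
    crop = face_rgb[cy - s // 2:cy + s // 2, cx - s // 2:cx + s // 2]  # target recognition area (B1)
    flat = crop.reshape(-1, 3).astype(np.float64)
    stats = [flat.max(axis=0), flat.min(axis=0),
             flat.mean(axis=0), np.median(flat, axis=0)]               # B2
    return np.concatenate(stats)     # 4 statistics x 3 channels = 12 values (B3)
```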
S216, feature fusion and recognition are carried out on the global feature information and the local feature information, and an acne grading result is output.
The local feature information is mapped into three-dimensional features by a multi-layer perceptron (MLP) network and fused with the global feature information; the fused features are then recognised, and the acne grading result is determined and output. The fusion and recognition of the global and local feature information can be realised by a two-layer fully connected network, which effectively recognises the difference between the whole face and the local skin colour. Fusing the global deep features with the local features further strengthens the perception of the colour difference between the skin and the whole face.
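A sketch of the fusion-and-classification head: an MLP lifts the local statistics, the result is concatenated with the pooled global feature, and a two-layer fully connected network predicts the four grades. The hidden-layer widths are assumptions; the text fixes only the structure:

```python
import torch
import torch.nn as nn

class AcneGradingHead(nn.Module):
    """Fuses pooled global ShuffleNet features with local colour statistics."""

    def __init__(self, global_dim: int = 1024, local_dim: int = 12,
                 local_hidden: int = 32, num_grades: int = 4):
        super().__init__()
        # MLP that maps the raw local statistics into a learned feature space.
        self.local_mlp = nn.Sequential(
            nn.Linear(local_dim, local_hidden), nn.ReLU(),
            nn.Linear(local_hidden, local_hidden))
        # Two-layer fully connected fusion and classification network.
        self.classifier = nn.Sequential(
            nn.Linear(global_dim + local_hidden, 256), nn.ReLU(),
            nn.Linear(256, num_grades))

    def forward(self, global_feat: torch.Tensor, local_feat: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([global_feat, self.local_mlp(local_feat)], dim=1)
        return self.classifier(fused)    # logits over the four acne grades
```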
By way of example, fig. 3 illustrates an implementation of acne grading detection. A face image to be processed 31 is acquired and information-enhanced to obtain a target face image 32. The target face image 32 is input into a ShuffleNet network 33 for global feature extraction, yielding global feature information 34. Local feature extraction is performed on the face image to be processed 31, yielding local feature information 35. The global feature information 34 is average-pooled by an average pooling layer 36, fused with the local feature information by a fully connected network 37, and the fused features are input into an output layer 38 for grade prediction, which outputs the acne grading result. During training, the result is used to compute the loss function and adjust the model parameters; during prediction, it is output directly and can be obtained by mapping through a softmax function.
A model can be trained in advance to grade acne: for example, a large number of facial acne images are collected and each image is labelled with one of the four acne grades (mild/moderate/severe/very severe). Each image is information-enhanced to obtain the corresponding image, then global and local features are extracted and the model is trained.
This embodiment preferably uses ShuffleNet for global feature extraction, since ShuffleNet balances speed and accuracy well. The core design idea of ShuffleNet is to shuffle the different channels (channel shuffle) to overcome the drawback of group convolution. Group convolution splits the input feature maps into groups and convolves each group with its own kernels, which reduces the computational cost of the convolution. An ordinary convolution operates on all input feature maps, i.e. a full-channel convolution with channel-dense connections; group convolution is instead a channel-sparse connection. Networks using group convolution include Xception, MobileNet and ResNeXt. The depthwise convolution used by Xception and MobileNet is a rather special group convolution in which the number of groups equals the number of channels, so each group contains exactly one feature map. These networks, however, share a significant drawback: they still employ dense 1x1 convolutions that operate on all channels; in the ResNeXt model, for example, the 1x1 convolutions account for 93.4% of the multiply-add operations. Applying channel-sparse connections to the 1x1 convolutions effectively reduces the computation. But group convolution introduces another problem: when stacked GConv layers leave the feature maps of different groups without communication, the feature extraction capability of the network degrades — channel shuffle addresses exactly this.
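The channel shuffle operation referred to above is conventionally implemented as a reshape-transpose-reshape; a minimal sketch:

```python
import torch

def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
    """Interleave channels across groups so stacked grouped convolutions
    can exchange information between groups."""
    n, c, h, w = x.shape
    x = x.view(n, groups, c // groups, h, w)   # split channels into groups
    x = x.transpose(1, 2).contiguous()         # swap the group and per-group dims
    return x.view(n, c, h, w)                  # flatten back: channels interleaved
```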
This embodiment adopts a transfer-learning fine-tuning strategy during model training: prior knowledge from a model pre-trained on ImageNet is transferred to acne recognition, effectively improving the performance of the framework when data is insufficient. Cross-entropy loss is used to adjust the model parameters; it is mainly used for classification tasks, especially multi-class classification, and evaluates the difference between the model's output and the true labels. In the model prediction process, acne is divided into 4 grades in total, i.e. a K=4 classification is performed; softmax converts each element of the output vector into a value representing a probability, mapping the input onto a probability distribution, which is very useful for multi-class classification problems.
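A hedged sketch of the fine-tuning described here, reusing the `backbone` and an `AcneGradingHead` instance from the sketches above; the optimizer choice and learning rate are assumptions:

```python
import torch
import torch.nn as nn

head = AcneGradingHead()                 # from the fusion sketch above
criterion = nn.CrossEntropyLoss()        # multi-class loss over the K=4 grades
optimizer = torch.optim.Adam(
    list(backbone.parameters()) + list(head.parameters()), lr=1e-4)

def train_step(images, local_stats, grades):
    optimizer.zero_grad()
    logits = head(backbone(images), local_stats)  # fused global + local features
    loss = criterion(logits, grades)              # compare with grade labels
    loss.backward()
    optimizer.step()
    return loss.item()

def predict(images, local_stats):
    with torch.no_grad():
        logits = head(backbone(images), local_stats)
        return torch.softmax(logits, dim=1)       # probabilities over the four grades
```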
The embodiment of the invention provides an acne grading detection method that solves the problem of inaccurate acne grading detection. A skin reference pixel value is determined from the face image to be processed; each pixel value in the image is transformed based on it, enhancing the difference between the skin and the background area; the transformed pixel values are folded according to the positive and negative normalization parameters to obtain target pixel values, further enhancing that difference; and the target pixel values are processed to determine the target face image, realising information enhancement, filtering out the useless background, effectively enhancing key information, and accurately separating the face from the background area. The key-information enhancement exploits the particularity of face images: face detection on the original face image, followed by information folding and key-region segmentation on the face image to be processed, filters out meaningless information. Global features are then extracted from the information-enhanced target face image, local features are extracted from the face image to be processed, and the global deep features are fused with the local features, further strengthening the perception of the colour difference between the local skin and the whole face and improving the accuracy of acne grading.
Example 3
Fig. 4 is a schematic structural diagram of an acne classification detecting device according to a third embodiment of the present invention. As shown in fig. 4, the apparatus includes: a skin reference value determination module 41, a pixel transformation module 42, a normalization parameter determination module 43, a target pixel value determination module 44, a target image determination module 45, and an acne classification module 46.
Wherein, the skin reference value determining module 41 is configured to acquire a face image to be processed, and determine a skin reference pixel value based on the face image to be processed;
the pixel transformation module 42 is configured to transform pixel values of pixel points in the face image to be processed according to the skin reference pixel values, so as to obtain transformed pixel values;
a normalization parameter determining module 43, configured to determine a positive normalization parameter and a negative normalization parameter according to each of the transformed pixel values;
a target pixel value determining module 44, configured to determine, for each transformed pixel value, whether the transformed pixel value meets a forward normalization condition, and if so, determine an absolute value of a ratio of the transformed pixel value to the forward normalization parameter as a target pixel value; otherwise, determining the absolute value of the ratio of the transformed pixel value to the negative normalization parameter as a target pixel value;
a target image determining module 45, configured to determine a target face image based on each of the target pixel values;
and the acne grading module 46 is used for carrying out acne detection on the target face image to obtain an acne grade.
The embodiment of the invention provides an acne grading detection device, which solves the problem of inaccurate acne grading detection. A skin reference pixel value is determined from the face image to be processed, and each pixel value in the image is transformed based on it, enhancing the difference between the skin and the background area. Positive and negative normalization parameters are determined from the transformed pixel values, which are folded based on the forward normalization condition and these parameters to obtain target pixel values, further enhancing that difference. Processing the target pixel values to determine the target face image filters out the useless background, effectively enhances key information, and accurately separates the face from the background area. Acne detection on the information-enhanced target face image yields a more accurate acne grade, improving the accuracy of acne grading.
Optionally, the skin reference value determining module 41 includes:
the image acquisition unit is used for acquiring an original face image;
and the data filtering unit is used for carrying out face detection and data filtering on the original face image to obtain a face image to be processed.
Optionally, the skin reference value determining module 41 includes:
the center point determining unit is used for determining the center point coordinates of the face image to be processed;
a candidate point determining unit configured to determine at least one candidate pixel point based on the center point coordinates;
and the skin reference value determining unit is used for calculating the average value of the pixel values of the candidate pixel points to obtain a skin reference pixel value.
Optionally, the pixel transformation module 42 is specifically configured to: and for each pixel point in the face image to be processed, taking a difference value obtained by subtracting the pixel value of the pixel point from the skin reference pixel value as a transformed pixel value.
Optionally, the normalization parameter determining module 43 includes:
a pixel value comparing unit, configured to compare the values of the red, green and blue three channels of each of the transformed pixel values, and determine the maximum and minimum values of the red channel, the maximum and minimum values of the green channel, and the maximum and minimum values of the blue channel;
the forward parameter determining unit is used for determining a channel maximum operator based on the maximum value of the red channel, the maximum value of the green channel and the maximum value of the blue channel, and determining the absolute value of the channel maximum operator as a forward normalization parameter;
and the negative parameter determining unit is used for determining a channel minimum operator based on the minimum value of the red channel, the minimum value of the green channel and the minimum value of the blue channel, and determining the absolute value of the channel minimum operator as a negative normalization parameter.
Optionally, the forward normalization condition is: the sum of the values of the red, green and blue three channels of the transformed pixel value is larger than 0.
Optionally, the target image determining module 45 includes:
the clustering unit is used for clustering the target pixel values and determining the pixel points with the type of skin and the pixel points with the type of background area;
the background assignment unit is used for assigning pixel values to the pixel points with the type being the background area to form background pixel values;
and a target image determining unit for determining a target face image based on each background pixel value and the target pixel values of the skin-type pixel points.
Optionally, acne classification module 46 includes:
The global feature extraction unit is used for carrying out global feature extraction on the target face image to obtain global feature information;
the local feature extraction unit is used for extracting local features of at least one dimension from the face image to be processed to obtain at least one piece of local feature information;
and the acne grading unit is used for carrying out feature fusion and identification on the global feature information and the local feature information and outputting an acne grading result.
Optionally, the local feature extraction unit is specifically configured to: perform center cropping on the face image to be processed to determine a target recognition area; determine a maximum pixel value, a minimum pixel value, a pixel mean value, and a pixel median value according to the pixel values of all the pixel points in the target recognition area; and take the maximum pixel value, the minimum pixel value, the pixel mean value, and the pixel median value respectively as local feature information.
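A sketch of this unit, with the centre-crop ratio as an assumption (the text does not give one):

```python
import numpy as np

def local_features(face: np.ndarray, ratio: float = 0.5) -> dict:
    """Centre-crop the face image and return four scalar local features."""
    h, w = face.shape[:2]
    ch, cw = int(h * ratio), int(w * ratio)
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    region = face[y0:y0 + ch, x0:x0 + cw].astype(np.float64)  # target recognition area
    return {
        "max": float(region.max()),          # maximum pixel value
        "min": float(region.min()),          # minimum pixel value
        "mean": float(region.mean()),        # pixel mean value
        "median": float(np.median(region)),  # pixel median value
    }
```

As a usage note, the sketches above chain naturally into an end-to-end pass; the driver below is hypothetical, and the grading model and its interface are placeholders since neither is fixed by this passage:

```python
import cv2

def grade_acne(path: str, grading_model) -> int:
    """Hypothetical driver chaining the sketch functions defined above."""
    original = cv2.imread(path)
    face = get_face_image_to_process(original)       # face image to be processed
    reference = skin_reference_pixel(face)           # skin reference pixel value
    transformed = transform_pixels(face, reference)  # transformed pixel values
    target = fold_normalize(transformed)             # folding / normalization
    target_face = build_target_face(target)          # target face image
    features = local_features(face)                  # local feature information
    return grading_model.predict(target_face, features)  # acne grade (placeholder API)
```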
The acne grading detection device provided by the embodiment of the invention can execute the acne grading detection method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the executed method.
Example IV
Fig. 5 shows a schematic diagram of an electronic device 50 that may be used to implement an embodiment of the invention. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be exemplary only, and are not meant to limit implementations of the invention described and/or claimed herein.
As shown in Fig. 5, the electronic device 50 includes at least one processor 51 and a memory communicatively connected to the at least one processor 51, such as a Read Only Memory (ROM) 52 and a Random Access Memory (RAM) 53. The memory stores a computer program executable by the at least one processor, and the processor 51 may perform various appropriate actions and processes according to the computer program stored in the ROM 52 or loaded from the storage unit 58 into the RAM 53. The RAM 53 may also store various programs and data required for the operation of the electronic device 50. The processor 51, the ROM 52, and the RAM 53 are connected to each other via a bus 54; an input/output (I/O) interface 55 is also connected to the bus 54.
Various components in the electronic device 50 are connected to the I/O interface 55, including: an input unit 56 such as a keyboard, a mouse, etc.; an output unit 57 such as various types of displays, speakers, and the like; a storage unit 58 such as a magnetic disk, an optical disk, or the like; and a communication unit 59 such as a network card, modem, wireless communication transceiver, etc. The communication unit 59 allows the electronic device 50 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunications networks.
The processor 51 may be any of various general and/or special purpose processing components having processing and computing capabilities. Some examples of the processor 51 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, Digital Signal Processors (DSPs), and any suitable processor, controller, or microcontroller. The processor 51 performs the various methods and processes described above, such as the acne grading detection method.
In some embodiments, the acne grading detection method may be implemented as a computer program tangibly embodied in a computer-readable storage medium, such as the storage unit 58. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 50 via the ROM 52 and/or the communication unit 59. When the computer program is loaded into the RAM 53 and executed by the processor 51, one or more steps of the acne grading detection method described above may be performed. Alternatively, in other embodiments, the processor 51 may be configured to perform the acne grading detection method in any other suitable manner (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be realized in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems On Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special purpose or general purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for carrying out methods of the present invention may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be implemented. The computer program may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. The computer-readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer-readable medium may be a machine-readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) through which a user can provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), blockchain networks, and the internet.
The computing system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also called a cloud computing server or cloud host, which is a host product in a cloud computing service system and overcomes the defects of difficult management and weak service scalability found in traditional physical host and VPS (Virtual Private Server) services.
It should be appreciated that the various forms of flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present invention may be performed in parallel, sequentially, or in a different order, provided the desired results of the technical solution of the present invention can be achieved; the present invention imposes no limitation herein.
The above embodiments do not limit the scope of the present invention. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention should be included in the scope of the present invention.

Claims (12)

1. An acne grading detection method, comprising:
acquiring a face image to be processed, and determining a skin reference pixel value based on the face image to be processed;
transforming the pixel values of the pixel points in the face image to be processed according to the skin reference pixel values to obtain transformed pixel values;
determining a positive normalization parameter and a negative normalization parameter according to each transformed pixel value;
judging, for each transformed pixel value, whether the transformed pixel value meets a forward normalization condition; if so, determining the absolute value of the ratio of the transformed pixel value to the positive normalization parameter as a target pixel value; otherwise, determining the absolute value of the ratio of the transformed pixel value to the negative normalization parameter as a target pixel value;
determining a target face image based on each of the target pixel values;
and performing acne detection on the target face image to obtain an acne grade.
2. The method according to claim 1, wherein the acquiring the face image to be processed includes:
acquiring an original face image;
and carrying out face detection and data filtering on the original face image to obtain a face image to be processed.
3. The method of claim 1, wherein the determining skin reference pixel values based on the face image to be processed comprises:
determining the coordinates of a central point of the face image to be processed;
determining at least one candidate pixel point based on the center point coordinates;
and calculating the average value of the pixel values of the candidate pixel points to obtain a skin reference pixel value.
4. The method according to claim 1, wherein transforming the pixel values of the pixel points in the face image to be processed according to the skin reference pixel values to obtain transformed pixel values includes:
and for each pixel point in the face image to be processed, taking a difference value obtained by subtracting the pixel value of the pixel point from the skin reference pixel value as a transformed pixel value.
5. The method of claim 1, wherein said determining positive and negative normalization parameters from each of said transformed pixel values comprises:
comparing the red, green, and blue channel values of each transformed pixel value, and determining the maximum and minimum values of the red channel, the maximum and minimum values of the green channel, and the maximum and minimum values of the blue channel;
determining a channel maximum operator based on the maximum value of the red channel, the maximum value of the green channel and the maximum value of the blue channel, and determining the absolute value of the channel maximum operator as a forward normalization parameter;
and determining a channel minimum operator based on the minimum value of the red channel, the minimum value of the green channel and the minimum value of the blue channel, and determining the absolute value of the channel minimum operator as a negative normalization parameter.
6. The method of claim 1, wherein the forward normalization condition is: the sum of the red, green, and blue channel values of the transformed pixel value is greater than 0.
7. The method of claim 1, wherein said determining a target face image based on each of said target pixel values comprises:
clustering the target pixel values, and determining the pixel points whose type is skin and the pixel points whose type is background area;
assigning pixel values to the pixel points whose type is background area to form background pixel values;
and determining a target face image based on each background pixel value and the target pixel values of the pixel points whose type is skin.
8. The method of claim 1, wherein performing acne detection on the target face image to obtain an acne grade comprises:
extracting global features of the target face image to obtain global feature information;
extracting local features of at least one dimension from the face image to be processed to obtain at least one piece of local feature information;
and carrying out feature fusion and recognition on the global feature information and the local feature information, and outputting an acne grading result.
9. The method according to claim 8, wherein the performing local feature extraction of at least one dimension on the face image to be processed to obtain at least one local feature information includes:
performing center cropping on the face image to be processed to determine a target recognition area;
determining a maximum pixel value, a minimum pixel value, a pixel mean value, and a pixel median value according to the pixel values of all the pixel points in the target recognition area;
and taking the maximum pixel value, the minimum pixel value, the pixel mean value, and the pixel median value respectively as local feature information.
10. An acne grading detection device, comprising:
the skin reference value determining module is used for acquiring a face image to be processed and determining a skin reference pixel value based on the face image to be processed;
the pixel transformation module is used for transforming the pixel values of the pixel points in the face image to be processed according to the skin reference pixel values to obtain transformed pixel values;
the normalization parameter determining module is used for determining a positive normalization parameter and a negative normalization parameter according to each transformed pixel value;
the target pixel value determining module is used for judging, for each transformed pixel value, whether the transformed pixel value meets a forward normalization condition; if so, determining the absolute value of the ratio of the transformed pixel value to the positive normalization parameter as a target pixel value; otherwise, determining the absolute value of the ratio of the transformed pixel value to the negative normalization parameter as a target pixel value;
the target image determining module is used for determining a target face image based on each of the target pixel values;
and the acne grading module is used for carrying out acne detection on the target face image to obtain acne grading.
11. An electronic device, the electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the acne grading detection method of any one of claims 1-9.
12. A computer readable storage medium storing computer instructions for causing a processor to perform the acne grading detection method of any one of claims 1-9.
CN202311686743.4A 2023-12-11 2023-12-11 Acne grading detection method and device, electronic equipment and storage medium Active CN117392733B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311686743.4A CN117392733B (en) 2023-12-11 2023-12-11 Acne grading detection method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN117392733A CN117392733A (en) 2024-01-12
CN117392733B true CN117392733B (en) 2024-02-13

Family

ID=89472482

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311686743.4A Active CN117392733B (en) 2023-12-11 2023-12-11 Acne grading detection method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117392733B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113128375B (en) * 2021-04-02 2024-05-10 西安融智芙科技有限责任公司 Image recognition method, electronic device, and computer-readable storage medium
CN117611580B (en) * 2024-01-18 2024-05-24 深圳市宗匠科技有限公司 Flaw detection method, flaw detection device, computer equipment and storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005071215A (en) * 2003-08-27 2005-03-17 Mitsubishi Space Software Kk Image normalization device, image normalization method, computer readable recording medium in which program is recorded and program
CN107886110A (en) * 2017-10-23 2018-04-06 深圳云天励飞技术有限公司 Method for detecting human face, device and electronic equipment
CN112837304A (en) * 2021-02-10 2021-05-25 姜京池 Skin detection method, computer storage medium and computing device
WO2023234622A1 (en) * 2022-06-03 2023-12-07 주식회사 브라이토닉스이미징 Image spatial normalization and normalization system and method using same

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Mi Gensuo et al. Application of the radar chart method to early warning of poor track-circuit shunting. Journal of the China Railway Society. 2013, No. 11, pp. 66-70. *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant